The AI 'Crystal Ball' Predictions for 2024

January 22, 2024

With artificial intelligence continuing to grab the headlines, many wonder what AI-enabled cybercrime will look like in the coming year. From deepfakes to AI-enhanced phishing emails, the risks to online security are greater than ever before.

Below are three of IBM's "crystal ball" predictions for the AI-related cybercrimes we can expect to see in the new year.

GenAI

GenAI, or generative AI, creates text, images, video, and other media from data inputs (think ChatGPT). GenAI will help cybercriminals achieve their goals in a fraction of the time it once took, enabling bad actors to deceive businesses, individual users, and the public at large. Cybercriminals will deploy GenAI-crafted phishing messages, audio, and deepfakes with laser precision. They'll also use GenAI to organize reams of data in minutes, allowing hackers to build profiles of the targets most ripe for cybercrime.

Lookalike Behavior

For businesses, an employee whose online behavior is normally predictable should ring alarm bells when that behavior differs from what's expected. Cybercriminals use AI to imitate stolen employee identities, which aids crimes such as socially engineered attacks. These attacks manipulate a victim into breaking with business norms and unknowingly helping an attacker gain access to a system. Because AI impersonating a victim tends to behave unusually online, deviations from an account's normal activity should raise red flags. Strict security protocols and bolstered passwords are key to defending against AI-generated lookalike behavior.

"Worm-like" Effect

This effect takes one malicious attack and amplifies it toward the outcome the cybercriminal wants. It starts with a single compromise and worms its way into something greater by abusing an AI platform a business relies on. As the use of these promising platforms grows, you can bet cybercriminals will find new tactics to exploit them. Although these AI-based worm-like attacks are still on the horizon, those at IBM say it's just a matter of time before they become the norm.

Current and Future Advice

Although we don’t know which, if any, of these will strike us at any given time, we can be proactive in caring for our own data.

Have a code word or phrase ready. There have already been successful AI deepfake attacks. If you receive a phone call from someone with an urgent issue who is asking for sensitive information or money, use the code word. If the other person doesn’t know it, it could be AI calling you. Alternatively, ask something only the two of you are likely to know.

Call right back. If you want to make sure you’re not sending money to a scammer, hang up and call the person you think it is back at a number you already know is theirs.

Question the oddities. If someone’s behavior is out of the norm, check on it. Deviations from regular behavior could signal AI.

Create solid passwords. This is old news, but it bears repeating. Make each password unique to the site you use it on, at least eight characters long, and a mix of letters, numbers, and symbols so it's difficult to guess or crack. Change passwords regularly to keep AI on its toes.

Patch and update. To avoid the worms, keep your systems and software up to date, and when a patch is issued, apply it right away!
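The password advice above can be sketched in a few lines using Python's standard-library `secrets` module, which is designed for security-sensitive randomness. This is a minimal, illustrative example; the function name and the 16-character default are assumptions, not something from the article.

```python
# Illustrative sketch: generate a strong, unique password per site.
# Uses only Python's standard library; `secrets` provides
# cryptographically strong random choices (unlike `random`).
import secrets
import string

def make_password(length: int = 16) -> str:
    """Return a random password mixing letters, digits, and symbols."""
    if length < 8:
        raise ValueError("use at least eight characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Redraw until the password contains all three character classes,
    # matching the "whole mix of things" advice.
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

Generating a fresh password this way for every account (and storing them in a password manager) covers both the "unique to each site" and "difficult to guess" points.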

Although a crystal ball is hardly a source of facts, those in the know can predict with some certainty the future of AI-based cybercrime. Opportunists that they are, cybercriminals are figuring out how to exploit AI today and will keep doing so in the future. So, stay tuned.


AI Scrapes Your Data For Training: Take Steps To Protect Your Data

Your Security


Large language models like ChatGPT have introduced complexity to the evolving online threat landscape. Cybercriminals are increasingly using these models to execute fraud and other attacks without requiring advanced coding skills. This threat is exacerbated by the availability of tools such as bots-as-a-service, residential proxies, CAPTCHA farms, and more. As a result, it's crucial for individuals and businesses to take proactive measures to protect their online presence.

Building Strong Passwords Using The “Don’ts” Of Password Security

Your Security


Much is made of the importance strong passwords give to online account security, and for good reason. Password cracking is often the first step for a hacker looking to break into an account – your account. A formidable password can make a cybercriminal give up and move on to the next potential victim. But what's also important, and often overlooked, is what not to do when creating a password. Consider the "don'ts" of weak password creation as reminders of what to avoid.

Top Phishing Scams Continue To Improve And Grow

Education


Much to our dismay, cybercrooks keep improving the phishing tools they have and finding new, sneakier methods of thievery. Organizations and individuals alike are targets, and money, identities, credentials, and more are stolen from both every day. Even cyber-savvy users can get caught in phishing scams if they don't pay close attention to the signs that something isn't quite right. Reviewing the most pervasive phishing scams is always recommended.

AI ChatGPT And PaaS Merge, Further Weaponizing Email Phishing Campaigns

Your Security


Hold on to your login credentials! A recent look at email phishing campaigns uncovered a 61% spike in attacks over the second half of last year. Security pros find that AI (artificial intelligence) is now accelerating these campaigns, and the number of attacks will significantly increase going forward. With the release of the AI ChatGPT platform coupled with PaaS (phishing-as-a-service) kit upgrades, email phishing is slated to be more pervasive and destructive than ever before.

ChatGPT AI Platform Breached – Account Holder Data Sold On Dark Web

Your Security


Not long ago, more than 100,000 ChatGPT users learned their account credentials were for sale on the dark web. ChatGPT's parent company, OpenAI, confirms the data breach occurred, but says it had nothing to do with a lack of data security on their part. Although the breach may be a blame game for now, there's more to it than what's bubbling on the surface. Group-IB, a cybersecurity company, compiled a Threat Intelligence report on the ChatGPT breach, finding far more than account credentials were exposed.







