AI Becomes Criminals' New Ally with the Emergence of FraudGPT

March 7, 2024

The digital landscape is facing a fresh threat: FraudGPT. This nefarious AI tool, a follow-up to the earlier WormGPT, first surfaced on July 22, 2023, when it was spotted for sale on underground websites and in private Telegram channels. What sets FraudGPT apart from other cyber threats is its versatility: it's a multi-purpose tool designed for an array of illicit activities.

At a price of $200 a month, or a yearly package of $1,700, FraudGPT has a suite of dangerous features at its disposal. One of its standout capabilities is the writing and generation of harmful code. This feature allows it to create malicious software designed to infiltrate and damage computer systems. Even more disturbing is its alleged ability to manufacture “undetectable” malware. Once inside your computer, this software can wreak significant havoc.

FraudGPT does not stop there, though. The tool can also develop phishing pages. These are imitation webpages that mirror the real ones, fooling people into sharing their sensitive information like passwords or payment card details. This is a clear trap set for unsuspecting internet users.

Moreover, FraudGPT also boasts the capacity to generate hacking tools and scam letters. Both are crafted with the intention of exploitation, revealing the tool's overall harmful purpose.

One feature that raises concern above others is FraudGPT's capability to identify leaks and vulnerabilities in systems. Picture it as an unethical locksmith who, instead of securing your home, looks for the weakest locks to pick. Once it finds these vulnerabilities, criminals can use this information to gain unauthorized access.

Above all, FraudGPT is gaining ground through phishing. The standard tips apply when determining whether any message, be it email, text, messenger, voicemail, image, or AI chatbot, is phishing.

  • If you receive anything unexpected, that should be your first inkling of suspicion that it might be phishing.
  • If the wording of the message implies you need to act very quickly or something bad will happen, that's another good clue it's not legit.
  • AI still makes mistakes, so spelling, grammar, and factual errors remain a sign of phishing.

Never reply to any message that triggers your sixth sense. If you need to investigate, do so using contact information you already have on hand, not anything supplied within the message itself.
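To make the checklist above concrete, here is a minimal sketch of those red-flag checks as a toy heuristic. This is purely illustrative: the keyword list, scoring, and function names are assumptions made up for this example, and no keyword filter is a substitute for the human caution described above.

```python
# Toy heuristic illustrating the phishing red flags discussed above.
# The phrase list and checks are hypothetical examples, not a real filter.

URGENCY_PHRASES = [
    "act now",
    "immediately",
    "account suspended",
    "final notice",
    "verify within",
]

def phishing_red_flags(message: str, expected: bool) -> list[str]:
    """Return the red flags raised by a message.

    expected=False means the message arrived out of the blue,
    which is the first warning sign in the checklist.
    """
    flags = []
    text = message.lower()
    if not expected:
        # Red flag 1: anything unexpected deserves suspicion.
        flags.append("unexpected message")
    if any(phrase in text for phrase in URGENCY_PHRASES):
        # Red flag 2: pressure to act fast or face consequences.
        flags.append("urgent or threatening language")
    return flags

print(phishing_red_flags(
    "Your account suspended! Act now to restore access.",
    expected=False))  # raises both red flags
```

Real mail filters are vastly more sophisticated, and AI-written phishing increasingly evades simple keyword checks, which is exactly why the "unexpected message" test, something only you can apply, matters most.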

In a world where we have grown so dependent on technology, the emergence of FraudGPT compels us to reassess our online habits. The interconnectedness that makes our lives convenient also exposes us to an array of threats. This generative AI tool puts a full arsenal in the hands of any individual with a malicious agenda. Therefore, it is not just about staying cautious anymore; it's about being proactive.


AI ChatGPT And PaaS Merge, Further Weaponizing Email Phishing Campaigns

Hold on to your login credentials! A recent look at email phishing campaigns uncovered a 61% spike in attacks over the second half of last year. Worse yet, security pros find that AI (artificial intelligence) is now accelerating these campaigns and expect the number of attacks to increase significantly going forward. With the release of the AI ChatGPT platform coupled with PaaS (phishing-as-a-service) kit upgrades, email phishing is slated to be more pervasive and destructive than ever before. READ FULL STORY

ChatGPT AI Platform Breached – Account Holder Data Sold On Dark Web

Not long ago, more than 100,000 ChatGPT users learned their account credentials were for sale on the dark web. ChatGPT’s parent company, OpenAI, confirms the data breach occurred, but says it had nothing to do with a lack of data security on their part. Although the breach may be a blame game for now, there’s more to it than what’s bubbling on the surface. Group-IB, a cybersecurity company, compiled a Threat Intelligence report on the ChatGPT breach, finding far more than account credentials were exposed. READ FULL STORY

AI Helps BEC Attacks Spread Worldwide Despite Language Barriers

While many schools are concerned with students using AI (artificial intelligence) for assignments, the international world of business should be on high alert too. There is a growing crop of business email compromise (BEC) attacks using AI as a translation tool. The tell-tale signs that once gave phishing away, such as clumsy wording, are now cleaned up by AI, making an attack email far more difficult to spot. Hackers were once limited by their own language skills, but not anymore, thanks to AI. READ FULL STORY







