AI cybersecurity risks and deepfake scams
Imagine your phone rings and the voice on the other end sounds like your boss, a close friend or even a government official. They urgently ask for sensitive information, except it isn’t really them. It’s a deepfake, powered by AI, and you are the target of a sophisticated scam. These attacks are happening right now, and they are getting more convincing every day.
That is the warning sounded by the AI Security Report 2025, presented at the RSA Conference (RSAC), one of the world’s largest gatherings of cybersecurity experts, companies and law enforcement. The report details how criminals are exploiting artificial intelligence to impersonate people, automate scams and attack security systems at scale.
From hijacked accounts and manipulated models to live deepfake videos and data-poisoning scams, the report paints a picture of a rapidly evolving threat landscape, one that is affecting more lives than ever.
Sign up for my free CyberGuy Report: Get my expert tech tips, critical security alerts and exclusive deals, plus instant access to my free Ultimate Scam Survival Guide when you sign up!

Illustration of cybersecurity risks. (Kurt “Cyberguy” Knutsson)
AI tools are leaking sensitive data
One of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in 80 AI prompts includes high-risk data, and roughly 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks.
That data can include passwords, internal business plans, customer information or proprietary code. When shared with AI tools that are not properly secured, this information can be logged, intercepted or even leaked later.
Deepfake scams are now real-time and multilingual
AI-powered impersonation is becoming more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used a live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer the funds.
Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders.

Illustration of a person video conferencing on their laptop. (Kurt “Cyberguy” Knutsson)
AI is running phishing and scam operations at scale
Social engineering has always been part of cybercrime. Now AI is automating it. Attackers no longer need to speak a victim’s language, stay online around the clock or write convincing messages themselves.
Tools such as GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and a native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in wording and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around $500 per month, making it widely accessible to bad actors.
Another tool, the X137 Telegram console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer service agents or known contacts, holding real-time conversations with multiple targets at once. The replies are uncensored, fast and tailored to the victim’s responses, creating the illusion of a human behind the screen.
AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to keep them from being shared. Instead of reusing the same message, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line such as “Time is running out” might be rewritten as “The hourglass is almost empty for you,” making the message feel more personal and urgent while evading detection.
By removing the need for manual effort and language fluency, these AI tools let attackers scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort.
Stolen AI accounts are sold on the dark web
With AI tools growing more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT sessions, OpenAI API keys and other platform credentials to bypass usage limits and hide their identities. These accounts are often stolen through malware attacks, phishing or credential stuffing. The stolen credentials are sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multifactor authentication and session-based security protections. These stolen accounts give criminals access to powerful tools they can use for phishing, malware generation and scam automation.

Illustration of a person signing in on their laptop. (Kurt “Cyberguy” Knutsson)
Jailbreaking AI is now a common tactic
Criminals are finding ways around the safety rules built into AI models. On the dark web, attackers share jailbreaking techniques that get AI to respond to requests it would normally block. Common methods include:
- Telling the AI to pretend it is a fictional character with no rules or limitations
- Phrasing dangerous questions as academic or research scenarios
- Asking for technical instructions with less obvious wording so the request isn’t flagged
Some AI models can even be tricked into jailbreaking themselves. Attackers ask the model to create a prompt that overrides its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.
AI-generated malware is entering the mainstream
AI is now being used to build malware, phishing kits, ransomware scripts and more. A group called FunkSec was recently identified as a leading ransomware gang using AI. Its leader admitted that at least 20% of its attacks are powered by AI. FunkSec has also used AI to help launch attacks that flood websites or services with fake traffic, knocking them offline. These are known as denial-of-service attacks. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website.
Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool, Rhadamanthys Stealer 0.7, claimed to use AI for “text recognition” to sound more advanced, but researchers later found it was relying on older technology. This shows how attackers use buzzwords to make their tools seem more advanced or trustworthy to buyers.
Other tools are more capable. One example is DarkGPT, a chatbot built specifically to sift through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords and other private details. Instead of combing through that data manually, they use AI to quickly find valuable accounts they can break into, sell or use for more targeted attacks such as ransomware.
Poisoned AI models are spreading misinformation
Sometimes attackers don’t need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can lead the AI to give biased, harmful or flatly inaccurate answers. There are two main ways this happens:
- Training poisoning: Attackers sneak false or harmful data into the model during development
- Retrieval poisoning: Misleading content is planted online, which the AI then picks up when generating answers
In 2024, attackers uploaded 100 manipulated AI models to the open-source platform Hugging Face. These poisoned models looked like useful tools, but when people used them, they could spread false information or generate malicious code.
A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messaging. In tests, researchers found that major AI systems echoed these false claims roughly 33% of the time.

Illustration of a hacker at work. (Kurt “Cyberguy” Knutsson)
How to protect yourself from AI-powered cyber threats
AI-powered cybercrime combines realism, speed and scale. These scams are not just harder to detect; they are also easier to launch. Here’s how to stay protected:
1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.
2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.
3) Turn on two-factor authentication (2FA): 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords.
4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.
5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals increasingly rely on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly impersonate you or launch targeted, AI-driven attacks.
While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here.
6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity protection services can monitor your information and alert you to suspicious activity. Identity theft companies can monitor personal information like your Social Security number (SSN), phone number and email address, and alert you if it is being sold on the dark web or used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.
7) Regularly monitor your financial accounts: AI-generated phishing, malware and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials and financial information are now being traded and exploited by AI-powered cybercriminals.
8) Use a secure password manager: Stolen accounts and credential-stuffing attacks are a growing threat, with hackers using automated tools to break into accounts and sell access on the dark web. A secure password manager helps you create and store strong, unique passwords for every account, making it much harder for attackers to compromise your logins, even if some of your information is leaked or targeted by AI-driven attacks. Get more details about my best expert-reviewed password managers of 2025 here.
9) Keep your software updated: AI-generated malware and advanced phishing kits are designed to exploit vulnerabilities in outdated software. To stay ahead of these evolving threats, make sure all your devices, browsers and apps are updated with the latest security patches. Regular updates close the security gaps that AI-driven malware and cybercriminals actively seek to exploit.
Kurt’s key takeaways
Cybercriminals are now using AI to power some of the most convincing and scalable attacks we’ve seen. From deepfake video calls and AI-generated phishing emails to stolen accounts and malware written by chatbots, these scams are becoming harder to detect and easier to launch. Attackers are even poisoning AI models with false information and creating fake tools that look legitimate but are designed to do harm. To stay safe, it’s more important than ever to use strong antivirus protection, enable multifactor authentication and avoid sharing sensitive data with AI tools you don’t fully trust.
Have you noticed AI scams getting more convincing? Let us know about your experience or questions by writing to us at Cyberguy.com/contact. Your story could help someone else stay safe.
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report newsletter by heading to Cyberguy.com/newsletter
Ask Kurt a question or let us know what stories you’d like us to cover
Follow Kurt on his social channels
- YouTube
Answers to the most-asked CyberGuy questions:
- What is the best way to protect your Mac, Windows, iPhone and Android devices from getting hacked?
- What is the best way to stay private, secure and anonymous while browsing the web?
- How can I get rid of robocalls with apps and data removal services?
- How do I remove my private data from the internet?
New from Kurt:
- Try CyberGuy’s new games (crosswords, word searches, trivia and more!)
- CyberGuy’s exclusive coupons and deals
Copyright 2025 Cyberguy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News and FOX Business beginning mornings on “FOX & Friends.” Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at Cyberguy.com.


