How AI chatbots are helping hackers target your bank accounts
AI chatbots are quickly becoming the main way people interact with the internet. Instead of navigating a list of links, you can now get direct answers to your questions. However, these tools often provide information that is completely inaccurate, and in a security context, that can be dangerous. In fact, cybersecurity researchers warn that hackers have begun exploiting flaws in these chatbots to carry out AI-powered phishing attacks.
Specifically, when people use AI tools to search for login pages, especially for banking and technology platforms, the tools sometimes return incorrect links. Clicking one of those links can take you to a fake website, which can then be used to steal personal information or credentials.
Sign up for my free CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my ultimate scam survival guide, free when you join me at Cyberguy.com/newsletter.

A man using ChatGPT on his laptop. (Kurt “Cyberguy” Knutsson)
What you need to know about AI phishing attacks
Netcraft researchers recently ran a test on the GPT-4.1 family of models, which is also used by Microsoft's AI and the Perplexity AI search engine. They asked where to log in to 50 different brands across banking, retail and technology.
Of the 131 unique links the chatbot returned, only about two-thirds were correct. About 30 percent of the links pointed to unregistered or inactive domains, and another 5 percent led to unrelated websites. In total, more than a third of the responses linked to pages not owned by the real companies. That means someone searching for a login link could easily end up in a fake or unsafe place.
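To see how those percentages add up, here is a quick sanity check of the figures above. The per-category counts are my own approximations derived from rounding the reported percentages, not exact numbers from the Netcraft study:

```python
# Rough sanity check of the reported Netcraft figures.
# Counts are approximations from the cited percentages, not exact study data.
total_links = 131

correct = round(total_links * 2 / 3)      # "about two-thirds were correct"
unregistered = round(total_links * 0.30)  # "about 30 percent ... unregistered or inactive"
unrelated = round(total_links * 0.05)     # "another 5 percent ... unrelated websites"

bad = unregistered + unrelated
print(f"correct: ~{correct} of {total_links}")
print(f"bad links: ~{bad} of {total_links} ({bad / total_links:.0%})")
```

The roughly 35 percent of bad links is what the article means by "more than a third of the responses."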
If attackers register those unclaimed domains, they can build convincing phishing pages and simply wait. Because an AI-provided answer often sounds official, users are more likely to trust it without verifying it.

A Wikipedia page showing the ChatGPT entry on a smartphone. (Kurt “Cyberguy” Knutsson)
AI phishing attacks are already happening: a real-world example
In one recent case, a user asked Perplexity AI for the Wells Fargo login page. The top result was not the official Wells Fargo site; it was a phishing page hosted on Google Sites. The fake site closely mimicked the real design and prompted users to enter personal information. Although the correct site was listed below it, many people wouldn't notice or think to verify the link.
The problem in this case wasn't specific to Perplexity's underlying model. It stemmed from the abuse of Google Sites and the tool's lack of vetting of surfaced search results. Even so, the result was the same: a trusted AI platform inadvertently directed someone to a fake financial website.
Smaller banks and regional credit unions face even higher risks. These institutions are less likely to appear in AI training data or to be accurately indexed on the web. As a result, AI tools are more likely to guess or fabricate links when asked about them, increasing the risk of sending users to unsafe destinations.

An image of ChatGPT on a desktop computer screen. (Kurt “Cyberguy” Knutsson)
7 ways you can protect yourself from AI phishing attacks
As AI phishing attacks become more sophisticated, protect yourself with a few smart habits. Here are seven that can make a real difference:
1) Never blindly trust links in AI chat responses
AI chatbots often sound confident even when they're wrong. If a chatbot tells you where to log in, don't click the link right away. Instead, go directly to the website by typing its URL manually or using a trusted bookmark.
2) Double-check domain names carefully
AI-generated phishing links often use lookalike domains. Check for subtle misspellings, extra words or unusual endings such as “.site” or “.info” instead of “.com”. If something feels off, don't continue.
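To illustrate how close these lookalikes can get, here is a small sketch that compares domains against the real one. The suspect domains and the similarity threshold are my own made-up examples, not from any documented attack:

```python
import difflib

# The real domain, and some hypothetical lookalikes an attacker might register.
OFFICIAL = "wellsfargo.com"
SUSPECTS = ["wellsfargo.com", "welIsfargo.com", "wellsfargo-login.site"]

for domain in SUSPECTS:
    if domain == OFFICIAL:
        print(f"{domain}: exact match, OK")
    else:
        # A domain that is *almost* identical to the real one is a classic
        # phishing red flag (here "welIsfargo" swaps a capital I for an l).
        score = difflib.SequenceMatcher(None, domain, OFFICIAL).ratio()
        print(f"{domain}: NOT the official domain (similarity {score:.2f})")
```

Notice that "welIsfargo.com" scores over 90 percent similar to the real domain while being a completely different address, which is exactly why eyeballing a URL isn't always enough.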
3) Use two-factor authentication (2FA) whenever possible
Even if your login credentials are stolen, 2FA adds an extra layer of security. Choose app-based authenticators such as Google Authenticator or Authy over SMS-based codes when available.
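For the curious, app-based authenticators typically generate time-based one-time passwords (TOTP, standardized in RFC 6238). Here is a minimal sketch using only Python's standard library; the base32 secret is a made-up example, as a real secret comes from the QR code your bank shows during 2FA enrollment:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period              # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up example secret; prints a fresh 6-digit code every 30 seconds.
print(totp("JBSWY3DPEHPK3PXP"))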
4) Avoid logging in through search engines or AI tools
If you need to access your bank or a tech account, avoid searching for it or asking a chatbot. Use your browser's bookmarks or type the official URL directly. AI tools and search engines can sometimes surface phishing pages by mistake.
5) Report suspicious links generated by AI
If a chatbot or AI tool gives you a dangerous or fake link, report it. Many platforms accept user feedback. This helps AI systems learn and reduces future risk for others.
6) Keep your browser updated and use strong antivirus software
Modern browsers such as Chrome, Safari and Edge now include phishing and malware protection. Enable these features and keep everything updated.
If you want extra protection, the best way to safeguard yourself from malicious links is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com/Lockupyoutech.
7) Use a password manager
Password managers don't just generate strong passwords; they can also help detect fake websites. They generally won't autofill login fields on lookalike or counterfeit sites.
See the best password managers reviewed by experts for 2025 at Cyberguy.com/Passwords.
Kurt's key takeaway
Attackers are shifting tactics. Instead of gaming search engines, they now design content specifically for AI models. I have consistently urged you to double-check URLs for inconsistencies before entering any sensitive information. Since chatbots are still known to produce highly inaccurate responses due to AI hallucinations, be sure to verify anything a chatbot tells you before acting on it in real life.
Should companies do more to prevent phishing attacks through their chatbots? Let us know at Cyberguy.com/contact.
Copyright 2025 Cyberguy.com. All rights reserved.
Kurt “Cyberguy” Knutsson is an award-winning tech journalist with a deep love for technology, gear and gadgets that make life better, with his contributions for News and News Business starting mornings on “News & Friends.” Have a tech question? Get Kurt's free newsletter, share your voice, a story idea or a comment at Cyberguy.com.


