ChatGPT-Themed Scam Attacks Are on the Rise



Executive Summary

Unit 42 researchers are monitoring the trending topics, newly registered domains and squatting domains related to ChatGPT, as it is one of the fastest-growing consumer applications in history. The dark side of this popularity is that ChatGPT is also attracting the attention of scammers seeking to benefit from using wording and domain names that appear related to the site.

From November 2022 through early April 2023, we saw a 910% increase in monthly registrations for domains related to ChatGPT. Over the same period, we observed 17,818% growth in related squatting domains in DNS Security logs. We also saw up to 118 daily detections of ChatGPT-related malicious URLs in traffic captured by our Advanced URL Filtering system.

We present several case studies to illustrate the various methods scammers use to entice users into downloading malware or sharing sensitive information. Since OpenAI released its official ChatGPT API on March 1, 2023, we have seen an increasing number of suspicious products using it. We therefore highlight the potential dangers of copycat chatbots, to encourage ChatGPT users to approach them with a defensive mindset.

Palo Alto Networks Next-Generation Firewall and Prisma Access customers with Advanced URL Filtering, DNS Security and WildFire subscriptions receive protections against ChatGPT-related scams. All mentioned malicious indicators (domains, IPs, URLs and hashes) are covered by these security services.

Related Unit 42 Topics: Phishing, Cybersquatting

Trends in ChatGPT-Themed Suspicious Activities
Case Studies of ChatGPT Scams
The Risks of Copycat Chatbots
Indicators of Compromise

Trends in ChatGPT-Themed Suspicious Activities

As OpenAI began its rapid rise to become one of the most recognizable brands in artificial intelligence, we observed threat actors registering and using squatting domains in the wild that include “openai” and “chatgpt” in their names (e.g., openai[.]us, openai[.]xyz and chatgpt[.]jobs). As of early April 2023, most of these domains do not host anything malicious, but it is concerning that they are not controlled by OpenAI or other legitimate organizations. They could be abused to cause damage at any time.
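The basic idea behind flagging such domains can be sketched with a naive keyword filter. This is illustrative only: the allowlist below is an assumption for the example, and a production pipeline would also weigh registration dates, WHOIS data, hosting and page content.

```python
# Illustrative sketch: flag candidate squatting domains by brand keyword.
OFFICIAL_DOMAINS = {"openai.com", "chat.openai.com"}  # assumed allowlist
KEYWORDS = ("openai", "chatgpt")

def is_candidate_squat(domain: str) -> bool:
    """Flag domains that contain a brand keyword but are not official."""
    d = domain.lower().replace("[.]", ".").rstrip(".")  # normalize defanged names
    if d in OFFICIAL_DOMAINS or any(d.endswith("." + o) for o in OFFICIAL_DOMAINS):
        return False
    return any(kw in d for kw in KEYWORDS)

for name in ["openai[.]us", "openai[.]xyz", "chatgpt[.]jobs", "chat.openai.com"]:
    print(name, is_candidate_squat(name))
# The three squatting domains are flagged; chat.openai.com is not.
```

A keyword match like this only surfaces candidates for analyst review; it cannot by itself establish malicious intent.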

Figure 1 shows the trend of ChatGPT-related squatting domain registrations after the tool's release. We noticed a significant increase in the volume of daily domain registrations during our research period. Shortly after Microsoft announced its new version of Bing on Feb. 7, 2023, more than 300 ChatGPT-related domains were registered.

Image 1 is a chart showing the amount of ChatGPT domain squatting over time. It starts 1 December 2022, and extends through 1 April 2023. The sharpest increase was over 300 with the new Bing release in February 2023.
Figure 1. Trend of ChatGPT squatting domain registration.

Figure 2 shows a similar pattern, where the recognition and popularity of ChatGPT have driven a significant rise in DNS requests for these squatting domains, as seen in DNS Security logs.

Image 2 is a chart of the number of DNS requests of ChatGPT domain squatting over time. It starts December 1, 2022 and continues until April 1, 2023. There is a sharp increase in March 2023 with the public API release, and then another bump of activity with the ChatGPT 4 release.
Figure 2. DNS traffic trend of ChatGPT squatting domains.

We also did a keyword search in the traffic of the Advanced URL Filtering system. Figure 3 shows two large spikes, on the days when OpenAI released the official ChatGPT API and GPT-4.
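The keyword search can be illustrated with a simplified sketch that counts daily detections whose hostname contains “chatgpt.” The log format below (a date plus a defanged URL) is a hypothetical simplification for the example, not the actual Advanced URL Filtering schema.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical, simplified log lines: "<date> <defanged URL>".
logs = [
    "2023-03-01 hxxp://chat-gpt-online-pc[.]com/download",
    "2023-03-01 hxxp://example[.]com/page",
    "2023-03-14 hxxp://x2chatgpt[.]org/giveaway",
]

daily_hits = Counter()
for line in logs:
    day, url = line.split(maxsplit=1)
    url = url.replace("hxxp", "http").replace("[.]", ".")  # re-fang for parsing
    host = urlparse(url).hostname or ""
    if "chatgpt" in host.replace("-", ""):  # also matches "chat-gpt" variants
        daily_hits[day] += 1

print(dict(daily_hits))  # {'2023-03-01': 1, '2023-03-14': 1}
```

Stripping hyphens before matching is a simple way to catch spelling variants such as chat-gpt, at the cost of some false positives.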

Image 3 is a chart of the number of detections of ChatGPT as a keyword. There are two large spikes in March 2023, and another later in March for the ChatGPT 4 release.
Figure 3. Trend of malicious detections with “ChatGPT” keyword.

Case Studies of ChatGPT Scams

While conducting our research, we observed multiple phishing URLs attempting to impersonate official OpenAI sites. Typically, scammers create a fake website that closely mimics the appearance of the ChatGPT official website, then trick users into downloading malware or sharing sensitive information.

For example, Figure 4 shows a common technique that scammers use to deliver malware. It presents users with a “DOWNLOAD FOR WINDOWS” button that, once clicked, downloads trojan malware (SHA256: ab68a3d42cb0f6f93f14e2551cac7fb1451a49bc876d3c1204ad53357ebf745f) to their devices without the victims realizing the risk.

Image 4 is a screenshot of a website mimicking the official ChatGPT website. This website will deliver malware to the end user.
Figure 4. Malware delivery: chat-gpt-online-pc[.]com.
Additionally, scammers might use ChatGPT-related social engineering for identity theft or financial fraud. Although OpenAI offers users a free version of ChatGPT, scammers lead victims to fraudulent websites and claim they must pay for these services. For instance, the fake ChatGPT site shown in Figure 5 tries to lure victims into providing confidential information, such as credit card details and their email address.

Image 5 is a screenshot of an example of financial fraud, where a threat actor attempts to collect confidential information through an online payment form. The screenshot shows the payment form as well as a URL that mimics ChatGPT.
Figure 5. Financial fraud: pay[.]chatgpt-oracle[.]com.
We also noticed some scammers exploiting the growing popularity of OpenAI for cryptocurrency fraud. Figure 6 shows an example of a scammer abusing the OpenAI logo and Elon Musk’s name to attract victims to a fraudulent crypto giveaway event.

Image 6 is a screenshot of a cryptocurrency scam that mimics the OpenAI website. It has a picture of Elon Musk with the title “biggest giveaway crypto of $100 million.” There is a button for end users to participate, as well as instructions for participation.
Figure 6. Crypto scam: x2chatgpt[.]org.

The Risks of Copycat Chatbots

While ChatGPT has become one of the most popular applications this year, an increasing number of copycat AI chatbot applications have also appeared on the market. Some of these applications offer their own large language models, and others claim to offer ChatGPT services through the public API announced on March 1. However, using copycat chatbots can increase security risks.

Before the release of the ChatGPT API, several open-source projects allowed users to connect to ChatGPT via various automation tools. Because ChatGPT is not accessible in certain countries or regions, websites created with these automation tools or the API can attract a considerable number of users from those areas. This also gives threat actors an opportunity to monetize ChatGPT by proxying its service. For example, Figures 7a and 7b below show a Chinese website providing a paid chatbot service.

Image 7a is a screenshot of a chatbot service. It is in Chinese.
Figure 7a. Paid chatbot service (in Chinese): chatgpt[.]appleshop[.]top.
Image 7b is a screenshot of a chatbot service. It has been translated from Chinese to English.
Figure 7b. Paid chatbot service (translated from Chinese to English): chatgpt[.]appleshop[.]top.
Whether or not they’re offered free of charge, these copycat chatbots are not trustworthy. Many of them are actually based on GPT-3 (released June 2020), which is less capable than the more recent GPT-3.5 and GPT-4.

Moreover, these chatbots pose another significant risk: they might collect and steal the input you provide. In other words, entering anything sensitive or confidential could put you in danger. A chatbot’s responses could also be manipulated to give you incorrect answers or misleading information.

For example, as shown in Figure 8, the squatting domain chatgptforchrome[.]com hosts an introduction page for the ChatGPT Chrome Extension. It uses the information and video from the official OpenAI extension.

The “Add to Chrome” link leads to a different extension URL, chrome[.]google[.]com/webstore/detail/ai-chatgpt/boofekcjiojcpcehaldjhjfhcienopme, while the authentic URL is chrome[.]google[.]com/webstore/detail/chatgpt-chrome-extension/cdjifpfganmhoojfclednjdnnpooaojb.
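One defensive habit this comparison suggests is checking the 32-character extension ID at the end of a Chrome Web Store URL against a known-good ID, rather than trusting the extension's display name. The sketch below uses the two IDs from this post; the regular expression is an assumption about the URL shape, not an official API.

```python
import re

# Known-good ID of the official "ChatGPT Chrome Extension" (from this post).
OFFICIAL_ID = "cdjifpfganmhoojfclednjdnnpooaojb"

def extension_id(webstore_url: str) -> str:
    """Extract the extension ID (32 letters, a-p) from a Web Store URL."""
    m = re.search(r"/webstore/detail/[^/]+/([a-p]{32})", webstore_url)
    return m.group(1) if m else ""

copycat = "https://chrome.google.com/webstore/detail/ai-chatgpt/boofekcjiojcpcehaldjhjfhcienopme"
print(extension_id(copycat) == OFFICIAL_ID)  # False: the copycat's ID differs
```

Because the ID is derived from the developer's signing key, a copycat cannot reuse the official extension's ID even if it copies the name and description exactly.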

We downloaded the “AI ChatGPT” extension shown in Figure 8 (SHA256: 94a064bf46e26aafe2accb2bf490916a27eba5ba49e253d1afd1257188b05600) and found that it adds a background script to the victim’s browser that contains highly obfuscated JavaScript. Our analysis of this JavaScript shows that it calls the Facebook Graph API to steal a victim’s account details, and it might gain further access to their Facebook account. Other researchers have also reported similar campaigns involving malicious browser extensions.
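A basic triage step for a file like this is to hash it and compare the digest against published IoCs, as sketched below with the two SHA256 hashes from this post. This is only a first check: a new build of the same malware trivially changes the hash, so real triage would also rely on sandboxing and reputation services.

```python
import hashlib

# SHA256 IoCs published in this post.
IOC_SHA256 = {
    "ab68a3d42cb0f6f93f14e2551cac7fb1451a49bc876d3c1204ad53357ebf745f",
    "94a064bf46e26aafe2accb2bf490916a27eba5ba49e253d1afd1257188b05600",
}

def sha256_of(path: str) -> str:
    """Hash a file in 1 MiB chunks to avoid loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_ioc(path: str) -> bool:
    return sha256_of(path) in IOC_SHA256
```

A non-match proves nothing about a file's safety; a match against a published IoC is grounds to quarantine it immediately.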

Image 8 is a screenshot of a ChatGPT extension for Google Chrome. This browser extension is meant to steal account details. It claims that ChatGPT will respond alongside search engine results, lists the browsers it works with, shows star ratings and says that it’s trusted by 600,000+ users. There is a button to add it to Google Chrome, as well as a screenshot of a YouTube video.
Figure 8. ChatGPT extension: chatgptforchrome[.]com.


Conclusion

The growing popularity of ChatGPT worldwide has made it a target for scammers. We have noticed a significant increase in the number of newly registered domains and squatting domains related to ChatGPT, which could potentially be exploited by scammers for malicious purposes.

To stay safe, ChatGPT users should exercise caution with suspicious emails or links related to ChatGPT. Moreover, using copycat chatbots brings extra security risks. Users should always access ChatGPT through the official OpenAI website.

Palo Alto Networks Next-Generation Firewall and Prisma Access customers with Advanced URL Filtering, DNS Security and WildFire subscriptions are protected against ChatGPT-related scams. All the mentioned malicious indicators (domains, IPs, URLs and hashes) are covered by these security services.


The authors would like to thank Nabeel Mohamed, Shehroze Farooqi and Shresta Bellary Seetharam for providing data sources and examples used in this blog. We would also like to thank Jun Javier Wang, Alex Starov, Harsha Srinath, Laura Novak, Daniel Prizmant and Erica Naone for their advice and help with improving the blog.

Indicators of Compromise

Squatting Domains

  • openai[.]us
  • openai[.]xyz
  • chatgpt[.]jobs

ChatGPT Scams

  • chat-gpt-online-pc[.]com
  • ab68a3d42cb0f6f93f14e2551cac7fb1451a49bc876d3c1204ad53357ebf745f
  • pay[.]chatgpt-oracle[.]com
  • x2chatgpt[.]org

Copycat Chatbots

  • chatgpt[.]appleshop[.]top

Chrome Extensions

  • chatgptforchrome[.]com
  • chrome[.]google[.]com/webstore/detail/ai-chatgpt/boofekcjiojcpcehaldjhjfhcienopme
  • 94a064bf46e26aafe2accb2bf490916a27eba5ba49e253d1afd1257188b05600