Check Point Research has released a report indicating that Russian hackers are actively seeking ways to bypass ChatGPT's restrictions for malicious purposes. Multiple underground hacking forums discuss methods for circumventing OpenAI's IP address, payment card and phone number checks in order to gain access to the service.
Last month, a thread posted on an underground hacking forum revealed that a threat actor was testing ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common viruses.
Discussions on Underground Hacking Forums
ChatGPT is an AI chatbot that answers user questions in a friendly, conversational tone, and it has become widely popular. Unfortunately, the Microsoft-backed OpenAI chatbot also has its dark side: it is being misused by hackers around the world.
According to Check Point Research, Russian hackers have been trying to circumvent restrictions on ChatGPT’s API for malicious purposes. Specifically, cybercriminals in Russia are attempting to utilize this tool for phishing and fraud operations.
Unfortunately, OpenAI’s ChatGPT tool is currently geofenced – meaning it does not work for users in certain countries, such as Russia, China, Egypt, Iran and Ukraine. Anyone in those regions is blocked from accessing the service. Separately, OpenAI has recently opened up a waitlist for professional paid memberships.
Russian hackers have been discussing ways around these restrictions on underground hacking forums. They’ve demonstrated how to use stolen payment cards to pay for an upgraded OpenAI account, and how to use semi-legal online SMS services to register for ChatGPT as if from another, non-blocked country.
This has enabled them to circumvent the restrictions and gain access to the tool, which they can then use to write malicious code for scams or ransomware attacks. It is a grave concern, since it could help criminals craft far more convincing phishing attempts.
Furthermore, thanks to its AI technology, hackers could scale their existing attack techniques far more efficiently than they could manually. One such technique is spearphishing, which involves crafting messages tailored to specific targets in order to increase the likelihood of a response.
Cybercriminals may find the ability to generate such attacks with ChatGPT and similar tools a huge boon, which is why many cybersecurity leaders are worried.
On an underground hacking forum, a threat actor revealed that they had used ChatGPT to create malware that scans for Microsoft Office documents, PDFs and images. CPR also discovered evidence of the tool being used to develop an infostealer.
Hackers are using ChatGPT to build tools for dark web marketplaces, craft encryption scripts and propagate fraudulent schemes. It is evident that threat actors view ChatGPT as a lucrative opportunity to further their illegal activities and are eager to exploit it as soon as possible.
A recent survey of 1,500 IT decision makers across North America, the UK and Australia revealed that 74% were worried about the potential misuse of the tool. Their main concerns are that it will allow less experienced hackers to gain technical proficiency, enhance existing attackers' abilities and spread misinformation.
Conversation Notes from the Dark Web
ChatGPT, developed by Microsoft-backed OpenAI, is an innovative AI system that can respond to users in natural language through a dialog format. It is currently free for the public as part of a feedback exercise, though a paid subscription is expected soon.
ChatGPT’s impressive capabilities are also being misused for malicious intent. According to Check Point Research, Russian hackers are trying to circumvent ChatGPT’s restrictions on IP addresses, payment cards and phone numbers in an effort to use it for illicit purposes.
Security researchers discovered several threads on underground hacking forums discussing ways to circumvent these limitations. One thread suggests buying a virtual phone number and using it to bypass registration restrictions; another instructs hackers on using stolen payment cards to pay for upgraded ChatGPT accounts.
Other forum participants discussed writing Python scripts to aid cybercriminals in creating malware. Scripts like these, similar to examples OpenAI is already aware of, could enable someone with little or no development experience to acquire the technical capabilities of a full-fledged cybercriminal.
These threads indicate that Russian cybercriminals are creating and advertising their own Telegram channels built on the ChatGPT API, in an effort to circumvent OpenAI’s anti-abuse restrictions on its service. Furthermore, they appear to be using these channels to promote their malicious activities and distribute stolen goods.
In some cases, threat actors are using ChatGPT to help build marketplaces where they can sell illegal and stolen items such as payment card data, weapons and more. These platforms operate under the assumption that users will send and receive payments in cryptocurrency.
Cybercriminals are also using ChatGPT to perfect phishing emails and attacks, allowing them to scale their operations more efficiently and at lower cost.
Recently, Check Point Research discovered multiple underground threads on hacker forums demonstrating the advantages of ChatGPT for malware. These threads showed how cybercriminals employed ChatGPT to write malware scripts used in phishing campaigns, including an infostealer targeting Microsoft Office documents, PDFs and images; a Python script performing encryption and other cryptographic operations; and scripts for a dark web marketplace pushing fraudulent schemes. Moreover, cybercriminals created an AI-based bot capable of sending out phishing emails and creating malware strains.
Check Point Research
Russian hackers are actively looking for ways to circumvent ChatGPT’s restrictions for malicious purposes. According to Check Point Research, discussions on underground hacking forums suggest cybercriminals are seeking methods of bypassing the IP address, phone number and payment card restrictions OpenAI places on the chatbot.
ChatGPT is an impressive chatbot that utilizes generative language models to simulate human conversation, refined with reinforcement and supervised learning techniques to improve its performance. Its primary function is conversation, but it can also answer questions and compose essays, poems and teleplays.
ChatGPT has quickly become an essential workflow tool for developers and writers alike, but its potential for misuse means OpenAI has put limits on how it can be utilized. Furthermore, the service is geo-blocked to prevent access from users in Russia and certain other countries.
However, a report from Check Point suggests the chatbot’s popularity has made it an attractive target for Russian hackers. Recently, cybercriminals have posted threads in underground hacking forums discussing ways they could circumvent restrictions and use ChatGPT to craft malware strains.
One thread in particular discussed how to generate a temporary phone number that can be used to circumvent registration restrictions. This was accomplished using semi-legal online SMS services.
Cybercriminals are also trying to circumvent ChatGPT's restrictions by purchasing a virtual phone number from a third party to pass the service's SMS verification. This temporary number can then be regenerated as and when needed.
Cybercriminals now have the capacity to make many malicious requests simultaneously, giving them access to valuable information and enabling other illicit activities. It also presents threat actors with an enhanced opportunity to amplify their attacks through automated content generation.
Furthermore, the ability to generate spam and phishing messages would enable them to reach a wider audience. This could be particularly damaging in fields like healthcare, law and education, where disinformation is an ongoing problem.
OpenAI is already aware of the potential misuse of generative language models for malicious purposes and has joined forces with Georgetown University’s Center for Security and Emerging Technology and Stanford Internet Observatory to assess these risks.
It is therefore wise to become informed about the various methods available for protecting your organization against such attacks, for example through training and accreditation courses.
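On the defensive side, even simple automated checks can catch some of the AI-polished phishing emails described above. The sketch below is a hypothetical, minimal heuristic (not any vendor's product) that flags two classic red flags in an email's headers: a From/Reply-To domain mismatch and a failed SPF/DKIM result recorded by the receiving mail server; the sample message is invented for illustration.

```python
# Hypothetical sketch: flag basic phishing indicators in a raw email,
# using only Python's standard library. Real mail filters are far
# more sophisticated; this only illustrates the idea.
from email import message_from_string
from email.utils import parseaddr


def phishing_indicators(raw_message):
    """Return a list of simple red flags found in an email's headers."""
    msg = message_from_string(raw_message)
    flags = []

    # Red flag 1: the visible From address and the Reply-To address
    # point at different domains (a common phishing pattern).
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    if reply_domain and reply_domain != from_domain:
        flags.append("from/reply-to domain mismatch")

    # Red flag 2: upstream authentication checks (SPF/DKIM) failed,
    # as recorded by the receiving server in Authentication-Results.
    auth = msg.get("Authentication-Results", "").lower()
    if "spf=fail" in auth or "dkim=fail" in auth:
        flags.append("authentication failure")

    return flags


# Invented sample message exhibiting both red flags.
raw = (
    "From: Support <support@example-bank.com>\r\n"
    "Reply-To: collector@evil.example\r\n"
    "Authentication-Results: mx.example.org; spf=fail\r\n"
    "Subject: Verify your account\r\n"
    "\r\n"
    "Click here immediately.\r\n"
)
print(phishing_indicators(raw))
# prints ['from/reply-to domain mismatch', 'authentication failure']
```

Heuristics like these are cheap to run on every inbound message, and they target the delivery mechanics of a phishing email rather than its prose, which matters precisely because tools like ChatGPT make the prose itself harder to distinguish from a legitimate message.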
At Lead2Pass, we offer a comprehensive selection of training and accreditation courses to help you deepen your understanding of Check Point products and sharpen your security skills. Plus, our support and resources will help you develop and hone those skills so they can work for your organization in the long run.