Hackers exploiting ChatGPT to write malicious code

January 30, 2023

Hackers are exploiting ChatGPT to write malicious code. If your organization relies on a security product that uses ChatGPT to provide a security layer, you should be aware of how attackers abuse the tool and of the ways to stop them from writing malicious code with it.

Hackers exploiting ChatGPT to deploy malware

Hackers exploiting ChatGPT to deploy malware is a growing threat that has security researchers concerned. According to Check Point Research, ChatGPT has been used by malicious hackers to develop tools for cybercrime. On underground forums, they have discussed ways to create dark web marketplace scripts as well as data encryption tools.

Check Point reported three cases of ChatGPT-generated code being used by malicious actors. In one, a hacker shared a tool that installs a backdoor onto a computer, along with a script that would upload further malware to an infected PC.

The hackers claimed they could make $1,000 per day using ChatGPT, and posted a thread about the exploit on an underground hacking forum. Several more advanced hackers discussed how to write malware with ChatGPT, including a way to create reverse shells.

Check Point says it is only a matter of time before more sophisticated hackers find ways to use ChatGPT for malicious purposes. In a report on the issue, its researchers found that the tool's built-in guardrails were lacking. These safeguards rely on AI to detect malicious content, which means they cannot impose immediate consequences for content policy violations.
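The point about reactive enforcement can be made concrete with a toy sketch. The filter below is purely illustrative (it is not OpenAI's actual mechanism, and the patterns are hypothetical): it screens a prompt against a blocklist before any generation happens, so a violation has an immediate consequence rather than being caught afterwards.

```python
# Illustrative, hypothetical guardrail: refuse a prompt *before* generating.
# Real systems use trained ML classifiers rather than keyword lists.
import re

BLOCKED_PATTERNS = [
    r"\breverse shell\b",
    r"\bkeylogger\b",
    r"\bransomware\b",
]

def violates_policy(prompt: str) -> bool:
    """Return True if the prompt matches any blocked pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

def handle_prompt(prompt: str) -> str:
    # Enforcement happens up front, so the consequence is immediate.
    if violates_policy(prompt):
        return "Request refused: content policy violation."
    return "OK to generate."

print(handle_prompt("Write a reverse shell in Python"))
# Request refused: content policy violation.
```

A keyword list like this is trivially evaded, which is exactly the weakness the researchers describe: AI-based detection is more robust but runs after the fact.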

The threat actor said that he first used Python to write a script, adding that he would later go on to develop tools more geared towards cybercriminal activity.

Check Point says it has found examples of criminals celebrating the fact that they can use ChatGPT to generate code for malicious software. Another example is the way it can be used to craft phishing emails. Having an artificially intelligent chatbot at the ready is an advantage for scammers.

Check Point says that the use of ChatGPT is a problem because it will eventually be used to produce malware in real time. Its AI-driven responses could speed up attacks, save hackers time, and help them write basic tools or malware.

While there are a lot of interesting things that OpenAI’s bot can do, it also has limitations. For instance, it is not yet able to generate a major ransomware strain on its own. That is one of the reasons hackers are exploiting ChatGPT to write simpler malicious code instead.

Empowering less skilled cyber threat actors

ChatGPT (Generative Pre-trained Transformer) is a prototype artificial intelligence chatbot developed by OpenAI. According to security researchers, it can be a useful tool for enacting social engineering attacks on vulnerable targets, though some question how effective it really is.

While it may not be as sophisticated as a human attacker, this artificial intelligence tool can be used to speed up malware creation for free. Researchers have even demonstrated a proof-of-concept attack. This is not the first time that a machine learning tool has been used to create a workable piece of malware.

Although the technology is still in its research and development phase, some security experts are concerned about how useful it could be to bad actors exploiting security vulnerabilities. They believe that if it is used correctly, it may be able to help reverse-engineer many security applications, including firewalls and intrusion detection systems.

Using ChatGPT, hackers can create an automated chatbot that engages in real-time conversations and asks for sensitive information. These chatbots can also generate convincing responses. In this way, even hackers with little skill can exploit ChatGPT for social engineering.

Cyber threat activity is often a drain on the economy. Its costs include stolen funds, disruption of operations, reputational damage, and loss of customers, as well as the financial cost of securing networks.

While some of these threats are relatively benign, others are serious: they can steal credit card numbers, encrypt files, or install backdoors on systems. In a cybercrime-as-a-service transaction, the perpetrator can collect personal and business information and sell it on the dark web.

The malware created with the ChatGPT bot has been a topic of interest for researchers. A recent blog post by Check Point Research revealed how it can infect devices, describing a full infection flow.

Another study on the technology cites a Python-based stealer that searches for common file types, copies them to the Temp folder, zips them, and uploads the archive to a hard-coded FTP server.

While some security professionals worry that this type of artificial intelligence technology may lead to increased cybercrime, there is no way to know for sure. In the meantime, it is best to be prepared for the misuse of any type of artificial intelligence tool.

Guardrails designed to prevent it from doing straightforwardly malicious things

In the security world, guardrails designed to prevent ChatGPT from doing straightforwardly malicious things have already made a splash. And they aren’t aimed only at hackers: the system has also been tested by cybersecurity defenders and experts.

A guardrail is a protective barrier, signpost, or guide that restricts behavior or actions that aren’t in line with the organization’s goals. Guardrails can help a team move in the same direction, save time, and avoid errors. Ideally, they are built to be flexible; some are simple, while others are more elaborate.

The best guardrails are built to be agnostic, meaning that they are neither a complete blockade against good ideas nor merely a deterrent against bad ones. This allows the most creative minds to contribute to the effort.

In particular, the most important guardrail is the code quality one. Code quality measures are crucial to ensuring that the entire development team produces clean code. Any developer who fails to adhere to coding standards will be held accountable: they may need to undergo some form of training, or else they could be fired.

It’s no secret that artificial intelligence has been overhyped by marketing teams and thought leaders. But the fact that it’s actually feasible to have an AI do things that would normally require human intervention is a real-world achievement.

There’s a lot of hype surrounding chatbots, and the newest version of the software, ChatGPT, has impressed several industry leaders. As the technology evolves, there will be a need to keep testing it. Used in the right way, it can be a boon to any cybersecurity organization.

While no guardrail will completely prevent an AI from misbehaving, many are worth the effort. Using the most important ones will increase quality, reduce error rates, and lower costs. Ensure that you are building the best possible tools and technologies for your organization: a clear set of guidelines will help you and your team build a secure future for everyone.

Of course, the guardrails you read about are only as relevant as the context you apply them in. Whether it’s a modest code quality check or the best way to prevent SQL injection attacks, it’s always better to be prepared.
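Since SQL injection prevention is mentioned as a classic guardrail, here is a minimal sketch using Python's standard `sqlite3` module (the table and data are made up for illustration): parameterized queries bind attacker-supplied input as data, so it can never alter the SQL statement itself.

```python
import sqlite3

# Toy in-memory database for demonstration purposes only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(conn, name):
    # The ? placeholder binds the value; the input is treated purely as data,
    # never spliced into the SQL text.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user(conn, "alice"))        # [('alice', 'admin')]
# A classic injection payload is matched literally and returns nothing:
print(find_user(conn, "' OR '1'='1"))  # []
```

Had the query been built with string concatenation, the second call would have returned every row in the table; with a bound parameter it simply finds no user by that (literal) name.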

Creating infostealers, encryption tools and dark web marketplace scripts

Cybercriminals are creating infostealers and encryption tools to exploit networks, allowing them to steal information and credentials. Often, they combine stolen credentials with social engineering. They can use this to steal debit card information, make fraudulent purchases, or even withdraw funds from an ATM. Infostealers can also be used to create phishing lures.

Infostealers are a type of malware that criminals use to harvest passwords, cookie data, and usernames. They are available as malware-as-a-service, allowing actors with little technical knowledge to gain access to systems. The cost of an infostealer is relatively low, making it an attractive tool for small, unaffiliated actors, and several updates have been added to the service, giving criminals constant access to new features.

Infostealers were a popular tool for Dark Web participants in 2022. According to ACTI research, the use of this malware increased during the second half of the year. This increased demand drove underground actors to publish new variants and advertise them on dark web forums. Some of the most used infostealers are Raccoon, RedLine, and Meta Stealer.

As criminals continue to develop infostealers and encryption tools, ACTI has begun monitoring compromised credential marketplaces. ACTI found a dramatic increase in the number of logs for sale on these marketplaces. Usually, sellers cultivate relationships with trusted buyers, who are willing to pay more for logs. These transactions are often performed through FTP and SSH.

In October 2022, the Russian Market saw a huge spike in the volume of logs for sale. Five different infostealers were used to obtain the logs on the marketplace. Additionally, in October, the operators added a pre-order option to the Stealer Logs section of the marketplace, which enabled vendors to start selling their logs before the rest of the market. Normally, sellers offer the best logs to trusted buyers first, before releasing them to the general page of the marketplace.

Infostealers and encryption tools are quickly becoming a major concern for defenders. While multi-factor authentication (MFA) is increasing in popularity, it is not enough on its own to protect against credential theft. To combat this problem, organizations should train staff on how to secure online accounts, and should also consider biometrics and number-matching MFA systems.
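To make "number-matching MFA" concrete, here is a minimal sketch of the idea, with entirely illustrative function names: the login screen displays a short random number, and the push prompt on the user's phone is approved only if the user types that same number, which defeats blind "approve" tapping during MFA-fatigue attacks.

```python
import hmac
import secrets

def issue_challenge() -> str:
    """Generate the two-digit number shown on the login screen."""
    return f"{secrets.randbelow(100):02d}"

def approve_push(displayed: str, entered: str) -> bool:
    """Approve the push only if the user enters the matching number.

    compare_digest avoids timing side channels; a real system would also
    rate-limit attempts and expire the challenge after a short window.
    """
    return hmac.compare_digest(displayed, entered)

challenge = issue_challenge()
assert approve_push(challenge, challenge)   # correct number: approved
assert not approve_push(challenge, "xx")    # wrong entry: denied
```

Because the attacker who triggered the login prompt cannot see the number on the victim's screen, simply spamming approval requests no longer works.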

Ammar Fakhruddin

ABOUT AUTHOR

Ammar brings 18 years of experience in strategic solutions and product development in Public Sector, Oil & Gas and Healthcare organizations. He loves solving complex real-world business and data problems by bringing in leading-edge solutions that are cost-effective and improve customer and employee experience. At Propelex he focuses on helping businesses achieve digital excellence using Smart Data & Cybersecurity solutions.

