Threat actors are using AI-generated YouTube video tutorials to spread stealer malware that scrapes sensitive personal information. CloudSEK researchers have observed an uptick of 200-300% month-on-month in videos containing links to stealers since November 2022.
Links to infostealer malware are often embedded in videos that pose as tutorials on how to download licensed, subscription-only software such as Photoshop, Premiere Pro and Autodesk 3ds Max.
The videos feature AI-generated personas that appear familiar and trustworthy, which, according to CloudSEK, makes viewers more likely to trust the content and follow the links it promotes.
The campaign targets YouTube’s 2.5 billion monthly active users, enticing viewers with tutorials that promise to teach them how to download cracked versions of software such as Photoshop, Premiere Pro, Autodesk 3ds Max and AutoCAD.
Cybercriminals often create videos with screen recordings or audio walkthroughs to demonstrate how to download cracked versions of licensed products. Furthermore, they incorporate fake comments and SEO poisoning techniques in order to boost the ranking of these videos on search results.
Viewers who follow the instructions end up installing infostealer malware that can scrape sensitive information from their devices, including bank account numbers, login credentials, browser history, crypto wallet details, IP addresses and location data.
CloudSEK revealed that the stolen information is then uploaded to a command-and-control server operated by the threat actors, who monitor, collate and sell the data through underground forums, Telegram channels and other criminal outlets.
Threat actors also hijack popular YouTube accounts, exploiting their large subscriber bases to distribute the malware more widely. Once in control of an account, they plant fake links and websites in the description sections of its videos to extend the malware’s reach.
The videos themselves are produced with generative AI platforms such as Synthesia and D-ID.
These videos have spread beyond YouTube to other social media platforms. To evade detection, the attackers use SEO techniques such as region-specific tags, fake comments and extensive tag lists, and they obscure malicious links with URL shorteners, by pointing to file-hosting platforms, or by having viewers download a ZIP archive rather than linking to the payload directly.
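As a defensive illustration, the red flags described above (shortened links, file-hosting links, direct archive downloads in a video description) can be checked mechanically. The sketch below is hypothetical: the function name is ours, and the domain lists are illustrative examples, not a vetted blocklist.

```python
import re

# Illustrative red-flag lists -- real tooling would use maintained
# threat-intelligence feeds, not a hard-coded handful of domains.
SHORTENERS = {"bit.ly", "cutt.ly", "t.co", "tinyurl.com"}
FILE_HOSTS = {"mediafire.com", "mega.nz", "anonfiles.com"}

URL_RE = re.compile(r"https?://([^/\s]+)(/\S*)?")

def flag_suspicious_links(description: str) -> list[str]:
    """Return URLs in a video description that trip a simple red-flag check."""
    flagged = []
    for match in URL_RE.finditer(description):
        host = match.group(1).lower().removeprefix("www.")
        path = (match.group(2) or "").lower()
        if (host in SHORTENERS
                or host in FILE_HOSTS
                or path.endswith((".zip", ".rar", ".exe"))):
            flagged.append(match.group(0))
    return flagged
```

A heuristic like this catches only the obvious cases, but it mirrors the pattern CloudSEK describes: the payload link is rarely a plain, direct URL.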
These video tutorials are used to spread infostealer malware such as Vidar, RedLine and Raccoon, which collect passwords, credit card information, bank account numbers and other private data from infected devices. Once installed, the malware scrapes any sensitive information it finds on a victim’s device and uploads it to an attacker-controlled command-and-control server.
According to a Monday report from CloudSEK, cybercriminals use these generative artificial intelligence (AI) platforms to upload videos that appear legitimate but carry malicious links leading to infostealer malware, which can steal sensitive user data.
Infostealer malware is designed to collect personal information for illegal use, such as passwords, credit card numbers and bank account details. It scans devices for such data and stores it in a log. Furthermore, it records system details such as IP address and time zone so that bad actors can follow up with further attacks and scams more easily, according to CloudSEK.
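To illustrate how trivially accessible that system metadata is, the benign Python sketch below reads the same kinds of fields (OS, architecture, time zone) from the standard library. Collecting them requires no exploit at all, which is why stealer logs routinely include them; the function name is ours, not from the report.

```python
import platform
import time

def system_fingerprint() -> dict:
    """Collect the kind of non-secret system metadata stealer logs record."""
    return {
        "os": platform.system(),         # e.g. "Linux", "Windows"
        "os_version": platform.release(),
        "arch": platform.machine(),      # e.g. "x86_64"
        "timezone": time.tzname[0],      # local time-zone name
    }
```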
Bad actors use AI-generated personas in their videos to deceive viewers into downloading malware. These personas mimic human presenters and direct viewers to the video’s description section, which contains a link to the infostealer payload, hosted on platforms such as Discord or GitHub.
Reports indicate the campaign is concentrated on popular YouTube channels with large subscriber bases, maximizing the pool of users whose information can be stolen.
The malware can harvest data from any browser or application on an infected device. Once active, it sends everything it collects to the attackers’ command-and-control server, and cybercriminals can then use the stolen credentials to compromise victims’ accounts or pivot to further targets.
Though malicious actors are constantly developing new methods to spread malware, you can take measures to protect yourself. For instance, avoid clicking on unverified links, and download software only from official vendor sites rather than from “cracked” copies.
To stay safe, always use strong antivirus and firewall protection on your device, along with a security tool that blocks known-malicious websites; this will help prevent infostealer malware from reaching your device. Furthermore, never click on links or open attachments that you do not recognize.
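One concrete habit that complements antivirus protection: verify a downloaded installer against the SHA-256 checksum the vendor publishes on its official site. A mismatch means the file is not what the vendor released. A minimal sketch (the function names are ours):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path: str, published: str) -> bool:
    # Compare against the checksum listed on the vendor's official
    # download page; normalize case/whitespace before comparing.
    return sha256_of(path) == published.strip().lower()
```

This obviously cannot help with pirated software, which has no trustworthy published checksum in the first place; that is one more reason cracked downloads are a favored delivery vehicle.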
According to CloudSEK’s report, the videos pose as tutorials on downloading pirated versions of popular paid design software like Adobe Photoshop, Premiere Pro, Autodesk 3ds Max and AutoCAD; in reality, their links lead to infostealers such as Vidar, RedLine and Raccoon, which steal passwords, credit card information and bank account numbers.
These malicious links are typically hidden in the video description section, where viewers click them without realizing the danger. Threat actors use platforms like Synthesia and D-ID to generate AI personas with familiar, human-like faces that lend the content an air of legitimacy.
Threat actors disguise the malicious links with popular URL shorteners such as Bitly and similar services to make them look harmless and easy to share. Alternatively, they hijack popular YouTube accounts to push the malware to an even larger audience.
In addition to video-hosting sites, threat actors distribute the videos through social media platforms like Facebook, Twitter and Instagram. These tactics are not new, but their prevalence has grown significantly in recent months.
While these threats are typically aimed at consumers, CloudSEK notes they can also affect organizations that hold sensitive data on their systems.
An AhnLab ASEC analysis published in January 2023 detailed an incident in which RedLine Stealer stole the VPN credentials of a remote employee at an undisclosed company, granting the attackers access to the corporate network.
Threat actors could then use the stolen credentials for further high-impact attacks against the target organization, such as phishing campaigns, or exchange compromised banking and credit card details for cash on illicit markets.
Security professionals warn the trend is growing: however popular or polished a tutorial video looks, viewers should remain alert to the links it promotes.
CloudSEK’s report stresses that these malware-laced videos masquerade as step-by-step guides for downloading pirated copies of programs like Photoshop, Premiere Pro, Autodesk 3ds Max and others.
These malicious videos deliver infostealers that exfiltrate sensitive data from an infected device, including bank account numbers, crypto wallet data, login credentials, browser history, location data and IP addresses. F5 Labs reports that these details are then uploaded to the attackers’ command-and-control servers.
CloudSEK noted that these AI-generated videos are uploaded to YouTube by threat actors who hijack prominent YouTube accounts and use them as a vehicle for spreading malware. These accounts typically have “educated [and] active users,” according to CloudSEK’s analysis.
AI-generated personas guide viewers to the video descriptions, where links laden with information-stealing malware await, spreading the infection to anyone who clicks.
CloudSEK also observed fake comments claiming to come from the developers of the cracked software, lending the videos legitimacy and tricking victims into downloading the malicious content. Five to ten such videos are uploaded every hour, with threat actors using search engine optimization (SEO) poisoning to push them to the top of search results.
Once a user clicks one of these links, they may become infected with Raccoon, RedLine, Vidar or BlackGuard infostealers. According to F5 Labs, stolen data can then be sold or used for other cyberattacks.
The risk can never be eliminated, but it can be reduced: protect your accounts with multi-factor authentication, keep your security software up to date, and steer clear of sites offering cracked software. If a download looks suspicious, delete it immediately.