Deepfakes and Digital Propaganda: Undermining Trust

October 31, 2023

Whether you want to put new words in a politician’s mouth or make Jon Snow dance, deepfakes are the 21st century’s answer to Photoshop. Individuals and businesses alike need to consider the implications of this technology and navigate a complicated legal landscape that includes copyright and right-of-publicity law.

In an “infopocalypse,” people gravitate toward information that confirms their existing views and opinions (CBS01; FT06), and it has become easy to fake convincing video footage with cheap software and commodity graphics processing units (GRD05).

What is a deepfake?

A deepfake is a type of synthetic media that uses artificial intelligence to generate realistic images, video or audio. Deepfakes can be used for nefarious purposes or as a form of entertainment. Producing a high-quality deepfake is possible, but it requires a lot of data and processing power. The subject can be made to look like almost anyone. A popular example is the face swap, in which one person’s face is superimposed on another’s body; this technique is frequently abused in pornographic videos.

While rudimentary photo manipulation has been around for decades, advances in digital technology have made such manipulations much easier and cheaper to produce. The rise of deepfakes matters because of the impact they can have on the world, especially where world leaders are concerned: deepfakes can inflame political tensions between countries, influence election results and create chaos in financial markets. Because these impacts threaten global security and safety, laws are being developed to combat this kind of exploitation.

As the technology behind deepfakes continues to advance, it becomes more difficult to distinguish real images from false ones. However, there are a few key things to watch for when determining whether something is a deepfake. One is a lack of natural blinking or eye movement, which generators struggle to reproduce. Another is flat or mismatched facial expressions, since subtle muscle movement is hard to replicate on a computer. Finally, a deepfake may have poor sound quality or a strange, robotic tone of voice.
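To make the blink heuristic concrete, the sketch below estimates an “eye aspect ratio” from facial landmarks; a face that never blinks over hundreds of frames deserves scrutiny. This is a minimal sketch, not a production detector: it assumes dlib’s pretrained 68-point landmark model file is available locally, and the threshold value is illustrative rather than tuned.

```python
# Minimal blink-rate heuristic: real faces blink every few seconds;
# early deepfakes often do not. Assumes dlib's pretrained 68-point
# landmark model file is available locally (an assumption for this
# sketch, not something specified in the article).
import cv2
import dlib
from scipy.spatial import distance

LANDMARK_MODEL = "shape_predictor_68_face_landmarks.dat"  # hypothetical local path
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(LANDMARK_MODEL)

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply during a blink.
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path, ear_threshold=0.21):  # threshold is illustrative
    cap = cv2.VideoCapture(video_path)
    blinks, closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            # Landmarks 36-41 are the left eye, 42-47 the right eye.
            left = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
            right = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            if ear < ear_threshold:
                closed = True
            elif closed:          # eye re-opened: one completed blink
                blinks += 1
                closed = False
    cap.release()
    return blinks, frames
```

A clip of a talking head with zero blinks across a minute of footage is a strong hint, though modern generators increasingly pass this particular test.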

Many deepfakes are created for nefarious purposes, such as spreading malicious propaganda or deceiving people into voting for an unscrupulous candidate. However, they can also be used for memes or parody. For instance, comedian Jordan Peele used deepfake technology to produce a public-service video in which President Obama appears to deliver lines actually spoken by Peele, a humorous way to show how easily politicians and other prominent figures can be misrepresented online.

Unfortunately, deepfakes are also being used to attack individuals’ privacy. Cybercriminals can use this technology to steal someone’s identity by faking their voice and image, which can lead to a range of harms for the victim, including fraud, lost wages and damaged credit.

How is a deepfake made?

Deepfakes are a form of synthetic media that uses machine learning to manipulate video and photo content. They are created by algorithms that analyze a person’s facial expressions, movements and other details in order to generate footage that looks realistically like that person. The process is computationally demanding and typically requires a machine with substantial graphics processing power. The same technology can also be used to fabricate audio.

Creating a believable deepfake is difficult but not impossible. The open-source software tools needed to make one are free and relatively easy to use. The hardware requirements vary from model to model, but a consumer gaming GPU costing a few hundred to a few thousand dollars can be sufficient. The software continues to evolve as researchers improve its ability to replicate human nuance.

Many early applications of deepfakes were grotesque, including revenge porn and the digital exploitation of celebrities. In 2017, a Reddit user known as “deepfakes” began posting pornographic videos with celebrities’ faces swapped in, and the community was eventually banned. According to an analysis by AI firm Deeptrace, pornography accounted for 96% of deepfake videos found online in September 2019.

The creation of a believable, realistic deepfake requires a large number of training images. These are fed to a type of neural network called an autoencoder, which learns to compress each face into a set of core features, the position of the eyes, nose and mouth among them, and then reconstruct it. Once trained, the decoder can generate the pixels of one person’s face wearing another person’s expression.
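The classic face-swap architecture pairs one shared encoder with one decoder per identity. The sketch below is a minimal PyTorch rendering of that idea; the layer sizes and the 64×64 input resolution are illustrative assumptions, not the parameters of any particular tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
# behind classic face swaps. Train decoder A on person A's faces and
# decoder B on person B's; swapping decoders at inference maps A's
# expression onto B's face. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                      # 3x64x64 -> latent vector
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 512),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 8 * 8)
        self.net = nn.Sequential(                      # latent -> 3x64x64 face
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()                           # shared: learns expression/pose features
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training (per identity): reconstruct that person's faces only, e.g.
#   loss = F.mse_loss(decoder_a(encoder(batch_a)), batch_a)
# At inference, the swap is simply a decoder substitution:
face_of_a = torch.rand(1, 3, 64, 64)          # stand-in for a real cropped frame
swapped = decoder_b(encoder(face_of_a))       # A's expression, B's face
```

Because the encoder is shared, it is forced to learn identity-agnostic features such as pose and expression, which is precisely what makes the decoder substitution work.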

These generated pixels are then matched to the target video clip and the face is mapped onto the video image. Various adjustments are made so the result looks authentic, such as correcting the size, brightness and texture of the skin at the seams. Audio is typically handled separately: voice-cloning or text-to-speech models trained on recordings of the target can generate a matching soundtrack.
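The blending step can be as simple as cloning the generated face patch onto the target frame with Poisson blending, so that lighting and skin tone transition smoothly across the seam. A minimal sketch using OpenCV follows; the file names and the elliptical mask geometry are illustrative assumptions.

```python
# Minimal face-blending sketch: paste a generated face patch onto a
# target frame with Poisson (seamless) blending. File names and the
# elliptical mask are illustrative assumptions for this sketch.
import cv2
import numpy as np

frame = cv2.imread("target_frame.png")     # hypothetical target video frame
face = cv2.imread("generated_face.png")    # hypothetical decoder output patch

h, w = face.shape[:2]
mask = np.zeros((h, w), dtype=np.uint8)
cv2.ellipse(mask, (w // 2, h // 2), (w // 2 - 4, h // 2 - 4), 0, 0, 360, 255, -1)

center = (frame.shape[1] // 2, frame.shape[0] // 2)   # where the face lands
blended = cv2.seamlessClone(face, frame, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended_frame.png", blended)
```

In a real pipeline the landing point comes from a face tracker rather than the frame centre, and the mask follows the detected facial contour.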

The resulting video can then be shared on social media or used for other purposes. The same underlying technology is also being turned toward defense: companies and organizations use it to verify the identity of people in videos or photos and to detect whether media have been manipulated. Some organizations are exploring blockchain-style provenance records that register media at the moment of publication, while others have developed software that flags the telltale signs of a deepfake and alerts users.
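The core of any provenance scheme, blockchain-backed or not, is a tamper-evident fingerprint of the media file recorded at publish time. The sketch below shows that core idea; the in-memory dictionary stands in for a real ledger, an assumption made purely for illustration.

```python
# Minimal media-provenance sketch: fingerprint a file at publish time,
# then verify later copies against the registered hash. The in-memory
# dict stands in for a real ledger or blockchain (an assumption).
import hashlib

registry = {}  # media_id -> SHA-256 hex digest, recorded at publish time

def fingerprint(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def register(media_id: str, path: str) -> None:
    registry[media_id] = fingerprint(path)

def verify(media_id: str, path: str) -> bool:
    # Any re-encode or pixel-level edit changes the digest entirely.
    return registry.get(media_id) == fingerprint(path)
```

A cryptographic hash only proves a copy is bit-identical to the registered original; matching re-encoded or resized copies requires perceptual hashing, which trades that certainty for robustness.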

What is the impact of a deepfake?

The rapid development of artificial intelligence and associated tools enables novel forms of deception. While this has been widely discussed in the context of political disinformation, it can also facilitate financial harm.

The most prominent examples involve videos on YouTube and TikTok manipulated to appear as though they come from world leaders or other high-profile individuals. These deepfakes can cause a great deal of damage.

In many cases, they can violate people’s privacy by showing them in compromising situations without their consent. They can also be used to instigate and spread political disinformation, which has the potential to disrupt financial markets and even destabilize international relations.

Moreover, they can be used to commit fraud and extortion. For example, if a cybercriminal created a video that appeared to show an executive of a large company admitting to financial crimes, it could do significant damage to the organization’s brand reputation and share price. Even though the clip is fake, disproving it convincingly takes time and effort, and the harm may be done before the truth catches up.

As the technology improves, it will become easier to create high-quality deepfakes, and with that, the number of such attacks is likely to increase. This is why Europol recommends that businesses, governments and individuals understand the risks of deepfakes and be prepared to respond.

To combat the rise of these threats, a number of steps have been taken. For example, Facebook has banned deepfakes that mislead viewers into thinking that someone “said words they did not actually say” and YouTube has imposed new community guidelines to prevent these types of false images and videos. These policies are still not strong enough to stop all deepfakes, however.

As these technologies become more powerful and cheaper to use, bad actors are likely to adopt them to achieve their goals. Ultimately, uptake will be determined by how cost-effective they are compared with deception techniques that do not rely on AI: the payoff will be weighed against the investment required to create and deploy them.

What can we do about a deepfake?

For now, identifying a deepfake requires a high degree of knowledge and a keen eye. There are telltale visual clues to look for, from fuzzy or unnatural-looking facial features to disproportionate ears and eyes, too-smooth skin, and lighting anomalies. However, as deepfake technology improves, these “tells” are becoming harder to spot.

But even as these technologies continue to advance, malicious actors will likely exploit them in new ways. They may use them to deceive their audiences on a large scale, or they may target specific individuals or groups in particular for humiliation and blackmail. They could also be used to undermine companies by presenting fake evidence of their leaders’ misdeeds, potentially sending stock prices into a tailspin.

In all of these scenarios, the underlying harm is the same. People lose trust in the institutions they rely on when they are lied to or misled. And that’s what makes these kinds of attacks so dangerous.

There is no single solution to stopping bad actors from leveraging these advanced technologies to spread misinformation and cause harm, but there are important steps that can be taken. First, we should make it easier for platforms to share information about deepfakes with one another and with news agencies. This would help them spot and respond to fakes quickly, limiting the damage they can do.

Second, we should encourage research and development of technologies to better detect deepfakes. There is already a lot of work being done in this area, but it needs to be accelerated. Startups such as Sensity have developed detection tools that are akin to antivirus software for deepfakes and can alert users when they’re watching something that bears the telltale fingerprints of artificially generated synthetic media.
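Under the hood, such detectors often reduce to a frame-level binary classifier. The sketch below shows the shape of that approach in PyTorch, assuming a ResNet fine-tuned on labeled real/fake face crops; it illustrates the general technique, not Sensity’s actual product, and the checkpoint path is hypothetical.

```python
# Frame-level deepfake classification sketch: a standard image backbone
# with a two-class head, fine-tuned on labeled real/fake face crops.
# This illustrates the general approach, not any vendor's product;
# the checkpoint path below is a hypothetical assumption.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)   # [real, fake] logits
# model.load_state_dict(torch.load("deepfake_detector.pt"))  # hypothetical weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(image_path: str) -> float:
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()   # probability the frame is synthetic

# Averaging scores over frames sampled from a video gives a clip-level verdict.
```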

Finally, we should support policies that hold the creators and distributors of malicious deepfakes accountable, and that help victims respond. If a company is targeted by a fake video, it must issue a prompt and credible denial to make clear the footage is not authentic; otherwise its brand and reputation can be lastingly damaged.

Scenarios like these vary in scope and impact, but they share a common core: bad actors produce synthetic media, distribute it to a target audience, and that audience either believes the media or takes action based on it. The resulting harm can range from embarrassment and blackmail to international conflict and political upheaval.

Ammar Fakhruddin

ABOUT AUTHOR

Ammar brings 18 years of experience in strategic solutions and product development across public sector, oil & gas and healthcare organizations. He loves solving complex real-world business and data problems with leading-edge solutions that are cost-effective and improve customer and employee experience. At Propelex he focuses on helping businesses achieve digital excellence using smart data and cybersecurity solutions.

