Preparing Businesses for AI-Powered Security Threats

November 27, 2023

Staying ahead of evolving cybersecurity challenges takes proactive strategies and advanced technologies. When AI goes wrong, the repercussions can be devastating, ranging from the loss of life if an AI medical algorithm is flawed to corporate reputational damage and reduced consumer trust if an AI tool misbehaves.

A strong risk-prioritization plan anchored in legal guidance and technical best practices is critical to help prevent these risks from arising and mitigate them quickly when they do.

1. Know What You Are Trying To Do

When AI deployments go wrong, they can create a wide array of consequences — from the loss of customers and revenue to regulatory backlash, lawsuits, and diminished public trust. In many cases, the unintended consequences can be traced back to a failure to recognize, understand, and manage key risks. To avoid them, organizations must implement concrete, dynamic frameworks that allow them to move from cataloging risks to systematically rooting them out.

The rigor this demands far exceeds prevailing norms in most businesses, especially those that still rely heavily on people alone for security and risk management. It calls for a holistic approach that includes leaders in the C-suite and across the organization, along with experts in legal and risk, data science, and IT. It also demands an ongoing effort to build pattern-recognition capabilities and engage the organization’s broader workforce.

One of the key problems is a lack of transparency over the complexity and security requirements of each AI model and the environment in which it is deployed. When models are not clearly documented and accessible, it can be difficult to understand how they work, assess them for risks, and test them against existing policies and standards.
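One lightweight way to restore that transparency is a standard, machine-readable record for every deployed model that risk, legal, and data science teams can all read. The sketch below is a minimal illustration in Python; the field names and example values are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelRecord:
    """Minimal documentation record for one deployed AI model (illustrative fields only)."""
    name: str
    version: str
    owner: str                        # team accountable for the model
    intended_use: str                 # what the model is supposed to do
    training_data_sources: list[str] = field(default_factory=list)
    deployment_environment: str = ""  # e.g. "customer-facing web API"
    security_requirements: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example entry; all values are placeholders.
record = ModelRecord(
    name="mortgage-prescreen",
    version="2.1.0",
    owner="credit-risk-ds",
    intended_use="Rank applications for manual underwriting review",
    training_data_sources=["applications_2019_2023"],
    deployment_environment="internal batch scoring",
    security_requirements=["PII encrypted at rest", "access logged"],
    known_limitations=["not validated for self-employed applicants"],
)

# Serialize so risk and legal teams can review and audit the same artifact.
print(json.dumps(asdict(record), indent=2))
```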

Another problem is that AI models can be vulnerable to attacks ranging from hacking and malware to more subtle techniques like “data poisoning,” in which attackers inject malicious inputs into the model’s training set. This can lead to erroneous outputs or even catastrophic failures, such as a military AI system acting on false information planted by adversaries.
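Defenses against data poisoning are an active research area, but even a crude screen can surface obviously suspect training examples. The sketch below, which assumes scikit-learn is available, flags points whose label disagrees with most of their nearest neighbors; it illustrates the idea of a pre-training sanity check, not a robust defense.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_suspect_labels(X, y, k=5, disagreement=0.8):
    """Flag training points whose label disagrees with most of their neighbors.

    A crude screen for label-flipping poisoning: a clean point usually shares
    its label with nearby points, so heavy disagreement is worth a manual look.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]  # drop the self-match
    neighbor_labels = y[idx]                               # shape (n, k)
    mismatch = (neighbor_labels != y[:, None]).mean(axis=1)
    return np.where(mismatch >= disagreement)[0]

# Toy example: two clean clusters plus one deliberately mislabeled point.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[10] = 1  # simulated poisoned label
print(flag_suspect_labels(X, y))  # index 10 should be among the suspects
```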

In addition, it’s possible to inadvertently encode bias in the models themselves or in the data feeding them. That bias can harm, or threaten to harm, protected classes and groups, exposing companies to liability. In some cases this is easy to spot: for example, when AI-powered mortgage decision systems systematically disadvantage applicants from the same zip code or income bracket, or when AI-powered marketing tools deliver highly targeted ads that discriminate against certain consumers.
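A simple pre-deployment check along these lines is to compare outcome rates across the groups a model might disadvantage. The sketch below uses made-up column names and an arbitrary threshold purely to illustrate the kind of signal a review should surface.

```python
import pandas as pd

# Hypothetical decision log: one row per applicant, with the model's decision
# and the attribute being audited (here a coarse zip-code band).
decisions = pd.DataFrame({
    "zip_band": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   0],
})

rates = decisions.groupby("zip_band")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"approval-rate gap across groups: {gap:.2f}")

# A large gap does not prove unlawful discrimination, but it is the kind of
# signal a tech trust team should investigate before deployment.
if gap > 0.2:  # threshold chosen arbitrarily for illustration, not a legal standard
    print("Flag for fairness review")
```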

It’s also essential to recognize that laws, regulations, and business practices in different regions and industries can impact how an AI model performs or is interpreted. Tech trust teams must be able to identify and prioritize the specific negative events an AI model may produce based on these factors, and then describe how those risks will be addressed in accordance with relevant standards.

2. Don’t Forget About Humans

The first step in preparing for AI-powered security threats is to accept that not every risk can be prevented, but none can simply be ignored. The best approach is to get business-minded legal and risk management teams involved at the start of the process, alongside data science. This enables them to function as a tech trust team, helping to ensure that the model meets social norms and legal requirements while delivering maximum business value.

Once a complete catalog of risks has been created, it is crucial to prioritize them. This allows organizations to prevent AI liabilities from arising, and mitigate them quickly if they do. For example, a company that uses an AI model to detect migratory herds on the road might discover that the AI system could trigger accidents that result in injury to drivers or damage to vehicles. This is a potential liability because it breaches contractual guarantees and, in extreme cases, could threaten human safety.
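How the prioritization is scored matters less than doing it consistently. A minimal sketch, assuming simple 1-to-5 likelihood and impact scales (the risk names and numbers below are invented), might rank a catalog like this:

```python
# Minimal sketch: rank a catalog of AI risks by likelihood x impact so that
# the most severe items (e.g. anything touching human safety) surface first.
risks = [
    {"name": "Missed herd detection causes collision", "likelihood": 2, "impact": 5},
    {"name": "Biased outputs harm protected class",    "likelihood": 3, "impact": 4},
    {"name": "Training-data poisoning",                "likelihood": 2, "impact": 4},
    {"name": "Model drift degrades accuracy",          "likelihood": 4, "impact": 2},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]  # 1-5 scales, chosen arbitrarily

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["name"]}')
```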

Another common risk is the accidental or intentional encoding of bias in the model. This can be caused by the selection of training data, or it might be the result of a flaw in the design or deployment of an AI system. It is a potentially devastating liability because it violates existing laws and consumer expectations and can expose the company to significant fines, reputational damage and loss of customer loyalty.

A good way to reduce these risks is to develop standard policies and documentation for all steps in the AI model development life cycle. This enables the risk team to spot potential areas of concern more easily and provides a foundation of transparency for future reviews and audits. For example, consistent documentation of models lets the risk team compare apples-to-apples when reviewing a new model against past versions. It also lets the risk team spot when an existing model has changed and determine whether those changes might introduce new risks. And because no human team can keep up with the volume of alerts and data that floods SOCs daily, automation is a welcome addition for understaffed risk and security functions.
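As an illustration of that apples-to-apples comparison, a small sketch like the following (assuming each model version is documented as a simple dictionary of fields) can surface exactly which documented facts changed between versions and therefore need a fresh risk review:

```python
def diff_model_records(old: dict, new: dict) -> dict:
    """Return the fields whose documented value changed between model versions.

    Consistent documentation makes this diff meaningful: any changed field
    (new data source, new deployment environment, relaxed security control)
    is a prompt for the risk team to re-check the model before release.
    """
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

# Hypothetical records for two versions of the same model.
v1 = {"version": "2.0.0", "training_data_sources": ["applications_2019_2022"]}
v2 = {"version": "2.1.0", "training_data_sources": ["applications_2019_2023"]}
print(diff_model_records(v1, v2))
```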

3. Don’t Forget About Security

When it comes to protecting against AI risks, a business cannot simply turn a blind eye. Relying on people, especially understaffed security teams, to keep pace with the volume of data and alerts is not a sustainable approach, as any cybersecurity professional will tell you.

That’s why companies need to make a commitment to identify and mitigate the risks of their AI deployments. The organizations that recognize and manage these risks will be the ones that get the most value from their investments in AI.

A key element of this work is understanding the environments in which an AI model operates. This requires a standard set of practices that allows data science, legal, and risk teams to understand important aspects of the AI environment, such as where models are deployed, how they connect to data sources, and other contextual factors. Standardized policies for steps in the development process, such as recording model provenance, managing metadata, mapping data, and creating model inventories, will allow all teams to create an apples-to-apples view of the AI environment so that they can more easily surface areas of potential risk.
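What such an inventory looks like will vary by organization; the sketch below is only an illustration of the idea, with invented model names and fields, showing how a shared structure lets any team ask the same risk questions of every deployment.

```python
# Illustrative inventory mapping each model to where it runs and what it reads.
# A shared structure like this lets data science, legal, and risk teams ask
# the same questions of every deployment ("apples to apples").
inventory = {
    "mortgage-prescreen:2.1.0": {
        "environment": "internal batch scoring",
        "data_sources": ["applications_db", "credit_bureau_feed"],
        "provenance": {"trained_by": "credit-risk-ds", "trained_on": "2024-03-02"},
    },
    "herd-detector:1.4.2": {
        "environment": "on-vehicle edge device",
        "data_sources": ["front_camera_stream"],
        "provenance": {"trained_by": "vision-team", "trained_on": "2023-11-15"},
    },
}

# Example risk query: which models read from an externally sourced data feed?
external = [m for m, e in inventory.items() if "credit_bureau_feed" in e["data_sources"]]
print(external)
```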

Organizations must also understand the range of harm that an AI security failure could cause, from revenue losses and consumer backlash to loss of reputation and national security risks. This catalog of potential negative events will help prioritize mitigation efforts. A clear and focused catalog of specific risks will also help the tech trust team create a comprehensive roadmap for how each of these risks will be addressed.

The repercussions can be disastrous, ranging from the loss of human life if an AI medical algorithm fails to correctly diagnose or treat a patient, to compromised national security when adversaries feed false information into military AI systems. These threats are not only real but potentially very damaging to businesses, causing a wide array of problems, including public backlash, revenue losses, and regulatory fines.

The introduction of RAMP as a common technical instrument for reliably tracing and authenticating the origins of AI content would help address some of these issues. It would do so by shifting the burden of AI traffic analysis away from Internet Service Providers’ internal technologies and enabling a more consistent approach to policymaking, empowering regulators, consumers, and users to distinguish AI-generated content from non-AI content online.
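The details of RAMP are beyond the scope of this piece, and the sketch below is not its mechanism. It is only a generic illustration, using Python's standard hmac module, of the underlying idea: content tagged with a verifiable origin signature can later be authenticated, and any alteration breaks the tag.

```python
import hashlib
import hmac

# Placeholder key; in a real provenance scheme this would be managed by the
# generating service, not hard-coded.
SECRET_KEY = b"demo-key-held-by-the-generating-service"

def tag_content(content: str) -> str:
    """Produce an origin tag for a piece of generated content."""
    return hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()

def verify_content(content: str, tag: str) -> bool:
    """Check that the content still matches the tag it was issued with."""
    return hmac.compare_digest(tag_content(content), tag)

text = "Example AI-generated paragraph."
tag = tag_content(text)
print(verify_content(text, tag))              # True: origin tag checks out
print(verify_content(text + " edited", tag))  # False: content no longer matches
```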

4. Don’t Forget About Data

As any security professional will tell you, it is impossible to keep up with the volume of alerts and data coming into SOCs on a daily basis. That’s why many are welcoming automation and AI to help with the heavy lifting. Security professionals widely report that using automation and AI reduces the time it takes to perform key tasks such as incident analysis, landscape assessment, threat detection, and response.
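Much of that heavy lifting is mundane: deduplicating repeated alerts and ranking what remains so analysts see the worst items first. The sketch below uses invented alert fields and a deliberately simple severity-then-volume ordering to illustrate the pattern.

```python
from collections import Counter

# Hypothetical raw alert feed (host, triggering rule, severity on a 1-5 scale).
alerts = [
    {"host": "web-01", "rule": "failed_login_burst",  "severity": 3},
    {"host": "web-01", "rule": "failed_login_burst",  "severity": 3},
    {"host": "db-02",  "rule": "unusual_data_egress", "severity": 5},
    {"host": "web-01", "rule": "failed_login_burst",  "severity": 3},
]

# Deduplicate identical (host, rule) pairs, count repetitions, then rank by
# severity first and repetition count second.
counts = Counter((a["host"], a["rule"]) for a in alerts)
severity = {(a["host"], a["rule"]): a["severity"] for a in alerts}
triaged = sorted(counts.items(), key=lambda kv: (severity[kv[0]], kv[1]), reverse=True)

for (host, rule), n in triaged:
    print(f"sev={severity[(host, rule)]} x{n}  {host}: {rule}")
```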

While leveraging AI is a great idea, it is crucial to be aware of the risks associated with this technology. These risks can include legal, operational, and technology issues. Creating an effective AI risk management strategy requires the involvement of business-minded legal and IT teams alongside the data science team during the initial design process. This helps to ensure that the AI models adhere to social norms and legal requirements and are able to deliver maximum business value. It also enables the team to function as an “AI trust team” that reviews and mitigates unintended consequences of each new AI deployment, such as privacy violations, cybersecurity breaches, and unfairness.

When it comes to legal issues, AI applications may violate existing laws by collecting, storing, or using personal information without consent; they may also breach contracts with customers or fail to disclose risks. These issues can lead to lawsuits and regulatory action. Operationally, AI systems can be hacked by disgruntled workers or external adversaries, and such incidents are particularly dangerous because of their potential to affect public safety and reputation.

Moreover, it is possible to inadvertently encode bias into the data feeding AI models, which can expose companies to fairness liabilities, especially if the bias negatively impacts particular populations. Additionally, AI systems that are not designed to work offline can be left vulnerable to attacks by cybercriminals who exploit unpatched vulnerabilities.

In order to protect against these risks, businesses need to put in place a strong legal framework and robust policies, procedures, worker training, contingency plans, and security controls. These include ensuring that the AI team is aware of the underlying assumptions of each model and that the team is assessing these risks continuously. It is also critical to have a clear methodology for prioritizing and sequencing the risks so that organizations can quickly address show-stopping problems before they occur.

Ammar Fakhruddin

ABOUT AUTHOR

Ammar brings 18 years of experience in strategic solutions and product development across Public Sector, Oil & Gas, and Healthcare organizations. He loves solving complex real-world business and data problems with leading-edge solutions that are cost effective and improve customer and employee experience. At Propelex he focuses on helping businesses achieve digital excellence using Smart Data & Cybersecurity solutions.

