Navigating the connected world: strategies for evaluating risk and enhancing cybersecurity in the era of interconnected devices.

By 2025, the world is projected to be storing 200 zettabytes of data, coming from IT infrastructures, utility infrastructures, personal computers and devices, the IoT, and even cars, trains and airplanes.
Rogue actors are attacking this new infrastructure, from terrorists and organized-crime groups stealing personal information to cyber jihadists and nation states seeking to disrupt or extort critical, technology-enabled services. These threats feel both wholly new and eerily familiar.
The Internet of Things
The Internet of Things, or IoT, connects physical objects to the digital world. It uses a variety of technologies, including sensors that monitor changes in the environment, actuators that receive signals from the sensors and act on them, and connectivity networks such as Ethernet, Wi-Fi, or cellular to allow physical objects to be monitored and controlled. Examples include smart thermostats, smart lighting systems, or even fitness trackers.
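As a concrete illustration of the sensor-to-actuator loop described above, here is a minimal Python sketch of a smart thermostat. The class and parameter names are invented for the example, not a real device API.

```python
# Minimal sketch of the IoT control pattern: a sensor reads the
# environment, and an actuator reacts to the reading.
# All names here (Thermostat, target_c, etc.) are illustrative.

class Thermostat:
    def __init__(self, target_c: float, hysteresis: float = 0.5):
        self.target_c = target_c
        self.hysteresis = hysteresis
        self.heater_on = False  # actuator state

    def on_reading(self, temp_c: float) -> bool:
        """Sensor callback: switch the heater around the setpoint."""
        if temp_c < self.target_c - self.hysteresis:
            self.heater_on = True
        elif temp_c > self.target_c + self.hysteresis:
            self.heater_on = False
        return self.heater_on

t = Thermostat(target_c=21.0)
print(t.on_reading(19.8))  # True: below the band, heater turns on
print(t.on_reading(21.7))  # False: above the band, heater turns off
```

The hysteresis band keeps the actuator from flapping on and off around the setpoint, a common pattern in real control loops.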
IoT devices are gaining popularity because of the value they provide to people and businesses. In healthcare, IoT is improving care through medical device integration and data analytics. In manufacturing, it has boosted productivity by allowing industrial equipment to be monitored and controlled remotely. It is also spreading in agriculture, where sensor data helps farmers increase yields and cut expenses.
However, the IoT is not without its risks. These concerns revolve around the potential for IoT-related exploitation of sensitive information, such as health and financial data, as well as security vulnerabilities in the design and implementation of connected devices. These risks can be magnified by the rapid rate of change in IoT technology, often outpacing the ability of the associated policy and legal structures to adapt.
A growing number of companies are working to address these concerns through a range of initiatives, from building security into the design of new IoT devices, to improving the cybersecurity of existing ones, to developing tools that can detect IoT-related threats and respond to them effectively. In addition, regulatory bodies and standards organizations are stepping in to define a set of security principles for IoT devices.
The full potential of the IoT depends on strategies that respect individual privacy choices across a broad spectrum of expectations. This requires a careful balance of business value, consumer benefits, and social impacts. IoT devices need to be secure, reliable, and trusted. This can be achieved through network segmentation, zero trust, and patching, among other measures. These strategies are especially important in emerging markets, where the benefits of the IoT can be more profound but the infrastructure to support it is less developed.
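To make one of these measures concrete, here is a minimal Python sketch of the deny-by-default flow check at the heart of network segmentation and zero trust: traffic is permitted only for authenticated devices on explicitly listed flows. The device names, segments, and function are all hypothetical.

```python
# Illustrative deny-by-default policy check in the spirit of zero trust:
# every request is evaluated against an explicit allow-list of
# (device, destination) pairs; anything unlisted is refused.
# The flows below are made up for the example.

ALLOWED_FLOWS = {
    ("camera-01", "video-archive"),
    ("thermostat-02", "hvac-controller"),
}

def is_allowed(device: str, destination: str, authenticated: bool) -> bool:
    """Permit traffic only for authenticated devices on allowed flows."""
    return authenticated and (device, destination) in ALLOWED_FLOWS

print(is_allowed("camera-01", "video-archive", authenticated=True))   # True
print(is_allowed("camera-01", "billing-db", authenticated=True))      # False: wrong segment
print(is_allowed("camera-01", "video-archive", authenticated=False))  # False: identity not verified
```

The point of the pattern is that nothing is trusted by virtue of being on the network: both identity and an explicit policy entry are required for every flow.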
Cyberattacks are costing businesses more each year. Cybersecurity Ventures projected that global cybercrime damages would reach $6 trillion annually by 2021, greater than the damage inflicted by natural disasters in a year and more profitable than the global trade in all major illegal drugs combined. C-suite leaders should be paying attention to this growing threat, as the long-tail costs of a data breach can be devastating.
In addition to the obvious monetary costs, such as lost productivity and revenue from system downtime, there’s also the damage to brands and reputations. A recent survey found that 68% of respondents believe their brand’s image has been damaged by a cyberattack.
It’s no wonder, then, that the FBI’s Herb Stapleton has placed cybersecurity high on his list of priorities. Ransomware attacks remain a particular worry. As the FBI explains, these malicious software attacks can lock people out of their own data and even disable critical services such as 911 dispatch, hospitals and first responders. The consequences can be serious, up to and including death: in 2020, a ransomware attack halted the IT systems of a hospital in Düsseldorf, Germany, and was linked to the death of a patient who had to be rerouted to another facility.
As the federal government works to improve security, it needs to incentivize companies to take proactive steps to mitigate cyberattacks. This includes establishing enforceable expectations for businesses to reduce their risk, for example through agreements with agencies such as CISA and the FBI that outline how companies will cooperate in cyberattack investigations. It’s also important for the administration to increase federal involvement in overseeing the nation’s critical infrastructure providers to ensure high levels of coordination and oversight.
As the world becomes increasingly connected, we must reassess the value of personal privacy. This should include a thorough review of the rights and responsibilities individuals have regarding the data collected by technology companies and the ways that information is used. A comprehensive discussion of these issues will allow individuals to make informed decisions about how they use and share their data, and help them navigate the challenges and opportunities of our digital future. It will be a journey worth taking.
Data privacy is the ability to determine how, why and when personal information is collected, stored or shared. It’s a basic right that people need in order to feel safe in their daily lives. The rise of technology, however, has created a new set of challenges: it is not uncommon for companies to share personal information with third parties, a practice that puts consumers at risk of identity theft and other harms. The good news is that in recent years regulators have begun to crack down on data breaches, and businesses that fail to protect their customers’ data now face hefty fines and lost business.
Data breaches have made many Americans lose trust in companies that collect their personal information. In fact, a study found that 84% of US adults believe that companies do not treat their personal information with the same level of security and protection as they do financial information or health records.
Many states have also stepped in to address this issue. In 2023, a wave of new state-level privacy laws took effect, including the California Privacy Rights Act (CPRA), the Virginia Consumer Data Protection Act (VCDPA) and similar statutes in Colorado, Connecticut and Utah. At the federal level, the proposed American Data Privacy and Protection Act (ADPPA) has advanced in Congress, and the FTC is stepping up its privacy enforcement actions.
The US has a long history of privacy leadership, starting with the Supreme Court’s recognition that the Constitution confers a right to privacy against certain forms of government surveillance. Since then, the US has been a pioneer in developing legislation and practices that protect data privacy. However, recent technological advances have posed risks that were unforeseen by legal standards. As a result, these legal rules may be failing to protect data privacy. New approaches based on both legal and scientific standards can be better suited to address these emerging threats.
AI is already transforming the way many industries work, but some are warning that it poses significant dangers, partly because it is difficult to understand how AI systems reach their outputs and therefore to predict their future consequences. Some experts are starting to take action: an open letter signed by leading artificial intelligence researchers and industry figures has called on governments to regulate the technology.
This push is supported by the OECD, which has published guidelines on developing and using AI systems in ways that respect human rights. The guidelines include a requirement that decisions based on AI be transparent and explainable, and recommend that regulators allow AI systems posing unacceptable risks to be deployed only under exceptional circumstances.
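To illustrate what a transparent, explainable automated decision might look like in practice, here is a toy Python sketch of a linear scoring model that returns its per-feature contributions alongside the outcome. The weights, features and threshold are invented for the example.

```python
# Toy "explainable" automated decision: a linear score whose
# per-feature contributions can be reported with the outcome.
# Weights, features, and threshold are invented for illustration.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def decide(applicant: dict) -> tuple[bool, dict]:
    """Return the decision plus the contribution of each feature."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions

ok, why = decide({"income": 4.0, "debt": 1.0, "years_employed": 2.0})
print(ok)   # True: 2.0 - 0.8 + 0.6 = 1.8 >= 1.0
print(why)  # the explanation that can be shown to the person affected
```

Because every contribution is visible, a person affected by the decision can see exactly which inputs drove it, which is the kind of transparency the guidelines ask for.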
Some of the biggest risks posed by AI are subtler than scary machines that spy on us or kill at scale. Companies are experimenting with generative AI in everyday products, from marketing copy and product design to voice-controlled shopping assistants, while Google uses machine learning to improve search results and to cut cooling energy in its data centres.
But even in industries where the benefits of AI are clear, the risk is real. Some of the more troubling applications involve healthcare, where biases in algorithms can have life-and-death implications. Machine learning is being used to set postoperative pain management plans and to guide vaccine development and treatment options. Yet marginalized communities are more likely to be misdiagnosed or to receive less effective treatments because they are underrepresented in the datasets these algorithms are trained on.
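A simple way to surface the underrepresentation problem described above is to compare each group's share of the training data with its error rate. The sketch below uses synthetic records and invented group labels, not real clinical data.

```python
# Basic representation check on a training set: compare each group's
# share of the data with the model's error rate for that group.
# The records are synthetic, for illustration only.

from collections import Counter

records = [  # (group, model_was_wrong)
    ("A", False), ("A", False), ("A", True), ("A", False),
    ("A", False), ("A", False), ("A", False), ("A", True),
    ("B", True), ("B", False),
]

counts = Counter(g for g, _ in records)
share = {g: counts[g] / len(records) for g in counts}
error_rate = {
    g: sum(wrong for grp, wrong in records if grp == g) / counts[g]
    for g in counts
}
print(share)       # {'A': 0.8, 'B': 0.2}: group B is underrepresented
print(error_rate)  # {'A': 0.25, 'B': 0.5}: and its error rate is higher
```

Even this crude audit makes the pattern visible: the group with the smallest share of the data is the one the model fails most often, which is exactly the dynamic that disadvantages marginalized patients.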
The risks of AI are real, and if they’re not addressed now, it could be too late. Some tech leaders may even be amplifying the doomsday narrative because they stand to gain from regulatory barriers that, as venture capitalist Marc Andreessen warns, would create “a cartel of government-blessed AI vendors protected from startup and open source competition.” Either way, the world needs a better approach to the problem.