We Won’t Be Able to Regulate AI.

Don’t get me wrong, we are trying to regulate AI, and we will keep trying, but it won’t stop what is coming.

The EU’s approach to AI regulation, embodied in its AI Act, is an effort to manage the risks and opportunities posed by artificial intelligence. Not to be outdone, the United States followed up with an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Both are good attempts at encapsulating the problem.

These are the right and reasonable ideas for approaching AI, but they are utterly naive. Governments have to do something, but what they are publicly considering right now won’t work.

Assessing the effectiveness of regulation and legislation against “bad actors” means considering how these rules affect the broader landscape of AI development and usage, especially among those who will not adhere to them. Here are the key points to consider:

  • The Incentive to Gain AI Superiority: If a person could build the ultimate competitive advantage, one that would place a country, company, or other pursuit at the top of what people do, potentially forever, what risks would people be willing to take? History teaches us some unfortunate lessons about what people are willing to do.
  • Unlike Anything We’ve Come Up With: Quite simply, we don’t know what we are getting ourselves into. The technology puzzles even the most advanced among us. How can we expect to regulate something we barely understand?
  • Challenges with Global Enforcement: These regulations are limited to the countries, companies, and individuals who play by the rules. Bad actors are not going to be subject to the same rules, potentially creating major security risks, not to mention a competitive disadvantage.
  • Potential Talent Migration: Stringent regulations might drive AI talent and investment to more lenient jurisdictions. Bad actors, or even legitimate actors seeking less regulatory burden, might relocate their AI development efforts outside the regulated areas.
  • Security Risks: In areas like defense and cybersecurity, adhering to stringent AI regulations might limit the development of advanced AI systems that bad actors are freely developing. This poses an unacceptable security risk.
  • Dependence on External AI Technologies: Strict regulations might lead to dependence on non-regulated AI technologies that evolve faster. This could create a strategic vulnerability.

Unfortunately, in a global context, not all players follow the same rules.


We Can’t Stop Malevolent AI from Being Developed with Regulation

As we look into what can stop malevolent AI, we have to examine what it takes to create one, and what obstacles stand in the way. These obstacles, known as barriers to entry, prevent new competitors from easily entering an industry or area of development. They can include high startup costs, stringent regulations, strong brand loyalty for existing companies, or the need for specialized technology or expertise. Barriers to entry can significantly shape the level of competition and the dynamics within a market, often favoring established players over new entrants. Usually.

The creation of malevolent AI faces surprisingly low barriers to entry. This situation poses significant risks, as it enables a wide range of actors with varying intentions and resources to develop AI tools for harmful purposes. Below are the key factors contributing to these low barriers:

  1. Technological Democratization and Accessibility: The democratization of AI technology means that advanced tools and frameworks are now widely available. Open-source platforms, free educational resources, and affordable hardware have made it easier than ever to develop sophisticated AI systems (see the sketch after this list). This accessibility allows not just reputable organizations but also individuals with malevolent intentions to create and deploy AI for harmful purposes.
  2. Rapid Advancement in AI Capabilities: The field of AI is evolving at an unprecedented pace. With each advancement, the potential for creating more powerful and potentially dangerous AI increases. The rapid development cycle of AI technologies makes it challenging for regulatory frameworks to keep up, creating a window of opportunity for those seeking to create malevolent AI.
  3. Global Spread of Knowledge and Resources: AI knowledge and resources are not confined by geographical boundaries. The internet has enabled a global spread of information, allowing individuals and groups from any part of the world to access cutting-edge AI technologies and knowledge. This global spread contributes to a diverse and widespread pool of individuals capable of creating malevolent AI.
  4. Anonymity and Obfuscation: The digital nature of AI development allows for a high degree of anonymity. Developers of malevolent AI can operate in secrecy, using encrypted communications and anonymous networks to hide their identities and intentions. This anonymity makes it difficult to track and regulate the development of harmful AI applications.
  5. Dual-Use Nature of AI Technologies: Many AI technologies have dual-use potential, meaning they can be used for both beneficial and harmful purposes. For instance, an AI model designed for data analysis can be repurposed to conduct surveillance or create deepfakes. This dual-use nature complicates the regulation of AI technologies, as it is not always clear when and how a benign technology might be transformed into a tool for malevolent purposes.
  6. Collaborative Development and Crowd-Sourcing: The collaborative nature of AI development, often involving open-source projects and crowd-sourced solutions, can unintentionally aid in the creation of malevolent AI. Individuals with harmful intentions can leverage these collaborative platforms to improve their AI capabilities or to disguise their malevolent projects as legitimate research.
  7. Exploitation of AI for Cyber Warfare and Crime: The use of AI in cyber warfare and criminal activities presents a significant threat. AI can be used to automate hacking attacks, create sophisticated phishing schemes, and disrupt critical infrastructure. The low cost and high efficiency of AI-driven cyberattacks make them an attractive tool for state and non-state actors looking to conduct malicious activities.
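To make the democratization point concrete, consider how little it now takes to run a capable open-source model locally. The sketch below is illustrative, assuming only the freely available Hugging Face transformers library and a small public model; the same few lines work with far more capable open-weight models.

```python
# A minimal sketch of the low barrier to entry: a few lines of Python,
# consumer hardware, and a freely downloadable open-source model.
# Assumes `pip install transformers torch`; the model choice here is
# illustrative, not a claim about any particular checkpoint.
from transformers import pipeline

# Downloads a publicly available text-generation model on first run.
generator = pipeline("text-generation", model="gpt2")

# Generate text from a prompt. No license review, export control, or
# specialized infrastructure stands between a user and this step.
result = generator("Artificial intelligence will", max_new_tokens=40)
print(result[0]["generated_text"])
```

The point is not this particular model; it is that the entire workflow, from download to inference, is free, anonymous, and available to anyone with a laptop.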

The low barriers to entry for creating malevolent AI are a result of technological democratization, rapid advancements, global knowledge spread, anonymity in development, dual-use potential of AI technologies, collaborative development models, and the exploitation of AI in cyber warfare and criminal activities. These factors combine to create a landscape where the development of harmful AI is not only possible but increasingly feasible for a wide range of actors with varying levels of expertise and resources. Addressing these challenges requires a multifaceted approach, involving both technological solutions and international cooperation, to ensure the safe and ethical development of AI technologies.

Stopping the Proliferation of Malevolent AI – A Government Approach (Also Known as “When All You Have Is a Hammer, You’ll See Everything as a Nail”)

To see what governments typically do to contain an existential threat, consider the approach taken after World War II to contain the spread of nuclear weapons – at the time, the most dangerous weapon ever devised.

The approach to containing nuclear proliferation primarily involves international treaties and agreements, such as the Non-Proliferation Treaty (NPT), which aims to prevent the spread of nuclear weapons and weapons technology, promote peaceful uses of nuclear energy, and further the goal of disarmament. Key strategies include strict export controls, monitoring and inspections by international bodies like the International Atomic Energy Agency (IAEA), and diplomatic efforts to manage regional tensions and conflicts that could lead to nuclear proliferation. This framework relies heavily on international cooperation, enforcement mechanisms, and a balance of diplomatic and deterrent strategies.

What we have seen with this approach is that nation-states like Iran, Pakistan, India, and North Korea have been able to build out their nuclear programs with only a few hiccups along the way.

Now, if you compare that to how we are approaching AI regulation, you see the same tried-and-true approaches being applied to the AI problem. From restricting China’s access to advanced AI chips to the creation of regulation (and eventually treaties), we have started down this path. Governments will pursue regulation; it is a catch-22, given the technology’s complexity and the public’s need for whatever control we can manage. Responsible governments, corporations, and citizens can help deliver on the promise of what benevolent AI can bring humanity. However, humanity, and in particular governments, need to prepare for the eventuality that the development of malevolent AI cannot be stopped.

To work toward safer and more trustworthy AI, we have to be smarter. The box we would normally think outside of no longer exists; we are in a brave new world. In this world, we will need to work on defensive AI, restorative AI, AI deterrence, and ultimately AI superiority. In an upcoming article, I will touch on what we can do in these spaces.

About the Author: Aaron Francesconi, MBA, PMP

Aaron Francesconi is a transformational IT leader with over 20 years of expertise in complex, service-oriented government agencies. A retired former executive for the IRS, Aaron occasionally writes articles for trustmy.ai when he can. He is the author of "Who Are You Online? Why It Matters and What You Can Do About It" and the "Foundations of DevOps" courseware; his insights offer a blend of practical wisdom and thought leadership in the IT realm.
