AI regulation: What businesses need to know in 2024

The idea of artificial intelligence, or synthetic minds able to think and reason in ways that approximate human capabilities, has been a recurring theme for centuries. Many ancient cultures explored the idea and even pursued it. In the early 20th century, science fiction crystallized the notion for large modern audiences: stories such as The Wizard of Oz and films like Metropolis struck a chord with audiences around the world.

In 1956, John McCarthy and Marvin Minsky hosted the Dartmouth Summer Research Project on Artificial Intelligence, the workshop at which the term artificial intelligence was coined, and the race for practical ways of realizing the old dream began in earnest. Over the next five decades, AI developments and enthusiasm would wax and wane, but as the computational power driving the Digital Age grew exponentially and computing costs dropped precipitously, AI moved definitively from the realm of science-fiction speculation to technological reality.

By the early 2000s, the massive funding poured into accelerating and deepening the capabilities of AI systems, and machine learning in particular, had paved the way for the recent breakthroughs that have made AI integral to business operations.

Generative AI tools go mainstream

In 2023, generative artificial intelligence systems — tools based on machine learning algorithms and capable of producing new content such as images, text or audio — became ubiquitous in public discourse. Companies began scrambling to understand, embrace and effectively implement OpenAI’s ChatGPT and other large language models. The potential benefits for companies of all sizes have come into sharp focus: greater efficiency in processes, fewer human errors, reduced costs through automation, and the discovery of unanticipated insights in the massive and growing stores of proprietary and public domain data.

As AI’s capabilities and business benefits have grown, however, so have its complexities and risks, driving governments worldwide to address how best to protect the public while not stifling innovation.

The urgency of AI regulation

Governments today are laser-focused on regulating AI. Perennial concerns such as consumer protection, civil liberties, intellectual property rights and fair business practices largely explain this worldwide interest.

But at least as important as safeguarding citizens against the downsides of AI is the competition among governments for supremacy in AI. Attracting brains and businesses means creating predictable, navigable regulatory environments in which AI enterprises can thrive.

In effect, governments are navigating between an AI regulatory Scylla and Charybdis: on the one hand, seeking to protect citizens from the very real downsides of AI deployed at scale; on the other, needing to engineer governance regimes for a complex, fast-moving technology whose shock waves are starting to permeate society, all without impeding the advancement of the underlying technologies.

Worldwide race to regulate AI systems

Amid the rapid evolution and adoption of AI tools, AI regulations and regulatory proposals are spreading almost as rapidly as AI applications.

U.S. regulatory protections and policies

In the U.S., government at virtually every level is actively working to implement new regulatory protections and related frameworks and policies designed to simultaneously cultivate AI development and curb AI-enabled societal harms.

Federal AI regulation. AI risk assessment is presently a top priority for the federal government. U.S. lawmakers are particularly sensitive to the difficulty of understanding how algorithms are created and how they arrive at certain outcomes. So-called black box systems make it difficult to mitigate risks, map processes and document their effects on citizens.

To address these issues, the Algorithmic Accountability Act (H.R. 5628; S. 2892) is presently being debated in Congress. If it becomes law, it would require entities that use AI and other automated decision systems in critical decisions pertaining to housing, healthcare, education, employment, family planning and other personal areas of life to assess their impact on citizens both before and after the algorithms are used.

Similarly, the DEEP FAKES Accountability Act (H.R. 3230) and the Digital Services Oversight and Safety Act (H.R. 6796), should they become law, would require entities to be transparent about, respectively, the creation and public release of “false personation[s]” and of misinformation and disinformation created by generative AI.

The most significant domestic federal action, however, has been Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued by the Biden Administration in October 2023. Building on the White House’s Blueprint for an AI Bill of Rights and the AI Risk Management Framework published by the National Institute of Standards and Technology, the E.O. marks a new level of aggressiveness in the government’s efforts to promote responsible AI deployment and use.

Focusing on federal agency usage, regulation of domestic industry and collaboration with like-minded international partners, the E.O. expresses regulatory ambition in at least eight discrete goals:

  1. Mitigation of risks posed by the increased adoption of AI in industry and private use.
  2. Attraction and consolidation of AI talent, including by attending to the concerns of small business founders and protecting intellectual property.
  3. Protection for workers whose livelihoods will likely be significantly affected by generative AI.
  4. Protection of civil rights via intensified monitoring and mitigation of AI biases.
  5. Stronger consumer protections.
  6. Greater protection of consumer data, personal data and privacy.
  7. More efficient use of generative AI in federal government with greater oversight and guardrails.
  8. U.S. leadership in the global governance of AI, ideally in collaboration with international partners.

U.S. state and city AI regulation. States, too, are joining the race to regulate AI. California, Connecticut, Texas and Illinois are each attempting to strike the same balance as the federal government: encouraging innovation, while protecting constituents from AI downsides. Arguably, Colorado has gone furthest with its Algorithm and Predictive Model Governance Regulation, which would impose nontrivial requirements on the use of AI algorithms by Colorado-licensed life insurance companies.

Similarly, municipalities are moving ahead with their own AI ordinances. New York City is leading the way with Local Law 144, which focuses on automated employment decision tools — that is, AI used in the context of human resource activities. Other U.S. cities are bound to follow the Empire City’s lead in short order.

Global AI regulations

In Europe, the dominant source of AI governance is the European Union’s Artificial Intelligence Act, which went into effect on August 1, 2024, and applies to all 27 member countries. It is expected to be fully implemented by August 2, 2026.

Like the U.S. Blueprint for an AI Bill of Rights, the EU’s AI Act is designed chiefly to require developers who create or work with high-risk AI applications to test their systems, document their use of them and mitigate risks through appropriate safety measures. Importantly, the EU AI Act will likely generate a “Brussels Effect,” or significant regulatory reverberations well beyond the borders of Europe.

China’s Interim Measures for the Management of Generative Artificial Intelligence Services took effect on August 15, 2023, following a period of public comment. The Cyberspace Administration of China was the primary agency behind the measures, which regulate generative AI services offered to users in mainland China.

The Canadian Parliament is currently debating the Artificial Intelligence and Data Act, a legislative proposal designed to harmonize AI law across Canada’s provinces and territories, with a particular focus on risk mitigation and transparency.

Across the Americas and Asia, at least eight other countries are at various stages of developing their own regulatory approaches to governing AI.

Potential impact of AI regulation on companies

For companies located in, doing business in or searching for talent in one or more of the above-mentioned jurisdictions, the significance of this regulatory ferment is twofold.

First, companies should anticipate a continued, and perhaps accelerated, stream of new regulatory proposals and enforceable laws across all major jurisdictions, and across all levels of government, over the next 12 to 18 months. Chief among these are the provisions of the recently enacted EU AI Act, though domestic action at the state or municipal level could also have an effect.

This crescendo of regulatory activity will very likely have at least two predictable outcomes: increased complexity of doing business and increased compliance costs. A veritable thicket of AI regulation will demand new expertise, and likely regular updates from AI law specialists, to ensure compliance both in the workplace and in engagements with clients and customers, particularly in highly regulated industries.

A particularly thorny issue will likely be the interaction between AI policies and preexisting regulations. For example, in the U.S., the Federal Trade Commission has clearly signaled its intention to clamp down on exaggerated claims about enterprise AI. Companies will need to be especially aware of the ways old laws apply to new algorithms, rather than focusing only on the new laws that contain “AI” in their titles.

Businesses will also need to be extremely careful when engaging with the newly minted vendors emerging to meet soon-to-be-common legal requirements for various types of AI use certification. Finding trustworthy vendors, or in all likelihood certified certifiers, will be essential not only to complying with the new laws, but also to signaling to consumers that companies are as safe as possible and to satisfying business insurers that AI risks are being effectively managed.

The road ahead for AI regulation will almost certainly be bumpy. Companies that stay focused on their missions, embrace AI with clear eyes rather than succumbing to the hype, and source and effectively implement high-quality legal advice will fare well.

Michael Bennett is director of educational programs, policy and law in The Institute for Experiential Artificial Intelligence at Northeastern University in Boston. Previously, he served as Discovery Partners Institute’s director of student experiential immersion learning programs at the University of Illinois. He holds a JD from Harvard Law School.
