What Might AI Regulation Look Like?

Written by Margaret Jennings


A lot has changed since ChatGPT launched as a “Research Preview”. In under a year, it has become one of the world’s leading consumer and enterprise products, with over 1.4B monthly views and over $1B in enterprise sales reported.

Its ascent reflects how independent AGI research labs have evolved into product-led companies.

Why are they doing this? Research leaders believe that pairing product development with AI model training drives better model performance and cost structure. We’ve seen this play out with Midjourney, Character.AI, Inflection AI’s Pi, and Meta’s in-app celebrity chatbots and image generator, along with OpenAI’s DALL·E and ChatGPT.

As billions of people use these products every day, countries on the world stage are beginning to organize around AI regulation. The following essay expands on how countries are balancing the upholding of democratic values with the safeguarding of economic interests as generated content becomes more accessible, election interference becomes more likely, and productivity gains and the automation that follows accelerate across industries.

I suspect the UK and Canada will take strong regulatory stances within the year, establishing new rules on AI monitoring, evaluation, and accountability. For the UK, it’s an opportunity to lead on the world stage, safeguard its economic interests, and chart its own path unencumbered by the EU (“Downing Street trying to agree statement about AI risks with world leaders”, Guardian, Oct 11, 2023). Sunak’s government has an opportunity to set a precedent by enforcing guardrails that protect our democratic values and foster innovation without being shortsighted (i.e., not limited to the models and capabilities we see today, but imaginative as to what could exist in the not-so-distant future).

Over the coming weeks, I’ll be sharing essays on what I’ve learned this past year building two LLM-powered products. One thing is certain amid the rapid change and paradigm shifts: LLM-powered products will continue to evolve as we translate new business, regulatory, research, and user needs into product design.

Existential risk, regulation, and Big Tech

Global AI regulation came to the forefront this past spring when the Center for AI Safety published its “Statement on AI Risk”, a single sentence signed by over 100 leading executives, researchers, and academics: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (May 30, 2023).

Their argument is that AGI has the potential to cause serious harm to countries and their citizens; therefore, its research and deployment need to be heavily monitored and privately built to ensure bad actors have limited to no access.

Many have argued that this statement fails to address its signatories’ own role in the threat: those who signed it were largely from Big Tech, with the resources - research talent, GPU allocation, advanced AGI models, and distribution - to cause harm.

On his recent book tour, Mustafa Suleyman, DeepMind and Inflection AI co-founder, argued against “naive open source” AI, stating it accelerates the “risks of large-scale access” (“Mustafa Suleyman on getting Washington and Silicon Valley to tame AI”, 80,000 Hours Podcast, Sept 1 2023).

Those from the open source community counter that Suleyman, along with other tech CEOs, is fear-mongering to enforce a "pull up the ladder" strategy, whereby the interests of established companies are safeguarded at the expense of smaller players and the commons. By advocating for regulatory measures that restrict the work of others, well-funded companies aim to maintain their dominant position in the race to AGI and its commercialization.

In response to the “Statement on AI Risk”, many from the research community believe that the focus on AGI existential risk overshadows the importance of addressing the current, measurable AI risks such as bias, misinformation, discrimination, exploitation, and election interference.

Startup leaders believe that the “existential risk” argument distracts regulators and inadvertently creates an anti-competitive environment that favors incumbents like OpenAI, Anthropic, and Google DeepMind over university AI research labs, early-stage startups, and the open source community.

The threats of disinformation and harm are more prominent now than ever as the G7 fights two proxy wars in Ukraine and Israel. One can imagine that an open source, open-weights GPT-4-level model will be available for free in the coming months and could easily be fine-tuned by bad actors to convincingly spread misinformation online. These existential and present-day risks make AGI safety and regulation a centerpiece of each country’s foreign policy. How our leaders balance our safety, economic interests, and the industry’s guardrails will continue to unfold in the coming months.

Open Science is the bedrock of AGI research

As the debate between private and open research continues both in and outside politics, it’s important to review the potential benefits of small players and open source contributions to ensure that AGI benefits everyone and is accessible to all. The following section looks at how Open Science forms the foundation of generative AI, and how regulation could both preserve and hinder that accessibility.

Last month Mistral, an early-stage startup co-founded by LLaMA author Guillaume Lample and Flamingo author Arthur Mensch, released a smaller, open source model that outperformed the current gold standard - LLaMA 2’s larger models - on a wide range of benchmarks. Mistral researchers used only data found on the internet (e.g., the chat model was trained on free instruction datasets found on Hugging Face; “Mistral 7B”, Mistral, Sept 27, 2023).

Over the weekend, reports on Twitter highlighted how easy it was to manipulate the model towards unsafe outcomes, and how the lack of built-in safety features made this “uncensored model” far more 'helpful' than models like LLaMA 2.

Jack Clark, the Anthropic co-founder, writes: “this illustrates the challenging tradeoffs inherent to safety-as-censorship; by shaving down the sharp edges of models you can also make them less useful, which makes safety interventions seem more like a tax than a benefit” (“Import AI 342: Mistral dumps an LLM on BitTorrent; AMD vs NVIDIA; Sutton joins Keen”, Jack Clark, Oct 2, 2023).

Open source models like Mistral highlight the importance of data provenance. Until recently, the internet was largely seen as a distribution network, where businesses and individuals alike go to publish and market their work. Today, that freely published content is being used as training data for large language models at both open and private research labs.

To ring-fence large language models - that is, to determine who has access and at what price - would be the antithesis of how these models are created. One could argue that if we - as active online contributors - participate in the making of the model by publishing online, we should also receive the benefits.

As a result, open source models ensure a level playing field between private and public research labs and their product builders; however, their free access raises concerns about trust and safety. Open Science should be both safeguarded and regulated as we begin to draw the contours of global AI regulation.

Free Speech, Copyright Law, and the Fair Use Doctrine

As LLMs continue to commodify knowledge, their research and applications are sparking new legal interpretations of Constitutional Rights, including Free Speech, as well as Copyright Law and the Fair Use Doctrine.

Cass Sunstein, co-author of Nudge, writes in his recent paper “Artificial Intelligence and the First Amendment”: “AI itself is not human and cannot have constitutional rights just as a vacuum cleaner does not have constitutional rights. But it seems pretty clear that content created by Generative AI probably has free speech protections.” (April 28, 2023)

Sunstein goes on to write: “speech generated by AI might be unprotected, but AI might be in some sense autonomous; what it has learned, and what it is saying, might not be traceable to any deliberate decisions by any human being.” He highlights that content or speech created by LLMs, while not inherently carrying constitutional rights, may be deserving of free speech protections. This distinction is crucial to understanding the legal landscape surrounding AI-generated content. He also notes the complex issue of liability for disseminating speech generated by AI.

Sunstein raises new questions about whether humans should be held responsible for AI-generated speech and, if so, under what circumstances. He suggests that the autonomy of AI, its learning processes, and the intentions of human actors involved in disseminating AI-generated content are factors that could influence the determination of liability. This reflects the evolving legal and ethical discourse around AI and the challenges in defining responsibility and culpability in a rapidly advancing technological landscape.

These legal and ethical considerations play a pivotal role in shaping how AI developers, along with model and cloud providers, operate within the bounds of copyright law and free speech. The acquisition of data for training AI models raises concerns about intellectual property rights.

Many argue that AI developers must ensure compliance with relevant laws, including licensing and compensating individuals for their intellectual property used in training data (”Generative AI Has an Intellectual Property Problem”, HBR, April 2023). In the future, this may involve sharing revenue generated by AI tools and appropriately licensing copyrighted materials.

A critical aspect of AI's legal landscape involves the interpretation of the fair use doctrine, which permits the use of copyrighted work without the owner's permission for transformative purposes such as criticism, comment, teaching, scholarship, or research.

In a recent Stanford Law paper titled “Foundation Models and Fair Use”, co-author Peter Henderson asks rhetorically: “what happens when anyone can say to AI, ‘Read me, word for word, the entirety of Oh, the Places You’ll Go! by Dr. Seuss’? Suddenly people are using their virtual assistants as audiobook narrators — free audiobook narrators”.

A wave of legal cases related to AI-generated content is expected, and those cases are anticipated to hinge on the interpretation and application of this doctrine.

In the meantime, model and cloud providers are now marketing that they will assume the legal risk if a customer is sued for copyright infringement over generated content. In arguably a first for the industry, Microsoft and OpenAI have taken a proactive approach by offering legal protection to customers of their AI systems (“Microsoft offers legal protection for AI copyright infringement challenges”, Sept 8, 2023).

In conclusion, the rise of AI and its impact on constitutional rights, copyright law, and the fair use doctrine has introduced new legal and ethical considerations. While AI itself does not possess constitutional rights, content created by generative AI may be deserving of free speech protections. The issue of liability for disseminating AI-generated speech raises complex questions regarding the autonomy of AI, the learning processes involved, and the intentions of human actors.

I’m personally very interested in how these open questions around Free Speech and Copyright Law get answered and interpreted by researchers, product leaders, executives, and governments alike, as the answers will undoubtedly impact how we create, apply, and interact with LLMs in the future.

Shaping the contours of AGI regulation within the context of economic growth

AGI research labs - and their subsequent products and services - have the potential to be large economic drivers for a country’s GDP. Think of the outsized role the banking or tech industries play in global economies. As the knowledge industry becomes further commoditized, LLM-powered products, services, and companies could take a similar path of power and influence.

Where the EU, UK, Canada, and US regulation appear to be headed

In April 2023, we saw the G7 begin to grapple with the challenges of balancing innovation with the need for guardrails to prevent harm or discrimination. To promote trustworthiness in AI systems, the G7 AI Declaration emphasized the importance of tools such as regulatory frameworks, technical standards, and assurance techniques. Additionally, they wrote that AI-related laws, regulations, policies, and standards “should be human centric and based on democratic values, including the protection of human rights and fundamental freedoms and the protection of privacy and personal data” (G7 Digital and Tech Ministerial Declaration of April 2023).

In the subsequent months, the European Union (EU), the United Kingdom (UK), Canada, and the United States (US) continued to identify how AI regulation might be enforced nationally - finding common ground by emphasizing the importance of safety, transparency, non-discrimination, data privacy, and providing notice and explanation to users. This convergence of priorities suggests a shared commitment to ensuring responsible and ethical AI practices both nationally and internationally.

The EU is notably focused on misinformation and the potential harm Russia could inflict on its elections. Its current framework offers a risk-based approach and continues to safeguard the open source community, stating in “The Digital Services Act”, adopted in July 2022: “the European Union has adopted a modern legal framework that ensures the safety of users online, establishes governance with the protection of fundamental rights at its forefront, and maintains a fair and open online platform environment.” (”The AI Act”, proposed in April 2021 (presentation); ”Hugging Face, GitHub, and more unite to defend open source in EU Regulation”, July 26, 2023).

The UK, under PM Rishi Sunak’s leadership and with its upcoming AI Safety Summit, continues to walk a fine line between bolstering economic development and mitigating “existential risk”. In March 2023, his government set forth the “A pro-innovation approach to AI regulation” white paper, stating: “responding to risk and building public trust are important drivers for regulation”. For context, the UK is an epicenter of AI research talent - second only to the Bay Area. London is home to Google DeepMind along with offices of leading AI research labs like OpenAI, Anthropic, Cohere, Tesla, Mistral, and Wayve, and the UK hosts universities like University College London, the University of Edinburgh, Cambridge, and Oxford.

During this year’s UN General Assembly, Oliver Dowden, the UK Deputy Prime Minister, made existential risk a centerpiece of his foreign policy remarks, stating: “The AI revolution will be a bracing test for the multilateral system. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” (”AI poses ‘bracing test’ to multilateral system, says UK deputy prime minister”, Sept 24, 2023). Within a few days of Dowden’s speech, venture capitalist Nathan Benaich spoke at the House of Lords: ”premature regulation on the basis of safety fears is bad for competition - that’s why big companies are happy to advocate for policies like licensing regimes, as they know they’ll hamper open source model providers” (“The UK LLM opportunity: Air Street at the House of Lords”, Sept 28, 2023). How the UK government will foster homegrown talent, attract international investment, and enforce regulatory guardrails remains to be seen.

In September 2023, Canada introduced a "Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems," which includes measures such as human oversight and monitoring for addressing emerging risks. The Code states that “while they have many benefits, advanced generative AI systems also carry a distinctly broad risk profile, due to the broad scope of data on which they are trained, their wide range of potential uses, and the scale of their deployment… the capability to generate realistic images and video, or to impersonate the voices of real people, can enable deception at a scale that can damage important institutions, including democratic and criminal justice systems.” The Code has been signed by Cohere and BlackBerry, whilst other companies have expressed concerns that the policy could stifle innovation and the ability to compete with companies based outside Canada (“Canada’s voluntary AI code of conduct is coming - not everyone is enthused”, CBC, Oct 2, 2023).

The US efforts, led by President Biden and Congress, are largely focused on transparency, accountability, and the right to know when one is interacting with an AI model. The White House, influenced by the "algorithmic justice" movement, put forward a ”Blueprint for an AI Bill of Rights” in October 2022 focused on how technology, data, and automated systems should respect and uphold the rights of the American public. A year later, the “Bipartisan Framework for US AI Act” takes this policy a step further by spelling out legal accountability for harms: “Congress should ensure that AI companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms”. President Biden will most likely not move forward with AI regulation until after the 2024 US General Election.

In conclusion, the past year has seen increased momentum globally towards developing thoughtful and measured regulation of AI systems. While approaches vary, common themes have emerged across major regions focused on ensuring safety, explainability, transparency, accountability, and avoidance of unfair bias. Regulation will likely remain an ongoing discussion as governments balance innovation and economic competitiveness with managing potential risks. The degree to which international alignment can be achieved will impact how effectively the AI ecosystem evolves. Continued public-private partnership and multistakeholder collaboration will be important to develop regulation that enables, rather than stifles, continued progress towards responsible AI development.

AGI art by Valeria Palmeiro
