Sign up for Powell Tate Insights for monthly fresh takes on disruptions, innovations and impact in public affairs.
Artificial intelligence (AI) raises profound legal and regulatory questions. And while AI promises huge benefits for society, it also poses major risks. The challenge for legislatures and regulators is getting the balance right between innovation and risk. Making rules adaptable for a technology that is likely to change rapidly is difficult. Regulators are currently focused on issues such as AI model safety, bias, transparency, data privacy, security, trust, copyright protections, content regulation, discrimination and economic impacts (job loss, workforce adjustments, competition). For companies, a key area of focus will be the impact of regulation on innovation and productivity. The AI regulatory debate is taking place globally.
Here is a summary of current regulatory initiatives in key global markets.
Biden administration
The U.S. has no comprehensive federal legislation or regulations governing the development of AI or specifically prohibiting or restricting its use, though some existing federal laws apply to AI in limited contexts. Federal agencies are currently implementing President Biden’s October 2023 Executive Order (EO) guiding the government’s use of AI and mandating some private sector requirements.
Federal regulations issued without clear congressional authorization could face legal challenges, particularly after the Supreme Court’s June 2024 decision overturning the Chevron doctrine. The U.S. government will likely boost spending on AI research and development, including in defense and intelligence areas, using its buying power to help shape the market.
Congress
Senate legislation under consideration would provide $32 billion to strengthen national security, address labor displacement, fund research and innovation, promote election transparency and ensure consumer protections. House legislation is also being considered to protect consumers from deceptive AI and manage federal governance.
States
Legislative initiatives have passed or are being considered in California, New York, Colorado and several other states.
2024 election
Vice President Harris led the U.S. delegation to the 2023 Global Summit on AI Safety in London. Absent action in Congress, a Harris administration would likely be limited to existing executive authorities and maintain many of the policies of the current administration. Former President Trump says if elected he will rescind and replace Biden’s EO.
Timelines
Biden’s EO sets a series of agency deadlines, with key compliance requirements due by December 1, 2024. It is uncertain whether any bipartisan AI legislation can be approved by both chambers by the end of 2024.
EU AI Act
The recently approved Artificial Intelligence Act (AI Act or the Act) aims to create a secure and trustworthy environment for the development and use of AI in the European Union. The Act, which the European Council approved on May 21, 2024, is the first of its kind globally and may set global standards for AI regulation, much as the General Data Protection Regulation (GDPR) did for data privacy.
The Act’s goals include promoting public trust in AI; providing strong protections for public health, safety and fundamental rights (including democracy, the rule of law and the environment); improving the functioning of the internal market; and supporting innovation and AI-related investment.
The Act establishes a four-tier, risk-based classification of AI applications: “unacceptable” risk uses are prohibited; “high” risk uses carry obligations and requirements; “limited” risk uses face transparency measures; and “low” risk uses are largely exempt.
The EU AI Act applies to foreign companies doing business in the EU. The Member States will provide governance and enforcement. Non-compliance with the AI Act could result in substantial fines that vary based on the nature of the violation and the size of the organization. Infractions involving prohibited AI systems may incur fines of up to €35 million ($38.1 million) or 7% of global turnover.
Timelines
The Act entered into force on August 1, 2024, with phased application: prohibitions on unacceptable-risk systems apply from February 2025, obligations for general-purpose AI models from August 2025 and most remaining provisions from August 2026.
China: Enacted regulations in July 2023 and May 2024 to govern AI content, with state-centric values to promote social harmony and stability. Developing more than 50 new national and industrial standards by 2026. AI companies must undergo government reviews to confirm their large language models reflect “core socialist values.” China’s regulatory landscape raises questions about the future of innovation in China, as laws might force companies to prioritize government compliance over creativity and technological progress.
Japan: Created an AI Strategy Council in May 2024 to develop a legal framework for AI.
South Korea: Created a Presidential AI Committee to develop the government’s AI approach in July 2024. Plans to establish an AI Safety Institute in late 2024.
Singapore: Published the Model AI Governance Framework for Generative AI in May 2024.
India: The Ministry of Electronics and IT has begun drafting a standalone law on AI, with a focus on content moderation.
United Kingdom: The new Labour government plans to introduce AI regulation in targeted areas, including binding regulations on the “handful of companies developing the most powerful AI models” and prohibiting the creation of sexually explicit deepfakes.
France: Will host the second global AI Summit in February 2025 and propose a set of global governance standards.
Canada: The proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022, aligns with the EU’s AI Act by taking a risk-based approach. Implementation may depend on the outcome of Canada’s 2025 election.
Brazil: Several AI bills are under consideration in Brazil’s Congress. The government plans to launch a national plan for AI development, with a strategy emphasizing transparency, accountability and inclusivity. President Lula also plans to present a global governance initiative at the UN General Assembly this fall.
OECD: An Organization for Economic Co-operation and Development (OECD) initiative, launched in May 2024 and supported by 49 countries and regions (primarily OECD members), aims to advance cooperation for global access to safe, secure and trustworthy generative AI.
G7: At their May summit in Italy, G7 leaders affirmed the importance of creating international partnerships to ensure all people can access the benefits of AI, recognizing the need to make sure it enables increased productivity, empowers workers and creates inclusiveness and equal opportunities.
United Nations: In May, the UN introduced a draft resolution on AI encouraging Member States to implement national regulatory and governance approaches toward a global consensus on safe, secure and trustworthy AI systems. The UN does not have the ability to pass laws or regulations. However, the UN Charter gives the General Assembly the power to initiate studies and make recommendations to promote international law. In the future, the General Assembly may vote on AI resolutions, which are expressions of the Member States’ views but not legally binding.
Author
Washington DC | International consultant to governments, multinational corporations and foundations on global economic, trade, development and climate issues.