Sign up for Powell Tate Insights for monthly fresh takes on disruptions, innovations and impact in public affairs.
Welcome to the November edition of the AI Policy Pulse newsletter.
Artificial intelligence (AI) regulation, collaboration and risk management actions are accelerating in the United States, United Kingdom and Europe. This edition of AI Policy Pulse provides an overview of recent events in Washington, D.C., London and Brussels.
President Biden signed a long-awaited Executive Order on October 30 designed to reduce the risks that AI poses to consumers, workers, minority groups and national security. The Order is primarily directed toward U.S. federal agencies. However, it includes requirements for private companies that are developing and testing AI systems.
AI testing requirements are the most significant provisions in the Order
Several federal agencies are tasked with specific AI actions
- The Department of Commerce will develop guidance for content authentication and watermarking to label AI-generated items, so that government communications are clearly identified.
- The Departments of Energy and Homeland Security will study the threat AI poses to critical infrastructure.
- The Federal Trade Commission will develop antitrust rules to ensure competition and root out anti-competitive behavior in the marketplace.
- The Labor Department will focus on preventing workplace discrimination and bias.
- The Treasury Department will provide guidance on AI cybersecurity risks for financial institutions.
- The Department of Justice and federal civil rights offices will address algorithmic discrimination to prevent civil rights violations and develop best practices for the use of AI across the criminal justice system.
- The Department of Homeland Security will update visa processes so that highly skilled immigrants and non-immigrants with AI expertise can stay, study and work in the U.S.
- Health & Human Services will work with other agencies to develop an AI plan covering areas such as research and discovery, drug and device safety and public health.
- The State Department will work with international partners to align and implement AI standards around the world.
What companies and contractors should watch
What happens next
Prime Minister Rishi Sunak's two-day AI Safety Summit concluded with 27 countries and the European Union signing the Bletchley Park Declaration on AI, which focuses on so-called “frontier AI” and aims to ensure “an internationally inclusive network of scientific research on frontier AI safety.”
The shared declaration, signaling an international commitment to work collaboratively on AI risks, was cited as a diplomatic achievement for the Prime Minister, who invested his political capital in convening global leaders, tech executives, academics and civil society groups.
The Prime Minister hailed the fact that the United Kingdom and China both signed the communique as a “sign of good progress,” and said it vindicated his decision to invite China. China’s vice minister of science and technology said his country was willing to work with all sides on AI governance. “Countries regardless of their size and scale have equal rights to develop and use AI,” Minister Wu Zhaohui told delegates.
Other highlights
While the UK Summit is a start to international cooperation, a global agreement for overseeing the technology remains a long way off. Disagreements remain over how that should happen – and who will lead such efforts.
The AI Act is entering its last mile. EU policymakers met on October 25 for another round of political negotiations and finalized a key point on the classification of high-risk AI by introducing a filtering system. Under the filter, AI systems fall outside the high-risk category if they perform narrow procedural tasks, detect patterns, do not influence critical decisions (e.g., loan approvals or job offers), or aim only to enhance work quality.
What happens next
The EU also welcomed the G7 International Guiding Principles and voluntary Code of Conduct for AI developers agreed on October 30, viewing them as complementary to the EU AI Act, and called on AI developers to sign and implement the Code of Conduct as soon as possible. The G7 principles and code of conduct contain a set of rules that AI developers are encouraged to follow, on a voluntary basis, to mitigate risks throughout the AI lifecycle.
Commission President Ursula von der Leyen attended the UK’s AI Safety Summit and signed the Bletchley Declaration. In a meeting with Prime Minister Sunak, von der Leyen said the European AI Office – which will be set up under the AI Act – should have a global vocation and cooperate with the AI Safety Institutes announced by the United States and United Kingdom. Looking to build on the UK Summit, Italian Prime Minister Giorgia Meloni unveiled plans to hold an international conference focused on AI and its impact on labor markets during the Italian G7 Presidency next year.
While the Commission has not officially commented on the White House’s AI Executive Order, several Members of the European Parliament welcomed it as a good step forward and noted the convergence on mitigating risks of foundation models. Some Parliamentarians are worried the EU risks falling behind in the global discourse on AI regulation by getting stuck in “regulatory overkill.”
For more information, contact our team:
James Meszaros (Washington, D.C.)
Oliver Drewes (Brussels)
Ella Fallows (London)