ARTIFICIAL INTELLIGENCE
What the GDPR can teach us about AI regulation
Jun 16, 2023

The EU looks set to adopt the world’s first comprehensive AI regulatory framework.

Image: Unsplash/Mathew Schwartz

Chandler Morse
Vice President Corporate Affairs, Workday
- The EU is set to adopt the world’s first comprehensive AI regulatory framework in early 2024.
- The bloc has been a global leader in the development of tech policy since the adoption of its privacy legislation, the GDPR.
- US stakeholders and policy-makers should work together to ensure they too can play a leading role in developing meaningful and workable AI regulation.
Discussions on regulating artificial intelligence (AI) have intensified in the United States, coinciding with a significant development in Europe, where the legislative process that began in 2018 reached a major milestone this week as the European Parliament voted on amendments to the proposed Artificial Intelligence Act (AI Act). The vote sets up the so-called trilogue negotiations, the final phase of the EU’s process, and paves the way for the likely adoption of Europe’s — and the world’s — first comprehensive AI regulatory framework in early 2024.

AI has the power to unlock human potential. At the same time, these technologies demand a mature policy approach, which includes smart regulatory safeguards that help build trust in AI. While refinements are still needed, the amendments made so far — such as the use of a more tailored definition of AI, and a nuanced risk-based approach — suggest that the AI Act’s requirements will be both meaningful and workable, and that it will play a foundational role in building an emerging global consensus on AI regulation.

For many in Washington, there may be a sense of déjà vu when it comes to Europe playing a leading role in the development of tech policy. In 2016, Europe adopted privacy legislation – the General Data Protection Regulation, better known simply as GDPR – that has gone on to play a significant role in global privacy regulation.

Given the GDPR’s passage and subsequent impact, there are three key lessons the US can learn as AI policy conversations heat up around the globe and Europe is poised, yet again, to take a significant and early step.

1. Congress should act
As a global leader in the technology sector, the US has a crucial role to play in setting the direction of technology policy, including on AI. To date, Congress has made progress: it directed NIST to develop the AI Risk Management Framework, an important step for which Workday was an early champion.

In addition, Congress created the National AI Advisory Committee, a group of experts – including Workday’s co-president Sayan Chakraborty, in his personal capacity – tasked with providing timely advice to the White House. However, as concepts turn into concrete policies in capitals around the globe, a lack of further congressional action will grow increasingly conspicuous. Now is the moment for Congress to pass legislation addressing the need for meaningful AI safeguards.

2. International cooperation is critical
In the case of Europe’s AI efforts, we are starting from a foundation of shared values and even some consensus on core elements of responsible AI. However, shared values alone do not guarantee compatible rules: a world in which innovators are subject to contradictory regulatory regimes must be avoided.

The US has taken steps to drive cooperation on these issues with its European counterparts, including partnering to launch the US-EU Trade and Technology Council, and recently confirming Ambassador Fick as the inaugural US Ambassador at Large for Cyberspace and Digital Policy. Beyond our EU and US advocacy, Workday is engaged in ongoing or emerging AI-related policy conversations in Canada, the UK, Singapore and Australia.

We are in a nascent moment on AI policy and, if the past is prologue, the pace of change after Europe adopts the AI Act will ramp up dramatically. Against this backdrop, the US should increase its investment in AI-focused international cooperation.

3. State legislatures will not wait
In the absence of congressional action on privacy legislation, state governments moved relatively swiftly to fill the void. With AI, this trend is starting even earlier: New York City’s law focused on AI and employment, for example, goes into effect next month. With state and local activity inevitable, Workday has leaned in to play a constructive role in processes like New York City’s, while also engaging with lawmakers in Sacramento, Albany and elsewhere to create effective and workable rules. We’re pleased to see thoughtful contributions to the debate like California’s AB 331, a bill introduced by Assembly Member Bauer-Kahan that seeks to take a risk-based approach to AI regulation while embracing tried-and-true accountability tools like impact assessments. We anticipate a dramatic increase in the number of state proposals in the coming year.

A seismic shift in AI policy
The European Parliament’s adoption of its AI Act position marks the beginning of the end-game for what will be a welcome and seismic shift in the global AI policy landscape. Much of the talk around technology is about the future, but when it comes to Europe’s role in technology policy, it is helpful to look to the past for cues on how to navigate towards a harmonized approach to much-needed AI safeguards that build trust and support innovation. And that harmonized approach is needed now.

Stakeholders and policy-makers in the United States should work together to seize the opportunity presented by this momentum: to secure the future of responsible AI development and advance legislation that enables continued innovation while building trust.

