A.I. regulations doomed to fail without whistleblower protections
By Jennifer Gibson, Legal Director for The Signals Network
In 2023, artificial intelligence (A.I.) went from a niche topic to a primary concern for policymakers around the world. The E.U. reached a landmark deal on the A.I. Act; the U.S. Senate held a series of hearings on A.I., and the White House issued an executive order setting new standards for A.I. safety and security; the U.K. and other countries published their own national strategies and guidelines. This year, these policies will start to morph into concrete action.
Yet these efforts are missing a key ingredient to effective A.I. regulation: internal checks and balances that allow workers to safely blow the whistle when they witness wrongdoing or potential harm.
Technology companies generally rely on trade secrecy and corporate confidentiality to shield information from public view. This lack of information contributes to the “pacing problem,” where technological innovation outpaces the ability of laws and regulations to keep up. Far too often, whistleblowers are the only way governments and the public learn of harmful practices that would otherwise remain invisible.
Consider, for example, Facebook whistleblower Frances Haugen who, in 2021, exposed how the social media giant downplayed the harms its products caused, including worsening body image issues among teenagers and rampant misinformation worldwide. Haugen’s disclosures prompted public outrage, government investigations and increased pressure on lawmakers to act. In 2022, former Uber executive Mark MacGann leaked a trove of documents showing how the company muscled its way into markets and lobbied governments around the world to the detriment of drivers’ rights and physical safety. His revelations sparked protests and inquiries worldwide, and MacGann has since testified to lawmakers in Belgium, France, the Netherlands and the European Parliament.
Other recent whistleblowers include Timnit Gebru, a former A.I. ethics researcher at Google; Anika Collier Navaroli, a member of Twitter’s U.S. safety policy team during the January 6th attack on the U.S. Capitol; and former Facebook content moderator Daniel Motaung. Their rising numbers have led some observers to speak of a “new era of tech whistleblowing.”
Many of those who have come forward have remained anonymous, fearing repercussions: the loss of their careers, financial ruin and even legal battles to silence them. Perhaps most traumatizing is the loss of colleagues, friends and family who don’t understand why they risked everything to speak out.
Yet we should all be thankful they did. In the world of tech – and now the expanding world of A.I. – whistleblowers have been instrumental in keeping the public safe and ensuring our democracies remain strong.
That’s why whistleblowers – and strengthened laws to protect them – should be at the heart of any regulatory framework aimed at putting guardrails around A.I. Individuals on the inside should be assured they will be protected if they speak out. And if top A.I. companies, such as OpenAI and Google, are serious when they say they too are concerned about the harm new technologies might cause, then they will ensure they have robust internal reporting mechanisms that actively encourage – and protect – workers who raise concerns.
New laws like the E.U. Directive on whistleblowing are a good start. The Directive requires Member States to provide workers in the public and private sectors with confidential, effective channels to report breaches of E.U. rules, and it establishes a robust system of protection against retaliation. While much more can and should be done, the U.K., the U.S. and other governments should work quickly to replicate this effort.
There also needs to be more support and resources for civil society organisations that assist whistleblowers. The journey is not simple. Even with the best protections in the world, it is a path that carries great personal risk and, often, trauma. Nobody wants to become a whistleblower. But when workers witness wrongdoing, they need support figuring out what to do next. Recently, The Signals Network worked with Protect, the Whistleblowing International Network, and other whistleblowing experts to create two new guides for tech workers on their whistleblowing rights. These resources should be promoted and made available through government and company portals.
And when individuals do speak up, those writing policy should make sure they are listening to and speaking directly with them. Individuals such as Timnit Gebru and Joy Buolamwini can help policymakers understand where the risks lie and how to devise strategies for mitigating them. They can equally help regulators write policy that protects those on the inside from retaliation.
The growing momentum for regulating A.I. is welcome and urgently needed. But unless these efforts are tied to stronger whistleblower protections, governments will be regulating in the dark, always two steps behind. We must urgently listen to the workers developing and using A.I. tools and make it easier for them to speak up. We all benefit from the light they shine when they do.