Whistleblowers, Stop Talking to Generative AI Chatbots
The enthusiastic adoption of generative AI has prompted users to share more with chatbots than they probably should. What happens to these conversations is uncertain at best. For whistleblowers, this is a risk not worth taking.
By Delphine Halgand-Mishra, Executive Director, The Signals Network
I have a simple message for whistleblowers and anyone considering releasing sensitive information in the public interest: stay away from generative AI chatbots. I say this as the head of an organization that has supported hundreds of whistleblowers and journalists’ sources over the years.
Ironically, AI has been a champion of sorts for us. Over the past few months, several whistleblowers approached The Signals Network following ChatGPT’s recommendations. While I am proud to see our work recognized, I can’t help but worry about how much they said in the prompt. Did they treat AI as a confidant? As a lawyer? As a therapist? Did they share confidential and sensitive details of their situation when asking ChatGPT, Claude or Gemini for advice?
These questions take on a new urgency as we receive an unprecedented number of requests for help. The number of whistleblowers we supported almost doubled in 2025, and the number of people who contacted us rose 268%, a surge that other professionals in our field are experiencing too. Our first-quarter data for 2026 only confirms the trend.
This is happening at a time when an enthusiastic embrace of generative AI has outpaced any attempts at putting safeguards around the technology.
As of August, almost 55% of U.S. adults were using AI, far surpassing the adoption rate of the personal computer and of the Internet at a similar stage of commercialization, according to a survey by the Federal Reserve Bank of St. Louis. While research shows people tend to turn to ChatGPT for practical guidance, information seeking or writing, its creator, OpenAI, says users are also looking for life advice and support.
The AI revolution is so mind-blowing that we tend to forget it is still a work in progress, one involving high privacy risks. A recent Stanford study comparing the privacy policies of Amazon, Anthropic, Google, Meta, Microsoft and OpenAI found that all six U.S. companies use chat data to train their models by default. Those that also run other platforms add social media engagement, search queries and the like to the training mix.
If that were not worrying enough, we are now seeing courts order people and companies to turn over their conversations with generative AI chatbots. OpenAI, for instance, was recently told by a U.S. federal court to produce user logs as part of a copyright lawsuit. As Outten & Golden Partner Dave Jochnowitz pointed out in a recent TSN conversation, treating this material like any other record of communication will have broader implications. To obtain information about a whistleblower, an adverse party could also start requesting such logs, which would not necessarily be protected by attorney-client privilege, even if the whistleblower shared their AI conversation with an attorney.
ChatGPT agrees. When a colleague, posing as a whistleblower, asked the large language model whether it was safe to discuss her case, the bot was quick to stress the limits of its own confidentiality and security. “This chat is not an attorney-client relationship,” it said. “Conversations may be stored and reviewed for system improvement, so this should not be treated as a secure whistleblowing channel. I also cannot guarantee anonymity beyond what you choose not to disclose.”
Why am I so worried? Because I keep thinking of what happened in the mid-2000s, when Yahoo helped the Chinese government identify Yahoo users who were activists. Those activists were subsequently imprisoned, drawing outrage and Congressional hearings. Two decades later, this could still happen. But we now live in an AI world, where AI companies know far more about us than our email or IP address. A world where governments send hundreds of thousands of requests to access the trove of user data that big tech companies keep, or bypass warrants by buying the personal data they need.
In this world, whistleblowers are more needed than ever. Luckily, they are not alone. A number of resources and a network of organizations, lawyers and other professionals who understand confidentiality can help them make informed decisions. Still, they should pay particular attention to technology.
If you are considering speaking up, adopting safe communication habits is essential. Use end-to-end encrypted apps like Signal Messenger. Check out the recommended security steps for journalists, which everyone can learn from. And, I repeat, don’t share anything sensitive with chatbots. While potentially safer platforms are now appearing on the market, speaking to an expert lawyer still offers the best protection. Whistleblowers, please trust the human experts.
