Campbell Brown warns AI models are failing the news accuracy test
The high stakes of murky information
We are currently witnessing the birth of a new information funnel. Every breakthrough in technology brings a period of chaos, and Campbell Brown is sounding the alarm: the large language models currently dominating our lives are essentially "slop" when it comes to high-stakes information. In the pursuit of coding efficiency and mathematical precision, the tech giants have largely ignored the nuanced, murky world of news and geopolitics. This isn't just about a broken link; it's about the erosion of the shared reality required for a functioning society. If we don't fix the funnel, we risk raising a generation that lacks the tools to discern truth from sophisticated hallucination.
Moving from engagement to truth
The fundamental mistake of the social media era was optimizing for engagement. We learned the hard way at Facebook that human beings react most strongly to emotional triggers and opinion validation. My perspective is that this moment represents the necessary pivot: away from "what do people like?" and toward "what is real and truthful?" Enterprise demand will be the catalyst for this change. While a teenager might tolerate a chatbot's creative liberties, a bank making credit decisions or a government agency assessing geopolitical risk cannot. The liability is too high for theater; the market is now demanding actual reliability.

Expert reasoning over generalist guesses
Scaling trust requires more than just smart generalists or automated box-checking. To build a truly reliable benchmark, you must architect systems that capture the reasoning of elite experts. It is about training judges to mirror the nuances of human consensus. We are seeing a massive gap where today's models pull sources from propaganda sites and lag days behind on breaking news. Fixing this requires a commitment to source selection and the inclusion of missing perspectives, moving beyond the "left-leaning bias" that currently plagues most foundation models.
A mandate for AI literacy
There is a profound disconnect between the visionary rhetoric of Silicon Valley and the actual experience of the consumer. While leaders talk about curing cancer, the average user is getting wrong answers to basic health questions. We need to implement AI literacy alongside traditional media literacy. This isn't just a challenge for students; it’s a requirement for the teachers and the professionals who are currently being told that their jobs are on the line. We must bridge the gap between the "hopefulness" of the tech elite and the "low levels of trust" in the general public.
The opportunity of the neutral model
Despite the controversy surrounding political mandates, the underlying principles of truth-seeking and neutrality are the only path forward. We have a rare opportunity to use AI to push back against the echo chambers and filter bubbles that have defined the last decade. If we optimize for truthfulness rather than clicks, we can reconstruct a consensus reality. The power to decide these principles is the ultimate leverage in the modern economy. Those who build the most truthful systems won't just win the market—they will secure the future of informed discourse.
Campbell Brown on Going From Anchor to Facebook to Founding Forum AI | StrictlyVC