Hao: why the AI industry thrives on myths and labor exploitation

The Myth of Artificial General Intelligence

Artificial General Intelligence, or AGI, exists more as a marketing vehicle than a scientific destination. The term serves as a convenient container that OpenAI and its peers redefine based on their immediate audience. When Sam Altman speaks to Congress, he frames AGI as a humanitarian miracle capable of curing cancer and solving climate change. When the same executive speaks to investors at Microsoft, the definition shifts to a system capable of generating hundreds of billions of dollars in revenue. On the company's website, it is defined as autonomous systems that outperform humans at most economically valuable work. This lack of a coherent, scientific definition allows these companies to move the goalposts at will, using the promise of a "god-like" technology to ward off regulation and extract astronomical amounts of capital.

The historical roots of the field reveal that this ambiguity was baked in from the start. In 1956, when John McCarthy coined the term at Dartmouth College, his colleagues expressed concern that the name pegged the discipline to recreating human intelligence, a concept for which there is still no biological or psychological consensus. Every historical attempt to quantify and rank human intelligence has been driven by nefarious motives, often aiming to prove the inferiority of certain groups. By chasing a goalpost that does not exist, the AI industry has created a quasi-religious mythos that requires the public to cede power to a handful of self-appointed guardians.


Internal Power Struggles and the Firing of Sam Altman

The internal culture of OpenAI has been far from the harmonious, mission-driven environment portrayed in press releases. The dramatic firing of Sam Altman by the board was the culmination of long-standing concerns regarding his leadership style and transparency. Ilya Sutskever, the company's chief scientist, became increasingly alarmed by what he saw as a chaotic environment in which teams were pitted against one another and information was selectively shared. These were not merely management gripes; in a company that believes it is building a technology capable of destroying humanity, instability is viewed as an existential threat.

Sutskever and Mira Murati eventually approached independent board members like Helen Toner and Adam D'Angelo with documentation of Sam Altman's behavior. They argued that the problem could not be fixed unless he was removed. One specific point of contention involved the OpenAI Startup Fund: the board discovered that, despite the name, the fund was legally owned by Sam Altman personally, a detail that deepened their distrust. When the board finally moved to fire him, it acted in secret, fearing that his persuasive abilities would derail the process if he caught wind of it. The secrecy backfired, triggering a massive employee revolt fueled by Microsoft and other stakeholders who had been left out of the decision, and ultimately resulting in his reinstatement and the departure of his critics.

The Imperial Structure of Modern Tech

The metaphor of empire is the only framework that fully captures how modern AI companies operate. Like the empires of old, they lay claim to resources that are not their own: in this case, the intellectual property of artists, writers, and everyone who has ever posted on the open internet. They engage in a global land grab for supercomputer facilities, often siting these resource-intensive hubs in vulnerable communities. They also monopolize knowledge production, bankrolling the majority of the world's AI researchers to ensure that only convenient truths are published. When researchers like Timnit Gebru find inconvenient evidence of harm, they are swiftly silenced or terminated.

This imperial agenda is justified by a narrative of "the good empire" versus "the bad empire." OpenAI and its peers argue that they must be allowed to extract data and exploit labor because if they do not do it first, an evil actor, usually China or a profit-driven Google, will win the race. This creates a false dichotomy that forces the public to accept a deeply anti-democratic approach to development. If we believe we are in a civilizational arms race, we are less likely to question the environmental cost of a data center or the ethics of mass data scraping. The narrative is a tool for consolidating power in the hands of a few billionaires who believe they alone should have their "finger on the button."

Labor Exploitation and the Data Annotation Underclass

While the industry markets AI as a tool that will liberate humans from drudgery, the reality for a growing number of workers is the exact opposite. AI is not a self-learning machine; it is a system that requires millions of hours of human labor to function. That labor comes from a global underclass of data annotators who painstakingly label images, text, and video to teach the models. As Sebastian Siemiatkowski of Klarna notes, companies are aggressively downsizing their human workforces in favor of these models. Yet the people being laid off, including highly educated professionals and creative directors, often find themselves forced into the very data annotation jobs that are automating their previous careers.

This work is often precarious and inhumane. Third-party firms pit workers against each other in a race to the bottom, requiring them to stay glued to their screens waiting for pings that signal a new project. This "mechanization" of human life erodes dignity and removes any semblance of a career ladder. There are no rungs to climb when entry-level and mid-tier roles are hollowed out by automation, leaving only high-level orchestrators and a vast, invisible workforce of annotators. The industry is not making us more human; it is atomizing work and devaluing expertise to serve a machine that executives claim will eventually make everyone redundant. This is a political choice, not a technological inevitability.

Environmental Racism and the Physical Cost of AI

The physical infrastructure required to sustain the "cloud" is exacting a devastating toll on public health and the environment. Data centers are not ethereal; they are massive industrial facilities that consume gigawatts of power and millions of gallons of fresh water. These facilities are frequently built in working-class or minority communities that are given no say in their construction. In Memphis, Tennessee, Elon Musk built the Colossus supercomputer using dozens of methane gas turbines. Residents only discovered the facility's existence when they began to smell gas in their homes and experienced worsening respiratory problems.

These communities face a double burden: they are displaced by the technology's economic impacts while their local resources are drained to power it. In regions facing droughts, data centers compete with residents for water to cool their servers. The utility bills for the local population often rise to cover the infrastructure needed for these industrial giants. This is environmental racism in its modern form—extracting the health and resources of the vulnerable to fuel the "abundance" promised to the global elite. The disparity between those who benefit from AI and those who pay for its production is widening into a chasm.

Breaking Up the Empire through Alternatives

The current path of "brute-force" scaling is not the only way to develop artificial intelligence. Specialized models, like AlphaFold by DeepMind, have delivered extraordinary scientific benefits without requiring the entire internet as a training set. These are the "bicycles of AI": efficient, targeted, useful tools that do not demand the resource consumption of a rocket. By focusing on curated data and specific utility, we can preserve the benefits of the technology while stripping away the imperial baggage of mass extraction and exploitation.

Breaking up the AI empires requires a reassertion of democratic agency. This means supporting the 80% of Americans who want to regulate the industry and backing the grassroots movements protesting data center expansion. Artists and writers suing for copyright protection are not just protecting their paychecks; they are withholding the "fuel" the empire needs to perpetuate itself. We must stop viewing AI development as a flawless, inevitable progression and start viewing it as a series of choices that can be contested. If we do not agree with the world these companies are building, we have the right and the responsibility to make its construction as difficult as possible until they agree to a fair exchange of value.

Summary of the Future Outlook

The AI industry stands at a crossroads between imperial domination and democratic integration. While Sam Altman and other leaders project a future of post-labor abundance, the current trajectory points toward heightened inequality and environmental degradation. The "race" against China is frequently invoked as a shield against ethical scrutiny, but the real contest is between the public interest and private power. As social media usage plateaus and younger generations seek more "IRL" connection, there is a growing appetite for a world that prioritizes human flourishing over machine efficiency. Whether AI becomes a tool for collective progress or a mechanism for global extraction depends entirely on our willingness to dismantle the myths and demand a more humane path forward.
