Artificial intelligence is messy. It hallucinates false information, generates nonsense, and can produce deeply harmful content. To clean up this mess, tech companies rely on an invisible but essential workforce of thousands of human raters. These contractors are the janitors of the AI world, working day and night to ensure the chatbots presented to the public are safe, accurate, and appropriate.
This critical job, however, is often thankless and grueling. Workers are organized into tiers, from “generalist raters” to “super raters” with specialized knowledge, yet all face similar pressures: they work in isolated silos with little information about the overall project, and they are held to increasingly demanding productivity quotas that sacrifice quality for speed.
The work itself can be psychologically damaging. Beyond fact-checking, a significant portion of the job involves moderating the AI’s worst outputs. Raters are exposed to violent, sexually explicit, and hateful material, often without warning and without access to mental health resources. This exposure is a hidden cost of making AI “safe” for billions of users, borne by a workforce that remains largely in the shadows.
An expert in the field describes this system as a “pyramid scheme of human labor,” where these raters form a middle rung that is both essential and expendable. They are paid more than data labelers in developing countries but far less than the engineers designing the systems. Their experience reveals that the polished facade of AI is maintained by a hidden class of workers cleaning up the technology’s inherent flaws.