Contractors and freelance annotators working for platforms such as Outlier and Appen describe a late-night piecework economy in which short conversations and red‑teaming tasks can pay anywhere from $8 per 10‑minute chat to as little as $2 an hour, yet the work is unpredictable, often distressing, and increasingly exposed to rapid shifts by large clients and investors.
Key Takeaways
- Some annotators earned about $8 per 10‑minute chat (roughly $70 on a good night); others were paid as little as $2 per hour on certain platforms.
- Contract work is volatile: projects, rates, and available tasks can change suddenly without clear explanation.
- Annotators commonly encounter graphic, illegal, or hateful content when testing or moderating AI systems.
- Platforms often limit disclosure about who will use the labeled data and for what purpose, citing client confidentiality and NDAs.
- Major investments and client shifts, including Meta's purchase of a large stake in Scale AI, the parent company of Outlier, triggered project pauses and worker uncertainty in June 2025.
- Some AI firms are shifting toward more advanced reasoning models and bringing more training in‑house, favoring specialized, higher‑paid experts over mass generalist work.
Verified Facts
Multiple contractors who worked for Outlier described earning around $8 for a 10‑minute conversational task called Xylophone; one worker said he could complete four such sessions an hour and sometimes made nearly $70 in a night. Other contributors reported sudden rate cuts: one worker said his hourly pay for certain generalist projects dropped from $50 to $15 in March, a change the platform later described as a reconfiguration of skill assessments.
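As a rough sketch of the piecework arithmetic reported above, the snippet below converts the Xylophone figures into an effective hourly rate; the dollar amounts come from the contractors' accounts, while the function names are purely illustrative, not platform terminology.

```python
# Illustrative arithmetic only, using the figures contractors reported above.
# Function and variable names are hypothetical, not platform terminology.

def effective_hourly_rate(pay_per_task: float, tasks_per_hour: float) -> float:
    """Convert a piecework rate into an effective hourly wage."""
    return pay_per_task * tasks_per_hour

def hours_for_target(target_earnings: float, hourly_rate: float) -> float:
    """Hours of back-to-back tasks needed to reach an earnings target."""
    return target_earnings / hourly_rate

xylophone_hourly = effective_hourly_rate(pay_per_task=8.0, tasks_per_hour=4.0)
print(xylophone_hourly)                          # 32.0: $32/hour at four $8 chats an hour
print(hours_for_target(70.0, xylophone_hourly))  # ~2.2: hours of steady work for a ~$70 night
```

The math assumes tasks arrive back to back; as the takeaways note, availability is volatile, so realized hourly earnings can fall well below these figures.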
Longtime annotators such as Krista Pawloski recalled starting with basic tagging and image labeling in the 2000s and later moving into content moderation and red‑teaming. Workers described prompts that asked them to try to elicit illegal or sexualized content from chatbots in order to test safety filters. Platforms say that adult or disturbing tasks are labeled and opt‑in and that workers can decline tasks at any time.
Internationally, pay varies widely. One Nairobi worker said he earned about $2 per hour on Appen projects, roughly $16 on days he spent transcribing recordings, which works out to about eight hours of work at that rate. Appen told reporters its rates in some countries exceeded local minimum wages. Researchers from the Oxford Internet Institute surveyed hundreds of cloudwork contributors and found most had little information about how their data would be used.
In June 2025, news that Meta had bought a sizable stake in Scale AI, the parent company of Outlier, coincided with contractor reports of empty dashboards and paused projects. Contractors said some clients, including Google, paused work; Scale AI disputed that the pauses were linked to the investment and said project changes are client‑driven.
Context & Impact
Data annotation underpins much of today’s AI progress: humans label, rate, and stress‑test models so systems learn to match human preferences and avoid harmful outputs. Yet the business model often channels that labor through gig platforms, where freelancers lack stable hours, benefits, or clear disclosure about downstream uses.
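To make "label, rate, and stress‑test" concrete, here is a hypothetical sketch of the kind of pairwise preference record such work can produce; the article does not describe any platform's actual schema, so every field name here is an assumption.

```python
# A hypothetical preference-comparison record of the kind annotators produce
# when rating model outputs. Field names are illustrative; no specific
# platform's schema is implied.
comparison = {
    "prompt": "Explain photosynthesis to a ten-year-old.",
    "response_a": "Plants eat sunlight to make food...",
    "response_b": "Photosynthesis is the process by which plants...",
    "preferred": "b",   # the annotator's judgment between the two responses
    "flags": [],        # e.g. ["unsafe"] when red-teaming surfaces harmful output
}
```

Records like this feed preference-based training (for example, reward models in RLHF-style pipelines), which is why the quality of annotator judgment shapes the resulting systems.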
The industry is evolving. Newer reasoning models, with DeepSeek R1, OpenAI's o3, and Google's Gemini 2.5 cited as examples, rely less on large pools of low‑paid generalists producing repetitive reward signals and more on smaller numbers of specialized reviewers. Platforms now also advertise high rates for subject‑matter experts: listings show opportunities paying over $100 per hour for legal or medical review.
Those shifts create two opposing pressures: fewer entry‑level gigs in lower‑wage regions, and more selective, better‑paid roles for specialists. The result is concentration of opportunity and a continued need for transparency about ethical uses and data protection.
Official Statements
Platform spokesperson (paraphrased): Project pay rates are determined by the skills required for each task, and client contracts influence project availability.
Unconfirmed
- Whether the Meta investment directly caused the June pauses in specific contractor dashboards; the companies involved have denied a direct link, while many workers perceived a connection.
- The ultimate downstream uses for some image and voice data collected via crowd projects remain unclear to workers because of client confidentiality clauses.
Bottom Line
Human annotators remain essential to current AI systems, but their work is often precarious, underpaid, and exposed to harmful content. As models become more capable and companies reorganize training pipelines, many generalist gigs may shrink while demand for specialist reviewers grows — raising urgent questions about pay fairness, informed consent, and the ethics of outsourced labor.