Lead: On 8 February 2026, two AI researchers told podcaster Lex Fridman that a ‘996’ style schedule — roughly 9 a.m. to 9 p.m., six days a week — is becoming more common among AI teams in San Francisco. Nathan Lambert of the Allen Institute for AI and Sebastian Raschka, an AI lab founder, described how intense competition to advance models is fueling longer hours and rising burnout. They said the trend shows up at prominent firms associated with cutting-edge model development, and that the human costs include strained family life and health problems. Companies named in the discussion did not provide public comment to the reporting outlet.
Key Takeaways
- ‘996’ means about 72 hours per week (12 hours a day, six days a week) and is the pattern researchers say is appearing in some AI teams in San Francisco.
- Speakers on the 8 February 2026 podcast included Nathan Lambert (Allen Institute for AI) and Sebastian Raschka, who reported firsthand exposure to sustained long hours.
- Lambert identified OpenAI and Anthropic as examples where a high-pressure, deliver-constantly mindset is present; both organizations were asked for comment and did not respond to the reporting outlet.
- Reported individual effects include burnout, reduced time with family, narrower worldviews and physical symptoms such as neck and back pain from skipping breaks.
- Researchers attributed the shift to intense competition in the model race, where startups and established labs push to iterate rapidly to remain competitive.
- The trend raises concerns about workforce sustainability, equity of access to AI careers and potential regulatory scrutiny of workplace practices.
Background
The term ‘996’ originated in parts of China’s tech sector to describe a schedule of 9 a.m. to 9 p.m., six days a week. It became shorthand for a highly driven, often informal culture that prizes output and rapid scaling over work-life balance. Silicon Valley has long had a culture of long hours and all-hands commitment, but observers say the current phase of model-driven competition has hardened those expectations into a more regimented rhythm in some places.
AI development today often resembles an arms race: teams release successive model versions quickly to demonstrate scale, capability or cost advantages. That pressure is most acute where venture capital timelines, product launches and media attention intersect, creating incentives for firms and individuals to prioritize speed. Academic labs, startups and large companies each face slightly different constraints, but the common denominator is a premium on continuous delivery and iteration.
Main Event
On 8 February 2026, the Lex Fridman podcast hosted Nathan Lambert and Sebastian Raschka to discuss workforce dynamics in AI. Raschka said his experience in academia and industry exposed him to long, self-driven work stretches; he emphasized that passion often motivates people to push themselves rather than formal mandates. Lambert said he sees a similar mindset at recognizable AI firms, where engineers, in particular, accept intense schedules to work on projects they find meaningful.
Both guests linked the trend to the high-stakes competition to develop next-generation models. Raschka described a cycle of constant delivery pressures, while Lambert warned the pace is unsustainable for many people. The guests noted concrete personal impacts: missed family time and physical issues such as neck and back pain when breaks are skipped. They framed these not as isolated anecdotes but as signals that the broader workforce could shoulder growing human costs.
The conversation named OpenAI and Anthropic as illustrative of demanding environments, with Lambert stating that employees at those firms often commit to the grind because they want to contribute to ambitious projects. The two firms did not reply to requests for comment made to the reporting outlet. That absence of a public response left open questions about whether such schedules are formal policies or emergent cultural norms within teams.
Analysis & Implications
First, the normalization of 996-style rhythms in US tech would mark a cross-cultural shift from informal intensity to a routinized, long-hours expectation. In China the term drew public debate and regulatory attention; if Silicon Valley follows, it could prompt similar labor discussions in the United States about overtime, equitable conditions and enforcement. For AI specifically, a rapid cadence can boost short-term progress but may erode longer-term human capital if experienced engineers burn out or leave the field.
Second, the trend has implications for diversity and inclusion. A workplace that rewards extreme hours may skew toward people without caregiving responsibilities and those willing to accept physical and mental strain, narrowing the pool of perspectives that shape AI systems. That narrowing could affect both who builds foundational models and the range of problems those models prioritize.
Third, firms face a trade-off between speed and sustainability. Investors and customers often prize rapid advances, but employee turnover, reputational risk and potential regulatory scrutiny impose countervailing costs. Companies that rely on voluntary overwork risk losing institutional knowledge and incurring healthcare and recruitment expenses that undermine the very productivity gains sought by 996-style intensity.
Comparison & Data
| Schedule | Typical Hours/Week |
|---|---|
| Standard US full-time | ~40 hours |
| ‘996’ pattern | ~72 hours |
This simple comparison highlights a near-doubling of weekly hours under 996 relative to a standard full-time schedule. Occupational health research associates sustained work at 72 hours per week with higher risks of stress, sleep disruption and musculoskeletal complaints. Productivity per hour also tends to decline as fatigue accumulates, so overtime gains do not scale linearly over weeks or months.
Reactions & Quotes
“It is really hard because you have to deliver constantly.”
— Sebastian Raschka, AI researcher (podcast interview)
Raschka used this phrase to underscore the relentless expectations present in competitive labs. He framed the workload as a mixture of passion and external pressure rather than a top-down edict in every instance.
“You can only do this for so long, and people are definitely burning out.”
— Nathan Lambert, senior research scientist, Allen Institute for AI (podcast interview)
Lambert warned that the human toll shows up in reduced family time and health impacts. He named prominent AI organizations as places where this mindset is visible, though he did not claim that every team enforces identical schedules.
For those seeking impact in AI, geographic concentration matters; San Francisco remains a focal point for talent and opportunity.
— Nathan Lambert, contextual comment (paraphrased)
Lambert noted that physical presence in hubs like San Francisco still offers network and collaboration benefits, but he emphasized the trade-offs individuals make when choosing that path.
Unconfirmed
- No public documentation was provided showing that OpenAI or Anthropic have formal, company-wide ‘996’ policies; researchers reported workplace culture impressions rather than citing official schedules.
- The overall prevalence of 996-style schedules across all San Francisco AI firms has not been quantified in public data; available accounts are anecdotal and localized.
Bottom Line
The podcast exchange on 8 February 2026 spotlights a growing concern: an acceleration fueled by model competition that can translate into regularized long hours with measurable human costs. While rapid iteration has been central to AI progress, the balance between speed and sustainable labor practices is now a strategic question for researchers, firms and regulators alike.
For workers and leaders, the immediate choices are consequential. Companies that prioritize short-term output through informal overwork risk burnout, turnover and narrower participation in the field; policymakers and industry groups may face pressure to clarify expectations around hours, rest and workplace protections. Observers should watch whether the anecdotal trend hardens into institutional norms or triggers corrective responses from stakeholders.
Sources
- Business Insider — News reporting on the 8 February 2026 podcast and researcher comments
- Allen Institute for AI — Official (employer profile for Nathan Lambert)
- Lex Fridman Podcast — Podcast/interview platform (episode with Nathan Lambert and Sebastian Raschka)
- OpenAI — Official organization website
- Anthropic — Official organization website