Cursor Automates 80% of Employee Support Tickets with Internal AI

At Fortune’s Brainstorm AI conference in San Francisco on Dec. 8, 2025, Cursor CEO Michael Truell said the AI coding-assistant start-up has automated roughly 80% of its internal support tickets and deployed an AI-driven company-wide query system. The remarks outlined two parallel efforts: an internal help desk that answers staff questions and a set of forward-deployed engineers building bespoke tooling for operations and sales. Truell said the company has customized those systems heavily to fit its workflows. The announcements add to Cursor’s broader narrative of rapid growth and enterprise-focused AI adoption.

Key Takeaways

  • Cursor reports it has automated about 80% of employee support tickets using internal AI systems, according to CEO Michael Truell at Fortune’s Brainstorm AI (Dec. 8, 2025).
  • The company said it has a searchable, AI-powered internal communications layer that lets employees ask questions about the business and receive machine-generated answers.
  • Cursor is valued at $29.3 billion, surpassed $1 billion in annualized revenue last month, and has grown to more than 300 employees since it was founded in 2022 by four MIT graduates.
  • Cursor launched its AI coding assistant publicly in 2023 and has seen strong uptake among developers for code generation and editing tasks.
  • Independent research is mixed: METR (July 2025) found experienced developers took 19% longer on some tasks when using AI tools, while a University of Chicago study reported teams using Cursor merged 39% more pull requests in large-company settings.
  • Cursor has a program of forward-deployed engineers embedded across teams to build custom internal tooling for operations and sales, per Truell.

Background

The push to use AI inside companies mirrors broader enterprise interest in applying generative models beyond customer-facing products. Firms hope internal AI can cut manual work, speed information flow, and reduce routine IT or HR friction. However, many organizations still confront structural barriers: data silos and years of accumulated point solutions that make integrating a single AI layer difficult.

Cursor emerged in 2022 from a four-person MIT founding team and released its coding assistant in 2023. The product gained traction among developers as an assistive tool for generating and refining code, helping Cursor scale quickly to more than 300 employees and surpass $1 billion in annualized revenue. As startups push internal AI adoption, the return on investment often depends on engineering resources to adapt models and connect them to organizational data sources.

Main Event

At Fortune’s Brainstorm AI, Michael Truell described two main internal AI initiatives. First, an automated help-desk layer that resolves a large share of routine employee tickets — he estimated roughly 80% — freeing staff and technical teams for higher-complexity tasks. Truell emphasized significant customization to align model outputs with internal policies and documentation.

Second, Cursor has implemented an AI-driven query interface for employees. Truell said staff can pose questions about company processes, benefits, or project status and receive AI-generated answers that draw on connected internal sources. He framed the system as an internal knowledge assistant rather than a replacement for subject-matter experts.
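Cursor has not published implementation details, but internal knowledge assistants of this kind typically follow a retrieval-augmented pattern: index internal documents, pull the passages most relevant to a question, and have a language model answer from that context. The sketch below is a minimal, hypothetical illustration of that general pattern only; the document store, keyword scoring, and prompt assembly are assumptions for illustration, not Cursor's system.

```python
# Minimal sketch of a retrieval-augmented internal Q&A flow (hypothetical;
# not Cursor's implementation). The documents, scoring, and prompt are
# placeholders chosen purely for illustration.
from collections import Counter

# Stand-in for connected internal sources (wikis, HR docs, runbooks).
INTERNAL_DOCS = [
    {"title": "Benefits FAQ", "text": "Health coverage enrollment opens each November."},
    {"title": "IT Runbook", "text": "Password resets are self-service via the identity portal."},
    {"title": "Onboarding Guide", "text": "New hires receive laptops on their first day."},
]

def score(question: str, doc: dict) -> int:
    """Crude keyword-overlap score; real systems use embeddings or a search index."""
    q_words = Counter(question.lower().split())
    d_words = Counter(doc["text"].lower().split())
    return sum((q_words & d_words).values())

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Return the k documents most relevant to the question."""
    return sorted(INTERNAL_DOCS, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt that would be sent to a language model."""
    context = "\n".join(f"- {d['title']}: {d['text']}" for d in retrieve(question))
    return (
        "Answer the employee's question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # In a production assistant, this prompt would be passed to a model and the
    # response returned to the employee, with escalation to a human when needed.
    print(build_prompt("How do I reset my password?"))
```

In practice, the hard part is less the prompt than the plumbing: connecting the retrieval step to fragmented internal systems and keeping the index current, which is where the customization Truell described comes in.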

Truell also highlighted a network of “forward-deployed engineers” embedded in business teams to create tailored tooling for operations and sales and to iterate on integrations. That approach aims to close the gap between generic AI models and organization-specific workflows, addressing common enterprise obstacles like fragmented data and bespoke business logic.

Analysis & Implications

Cursor’s internal deployment illustrates how startups can use their own products as testbeds for capability and credibility. Automating a high share of routine support requests reduces direct labor costs and can accelerate internal response times, but the magnitude of benefits depends on ticket complexity and the accuracy of model outputs. In practice, teams must weigh automation gains against the cost of building and maintaining integrations and of monitoring AI answers for correctness.

The conflicting academic findings underline that AI’s productivity effects are context-dependent. METR’s report (July 2025) found slowdowns for experienced developers on large, mature codebases, attributing delays to prompting, response wait times, and review overhead. By contrast, University of Chicago researchers measured a sizable increase in merged pull requests for teams using Cursor in large-company settings, suggesting AI can boost throughput when workflows and review practices adapt to the tool.

For larger enterprises, the biggest technical hurdles remain data accessibility and architectural complexity. Data silos and technical sprawl limit a model’s ability to produce contextually accurate answers without substantial engineering effort. Cursor’s strategy — embedding engineers and customizing the AI stack — is one way to bridge that gap but requires investment that not every organization can afford.

Comparison & Data

Study | Context | Key Finding
METR (July 2025) | Experienced developers on large, mature codebases | Tasks took 19% longer with AI tools despite perceived speed gains
University of Chicago (2025) | Teams at large companies using Cursor | 39% more pull requests merged by Cursor-using teams vs. non-users

The two studies capture different populations and metrics: METR focused on task completion time for experienced engineers working on complex, legacy code, while the University of Chicago analysis measured PR merge rates at team scale in enterprise settings. Discrepancies can arise from differences in sample selection, performance measures, and how teams integrated AI into existing review and planning processes.

Reactions & Quotes

Below are selected remarks from Truell's conference appearance, along with a related research finding, and the context for each.

“We’ve actually done a lot of work internally on customizing that setup.”

Michael Truell, CEO, Cursor (Fortune Brainstorm AI)

Truell used this line to stress that the internal help desk is not an out-of-the-box deployment; it required adaptation to company-specific documentation and workflows.

“We have a system where folks can ask any question about the company and get it answered by an AI.”

Michael Truell, CEO, Cursor (Fortune Brainstorm AI)

He framed the interface as a knowledge layer designed to surface internal information quickly, while still depending on engineers and reviewers when accuracy matters.

“Experienced developers took 19% longer on some tasks when using AI tools.”

METR (nonprofit research group)

METR’s finding was cited to highlight that AI can add hidden overheads — more prompting, waiting and review — especially in complex engineering contexts.

Unconfirmed

  • The precise method Cursor used to measure the “80%” automation rate (time range, ticket types included) has not been publicly detailed by the company.
  • Cursor’s internal accuracy rates, false-positive/negative rates, and remediation costs for AI-generated answers have not been disclosed.
  • Generalizability of Cursor’s internal results to large, highly regulated enterprises is unclear without more published case studies or independent audits.

Bottom Line

Cursor’s claim that internal AI handles roughly 80% of employee support tickets is a notable demonstration of applying generative models to routine operational work. The approach — combining an AI layer with embedded engineers and bespoke tooling — addresses common enterprise barriers but requires engineering investment and governance to manage accuracy and compliance risks.

Academic results on AI’s productivity impact remain mixed, reflecting differences in context, task complexity, and team practices. Organizations considering similar internal deployments should pilot on well-scoped ticket categories, instrument outcomes carefully, and plan for ongoing maintenance and human oversight.

For observers, Cursor’s internal use case is both a proof point for AI’s operational value and a reminder that scaling those gains across diverse organizations demands technical integration, clear metrics, and transparent reporting.
