The collision that totaled Raffi Krikorian's Tesla Model X wasn't just another data point in the ongoing debate over driver-assistance technology. It was a profound indictment delivered by one of the few people on the planet uniquely qualified to give it. Krikorian, the current CTO of the Emerson Collective and the former head of Uber's self-driving car division, was using Tesla's Full Self-Driving (FSD) system on a residential street with his children in the back seat when the vehicle, as he describes it, "drove straight into the side of an oncoming truck." His subsequent essay in The Atlantic transcends a simple accident report, offering a chilling, expert analysis of what he calls the "supervised autonomy" trap, a flaw he argues is inherent in Tesla's current design philosophy.
The Illusion of "Supervised" Autonomy
Krikorian's core argument dismantles the premise that a human can effectively supervise a highly capable AI driver. He draws a critical distinction between a "safety driver" in a traditional autonomous-vehicle program, whose sole job is to monitor the system, and a Tesla owner asked to perform the same task while also being a passenger. The former is a focused, trained professional; the latter is inevitably a distracted commuter, parent, or traveler. Tesla's FSD, he contends, creates an impossible cognitive burden: it lulls the user into complacency during long stretches of competent driving, then demands instantaneous, life-saving intervention during its rare but catastrophic failures. This handoff problem, he asserts from professional experience, is one the industry has long known to be fraught with human limitations; the rough arithmetic sketched below shows why.
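To see why the margins are so thin, consider a minimal back-of-the-envelope timing model in Python. Every number here is an assumption chosen purely for illustration; none comes from Krikorian's essay, from Tesla, or from any human-factors study.

```python
# Illustrative model of the takeover-timing problem. All parameter values
# are assumptions made for the sake of the sketch, not measured figures.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact at the current gap and closing speed."""
    return gap_m / closing_speed_mps

def can_intervene(ttc_s: float, perceive_s: float, decide_s: float,
                  actuate_s: float) -> bool:
    """True if a human can notice the failure, decide, and act before impact."""
    return perceive_s + decide_s + actuate_s < ttc_s

# Assume a 30 m gap closing at 15 m/s (two vehicles at urban speeds): ~2 s.
ttc = time_to_collision(gap_m=30.0, closing_speed_mps=15.0)

# A trained safety driver: eyes on the road, hands hovering over the wheel.
print(can_intervene(ttc, perceive_s=0.5, decide_s=0.7, actuate_s=0.3))  # True

# A lulled supervisor: attention must first be recaptured from elsewhere.
print(can_intervene(ttc, perceive_s=1.5, decide_s=1.0, actuate_s=0.5))  # False
```

Under these assumed numbers, the same two-second window is survivable for the focused professional and already gone for the complacent passenger, which is exactly the asymmetry Krikorian describes.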
A System Primed for Driver Disengagement
The accident itself underscores this mismatch. Krikorian details how the system failed to recognize an oncoming truck making a wide turn, interpreting it as a stationary object it could pass. There was no alert or "take over immediately" warning; the Model X simply proceeded into the collision. His expertise allowed him to diagnose the likely sensor-fusion or perception error (sketched in simplified form below), but it did not grant him superhuman reaction times. The system's general competence, he suggests, is precisely what makes it dangerous: it fosters a degree of trust that the underlying technology, which can fail in unpredictable and "spectacular" ways, has not yet earned. The human supervisor is thereby set up to fail, a reality far removed from the marketing narrative of a steadily improving chauffeur.
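The following Python sketch illustrates the general class of error Krikorian suspects. It is emphatically not Tesla's code; the object model, threshold, and function names are all hypothetical, invented here to show how a tracker keyed on closing speed alone could mislabel a truck mid-turn.

```python
# Hypothetical illustration of a perception misclassification. This is not
# Tesla's perception stack; every name and value here is invented.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    closing_speed_mps: float  # motion along the ego vehicle's travel axis
    lateral_speed_mps: float  # motion across the ego vehicle's path

STATIONARY_THRESHOLD_MPS = 0.5  # assumed cutoff, purely illustrative

def naive_is_stationary(obj: TrackedObject) -> bool:
    # The bug pattern: only motion along the travel axis is considered.
    return abs(obj.closing_speed_mps) < STATIONARY_THRESHOLD_MPS

def robust_is_stationary(obj: TrackedObject) -> bool:
    # Accounting for lateral motion reveals the object sweeping into the lane.
    speed = (obj.closing_speed_mps ** 2 + obj.lateral_speed_mps ** 2) ** 0.5
    return speed < STATIONARY_THRESHOLD_MPS

# A truck mid-turn: barely closing head-on, but cutting across the lane.
truck = TrackedObject(closing_speed_mps=0.3, lateral_speed_mps=3.0)
print(naive_is_stationary(truck))   # True  -> treated as an object to pass
print(robust_is_stationary(truck))  # False -> a moving hazard to yield to
```

Whether the real failure lay in classification, tracking, or sensor fusion is unknowable from outside; the point of the sketch is only that a single narrow assumption can turn a moving hazard into a "passable" obstacle.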
For Tesla investors and owners, Krikorian's account is a sobering counter-narrative to the company's bullish autonomy updates. It challenges the foundational assumption that incremental software improvements will lead, step by step, to safety and true autonomy, highlighting instead a fundamental human-factors roadblock. The incident raises acute questions about liability, the ethics of public beta testing on non-professional drivers, and whether the "supervision" paradigm is a viable bridge to higher levels of automation or a dangerous detour.
The implications are immediate. For owners using FSD or Autopilot, this is a stark reminder that the system's capabilities, however impressive, exist within a framework requiring a level of sustained, alert supervision that contradicts human nature. For investors, it signals that regulatory scrutiny may intensify around the "supervised" classification and the driver-monitoring solutions—or lack thereof—that support it. Tesla's technological lead in this space is undeniable, but as Krikorian's wrecked Model X illustrates, leading the charge does not always mean having the safest path mapped out.