An Unlikely Defense: “I wasn’t driving – my car was!”

Kaytlynn Hobbs, 2019-2020 Blog Editor, University of Cincinnati Law Review


Those who spent their childhoods dreaming of the perks bestowed upon comic book and film heroes are now able to live out their youthful fantasies and channel their inner Batman by beckoning – with a single technological signal – autonomous, Batmobile-esque vehicles to their exact locations. Technological advances have lent themselves to the creation of such vehicular systems. Self-driving vehicles are no longer the fantastical byproduct of a cartoon artist slaving away over a drawing pad at two in the morning in a Manhattan studio apartment littered with stained coffee cups and discarded drafts; vehicles capable of autonomous driving now really exist.

However, while these cars can perform functions without human control, they cannot yet fully function without direction. Although this advance yields great power, Uncle Ben would warn that there still remains great responsibility. In a few instances, intoxicated drivers have relied on these autopilot functions rather than driving manually while drunk. Despite their seemingly good intentions, these drivers were still charged with driving under the influence.

As companies improve the technology in automobiles, this will become a more prevalent issue. This article will explain how autopilot features currently work, the general elements of drunk driving, and modern examples of drunk “drivers.” While there are sound arguments in support of exemptions, those should apply only to vehicles that are fully autonomous; because no current vehicle meets this requirement, “driving” or operating a vehicle on autopilot while intoxicated should still expose that driver to liability.


            There are several companies that sell vehicles that have autonomous-driving features; however, because of Tesla’s popularity and relevance in articles documenting drunk-driving issues, this article will focus on Tesla automobiles.[1]

            The U.S. Department of Transportation’s National Highway Traffic Safety Administration (“NHTSA”) relies on six automation levels, classifying automobiles by the degree of human control necessary to operate the vehicle.[2] Level zero, labeled “no automation,” leads the classification, with the driver exercising full control over each driving task.[3] Level one is “driver assistance” and includes vehicles that are still controlled by the driver but allow some functions to be completed by the car.[4] Level two, “partial automation,” still requires the driver to be engaged in driving while the vehicle handles certain functions like steering and acceleration.[5] Tesla’s Autopilot is classified as a level two system.[6]

            Levels three through five take even larger leaps into superhero territory. Level three, “conditional automation,” does not require the driver to pay attention at all times, but these systems still impose an obligation on the driver to take over control of the car if necessary.[7] Levels four and five are what most probably think of when they hear “fully autonomous driving.” The fourth level, “high automation,” consists of vehicles that can perform all driving functions without human interaction in certain circumstances.[8] The final level, level five, encompasses vehicles that perform all driving functions in all situations.[9]

            These levels are important because they show the state of the technology: most automobiles, including Tesla’s, are at level two.[10] Tesla’s Autopilot includes several features, ranging from those known to most drivers – e.g., cruise control – to more cutting-edge technology that allows the cars to automatically spot and park in parking spots, be summoned from the driver’s phone, and change lanes automatically.[11]

            Tesla automobiles are equipped with eight cameras to allow full visibility around the car, accompanied by ultrasonic sensors that detect close objects and computer software capable of “vision, sonar and radar processing . . . on wavelengths that go far beyond the human senses.”[12] Such technology allows the automobiles to adjust their speed according to current traffic conditions, stay in their own lanes – as well as change lanes automatically – and enter and exit highways.[13] Additionally, these cars allow a driver to exit the car while it parks itself and later call it back via phone.[14]

            The company itself explicitly recognizes on its website that the current Autopilot features do not constitute full automation; instead, it says that the vehicles have the hardware that will be needed in the future to enable fully self-driving cars.[15] Also on its site is the declaration that each driver must remain “alert and active when using Autopilot, and must be prepared to take action at any time.”[16] This statement reiterates that technology is still waiting on its first automobile that can fully drive itself without imposing a continuous obligation on the driver.


            While statutes differ slightly by jurisdiction, there are general elements that open a driver up to a charge under state laws regarding driving under the influence.[17] Elements that carry from state to state include: 1) driving or operating a vehicle 2) on a public road while 3) intoxicated by drugs or alcohol.[18] In some states, attempted operation will suffice, so long as it can be shown that the accused took a “substantial step” toward operating the automobile while having the intent to operate it.[19] States take a broad view of the word “operate.” For instance, Ohio defines “operate” as “causing movement” of an automobile.[20] With such a broad definition, the rise of autonomous vehicles raises the question: just how broad is it?


            There have been a few instances – sparse though they are – of drivers attempting to shirk responsibility by relying on autopilot features.

            In January 2018, a San Francisco man was arrested on suspicion of driving under the influence despite insisting that his Tesla was on Autopilot.[21] Because he tested at over twice the legal blood alcohol content limit, his defense was rejected.[22]

            Several months later, in December 2018, the California Highway Patrol tailed a Tesla for seven miles before it stopped.[23] Because the driver was asleep, the car stopped only when a police officer pulled in front of it and began slowing down.[24] Similar to the January incident, this driver was arrested on suspicion of a DUI after failing a sobriety test (once he woke up, of course).[25]

            While these incidents have been sparse so far, as autonomous cars become accessible to a larger public, they could pose a larger problem.


            As currently drafted, the laws regarding intoxicated driving are mostly compatible with these self-driving or autopilot features. Most statutes allow a broad interpretation of the word “operation,” and even a driver using autopilot features while retaining control over the vehicle would fall under this definition.

At first glance, it might seem sensible to reduce liability for those who are intoxicated and choose not to drive manually, relying instead on automated cars. After all, it must be safer because the decision-maker is not the drunk (or sleeping) person but a machine with sophisticated software. In some cases this is probably true, and there are likely a good number of people who have used this as a safer option. In fact, Australia’s National Transport Commission has advocated for an exemption from drunk-driving laws when the driver is in an autonomous vehicle.

However, the crucial factor that blocks this lies in the NHTSA levels of automation: we are still at level two (or three, if you own the Audi A8).[26] Level two vehicles require drivers to remain engaged in driving; even though the vehicle may have high-tech features that reduce the need for drivers to intervene constantly (as with a level zero or level one vehicle), the driver must still be in control. The Tesla website itself explicitly mentions this obligation. If something happens that the car is unable to react to, the driver must be in good enough condition to react and act instead. Because of this required control, the risks that drunk driving poses are too great to provide exemptions for level two, or even level three, cars. I would argue that only limited exceptions should apply to level four vehicles, because their features extend only to certain situations. Until technology reaches the point of creating consistently safe, fully autonomous level five vehicles, legislation should not provide exemptions.

[1] John M. Vincent, Cars That Are Almost Self-Driving, U.S. News (October 23, 2018)

[2] National Highway Traffic Safety Administration, Automated Vehicles for Safety,

[3] Id.

[4] Id.

[5] Id.

[6] Nick Heath, Tesla’s Autopilot: Cheat Sheet, (August 1, 2018)

[7] Supra note 2.

[8] Id. See also Hope Reese, Updated: Autonomous Driving Levels 0 to 5: Understanding the differences, (January 20, 2016). The second article explains that the circumstances are limited to the operational design domain (“ODD”) of the automobile, and does not extend to every single situation.

[9] Id.

[10] Tim Pollard, What are autonomous car levels? Levels 1 to 5 of driverless vehicle tech explained, (March 23, 2018). Audi has stated that its new A8 vehicle is level three.

[11] Supra note 6.

[12] Tesla, Autopilot, (last visited December 28, 2018)

[13] Id.

[14] Id.

[15] Id.

[16] Id.

[17] 7A Am Jur 2d Automobiles and Highway Traffic § 340

[18] Id.

[19] Id.

[20] Ohio Rev. Code Ann. § 4511.01

[21] CBS SF Bay Area, Tesla Driver Found Asleep on Bay Bridge Tells CHP Car Was on Autopilot (January 19, 2018).

[22] Id.

[23] TechSpot, Tesla drove 7 miles using Autopilot as driver allegedly slept drunk behind the wheel, (December 3, 2018).

[24] Id.

[25] Id.

[26] Supra note 10.
