Torts, Negligence, AI: Oh My!

I found Ripstein’s writing to be an interesting examination of how tort law fairly distributes responsibility for harm. His emphasis on the fault system as a way to ensure fairness, one that importantly holds both the injurer and the injured party to equal standards, makes sense in a world where harm arises from human actions. As I read, however, I couldn’t help but wonder how his framework holds up in our current world of AI and automation. When visiting the exotic ‘Bay Area’ last year, I sought safety on the outer edge of the sidewalk when Waymos drove by, having internally weighed the risks of possible algorithmic failure. Did the Waymo see me as a risk? In this blog post, I tackle how Ripstein’s fault system might struggle to define causation and assign responsibility in a society with artificial intelligence. How, then, should tort law approach AI?

Ripstein explains that “causation turns my failure to exercise appropriate care into a wrongful crossing of your boundary” (46). This principle works well in human-driven interactions, where people make clear choices that lead to harm. In a world with autonomous systems, the concept of a boundary crossing becomes less clear. Who is responsible when an AI-driven car, which makes decisions based on algorithms, wrongfully harms a person? Is it the programmer who wrote the algorithm, the company that deployed the vehicle, or even the AI itself? Ripstein argues that “risks are the product of interactions, not of actions as such” (52), but how do we apply this to machines, whose algorithmic behavior differs greatly from human fault-based reasoning? Take the example of AVs, which I have been using: their algorithms are programmed to minimize risk, so they would not struggle with a moral dilemma the way a human driver might in a sudden accident.

Ripstein also writes that “the appropriate distribution of risks may sometimes depend on the protected interests of other parties” (52), so some balance must be struck among actors’ duties. I found this interesting because it led me to consider whether the developer of an algorithm could strictly define the duties of a robot or autonomous vehicle to protect the security interests of certain groups of people. In that case, would Ripstein’s framework prevent tort law from holding that developer liable for any negligent harm?

These questions and more jumped out at me while reading. After writing this post, I found the report below, which shares my curiosity (the researchers and analysts who wrote it were equally speculative). If you are also interested in AI and tort law, give it a read. Skip to the ‘Negligence Cases Against AI Developers and Users’ section for a discussion similar to this post:

https://www.rand.org/pubs/research_reports/RRA3243-4.html

*To clarify, I understand that Ripstein wrote this in 2012 with a clear focus on torts arising from human interactions. Regardless, I found it very interesting to apply his persuasive framework, which assigns liability to protect people’s liberty and security interests, to a realistic near-future society where human interaction with autonomous machines is more commonplace.

- Eliot

Comments

  1. a VERY interesting question. Some remarks that he makes later about the role of strict liability with a cause/fault system might be relevant here...

  2. I really enjoyed reading your blog post about applying Ripstein's fault-based framework to AI and autonomous systems, and I also enjoyed the report you cited. Both raise important questions about how tort law might evolve in response to increasingly autonomous decision-making. In the example of the Waymo car, a software engineer encoded their own ideas about risk and decision-making, yet the car operates in contexts more specific than anyone could anticipate in code. Algorithms are also programmed to minimize risk rather than to navigate moral dilemmas the way a human might. This highlights a fundamental disconnect: Ripstein's framework assumes decisions made by rational actors weighing contextual details, while algorithms process data patterns without the same contextual understanding. So does decision-making shift to the car when something is not fully coded, or does it always rest with the developer, or always with the car?
    The situation becomes even more complex when considering large language models (LLMs). While the physical harm caused by an autonomous car crossing a boundary is more straightforward to identify, LLM-related harm is less tangible. As the report notes, "The requirement that plaintiffs show they have suffered a concrete and particularized injury has been a major barrier to software-related suits because common alleged injuries, such as invasion of privacy, often have not been concrete enough, or been able to point to sufficiently clear damages to plaintiffs, to satisfy the requirement for standing" (RAND).
    I think this gets even more interesting when we consider speech-related liability for LLMs. RAND highlights how interconnected tort and First Amendment law are: “Courts often balance the concerns of the First Amendment and tort law, choosing to make success in a tort case more difficult if the conduct in question was or resembled speech.[40] The primary implication of these precedents is that if an AI's output resembles speech, it is possible that a court might make it more difficult to recover in a tort case related to that output.[41]” This raises intriguing questions about how legal frameworks might adapt as AI-generated content continues to blur the line between human and machine communication.
    While many view questions about whether AI is conscious or sentient as fruitless, this legal framework shows that such questions matter greatly for weighing our understanding of tort law against First Amendment rights.

