Abstract
Artificial intelligence (AI) is part of everyday life. From our phones to our social media accounts to online shopping, AI is present and enhances our daily experiences. One area where AI has a heavy, and increasing, presence is the medical industry. Just as humans make mistakes, so does AI. But when a human doctor makes a mistake, the doctor can be sued for malpractice; when AI makes a mistake, who is to be held responsible? Because tort law was designed with humans in mind, it is difficult to apply to medical AI, whose “black box” algorithms make its decisionmaking hard to decipher. This note examines current American tort law and argues that the existing tort regime is inadequate when applied to medical AI. It proposes adopting a framework similar to the one the European Parliament put forward in 2017, under which medical AI could be granted quasi-personhood and insured directly, alleviating the burden of determining liability and allowing victims to seek redress right away.
Recommended Citation
Benedict See, Paging Doctor Robot: Medical Artificial Intelligence, Tort Liability, and Why Personhood May Be the Answer, 87 Brook. L. Rev. 417 (2021).
Available at:
https://brooklynworks.brooklaw.edu/blr/vol87/iss1/10