Restraining ChatGPT
Roee Sarel 1
1: Institute of Law and Economics, University of Hamburg

ChatGPT is a prominent example of how Artificial Intelligence (AI) has stormed into our lives. Within a matter of weeks, this new AI—which produces coherent and human-like textual answers to questions—has managed to become an object of both admiration and anxiety. Can we trust generative AI systems, such as ChatGPT, without regulatory oversight?

Designing an effective legal framework for AI requires answering three main questions: (i) is there a market failure that requires legal intervention? (ii) should AI be governed through public regulation, tort liability, or a mixture of both? and (iii) should liability be strict or fault-based, as under a negligence regime? The law and economics literature offers clear considerations for these choices, focusing on the incentives of injurers and victims to take precautions, engage in efficient activity levels, and acquire information.

This Article is the first to comprehensively apply these considerations to ChatGPT as a leading test case. As the United States is lagging behind in its response to the AI revolution, I focus on the recent proposals in the European Union to restrain AI systems, which apply a risk-based approach and combine regulation and liability. The analysis reveals that this approach does not map neatly onto the relevant distinctions in law and economics, such as market failures, unilateral versus bilateral care, and known versus unknown risks. Hence, the existing proposals may lead to various incentive distortions and inefficiencies. The Article, therefore, calls upon regulators to place a stronger emphasis on law and economics concepts in their design of AI policy.


