Brooklyn Law Review

Abstract

Corporate governance structures have proven fundamentally inadequate for managing the unprecedented challenges of artificial intelligence development, as demonstrated by OpenAI's dramatic 2023 governance crisis and the broader failure of both traditional and hybrid corporate forms to balance massive capital requirements with public safety concerns. Current approaches produce three critical failures: structural accountability gaps between boards and management, distorted power dynamics arising from concentrated capital needs, and an inability to enforce safety commitments against commercial pressure. While companies like Anthropic have attempted innovative private solutions through benefit corporation structures and specialized trusts, these voluntary mechanisms ultimately prove inadequate against the extraordinary pressures of AI development. This Note proposes the creation of an "Artificial Intelligence Corporation" (AIC) classification under Delaware law, adapting the proven German dual-board oversight system to create mandatory governance requirements specifically designed for AI development's unique combination of technical complexity, capital intensity, and catastrophic risk potential. The AIC framework establishes separate management and technical oversight boards with clearly defined authorities, mandatory development gates for AI systems, and enforceable statutory obligations that move beyond voluntary commitments to create institutional counterweights capable of balancing innovation with responsible development.
