January 14, 2024, Paper: "Observers and practitioners of artificial intelligence (AI) have conjectured the possibility of catastrophic risks associated with its emergence and development, risks that have led some to propose an FDA-style licensing regime for AI. In this essay I explore the applicability of approval regulation – that is, a model of product introduction that combines experimental minima with government licensure conditioned partially or fully upon that experimentation – to the regulation of frontier AI. There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks. Domains of weak fit include the difficulty of defining the regulated "product," the presence of Knightian uncertainty or deep ambiguity about harms from AI, the potentially transmissible nature of risks, and the potential for massively distributed production of foundation models with minimal observability of production. I consider four themes for future theoretical and empirical research: (1) the proper mix of approval regulation and other models such as liability or intellectual property regimes; (2) the possibility that deep ambiguity or Knightian uncertainty may require a kind of speculative pathology in which conjecturing scenarios is at least as important as placing probabilities upon them, in part because of the Lucretius problem; (3) the likely structure of industry and foundation-model generation, as the feasibility of approval regulation is higher with fewer producers, and much of the future of AI regulation may consist in labs and models monitoring one another; and (4) the possibility of community option value in the incremental development of AI regulation (including approval regulation), as regulatory policies may be more reversible in AI than in other settings, experimentation generates important public goods, and regulatory learning by doing is likely to be a property of any portfolio of policies in this arena."