HKS Faculty Research Working Paper Series
HKS Working Paper No. RWP23-030
October 2023
Abstract
In this article, we focus on recent advancements in Generative AI, and especially in Large Language Models (LLMs). We first present a framework for understanding the core characteristics of human-algorithm centaurs: hybrid models that combine human and algorithmic capabilities. We argue that symbiotic learning and the incorporation of human intuition are two main characteristics of centaurs that distinguish them from other models in Machine Learning (ML) and AI. Using these core characteristics, we also present a few specific methods of creating centaurs. We then argue that the growth and success of LLMs are to a great extent due to the fact that they have moved from pure ML algorithms to human-algorithm centaurs. We present various pieces of evidence to demonstrate this, particularly by focusing on the advantages of the so-called “fine-tuning” approaches such as the Reinforcement Learning from Human Feedback (RLHF) method used in various LLMs (e.g., OpenAI’s GPT-4, Anthropic’s Claude, Google’s Bard, and Meta’s LLaMA 2-Chat). We also discuss evidence showing that these fine-tuning approaches can turn Generative AI tools into cognitive models capable of representing human behavior. In addition, we elaborate on three main advantages of centaurs: removing barriers related to algorithm aversion, human aversion, and causal aversion. We then briefly conclude by discussing two main points: (1) recent advancements in creating centaurs have moved us closer to reaching the goals that the founding fathers of AI (John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon) stated in 1955 as part of their proposed 2-month, 10-man study of AI to be held at Dartmouth; and (2) the future of AI development and use in many domains will most likely need to focus on centaurs as opposed to other traditional approaches in ML and AI.
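As a brief, illustrative sketch of the RLHF fine-tuning step mentioned above (the notation below is the standard formulation from the RLHF literature, not taken from this paper): human raters compare pairs of model outputs, a reward model $r_\phi$ is fit to those preference comparisons, and the language model policy $\pi_\theta$ is then optimized to score highly under $r_\phi$ while remaining close to the pre-trained reference policy $\pi_{\text{ref}}$:

$$
\max_{\theta}\;\; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}\!\left[ r_\phi(x, y) \right] \;-\; \beta\, \mathbb{E}_{x \sim \mathcal{D}}\!\left[ \mathrm{KL}\!\left( \pi_\theta(\cdot \mid x) \,\middle\|\, \pi_{\text{ref}}(\cdot \mid x) \right) \right]
$$

The KL term, weighted by $\beta$, anchors the fine-tuned model to its pre-trained behavior while human feedback reshapes its outputs.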
Citation
Saghafian, Soroush. "Effective Generative AI: The Human-Algorithm Centaur." HKS Faculty Research Working Paper Series RWP23-030, October 2023.