Excerpt
2024, Paper: "Can generative artificial intelligence (AI) transform the role of the CEO by effectively automating CEO communication? This study investigates whether AI can mimic a human CEO and whether employees' perception of the communication's source matters. In a field experiment with a firm, we extend the idea of a Turing test (i.e., a computer mimicking a person) to the idea of generative AI mimicking a specific person, namely the CEO. We call this the "Wade test" and assess whether employees can distinguish between communication from their CEO and communication generated by an AI trained on the CEO's prior communications. We find that AI responses are correctly identified 59% of the time, somewhat better than random chance. When employees believe a response is AI-generated, regardless of its actual source, they perceive it as less helpful. To assess causal mechanisms, a second study with a general audience, using public statements from CEOs and from an AI intended to mimic those CEOs, finds that AI-labeled responses (irrespective of their actual source) are rated as less helpful. These findings highlight that, when using generative AI in CEO communication, people may inaccurately identify the source of communication and exhibit aversion toward communication they identify as AI-generated."