Until very recently, I gave only passing attention to advances in artificial intelligence. I certainly read news stories about ChatGPT and other programs that can write papers, pass exams and carry on conversations in a seemingly human way. We have, of course, joked about this on NPR.
STEVE INSKEEP: We've been talking a lot about artificial intelligence. We heard of programs that can write and can talk, so I guess we should be clear that human beings are talking to you now.
MICHEL MARTIN: So far as you know.
The development of virtual newscasters has prompted people to ask if I think a computer could do my job. I haven’t worried about it. If I lose my job, I’ll get to sleep in! Maybe, also, I’ve had confidence that something about me, or something about people generally, is distinctively human and valuable. Beyond that, I know that past technological advances, while disruptive and important, have not fundamentally changed human life as we know it.
My awareness of the stakes changed when, for the second time in as many days, I was on a flight where the internet didn’t work. Once again I was annoyed, and once again it was good for me. I was left alone with my thoughts and a paper copy of the Economist, which had half a dozen articles on AI, mostly focused on “large language models” that scrape enormous amounts of text from the internet and use it to write original sentences and engage in conversation.
One article was devoted simply to explaining how large language models work. It’s fair to assume that most readers wouldn’t know. (Here’s my summary of their summary: computers deal in numbers, ones and zeros, rather than words. Ask ChatGPT a question, and the computer converts the words into numbers, then conducts statistical calculations to guess which words might be most appropriate in reply.) Using enormous computing power, an LLM can be trained or even train itself to give more precise answers.
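To make that summary concrete, here is a deliberately crude sketch in Python. It is nothing like a real LLM, which uses a neural network with billions of parameters; it is only a toy "bigram" counter that shows the two steps described above: words are mapped to numbers, then simple statistics pick a likely next word. The corpus and word choices are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" of text.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased the dog ."
).split()

# Step 1: convert words to numbers (build a vocabulary).
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
id_to_word = {i: w for w, i in vocab.items()}

# Step 2: count which word tends to follow which.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[vocab[current]][vocab[nxt]] += 1

def guess_next(word):
    """Guess the statistically most likely word to follow `word`."""
    counts = follow_counts[vocab[word]]
    best_id = counts.most_common(1)[0][0]
    return id_to_word[best_id]

print(guess_next("the"))  # -> "cat", the most frequent follower of "the"
```

A real model does something far more sophisticated at each step, but the shape of the idea (numbers in, statistics, likely words out) is the same.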
LLMs can not only engage in conversation or serve as a fancy version of Google; they can also pass bar exams and create stories, sometimes acting in ways that even their creators profess not to understand. An app available on Apple phones promises that GPT-4 can “generate essays,” “create social posts,” and “write love messages.” Think of the lucky recipient!
The potential for abuse is obvious—from spreading propaganda to aiding a terrorist act. China’s government, the Economist reports, is moving to regulate artificial intelligence to ensure it does not destabilize the state. But what happens when a large language model outruns its creators with unpredictable consequences? The Future of Life Institute, an NGO, gathered signatures for a letter calling for a pause in AI development while scientists, governments and the rest of us decide how best to contain the risks.
For now, the consequences are more prosaic, and still interesting. This NPR story reports on a study of a large unnamed company that employed a large language model to assist people answering phone calls. Apparently, the assistance made call centers more productive and delivered better customer satisfaction. AI learned the techniques of the best customer service representatives, and suggested these techniques in real time to less skilled or experienced reps, who suddenly had a smarter “colleague” guiding them through each call.
AI was a real-time training tool. And training, according to this article, is one of its most promising uses: it may tutor students, so long as the students don’t just ask it to do the work for them.
What happens if we reach a point where learning itself seems unnecessary, because computers will do it for us? By way of an answer, I’ll tell you that I opened ChatGPT and asked it to try roughly what I am doing here. “Can you write 200 words on the dangers of large language models?”
The answer came back quickly. “Large language models like myself, while powerful and useful in many ways, also come with significant risks.” It was a competent take. I’d like to think mine was more personal, but you might differ!
But there was one thing the LLM could not do for me. Writing this post has forced me to think more deeply than I had about something that concerns me as a citizen. It will probably shape my journalism on NPR in the near future.
That’s one of the most important things that writing can do: focus our minds. We will need that focus as technology evolves.
Thanks for reading Differ We Must, which is the title of my forthcoming book about Lincoln, and also a phrase that captures our divided society today. If you haven’t, I hope you’ll subscribe and support my exploration.