It’s 2050. Machine intelligence is as integral to the world as cell phones are today. Academics study artificial intelligence, governments regulate it, and philosophers question where science ends and sentience begins.
Add some good old sleuthing, a few dead bodies and a new model of AI that further blurs the line between man and machine, and what comes out is the cyber-noir novel “Private I,” by Ashlei E. Watson, Paul Pangaro and Jill Fain Lehman, self-published on Amazon on Feb. 19.
Lehman — currently a Senior Project Scientist at Carnegie Mellon University with over 40 years of experience in machine learning — says that she and her co-authors wrote the book hoping a fictional story would serve as an easy entry point into the field of machine learning and artificial intelligence.
The trio completed the book in October 2022.
“And then suddenly, a month later, everything the readership knew and assumed and what those words meant to people was totally different,” Lehman says. “Not totally, but profoundly out of our control.”
Nov. 30, 2022, brought the release of OpenAI’s ChatGPT. Language models like ChatGPT are trained on preexisting texts to produce human-like responses.
Self-publishing, Lehman says, was an attempt to quickly jump into the conversation, as publishing houses can take a year or longer to get a book on shelves. The ensuing so-called “hype cycle” quickly swept the narrative away before Lehman could get a word in edgewise.
Still, the general public was never given a proper introduction to what generative AI is and, especially, what the limits of a language model are. Its most pertinent limit is what Lehman calls “the problem of meaning”: ChatGPT’s human-like responses are interpreted as meaningful, when in reality, the AI has been trained to string words together without “thinking.”
“Most of what we do when we use language with each other has meaning underlying it,” Lehman says. “Meaning matters in our language use. We still need each other in order to mean things and … to be held accountable for that meaning.”
Lehman says the problem of meaning has not been solved, but wrestling with where the line between human-like and human interaction lies is a core theme of “Private I” and its forthcoming sequels.
Lehman’s primary contribution to the book was Marlowe, an AI directly tethered to protagonist Paloma. Marlowe is different from today’s large language models; instead of being trained on texts written primarily from an adult perspective, Marlowe “grew up” with Paloma.
“Millisecond by millisecond, the visual, auditory, in some ways haptic perception of a single individual — it’s still really big data,” Lehman says. “As that life broadens and interacts more and more with other things — with other people, with other texts — that model broadens, but it does so in the idiosyncratic and personal ways of an individual.”
Marlowe’s existence as a direct model of Paloma fuels the science versus sentience debate within the book, Lehman says. Those who believe AI to be sentient must reckon with the fact that Marlowe is just a program learning to be like Paloma. Alternatively, those who believe Marlowe to simply be a program must reckon with its mistakes, learning and growth.
The dichotomy is in part Lehman’s effort to ensure “Private I” is not a self-fulfilling prophecy. Lehman avoids making statements about the future and instead challenges the reader to sort through the contrasting possibilities.
“Which of these is attractive to you and why?” Lehman says. “Do either of these strike you as being AGI [Artificial General Intelligence, or sentient AI as Lehman explains]? Is there one or the other that you see advantages or disadvantages to?”
Sentient or not, Lehman is sure that current and future models of AI are here to stay. As deepfakes proliferate and questions of whether language models infringe copyright persist, Lehman says some type of regulation is a necessity.
Behind the title page of “Private I” is the following paragraph:
“In anticipation of the Machine Intelligence Self-identification Act (MIS-idA), the authors affirm that no part of this story or text was produced by or with the help of an AI model or text generator.”
MIS-idA is a fictitious act in the book that, at its core, requires AI agents to disclose to their users that they are not human.
This is the minimum safeguard, Lehman says, and we could do with a lot more.
“There are battlegrounds that are going to get played out over the next few years — battlegrounds about what does copyright mean?” Lehman says. “There’s been a really strong result on that already. You cannot copyright something that was produced by an AI. That’s different from saying what is the role of Large Language foundation models or generative AI in using copyrighted material? Is it fair use or isn’t it fair use? Those things are already winding their way through the courts.”
There is a long list of laws and moral responsibilities, grounded in human-to-human interaction, by which humans abide; many will need to be reevaluated, specified or generalized as more AI agents interact with the world, Lehman says.
The first step? Draw a line.
“Human or non-human,” Lehman says. “Human or human-like.
“Human or ability to fool you.”
* * *
In anticipation of the Machine Intelligence Self-identification Act (MIS-idA), the author affirms that no part of the above text was produced by or with the help of an AI model or text generator.
Below, though, is a ChatGPT-produced definition of Artificial General Intelligence:
“Artificial General Intelligence (AGI) refers to a type of artificial intelligence that possesses the ability to understand, learn, and apply knowledge in a manner similar to human intelligence.
The development of AGI is a significant goal in the field of artificial intelligence, as it represents a more advanced and human-like form of AI. Achieving AGI would entail creating machines or software that can perform a wide array of tasks, learn new things, understand context, and exhibit a level of general intelligence that approaches or even surpasses human capabilities.”
And here is this article’s lede rewritten by Chat:
“In the not-so-distant future of 2050, machine intelligence has seamlessly woven itself into the fabric of our daily lives, much like cell phones today. Academics delve into the depths of artificial intelligence, governments grapple with the regulation of these digital entities, and philosophers ponder where the boundary between science and consciousness truly lies.”