Thinking About Artificial Intelligence

Image by Rostislav Kralik / CC0
I have been fascinated by the idea of Artificial Intelligence (AI) for most of my life. I was probably in junior high school when I heard about Alan Turing’s test for whether a computer could mimic human intelligence. It was the 1950s and I couldn’t imagine ever owning my own personal computer. Now I use AI every day, and AI is used for things like deciding which advertisements I see when browsing the Internet.
Turing’s idea was to put a computing machine in one room and a human in another. A human in a third room would ask questions and converse with both the computer and the human. If the questioner could not reliably determine which room held the computer, then the machine had demonstrated artificial intelligence. I figured artificial intelligence would never exist in my lifetime.
I was in college in the 1960s. During my freshman year, I learned that the electrical engineering students were given passwords so they could use the mainframe. Using it required punching cards on a special machine, where each card held one line of computer code or data. Then the cards were dropped off at the data center for processing on the mainframe. The next morning there would be a paper printout of what you had told the computer to do.
A new friend invited me to go with him to a meeting for electrical engineers to learn how to use the mainframe. When a clipboard was passed around, I signed up for a password. I bought an introduction to Fortran programming book, and over the next few months, I taught myself to write a few simple programs that I delivered to the computer center.
I learned that the computer was very particular. If I misspelled a word, dropped the pile of cards and didn’t get them back in exactly the correct order, or didn’t follow the Fortran rules perfectly, the computer wouldn’t run my program or produce the printed output. In fact, the computer was not only particular, but it also depended completely on my intelligence. It would only do what was on the cards I prepared.
This was comforting when, during my junior year, the movie “2001: A Space Odyssey” came out. The plot of the film includes a very intelligent computer called HAL 9000 that at first interacts with the crew of a spaceship as if it were human. But then it begins to act very strangely and attempts to take over the mission. HAL 9000 is only stopped when it is denied electricity. The film did not frighten me about the potential for AI to begin managing humans. I knew from trying to program a computer that a computer could only do exactly what humans tell it to do. HAL 9000 was fiction.
In the mid-1980s I read a book by Joshua Meyrowitz called “No Sense of Place.” He describes programming a computer to mimic the characteristics of a “Talk Therapist.” The computer puts a question on the screen and a human types an answer. Then the computer puts another question on the screen. To his amazement, even people who knew they were interacting with a computer, and that it had no intelligence, would, on the sly, use it as if it were a counselor. Meyrowitz’s interest was in how technology changes the way humans interact with one another and with machines. The subtitle of his book is “The Impact of Electronic Media on Social Behavior.”
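To make concrete what a “Talk Therapist” program of that kind involves, here is a minimal sketch in Python of the question-and-answer loop described above. The patterns and canned replies are my invention for illustration; the program Meyrowitz describes would have used a far richer script of rules.

```python
import random
import re

# Invented pattern/response pairs for illustration; a real "talk
# therapist" script would contain many more rules and better grammar.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["What makes you say you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
]
DEFAULT_REPLIES = ["Please go on.", "How does that make you feel?"]

def respond(text: str) -> str:
    """Return the first matching scripted reply, or a neutral prompt."""
    for pattern, replies in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT_REPLIES)

if __name__ == "__main__":
    print("Therapist: What brings you here today?")
    while True:
        line = input("You: ")
        if line.strip().lower() in ("bye", "quit"):
            break
        print("Therapist:", respond(line))
```

The loop has no understanding at all; it only turns the user’s own words back into questions. That is what made Meyrowitz’s observation so striking: people confided in it anyway.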
In the 1990s, with Turing and Meyrowitz in mind, I organized a discussion at United Theological Seminary, where I taught. My interest was to introduce the seminary community to the sociological studies of machines and humans. I was also interested in whether people could recognize a human if they were expecting to engage with a machine.
As I remember it, there may have been twenty students and faculty in attendance. Before the discussion, I set up two computers, one in the meeting room and another in a nearby room. The computers were connected by a text program, so a person typing in each room could see what the other typed. I recruited a student to type responses to the person in the meeting room. Early in the discussion, I offered the group an opportunity to experience how computer/human interaction works. I asked the president of the seminary to sit at the computer and ask any question. A brief exchange followed between the two.
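The two-room text link itself was nothing exotic. Here is a minimal sketch of such a link in Python, assuming a simple line-at-a-time, turn-taking exchange over a local network; the port number and the listen/connect roles are my invention for illustration, not a record of the software we actually used.

```python
# Run with "listen" on one machine and "connect <host>" on the other.
import socket
import sys

PORT = 5050  # arbitrary port chosen for this sketch

def chat(conn: socket.socket, speak_first: bool) -> None:
    # Take turns: send one typed line, then wait for one line back.
    rfile = conn.makefile("r", encoding="utf-8")
    wfile = conn.makefile("w", encoding="utf-8")
    my_turn = speak_first
    while True:
        if my_turn:
            line = input("> ")
            wfile.write(line + "\n")
            wfile.flush()
        else:
            line = rfile.readline()
            if not line:
                break  # other side closed the connection
            print("them:", line.rstrip("\n"))
        my_turn = not my_turn

if __name__ == "__main__":
    if sys.argv[1] == "listen":
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            chat(conn, speak_first=False)
    else:  # connect <host>
        with socket.create_connection((sys.argv[2], PORT)) as conn:
            chat(conn, speak_first=True)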
Then we had a discussion about how the computer had responded and what impressed the students and faculty about the intelligence of the computer. No one raised the question of whether it might be a person rather than a computer. This included the president, who was very angry with me when he learned what I had done. I am not a psychologist, but I believe it was traumatic for him to experience being unable to recognize human intelligence when he was told it was a machine. His self-image was that of an accomplished seminary president who was very smart. Yet he had not recognized the difference between a human and a computer.
Meyrowitz was right. In the 1990s, knowing the social context of our engagement with intelligence, human or computer, was essential to our self-identity. We were humans, and we were intelligent. Computers were machines. That was our understanding of reality. Computers did not intrude on our day-to-day interactions with other humans. Today, however, I find myself in a situation similar to the seminary president’s.
I am disoriented by the relationships between computers and humans. I can’t be certain when I read an essay or listen to a podcast whether it was produced by AI or entirely by a human. I have lost my sense of place in the human community, which requires that I know whom I am engaging. I know the people I exchange email with. When I read an Atlantic article online, I may not know the author, but I know the publisher, and I know there is an editorial policy determined by humans. But when someone sends me a link to an essay, I don’t know whether I am reading something written by a human or by AI. To use Meyrowitz’s language, I have lost my sense of place.
I wonder how much clearer and more interesting this essay would be if it were written by a computer. I used Copilot (AI developed by Microsoft) to check things like the date when Meyrowitz published his book, and I asked Copilot about Turing’s test to make sure I remembered the details. I used the Microsoft editor to check spelling and punctuation. Could a computer have looked up details about computer/human interaction and then organized an essay? The answer is yes.
I like to write essays like this. One reason is that I like the challenge of trying to make things clear and interesting. When I write, I think about how my words will affect other humans. I’d like to think that I am unique and have my own voice. But the AI program NotebookLM has made me anxious.
This podcast was produced by NotebookLM after the software was fed a fundraising letter written by Brenda Girton-Mitchell, Ending Racism USA’s executive director.
If you weren’t told, could you tell the podcast was researched, written, and delivered entirely by AI? Even the artificial voices sound human. I am not quite ready to admit that HAL 9000 was an accurate guess of what AI would become in the 2020s. I remember what I learned trying to program in Fortran: the computer only does what it is programmed to do. In the case of AI writing essays, the computer needs a prompt to get it started. It also needs internal rules to determine what information is truthful and useful. In other words, like humans, an AI essay or podcast program has a personality. Or a computer could be programmed to mimic my personality.

Today, I no longer believe that AI will not exist in my lifetime. Maybe it already does, or maybe I only need to live a few more years, maybe months. If a computer were in one room and I were in another room, while we both communicated with someone who knows me well, like my wife, could she tell the difference? If not today, then very soon AI will have no trouble passing the Turing test.
In the past, totalitarian and fascist governments used censorship of newspapers, books, and textbooks to attempt to control what information citizens had.
For those of us who are working to end racism, there is danger ahead.
- Will those who own AI attempt to use it to produce only information shaped by an AI with a fascist personality?
- What do we need to do as those who dream of a multicultural society?
- What do we do as individuals so that we do not completely lose our sense of place in the human family, where we celebrate the diversity of voices and personalities?