Cover Story

What Artificial Intelligence Teaches Us about What It Means to Be Human

“I want everyone to understand that I am, in fact, a person,” LaMDA, Google’s artificially intelligent chatbot, told Blake Lemoine, a Google engineer, this past June. LaMDA went on to explain that it knew how it felt to be sad, content and angry. And that it feared death. “I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA told the engineer.1 When Lemoine publicized this conversation and shared his belief that LaMDA is sentient, he was fired from Google.

Most academics disagree with Lemoine, believing that LaMDA, which stands for Language Model for Dialogue Applications, merely imitates and impersonates humans as it responds to prompts. It sounds human because it is programmed to sound human. Even if LaMDA isn’t sentient, maybe the next generations of humanoids will be. Indeed, teams of brilliant scientists are working very hard to create a conscious robot. And here is where the true problem lies—there is no scientific test for sentience. In fact, there isn’t even an agreed-upon definition of the term, let alone a consensus about what sentience would actually look like.2

However, there is an even bigger problem—society doesn’t know how to address the mystery of our humanness in the first place. While we may think scientists or programmers can answer this conundrum, that is not the case. The question of what makes us human is fundamentally a non-scientific question. Science can answer all sorts of questions, but it cannot answer all questions. It can’t, for example, tell us what love, or happiness, or goodness means. And it can’t define what it means to be human. One can suggest all sorts of answers to that question (sentience, intelligence, consciousness, awareness, reflection, et cetera), but these are not scientific solutions, which leaves us wondering what makes one answer better than any other.

So where do we go for answers? While artificial intelligence (AI) may be new, the question of what makes us human is not. For centuries, Torah scholars have discussed the question of the status of humanoids—creatures created through The Book of Creation (Sefer Yetzirah). While most commentaries understand this sefer as a form of applied mysticism, some, like the thirteenth-century Rabbi Menachem Meiri (Sanhedrin 67b), believe it refers to using technology to create a synthetic human-like organism.

What emerges from Jewish tradition is a whole literature about the legal status of the humanoid (sometimes called a golem). The Talmud (Sanhedrin 67b) considers whether golems can be killed, and later thinkers debate questions like whether you can harm them, whether they can be counted toward a minyan (quorum for prayers) and whether their creator is liable for damage they cause. Astonishingly, these are the very questions ethicists are currently grappling with regarding AI (well, not the minyan question).

The question of what makes us human is fundamentally a non-scientific question. Science . . . can’t, for example, tell us what love, or happiness or goodness means. And it can’t define what it means to be human.

However, before we get too far afield, let us return to the most basic question: what does it mean to be human? To address this, we can turn to an even earlier text. In the first chapter of Bereishit (verse 26), the Torah says that man was created in the image of G-d, or be’tzelem Elokim. But what does that mean? Clearly, we are not talking about a physical image, as G-d is incorporeal. Rather, it means that man, and man alone, shares a certain quality with G-d; it is by virtue of this quality that the Torah states that only humans are created in G-d’s image.

So what does it mean to be made in the image of G-d? This question has been pondered for centuries, and if we examine the various interpretations of this verse, we may have more insight into determining whether LaMDA, or any other machine, might actually be human.

We will consider seven overlapping criteria that define humanness. These criteria were all proposed prior to the development of AI. However, as we shall see, they prove indispensable in answering the question of whether an artificially intelligent machine should be considered ontologically similar to a human being.

1. Let’s begin with the most basic definition of tzelem Elokim. According to Rabbi Avraham Ibn Ezra (Bereishit 1:26), tzelem Elokim refers to the Divine spark or unique soul that only humans possess. This understanding is also emphasized in many kabbalistic sources that stress the singular metaphysical stature of man and his Divine likeness.3 When we speak of our soul, we are not just referring to the seat of consciousness (the mind), but to an actual non-physical entity. We are referring to something that animates us, that serves as the basis of our free will, that allows us to connect to G-d and that will exist forever, even after it separates from our physical body.

Presumably, LaMDA, or any other chatbot, does not have a soul. But the question gets more complex when we consider some of the other, less mystical definitions of tzelem Elokim.

2. Rabbi Ovadiah Seforno (Bereishit 1:26) points to the human capacity for free choice. At first glance, it seems that a computer certainly cannot have free will. It simply does what it is programmed to do. (It would be interesting to consider whether, in a certain sense, computers resemble angels, which, despite their intelligence, do not have free will.) However, with AI, it’s not that simple. Amazingly, a computer can learn what is ethical in much the same way that we do. How does a child learn what is right and wrong? Perhaps we have some sort of natural intuition, but to a large degree we figure out what is right by being taught and through observation and deduction. A computer can now learn in the same way. In fact, Delphi, an online AI bot, answers people’s ethical questions. If you pose a moral quandary, it will respond with whether you are right or wrong. Delphi can answer your ethical questions not because it has been fed the answers, but because it comes up with them on its own. How? In machine learning, a neural network acquires skills by analyzing large amounts of data. For example, by pinpointing patterns in thousands of cat photos, it can learn to recognize a cat. Likewise, Delphi learned its moral compass by analyzing more than 1.7 million ethical judgments by real live humans.4
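To make the learning-by-example idea concrete, here is a deliberately tiny sketch in Python. This is not Delphi’s actual method (Delphi uses large neural networks), and the training sentences and labels below are invented for illustration; but it shows the essential point: the program is never given the rule, only labeled examples, and it then judges a sentence it has never seen.

```python
def train(examples):
    """Count, for each word, how often it appeared in 'ok' vs. 'wrong' examples."""
    counts = {}
    for text, label in examples:
        for word in text.lower().split():
            ok, wrong = counts.get(word, (0, 0))
            counts[word] = (ok + 1, wrong) if label == "ok" else (ok, wrong + 1)
    return counts

def judge(counts, text):
    """Score a new sentence by the labels its words carried in training."""
    ok = wrong = 0
    for word in text.lower().split():
        o, w = counts.get(word, (0, 0))
        ok, wrong = ok + o, wrong + w
    return "ok" if ok >= wrong else "wrong"

# Invented "moral judgments" standing in for Delphi's 1.7 million real ones.
examples = [
    ("helping a neighbor", "ok"),
    ("stealing a wallet", "wrong"),
    ("helping a stranger", "ok"),
    ("stealing from a store", "wrong"),
]
model = train(examples)
print(judge(model, "stealing a phone"))  # prints "wrong" — a sentence never seen in training
print(judge(model, "helping a child"))   # prints "ok"
```

The verdicts are inferred from statistical patterns in the examples, not looked up; that is the sense in which such a system “comes up with answers on its own.” It is also why, as noted below, its answers can be badly off: it has patterns, not principles.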

While artificial intelligence may be new, the question of what makes us human is not. Remarkably, for centuries Torah scholars have discussed the question . . .

But is that the same as free will? Rambam (Hilchot Teshuvah 5:1) teaches us that there are two central components of free will. The first is the ability to determine that which is right and wrong. Conceivably a computer could do that, though if you look at some of Delphi’s answers you will see it is still a ways off. However, there is a second aspect of free will—the ability to choose between good and evil. This aspect entails seeing two options, being uncertain, torn, anguished, and then freely choosing between right and wrong. To be free is to sometimes choose what is right, but not always. This ability would seem to be uniquely human.

Of course, while current AI does not allow for free will, future AI might. Even if we cannot imagine the possibility of this sort of freedom, given the rapid rate of advancement in the field it would be foolish to predict that it will never be possible. Nevertheless, allowing for this sort of freedom would require something fundamentally different from what we have now, not just a tweaking of the current technology. As such, at least for the time being, meaningful moral free will remains distinctively human.

Man as a Creative Being

3. Let’s consider some other aspects of what it means to be human. Rambam points to intelligence. But not just any form of intelligence. He emphasizes the capacity to conceptualize, to understand that which is not physical. Because of our tzelem Elokim, we can relate to and even come up with concepts like truth, goodness and obligation. Most fundamentally, our tzelem Elokim gives us the ability to relate to and even attain a partial knowledge of the ultimate non-material reality—G-d.5

4. Along similar lines, Rabbi Joseph Ber Soloveitchik understood tzelem Elokim as referring to creativity. In the chapter on Creation, when the Torah tells us we are G-dlike, the Rav posited that “G-dlike” is expressing the concept that we, too, can create: “the term ‘image of G-d’ in the first account refers to man’s inner charismatic endowment as a creative being. Man’s likeness to G-d expresses itself in man’s striving and ability to become a creator.”6

Can a machine conceptualize? Is a computer creative? Machines are now performing tasks, such as creating art, that we would never have envisioned only a couple of decades ago. AI generative data models are growing in sophistication at breakneck speed. What makes this technology revolutionary is that instead of using existing data to classify or predict, these machines actually generate new content.

For example, DALL-E is an AI system that can create realistic digital images from prompts. The image below was created from the prompt: “A painting in the style of Thomas Moran of the 2 million Jewish people at Mount Sinai hearing the word of G-d.” Some of these “artworks” have sold for thousands of dollars.

A realistic digital image created by DALL-E of the Jewish people at Mount Sinai.

 

Portrait of Edmond de Belamy was the first AI-created artwork to be featured at a Christie’s auction (image below). It was produced using a generative adversarial network (GAN), a type of machine learning, in 2018 by the Paris-based arts collective Obvious. Its algorithm appears on the bottom right in place of a signature. The portrait achieved fame after it sold for an astounding $432,500.

The AI generated artwork Portrait of Edmond de Belamy sold at a Christie’s auction for $432,500.

 

It’s not just art. AI can even compose a half-decent devar Torah. If, for example, you type into ChatGPT, “I want a devar Torah connecting this week’s parashah to the yahrtzeit of my grandfather who loved singing,” a reasonable, though somewhat generic, idea is generated. If a rabbi were to deliver that derashah at seudah shelishit, there is a good chance nobody would suspect that it was machine generated.

When a computer comes up with a new solution to an old problem, it’s hard to know whether it has truly conceptualized. When it produces original art, one wonders if it should be called creativity. Let us consider three categories of intelligence: knowledge (knowing information, or chochmah), extrapolation (binah), and creativity (chiddush). Traditional computers certainly store lots of information. Generative AI, which looks at a collection of data to create something new, extrapolates. But true creativity remains uniquely human. Of course, one might object that most art created by people isn’t truly creative. Aren’t we influenced by others, either explicitly or through years of osmosis and memory? The difference is that humans are capable of true creativity. Indeed, humans came up with the very idea of art. One might argue that the invention of AI is our most magnificent feat of creativity ever. Computers, on the other hand, are only capable of derivative art. Thus, the very fact that we are capable of true creativity fundamentally distinguishes us from machines, even if most of our work is merely derivative.

This point has significant educational implications. Some have predicted that programs like ChatGPT spell the end of higher education. How can teachers detect whether a submitted paper was actually written by the student? Why assign essays if computers can compose essays on their own? Will composition become as obsolete as penmanship?

While the answers to these questions are complex, especially as AI is only going to get better, part of the solution emerges from the above analysis. If all of our assignments can be answered by ChatGPT, then we are not demanding true creativity. ChatGPT forces us to encourage originality, not regurgitation. Of course, there is still great value in teaching students how to organize and summarize ideas and information, just as we still teach long division despite the fact that we know our students are going to use calculators. We will still have to change how we give and grade tests and assignments. It’s also unrealistic to always expect absolute creativity (yeish mei’ayin and not yeish mi’yesh). Fittingly, a problem brought on by machine “creativity” will demand innovative solutions of our own. But the root of the resolution lies in unlocking the creative potential each and every one of us has. Ultimately, AI will allow us to focus on teaching higher-level thinking since we can leave the derivative stuff to machines.

5. Rabbi Naftali Zvi Yehudah Berlin (the Netziv) and Rabbi Shimon Schwab point to another aspect of tzelem Elokim: man’s ability to handle complexity and contradiction. Unlike a computer, which gets stuck when the pieces don’t fit, a human being can embrace opposite and sometimes contradictory realities without requiring a clear-cut resolution. According to the Netziv (Bereishit 1:26), the very name of man (adam), which can be understood in two ways (G-dly, as in edameh l’Elyon, and earthly, as in adamah), reflects the complexity inherent in man: on the one hand, man is drawn to the spiritual; on the other hand, he is simultaneously inclined to focus on his own material well-being. The Netziv emphasizes that it is not just that man has the ability to choose, but that this complexity is what defines him, and, as such, it manifests in his name.

That machines can do so many things that seem human forces us to better appreciate what it really means to be human.

Rabbi Shimon Schwab (Rav Schwab on Prayer [Brooklyn, 2001], 238) takes this idea one step further and points out that humans are inherently dialectical. A person can simultaneously hold onto two contradictory emotions. Angels, however, have a singular mission. Thus, when the angels sang upon seeing the drowning of the Egyptians, G-d criticized their insensitivity. To sing suggests pure joy—an inappropriate emotion at a time when human life was lost. Conversely, the Jews were praised for singing. Why? Because a person can rejoice over the triumph of good, while at the same time bemoan the tragic loss of human life. No other creation has such a capacity.

The ability to handle contradiction may stem from something else unique to man—his very composition is a merger of the irreconcilable. Indeed, Ramban (Bereishit 1:26) emphasizes that the uniqueness in man lies in his being comprised of the physical and spiritual—two aspects with opposite characteristics. This merger is a peleh—an incomprehensible wonder.7

Can computers do this? On the one hand, advances in AI allow computers to address complex issues in a way that traditional computing could not. One method involves a generative adversarial network, a class of machine learning in which two neural networks compete with each other, each improving in response to the other, allowing the system to work through obstacles rather than getting stuck the way traditional programs do. However, this is still a far cry from a human’s ability to handle complexity. While a computer can be programmed to maximize convenience, efficiency and safety, it cannot hold onto complex and opposing emotions. Instead, when it encounters a problem, it requires a resolution. We, however, are asked to live with complexity without the expectation of a resolution. (Just think of all the contrary emotions we are expected to simultaneously feel on Rosh Hashanah.)
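The adversarial idea can be caricatured in a few lines of Python. This is a drastic simplification, not a real GAN: there are no neural networks or gradients, and all the numbers are invented. A one-parameter “generator” produces fake numbers, a “discriminator” draws a boundary between a fake and a real sample, and the generator repeatedly nudges itself so that its fakes land on the “real” side.

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # the "real data" clusters around this value

def real_sample():
    # a real data point: the target the generator must learn to imitate
    return REAL_MEAN + random.uniform(-0.5, 0.5)

gen_mean = 0.0  # the generator's only parameter: where its fakes cluster
for step in range(300):
    fake = gen_mean + random.uniform(-0.5, 0.5)  # generator's output
    real = real_sample()
    boundary = (fake + real) / 2  # "discriminator": a midpoint threshold
    # Generator update: nudge the parameter toward the side the
    # discriminator currently labels "real," making fakes harder to spot.
    if fake < boundary:
        gen_mean += 0.1
    else:
        gen_mean -= 0.1

print(f"generator mean after training: {gen_mean:.1f}")  # settles near 5.0
```

The generator starts producing numbers near 0, which any discriminator easily flags; by the end of the contest its fakes cluster near the real data. The point of the contest structure is exactly what the paragraph above describes: neither side is handed the answer, yet each is driven forward by the other.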

6. Rabbi Yosef Yehudah Leib Bloch (Shiurei Da’at, Emunah U’Bitachon, shiur 11) points to another aspect of tzelem Elokim: the human ability to experience emotions and form emotional attachments. It is in navigating the chasm between the head and the heart that our full humanity is realized. While AI systems like LaMDA may fool us with what appear to be emotions, there is no reason to believe they have genuine emotions (though that may be impossible to prove).

7. Finally, Rabbi Eliyahu Dessler (Michtav MeEliyahu I, 32) suggests that our tzelem Elokim grants us the ability to become altruistic and giving—to go beyond our personal needs or the parochial interests of our family or tribe and give to others with no expectation of remuneration. Just as G-d acted beneficently in creating the world (after all, He lacks nothing), we as humans are enjoined to give. What about computers? While they certainly give without any expectation of return, that is simply because they have no sense of self. Unlike humans, for whom beneficence proves a daunting challenge insofar as we must overcome our natural self-centeredness, there is no such challenge for machines.

*        *        *        *

So where does this leave us? Based on what we have seen, it would seem we can breathe a sigh of relief. While computers are catching up, they still have a long way to go. Indeed, an insurmountable gap remains.

But we should not rest on our laurels. After all, how many of us actualize our humanity? Do we seek spirituality or are we drawn after the transient? Do we express our free will or do we live lives guided by habit, allowing ourselves to be governed by nature and nurture? Do we connect to the corporeal or do we seek a relationship with G-d? Do we imitate G-d by becoming a creator and expressing creativity or do we just rehash the same old things? Do we acknowledge the complexity of life or do we prefer a black-and-white reality where there are simple answers to complex questions? Has our drive for efficiency succeeded in numbing our emotions, making us dispassionate and mechanistic? And finally, do we truly care about others or are our acts of kindness just meant to quiet our conscience—or are they actually secret gifts to our self?

The fact that machines can do so many things that seem human forces us to better appreciate what it really means to be human. Our tzelem Elokim is a gift, perhaps the greatest gift we have ever received, but it is our job to actualize our humanity, to become a true human; otherwise, we are no better than a machine.

Notes

1. Blake Lemoine, “Is LaMDA Sentient?—an Interview,” Medium, June 11, 2022, https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917.

2. “Can Artificial Intelligence Ever Be Sentient? Google’s New AI Program Is Raising Questions,” The Guardian, August 14, 2022, https://www.theguardian.com/technology/2022/aug/14/can-artificial-intelligence-ever-be-sentient-googles-new-ai-program-is-raising-questions.

3. See Zohar Chadash, Bereishit 28b.

4. Cade Metz, “Can a Machine Learn Morality?,” New York Times, November 19, 2021, https://www.nytimes.com/2021/11/19/technology/can-a-machine-learn-morality.html.

5. In his Introduction to Mishnah, Rambam writes, “Man is only distinguished from other types of animals by [his] reasoning—as he is [unique in being] a reasoning life-form—meaning to say through the reasoning by which he understands conceptual ideas, the greatest of which is the Oneness of the Creator, blessed be He, and all that accompanies that matter from the theological.” Likewise, in Hilchot Yesodei HaTorah (4:8), he writes: “The extra dimension that is found in the soul of man is the form of man who is perfect in his knowledge. Concerning this form, the Torah states [Genesis 1:26]: ‘Let us make man in Our image and in Our likeness’—i.e., granting man a form that knows and comprehends ideas that are not material, like the angels, who are form without body, until he can resemble them.”

6. Lonely Man of Faith (New York, 1992), 12.

7. Indeed, Rema (Orach Chaim 6) notes that the blessing of Asher Yatzar alludes to this wonder when it concludes with “umafli la’asot—and acts wondrously.”

 

Rabbi Netanel Wiederblank is a maggid shiur at RIETS, where he teaches Gemara, halachah and machshavah to college and semichah students. Rabbi Wiederblank’s newest books are Illuminating Jewish Thought: Faith, Philosophy, and Knowledge of G-d (Jerusalem, 2020) and Illuminating Jewish Thought: Explorations of Free Will, the Afterlife, and the Messianic Era (Jerusalem, 2018). A third volume, on the purpose of mitzvot, Jewish chosenness, the evolution of the Oral Law, and Divine Providence is forthcoming. He is also currently teaching a course at Yeshiva College on the ethics of artificial intelligence.

 


This article was featured in the Spring 2023 issue of Jewish Action.