Cover Story

AI Meets Halachah

Jewish Action in conversation with Rabbi Dr. Aaron Glatt

 

Jewish Action: Can one use ChatGPT to find answers to halachic questions?

Rabbi Dr. Aaron Glatt: I wouldn’t trust ChatGPT for a halachic pesak.

One of the best uses that I can see for AI right now is in data gathering. If one wants to study, for example, the halachot of Ya’aleh V’yavo, AI can be a phenomenal gatherer of information. It can provide you with a listing of all the sources on the subject and can even cite the full text of all of the relevant responsa. Many sefarim may be familiar to you; other sefarim you may not even recognize or have at your disposal. In this scenario, the purpose is not to pasken halachah (render halachic decisions), but to use AI as a tool for information gathering.

As AI matures, its potential to play more of a role in pesak halachah may change as well.

 

JA: So, is AI a more enhanced version of “Rabbi Google”?

Rabbi Glatt: You can do Google searches that will bring up plenty of sources, but AI could theoretically be much more comprehensive. Many sources are simply unavailable via a Google search. You can purchase Judaic digital libraries, such as the Bar-Ilan Responsa Project, but even that kind of database is not as comprehensive as AI. Theoretically, if every single sefer were to be scanned into AI, it should be able to provide a comprehensive compendium of all piskei halachah on a particular topic. Now I wouldn’t rely on that for pesak halachah, but it can certainly be relied upon as a summary document for one who is investigating the issue.

One controversial area in medicine, for example, is how halachah views brain death. Even at this point, AI could come up with numerous published opinions that say brain death constitutes halachic death. At the same time, it could come up with an equal number of published opinions that say brain death is not halachic death. So if one is writing a comprehensive survey of the halachic literature, he could use AI as a data gatherer.

Using AI, a posek could render a halachic decision more easily, as he would have access to all the sources he needs. In that sense, AI could be a phenomenal resource for a posek.

 

JA: Is there a danger in having access to too much information?

Rabbi Glatt: For the layperson, absolutely. Too much information is not helpful. A comprehensive document, for instance, of all the opinions on reheating food on Shabbat is not going to help the layperson know what to do. He might see many contradictory opinions depending on the kind of food and other factors. He could very well throw up his hands and say, “I have no idea what to do. I’ll do whatever I want, and then I’ll find one of the rabbis cited online who agrees with me.” This would represent a serious misunderstanding of the halachic process.

In the Gemara, Rabbi Yosi HaGlili rules that one can eat chicken with cheese, lechatchilah (a priori). But we don’t pasken like that. If, however, one does an online search and sees Rabbi Yosi HaGlili’s opinion, he might think that that’s acceptable in halachah. He won’t necessarily realize that it’s a minority opinion that is not accepted.

Other people might deliberately seek out non-accepted halachic opinions. ChatGPT could easily write a convincing document based upon non-accepted halachic positions, albeit ones from great individuals, illustrating how eating chicken and cheese together is permissible in Jewish law, when, of course, it is not. Some might then use that information to do whatever they want to do. This is a distortion of the halachic process.

 

JA: Right. So it would seem that AI would be most useful as a tool for Torah scholarship.

Rabbi Glatt: Currently, that seems to me to be the best use for it. One could use AI not to get a halachic pesak, but rather for limud Torah, to study the various opinions of Chazal for the sake of learning. One could ask ChatGPT: Can you provide me with all the gemaras in Bavli and Yerushalmi on this topic? Can you then show me the Rishonim on the subject, then the piskei halachah in the Shulchan Aruch and any relevant she’eilot u’teshuvot?

So yes, AI could be an excellent tool for learning.

 

JA: Could a machine ever really pasken anyway?

Rabbi Glatt: The human element is essential in pesak halachah. There’s a well-known story about the great posek Rabbi Shlomo Zalman Auerbach, zt”l. In response to a person who asked him a she’eilah, Rav Shlomo Zalman got up from his chair. I must have asked Rav Shlomo Zalman a really good she’eilah, thought the questioner. I’m making him pace. He’s walking to the window. Rav Shlomo Zalman then motioned to the individual to come to the window. He approached the rav, anxious to hear what he would say. Rav Shlomo Zalman pointed to a house down the street and said, “That’s where your rav lives; ask your rav this she’eilah.”

When it comes to pesak halachah, the relationship is critical. The rav has to know the individual asking the question. There are many considerations that are taken into account when rendering a halachic decision. Is the questioner wealthy or poor? Will the halachic decision impact a couple’s shalom bayit? And so on.

Rabbi Hershel Schachter recalls that his rebbi, Rabbi Yosef Dov Soloveitchik, would sometimes be asked the same she’eilah twice in one day and would give two different answers. Rabbi Schachter explains that the Rav understood the individual’s personal situation, and therefore the halachah for that person was X. For the second individual, whose circumstances didn’t allow for that leniency, the halachah was Y.

It’s not that the halachah changes willy-nilly, but it allows for factors other than objective data to be taken into consideration. The halachah of the beit midrash, that is, the theoretical halachah, will always be the same. But its application will depend upon various factors.

There’s another aspect as well. A man once came to the Beit Halevi and asked, “Is it permissible for me to fulfill the mitzvah of dalet kosot at the Seder with milk?” The Beit Halevi responded to the man’s question in the affirmative. But he realized that if the man was asking about using milk at the Seder, he obviously didn’t have enough money for meat or chicken. The Beit Halevi told his wife to give the family funds to ensure their needs would be met for Pesach. A gadol b’Yisrael does not simply provide a mechanical yes-or-no answer to a she’eilah. He recognizes the real question underlying the question that is being asked.

 

JA: Can you explain the halachic process?

Rabbi Glatt: The halachic process is thousands of years old. A posek does not decide a halachic she’eilah, such as the permissibility of a heart transplant, in a vacuum. In order to render a halachic decision, he builds upon the incredible edifice erected by the Tannaim, Amoraim, Rishonim and Acharonim and the she’eilot u’teshuvot of contemporary gedolei Yisrael who preceded him.

This is a fundamental reason why one cannot rely on AI or on Google for piskei halachah. There is a halachic process that has evolved over the generations—an understanding of what is acceptable and what is not acceptable, what was accepted lechatchilah, what was accepted bedieved (ex post facto), and what was accepted only b’sha’at hadechak, in an emergency situation.

It’s also critically important to know what is being programmed into AI—the values and piskei halachah being programmed into it constitute a bias in and of themselves. To use an example cited earlier: in every state in the United States, brain death is officially recognized as death. Therefore, if a person is brain dead and the family doesn’t object, a death certificate will be written and the patient will be removed from a respirator. But there is an intense controversy among posekim as to whether halachah recognizes brain death as the definition of death. If AI is programmed to accept brain death as halachic death, that will steer its piskei halachah in one direction. Conversely, if it is programmed not to accept it, that will steer all of its conclusions in the opposite direction. And this is the exact problem AI will encounter in every situation where there are legitimate conflicting halachic opinions. This doesn’t even touch upon the differences between Ashkenazic and Sephardic pesak, Litvish and Chassidic, and so on.

 

JA: What role does mesorah play?

Rabbi Glatt: That’s a good question. There is a mesorah when it comes to halachah. AI doesn’t have access to anecdotal material. In other words, it can never say: “I heard from my rebbi.” AI wasn’t in a shiur with Rabbi Yosef Dov Soloveitchik. It wasn’t in a shiur with Rabbi Moshe Feinstein.

There is a famous teshuvah written by Rabbi Tzvi Hirsch Kalischer in the 1800s, in which he opines that one is permitted to bring a korban Pesach. He addresses the issue from all angles, including the fact that we don’t have bigdei kehunah, a parah adumah or a Beit Hamikdash. After dealing with every concern, he concludes that one is permitted to bring a korban Pesach today.

The Binyan Tzion, Rabbi Yaakov Ettlinger, wrote a treatise opposing Rabbi Kalischer’s position. The overwhelming consensus of the posekim is that Rabbi Kalischer’s position is not accepted and we cannot bring a korban Pesach—which is why we don’t bring a korban Pesach nowadays. That’s the mesorah.

Mesorah is not only the oral tradition that your rebbi taught you in the classroom; it is also what you observed your rebbi pasken in real-life cases (shimush). That is not something AI can do.

 

JA: Any concluding thoughts for our readers?

Rabbi Glatt: Is the internet good or bad? I would say it’s neutral. On the one hand, it could, G-d forbid, lead one to see immorality worse than Sodom and Amora; on the other hand, there is a tremendous proliferation of Torah learning through the internet. AI is like the internet. It’s a tool. Used properly, it can be a fantastic aid in harbatzat Torah. Used inappropriately, it could lead to the opposite. The full potential of AI is unknown, and it is certainly much greater than what we discussed. It is a dynamic new tool with seemingly limitless capabilities.  

 

Rabbi Dr. Aaron Glatt is the associate rabbi at the Young Israel of Woodmere and is an international lecturer on medical and halachic issues. He has been giving a Daf Yomi shiur for thirty years and also gives a weekly gemara b’iyun shiur, daily halachah shiurim, and many other shiurim. His Dirshu Mishnah Berurah shiurim can be accessed at outorah.org. A board-certified infectious disease physician, he is currently a professor and chair of the Department of Medicine at Mount Sinai South Nassau.

 

This article was featured in the Spring 2023 issue of Jewish Action.