Cover Story

AI in Medicine: Halachic Reflections on Emerging Challenges

 

 

AI’s potential in healthcare is vast. It can improve detection and diagnosis, increase efficiency and access to care by minimizing geographic, economic and time barriers, reduce medical errors that lead to accidental deaths and injuries, and alleviate administrative burden and clinician burnout. It may even enable the generation of new scientific knowledge and discoveries.  

Given the seemingly boundless promise of AI in healthcare, it is imperative to ask whether halachah discourages, encourages or possibly even obligates its use. Jewish law obligates healthcare professionals to save lives, provide the best possible care to their patients, and not stand idly by when they have potentially lifesaving information. If AI enhances healthcare providers’ ability to deliver care and improve patient outcomes, one might argue that its responsible use may itself be halachically required. Such an obligation would also entail a corresponding duty to mitigate the ethical concerns associated with AI in healthcare.  

Yet the use of AI in healthcare has also been met with resistance from healthcare professionals. Their concerns include confidentiality, data security and privacy; patient autonomy and informed consent; misinformation and liability when mistakes occur; overreliance leading to diminished critical thinking skills; an increased generational divide among providers; and bias and equity concerns. Moreover, some AI companies have been accused of cutting ethical corners in their haste to release new products.

Taken together, these promises, potential obligations and ongoing concerns highlight the need to explicitly consider what Jewish law has to say about healthcare-related uses of AI, and whether such uses raise unique religious concerns, in order to help ensure that these emerging technologies develop in alignment with halachah.  

This article will focus on the role that values play in medical practice, how this becomes particularly significant when AI is used for medical predictions and decision-making, the implications for end-of-life care and surrogate decision-making on behalf of patients who lack capacity, and rabbinic precedents that may help navigate this rapidly evolving field. 

 

Why Medical AI Is Never Value-Neutral  

To appreciate the challenges that AI can pose in clinical decision-making, it is helpful to begin with the role that values play in medical decision-making more broadly. Although concerns about bias in AI are frequently raised, typically due to biased clinical data in which some populations are not well represented, there are also concerns about values-based bias. As with human decision-making, AI-supported decision-making is not value-neutral. What may appear to be objective, computerized output can in fact embed particular assumptions and priorities reflecting the values of certain groups rather than those of the individual patient.  

When making even the simplest medical decisions, a multiplicity of values comes into play, reflecting social norms, professional commitments, institutional and economic considerations, and personal beliefs and perspectives, such as definitions of “healthy,” judgments about societal “importance,” or assumptions about quality of life. These values may not align with the patient’s own, potentially imposing standards upon patients who do not share them.


Patients may have a legitimate concern about what values underlie AI systems and whether the systems’ primary focus is on the patient’s best interests alone. Given the soaring costs of healthcare, it is reasonable to presume that some healthcare systems will be motivated to utilize AI support systems to reduce costs or improve reimbursement or profitability, rather than focus solely on what is most clinically beneficial for the patient. We hope that our physicians always have their patients’ best interests in mind and will be transparent with their patients, but physicians themselves may not be aware of what factors an AI system takes into account. Although certain decisions, such as a surgical AI determining the optimal placement of a prosthetic knee or recommending a specific chemotherapy regimen, appear purely clinical or technical, they may nonetheless be shaped by underlying value assumptions. For instance, an AI developer might prioritize maximizing revenue or operational efficiency over what is best for the patient. In one case, a large language model (LLM), GPT-4, generated opposite medical recommendations depending on whether it was prompted as a physician or as an insurance reviewer. Although the patient and clinical facts were identical, the AI’s moral and clinical judgment shifted based on the assigned role.1

This situation can be especially concerning for observant Jews, who base major life decisions on Jewish law and hold specific views regarding crucial issues such as the sanctity of life. While AI provides significant medical benefits through decision support and outcome prediction, values-based bias is a significant ethical concern and potential source of conflict for people with distinctive religious values, such as those inherent in Jewish tradition.  

This issue is exacerbated by the fact that AI algorithms are not always intelligible or transparent. Such systems are often referred to as “black boxes”: patients and even healthcare providers have little insight into how their decisions are made, which hinders anyone’s ability to scrutinize their output. AI could therefore undermine patients’ ability to receive care aligned with their own values and preferences, as well as their meaningful participation in shared decision-making. This could also undermine patients’ willingness to accept AI decision-making and erode public trust in the healthcare system more broadly.

The desire to make medical decisions in accordance with one’s own values heightens the need for transparency and explanation. This need becomes even more acute when AI is involved in medical decision-making, as people tend to overtrust and overrely on computer systems and AI decision-support systems, a tendency known as “automation bias.” 

 

How Values Guide Medical Decisions  

The role of values in medical decision-making has previously arisen in rabbinic literature, such as in the context of determining the definition of death. Although many assume that it is the physician’s role to define what it means to be deceased, Jewish law recognizes that death is more than a medical or scientific issue. Death is a process, and the point at which a human being no longer retains the status of a living entity is the subject of ongoing complex religious, philosophical and moral debate. Modern medicine has articulated its own definitions of death, and medical tests have been designed to help determine when a patient’s condition meets those criteria. However, this definition itself represents a choice of a particular point in the dying process that medical tests can isolate and assess.  

There is no purely “scientific” justification to choose that point in the process. Even the medical establishment must rely on meta-scientific considerations, whether ethical, social, philosophical or some combination thereof, to justify its position. The medical tests are valid, but only insofar as they determine whether specific criteria corresponding to a predefined definition have been satisfied.  

Religious values establish the foundational definition, while the clinician’s role is to identify and assess the criteria that satisfy that definition.2 Although the implementation of criteria and appropriate tests to determine death is best done by physicians based on contemporary science and technology, for religiously oriented patients and their families the process of determining death cannot end there. Rather, decisions about patients in such liminal states should be made in consultation with the patient’s family and facilitated by the patient’s spiritual care providers, to ensure that they accord with the patient’s values and worldview.

 

When Prediction Shapes Care: DNR, DNT, and the Risk of Self-Fulfilling Prophecy 

While much of what has been discussed above applies to any healthcare activity, not only to AI, AI is developing capabilities that further accentuate these issues. For example, AI can provide predictive tools, sometimes referred to as “outcome prediction models” (OPMs), that generate clinical predictions and assist in decision-making by offering personalized assessments of how a given patient is likely to fare. To the extent that such tools reduce clinical uncertainty, enabling better-informed decision-making and relieving physicians of burdensome tasks so that they can focus on providing good patient care, they can be very beneficial. However, they also raise significant concerns.


Some experts caution that AI outcome predictions could lead to a “self-fulfilling prophecy” by steering clinicians toward one treatment path over another without adequately considering the individual patient’s goals, values and preferences. While this is true of non-AI decision-making as well, the concern with AI is that clinicians may feel compelled to follow the decisions or recommendations of the system due to automation bias, as discussed above.3 For example, if a model predicts a very poor prognosis for a patient, clinicians may rely too heavily on that prediction and decide that it is therefore appropriate to limit life-sustaining interventions and focus only on comfort measures. An AI model’s suggestions may be very accurate, yet they can also cause harm when they are trusted and followed without question. A patient whose life could have been saved may be allowed to die as a result.

A similar issue was raised years ago by rabbinic bioethicists. Although Jewish law sometimes permits a Do Not Resuscitate (DNR) order, rabbis who approved such orders emphasized that even if resuscitation is not required in the event of cardiac arrest, all routine care must still be provided at all other times. They cautioned that a DNR must never become a “DNT” (Do Not Treat).4 Although healthcare providers typically strive to avoid equating DNR with DNT, some studies have unfortunately shown that clinicians may sometimes subconsciously interpret DNR orders as indicating a desire to reduce the intensity of care provided to these patients.5 This issue is thus not a new one, but with the excitement surrounding the integration of AI into all aspects of healthcare, it may become especially acute and pervasive as a result of these predictive models and the realities of automation bias and self-fulfilling prophecies.  

There are also particularly salient concerns for those who follow Jewish law. For example, hospitals could benefit significantly from AI assistance in triage and resource allocation by using predictive tools that estimate mortality rates and the effectiveness of interventions, though these predictions are highly influenced by value judgments, such as prioritizing patients based on age or assumptions about quality of life. Predictive data about various patients’ survival may be taken into account by rabbinic authorities, but value judgments about the worth of different individuals’ lives—or decisions to withdraw life-sustaining interventions based on such predictive models—are not. Furthermore, algorithms make predictions that can give us a sense of what is likely to occur, but humans are dynamic and unpredictable. Thus, algorithms are most valuable not for predicting inevitable outcomes, but rather for guiding people toward changes that can help prevent worst-case scenarios.  

Additionally, from a traditional Jewish perspective, the future is not considered entirely predetermined,6 which raises theological concerns about making absolute predictions. Notwithstanding Divine foreknowledge, Judaism affirms that prayer and repentance can influence outcomes. For example, the Talmud teaches that even if a sharpened sword is on one’s neck, one should not despair of G-d’s mercy.7 Similarly, the Talmud states elsewhere that if two patients have identical conditions but only one recovers, the differences can be attributed to prayer.8 While the impact of prayer on patient outcomes is complex and nuanced, it is a central factor in Jewish thought, one that predictive AI models do not account for.  

Finally, end-of-life care in Judaism is governed by highly specific guidelines. Predictions of a patient’s mortality could be beneficial if they encourage conversations that facilitate the implementation of practices mandated by Jewish law, such as advance care and estate planning as well as the recitation of end-of-life prayers. However, Jewish law also addresses how much detail a fragile patient should be told, whether they should be told such information at all, and how it should be shared, all of which must be handled cautiously and respectfully.9  

An increased emphasis on predicting patient outcomes could have a significant impact on these aspects and should therefore be approached with appropriate religious and cultural sensitivities. 

 

Who Speaks for the Patient When the Patient Cannot? 

Another emerging AI capability is decision support (sometimes referred to as “Clinical Decision Support” or CDS), which can assist with weighing risks, potential benefits, and burdens of a given intervention to help inform medical recommendations. However, this raises significant concerns for patients who lack decision-making capacity and whose preferences are unknown in a given situation.  

An AI algorithm designed to predict patient preferences seeks to determine what a patient would likely want in various situations by extracting values from available data. This may include demographic information, social media activity, electronic communications, charitable donations, organizational affiliations, and recordings of past interactions with healthcare providers collected using ambient AI documentation. Using this data, the model aims to infer the patient’s values and predict the decisions the patient would make.10 While there are valid concerns about which data should be used and how it is interpreted, some studies suggest that, given the vast amount of information available, these models may actually predict patient preferences more accurately than the patient’s own family. (In an as-yet unpublished study conducted at the University of Florida College of Medicine, family members acting as surrogate decision-makers demonstrated an accuracy rate of approximately 40 percent, whereas an AI system achieved an accuracy rate of about 75 percent!) 

The current standard for making these decisions is to consult, when possible, with the patient’s family to determine what the patient would prefer (the “Substituted Judgment Standard”). In Jewish law, the role of the family in making these decisions is a matter of some dispute. Some rabbinic authorities contend that the family does not have any independent status as decision-makers, and that the healthcare team should simply decide what is the best course of action for a given patient, based on the clinical realities and whatever is known about the patient’s own goals, values and preferences.11 If that is the case, and AI is able to make the most accurate prediction of what a patient would prefer, then it seems clear that it should indeed be consulted to help inform the decision.  

By contrast, Rabbi Shlomo Zalman Auerbach suggests that the reason the family is asked is because they are presumed to be able to best determine what the patient would have preferred.12 According to this view, if the family is unable to determine the patient’s wishes, their input regarding treatment may carry less weight.13 In such cases, consulting an AI that can reliably predict the patient’s preferences could be beneficial. It would follow, then, that the AI’s recommendation could override the family’s judgment in cases where the AI’s predictions are more accurate than the family’s presumed knowledge of the patient’s wishes. 

However, Rabbi Moshe Feinstein implies that the primary reason the family makes decisions for the patient is because it can be assumed that the patient would want what their family suggests, since they have the patient’s best interests at heart.14 According to this approach, the input of the patient’s family (or close friends) is valuable even if they do not know what the patient would have preferred.15 It would therefore seem that the family’s perspective should take precedence over an AI prediction model, regardless of its accuracy. This is because, from this standpoint, the central factor is the love and commitment inherent in these relationships, not merely the precision of any predictions.  


This becomes especially significant given that, as previously discussed, we cannot guarantee that an AI will prioritize the patient’s best interests. Furthermore, the accuracy of such predictive models typically depends on collecting data from large populations and assessing how closely those data align with the AI’s predictions. Given the unique values of observant Jews discussed above, effectively using such predictive models may prove particularly challenging for our community. Additionally, engaging family members in meaningful dialogue during the decision-making process often has intrinsic value for familial relationships, something that relying solely on AI predictions might preclude. Although consulting an AI algorithm can indeed serve as a helpful aid in decision-making, and its insights can certainly be taken into account, the final decision, when all of the above rabbinic rationales are considered, should remain with those closest to the patient.16  

 

Keeping Human Judgment at the Center 

There have been numerous suggestions for how to address these issues. Ideally, AI systems should be transparent and as explainable as possible to those who use them, particularly when it comes to how they reach their conclusions and which values inform those decisions. Although some advanced AI systems are inherently difficult to explain, we can either advocate for the use of simpler models when appropriate, especially when we know they align with our values, or push for more complex models to be adapted so that their reasoning can be meaningfully evaluated, if not by all users, then at least by qualified experts.  

Ethical considerations are sometimes ignored when they conflict with profit or speed of development, which makes accountability all the more important. Just as a good clinician is expected to explain how they arrived at a particular recommendation, AI should be able to describe its core values and reasoning. This ability would make AI accountable, and it is essential for an observant Jew. In fact, such capabilities could give AI decision-making an advantage over human decision-making, since so many human biases remain unconscious and are very difficult to explain. Nevertheless, this may not be fully practical or possible with AI either.

 

The Need for Oversight 

Clinicians may be uncertain about whether they can trust AI-generated recommendations in the same way they trust their colleagues or peer-reviewed studies. As a result, some have proposed subjecting AI programs to clinical trials and publishing their results in the medical literature before integrating them into practice,17 along with conducting periodic audits of AI performance and its impacts on clinical decision-making and health outcomes.18 This approach might also help us better understand how these models function, which leads to another approach: human oversight. 

In the Orthodox community, many rabbis required the safeguard of specially trained (female) religious supervisors (mashgichot) to oversee the IVF process before permitting artificial reproductive technologies (ART) in their communities.19 Here too, there may be a need for specially trained individuals with both religious and AI expertise who understand how a given AI is being utilized in relation to a given patient’s own value system. These individuals can serve as patient advocates and liaisons for rabbis when AI is utilized in values-based medical decisions. Indeed, it is generally wise to keep humans in the loop to provide oversight, guarding against unforeseen consequences and addressing concerns as they arise.

The use of AI in healthcare raises numerous ethical concerns, which can be particularly acute for religious individuals who adhere to distinct value systems. Many of these ethical challenges are not entirely new, and Jewish tradition can offer valuable guidance. Nevertheless, these issues are evolving rapidly and may become significantly more pronounced with AI than in the past, making careful thought and proactive preparation essential as we adapt to this changing reality. Regardless of the strategies employed to address these concerns, it is increasingly important for clinicians to guide families with humility and with an awareness of the multiple biases that can influence decision-making. Clinicians should receive training to avoid overreliance on AI predictions and to prevent such predictions from becoming self-fulfilling prophecies. While we should utilize AI to enhance healthcare, we must simultaneously strive to preserve patients’ human relationships—with their loved ones and healthcare providers—to ensure that healthcare remains as effective, compassionate and appropriate as possible for all patients. 

 

Notes 

1. Olivia Farrar, “AI Is Making Medical Decisions — But For Whom?” Harvard Medical School Department of Biomedical Informatics, Harvard Magazine, May 23, 2025, https://dbmi.hms.harvard.edu/news/ai-making-medical-decisions-whom.

2. James L. Bernat, Charles M. Culver, and Bernard Gert, “On the Definition and Criterion of Death,” Annals of Internal Medicine 94, no. 3 (1981): 389–94.

3. Charles Binkley and Tyler Loftus, Encoding Bioethics (California: University of California Press, 2024), 29.

4. Nishmat Avraham, YD (English ed.), 325.

5. Patricia A. Kelly et al., “Original Research: Nurses’ Perspectives on Caring for Patients with Do-Not-Resuscitate Orders,” American Journal of Nursing 121, no. 1 (2021): 32; Fuchs et al., “Quantifying the Mortality Impact of Do-Not-Resuscitate Orders in the ICU,” Critical Care Medicine 45, no. 6 (2017): 1019–27.

6. See, for example, Rambam, Shemoneh Perakim, chap. 8.

7. Berachot 10a.  

8. Rosh Hashanah 18a.

9. See Rabbi Dr. Jason Weiner, Jewish Guide to Practical Medical Decision-Making (Jerusalem/New York: Urim Publications, 2017), chap. 1C.

10. Teva D. Brender, Alexander K. Smith, and Brian L. Block, “Can Artificial Intelligence Speak for Incapacitated Patients at the End of Life?” JAMA Internal Medicine 184, no. 9 (2024): 1005.

11. Rabbi David Zvi Hoffman, Melamed Leho’il 2:104. This approach is endorsed by Rabbi Zalman Nechemia Goldberg (Jewish Medical Ethics, vol. 2, 346) and Rabbi Hershel Schachter (Be’Ikvei HaTzon, “Eilav Hu Nosei et Nafsho,” 228).

12. Rabbi Shlomo Zalman Auerbach, Shulchan Shlomo: Erchei Refuah, vol. 1, 75.

13. She’eilot U’teshuvot Minchat Asher 1:116 (3).

14. Iggerot Moshe, CM 2:74 (5). See also Rabbi Eliyahu Bakshi-Doron, Binyan Av: Refuah BeHalachah, 40.

15. She’eilot U’teshuvot Minchat Asher 1:116 (3).

16. Personal conversation with Rabbi Asher Weiss. 

17. Binkley and Loftus, Encoding Bioethics, 29.

18. Kristin M. Kostick-Quenet, “A Caution Against Customized AI in Healthcare,” NPJ Digital Medicine 8, no. 13 (2025): 3.

19. Rabbi Shlomo Zalman Auerbach and Rabbi Yosef Shalom Elyashiv (Nishmat Avraham, Even Ha’ezer 1:6 [2] [1:13 (29) in 3rd ed.]); Yabia Omer 2, Even Ha’ezer 1 (13).

 

Rabbi Dr. Jason Weiner, BCC, serves as senior rabbi of Cedars-Sinai Medical Center in Los Angeles and rabbi of Knesset Israel Congregation of Beverlywood. He also serves as senior consultant to Ematai, which educates Jewish individuals and families about end-of-life issues. 

 


 

This article was featured in the Spring 2026 issue of Jewish Action.