How to Use AI (And How Not to Use It)

 

 

Although AI has many applications, for most people it refers to large language models (LLMs) such as GPT, Claude or Gemini. These models respond to “prompts” in which users ask them to answer questions, perform tasks like translation and editing, or even solve difficult problems in science, mathematics or Torah.  

Although most users barely scratch the surface of these tools’ abilities, anyone can appreciate their benefits. They put information at our fingertips and remove the drudgery from necessary but boring tasks. 

Still, we need to be cognizant of the price we pay for this convenience. To understand what that price is, consider another commonly used AI app: the navigation tool Waze. 

Waze is so terrific that nobody who uses it can imagine living without it. Which, in a sense, is exactly the problem. To put it bluntly, as a result of Waze, many of us have become tourists in our own cities. We follow the blue line and arrive on time, but if the battery dies, we are lost. We may have reached our destination, but we never learned the way. 

LLMs are Waze for the mind. They solve all kinds of problems quickly. But if we use them without discipline, the muscles we need to think—to work through a difficulty and actually own the answer—begin to atrophy. 

Can we use AI to get to our destination without forgetting how to navigate? 

 

The Generational Divide 

For clarity, let’s focus on two common uses of AI: summarizing and explaining difficult texts, and writing up our own thoughts on a topic. 

If you’ve spent decades cracking your teeth on difficult texts before ChatGPT existed, you are probably fine. You already learned how to think the hard way. For you, AI is a convenience. You are using a crutch, but you have functional legs; you’re just resting them. 

The real concern is the next generation. A student who never has to sweat over a Tosafot isn’t just saving time; he is skipping the very process that makes the material his. When the answer arrives before the question has even fully formed in his mind, he remains a consumer of Torah rather than an owner. 

These tools are useful, and the walls of the beit midrash have always been porous. But they must be disciplined. AI should be treated as a shamash—a servant who handles logistics—not as a rav. The shamash isn’t the teacher. He’s the one who unlocks the door, arranges the benches and sets out the books. That is the proper role for AI. 

Nobody gets credit for wasting an hour on a hard abbreviation. If you are stuck on a sugya in Gemara because you can’t decipher an acronym or translate an obscure Aramaic term, use whatever tools are available. Clear the technical obstacles out of the way so you can actually learn. 

AI is very good at giving you the big picture. It has read everything, which means it knows who is arguing with whom. Ask it, “Who disagrees with Rashi here?” or “What are the standard approaches to timers on Shabbat?” It will map out the positions, distinguish the strict from the lenient, and identify the fault lines. That can save hours of hunting. You are free to contemplate the arguments rather than search for them. 

In short, use the machine to find the sources. Then close the laptop and read them. 

 

Clearing the Barriers  

The problem with AI is that it can be too smooth. It produces confident summaries that hide the mess. But Torah is the mess—the arguments, the contradictions, the tensions that do not resolve neatly. 

We learned this the hard way at Dicta, my AI group in Jerusalem. We built a tool that functions like a halachic answering machine: You ask a question in plain language, and the system processes the literature and generates a response. It was technically very clever—and it was a mistake. 

It became clear that many users treated the tool like a vending machine—insert coin (question), receive product (answer). Too often, they weren’t looking at the sources at all. They were outsourcing understanding. 

As a result, we removed the conclusion from the next version. The new system assembles the applicable sources, links directly to the relevant passages, highlights the key lines, and stops. The judgment is yours. 


We made that change for a reason that goes beyond interface design. It’s about the nature of pesak. There is a temptation to ask these models for a ruling because they’re so capable. In many cases, AI can make a reasonable guess at what Rabbi Moshe Feinstein might say because it has, effectively, digested the responsa of Iggerot Moshe. 

But a pesak isn’t a statistical prediction. A computer can tell you that a ruling hinges on whether the pot was hot or the food was solid. It can’t see the family standing in the kitchen or weigh the financial loss against the spiritual cost in that particular home. It has no skin in the game. 

Real scholars rely on shimush (apprenticeship). A posek brings a lifetime of watching how his own teachers navigated the gap between the text and lived reality. You get that from serving a master, not from processing data. 

And even if most of us will never issue halachic rulings ourselves, we won’t learn how to be good Jews solely from a one-size-fits-all oracle. We learn from teachers, from our community and from wrestling with the texts on our own. 

 

Putting Together Your Own Ideas 

If a machine writes your devar Torah, you haven’t acquired Torah; you’ve rented it. 

This principle extends beyond Torah. 

We live in an age in which generating fluent, persuasive text is essentially free. This makes it very easy to be lazy. Using AI to draft emails, summarize news or plan your finances often saves time—and is almost inevitable. But be careful not to let the machine take over your thinking. If you have an LLM help you draft a text, you then need to question every assumption, verify every claim and make sure it hasn’t misrepresented your views—by which time you may discover you’d have been better off writing the whole thing yourself from scratch. 

In short, treat AI output like a rumor in shul: possibly true and possibly useful, but you had better think twice before passing it on. Use it to turn an outline into a draft, then edit ruthlessly. If you can’t spot the errors in the output, you aren’t competent to use the tool for that task. 

We are moving from an era of searching to an era of generating. Fluent answers are now a keystroke away, and it’s tempting to skip the slow parts. But the slow parts are where you actually form and test your own thoughts. That only happens if you do the work.  

Let AI speed up the drudgery. The learning is on you. 

 

Dr. Moshe Koppel is a computer scientist, Talmud scholar and political activist. He is a professor emeritus of computer science at Bar-Ilan University and a prolific author of academic articles and books on Jewish thought, computer science, economics, political science and other disciplines. He is the founding director of Kohelet, a conservative-libertarian think tank in Israel, and he advises members of the Knesset on legislative matters. 

 


 
