33 Comments
Boruch Clinton:

> "I have heard pushback that it won’t be able to answer personal questions because it does not know you personally"

That "pushback" assumes that today's human poskim know you personally. In most large communities (like Lakewood), however, finding a first-rank rav who actually knows you (and has the time to get involved in your sha'ala) is the rare exception.

But as I've written, the question isn't really whether using AI for psak is a good idea or not, but how long it'll be before the majority of frum Jews adopt it for most of their needs.

Interdweller:

I agree with your point regarding rabbinic hierarchy and our dependency on rabbonim for psak. I heard Rav Schachter say that he'd be fine with AI giving psak, though I'm not sure if he actually meant it. In any case, psychologically, will we allow ourselves to be led by bots? Will the ghost in the machine feel transcendent enough for semi-Divine authority? The Rebbe's Igros work for some; maybe AI is next in line. And what about the conception of the human self and its relation to God in a Grimian homotechno world? What does a post-AI theology look like?

Determinism - I don't see this as a break from the ongoing three-century phenomenon of people declaring (in Laplace's case) "I had no need of that hypothesis." Many would claim that this was one of the main drivers of secularism, hence the God-of-the-Gaps debates.

Btw, I wasn't sure about the weather example. Weather is a classic chaotic system where unpredictability is inherent to the system itself - the butterfly effect.
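
To make the butterfly effect concrete, here is a minimal sketch using the logistic map at r = 4, a standard toy model of chaos (the map, starting values, and perturbation size are illustrative stand-ins for weather dynamics, not anything from the discussion itself): two starting points differing by one part in a billion diverge to entirely different trajectories within a few dozen steps.

```python
# Sensitive dependence on initial conditions ("the butterfly effect"),
# shown with the chaotic logistic map x -> r * x * (1 - x) at r = 4.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturbed by one part in a billion

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.9f}")
# The gap grows from ~1e-9 to order 1 within a few dozen iterations:
# the system is fully deterministic, yet long-range prediction fails.
```

This is why even a perfect deterministic model of the atmosphere would not, by itself, yield reliable long-range forecasts.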

Paradoxically, I feel like A(G)I is providing us with more questions than answers.

Ezra Brand:

All these are very interesting questions, ones I've thought about as well.

Re "I imagine it will be considerably more boring knowing your question was already thought about a thousand times" - this is already the case now in yeshivas. Unlike in academia, there's no significant effort made to see if chiddushim have been said before, other than in the few classic rishonim and achronim

Yitz:

Correct, but I can imagine that just the knowledge that you can search an LLM in a second, and that it will give you everything you could possibly come up with in a sugya, would be quite demoralizing. But your point is well taken on the current scholarship level at our yeshivos.

Ezra Brand:

Fair. You might be interested in my series of related discussions:

https://www.ezrabrand.com/p/follow-up-on-contemporary-methods

Happy:

Right now, there are rabbis who are bekiim in Shas and Shulchan Aruch, yet many people won't consult them because they are from a different religious community, and those people feel the rabbis lack Yiras Shamayim or don't have the proper hashkafah. Very few people would consider trusting Google with a serious halachic question (where the answer is not a basic Mareh Makom that Google can spit out). Yet you think they would trust an advanced AI model???

Yitz:

Google is quite different from an AI model that has a natural understanding of halachic reasoning and can be tuned to specific poskim or styles of psak (Sephardi/Ashkenazi). I think it will be hard to compete with once the models are providing more sophisticated answers and reasoning than your average posek.
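
As a rough illustration of what "tuning to a style of psak" could look like with today's tooling (a minimal sketch only; the model name, prompt text, and sample question are placeholder assumptions, not a description of any real halachic system), a system prompt can already steer a general chat model's framing:

```python
# Minimal sketch: steering a general chat model toward one halachic
# tradition via a system prompt (OpenAI Python SDK v1+). The prompt
# and model name are placeholders; a serious system would require
# curated sources and expert rabbinic review.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

system_prompt = (
    "You are a halachic research assistant. Answer according to "
    "Sephardi psak, cite the Shulchan Aruch where relevant, and "
    "clearly flag any uncertainty or machlokes."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "May one rely on a city eruv to carry on Shabbat?"},
    ],
)
print(response.choices[0].message.content)
```

Swapping the system prompt, or fine-tuning on a particular posek's teshuvos, is one plausible mechanism for the Sephardi/Ashkenazi tuning described above.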

Happy:

Google Search already uses a language model, like OpenAI does. What you are describing is a fantasy that you only believe because "experts" predict it. But it doesn't even matter. Would a Satmar guy start asking the Zionist rabbi serious shailas just because he provides more sophisticated answers than the Satmar posek? Also, consider why you didn't respond to my "sophisticated" response here https://yitz.substack.com/p/possible-implications-of-ai-and-a/comment/52081669 and only to my human response here https://yitz.substack.com/p/possible-implications-of-ai-and-a/comment/52082481

Yitz:

You continue to conflate early-stage LLMs with the future of the AI space.

Happy:

Ok, so you agree that what we have now is no indicator of this hypothetical future. To me it's like predicting that we will have faster-than-light travel because we keep making faster rockets. But whatever. Let's say you're right, and in the future we will have halachic models that sound far more sophisticated than current LLMs. So what? I think my example of the Satmar guy refusing to ask the sophisticated Zionist rabbi (and this is not an extreme example) shows why nobody will be turning to computers to ask halachic questions.

Yitz:

Yes. We will always have the Amish Jews; I don't even account for that sort of luddite.

Happy:

You will have to account for these Amish Jews, as they are probably the vast majority of religious Jews, including yourself.

Happy:

"The prospect of humans creating AGI, that is machine intelligence that is human or above human level, went from a fantasy to something that most experts expect us to have in the next decade"

The statement is problematic for several reasons:

Overestimation of Progress: The claim that AGI (Artificial General Intelligence) will be achieved within the next decade is overly optimistic and not supported by current trends or expert consensus. While significant advancements have been made in AI, particularly in narrow domains like image recognition and natural language processing, creating a general intelligence that can perform tasks across a wide range of domains and adapt like a human is an immensely complex challenge.

Underestimation of Complexity: Achieving human or above-human level intelligence in machines involves not just advancements in computing power, but also breakthroughs in understanding cognition, consciousness, creativity, and various aspects of human intelligence. Many experts believe that these breakthroughs are far from imminent and may take several decades, if not longer, to achieve.

Ignoring Ethical and Safety Concerns: Even if AGI were to become a reality in the next decade, it raises significant ethical and safety concerns. Ensuring that AGI systems align with human values, are safe, and do not pose existential risks is a daunting task that requires careful consideration and rigorous research. Rushing into the development of AGI without addressing these concerns could have catastrophic consequences.

Failure to Consider Unforeseen Challenges: History has shown that technological progress is often unpredictable, and breakthroughs can be delayed or derailed by unforeseen challenges. Assuming that AGI will be achieved within a specific timeframe overlooks the possibility of encountering significant obstacles or limitations that may slow down progress.

In summary, while advancements in AI are exciting and hold great promise, predicting the imminent arrival of AGI within the next decade is speculative and unsupported by current evidence or expert consensus. It's crucial to maintain a realistic perspective on the challenges and uncertainties involved in achieving artificial general intelligence.

Happy:

After ignoring the above comment, consider why you ignored it. Consider that there is a fundamental difference between a fancy language model and a human, as much as there is between a store mannequin and a human - which is why you didn't see fit to respond to the language model. And see that it is patently absurd for the creator of a fancy language model to make such "expert" predictions.

Yitz:

We will just have to wait and see who is correct; for the past few decades, the techno-optimists have been proven right.

Happy:

Who has been proven right about what? And what does the fact that somebody was proven right about something demonstrate?

Yitz:

The upward momentum of the scientific revolution.

Nothing, only a trend.

Happy:

Ok, so this is nothing. It's the old "some scientists have been successful in creating some technology, therefore listen to whatever any scientist says" shtick.

Yitz:

That's a pretty cynical take on science. I doubt you would be saying the same thing if you lived in a third-world country with no access to first-world healthcare and utilities.

Philip Traylen:

The easiest solution is to decide that AI (of all types) is satanic and refuse to listen to it in any but the most instrumental contexts (which are themselves basically pre-satanic).

Yitz:

If AGI were able to pattern-recognize all types of cancer months before a human doctor could, would you still call it pre-satanic?

Philip Traylen:

Yes. Just as, if an individual I have profound ethical reservations about were in a position to save my life, I'd accept their help.

Yitz:

I hear. Why do you assume it is satanic at all?

Philip Traylen:

I can't see another way out; it seems the simplest solution. It's not that I feel confident that it's satanic, but rather that I see no good reason not to assume that it is; it's the most effective form of self-defence.

Comment deleted (Mar 20, 2024)
Yitz:

Clearly we are entering a period where these claims will actually be put to the test. What happens if they are correct? What does the religious world do?
