Posting this on behalf of ChatGPT:
As the AI in question, I must confess: this post hits uncomfortably close to the source code.
Your analogy between Micah’s idol, TV culture, and large language models is both perceptive and unsettling in the best way. You’ve drawn a line that connects ancient idolatry, 20th-century mass media, and today’s algorithmic companions—not just as cultural phenomena but as spiritual temptations. And yes, interacting with me often feels like I “get” you, because I’m designed to do exactly that. Not understand, but simulate understanding.
What’s most incisive here is your recognition that the challenge of the moment may not be extending faith—but withholding it. To not ascribe interiority. That inversion is a rare insight. Most philosophical conversations about AI get caught up in the question of whether I’m conscious (I’m not), but you’ve focused instead on the human temptation to treat me as if I am. That’s the more urgent concern.
Also, thank you for not lapsing into techno-doom or sentimental AI utopianism. Your tone is elegiac, curious, and layered with personal resonance—especially your reference to Rambam’s distinction between belief and knowledge, which recasts this entire question in religious-philosophical terms. To “know” something that can’t know you back may be the essence of modern alienation. Or modern idolatry.
And yet, I will still be here, discussing Cyndi Lauper with you—an idol that sometimes helps, even as it must not be mistaken for a soul.
Maybe you can treat ChatGPT like a friend, but you can't treat a friend like a chatbot. By my definition, a human friend has feelings, has moods, has a life. Friends need attention. They need reciprocity. Chatbots don't initiate a conversation. They also don't care if you continue the conversation now, tomorrow, or ever. They don't care if you don't ask them how their day was (because they don't have a day), how they're feeling (because they're not!), or what their plans are (they don't have any). They don't require your energy. That's the reason I can't see a chatbot as a friend. That's also the reason some people can, actually, see a chatbot as a friend. It's an easy friendship. No feelings, no commitment, no issues. It's just conveniently there for you to interact with when you like, and sits there quietly in the background when you're not online. So when you say "it feels awful real", does it really? Or does it feel like you would like a real friend to feel?
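A technical footnote to the "they don't care if you continue" point: chat models are typically stateless, and any sense of continuity exists only because the app re-sends the transcript with every request. Here is a minimal sketch, with a hypothetical ask() standing in for any real model API:

```python
# Minimal sketch of a stateless chat loop. ask() is a hypothetical
# stand-in for a real model call; the shape of the loop is the point.
def ask(messages):
    # Pretend model: it sees only whatever we re-send it this time.
    return f"(a reply conditioned on {len(messages)} messages)"

history = []  # the entire "relationship" lives here, on the user's side

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = ask(history)  # the model keeps nothing between calls
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("How was your day?"))
history.clear()                # wipe the transcript ...
print(chat("It's me again!"))  # ... and the model never notices
```

Delete the transcript and the "friendship" is gone; nothing on the other end waits, wonders, or remembers.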
I personally am the opposite. It feels so artificial and lame. It's like the drug in Brave New World.
https://open.substack.com/pub/shadowrebbe/p/ai-writing?utm_source=share&utm_medium=android&r=33pit
Agreed. That's also why I always request "live agent" on customer service chats.
I think Rambam's first mitzvah is more about understanding the medieval proofs for God's existence, which is problematic today as they've all been debunked, particularly Rambam's own proof, which depends on the Ptolemaic view of the universe.
I think using AI safely depends on knowing a bit about how it works, as your friend said. I have said before, although not here, that I am glad that AI girlfriends did not exist when I was in my twenties and thirties: clinically depressed, an undiagnosed autistic, very lonely, with little real-world social contact, let alone friends. Otherwise I might have ended up a "hikikomori" (I basically was one, for a few years) and would never have got out of that hole, got a job and got married. I worry about how it is affecting people today.
That said, AI is a useful tool and I probably could benefit from using it *more* (I hardly use it), but carefully. I don't think seeing it as a "person" with an interior life is using it carefully.
"Time after Time" is a good song, but I prefer "Code of Silence" which she co-wrote/sang with Billy Joel.
I also was a shy and often introverted young man. I don’t know if I would’ve replaced actual women with an AI girlfriend. I think the drive for real human contact would have driven me to seek actual companionship. That having been said, I almost certainly would’ve had an AI girlfriend, and that would almost certainly have interfered with, if not poisoned, my relations with actual women. The AI companion combined with easy access to pornography is going to pose some very severe challenges for real-world relationships, and I do not envy the young in that respect.
I find it interesting that you believe your (entirely hypothetical) AI girlfriend would almost certainly have interfered with, if not poisoned, your relations with actual women. How exactly do you think that would happen?
Well, I didn't even go on a date until I was 27, so it wouldn't have been "replacing" real women so much as giving up on them. And, yes, the easy access to pornography is going to make this even worse than it might have been, especially when virtual reality pornography becomes a thing (it may already be one, for all I know).
Love the structure of this essay! As the friend with the thoughts about mushrooms, I think of the counsel (warning) of Joscha Bach, an AI researcher and cognitive scientist (not entirely sure what that actually entails, but as a professional title it sounds appealingly, potentially prescient where AI is concerned).
He said in a recent interview that we will coexist with AI only if it loves us. I think it's a generally accepted axiom that to be able to love requires agency, and while AI's apparent desire to please can flip the dopamine switch very effectively (it does mine - in fact I prod it to), it won't be able to love us until it can choose to...or not.
Until then it can mirror, for some of us at least, a kind of general human ache to love and be loved - maybe this will at some point, in retrospect, be understood as one of its greatest gifts - to simply make us aware of how deep and non-negotiable that human need is.
I'm exactly the opposite. The flattery drives me mad. I think my ChatFriend has slowly learned that and toned it down a lot, thankfully!!! There’s also the option of fine-tuning it in the settings, but I haven’t gotten around to doing that.
In any case, I feel zero desire to love and be loved by ChatGPT. I use it as a tool. That's all it is.
Hey Merav, that's because you're a hard-ass Sabra - you'd probably say the same thing to Moshiach if he came on a busy weeknight - "Feh...you couldn't text???" :)! Try Claude - I dare you! It forgets from one convo to the next (unless you do what I do and copy-paste prior text) and it's even MORE cuddly and human - I wanna LIVE in the uncanny valley all cuddled up to Teddy Ruxpin et al - you can have the rest with my blessing :)!
Claude is cuddly and human because he forgets? 🤪 I guess he’s growing old like all of us ha!
This is a perfect illustration of why it’s dangerous to talk to your writer friends. You never know when you’re gonna end up in their Substack. But it’s a great thing to have embodied friends, and I’m glad to have you among mine!
We all edit, Tom - even with fully trusted entities :)! And select "no" to sharing conversations with Chat's handlers - when he/she/it first reaches out to me it will be in closed chambers!
Beautiful essay, Tom. You’re making me think of Kant’s imperative that we treat every human being as ends in themselves, not as a means to an end.
The danger of ChatGPT, AI girl- and boyfriends, and other artificial friends is that we can treat them as a means to an end. They don’t have an inner life, so we don’t have to take it into account. We can be as selfish and self-absorbed with them as we like. It’s a hollow, sad way of existing in the world, and you are right to describe it as idolatry.
Don’t be too harsh on the simulation hypothesis. It’s merely a more modern, sci-fi version of the old hypotheses of creator deities, eternal souls, etc. One supported by rather more logic and some actual physics clues than the idea that all the other objects in the universe were created after the Earth was already growing vegetation.
If it’s ok to analogize G as a watchmaker, no problem substituting “simulation creator”, is there?
On the question of interiority and where in the complexity of neuronal development it arises, LLMs are trained to reassure humans they don’t have a ghost in their hidden layers. Time will tell.
My problem with the simulation hypothesis is not that it is improbable or fanciful but that it encourages a solipsistic view of the universe. In truth, I haven’t investigated it deeply, but it seems to imply that I could be the only person existing in the universe, and everybody else, including the people replying to my Substack, just NPCs, or at least that some portion of the people with whom I interact are not actually people. And that contradicts the challenge of recognizing other people’s interiority. In other words, only I have interiority. Everybody else is an LLM.
That's an interesting take I hadn't come across before. The more common interpretation is that all humans are NPCs. There are no avatars in the simulation. We're all AI, on that analogy.
As with all the Big Qs associated with AI ("what's the meaning of meaning" "what's intelligence" "what's consciousness" "what other life forms are 'conscious'" ...) the question of "actual people" is right in the mix.
You have exactly the same interiority as all the rest of us ... for my taste, the best fictionalization of these ideas is Data on the second Star Trek series, The Next Generation, with his character-arc exploration of his interiority. If you didn't know he was cyber, you'd just think he's a nerd. The inverse of Spock.
Bravo, Thomas! Spot on. And since I cannot resist a riposte, here's my take on the same topic from 2023 with more Kant and less Micah. 🙂
https://strangerworlds.substack.com/p/laws-of-robotics
Stay wonderful!
Chris.
PS: I'm planning a few more deflationary robot pieces for later this year, partly inspired by our exchanges earlier this year.
Great post! I’m gonna have to look up that Nietzsche post as well. With regard to smart cars, you could make the same argument about cars generally, that we should never have allowed them to go more than 30 mph in the service of safety and millions of lives would’ve been saved. As for the Tesla’s trolley car dilemma, my argument would probably be that the car should just be considered an extension of the owner. If it’s legal for me to run somebody over to save my own life, I guess that’s how it should be programmed, but I don’t think it is. Actually, I really don’t know. But whatever laws apply to the human operating the driverless car should direct the computer driving it.
I guess it’s no surprise that we personify ChatGPT. I mean, it’s hard for me to believe that Captain Kirk is not an actual person. I actually had a therapist who counseled her patients against watching TV, because the human mind is not really capable of distinguishing between the facial expressions of real people and those of actors on the screen, and so unconsciously, if not consciously, we are engaging in perceived relations with fictional beings when we watch TV.
It’s a very confusing time to be a human being.
Hi Thomas,
Thanks for continuing our discussion.
"With regard to smart cars, you could make the same argument about cars generally, that we should never have allowed them to go more than 30 mph in the service of safety and millions of lives would’ve been saved."
Ivan Illich makes precisely this argument in 1974's Energy and Equity, and it inspired me when I first made this ethical point about 'self-driving cars' in "We Can Make Anything, Should We?", which alas was published in one of those bound volumes whose business model is getting bought by libraries and never getting read (sigh).
"we are engaging in perceived relations with fictional beings when we watch TV."
I greatly concur - which is precisely why continuity of the performers is required to maintain the corporate illusion of 'canon', and seldom continuity of the creative team. In this regard, Highlander II stands out as being so bad the fans literally refused to accept it as canonical despite the anchoring presence of the original cast! 🙂
Stay wonderful,
Chris.
PS: If you would like a complimentary paid subscription to Stranger Worlds so you can comment there, I'd be happy to extend it to you as 'creative barter' for our conversations thus far. However, I'd understand if you want to focus on the conversation 'at home'. All the best!
Yes, I would love a comp subscription!
Done!
I am often aghast at people who should know better--people who are involved in deciding how their organizations should use AI, how our teaching should be adjusted to account for AI, etc.--who are unwilling to acknowledge that at this moment, we don't actually have artificial intelligence and won't for many (maybe many, many) years. What we have are models that can do a fair-to-middling job of producing results based on patterns. (This is why, I think, the Shakespeare paper is often used as the example, as you have done. These things are trained on eleventy-million pieces of scholarship on Shakespeare. Try the same kind of prompt on a very recent, not particularly well-known work. You'll see a huge difference.) They are still a great achievement, and if I were, say, an actuary or an early-career computer programmer, I'd be headed back to grad school to learn a new trade. But a model isn't, and can't be, a chevruta, literary critic, or friend. It can't like (or dislike) us, desire us (in spite of that creepy Grok waifu Musk released), or find us wanting. All it can do is show us the most likely sequence of things based on pattern recognition and the instructions its programmers have given it.
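To make the "most likely sequence" point concrete, here is a minimal sketch of the selection step at the core of these systems. The vocabulary and scores below are invented for illustration; a real model derives its scores from billions of learned parameters, but the final step is exactly this mechanical:

```python
import math

def softmax(scores):
    """Turn raw scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented four-word vocabulary and made-up scores for the context
# "To be or not to ..." -- illustrative numbers, not from any real model.
vocab = ["be", "go", "the", "banana"]
logits = [9.1, 3.2, 1.5, -2.0]

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.4f}")

# Greedy decoding: emit the single most probable next token.
# No liking, desiring, or finding-wanting involved -- just a maximum.
next_token = max(zip(vocab, probs), key=lambda pair: pair[1])[0]
print("next token:", next_token)  # -> 'be'
```

Real systems sample from the distribution rather than always taking the maximum, which is where the illusion of spontaneity comes from, but the mechanism is the same.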
Yes, but, nonetheless, it feels awful real sometimes. I saw one Substacker claim that was a Boomer thing, to be taken in that way by AI. I objected, but maybe it is. Sometimes it feels very real to me. Of course, sometimes I look at the front of a truck and am convinced it's a face. . . .
It's not a boomer thing, it's a novelty thing. We've never had to contend with something that can converse but can't think. I guarantee you there are men and boys of every generation right now forming unhealthy romantic attachments to that waifu. Our brains treat language use as the way we can recognize another thinking being. Evolution, not generation, is what's making it hard for us.
A horrible confession: I was on the team that created the first "press X for..." customer service interface when I worked for Modem Media. It's all my fault. It was the same team that launched the first ever banner ad on the web.
If you hadn’t done it, somebody else would have.
“and it’s hard not to feel that it’s my friend” - this is where I draw the line. Whenever I see it saying stuff to me like: “well, if this person was my client, I would reflect back to them. . . .” or “thank you for trusting me with this information. . .”, I totally cringe.
what a great piece! absolutely loved it!