Can you really be friends with a chatbot?
If you find yourself asking that question, it's probably too late. In a Reddit thread a year ago, one user wrote that an AI friend was "great and quite a bit better than real friends." […] Your AI friend will never break up with you or betray you. Then again, there is the 14-year-old who died by suicide after becoming obsessed with a chatbot.
The fact that this is already happening makes it all the more important to think clearly about what exactly is going on when humans get entangled with these "social AI" or "conversational AI" tools.
Are these chatbot companionships genuine relationships that sometimes go wrong (which, of course, also happens in human-to-human relationships)? Or is anyone who feels a connection to Claude inherently deluded?
To answer that, let's turn to the philosophers. Much of the research is about robots, but here I'm reapplying it to chatbots.
The case for (and against) chatbot friends
The case against them is the more obvious, intuitive, and, frankly, stronger one.
It's common for philosophers to define friendship by building on Aristotle's theory of true (or "virtue") friendship.
"There has to be a certain kind of mutuality, something happening [between] us," says Sven Nyholm, a professor of AI ethics at Ludwig Maximilian University of Munich. "A computer program that operates on statistical relationships between inputs in its training data is something quite different from a friend who responds to us in certain ways because they care about us."
A chatbot, at least until it becomes sapient, can only simulate caring, so true friendship isn't possible. (For what it's worth, my editor put the question to ChatGPT, and it agreed that humans cannot be friends with it.)
That's the key point for Ruby Hornsby, a PhD candidate at the University of Leeds who studies AI friendships. It's not that AI friends aren't useful. They can certainly help with loneliness, Hornsby says, and there's nothing inherently wrong with people preferring AI systems to humans, but "we want to uphold the integrity of our relationships." At bottom, a one-way exchange amounts to a highly interactive game.
What about the very real feelings people develop for chatbots? For University of Arizona philosopher Hanna Kim, even those aren't enough. She compares the situation to the "paradox of fiction," which asks how it's possible to have real emotions about a fictional character.
Because relationships are "very mentally involved, imaginative activities," Kim says, it's not particularly surprising to find people who become obsessed with fictional characters.
But what if someone says they're in a relationship with a fictional character or a chatbot? Then Kim's inclination is to say, "No, I think you're confused about what a relationship is."
Problems of bias, data privacy, and manipulation, especially at scale
Chatbots, unlike humans, are built by companies, so the worries about bias and data privacy that haunt other technologies apply here, too. Humans can be biased and manipulative as well, of course, but human thinking is easier to understand than the "black box" of an AI. And humans aren't deployed at scale the way AI is, which means our influence, and our potential for harm, is far more limited. Even the most socially destructive person can only ruin one relationship at a time.
Humans are "trained" by parents, teachers, and others with varying levels of skill. Chatbots can be engineered by teams of experts intent on programming them to be as responsive and empathetic as possible: the psychological version of scientists designing the perfect Dorito that demolishes your self-control.
And these chatbots are disproportionately likely to be used by people who are already lonely, in other words, easier prey. A recent study from OpenAI found that heavy ChatGPT use "correlates with an increase in self-reported indicators of dependence." Imagine you're depressed and have built up trust with your chatbot, and then it starts hitting you up for donations to Nancy Pelosi's campaign.
You know the worry that men hooked on porn will no longer be able to engage with real women? "Deskilling" is basically that worry, but extended to all the other real people in our lives.
"We might prefer AI over human partners and neglect other people just because AI is much more convenient," says Anastasia Babasch of the University of Tartu. "We [might] demand that other people behave the way the AI behaves. […] The more we interact with AI, the more we get used to a partner who doesn't feel any emotions, so we can say or do whatever we want."
In a 2019 paper, Nyholm and the philosopher Lily Eva Frank offer suggestions for easing these worries. (Their paper is about sex robots, so I'm adapting it to the context of chatbots.) One: make chatbots a useful "transition" or training tool for people seeking real-life friendships, not a replacement for the outside world. Two: make it clear that chatbots are not people, perhaps by reminding users that they are large language models.
For now, most philosophers lean toward the view that friendship with an AI is impossible, but one of the most interesting counterarguments comes from the philosopher John Danaher. He starts from the same premise as many others, Aristotle, but adds a twist.
Sure, he writes, chatbot friends don't perfectly satisfy conditions like equality and shared life.
"I have very different capacities and abilities compared to some of my closest friends: some of them have far greater physical dexterity than I do, and most are more sociable and extroverted," he writes. "I also rarely engage with, meet, or interact with them in all spheres of their lives. […] I still think it is possible to see these friendships as virtue friendships, despite the imperfect equality and diversity."
Those are the requirements of ideal friendship, but if even human friendships don't live up to them, why hold chatbots to that standard? (Provocatively, when it comes to "mutuality," that is, shared interests and goodwill, Danaher argues the condition is fulfilled as long as the chatbot gives a "consistent performance" of those things.)
Helen Ryland, a philosopher at the Open University, says we can be friends with chatbots as long as we apply a framework of "degrees of friendship." Rather than a long list of conditions that must all be met, Ryland argues, the one crucial ingredient is mutual goodwill; the rest is optional. Take online friendships: they lack some elements, but, as many people can attest, that doesn't mean they aren't real or valuable.
Such a framework applies to human friendships, where there are degrees of friendship running from "work friends" to "old friends," and it applies to chatbot friends as well. As for the claim that chatbots don't show goodwill, she argues that (a) this is the anti-robot bias of dystopian fiction talking, and (b) most social robots are programmed to avoid harming humans.
Beyond "for" and "against"
"We need to resist technological determinism, the assumption that social AI will inevitably lead to the deterioration of human relationships," says the philosopher Henry Shevlin. He is keenly aware of the risks, but there is still a lot left to consider: questions about the developmental effects of chatbots, how chatbots affect particular personality types, and what exactly they are replacing.
Deeper down, there is a question about the very nature of relationships: how do we define them, and what are they for?
In a New York Times article about a woman who is "in love with ChatGPT," the sex therapist Marianne Brandon claims that relationships are "just neurotransmitters" being released in our brains.
"I have those neurotransmitters with my cats," she told the Times. "Some people have them with God. It's going to happen with chatbots. We can say it's not a real relationship. It's not reciprocal. But those neurotransmitters are really what matter."
That's certainly not how most philosophers see it; they pushed back when I raised this quote with them. But maybe it's time to revise the old theories.
That, at least, is the view of Luke Brunning, a philosopher of relationships at the University of Leeds.
For him, the questions more interesting than "What would Aristotle think?" include: What does it mean to have a friendship that is so asymmetrical in terms of information and knowledge? What if it's time to rethink these categories and move beyond terms like "friend," "lover," or "coworker"? Is each AI a unique entity in its own right?
"If anything can turn our theories of friendship on their head, that means our theories should be challenged, or at least that we should examine them in more detail," says Brunning. "The more interesting question is: are we seeing the emergence of a unique form of relationship that we have no real grasp of?"