
One question for Jon Rueda, a Ph.D. candidate and La Caixa INPhINIT Fellow at the University of Granada, where he studies the intersection of bioethics, the ethics of emerging technologies, and the philosophy of biomedical innovation.

Photo courtesy of Jon Rueda

Can AI help us be better people?

Yes. I have published a new article with a colleague, Bianca Rodriguez, in which we argue that AI assistants could indeed help us improve some aspects of our morality. Some AI models aim to make us more aware of the limitations of our psychology when we are trying to decide what to do, or to provide relevant factual information. Some of these AIs start by learning your values and preferences, and then, in concrete moments, try to recommend the best course of action. These are controversial in some ways, because they are not going to improve your capacity to make your own decisions. We analyze another, more promising system called the Socratic assistant, or SocrAI, which is based mainly on the idea that through dialogue we can advance our knowledge, think about complex moral issues, and improve our moral judgments.


This AI-based voice assistant hasn’t been developed commercially. But I know there’s interest, because one of the proponents of this idea, the philosopher Francisco Lara, told us that some companies have reached out to him about it. This interest is going to grow. Because of the very famous ChatGPT, there is increasing awareness of how much AI is improving. We feel that we are having a real conversation with an AI system.

The AI-based Socratic assistant we discuss in our paper wouldn’t necessarily be trained on Socrates’ words as we know them from Plato’s writings—it would just try to emulate his Socratic method. It’s based on a more procedural understanding of ethics, which is the more philosophically provocative aspect of our paper. This Socrates is not going to tell you, “You should do that,” in a concrete moment, but will help you improve your reasoning—to consider empirical facts, to think more logically and coherently. So it won’t tell you what is right or wrong. Socrates never says what the truth is, the concrete truth. But through the dialogues, he shows you the weak points of your arguments. Through irony, he tells you that what you have said can be argued against. And in that process you learn and improve your moral reasoning.

We are optimistic in our article, but there are also many concerns that we do not deal with, like data protection: What will happen to the data that is created through users’ interactions with the system? That data is also important and will help to improve the system.

These systems could also have a problematic tendency to shape people’s autonomy and agency. AI could influence our character, and manipulate or nudge us toward certain types of behavior. There could also be a problem of deskilling moral abilities. Imagine that we develop a kind of dependence on these systems. If they do not protect our autonomy—if people start deferring to the advice of AI systems when making ethical decisions—in the long term that could be negative. So it’s difficult to have a balanced appreciation of this technology.


Would it be good to have children grow up with a Socratic assistant? I have the intuition that we should be more protective of children because they are still developing. They are creating their own autonomy, and it’s more sensible not to offer them technologies that would limit or narrow it. But on the other hand, children are already exposed to other kinds of technologies that can manipulate them, that shape their preferences and perspectives. So the relationship between children and new technologies is something that is already happening. And of course, AI applications could have a role in this. If we give children good tools to improve their moral abilities, that would be good, but we should also be more concerned about the deleterious effects.

Some people argue that, because of our evolutionary history, we are biased toward those closer to us in time and space, that we have many tendencies to be partial, and that AI could help us be more like an ideal observer. This view is also problematic in some sense, because we know that AI systems have different kinds of biases. Some of these biases are particular to AI, but they are very common and very similar to the biases in our own psychology. In that sense AI could not only reproduce but also amplify human biases, so we should not be overly optimistic about using AI to overcome the limitations of our moral psychology.

Lead image: Mariart0 and Sabelskaya / Shutterstock
