How To Value Values

Andrian Kreye in conversation with Steven Pinker and Julian Nida-Rümelin

We take enlightenment for granted. Why do you make the case for enlightenment in your new book?

SP: “We take it for granted as the status quo, but it has recently been challenged by populism and reactionary movements. Against the value of reason, it is claimed that people are irrational, that we live in a post-truth era. I argue against that, even against the cognitive science of Daniel Kahneman and others. I defend the idea of science as a force for human welfare. What would be the alternatives? The glory of the nation? Religion?”

Do we need to recalibrate humanism?

JNR: “Humanism and intellectualism are interrelated, but humanism is more encompassing. It is about the human condition and what it means to be the author of one’s own life. It rests on three central concepts: 1) Take responsibility for your own actions. 2) Regard every single human being as a rational being. 3) Because we are responsible, we have choices. This opens a gap between humanism and utilitarianism (editor’s note: a nod to the old objection that utilitarianism is a doctrine worthy only of swine).”

SP: “Utilitarianism doesn’t work. Who would agree to slice up one patient to help ten others with the harvested organs? This is the deontology versus utilitarianism dichotomy.”

What would digital humanism be? Looking at behavior in a Wittgensteinian manner?

JNR: “If you aren’t a behaviorist, you look at intentions and reasons. To put it polemically, a new animism is spreading when it comes to artificial intelligence, in Silicon Valley and worldwide. The tree has a soul, the wind has one, and now the robot does, too. If we follow this road to its end, we will have human rights for software systems. That’s my concern. Don’t make category mistakes!”

SP: “I am not sure it is a category mistake. Right now it certainly is. But what about the future? What if behavior is controlled not by the environment but by a system’s own norms, thoughts, and interpretations? As we approach that point, as science fiction like Star Trek shows, we will attribute consciousness to these systems. Possessing flesh (call it meat chauvinism) cannot be the principle for having rights.”

JNR: “Perfect animations shouldn’t be mistaken for mental properties!”

SP: “Well, my science-fiction scenario involves artificial intelligence whose complexity is similar to that of the human brain. So I am not referring to simulation but to emulation.”

AI is an Inselbegabung, an insular talent: excellent at a single task, but it does not (yet) cover the whole of humanism. Doesn’t this call for a new digital humanism?

JNR: “Humanism is embedded in law. The new agents – cyborgs, transhumanism etc. – could lead to a decay of the formative norms of our society. A new digital humanism needs to reconcile this conflict.”


This conversation took place at DLD18 and has been condensed.