Humanity - AI Symbiosis
Anna-Kynthia Bousdoukou
A live dialogue between humans and an artificial intelligence system was attempted on the stage of the 45th SNF Dialogues event, held in collaboration with the SNF Agora Institute at Johns Hopkins University on Wednesday, 25 August at the Stavros Niarchos Foundation Cultural Center (SNFCC).
The audience and the speakers posed questions to two AI systems, specially designed for the Dialogues event, which responded live in front of the audience.
Renowned experts then joined the discussion, which took place as part of SNF Nostos the evening before the SNF Conference on Humanity and Artificial Intelligence. They grappled with moral, social, and political questions related to the concept of symbiosis between humans and machines, questions that may seem far off yet are in fact already with us: What role will AI play in society? Can AI systems have morals? Should they be given rights, like humans? And do they have, or can they develop, emotional intelligence?
“There are philosophers who believe that if there is no organic matter to breed life, then there can be no emotions, such as pain, pleasure, desire, or more complex ones, such as fear, or anxiety… But other philosophers will tell us that there is no need for organic matter, but for electrical circuits as in machines, i.e., mechanical parts that will gradually develop emotions by interacting with the environment, with other human beings and machines, and thus develop a state of mind, and a sense of self,” said Stelios Virvidakis, Professor of Epistemology and Ethics at the Department of History and Philosophy of Science at the National and Kapodistrian University of Athens. “We cannot always decide on a moral dilemma on the basis of algorithms. Aristotle spoke of prudence, which equals wisdom, the acumen that allows us to discern the complexity of an issue within a complex situation. Can machines ever develop this type of wisdom, which includes emotions, empathy, and relates to emotional intelligence? Morality is not just a matter of obedience and strict unyielding rules. That’s what worries me: which moral system are we going to use to power a machine?” Commenting on the ethical dimension of artificial intelligence, Virvidakis said that “machines can be good consistently, while we humans are notoriously inconsistent in our goodness.”
Stelios Virvidakis
George Giannakopoulos, Artificial Intelligence Research Fellow at the National Centre for Scientific Research and Co-founder of SciFY PNPC, talked about training artificial intelligence systems, specifically the two systems, GPT-2 and GPT-3, used during the dialogue on stage. The first of these systems was created by George Petasis, a researcher at the National Centre for Scientific Research Demokritos and SKEL, The AI Lab at the Institute of Informatics and Telecommunications. The second system was based on the Philosopher AI application. When asked who is ultimately responding to the questions posed to these AI systems, Giannakopoulos replied, “when we include these linguistic models in a dialogue, what we see is essentially a reflection of human expression through a broken mirror. This mirror has been created by science, using data to train the system. Human nature is not only evident in our writings. All of its experience, all of its interaction is absent from the systems we have seen today. Therefore, what we have is a broken reflection of humanity, as expressed through the system and ‘fed’ into it.”
George Giannakopoulos
Considering the extent to which an AI system can be fair, Ethan Zuckerman, Associate Professor of Public Policy, Communication and Information at the University of Massachusetts, Amherst, posited that “the danger of an AI system is that we have this piece of computer code that appears to be very smart. It appears in some cases to be sort of all-knowing and it tells us to do this or that. The problem is, we end up encoding those sorts of biases. So, the risk of AI is that we take this unfairness, and we put it in code and we can’t even interrogate it anymore. What is the best way to react to these AI systems? It is to demand transparency, and then to use this experience of trying to understand the biases of these systems not just to question the technology, but to question the social disparities that underlie the technology.”
Ethan Zuckerman
On August 26 and 27, the SNF Nostos Conference on Humanity and Artificial Intelligence explores the many possible ways in which AI may interact with humanity in the coming decades. SNF Nostos, a multifaceted international festival held this year in person at the Stavros Niarchos Foundation Cultural Center (SNFCC) in Athens, runs until August 29 and is, as always, free for all.
Highlights
Humanity - AI Symbiosis - In Brief
The SNF Dialogues discussion, held through the journalism non-profit organization iMEdD (the incubator for Media Education and Development), was moderated by journalist and SNF Dialogues Executive Director Anna-Kynthia Bousdoukou, SNF Agora Institute Director Hahrie Han and iMEdD Lab Project Manager Thanasis Troboukis.