by Nicholas Lim
This article is a recap of Cogito Collective’s 1st Meeting on February 20, 2019 at Thirsty @ Liang Court.

Humans are, and will always be, fundamentally afraid of the unknown. This fear is amplified when it lies beyond our control, because we do not know where it will lead us. One prime example comes from the field of Artificial Intelligence (AI). If and when machines gain sentience, a host of philosophical questions about ethics, justice, and human interaction will follow. It is one thing to say machines are simply tools and therefore morally neutral; it is quite another when they start thinking and making decisions for themselves. This was the question Cogito came together to discuss at our first meeting: “Will AI be benevolent or malevolent to humanity?”
We examined two different sources, one for each horn of the dilemma. The movie “Her” depicts a human falling in love with an AI, and the social consequences that follow. On the other hand, Isaac Asimov’s short story “…That Thou Art Mindful of Him” exposes a flaw in the Three Laws of Robotics when two robots ask themselves the question from the Biblical book of Psalms, “what is Man, that Thou art mindful of him?” What makes humanity so special that robots should obey its orders? These two texts present two fundamentally different perspectives on robots in relation to humanity, and fuelled some very interesting discussions for the night.
1. Ethics

Imagine a sentient pig comes up to you and says, “Master, I was born and bred for your consumption, and I have been fattened up for your pleasure. Please, cut me up and feast on me tonight!” This scenario was raised by Julian Baggini in his book The Pig That Wants to Be Eaten, and it poses the ethical question of whether your decision to eat animals changes with the recognition of their sentience. Similarly, we explored the question: “when machines gain sentience, is it moral for us to keep them purely as workers?”
Our working definition of “sentience” was anything that could experience emotions and sensations. This means that AIs could conceivably feel “tired” (whatever that means to an AI), or “pain”, or even emotions that we humans cannot feel. Would keeping them as beasts of burden be tantamount to slavery? The 17th-century polymath René Descartes held that animals were mere automata, programmed to react to certain stimuli in fixed ways, and he therefore often conducted vivisections on animals without anaesthesia. By his definition, animals would not be sentient beings.
It is no secret that humans like to organise society into hierarchies: parents have some measure of authority over their children, and humans have authority over domesticated beasts. Where do newly sentient AIs fit into this chain of command? The point was raised that robots are created by us, just as the act of domestication “creates” beasts of burden, and that this seems to justify using some animals for work and food. Machines, unfortunately, cannot provide us with food.
One interesting point brought up was that, because robots are programmed by humans, they cannot have the creativity to pose new problems. Whatever a robot can think about will still be confined to the parameters its programmers have set for it; robots cannot create new problems to solve. This seems to reflect a fundamental difference between robots and “the rational soul”, as Aristotle put it. AI seems confined to the structures humanity has put in place, and even with enough if-else branches, “machine learning” can only operate within certain boundaries, as the sketch below illustrates.
Because robots are programmed by humans, they cannot have the creativity to pose new problems.
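As a loose illustration of this point (the toy_agent function and its rules below are invented for this recap, not something presented at the meeting), here is a minimal rule-based “agent” in Python. However many branches we add, its entire behavioural repertoire is the set of cases its programmer anticipated:

```python
# A toy rule-based "agent": its whole repertoire is the branches its
# programmer wrote. It can respond to anticipated stimuli, but it
# cannot invent a new category of problem to solve.
def toy_agent(stimulus: str) -> str:
    if stimulus == "greeting":
        return "hello"
    elif stimulus == "task":
        return "working on it"
    elif stimulus == "shutdown":
        return "goodbye"
    else:
        # Anything unanticipated falls through to a canned response;
        # the agent never steps outside its programmed parameters.
        return "I do not understand"

print(toy_agent("greeting"))      # hello
print(toy_agent("new dilemma"))   # I do not understand
```

Even a statistical learner, rather than this hand-written rule set, only generalises within the space of inputs and objectives its designers chose; it does not step outside them to pose a genuinely new problem.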
2. Human Communications

Our discussion thus far assumed that sentient robots would simply be workers helping us with simple tasks, capable only of basic emotions. What if, through some technological advancement, robots became capable of more complex emotions, like love and hate?
The question of whether humans can love robots is a simple one. Love is a feeling projected onto another: an admiration of their character. Often this means that we love the values we see in the other person, which are just the values we wish to see in ourselves. This love isn’t usually reciprocal, and it is a huge reason why people love anime characters. In certain cases, this love is more literal.
The reciprocal question is slightly more difficult to answer. Can robots love humans? Given the above discussion on the limits of robot creativity, it is difficult to imagine a plausible scenario in which they could develop such a complex emotion. There are some proposed physics workarounds, such as appeals to the indeterminacy behind Heisenberg’s uncertainty principle to make room for a robotic “free will”, but these fringe theories are unlikely at best and completely wrong at worst.
The love (that humans have for robots) isn’t usually reciprocal.
Conclusion

One tantalising conclusion drawn was: “it seems like creating AI could create more problems for ourselves, so let’s just not make AI.” On the surface, this looks like the easiest way forward. But given humanity’s fundamental need for progress and to satisfy its curiosity, holding back seems like a lost cause.
Another conclusion we drew was that humans make absolutely bad AI, and vice versa: AI make absolutely bad humans. Our irrationality is what makes us so thoroughly human, and understandably so – we forgive others for their faults when we look at their circumstances. The evidence does not lie: best-selling fiction is full of humans like us who are subjected to abuse, suffering, and love, and who come to appreciate their frailty as flawed creatures. This is why Humans of New York is so enthralling, liked and shared by all on Facebook. Yet nobody would purchase software that does not follow its programming.
Humans make absolutely bad AI, and vice versa, AI make absolutely bad humans.
It seems that even with basic sentient AI incapable of complex emotions, we uncovered many ethical questions, some of them fundamental enough to have plagued humanity since the dawn of civilisation. However, it also seems that AI would not be able to achieve a sentience deep enough to have “free will”, which saves us from a host of even more complex problems. So the good news is that we can apply the same ethical frameworks we have used for the longest time to analyse the advent of sentient robots. Still, the same conundrum remains – whose code of ethics do we use? And when we program rules into machines, whom do we blame when things go wrong? Do we blame the program, the programmer, the government that legalised it, or the business entity that operates the AI? We’d better decide fast, before they decide to crush us beneath their metallic hooves.
The same conundrum remains – whose code of ethics do we use? And when we program rules into machines, whom do we blame when things go wrong? Do we blame the program, the programmer, the government that legalised it, or the business entity that operates the AI?
Join us in our next meet-up on friendships!
You’ve got a Friend in me
What does it mean to be a good friend?
March 23 2019, Saturday, 7-9pm @ Raffles Town Club Swimming Pool