Thursday, February 21, 2019

Asimov and the Great Conversation

Isaac Asimov’s books may not be considered “Great” as in canonical, but he does contribute to what Mortimer Adler called the Great Conversation, exploring in his stories questions of human agency and free will, of what minds are, what thoughts are, and what emotions are. Sometimes I don’t agree with his answers, and sometimes I don’t know if I agree with his answers, but he always makes me think.

Although it was written rather late in Asimov’s life, The Robots of Dawn takes place relatively early in his world’s history: to give a rough indication, over the course of this ten-year reading plan, I’m going through all of his Robot, Empire, and Foundation novels in in-world chronological order, and this is only year three of my schedule. Humans have settled a few planets, but there is no empire yet. So what will happen as they expand? Will the future empire be designed by robots? For robots? Or will robots, bound by the Laws of Robotics not to harm humans, stay out of the way and let humans do the work themselves, since challenges are good for society?

Asimov seeks the truth behind the truth (I’m tempted to say the foundation under the foundation, but that’s getting ahead of the story) in asking whether robots will best help humanity by not helping humanity. I would add yet a third layer by asking whether strengthening society in fact keeps humans from harm. The First Law of Robotics says nothing about humanity in the collective but only about humans one at a time: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Many individual humans have led happy lives in societies less knowledgeable than ours or Asimov hero Lije Baley’s, less scientific, less crowded, less organized. Since the robot Daneel Olivaw recognizes that challenges are good for humans, might it not be good to keep society weak so as to give societal challenges to individual humans? In other words, I’m asking whether robots would best help individual humans by not helping society, and whether helping society is exactly what they would be doing by making it easy for humans to expand to other planets.

Near the end of the book, it is stated as undebatable that robots have no feelings, only positronic surges interpreted (mostly by us) as feelings. But perhaps, the narration suggests, humans have no feelings, either – that what we call feelings are only neuronic surges interpreted as feelings. I don’t really understand how Asimov can ask this. Humans do interpret these phenomena as feelings, and that very interpretation makes them feelings without a doubt. How can I think I’m feeling something and not have a feeling? As the old philosophers pointed out, if you think you see a green man, that is in fact the visual sensation you are having, even if you are hallucinating, which is likely in this scenario. On the very next page after this supposition, Baley envies the robots for having no fear of rain, while Baley himself is terrified. Believing that the fear is irrational and comes from neuronic activity doesn’t negate the terror that he knows exists in at least one conscious mind: his own. No one could hold a hot coal in the hand and believe that pain was only a neuronic surge.

Suppose you’re reading a book, and at the bottom of page 3, you find the words “Please don’t turn the page.” You turn the page anyway (who wouldn’t?), and on the left side of the next spread, page 4 says, “That hurt.” Page 5 says, “Please don’t do that again,” but of course you again turn the leaf over. Page 6 bears a single word, “OUCH!” The next says, “If you continue turning pages, you will kill me.” And on and on it goes. The book responds to every page turn you make; it delivers a message that would not be delivered if you didn’t turn the pages. But would anyone consider this series of responses the equivalent of a human’s internally felt pain after stepping on a nail?
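
To make the thought experiment concrete, here is a minimal sketch of such a book, written as a little Python lookup table. It is entirely my own illustration, not anything from Asimov: every “protest” is just a canned string triggered by a page number, with no state anywhere that could count as a felt experience.

```python
# A toy version of the "talking book": every response is a canned string
# looked up by page number. Nothing in the book registers, remembers, or
# suffers anything between page turns.
RESPONSES = {
    3: "Please don't turn the page.",
    4: "That hurt.",
    5: "Please don't do that again.",
    6: "OUCH!",
    7: "If you continue turning pages, you will kill me.",
}

def turn_to(page: int) -> str:
    # The "reaction" exists only as output text; there is no internal state
    # that could correspond to an internally felt pain.
    return RESPONSES.get(page, "...")

if __name__ == "__main__":
    for page in range(3, 8):
        print(f"page {page}: {turn_to(page)}")
```

The responses look expressive when read in sequence, but the program makes plain that they are only outputs paired with inputs, which is exactly the gap the thought experiment is pointing at.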

A random note as a coda: I was thinking the other day about the Turing Test, which a computer would pass if its part in a conversation with you – a conversation carried out in written form as, for instance, in phone texts – made you believe you were talking with a human. I think that Alan Turing imagined the test as an incentive to make AI grow ever more sophisticated. But does it take all that much sophistication for a computer program to convince you that you are texting with a typical teenager?
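
To show how little machinery that might take, here is a deliberately crude sketch, entirely my own toy and not any real chatbot: a few canned, noncommittal replies chosen at random, ignoring the incoming message altogether.

```python
import random

# A deliberately unsophisticated "teenager": canned, noncommittal replies
# chosen at random, with no parsing, no memory, and no model of the
# conversation.
REPLIES = ["lol", "idk", "whatever", "k", "that's so random"]

def teenager_bot(message: str) -> str:
    # The input is ignored entirely; any apparent relevance is accidental.
    return random.choice(REPLIES)

if __name__ == "__main__":
    for msg in ["How was school?", "Did you finish your homework?", "Want pizza?"]:
        print("you:", msg)
        print("bot:", teenager_bot(msg))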

One month later, I have to add a coda to the coda of this post by recommending a story by my friend Jared Oliver Adams. Here my buddy speculates on what might happen if God were to give robots sentience, that is, if they were able to feel the pain indicated by (or caused by, or accompanied by) their “positronic surges.” At least that’s one interpretation of the story’s wonderfully speculative setting. The tale of Pope Packard also has suspense and adventure, theology and philosophy, and possibly ghosts. If you enjoy it, please visit Jared’s website and let him know!
