The Automatic Sweetheart Has Her Own Feelings, You Know - Elf M. Sternberg
Re-reading Sam Brinson's Are We Destined To Fall In Love With Androids?, and my response to it, I noticed a pattern among the stories to which I linked, the ones in which I showed how much the "literature of the future" (which is, in fact, really about the present, and ways of addressing the present) has addressed the question of "human-cyborg relations" (to use fussy C-3PO's term). One of the overriding questions asked in these stories, one elided in 2001 and addressed directly if awkwardly in 2010, was this:

What is our moral obligation to the robots we create?

In a lot of ways, science fiction writers use this as a metaphor for the question of our moral obligation to our children and progeny, but as experience with actual AI becomes real, we (science fiction writers) are already starting to ask questions about our moral obligations to our creations. This isn't a new problem. The very first "artificial life" story, Frankenstein, addresses the issue head-on in the last dialogues between Victor and the Monster, and later between Walton and the Monster.

If you, like me, believe that consciousness is the story we tell ourselves about ourselves, a way of maintaining a continuity of self in a world of endless stimuli and the epiphenomenal means by which we turn our actions into grist for the decisions we make in the future, then maybe there will never be conscious robots, only p-zombie machines indistinguishable from the real thing, William James' automatic sweetheart.

But if we want our robots to have the full range of human experiences, to be lovable on the inside as much as we are, then we're going to have to give them an analogous capacity to reason, to tell themselves stories that model what might happen, and what might result, and therefore we have to ask ourselves what moral obligations we have toward people who are not entirely like us, or whose desires are marshalled in a way that suits us entirely.

My own take has been rather blunt: we are obligated to treat actually existing conscious beings as moral creatures, and they have the rights and responsibilities of all moral creatures. At the same time, the ability to relieve them of the anxieties and neuroses of human beings, our own vague impulses shaped by evolutionary contingency that make us miserable (and they do: happy people lack ambition; they do not build empires), may make them more moral than we are. (Asimov addressed this a lot; in many ways he was far ahead of his time.)

Current Mood: amused

Comments
From: elenbarathi, February 17th, 2017, 06:33 pm (UTC)
"What is our moral obligation to the robots we create?"

I'd say, it's to not give them the capacity to reason, to tell themselves stories, to feel and care and grieve. Robots are slaves - that's the literal meaning of the word robot - and the whole point of building mechanical slaves is so we don't have to have human or animal slaves any more. A horse has a heart to break; a car doesn't. An elephant can hate its masters; a bulldozer can't.

"But if we want our robots to have the full range of human experiences..."

Just because we want to do something doesn't mean it's either kind or wise to do it. The full range of human experience in slavery is pretty grim; if we give our poor robots the ability to feel that, we'll be morally obligated to end their enslavement and affirm their civil rights. And then who's going to do all the work?
