Tuesday, August 21, 2007

How much of us is human?

In an article entitled "Enhancing Humanity," in the May/June edition of Philosophy Now magazine, Ray Tallis explores the growing concern about the potential for dehumanization as technology continues to take over our lives. This goes beyond the use of electronic devices that remove the requirement for actual contact to effect communication. It also extends to the potential for replacing one or more elements of the human body with synthetic, artificial parts.

Tallis makes a statement late in the piece in which he asserts that

"...our identity and our freedom lie in the intersection between our impersonal but unique bodies and our personal individual memories and shared cultural awareness."

He goes on to write that, in his view, it is the "distinctive" genius of humanity to establish an identity that distances itself from the organic.

I take Tallis' argument to mean that the organic elements of our humanity--our bodies, feelings, emotions--are not what makes us truly unique or necessarily what makes us human. I have to disagree.

While I agree with Tallis' view that organic existence is fraught with disease and suffering, that existence also has the potential for compassion, joy, and loving kindness. Intelligence without the underlying organic framework does not a human make. It is the balance of the two that allows us to care for one another--to reach beyond what is rational to that which speaks of the mystical, of "magic." Deprive humanity of the feelings and emotions that accompany our intelligence, and intuition and empathy fall away. If someday we choose to follow a path that rids us of this pus-, blood-, snot-, and gas-filled bag of flesh, I don't believe that which is human will survive the journey.

Monday, August 20, 2007

Robot Rights

I've been reading about the ethical question of whether robots with some modicum of self-determination might have some or all of the same rights as humans. Although we might be some distance from this technologically, it isn't unreasonable--given the pace of development in robotics and artificial intelligence--to assume that at least limited-function robots will be flitting around homes and businesses within a generation.

The most important question that comes with the issue of rights, in my opinion, is that of personal responsibility. In other words, who's responsible if a robot intentionally injures or kills someone? Our current laws hold the individual responsible when the actor is human. However, will we tend to blame the engineers or programmers when a robot goes awry? After all, they determined the robot's wiring and installed the code that makes it function. And if we do blame the robot's "creators," does that call into question the humans' creator(s) as well?

If we accept that nature endows each human with genetic wiring that predisposes us to certain personality traits and a certain capacity for intelligence, and that our environment provides the programming for that wiring, then why wouldn't a deity and our parents, rather than we ourselves, be responsible for our actions? This is an interesting question that philosophers and legal minds will no doubt struggle with over the next two to three decades, but it seems clear that the introduction of robotic intelligence will begin our questioning of ethical responsibility for all sentient entities.