Writing artlessly about artificial intelligence

I’m usually drawn to online writing that reports good science or worthwhile social ideas. I seldom bother to respond to bad science or sloppy thinking. Today is an exception.

Browsing science digest sites, I recently ran across “Hidden Smiles and the Desire of a Conscious Machine,” by Malcolm Ramsey, published by H+ (Humanity+) on June 6th.

Intended as a thoughtful look at the possibility of artificial consciousness, “Hidden Smiles” is a muddled mash-up. It’s a good example of the bad writing wandering about out there in cyberspace.

Too harsh? Crossing the line from the critical to the uncivil? Hold your judgement until we’ve taken a look at the piece itself.

“Hidden Smiles” recounts that “recent research at MIT has lead [sic] to the development of a computer system which is far better than human volunteers at recognizing subtle differences in human smiles.” What the experiment showed was that the computer system was twice as accurate as human volunteers when tasked with identifying “frustrated smiles.”

After a pro forma acknowledgement that the results are relevant only “in one very narrow area of artificial intelligence,” Ramsey immediately claims that the research “shows the way in which AI is already encroaching on areas that we traditionally assumed were tied to human intelligence or conscious thought.”

A very good question at this point would be, “How does it do that?”

Ramsey writes, “We surmise that humans have consciousness since we identify the feelings that we have with the reactions that people around us have.” Yet he follows this reasonable enough claim with a much shakier proposition:

If computers can reach the stage where they are better at identifying and categorizing these reactions then they must have some claim to being able to judge consciousness. If a computer can predict that someone was feeling frustrated correctly while another human gets it wrong then surely the computer is in some way better at understanding the frustration of the subject?

To start with, there is a substantial category difference between “identifying and categorizing” and “being able to judge,” whether what’s being judged is consciousness or a figure skating routine. And “understanding”? I can train a lab rat to push a lever of a certain colour to obtain food. The rat can identify the colour, and it can categorize the levers it sees as meeting or not meeting that colour criterion. Does anyone really think that the rat has now developed anything that we would recognize as “understanding” the nature of its situation?

More specifically, does correctly recognizing the facial characteristics of a smile require, in any way, an understanding of what a smile represents? Does the computer system need a “theory of mind” to perform the kinds of measurement functions required of it? Doesn’t the computer system measure smiles with the same kinds of spatial and mathematical algorithms with which it would also analyze the tolerances of a piece of machinery? Does the computer system have, therefore, a “theory of machinery”? Is it, then, conscious in the way that the piece of equipment is conscious?

This is just one of the logic traps that await anyone who tries to transpose computer programming into human cognition. Undeterred, Ramsey follows this essentially introductory section with a flight into soaring speculation.

He accepts with scant discussion an extreme version of the “Turing test” notion:

With enough small steps it is entirely possible that a robot simulacrum could be created that mimics human behaviour to the extent that every biological human on the planet is fooled. If this is the case then the machines [sic] actions will be indistinguishable from human consciousness.

Ramsey elides all of the philosophical steps required to get from there to here and moves directly to the assumption that a computer system that we can’t determine to be non-conscious must, therefore, be conscious. And, since that “robot simulacrum” will be made of different materials and operate by different systems than we do, it will have “different goals and intentions.”

Wait a minute here. We’ve skipped a few rather large and very significant steps to get to this point, haven’t we? If it quacks like a duck, it’s a duck? If a robot’s behaviour can’t be distinguished from a human’s, the robot must therefore be conscious. We’re not told what special feature makes this true when a robot evaluates human smiles but not when it balances a spreadsheet. By this reasoning, an iPod that plays a sound file of a quacking duck is itself a duck.

Ramsey writes that his robot’s goals will likely be so unlike our own as to be alien and unrecognizable. Why should a machine want what we want? We will be in Matrix territory: “There is no reason to presume that any machine consciousness will necessarily be similar to that of humans. In fact it quite likely [sic] that it will be entirely alien. In this case the goals, intentions or desires of this machine will be in a sense invisible to us.”

In support, Ramsey points to the way that the wheat plant has domesticated us to spread, nurture, and protect it. Just as we have been conditioned to serve the purposes of the wheat plant, we may be turned to the purposes of an alien computer consciousness. Now, the idea that wheat (and dogs and lawn weeds) have “tamed” us is not new, but there’s little development of the idea to show how it fits with AI, much less how it serves as evidence for his claim about computers.

So in a brief article we have been taken from a computer algorithm that measures the facial dynamics of our smiles to a world of servitude to our machines. It’s a breathtaking progression, unencumbered by either careful reasoning or substantial empirical support.

The view that I’ve been criticizing would seem to be consistent with my frequent arguments against dualism. Indeed, in an article titled “Could a robot be conscious?” British philosopher Barry C. Smith observes that for the many thinkers and writers who reject dualism, “surely it is the brain that is responsible for controlling the body and so it must be the brain that gives rise to our consciousness and decision making.”

However, Smith writes, “many of the same thinkers would agree with Descartes that no machine could ever be conscious or have experiences like human beings.”

So don’t we have a contradiction? If a robot passes the Turing test, isn’t it conscious? Smith’s counter:

We need to draw a distinction between our thinking that the robot was conscious and it actually being conscious. We may be tempted to treat it as a minded creature but that doesn’t mean it is a minded creature.

It may be true, as Smith writes, that with AI research “the hope is that if we can create or replicate consciousness in a machine we would learn just what makes consciousness possible.” Does creating a correlate of the mechanisms of consciousness make a robot, which is essentially a model of our conscious systems, conscious? Is an anatomical diagram a human body? Or an orrery the solar system?

Smith takes his speculation in a different direction than Ramsey did. Noting that we are learning more and more about the substantially unconscious mechanisms that motivate our emotions and behaviour, Smith writes:

If we managed to produce a robot that behaved just like one of us in all respects that might be a proof not of the consciousness of a robot or machine, but instead may be a convincing demonstration of how much we could manage to do without consciousness.

In other words, our robot simulacrum just as likely could end up showing us not so much what consciousness is, but rather how little of what we are is truly conscious.
