I just started reading Don Norman's (who teaches at Northwestern) The Design of Future Things. One thing he is concerned with is the interaction between humans and machines, particularly as the machines become more intelligent: microwaves that "know" when your food is cooked, refrigerators that "monitor" your cholesterol, cars that brake when they "detect" a collision. Norman finds that while these technologies work great in certain very constrained settings, in the real world they very often fail, resulting in human frustration.
Norman also points out that it's not that the machines are incapable of doing the job, or that our expectations are too high; humans are quite happy manually overseeing the machines. It's when the machines become smart, but not smart enough, that problems occur.
This actually reminded me of another problem, commonly called the uncanny valley. The term describes how, as robots look more and more like humans, people begin to develop a strong negative reaction towards them. The explanation is that people stop looking for the human-like qualities of an inanimate object and instead start looking for the non-human qualities in an animate one. Although the theory is not entirely scientific (the idea of the uncanny goes back to a 1906 essay, and the uncanny valley hypothesis itself to Mori in 1970, well before any convincing human look-alike was built), it's fairly pervasively talked about.
It occurred to me that this seems to apply not only to appearance and motion, but also to intelligence. The way Norman describes it, when machines move beyond mere suggestion into actually automating action, there will inevitably be times when the machine misinterprets what is going on and therefore takes the wrong action. This is a slightly different situation from the one about appearance; instead, I think it has to do with how the machine goes from being passive and out of the way to being an active but dumb agent.
I might say more once I've read more of the book.