A Narrative of Externalization

The following is a response I wrote for my Human Computer Interaction course. It references three books/papers: Interaction Design by Sharp, Rogers, and Preece; The Myth of the Paperless Office by Sellen and Harper; and the Atlantic Monthly article As We May Think by Vannevar Bush.

I thought it interesting how the readings for this week seemed to provide a narrative not only of computer interfaces, but of how humans have been externalizing memory. Starting all the way back in ancient Egypt, thin sheets of papyrus reed were used to record the daily activities of humans. Of course, even before that there was writing, mostly carved into or painted on rock and stone. Papyrus and paper (invented by the Chinese) have an obvious advantage over writing on stone. For one thing, paper is a lot cheaper to manufacture; there is no heavy manual labor involved, unlike the quarrying and cutting of stone. Paper is more efficiently stored; a stack of paper the size of a stone tablet contains many times more information. Finally, paper is easier to transport, which may well have started as a side effect of its invention. When stone carving was still the norm, who would have thought of moving a giant block of stone (or, not much better, a heavy stone tablet) to another place? Messengers were used instead, with the associated limit on how much could be transmitted at once. The lightness of paper, however, allowed much more information to be carried.

Vannevar Bush's paper suggested that in the late 1940s the human capacity for the production (and to a lesser extent, distribution) of information was rapidly expanding, while there was no mechanism for accessing all of that information in an easy manner. Arguably, the same effect had already occurred over the several thousand years between the Egyptians and the modern age. The printing press made the production of books, and therefore of information, much easier and more widespread. At the time, only the rich and the clergy were literate. From the Enlightenment onwards, the skills of reading and writing slowly disseminated, so that more people could write and contribute their own knowledge. By the time of Bush's paper, "information" was no longer counted simply by how many pages it would take, but by other units such as words in a telegram, stacks of microfilm, or reels of film. It is hard to say whether all this was merely a side effect of the industrial age, but there was no distinct movement to do away with paper. Until the telegraph was invented, there was simply no easier way of communicating than with paper, and even then there was no cheaper way per word. It would perhaps not be too big an exaggeration to say that paper was still the cheapest and easiest way of externalizing memory.

As the personal computer began to emerge, however, more of our mental life could be externalized. Computers are not only capable of storing information (so we don't have to remember) and transmitting information (so we don't have to physically meet and talk), they are also capable of computation - dare I say, so we don't have to think. Before the technology matured, there were only limited methods of externalizing thinking, perhaps also because of how foreign the concept seemed at the time (as it still does to me now). For thinking to be externalized, there must be a way of instructing the external agent what to think about. The development of computer instruction can again be traced back to the time of Bush's paper. In the 1940s computers were still mostly hardware, the program to run being in effect an entire circuit board. Punch cards were then developed as a unified way of representing the abstract notion of computation, and finally programming languages were invented. It is interesting to note that when humans think, we do not necessarily have to physically move; this parallels the development of software ("computation") from the initially physical circuits to the abstract bits and bytes of the high-level programming languages we are now used to.

As computers grew more powerful, so that programs could be run not only in batch but in real time, the interface to the computer had to change as well. As outlined in chapter 6 of Interaction Design, the need for more powerful, and I would argue more _natural_, ways of interacting with the computer was met by different "paradigms". The command line was a text-only interface, oblivious to visual stimuli or gestures, while the GUI presented information that humans could scan and absorb at a much faster rate. The development of speech, pen/stylus, and even gesture-based input allows people to interact with computers not as abstract computation machines, but as an appliance or perhaps even another human being. Computers are being integrated into the everyday lives of humans.

Also hidden in this narrative of the development of computers are the different forms our external memory has taken; slowly (although Sellen and Harper of The Myth of the Paperless Office would argue it is /much/ more slowly) our memory has gone from writing on paper to typing onto magnetic disks. Again, the same comparison between stone and paper can be made between paper and magnetic disks: disks can store more in less space, although there is no clear price or transportation benefit. As Sellen and Harper point out in the first few chapters of their book, it is not the case that paper is inherently bad or backwards and must be disposed of. Instead, the developments outlined above take advantage of the power of computers. No piece of paper in the world could produce writing at the sound of your voice. The medium, whether paper or computer, makes certain interactions easier and others harder. The computer perhaps cannot totally replace paper, but it can, as shown in the case study of the IMF, be used seamlessly alongside it.

Here I would like to make a quick note of how Vannevar Bush, in his paper 50 years ago, foresaw a lot of what computers could do. Different though his methods of implementation may be, he nonetheless thought of the "memex" as a way to relate large numbers of written articles - a forerunner of the internet. It is astounding that he could envision such a complex system simply by extrapolating from the technology of his day.

This brings us back to the present, and as Bush did 50 years ago, we cannot help but wonder how computers will develop in the future. The authors of Interaction Design point to mobile and web-based interfaces. While I don't necessarily disagree, I would instead argue that the bigger theme is for computers to become more natural for humans. Already more devices are being made with touch-screen capabilities. I believe this is because the keyboard, while useful for typing, was not made to encompass human activity. Even the key layout (QWERTY) was designed not for speed or efficiency (as the Dvorak layout was), but to solve the problem of early typewriters jamming when multiple keys were pressed in quick succession. The ability to manipulate information by touch comes much more naturally than typing: to move an image to the right, simply drag it as you would a piece of paper on your desk. Similarly, although writing is slightly harder to master than dragging, it is still commonly used. I would not be surprised if handwriting recognition were developed to much better levels, and people composed on small, mobile "sheets" of writing interfaces. These devices would have no computational power of their own, and thus could be mass-produced. The main advantage is, again, that anything written or drawn on them can be saved for future reference. I don't think, however, that this idea will become popular until such a device is much cheaper than paper.

The other aspect of computation I expect to develop is the "intelligence" of the machine. Bush suggested that the "memex" would be referenced by relevance to an idea, not by a single word. Although not a commonly used feature, some search engines can already do that: in Google, prepending '~' to a keyword makes the engine search for similar terms. For example, searching for "~cars" returns results with "automobile", "motor", and "BMW" highlighted. The technology behind this is no doubt similar to what powers Google Sets; perhaps the associations are even formed that way, by watching what users type in and then click on. Finally, research in artificial intelligence has also taken the concept of association to heart, and one product is relational knowledge bases like Cyc.
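To make the idea of association-based search a little more concrete, here is a minimal sketch in Python. It illustrates only the principle, not how Google actually implements the feature: the synonym table and the documents are made up for illustration, and a real engine would learn such associations from large-scale usage data rather than from a hand-written dictionary.

```python
# Toy sketch of association-based search: expand a query term into a set of
# related terms before matching, instead of matching the literal word only.
# The SYNONYMS table below is hand-written for illustration; a real engine
# would learn these associations from user behavior.

SYNONYMS = {
    "car": {"car", "cars", "automobile", "motor", "vehicle"},
}

DOCUMENTS = [
    "The automobile changed how cities were planned.",
    "Paper remains the cheapest medium for quick notes.",
    "Motor vehicles are now designed largely by software.",
]

def expanded_search(query, documents):
    """Return documents containing the query term or any associated term."""
    terms = SYNONYMS.get(query.lower(), {query.lower()})
    hits = []
    for doc in documents:
        words = {w.strip(".,").lower() for w in doc.split()}
        if words & terms:
            hits.append(doc)
    return hits

if __name__ == "__main__":
    # A literal search for "car" would miss both relevant documents;
    # the association-expanded search finds them through related terms.
    for doc in expanded_search("car", DOCUMENTS):
        print(doc)
```

The point is simply that matching against a set of associated terms retrieves documents that a literal keyword match would miss, which is the spirit of referencing by idea rather than by word.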

I would finally like to propose a system that I myself would find helpful. While composing this very exposition I have papers with reading notes taped to the wall in front of me, so that I can easily refer to them. The wall offers a giant space for me to annotate and draw lines of connection. All this is hard to store for future reference, however. I could imagine a giant touch/stylus screen instead of a wall, where I would write and diagram, and, with the push of a button, have all of it saved. This is a scheme which not only imitates paper, but in fact attempts to replace an entire system of paper, wall, and whiteboard. This trend of computers representing larger and larger systems is, in my opinion, the direction in which computers will grow.
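As a rough sketch of what "saving the wall" might mean in practice, here is a toy data model, assuming nothing more than that each pen stroke can be recorded as a list of coordinates. Every name here (WallCanvas, add_stroke, and so on) is invented for illustration and not taken from any real system.

```python
# Toy sketch of the proposed wall-sized canvas: each pen stroke is recorded
# as an ordered list of (x, y) points plus a timestamp, and "pushing the save
# button" serializes the whole wall to a file for later reference.
import json
import time

class WallCanvas:
    """A wall-sized writing surface that records pen strokes for later saving."""

    def __init__(self):
        self.strokes = []  # each stroke: {"time": ..., "points": [(x, y), ...]}

    def add_stroke(self, points):
        """Record one pen stroke as an ordered list of (x, y) coordinates."""
        self.strokes.append({"time": time.time(), "points": points})

    def save(self, path):
        """The 'push of a button': write every recorded stroke to disk as JSON."""
        with open(path, "w") as f:
            json.dump(self.strokes, f)

if __name__ == "__main__":
    wall = WallCanvas()
    wall.add_stroke([(0, 0), (10, 5), (20, 12)])   # a line connecting two notes
    wall.add_stroke([(100, 40), (101, 42)])        # a short annotation mark
    wall.save("wall_snapshot.json")
```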
