Linux in the News

Several interesting Linux articles came floating my way these past few days.

This blog post has an interesting analogy to broadcast. Then this blog picked it up and extended the analogy.

It's also time that Linux start getting celebrity endorsements.

Maybe Linux can help with your personal productivity problems, too.

Finally, I've found a lot of Linux vendors, but nothing beats what I've already found in terms of price.

Temperature Gradient

I turned on the heater in my room for a good while today, while keeping my door open. I also happen to have a length of PVC pipe for a project stored in my room, just leaning against a cupboard.

I was pleasantly surprised to find a distinct temperature difference between the top of the pipe and the bottom. PVC traps heat so well and is such a poor conductor that the rising warm air heats the top while the bottom stays cool.

I love science.

Writing Paradoxes

Would it be a paradox if I write only to say that I'm not writing?

Food Taboo

Here's an interesting thought I had at lunch the other day.

Most people avoid certain foods because they're bad for their health. The Wikipedia article for Judaism states,
Major prohibitions exist on eating pork, which is considered an unclean animal...
which conforms to our expectations.

However, there's also the case of Hinduism, for which Wikipedia has,
Observant Hindus who do eat meat almost always abstain from beef. The largely pastoral Vedic people and subsequent generations relied heavily on the cow for protein-rich milk and dairy products, tilling of fields and as a provider of fuel and fertilizer. Thus, it was identified as a caretaker and a maternal figure. Hindu society honors the cow as a symbol of unselfish giving. Cow-slaughter is legally banned in almost all states of India.
Interesting how the final result (of abstaining from a food) is the same, but the rationale is the complete opposite.

(For those wondering, I knew about the eating habits before. I just find quoting Wikipedia so much easier than trying to explain it myself.)

A Narrative of Externalization

The following is a response I wrote for my Human Computer Interaction course. It references three books/papers: Interaction Design by Sharp, Rogers, and Preece; The Myth of the Paperless Office by Sellen and Harper; and the Atlantic Monthly article As We May Think by Vannevar Bush.

I thought it interesting how the readings for this week seemed to provide a narrative of not only computer interfaces, but a history of how humans have been externalizing memory. Starting all the way back in ancient Egypt, thin sheets of papyrus reed were used to record the daily activities of humans. Of course, even before that there was writing, mostly carved and painted on rock or stone. Papyrus and paper (invented by the Chinese) have an obvious advantage over writing on stone. For one thing, paper is a lot cheaper to manufacture; there is no heavy manual labor involved, unlike the quarrying and cutting of stone. Paper is more efficiently stored; a stack of paper the size of a stone tablet would contain many times more information. Finally, paper is easier to transport, although that might as well have been a side effect of its invention. When stone carving was still in use, who would have thought of moving a giant block of stone (or, not much better, a heavy stone tablet) to another place? Messengers were used instead, with the associated limit on how much could be transmitted at once. The lightness of paper, however, allowed much more information to be transmitted.

Vannevar Bush's paper suggested that in the late 1940s, the human capacity for production (and to a lesser extent, distribution) of information was rapidly expanding, while there was no mechanism for accessing all that information in an easy manner. Arguably, this same effect had occurred over the several thousand years between the Egyptians and the modern age. The printing press made the production of books, and therefore of information, much easier and more widespread. At the time, only the rich and clergymen were literate. From the Enlightenment onwards, the skills of reading and writing slowly disseminated, so that more people could write and contribute their own knowledge. By the time of Bush's paper, "information" was no longer simply counted by how many pages it would take, but by other units such as words on a telegram, stacks of microfilm, or reels of video. It is hard to say whether all this is merely a "side effect" of the industrial age, but there was no distinct movement to dispose of paper. Until the telegraph was invented, there was simply no easier way of communicating than with paper, and even the telegram was no cheaper per word. It would perhaps not be too big an exaggeration to say that paper was still the cheapest and easiest way of externalizing memory.

As the personal computer began to emerge, however, more of our mental life could be externalized. Computers are not only capable of storing information (so we don't have to remember) and transmitting information (so we don't have to physically meet and talk), they are also capable of computation - dare I say, so we don't have to think. Before the technology matured, there were only limited methods of externalizing thinking, perhaps also due to how foreign the concept seemed at the time (as it still does to me now). For thinking to be externalized, there have to be methods of instructing the external agent on what to think about. The development of computer instruction can again be traced back to the time of Bush's paper. In the 1940s computers were still mostly hardware, the program to run being an entire circuit board. Punch cards were then developed as a unified way of representing the abstract notion of computation, and finally programming languages were invented. It is interesting to note that when humans think, we do not necessarily have to physically move; this parallels the development of software ("computation") going from the initial physical circuits to the abstract bits and bytes of the high-level programming languages we are now used to.

As computers got more powerful, so that programs were run not only in batch but in real time, the interface for the computer had to change as well. As outlined in chapter 6 of Interaction Design, the need for more powerful, and I would argue more _natural_, ways of interacting with the computer was met by different "paradigms". The command line was a text-only interface, oblivious to visual stimuli or other gestures, while the GUI provided information which humans could scan and absorb at a much quicker rate. The development of speech, pen/stylus, and even gesture-based input allows people to interact with computers not as abstract computation machines, but as an appliance or perhaps even another human being. Computers are being integrated into the everyday lives of humans.

Also hidden in this narrative of the development of computers are the different forms our external memory has taken; slowly (although Sellen and Harper of The Myth of the Paperless Office would argue it is /much/ more slowly) our memory has gone from writing on paper to typing onto magnetic disks. Again, the same comparison between stone and paper could be made between paper and magnetic disks: disks can store more in less space, although there is no clear price or transportation benefit. As Sellen and Harper have pointed out in the first few chapters of their book, it is not the case that paper is inherently bad or backwards and must be disposed of. Instead, the developments outlined above take advantage of the power of computers. No piece of paper in the world could produce writing at the sound of your voice. The medium, either paper or computer, makes certain interactions easier and others harder. The computer perhaps cannot totally replace paper, but the two can, as shown in the case study of the IMF, be used seamlessly side by side.

Here I would like to make a quick note of how Vannevar Bush, in his paper over 60 years ago, foresaw a lot of what computers could do. Different though his methods of implementation may be, he nonetheless thought of the "memex" as a way to relate a large number of written articles - a forerunner of the internet. It is astounding that he could envision such a complex system simply by extrapolating from the technology of his day.

This brings us back to the present, and as Bush did over 60 years ago, we cannot help but wonder how computers will develop in the future. The authors of Interaction Design pointed to mobile and web-based interfaces. While I don't necessarily disagree, I would instead argue that the bigger theme is for computers to become more natural for humans. Already more devices are being made with touch-screen capabilities. I believe this is because the keyboard, while useful for typing, was not made to encompass human activity. Even the key layout (QWERTY) was designed not for speed or efficiency (as the Dvorak layout was), but to solve the problem of early typewriters jamming when multiple keys were pressed in quick succession. The ability to manipulate information by touch comes much more naturally than typing. To move an image to the right, simply drag it as you would a piece of paper on your desk. Similarly, although writing is slightly harder to master than dragging, it is still commonly used. I would not be surprised if handwriting recognition were developed to much better levels, and people composed on small, mobile "sheets" of writing interfaces. These devices would have no computational power of their own, and thus could be mass-produced. The main advantage is, again, that anything written or drawn on them can be saved for future reference. I don't think, however, this idea will be popular until it becomes much cheaper than paper.

The other aspect of computation I expect to develop is the "intelligence" of the machine. Bush had suggested that the "memex" would be referenced by relevance to an idea, not by a single word. Although not a commonly used feature, some search engines can already do that; in Google, prepending '~' to a keyword makes the engine search for similar terms. For example, searching for "~cars" returns results with "automobile", "motor", and "BMW" highlighted. The technology behind this is no doubt similar to what is used to power Google Sets; perhaps the associations are even formed that way, by watching what users type in and then click on. Finally, research in artificial intelligence has also taken the concept of association to heart, and the product is relational knowledge bases like Cyc.

I would finally like to propose a system which I myself would find helpful. While composing this very exposition I have paper with reading notes taped to the wall in front of me, so I can easily refer to them. The wall offers a giant space for me to annotate and draw lines of connection. All this is hard to store for future reference, however. I could imagine a giant touch/stylus screen instead of a wall, where I would write and diagram, and, with the push of a button, have all of it saved. This is a scheme which not only imitates paper, but in fact attempts to replace a whole system of paper, wall, and whiteboard. This trend of computers representing larger and larger systems is, in my opinion, the way computers will grow computationally.

Sound Aslip

Allow me to introduce to you my latest invention: Sound Aslip! Don't be fooled by its small size; you will be amazed at how useful it is. Simply clip it to the headboard on your bed, or leave it on your bedside table, and it'll work its magic. Let me give you a tour.

Has anyone ever lain in bed, brain churning, unable to sleep, getting all these solutions to the problems you were solving during the day? I've certainly had that happen multiple times. You never want to get up for a pen and notepad, or even turn on the lights to write, because you're afraid you won't be able to sleep for an even longer time. But you have all these good ideas (guess when the Sound Aslip was invented? That's right.), and you really want to keep them so they can be acted on in the morning.

This is where the Sound Aslip steps in. At the extreme, the Sound Aslip is just a recorder. To record your thoughts while lying in bed, simply speak out loud to yourself, as all genii do, and the Sound Aslip will capture everything you said. You can play back the entire recording in the morning, at your leisure.

But of course you're not paying all that money for a simple voice recorder. The Sound Aslip was designed especially for bedside convenience. On the top are photovoltaic solar cells, so no power cords are necessary. The Sound Aslip will simply recharge in the morning, from the sun or from the light in your room. If your room happens to not get any light during the day (how many hackers, engineers, and vampires do we have in the audience? About half?), you can also use an AAA battery to power the Sound Aslip.

Now, solar cells and everyday alkaline batteries don't last long, so Sound Aslip has several features built in to save power. For one thing, since it's only for bedtime use, the recorder will only activate when it is put in a dark environment. This makes sure there's enough memory to record everything. For those people who would like to use Sound Aslip for other purposes, or those who prefer to sleep with their lights on (Would all the babies in the audience please raise your hand as high as you can? It's hard to see you among the adults. Thanks.), the brightness sensor can be turned off with this button on the side.

The other thing Sound Aslip does to make itself useful is to not record everything. You certainly don't want to spend half your day listening to yourself snoring, only to find the 10 seconds of ideas you had in the middle of the night. So, besides the brightness requirement, Sound Aslip also has a loudness requirement. Only when a noise louder than 30 dB is made (roughly the background noise of a quiet bedroom at night) will Sound Aslip start recording. If no other noise is made within 10 seconds, Sound Aslip will go back into standby mode; otherwise, it will keep recording as long as the noise is present.
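For the engineers in the audience (about half of you, wasn't it?), the control loop is roughly the sketch below. The two thresholds are the ones I just quoted; the sensor and recorder objects are purely hypothetical stand-ins for the actual hardware.

import time

NOISE_THRESHOLD_DB = 30.0  # roughly a quiet bedroom at night
SILENCE_TIMEOUT_S = 10.0   # stop this long after the last loud sound

def run(sensor, recorder):
    recording, last_noise = False, 0.0
    while True:
        if sensor.is_dark():  # brightness gate: bedtime use only
            if sensor.noise_db() > NOISE_THRESHOLD_DB:
                last_noise = time.time()
                if not recording:
                    recorder.start()
                    recording = True
            elif recording and time.time() - last_noise > SILENCE_TIMEOUT_S:
                recorder.stop()  # back into standby mode
                recording = False
        elif recording:
            recorder.stop()  # lights came on; stop immediately
            recording = False
        time.sleep(0.1)  # poll the sensors ten times a second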

Sound Aslip uses flash memory, the same stuff that keeps your music in your iPod nano, so it will have no problems with quality or memory loss. Each Sound Aslip comes with 1 GB of internal memory, enough to store a full day's recording and then some. To get at previous recordings, simply connect your Sound Aslip to your computer with the mini USB cable, and they will all be waiting neatly in a folder.

As you can see, the Sound Aslip is perfectly designed to capture those ideas you have at night, when you are in bed but can't go to sleep because you're too active a thinker. With Sound Aslip, you can rest assured that your brilliant ideas won't be lost, but will be waiting for you the next morning, as surely as (dare I say, more surely than) your daily newspaper.

Don't let your ideas a-slip you again with Sound Aslip. And don't forget our special offer: buy one now, and you can get your second at half the price. I repeat, ladies and gentlemen, buy one and get the second for half the price.

Time Management

This quarter is going pretty crazily so far. From high school up till now, I've had no trouble keeping track of my schedule in my head. Sure, sometimes I would look ahead to make sure my calendar was correct, or I would put in a new task to do later, but most of the time I know what I'm doing when.

These past two weeks, however, I seem to have lost that ability. I'm in four different project classes, resulting in a lot of meetings at different places. I find that I constantly forget about my meetings, and keep trying to schedule things over them.
  • I tried to have a project meeting when I had a dinner appointment with a friend
  • I tried to go project shopping when I had another dinner appointment
  • I planned to go rock climbing when I had to help with a group coding session
  • I planned to go rock climbing (again) when I had a group meeting
You know what that means? It means I should upgrade my brain. Buy a pack of neurons, rig up the synaptic connections, probably get a new and faster charger as well so I can stay awake for longer. Um. Need to visit US Robots.

Three Wonders

I wonder if...
  • paying people to create fake blogs and discussions is as ethical as paying them to fix up your resume.
  • the usage of the phrase "put on a brave face" during moments of tragedy says anything about our culture's definition of courage.
  • we are really "surprised" by magic tricks, if we are only surprised by unexpected things.

If a Tree Falls in a Forest...

... and only deaf people are around, does it make a sound?

I'm working with a deaf person (excuse me, someone hard of hearing) in one of my classes, and he's also in another class of mine. I don't mean to offend; he's actually a pretty cool guy, humorous, and he understands American Sign Language (ASL) and can read lips, so it's not actually too big of a problem.

One coincidence that I noticed is that I could have learned ASL myself. I applied to Rochester Institute of Technology, and since it shares a campus with a school for the deaf, a number of students there learn ASL. Although I got in, I ended up choosing Northwestern instead, and lost that opportunity.

There are, however, a few things about deaf people I'm curious about.

Do they understand or even like puns? Since puns are inherently sound-based, it's hard for someone who can't hear sounds to appreciate them. In fact, it might even be the case that they wouldn't recognize one, although since my friend can speak normally, he would probably have a good idea that the words are formed in similar manners. For people who never learn to speak, however, I don't think there's any way to discover a pun except by looking up the phonetic pronunciation of the word in a dictionary. ASL is a meaning-based system too, so that doesn't provide any clues.

On that note, how do interpreters signal that the speaker just made a pun? Is there some sign to indicate the double entendre used?

I can't even tune out the speaker and focus on the interpreter; I tried during class. It's so hard to even imagine what being deaf is like.

Snow off your Shoe

In Chicago with its damnable conditions, one's sole will inevitably be tarnished. Sure, one can try to follow the narrow path away from the muck, but one misstep and the sole will be weighed down by filth. While going to a church will keep you clean for a while, you will also inevitably dirty what was originally clean.

The lesson: get that snowy yuck off your shoe before stepping inside.

I used to do this by stomping, or jumping and making as hard an impact as possible. The momentum of the snow is enough to dislodge it from the bottom of the shoe, and I'm done. Today was the first time I've seen someone else do this.

Today I also witnessed someone banging their foot sideways into a wall, getting horizontal momentum to do the same thing. That definitely requires less effort, although it needs a wall and good balance.

My latest antic, therefore, is to knock one foot against the other. It works pretty well, actually, and it's easier than stomping around.

Remember to keep your sole clean!

Bugger Blog IV

I don't mean to always pick on Google (yeah, pick on someone your own size...), but I love it and use it so much that any small problem becomes really annoying.

Since I installed the Better Gmail 2 Firefox extension, I've become accustomed to using the keyboard to read my emails. Most keys also work without the extension, but I love my 'l' shortcut to label conversations. Anyway (see how much I love Gmail?), the bug I'm talking about is with the starring key, 's'. Every key that applies to emails works the same way, except for this one. If you try to remove the current label ('y'), it works on all the conversations you've selected. If you try to delete an email ('#') or mark it as spam ('!'), it works on the conversations you've selected. Starring, however, works only on the conversation the pointer is pointing to. That is, you cannot star multiple emails at the same time; you have to star, move down ('j'), star, move down, and repeat.

Again, I wish Google would just follow its own conventions, instead of having one shortcut operate differently from all the rest.

Week of 2008-01-14

Visited the Museum of Science and Industry today, in freezing weather. There wasn't as much interactive physicsy stuff as I thought there would be, and the shop was definitely sub-par compared to, say, the shop at the Science Museum in London, which had solar powered hot air balloons for sale.

Besides that, this week has basically been work, work, work. Sad.

Frosting

Chicago's kind of cold these days. Because of that, I sleep with my windows closed, to keep the warm air in. Last night, it was cold enough that my window frosted over... on the inside.

That's new.

I've posted a gallery of pictures, for your viewing pleasure.

Evolution of Morals

There was a feature in NYTimes Magazine written by the famous psychologist Steven Pinker, titled "The Moral Instinct". It talks about how moral choices are difficult to understand from psychological and philosophical points of view, and how morals are not necessarily rational.

Somewhere in the article is this little passage:
In his classic 1971 article, Trivers, the biologist, showed how natural selection could push in the direction of true selflessness. The emergence of tit-for-tat reciprocity, which lets organisms trade favors without being cheated, is just a first step. A favor-giver not only has to avoid blatant cheaters (those who would accept a favor but not return it) but also prefer generous reciprocators (those who return the biggest favor they can afford) over stingy ones (those who return the smallest favor they can get away with). Since it’s good to be chosen as a recipient of favors, a competition arises to be the most generous partner around. More accurately, a competition arises to appear to be the most generous partner around, since the favor-giver can’t literally read minds or see into the future. A reputation for fairness and generosity becomes an asset.

Now this just sets up a competition for potential beneficiaries to inflate their reputations without making the sacrifices to back them up. But it also pressures the favor-giver to develop ever-more-sensitive radar to distinguish the genuinely generous partners from the hypocrites. This arms race will eventually reach a logical conclusion. The most effective way to seem generous and fair, under harsh scrutiny, is to be generous and fair. In the long run, then, reputation can be secured only by commitment. At least some agents evolve to be genuinely high-minded and self-sacrificing — they are moral not because of what it brings them but because that’s the kind of people they are.

Of course, a theory that predicted that everyone always sacrificed themselves for another’s good would be as preposterous as a theory that predicted that no one ever did. Alongside the niches for saints there are niches for more grudging reciprocators, who attract fewer and poorer partners but don’t make the sacrifices necessary for a sterling reputation. And both may coexist with outright cheaters, who exploit the unwary in one-shot encounters. An ecosystem of niches, each with a distinct strategy, can evolve when the payoff of each strategy depends on how many players are playing the other strategies. The human social environment does have its share of generous, grudging and crooked characters, and the genetic variation in personality seems to bear the fingerprints of this evolutionary process.
The thing that fascinates me is how seemingly mathematical this description is. The appearance of generosity, the actual generosity, the percentage of lying: all seem to be describable through numbers, and individuals are just a function with these numbers as input. These individual agents would interact and learn, eventually leave offspring and die. A multi-agent simulation seems to go well with what Trivers describes.

One problem I have in trying to understand this simulation conceptually is what the default values for the offspring should be. Intuitively, the offspring would be influenced by its parents (learning by example in society). There might also be some influence from the average of societal values, as the culture that the offspring grows up in. Neither of these ideas, however, gives a precise mathematical definition of how to mix the two values.

I really want to try and write this simulation... when I'm not swamped with work.

Could a simulation of this be done?
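I think so. Here's a toy skeleton of what I have in mind; every number in it (population size, mutation rate, the cost of a favor) is a placeholder I made up, and the inheritance rule is just one crude guess at the default-values question above.

import random

POP, GENERATIONS, MUTATION = 100, 200, 0.05

def clamp(x):
    return min(1.0, max(0.0, x))

def make_agent():
    # Two traits per agent: how generous it actually is, and how
    # generous it advertises itself to be (the two can diverge).
    return {"actual": random.random(), "advertised": random.random()}

agents = [make_agent() for _ in range(POP)]

for _ in range(GENERATIONS):
    scores = [0.0] * POP
    for _ in range(POP * 2):
        i = random.randrange(POP)
        # Favor-givers prefer partners who *appear* generous...
        j = max(random.sample(range(POP), 3),
                key=lambda k: agents[k]["advertised"])
        if i == j:
            continue
        # ...but the payoff depends on what each side *actually* returns.
        scores[i] += agents[j]["actual"] - 0.1  # 0.1 = cost of giving
        scores[j] += agents[i]["actual"] - 0.1
    # The better-scoring half reproduces; offspring inherit their
    # parent's traits, perturbed by a little mutation.
    ranked = sorted(range(POP), key=lambda k: scores[k], reverse=True)
    parents = [agents[k] for k in ranked[: POP // 2]]
    agents = parents + [{t: clamp(p[t] + random.gauss(0, MUTATION))
                         for t in p} for p in parents]

print("mean actual generosity:", sum(a["actual"] for a in agents) / POP)
print("mean advertised generosity:", sum(a["advertised"] for a in agents) / POP)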

Name Calling

I have had to write a lot of emails to people lately, some to my professors. Although a lot of them tell us (their students) to call them by their first name, and I use that to address them in these emails, I'm still for the most part uncomfortable about it. It just seems informal somehow.

I thought at first that it's because I don't know them that well, but then again I don't know my TAs that well either, and yet I call them by their first name. Age might be part of the problem, since I do lack friends from that age group.

The larger part, I think, is the prestige. There are in fact a few professors with whom I'm comfortable using their first name, but they are generally more approachable. They tend not to stand at the front of the class and lecture, but will come by and talk about things in a more personal, friendly manner. It's not quite that their classes are discussions; it's something in their manner of speech. What I mean by prestige is that some professors really do know a lot, and calling them by their first name would feel disrespectful to that prestige.

Strange idea huh.

Characteristics of Free Software Users

This article was in my RSS feed a few days ago, and while it talks about free software users (of which I definitely am one; the only non-free software on my computer right now is Acrobat Reader and Java), it also reminded me of the conversation about whether being a computer scientist changes the way I think.

Of the 9 characteristics, I found the following 4 to be the most true:
  • Free software users expect to work the way they choose
  • Free software users want control of their own systems
  • Free software users explore
  • Free software users expect to help themselves
I wonder, however, if it is the usage of free software which creates these characteristics in people, or if they were there to begin with.

The first two points go together; to work the way we choose, we need to have control of the system. This, I find, might be a consequence of using free software. Before I started using Linux, I was not as particular about the system I was using. I did write Konfabulator widgets to make life easier, but that was the only major modification I made to the system. Compared to how I make my own Fluxbox themes and heavily mod Firefox, those widgets were only minor configurations. I now get easily annoyed when I have to open the command line to get the uptime of my laptop in Windows (I use conky for Linux). Picky.

For the third point, that free software people like to explore, I think that might be a personality trait that predates using free software. The very fact that they are users indicates that they went looking for alternatives, since free software is not bundled with Windows or Macs. I've always had broad interests, and have been (and have tried to be more so lately) a curious child. I think that counts as exploring.

Finally, that free software users expect to help themselves... I'm not sure. Part of helping themselves is trying new things and exploring, which is covered by the third point. Another part of it, however, might be that it is so much easier (more centralized, if not more common) to find help for open source software. I don't know which caused which.

I do think all of these are good traits though.

Videogames

I've spotted a curious difference between people in Hong Kong and people in Chicago: far fewer people in Chicago have handheld video game consoles.

That may be just because I'm in a university environment, and people don't have time for games (*snicker*). Since a lot of people have Xboxes or PlayStations in their rooms, however, it's not that they don't like games; somehow handhelds like the PSP or Nintendo DS are just not as popular.

In Hong Kong, I would see people playing games on one of those everywhere, while traveling or even just walking down the street. From what I can see of people on the El though, no one has one of these things. I doubt I've seen even a single handheld game (except maybe really old Game Boys).

I wonder if it's a poverty thing, a stress thing, or simply a cultural trend.

PostSecret Graphology

The postcards on PostSecret, seeing that they contain secrets, are supposed to be anonymous. The idea behind that is that the secret itself should be what you're focusing on.

I think it would be fun to do handwriting analysis on the postcards.

From this past Sunday's post, the postcard with "my Husband and I moved to A smaller house so the kids will __Stop__ Visiting as _MUCH_!" scares me. Not only are the capitals out of place, but considering it's a mom who has kids (and is possibly near/at retirement), her writing is really flowery.

I can't stop thinking about Dolores Umbridge and her pink office...

Week of 2008-01-07

Obviously I'm back in Chicago. So, this week, mostly classes:
  • Theory of Knowledge. I've already written a post about this class. Philosophy, if it's abstract enough (as epistemology clearly is), has a way of making simple things complicated. While that's not always desirable, for something as (shall we say) impractical as philosophy it helps with broadening your mind to other people's thinking.
  • Intelligent Information Systems. We haven't done anything yet. Lots of work and lots of fun.
  • Human Computer Interaction. The class is about exactly that, but it's turning out to be more work than I thought. I wrote a good short paper on giving users a choice of different interfaces. Could you guess I run Linux?
  • Design. Still working with dolphins, and as we try to actually build something our world is torn apart by what we don't know.
  • Locomotion. Still working on that, too. The goal for this quarter is to evolve a single-layer perceptron controller.
  • GSW. Um. Still teaching.
  • Outing Club. I need trips. Lots of trips. So I can get away from this crap.

Dorm Humor

I would like to share two incidents of dorm humor.

The first one occurred last year, a little after the Virginia Tech shooting. Understandably, Northwestern was worried about security, and had locked most of the side doors to dorms. The students were not happy about that, so the university eventually relented and allowed some dorms to use their side doors during daylight hours. It remained a concern, however, as to who could get into the dorms. During that period, the lock in one of my dorm's doors broke, and it just didn't turn. In the few days before it was fixed, someone posted a note next to the door: "Thank God we're finally safe. ZERO entrances."

This little incident is more recent. For the past couple of days, flyers were posted in my dorm advertising a discussion on human trafficking. Some talked about child labor, some about slavery. The one that caught my eye had a woman's body obscured by a sign, on which was written "Sex for Sale". Then one day when I was coming back from lunch, two different people had added to the flyer. The first person had written, "Illegal in IL." So the second person replied, "Legal in Amsterdam."

The Fountain

I didn't feel like doing work tonight, and so I randomly picked a film to watch. I'd heard of The Fountain a while back, and I think I even read the plot summary on Wikipedia, but it never stuck with me. So, since I knew it was a sci-fi movie and didn't remember the plot, I picked it.

It was a good decision. After watching and going back to Wikipedia to find out how it was received, I was surprised that it was only moderately well reviewed. I myself liked it a lot, although the plot (except for one part) was predictable. One of the critics on Rotten Tomatoes called it a piece of poetry, and I think that describes the film better than anything else. I can see how the director had aimed at something comparable to 2001: A Space Odyssey, and in a way I think it is.

I highly recommend this film.

Survival of the Fittest

This topic has been sitting in my drafts folder for over a month now, but a recent NYTimes article ("God and Small Things" by Barnaby J. Feder) gave me a different angle to draw this topic in.

Over the past year, my thoughts about some of the biggest problems the world is facing (nuclear proliferation, global warming, petroleum production, etc) inevitably go back to the same thought: I don't care. This is not an "I don't care" because it doesn't affect me (which is more or less my views of politics), but an "I don't care" because I don't consider it a problem.

I'd imagine a lot of people will take issue with my previous statement, but let me explain. I'll start with the problem that has the least effect.

Nuclear proliferation is a strange sort of issue, because its effect is so small. The attack on Hiroshima killed somewhere between 100,000 and 200,000 people, including deaths from burns, radiation, and related diseases, but not including deaths from cancer within 10 years. On the other hand, the 2003 "invasion" of Iraq has killed about 1.2 million Iraqis. Certainly, nuclear weapons are more dangerous than conventional attacks (for example, the attack on the twin towers, which killed about 3000), but it is still a one-time attack. Only a small portion of the population of the world is affected, and while it will change global politics, most people's way of life will not change.

The lack of petroleum is a slightly larger issue. Since there is only so much petroleum on earth, and our rate of extraction is much, much higher than the rate of petroleum production in nature, all the usable oil will be used up in the near future, whether that's 100 years or 500 years. What are the consequences of not having petroleum? Well, it will be harder to get around, for sure. There won't be quite so many cars, planes, or ships going around, and for those which still do, it will cost a lot more to get on them. In reaction, people will develop or put into more common use alternate sources of energy: nuclear, electric, biochemical (that is, human powered). There will be a period when these technologies are being developed, but in another 100 or 500 years power will not be a problem anymore. So really, the lack of oil is not so much a long-term threat as a warning that we should be switching to more sustainable forms of energy.

Global warming, on the other hand, has a much larger impact. The effects of global warming might be felt even a million years from now, and it changes not only the way humans live, but the way all plants and animals on earth do. A large number of species may well become extinct from the rapid change in temperature, and it's also possible that a significant portion of humans will die too.

Here's the reason why I'm still calling it a small problem, and it will probably give more insight into what I'm saying about the previous two issues as well: even with global warming, life will continue. Note that I didn't say "my life will continue," or that "the life of people I know will continue." For that matter, I didn't even say "human life will continue."

Yes, that's right. I'm a lot more interested in the general survival of life than in the survival of human life. One could say that since the survival of life encompasses more than the survival of humans, it deserves more attention, but that's not quite how I feel; in fact, I feel more than a little apathetic about the survival of any particular species. Certainly, it would be a loss, but nothing that the phenomenon of life can't withstand.

Let's go back to global warming for a moment. There are already species of plants and animals, and cultures of humans, living in the deserts and the tropics. An over-simplification this may be, but the regions these animals and cultures live in will just be expanded and/or shifted north and south. These regions will remain habitable by those creatures and humans, and so life will continue.

This detachment from small entities extends beyond life, to other human constructs as well. That may be part of the reason I don't join cultural student associations, or college democrats or republicans, or perhaps even environmental groups. Cultural and political groups have a clearly narrow focus, and similarly I don't actively support Northwestern's various teams. If I display interest in someone else's Northwestern background, I feel it has more to do with the coincidence than with pride. As for environmental groups, I've always felt that those groups are misnamed; they are not so much saving the environment as advocating a different way of using it. The environment doesn't need saving, and it will survive regardless of what humans do (unless we actively destroy it, of course).

On the other hand, I invest time in science, maths, and philosophy. All three have to do with understanding the world around us, to discover hidden relationships in nature. While science and maths have practical applications, philosophy is more abstract. They are all, however, longer lasting than the study of politics. I do have a little interest in psychology and other social sciences, although one can argue they are much more limited in scope than the others.

After I read the NYTimes article mentioned at the beginning of this post, I read the Nature article it mentioned, and then a few Wikipedia articles on transhumanism. That made me realize that I had never contemplated how this philosophy extends to the future. Since I don't care too much about the human race, it is perhaps not surprising that I have nothing against biotechnological or cybernetic modification of human beings. Sidestepping the question of whether there is such a thing as "human nature", I don't think it's something sacred or above alteration.

In fact, as a student of computer science and believing that artificial intelligence will one day rise above human intelligence, I don't even mind if humans are taken over by machines. By "take over" I don't mean a violent suppression - while this exposition may portray me as unfeeling, I'm not completely amoral - but I mean I'm not against machines being the superior "race". I care more about general superior intelligence than I do about human intelligence, and if it takes creation to bolster evolution, then that will be the case.

I was previously aware of most of the views expressed here, but this is the first time I've connected them all with the theme of a detachment from smaller, concrete objects. I wonder where this detachment grew from.

Deserve Victory

I thought I would share a personal motto I've had for... the past few months, if not a year:
"Deserve Victory"

I know that Churchill used this phrase on his posters for WWII, but that wasn't where I got it from. Instead, it's from the Sword of Truth series by Terry Goodkind. Each book in the series has a wizard's rule, and "Deserve Victory" is from the 8th book, Naked Empire. I personally haven't read that book (I stopped at book 6 or so), but a fan site gave the explanation as:
"Be justified in your convictions. Be completely committed. Earn what you want and need rather than waiting for others to give you what you desire."
I really like the motto, as well as the explanation. "Deserve Victory" is not telling you that you deserve victory, and that it's life's fault you are unhappy. Rather, it is an imperative, telling you to work to "deserve victory". You could sit in a dark room and watch as life passes, or you could stand up and take a chance.

The most successful case of applying the motto was getting my CTY job. Yes, I haven't told you yet, but I'm telling you now. I had to find an adviser for my independent study, to satisfy the requirements set forth by my student visa. I wrote my resume and cover letter after reading samples online, found references and mailed my transcript, and prepared for and did my interview, all on my own. Having gotten the job, knowing that I worked hard for it, I know I deserve it. Of course, if I hadn't gotten it, it would have been a shame, but at least I could say I tried.

I have, however, yet to work this motto into my social sphere. I've said I would several times in my journal, but it's less certain how to proceed there. Then again, I could be convinced that it only feels more risky; the path is clear enough. I just need to pick up my courage.

Go Forth and Deserve Victory.

Not Wetting Your Pants

Walking in heavy rain with an overlarge waterproof jacket results in the legs of your pants getting wet.

I just thought that it looked like wetting your pants in reverse, since those areas are the only dry ones.

Interesting.

Packaging Folding

I get annoyed when, after I finish a pack of chips, I'm left with the packaging, a big, empty plastic bag. This especially annoys me when I'm on a plane, and there's limited space to put that bag. It only gets worse when the plane hits turbulence in the middle of food service, so that it takes an hour for the flight attendants to come and collect the trash.

What I do now is fold the packaging into neat little "coins". This is not the same as simply folding it in half over and over, as the packaging is usually stiff and has the annoying tendency to spring open. Instead, I first fold the packaging into a thin strip; the width depends on how long the strip is. Then I fold the strip one section at a time, at 90 degree angles. Make sure that the strip turns into a spiral - that is, use either all mountain folds or all valley folds. After four folds, the strip will be wrapping back over itself. Simply tuck the upper end under the lower (or the lower end over the upper), and the strip should stay in its squarish shape, with no intention of springing open. Repeat for the entire strip, and you've reduced a large stiff plastic bag to a small square of trash.

Anyone else with the same pet peeve?

Computer Science in Epistemology

I had my first epistemology class today, and while we didn't talk about anything in detail, I did have a thought.

The professor spent some time on skepticism, to emphasize why epistemology is deeper than it might seem. The conclusion of skepticism is that we know close to nothing about anything at all, because all our knowledge depends on other pieces of knowledge, and each of these can in turn be questioned as to its validity. In other words, the question of validity regresses forever.

I thought about that for a bit, and decided that infinite regression actually does not apply to all knowledge. There are certain domains which have basic, undeniable truths, and everything else can be reasoned deductively from these first principles. This allows the knowledge of that field to be valid and consistent - provided that the first principles themselves are granted.

Specifically, I'm talking about the field of mathematics.

Mathematics as a study of numbers can be thought of as not entirely real, in the sense that numbers are not physical entities, but properties of objects, like color. Unlike color, however, numbers do not represent a physical property, like the frequency of light reflecting off an object. According to Wikipedia, this is similar to Plato's idea of forms, in contrast with the other understandings of numbers. Other fields of mathematics study similarly abstract mental constructs.

Let's focus on number theory for a moment, since I know it best. Processes like addition, multiplication, and exponentiation, and concepts such as prime numbers, exist as long as the concept of number exists. These processes and concepts do not describe the physical world, but only abstract numbers. To prove their validity, therefore, we only need to prove that numbers exist.

That sounds like an absurd goal, since numbers are abstract entities, and clearly do not exist the way the screen this text appears on exists. There are, however, restrictions on what numbers are, and in fact there have been definitions of numbers. The Peano axioms try to do just that. Of particular note is that the axioms start by defining 0 as a number, and then a successor function S(n) which takes one number as an argument and outputs another number. Under this definition of mathematics, skepticism only works to reduce the field to a pair of axioms already taken to be true, and so fails to put all of mathematics into doubt.
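To make that concrete, here's the construction sketched in the Lean theorem prover (my own illustration, not anything from the class):

-- The whole type of natural numbers is generated by exactly the two
-- axioms discussed above: a zero, and a successor function.
inductive N where
  | zero : N
  | succ : N → N

-- Everything else, such as addition, is then defined from those two
-- constructors alone; nothing outside the system is assumed.
def add : N → N → N
  | n, N.zero   => n
  | n, N.succ m => N.succ (add n m)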

It is certainly possible to "cheat" in the same way with all knowledge, asserting that certain things be taken for granted, but it is extremely difficult to say exactly what needs to be assumed, and what can form the basis for all other knowledge. In my opinion, it is also interesting to note that, if the assumptions "God exists" and "God is all powerful" are taken to be true, then religion is also exempt from the perils of skepticism. I think the difference between the two lies in how practical they are; that is, how they can influence reality, which unfortunately requires a study of how we know what is real (metaphysics). For the curious: I would classify religion (or at the very least, God) under intuitionism, one of the other theories of number.

All this, by the way, could have been said in far fewer words.

I've written a lot already, and I still haven't mentioned computer science. I was thinking about the infallibility of mathematics today, and was wondering if computer science can be reduced to similar axioms as well. My first thought was that computer science could be confined to objects and processes similarly artificially constrained, a study of Turing machines. While some aspects of computer science rely heavily on mathematics, however, a large part of the field also rests on reality, which cannot be simply defined to have first principles. I will give this idea more thought.

Computer as a Container

I came across a New York Times article the other day titled "If Your Hard Drive Could Testify..." It talked about how the law is ambiguous as to the nature of the computer, whether it is a container like a box, or whether it is an extension of our body, like how the brain "stores" memories. The difference is that a container can be freely searched by authorities at the border (for illegal firearms, drugs, etc.), while bodily invasions require a "reasonable suspicion".

The implications of such a definition are broad. If a computer can be freely searched at the border, then "border authorities could systematically collect all of the information contained on every laptop computer, BlackBerry and other electronic device carried across our national borders by every traveler, American or foreign."

CNET goes into the second case mentioned in a little more detail. I did a quick search and also came across this little incident. A full account from the reporter is also online.

More than the privacy issues, though, it was the technological capabilities of the scanner that I was curious about. When I read the NYTimes article, I immediately wondered if the traveler had been using Windows, which surely the scanners would work on, or which at the very least the border authorities could operate. Macs are becoming more popular, too, so they might work. I use Linux, which natively runs on ext2/ext3 filesystems. Somehow I don't think the scanner would work on that. And then the incident I found mentioned encrypted drives...

I wish I'd been there to see the customs officer's face. It would also be fun to see the computer start, only for her to be dropped into a command line login shell. "Oh, it doesn't work on... er, Linux, right?"

*Snicker*

Week of 2007-12-31 (and 2007-12-24)

I'm back in Chicago. I was hoping it would be all snowing and nice, but the snow's all melted, and I think there's been a few days when Hong Kong was colder this break. Sad.

In these two weeks I:
  • Watched American Gangster, Atonement, and Da Vinci Code. I liked Atonement the best. Da Vinci Code had a significantly different back story than in the book, which church-ified the story a bit. American Gangster was a good story.
  • Hung out with my friends. I will miss them; this is the least willingly I've left Hong Kong since the start of college.
That's about it... boring holidays huh. Maybe classes will bring some challenge into my life...

Journal Helper

I thought I would say a few things about my other journal.

I've kept it for 6 years, my first entry on 2002-04-18. The funny thing about that is I had started writing because I wanted to try programming a database, and journaling software was the first thing I thought of; I had written the first few entries as test data. The program I wrote then has long been deleted, but the entries remain. Well, I actually lost the first five or so when I deleted the directory containing the program, but what could I do.

The original journals were written in one plain text file, one line for the date and one line for the concatenated text. The test text didn't need paragraphs, but now I just dump them all into one line. Since I've given up my journaling program, I've instead put the entries into HTML files, one year per file. The advantage there is that I can read my journal in a web browser, and the text will word wrap automatically. I still read my entries in a browser sometimes, especially when I need to read chronologically.

Most of my reading, however, is done in a terminal. I wrote a script a while back, after I started using Linux heavily. Besides giving me options for doing statistics on my entries (how many entries I've written, how often I write, the most common words I use) and giving me a single command to back up my journal, it does a pretty neat conjunctive search. Because I have over 1000 entries with 700,000 words (see? the script comes in useful), it's becoming increasingly hard to find previous events to reference. Instead, I just search for them. For example, I might remember going to a school concert and then watching a movie. I could simply do
./tools -Sd concert movie
and that will give me all the dates on which the words 'concert' and 'movie' appear. Again, it's a neat little script that takes all the search terms and uses them in grep, piping one into the next. Such is the power of Linux.

I haven't changed the basic layout of my journal for a few years now, but lately I've been trying to think of a different format. Because grep doesn't know that I have one line for dates and one line for text, it has problems searching for a certain date. If I want entries which reference, say, the entry for Christmas of 2005, the output will be all messed up, since grep actually matches the date line of that entry, but reports the text of the entry before it (since that's the line above, which is how the script normally picks up the date). There's probably a really easy way of changing the script to handle that, but I haven't found it yet.
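If I ever rewrite it, something like the sketch below might sidestep the problem by pairing up the date and text lines before searching (the file name is hypothetical; the format is the one-line-date, one-line-text layout described above):

import sys

def load_entries(path="journal.txt"):
    # Return (date, text) pairs from the alternating date/text lines.
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    return list(zip(lines[0::2], lines[1::2]))

def search(terms, entries):
    # Conjunctive search: an entry matches only if every term appears
    # in its date or its text, so searching by date works too.
    for date, text in entries:
        haystack = (date + " " + text).lower()
        if all(term.lower() in haystack for term in terms):
            yield date

if __name__ == "__main__":
    for date in search(sys.argv[1:], load_entries()):
        print(date)

Running it as 'python search.py concert movie' would print the same dates as the grep version, but with each date and its text always kept together.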

Interactive Environment

We were talking about the car brand Lexus the other day, and how Lexuses (Lexi?) have a large touch screen in the front of the car, which can automatically connect to your phone through bluetooth, and allow calls to be dialed and taken with the car's speaker (and microphone, I assume) system. Lexi also have keys which work by proximity, so you don't actually turn the key to start the engine, or press any buttons to unlock the door.

What interested me the most was how the car reacts to your presence, whether it's the door unlocking, or being able to pick up contacts and numbers from your phone. It is integrating you into the environment, and allowing you to interact with it in more natural ways.

I was taken by the idea that the environment is static, but waits for data (stored in the phone) to come to it. One thing it reminded me of was the whole software-as-a-service idea. The concept here is similar, in that the car waits for input before doing anything. It also reminded me of one of my professors' ideas, where at a party a computer would choose music so that some people like it and some people don't, driving the activity more than randomly chosen music would. This would work for art galleries, gatherings, or any other social event where the differing interests of people could be exploited.

New Year Resolution

Three days into the new year, it's time I shared my new year resolution:

1280 x 800 @ 60 Hz

:D

Philosophy Game

Last year I took a computer game programming class for a few weeks before dropping it. The class (or at least the part I sat in on) was more about narrative and storytelling than about things specific to computer games. That should be fine, really, except that I suck at telling stories. Since our first assignment was to write a piece of interactive fiction, I didn't feel like I would do that well. I wrote a terrible outline for a detective story, then dropped the course. The class has, however, made me pay more attention to the interactive fiction scene ever since.

I did have a better idea in mind, but it wasn't so much a story as an exploration. Not the "let's go on an adventure" type of exploration, but a "let's sit and reflect on yourself" type of inner exploration. I wasn't sure if it would qualify as a game, so I chose not to develop it. The idea stuck with me though, and I would dearly love to see someone write an IF with this.

Here's the idea: the "game" is really an exploration of different possible philosophical beliefs, as many as the author can cram into something like an IF. When the player starts the game, the "room" has little to no description. The idea is that the world is determined by what the player chooses to do. This is hard to explain, so let me demonstrate.

The player starts the game, but sees only the initial empty prompt. Pressing enter and seeing nothing, they type "look", the standard descriptive command in IF. The game then replies that the player discovers they "perceive" something.

The trick to this "game" is that the player's words are taken literally, in an unformed universe. By "looking", the player assumes that they (as a character in that game world) have eyes (or a "mental eye"), which in turn implies that not only does the player exist, but they have a physical body and the sense of sight.

Ideally, the "plot" would be a series of discoveries until the player arrives at the philosophy of a certain period, or even of a certain philosopher. The interaction in this game is merely a way of deciding which branch of the game tree the player will traverse.

It is interesting to note that this idea has been used partially in fiction before. Here's the beginning of a chapter from Harry Potter and the Deathly Hallows:
He lay face down, listening to the silence. He was perfectly alone. Nobody was watching. Nobody else was there. He was not perfectly sure that he was there himself.
A long time later, or maybe no time at all, it came to him that he must exist, must be more than disembodied thought, because he was lying, definitely lying, on some surface. Therefore, he had a sense of touch, and the thing against which he lay existed too.
Almost as soon as he had reached this conclusion, Harry became conscious that he was naked. Convinced as he was of his total solitude, this did not concern him, but it did intrigue him slightly. He wondered whether, as he could feel, he would be able to see. In opening them, he discovered that he had eyes.
He lay in a bright mist, though it was not like mist he had ever experienced before. His surroundings were not hidden by cloudy vapour; rather the cloudy vapour had not yet formed into surroundings. The floor on which he lay seemed to be white, neither warm nor cold, but simply there, a flat, blank something on which to be.
He sat up. His body appeared unscathed. He touched his face. He was not wearing glasses anymore.
This is the same way the player would discover the philosophical terrain around their character. Once the player understands what is going on (I suspect most players would end up with the same "philosophy", since IF assumes a certain background), they would begin trying to see what other philosophies are in the game. I have thought of a few interesting branches:
  • "let there be light" would be a biblical worldview, turning the IF into a God game of sorts
  • "think" is another common IF verb, which might result in Descartes' rational "I think therefore I am", a completely non-physical existence
  • "quit" or something similar, firmly setting the world as artificial and voluntary
Interactive fiction is always hard to write, especially when, as in this case, the possible verbs are so numerous. For this philosophy game, the author must not only anticipate what people will write, but also know the logical implications of each verb and what it means for the player's worldview. It will be an exercise in creating a taxonomy of different philosophical viewpoints.
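To make the branching concrete, here's a toy sketch of the dispatch I imagine; the verbs, replies, and "worldview" bookkeeping are all invented examples, nothing more.

worldview = set()

BRANCHES = {
    "look": ("sight", "You discover that you perceive. So you have senses, "
             "and presumably a body to house them."),
    "think": ("mind", "Cogito, ergo sum: you exist, though perhaps as "
              "nothing more than a thinking thing."),
    "let there be light": ("creator", "Light floods the void. It seems "
                           "this world answers to your word."),
    "quit": ("artifice", "You remember that this world is voluntary, "
             "and step outside it."),
}

def respond(command):
    # Map the player's first commands onto philosophical commitments.
    key = command.strip().lower()
    if key in BRANCHES:
        commitment, message = BRANCHES[key]
        worldview.add(commitment)  # each verb quietly commits the player
        return message
    return ""  # the unformed universe stays silent

if __name__ == "__main__":
    while "artifice" not in worldview:
        print(respond(input("> ")))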

When I read Raph Koster's book A Theory of Fun for Game Design, one of the things that struck a chord with me was his projection of how games could become an art. One of his complaints about current games is that they don't change the player. The player is not affected by the game, but only steps into and out of the game world as the same real-world person. This is in contrast to how art has the capacity to influence the individual, and lead the perceiver to some unexplored aspects of life. I think a "game" like this, while it probably will not be immensely popular, falls closer to the category of art than of game.

Ambigrams and Psychology

I'm more than a little behind in posting.

Every year around winter I get into an art mood, and work on either ambigrams or origami architecture cards. I've done a few ambigrams before, but I didn't think about the psychology behind them until now.

When I first started playing around with ambigrams, I showed a few to other people. Someone then commented that it's just a visual trick, because humans tend to fit what they see into categories they are already familiar with. In this case, we only see letters the right side up, and ignore the other potential interpretation of the letters being upside down.

Thinking about that a few days ago, I was reminded of how humans seem to have a special brain area for recognizing faces. Humans apparently need to recognize faces so frequently that by two months there is already a brain area which activates on perceiving a face. We have a tendency to see faces in random objects, and some people have a selective disability in recognizing faces. More related to ambigrams though is the Thatcher effect, how humans can't immediately identify "problems" in upside down faces, which are otherwise rather... eye-catching when viewed the right side up.

I think the same thing happens with ambigrams. We become so accustomed to reading letters and words right side up that we ignore small feature changes. It just so happens that these changes form the same letters (or sometimes different letters) upside down, hence creating the ambigram.

If this theory works out, words that are more common should be easier to see, because of the reader's familiarity with that sequence of letters. For example, someone named Earnest might read his name in an ambigram faster than other people, who would see only the adjective. Seems like it would be a pretty cool psychology experiment.

Silence is Golden... Registered?

Ah, I caught up. That was fast.

A few days ago I watched a movie at an AMC theater. Before the movie started, they showed a short clip telling people to be quiet while the movie is showing. The clip ended with the words "Silence is Golden®".

I was really surprised that they could register something as common as "silence is golden", especially since it's not even being used outside of its usual context.

A quick Google search shows this has been brought up a number of times. At least AMC has the wits not to sue anyone over it.