Linux in a Recession

I don't have much time this week, so this will be a short post. I also happen to have an economics midterm coming up, so why not write about that?

I was thinking about Linux adoption and how the operating system market fits into the nice supply and demand diagram. Linux, as a free (as in beer) alternative, should in theory rapidly take over the market. Of course, that is not the case, with Microsoft still having a strong hold with Windows.

Then I learned about inferior goods. For normal goods, people buy more when they have more spending power (i.e., more income). For inferior goods, though, people buy more when they have less income. The word "inferior" refers not to the quality of the product, but to how demand changes as income changes.

Which makes me wonder - would Linux be considered an inferior good? At some point of income loss, people must balance the familiarity of Windows against the zero cost of Linux. Below that income, people will buy (er, download) Linux; above that income, people will buy Windows. By this reasoning, it seems that a recession - like the one we're going through - can only help spread Linux to more people.
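To make the threshold reasoning concrete, here's a toy model. Every number in it is invented, and the assumption that the perceived value of Windows familiarity scales with income is mine, not anything from economics proper:

```python
# Toy model: each person weighs the familiarity value of Windows against
# its price, and switches to Linux once income drops below a threshold.
# All numbers are made up for illustration.

def os_choice(income, windows_price=100, comfort_rate=0.01):
    """Pick an OS by comparing the net benefit of buying Windows.

    Assumption (mine): the value of Windows familiarity scales with
    disposable income (comfort_rate * income) -- the richer you are,
    the more you'll pay to avoid relearning an interface.
    """
    windows_benefit = comfort_rate * income - windows_price
    return "Windows" if windows_benefit > 0 else "Linux"

# Below the break-even income (windows_price / comfort_rate = 10,000
# here), "demand" for Linux rises as income falls -- the inferior-good
# signature.
print(os_choice(20_000))  # Windows
print(os_choice(5_000))   # Linux
```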

I wonder where I can find the Linux adoption data to match up with economy recessions.

Aliens, therefore God

Something occurred to me over the weekend. This argument was inspired by Douglas Hofstadter's Godel, Escher, Bach, although when I first read it I didn't connect it to religion.

In several places in his book, Hofstadter states that the "inherent" meaning of a message depends on the interpreter of the message. For example, if you had a vinyl record but not a phonograph, it would be difficult to understand the message contained on the disc. (It should be possible, by the way, to connect a pin by string to a styrofoam cup, and move the pin along the grooves to hear the music. Please don't try this, or at least use a cheap record.) Similarly, writing on paper must be in the right language (and the right size) for people to understand. Otherwise, a translator or a microscope might be needed. The most famous case of this is of course the translation of Egyptian hieroglyphics into a modern language, through the Rosetta Stone.

In a similar vein, messages which make clear they are messages are easier to understand than messages which are hidden. This seems obvious, but it has important applications. Invisible ink is useful precisely because people don't know there's a message there; even if the message were written in plain English, the average person would have trouble extracting the meaning. Another example: I could chew on my pen in different ways during an exam to signal answers for multiple choice questions. My intended audience would know what those signals mean, but to other people (most importantly, the teacher) the chewing would seem merely random. (Again, I would ask you not to try this, but if you do, you first have to work out how to signal the question number, or at least the start of the sequence.)

Hofstadter then brings up an interesting consequence: assuming this is true, J.S. Bach would be easier for aliens to understand than John Cage. I quote, "Intelligence loves patterns and balks at randomness." Just as the teacher couldn't see the answers because the chewing seemed random, John Cage's music is too random for aliens to deduce there is an intelligence behind it (putting aside the medium of storage). Bach's music, with its clear rhythm and variation and repetition and pattern, would be more readily recognized as the product of intelligent beings.

Reversing the context, if aliens are sending us messages, those messages had better contain patterns. If the aliens are sending us random noise, it would be hard to distinguish from the background noise of the universe. I'm no expert in this area, but I assume SETI uses some kind of pattern recognition (or anomaly detection) to detect messages. Underlying that is the assumption that aliens will be conveying their messages in a distinguishable manner.

Here's the twist: if in the background noise of the universe we find patterns and therefore claim that there are intelligent beings out there, how should we treat the patterns in plants, animals, and ourselves? Could it be a message from God?

Note: I just learned that SETI does not in fact look for patterns, but for radio signals which nature could not produce. The existence of such a signal would therefore indicate the existence of the equipment necessary to generate it, and hence extraterrestrial intelligence. I'm not sure what the theological equivalent would be. It was fun for a while, eh?

Plane on a Plane

The URL of the third '???' post was indeed blog-post-19.html. Science, bitches!

Last week's question was: why do men like women with long blond hair, blue eyes, and large breasts?

I've actually known the answer for a while now, but for some reason it came to my mind recently. No, it's not because I met a large-breasted woman with long blond hair and blue eyes...

I first heard about this from (to my eternal shame) Psychology Today. In their article on 2007-09-20, they mentioned that there are evolutionary reasons behind these attractions. Specifically
  • Hair is a good indicator of health. Since hair grows slowly, long hair shows health over a long period. Its lustrousness shows nutrition in a healthy body. Finally, blonde hair tends to turn brunette with age, so it's also an indicator of youth.
  • Blue eyes make the pupils easier to see, and hence easier to notice the dilation when the female is interested or aroused.
  • Large breasts... I had apparently skipped over this one when I first read it. There are two theories: one is that they are an indication of fertility (though not of the ability to lactate). The other is that large breasts sag more with age, and are therefore an indicator of youth.
It would be interesting to see if these hold true among people/populations who have had no contact with at least the first two traits (the last being hard to control for). For example, if photos of women with long blonde hair only, with blue eyes only, with both, and with neither were shown to, say, men from a small village in China, would they rank the women's attractiveness in the same way?

For interested readers, the article I'm referring to is here.

This week's question is something I've wanted to solve for a while, but never got around to it. Might as well make this blog force me.

When you look up at a passing plane, you'll usually miss where it is at first glance. This is because the plane is high enough that there is some delay between the image of the plane and the sound of the plane reaching you. From this simple fact, it should be possible to calculate how quickly the plane is flying, as well as how far the plane is from you. The solution should be symbolic, and you can assume that:
  • The speed of light is clight = 299 792 458 m / s
  • The speed of sound is csound = 340.29 m / s
  • The plane is of length l. Assume you can recognize the model of the plane, and therefore know l.
  • In the direction of light and sound, the plane is equidistant, d, from you. That is, at any moment the point of origin of the light (which you see), the point of origin of the sound (which you hear), and where you are standing lie on a plane and form an isosceles triangle.
  • The plane flew in a straight line since it produced the sound until where you're looking at it now. The distance between those two points is kl, where k is any real number.
  • The path of light and sound are separated by an angle θ (theta) as perceived at your location.
For bonus points: is it possible to calculate the same data if the plane was not equidistant from you, but flying along an arbitrary straight line? What other data is necessary?
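Having set the problem, here is a sketch of one way a solution might go under the assumptions above (I make no claim it's the only answer): the sound you hear now was emitted d/csound seconds ago, the light you see now was emitted d/clight seconds ago, and in the difference of those times the plane covered the chord of the isosceles triangle, 2d·sin(θ/2).

```python
import math

C_LIGHT = 299_792_458.0  # m/s, speed of light
C_SOUND = 340.29         # m/s, speed of sound

def plane_speed(theta):
    """Speed of the plane, given the angle theta (radians) between
    where you hear the plane and where you see it.

    The plane covered the chord 2 * d * sin(theta / 2) in time
    d * (1/C_SOUND - 1/C_LIGHT); the distance d cancels out entirely.
    """
    return 2 * math.sin(theta / 2) / (1 / C_SOUND - 1 / C_LIGHT)

def plane_distance(k, l, theta):
    """Distance d from you to the plane, given the chord length k * l."""
    return k * l / (2 * math.sin(theta / 2))
```

The surprising part is that the speed depends on θ alone. The distance, on the other hand, still needs k, which isn't directly observable; one way to estimate it might be to compare the plane's apparent length l against the chord.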

Musings on Human-Level Artificial Intelligence

I've been thinking a lot about human-level artificial intelligence (for convenience and humor, let's just call it HAI) lately. I suppose it started in November, when I was looking at Carnegie Mellon for grad school and I stumbled across Professor Scott Fahlman's blog, "Knowledge Nuggets". Although his research is in knowledge representation, he writes about HAI as well. In several posts, he outlined what we are missing in current AI research, and what he thinks a knowledge base (KB) for HAI would be like. Although I've never communicated with him, he was the main reason I chose to take Knowledge Representation this quarter instead of Introduction to Computational Linguistics. It reminded me that my real interest is in the artificial creation of a psychology. For a while I was distracted by other, perhaps much easier and more practical fields of AI like textual analysis, but reading Fahlman's blog brought back my interest in strong AI (an AI which can actually think, as opposed to weak AI, which only gives the appearance of thinking).

Since then I have thought more about this problem, and I would like to use this post to organize my thoughts. Reading first Alan Turing's essays from the early days of the digital computer, with his visions of what computers could do, and then Douglas Hofstadter's Godel, Escher, Bach on symbols manipulating symbols, gave me a large number of ideas. Comparing those ideas with the current state of AI, I also see some difficult problems to solve. I would love to tackle some of them, and while I don't think HAI is impossible within my lifetime, it will definitely take a lot of smart people and clever innovations.

Let me start, then, with a quote from Turing, from his paper "Computing Machinery and Intelligence":
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?
It occurred to me that a lot of AI research is aimed at replicating what we see in adults. This applies not only to early AI research, when theorem provers and chess programs were written. Planning, problem solving, knowledge representation, reading and understanding language... these are all behaviors which humans learn relatively late in life. While I don't doubt there are many practical applications of results from these areas - that may even be why there is so much research - it seems difficult if not impossible to arrive at a general HAI from this direction. Intelligence itself is a complex enough creature; studying it after it has matured and grown is like trying to reconstruct a tree from its full-grown form. Although recreating the tree may be the ultimate goal, studying the structure of the seed is the better path for research. A successful replication of the seed necessarily leads to the replication of the tree, and yet the seed is infinitely simpler than the tree with its myriad branches and leaves and flowers.

Similarly, understanding the cognition of a child - or perhaps even an infant - might be a more worthwhile direction of research. Continuing from Turing's previous quotation:
If this [the child brain] were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.
Of course, I understand that while there may be little in the child brain, that doesn't mean it's not complex. It's simply because it contains less than an adult brain that it's our object of study. A similar argument can be made for studying the brains and intelligence of other animals, and there are probably contributions to be made there, but the gap between human intelligence and animal intelligence is too wide to study only animals.

Here, I would like to point out that studying the intelligence of humans or animals is not equivalent to studying their brains. Studying the neurological processes in the brain to arrive at intelligence would be like building a car from quarks and electrons. Hofstadter writes:
... we hope that thought processes can be thought of as being sealed off from neural events in the same way that the behavior of a clock is sealed off from the laws of quantum mechanics, or the biology of cells is sealed off from the laws of quarks.
Studying neurons again makes the problem too complex. That is not to say the neurological study of the brain is useless. We can learn much about human intelligence if we could
... step back... towards a higher, more chunked view. From this vantage point, we hope we will be able to perceive chunks of program [or groups of neurons] which make each program [or group] seem rationally planned out on a global, rather than a local, scale - that is, chunks which fit together in a way that allows one to perceive the goals of the programmer [or the brain]... There is some sort of abstract "conceptual skeleton" which must be lifted out of low levels before you can carry out a meaningful comparison of... two animals [or intelligences].
That was Hofstadter again. The assumption is that intelligence can be abstracted out from the neural structure and implemented on a computer. There is of course a chance that this assumption is unjustified, in which case strong HAI is impossible.

But taking the assumption for now, what "mechanism" does a child brain (from here I will use the words "brain" and "mind" interchangeably, to match Turing's wording) consist of? In a letter to Turing, Christopher Strachey wrote:
I am convinced that the crux of the problem of learning is recognizing relationships and being able to use them... there are, I think, three main stages in learning from a teacher. The first is the exhibition of a few special cases of the rule to be learned. The second is the process of generalization - ie. the underlining of the important features that these cases have in common. The third is that of verifying the rule in further special cases and asking questions about it.
I think what Strachey said has merit, and I will return to it momentarily. There are, I think, other considerations. It goes without saying that a child brain in the form of a computer and a child brain in the form of a human have significant differences. The biggest one is the lack of a physical environment, or to give it another name, reality. A lot of what a child first learns relates to the external environment. Infants at around 10 months of age learn that objects still exist when out of sight; before that, when infants see a toy hidden under a blanket, they don't know to look under the blanket for it. This is called object permanence, and it is a perfect example of how the environment helps children's cognitive development. Presumably they notice that objects keep reappearing after they disappear, and eventually realize that objects do not in fact "disappear".

How can a HAI learn such a concept without an environment around it? It may turn out that an abstract intelligence doesn't require learning object permanence. It would be a great test of the abilities of a developing HAI, though. There is another, more serious, potential consequence of existing in digital space: the HAI may never learn language. Language is partially a mapping, through the phonetic sounds we produce, between our thoughts and objects in the real world. If a computer never "encounters" a chair, it wouldn't know what a chair is. More disturbingly, a computer never encounters the three-dimensional space we live in. A number of other concepts familiar to us as humans become meaningless for a disembodied HAI. As Hofstadter wrote, "thoughts must depend on representing reality in the hardware of the brain." For a HAI to have thoughts, then, there must be some reality in which it exists, whether real (say, with the HAI embodied in a robot) or virtual (a digital, artificial reality which we have control over). In either case, the environment should not be merely passive, but created such that the HAI can interact with it. A particularly interesting idea I had was to give the HAI a lower-dimensional reality - so it lives on a plane, and its "vision" consists of a colored line. This simplification serves the purpose of giving the HAI an environment, while keeping that environment simple enough for the HAI to understand and for us to understand its understanding.

Assuming these external (to the HAI) issues are solved, we now turn back to the problem of what internal mechanisms a child brain must have to begin the learning process. What is the starting state, the tabula rasa on which knowledge will be written? It seems to me that the ability to recognize patterns is of utmost importance. Without it we wouldn't learn object permanence, or recognize that an armchair and a stool both belong to the category of chairs. The whole idea of creating ontologies, which is what KBs are, is based on the ability to recognize patterns and classify objects. The way we learn, too, comes from recognizing patterns: we classify both handwritten and printed letters as part of the alphabet, and we look to past experiences for insight on how to solve unfamiliar problems.

That last point on solving unfamiliar problems actually pushes the ability beyond the simple recognition of patterns to the use of those patterns - what Strachey called "the process of generalization". I will give it another name: induction. Note that this is not the mathematical meaning of induction - the reduction of an infinite number of cases to finite base cases - but induction in the sense of reasoning. We may burn our fingers once or twice on a hot stove, and we learn to stop putting our fingers on stoves or other hot objects. There are more abstract generalizations, too. We learn the quadratic formula by applying it to many different equations, but we know that the formula doesn't apply only to those exact numbers: it applies to any equation of the same form. This is due to our generalization of the pattern we have recognized in the equations which the formula solves.

There have been admirable advances in pattern recognition. Within a KB, a Structure Mapping Engine (SME) can make analogies between two domains. Here is how it works. The KB contains statements about how the different components of a domain relate. SME first finds a relationship which exists in both domains; the things related by this relationship are then mapped onto one another. Each of these mappings may generate deeper mappings - that is, there may be further relationships that are similar among these components. These nested relationships give each domain a structure - hence the name of the engine. By comparing not the actual objects but the relationships between objects in the two domains, a "deeper" analogy is drawn.
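For concreteness, here is a toy version of the structure-mapping idea just described. This is a drastic simplification, not the real SME (which, among other things, scores competing mappings and follows nested relationships); the domains and facts below are invented:

```python
# Domains are sets of (relation, arg1, arg2) facts. We propose entity
# correspondences across domains wherever the same relation appears in
# both -- a bare-bones sketch of structure mapping.

def map_domains(base, target):
    """Propose entity correspondences from shared relation names."""
    mappings = {}
    for rel_b, a1, a2 in base:
        for rel_t, b1, b2 in target:
            if rel_b == rel_t:  # same relationship in both domains
                mappings.setdefault(a1, b1)
                mappings.setdefault(a2, b2)
    return mappings

# The classic solar-system/atom analogy, in miniature:
solar = {("revolves_around", "planet", "sun"),
         ("more_massive", "sun", "planet")}
atom  = {("revolves_around", "electron", "nucleus"),
         ("more_massive", "nucleus", "electron")}

print(map_domains(solar, atom))
# {'planet': 'electron', 'sun': 'nucleus'}
```

Note that the mapping is driven by the relationships, not by any resemblance between a planet and an electron themselves, which is exactly the "deeper" analogy described above.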

While this is a highly successful way of recognizing patterns and a solid step towards HAI, I can see at least one extension to the KB-SME: the pattern found should be added back into the KB as an object of its own. This stems from the fact that humans do not reason on only one level, but find patterns made up of patterns, and patterns made up of those, and so on. Induction is the step by which the HAI creates an ontological KB for itself, and is necessary for it to learn anything of significance. This is, of course, easier said than done: how should the pattern be represented?

This, in fact, leads to a broader question: how should anything be represented in the first place? The idea of creating a child HAI is not complicated, but it brings into question many processes in our brains which we do not yet understand. If the child brain is to be "lots of blank sheets", how are the perceptions of the HAI written on those blank sheets? The knowledge in the knowledge bases currently in use all came from humans; some programmer/knowledge worker devised an organizational scheme, and the objects were related correctly to each other. For humans there is some hard-wired method of translating our sensations into objects of thought. Children don't know what any "thing" is when they are born, but eventually they have the concept of a generic "object". Not only do they have the concept, but they can reason with such a generic object without ever having seen one. Without an abstract representation of the world, an induction engine - no matter how good - will be useless.

Not that our unbuilt HAI has a good induction engine. There is a large obstacle in this area as well: how should the actions of the HAI be represented such that the HAI can change them? To take a simple case, I am capable of using deductive logic. When I first learned it, I probably made lots of mistakes, for example, affirming the consequent. But as I learned about logical fallacies, my thinking changed as well. I not only stopped myself from making these errors, but I am able to catch myself when I do make them. The same kind of introspection is needed when we try and fail to remember a salient event (say, eating breakfast with the president), and therefore know it did not happen. The method of thinking suddenly became the object of thinking. It is perhaps not coincidental that this forms the kind of "strange loop" which lies at the heart of Hofstadter's book. Without the ability to modify its own behavior, any HAI will still only be following algorithms, incapable of truly surprising us - not to mention not really being a HAI.

The opposite end of the same problem is: what set of algorithms should a HAI not be able to change? There are processes which I cannot stop myself from doing - the best example being the recognition of faces in inanimate objects. I could no more change how I see faces than herring gull chicks can stop pecking at yellow sticks with red spots. Similarly, there will be aspects of a HAI which are hard-coded, the perception of causality being a highly possible example. Moreover, it is impossible for every line of code to be modifiable by the HAI itself. The answer to this question would be not only what needs to be hard-coded for the HAI to learn, but also how little can be hard-coded to the same effect.

An additional difficulty of modifiable processes is that current KBs are ill suited to holding procedural knowledge. This runs against the current consensus that procedural knowledge and declarative knowledge are kept separate. Episodic memory should be included in the knowledge base as well, since it forms the basis of induction as well as the subjects of thought. Both these types of knowledge require that the causal and temporal relationships in current KBs be greatly expanded and specified. It would be interesting to build a microtheory of action and its possible consequences.

I have presented a few ideas here, but also raised many obstacles which must be conquered on the path to HAI. I would just like to close by saying that creating a HAI may not in fact be very illuminating for human cognition. When a HAI does start learning, all the symbols it creates may be radically different from the symbols we use; we may not be able to assign meaning to those symbols at all. Of course, that doesn't diminish the allure of creating a human-level artificial intelligence at all.

Book Reviews

I finished 3 books in the last 2 weeks, and I would like to jot down some thoughts on them. 

The Essential Turing, edited by Jack Copeland

This was a heavy book, and I don't mean in weight (although it was that too). Turing is one of those people I find hard to reconcile; while I know of both the Turing Machine and the Turing Test, somehow I separate them so far in my mind that I forget it's the same person who created both. The first part of this book is highly mathematical, as it contains the very paper in which Turing proposed his universal Turing machine. I skimmed most of that paper, as I didn't want to follow all the details, but I did learn the basic reason why the halting problem is undecidable. The book goes through another paper, then discusses Turing's involvement in breaking the Enigma. Only the last part is on AI, and in retrospect Turing had some far-sighted ideas. The editor seems a little too fond of Turing, though. Overall, the book was not too bad a read (the mathematics aside).

Flowers for Algernon, by Daniel Keyes 

My first fiction book in a while. While I felt empathy with the protagonist over human relationships, I also share his feelings on academia. Talking with some people at his local college, the protagonist finds that people take their fields too narrowly, and that there's not enough overarching work being done. I really am afraid of falling into the mold of Ziman's scientists, knowing "more and more about less and less". That aside, this book is worth a read.


Godel, Escher, Bach: an Eternal Golden Braid, by Douglas Hofstadter

I had high expectations for this book, as it was highly recommended by many people, especially for computer science/artificial intelligence types like me. After reading it in its entirety in a week, however, I find the hype a little inaccurate. While the concepts in the book are somewhat novel, I don't like several choices in Hofstadter's writing. The first complaint stems from this book being 30 years old, during which time computers reached the world-class level in chess, genetic programming developed, and computers in general became a lot more powerful (computationally, not mathematically) than at the time of writing. These, however, are merely artifacts of when the book was written. I mind more that Hofstadter prefers to give his own names to theories rather than use the original ones. I feel it actually made the proofs a little more opaque. Hofstadter tries to tie the book together with a theme, but this theme is hard to detect - no wonder other readers have been confused as to what the central message of the book was, as the author noted in the 20th anniversary preface. In praise of the book, though, I think the dialogues are ingenious, not only in their formal (form-related) isomorphisms to Bach's works, but in their various wordplay and other humor.

I did, on the other hand, get one or two - not too many, just one or two - ideas from the book. I would say that they have been near the surface of my thoughts for a while, although no doubt Hofstadter would argue that the information was all in the book...

As a result of reading these books, and taking courses in operational semantics and knowledge representation, suddenly my quarter is filled with references to Godel, Church, and Turing. Which isn't bad, I suppose; I've always wanted to learn about them.

PS. I highly recommend book darts (also available on Amazon) to note interesting passages. Despite their crappy website, they have been invaluable in keeping track of worthwhile sections to transcribe later en masse.

???

As you can see from the title, I have a further question (actually, a test to confirm a hypothesis) about the way Blogspot posts posts with non-alphanumeric titles. Last week's post had the URL http://justinnhli.blogspot.com/2009/01/blog-post_12.html, the "12" seemingly from the date. If I am correct, this post should have the URL http://justinnhli.blogspot.com/2009/01/blog-post_19.html. Let's see if I'm right.

As for last week's real question, the answer may be surprising. There were two ungrammatical sentences in the list, and they are:
  • I'm doing good. (Obviously)
  • I'm doing great. (wah?)
I was surprised by the second one too, but according to Wiktionary, "great" is only an adjective and therefore cannot describe the "doing". It doesn't work in this case as an interjection either, which means it's just plain ungrammatical. Of course, being grammatical isn't everything, as shown by:
  • I'm doing best.
This is funny, actually, because "better" and "best" are comparative and superlative forms, which inherently compare one thing with another. "I'm doing better" compares my current feelings with my past ones, but "I'm doing best" - which would implicitly mean I'm feeling the best I ever have in my life - just comes across as weird.

But that's English for you. If you need further evidence that grammar isn't everything, consider the following sentence:
  • Colorless green ideas sleep furiously.
This week's question: Why do men like women with long blonde hair, blue eyes, and large breasts? No, it's not because they have more fun...

A little education can't not do nobody no good...

...wait, what?

A week or two ago Genia wrote a blog post on whether education is a privilege or a right. Although I'm not entirely convinced by my own position, I will attempt to take a stand, if only to understand my thoughts better.

I am a pretty liberal person (my views on marriage being the best evidence), and from working with complex systems I believe people are capable of self-organization without overt coordination. The open source community is a great example, about which one of my friends wrote a game theory paper on why people don't "defect". The ability of humans to cooperate means that the role of the government should be small, intervening only where humans acting in their self-interest wouldn't suffice (crime, foreign policy, equity, etc.). When I discussed this with another friend, though, I was asked about my views on education and whether the state should fund public schooling. I had to mull over that one, and never gave an answer. Reading Genia's post brought the topic to mind again.

Under my view of government, the question to ask is: will people get educated if everyone acts only in their self-interest? This "self-interest" is not the same as that assumed in economics; I don't mean strictly the self, but whatever the person values most, which may turn out to be their family, their immediate neighborhood, and so on. In this sense, I think people will continue to get educated without government intervention.

My reasoning goes like this: rich people can certainly afford to go to college. It is the less financially well-off students who lie at the heart of the question. I have reason to believe schools will still take them, for several reasons:
  1. As I am learning in Sociology of Religion, America has a low tax rate (that is, little government redistribution of wealth) but a very high rate of charitable giving. That is, the money which goes towards caring for the poor and homeless mostly comes not from the government, but from (primarily religious) non-profit organizations. As it turns out, these organizations not only give out food and shelter, but also build schools and support schooling in other financial ways. This gives the less fortunate a way to afford school.
  2. As I am learning in Introduction to Macroeconomics, education is a field with a high entry cost but a very low marginal cost. Schools need a lot of capital to build classrooms and hire teachers, but beyond some enrollment the cost of each additional student is low. Through price discrimination (which schools already practice with scholarships, work-study jobs, etc.), schools can get more money for the facilities they've already paid for.
  3. Besides internal scholarships, the existence of multiple schools also offers different prices. Under a market where the schools are supplying education and the people are buying, there will inevitably be schools which offer a low price. There might be arguments on quality of education; my answer is that such a difference in quality already exists in the current public/private school system.
  4. Schools have a reputational incentive to offer places to students by merit or grace.
  5. The internet allows schools to distribute educational material at a very low cost to everyone with internet access. Access to the internet itself is potentially a larger social problem.
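To make point 2 concrete, here's a toy calculation. All the numbers (costs, tuitions, group sizes) are made up for illustration; the point is only that once the fixed cost is sunk, any tuition above the marginal cost helps:

```python
# Made-up numbers: a school with a high fixed cost, a low marginal cost
# per student, and two groups of students with different abilities to pay.

FIXED_COST = 500_000   # classrooms, teachers
MARGINAL_COST = 1_000  # cost per additional student

def profit(enrollments):
    """enrollments: list of (students, tuition) pairs."""
    revenue = sum(n * tuition for n, tuition in enrollments)
    students = sum(n for n, _ in enrollments)
    return revenue - FIXED_COST - MARGINAL_COST * students

# Single price of 10,000: only the 60 richer students enroll.
print(profit([(60, 10_000)]))                 # 40000

# Price discrimination (scholarships): 40 poorer students pay 2,000,
# which still exceeds the marginal cost, so the school gains and more
# students get educated.
print(profit([(60, 10_000), (40, 2_000)]))    # 80000
```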
I feel my answer actually avoided the philosophical question of whether education should be a privilege or a right. The justification for the above reasoning seems to suggest that education is a privilege, with a price to be paid. If under this system everyone can go to school, however, and the richer pay more of the cost of teachers and facilities by virtue of paying at all, is this really different from a right?

The Universal Declaration of Human Rights states in Article 26:
Everyone has the right to education. Education shall be free, at least in the elementary and fundamental stages. Elementary education shall be compulsory. Technical and professional education shall be made generally available and higher education shall be equally accessible to all on the basis of merit.
The only part missing from this system is the compulsory part. Again, however, I think the demand for education will be high enough that most everyone will want to send their kids to school. It is simply not possible to survive in the current environment without some education, and that should, er, compulse parents to get their kids an education.

Here's a good question: what about orphans?

I know this piece is highly biased by my background. If I grew up as an inner city kid I would probably have very different views. I would love to discuss this further with someone.
No comments

The Shape of an Educated Man

I wrote about educating a universal man last time. Last week, I had the strange idea of representing the amount of knowledge on a graph.

On these graphs, the horizontal axis represents the spectrum of academic fields. Obviously there is no one order to put the fields in, but let that rest for now. The vertical axis represents the amount of knowledge as a proportion, so 1 means complete knowledge (or world-class expertise, as the former might not be possible) and 0 means no knowledge, or a novice in the field.

I also try to keep the area under the curve constant, 1 in this case. This is not always possible, but it represents a limit to the amount of information the brain can hold.
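This normalization can be sketched numerically. Here's a minimal version that treats the curve as a discrete set of fields (the field names and knowledge levels are invented for illustration) and rescales so the total "area" is 1:

```python
# Toy "shape of knowledge": level of knowledge per field, normalized
# so the total sums to 1. Fields and levels are invented examples.
knowledge = {
    "math": 0.8,
    "physics": 0.5,
    "history": 0.2,
    "art": 0.1,
}

total = sum(knowledge.values())  # 1.6
normalized = {field: level / total for field, level in knowledge.items()}

# The overall "area" is now fixed at 1; only the shape varies.
assert abs(sum(normalized.values()) - 1.0) < 1e-9
```

Under this constraint, becoming more of an expert in one field necessarily shrinks the share available to the others, which is the trade-off the graphs are meant to show.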

They are done in the style of Indexed. I don't follow them, but I've found a few which are amusing.

First, a correction on one of the links above (the "are"):



Then, quotes by John Ziman:

"A philosopher is a person who knows less and less about more and more, until he knows nothing about everything."



"A scientist is a person who knows more and more about less and less, until he knows everything about nothing."



Finally, Homo universalis, which Thomas Huxley puts succinctly, "Try to learn something about everything and everything about something."

No comments

???

Last week's easy question was: what's the URL for a blog post titled "???"? The answer is http://justinnhli.blogspot.com/2009/01/blog-post.html. So Blogspot turns the zero-length title (after stripping punctuation) into "blog-post". This non-intuitive result proves it was a good question to ask.
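A rough guess at Blogspot's slug rules, consistent with the two examples in these posts, might look like this. The stopword list and the exact fallback behavior are assumptions on my part, not Blogspot's documented algorithm:

```python
import re

# Guessed slug rules: lowercase, strip punctuation, drop some common
# words, join with hyphens, and fall back to "blog-post" when nothing
# is left. The stopword set below is an assumption chosen to match
# the observed example ("Question of the Week?" -> question-of-week).
STOPWORDS = {"the"}

def slugify(title: str) -> str:
    words = re.sub(r"[^a-z0-9\s]", "", title.lower()).split()
    words = [w for w in words if w not in STOPWORDS]
    return "-".join(words) or "blog-post"

print(slugify("Question of the Week?"))  # question-of-week
print(slugify("???"))                    # blog-post
```

The interesting branch is the fallback: once every character is stripped as punctuation, some default has to fill the URL, and "blog-post" is what Blogspot apparently picks.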

The other question was, why don't we have languages based on our senses of olfaction and gustation?

I can only speculate, of course, but I came up with the following:
  • Difficulty in Production. Humans have no biological way of producing many different smells or tastes, and it would be even harder to standardize these across cultures and races. Even in the modern world, it is relatively difficult to manufacture material that carries specific smells. The existence of Braille, however, suggests that this could become possible in the future, momentarily disregarding the points below.
  • Difficulty in Storage. Even if we had a way to produce smells, we have no non-destructive way of storing these stimuli for later. Braille is the same way, which is perhaps part of why it's used less often. Note that our sense of hearing is also transient, and technology has developed to capture it (tape recorders, etc.).
  • Difficulty in Perception. This doesn't mean that we have to try hard to smell or taste, but that we have limitations in these two senses. It is hard to know precisely when one smell ends and another begins, a boundary that is clear in all our other senses. Furthermore, which smells we can distinguish has a biological basis (through genes), so not everyone can detect every smell.
  • Lack of Necessity. The most important reason here is probably the lack of necessity. We simply have no need to depend on olfaction and gustation for language. What we see, hear, and touch is much more salient, and so few people lack sight, hearing, and touch all at once that there is no need to specially cater to them.
Also one and a half questions this week.

The first half question is a continuation of last week's. This post is also titled "???"; what is the URL? I'm guessing it will be something like blog-post-2.html, but we'll see.

As for the real question: some people have a pet peeve related to how others respond to "how are you doing?" Which of the following is grammatically correct, and why?
  • I'm doing good.
  • I'm doing well.
  • I'm doing fine.
  • I'm doing better.
  • I'm doing best.
  • I'm doing great.
No comments

Journalism as Sociology as a Social Science

There's a trend in journalism which I've noticed in the past couple of years. I think it's becoming more and more common, but it baffles me because I don't understand its value.

When reporters cover public events, like parades, dances, and so on, they have a tendency to join the demonstration. They would walk in the parade, or paint their faces, or do something which shows them participating in what they're reporting about.

My question is, how is this helpful for the audience to understand what is going on? How does this help represent the truth? For one, the reporter is most probably not as skilled in whatever the demonstrators are doing, and certainly won't have a background of working with that group. So the reporter is not representative of the events. It seems to me that showing someone from the event being interviewed, and hearing about the event from them, is more productive and useful than seeing the reporter do stuff.

This actually reminds me of what I learned about symbolic interactionism, and how sociologists have to immerse themselves in the culture they're studying. Another question arises, this one more dire than the previous: how does the sociologist remain objective while being part of the culture? It seems to me like the whole religion thing: believe, and I will show you proof. This seems to be against how science is done.

The above ideas are all poorly written, but I really am puzzled by this.
No comments

Flowers for Algernon

I just read Daniel Keyes' Flowers for Algernon. It's the first serious fiction book I've read in a while. I empathize because sometimes I feel lonely in the same way, though my intelligence is not as high as the protagonist's, of course. Keyes creates a sad yet loving and lovable character, and I find parts of myself mirrored in him. I suppose that's what people find all the time in other novels and movies.

Then, in a sudden intuition... I knew it wasn't the movies I wanted, but the audiences. I wanted to be with the people around me in the darkness.

The walls between people are thin here, and if I listen quietly, I hear what is going on. Greenwich Village is like that too. Not just being close - because I don't feel it in a crowded elevator or on the subway during the rush - but on a hot night when everyone is out walking, or sitting in the theater, there is a rustling, and for a moment I brush against someone and sense the connection between the branch and trunk and the deep root. At such moments my flesh is thin and tight, and the unbearable hunger to be part of it drives me out to search in the dark corners and blind alleys of the night.
I am solemn right now, as though the book were not fiction but a description of my future. I don't think I would have the strength to watch myself devolve, not after seeing what I am capable of. I would have killed myself.
 ... I could see how important physical love was, how necessary it was for us to be in each other's arms, giving and taking. The universe was exploding, each particle away from the next, hurtling us into dark and lonely space, eternally tearing us away from each other - child out of the womb, friend away from friend, moving from each other, each through his own pathway toward the goal-box of solitary death.
I just want to say I love you. Really, I do. I just don't know how to show it.
I've been starved for simple human contact.

No comments

???

Last week's question was: if a car's wheels fit perfectly on train tracks, and the wheels have no tires, does the car need to be steered?

Here's what I think. If you've visited a science museum, you may remember a demonstration with parallel downhill tracks, where you can try different shapes of wheels to see which keeps the axle on the track on its way down. The winner is the pair of wheels where the inner diameter of each wheel is larger than the outer, so there is an equilibrium point in the middle where the axle sits very well. Unlike the opposite case, where the outer diameter is larger than the inner, these coned wheels push the axle back toward the center of the track instead of pulling it off to one side.


This image gives a better idea of what I'm talking about.

Car wheels, without tires, are similarly shaped. Here's an image I found of the wheel, and you can see it also has a rim on the inside edge. The progression from center to the rim is not as smooth as on train wheels, but as long as there is some protrusion it should hold the car in place on the tracks.

Which means, like trains, if a car does go on tracks without tires, and it fits the track perfectly, it will not need to be steered.
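The self-centering mechanism above can be sketched with a couple of lines of arithmetic. The nominal radius and cone slope below are invented numbers, just to show the direction of the effect:

```python
# Sketch of why coned wheels self-center. When the rigid wheelset
# shifts laterally by y, the rolling radius grows on one side and
# shrinks on the other; the faster-rolling side swings the axle back
# toward the center. R0 and CONE are assumed values for illustration.
R0 = 0.45    # nominal rolling radius in meters (assumed)
CONE = 0.05  # slope of the coned wheel tread (assumed)

def rolling_radii(y: float) -> tuple[float, float]:
    """Rolling radii of the (left, right) wheels after a lateral
    shift y, where positive y means moving toward the left rail."""
    return R0 + CONE * y, R0 - CONE * y

left, right = rolling_radii(0.01)  # wheelset shifted 1 cm to the left
# The left wheel now rolls on a larger radius and covers more ground
# per revolution, steering the axle back to the right: a restoring
# effect, with no steering input needed.
assert left > right
```

With the cone reversed (outer diameter larger than inner), the inequality flips and any displacement grows instead of correcting itself, which is the unstable case from the museum demonstration.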

For this week, I have one and a half questions:

The title of this post is "???". Blogspot gives a link to each individual post, but removes any punctuation, so that "Question of the Week?" ends up at question-of-week.html (I guess they remove common words too). So what happens if the title is only punctuation? Click on the title of this post to find out.

Here's the real question. Humans have 5 senses: sight, hearing, touch (which can be subdivided into pain, heat, and so on), smell, and taste. Of these 5, language is expressed for sight (reading/writing), hearing (listening/speaking), and touch (Braille). What would a language based on the olfactory (smell) or gustatory (taste) systems be like, and seeing that we haven't developed one, why is it impractical?
No comments

Religious and Scientific Domains of Inquiry

I want to write something short about the difference between religious and scientific domains of inquiry.

Science aims to expand our knowledge of the world, which I think we all agree is a good and necessary thing. There is, however, a restriction: this knowledge must be repeatably testable in controlled experiments. This means that some things lie slightly outside the reach of science: the existence of unicorns, for example. Although we can say with extremely high probability that unicorns don't exist, science cannot prove that they don't. With unicorns it's easier, since we would inhabit the same environment they do. With beings like God, it's a lot harder to say whether He exists or not.

That's where religion comes in. Besides telling people that God exists and is loving and good, religion also tells us that God wants us to treat others kindly, to respect other people's property, and so on. As with the existence of God, there is no morality in nature, and thus no truth to find. Religion or philosophy or something similar is needed to give us direction as to how we should act, and what our purpose in life is.

The problem, in my opinion, arises when religion makes claims about the natural world, for example that animals did not evolve. There is clear evidence supporting evolution, and science offers a better explanation of these processes than religious texts do. Other such topics include the possibility of a worldwide flood, the dividing of the Red Sea, and so on. Just as science cannot offer much in terms of morality, religion is lacking in its explanation of natural phenomena.

This shows that religion and science actually have mutually exclusive domains, and hence my problem (if I have one; I'm still deciding) with religion does not come from science, but from something else.
No comments