Inebriated, v2.
Unknown, 2009-05-31
As Faye so aptly put it, I am once again inebriated. But I'm close to graduating, so it's my excuse to loosen up a bit. I will post the answer to last week's question, as well as a new question, tomorrow at midnight.
Thoughts on Being a Student
Unknown, 2009-05-29
I recently read the first few chapters of The Closing of the American Mind, by Allan Bloom. I didn't finish it - while the content was interesting, I found Bloom takes too long to get to the point - but be that as it may, I would like to first offer a summary of what I think Bloom's point is, before offering some thoughts.
Bloom's basic thesis is that the modern drive for openness and tolerance takes away the individuality of cultures and of people. By asking students to be open to different ideas, everything becomes relative to students, and therefore unimportant. They are equally passionate - or rather, equally dispassionate - about everything. They do not argue for a way of life, and do not see things above money, happiness, and life. In Bloom's words, "they do not drink the spirit of life, but prefer the water of the crowd."
Bloom's explanation of this is the loss of the classics: people are not reading great books anymore, and therefore do not experience, through the characters in the books, what it means to sacrifice their lives for a cause, or to struggle through truly difficult circumstances. They no longer have what Bloom calls "wisdom". The Bible is one of Bloom's classics, and he writes,
"With [the Bible's] gradual and inevitable disappearance, the very idea of such a total book and the possibility and necessity of world-explanation is disappearing. And fathers and mothers have lost the idea that the highest aspiration they might have for their children is for them to be wise - as priests, prophets, or philosophers are wise. Specialized competence and success are all that they can imagine. Contrary to what is commonly thought, without the book even the idea of the order of the whole is lost."

Personally, I'm not convinced that students are truly lost without the classics. I do, however, agree with Bloom's general belief that today's students are lacking something. I'm not sure there's a word for it, but it encompasses the desire to keep learning, and the desire to be convinced of something and to work to prove or defend that conviction. Bloom uses this anecdote to illustrate what students are missing, and how modern education is failing to help students regain that drive:
I once had a debate about education with a professor of psychology. He said that it was his function to get rid of prejudices in his students. [...] I began to wonder what he replaced those prejudices with. He did not seem to have much of an idea of what the opposite of a prejudice might be. [...] Did this professor know what those prejudices meant for the students and what effect being deprived of them would have? Did he believe that there are truths that could guide their lives as did their prejudices? Had he considered how to give students the love of the truth necessary to seek unprejudiced beliefs, or would he render them passive, disconsolate, indifferent, and subject to authorities like himself, or the best of contemporary thought?

After reading this story I was reminded of an AI koan in the Jargon File:
In the days when [Gerald Jay] Sussman was a novice, [Marvin] Minsky once came to him as he sat hacking at the PDP-6.
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-Tac-Toe", Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play", Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?", Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.

The moral of this story is that randomized preconceptions are not the lack of preconceptions - you just don't know what they are. The psychology professor is doing much the same thing: getting rid of whatever prejudices in his students he can detect doesn't mean that the students have no prejudices left. Worse, now no one knows what those prejudices are, and so no one can act to instill a more useful set of beliefs. In computer science terms, the goal of initializing a neural network is not to rid it of "preconceptions", but to give it the best "preconceptions" to start learning with.
Students, following only the common and mundane goals of money, fame, "success", are missing out on the greater enjoyment of life. Too few, and very often far too late, realize that they don't like what they're doing. "The most common student view lacks an awareness of the depths as well as the heights, and hence lacks gravity." There needs to be some energy, some driving force, whether from the reading of the classics or from some other source, to propel students above the hoi polloi. Bloom at one point called this "indignation". "Indignation is the soul's defense against the wound of doubt about its own; it reorders the cosmos to support the justice of its cause. It justifies putting Socrates to death. Recognizing indignation for what it is constitutes knowledge of the soul, and is thus an experience more philosophic than the study of mathematics."
But what can schools and universities do? "Education in our times must try to find whatever there is in students that might yearn for completion, and to reconstruct the learning that would enable them autonomously to see that completion."
My favorite quote in the book simultaneously points out the problem and the solution:
The only way to counteract this tendency [that the here and now is all there is] is to intervene most vigorously in the education of those few who come to the university with a strong urge for the un je ne sais quoi, who fear that they may fail to discover it, and that the cultivation of their minds is required for the success of their quest. We are long past the age when a whole tradition could be stored up in all students, to be fruitfully used later by some. Only those who are willing to take risks and are ready to believe the implausible are now fit for a bookish adventure. The desire must come from within.
Drunk
Unknown, 2009-05-27
I have too much wine in me to properly write a post, so I'll postpone that to some other time. Thanks for understanding.
Eliza the Psychologist
Unknown, 2009-05-25
Last week's question: Why do children like playing on swings and slides?
You know, even after spending some time looking on Google, I couldn't find a satisfactory answer to this question. If I may, however, I would like to propose the following partial solutions:
- Swings and slides both offer the kid motion, and it's faster motion than children that age can achieve by themselves. So it offers a novel sensation for the children.
- In a similar vein, the slide at least offers a new tactile experience as well. I can't think of many other situations where you can feel solid material sliding underneath your fingers.
- Slides and swings are usually located in playgrounds, where lots of children gather. The opportunity to play with other children forms the last part of the appeal.
Building on the last point above, this week's question is: why do humans have social needs?
Python Bashing
Unknown, 2009-05-22
As a computer science major, it is not surprising that I write a lot of small programs every day: everything from simple text transformations (a chain of seds) to my alarm clock, a Twitter backup, and an ISBN converter. Even my journal search tool is my own work, and that goes back quite a ways.
All of the programs mentioned above, however, are Bash scripts.
I'm actually a little sheepish about this. After all, Bash is not a real programming language, just a way to automate a few administrative operations. There is no type system, no support libraries, no object-oriented utilities.
I've thought about doing more things in Python, but somehow there's a barrier to entry. I reasoned that since what I do is mostly with text, it's easier to read from files in Bash than in Python (a single cat as opposed to open().read()). But really that shouldn't be a problem. The output might be an issue too, if I wanted things in nice columns and whatnot. But then instead of column or paste I would just be using printf or an equivalent.
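To make that concrete, here's the sort of throwaway snippet I mean (the file name and contents are made up for the demonstration):

```shell
# throwaway demo: create a small input file to play with
printf 'apple\nbanana\n' > /tmp/demo.txt

# reading in Bash: the contents are one short command substitution away
contents="$(cat /tmp/demo.txt)"

# the Python equivalent would be something like:
#   contents = open('/tmp/demo.txt').read()

# and columned output via printf, instead of column or paste
printf '%-10s %s\n' apple fruit
printf '%-10s %s\n' banana fruit
```

The Bash side really is just one cat; the difference in effort is tiny, which is exactly why the barrier feels more psychological than real.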
I was digging around Paul Graham's older essays, and I came across one titled "Being Popular". It's about what he thinks makes programming languages popular. Most of the attributes in the essay don't describe Bash scripts very well, except for the section on throwaway programs. And that, I think, is exactly why I use Bash scripts so often.
Because I use Linux, a lot of my operations are done on the command line. I don't just mean crazy programmer things like compiling or system administration, but everyday things like moving files, writing essays, even reading a book. My volume control, in fact, is also a command line program (alsamixer; although now I have a keyboard shortcut for that and rarely actually open alsamixer). Given that I'm in a terminal so often, Bash is practically always open for me.
It is precisely this always-available quality that makes me use Bash. As Graham mentioned, I don't want to write something, then wait for it to compile, then run it to see if it works. Bash works as an interactive prompt, and I can type the whole program in "one line" and just run that. More than that, I can test things a lot quicker in Bash than I can in C or Java: just open a new terminal (Ctrl-N) and I have a clean slate to test whether a certain line of my program does what I want. Python, of course, is also interactive, but it is unfortunately not as easily accessible from the desktop. I have to type a full 7 keys (P-Y-T-H-O-N-(enter)) to get to the interactive shell, and this slows down the entire process.
So until someone manages to use Python as their main shell, I'm sticking with Bash.
Killing Children
Unknown, 2009-05-20
It is commonly known that a lot of computer jargon is tongue-in-cheek, especially when it was first coined. For example, processes are compared to people: when one process spawns another, the original is called the parent and the new one the child. When you stop a process, it's called killing it. There are also times when a process is dead but still taking up resources, and very naturally we call those zombies...
Recently I ran into the problem of killing child processes. I want the output of a command in a variable, so something like this:
output="$(command...)"
The problem is twofold: the command itself spawns new children, and the command I'm running may not terminate. What I want to do is set a timer, and when the time is up, kill the process. To kill it, though, I have to kill the youngest (innermost) child. I couldn't find a simple way to do this, so this is what I came up with:
output="$(command...)" &
t=1
d=0
while (( "$t" < 120 && "$d" == 0 )); do
    sleep 1
    if jobs | grep -v 'Done' | grep -v 'Exit 1' | grep 'parse' > /dev/null; then
        t="$(( $t + 1 ))"
    else
        d=1
    fi
done
if (( "$d" == 0 )); then
    ps -AH | grep -A 10 "^ *$(jobs -l | awk '{print $2}')" | grep -B 10 ps | grep -v ps | awk '{print $1}' | while read pid; do
        kill -9 "$pid"
    done
    output=''
else
    output="$(command...)"
fi
My question is, is there a simpler way?
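For what it's worth, one simpler candidate worth testing is GNU coreutils' timeout, which wraps the timer and the kill in one command - assuming it is installed, and with the caveat that it signals only the process it spawns, so deeply nested grandchildren might still escape it. Here sleep 5 is just a stand-in for the real long-running command:

```shell
# sketch: allow the command at most 2 seconds, discard its output on timeout
if output="$(timeout 2 sleep 5)"; then
    :               # the command finished within the limit; output holds its stdout
else
    output=''       # timed out (timeout exits with 124) or failed; discard
fi
```
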
Swinger Party
Unknown, 2009-05-18
Last week's question: What is the legal status of the verdicts shown on court tv programs?
Apparently, the legality and degree of binding for these shows is determined on a per-show basis. The People's Court's verdicts are supposed to be binding, as are the verdicts on Judge Judy. Judge Joe Brown's verdicts, however, have no real judicial power. What makes this partially interesting is that both the plaintiff and the defendant are paid some amount to be on the show, and whatever damages are awarded at the end are deducted from or added to that payment as necessary. The only advantage to doing this over going to actual small claims court (which has a similar damage limit of $5,000) is that you get more publicity, and you don't have to get a lawyer. Still, I would never do something like that even if the public might be on my side. I just can't take seriously people who want to go on TV during a court case.
This week's question: I've been visiting a local playground a lot lately, suddenly being in the mood to play on the swings. But why do kids (and adults) like to play on swings and slides?
Much Ado About Nothing
Unknown, 2009-05-15
I would like to offer a concrete example of my stoicism and rationality:
I think birthdays are stupid.
I mean, it's really all arbitrary. The earth year is merely a coincidence, depending on the mass of the earth and the sun and the constant of gravitation (although I suppose I could use the anthropic principle and say that any planet's year would have to be roughly equal, to achieve life-sustaining conditions). While this is important for agriculture, there is no reason we would celebrate it for our birth. And what's more, it's not even correct. We are celebrating our birthdays every 365 days, when it actually takes 365.242199 days for the earth to go around the sun. That means every year we are celebrating earlier in earth's orbit - a whopping 387,483 miles from where we celebrated last year. Add in leap years, and counter leap years every 100 years (except the counter counter leap years every 400 years), and that imprecision just builds up.
And what's so important about our piece of rock circling a giant ball of gas? Why not use the lunar year - so we have birthdays every 27.3 earth rotations? Or the galactic year, which makes its round once every 250 million years? Another nice and arbitrary one is the Halley year - which goes by the orbit of Halley's comet (and Halley wasn't even the first person to observe that lump of ice).
We can even go the other way, and not just celebrate birth days, but also birth months (30 days of partying!) and birth seconds (make a wish! too late). And when you think about it, birth seconds are much better for your ego. Think about it: 6,740,000,000/365.242199 = 18,453,508 people have the same birthday as you, while only 6,740,000,000/31,556,926 = 214 people have the same birth second. If you go down to birth nanoseconds, then you'd even be /unique/. Think about that.
Oh, and there's the whole business with half-birthdays (which I guess are okay), unbirthdays (which are awesome, if only because it appears in Alice's adventures), as well as decimal birthdays (which are just plain bad, because of the additional arbitrariness of the decimal system. Really, people were arrogant enough to just use their number of fingers as the base?).
Personally, I think if you're going to celebrate someone's life, just celebrate it whenever - I think I've listed enough birth-[time unit]s that you'll have reasons to party for the rest of your life. So why not just enjoy your day, take pleasure in being alive, and be glad that other people are suffering for the other 364.242199 days of the year?
Now, if you're happy and you know it... clap your hands!
The Data Experience
Unknown, 2009-05-13
I've been thinking a lot recently about the task of conveying vast amounts of information in a short period of time. I'm not sure why that has been on my mind. I'm calling it the data experience because (perhaps with the synesthesia post and all) I feel that it is more than just data visualization - it involves the other senses as well.
Experiencing data, if done properly, can give the sensation of more than just the 3D world we live in. For example, stress analysis simulations put areas of high stress in red and areas of low stress in blue - for a 3D object. The color actually gives the model a fourth dimension. Of course, it's not the entire 4D object - in the case of stress analysis there is no such thing - but it is a mapping from 3-space to 1-space. Just as a 3D surface can be represented on a 2D graph with color (a heat map; the two representations can be combined), a 3D model with color can express a 4D surface. Although less common, other orthogonal features of graphs can be used to express other information, taking the visualization into even higher dimensions. An animated heat map (for example, a stress analysis which shows the change as the stress is applied and then relieved) conveys 5D data. I can imagine other things - for example, the thickness of the mesh outline - which could be used to show even more information at once.
On the other hand, sometimes the data can just be in 3D, but the user experiences higher dimensions. One of the authors of The Mathematical Experience talks about a 4D intuition. A project at Brown University allows people to manipulate a tesseract (also known as a hypercube - a 4D structure where all edges are the same length, and each is orthogonal to every edge it shares a vertex with). By using the controls to rotate the virtual tesseract in different ways, the author suddenly could feel and reason about the tesseract, just as normal people can mentally rotate a cube. I find the possibility of lower-dimensional beings having an intuition about higher-dimensional objects fascinating.
Another use of data visualization is not so much about the data, but about the visualizer's partition and organization of the world. Randall Munroe, for example, does this rather often, as does Indexed.
I don't have much more to say, but I would like to share some cool data experiences:
- Did You Know - a video with surprising statistics. Notice how the background images are made to convey meaning as well, although this is now very common with newspaper polls and such.
- AlloSphere - an enclosed sphere where sound and images can be used to help scientists explore connections.
- TwitterVision - a simple but elegant way to show not only what people are doing, but where they are doing it.
- Ball Droppings - a Google experiment with interactive AJAX. Not exactly data visualization, but it does link the different senses.
Synesthesia
Unknown, 2009-05-11
Last week's question: how do people with grapheme-color synesthesia perceive ambigrams?
I talked to a friend who has grapheme-color synesthesia, and did some "testing" with her. The results are at once unexpected and intuitive.
As she said in a comment on last week's post, the color all depends on the recognition of the letter. For normal text, even if the letters are upside-down, the colors are still associated with them as long as they are legible. It makes sense, therefore, that when she looks at an ambigram, the "color" of the letters changes according to which interpretation she has in mind. If she consciously reads the ambigram upside-down, the letters will have those colors.
Getting to the heart of my question, the letters change color abruptly. If anything, there is a middle ground where she doesn't recognize the letters at all, and there is no color associated with the scribble. This was more clearly demonstrated when I showed her an unfamiliar ambigram sideways. Although she recognized one letter, the others were just scribbles to her, and so they remained black. Even after seeing the ambigram oriented correctly, when shown it sideways again it still remained black, unless she tilted her head. She can't seem to get two colors to appear with one glyph at the same time, much as in my favorite optical illusion it's hard to see the ballerina turning both ways at once.
The answer which I didn't know I wanted was that the color comes after recognition. It is not as though the brain sees some lines and puts a color to them first, before the person recognizes them to be a letter. Rather, even if the person knows it's a letter, if the glyph is transformed (in this case, rotated and/or merged with other lines to form a different letter) and the person forces themselves to see just some scribbles, then the "letter" loses its color.
Personally, I'm intrigued by synesthesia because of how it explicitly connects two unconnected stimuli. On some high level it works the same way as those leaps of logic that people make to reach an unintuitive but effective solution. And sometimes I wish I were a synesthete, just so I could have that experience.
This week's question: a number of networks have "court tv" programs, where a plaintiff and a defendant get "tried" in front of a "judge", usually settling some small monetary issue of up to several thousand dollars. What I want to know is, are the plaintiffs and defendants legally bound to honor the verdict? Do the networks do anything to ensure that the money/goods change hands? Or is this all just another "reality tv" show?
I talked to a friend who has graphame-color synesthesia, and did some "testing" with her. The results are at once unexpected and intuitive.
As she said in a comment for last week's post, the color all depends on the recognition of the letter. For normal text, even if the letters are upside-down, the colors would still be associated with them as long as they are legible. It makes sense, therefore, when she looks at an ambigram, the "color" of the letters change according to which interpretation she has in mind. If she consciously reads the ambigram upside-down, the letters will have those colors.
Getting to the heart of my question: the letters change color abruptly. If anything, there is a middle ground where she doesn't recognize the letters at all, and no color is associated with the scribble. This was demonstrated more clearly when I showed her an unfamiliar ambigram sideways. Although she recognized one letter, the others were just scribbles to her, and so they remained black. Even after seeing the ambigram oriented correctly, when shown it sideways again it still remains black, unless she tilts her head. She can't seem to get two colors to appear with one glyph at the same time, much as in my favorite optical illusion it's hard to see the ballerina turning both ways at once.
The answer I didn't know I wanted was that the color comes after recognition. It is not that the brain sees some lines and assigns a color first, before the person recognizes it to be a letter. Rather, even if the person knows it's a letter, if the glyph is transformed (in this case, rotated and/or merged with other lines to form a different letter) and the person forces themselves to see just some scribbles, then the "letter" loses its color.
Personally, I'm intrigued by synesthesia because of how it explicitly connects two unconnected stimuli. On some high level it works the same way as those leaps of logic that people can make to arrive at an unintuitive but effective solution. And sometimes, I wish I were a synesthete just so I could have that experience.
This week's question: a number of networks have "court TV" programs, where a plaintiff and a defendant get "tried" in front of a "judge", usually settling small monetary disputes of up to several thousand dollars. What I want to know is: are the plaintiffs and defendants legally bound to honor that verdict? Do the networks do anything to ensure that the money/goods change hands? Or is this all just another "reality TV" show?
Labels and Folders
Unknown, 2009-05-08
In the last few years, the words "label" and "tag" have gotten a whole new meaning on social networks. del.icio.us, Gmail, flickr... you name it, you can probably tag it. Aside from the definite usefulness and convenience, tags are interesting to me because they are an example of an idea which broke the mold.
Try if you can (if you use Hotmail/Windows Live Mail, it shouldn't be too hard) to remember what email was like without labels and tags (which are the same thing). We used these things called folders, and each email (oh yeah, we didn't have conversations, either) could only be in one folder at any time. The terminology is not surprising, because we borrowed it from how people organize real-world letters on their desktop (I hope you're making the connection now). If you had a physical letter, it could only go in one folder - unless you made copies, but then the copies don't reflect each other. If you highlighted one letter or made annotations, the copy in another folder won't change to reflect that. Because this was the only way of treating communication, it was the model used for the first implementations of email.
Now, what I just said wasn't quite true. There are physical ways of giving an object several categories at once. They're what we call post-its. You can put different colored post-its in a book to note where, say, the author talks about the life of that time period versus the symbolism of socks. If they happen to be on the same page, no problem - you can tell the two apart because they're different colors. The only problem is that you can't tag a page with too many things - there's only so much page border for the post-its to stick out of (not to mention that post-its only come in so many colors, although this can be mitigated by writing the theme on the post-it - hey, labels!).
When you think about it, tags are the logical extension of that. As mentioned, there is no limit to how many colors/labels you can have. This is only restricted by your ability to create strings (er, computer strings, that is, a series of characters). The adhesive on the post-it will also never die, and you can put as many labels on one thing as you want.
The digital tag can be thought of as a superset of folders. If you only put one tag on each object, then they act in the same way as folders. You can even put slashes (/) in the tags to simulate nested folders. And why not? Tags are more powerful than folders, and can keep the same feature (that is, it's backwards compatible). It seems to me that eventually the tag system will be moved offline to the desktop as well (if cloud computing doesn't completely eliminate the desktop market). I'm sure there are difficulties (the datastructures in the OS would have to be completely reorganized, for example), but I think the final result would be worth it.
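The "superset" claim is easy to see in code. Here's a minimal sketch (hypothetical names, not any real system's API) of a tag store where every item carries a set of strings; restrict each item to one tag and it behaves exactly like folders, slashes and all:

```python
from collections import defaultdict

class TagStore:
    """A minimal tag store: each item maps to a set of tag strings."""

    def __init__(self):
        self.tags = defaultdict(set)  # item -> set of tags

    def tag(self, item, *labels):
        self.tags[item].update(labels)

    def with_tag(self, label):
        # "Opening a folder" is just filtering by one tag.
        return {item for item, labels in self.tags.items() if label in labels}

store = TagStore()
# Folder-like usage: one tag per item, slash simulating nesting.
store.tag("receipt.pdf", "finance/2009")
# Tag-like usage: as many tags per item as you want.
store.tag("trip-photo.jpg", "travel", "family", "2009")

print(store.with_tag("travel"))        # {'trip-photo.jpg'}
print(store.with_tag("finance/2009"))  # {'receipt.pdf'}
```

Note that nothing in the data structure changes between the two usages; folders are just the degenerate case of one tag per item.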
On a side note: shortcuts on Windows, or links on Linux, offer sort of the same functionality. One difference is that (for shortcuts and soft links) what is stored is not actually the file, but directions on how to find the file. It's like looking in a folder for one thing, then finding something there telling you to look in another folder. Linux does offer hard links, which put the actual file there (I'm dumbing this down; if you know better, good for you), but that's all just technical talk.
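The soft/hard distinction above can be demonstrated with Python's standard library (a sketch for POSIX systems; creating symlinks on Windows may require extra privileges). A soft link stores directions to the file and breaks when the original goes away; a hard link is another name for the same file data and survives:

```python
import os
import tempfile

d = tempfile.mkdtemp()
original = os.path.join(d, "original.txt")
with open(original, "w") as f:
    f.write("hello")

soft = os.path.join(d, "soft.txt")
hard = os.path.join(d, "hard.txt")
os.symlink(original, soft)  # stores a path: "directions on how to find the file"
os.link(original, hard)     # another name for the same underlying data

os.remove(original)

print(os.path.exists(soft))  # the directions now lead nowhere
with open(hard) as f:
    print(f.read())          # the data is still reachable by its other name
```

After the original is removed, following the soft link fails, while the hard link still opens the same content.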
I really admire the person who thought to bring tags to the computer (according to Wikipedia, it was the del.icio.us folks. Kudos), because they saw through the limitations of the current system and did something better (although it turns out not to be "new" in this case). In my mind I associate it with Newton and Leibniz seeing through functions and discovering calculus, or Einstein cutting through the luminiferous ether to arrive at relativity. Of course, tags are not quite as history-making as those events, but there is something similar in the minds behind all of these creations.
I tried to think of more instances where people have taken ordinary concepts, applied it to computers, then created a more powerful and more general version of it, but I didn't come up with anything (except some weak ones, like bookmarks). Can you find any examples?
"bookmarks" (more like book darts now)
Thoughts on Being a Professor
Unknown, 2009-05-06
Faye tumbl'd upon a NY Times article which goes very well with what was on my mind.
Let me give some comments about the article first. There are some things which I disagree with - none more strongly, perhaps, than "young people enroll in graduate programs, work hard for subsistence pay and assume huge debt burdens, all because of the illusory promise of faculty appointments." While for some the decision to become a graduate student (and later on a professor) may indeed be an economic calculation, I - perhaps romantically and naively - believe that most choose to do so because they are interested in the field, and really do want to learn more about the subject.
That little note aside, however, I find Taylor's op-ed most accurate, both on economic and academic sides of university education. Let me start with the economic side.
As Taylor noted, the tenure system works against free-market economics. There is no incentive for tenured professors to continue working on breakthrough research (except the personal interest mentioned above, which for the same reason I cannot consider insignificant). In fact, I think the abolition of tenure could help academia in at least three (inter-related) ways:
- Professors need to work hard even after tenure, so it increases their output
- Professors who lose tenure will leave spots open for recent graduates
- Both of the above increase the competition for professorships, and thus the overall quality of professors increases.
Note I don't mean to imply that all professors slack off after tenure; however, the fact that the system allows them to do so means the system is broken.
Of course, the abolition of tenure has some implications. A good question to ask to arrive at them is: why was the idea of tenure created in the first place? According to Wikipedia, tenure was created to guarantee the academic freedom of professors, so that they can investigate what they are truly interested in, without fear of a disapproving administration or board of trustees. But in this sense, tenure is no more than a legally binding contract, saying what the university can and cannot fire the professor for. And for that, any legal contract will do, regardless of the length of the contract. Additionally, since tenure approval is given by a committee of other professors, academics who don't follow the majority opinion are unlikely to get tenure in the first place. Examples include Norman Finkelstein (who was mentioned in a reference in the above Wikipedia article) and a whole host of others, whose cases you can read about on the American Association of University Professors (AAUP) website.
Before I read the NYTimes article, I had spent a lunch talking with a friend about the tenure system. Our conclusions were surprisingly similar to what Taylor suggested. We thought that tenure should not be lifetime, but limited in scope, and subject to regular review. In particular, we thought of something like a year of evaluation, and if it is satisfactory then you get four years of "tenure". The continuous seven-year contract which Taylor suggests is probably easier to implement, as one year is not a very long time for evaluation, and there's always the possibility of professors delaying the publication of results until their year of evaluation.
Besides addressing the problem of tenure, Taylor's op-ed actually spends a lot of time on restructuring university departments. I have expressed before my concern that I am not knowledgeable enough (see the postscript for this post) to do anything ground-breaking. I don't know what the right balance between breadth of knowledge and depth of field is. Taylor seems to lean towards breadth of knowledge, with the restructuring of departments to be problem-based. There are, however, benefits to being around people in the same field. Problems in AI may have been previously solved by people working on systems (for example, synchronization is more often discussed in the latter context than the former), and these problems would be unique to that field. So while I think putting faculty and students into problem-oriented groups is a good idea, I think the current grouping by subject should also be kept. One structure for breadth, the other for depth.
That's all I have to say for now. I recently read parts of Allan Bloom's Closing of the American Mind, which offers a critique of the education scene. Although it was written over 20 years ago, I find the trends it describes still accurate, if not more so, today. We disagree on the cause of the trends, but Bloom makes it clear that it's a problem. But that's a story for another time.
Colored Language
Unknown, 2009-05-04
Last week's question: what answers do people give for the question "why do good things happen to bad people?"
Here's a survey of what people have written. These were taken off the first 6 pages of Google results, searching for the question without quotes. Just browsing through, it's clear that people ask the reverse question (bad things to good people) quite a bit more often. A quoted search gives 72,700 results for bad things/good people, but only 4,200 for good things/bad people.
I'm a little surprised, because I think it's a much more bothersome question. Let's take a look at what people think.
Justice is delayed, not non-existent
This is in essence the same answer to the bad things/good people question. The idea is that there is an objective good and you will be rewarded, but that reward is not immediate (read: it will come after you die). In the same vein, punishment is not immediate either.
It's all Satan's trick; just focus on what God has given us
This is an interesting explanation, because the logical extensions are curious. Paying attention to what God has given us, while not a bad thing, is not really a solution. It implies that justice is delayed, or that what you get in this life is not as important as what you get in the next one. Otherwise, if it is Satan's trick, then Satan seems to be a perfectly valid way of getting rewards. But if that's the case, why aren't people giving up their possessions and living like Mother Teresa? Behaviorally, then, worldly possessions do have some value. Turning inwards to what we do have, therefore, is more ignoring the problem than explaining it.
It's to show God is merciful to everyone (who are all bad)
I assume that God is merciful to them so that they may repent... but when the "bad people" do well in this life, they don't believe they do well because of God, but because of what they did. That is, they're probably not going to repent. Also, it's delayed for the good people, so the good people can... what?
The "good things" may not be really good
Sure. The people with "good things" (i.e. wealth) also have bankruptcies and divorces. But not all of them do. Just as one bad thing happening to a single good person makes justice questionable, one good thing happening to a single bad person raises the same question. That's like saying: oh, that good person is vegetative in a hospital, so at least he won't be rained on now. Great.
It is paid for in something else, like regret and guilt
You don't feel guilt if you don't get caught. Also, fascinating discussion on circumcision/"genital mutilation".
No Christian would take a bad-person-with-good-thing's place
Yeah, not all atheists have horrible family lives. Please. To put it in perspective, would you change places with someone who talks to an invisible friend every week, who believes they and other people are sometimes possessed by spirits, and that there are shadowy figures plotting to ruin their lives at every turn? No, it's not a delusional new-age conspiracy theorist (or John Nash crossed with Emily Rose). Just prayers, the Holy Ghost, and Satan, respectively.
My personal answer? Dumb luck. Some people just get lucky, and others get unlucky. Deal with it.
This week's question: synesthesia is an intriguing phenomenon in which two senses are inexplicably linked, such that when one sense is activated, synesthetes (as those with synesthesia are called) involuntarily experience something in the other sense as well. One of the most common forms is grapheme-color synesthesia, where letters, numbers, and parts of words are associated with colors. Given this type of synesthesia, how do synesthetes perceive ambigrams?
PS. I have no idea what the answer is, so if you know someone with grapheme-color synesthesia, can I contact them and find out? Thanks.
First Date?
Unknown, 2009-05-01
I learned something cool in social psychology today. Psychologists Don Dutton and Arthur Aron did a study, commonly called the "bridge study", where they tried to see if people would misattribute adrenaline due to fear/anxiety to attraction (Wikipedia/paper). They found that people did tend to "report" more sexual imagery when they participated in the experiment after crossing a narrow suspension bridge, and they were more likely to accept a phone number ("to find out the results of the experiment") than if they had just crossed a solid concrete bridge.
I thought about this result and came to this idea: doesn't this mean that on first dates you should do something adrenaline-inducing? A thriller, a roller coaster ride, rock climbing... anything to get that adrenaline flowing, so it can be misattributed to you.
That sounds like a fun psychology experiment...