tag:blogger.com,1999:blog-43443600812388239552024-02-02T16:33:28.360-05:00Justin's Think TankUnknownnoreply@blogger.comBlogger403125tag:blogger.com,1999:blog-4344360081238823955.post-75691192519391102202014-09-26T20:00:00.001-04:002014-09-26T20:01:12.946-04:00New BlogMy blog has moved! It now lives at <a href="http://justinnhli.com/writings.html">http://justinnhli.com/writings.html</a>. The migration is still taking place, so some things (especially images, but also the commenting system) may still be a little off. Regardless, I will now be publishing there exclusively, and the contents on this site may disappear without warning in the future.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-36350300863427592342014-08-22T10:27:00.003-04:002014-08-22T10:29:29.901-04:00In Defense of a Public Digital Life<p>I am a private person, but you wouldn't know that from looking at my online presence. I have a fairly active <a href="https://twitter.com/justinnhli/">Twitter account</a>, which is entirely open to the public. A lot of my tweets are <a href="https://twitter.com/justinnhli/status/499999983685603328">inane</a>, <a href="https://twitter.com/justinnhli/status/500191303570042880">sexual</a>, often <a href="https://twitter.com/justinnhli/status/472743073316077568">potentially offensive</a> and <a href="https://twitter.com/justinnhli/status/502487429321089024">trigger-warning-y</a>. This content is also mirrored on Facebook, although on there it's visible only to friends of friends. 
And, of course, I keep a blog, which stretches back a good number of years and has covered the <a href="http://justinnhli.blogspot.com/2010/07/some-shit.html">same</a> <a href="http://justinnhli.blogspot.com/2013/11/mental-objectification.html">range</a> <a href="http://justinnhli.blogspot.com/2013/11/asexuality.html">of</a> <a href="http://justinnhli.blogspot.com/2012/01/thoughts-on-cognitive-sport.html">topics</a> as my tweets, if on slightly more personal subjects and at greater length.</p><p>I am about to enter the job market and, despite common sense and advice, am seriously considering keeping all this content online and public (and searchable through my shared user name). I am well aware that employers often look through applicants' social network activity to help get a picture of them as a person, so this may lead to my unemployment down the road. But I also have reasons for wanting these aspects of me to be up, and I want to defend that here.</p><p>One reason is that I think we judge others in a biased way. Imagine you are choosing between two people for a job as a high school teacher, A and B, who are otherwise equally qualified. For A, you can find nothing about them on Facebook - let's say it's because their privacy settings are too strict. For B, you can look through their profile, and in a couple of minutes of browsing, you find a picture of them mugging for the camera with a beer in each hand. Instinctively, what you want to think is "Jeez, high school students are going to think this person's an alcoholic, so let's hire the other person."</p><p>The problem with this conclusion is that you haven't considered what you <em>haven't seen</em>: given that this is Facebook, that there are no pictures of B passed-out-drunk in their own vomit is probably a point in their favor. 
More disturbing is that <em>you have seen no evidence about A whatsoever</em>; you know strictly more about B than you know about A, and what you do know places B well within the range of normal Facebook behavior, <em>and yet they were punished for giving you that information</em>. To make an online dating analogy, this is like focusing on the fact that someone likes My Little Pony and thereby deciding not to message them, when the rest of their interests look compatible; but then turning around and messaging someone who wrote very little on their profile, but has listed no obvious clashes with your interests. (Raise your hand if you're guilty; I am.)</p><p>There are several cognitive biases reflected in this example: <a href="http://en.wikipedia.org/wiki/Selection_bias">biased selection</a> of information, leading to the <a href="http://en.wikipedia.org/wiki/Availability_heuristic">availability heuristic</a>, <a href="http://en.wikipedia.org/wiki/Base_rate_fallacy">neglecting the base rate</a> of compromising pictures on Facebook, and the <a href="http://en.wikipedia.org/wiki/Pseudocertainty_effect">preference for risk taking</a> when the outcome is negative. The bottom line is that knowing more about a person is a good thing (provided there are no, say, pictures of them abusing animals), and all else being equal, the rational thing to do is to choose what has a lower chance of being bad (<a href="http://en.wikipedia.org/wiki/Framing_effect_%28psychology%29">or a higher chance of being good</a>). I want my online activity to stand as an indicator of me as a person, and I want this post to prove to hiring committees that I'm smart, that I know what I'm doing, and that they're biased in an undesirable way.</p><p>That's the somewhat snarky, condescending reason, but I have a more idealistic and constructive reason as well.</p><p>As someone looking to go into teaching at the college level, I believe that the learning experience extends beyond the classroom. 
This is (one of) the reasons I'm hesitant about <a href="http://en.wikipedia.org/wiki/Udacity">Udacity</a> and <a href="http://en.wikipedia.org/wiki/Coursera">Coursera</a> and <a href="http://en.wikipedia.org/wiki/EdX">edX</a> and all the other <a href="http://en.wikipedia.org/wiki/Massive_open_online_course">Massive Open Online Courses</a> (MOOCs). I think it's a good thing that people are putting course material online, and that it is organized in ways that force interaction and engagement, but if you ask me whether a student online will learn as much as a student on a brick-and-mortar campus, given the same level of instruction, I will say no, hands down. It wasn't until recently (today, in fact), through a <a href="https://twitter.com/justinnhli/status/502296980744597504">conversation</a> with a <a href="https://twitter.com/mclarkk">friend</a>, that I understood why: it's because, deep down, I think that university education is about more than the knowledge learned in a class, and even more than the relationships you develop. It's about developing students into people who enjoy learning, and who continue to learn.</p><p>And you can't teach that over the internet.</p><p>Well, no, you can, but not in hour-long videos once a week for fourteen weeks and, I suspect, not even through seven years of irregular blog posts. What I learned from that conversation is that I believe this requires exposure to an atmosphere, a culture of learning. This means that students need to feel the energy of people excited about things, to have conversations with people only tangentially related to any course topic, to ponder questions where not only the answer, but also the path to that answer, is unknown. All these things are made exponentially harder when conducted through the medium of text on an online forum, especially with teaching assistants whose focus is on answering questions about the course material and nothing more. 
I would argue that most of these things are not present even in physical classes, unless the participants are willing to digress; most of this absorption of culture occurs in <a href="http://en.wikipedia.org/wiki/Third_place">third places</a> around campus, and requires a community open and dedicated to such exploration of topics. And while there are education startups that try to create such an environment - <a href="http://en.wikipedia.org/wiki/Minerva_Schools_at_KGI">Minerva</a>, the company profiled in the <a href="http://www.theatlantic.com/features/archive/2014/08/the-future-of-college/375071/">article</a> that started the conversation, physically co-locates its students - having a culture where such learning is commonplace and expected is much harder to cultivate.</p><p>This is why I want to keep my tweets and my blog posts online and public, because I think it is part of the culture of education. My tweets may often be inane, but I think there is an undercurrent of curiosity, and it serves as a reminder to be observant of the world - I ask <a href="https://twitter.com/justinnhli/status/486126761432719363">questions about social idioms</a>, connect <a href="https://twitter.com/justinnhli/status/485512869371318272">ideas from disparate fields</a>, and comment <a href="https://twitter.com/justinnhli/status/293220584433795072">on the nature of research</a>. The very fact that I spend time writing short essays for my blog is evidence that I spend personal time on (somewhat) academic pursuits. While I wouldn't consider these things an essential part of my academic persona, I do think they are a core part of who I am, as well as a part of the culture that I want students to take part in. In the past, these tweets and blog posts have led to interesting discussions with both friends and students, and this is possible only because these artifacts are publicly linked and searchable. 
To make my tweets private and to take down my writings would mean losing this opportunity for conversation, and that is something I do not want.</p><p>Perhaps it is arrogant of me to place such importance on my own ramblings - more so for telling hiring committees that they're biased - but I think, too, this gives a very concrete picture of who I am.</p><p>This is who I want my students to be, and it's worth the risk of being judged if I can achieve that.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-62802130151072830552014-05-14T19:04:00.001-04:002014-05-14T19:04:24.160-04:00The First Rule of Rock Climbing<ol>
<li value="1">Do not talk about climbing.</li>
<li value="1">Just stand up. </li>
<li value="1">Stop sucking.</li>
<li value="1">Grip harder.</li>
<li value="1">Be strong.</li>
<li value="1">Don't be weak.</li>
<li value="1">Don't be not strong.</li>
<li value="1">Be taller.</li>
<li value="1">Go up.</li>
<li value="1">When in doubt, skip.</li>
<li value="1">When in doubt, dyno.</li>
</ol>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-8451514857054956162013-11-27T14:12:00.001-05:002013-11-27T14:12:07.369-05:00Don't Shoot the Symptom<p><em>The usual disclaimers about not seeing outside my life experience apply.</em></p><p>This is my last rant on social issues. I find it amusing that I feel compelled to put the warning above, even though it's almost tautological.</p><p>I saw the latest James Bond film (<a href="http://en.wikipedia.org/wiki/Skyfall">Skyfall</a>) several months back, and was afterwards surprised to find that <a href="http://bitchmagazine.org/post/draft-backlot-bitch-mr-bond-and-i">there's</a> <a href="http://reverttosaved.com/2012/11/17/skyfall-james-bonds-return-to-male-gaze-misogyny/">some</a> <a href="http://exiledstardust.wordpress.com/2012/11/02/women-the-makers-of-skyfall-hate-you/">backlash</a> on the blogosphere. Not because the film was badly made - most critics praised the cinematography - but because the women in the film were disposable. Specifically (spoiler alert):</p><ul><li>Bond seduces the sex slave Severine, with full knowledge of her past</li>
<li>Bond fails to save Severine due to bad shooting, despite overcoming his captors a minute later</li>
<li>M (played by Judi Dench) is killed, and is replaced by a man</li>
<li>Moneypenny is reduced from being an active agent to working a desk job</li>
</ul><p>Now, these things (especially the first one) are certainly bad things to portray, but I wonder how out of the ordinary they are. That is, because the film is a work of fiction but set in a close-to-real world, these events in themselves may not be misogynistic, although the <em>selection</em> of their portrayal might be.</p><p>Let me try to make this idea clearer. The events in a work of fiction set in the real world could be <em>representative</em> while still being misogynistic. We could, for example, write a fictional book with the vilest, most sexist human trafficker as the protagonist, and have them get away scot-free at the end. The events and characters in such a novel would certainly be misogynistic, but they would also be realistic (as in, such people exist in the world). The question is, is the work as a whole misogynistic?</p><p>I think the underlying issue - which applies not just to feminism, but also to heteronormativity, stereotypes, and so on - is the act of criticizing people for being realistic. Back to Skyfall: <a href="http://www.cnn.com/2010/POLITICS/08/09/woman.intel.chief/">27% of senior positions</a> in six US intelligence agencies are held by women; <a href="http://management.fortune.cnn.com/2013/05/09/women-ceos-fortune-500/">22 of the Fortune 500 companies</a> have female CEOs. That is, if the writers of Skyfall had determined at random whether the new M should be male or female, following the statistics of reality, three out of four times the new M would still be male. The other portrayals are less defensible, although again there is a question of whether they were unrealistic.</p><p>Moving outside of fiction, complaining about realistic sentiments is actually quite common. I caught myself doing it for a couple weeks, when I noticed how infrequently the <a href="http://justinnhli.blogspot.com/2013/11/asexuality.html">asexual perspective</a> is brought up. 
I was starting to get annoyed at this, until I remembered that asexuality represents <a href="http://en.wikipedia.org/wiki/Asexuality#Prevalence">maybe 1%</a> of the population. Even if this is an underestimate, sexual people are still in the vast majority, and I should not expect to find the two represented equally frequently.</p><p>This asexuality example is not a particularly compelling one, but other, less politically-correct examples are everywhere. One such case is the struggle between meritocracy and equality in, for example, higher education (this is <a href="http://www.newyorker.com/reporting/2013/05/27/130527fa_fact_packer?currentPage=all">also an issue for Silicon Valley</a>). In the ideal world, universities would like to select their students purely on the basis of their academic achievements. The problem is that, since educational opportunities are not given equally to everyone, those with the most "merit" tend to be Caucasian and, to a lesser extent, Asian. This means that if we attempt to continue with meritocracy, the student population at the most prestigious universities will no longer resemble the demographics of the country at large. Note that this is not due to the universities being <em>actively</em> discriminatory, but that, due to societal/structural issues, the <em>result</em> is a discriminatory one. For universities, the solution is often to ignore pure meritocracy and instead accept students based on a mix of merit and equality. Silicon Valley, on the other hand, has no pressure to make this change, and as a result has <a href="http://www.weeklystandard.com/articles/silicon-chasm_768037.html?nopager=1">increased the economic inequity</a> in the Bay Area.</p><p>Here's an even more contentious example. 
<a href="http://www.bjs.gov/content/pub/pdf/htus8008.pdf">"In 2008, the [homicide] offending rate for blacks (24.7 offenders per 100,000) was 7 times higher than the rate for whites (3.4 per 100,000)."</a> If we take this seriously, that means, all else being equal, the black person you just walked past is seven times more likely to be a murderer than the white person you just walked past. If we were to take precautions against murder based on the <a href="http://en.wikipedia.org/wiki/Bayesian_inference">estimated likelihood</a> of being the victim of an attempted murder, we should be more cautious when around a black stranger. But this is, of course, entirely inappropriate (and possibly unethical); as with the meritocracy case, the resulting action is discriminatory even if the logic that led to it, and the statistics that the logic is based on, are both sound.</p><p>I could continue to list examples, but instead I'll highlight the lesson. The point here is not that we should stop caring about these issues, but that we need to separate the realistic reaction to an undesirable circumstance from the undesirable circumstance itself. Sometimes these are not separable - the reaction may itself further perpetuate the circumstance (think victim blaming and rape culture) - but often the reaction is merely a symptom of the underlying problem. This is especially important for people who want to fix the undesirable circumstance, as removing the reaction would not solve the problem. For everyone else, separating the symptom from the cause may help quell your frustration.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-23809462607655056682013-11-25T09:40:00.002-05:002013-11-25T09:40:19.273-05:00On Vulnerability<p>I think a lot of times we use words without really understanding what they mean. 
I don't mean this in the "here's a new word let's use it" way, but that often the use of a word has more meaning than might be found in a dictionary. "Vulnerability" is one such word.</p><p>I started thinking about this word over the summer, when I was having a conversation with a friend. I was sharing a personally-meaningful quote with them, when they told me that I was making them feel vulnerable. I didn't pursue it at the time, but afterwards I thought it was strange: since it was <em>me</em> who was saying something personally meaningful, how was it that <em>they</em> felt vulnerable?</p><p>The dictionary definition of (psychological) vulnerability is the feeling that one is likely to be hurt. While this seems correct on the surface, it is also insufficiently nuanced. We do not, for example, feel vulnerable while succumbing to a disease, nor when facing a tiger. (Granted, not having experienced either, I can only speculate; but these scenarios don't evoke the description of vulnerability to me.) At least, it's not the disease or the tiger, in and of itself, that leads to the feeling of vulnerability.</p><p>I started thinking of instances where we might describe the feeling as vulnerable:</p><ul><li>discussing personal trauma</li>
<li>giving a sincere compliment</li>
<li>sharing a secret</li>
</ul><p>Of these three, the second one about compliments needs elaboration. Imagine a friend who has gone through a rough period. You meet up with them after being out of contact for a couple years, and find that they have moved, with a new job and a circle of friends; they seem to have moved past their previous difficulties. You want to tell them that you're glad that they're okay, and more than that, that you're proud that they've managed. The latter, in particular, seems to evoke feelings of vulnerability.</p><p>Weirdly enough, the breakthrough came for me from considering whether there were people who would <em>never</em> be vulnerable. Specifically, I was thinking of those with impaired affect: the sociopaths, the <a href="http://en.wikipedia.org/wiki/Schizoid_personality_disorder">schizoid</a>, and so on. It was a hunch, but it seemed to me that the stereotypical emotionless sociopath wouldn't feel vulnerable, not because they don't feel things, but because they don't care about the reaction of others.</p><p>If my hunch is correct, it would mean that vulnerability is not so much about the fear of getting hurt, but about the fear of <em>indifference</em> to some strong emotion. This is what connects the three examples above: it's that the person feels strongly about the subject (trauma, compliments, or secrets), and they fear that this feeling might be dismissed as unimportant.</p><p>A more recent experience of mine seemed to confirm this explanation. I was meeting a friend for dinner, someone whom I used to have a crush on; she knew this, but I was rejected, and I eventually got over it. A couple days before, I suddenly learned that her boyfriend (who I also knew) was visiting, and would be joining us as well. Upon learning this, I was suddenly ambivalent about the whole thing, which puzzled me. It wasn't exactly romantic jealousy, since I no longer desired a relationship with her. 
Eventually, though, I realized I was feeling vulnerable about the dinner and, by the above definition, I figured out that I was afraid she would somehow downplay my previous feelings for her by, for example, openly making out with her boyfriend. Nothing of that sort happened, of course, but I felt better knowing the source of my feelings.</p><p>That, I think, is one of the best reasons for figuring out the true meaning of words: it lets you more quickly understand what is going on when you are tempted to describe yourself with one. I'm not sure if this is really a subfield of philosophy or linguistics - the study of <a href="http://en.wikipedia.org/wiki/Semasiology">semasiology</a> or <a href="http://en.wikipedia.org/wiki/Lexical_semantics">lexical semantics</a> comes close. Regardless, I've been thinking a lot about what people mean when they use different words (also on the list: "true", as in this <a href="http://leftoversoup.com/archive.php?num=294">"expresses something <em>true</em> [about people]"</a>), and I thought people would be interested in my thought process.</p><p>PS. I'm aware I never finished the story about my friend; this is deliberate, as the quote I shared reflects as much about them as it does about me. This makes it somebody else's secret, and not my story to tell.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-13604634914138593932013-11-23T09:42:00.001-05:002013-11-23T09:43:12.402-05:00A Rambling on Effortlessness<p>There's a quality that's been popping up in many places, and I want to take the time to, if not nail down what it is, at least feel its edges.</p><p>There are many guises to this thing. In <a href="https://www.goodreads.com/book/show/629.Zen_and_the_Art_of_Motorcycle_Maintenance">Zen and the Art of Motorcycle Maintenance</a>, it's called Quality. In <a href="https://www.goodreads.com/book/show/106728.The_Timeless_Way_of_Building">The Timeless Way of Building</a>, it's also called quality. 
<a href="http://en.wikipedia.org/wiki/Abraham_Maslow">Abraham Maslow</a> calls it self-actualization. Both the <a href="https://www.goodreads.com/book/show/186074.The_Name_of_the_Wind">Kingkiller</a> <a href="https://www.goodreads.com/book/show/1215032.The_Wise_Man_s_Fear">Chronicles</a> and the <a href="https://www.goodreads.com/book/show/43889.Wizard_s_First_Rule">Sword of Truth</a> series call it rare, or at least say that someone who has it is a rare person. In <a href="https://www.goodreads.com/book/show/189989.Finite_and_Infinite_Games">Finite and Infinite Games</a>, that person would be called a player of infinite games. Buddhists and Taoists would probably call it <a href="http://en.wikipedia.org/wiki/Mindfulness">mindfulness</a> or presentness. <a href="http://en.wikipedia.org/wiki/Mihaly_Csikszentmihalyi">Mihaly Csikszentmihalyi</a> might call it <a href="http://en.wikipedia.org/wiki/Flow_%28psychology%29">flow</a>. For lack of a better word, I will call it Effortlessness, because it's most recognizable in that form.</p><p>Think of your favorite athlete, or artist, or dancer. Imagine them doing their work: while they're focused, what they do always seems effortless. There's a simplicity and elegance in their movement, and they act as though what they're doing is the most natural thing in the world.</p><p>Except that effortlessness is not quite the right word, in that it's not accurate. Take the best climber, say, Adam Ondra. <a href="https://vimeo.com/73040942">Here he is</a> climbing one of the world's hardest routes. Except for the crux, he looks effortless - as exemplified by the need to have a disclaimer that "no adjustments have been made to the speed of the climbing". 
We all know that Ondra spent a lot of time and effort to get to this stage; Bill Ramsey calls it <a href="http://whitneyboland.com/2010/09/22/the-box-of-pain/">the box of pain</a>, which is to say, you're choosing between the pain of failure and the pain of training.</p><p>There's an old <a href="http://www.catb.org/jargon/html/H/ha-ha-only-serious.html">joke</a> about how, to be a great painter, all you have to do is "make yourself perfect and then just paint naturally" (I stole this wording from Zen and the Art). Part of the joke is that being perfect is harder than being a great painter. To paint "effortlessly" is not, in fact, effortless; it may be effortless in the moment, but much effort is put in to make it <em>look</em> effortless.</p><p>That's the first hint about the nature of effortlessness. Effortlessness is never something that the participant feels in the moment; I doubt Ondra thought the route was effortless while climbing it. He might, however, think that it was easy afterwards. <a href="http://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow#Two_selves">Psychologists</a> call this the <em>remembering self</em>, as opposed to the <em>experiencing self</em>. I suspect it's the same reason I feel <a href="http://justinnhli.blogspot.com/2012/11/generative-protection-aka-graduate.html">protected</a>. I often know how much time I <em>actually</em> put into research, but afterwards it always seems like it's nothing. When I first realized this, I <a href="https://twitter.com/justinnhli/status/114737110761738242">tweeted</a>, "For someone with a pretty huge ego, I tend to trivialize my accomplishments."</p><p>We've all had moments where we're so focused on something we forget about time passing, and only afterwards can we look back and think, "that was awesome!" I had one such experience at Red Rocks last year, climbing this <a href="http://www.mountainproject.com/v/physical-graffiti/105732266">5.6 crack</a>. 
Usually, after I climb something, I can replay most of the moves; on that particular route, I just remember feeling awesome coming out, and have no recollection of what happened on it.</p><p>Here, the idea of effortlessness intersects with that of flow. I personally think that it's the same thing as presence - because all your energy is focused on what you're doing - but that seems contrary to what Buddhists call mindfulness, which instead brings to mind an inherent meta-level of consciousness. I'm not trying to be mystical here; take this passage from Zen and the Art:</p><blockquote><p>But the biggest clue seemed to be [the bad mechanics'] expressions. They were hard to explain. Good-natured, friendly, easygoing - and uninvolved. They were like spectators. You had the feeling they had just wandered in there themselves and somebody had handed them a wrench. There was no identification with the job. No saying, "I am a mechanic." At 5pm or whenever their eight hours were in, you knew they would cut it off and not have another thought about their work. They were already trying not to have any thoughts about their work <em>on</em> the job. In their own way they were achieving the same thing [my friends] were, living with technology without really having anything to do with it. Or rather, they had something to do with it, but their own selves were outside of it, detached, removed. They were involved in it but not in such a way as to care.</p></blockquote><p>Of course, Pirsig here was trying to get at what he called Quality. Somehow, the internal description of "caring", of attachment and presence, shows through to observers. Pirsig also raised the example of someone who does something for amusement - and to that person's surprise, other people notice these small things. There's a sense of playfulness to this, drawing in the idea of games. 
This is why, I think, you <a href="http://justinnhli.blogspot.com/2013/08/unteachables.html">cannot teach</a> someone to be effortless. You cannot tell someone to be <em>playful</em> - or as Carse would put it, "to allow for possibility whatever the cost to oneself", as opposed to seriousness, which is to "press for specified conclusion."</p><p>It's curious that Maslow has listed all of these ideas under what he calls the <em>metaneeds</em> of the self-actualized. By this definition, the self-actualized desire effortlessness and playfulness. Despite this connection, it's unclear to me <em>how</em> these attributes are all connected. There is a certain <em>je ne sais quoi</em> about them, in how they cannot be taught and cannot be described. I think the point of this post is just to mention all these related concepts, so I can find them again as I keep thinking about these ideas.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-29060138406200944252013-11-21T09:26:00.002-05:002013-11-21T09:58:46.237-05:00Two Ambigrams<p>Earlier this year I <a href="https://twitter.com/justinnhli/status/351846210367266816">tweeted</a> that I was excited about something that I couldn't reveal at the time. The <em>something</em> is actually an <a href="http://en.wikipedia.org/wiki/Ambigram">ambigram</a>, which I design on occasion. I first learned of the idea from Dan Brown's <a href="https://www.goodreads.com/book/show/960.Angels_Demons">Angels and Demons</a>, in which ambigrams played a major role. Incidentally, the <a href="http://www.johnlangdon.net/angelsanddemons.php">ambigrams in that book</a> were designed by <a href="http://www.johnlangdon.net/">John Langdon</a>, who commented on <a href="http://justinnhli.blogspot.com/2008/01/ambigrams-and-psychology.html">my previous post</a> about ambigrams and psychology. 
I designed my first ambigram back in 2004, and have done several more since, mostly as gifts to people.</p><p>My best work up till the beginning of the year was a birthday gift I made for my friend Emily.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5mctU4UMMPeIFpB1LjhwQ3LUargjt-sbKvRSRxuVCsVj-Au9wLQ-xOtLGKjjok4J7uoueh2uFBRICkZb9QftHpjnyzf-uveNKnKjPYgjTxI28K2SiXXv4KtFchBpo_YctlNinbF67At0/s1600/emily.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5mctU4UMMPeIFpB1LjhwQ3LUargjt-sbKvRSRxuVCsVj-Au9wLQ-xOtLGKjjok4J7uoueh2uFBRICkZb9QftHpjnyzf-uveNKnKjPYgjTxI28K2SiXXv4KtFchBpo_YctlNinbF67At0/s320/emily.png" /></a></div><br />
<p>One interesting thing about this design is that there is no <em>i</em>. The <em>m</em> flows directly into the <em>l</em>; the <em>i</em> is an illusion created by the <a href="http://en.wikipedia.org/wiki/Tittle">tittle</a> above the last stroke of the <em>m</em>. I actually made a version with the <em>i</em>, but it isn't as elegant, so I stuck with this version.</p><p>The ambigram I made earlier this year was a going-away gift for my friend Laura, who moved to join her boyfriend Neale in Philly. Neale had moved there about a year ago, but I failed to make a gift for him then. The idea was therefore to make a gift for each of them. Having a <a href="http://tvtropes.org/pmwiki/pmwiki.php/Main/ComplexityAddiction">complexity addiction</a>, however, made me wonder whether I could make the two gifts interact somehow, to represent their relationship.</p><p>Long story short, here's the result:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNmZXgtTbDUyAijxe65XPyeJsTOOA8nlICTcZH4D6R6qXWErKPzs6smNSdbcTgAtsvN3YFaX3S3DaCYjrnRpvIt4aMBtadQon73gMh30KJG6rhysHI3AnLq1k_wa6sh3RKxmoMx0vmGAA/s1600/full-circle.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNmZXgtTbDUyAijxe65XPyeJsTOOA8nlICTcZH4D6R6qXWErKPzs6smNSdbcTgAtsvN3YFaX3S3DaCYjrnRpvIt4aMBtadQon73gMh30KJG6rhysHI3AnLq1k_wa6sh3RKxmoMx0vmGAA/s320/full-circle.png" /></a></div><br />
<p>It's actually very difficult to see what's going on; it's better to show each one separately:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8u-pvosGbMJFOejltomvVzJRNzDyiVDA2WILEZCQfBYu5eCTHjtV_3D4EBT6IFqoFtExLi5yUXuntOeN8M0lwdAM5NZzWIGijrCcYY5dEFNnnn0uKNOxB7X5riA4a71BHFAX1-d5pN_E/s1600/laura-circle.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8u-pvosGbMJFOejltomvVzJRNzDyiVDA2WILEZCQfBYu5eCTHjtV_3D4EBT6IFqoFtExLi5yUXuntOeN8M0lwdAM5NZzWIGijrCcYY5dEFNnnn0uKNOxB7X5riA4a71BHFAX1-d5pN_E/s320/laura-circle.png" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbXvzQVNqo3wwIabMPjP4lrOlbtxCbVBMAhjbeAiDXaTaGs3GUcwCR4OPvFXG6snSjjBk0DE7ccTon2Oxo5rqeZ8cjB81XZRi5sewBYbMBwhm9jFcJTFpwNxXhP05Sflmug8oW9-MPQrs/s1600/neale-circle.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbXvzQVNqo3wwIabMPjP4lrOlbtxCbVBMAhjbeAiDXaTaGs3GUcwCR4OPvFXG6snSjjBk0DE7ccTon2Oxo5rqeZ8cjB81XZRi5sewBYbMBwhm9jFcJTFpwNxXhP05Sflmug8oW9-MPQrs/s320/neale-circle.png" /></a></div><br />
<p>Individually, each half-circle isn't anything special: it's just their names in some <a href="http://www.google.com/fonts/specimen/Ruthie">fancy typeface</a>. Together, however, is where the magic happens. The more observant of you will have noticed that the letters in <em>laura</em> and <em>neale</em> are not independent. In fact, they share all but the middle letter: the <em>ra</em> in <em>laura</em> turns into the <em>ne</em> in <em>neale</em>, and the <em>le</em> in <em>neale</em> forms the <em>la</em> in <em>laura</em>. Here is the combined circle again, with some highlighting: <em>laura</em> in red, <em>neale</em> in blue, and the overlap in purple.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglUCfMN-gJVfcyTXPsCKor0Hz8gdZjCgk4VZXE5WYwN-dQq_o7OEvbKGewL2_imJC5bv0FQbsE18yy9R-A3XlvSTzVZhR-sBGKr8tihwkfJAxGZLe9ypBhUaPPsm0SXmYuSjws4Sd_-tc/s1600/colored-circle.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglUCfMN-gJVfcyTXPsCKor0Hz8gdZjCgk4VZXE5WYwN-dQq_o7OEvbKGewL2_imJC5bv0FQbsE18yy9R-A3XlvSTzVZhR-sBGKr8tihwkfJAxGZLe9ypBhUaPPsm0SXmYuSjws4Sd_-tc/s320/colored-circle.png" /></a></div><br />
<p>The <em>r</em>/<em>n</em> were easy to merge; the crux of the design was in the <em>a</em>/<em>e</em>. It had to look good enough as both for each name to be readable; this was mostly done by adjusting the length of the closing stroke and of the <a href="http://en.wikipedia.org/wiki/Swash_%28typography%29">swash</a>. <em>neale</em> presented an extra challenge: if the <em>e</em>'s can be read as <em>a</em>'s, then what should happen to the remaining <em>a</em>? I was lucky in that <em>a</em> has <a href="http://en.wikipedia.org/wiki/A#Typographic_variants">two variants</a>. Having used the single-story variant for the merged glyph, in the end I created the double-story variant from scratch, making a Frankenstein glyph from various bits of different letters.</p><p>The final part of this is the presentation. As you saw, the combined circle is hard to read, which meant that each half-circle would have to be independent. I hit on the solution of printing each on a transparency, then putting it in a cardboard frame for sturdiness. This way, each card would individually show a single name:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEim2jt9IFqqhHcvMWhKAYCkroFCBMIdGCzDxFRTCfpjJ0uy4q_jnsxU7NkBMTWPqProZ3byJsZ5FfJP0t0Dke3byBif9BqUsZYxoPytdhl6AKECRp7_vHZ7vcx-95pbMK8BZGQfNKsjtpA/s1600/separate.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEim2jt9IFqqhHcvMWhKAYCkroFCBMIdGCzDxFRTCfpjJ0uy4q_jnsxU7NkBMTWPqProZ3byJsZ5FfJP0t0Dke3byBif9BqUsZYxoPytdhl6AKECRp7_vHZ7vcx-95pbMK8BZGQfNKsjtpA/s320/separate.jpg" /></a></div><br />
<p>But when stacked and aligned, it shows the full ambigram:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgX_eYUn6LT4aFqFwH6rY_2NEjMTk7QkPjfkoKDTavKCQtlEC5POP44U67yQRE6LZCtvxw1gPMA07lr8QJOn4gp3eX3ThNskVukkvZ93t6IyILtowscZms3FI_f3IhGYQjiR9Ieiskw-Ro/s1600/together.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgX_eYUn6LT4aFqFwH6rY_2NEjMTk7QkPjfkoKDTavKCQtlEC5POP44U67yQRE6LZCtvxw1gPMA07lr8QJOn4gp3eX3ThNskVukkvZ93t6IyILtowscZms3FI_f3IhGYQjiR9Ieiskw-Ro/s320/together.jpg" /></a></div><br />
<p>Typographically, I still like my <em>emily</em> card more, but in terms of the final product, this far surpasses the <em>emily</em> card, which I had simply printed on cardboard. This is not just because you can fiddle around with the cards until they align, but because the re-interpretation of the letters is so unconscious. None of the few people I tested the ambigram on realized that the <em>a</em> changed to an <em>e</em> (or vice versa) until I pointed it out to them, despite them moving the cards into position and reading back out the individual names. This goes to show how powerful our preconceptions are in interpreting what we see.</p><p>I do not have any future ambigrams planned, although I do have a different art project in mind. Maybe I will share that when I'm done.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-16977773471911048012013-11-19T09:51:00.001-05:002013-11-23T09:43:59.081-05:00Understanding Privilege: An Attempt<p><em>The usual disclaimers about not seeing outside my life experience apply.</em></p><p>Many of the blogs I follow, including those of my friends, often write about social issues. One concept that keeps coming up is <em>privilege</em>. The idea, according to <a href="http://en.wikipedia.org/wiki/Privilege_%28social_inequality%29">Wikipedia</a>, is to highlight not only the disadvantages that some people face, but also how the advantaged may not recognize that they are so. 
This concept is most commonly applied to <a href="http://en.wikipedia.org/wiki/Male_privilege">gender</a> and <a href="http://en.wikipedia.org/wiki/White_privilege">race</a>, but people also talk about <a href="http://en.wikipedia.org/wiki/Heteronormativity">heterosexual privilege</a> and <a href="http://en.wikipedia.org/wiki/Able-bodied_privilege">able-bodied privilege</a>.</p><p>Coming from a technical background as a straight (if <a href="http://justinnhli.blogspot.com/2013/11/asexuality.html">asexual</a>) Asian male, I have never quite understood the idea of privilege. To me, it seems like a mere reframing of the same issues around gender, race, sexuality, and disability discrimination. Maybe it makes the societal origins more visible, and shows how discrimination may be passive instead of active, but I don't think that's true of most discussions of privilege. As such, the idea of privilege seemed a little redundant.</p><p>The definition of privilege, however, is in some sense broader than discrimination. One of my problems with privilege is that it can be applied to a lot of things other than gender and race. And lo, I found articles about <a href="http://paintingthegreyarea.wordpress.com/2012/11/26/literacy-privilege/">literacy</a>, <a href="http://tastytufts.com/2013/10/16/food-privilege-the-unfortunate-truths-of-veganism/">food</a>, and <a href="http://embracetheindoguity.wordpress.com/2013/08/30/i-wash-my-butt-with-your-misfortune-on-the-privilege-of-water/">water</a> privilege. These are not attributes that we commonly associate discrimination with, and it is difficult to imagine what it would mean to discriminate against those without water. 
(It should be noted that those lacking some other <a href="http://www.un.org/en/documents/udhr/index.shtml#a25">universal human rights</a> do get discriminated against; this is the case for the lack of housing, or what might be called shelter privilege.)</p><p>The article which originally introduced the idea of privilege is Peggy McIntosh's <a href="http://www.library.wisc.edu/EDVRC/docs/public/pdfs/LIReadings/InvisibleKnapsack.pdf">White Privilege: Unpacking the Invisible Knapsack</a>. In it, the author noted that when confronted with male privilege, men</p><blockquote><p>[have an] unwillingness to grant that they are over-privileged, even though they may grant that women are disadvantaged. They may say they will work to improve women's status, in the society, the university, or the curriculum, but they can't or won't support the idea of lessening men's.</p></blockquote><p>McIntosh goes on to note that there are different kinds of privilege, that "some... should be the norm in a just society[, while] others... distort humanity of the holders as well as the ignored groups." The implication is that, for the former type of privileges, it is sufficient to grant the same privileges to those currently without them, while for the latter types the current holders of those privileges must also give them up.</p><p>Which brings me to my last confusion about privilege: all this academic discussion is great, but what are the (now consciously) privileged supposed to do with this information? On the blogs I follow, privilege is mostly brought up as an argument for the privileged to be more sensitive (or, as <a href="http://whatever.scalzi.com/2011/08/31/the-sort-of-crap-i-dont-get/#comment-272706">this comment</a> eloquently puts it, "Shut the fuck up and <em>listen</em>"). More condemningly, privilege <a href="http://www.feministlawprofessors.com/2011/11/harassment-male-privilege-jokes-women-dont/">has been used to explain</a> the prevalence of rape jokes on the internet. 
Michael Kimmel, in the foreword to <a href="https://www.goodreads.com/book/show/7400069-privilege">Privilege: A Reader</a>, suggests that the task (of privilege researchers and advocates) is "to make visible the privilege that accompanies and conceals that invisibility [of being white, straight, male, and middle class]." But then he continues:</p><blockquote><p>While noble in intention, however, this posture of guilty self-negation cannot be our final destination as we come to understand how we are privileged by race, class, gender, and sexuality. Refusing to be men, white, or straight does neither the privileged nor the unprivileged much good. One can no more renounce privilege than one can stop breathing. It's in the air we breathe.</p></blockquote><p>So what are we to do? Kimmel's analogy with breathing is apt, considering the discussions of food and water privileges. That his words are in a book is, itself, a sign of the literacy privilege that he holds. Everything we do is a product of privilege of some kind or another - here I am, typing this blog post about privilege (intellectual) on a personal laptop (wealth), in a text editor I learned to use (education), that I will publish on the web without fear of filter or governmental backlash (internet, or freedom of speech). There are potentially an infinite number of privileges to <a href="http://blog.shrub.com/archives/tekanji/2006-03-08_146">check</a>, and it's unclear to me what results are desired. Deterrence of privileged speech does not apply equally to sexism and to the internet, and the idea of privilege doesn't contribute to distinguishing the two. While I appreciate new viewpoints as much as the next person, I'm not sure we need the idea of privilege simply to teach a lesson of respect.</p><p>I want to end with an example of privilege checking that made me further hesitate on the idea of privilege. 
A friend's blog (to which I will not link) quoted <a href="http://girlmeetsnyc.blogspot.com/2012/12/words-for-new-year.html">a blog post</a> motivating people to leave their jobs and travel the world. In the middle is this passage:</p><blockquote><p>You can say farewell to your family, your friends, and if they love you, they'll let you go, they'll know it's not forever. You'll make new friends on the road, people from near and far, from all walks of life, with one thing in mind: to travel. You might be surprised at the hospitality of strangers who will love you like family, usher you into their dwellings, give you a place to rest your blistered, wandering feet, and a plate of home-cooked food to eat.</p></blockquote><p>My friend found that the whole post "reeks of privilege", and said specifically of the first sentence in the excerpt above that some people may "have family to support (not everyone can care only for their own needs)." The first part of this reaction I agree with: it's definitely true that many people have to support their families. (Although, I suppose this in itself is a privilege, considering the alternative of being orphans in a war-torn country.) What struck me was the second part, how people who "care only for their own needs" are privileged. Depending on your viewpoint, these people are variously described as anywhere from "selfish" to "sociopathic". Neither end of this spectrum is an attribute people find desirable, so calling them privileges seems at least odd. Certainly, not being close to your family gives you advantages - such as the freedom to travel - but if everything that gives you benefits is a privilege, then the idea is so dilute as to be meaningless.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-67989985561077013852013-11-18T14:02:00.003-05:002013-11-23T09:43:59.088-05:00Generalizing the Turing Test<p>Are people familiar with the <a href="http://en.wikipedia.org/wiki/Turing_test">Turing test</a>? 
Named after <a href="http://en.wikipedia.org/wiki/Alan_Turing">Alan Turing</a>, the WWII British mathematician and code-breaker, it was proposed as a way of testing whether computers are "intelligent". In addition to inventing this test and helping break the <a href="http://en.wikipedia.org/wiki/Enigma_machine">Enigma</a>, Turing is also famous for the conception of the <a href="http://en.wikipedia.org/wiki/Turing_machine">Turing machine</a>, a mathematical construct that is useful for understanding computation in the abstract. Unfortunately for Turing, he was also homosexual, and was prosecuted by the state for it after the war; this eventually led to his suicide. In 2009 - 55 years after Turing's death - the British government <a href="http://en.wikipedia.org/wiki/Alan_Turing#Government_apology_and_pardon_support">formally apologized</a> for its treatment of Turing, and only earlier this year, in 2013, was he officially pardoned for his crimes.</p><p>Turing is also the subject of an upcoming biopic, <a href="http://en.wikipedia.org/wiki/The_Imitation_Game">The Imitation Game</a>, starring <del><a href="http://en.wikipedia.org/wiki/Sherlock_%28TV_series%29">Sherlock</a></del> <a href="http://en.wikipedia.org/wiki/Benedict_Cumberbatch">Benedict Cumberbatch</a>. The title of the film is, in fact, based on the Turing test (or rather, is famous because of the test). In the Turing test, a computer and a person both try to convince a human judge that they are human. The judge can only communicate with both parties through text, but they can ask questions. A computer or a program is said to have passed the Turing test if, over multiple conversations, the judge cannot reliably and correctly label which is the person and which is the computer. 
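The "reliably and correctly label" criterion can be made concrete as a statistical check. As a minimal sketch (the post itself proposes no code; the function names, the 30-conversation sample, and the 0.05 significance threshold are illustrative assumptions of mine), here is one way to decide whether a judge's verdicts are distinguishable from coin-flipping, using a one-sided binomial test:

```python
import math

def binomial_p_value(correct, trials, chance=0.5):
    """One-sided probability of the judge getting at least `correct`
    labels right out of `trials` conversations by guessing alone."""
    return sum(
        math.comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

def passes_turing_test(judge_calls, alpha=0.05):
    """judge_calls: list of booleans, True where the judge correctly
    identified which party was the computer.  The computer "passes"
    if we cannot reject the hypothesis that the judge is guessing."""
    return binomial_p_value(sum(judge_calls), len(judge_calls)) >= alpha

# 18 correct calls out of 30 is not reliably better than chance:
print(passes_turing_test([True] * 18 + [False] * 12))  # True (passes)

# 25 correct calls out of 30 is far better than chance:
print(passes_turing_test([True] * 25 + [False] * 5))   # False (fails)
```

The same check applies unchanged to any discrimination test of this kind, since each one reduces to asking whether a judge beats chance.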
In some sense, the computer needs to be <em>imitating</em> a person, whence the name of the game.</p><p>(It can, of course, also go the other way, to have a <a href="http://en.wikipedia.org/wiki/Reverse_Turing_test">reversed Turing test</a>: the goal could be to convince the judge that both participants are computers. This doesn't make sense as a test of intelligence, but the idea is the same. You can also <a href="http://xkcd.com/329/">try to convince the <em>judge</em></a> that they're a computer...)</p><p>Setting aside the question of whether the Turing test is a good test for whether a computer is intelligent, the test is nonetheless elegant in its design. There are several crucial elements to the design: the computer is not judged as "the most intelligent" or "the most human-like" compared to other computers, and the judge is not simply deciding whether one conversant is a human or a computer; these features mean that the computer must match the human in performance. Furthermore, because the judge can ask the computer any question they want, the computer must be able to talk about any subject the judge could think of. Thus, even though no one has a good definition of "intelligence", the Turing test at least allows us to apply a "<a href="http://en.wikipedia.org/wiki/I_know_it_when_I_see_it">know it when I see it</a>" criterion.</p><p>For the most part, familiarity with the Turing test has remained within the academic fields of computer science, psychology, and philosophy. I was recently surprised, however, when it came up in an <a href="http://econlog.econlib.org/archives/2011/06/the_ideological.html">economics article</a>. In this article, it is suggested that liberal economists can successfully present conservative economic arguments, while the same cannot be said of the opposite - a conservative economist presenting liberal economic arguments. 
The author therefore proposes a variation of a Turing test: put him (a conservative) in a room with five liberals, and see if people can tell who's the fake; then put Paul Krugman (a liberal) in a room with five conservatives, and again see if people can tell who's the fake.</p><p>What, then, is the more general version of the Turing test? The Turing test is really a special case of a <a href="http://en.wikipedia.org/wiki/Discrimination_testing">discrimination test</a>. The key, however, is that the Turing test is being used as a criterion: <em>if</em> the computer cannot be distinguished from a human, <em>then</em> we shall consider it as intelligent. This means that, crucially, we <em>do not need to define the exact attribute</em> that we are judging by, only that it behaves indistinguishably from the real thing. While we do not have a perfect definition (or even a good one) of intelligence, the beauty of the Turing test is that we don't need one to decide that a computer is intelligent. This may be a very behavioral definition of intelligence - it doesn't address the <a href="http://en.wikipedia.org/wiki/Chinese_room">Chinese room</a> problem, for example - but it's better than endlessly debating the definition of intelligence.</p><p>Of course, the Turing test, clever as it is, is not the silver bullet for all indefinable attributes. In fact, there are strong restrictions on when the Turing test will be useful. For one, the judge must be familiar with the attribute that is being judged; for example, I would not be a good judge of liberal/conservative economists, since I myself cannot tell the difference. More generally, the more discerning the judge, the more "powerful" the Turing test becomes. Additionally, the attribute being judged must be different from how the attribute is generated. 
In the original Turing test, what we cared about was whether the computer is intelligent, not how it came to be that way (that is, whether it does so through neurons or silicon). This means that care must be taken to remove elements that may give the distinction away, which is why the original test uses text as the medium of communication, as opposed to a face-to-face conversation, which would detract from the task of judging intelligence.</p><p>It is curious to me that, despite spending some time on this post, I cannot think of a single new application of the Turing test. The closest I've come is for <a href="http://en.wikipedia.org/wiki/Google_driverless_car">driverless cars</a>, and using the Turing test to see whether they drive safer than humans do. Such a test would involve putting human and computer in simulated drives, and having the judges simply watch a recording of the performance without knowing which is which. While such a test would show that self-driving cars are as safe as - if not safer than - humans, it is also unnecessary: the metrics for safety seem sufficiently well-defined that we don't need a Turing test - any old comparison would suffice. This may be nothing more than an argument from <a href="http://en.wikipedia.org/wiki/Argument_from_ignorance#Argument_from_incredulity.2FLack_of_imagination">lack of imagination</a>; perhaps we are simply not trained to think of tests in this way. More often, when we think of an attribute we want to measure, we define the attribute such that it is measurable, or else we give up trying to get any accurate measurements. 
Maybe the Turing test is a good way to attack some of these measurements we've given up on.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-16976714199294867102013-11-15T11:46:00.000-05:002014-02-20T12:27:48.624-05:00Asexuality<p><em>Author's note: Sunday's post will be delayed due to climbing.</em></p><p>Let's get the elephant in the room out of the way: I'm asexual.</p><p>Or at least, I would consider myself practically asexual, in the same way that I'm practically atheist, since you can't prove a negative (meaning I'm technically <a href="http://en.wiktionary.org/wiki/agnostic#Etymology">agnostic</a>, but that word is already taken). I originally wanted to put "I identify as asexual", but I didn't quite know what "identify" means. I also considered "I think I am asexual", but in the end I decided to <a href="http://en.wikipedia.org/wiki/KISS_principle">keep it simple, stupid</a>.</p><p>Maybe I should backtrack and say what it means to be asexual. As a point in the spectrum/space of sexuality, it has a different meaning from the biological sense of the word (eg. "<a href="http://en.wikipedia.org/wiki/Asexual_reproduction">asexual reproduction</a>"). Here, I'm taking an asexual to mean someone who does not feel sexual attraction. Again, as with atheism, having a group defined around a negative means that the members will have a lot of other differences. There is a distinction, for example, between someone's sexual orientation and their romantic orientation; many people on <a href="http://www.reddit.com/r/asexuality">r/asexuality</a>, for example, are also aromantic (that is, they don't feel romantic attraction), although I remain heteromantic (contrary to <a href="http://justinnhli.blogspot.com/2013/11/mental-objectification.html">what I wrote several days ago</a>). There are also divides between people who will and will not have sex, people who find sex disgusting, and so on. 
Note that this definition of asexuality is silent on sexual arousal and libido. We can argue whether these things are independent, but here I assume they are separate phenomena.</p><p>I don't remember when I first learned about asexuality - it probably wasn't more than two years ago - but I do remember that for a long time I didn't have a good understanding of what it is. Not that I would say my understanding is good now, but at the time, the diversity in the people who identify as asexual confused me. It's ironic that the definition of asexuality is the least useful to those who are asexual; after all, if you've never experienced sexual attraction, how do you know you're missing it? It doesn't help that the separation of sexual arousal from sexual attraction is not intuitive; it's unclear to me how you can be aroused by porn but not feel sexual attraction. The relationship between attraction, arousal, and libido still eludes me.</p><p>It was only earlier this year that I really started thinking about what asexuality meant, and whether it was a label I'd apply to myself. In hindsight, this questioning period didn't feel that long. Looking over my journal, within a space of two months, I had almost entirely accepted "asexual" as a generally accurate description of myself. I still had various doubts, of course, if not due to the negative-proof thing, then due to the possibility of demisexuality, where sexual attraction only occurs after establishing an emotional connection. (Tailsteak, of <a href="http://leftoversoup.com/">Leftover Soup</a>, comments that this suggests the <a href="http://www.leftoversoup.com/archive.php?num=436">theoretical position of <em>anti</em>-demisexuality</a>, where someone is only attracted to people they have no emotional connection with. Symmetry for the win!)</p><p>Even before this year, before I heard of asexuality, I had noticed that my calibration of feelings for people might be off. 
Robert Sternberg has a <a href="http://en.wikipedia.org/wiki/Triangular_theory_of_love">triangular theory of love</a>, where the vertices of the triangle are passion, intimacy, and commitment; the passion component had always bothered me, since I have never felt myself as worked up as other people seem to be. I had never thought of people as "hot", even though I've heard it often enough to know what it means. The biggest clue, though, was probably that whenever I think about sex, I have to remind myself that to other people, sex is not just an activity to maintain/increase intimacy in a relationship, but also one which people engage in for physical pleasure. While I can intellectually understand the sentiment, it's not something I've ever felt. The whole hook-up culture has, therefore, always felt foreign to me. Rather than saying that I realized I'm asexual, it might be more accurate to say that I discovered the asexual label, and found that it applied to me.</p><p>Since recognizing myself as an asexual, I have often wondered how much this trait has (unconsciously) influenced my thinking. For example, my quibble with objectification (<a href="http://justinnhli.blogspot.com/2013/06/defining-objectification.html">here</a> and <a href="http://justinnhli.blogspot.com/2013/11/mental-objectification.html">here</a>) probably stems from my lack of sexual attraction. More than just a thought experiment, it's a way for me to explore how sexuals see and classify the world. Fundamentally, it's about whether and why someone's physical assets (which I'm not attracted to, at least sexually) are treated differently from someone's mental assets (which I <em>am</em> attracted to). My philosophical nature, of course, plays a part as well, but I suspect that if I were not asexual the question would not occupy as much of my spare time.</p><p>Another train of thought in which I could identify the influences of asexuality is the boundary between friendships and relationships. 
Perhaps this should not be as difficult a question for me, given that I'm heteromantic, and therefore have this distinction at least between men and women. And partially this may be due to me being male, meaning that, according to legend, I treat all women as potential partners, while women have a much cleaner distinction between the two. Regardless, given the central role that sex plays in people's definitions of romantic relationships (and I've had conversations with multiple people about this), I was curious how they imagined this would change for asexuals. I have yet to hear a good answer, although almost everyone agrees there is <em>something</em> aside from sex that distinguishes the two.</p><p>A final question, which came out of a conversation with a friend. When I asked her to explain the importance of sex to an asexual, she paused, then suggested that sexual attraction may be a sensory dimension by itself. This was a hypothesis I could not immediately dismiss. I don't think sexual attraction is as central as one of the five senses, since we don't have the organ to detect it, as eyes do for sight. I don't even think it's as central as one of the <a href="http://en.wikipedia.org/wiki/Sense#Other_senses">other, less-well-known ones</a>. Still, my friend may have a point. The best analogy I know comes from Raymond Smullyan's <a href="https://www.goodreads.com/book/show/219106.The_Tao_Is_Silent">The Tao is Silent</a>, where he compared the Tao to melodies:</p><blockquote><p>Suppose two people, one a musician and the other extremely unmusical, are listening to a theme. The unmusical one admits frankly, "I hear the notes, but I don't hear the melody"... The true Taoists... directly perceive that which they call the Tao (or which others call God, Nature, the Absolute, Cosmic Consciousness) just as the musical directly perceives the melody. The musician... obviously has a direct experience of the melody itself. 
And once the melody is heard, it is impossible ever again to doubt it.</p></blockquote><p>Who's to say that sexual attraction is not some similar sense - its absence more of an inability to recognize a pattern than an inability to receive a signal? The analogy is not perfect; I <em>can</em> recognize the pattern of what people are sexually attracted to; it's just that the same pattern doesn't elicit any response from me. Still, if the difference between sexuals and asexuals runs this deep into physiology, then I'm not sure the experiences between the two can ever be fully described.</p><p>In the end, I consider "asexual" merely a convenient label for the psychological phenomena (or lack thereof) that I experience. I haven't felt relief, or conversely, any stress, from this new understanding of myself. If I run with the idea of sexual attraction as a sense, then maybe my experience is comparable to someone with <a href="http://en.wikipedia.org/wiki/Synesthesia">synesthesia</a>. And since I have asked <a href="http://justinnhli.blogspot.com/2009/05/synesthesia.html">questions</a> of my synaesthetic friend, I suppose people should feel free to ask me questions too.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-46243446119433255822013-11-13T11:00:00.001-05:002013-11-23T09:43:59.084-05:00Nerd: A Retrospective<p>One of the few instances of shopping I remember as a kid probably also says the most about me. This was around 1999, and software was still sold on CDs. Back then, Hong Kong had a giant market for bootleg software. The Wan Chai Computer Centre, then and still one of the biggest specialty malls for computer-related stuff, was simply store after store of ripped games and productivity products. I remember spending my slowly-earned pocket money on some deal - any three CDs for some special price. 
The three CDs I chose: <a href="http://en.wikipedia.org/wiki/Rise_of_the_Robots">Rise of the Robots</a>, <a href="http://en.wikipedia.org/wiki/Populous:_The_Beginning">Populous: The Beginning</a>, and... <a href="http://en.wikipedia.org/wiki/Encarta">Microsoft Encarta</a> 1997. Actually, I'm not sure if the first two were what I bought (they were both games I eventually owned), but I am certain Encarta came from that incident.</p><p>I think even then, my parents were shocked that I would spend money on reference software. I myself cannot tell you why I did it, except that I did. I still have fond memories of the thing: the encyclopedia came with an interactive orbit simulator, where you have to set the Moon's initial position and velocity such that it is captured by Earth. There was also <a href="https://www.google.com/search?q=mindmaze&tbm=isch">Mind Maze</a>, a trivia game where you must find your way out of a grid of connected rooms.</p><p>Now that I've jogged my memory, Encarta wasn't even the only educational software we had. I remember spending hours on <a href="http://en.wikipedia.org/wiki/Operation_Neptune_(video_game)">Operation Neptune</a> and Treasure Galaxy!, both of which were by <a href="http://en.wikipedia.org/wiki/The_Learning_Company">The Learning Company</a>. Ditto for <a href="http://en.wikipedia.org/wiki/Where_in_the_World_Is_Carmen_Sandiego%3F_(1996)">Where in the World is Carmen Sandiego</a>, the full version for which includes a two-inch-thick geography reference. I also had a virtual observatory, and one where you're in this cave-turned-museum, and you can move around and watch crystals and minerals grow. I think the last two are both by DK, but I can't find any reference to it.</p><p>In retrospect, a lot of my nerdier and more academic interests exhibited themselves early. 
I remember in primary school (that's grade 1 to 6 for you Americans) having more than a few Dorling Kindersley reference books, including one on space (who doesn't have one of those?), one on dinosaurs (or one of these?), one on gemstones... As I grew older, my collection of non-fiction books continued to expand, now adding books on the occult and on philosophy. I somehow read Thomas More's <a href="http://en.wikipedia.org/wiki/Utopia_%28book%29">Utopia</a> in middle school, as well as <a href="http://en.wikipedia.org/wiki/Sophie%27s_World">Sophie's World</a>. I suppose this is <a href="http://www.urbandictionary.com/define.php?term=humblebrag">humble-bragging</a>, but I think it's also representative of my childhood.</p><p>It's curious that, despite the above description, I've slowly stopped identifying myself as a nerd. It's not that I have lost interest in these topics, but my definition of "nerd" has changed. In pop-culture, nerds are more highly identified with people who follow particular franchises: Star Trek/Wars, Middle Earth, Doctor Who, Dungeons and Dragons, Halo... While I have enjoyed some of these universes, I am also deficient in much of the canon; for example, I have yet to read <a href="http://en.wikipedia.org/wiki/Dune_%28book%29">Dune</a>. I don't follow any TV series (<em>maybe</em> with the exception of Sherlock), nor am I a gamer, video, tabletop, board, or otherwise. By these definitions, then, I'm not a nerd.</p><p>It's fun to speculate about the origins of these two different meanings of "nerd". Being in academia, and computer science at that, you would expect my colleagues to be the nerd stereotype incarnate. But, in fact, most of my friends in computer science are not hardcore gamers (with the exception of a group who plays board games), and while pop-culture references do come up, they do not form the backbone of our discourse.
Perhaps unsurprisingly, they are instead more likely to have diverse academic interests, and to read books on sociology, math, or philosophy for pleasure.</p><p>This makes me question how the stereotype of a nerd even started. The origin story of hackers as tabletop gamers now seems less plausible to me. If that is only a myth, then something else must have connected academics with gamers. I don't think it's the social ineptitude, either, and neither group is obsessive about what they do. Wikipedia <a href="http://en.wikipedia.org/wiki/Nerd">suggests</a> that in academia, nerds are more likely to be interested in science, mathematics, engineering, linguistics, history, and technology. If I had to guess, it has something to do with finding interesting combinations in complex, rule-based systems. This doesn't jibe with the history aspect though, and it's surprising that law nerds are not a thing.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-61240795085311179842013-11-11T10:27:00.001-05:002013-11-23T09:43:59.086-05:00The Quantified Self<p>This post means that I'm a third of the way through NaBloPoHaMo (National Blog Posting Half Month). So far I've managed to stay ahead of the posting schedule by about two days, which is good because I'll be in Kentucky next weekend, without internet access. I don't expect there to be a problem, although I won't be able to announce new posts on Twitter. If I do fall behind, I'll make a special post to say so.</p><p>To celebrate this endeavor being a third done, and to give myself a break, this post will be mostly graphical. By "graphical", I mean "chartical" and "plotical"; these are all data visualizations of some data I have about myself. I'm not part of the <a href="http://en.wikipedia.org/wiki/Quantified_Self">quantified self movement</a>, but I do have various collections of data about myself.
My journal, for example, which I've kept for eleven years and has over 1.75 million words, is a rich source of information about myself. I've started projects to data-mine my journal multiple times, although I have never settled on a good enough method to do so. I also use <a href="https://www.mint.com/">Mint</a>, which keeps track of my finances, and I use <a href="http://askmeevery.com/">AskMeEvery</a> to track my sleeping times (because I keep claiming that I don't need much sleep, but have never backed it up with data). There are a few other sources I could have drawn from to make pretty pictures as well, such as my command line history and my journal search history, and also my Google/Twitter/Facebook dumps, but I have less interest in those.</p><p>So here are three cool graphs. They are made with <a href="http://en.wikipedia.org/wiki/Gnuplot">gnuplot</a>, using Python for preprocessing. Having written the scripts for this post, the next step is to unify the scripts into pure Python. This will be a good excuse to learn <a href="http://en.wikipedia.org/wiki/Matplotlib">matplotlib</a>, which finally supports Python 3.</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNc0wqsdLqvRZZEbrC802dBxLNkVdQWHb1NTTQL-uhXG7J31xqhXm7UjcvX3ZJtHhOyKv007mih2QqnLYczbO3DqbVxGpZY_7F06VaWCA86Rn9VJponDTLDSX29cRSCx4oBqx1FS-0l6Q/s1600/readability.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNc0wqsdLqvRZZEbrC802dBxLNkVdQWHb1NTTQL-uhXG7J31xqhXm7UjcvX3ZJtHhOyKv007mih2QqnLYczbO3DqbVxGpZY_7F06VaWCA86Rn9VJponDTLDSX29cRSCx4oBqx1FS-0l6Q/s320/readability.png" width="320" /></a></div><br />
<p>This first graph comes from my journal (which I've shared on Twitter <a href="https://twitter.com/justinnhli/status/385042496079675392">before</a>). I measured the <a href="http://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readability_tests#Flesch.E2.80.93Kincaid_Grade_Level">Flesch-Kincaid reading scale</a> of each of my entries, then averaged the numbers for each month. I took the color scheme from a <a href="https://speakerdeck.com/cherdarchuk/remove-to-improve-the-data-ink-ratio">Dark Horse Analytics presentation</a>, and I think it came out quite well. I was surprised how linear (slope = 0.38 grades/year) the result is, although there also seems to be a slight sinusoidal component that gets noisier over time. Before you laugh at my 9th grade writing level, I challenge you to calculate the same for your own recreational writing, while keeping in mind that most of the blog posts I've written in the past two weeks are around 10th grade. The only writing of mine that goes above that is my <a href="http://www-personal.umich.edu/~justinnh/">scientific publications</a>, which come in at 13th grade.</p><br />
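<p>For the curious, the Flesch-Kincaid grade level is just 0.39 &times; (words/sentences) + 11.8 &times; (syllables/words) &minus; 15.59, so the per-entry scores behind this graph are easy to reproduce. A rough Python sketch (this is my own toy version, not the exact script I used; the naive vowel-group syllable counter is an approximation that real readability tools refine with exception lists):</p>

```python
import re

def count_syllables(word):
    """Approximate syllables as runs of consecutive vowels (at least 1 per word)."""
    return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

def flesch_kincaid_grade(text):
    """0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

<p>Averaging these per-entry scores by month gives a series like the one plotted above (very simple text can even score below grade 0, since the formula is unbounded below).</p>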
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4U3zz4Yn475jnAmzWHlb35B1JSBdMhEVMkHxY3IujXX6pY9zCPMwBZZQpaInKiwLwI-CV6BCkZhHSUGm6rqw7TqY-k5O1UawDoht3rXp64P5T4D6Pa3GHhac1LnTz78pbwVp6g5Kyqc0/s1600/spending.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4U3zz4Yn475jnAmzWHlb35B1JSBdMhEVMkHxY3IujXX6pY9zCPMwBZZQpaInKiwLwI-CV6BCkZhHSUGm6rqw7TqY-k5O1UawDoht3rXp64P5T4D6Pa3GHhac1LnTz78pbwVp6g5Kyqc0/s320/spending.png" width="320" /></a></div><br />
<p>The second graph shows where my spending went over the last four years, since I started using Mint in 2009. There are eleven categories here, representing the ten categories with the largest total spending, with an additional catch-all "other" category. I've hidden the scale (or rather, it's all percentages), although you can probably figure out the real amounts with some estimates. I will tell you that the red at the bottom is rent, and that the dark purple, on the left side, is credit card payments; you can tell when I've stopped using a credit card. Having made this graph specifically for this post, I have no insights, except that it would be fun to further analyze this data.</p><br />
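<p>Incidentally, the "it's all percentages" normalization behind a stacked graph like this is a one-liner per time period. A sketch in Python (the numbers here are made up for illustration, not my actual spending):</p>

```python
def to_percentages(totals):
    """Convert one period's {category: amount} dict into {category: percent of total}."""
    total = sum(totals.values())
    return {category: 100.0 * amount / total for category, amount in totals.items()}

# A hypothetical month of spending, by category.
january = {'rent': 800.0, 'food': 150.0, 'other': 50.0}
print(to_percentages(january))  # rent works out to 80% of the month
```

<p>Doing this for every month makes each stacked column sum to 100%, which hides the absolute amounts while preserving the proportions.</p>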
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxpYIFYwX4n_JabkNN6b_J9ga6bYj38sYf5aWWg1jiXPVJPSXEKHzbFHYWpI2O4L5IvX1I4uYasQYHKo_h323UO81yr28hzbZhhpfiE1jllxjpwVFd3M_rEwzDpq8h5L54mxGfLO6PpDs/s1600/bedtime.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxpYIFYwX4n_JabkNN6b_J9ga6bYj38sYf5aWWg1jiXPVJPSXEKHzbFHYWpI2O4L5IvX1I4uYasQYHKo_h323UO81yr28hzbZhhpfiE1jllxjpwVFd3M_rEwzDpq8h5L54mxGfLO6PpDs/s320/bedtime.png" width="320" /></a></div><br />
<p>The last graph shows when I've gone to bed and gotten up in the last three months, at 15 minute resolution. The y-axis requires some explanation; it shows a day from one noon to the next, with midnight being the transition to the day marked on the x-axis (which I've hidden anyway). There are some gaps in this data, notably in October, which all come from camping trips when I can't be bothered with clocks. In terms of summary statistics, min = 5hrs, first quartile = 6.75hrs, median = 7.25hrs, third quartile = 8hrs, max = 10.25hrs; the mean is 7.30hrs, with a standard deviation of 106 minutes. This is a below-average amount of sleep, but not so low as to be ridiculous; it only barely supports my claim that I don't sleep much (although it's silent on the issue of whether I need much sleep). In case you're wondering, my sleeping and waking times only have a correlation of 0.48, which is fairly low; I suspect that if I separate the weekdays from the weekends the correlation will be higher, but I have yet to do that analysis.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-62513351685092541392013-11-09T13:00:00.002-05:002013-11-09T16:13:56.262-05:00Restriction-free Relationships<p>I want to explore a curious trend in interpersonal relationships. For the purposes of this post, I will restrict myself to four types (<a href="http://en.wikipedia.org/wiki/Interpersonal_relationship#Development">stages</a>) of relationships: strangers, acquaintances, friends, and significant others. Not all relationships have all four stages, and a lot of subtleties are left out, but it illustrates a point that applies to the missing pieces as well.</p><p>What does it mean for two people to be acquaintances instead of strangers? The status of a relationship is, of course, <a href="http://justinnhli.blogspot.com/2013/11/invisible-colors.html">a Color</a>: there's nothing in the physical composition of two people which classifies them as acquaintances.
This particular Color is determined by whether the two people have been introduced before or otherwise know each other. Behaviorally, however, there are observable differences: acquaintances will say hi if they pass each other in the hallway; they might ask about each other's families, and so on. Importantly, these are behaviors that would be considered weird for strangers to perform, and it would be bordering on inappropriate to ask about a stranger's family. We might say, then, that acquaintances have (implicitly) given each other <em>permission</em> to perform these actions.</p><p>More generally, there are four types of attributes I want to examine in social relationships: permissions, freedoms, obligations, and restrictions. Permissions, as demonstrated above, are behaviors that it is now acceptable to do (say, as an acquaintance). <em>Freedoms</em> are, in some sense, the opposite: it is something you were doing before, but is now acceptable to <em>not</em> do; an example might be the need to avoid meeting someone's gaze. On the other hand, <em>obligations</em> are things that you are now <em>required</em> to do (for example, a nod of the head to an acquaintance), while <em>restrictions</em> are things that you are now required to <em>not</em> do (for example, yell at an acquaintance for no reason). These describe changes in how two people relate to each other, as they progress through the stages, from strangers to acquaintances to friends to significant others. The exact behaviors differ from relationship to relationship, but these examples seem to be relatively universal, and in either case, the general idea survives.</p><p>As you might have noticed from the description, these concepts are symmetric:</p><table><tr> <th></th> <th>may</th> <th>must</th> </tr>
<tr> <th>do</th> <td>permissions</td> <td>obligations</td> </tr>
<tr> <th>not-do</th> <td>freedoms</td> <td>restrictions</td> </tr>
</table><br />
<p>That is, permissions are about what a person <em>may do</em>, freedoms are about what a person <em>may not-do</em>, and so on. There is a duality between the dos and the don'ts, since one can always be framed as the inverse of the other. To take an example from above, the freedom from having to avert your gaze can also be framed as the permission to match someone's gaze. This relates to the <a href="http://en.wikipedia.org/wiki/Action_%28philosophy%29">philosophical idea of action</a>, but since that's outside the scope of what I want to talk about here, I will keep all four terms for clarity.</p><p>Given this scale, how do the four stages of interpersonal relationships differ? Examples of behavior for acquaintances have already been given, so I will move on to behavior for friends. As friends, the permissions granted are increased: friends are allowed to insult each other, which acquaintances may misinterpret or take offense at. That in itself is also a freedom, from worrying about easily offending someone. The obligations of friendship are harder to define; some people would say that you are obligated to provide emotional support, although I feel this is more elective than a requirement, especially for friendships among men. I can think of no additional restrictions on friends beyond those between acquaintances either.</p><p>If we review the trends at this point, it would seem that the permissions and freedoms grow as a relationship deepens, while the obligations and restrictions remain minimal; even the ones that do exist are mostly the ones expected of polite human beings in general, not ones that apply specifically between friends. We might expect that this trend continue to hold when friends become significant others; we would also be wrong.</p><p>When people begin dating, the permissions (eg. sexual contact) and freedoms (eg.
needing to present a particular image) continue to increase, but somehow, a host of obligations and restrictions also come into play. The most prominent of these may be the restriction of sexual fidelity, otherwise known as "not cheating". (I must again emphasize that these behaviors are not universal, although it seems to be accurate for the majority.) This restriction is particularly notable, because its scope is not between the two people in the relationship, but between them and everyone else. Translated into the friendship context, it would be the equivalent of requiring your friends not to be friends with someone else; while these kinds of people do exist, they are usually not the kind of people we want to be friends with in the first place. There are obligations too; there is a definite expectation to provide emotional (as well as financial) support, as well as the expectation that the relationship will be somewhat long-term, especially for marriages.</p><p>That romantic relationships have obligations and restrictions that friendships don't is not a bad thing a priori. Evolutionary psychology suggests that the difference in biology between men and women has led to <a href="http://en.wikipedia.org/wiki/Sexual_jealousy_in_humans">sexual jealousy</a>, and obligations and restrictions may be a way of making people feel secure, one that eventually got adopted into the societal narrative of relationships. Since obligations and restrictions are often desired by both parties in a relationship, many people act in accordance with them voluntarily, without questioning whether they want these obligations or not. Given how ingrained these expectations are in society, I suspect that most people are not even aware that alternatives exist.</p><p>Well known or not, there are models where the obligations and restrictions are more lax. The most common is the family of practices which do not require sexual fidelity, such as polyamory and open relationships.
Although less restrictive than mono-amory, they are not necessarily restriction-free; elements of exclusivity remain, as there are often stipulations about whether and when one partner can have sex with someone outside the relationship, as well as limits on the amount of emotional investment allowed. Friends-with-benefits and no-strings-attached relationships have similar restrictions on emotional investment.</p><p>Alternatives to other aspects of normal relationships are harder to find. I don't know if a word exists for romantic relationships where partners do not provide emotional support, even though this is almost the default among friendships. The lack of expectation for the relationship to last seems to be equally rare, although again common between friends. At least, I have not heard of people celebrating any anniversaries of friendships (outside of the friendship forming over a particular event), while the time frame for celebrations for couples seems to shrink by the <del>year</del> month. These celebrations do not directly suggest that the partners want the relationship to last, but they do suggest that many future celebrations are expected.</p><p>(EDIT: The above two paragraphs are not being fair to polyamorous relationships; many in the community also question using length as a metric for relationships. See the comments for details.)</p><p>If we remove obligations and restrictions, what is left are relationships based on permissions and freedoms - the permission to cuddle and have sex, the permission to have long conversations about any topic, the permission to access your thoughts and feelings; the freedom from all the other pressures that society normally exerts on you, the freedom to truly <em>be yourself</em>, the freedom to do whatever you want.</p><p>I'm not sure whether such relationships are possible, and if they are, whether there is a reason to prefer them over traditional relationships with obligations and restrictions.
From the single conversation I've had on this topic, it's unclear to me whether the need to impose these obligations on others can be deliberately reduced; if it can't, then restriction-free relationships (or, if you prefer to be positive, permission-based relationships) will remain a thought experiment. Regardless of whether it can be made into reality, I must admit that such a relationship holds a lot of appeal to me, as I would not be bound by anyone, and others would be similarly not bound to me. Such a conception would also bring romantic relationships in line with acquaintanceships and friendships, while blurring the line between all three. I myself have never been clear on the boundaries anyway, but that would be the topic of another post.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-89794092151820942172013-11-07T10:07:00.003-05:002013-11-07T10:08:39.767-05:00Mental Objectification<p><em>DISCLAIMER: I’m new to the discussion about objectification, and being a heterosexual, cis-gendered, upper-middle class Asian male makes me blind to a lot of things. Please let me know if I miss something obvious. Also, <strong>TRIGGER WARNING</strong>: rape is briefly mentioned as a point of comparison.</em></p><p>Imagine you’re a woman, walking home after work one day. You pass a local bar, where a couple of people are just coming out. One of them, a young man in a T-shirt and jeans, sees you nearby and leers, “Hey Sexy...”.</p><p>You might think this is a typical scene, and maybe you’ve even experienced it before (I apologize for my gender). But now imagine the scene again: you’re walking by, the young man looks at you, and leers, “Hey Intelligent...”. How would your reaction be different? I imagine this second scenario feels <a href="https://twitter.com/justinnhli/status/354321746570117120">more acceptable, or at least weirder and less offensive</a>, while the first one is repulsive and objectifying.
The question is, why?</p><p>One answer that comes up almost immediately is that the first scenario is objectifying, while the second one isn’t. For me, objectification <a href="http://justinnhli.blogspot.com/2013/06/defining-objectification.html">is not well defined enough</a> to make this argument either way; specifically, I don’t see why we can’t have <em>mental</em> objectification. Consider the <a href="http://en.wikipedia.org/wiki/Objectification">criteria for objectification</a>. The above-linked Wikipedia article quotes from Martha C. Nussbaum’s <a href="http://www.jstor.org/stable/2961930">paper</a>, from which I reproduce this list of “seven ways to treat a person as a thing”:</p><blockquote><ol style="list-style-type: decimal"><li>Instrumentality: The objectifier treats the object as a tool of his or her purposes.</li>
<li>Denial of autonomy: The objectifier treats the object as lacking in autonomy and self-determination.</li>
<li>Inertness: The objectifier treats the object as lacking in agency, and perhaps also in activity.</li>
<li>Fungibility: The objectifier treats the object as interchangeable (a) with other objects of the same type, and/or (b) with objects of other types.</li>
<li>Violability: The objectifier treats the object as lacking in boundary-integrity, as something that it is permissible to break up, smash, break into.</li>
<li>Ownership: The objectifier treats the object as something that is owned by another, can be bought or sold, etc.</li>
<li>Denial of subjectivity: The objectifier treats the object as something whose experience and feelings (if any) need not be taken into account.</li>
</ol></blockquote><p>To give Nussbaum credit, she refuses to say whether these criteria are <a href="http://en.wikipedia.org/wiki/Necessity_and_sufficiency">necessary or sufficient</a>. Still, all of these criteria (except for the last) apply to using someone for mental stimulation as well as for sexual stimulation. Consider, for example, the fictional <a href="http://torment.wikia.com/wiki/Brothel_for_Slaking_Intellectual_Lusts">Brothel for Slaking Intellectual Lusts</a> in <a href="http://en.wikipedia.org/wiki/Planescape:_Torment">Planescape: Torment</a>. By the <a href="http://www.wischik.com/lu/senses/pst-book.html">in-game description</a>, the Brothel was established</p><blockquote> <p>to give those lustful fevers that strike the mind more avenues of expression rather than the simply carnal. Much pleasure can be had in conversation and engaging in the verbal arts with others. [...] This brothel is intended to slake the lusts of even the hardened intellectual. It is designed to stimulate the mind, to heighten one’s awareness of themselves and others, to create new ways of <em>experiencing</em> another person. It is for those who seek something more than the shallow physical pleasures.</p></blockquote><p>More succinctly, the brothel aims to satisfy any intellectual urgings its clients might have. The “prostitutes” in this fictional brothel are objectified just as much as prostitutes in real brothels – except that it’s their minds the clients are after, not their bodies. (I’m not suggesting that prostitutes are inherently objectified; I’m simply saying that <em>if</em> sexual prostitutes are objectified by their clients – which seems likely – then the same line of argument applies to mental prostitutes.)</p><p>This discussion about objectification is technically a digression from the two scenarios presented, since whether the scenarios are examples of objectification is debatable.
Nonetheless, the discrepancy between discussions of sexual and mental objectification is noteworthy. The latter may not be a pressing problem for feminists – I have no doubt that sexual objectification is much more prevalent than mental objectification – but the disparity should not be so great that I have never heard about the latter at all.</p><p>Returning to the central question then: why does “Hey sexy...” evoke a more viscerally repulsive response than “Hey intelligent...”? One major difference stands out between sex and the brain: while we could tie someone up and force them to have sex with us, we can’t do the same to make them tell us stories or argue in a debate. This objection, however, only holds superficially. While a person cannot be held helpless to engage in an argument, they can be blackmailed into doing so. The difference is therefore not a matter of consent; we can imagine that the young man in the scenarios can get what he wants either way with sufficient coercion.</p><p>There is another avenue of inquiry that leads to a dead end: that of the intellect as a more prominent definition of self. This is to say that, for most people, who they consider themselves to be is more dependent on their thought processes than on their physical appearance. If we take this hypothesis for granted, however, it would imply the theoretical existence of someone who finds “Hey intelligent...” more repulsive than “Hey sexy...”. All this would require is that their physical appearance determines more of their sense of self than their intellectual prowess. This inverted reaction seems unlikely, and for that reason I dismiss this hypothesis about self-image. (Although if someone does have these beliefs, I would be very interested in talking to them.)</p><p>This leaves me with just two hypotheses (which may have been obvious to some of you from the beginning).</p><p>My first hypothesis is that “Hey sexy...” is more repulsive solely because of cultural/historical reasons.
As <a href="https://twitter.com/kylelady/status/354324190440079361">my friend suggested</a>, women have never been judged by their brains. Greeting a woman with “Hey sexy...” is, therefore, as loaded as it would be to greet a black person with “Hey nigger...”. In this case, the word “sexy” brings with it the historical inequality between men and women, and therefore the objectification of the latter. “Intelligent”, on the other hand, carries no such cultural baggage, and therefore elicits less of a reaction. Interestingly, "sexy" is unique in that it is the only complimentary (if interpreted literally) adjective that may apply to oppressed groups (through its connection with women; I think this is a stretch too); we can't use "black", "queer", or "faithless" as compliments, but we can with "sexy".</p><p>My second hypothesis is that “Hey sexy...” is more repulsive because of the negative cultural connotation associated with sex. I have to admit that while this hypothesis is attractive (because it’s an association people can choose to ignore), it doesn’t hold up well in cross-cultural comparisons; as far as I know, “Hey sexy...” is still repulsive regardless of which society it is uttered in. Regardless, this suggests the thought experiment of other greetings, such as “Hey violent...” or “Hey prostitute...”. Only the latter of these evokes repulsion (in my simulation of a woman), which suggests that sexuality is heavily involved, since the former is also viewed negatively in modern society. It should be noted that sexiness (as general physical attractiveness) is the only non-modifiable, positive trait.</p><p>In the end, I don’t quite understand the reactions to “Hey sexy...” as compared to “Hey intelligent...”, or even a third metric such as “Hey caring...”. There’s no a priori reason that sexual objectification has more emotional impact than mental objectification, and yet this seems to be the case. Can anyone tell me what I’m missing?</p><p>PS.
Maybe <a href="http://en.wikipedia.org/wiki/Embodied_cognition">embodiment</a> has something to do with it?</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-48866643198549034102013-11-05T09:51:00.002-05:002013-11-05T09:52:46.686-05:00Invisible Colors<p>Often, when I have conversations with non-computer-science people, these "conversations" resemble "arguments". One recurring argument is how we decide whether something has a particular property. To give a concrete example, we might try to figure out whether a <a href="http://en.wikipedia.org/wiki/Literary_criticism">critical review</a> (of a book, say) is <em>authentic</em>, that is, that it was derived from the true beliefs of the literary critic. It turns out that the disagreement runs deeper than literary criticism, and I want to address that in this post.</p><p>Sidenote: the core ideas in this post were inspired by <a href="http://ansuz.sooke.bc.ca/entry/23">another article</a>. I think I push the idea a little further here, and also give better examples, so feel free to read the article and come back. At the very least, it’ll plug any explanation pitfalls in this post.</p><p>The big takeaway of this post is that sometimes we care about properties of objects that cannot be found in the makeup of that object. To use a simple example, my name is “Justin”, but if you take me apart atom by atom, there is no “Justin-ness” property to be found anywhere. Note that this is not simply the distinction between a system of components versus individual components; the “circulatory system” may not reside in any individual atom, but we can point to some group of atoms (ie. the heart, blood vessels, etc.) and say that they make up the “circulatory system”. In this case, the circulatory system is the result of the interactions of atoms; the same cannot be said of the name “Justin”.</p><p>Maybe a real-world example would make the point clearer.
In business, when you receive funding from some source, sometimes there will be stipulations for how you may use that money; this is the case for the <a href="http://en.wikipedia.org/wiki/Government_procurement_in_the_United_States">US government</a>. Closer to home, the <a href="http://www.eecs.umich.edu/CSEG/">Computer Science and Engineering Graduates (CSEG)</a> student group gets money from the University of Michigan, but they can’t use that money to buy alcohol. At the same time, CSEG also collects money from graduate students; this money <em>can</em> be used to buy alcohol. To keep these funds separate, CSEG actually has two bank accounts, and is not allowed to arbitrarily move money from one to the other (for some definition of arbitrary). There is no difference between the money in these two accounts, of course; money is <a href="http://en.wikipedia.org/wiki/Fungibility">fungible</a>, and merchants would happily accept money from either. But to abide by university regulations, CSEG must distinguish between the two, and only use one of them to buy booze. In finance, this distinction is called the Color of money, a term I will adopt here (and also used in the source article above, although for a different reason).</p><p>If we want to get technical, Color-like properties are ones that do not exist in the object itself. We can take apart CSEG’s two bank accounts (they won’t be happy about this), and nothing in the account will say that one can be used to buy beer. Nothing in the external world can tell us the Color of the accounts; instead, Color is something that we need to keep track of ourselves. In memory research, properties like Color are called <a href="http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=29215">“non-projectable”</a>, because the outside world cannot give us this information (if you want to be poetic, it cannot “project” the information into our heads). 
What this usually means is that Color is not about the object itself, but about the <em>history</em> of the object. When we say that this account has a Color of “cannot buy booze”, what we mean is that the money in this account came from the university and not from students. It’s how the money got there, not what the money is, that defines this distinction.</p><p>Since Color is not an intrinsic part of an object, we can do strange things with it. For example, CSEG can do tricks to change the Color of money, by paying for some social event (for example, rock climbing), then charging grad students to attend. The money with the “no booze” Color is now spent (notably, not on booze); the money that CSEG now has came from students, and so has the “yes booze” Color, even though these two may be the exact same amount. We have now changed the Color of money, and can do different things with it (eg. a bar crawl). This is, of course, a form of <a href="http://en.wikipedia.org/wiki/Money_laundering">money laundering</a>, but no one seems to mind.</p><p>I want to return to computer science now, and give another example of Color: whether a file is subject to copyright (this was the example in the source article). Let’s say I have three copies of a song (say, <a href="http://www.youtube.com/watch?v=dQw4w9WgXcQ">this one</a>), which I got from different places: one I got from the artist as a gift, one I got as a digital download from iTunes, and one that I got from a BitTorrent file-share. From the computer’s perspective, these three files are <em>exactly the same</em>, down to every 1 and 0 at the binary level. (This is not strictly true, but we’ll run with it for now.) The computer can’t stop me from making a copy of my iTunes version for a friend – not because it doesn’t have the power to, but because it doesn’t know which one came from iTunes. 
There’s nothing in the bits that says whether I’m legally allowed to copy a file; the legality of that operation is a Color.</p><p>For obvious reasons, this is a big problem for lawyers. To make sure everyone is paying for their music, they want to impose restrictions on what people can do with their files, so they come up with <a href="http://en.wikipedia.org/wiki/Digital_rights_management">digital rights management (DRM)</a>. They require iTunes to add extra 1’s and 0’s to their files, so that my computer knows it’s from iTunes, and can stop me from making copies; in computer science jargon, this is called <em>metadata</em>, since it’s data about the data itself. The problem is that metadata doesn’t change the file in any intrinsic way; it merely adds some numbers to the end. If I want, I can decide to add this metadata myself – or more likely, remove this metadata – and all of a sudden my computer can again copy files from iTunes.</p><p>(In reality, there’s the issue of encryption – changing the file in some way that requires special knowledge to read. This is a bigger problem, since encryption is designed to make its reversal – “removing the metadata” – difficult. But nonetheless, if the encryption can be reversed (ie. if the file can be decrypted), the file can again be read and copied.)</p><p>If all this computer science stuff seems complicated, it turns out that the same problem exists in <a href="http://www.urbandictionary.com/define.php?term=meat%20space">meat space</a>. Copyright, ultimately, is about the <em>ownership</em> of an idea, and ownership applies equally well to real objects. Let’s say you and I each own a copy of the same coffee mug, identical in every way. I can take both, move them around behind my back, then bring them back out and ask you which one is yours.
You can’t tell, of course, because the mugs are identical; you are now in the same position as the computer with the three files.</p><p>“But wait!” you say, “I can just stencil my name onto my cup, and now my cup is different! What do you think of that?” You’re right, of course, but that’s the point: your name on your cup doesn’t really mean that you own the cup. I can, for example, <em>also</em> stencil <em>your</em> name on <em>my</em> cup; nothing stops me from doing this, and the cup is still <em>mine</em>, even though it has <em>your</em> name on it. The ownership survives regardless of what changes you make to your cup and what changes I make to mine; it does not depend on the physicality of the cup, and is therefore a Color.</p><p>(Encryption in this case, I suppose, is equivalent to locking up your mug. Even then, nothing stops me from picking the lock and claiming the cup is mine.)</p><p>As the source article pointed out, it’s not that computer scientists don’t care at all about Color. When we need a random number (excuse me, a randomly-generated number), we care a lot that the numbers are actually random. Theoretically, rolling a die and getting five 6’s consecutively (6, 6, 6, 6, 6) is just as likely as getting any other sequence (say, 4, 6, 5, 1, 6); they both have <math><msup><mrow><mo>(</mo><mfrac bevelled="true"><mn>1</mn><mn>6</mn></mfrac><mo>)</mo></mrow><mn>5</mn></msup><mo>=</mo><mfrac bevelled="true"><mn>1</mn><mn>7776</mn></mfrac><mo>=</mo><mn>0.000129</mn></math> probability of happening. But, like any Color, randomness is not in the sequence itself, but in how the sequence is generated. This is why we care about how a <a href="http://en.wikipedia.org/wiki/Random_number_generation">random number generator</a> works: precisely because we can’t tell just by looking at the numbers.
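That probability is easy to verify; here is a quick Python check (my own addition, not from the original post):

```python
from fractions import Fraction

# Probability of any *specific* sequence of five fair die rolls,
# whether it looks "random" (4, 6, 5, 1, 6) or not (6, 6, 6, 6, 6).
p = Fraction(1, 6) ** 5
assert p == Fraction(1, 7776)
print(float(p))  # about 0.000129
```

Both sequences score identically under this calculation, which is exactly the point: nothing in the output distinguishes a "random" sequence from a suspicious one.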
In the ideal case, we’ll have the <a href="http://en.wikipedia.org/wiki/Source_code">source code</a>, so we can mathematically prove that the generated numbers are random; in practice, we leave this job to a small subset of people, and trust that whatever random number generator we use is good enough.</p><p>I am finally ready to go back to my opening example about authenticity in literary criticism. In real life, the discussion came about because my friend mentioned that in the school of <a href="http://en.wikipedia.org/wiki/New_Historicism">New Historicism</a>, critics frame the piece of literature in its historical context; they might, for example, comment on how colonialist ideas show up in Shakespeare’s works. This is notable because the idea of colonialism wasn’t around during Shakespeare’s time; it is only in hindsight that we can point out these influences. I then asked the question of whether a critic can <em>pretend</em> to be from a different time period, and review a work from that perspective; an example might be to review Shakespeare as someone from an interstellar civilization. My friend was rather horrified, and said that such a review would be science fiction and not literary criticism, that it would not be <em>authentic</em>. I then modified my example: what if a critic who is familiar with modern psychology <em>pretended</em> not to be, and wrote a review using psychoanalytic theories? Would such a review be accepted into literary journals? My interest at the time was more about the need for scientifically-accurate literary criticism, but it also touched on the issue of authenticity. Given my friend’s previous vehement reaction, I was surprised to hear that such a review would be publishable, despite, of course, it also being inauthentic.</p><p>As might be obvious by now, the problem is that the authenticity of a review is not a property of the review itself, but a Color.
Two critics can have completely different viewpoints but write the exact same sequence of words in a review; nothing in the words would distinguish which one is authentic and which isn’t. To be fair, this is not just a problem for literary criticism, but for academia as a whole: any manuscript to be reviewed for publication should be true, but that truth is not a property of the manuscript, and therefore cannot be determined 100% accurately every single time. The difference between literary criticism and (say) computer science is that in computer science, the manuscript contains experimental results; this allows the experiment to be replicated and the results reproduced, thus checking the correctness of the manuscript (albeit after it has already been accepted or rejected for publication). This path to determining correctness (or the criticism equivalent, authenticity) is not open to literary criticism: there is nothing to compare a review against, since a review is by nature subjective, and a different critic cannot “reproduce” the views of the original author. Of course, we’ve actually already encountered this problem before. Correctness in the sciences is like having the random number generator available; if we don’t believe the sequence is random, we can look at the random number generator itself and “reproduce” the results. Authenticity, on the other hand, is like being given a <a href="http://en.wikipedia.org/wiki/Black_box">black-box</a> random number generator; we might be able to say something is not authentic if it is painfully obvious (like if the generator always produces 6’s, or if a critic claims to be part of an interstellar civilization), but for less egregious violations, there’s simply no way to tell.</p><p>I didn’t mean this post to be an attack on literary criticism; it was merely a convenient example (and a true and authentic one; you’ll just have to trust me on this…).
The issue of Color comes up in many other places, including within computer science itself, and I’m sure has sparked many debates similar to the one between me and my friend. Color is therefore a useful thing to keep in mind, especially since it makes an appearance in both the sciences and the humanities. Instead of arguing over why it is unrealistic to care about some property, we might invoke the concept of Color and move on to whether and how accurately we can determine that Color instead.</p><p>PS. I do want to note the irony in how literary criticism as a field cares so much about authenticity, given that they spend a lot of effort separating the creator and the work created; this is the (famous?) <a href="http://en.wikipedia.org/wiki/Death_of_the_Author">“death of the author”</a> view of literature. Somehow, this philosophy does not extend to criticism itself.</p><p><strong>Teaching Active Learning</strong> (2013-11-03)</p><p>It’s <a href="http://justinnhli.blogspot.com/2009/08/reflections-on-teaching.html">been a little while</a> since I’ve <a href="http://justinnhli.blogspot.com/2009/09/michigan-updates.html">talked about teaching</a>. Although I only taught for one more semester after that last post, I got involved with the <a href="http://www.crlt.umich.edu/">Center for Research on Learning and Teaching (CRLT)</a> to train new non-faculty instructors and help them get feedback. One of the main things I’m supposed to do is to encourage student instructors to use <em>active learning</em>.</p><p>The main idea behind active learning is to minimize the time that students spend sitting and listening to a lecture. Research has shown that a student can only <a href="http://www4.ncsu.edu/unity/lockers/users/f/felder/public/Papers/Prince_AL.pdf">pay attention for about 15 minutes in lecture (pdf)</a>, after which retention goes from 70% to 20%.
Instead of having students merely <em>passively</em> taking in material, <em>active</em> learning would require the student to participate in some way. Some more common ways of using active learning in the classroom include project-based work, case studies, group discussions, <a href="https://images.google.com/images?q=iclicker">iClickers</a>, and so on. These teaching methods are not meant to completely replace the lecture, but to supplement the lecture by forcing students to process the material. Used correctly, active learning does help students achieve the goals of a class.</p><p>My first problem with active learning is already apparent: the term “active learning” is too broadly defined. If we go by the above definition, then something as simple as inviting student questions would be considered active learning. While allowing students to ask questions is a good thing – and while you’ll be surprised how often students don’t feel comfortable asking questions – setting the bar this low means that the majority of teachers are already practicing active learning. If this is the definition of active learning that is being sold, there is no incentive for new teachers to try the more involved and more effective methods. It’s the equivalent of calling driving a sport, which would let everyone who lives in the suburbs say they do sports and live an “active lifestyle”. They could then congratulate themselves on being active while never playing anything that involves exercise.</p><p>The problem is that there is no good, short definition of active learning that precisely captures the idea. <a href="http://en.wikipedia.org/wiki/Confirmation_bias">Confirmation bias</a> is at fault here. Given the descriptions “making students think” and “more than passively listening”, people who understand active learning think it fits the concept well, failing to realize that if they <em>didn’t</em> know what active learning was, it fits a whole lot more things too.
Unfortunately, <a href="http://www.imdb.com/title/tt0133093/quotes?item=qt0324246">no one can be told what <del>the Matrix</del></a> active learning is; they simply have to be presented with example after example, slowly letting their brain learn the <a href="http://en.wikipedia.org/wiki/Fuzzy_concept#Psychology">fuzzy concept</a>.</p><p>Herein lies my second problem with active learning: we don't spend close to enough time training instructors for them to learn this distinction. At Michigan, a new instructor is required to go through a “full day” of training, which in reality only goes from 9am to 4pm. Not counting lunches, breaks, welcoming speeches, and other non-teaching material (such as policies for the <a href="http://michigandaily.com/news/geo-negotiating-contract-extension-university">Graduate Employee Organization</a>) leaves maybe five hours of teaching orientation. The first hour is about classroom climate – discrimination, student-instructor dynamics, and so on. The next two hours contain concurrent sessions on different types of instructional duties, such as discussions, labs, office hours, etc. The last two hours are for practice teaching, but since the new instructors are in groups, the actual teaching is limited to five minutes. If you’re lucky, active learning will be brought up, defined, and (briefly) discussed in the concurrent sessions, and maybe again during practice teaching; if you’re not lucky, the term will be thrown around, leaving you none the wiser.</p><p>To be fair, Michigan does require new instructors to go through “ongoing professional development”. One of the possibilities here is advanced practice teaching, where the instructor is required to use an active learning technique, selected from a handout.
In reality, very few instructors actually do so, partially because the idea of active learning is still abstract, partially because instructors only have ten minutes (a 100% increase!), and partially because the other instructors role-playing as students have no incentive to participate. Often, the most “active” the students ever get is to spend two minutes trying to guess what the instructor wants from them.</p><p>I want to make clear that I understand the difficulty of getting this right. There is only so much time both new instructors and the university are willing to spend on training, and having that bit of training is better than not having that training at all. Instructors do have another alternative: to get a Midterm Student Feedback (MSF), which is actually the main component of my job. In an MSF, a third-party facilitator observes a class session of the instructor’s. Then, taking 15-20 minutes of class time, the instructor leaves the classroom so that the facilitator can ask the students what they like about the instructor, and what the instructor can change to help them. As a result, the suggestions are personalized to the instructor, and missed opportunities for active learning will be pointed out. Research suggests that MSFs do have <a href="http://crlte.engin.umich.edu/wp-content/uploads/sites/7/2013/06/jee_selects_article_finelli.pdf">a net positive effect (pdf)</a> on student evaluations of instructors.</p><p>Having sat through 15 student-taught discussions, labs, office hours, and so on, I think there is a third problem with active learning which, in my opinion, is the most serious one. Even if active learning is better defined, even if new instructors have enough training to understand what active learning is, <em>they may still be using active learning incorrectly</em>. Good active learning is not simply, say, getting students to talk to each other; the content of what they talk about also matters.
Take the example of an algebra class, and consider the following <a href="http://www.crlt.umich.edu/gsis/p4_8">concept question</a> for the iClicker:</p><br />
<blockquote><p>How many times does the polynomial <math><mi>y</mi> <mo>=</mo> <mo>-</mo><mn>2</mn><mo>⁢</mo><msup><mi>x</mi><mn>3</mn></msup> <mo>+</mo> <msup><mi>x</mi><mn>2</mn></msup> <mo>-</mo> <mn>1</mn></math> touch the x-axis?</p><ul><li>It doesn’t touch the x-axis.</li>
<li>It touches the x-axis once.</li>
<li>It touches the x-axis twice.</li>
<li>It touches the x-axis three times.</li>
<li>It touches the x-axis four times.</li>
</ul></blockquote><br />
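As a quick sanity check on this cubic, here is a small Python sketch (my own addition, not part of the original post) that counts where the curve crosses the x-axis by looking for sign changes on a fine grid; for this particular cubic, which has no repeated roots, crossings and touches coincide:

```python
def f(x):
    # The polynomial from the question: y = -2x^3 + x^2 - 1
    return -2 * x**3 + x**2 - 1

# Sample the cubic on a fine grid and count sign changes; each
# sign change brackets exactly one real root (no repeated roots here).
xs = [x / 100 for x in range(-1000, 1001)]  # from -10.0 to 10.0
ys = [f(x) for x in xs]
crossings = sum(1 for a, b in zip(ys, ys[1:]) if a * b < 0)
print(crossings)  # 1
```

This is a brute-force check, not a derivation, but it agrees with the algebra: the one real root sits between -1 and 0.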
<p>This is not an unreasonable question to appear in an algebra lecture, but it’s not an effective one to gauge how students are doing. For one, the question is overly dependent on error-prone algebraic manipulation, so even the students who know how to get the answer could get it wrong. As an iClicker question, it would also take students too long to calculate the answer. Finally, because the problem is specific, it doesn’t test the students’ general mastery of polynomials. (The answer, by the way, is that it <a href="http://www.wolframalpha.com/input/?i=y+%3D+-2x^3+%2B+x^2+-+1">touches the x-axis once</a>.)</p><p>Now consider this question instead:</p><br />
<blockquote><p>If a polynomial touches the x-axis twice, what must be true about its degree?</p><ul><li>It must have a degree of one.</li>
<li>It must have a degree of two.</li>
<li>It must have an even degree.</li>
<li>It must have a degree of two or more.</li>
</ul></blockquote><br />
<p>This question requires the students to understand and reason about the relationship between the algebraic representation of the polynomial (ie. its degree) and the graphical representation (ie. its roots). If students disagree, it’s also likely to generate discussion, where individual students can try to come up with counterexamples. Finally, the answers will tell the instructor how the students are distributed in terms of their understanding, and allow the instructor to adjust the class content as necessary. (The answer, by the way, is that it must have a degree of two or more: a polynomial needs a degree of at least two to have two real roots, and a polynomial of higher degree can have its remaining roots be non-real, so the curve can still touch the x-axis exactly twice.)</p><p>The bigger lesson here is that effectively using active learning is more than just the structuring of the class. It also requires the instructor to read the mood of the students, to have a feel for how students will react to certain activities, and to understand where students may have trouble. This is <a href="http://en.wikipedia.org/wiki/Lee_Shulman#Pedagogical_content_knowledge_.28PCK.29"><em>pedagogical content knowledge</em></a> – not just content knowledge (eg. algebra), not just pedagogical knowledge (eg. active learning), but pedagogy as applied to the particular content of a class (eg. what active learning methods work for teaching algebra). Active learning cannot be divorced from the teaching goals of the instructor or the prior knowledge of the students; it needs to be planned as an integral part of the class, not as an independently-designed silver bullet to be inserted anywhere for instant perfect teaching. Without these considerations, active learning in the classroom will only ever bear a surface resemblance to good teaching.</p><p>Ultimately, I fear that the entire package of being a good instructor cannot be learned outside of extensive practice, with a good dose of innate talent to boot.
I don’t know how to force people to reflect on their teaching enough to reach this point. I don’t think the training and services we do provide to instructors are a waste of time, since they’re better than nothing; but I am also skeptical that the training has any large effect on teaching quality, even if the usage of active learning is increased.</p><p><strong>National Blog Posting Month</strong> (2013-11-01)</p><p>Because I’m trying to propose my thesis topic and have so much time on my hands, I’m trying out National Blog Posting Month (NaBloPoMo). For people who don’t know, this is a blog version of <a href="http://en.wikipedia.org/wiki/National_Novel_Writing_Month">National Novel Writing Month</a> (NaNoWriMo), where people write a 50,000 word novel in the month of November, regardless of quality. The goal for blogging is more relaxed: 30 blog posts in 30 days, one per day, of any length.</p><p>My goal is more meager still: I only intend to write 15 blog posts, published on the odd days of the month. I already have the topics planned out, in some semblance of order. Most of the topics are things that I’ve been thinking about for a while now, and have therefore amassed some notes for; others are fillers, fun things I’ve encountered in the last year or so.</p><p>I’ve <a href="http://justinnhli.blogspot.com/2009/09/writing-about-writing.html">written about writing</a> before, and how it’s an outlet for me to sharpen my ideas. In the years since, I’ve also come to realize that I keep a blog in particular because I often want to share things I have learned. I’m surprised that I needed a friend to tell me this, that it wasn’t more obvious before. Part of the reason is probably that I consistently underestimate my social needs, and simply fail to realize that I’m deriving joy from other people reading my writing.
I feel the larger part of the answer though (here I go underestimating again) is that most of my enjoyment comes directly from figuring out my ideas. I like it when I find an explanation for people's behavior that fits into my worldview, or when I develop my own viewpoint on some issue; these can then be used to understand other people. The desire to tell people is a result of my excitement at figuring something out.</p><p>There <em>is</em> something I want to get out of sharing though: criticism. During a meeting, my adviser once said, “what I want is for people to argue with me.” The same is true here; some of the things I write will be objectionable, if not ignorant and downright wrong. I want these things pointed out, so that I will know better, or at least have a better understanding of why other people disagree with me. So, I humbly (fine, not that humbly) ask you to call me out, debate with me, and show me that I’m stupid.</p><p><strong>Effing the Ineffable</strong> (2013-09-11)</p><p>I think it’s interesting that language, despite being a powerful means of communication, cannot express a lot of things.</p><p>To use a cliched example, language cannot express what love is. People resort to saying, <a href="http://www.imdb.com/title/tt0133093/quotes?item=qt0324318">“No one needs to tell you you are in love, you just know it, through and through.”</a> Same with art, which people only know when they see it (or <a href="http://en.wikipedia.org/wiki/I_know_it_when_I_see_it">obscenity</a>).</p><p>That’s not the same as saying that language cannot lead to such things. We may not be able to describe love, but we can tell stories about it, and depend on our <a href="http://justinnhli.blogspot.com/2013/05/reflection-on-wittgenstein.html">shared neuron structure</a> for the reader to get in the character’s shoes.
More indirectly, language can express instructions that lead the reader to have inexpressible feelings. We may not be able to <a href="http://justinnhli.blogspot.com/2013/08/unteachables.html">tell someone how to be effortless</a> or be spontaneous, but we can tell them to do things that will eventually lead them to be effortless or spontaneous.</p><p>Buddhist <a href="http://en.wikipedia.org/wiki/Koan">koans</a> are, in theory, exactly this, another level down. Language cannot express what enlightenment is. It can’t even describe the process of attaining enlightenment in any useful way. Instead, koans try to make you think differently by <a href="http://www.catb.org/jargon/html/K/koan.html">denying the mind its normal way of thinking</a>. The reader is supposed to then think on the koan and, eventually, adopt the requisite state of mind. In a sense, a koan describes the shape of inexpressible directions that lead to inexpressible states of mind.</p><p>All in all, it’s amazing that we can induce patterns of thought in other people, even for things that language fails to describe. Maybe this is why sometimes a passage appears vague and unclear to us; maybe it’s not that the author is doing a bad job, but that language itself is incapable of expressing what the author wants to say.</p><p>PS. The title of this post is, of course, <a href="http://en.wikipedia.org/wiki/Ineffability#Notable_quotations">paraphrased</a> from Douglas Adams.</p><p><strong>Unteachables</strong> (2013-08-19)</p><p>A partial list of things that cannot be taught:</p><ul><li>Effortlessness</li>
<li>Heroism</li>
<li>Subversiveness</li>
<li>Resoluteness</li>
<li>Commitment</li>
<li>Individuality </li>
<li>Sympathy</li>
<li>Spontaneity</li>
</ul><p>This is not to say that you can't <i>induce</i> people to be some of these things; it's just unclear how they can be <i>taught</i> them without already knowing how. Some things that can be taught, though only with difficulty, include:</p><ul><li>Detachment</li>
<li>Critical Thinking</li>
<li>Inner Peace</li>
<li>Creativity </li>
<li>Responsibility </li>
</ul><p>The only line I can draw is that the second list seems to be more about ways of <i>thinking</i>, while the first list contains ways of <i>being</i> or <i>feeling</i>. If anyone can draw a better line, or has ways of teaching things on the first list, leave a comment below.</p><p><strong>Nerd Sniped by Nerd Sniping</strong> (2013-08-09)</p><p>I got <a href="http://xkcd.com/356/">nerd sniped</a> today reading Paul Lockhart’s <a href="http://www.goodreads.com/book/show/6232657-a-mathematician-s-lament"><em>A Mathematician’s Lament</em></a>. It’s an excellent rant on the state of mathematics education, but it also contains a short essay on why he finds mathematics fun. It contained this problem:</p><blockquote><p>Do all <a href="http://en.wikipedia.org/wiki/Graph_%28mathematics%29#Undirected_graph">undirected graphs</a> (of order > 2) contain at least two nodes with the same <a href="http://en.wikipedia.org/wiki/Degree_%28graph_theory%29">degree</a>?</p></blockquote><p>Actually, the precise problem that got me is not important (I just wanted to nerd snipe you too). As I was getting coffee and thinking about it though, I wondered if susceptibility to being nerd sniped is correlated with breadth of interest. It makes sense that the more things you are interested in, the more easily you’d be distracted by a random, sufficiently difficult question. Of course, the “sufficiently difficult” part is hard to measure, but we’ll let that go for now.</p><p>But then it occurred to me that all the nerd sniping questions I know are (at least somewhat) logical in nature. I can’t imagine someone being nerd sniped by a history question. For example, I don’t know anything about the causes of World War I, other than that it involved the assassination of Archduke Ferdinand (wow, I even got the name right; I wrote that without checking my sources).
But I can’t imagine myself dropping everything to think about this problem. I don’t think this is a lack of interest, either; I can’t even imagine my historian friends doing this. (<a href="http://leftoversoup.com/archive.php?num=244">Yes, I have friends. Shut up.</a>)</p><p>I feel the difference is that, for the logic-based questions, it’s not the answer that matters, but the process of getting there. I can tell you the answer to the graph problem: yes (…probably. I haven’t solved it yet). But that answer is not satisfying. It’s like looking at the solution of a solved Sudoku puzzle; the only thing it tells you is that the board can be solved. (For this reason, I think it's pointless to print Sudoku solutions, as long as I trust them to be solvable.) For both the original infinite-grid-of-resistors problem and the graph problem, I want to know how that answer was derived. I might not even believe your answer until you’ve shown me the proof. (By the way, the answer to the resistor problem is 4/pi - 0.5 ohms.)</p><p>But that’s not true of the question about World War I. If you tell me (to whatever detail) <a href="http://en.wikipedia.org/wiki/Causes_of_World_War_I">why WWI occurred</a>, I would nod and go on my way. I wouldn’t question your explanation (unless it contradicts something I already know), and I wouldn’t question your source of knowledge.</p><p>I first thought that this was because the answer is trivially obtainable from, say, Wikipedia; but then, I could also have looked up the proof and been done with it. It’s also not the case that the question about WWI requires a long explanation (eg. it asks “why/how”), while the graph question requires just a binary answer (eg. it asks “do”). I can transform both questions the other way (“Was the assassination of Archduke Ferdinand a factor in starting WWI?” and “Why do all undirected graphs have two nodes with the same degree?”), and the feeling remains the same.
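For the impatient, the graph claim can at least be brute-forced for small cases; here is a short Python sketch (mine, not Lockhart's) that checks every simple undirected graph of order three to six. It confirms the "yes" for small cases, though, true to form, the check tells you nothing about <em>why</em>:

```python
from itertools import combinations

def has_repeated_degree(n, edges):
    """True if some two of the n nodes have the same degree."""
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return len(set(degree)) < n

# Exhaustively check every simple undirected graph of order 3 to 6
# (at order 6 there are 2^15 = 32768 possible edge sets).
for n in range(3, 7):
    possible_edges = list(combinations(range(n), 2))
    for bits in range(2 ** len(possible_edges)):
        edges = [e for i, e in enumerate(possible_edges) if bits >> i & 1]
        assert has_repeated_degree(n, edges)
```

No counterexample turns up, but the proof is still left as the fun part.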
Notice, though, that the reworded graph question now presumes an answer, so the new question actually gives more information than the old one.</p><p>As I thought more about this, I realized that I get nerd sniped by questions outside of the maths and sciences as well. I’ve nerd sniped someone before with the question, “Is the statement ‘Unicorns have one horn’ true or false?” I myself have spent entire afternoons thinking about questions from psychology/philosophy, the latest being the nature of vulnerability. Although, if I apply the same reasoning of whether I’d want to know the reasoning behind the answer, I’m not sure if the psychology question is truly a nerd sniper.</p><p>Maybe there are also shades of being sniped too. While I wouldn’t think too long on the question of, say, the <a href="http://www.mugglenet.com/books/oddities_socks.shtml">significance of socks in the Harry Potter series</a>, I can imagine myself disagreeing with someone else’s answer, leading to an afternoon of debate. It’s not as powerful a sniper as the logical questions, and there’s confusion between the appeal of the problem itself and the appeal of a good discussion. I wouldn’t call it nerd sniping for this reason, but it’s still something that would cause me to stop what I’m doing.</p><p>I don’t have any answers to my questions about nerd sniping raised here. I am curious whether and how much my mathematical and scientific background has biased me in what I get sniped by. I would love to hear from people in history or anthropology or related subjects, and see if there are questions that get them but don’t get me.</p><p>…Once you get over how I’ve just nerd sniped all of you, of course.</p><p><strong>Regret and Responsibility</strong> (2013-08-07)</p><p>A couple months ago I had a conversation with a friend.
In that conversation, I ended up expressing a belief that hurt them, leading to a month of tense interactions. But we eventually sorted it out, and in the resulting conversation, my friend asked me, “Do you regret saying what you did?” I replied, “I’m sorry that you were hurt, but I don’t feel guilty about saying it.”</p><p>The words “sorry” and “regret” have several related meanings. “Sorry” can be used to <a href="http://xkcd.com/945/">express condolence without implying guilt</a>, as in “I’m sorry for your loss.” It can, of course, also be used as an apology, as in “I’m sorry I yelled at you.” Although “regret” can be used in the same manner (“I regret yelling at you yesterday.”), it can also simply express a wish for a different outcome, as in “I regret not getting the chocolate ice cream.” I take my friend’s question to use the apologetic meaning of “regret”.</p><p>Let’s assume that hurtful behavior is considered bad by both parties. I posit that when someone (the “perpetrator”) apologizes, they must believe they had somehow wronged the “victim”. The question is, could the perpetrator have hurt the victim without having done the wrong thing?</p><p>Here’s an example. It turns out that, in the story at the beginning (which is a true story, by the way), my friend had previously told me that they “would always rather have the truth than ignorance”. So, in our conversation, when they had asked me for my thoughts, I had simply told them what I believed, explicitly as a belief of mine (that is, without implying whether that belief was correct). Of course, they then took it personally, leading to the episode above.</p><p>Who was at fault in this scenario? It’s true that my friend was hurt because of what I said, and that I had said what I did intentionally.
Intuition suggests that, if that were the full story, I probably did something wrong and should apologize and feel bad.</p><p>What throws this intuition off is that my friend had asked what I thought, and that they had stated that they “always” prefer the truth. For past-me, the choice between telling and not telling was obvious: there’s no reason I should withhold what I was thinking. I could argue that, even if I <em>had</em> known that what I said would hurt them, <em>given</em> their preference for truth, I should <em>still</em> have told them. Since I had considered my choice before acting, and since a <a href="http://en.wikipedia.org/wiki/Reasonable_person">reasonable person</a> would have done the same, I don’t feel guilty about saying what I believed.</p><p>But I could argue the other way. Clearly, I <em>hadn’t</em> fully considered my choice; if I had, I would have realized that my friend’s previous statement was not true. They could have been lying then, or more plausibly, they themselves had not considered all the options before making that statement. That is, while they believed they always prefer the truth, their belief was incorrect. Knowing what I know about human psychology (i.e., that people don’t know much about themselves), I should have foreseen that they would be hurt despite their statement, and that should have influenced my decision. I was, in other words, <em><a href="http://en.wikipedia.org/wiki/Negligence">negligent</a></em>.</p><p>The question here is <em>not</em> whether I knew that my friend would be hurt; clearly I didn’t know, which is what caused this problem. The question is whether I <em>should have known</em> that my friend would be hurt, whether I am <em>responsible</em> for having that knowledge, and therefore ultimately responsible for their suffering. But on this question I’m stuck.
On one hand, I can’t be telepathic or clairvoyant; there must be a limit to what I can know, and whether my friend had told the truth as they knew it, or had actually told the <em>truth</em> truth, seems to be past this limit. Plus, having to decide between correct and incorrect beliefs for every statement anyone ever makes about themselves seems overly cynical. On the other hand, it is true that people often have incorrect beliefs about themselves, particularly <a href="http://en.wikipedia.org/wiki/Illusory_superiority">ones that give a more generous picture</a>, in this case, that they are more “rational” than they actually are. Moreover, I know this from reading papers on cognitive bias, and have had enough personal conversations to observe it occurring multiple times.</p><p>Personally, I lean towards the position that I should have known, or at least should have doubted the veracity of the statement. At least, I do now that I’ve experienced this whole episode, and I will keep an eye out for these situations in the future. As for the episode itself, I do want to support my friend in their pain, and I do wish something else had happened. In that sense, I’m sorry and I’m regretful, but I still don’t think that I did anything wrong.</p><p>EDIT 2013-08-11: It occurred to me that intentionality seems to have nothing to do with regret. Either the perpetrator knew the victim would be hurt, and had therefore deliberately acted to hurt the victim (even if it was the lesser of two evils), or the perpetrator did not know the victim would be hurt, and had therefore chosen as well as they could have. Neither case seems to fit the template of someone who should be regretful, although apologies may be necessary in both cases.
This suggests that either regret (or lack thereof) does not depend on intentionality, or the whole concept of regret is faulty.</p><p>Regardless, the passages about responsibility still hold, although personally I think both concepts of regret and responsibility are incoherent.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-76282606804281028742013-06-21T18:17:00.000-04:002013-06-21T18:23:53.435-04:00Defining Objectification<p>I want to discuss an interesting example of objectification. The point of this post is <em>not</em> to show that the incident is not an example of objectification, but to encourage crisper and more nuanced divisions between what is objectification and what isn’t. I will start with the original behavior, then describe why one might consider the behavior objectifying, then point out gray areas in the reasoning that go against intuition.</p><p>Somewhat ashamedly, I was actually the “perpetrator” of this incident. Instead of repeating exactly what I said, however, I’m going to expand on the scenario. Let’s say that I know someone through reading their blog, enough to know their personality and thoughts. From that, I find that their intelligence, desires, and life choices have artistic/aesthetic appeal, and therefore think of the author as a work of living art. Is that objectification?</p><p>I asked a friend this question. She told me that yes, it is objectification, and as explanation linked me to <a href="http://pervocracy.blogspot.com/2013/03/yellow-cover-kardashian-dont-know-whats.html">this blogpost on the Pervocracy</a>, which offered a simplified definition (emphasis in original):</p><blockquote><p>Objectification is focusing on a person’s usefulness to you <em>with total disregard for their desires</em>.
In the context of compliments, it’s not saying “You turn me on.” It’s saying “You turn me on, and whether you want to turn me on is utterly irrelevant.”</p><p>Saying “nice ass” to a person who’s deliberately wiggling their ass at you is a compliment; saying “nice ass” to a person who’s just walking by is objectification. “I want to sleep with her” is expressing desire; “I’d hit it” is objectification. “You’re sexy” is nice to say on a date because it’s a compliment; “you’re sexy” is hideously undermining to say at a business meeting because it’s objectification.</p></blockquote><p>These examples suggest that the definition should be further qualified by adding the phrase, “when they have not given you explicit consent” – setting aside whether non-verbal behavior could be considered consent. Back to my incident, this assumes (let’s say correctly) that the author of the blog did not intend their audience to appreciate them as art. By thinking of them as art, without their explicit consent, I am therefore objectifying them.</p><p>There are two unintuitive implications of this logic. The first one has to do with the features with which I’m objectifying the author, namely, their intelligence and life choices. These are not the usual features for objectification and, crucially, are things that people consider core parts of personhood. It would therefore seem that by focusing on exactly the things that make a person a person (without consent), I am still objectifying them.</p><p>The second unintuitive implication focuses on the fact that I don’t know the author, but have only read their blog. Let’s say that, a couple days later, I found out that the blog is actually a work of fiction, and that the real author had written it for their own amusement (that is, I still don’t have consent to admire it). My actions haven’t changed, and in fact now I could never have gotten the fictional character’s consent.
Intuition suggests that I am still objectifying <em>something</em> – but it’s unclear whether it is possible to objectify fictional characters. And if the answer is that yes, it’s possible, how is that different from the analysis of any character in any novel?</p>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-4344360081238823955.post-72516836880780003392013-05-19T00:38:00.000-04:002013-09-02T17:44:21.970-04:00Reflections on Wittgenstein<p>I just finished <a href="http://www.goodreads.com/book/show/393801.Wittgenstein_s_Philosophical_Investigations">David Stern's introduction</a> to Wittgenstein's Philosophical Investigations, and my mind got blown in a non-minor way. I want to share some of the thoughts that went through my head while going through the book. I am not a philosopher, and I didn't read the original text (English or German), so I can't say I am representing Wittgenstein's views. But what I can do is present both what I think I read, and how I respond to those particular ideas. This will be presented as a short dialog between a devil's advocate (indented) and a god's advocate, where the former takes a skeptical position. The devil's advocate should be taken as one would Zeno in proposing <a href="http://en.wikipedia.org/wiki/Zeno%27s_paradoxes">his paradoxes</a>: he is trying to show that something that "obviously" happens is an impossibility. The god's advocate will then propose explanations as solutions.</p><hr /><blockquote class="tr_bq"><p>It is impossible to learn a language. Consider how we understand a new word: we look it up in a dictionary, or ask someone for a definition. The definitions themselves are words, however, which means that the learner needs to already know some words in order to understand this new word. This is true of all words - all their definitions depend on knowledge of other words.
In order to learn a language, at least one word must be learned first, but learning it already requires partially knowing the language. Thus it is impossible to learn a language.</p><p>Of course, words don't have to be learned by definition. The teacher may point to a car and say "car", or may wave a fork in front of the baby and say "fork"; nouns could be learned in this way. Similarly, the teacher can repeat a verb while performing a motion, or repeat an adjective while pointing to successive objects that the word describes. This method of teaching avoids words, and so avoids the need for a priori knowledge of the language.</p><p>The problem with this approach is that the impossibility covers not only language, but all communication. When the teacher points to a car and says "car", how does the learner know that the word "car" is the object pointed to, and not the act of pointing? The act of pointing itself requires knowledge that it is only a reference, not the object itself - prior knowledge that the learner does not have. Repeating a word may be used to indicate subtle differences that the learner believes they have not yet learned to distinguish. Even gestures of affirmation or of denial - nods and shakes of the head, for example - have meaning which can only be explained through additional communication.</p><p>Example-based learning - such as pointing to red objects for "red" - has an additional problem of induction. For any finite set of examples, there is an infinite number of patterns that can describe those examples. 1, 2, and 3 may describe the set of natural numbers, or the set of real numbers, or the set of all glyphs, or the set of all objects.</p><p>There is always the possibility of misinterpretation for any communication. This gets worse with interaction units that do not represent concrete objects or actions. The verbs "think", "feel", and "remember" have no demonstrable behavior, and one cannot point to "imaginary numbers".
These concepts must be transmitted through existing common ground, and since that does not exist, communication is impossible.</p></blockquote><p>Common ground is necessary, but it is a mistake to assume that common ground must be some form of communication. Any two strangers, in any culture in any period of history, already have a common ground: their shared evolutionary history.</p><p>Any two humans, when they communicate, already share 3.5 billion years of ancestral cohabitation on earth. Our physiology and psychology have been shaped by common forces, leading to similarities before even the first grunt. Our brains interpret - and misinterpret - perception in the same way; we process information through similar algorithms with similar biases. When a teacher waves a fork, the learner is biologically programmed to pay attention to the movement. In fact, this behavior is so primitive that it is a trait shared across a large portion of animals. We also associate and cluster stimuli that occur together, sufficiently so to mistake the ring of a bell for the presence of food - and perhaps the audio signal "car" for a wheeled vehicle. Again, this is common ground with many other animals.</p><p>Other traits are more recent in evolutionary history. A number of vertebrates - cats, dogs, great apes, dolphins - are also social in nature, and our brains are also adapted for such an environment. We are biased to mimic other people's behavior, and project ourselves into their position to understand them. At least a partial understanding of facial expression is inborn, allowing infants to replicate certain expressions. There are theories which hypothesize the hard-wiring of stronger forms of communication structure, but it is clear regardless that common ground for communication exists.
The information processing commonalities above, together with the correct assumption that the teacher operates in the same way, allow the listener to infer the meaning that the teacher intended.</p><p>It goes without saying that this shared evolutionary history does not rely on further prior common ground. It can therefore safely serve as the foundation for the building of communication and language.</p><blockquote class="tr_bq"><p>This explanation of common ground puts the burden of communication on the brain, and therefore leaves out one thing: precision in communication. For one, evolution does not account for everything in a particular brain; its structure is also determined by the specific genetic inheritance as well as the environment in which the brain matured. There is no guarantee that these differences will keep the biases for communication intact. Even if the neural machinery were all there, these differences make it impossible to recreate exactly what the speaker meant.</p><p>The more fundamental problem, though, is that brains are fuzzy. People do not begin with definitions of the words they want to communicate, but with general ideas that they themselves cannot define. All that exists in the brain are approximations and conglomerations of previous experiences. "Humans" are not "featherless bipeds with broad nails", but some incoherent mixture of all humans the speaker has seen. This concept cannot be transmitted through any communication, language, gesture, or otherwise. What does get communicated is itself misinterpreted by the listener as a different mixture of people. What two people call "human" may not in fact be the same thing at all.</p><p>All that occurs during communication is a double illusion of transparency, both sides thinking they understand the other, but neither getting the point across.</p></blockquote><p>Brains are probabilistic, but so is the world!
It is a fallacy to assume that communication needs to be exactly precise; it is sufficient for it to be precise enough. Precision is a double-edged sword: too little precision and there remains ambiguity, too much precision and the communication cannot be generalized. Language is learned and used in the real world, where nothing is clear cut. It is in fact a power of communication to capture these ambiguities.</p><p>That language is ambiguous doesn't mean that language is unlearnable. Language is not learned as a whole, but slowly, piecemeal, over a long time. The first word doesn't need to be precise - and as pointed out, it can't be. All it needs to be is a first approximation, and this approximation can be refined over time. The fuzzy concepts between the speaker and the listener do not need to match up exactly, but just need to have sufficient overlap. As long as this overlap is precise enough for the world, then communication is meaningful.</p><blockquote class="tr_bq"><p>Most of the world may be ambiguous enough for communication, but not all of it is. That two speakers can get meaning overlap is not good enough. From whence comes the logical calculus of mathematical symbols?</p></blockquote><hr /><p>The first arguments from the devil's advocate are somewhat close to my understanding of Wittgenstein. Given that most of the counter-arguments were made decades after Wittgenstein's death, I don't know how much of it was known to Wittgenstein. The idea that human physiology and psychology have such an inescapable influence on everything was a new thought to me. A smaller version of the idea, that no one sees an object/event in the same way, has been pushed on me before, but this is different. It suggests that even concepts we find "objective" - for example, the maxim of Ockham's razor - may be the result of our evolutionary history (although <a href="http://www.johndcook.com/blog/2011/01/12/occams-razor-bayes-theorem/">a Bayesian explanation exists</a>).
I have been occasionally interested in how cunning linguists propose we establish communication with aliens. Most schemes begin by establishing numbers, then showing mathematical concepts. But how does mathematics arise from probabilistic biology?</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-38964214371142918312012-11-07T22:29:00.003-05:002013-09-02T15:28:07.310-04:00Generative Protection (aka. Graduate School in Midsight)<p>I feel like I should talk about grad school and how my fourth year has somehow crept up on me, but really what interests me is the idea of generative protection, so I'll talk about that and use examples from grad school to illustrate it.</p><p>I first came across the idea of generative protection - or more generally, generativity - while I was working in Web Communications back at Northwestern, in 2006. We handled the website redesign for Northwestern Magazine, and one article I had to check was Elizabeth Blackwell's <a href="http://www.northwestern.edu/magazine/winter2005/feature/redemption.html">Redemption</a>. The article was about Dan McAdams, a Northwestern psychology professor doing research on how people tell their life stories. One trait he points out is generativity, which is a measure of how much people want to leave a legacy. At the time, the only thing I noticed was that generative people also tend to be narcissistic. About a year later, I came across an article in the New York Times, <a href="https://www.nytimes.com/2007/05/22/health/psychology/22narr.html?pagewanted=all">This is Your Life (and How You Tell It)</a>. That article focused more on telling life stories and how it is healthy, but one quote about generativity stuck with me: "Often, too, [generative adults] say they felt singled out from very early in life - protected, even as others nearby suffered."</p><p>That sentence put words to an idea I've had since high school. 
In a journal entry from 2004-01-16, when someone asked me how I did on my chemistry exam and my score was better than any they'd heard so far, I wrote, "I knew, and I kind of felt sorry, like survivor's guilt, since every[one] did so poorly but I was unscathed." This feeling was touched on in an earlier <a href="http://justinnhli.blogspot.com/2009/11/challenges.html">blog post about grad school</a>. A month later, a friend asked me why I think I'm not good at taking compliments, and I replied:</p><blockquote><p>I think my reaction to compliments comes from not being able to return the compliment. This happened a lot in high school and early in college, when I would get really good grades without really trying. Inevitably someone would ask me what I got, and after telling them they would say good job or something similar. But I know intuitively that they didn't do as well, so I feel guilty about it. Kinda like survivor's guilt, I guess. I've gotten better at just saying thank you though.</p></blockquote><p>I used the word "guilt", but the feeling was never explicitly negative. If I had to explain it, I would say that it's a sort of puzzlement - about why other people didn't do as well as I did, about why people find things so difficult while I have barely exerted myself. It was only recently that the phrase "generative protection" came to mind, but it conveys how I feel very well. It was as though I was detached from the situation in some way, so hardships that affect other people only passed through me.</p><p>The idea of generative protection came back to me at the end of last year, after I worked to pass my prelim that summer.
The previous post on grad school might suggest that this feeling of easy accomplishment would have completely disappeared by then (and more so now, another year of the <a href="http://pgbovine.net/PhD-memoir.htm">PhD grind</a> later); this is true with research, but not with grad school in general.</p><p>In research, even by the first semester, I was feeling the full force of the <a href="http://en.wikipedia.org/wiki/Impostor_syndrome">impostor syndrome</a>. The syndrome is so named because people feel like they're merely faking it, and grad school is an environment that pumps out exactly these kinds of people. As a result, every grad student thinks everyone else is doing better than they are, and no one feels like they know what they're doing. In a lab with three senior students, who seemed to talk knowledgeably about their research areas, I was understandably intimidated. The funny thing was, after knowing them a little better, I learned that they were also intimidated by me, as I was quiet and always seemed to be working. It took me another long while before I understood that I was as smart as other people in the lab, that we were good at different things, and that we all tend to only notice the times when other people were smarter than us but not vice versa. This depression and later rebound in confidence corrected my feeling of generative protection. I now have a healthy, and hopefully relatively objective, view of my abilities and my own research.</p><p>But being a grad student is about more than research; it is also about being able to strike that balance between work and everything else. There are, I think, three reasons why this balance comes easily to me. The first is that I'm not afraid to take time off. I spent three of the last four weekends away from Ann Arbor: the first was part of a week-long bouldering trip to Georgia, while two more weekends were spent in Kentucky. I do feel slightly ashamed that I'm not working, but not enough to stop me from going.
I am, of course, writing this post when I could be reading papers. The second reason is that I find other non-research grad school activities to busy myself with. I am the secretary for the <a href="http://cseg.eecs.umich.edu/">computer science graduate students group (CSEG)</a>, so I have a non-passive role organizing events for the department. This semester I'm also an <a href="http://www.engin.umich.edu/teaching/crltengin/gsi_serv/etc">engineering teaching consultant (ETC)</a>, so I spend time observing and giving suggestions to teaching assistants. Doing "work" for these roles gives me a break from research without inflicting too heavy a sense of guilt. The final reason work-life balance is easy is that, to be honest, work is fun. I enjoy separating theoretical confounds and laying down theory for my research, and to some extent am willing to spend "play" time doing so. I've previously mentioned the spars with my dad about the <a href="http://justinnhli.blogspot.com/2009/03/smile.html">work-play distinction</a>; my relative lack of such distinction greatly reduces how stressed I feel. Plus, as <a href="http://infolab.northwestern.edu/people/larry-birnbaum/">Larry Birnbaum</a> said, one doesn't have to be brilliant all the time; if one is brilliant ten minutes a day, that's already pretty good.</p><p>In truth, the work-life balance issue isn't what makes me feel protected. I have yet to talk to a grad student who truly feels that their work is destroying their non-work life. I wrote the previous paragraph because I felt like <a href="http://www.urbandictionary.com/define.php?term=humblebrag">humblebragging</a>, but also because I promised I would talk about grad school. What makes me feel protected is that I am still in grad school at all. In the past year and a half, I know four people who left grad school without their (PhD) degrees.
I only know two of them with any depth, but both of their reasons for leaving are the same: they are not sure whether grad school is right for them. Keep in mind that this is after three years into the program, after what most people consider the horrible second year; they have both passed their prelims (and will therefore likely get a Master's), and are capable of conducting independent research. For them, it wasn't a matter of ability, but a matter of desire. Neither of them have found a topic they are willing to invest time in, and they are not sure it's worth the time to continue banging their heads against the wall to figure it out. At least one of them is not sure whether it's the research topic (or lack thereof) or the research process that is putting them off, and feels it's better to try something new. Their stories made me realize how easily I have taken on the role of a grad student and how I feel grad school is, if not the right choice, at least not the wrong choice for me.</p><p>The descriptions I have given so far of why I might feel I am protected have been about events in my life. One could ask what it was that led me to excel in school and to be so sure that I want to teach at the university level. The only answers I have for these two questions are that "I am good at recognizing patterns" (which is something someone else said of me) and that "I am introspective". These answers do not provide any insight, at least not at this level, and I am not prepared to explore them at this time. Another path of questioning, one I am more interested in, is why I perceive myself as being protected. Equivalently, one might ask why I don't feel as though I have tried very hard in my accomplishments. After all, generative protection is a subjective feeling, not an objective fact; the same event of remaining in grad school could be interpreted as the result of hard work and perseverance. 
I spend a lot of weekends coding or writing papers, and evenings are often spent sitting at coffee shops exploring the theoretical foundations of my work. Given that I do invest time in research (or climbing), why do I still feel shielded from the difficulties of life - that, to quote from <a href="http://justinnhli.blogspot.com/2010/07/atlas-shrugged.html">Atlas Shrugged</a>, I've "never suffered"?</p><p>I don't have a clear answer to that question, but I do have two related hypotheses. The first is that, once I have accomplished something difficult, that task seems much easier in hindsight. The closest approximation to this idea is what education psychologists call <i><a href="http://www.aps.org/publications/apsnews/200711/backpage.cfm">the curse of knowledge</a></i>. It describes the phenomenon where experts - people who are good at a particular task - find it difficult to explain how to perform that task to novices. The underlying reason for this curse is that expertise changes the neural pathways in the brain, making what previously required conscious thought into something that is automatic and instinctive. For me, the same mechanism may lead to a strong myopic bias, leading me to think that the task was never difficult, despite the memory of how I struggled to accomplish it in the first place. I had <a href="http://twitter.com/#!/justinnhli/status/114737110761738242">tweeted</a> this idea some months ago: "It turns out that - for someone with a pretty huge ego - I tend to trivialize my accomplishments."</p><p>The second theory is encapsulated by this quote from the climber Adam Ondra in the film <a href="http://www.bigupproductions.com/#/films/Progression/">Progression</a>, when he compared himself to Chris Sharma: "I think I'm basically weak." Ondra was comparing their ability to do strength-based climbing moves, in which there might be a legitimate difference in their ability.
Objectively, however, Ondra remains one of the strongest (if not <i>the</i> strongest) climbers in the world; his accomplishments alone are strong evidence that he is not a weak climber. I have come to call this the weakness mindset: the idea that there is nothing special about our ability despite being extremely competent. The perception of weakness, together with my continued survival despite that weakness, leads to the conclusion that I am protected. A second implication of this belief is that I must not have worked very hard or trained very long, as otherwise I would not be weak. This elegantly explains both the feeling of protection and the feeling that no effort was exerted.</p><p>My suspicion is that both the hindsight explanation and the weakness mindset are expressions of something deeper, a get-it-done-at-all-cost mentality that makes any effort worthwhile. That, however, is the subject of another post.</p><p>Postscript: There are at least two omissions in this essay, which were realized only in the process of writing (hence, an <a href="http://justinnhli.blogspot.com/2009/09/writing-about-writing.html"><i>essay</i></a>). First, I realized that there are two ways to define the lack of suffering: that of not having worked hard, and that of not having encountered difficult external circumstances. This ambiguity in the meaning of "suffer" is important, as it is the quote from Atlas Shrugged that first inspired this subject. Second, I acknowledge that there is no immediate connection between not exerting oneself and feeling protected. Their relationship will have to be cleared up at a later date.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4344360081238823955.post-91360126006083634322012-04-13T01:31:00.001-04:002013-06-09T20:16:13.236-04:00Technology and SocietyI have been meaning to write about this for a while, but have always procrastinated on it because I didn't want to figure out the logic behind my intuition.
But then today I was linked to an <a href="http://www.theatlantic.com/business/archive/2012/04/how-computers-are-creating-a-second-economy-without-workers/255618/">article</a> about how computers are making a lot of people unemployed, so I thought it's time to explain my idea:<br />
<br />
I think we are taking the first steps towards the guarantee of food, shelter, and clothing for everyone in the world.<br />
<br />
This seems crazy, but here are the trends I'm seeing:<br />
<br />
<ul>
<li>As the <a href="http://www.theatlantic.com/business/archive/2012/04/how-computers-are-creating-a-second-economy-without-workers/255618/">article</a> suggested, networked computers are very quickly replacing a number of jobs. Where before humans were necessary for communication between individuals and organizations (grocery stores, shopping malls, airlines), this can now all be done digitally. From the users' perspective, this is desirable, since it's generally faster and less hassle. But it does mean that most people in the communication pipeline will be out of a job.</li>
<li>A separate <a href="http://www.nytimes.com/2012/01/25/opinion/friedman-average-is-over.html?_r=1">article</a> suggests that the people out of a job will be forced to move either up or down the social hierarchy. There are three main classes of jobs that haven't been automated yet: jobs that require significant creative input, such as scientists, CEOs, etc.; jobs that require non-information-based interaction with humans, such as actors, babysitters, etc.; and jobs that exist in a chaotic environment, such as janitors, plumbers, etc. These categories obviously overlap; surgeons, for example, fit all three categories. Back to the point, only a subset of these roles can be filled given the average education of workers. These jobs, in general, are lower-paying than information-exchange-based jobs in the service industry.</li>
<li>But that's not all - the bigger problem is that jobs are disappearing. This <a href="http://www.whywork.org/rethinking/whywork/rawilson.html">article</a> says the same thing. It is not the case that technology is merely changing the distribution of jobs; it is actively shrinking their number. A system which allows people to buy things online replaces thousands of workers who would otherwise have to be in a brick-and-mortar store - while creating maybe a hundred jobs, mostly warehouse workers and a couple of programmers. I think, for the first time in history, the sustenance of human civilization no longer requires the input of the majority of its population.</li>
<li>Which raises the question: are jobs necessary? This <a href="http://www.cnn.com/2011/OPINION/09/07/rushkoff.jobs.obsolete/index.html">article</a> has a great quote: "We want food, shelter, clothing, and all the things that money buys us. But do we all really want <i>jobs</i>?" Taken to the extreme - which, given the growth of technology, is very likely to happen - society will be automated to the point that only a minority is needed to maintain the automation. By the law of supply and demand, these are the only people who will have jobs. Of course, the rest of the population will still have to eat, sleep, and fulfill other needs, but it's unclear how.</li>
</ul>
This is one of the reasons I've told anyone who would listen that we live in interesting times. Technology is changing quickly enough that governments can't keep up. Just look at the mess that surrounds every contact point between technology and law: <a href="http://en.wikipedia.org/wiki/Software_patent_debate">software patents</a>, <a href="http://en.wikipedia.org/wiki/Internet_censorship_in_the_People%27s_Republic_of_China">internet censorship</a>, <a href="http://en.wikipedia.org/wiki/Network_neutrality_in_the_United_States">net neutrality</a>, <a href="http://en.wikipedia.org/wiki/Privacy_concerns_with_social_networking_services">social network privacy</a>, <a href="http://en.wikipedia.org/wiki/Anonymous_Internet_banking">online anonymity</a>, and so on and so forth. There are also spillover effects from the internet's massive user base: <a href="http://en.wikipedia.org/wiki/Wikileaks">WikiLeaks</a>, <a href="http://en.wikipedia.org/wiki/List_of_cases_of_police_brutality#2000.E2.80.932009">dissemination of police brutality recordings</a>, <a href="http://en.wikipedia.org/wiki/Project_Chanology#Internet_activities">DDoS attacks</a>, and so on and so forth.<br />
<br />
While we're here, I want to bring up a rarely discussed facet of copyright: 3D printing. As material extruders become more popular, the fight over the design of physical objects is going to eclipse the fight over music, films, and other pure information. This <a href="http://arstechnica.com/tech-policy/news/2011/04/the-next-napster-copyright-questions-as-3d-printing-comes-of-age.ars?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+arstechnica%2Findex+%28Ars+Technica+-+Featured+Content%29">article</a> gives more details.<br />
<br />
What I find even more intriguing is that, while technology is disrupting the fabric of society by creating a jobless world, it may also be on the cutting edge of suggesting solutions to those problems. The main insight I had is that the digital world already has many of the properties of a jobless society. They include:<br />
<br />
<ul>
<li>Zero marginal cost. Software, once written, can be copied at essentially no cost. This is already somewhat true of what has been automated - we are being charged very little for buying things online.</li>
<li>Universal access. Anyone can get online (assuming the minimum equipment of a computer and a connection), and without censorship, can access everything the internet has to offer.</li>
<li>Voluntary contribution. It used to be the case that most users of the internet were passive recipients, receiving information but not contributing any. With the rise of social networks and online collaboration, many users are now also contributing - voluntarily.</li>
</ul>
It is not surprising that, given these properties, we are still struggling with some of the implications:<br />
<br />
<ul>
<li>Artificial scarcity. In other words, companies are trying to artificially increase the marginal cost. This is, ultimately, what <a href="http://en.wikipedia.org/wiki/Digital_rights_management">DRM</a> comes down to: making people pay for something which otherwise costs nothing. This can also be seen in the (however small) charges we incur for making online transactions.</li>
<li>Competing with free. How does a company remain in business when other people are giving the same product away? I am writing this post in Firefox on Linux, neither of which I paid for. Ditto for most of the tools I use for research; in fact, our entire research project can be <a href="http://sitemaker.umich.edu/soar/home">downloaded for free</a>. Chris Anderson's <a href="http://en.wikipedia.org/wiki/Free:_The_Future_of_a_Radical_Price">book</a> is all about this topic.</li>
<li>Attention-based economy. This one still surprises me, because I don't fully understand it. I know that Google makes some 40 billion USD each year, most of it from advertisements, yet I have personally never clicked on an ad, and I run <a href="https://adblockplus.org/en/">software</a> that blocks them. The upside is that I have free access to a lot of things - including this very blog - because other people are clicking on the ads.</li>
</ul>
I think there are lessons that can be translated from the digital world to the physical world, which may give rough predictions of what will happen in the next century. At the risk of looking like an idiot when that time comes, here are some analogous societal changes I think might (and I hope will) occur:<br />
<br />
<ul>
<li>Copyright holders will finally give it up as a lost cause. Beginning with software patents, it is becoming very hard to say whether something is novel. There will still be patents, but their domain will be strictly limited. In the meantime, people will share the majority of the world's creative output free of charge.</li>
<li>Work will be for the most part voluntary. It will be paid when the final product is conducive to payment; otherwise, attribution credit is all the creator will get. Much of the output of "work" will fall into the public domain by default, which benefits the population as a whole.</li>
<li>Companies will be the main providers of social welfare, including food and shelter. They will do this because the economy will no longer be focused on physical currency, but on something less separable from the individual, such as attention or time. Basically, a company will stand to gain more from you being well fed than from you starving. An interesting take on this is that people will start buying <a href="http://www.aaronsw.com/weblog/experiences">experiences</a>.</li>
</ul>
I have been keeping an eye on how technology - computers and the internet in particular - has changed society. It's astounding to think that even my childhood twenty years ago was very different from the childhoods of people growing up now. I cannot begin to imagine the world that children another twenty years from now will be accustomed to. I wonder to what degree the above problems will be solved, and what new problems will arise as technology takes more unpredictable turns.<br />
<br />
I guess there is one last question I haven't asked. Let's say all this becomes true. Is this a world we want to live in?<br />
<br />
PS. A <a href="http://www.fullmoon.nu/articles/art.php?id=tal">more speculative, transhumanist take</a> on technological evolution.