Monday, June 28, 2010

Lessons From FIFA's Ineptitude

Today, two calls in the World Cup (a missed goal here, a missed offside there) were clear mistakes, and blame has to be assigned somewhere. There are a number of ways to respond. The comforting but ultimately misguided tack is to blame the individual refs. Yet it is statistically unlikely that the majority of refs would have done much better than the refs on the field today.

Or, one might blame the goals on FIFA's failure to allow instant replay. Replay definitely would have helped. But that tack also strikes me as not general enough. Instead, I see the problem as yet another example of the status quo bias against new technology. To me, inertia and rationalization are the real culprits.

The only reason FIFA's decision making seems so dumb is that the games are public enough to allow for constant scrutiny. But imagine if millions of fans were watching as yet another patient's info was copied down by hand instead of being uploaded to a time-saving server. They too would be vitriolic. Or, imagine if announcers called the action as somebody wasted minutes every day visiting websites individually instead of setting up an RSS reader. They too would be unimpressed.

So, sure, condemn FIFA for its failure to adapt to useful new technology. But be careful not to frame the blame as a singular case of stupidity. Indeed, we can all fall victim to the status quo bias against adopting new technologies.

Friday, June 25, 2010

Two Forms Of Self-Deception

Self-deception can be divided into at least two types:

1) Deluding yourself into thinking that you are better than the median in some particular trait, like driving ability or teaching ability.

2) Deluding yourself into thinking that the traits you just happen to be good at are considered objectively more virtuous. In this form of self-delusion, you do not actually think of yourself as better or worse at any particular trait, but rather rationalize traits you are good at as especially virtuous, and traits you aren't good at as especially useless or even evil. As Michael Vassar explains, people "delude themselves about what traits a human without specific information about his or her self sees as good, choosing to see many of their own traits as good rather than as bad and failing to notice that people who lack those traits consistently see things otherwise."

If you tested them, I'd bet that people would be more likely to self-deceive using type #1 self-deception for traits about which they cannot plausibly self-deceive using type #2. For example, almost everyone in our society agrees that getting along well with others is a virtuous trait, and it would be relatively difficult to come up with a plausible argument otherwise. So, it makes sense that in one study (see pg 11 here), when asked how well they get along with others, all students thought they were above average, 60% thought they were in the top 10%, and 25% thought they were in the top 1%.

In the modern world, type #2 self-deception has become especially common due to the desire to find one's own niche. Given arbitrary weights for traits and their interactions, anyone can delude themselves into believing that they are the most virtuous person in the world (a sketch of this follows the footnote below).* This also helps explain some of the benefits of being part of an "in" group: your group will validate your type #2 self-deception about what makes a virtuous person.

* This is provable via Arrow's impossibility theorem.
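
To make that claim concrete, here is a minimal sketch in Python. The people, traits, and scores are all invented for illustration; the point is just that as long as each person leads on at least one trait, each can pick a weighting of the traits under which they come out as the most virtuous.

```python
# Minimal sketch: with self-chosen trait weights, everyone can rank first.
# The people, traits, and scores below are invented for illustration.
people = {
    "Alice": {"honesty": 9, "wit": 4, "diligence": 5},
    "Bob":   {"honesty": 5, "wit": 9, "diligence": 4},
    "Carol": {"honesty": 4, "wit": 5, "diligence": 9},
}
traits = ["honesty", "wit", "diligence"]

def virtue(scores, weights):
    """Overall 'virtue' under one particular choice of value weights."""
    return sum(weights[t] * scores[t] for t in traits)

for name, scores in people.items():
    # The self-serving move: put all the weight on your strongest trait.
    best_trait = max(traits, key=lambda t: scores[t])
    weights = {t: 1.0 if t == best_trait else 0.0 for t in traits}
    ranking = sorted(people, key=lambda p: virtue(people[p], weights), reverse=True)
    print(f"{name} values '{best_trait}' above all -> ranking: {ranking}")
```

Every pass through the loop crowns the weight-chooser, so the "objective" ranking is purely an artifact of which traits get counted as virtues.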

Wednesday, June 23, 2010

Is Toy Story 3 The Best Movie Of All Time?

Yesterday it was #8 and currently it is #7 on imdb's top 250, with a weighted average of 8.8. But on its own page it has an unweighted rating of 9.3, meaning that it is currently getting killed on the top 250 by its low total number of votes. So, if it maintains that score once it has around 100k votes, its weighted average will asymptotically approach its unweighted average and it will take the reins from Shawshank as The Greatest Of All Time.
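
For the curious, imdb publishes the Bayesian formula it uses for the top 250: WR = (v/(v+m))R + (m/(v+m))C, where R is the raw mean, v the vote count, C the site-wide mean, and m a minimum-votes threshold. The constants below are my assumptions (imdb has used different values of m over the years), but they reproduce the asymptotic behavior just described:

```python
# imdb's published top 250 formula: Bayesian shrinkage of a film's raw
# mean toward the site-wide mean. The constants m and C are assumptions
# on my part (imdb has changed them over the years), so treat the
# output as illustrative only.
M = 3000    # minimum votes to be listed (assumed)
C = 6.9     # mean vote across the whole report (assumed)

def weighted_rating(raw_mean, votes):
    return (votes / (votes + M)) * raw_mean + (M / (votes + M)) * C

# A raw mean of 9.3 at increasing vote counts:
for v in [10_000, 50_000, 100_000, 500_000]:
    print(f"{v:>7,} votes -> weighted {weighted_rating(9.3, v):.2f}")
```

With few votes the score is dragged toward the site-wide mean; as v grows, the pull of those m phantom votes becomes negligible and the weighted rating creeps toward the raw 9.3.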

However, such an upset is very unlikely. For example, look at the current demographic breakdown:

[Figure: imdb demographic breakdown of Toy Story 3 voters]

You'll note that the movie is doing really well with voters under 29. But right now the under-29 bracket represents 78.5% of total voters, and that number is going to drop. For comparison, in the demographics of Toy Story 1 and Toy Story 2, the under-29 bracket accounts for only 51.9% and 49.0% of the overall totals, respectively. Also, US voters are likely to become the minority, which should further push the score down.
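
The bracket means in this back-of-the-envelope sketch are invented (the real ones are in the demographic table), but the vote shares are the ones just cited, and the arithmetic shows why the demographic shift alone should push the score down:

```python
# Hypothetical bracket means -- only the vote shares come from the post.
under_29_mean, over_29_mean = 9.4, 8.6   # assumed for illustration

def blended(share_under_29):
    return share_under_29 * under_29_mean + (1 - share_under_29) * over_29_mean

print(f"78.5% under-29: {blended(0.785):.2f}")   # today's skewed electorate
print(f"50.0% under-29: {blended(0.500):.2f}")   # a Toy Story 1/2-like mix
```

Under these made-up numbers, rebalancing the electorate alone shaves about a quarter of a point off the average, even if nobody changes their vote.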

But the main problem is that right now, Pixar fanboys are much more likely to have seen the movie, and they are also more likely to enjoy it and rate it highly. Once more average moviegoers join the party, they will curb some of the hype.

To give some perspective, The Dark Knight had a 9.6 on its own page and a 9.5 weighted average when it had 69,000 votes. But even The Dark Knight fell out of the top spot less than a month after it came out, and it is now #11. So, expect a rise for the next few weeks, and Toy Story 3 may even take the top spot for a very brief time, but then it should fall fairly fast. I'd bet it sits below #15 by this day next year. Toy Story 3 is cute and endearing, but it is certainly not the best movie of all time.

Thursday, June 17, 2010

Watching The Top 250

Late last night I finished watching the last of imdb's top 250 movies. As of today, I have watched every damned movie on the list. It's been a wild ride. And since Toy Story 3 comes out this weekend and will surely crack the list, I need to get a post up documenting the occasion while it's still technically true.

So... thanks to everyone who works at imdb and spends hours configuring their database. In particular, thanks to the nerds who suggested that they run their ratings data through a bayesian filter, whoever you are. Thanks to my mom and dad for subsidizing my video rentals and for not complaining when I was monomaniacal about the netflix queue two summers ago. Thanks to the workers at the reserve desk of the Vassar library, who were kind enough to waive a few late fees for a starving student. Thanks to everyone who ever sat through a weird old movie with me over the past four years. Gratitude. Finally, a shout out to all the playa haters, who never thought I'd make it here. What now, haters?

Among the subset of movies that I watched specifically due to my quasi-compulsive need to finish the list, my faves were On the Waterfront, The Killing, The Seventh Seal, Dial M for Murder, Kind Hearts and Coronets, M, Manhattan, Mulholland Drive, The Ox-Bow Incident, The Thing, Touch of Evil, and Yojimbo. These movies tend to be funny, short, and unwilling to spell out too much for the audience. The movies that I tended to like the least were ones that seemed to be popular for political reasons, like Crash or Gran Torino.

All in all, the top 250 is undoubtedly violence-obsessed. Every one of the current top 25 movies has violence as a major crux of the plot. And of the top 100, only 9 movies do not fit this criterion: It's A Wonderful Life, Citizen Kane, Forrest Gump (arguable), Amelie, Wall E, Spirited Away, Elephant Man, All About Eve, and The Apartment. In that sense, these movies are nothing like my daily life.

My quantitative vote history on imdb is here. There is one useful site to track your own progress on the top 250, which, should you choose that path, you can find here. I warn you, it may not always be easy. There will be times you'll want to give up, times when you'll forget why you devoted a precious slice of your time to such a desolate and inhuman list in the first place. All I can say is, get busy living, or get busy dying.

Tuesday, June 15, 2010

Niche Finding

Holding quality constant, I tend to enjoy movies more the lower my expectations are. This seems like a fairly universal tendency. For example, the main predictor of a student's enjoyment of a class is how far their actual grade positively deviates from their expectations (here).

My explanation for this tendency is that it is a mechanism to spur niche finding. In this large world, it is hard to stake out our own identity. Thus, we are constantly on the lookout for things that we enjoy more than others do, which portray our unique values and thus define us.

As evidence for this, consider how much people love to note that some particular work of art is underrated. The next time you hear someone say something is underrated, probe a bit.

If you disagree about how good that work of art objectively is, they may give some playful rebuttals but won't really mind. However, if you disagree with their assumption that the work is rated low by the majority, and thus imply that they are not really unique for liking it, they will get rather annoyed. So, it is not the actual quality of the underrated thing that people mostly care about, but rather their own uniqueness in liking it.

Monday, June 14, 2010

Trade Offs Between Status And Interestingness

In a response to Kaj Sotala's post about how to have interesting conversations, HughRistik mentions two problems with asking questions:
1) If you are beginning a conversation with someone who you don't know well, they may not give you very extensive or useful answers to your questions. 2) You can only ask so many questions in a row before you are interviewing them. Worse, it looks low status.
Asking questions lowers your relative status. I certainly buy this: questioning is a sign that you care about the other person's opinion, which is something the higher status person will tend not to signal. But asking questions is also one of the best ways to learn new info, satiate curiosity, and generally expose oneself to interestingness.

A higher status person will tend to be more aggressive, smile less, and act more selfishly. But being aggressive raises everyone's blood pressure, smiling releases endorphins, and being selfish makes it harder to create long lasting friendships of any value.

To me, these qualities that raise status don't seem to be worth it.

On a more macro level, Elena Kagan is now a very high status person and is likely to become a Supreme Court justice. But she sacrificed interestingness along the way. Was it worth it? I'd say no.

As another example, blogging is fairly low status. But lots of people choose to do it anyway, in large part because they find it interesting.

On both a micro and macro level, it seems that there are trade-offs between status and interestingness. So, where do you stand on this widespread trade-off?

Four Distinct Claims In The Web Attention Debate

Claim #1: The internet is changing our brains (e.g., see here). This is a tautology, as everything we experience changes our brains. Even no activity should change the strength of our synaptic connections! Now, the typical connotation is that these brain changes are for the worse, as the assumption is that deviations from normal biology are bad. But from what I can tell, we just don't know enough about systems neuroscience to correctly evaluate the effects of web-induced brain changes. So let's stick to psychology and behavior, which brings us to...

Claim #2: Overall, the internet decreases our attention span and makes us less likely to engage in contemplative deep reading and thinking (e.g., see here). Carr and Lehrer primarily extrapolate from controlled psych studies to address this claim, but those studies do not longitudinally track the same subjects. Instead, Cowen seems more on track in focusing on the "market data." Unfortunately, longitudinal book reading stats are very difficult to come by, although see here for one aggregation of survey stats. Book revenues did not change much from 2002 to 2009 (see here), but that is confounded by population growth, inflation, and the shrewdness of Amazon. So, we need better data. Another strategy is to look at previous tech advances and see how people's contemplative deep thinking changed. I would say: not much. For example, introspection habits don't differ drastically between the 150 CE Marcus Aurelius and the internet era's own Katja Grace.

Claim #3: Developmentally, if one is not forced to focus for long periods of time often during adolescence, one will be less able to focus as an adult (e.g., see here). It is certainly true that teens will on average be more impulsive than adults, which may be because the amygdala and nucleus accumbens develop more quickly than the prefrontal cortex. So, it makes sense that internet multitasking will be particularly tempting to teens. Indeed, teens may lose time that could have been spent studying. But school is largely zero-sum anyway. Plus, once adolescents become adults, their focus should improve. This is a very controversial claim though, as people differ both in their beliefs about focus and in their values with respect to paternalism.

Claim #4: The gains we reap from more immediate access to info and more efficient reading are not worth the costs of habitual skimming and a reduced willingness to commit to valuable but demanding texts (e.g., see here). I mostly agree with Steven Pinker's assessment of this claim as bullshit, because the internet is the best, and maybe the only, way to keep up with the exponentially accelerating increase in knowledge. But he also says that deep reflection, thorough research, and rigorous reasoning "must [!] be acquired in universities," to which I call bullshit. The existence of autodidacts and other individuals self-educated primarily on the internet proves the non-necessity of universities.

Full Disclosure: By the end of writing this post, I had 24 tabs open in my Firefox browser. Namaste, cabrones.

Tuesday, June 8, 2010

The Neuroplasticity Of Doing Absolutely Nothing

Piggybacking on Vaughan Bell's account of how the word "neuroplasticity" is abused in the public sphere, here's one specific example to show how fuzzy the picture really is. This particular study by Minerbi et al., available via open access here, measures the structural changes to synapses over a fairly long time frame (~5 days) in the presence and absence of electrical input.

The researchers cultured rat neurons in dishes and tethered the genetic expression of a common postsynaptic protein (PSD-95) to the expression of green fluorescent protein, in order to measure changes in synapse size over time.

At the population level, increases in electrical network activity (i.e., more action potentials) correlate with increases in postsynaptic size, as expected. And at the population level, blocking action potentials with tetrodotoxin stops the increase in postsynaptic size.

But when the researchers looked at individual synapses, this simple relationship breaks down. The fluorescence of the postsynaptic protein, a measure of the size and thus the strength of the synapse, varies somewhat randomly over time. This is true even when the activity blocker tetrodotoxin is applied to the neurons.
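
To see what "varies somewhat randomly" means in practice, here is a sketch (my own, not the authors' analysis code) contrasting the fluorescence trace of a hypothetical stable synapse with a randomly drifting one, using the coefficient of variation as the yardstick:

```python
import random

# Sketch (not the authors' analysis): simulate fluorescence traces for a
# single synaptic punctum sampled repeatedly over ~5 days, under two
# hypotheses about synapse stability.
def trace_stable(n=50, level=100.0, noise=1.0):
    """Synapse size constant; only small measurement noise."""
    return [level + random.gauss(0, noise) for _ in range(n)]

def trace_drifting(n=50, level=100.0, step=5.0):
    """Synapse size performs a random walk, as observed in the study."""
    xs, x = [], level
    for _ in range(n):
        x = max(x + random.gauss(0, step), 1.0)  # fluorescence stays positive
        xs.append(x)
    return xs

def cv(xs):
    """Coefficient of variation: spread relative to the mean."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 / m

print(f"stable synapse   CV: {cv(trace_stable()):.3f}")    # ~0.01
print(f"drifting synapse CV: {cv(trace_drifting()):.3f}")  # ~0.1-0.3, run-dependent
```

A constant synapse would sit near the first number; the study's synapses look like the second, even under tetrodotoxin.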

If the structure of synapses were constant, there would be little change in the fluorescence of the postsynaptic protein over time. In the following 8-second video, each circle represents one synaptic punctum, the y-axis shows fluorescence, and as you can see, the fluorescence is definitely not constant:

[Video: fluorescence of individual synaptic puncta tracked over time]

Bottom Line: When marketers and pundits claim that "[something] changes the brain!", what they are saying is technically true. But the connotations are misleading, because any sort of stimulation changes your brain in some way. Moreover, even with no electrical stimulation at all, individual synapses are constantly changing their size and configuration, exhibiting "neuroplasticity."

Monday, June 7, 2010

The Mathematical Improbability Of Being The Best

The unheralded gem Eric Falkenstein muses that,
The key is doing the best with what you can, the self-awareness and motivation to develop one's strengths so that your hard work generates a maximum payoff going forward. As Muhammad Ali once said, "You can be the best garbage man or you can be the best model--it doesn't matter as long as you're the best." 'The best' is mathematically improbable, 'really good' generates the same result. If you are really good at your job your day is filled with sincere gratitude by colleagues and customers....
There are two reasons that being "really good" generates the same outputs as being "the best." First, in professions that scale really well, randomness is going to play by far the largest role. This is the implication of Dean Simonton's equal odds rule (here): the average publication of any particular scientist does not have a statistically significant chance of garnering more citations (a proxy for impact) than any other scientist's average publication. Furthermore, in Csikszentmihalyi's interviews of highly creative individuals (here), the most commonly mentioned explanation for success was luck. Second, in professions that don't scale well, it is really hard to determine who is the best anyway, because context becomes so important.
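
A toy simulation (mine, not Simonton's data) makes the equal odds rule vivid: give every paper the same chance of being a hit regardless of author, and the prolific scientist collects more hits while her hit rate, and thus the expected impact of her average paper, stays the same as everyone else's:

```python
import random

# Toy model of the equal odds rule (not Simonton's data): every paper,
# by every scientist, has the same chance of becoming a "hit".
random.seed(1)
HIT_PROB = 0.05

prolific = [random.random() < HIT_PROB for _ in range(400)]  # 400 papers
modest   = [random.random() < HIT_PROB for _ in range(40)]   # 40 papers

for name, papers in [("prolific", prolific), ("modest", modest)]:
    hits = sum(papers)
    print(f"{name:>8}: {hits:>2} hits / {len(papers)} papers "
          f"(rate {hits / len(papers):.1%})")
```

Run it a few times without the seed: the rates hover around the same 5%, so ending up "the best" in a profession that scales is largely a matter of holding more lottery tickets.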

So, it's just as useful, and probably less stressful, to try to be really good rather than to try to be the very best.