Wednesday, January 27, 2010

Cognitive Dissonance And Time Perception In Harry Potter

In Sep '08 I wrote that readers rate longer books as better, often ignoring opportunity cost, because they "spend so much time and energy reading the book that they come to believe it must have been good." Today the BPS Digest reports that "participants who'd experienced the sense of the time flying rated the task as far more enjoyable than did the participants who'd experienced the sense of time dragging." These two findings support my pet theory about the success of the Harry Potter books. Here goes:

1) Readers are initially intimidated by the sheer page count. How could they ever get through that? Yet most start anyway, perhaps looking forward to the challenge.

2) What most readers don't consciously recognize is that there's so much dialogue and so little text per page that the books aren't actually all that long. So when they reach the end, both cognitive dissonance ("wow, I read this whole long book --> I must have liked it") and time perception ("wow, I don't remember this long book taking such a long time to read --> time flew by --> I must have liked it") boost their opinion of the book. Add in some decently funny jokes, ensure the approval of the liberal intelligentsia, stir in a dollop of teen angst, and voilà, you can explain the success of Harry Potter.

3) Note that the books didn't really take off until the 3rd and especially the 4th were published, when they started to look abnormally long:

Book 1 = 320 pages
Book 2 = 352 pages
Book 3 = 448 pages
Book 4 = 752 pages
Book 5 = 870 pages
Book 6 = 652 pages
Book 7 = 784 pages

4) My recommendation to authors is to write plenty of dialogue and pressure the publisher to include lots of numbered fluff pages at the front. Also check out my advice on how to make a paper look longer than it really is, which may turn out to be more profound than I had anticipated.

PS Wow, you just read a fairly long post with three links, four numbered points, and a ton of analysis --> You must have liked it --> You must have agreed with the theory.

Tuesday, January 26, 2010

Why College Goes By Fast

Among high school and college seniors it's common to claim that "the past four years have gone by so fast!" and to make similar declarations of outright shock at the objectively standardized passage of time. Let's assume that they're telling the truth about the subjective compression of time, as opposed to merely taking a roundabout route to saying that they care about their friends and will miss them. Why might the compression occur? There's plenty of research on this, organized around three competing theories:

Lack of Attention: We seem to recall events that we pay more attention to as being longer. For example, adding white static noise in an auditory detection task causes people to pay more attention to those intervals, and they are judged as lasting longer (see here). This one is a little controversial; some studies have found opposite effects. However, it is plausible that if students don't pay attention to what's going on, time may be perceived as passing more quickly.

Predictability: Novelty causes perceived duration to increase. For example, the first time people see a moving dot displayed for 480 milliseconds, they judge it to be visible for ~120 milliseconds longer than it actually was, an increase of ~25% (see here). So if students settle into a routine, they might perceive time as passing faster because they lose this novelty effect.

Causality: The feeling of control makes events seem longer. For example, when people press a button to start a 900-millisecond tone, they perceive the interval as lasting ~30 milliseconds longer than when the tone starts without their control (see here). So if students make things happen at school of their own accord, they might remember those events as lasting longer.

On this basis, if you want to extend your perceived tenure in college, you should attend more interesting classes, expose yourself to more randomness, and throw more of your own parties.

In fact, compared to "the workplace" (scare quotes intended), college has lots of time-perception-extending advantages: many new people to meet and new things to do. So if you think that these four years went by fast, get ready for the next four to go by even faster.

Sunday, January 24, 2010

Arguing to Authority

John Cochrane dismisses Richard Posner in an interview posted Jan 13:
I don’t want to comment on Posner. He’s a nice guy. But I spend my life trying to understand this stuff. My last two papers, which took me three years, were on determinacy conditions in New-Keynesian models. It took me a lot of time and a lot of math. If Posner can keep with that and with Law and Economics, good for him.
It reminds me of Congressman Pete Stark telling Jan Helfeld that his opinions weren't relevant because he hadn't taken econ classes at a prestigious enough school. Eric Falkenstein explains the crux of the debate:
That we don't have enough data to say what optimal monetary policy is only an abstruse concept if you are being disingenuous. Every technical debate in economics comes down to a pretty common sensical debate, and if you can't articulate it in such a way, you are either an idiot-savant who does not understand what the models really mean, or you are trying to brow-beat outsiders via intellectual intimidation.
Folks arguing to authority always come off very poorly, and I wonder why. We often judge people by their relevant expertise, but for an expert to invoke it explicitly is usually considered poor form.

One explanation is that arguing to authority doesn't add anything to the debate. It's more efficient to let the viewers judge your expertise on their own time and to spend their attention making your actual case. This is especially true in the internet era, when everyone's personal info is only a Google search away.

Tuesday, January 19, 2010

The Inevitability of Modernity

Razib Khan offers up a subtle and fascinating look at the equilibrium shifts in human history. He concludes that agriculture was basically inevitable and that capitalism probably was too. The inevitability of modernity has lots of anthropic implications. If getting from "intelligent, social species traveling and gathering resources in small groups" to "social species gathering resources in a fixed location" to "massively specialized and coordinated resource production across nation-states" was inevitable, then either "creating life" or "evolving intelligence" or "nation-states surviving and advancing to galactic space colonization" must be really unlikely, because, last I checked, no alien spaceships have been consorting in the troposphere. Let's hope that surviving and developing space tech is not the relevant bottleneck.

Thursday, January 14, 2010

It Doesn't Have to Be This Way

Bryan Caplan presents four alternative worlds in which raising the cost of driving might not be the most effective way to curb congestion:
[I]f drivers were unselfish in the right way, all of the following would be equally economically plausible solutions: 1. Ask everyone to drive less "because they're inconveniencing others." 2. Tell people they're contributing to global warming. 3. Announce that if traffic doesn't fall by 20%, we'll abolish foreign aid to Senegal. 4. Denounce materialism so people quit their jobs and stop commuting.
Purely a priori these are plausible, but based on past human behavior we should predict that they will be far less effective than appealing to drivers' economic self-interest.

Bryan's false hypothetical is a great way to get his point across, and it now strikes me as the most potent counter-argument to the distillation of ideas, one which Tyler Cowen doesn't mention in his post against distillation. As opposed to the primary literature, summaries like Wikipedia usually don't articulate the ways the world could be--they just state our current best guess for the way the world is. For example, the Wiki article on the citric acid cycle explains the steps very well but doesn't explicate some of the other possible ways the cycle could occur.

Hindsight bias often causes people to be unsurprised when you explain the consensus best guess for how the world works. This lack of surprise will most likely lead to a deficit in deep understanding of the idea. Usually, given our time constraints, this deep understanding isn't necessary and trust in the consensus makers is enough. But on topics where deep understanding is critical, try to explain not only why that one fact is our best guess, but also why other possible alternatives are less likely.

Wednesday, January 13, 2010

Friend or Algorithm?

Mark Sisson poses a question:
Quick. How’d you hear about your favorite book or album of all time? Did you let an online algorithm determine what genre/artist/author/etc you’d prefer? Or did a trusted friend, colleague, or family member make a recommendation? I dunno about you, but I’ll take personal recommendations from people I trust over what some impersonal line of code thinks I should like, given the choice between the two.
This is a pervasive yet ultimately false dichotomy. Rating systems aren't based on what computer algorithms reverse engineer from the raw electromagnetic waves.* They're based either on the average ratings of other average people (like IMDb) or on the preferences of specific people who share some of your characteristics (like Netflix). That's the reality. Now, do you distrust these because you consider yourself too special to agree with the plebeian majority? You're free to be elitist, but at least admit it.

What's the other main reason to prefer a "trusted" friend over "impersonal line[s] of code"? To signal loyalty to your group or clique. People signal loyalty all the time** so you shouldn't necessarily feel bad about this, but again you might as well admit the truth to yourself and others before you perpetuate the information cascade.

Even though it is a false dichotomy, if I had to choose I'd still take the algorithm all day. Aggregating more opinions leads to less noise in opinion markets! What about you?

####

* Although that would be outrageously baller.
** I don't want to make it seem like I consider myself above this. In fact this very disclaimer is an example of signaling my loyalty to fellow lovers of transparency, as is this one, this one, etc.

Tuesday, January 12, 2010

Mark McGwire and the Self-Serving Bias

He's admitted to using steroids but denies that they helped his performance:
McGwire insisted the performance-enhancing drugs he used did not actually enhance his performance. The dosages were too low and his physical ability too divine, turns out, for the drugs to have an impact on his body, particularly as it related to his hitting. “I was given the gift,” he told Costas, “to hit home runs.” He said he would have hit every single one of them had he never injected a drop of anything. “Absolutely,” he said. “I truly believe so.”
Some sportscasters are calling him out for lying in order to boost his chances of getting into the HOF, or something. I do not believe that he is consciously lying, in part because I have read about how powerful the self-serving bias can be. For example, children given methylphenidate attribute their success on impulsivity tests to effort and ability far more than to medication, even when a double-blind design ensures that the medication in fact leads to significantly fewer errors than placebo.

This is clearly adaptive--attributing success to internal factors builds your confidence and helps you perform better the next time. In the case of McGwire, it allowed him to use steroids off and on while minimizing anxiety that he'd perform worse without them.

As with Kobe, I wonder: Is McGwire's profligate self-serving bias merely an aberration? Or is it in fact one of the main reasons that he was able to have so much success in the first place?

Monday, January 11, 2010

The Internet Echo Chamber?

Some of the comments on Charlie Hoehn's recent post focused on whether the internet is merely an echo chamber or whether intrinsic quality plays a larger role. As always in these "nature / nurture proxy" debates, the winning answer is "somewhere in the middle," and the more useful question is how much each variable explains.

To the extent that folks' behavior in listening to and downloading music is indicative of their propensity to e-mail, re-blog, or re-tweet articles*, Matthew Salganik and Duncan Watts's two studies of web-based music listens and downloads, here and here, may help resolve this debate.

The researchers created a music downloading web site and uploaded 48 songs by unknown bands. They then recruited somewhat tech-savvy individuals to listen to, rate, and possibly download the songs. Folks downloaded on average 1 out of 7 songs they listened to, indicating some modicum of selectivity.

In one study, the researchers assigned all incoming visitors to either the "social influence" condition, in which they could see the rating and downloading behavior of others, or the "independent" condition, in which they could not. Within the "social influence" condition, visitors were also assigned to one of a few identical "worlds," whose rating and downloading trends could diverge purely by chance.

When the songs were presented to visitors in a single column sorted by popularity, social influence was at its highest. Participants listened to the most downloaded song ~45% of the time and the second and third most downloaded songs ~30% of the time, while they listened to songs with an average number of downloads only ~5% of the time.

Salganik and Watts then used the download trends of individuals in the independent condition to predict download trends of individuals in the social condition. In experiment 2, knowledge of the independent data decreased naive prediction errors for the social influence condition by 16%. In experiment 3, with older and more international demographics, it decreased those errors by 38%. Averaging these gives 27% as a rough proxy for how useful independent-appeal data is for predicting which songs will be successful under social influence. Not great, but not that bad!

In the next study, the researchers used a similar setup, but in two of their social influence conditions they intervened by inverting the download rankings after 752 visitors (~27% of the overall number) had visited the site. This immediately increases the number of downloads for the previously lower-rated songs, but eventually some of the top-rated ones begin to climb back.
This study also included a non-inverted social influence condition for comparison and an independent condition to measure intrinsic appeal. The correlation (r) between download ranks in the non-inverted social influence condition and the independent condition is a strikingly high 0.82, corresponding to an explained variance of 67%. The inverted social influence conditions have much weaker correlations of 0.40 and 0.45 (explained variances of 16% and 20%), but these show that even when social influence is directly manipulated against what folks independently prefer, there is still a positive relationship between intrinsic appeal and download trends.
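As a quick sanity check, those explained-variance percentages are just the squared correlation coefficients (the condition labels here are my own shorthand, not the paper's):

```python
# Explained variance (R^2) is the square of the Pearson correlation r.
conditions = {
    "non-inverted social influence": 0.82,
    "inverted social influence (run 1)": 0.40,
    "inverted social influence (run 2)": 0.45,
}
for label, r in conditions.items():
    print(f"{label}: r = {r:.2f}, explained variance = {r**2:.0%}")
# prints 67%, 16%, and 20% respectively
```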

Salganik and Watts also mention the rating incompleteness theorem (see here): "On the one hand, by revealing the existing popularity of songs to individuals, the market provides them with real, and often useful, information; but on the other hand, if they actually use this information, the market inevitably aggregates less useful information." So it's hard to prevent people from being biased by others' preferences, because looking at them is often a rational choice designed to save precious time. In other words, it's hard to nudge away from a Nash equilibrium.

* This is not necessarily an apt comparison. Music downloading is much more private and personal, whereas what you choose to blog or tweet about is much more visible and thus will subject you to more public judging. On the other hand, reading and discussing articles on the internet is much nerdier than music listening and thus participants may have less emotional attachment, leading to more quality-driven preferences. I don't know of any more applicable experiments but please get at me if you do.

Bottom Line: To say that "the internet is an echo chamber, full stop" is foolhardy. Based on these music download experiments, it seems that around 25 to 70% of folks' decisions are based on the intrinsic appeal of the material. There is also reason to expect that this percentage would be higher if the download data were less public and estimates of popularity were noisier, as they are in real life.

Sunday, January 10, 2010

Status Nihilism

One premise of Lennon's song Imagine is that without religion the world would have fewer between group conflicts. Over the long run, I doubt it. Humans will still find ways to form cliques and credibly signal allegiance to their cliques by denigrating other cliques in irreversible ways.

Look at the evolution of educated society's morality standards. Moralizing about others' sexual tendencies used to be the high-status way to signal superiority. But hitherto-oppressed groups with non-heteronormative tendencies convinced educated people that this was an unfair practice. Yet educated people still find ways to signal their superiority! For example, they now often do so via their preferences for certain types of food.

So, here is a short list of issues on which I believe an elimination of diversity would not reduce the amount of between-group vitriol, following perhaps a short adjustment period: geographical origin, race, ethnicity, political party, sports team affiliation, attractiveness, and religion.

If you are not a nihilist in general and wish to remain consistent, the salient question becomes: towards maximizing which qualities should we nudge the inevitable human status competitions?

Saturday, January 9, 2010

Deterrent Effects of the Death Penalty in Texas?

Land et al. recently evaluated the effects of the death penalty on a monthly basis in Texas, correlating month-to-month fluctuations in executions with month-to-month fluctuations in homicides between 1994 and 2005. Their two preferred models (i.e., the ones that minimize cross-correlation between the transfer function and the noise) have deterrent effects of 2.5 homicides and 0.5 homicides per execution.

Looking at the 2.5-homicide-reduction model's transfer function (p. 1031 if you have access), I don't buy the displacement effect, which postulates an increase in homicides two months after an execution. If we are going to do analysis at the margin, we have to consider the effects of an execution on the likelihood that one particular potential criminal will murder.

Psychologically, the model postulates that a would-be murderer would (consciously or unconsciously) be less likely to commit murder for one month after hearing of a recent execution, but then forget about it in the second month and actually become more likely to murder. The authors call this "displacement." In the third month the probability returns to around baseline, and in the fourth month there is once again deterrence, although not as strong as in the first month. Sorry, but this doesn't make any sense.

Nevertheless, if you favor the model of state governments as vehicles for policy experiments, then perhaps you have to laud Texas's iconoclastic tendencies, morbid as they may be.

Wednesday, January 6, 2010

Slaying the Black Horseman

David Rieff writes in TNR about Cormac Ó Gráda's new book on famine and draws some surprising insights. He argues that famine is now basically preventable, and that the UN's World Food Program, which provides food to 90 million people per year, is making useful strides to prevent famines before they occur. Here is one particularly interesting portion describing the post-Malthusian discussion of hunger:
Sen emphasized that a famine caused by a failure, or even just a serious shortfall, in the harvest would rapidly engender a devaluation of all non-food possessions--what famine specialists call “entitlements,” so that the poorest people basically lose the purchasing power they need to ensure their own survival. Looking at the data without Malthusian prejudice, Sen demonstrated that it was simply not the case that food shortfalls were necessarily greater in periods of famine than they were in times when there was no threat of famine--and that, conversely, there were many periods, not only in Bengal but globally, in which the availability of food had actually declined and no famine had ensued. To state it simply, if a bit reductively: Sen’s work put an end, once and for all, to the false belief, derived from Malthus, that famines are primarily the result of food shortages and overpopulation.
Excesses or shortages of rainfall and volcanic eruptions still have an impact on famines, but political systems play an even larger role. Elsewhere, here is Robin Hanson on whether we would be so nice following an apocalyptic scenario:
We like to think that moral progress has made us nice people. We’ve heard that our distant ancestors were mean and cruel and ruthless, and we can’t imagine that we would be such people – but we’re nice mainly because we’re rich and comfortable. And when we’re no longer rich and comfortable, we won’t be as nice.
So while we are still nice, let us take a moment to laud famine prevention efforts and beneficial political organization on utilitarian grounds.

Tuesday, January 5, 2010

Brain Uploading in Avatar

Noah Hutton already wrote an in-depth review of the neuroscience of Avatar, so I'll focus on just the brain uploading part. Would it be possible to lie down on a surface with electrochemical capabilities and somehow transfer the human mind from the cellular substrate to another form of substrate?

The Tree of Souls would need to communicate with the neurons in the human's original cellular substrate at a high level of precision. There are two ways the tree could achieve this. One would be some sort of biological scanner that could read at the nanometer scale in the x, y, and z directions, penetrating at least the length of the average brain (~6 inches) in the z direction. But this strikes me as highly implausible. What could possibly function as the vacuum and the electron gun?

Instead, it seems more likely that the tree would need to directly probe each of the ~86 +/- 8 billion neurons and ~85 +/- 10 billion glial cells. Perhaps by stimulating each brain cell individually and measuring its response curve over a number of iterations, the tree could reverse engineer a model of all of that cell's relevant properties. To capture learning and memory, the tree would have to detect the NMDA receptor density of hippocampal neurons. It is hard to say what other details of each cell the tree would have to detect. Perhaps it would need some measure of regional mRNA expression or of epigenetic changes to each cell's histones and DNA. It's even possible that the tree wouldn't have to go down to that level at all, and that a map of all cortical minicolumns could do the trick.

Once all of the relevant properties of the original human brain were known, the tree would have to transfer these properties to the Na'vi substrate. Since it's unlikely that the Na'vi have the same type of micro unit (the cell) as humans, this might be sort of challenging. But since most people who study the topic conclude that it'd be possible to upload the human brain in some sort of silicon substrate, there is likely to be a way to accomplish this task.

With the benefit of human technology like SSTEM (pdf) and computers, a solution to this task would be a lot easier to design. We're not so far away...

Monday, January 4, 2010

Time Transparency in Blogging, 2009

People often ask me how much time I spend blogging. I've been keeping track for the past year, and now I can finally answer with some precision. Here's the monthly breakdown:

January: 7 hours, 12 posts
February: 7.5 hours, 19 posts
March: 8.25 hours, 16 posts
April: 10.75 hours, 17 posts
May: 7.5 hours, 13 posts
June: 9.5 hours, 15 posts
July: 1.5 hours, 4 posts
August: 2.5 hours, 3 posts
September: 9.75 hours, 13 posts
October: 12 hours, 20 posts
November: 10.25 hours, 13 posts
December: 9 hours, 12 posts
Overall: 95.5 hours, 157 posts
Time Spent Per Week: 1.83 hours, 3.01 posts
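
The per-week figures are just the yearly totals divided by 52; they appear to be truncated rather than rounded to two decimals (a quick check of my own arithmetic, assuming exactly 52 weeks):

```python
total_hours, total_posts, weeks = 95.5, 157, 52

def truncate2(x: float) -> float:
    # Truncate (not round) to two decimal places, matching the figures above.
    return int(x * 100) / 100

print(truncate2(total_hours / weeks))  # 1.83 hours per week
print(truncate2(total_posts / weeks))  # 3.01 posts per week
```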

Click here for the raw data. Click here for all of my posts in 2009.

Each morning when I wake up, I write down how much time I spent blogging the previous day. The numbers are almost all in half-hour blocks because finer granularity would be infeasible.

These numbers are only for time spent specifically researching and writing posts on this blog. They do not account for the time I spent reading articles that I eventually ended up blogging about, which I almost certainly would have done anyway. They are estimates and certainly not exactly correct. But I strive not to be systematically biased toward either too much or too little time, and I round appropriately.

Bottom Line: When I say that it doesn't take long to keep up a reasonably decent blog, I mean it!