The NYTM's Year in Ideas is consistently good. My favorites for this edition were D.I.Y. Macro, Performance Enhancing Shoes, Relaxation Drinks, and The 2000's Were a Great Decade. Inspired by their ideas, my retrospective for the year will consist of twelve articles / blog posts, one for each month, that seem especially representative of the year in ideas.
January: "Lessons from a pandemic," Nature editorial, 685 words. The H1N1 virus ended up not being that lethal, but it could have been, and this article highlights the lessons. Among them is that six months is too long of a time for vaccine production, given that viruses now easily spread around the world in "a matter of weeks."
February: "The biomechanics of barefoot running," editor's summary, 283 words. Running on one's toes is healthier than running on one's heels, even though most running shoes promote the latter. This finding is largely academic for me personally, as over the years I have come to loathe jogging. Nevertheless, it is emblematic of the larger "back to nature" craze that has taken over in 2010. This includes the paleo diet, probiotics, and restroom posture designed to prevent hemorrhoids.
March: "Snake oil? The scientific evidence for health supplements," by David McCandless and Andy Perkins, infographic. Aside from being fascinating, this is a good example of an effort to harness the academic lit for the benefits of the masses. Also, it is representative of the open data movement, as the authors transparently aggregate their data set in a google doc, which anyone can view.
April: "The data-driven life," by Gary Wolf, 5808 words. Discusses the growing trend of self-experimentation. More generally, he discusses how many more people are using tech and data to inform decisions, trumping their raw intuition.
May: "The moral life of babies," by Paul Bloom, 6026 words. He discusses how our preference towards actors who "do the right thing" emerges very early. That is, it presumably emerges far earlier than the babies would be cogent enough to consciously reason about morality. This is part of a movement in psychology that is emphasizing the arbitrariness of our beliefs and decisions.
June: "Smarter than you think: IBM's supercomputer to challenge 'Jeopardy!' champions," by Clive Thompson, 6609 words. At any given point, the AI iteratively calculates the probability that an answer is correct, and then checks whether that probability passes a certain threshold. This probabilistic thinking seems to be invading fields beyond just machine learning, so it's important to understand.
July: "New developments in AI," by Steve Steinberg, 5496 words. An innocuous and perhaps unfortunate title, but a tour de force of a blog post. He discusses trends in smart cars and massive knowledge-bases, and speculates on how they will affect society. One sentence that's particularly near and dear to my heart is when he writes, "consider that 'what is the best burrito in SF' (an opinion), and 'what do most people consider the best burrito in SF' (a fact) are normally considered equivalent."
August: "A world without mosquitoes" by Janet Fang, 1929 words. She discusses whether we should try to eliminate all of these nasty, virulent insects. The downside is that it would mess with biodiversity in ways difficult to predict, while the upside is that it could save millions of lives. We will face plenty of these type of trade-offs in the coming years, specifically with respect to climate change and geoengineering, and more generally in changing aspects of our natural world that we disapprove of.
September: "Jumping to joy," by Robin Hanson, 212 words. He wonders whether we should experiment more with different lifestyles, and what our failure to do so implies about our precarious sense of self. Questioning which of our selves is the "real" one is trendy these days, boosted in part by things like the implicit association test. Experimentation is also enjoying a resurgence, championed by Dan Ariely.
October: "Lies, damned lies, and medical science", by David Freedman, 6022 words. Explains the problems with current scientific publication and data dissemination systems. Many scientists broadly agree with these critiques of their infrastructure, but lack personal incentives to change them.
Movember: "Hangover theory and morality plays," by Steve Waldman, 1986 words. He discusses the need to frame causes of the recession in moral terms that anyone can understand, synthesizes relevant economic theories, and holds no punches. It'd be hard to describe the ideas of 2010 without including reactions to the recession.
December: "The hazards of nerd supremacy: The case of Wikileaks," by Jaron Lanier, 4704 words. Wikileaks is one of the defining stories of the year. He explains that we might support the hackers in our hearts, because we perceive them to be the underdogs, but that in our heads we should be much more skeptical.
It's been a fun year of blogging and thanks as always for reading.
Tuesday, December 21, 2010
Lehrer On Plasticity Vs Specialization
He discussed it here, a month ago:
Expertise might also come with a dark side, as all those learned patterns make it harder for us to integrate wholly new knowledge. Consider a recent paper that investigated the mnemonic performance of London taxi drivers. In the world of neuroscience, London cabbies are best known for their demonstration of structural plasticity in the hippocampus, a brain area devoted (in part) to spatial memory....
The problem with our cognitive chunks is that they’re fully formed – an inflexible pattern we impose on the world – which means they tend to be resistant to sudden changes, such as a street detour in central London....
The larger lesson is that the brain is a deeply constrained thinking machine, full of cognitive tradeoffs and zero-sum constraints. Those chess professionals and London cabbies can perform seemingly superhuman mental feats, as they chunk their world into memorable patterns. However, those same talents make them bad at seeing beyond their chunks, at making sense of games and places they can’t easily understand.
A beautiful exposition. However, I think the trade-off can be found more generally than in just the human brain. Indeed, most evolving biological systems impose limits on plasticity because of the costs. This suggests new minds or systems we might design will probably deal with this trade-off too. But this is all still hotly debated.
Sunday, December 19, 2010
Douglas Adams On Robustness Vs Fragility
From The Hitchhiker's Guide:
"The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair."
Aging And Happiness
How does subjective well-being vary with age? To find out, Stone et al (here, HT: The Brits (HT: BC)) conducted a large random-digit dial survey of 300,000+ US citizens. They asked about well-being and a few other variables, like age. We can only hope they had a good wireless plan. Here's the big result:
The covariates are unemployment, marital status, whether one has children living at home, and gender. Adjusting for these raises younger people's well-being ratings, because the young are more likely to be unemployed. Of course, be careful of the axes, as the real rating scale varies from 0 to 10. But the large sample size and continuous trend across age groups lend credence: I buy it.
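For readers who want to see what "adjusting for covariates" actually does, here is a toy sketch with synthetic data. The variables and coefficients are invented, not Stone et al's:

```python
import numpy as np

# Toy illustration of "adjusting for covariates" -- synthetic data,
# not Stone et al's. Unemployment is commoner among the young and
# depresses well-being, so it confounds the raw age trend.
rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(18, 85, n)
p_unemp = np.clip(0.25 - 0.002 * (age - 18), 0.02, 1.0)
unemployed = (rng.random(n) < p_unemp).astype(float)
wellbeing = 5.0 + 0.02 * age - 1.5 * unemployed + rng.normal(0, 1, n)

raw_slope = np.polyfit(age, wellbeing, 1)[0]
X = np.column_stack([np.ones(n), age, unemployed])
adj_slope = np.linalg.lstsq(X, wellbeing, rcond=None)[0][1]
print(f"raw age slope: {raw_slope:.4f}, adjusted: {adj_slope:.4f}")
# Part of the raw trend was really an employment effect; conditioning
# on unemployment moves the young up relative to the unadjusted curve.
```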
What about anxiety and age? Here's the proportion of respondents who reported feeling "a lot" of stress the previous day, in different age groups:
I wonder what explains this trend. Perspective? Fiscal and emotional stability? Norepinephrine levels in the amygdala?
Finally, for those readers who do not trust their eyes, here's their table showing the percent of variance explained:
So anger and stress show a pretty consistent decline across age groups, while the curves are more U-shaped for measures of subjective well-being. Note sadness follows an inverted U-shaped curve.
Many top 250 movies explore this curious relationship between happiness and age, like Up, Ikiru, Cinema Paradiso, and The Wrestler. Much of it seems counter-intuitive. Here's a post of mine from '07 wondering whether we become happier with age, but I apparently didn't see the U-shaped curve coming.
Sunday, December 5, 2010
Why Nature Didn't Choose Arsenic
Phosphate is of course a part of the structures of DNA and RNA, but it is also in many metabolic intermediates like ATP and glucose-6-phosphate. The idea that a bacterium could survive without it (see here) would require updating some concepts about the flexibility of reaction rates in physiological systems.
A 1986 paper by Frank Westheimer, cited 400+ times, abstract here and pdf here, explains why phosphates are preferred. In particular, he notes how the negative charge of the phosphate ester makes it relatively more resistant to hydrolysis, while it still can act as a leaving group if enzymatically activated. Then, in an intriguing section, he discusses why various other alternatives would not make sense, including arsenic:
Another compound that must be considered as a basis for a possible genetic material is arsenic acid, which is also tribasic. The poisonous effects, however, of compounds of arsenic probably cannot be avoided, since these effects are centered in the lower valence states of arsenic, and the reduction of pentavalent arsenic is much easier than that of pentavalent phosphorus. In any case, arsenic esters are totally unsuitable; the hydrolysis of esters of arsenic acid is remarkably fast. The triisopropyl ester in neutral water at room temperature is completely hydrolyzed in a couple of minutes. Apparently the hydrolysis of the diesters is even faster than that of the triesters.

The idea is that esters of arsenic are too liable to be cut by water, thus making them poor linkers for the bases of DNA and RNA. But this assumes that the reaction occurs under relatively stable conditions (i.e., pH and temp), and perhaps those conditions are somehow altered in this particular bacterium, sufficiently lowering the hydrolysis rate of the arsenic esters. We will have to wait and see, but in the meantime where are the prediction markets when we need them?
Saturday, December 4, 2010
Trade Off #16: Impulse vs Incentives
At one of the stoplights on my drive to the gym, there is often someone walking through the cars asking for change. Let's assume, reasonably, that she needs the money more than I do. Let's also assume, somewhat less reasonably, that she'll spend the money in a productive manner. We can now break the decision down into the benefits she'd gain from leveraging my money versus the perverse incentives I'd reinforce by rewarding people for begging in traffic.
Generally, this trade-off is between the benefits from an impulse meant to rapidly improve and stabilize a condition, versus the costs of long-term instabilities that could result. Some examples:
- Keynesian econ emphasizes the multiplier of a gov intervention, which Keynesians consider to be an impulse, whereas Austrian econ emphasizes the moral hazard (i.e., bad future incentives) of such impulses. ("in the long run we're all dead"; also see here)
- Radiotherapy has a good chance of killing tumors, and thus can be thought of as an impulse. But it also makes mutations in other genes more likely, which could develop into secondary tumors in the future, and thus can be thought of as bad "incentives." (see here)
- WikiLeaks might incite people to speak out against or question their government, thus acting as an impulse to increase freedom, but it also incentivizes governments to be even more secretive and centralized, thus decreasing freedom. (see here)
Now, the previous paragraph is somewhat of a technicality, and is probably boring. But if I didn't mention it now, just imagine the kind of incentives that would introduce for sloppiness in the future.
(Kudos to Alan Grinberg for the photo)
Thursday, November 25, 2010
Bill Simmons On Uniqueness
He gripes in his most recent chat:
[I]t's funny to take heat from soccer fans that I'm a bandwagon Tottenham fan. I mean... of course I am. I am something like 17 months into this thing. But what I don't get about sports like the UFC/soccer/NHL (and even baseball with the saber community towards people who just like baseball and don't want to dive into the stats) is why the diehards are so protective/condescending towards casual fans. What's the goal there? To just drive away everyone who might like the sport and want to become more of a fan? I think there's a difference between local bandwagon fans (like the Pink Hat Red Sox fans) and "I am starting to like your sport, I genuinely want to follow it and learn about it" fans and it would just seem like the diehards should embrace the latter group. Or am I crazy?...
I do think that diehard fans tend to exclude newcomers - the same phenomenon works with music, you always want your favorite band to be the little band that not everyone knows about (and never have them get to the U2 level). I think Kings of Leon are a good recent example of this and even the band members hated that they became "mainstream" because it brought in fans that they didn't necessarily want. The best breakdown of this was in Steve Martin's book about his standup career when he talks about becoming hugely famous and how he started dreading doing his shows because he felt like people weren't there for the right reasons. It's an interesting topic I think.

Once more people join a given group, affiliation loses lots of its signaling benefits due to diffusion. So it makes sense that fans who have invested in a team / band would discourage newbies from joining.
But why do the musicians / comedians themselves not want more followers? That's a bit trickier. Simmons mentions Steve Martin being wary of his new fans, but the same is even more true of Dave Chappelle. When he skipped out on the third season of his show, he turned down millions.
Any theory to explain this phenomenon also has to account for the fact that neither athletes nor academics tend to express these sentiments. Jordan, Manning, Hawking, Volkow--they do not worry about "going mainstream." Indeed they tend to welcome it.
Perhaps the musicians / comedians are signaling loyalty to their core constituents, and the real emotional and financial costs they pay in doing so just makes their signaling more credible. So it seems that the less badly you want to go mainstream, the more your pursuit is about signaling as opposed to results.
Saturday, November 20, 2010
Our Regrets Change Over Time
Ben Casnocha recently posted about the regrets of the dying, and Robin Hanson replied that in fact people on their deathbed do not spontaneously offer such regrets. Robin is technically correct, but he is taking the claim far too literally.
Indeed, there is broader data to suggest that the regrets of older people are quite different from the regrets of younger people. In particular, as time since a decision grows, people tend to shift their regrets towards not making the hedonistic decision.
An '06 study (link, pdf) shows how the intensity of regrets towards work or enjoyment changes as time passes. For events last week, people express slightly higher levels of regret towards enjoying instead of working (2.2 vs 2.0 out of 6). But for events five years ago, people feel more regret for working instead of enjoying (3.4 vs 1.4 out of 6). This change is even more pronounced for feelings of guilt vs missing out, and there's lots of replicating data (for example, see Ran Kivetz's other papers here).
This is part of what makes using the regret heuristic so complicated. One must not only project future regrets for a decision, weighted by the probability of each outcome, but also consider how those regrets might change in direction and strength over time, and integrate over all probabilistic future time points. If this computation were easy, there wouldn't be so much demand for strategies to get an approximate answer.
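To make the computation concrete, here is a toy sketch with invented numbers--the outcome probabilities, regret intensities, and horizon weights are all assumptions of mine, not taken from the study:

```python
# Toy sketch of the regret-heuristic computation described above;
# every number here is invented for illustration.

# Each possible outcome of a decision: (probability, {horizon: regret intensity}).
# Intensities drift with time, as in the '06 study: regret over indulging
# fades, regret over working grows.
outcomes = [
    (0.7, {"1 week": 2.2, "5 years": 1.4}),  # indulged, and it went fine
    (0.3, {"1 week": 4.0, "5 years": 3.0}),  # indulged, and it went badly
]
horizon_weights = {"1 week": 0.3, "5 years": 0.7}  # how much each future self counts

expected_regret = sum(
    p * sum(horizon_weights[t] * r for t, r in regrets.items())
    for p, regrets in outcomes
)
print(f"expected time-integrated regret: {expected_regret:.2f}")  # 2.14
```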
Friday, November 19, 2010
P-Value Polemics
As I am always up for a good scholarly debate, I was quite pleased, after reading this '05 article calling for a replacement to the p-value called p-rep (cited 200+ times), to see a somewhat vitriolic '09 rebuttal (pdf). First, the abstract of the '05 paper by P. Killeen:
"The statistic Prep estimates the probability of replicating an effect. It captures traditional publication criteria for signal-to-noise ratio, while avoiding parametric inference and the resulting Bayesian dilemma. In concert with effect size and replication intervals, Prep provides all of the information now used in evaluating research, while avoiding many of the pitfalls of traditional statistical inference."A rather bold claim! And, shortly after its publication, the journal Psychological Science (6th highest psyc impact factor) recommended that authors report p-rep instead of the traditional p-value. Which makes the rebuttal article by Iverson et al that much more tantalizing. They write:
"This probability of replication prep seems new, exciting, and extremely useful. Despite appearances however prep is misnamed, commonly miscalculated even by its progenitors, misapplied outside a common but otherwise very narrow scope, and its seductively large values can be seriously misleading. In short, Psychological Science has bet on the wrong horse, and nothing but mischief will follow from its continued promotion of prep as a scientifically informative predictive probability of replicability."Now that is what I call a take down! These same authors calm down quite a bit in their '10 article and even make the level-headed suggestion that p-rep is a step in the right direction, but that is uncool so I won't quote from it.
####
Reading about p-values makes me want to start a blog about them (how does such a blog not already exist?!). A good subtitle could be "where one in every twenty posts will be significant by chance alone."
Monday, November 15, 2010
Trade Off #15: Acquiring Info vs Altering Subject
- Increasing the accelerating voltage in transmission electron microscopy can lead to higher image resolution (meaning more info), but it also does more damage to the tissue. (see here; there are similar trade offs in lots of med imaging techs, like PET scans, see here)
- When a model of a complex psychological phenomenon becomes widespread, reality often begins conforming to the model. This is often called performativity, and it is perhaps why Keynes called economics a "moral" science. (see here)
- The Heisenberg uncertainty principle says that the more precisely the position of a particle is measured, the less precisely its momentum can be known, and vice versa (formally, Δx·Δp ≥ ħ/2). It's often illustrated with observer effects--the measuring apparatus delivers a force to the particle which alters it--though on the modern view the uncertainty is intrinsic to the quantum state, not just an artifact of clumsy measurement. (see here)
(photo is of a peptide fiber, taken with EM, credit to Christoph Meier)
Tuesday, November 9, 2010
We're All Individuals Now
Christian Jarrett explains how we tend to value our uniqueness,
Whether it's a gift for small talk or a knack for arithmetic, many of us have something we feel we're particularly good at... this strength then becomes important for our self-esteem... children tend to choose friends who excel on different dimensions than themselves, presumably to protect their self-esteem from threat... [W]hen making hiring decisions, people tend to favour [sic] potential candidates who don't compete with their own particular strengths... Participants tricked into thinking they'd excelled at the maths [sic] tended to choose the potential team member who was weak at maths but stronger verbally, and vice versa for those participants fed false feedback indicating they'd excelled verbally.

I think this drive can explain two anomalies:
1) We often agree to disagree. Aumann's classic disagreement theorem (pdf) says that rational truth-seekers cannot and will not agree to disagree; given the same priors and the same data, they must each reach the same conclusions. Cowen and Hanson (pdf) discuss how this disagreement result is quite robust. What's the deal? Disagreeing allows us to show off our independence and individual intelligence, which are among the most credible ways to establish uniqueness.
2) Our opinions oscillate in cycles away from what we perceive as the current consensus. A recent PLoS Comp Bio paper shows that in order to explain this "hype cycle" you need to model the preference for individuals to feel unique:
[W]e identify a missing ingredient that helps to fill this gap: the striving for uniqueness. Besides being influenced by their social environment, individuals also show a desire to hold a unique opinion. Thus, when too many other members of the population hold a similar opinion, individuals tend to adopt an opinion that distinguishes them from others....

[T]here is a third, pluralistic clustering phase, in which individualization prevents overall consensus, but at the same time, social influence can still prevent extreme individualism. The interplay between integrating and disintegrating forces leads to a plurality of opinions, while metastable subgroups occur, within which individuals find a local consensus. Individuals may identify with such subgroups and develop long-lasting social relationships with similar others.

There is a trade-off between uniqueness and accuracy in beliefs, so we typically seek uniqueness on opinions for which the cost of being wrong is low: art, politics, sports, etc. That's why there is less of a drive for uniqueness regarding med. Few would claim that covering an open wound is a bad idea.
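The mechanism is easy to caricature: agents drift toward the local consensus but flee their opinion once it becomes too common. Here is a toy one-dimensional sketch, with all parameters invented and far simpler than the paper's actual model:

```python
import numpy as np

# Caricature of opinion dynamics with a striving for uniqueness --
# a toy, far simpler than the paper's model; all parameters are invented.
rng = np.random.default_rng(1)
opinions = rng.normal(0, 1, 100)

for _ in range(300):
    mean = opinions.mean()
    pull = 0.05 * (mean - opinions)  # social influence: drift toward consensus
    # crowding: fraction of the population within 0.2 of each agent's opinion
    crowding = np.array([np.mean(np.abs(opinions - x) < 0.2) for x in opinions])
    push = 0.1 * crowding * np.sign(opinions - mean)  # flee the crowd
    opinions += pull + push + rng.normal(0, 0.01, 100)

# The pull keeps opinions bounded, while the push makes full consensus
# unstable -- the population stays spread out instead of collapsing.
print(np.round(np.percentile(opinions, [5, 25, 50, 75, 95]), 2))
```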
Before you say, "but I'm not an individual!", you need to check out this classic Monty Python scene.
Sunday, November 7, 2010
Trust The Ratings Of Others
An '09 paper (link here, pdf here, HT to TC) claims that people make more accurate emotional predictions about a future event when they are simply told how someone else reacted to that event, as opposed to when they are given info about the event. This is somewhat counter-intuitive, so let's look at their evidence.
One of their tests was speed dating. Each guy submitted a photo and some demographic info about himself. Then, each girl predicted how much she would enjoy the date based on either this photo / info or the enjoyment rating of a girl who earlier had gone on a date with the same guy. Next, the guy and girl had their five minute date, (ignore the heteronormativity, my fellow Vassar alums), and finally the girl rated how much she enjoyed herself on a sliding scale of 1 - 100.
The authors define prediction error as the difference between the girl's predicted and actual enjoyment ratings. Participants made more accurate predictions when they used the first girl's enjoyment rating to predict their own (the avg error was 11.4 +/- 8.7) than when they predicted their enjoyment on the basis of the info (an avg error of 22.4 +/- 10.8).
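In other words, their accuracy measure is just a mean absolute error. A trivial sketch, with invented ratings:

```python
import numpy as np

# The accuracy measure, sketched: mean absolute error between predicted
# and experienced enjoyment, on the paper's 1-100 scale (numbers invented).
predicted   = np.array([80, 55, 70, 40])
experienced = np.array([60, 50, 45, 55])
print(np.abs(predicted - experienced).mean())  # 16.25
```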
In classic psyc study fashion, they also asked their participants to say which condition they thought would lead to more accurate predictions. 75% said the info would be more useful than the rating of a girl who had already been on a date with that guy. Oops. Now, indulge me in a few reactions:
1) Why do people underestimate the value of someone else's rating? Probably because people think of themselves and their opinions as more unique than they actually are. This sets up my public choice theory for why popular critics like Anthony Lane tend to be negative and contrarian. Although on the surface this annoys readers, people on a deeper level prefer to read opinions about art that they disagree with, because it allows them to think of their own opinions as more unique.
2) There is only one specified relationship between the study participants: they are all undergrads at the same school. So although the authors toss the word "social network" in towards the end of the paper, their results do not speak to the predictive power of a friend's opinion as opposed to a stranger's opinion. This remains an open question--in predicting their own enjoyment, will people find the opinion of someone in their network more valuable than the average opinion of strangers? Even if you say yes, you must take into account the trade-off of sample size, which is larger when you listen to the masses. The high valuation of sites like facebook relies in large part on the assumption that we'll prefer recommendations from those in our network, but I'm not so sure.
3) A subsequent study looked at how people combine their own mental simulations and third-person reports of others' experiences in making judgments. Corroborating the results of this study, they found that people assign far too much weight to their own simulation of how an event will play out as opposed to the feelings of other people who have actually experienced the event. I myself find this all very relevant to imdb's movie ratings. Remind me again why you trust yourself to judge a movie instead of deferring to the aggregated ratings of others?
Monday, October 25, 2010
Making Me Guess
It is often tricky when you are asked to estimate some odd quantity, like how many people dressed in panda suits there were at a party. The person asking you to guess is usually hoping to surprise you with the large deviation between your guess ("what you'd expect") and the actual quantity. But you, the guesser, can't help but take into account the fact that they are asking you, which they presumably wouldn't be doing unless the quantity to be guessed were extremely unlikely and otherwise surprising.
So, the supposedly naive guesser is forced to either guess especially high and annoy the asker, or guess low and appease the asker. Guessing high makes you look like a smartass, while guessing low is tedious.
As a young lad, I would never really know how to choose between guessing high and low, and the choice often brought on some anxiety. But now I usually just explain the whole situation, indicating how being asked to guess has instantaneously shifted my probability distribution. Incidentally, I no longer have any friends.
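That instantaneous shift is just Bayes' rule. Here is a toy version, with every probability invented:

```python
# Toy Bayes update for "they're asking me to guess, so the answer is
# probably surprising." All probabilities are invented.
p_surprising = 0.01              # prior: most quantities are mundane
p_asked_given_surprising = 0.9   # people love quizzing you on outliers
p_asked_given_mundane = 0.05     # rarely quizzed on boring quantities

p_asked = (p_asked_given_surprising * p_surprising
           + p_asked_given_mundane * (1 - p_surprising))
posterior = p_asked_given_surprising * p_surprising / p_asked
print(f"P(surprising | asked) = {posterior:.2f}")  # ~0.15, up from 0.01
```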
(Inspired by a representative convo with Brittany)
Sunday, October 24, 2010
Trade Off #14: Precision vs Simplicity
When you describe something, the more precisely your model explains the given data, the more complicated it must be. Don't believe me? Lo, behold these examples, then:
- In describing the path up to my apartment, I could say "there are stairs", or I could say "there are fourteen stairs"; vagueness is less precise but it is also simpler. The bottom line is that having to walk up any number of stairs is too many.
- In fitting a model to data, one can explain more variance by including more free variables, at the cost of complication. There are plenty of ways to punish a model for having additional parameters and thus make the model earn each of its parameters through explanatory ability. (see here and here, and the sketch after this list)
- The failure of humans to adequately trade off precision and simplicity in certain contexts, like when we say that the prob of X and Y is greater than the prob of just X, is one of our well-documented cognitive biases. (see here)
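Here is the sketch promised above. Akaike's information criterion is one standard penalty: AIC = n·ln(RSS/n) + 2k, where k counts fitted parameters and lower is better. The data below are synthetic:

```python
import numpy as np

# One standard way to make a model "earn" its parameters:
# AIC = n * ln(RSS / n) + 2k, where k is the parameter count (lower wins).
rng = np.random.default_rng(42)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + rng.normal(0, 0.3, 50)   # truly linear data, with noise

def aic(k, rss, n):
    return n * np.log(rss / n) + 2 * k

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: RSS {rss:.2f}, AIC {aic(degree + 1, rss, len(x)):.1f}")
# The degree-5 fit always has the lower RSS (more precision), but the 2k
# penalty typically hands the lower AIC to the simpler line.
```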
A seeming exception: paradigm shifts like Kepler's ellipses, which replaced piles of epicycles with something both simpler and more precise. But in the view of this committee, these precision-enabling paradigm shifts are especially complicated, involve the shifting of assumptions at a fundamental level, and only seem simple in distant hindsight. That's one reason why they are so hard to come upon.
(photo of spiral galaxy, which Johannes Kepler probably would have marveled at, goes to NASA's Marshall Center)
Being Skeptical Of Your Skepticism
If you notice one particular flaw in an author's fact checking or reasoning, to what extent do you discount the rest of what that author claims?
If you hold a narrative view of others' actions--X did Y because she is a Z--then you will tend to assume that the author is a liar and cannot be trusted.
But truth-seeking, like any other human tendency, is on a spectrum. (A psychopath is at one extreme, while Abe Lincoln, god bless his honest soul, held down the other). So that binary categorization is almost surely wrong.
Yet, we are heavily biased towards categorizing people and against seeing the full spectrum. Sometimes this bias is due to laziness, as labeling others as liars liberates us from the effort of actually understanding their claims. But less trivially, labeling others allows us to feel like part of a more exclusive group, a group that would presumably never commit such an error.
Surely, we must downshift our faith in the author's other claims somewhat upon finding that they have made a mistake. But remember the high prior probability that the authors are merely fallible, and don't differ much in the degree of their truth-seeking from the rest of us. Now, if they get two things wrong...
Bottom Line: While it's healthy to be skeptical, it's healthier still, for the body if not the ego, to be appropriately skeptical of your own skepticism.
Tuesday, October 19, 2010
Zuckerberg On Reward And Interest
He discusses The Social Network here. Precisely what he says is that Hollywood "can't wrap their head around the idea that someone might build something because they like building things." And... trigger the applause lights.
The reality is somewhere in the middle. To intimate that the motivation for making facebook was purely intrinsic, which he in fact intimates, is going too far--social esteem from the Peter Thiels and Sean Parkers of the world had to be a motivating factor. But murky middle grounds are much harder to convey in a movie or sound bite. Harder to convey in a blog post, too.
Sunday, October 17, 2010
Trade Off #13: Robustness vs Fragility
One of the more ridiculous scenes in Star Wars: A New Hope is when one measly torpedo hitting a thermal exhaust port sets off a chain reaction which destroys the entire Death Star. Even if the Empire doesn't consider a small one-man fighter to be any threat, and even though the shot is ostensibly "one in a million," buying into the whole fiasco requires some boyish naïveté.
But outside of Hollywood, it's surprising how many systems behave similarly. Designs built to maintain function despite large perturbations of a certain type are often highly vulnerable to perturbations from a different angle. It seems that optimizing for robustness to expected deviations generally comes at the expense of increasing fragility to unexpected deviations. Examples:
- Forest buffer zones that are designed to protect against particular types of fires can be superseded by unexpected types of fires that, for example, come from a different direction. (see diagram here)
- A Boeing 777 has complicated chips that can account for variation in the distribution of cargo or atmospheric conditions, but it is vulnerable to an electrical outage or computer error in a way that a simpler plane would not be. (see here)
- Genetic regulatory networks are designed to sense and maintain function in a variety of environments, but a mutation that changes the internal connections of this regulatory network is almost always lethal. (see here)
On the other hand, this trade-off is a fairly recent idea, it's not particularly well-defined, and it will be important to see what consensus develops towards it before we draw too many implications. Still, as far as this committee is concerned, robustness vs fragility is indeed canonical.
(Photo comes from flickr user Scott Beale)
Saturday, October 9, 2010
Trade Off #12: Protection vs Freedom
Preventing deleterious forces from harming individuals typically comes at the cost of constraining the actions of those individuals in some way. Thus we come to the common trade off between protection and freedom. Examples:
- When herds of prey animals are large enough, they stand a chance to fight off a given predator. Thus they tend to aggregate together, lowering their freedom but increasing the probability of their survival. (see here)
- Work is one way to trade freedom now for protection from various exogenous forces in the future. Like preparing for the zombie apocalypse. (see here)
- Economic interventions that increase freedoms, like organ donation markets, are typically argued against on the basis of protecting individuals from exploitation. (see here)
(Credit for photo of Harlech Castle goes to theroamincatholic)
Thursday, October 7, 2010
Your Relationship With Your Former Self
Fernando Pessoa considers this question in The Book Of Disquiet,
I often find texts of mine that I wrote when I was very young--when I was seventeen or twenty. And some have a power of expression that I do not remember having then. Certain sentences and passages I wrote when I had just taken a few steps away from adolescence seem produced by the self I am today, educated by years and things. I recognize I am the same as I was. And having felt that I am today making a great progress from what I was, I wonder where this progress is if I was then the same as I am today.

Pessoa realized he was underestimating his former self after reading his old writing. This makes sense. It's harder to construct a personal narrative of growth when the sentences showing that you used to be just as sweet remain visible, instead of diffusing into infinity like spoken words, or being lost in the synaptic puncta of the cortex, like most thoughts.
So with the masses leaving digital footprints in tweets and status updates, will we all soon find it more difficult to believe in our redemption stories? As the world freaks out about others peering into their privacy, perhaps the person we should be most concerned about finding our innermost thoughts is ourselves, in the future. Our syntax might seem a little too tight, our inner monologues a little too kindred.
This is one of the questions I ponder as I scroll through old posts on a rainy evening. And my other question is... was I more alive then, than I am now?
Monday, October 4, 2010
Trade Off #11: False Alarm vs Oversight
We can divide mistakes into two forms. The first is a false alarm, in which you overestimate the likelihood that an event will occur, and the second is an oversight, in which you underestimate the likelihood that the event will occur. Suppressing the probability of an oversight will make a false alarm more likely, and vice versa (a sketch after the examples makes this concrete). Plenty of examples, I'll just give three:
- Statisticians make a distinction between "type one" errors, rejecting a null hypothesis when it shouldn't be, and "type two" errors, failing to reject a null hypothesis when it should be. If the null hypothesis is that a given event will not happen, then type one errors can be thought of as false alarms, and type two errors as oversights.
- A lifeguard can choose to pay less attention to each individual momentary dip under water, and thus lower his stress from false alarms. But he inevitably does so at the cost of increasing the risk of an oversight--not noticing when someone is underwater for too long.
- Rhodopsin switches conformational states in response to photon exposure. We can think of a false alarm as when rhodopsin changes states even when a photon has not hit it, and an oversight as when rhodopsin fails to switch states despite photon exposure. Evolution seems to have strongly selected for minimizing false alarms as opposed to minimizing oversights. (That is, oversights still occur ~ 30% of the time; see here)
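Here is the promised sketch, in signal-detection terms; the distributions and thresholds are invented:

```python
import numpy as np

# Signal-detection caricature of the trade-off: a detector sees a noisy
# reading and must call "event" vs "no event" via a threshold.
rng = np.random.default_rng(7)
noise = rng.normal(0.0, 1.0, 100_000)   # readings when nothing happened
signal = rng.normal(2.0, 1.0, 100_000)  # readings when the event happened

for threshold in (0.5, 1.0, 1.5, 2.0):
    false_alarms = (noise > threshold).mean()   # type I / false alarm rate
    oversights = (signal <= threshold).mean()   # type II / miss rate
    print(f"threshold {threshold}: FA {false_alarms:.3f}, miss {oversights:.3f}")
# Raising the threshold suppresses false alarms but buys more oversights,
# and vice versa -- you choose a point on the curve, not a way off it.
```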
(Above photo credit goes wholly to flickr user Abhijit Patil)
Tuesday, September 28, 2010
Strategic Swearing
You can find a discussion of it in this (unfortunately gated) article, in the context of med. They discuss two types. The first is "swearing so as to enable empathy and catharsis," in which you swear to mirror the feelings of the patient and spur him to discuss those feelings further. The second is "swearing to create a feeling of social equality," which shows that you are willing to break pointless rules, and instead care about actual results. Generally, they argue that swear words can lend an emotional edge to a sentence that other words cannot. I agree.
It might have been nice if they had discussed swear word overuse. One high school bball coach of mine used to swear only sparingly, but when he did you really listened. Yet if you are known to swear almost never, when you do swear that event itself might draw attention away from what you are talking about. Thus it is tricky to approach the equilibrium of swearing that maintains optimal effectiveness in case of maximum need. Very fucking tricky.
Monday, September 27, 2010
Three Surprising Findings On "Genius"
From Dean Simonton, via Irfan Alvi's review of his book. The more surprising ideas are that:
1) "In some domains, overtraining can be detrimental to creativity. In these cases, cross-training can be more effective." Presumably cross-training involves learning about a variety of different topics. This seems useful either because it allows you to apply ideas in other fields to your own, or because it allows you time away from your main field to avoid getting bogged down in details. Or both.
2) "The creative processes underlying genius-level achievement are still not well understood, although use of heuristics and combinatorial thinking appear to be typically involved." It'd be nice to know precisely which heuristics lead to genius level output!
3) "Psychopathology has a positive correlation with level of genius, but outright madness inhibits genius and higher intelligence tends to provide the capacity to prevent outright madness." This jives with some research. For example, this study considers the effect of the interaction between cognitive ability and neuroticism on managerial performance. Regressing cognition and neuroticism alone explained 4% of the variance in performance, but adding the interaction term raised this to 19%. And after partitioning their sample in half by cognitive ability, they found the higher half had a positive and significant relationship between anxiety and performance, while the lower half had a negative and barely non-significant relationship.
I am still heartily recommending Simonton's Creativity in Science to people interested in these issues.
1) "In some domains, overtraining can be detrimental to creativity. In these cases, cross-training can be more effective." Presumably cross-training involves learning about a variety of different topics. This seems useful either because it allows you to apply ideas in other fields to your own, or because it allows you time away from your main field to avoid getting bogged down in details. Or both.
2) "The creative processes underlying genius-level achievement are still not well understood, although use of heuristics and combinatorial thinking appear to be typically involved." It'd be nice to know precisely which heuristics lead to genius level output!
3) "Psychopathology has a positive correlation with level of genius, but outright madness inhibits genius and higher intelligence tends to provide the capacity to prevent outright madness." This jives with some research. For example, this study considers the effect of the interaction between cognitive ability and neuroticism on managerial performance. Regressing cognition and neuroticism alone explained 4% of the variance in performance, but adding the interaction term raised this to 19%. And after partitioning their sample in half by cognitive ability, they found the higher half had a positive and significant relationship between anxiety and performance, while the lower half had a negative and barely non-significant relationship.
I am still heartily recommending Simonton's Creativity in Science to people interested in these issues.
Saturday, September 25, 2010
Trade Off #10: Plasticity vs Specialization
In college, I was constantly reading neuro lit extolling the virtues of neural plasticity, which is the ability of neurons to change based on feedback from the environment. Plasticity certainly has huge benefits. Specifically, plasticity allows for a better match between phenotype and environment across variable environments than a single, constant phenotype would.
But after a while, the idolatry of plasticity began to annoy me, in part because much of the lit discussed plasticity as if it had no downsides, which seems impossible. (If there really were no costs to plasticity, then evolution should have selected for it like woah).
The general downside seems to be that plasticity hinders specialization. That is, if a system has the ability to change easily (i.e. it has high plasticity), then it will tend to expend resources on a wide range of trait values, and will have fewer resources to focus on the most important and relevant traits. A few examples:
- Synaptic pruning and other mechanisms for synaptic plasticity allow for learning and memory, but they are energetically costly. Indeed, one hypothesis holds that sleep is the price we have to pay for plasticity the previous day. (see here)
- In an evolutionary framework, the major costs to more plasticity are 1) actually sensing the current environmental conditions, and 2) producing the actual trait in a less efficient way. Both of these divert resources from other tasks. (see here and here)
- People with autism spectrum disorders often find it difficult to parse novel stimuli, but can sometimes concentrate for especially long periods of time on specific niches. So one might think of the autistic cognitive style as shifted towards the specialization side of this trade off. (see here)
But given our current working definitions (plasticity = the ability, which is highly correlated with the tendency, for context-dependent change; specialization = funneling energy expenditures to a narrow purpose), and because it is sort of one level "meta" to switching costs vs change gains, we are granting this trade off its own place in the canon.
(Above photo taken by flickr user uncle beast. Plants are often studied w/r/t genetic plasticity because they can't simply pack up shop and move if the environment changes, like an animal or insect could.)
Thursday, September 23, 2010
Never Have I Ever
Robin Hanson published an interesting thought today,
[W]hy are so many of us (including me) so reluctant to experiment with so many joys with strong fans? After all, fans argue, their suggested drug, sex style, or religious experience would only take a few hours to try, and could give us a lifetime of joy if we liked it. It seems we see far larger costs than the time for a trial. My guess: we value our current identity, integrated as it is into our job, hobbies, friends, etc. We fear that if we try new joys, we will like them, and jump to practicing them, which will change us. We fear that by jumping to juicy joys, we won’t be us anymore.

Twenty-six google reader "likes" speak for themselves, but one quibble. I think it's more likely that the reason we don't try "fun" things once or twice is not because we fear how we will change, but because we fear how others might perceive us as having changed.
Consider how Tyler Cowen is so wont to point out that he's never tried coffee, or how Ben Casnocha loves to discuss how he's never smoked weed ("love" might be harsh, but do see his #2).
Our "nevers"'s allow us to signal loyalty to our groups in ways that our "tried once but didn't like it so please still accept me"'s can only dream of.
Wednesday, September 22, 2010
Trade Off #9: Some Now vs More Later
In expected value terms, this trade off is everywhere. By doing work, we trade some of our freedom now for more freedom later, since freedom can usually be bought (see here). When we invest, we give up some cash now to get more later, since interest rates typically beat inflation.
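A quick sketch of the investment version, with all rates invented:

```python
# Sketch of the investment version of "some now vs more later."
# The rates below are illustrative, not a forecast.
principal = 100.0
nominal_rate = 0.05   # annual return on the investment
inflation = 0.03      # annual erosion of purchasing power
years = 10

real_rate = (1 + nominal_rate) / (1 + inflation) - 1
future_value = principal * (1 + real_rate) ** years
print(f"${principal:.0f} now ~= ${future_value:.2f} of today's dollars in {years} years")
# Only worth it because real_rate > 0; if inflation beat interest,
# "more later" would actually be less.
```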
But, taking the long view of time preference, why would anyone ever rationally prefer less of something they want? There are a few reasons:
- Everything is ephemeral (including, probably, the universe itself). So, there is no guarantee that if you wait until later, either you or the thing you are waiting for will even exist.
- Values usually change as time passes. So, you might expect to value something so much less later that you can justify settling for the relatively lesser current amount of it.
- If you learn to always delay gratification, you may find it difficult to ever actually reap the rewards you are meant to be aiming for, and continue to put off having fun until it is too late (see here).
Ultimately, there actually are some legit reasons to prefer some now to more later. Sometimes there really is a dark, beneficial side to procrastination. Honesty demands that we admit these things, even though it might be simpler and easier to have a rule towards always favoring the delayed reward.
(Photo of clouds covering all but the tops of buildings in london from trexcali)
Saturday, September 18, 2010
Trade Off #8: Random vs Determined
This trade off concerns the decision to either leave things completely up to chance or to wholly micro-manage them, and every option in-between.
Now, it's true that attempting to alter the path of current events is the only way we can shift future outcomes towards our preferences, and that alone should push us to favor determining events. But there are large costs to determining that typically push us towards the random side of the trade-off, such as:
- The opportunity cost of even weighing all the choices and outcomes, when you could be spending that energy on other ends. (see here)
- The likelihood that, because we have attempted to change an outcome, we will come to regret our decision and see missed opportunities, which is psychologically taxing. (see here)
- The likelihood that, even in taking action, we'll fail to improve outcomes over the status quo. And, despite our best intentions, the non-zero probability that our actions could even have negative expected value with respect to our goals. (see here)
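Here is that comparison, a toy calculation in which every number is invented; it shows only how the three costs above enter the ledger:

```python
# Toy expected-value comparison of intervening vs. leaving an outcome to chance.
# Every number below is invented purely for illustration.
deliberation_cost = 0.5   # energy spent weighing choices (first cost above)
regret_cost = 0.3         # expected psychological tax of owning the outcome (second cost)
p_improve = 0.4           # chance our action beats the status quo (third cost)
p_backfire = 0.1          # chance our action makes things actively worse
gain, loss = 3.0, 2.0     # payoff if we improve things, damage if we backfire

ev_random = 0.0           # baseline: leave it to chance, pay none of the costs
ev_determined = (p_improve * gain - p_backfire * loss
                 - deliberation_cost - regret_cost)

print(f"EV(random)     = {ev_random:+.2f}")
print(f"EV(determined) = {ev_determined:+.2f}")   # +0.20 with these numbers
# Shrink p_improve or raise the overhead costs and the random side wins.
```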
(above photo by flickrer Jef Safi)
Thursday, September 16, 2010
Long Run Optimism Depends On Short Run Pessimism
Matt Ridley, according to the highest rated Amazon review of his book*, believes that we have little to fear about the long-run sustainability of our energy consumption, because,
Yes, depending on non-renewable fuel, by definition, means that at some point, the fuel will run out. Ridley only points out that naysayers rely on a hidden but faulty premise: that the future will resemble the past. Yes, we will run out of fossil fuels if we keep using them, but who's to say that we will keep using them? ... these folks' error lies in assuming that future ways of production will resemble past ways, and time and time and time again, this assumption has proved erroneous! Ridley's point is that while we can NEVER say that the future WILL solve all pressing problems, so far we have. And we can assume we will in the future because our method of exchange has globalized the "collective brain," assuring that innovation will keep occurring and the best minds will all be working on the pressing problems of the day.

But what if the best minds are convinced by this very argument? Then they will, rationally, cease to work on solutions to these types of problems. So Ridley's argument is only true to the extent that people don't believe in it.
This works on an interpersonal level too. If you truly convince yourself that a situation will end up fine, you have zero incentive to do work to alter it. Thus, in the case that you are not worried at all about a situation, the situation will likely not turn out fine. This has tons of implications... but for tonight I'll just say that it's a cool paradox.
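A toy feedback model of the paradox; the linear forms and constants are my own invention, not Ridley's:

```python
# Toy model: confidence that a problem will solve itself reduces effort,
# but effort is what actually drives the chance of a fix.
# The linear forms and constants are invented to illustrate the feedback.
def p_turns_out_fine(confidence):
    effort = 1.0 - confidence       # total conviction -> zero incentive to act
    return 0.1 + 0.8 * effort       # a little luck, plus effort's contribution

for confidence in (0.0, 0.5, 1.0):
    print(f"confidence={confidence:.1f} -> P(fine)={p_turns_out_fine(confidence):.2f}")
# Full confidence yields the lowest chance that things turn out fine:
# the belief undermines the behavior that would have made it true.
```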
####
* 95.9% of people agree that, instead of reading the book, you might as well just rely on Kevin Currie's review to draw sweeping inferences.
Monday, September 13, 2010
Intellectual Hipsters
Yvain defines them as the third person in this scenario:
A naive person might think that [X] is an absolute good thing. Someone smarter than that naive person might realize that [Y] is a strong negative to [X] and desperately needs to be stopped. Someone even smarter than that, to differentiate emself from the second person, might decide [Y] wasn't such a big deal after all.

He also gives some examples, like "don't care about Africa / give aid to Africa / don't give aid to Africa," where the third position is the intellectual hipster.
His idea is pretty similar to my beloved conformity theory. One difference is that I conceptualize opinions moving around a circle (i.e. with two axes) whereas he seems to view them as oscillating between two poles.
To me, it doesn't make sense that beliefs would just jump from one point to another. At least subconsciously, there has to be some kind of intermediate stance, like "I'm not sure, but I don't consider this issue interesting." The second axis of "caring" or "interestingness" allows the individual to justify holding the belief at every given time point. Otherwise, belief changing would have to happen at a threshold, and there are no thresholds.
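A sketch of the circle picture, under a parameterization I'm inventing here (belief on one axis, caring on the other); the point is only that the two poles connect through a low-caring stance with no jump:

```python
import math

# Represent a stance as a point on a circle: x = the belief itself
# (pro vs. anti), y = how much one cares. The parameterization is mine,
# invented only to illustrate the geometry of the two-axis model.
def stance(angle_deg):
    rad = math.radians(angle_deg)
    return math.cos(rad), math.sin(rad)   # (belief, caring)

# Move continuously from naive "pro" (0 deg) to contrarian "anti" (-180 deg):
for angle in range(0, -181, -45):
    belief, caring = stance(angle)
    print(f"{angle:4d} deg: belief={belief:+.2f}, caring={caring:+.2f}")
# At -90 deg belief is ~0 and caring bottoms out: the intermediate
# "I'm not sure, and I don't find this interesting" stance. No threshold needed.
```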
Wednesday, September 8, 2010
A Critique Of The 10,000 Hour Rule
Suzanne Lainson offers one here,
[Anders] Ericsson says that if you have experienced people who don't do any better than the average person, then they aren't experts. This seems to provide a good loophole to explain why average people can sometimes beat those with more experience. What he seems to be saying is that his theories are right, and when there appear to be exceptions, the exceptions don't count.

Now that's what I call a takedown! The rest of the post is a veritable smattering of research bits, certainly not for the faint of nucleus accumbens, but it is interesting. She concludes that deliberately attempting to practice something for ten thousand hours isn't worth it, because it might not be sufficient for success, and random, unplanned experiences tend to be more life-defining anyway.
Tuesday, September 7, 2010
Three End Of Summer Thoughts
1) As of today there are 800k+ articles currently tagged as "interesting" on del.icio.us, making it one of the site's most popular tags. But of course any link you go through the trouble of tagging you must have found interesting! The only way I can imagine this making sense is if people are using the word "interesting" to distance themselves from the conclusions of the article while admitting that some of the ideas are worth further contemplation. But that is unlikely to be the case for 800k articles, so what gives? (This annoys me more than it probably should.)
2) A co-worker brings in cupcakes early in the morning to share with everyone. I want to wait to have one so I can reward myself for working hard that day. But I'm worried that if I do so there will be none left. Gratification delay and the commons: antagonistic.
3) One question you may need to ask to determine whether you might be living in a simulation: given the ability, would you simulate your own life? Granted, this is frightfully solipsistic, but if I had such a power, I myself can definitely imagine simulating my own life again, tweaking various parameters to test the consequences. But how should I change my behavior based on the non-zero probability that I am living in my own simulation?!
Monday, September 6, 2010
The Arguments For And Against Re-Rating Movies
The Prosecution: Changing one's mind about the quality of a given movie, or for that matter any given work of art, is a disgusting practice that ought to be accompanied by ruthless social disapproval. Re-rating allows and even encourages one to incorporate others' opinions into one's own ratings, heavily biasing them. Naïvely, many assume that this influence will always move ratings upwards and assure themselves that they won't merely follow the opinions of the most popular critics. But the reality of the conformity cycle is much more insidious: you are just as likely to learn that too many others like a movie and thus come to dislike it. There is no defense against these influences once you have been exposed to them; thus rating must happen early and remain steady despite the greatest of protestations. Ladies and gentlemen of the jury, I believe strongly, and upon contemplation I believe you will come to agree, that re-rating really is the bane of a high-functioning rating system.
The Defense: The vitriol of the prosecution's ad hominem attacks against everyday folks who happen to re-rate now and then, justified only by some childish appeal for purity, is dangerously short-sighted. If you don't understand a movie the first time you see it, that's not necessarily the movie's fault, it could be your own fault too. Thus it's totally understandable that, if you come to understand some angle of the movie better after conscious or unconscious contemplation, your rating might change. Moreover, the quality of a movie cannot be fully judged right after watching, because the quality of a movie is based not only on your experience during the movie, but also the value over replacement of any subsequent thoughts about that movie after watching. Thus a rating must be dynamic; it will change with the ebbs and flows of one's thought processes, the structure and patterns of one's interior life, and yes, maybe even one's interactions with other people. Re-rating is only natural given all of our other human tendencies, and its availability takes much unnecessary pressure and anxiety off of the initial rating. If we want to evolve as people, and more specifically as a society of movie watchers, then we must be willing to accept the consequences of such dynamicity. The defense rests.
The Verdict: Death by reruns of imdb's bottom 100.
Sunday, September 5, 2010
Our Hands Evolved Because Our Feet Evolved
A legitimately fascinating article (here) by Campbell Rolian, Daniel Lieberman, and Benedikt Hallgrímsson makes this claim, with substantially more precision. From the abstract:
Human hands and feet have longer, more robust first digits, and shorter lateral digits compared to African apes. These similarities are often assumed to be independently evolved adaptations for manipulative activities and bipedalism, respectively. However, hands and feet are serially homologous structures that share virtually identical developmental blueprints, raising the possibility that digital proportions coevolved in human hands and feet because of underlying developmental linkages that increase phenotypic covariation between them....

In particular, selection pressures on the feet led to longer, stronger thumbs, and slightly shorter other fingers. This makes it easier to hold something with the tips of our other fingers and thumb, in part because they are closer together, and in part because the forces on the thumb are dissipated over a larger surface area. This diagram might help you visualize:
[Figure: modern human's pad-to-pad precision grasping, doi:10.1371/journal.pone.0011727.g001]
Rolian et al. go on to argue that:

[C]hanges in manual digital proportions that enabled digit tips to be brought into opposition may have been a prerequisite for the development of precision gripping capability in australopiths. Also, although pedal changes associated with facultative bipedalism probably provided australopiths with hands capable of producing and using Oldowan stone tools by at least 3.5 million years ago, it should be noted that manufactured stone tools do not appear in the archaeological record until about 1 million years later. Australopiths may have lacked the cognitive capacity for manufacturing tools and/or their technology was entirely nonlithic.... In short, there are several reasons to believe that selection on the foot caused correlated changes in the hand during human evolution, that selection on the hallux was stronger and preceded selection on the lateral toes, and that these changes in manual digital proportions may have facilitated the development of stone tool technology. (more here)

To over-simplify their claim: while feet were being selected for better load bearing and less mechanical work in running, it just so happened that this also made hands better suited for tool use.
So, the use of tools by early humans was due in large part to random chance. Since any event that relies on coincidence is less likely to be replicated, I read this as making the development of intelligence less likely and less inevitable. If so, the lack of other species in the night sky can be explained away in one more way: maybe most biological replicators don't get as lucky as we did in our evolutionary past, and never get a jumpstart towards tool use. Granted, this is rampant speculation, and it's possible that we would have started using tools anyway.
But still, this makes it at least slightly more likely that we'll end up colonizing the known universe, instead of withering away on this doomed planet. And for today, I'll celebrate that.
Wednesday, August 25, 2010
Guilt Tripping For Love
Let's say you've reached and passed the "love threshold" of caring about someone, but then they spurn your affections. At first you may try to win 'em back the old fashioned way (throwing yourself at them), but if this fails, what is one to do? Jesse Bering suggests that we may have an evolutionary tendency to begin earnestly sulking at this point:
[O]ne of the more fascinating things about the resignation/despair stage... is the possibility that it actually serves an adaptive signalling function that may help salvage the doomed relationship, especially for an empathetic species such as our own... [H]eartbreak is not easily experienced at either end, and when your actions have produced such a sad and lamentable reaction in another person, when you watch someone you care about (but no longer feel any real long-term or sexual desire to be with) suffer in such ways, it can be difficult to fully extricate yourself from a withered romance. If I had to guess—and this is just a hunch, in the absence of any studies that I'm aware of to support this claim—I'd say that a considerable amount of genes have replicated in our species solely because, with our damnable social cognitive abilities, we just don't have the heart to break other people's hearts.

The reason the sadness has to be legit is that humans are super savvy at detecting conscious deception ploys, and sadness recognized to be fake is not persuasive. So although one might prefer to merely fake sadness and otherwise go on as normal, that strategy has lower evolutionary fitness.
Bering's argument uses reasoning very similar to Michael Vassar's speculation that more social animals are more likely to feel pain. The only problem with both of these claims is that there's little direct data to back them up... can you think of any ethical way to test this?
Mark Cuban's Non-Probabilistic Thinking
His investment advice today is to pay off high interest debt, save your money in cash, and try to cut personal spending. Fair enough. But then he makes the outrageous claim that "If you have under 100k dollars in liquid assets, your net worth will be higher in one year if you follow this advice than if you follow ANY other investment advice any broker or banker will give you this year."
The likelihood of this claim proving true is vanishingly small. Out of all of the other pieces of investment advice proffered, surely some of these will beat the null strategy of playing it safe. Now, Cuban might argue that you can't identify which advice will allow you to beat the null a priori, and so you're better off not trying, but that's a totally different claim.
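A back-of-the-envelope simulation of that point; the return distribution here is an invented stand-in, not market data:

```python
import random

# Monte Carlo sketch: if brokers hand out many pieces of noisy advice, how
# often does at least one of them beat the "hold cash" baseline? The
# normal(0%, 15%) annual-return distribution is invented for illustration.
random.seed(0)
TRIALS, N_ADVICE = 10_000, 50
cash_return = 0.0   # ignore interest on cash for simplicity

wins = 0
for _ in range(TRIALS):
    returns = [random.gauss(0.0, 0.15) for _ in range(N_ADVICE)]
    if max(returns) > cash_return:
        wins += 1

print(f"P(at least one of {N_ADVICE} tips beats cash) ~ {wins / TRIALS:.3f}")
# Even with zero-mean advice this is ~1.0 (i.e., 1 - 0.5**50), which is why
# the claim that cash beats ANY other advice is almost surely false ex post.
```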
Bottom line: Cuban's blog gets demoted from "medium" to "low" priority in the Google Reader hierarchy, and is now teetering on the edge of unsubscribe territory.
Thursday, August 19, 2010
Sequence Continuation
The eight or so images from Manfred Harringer's paper describing a model for human pattern valuation are beautiful. Here's one:
And it's "solution":
To me, the key to understanding this continuation is that you can't just focus on the numerical pattern. You have to take into account the symmetry of the square as well.
And it's "solution":
To me, the key to understanding this continuation is that you can't just focus on the numerical pattern. You have to take into account the symmetry of the square as well.
Thursday, August 12, 2010
Trade Off #7: Proximity vs Scale
In many walks of life, we face incentives either to disperse our resources among a large set of options or to focus them in one area. Dispersion increases our proximity to each of the options, but by focusing in one area we often achieve benefits due to scale. Examples (a toy cost model of the first one follows the list):
- Multinational businesses can either produce their products close to customers and minimize transport costs, or they can produce their products in a central location that maximizes economies of scale. (see here)
- One paradox of updating our statuses and chronicling our thoughts on the internet: it scales well in that we can speak to nearly everyone, but at the same time we end up speaking to nobody in particular. In potentially reaching the masses, we inevitably sacrifice some of our proximity to others. (see here)
- In biology, cells can produce proteins at a central, diffuse location that minimizes the energy spent transporting ribosomes and RNA along the cytoskeleton, or they can produce proteins locally and maximize the probability that the protein ends up where it can interact fruitfully with other cellular components. (see here)
- In perception, one can pay attention to either the forest as a whole or look at individual trees, but not both at the same time. This generalizes to difficulties in understanding multiple levels of a hierarchy with just one approach. (see here)
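Here is that toy cost model of the first example; every cost figure is made up, and the point is only that neither side of the trade-off wins in general:

```python
# Toy model: one central plant maximizes economies of scale but pays more to
# ship; many local plants minimize transport but forgo scale.
# All cost numbers are invented for illustration.
N_MARKETS = 10
UNITS = 1000 * N_MARKETS   # total units demanded across all markets

def unit_cost(plant_volume):
    return 1.0 + 50.0 / plant_volume   # per-unit cost falls as a plant grows

def total_cost(n_plants, transport_per_unit):
    production = unit_cost(UNITS / n_plants) * UNITS
    transport = transport_per_unit * UNITS
    return production + transport

central = total_cost(n_plants=1, transport_per_unit=0.30)        # far but big
local = total_cost(n_plants=N_MARKETS, transport_per_unit=0.05)  # near but small

print(f"central: {central:,.0f}  local: {local:,.0f}")
# central: 13,050 vs. local: 11,000 here, but nudge the scale curve or the
# shipping rate and the verdict flips -- the trade-off has no general answer.
```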
(photo credit: Jonathon)
Wednesday, August 11, 2010
Randomization In Politically Sensitive Topics
When I was in high school we had a controversy over whether minorities were and should be over-represented in the pictures of school publications. Some thought the pictures attempted to display a false image of the school to spur donations, and this understandably annoyed them.
It was a tricky question because the school could always claim ignorance. They could argue that any deviations in the pictured sample from the total student population were merely statistical anomalies. Who could logically prove them wrong?
With hindsight, the best solution would have been for the school to draw up a list of all the students in the school and randomly choose which ones to include in the photo. And generally, with the now widespread availability of random number generators, it seems to me that the best solution to representing large communities with small sample sizes is to randomize the selections.
In a related issue, while describing some "athlete" in a post from two days ago, I had to choose between the pronouns him, her, him/her, or one of the many gender-neutral pronouns. Choosing just one of the first two could subconsciously bias readers into associating a particular gender with a particular activity, perpetuating stereotypes. On the other hand, him/her is unwieldy, and readers likely wouldn't understand the gender-neutral pronouns. I was stuck between being a sexist and being a sloppy writer.
So I decided to randomize my choice between "him" and "her." I went to random.org, assigned "1" to "him" and "2" to "her," and generated a random number between these two. I got a "2", so the pronoun I used was "her." This was kind of time consuming, but in the future one might imagine word processors offering the randomization of these pronouns as a standard feature. By randomizing, the reader is no longer systematically biased to associate a certain behavior with a certain gender.
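In code the whole procedure is a few lines; here's a sketch covering both the pronoun coin flip and the school-photo sampling (the roster and seed are hypothetical):

```python
import random

# Committing to a public seed in advance makes the draw reproducible, which
# bears on the trust problem discussed below. The seed value is arbitrary.
random.seed(2010)

# The pronoun coin flip -- equivalent to the random.org procedure above.
pronoun = random.choice(["him", "her"])
print(f"Chosen pronoun: {pronoun}")

# The school-photo version: sample k students uniformly from the full roster,
# so any over- or under-representation really is just sampling noise.
roster = [f"student_{i}" for i in range(1, 1201)]   # hypothetical 1200 students
photo_subjects = random.sample(roster, k=12)
print(photo_subjects)
```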
One remaining issue is that people might not trust that authors and institutions have actually carried out the randomizations, rather than faking them and going with the option that made them look best. Thus I can foresee the creation of The Institute Of Randomness, whose role is to impartially randomize words and samples for institutions, authors, and ad agencies. The institute might even offer to randomize the gender, sexual orientation, attractiveness, ethnicity, etc., of characters in stories, so as to further minimize stereotyping.
Another benefit of randomizing is that it eliminates the condescension often involved in direct reversals of stereotypes. Instead, you end up with equality under the laws of probability, which is as it should be.
Tuesday, August 10, 2010
Is Love On A Spectrum?
Given my working assumption that every human tendency is on a spectrum, it seems reasonable that love, too, is on a spectrum. But this is controversial, because it implies that there are degrees to how much one loves, whereas in the popular conception there are no degrees to love: the question is a binary "do you love me?", and that's that.
So, is love an exception to the no thresholds assumption? I'm thinking no, and here's why.
First, I agree with Elie Wiesel, who said that "the opposite of love is not hate, the opposite of love is indifference." This makes sense because hate is often correlated with love. Think of Elizabeth and Darcy in Pride and Prejudice.
Second, love is usually a relative term. Competition of one particular love over other possible loves is implicit. To tell someone that you love them, but that you love everyone else just as much, is basically to say that you don't really love them.
Given the above, this is my preferred model. We have a spectrum of how much we care about a given thing. This spectrum runs the gamut from totally indifferent to really, really caring a lot. You can always care more, you can always care less. "Love" is the state of being at some point among the set of caring levels above some arbitrary point, the "love threshold." This diagram may help you to visualize:
So, while it's natural that there will be some fluctuations in how much you care, as long as your feelings remain in the "in love" portion of the spectrum, you can still honestly say that you're in love. There are degrees to caring, but once you've defined your love threshold, there can be no degrees to love.
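A minimal sketch of the same threshold model in code, with an arbitrary cutoff standing in for the diagram:

```python
# Minimal sketch of the threshold model: caring is continuous, while "love"
# is the binary predicate induced by an arbitrary cutoff. Numbers invented.
LOVE_THRESHOLD = 0.7   # shifts with jadedness, neediness, drunkenness...

def in_love(caring):
    # caring runs from 0.0 (total indifference) upward
    return caring >= LOVE_THRESHOLD

for caring in (0.20, 0.69, 0.70, 0.95):
    print(f"caring={caring:.2f} -> in love? {in_love(caring)}")
# Degrees of caring, but no degrees of love: 0.70 and 0.95 both return True.
```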
Nevertheless, where one defines the love threshold can vary based on, among other things, jadedness, neediness, and drunkenness. This variability underscores that the concept of love will always and forever be an abstraction, a human construct designed to serve our far mode ends.
Edit: See Lemmus's insightful comments below, suggesting that I change "caring" above to "liking," which makes sense and is more consistent with the difference between love and hate.
Monday, August 9, 2010
Trade Off #6: Training vs Battling
Deciding whether to build resources or engage in actual competition is a common trade off that individuals must wrestle with. The upsides to building up resources are that you'll have a better chance of success when you do engage, and that it may be safer. The upsides to battling now are that you'll have more immediate feedback, and that you may end up with more total chances to battle. Examples:
- An aspiring athlete can either practice her fundamentals (training) or go play the actual sport (battling).
- Periods of seismic quiescence (fewer small quakes) from 1 to 2.5 years often precede large earthquakes (see here). One can think of quiescence as "training" that builds up tension in the fault prior to rupture.
- In developmental economics, there is an inverse correlation among children between hours of work and reading / math skills (see here). Generally, studying is training while doing is battling.
- In reproduction, some models argue that there is a trade off between sperm production and securing mates (see here). Production is training while mating is, in some senses, a battle.
(credit for sweet photo of praying mantis goes to HUS0)
Sunday, August 8, 2010
Spectrums Everywhere
The concept of neurodiversity seems to have originated with respect to autism, but it is now being generalized to nearly everything else. Thomas Armstrong explains one of the key components in a recent interview:
One of the eight principles that I discuss in my book Neurodiversity is that everyone exists along "continuums of competence" with respect to a range of human processes including sociability, literacy, intelligence(s), attention, mood, and so forth. This is very similar to the DSM-V's embracing of a dimensional perspective, and to that extent, I think the DSM-V is moving in the right direction. The problem is that the DSM-V will be a high stakes publication, and if people are put on a continuum from normal to pathological, the fuzzy line where normal becomes pathological (and vice versa) becomes very important, and may determine whether a person will be labeled with a disorder, given a drug treatment, and perhaps even stigmatized as a result. There's a danger that many so-called normal people will be added to the ranks of the mentally disordered. Also, what's missing from the DSM (in all its versions) is any kind of discussion of the positive dimensions of each of the disability categories.

Genomic studies continue to struggle to find correlations between specific polymorphisms and psychological traits (see here). What this indicates is that a bunch of little genetic polymorphisms shift your tendencies in one direction or another, but there are no large discrete steps.
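A quick simulation of that last point (all parameters invented): summing many small polymorphism effects produces a smooth continuum with no discrete steps.

```python
import random

# Sketch: a trait built from many small additive genetic nudges comes out as
# a smooth continuum, not discrete categories. All parameters are invented.
random.seed(42)
N_SNPS = 500       # hypothetical number of relevant polymorphisms
N_PEOPLE = 10_000

def trait_score():
    # each polymorphism shifts the trait slightly up or down
    return sum(random.uniform(-0.1, 0.1) for _ in range(N_SNPS))

scores = sorted(trait_score() for _ in range(N_PEOPLE))
lo, hi = scores[0], scores[-1]

# Crude text histogram: the population thins smoothly toward both tails.
for b in range(10):
    left = lo + (hi - lo) * b / 10
    right = lo + (hi - lo) * (b + 1) / 10
    count = sum(left <= s < right for s in scores)
    print(f"[{left:+6.2f}, {right:+6.2f}) {'#' * (count // 100)}")
```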
So, statistically speaking, we should all be somewhere on the spectrum of every sort of tendency and disorder. Now, people surely shift along these spectrums, due to changes in one's environment, developmental clocks, and the probabilistic nature of gene expression. But people should still remain somewhere on the spectrum of any given cognitive style.
I will spare you my rampant speculation from this point on, but believe me, I could go on for days. The bipolar spectrum, the schizophrenia spectrum, the addiction spectrum, to name a few: it is very interesting to extrapolate from the idea that we all should be at least somewhere along these.
Saturday, August 7, 2010
Two Tools For The Citizen Scientist
1) A portable electrophysiology toolkit. Strap it to a cockroach's leg, stimulate it, and it spits out neuromuscular action potentials. Try varying the conditions and see if you can get the frequency of the action potentials to change. This will run you around $110. (HT to Andrew Hires)
2) A microscope on your cell phone. It has a resolution of ~ 1 micrometer and is amazingly lensless. Apparently, it will only cost $10 but probably won't hit the market for 5 years, which is unfortunate. There are cool comparison photos with a conventional microscope (10x objective lens, 0.25 numerical aperture) but the paper is stuck behind a paywall and so I can't post them due to copyright issues. (HT on the concept to Jason Snyder).