Intrade totally blew it with the Sarah Palin pick, and some people have been using this as part of a molehill of anecdotal evidence for why prediction markets are fundamentally flawed. I agree that some specific predictions have sucked, although I suspect that they would have been better if there had been a larger sample size. (A larger sample size would be easier to amass if the contracts were more openly legal, but I digress.)
But the great part about prediction markets is that if you think any other system is better, you're free to use those predictions to guide which contracts you will buy in the prediction market. If your other system is indeed better, then you will quickly become outlandishly rich. So the question is... if you think prediction markets suck, then why aren't you a multi-millionaire?
(Hat tip: Benny Caz)
Saturday, August 30, 2008
Friday, August 29, 2008
The statistician's credo
Here are two attempts at encapsulating a passion for numbers. The first is from the movie Pi:
"Restate my assumptions: One, Mathematics is the language of nature. Two, Everything around us can be represented and understood through numbers. Three: If you graph the numbers of any system, patterns emerge. Therefore, there are patterns everywhere in nature. Evidence: The cycling of disease epidemics; the wax and wane of caribou populations; sun spot cycles; the rise and fall of the Nile. So, what about the stock market? The universe of numbers that represents the global economy. Millions of hands at work, billions of minds. A vast network, screaming with life. An organism. A natural organism. My hypothesis: Within the stock market, there is a pattern as well... Right in front of me... hiding behind the numbers. Always has been."
The rest of the film is just OK, but that line alone rightfully earned it a cult following. The second is a quote from Radford Neal's PhD thesis (via Andrew Gelman):
"Sometimes a simple model will outperform a more complex model... Nevertheless, I believe that deliberately limiting the complexity of the model is not fruitful when the problem is evidently complex. Instead, if a simple model is found that outperforms some particular complex model, the appropriate response is to define a different complex model that captures whatever aspect of the problem that led to the simple model performing well."
Why can't statisticians be romantic too?
Thursday, August 28, 2008
Updated thoughts on climate change
"When the facts change, I change my mind. What do you do, sir?" - John Maynard Keynes
1) There are economic costs to curbing climate change, but that doesn't mean they aren't worth paying. What we need is dispassionate cost-risk analysis. More articles like this, summarized here, would be a good start.
2) Save the cheerleader, not the world. However one approaches the problem, curbing climate change should not be focused on "saving the planet," but instead saving human lives. In 10,000 years the carbon levels will reset, no matter how high they get today. Don't feel sorry for planet earth, it doesn't care either way.
3) Don't blame everything on China and India. In 2007 the United States had about 5% of the world's population but we emitted about 25% of the world's greenhouse gases. Once we get those numbers to near even, then we can start being the moral police.
4) It's OK to be fervent about your beliefs; there's nothing irrational about passion. If you believe that unregulated climate change will kill a large number of people, and this is an outcome that you would like to stop, and you believe that there is something that can be done about it, then you should feel strongly about it. Just make sure that your emotions come from the facts, not the other way around.
5) It's irrational to get angry at a neighbor who drives a gas-guzzling Hummer. It is more rational to get angry at politicians who refuse to institute a carbon tax that would force your neighbor to pay society extra for the greenhouse gases he emits.
6) I support a carbon tax over a cap-and-trade policy, even though it has the ugly word "tax" in it. If a politician supporting a "tax" is not electable, he/she should lie about it.
7) Planting trees should be considered a viable short-term solution, because they are excellent carbon dioxide "sinks." Some people don't love this idea, because eventually the tree will die and, as it decomposes, release its carbon back into the atmosphere. But in 250 years we'll have wild new technology to solve our problems; we just need to get there first.
8) Nuclear power should also be considered a short-term solution. This is despite Vassar's Miscellany News decrying the industry as "one that has been propped up by the government for far too long." Hmmm... I don't really see that in the numbers (the chart is from here). When I've talked to engineers and scientists about alternative energy, they've predominantly supported nuclear energy over all other sources.
9) There's no need to argue for "more research" into alternative energy. If the financial incentives are there, firms will research ways of producing energy without emitting greenhouse gases. NIH funding should be there too, but I don't see climate change as more urgent than research into cancer, genetic engineering, or neuroscience. We always need to consider opportunity cost.
10) I've heard pet theories about wind, solar, geothermal, wave, tidal, hydro, carbon trapping at the source, et cetera et cetera. It doesn't matter which your favorite one is, the best we can do is to tax greenhouse gas emissions and let the open market decide. I don't favor subsidies for any of these industries, but I do favor an immediate flat tax on all carbon emissions, which would accomplish the same goal.
11) Ethanol (and other biofuels) subsidies are a mistake, since it generally takes more than a gallon of fossil fuels to make one gallon of ethanol (see here). This is an example of why we cannot trust the government to subsidize certain types of energy over others.
If I had to perform triage on problems facing humankind today, climate change would not be my first priority (nuclear proliferation probably would be). But it would be up there.
Tuesday, August 26, 2008
Tuesday Statisticz: Buying ballgames, how MLB payrolls correlate with success
That's the title of my statistics riff for August, which you can read here. In large part I did it because some of the other attempts on the internet seemed so inept. By looking at data from the past 6 years, I tried to show exactly how helpful additional payroll money is. Check it out if you're interested, and if you get bored with any of the writing, don't feel guilty about skipping directly to the juicy graph.
Monday, August 25, 2008
Negotiation 101
One of the most common negotiating techniques that movie characters employ is to question whether their enemy really wants to do what he is threatening to. Here's a typical encounter:
Character A: "I'm calling state police in five minutes. They'll be here in ten."
Character B: "Thought you would've done that by now. You know why you haven't? Because you think this might be an irreparable mistake. Because deep inside you, you know it doesn't matter what the rules say. When the lights go out, and you ask yourself 'is she better off here or better off there', you know the answer."
Here's the structure of it. Character A makes a threat, and character B doesn't want him to go through with the threat, so he tries to instill some doubt in character A.
Character B is playing on character A's cognitive dissonance. The idea is that if character A really wanted to call the state police, he already would have. There would be no need for further discussion. But since character A has not yet called the state police, he must not actually want to.
One of the boldest uses of this technique is when somebody has a gun to your head. To play character B's role and attempt to convince character A that he doesn't want to kill you would take guts. But if the technique works, you should apply it everywhere. You must be willing to swallow that bullet!
Does anyone know if this is a certified negotiation technique, or is it merely a Hollywood concoction? Maybe Influence will let me know, I just ordered it from Amazon.
Sunday, August 24, 2008
Tracking results leads to success
Chris Wanstrath delivers a fascinating keynote address on why you should make the time in your life for some sort of side project. He specifically addresses computer programmers, but the advice could apply to anyone.
And how does he recommend that you motivate yourself to work on the side project? By tracking your results:
"Every time you work on your side project, mark a big X through that day on a calendar. Eventually you'll have a nice line of Xs. Missing an X will be torture -- it'll mess up your beautiful streak. The goal is to maintain the streak, even if you don't think you have any ideas for the day. The best way to overcome writer's block is to write, after all."
This probably isn't the first time you've heard this advice, and it probably won't be the last. But I want to stress that it's worked for me, too. I am now loath to mark down that I procrastinated the previous day, and I love to mark down that I spent the time in particularly productive ways, like working on a long-term project.
If you're thinking of tracking your results (on anything!), I recommend Google Documents, so that you can do it anywhere and don't need to worry about losing a piece of paper. We'll see if it works for Jer and his 25 book challenge.
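The calendar-of-Xs idea can be sketched in a few lines. Everything here is hypothetical (the function name and the sample log are invented for illustration): the streak is just a walk backwards from today through consecutive marked days.

```python
from datetime import date, timedelta

def current_streak(days_worked, today):
    """Length of the unbroken run of consecutive marked days ending at `today`."""
    worked = set(days_worked)
    streak = 0
    day = today
    while day in worked:
        streak += 1
        day -= timedelta(days=1)  # step back one calendar day
    return streak

# Hypothetical log: three days in a row, plus one earlier day with a gap after it.
log = [date(2008, 8, 20), date(2008, 8, 22), date(2008, 8, 23), date(2008, 8, 24)]
print(current_streak(log, date(2008, 8, 24)))  # → 3
```

A single spreadsheet column of dates (in Google Documents or anywhere else) would feed the same function.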
Saturday, August 23, 2008
More on the workforce/academia dichotomy
Zbicyclist posted some interesting comments on my post about "obligation bankruptcy," and he's copied them onto his blog here. He does a good job of connecting theoretical nonsense to a practical application--something I need to do more of. Thanks for the discussion.
Friday, August 22, 2008
Call me the treasurer because I'm in the business of coining
In The Stuff of Thought, Steven Pinker lays out a model for why some words catch on and others don't. The key factors are frequency, unobtrusiveness, diversity of users and situations, generation of other forms and meanings, and endurance of the concept.
Frequency, diversity, endurance, and generation of other forms are what happens when the word is successful, but they won't help you actually coin the word. So the main factor we are left with is unobtrusiveness. This means that the word can't be too cute, and can't be too long.
I'm not sure how much this academic stuff will help us to name the phenomenon described in "obligation bankruptcy," but it can't hurt, right?
And here are my next two suggestions: ingrown planning and dragging the anchor. I don't want to explain them because they ought to be fairly intuitive.
Let's brainstorm this, Seth Godin style. Throw some ideas out there, if nobody likes it we can toss them right back, no harm done.
Tuesday, August 19, 2008
Obligation bankruptcy
The longer you wait to call or e-mail somebody back, the harder it is to dial those numbers. The longer you wait to start a homework assignment, the harder it is to open your backpack. The longer you wait to do anything, the harder it is to do it.
I'm not sure if this has been proven experimentally, but I've seen it too much personally to discount it. I think that it has to do with a vague combination of cognitive dissonance and avoidance. You think to yourself, "but if I really wanted to do this, wouldn't I have done it at the first opportunity?" So you then deduce that you don't actually want to do it at all.
The main problem with this phenomenon is that it doesn't have a name, so I am naming it: obligation bankruptcy. This name is not totally original. I stole the structure from Chris Yeh, who a couple of years ago wrote about declaring "RSS bankruptcy." But now I am beginning to think that the name may be too long-winded, so I open up the problem to my readers. Any better ideas?
Coincidentally, I have been putting off this post for at least two weeks now. Thanks to Ben Casnocha for spurring a convo about it.
Sunday, August 17, 2008
Why some ideas fail
"Suppose we made, let us say, French our "official" language for fifteen years, then Japanese for the next fifteen. The English language would still be spoken by nearly everyone, but in thirty years, we would all be trilingual."
That's from Neil Postman's The End of Education, and I think it is a terrible idea. What makes it so bad is the blatant disregard for opportunity cost. Sure, it might be marginally better if everyone in the country was trilingual, but is it worth all of that time and effort? No way.
This is a general rule: most bad ideas suffer because they don't take into account how much time or money or effort will be necessary to implement them. It's not that the outcome wouldn't necessarily be an improvement. But whenever we consider a course of action, we have to consider what we could accomplish instead of that action as well.
I am not great at this, and I'll admit that it is hard. But some people, like those proposing compulsory national service, an invasion of Iran, or making French the official language of the US, are clearly not even trying.
Wednesday, August 13, 2008
Rating incompleteness theorem
Background: The anchoring effect is when a value gets assigned to an object, and subsequent guesses or proposals hover around that first number. The classic study showing the effect asked individuals what percentage of African nations were members of the UN. Those who were asked "is it higher or lower than 45%?" gave lower answers than those asked "is it higher or lower than 65%?".
The anchoring effect's application to IMDb's rating system is obvious. If you are rating a movie that is already rated highly, you will probably give it a higher rating. Statistically, this means the ratings are dependent--each one adjusts at least slightly based on the data points that came before it.
This is a fairly big problem for IMDb, and I've thought a lot about how they could fix it. One fix would be to reveal a movie's overall rating only after a user has rated it. But that would cripple the usefulness of IMDb: the rating system's chief utility is to tell us which movies we should watch, not to rank movies we've already seen.
We're left at an impasse, a catch-22, an inconsistent self-referential loop. If you strive to eliminate the anchoring effect, you destroy the utility of the rating system. But if you allow the anchoring effect, then your values are biased. The system is incomplete, and I'm not sure that you can solve it without vastly decreasing the sample size. Obviously I'm open to any of your ideas.
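To see how the dependence plays out, here's a toy simulation (every number in it is invented): each rater posts a blend of their honest opinion and the running average displayed on the page. With a fixed random seed, a higher opening anchor always drags the final average higher.

```python
import random

def simulate_ratings(true_mean, anchor, n_raters, pull=0.3, seed=0):
    """Average rating when each rater blends their honest opinion
    with the running average they see (the anchoring effect)."""
    rng = random.Random(seed)
    total, count = anchor, 1  # the first displayed value is the anchor
    for _ in range(n_raters):
        honest = rng.gauss(true_mean, 1.0)   # rater's private opinion
        displayed = total / count            # what the page shows them
        posted = (1 - pull) * honest + pull * displayed
        total += posted
        count += 1
    return total / count

# Hypothetical case: the crowd honestly thinks the film is a 6.0,
# but the page opens with a 9.0 anchor versus a 3.0 anchor.
print(simulate_ratings(true_mean=6.0, anchor=9.0, n_raters=20))
print(simulate_ratings(true_mean=6.0, anchor=3.0, n_raters=20))
```

The `pull` parameter is the assumption doing all the work here; a real model of the bias would have to estimate it from data.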
Tuesday, August 12, 2008
Nonuse of behavioral economics in software updates
Am I the only one that clicks "remind me later" 99% of the time my computer asks me if I want to do something?
Behavioral economics, from what I can gather, has a few central tenets. People prefer to choose the "middle" option, so anchor them to a really high price (like a $99 "Titanic" sushi dinner) and they'll pay more. People tend to neither opt in nor out of programs, so if you want organ donors, make them opt out. And finally, people are rampant procrastinators!
That third point is why the "remind me later" button is so poorly designed. It allows you to put off the problem until later at low cost. If the software engineers want you to make the updates, then the button should instead read: "I don't have time to update now." And if they don't care whether you make the changes, then why would they waste your time asking?
My suggestion isn't great, but at least users may consciously think about how much time they have. And if you combine this with a button telling them that updating will take less than 10 seconds, that would be a powerful 1-2 punch.
Sunday, August 10, 2008
Career-dependent tendencies
People who have worked as a waiter insist that you must tip heavily for good service. The same goes for other types of service workers, like taxi drivers. Whether they are still in the profession or not, they still signal aggressively that they tip well.
But they're not the only people that act this way. I know an aspiring film major who insists on watching the credits after a movie, to show respect to the people who made the film.
This behavior is irrational. Being adamant about tipping makes you look silly, and watching the credits is a waste of time because you won't remember the names anyway. I'm not immune to it either: I used to be a lifeguard, and it has made me a stickler about rules around the pool. (The pool is supposed to be relaxing).
Why does this happen to people? I think that it's a matter of cognitive dissonance. If we work for a long time in a profession, we want our customers to act in a certain way. Then we start expressing these views (even inwardly).
When the roles are reversed and we are in the position of customer, we feel psychological pressure to match our actions with our words. This is irrational, because it leads us to weigh certain aspects of life (those relating to our work) over all others.
Correctness bias
From The Moral Animal:
"One might think that, being rational creatures, we would eventually grow suspicious of our uncanny long string of rectitude, our unerring knack for being on the right side of any dispute over credit, or money, or manners, or anything else. Nope. Time and again--whether arguing over a place in line, a promotion we never got, or which car hit which--we are shocked at the blindness of people who dare suggest that our outrage isn't warranted."
Sometimes, the truth hurts. Evolutionary psychology is an explanation for biases, but it need not be used as a rationale.
Thursday, August 7, 2008
Nonbelief in the implied invisible
I was driving today and I noticed a weird clicking noise on the left side of my car. It sounded like my brake was intermittently scraping against the road. It was disturbing, and I was worried that there might be something wrong. So I rolled up the window and the noise went away. Problem solved.
The ability to turn off your belief in the implied invisible when necessary is an integral part of your procrastinator toolbox.
Wednesday, August 6, 2008
Individual over group bias
The claim that individuals believe themselves to be better drivers than average is one of the common pieces of evidence used to show the effect of overconfidence bias. The BPS Research Digest reports that the experimental method behind these results may be slightly flawed.
Instead of asking students to compare themselves to the overall group (or average), the researchers asked students to compare somebody else to the group. With this manipulation, they found that students still showed a preferential bias towards the individual over the group, whether that individual was themselves or somebody else.
This result doesn't imply that individuals are not overconfident--we still are--but merely that the cause of that overconfidence may not be narcissism but instead a failure to compare an individual to a fair population mean.
Tuesday, August 5, 2008
The prisoner's dilemma in Rififi
"I liked you. I really liked you, Macoroni. But you know the rules." -- Tony le Stéphanois
Rififi includes the classic gangster movie scene where one guy has ratted out his friend, and the boss of the gang finds him in a helpless situation. The squealer tries to plead his case, but the hoodlum mercilessly kills him anyway. In this case, the boss delivers a classic line before doing so.
Who would sign up for a job with rules like these? These rules seem counter-productive, since more people on your side will end up dead. But game theory predicts otherwise.
The situation is that you are held captive by a rival gang, and they want to know where one of your friends is hiding out. Here's the trade-off without "the rules": if you rat out your friend's location, you'll end up with a moral hangover but a better chance of getting off. With "the rules," if you rat out your friend's location, you're less likely to be killed by your enemies, but you will be killed by your friends.
If "the rules" are enforced 100% of the time, it will no longer be in your best interest to squeal. This equilibrium is better for your gang overall, since your rivals only can take one of your gang members hostage.
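The trade-off above can be sketched with some invented probabilities; the specific numbers are made up for illustration, and only the ordering of the payoffs matters.

```python
# Hypothetical utilities for the hostage's two options.
P_RELEASED_IF_SQUEAL = 0.6   # rivals let you go after you talk
P_RELEASED_IF_SILENT = 0.2   # rivals let you go even though you didn't
ALIVE, DEAD = 1.0, 0.0

def expected_payoff(p_released_by_rivals, p_killed_by_own_gang):
    """Expected utility: you live only if the rivals release you
    AND your own gang doesn't kill you afterwards."""
    p_survive = p_released_by_rivals * (1 - p_killed_by_own_gang)
    return p_survive * ALIVE + (1 - p_survive) * DEAD

# Without "the rules": your own gang never punishes squealing.
squeal_no_rules = expected_payoff(P_RELEASED_IF_SQUEAL, 0.0)
silent_no_rules = expected_payoff(P_RELEASED_IF_SILENT, 0.0)

# With "the rules" enforced 100% of the time: squealers are always killed.
squeal_rules = expected_payoff(P_RELEASED_IF_SQUEAL, 1.0)
silent_rules = expected_payoff(P_RELEASED_IF_SILENT, 0.0)

print(squeal_no_rules > silent_no_rules)  # squealing pays without the rules
print(squeal_rules < silent_rules)        # and never pays with them
```

Any positive release probabilities give the same ordering, which is the whole point: the rules flip the hostage's incentives regardless of the exact odds.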
That's why enforcing the rules 100% of the time is so crucial for any gang lord, and that's why a ruthless, cold-blooded killer is so valued in that job market. Of course, once you bring torture into play, these basic rules dissolve, and things get even crazier. The take-home message is that if you don't grasp basic game theory, you will not be enjoying gangster movies as much as you should be.
Rififi includes the classic gangster movie scene where one guy has rats out his friend, and the boss of the gang finds him in a helpless situation. The squealer tries to plead his case, but the hoodlum mercilessly kills him anyways. In this case, the boss delivers a classic line before doing so.
Who would sign up for a job with rules like these? These rules seem counter-productive, since more people on your side will end up dead. But game theory predicts otherwise.
The situation is that you are held captive by a rival gang, and they want to know where one of your friends is hiding out. Here's the trade-off without "the rules": if you rat out your friend's location, you'll end up with a moral hangover but a better chance of getting off. With "the rules", if you rat out your friend's location, you'll less likely be killed by your enemies, but you will be killed by your friends.
If "the rules" are enforced 100% of the time, it will no longer be in your best interest to squeal. This equilibrium is better for your gang overall, since your rivals can only take one of your gang members hostage.
That's why enforcing the rules 100% of the time is so crucial for any gang lord, and that's why a ruthless, cold-blooded killer is so valued in that job market. Of course, once you bring torture into play, these basic rules dissolve, and things get even crazier. The take-home message is that if you don't grasp basic game theory, you will not be enjoying gangster movies as much as you should be.
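That trade-off can be sketched as a toy payoff comparison. The probabilities below are invented purely for illustration; the point is just that flipping enforcement on flips which choice is rational:

```python
# Toy payoff model of the squealing dilemma described above.
# All probabilities are made up for illustration, not from any real data.

def survival_chance(squeal, rules_enforced):
    """Return the hostage's chance of surviving (higher is better)."""
    p_killed_by_rivals = 0.3 if squeal else 0.8   # squealing appeases captors
    p_killed_by_gang = 1.0 if (squeal and rules_enforced) else 0.0
    return (1 - p_killed_by_rivals) * (1 - p_killed_by_gang)

# Without "the rules", squealing is the rational choice:
assert survival_chance(True, False) > survival_chance(False, False)
# With 100% enforcement, staying silent dominates:
assert survival_chance(False, True) > survival_chance(True, True)
```

The exact numbers don't matter; any enforcement probability high enough to push the squealer's survival chance below the silent option produces the same equilibrium.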
Monday, August 4, 2008
Analyzing personality based on writing style
Would it be possible to analyze somebody's personality and place them on a sliding scale based on a writing sample? Assume that you have a large sample, like all of the writing that a person has posted on forums.
You could look at various factors: the number of positive vs. negative adjectives, the number of exclamation marks, and the average sentence length, for example. You would then tally up these factors, plug them into some sort of formula, and classify the personality of the user.
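A minimal sketch of that tally, assuming nothing beyond the factors just listed (the word lists here are tiny placeholders, not validated sentiment lexicons):

```python
import re

# Placeholder word lists; a real system would use a proper sentiment lexicon.
POSITIVE = {"great", "happy", "love", "cool", "wonderful"}
NEGATIVE = {"bad", "hate", "awful", "boring", "sad"}

def extract_features(text):
    """Count the simple stylistic signals described above."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "positive": sum(w in POSITIVE for w in words),
        "negative": sum(w in NEGATIVE for w in words),
        "exclamations": text.count("!"),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

sample = "I love this! It is great. Not boring at all."
print(extract_features(sample))
```

The "some sort of formula" step is where the real work would be: you'd need labeled personality data to learn weights for these counts rather than picking them by hand.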
I think it might be difficult, but it could be cool for a few reasons:
1) A low cost to the consumer, because you utilize data they already have. People could just copy and paste the stuff they've already written from their blog or facebook wall-to-wall conversations, or wherever. At a low cost, they are able to get back highly specific content.
2) Look at how successful horoscopes have been, and they are based on precious little scientific basis. If the data could be put through a few machine learning sessions matching writing samples to results of personality tests, you could at least have some idea of people's personalities. Or, you could maybe put in writing from famous authors and then match people's copy to them. Users would find out whether they have personalities more similar to Gabriel Garcia Marquez, Shakespeare, or Chuck Palahniuk.
3) People could tabulate their results and aggregate them on their blog. Plus, it would scale well if a few algorithms were doing most of the heavy lifting.
Downsides abound as well, but is any one crippling? I see this happening in the next ten years, unless the internet quickly moves out of a text-based environment.
Sunday, August 3, 2008
An application of the conformity theory
Wild story from David Samuel's New Yorker article on California's marijuana entrepreneurs: "While Blue napped, I wandered around his apartment, and counted nearly a dozen images and carvings of the elephant-headed Hindu god Ganesha. The proliferation of Ganesha dates back to a well-publicized federal bust in January, 2007, when the D.E.A. seized the medicine and cash of eleven pot dispensaries in Los Angeles. The only major dispensary that wasn’t busted had a Ganesha in its window. Now it is hard to find a karmically inclined ganja dealer in Los Angeles who doesn’t own a herd of lucky figurines."
The rest of the article is fascinating too. What strikes me is that Ganesha statues are the perfect application of the conformity theory. They're so damn cool: there's a catchy story behind them, non-conformists hoard them, and they are random enough that if anybody does own a Ganesha statue, they must be cool.
This last point, I think, is the crucial one here. Certain phenomena seem to straddle both sides of the conformity theory at once. It's sort of like trucker hats in the early 2000s: they could be cool or they could be stupid, depending on why you were wearing one. Another example might be really bad movies, which are only allowed to be enjoyed in an ironic sense.
But with Ganesha statues, you don't have to make this distinction. Until they are sold at any major retailers, they will remain 100% cool.
Saturday, August 2, 2008
Shooting the moon in academia
Some background: the goal in the card game "hearts" is to avoid tricks with hearts and the queen of spades. Each of these cards counts as a point, and you win by having as few points as possible. However, there is a rule that if you get all of the hearts and the queen of spades, you get zero points and everybody else gets 26. This rule is called "shooting the moon."
Sometimes, when you take a test and do very badly, you are worried that you might get every question wrong. But then again, this is fairly impressive! You have to know something about the material to get every problem wrong.
My proposal is that there should be a way to "shoot the moon" on tests, where you answer every question incorrectly. You would get 150% (instead of a max of 100%), and maybe everybody else in the class who would otherwise get an A would lose 1% from their score. If multiple people "shot the moon", these negative percentage points would not stack.
Any takers? It would certainly bring out the gambler in each of us, because if you were to get just one problem right, it would mean a seriously failing grade.
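The grading rule above is simple enough to prototype. A sketch, assuming an "A" means 90% or better (the original proposal doesn't pin down the threshold):

```python
def apply_shoot_the_moon(scores):
    """scores: dict of student -> fraction of questions answered correctly.

    Anyone who answers every question incorrectly gets 150%. Each
    would-be A student (assumed to be >= 90%) loses one point, and the
    penalty is applied at most once no matter how many students shoot
    the moon (the penalties "do not stack").
    """
    shooters = {s for s, score in scores.items() if score == 0.0}
    final = {}
    for student, score in scores.items():
        if student in shooters:
            final[student] = 150.0
        elif score >= 0.90 and shooters:
            final[student] = score * 100 - 1  # one-time penalty
        else:
            final[student] = score * 100
    return final

print(apply_shoot_the_moon({"ann": 0.95, "bob": 0.0, "cam": 0.6}))
```

Note how the risk shows up in the model: a shooter who slips and gets a single question right lands near 0% instead of 150%.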
Friday, August 1, 2008
Correlations between economic and political freedom
Scott Aaronson constructs an interesting graph on his blog correlating Freedom House's 2008 Freedom in the World survey and the WSJ's 2008 Index of Economic Freedom. He calls his own analysis amateur, because a political scientist would reduce each comparison to chi-squared statistics and test their significance, but whatever, that is boring. Listen Scott, there's nothing wrong with a simple t-test now and then.
There is a correlation between the two types of freedom, and he notes that:
"Looking at the countries in question, it seems clear that part of this correlation is due to both freedoms being correlated with economic development, i.e. 'having your national shit together.'"
Indeed.
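For anyone who wants to check a correlation like this at home, a Pearson r takes only a few lines of standard-library code. The country scores below are invented placeholders, not the real index values:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (political freedom, economic freedom) scores per country:
political = [9, 3, 7, 2, 8]
economic = [7, 5, 6, 2, 9]
print(pearson_r(political, economic))
```

With real index data, you'd also want a significance test, since five countries' worth of points can produce a large r by chance.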