You can find a discussion of it in this (unfortunately gated) article, in the context of medicine. They discuss two types. The first is "swearing so as to enable empathy and catharsis," in which you swear to mirror the feelings of the patient and spur him to discuss those feelings further. The second is "swearing to create a feeling of social equality," which shows that you are willing to break pointless rules and instead care about actual results. Generally, they argue that swear words can lend an emotional edge to a sentence that other words cannot. I agree.
It might have been nice if they had discussed swear word overuse. One high school basketball coach of mine used to swear only sparingly, but when he did you really listened. Yet if you are known to almost never swear, the rare event of your swearing might itself distract attention from what you are actually saying. Thus it is tricky to find the equilibrium rate of swearing that preserves maximum impact for the moments of maximum need. Very fucking tricky.
Monday, September 27, 2010
Three Surprising Findings On "Genius"
From Dean Simonton, via Irfan Alvi's review of his book. The more surprising ideas are that:
1) "In some domains, overtraining can be detrimental to creativity. In these cases, cross-training can be more effective." Presumably cross-training involves learning about a variety of different topics. This seems useful either because it allows you to apply ideas in other fields to your own, or because it allows you time away from your main field to avoid getting bogged down in details. Or both.
2) "The creative processes underlying genius-level achievement are still not well understood, although use of heuristics and combinatorial thinking appear to be typically involved." It'd be nice to know precisely which heuristics lead to genius level output!
3) "Psychopathology has a positive correlation with level of genius, but outright madness inhibits genius and higher intelligence tends to provide the capacity to prevent outright madness." This jives with some research. For example, this study considers the effect of the interaction between cognitive ability and neuroticism on managerial performance. Regressing cognition and neuroticism alone explained 4% of the variance in performance, but adding the interaction term raised this to 19%. And after partitioning their sample in half by cognitive ability, they found the higher half had a positive and significant relationship between anxiety and performance, while the lower half had a negative and barely non-significant relationship.
I am still heartily recommending Simonton's Creativity in Science to people interested in these issues.
1) "In some domains, overtraining can be detrimental to creativity. In these cases, cross-training can be more effective." Presumably cross-training involves learning about a variety of different topics. This seems useful either because it allows you to apply ideas in other fields to your own, or because it allows you time away from your main field to avoid getting bogged down in details. Or both.
2) "The creative processes underlying genius-level achievement are still not well understood, although use of heuristics and combinatorial thinking appear to be typically involved." It'd be nice to know precisely which heuristics lead to genius level output!
3) "Psychopathology has a positive correlation with level of genius, but outright madness inhibits genius and higher intelligence tends to provide the capacity to prevent outright madness." This jives with some research. For example, this study considers the effect of the interaction between cognitive ability and neuroticism on managerial performance. Regressing cognition and neuroticism alone explained 4% of the variance in performance, but adding the interaction term raised this to 19%. And after partitioning their sample in half by cognitive ability, they found the higher half had a positive and significant relationship between anxiety and performance, while the lower half had a negative and barely non-significant relationship.
I am still heartily recommending Simonton's Creativity in Science to people interested in these issues.
Saturday, September 25, 2010
Trade Off #10: Plasticity vs Specialization
In college, I was constantly reading neuroscience literature extolling the virtues of neural plasticity, which is the ability of neurons to change based on feedback from the environment. Plasticity certainly has huge benefits. Specifically, plasticity allows for a better match between phenotype and environment across variable environments than a single, constant phenotype would.
But after a while, the idolatry of plasticity began to annoy me, in part because much of the literature discussed plasticity as if it had no downsides, which seems impossible. (If there really were no costs to plasticity, then evolution should have selected for it like whoa.)
The general downside seems to be that plasticity hinders specialization. That is, if a system has the ability to change easily (i.e. it has high plasticity), then it will tend to expend resources on a wide range of trait values, and will have fewer resources to focus on the most important and relevant traits. A few examples:
- Synaptic pruning and other mechanisms for synaptic plasticity allow for learning and memory, but they are energetically costly. Indeed, one hypothesis holds that sleep is the price we have to pay for the previous day's plasticity. (see here)
- In an evolutionary framework, the major costs to more plasticity are 1) actually sensing the current environmental conditions, and 2) producing the actual trait in a less efficient way. Both of these divert resources from other tasks. (see here and here)
- People with autism spectrum disorders often find it difficult to parse novel stimuli, but can sometimes concentrate for especially long periods of time on specific niches. So one might think of the autistic cognitive style as shifted towards the specialization side of this trade off. (see here)
But given our current working definitions (plasticity = the ability, which is highly correlated with the tendency, for context-dependent change; specialization = funneling energy expenditures to a narrow purpose), and because it is sort of one level "meta" to switching costs vs change gains, we are granting this trade off its own place in the canon.
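To make the trade off concrete, here is a toy model of those working definitions. Every payoff and probability below is invented; the point is only the crossover, not the numbers:

```python
# Toy model (all numbers invented): expected fitness of a fixed
# specialist vs. a plastic phenotype across two environments. Plasticity
# pays a sensing cost and a production-efficiency cost (the two costs
# named in the evolutionary bullet above).
def specialist_fitness(p_env_a, match_payoff=1.0, mismatch_payoff=0.2):
    # Specialist is tuned to environment A only.
    return p_env_a * match_payoff + (1 - p_env_a) * mismatch_payoff

def plastic_fitness(p_env_a, match_payoff=1.0, sensing_cost=0.1,
                    efficiency_cost=0.15):
    # Plastic phenotype matches whichever environment occurs,
    # minus its overhead (so p_env_a doesn't matter to it).
    return match_payoff - sensing_cost - efficiency_cost

for p in (0.95, 0.8, 0.5):
    s, pl = specialist_fitness(p), plastic_fitness(p)
    winner = "specialist" if s > pl else "plastic"
    print(f"P(env A) = {p:.2f}: specialist {s:.2f}, plastic {pl:.2f} -> {winner}")
```

Run it and plasticity only wins in the unpredictable environment: its overhead is worth paying exactly when the environment varies enough, and specialization wins otherwise.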
(Above photo taken by flickr user uncle beast. Plants are often studied w/r/t genetic plasticity because they can't simply pack up shop and move if the environment changes, the way an animal can.)
Thursday, September 23, 2010
Never Have I Ever
Robin Hanson published an interesting thought today,
[W]hy are so many of us (including me) so reluctant to experiment with so many joys with strong fans? After all, fans argue, their suggested drug, sex style, or religious experience would only take a few hours to try, and could give us a lifetime of joy if we liked it. It seems we see far larger costs than the time for a trial. My guess: we value our current identity, integrated as it is into our job, hobbies, friends, etc. We fear that if we try new joys, we will like them, and jump to practicing them, which will change us. We fear that by jumping to juicy joys, we won't be us anymore.

Twenty-six Google Reader "likes" speak for themselves, but one quibble. I think the reason we don't try "fun" things once or twice is not that we fear how we will change, but that we fear how others might perceive us as having changed.
Consider how Tyler Cowen is so wont to point out that he's never tried coffee, or how Ben Casnocha loves to discuss how he's never smoked weed ("love" might be harsh, but do see his #2).
Our "nevers"'s allow us to signal loyalty to our groups in ways that our "tried once but didn't like it so please still accept me"'s can only dream of.
Wednesday, September 22, 2010
Trade Off #9: Some Now vs More Later
In expected value terms, this trade off is everywhere. By doing work, we trade some of our freedom now for more freedom later, since freedom can usually be bought (see here). When we invest, we give up some cash now to get more later, since interest rates typically beat inflation.
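The investing half of that claim is just compounding arithmetic. A quick sketch, with rates that are purely hypothetical:

```python
# Hypothetical numbers: what "more later" means once inflation is netted out.
nominal_rate = 0.05   # assumed annual interest rate
inflation = 0.02      # assumed annual inflation
years = 10
cash_now = 100.0

real_rate = (1 + nominal_rate) / (1 + inflation) - 1  # Fisher relation
later_real = cash_now * (1 + real_rate) ** years

print(f"Real rate: {real_rate:.3%}")  # about 2.94% per year
print(f"$100 now grows to about ${later_real:.2f} of today's purchasing "
      f"power in {years} years")      # roughly $133.60
```

With these made-up numbers, "more later" means about a third more purchasing power after a decade; whether that beats your own time preference is exactly the question below.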
But, taking the long view of time preference, why would anyone ever rationally prefer less of something they want? There are a few reasons:
- Everything is ephemeral (including, probably, the universe itself). So, there is no guarantee that if you wait until later, either you or the thing you are waiting for will even exist.
- Values usually change as time passes. So, you might expect to value something so much less later that settling for the smaller amount available now is justified.
- If you learn to always delay gratification, you may find it difficult to ever actually reap the rewards you are meant to be aiming for, and continue to put off having fun until it is too late (see here).
Ultimately, there actually are some legit reasons to prefer some now to more later. Sometimes there really is a dark, beneficial side to procrastination. Honesty demands that we admit these things, even though it might be simpler and easier to have a rule always favoring the delayed reward.
(Photo of clouds covering all but the tops of buildings in London from trexcali)
Saturday, September 18, 2010
Trade Off #8: Random vs Determined
This trade off concerns the decision to either leave things completely up to chance or to wholly micro-manage them, and every option in between.
Now, it's true that attempting to alter the path of current events is the only way we can shift future outcomes towards our preferences, and that consideration should push us towards determining events. But there are large costs to determining that typically push us back towards the random side of the trade-off, such as:
- The opportunity cost of even weighing all the choices and outcomes, when you could be spending that energy on other ends. (see here)
- The likelihood that, because we have attempted to change an outcome, we will come to regret our decision and see missed opportunities, which is psychologically taxing. (see here)
- The likelihood that, even in taking action, we'll fail to improve outcomes over the status quo. And, despite our best intentions, the non-zero probability that our actions could even have negative expected value with respect to our goals. (see here)
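As a minimal expected-value sketch of that last point (every probability and payoff here is invented for illustration):

```python
# Invented numbers: expected value of intervening vs. leaving it to chance.
p_improve, gain = 0.4, 10.0  # chance the intervention helps, and by how much
p_harm, loss = 0.25, 12.0    # chance it backfires, and by how much
deciding_cost = 1.5          # effort spent weighing the choices (point 1 above)

ev_act = p_improve * gain - p_harm * loss - deciding_cost
ev_random = 0.0              # status quo / leave-it-to-chance baseline

print(f"EV(act)    = {ev_act:+.2f}")    # 4.0 - 3.0 - 1.5 = -0.50
print(f"EV(random) = {ev_random:+.2f}")
```

Note that the deciding cost is paid whether or not the intervention works, which is what can flip the sign even when success is more likely than backfire.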
(above photo by flickrer Jef Safi)
Thursday, September 16, 2010
Long Run Optimism Depends On Short Run Pessimism
Matt Ridley, according to the highest rated amazon review of his book*, believes that we have little to fear about the long-run sustainability of our energy consumption, because,
Yes, depending on non-renewable fuel, by definition, means that at some point, the fuel will run out. Ridley only points out that naysayers rely on a hidden but faulty premise: that the future will resemble the past. Yes, we will run out of fossil fuels if we keep using them, but who's to say that we will keep using them? ... these folks' error lies in assuming that future ways of production will resemble past ways, and time and time and time again, this assumption has proved erroneous! Ridley's point is that while we can NEVER say that the future WILL solve all pressing problems, so far we have. And we can assume we will in the future because our method of exchange has globalized the "collective brain," assuring that innovation will keep occurring and the best minds will all be working on the pressing problems of the day.

But what if the best minds are convinced by this very argument? Then they will, rationally, cease to work on solutions to these types of problems. So Ridley's argument is only true to the extent that people don't believe in it.
This works on an interpersonal level too. If you truly convince yourself that a situation will end up fine, you have zero incentive to do work to alter it. Thus, in the case that you are not worried at all about a situation, the situation will likely not turn out fine. This has tons of implications... but for tonight I'll just say that it's a cool paradox.
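You can even make the paradox quantitative with a toy feedback model. All of the functional forms below are invented; the only point is that the optimism partially undoes itself:

```python
# Toy model (functional forms invented) of the paradox above: belief that
# "things will be fine" lowers effort, but effort is what makes things
# turn out fine. Iterate to a self-consistent belief level.
def p_solved(effort):
    # Chance the problem gets solved, increasing in total effort.
    return min(1.0, 0.9 * effort)

def effort_given_belief(belief_fine):
    # The more convinced people are, the less they work on the problem.
    return 1.0 - belief_fine

belief = 0.99  # start out very Ridley-optimistic
for _ in range(50):
    belief = p_solved(effort_given_belief(belief))

print(f"Self-consistent belief that it works out: {belief:.3f}")
# Converges toward ~0.47, not 1.0: full optimism is never stable here.
```

The only stable outcome is partial worry: enough short-run pessimism to keep some minds on the problem.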
####
* 95.9% of people agree that, instead of reading the book, you might as well just rely on Kevin Currie's review to draw sweeping inferences.
Monday, September 13, 2010
Intellectual Hipsters
Yvain defines them as the third person in this scenario:
A naive person might think that [X] is an absolute good thing. Someone smarter than that naive person might realize that [Y] is a strong negative to [X] and desperately needs to be stopped. Someone even smarter than that, to differentiate emself from the second person, might decide [Y] wasn't such a big deal after all.

He also gives some examples, like "don't care about Africa / give aid to Africa / don't give aid to Africa", where the third position is the intellectual hipster.
His idea is pretty similar to my beloved conformity theory. One difference is that I conceptualize opinions moving around a circle (i.e. with two axes) whereas he seems to view them as oscillating between two poles.
To me, it doesn't make sense that beliefs would just jump from one point to another. At least subconsciously, there has to be some kind of intermediate stance, like "I'm not sure, but I don't consider this issue interesting." The second axis of "caring" or "interestingness" allows the individual to justify holding the belief at every given time point. Otherwise, belief changing would have to happen on a threshold, and there are no thresholds.
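One way to draw the circle (this parameterization is mine, invented purely for illustration): put belief on one axis and caring on the other, and run the path through a zero-caring point, so a pro-to-anti move never requires a discontinuous jump.

```python
# Sketch of the two-axis picture above (parameterization invented here):
# a unit circle through "pro" (+1 belief), "anti" (-1 belief), and a
# zero-caring "don't care" point, so belief change is a continuous path
# rather than a threshold jump between poles.
import math

def stance(theta_deg):
    t = math.radians(theta_deg)
    belief = math.sin(t)        # +1 = naive pro, -1 = contrarian anti
    caring = 1.0 - math.cos(t)  # 0 = indifferent, 2 = maximally invested
    return belief, caring

for deg in (0, 90, 180, 270):
    b, c = stance(deg)
    print(f"{deg:3d} deg: belief {b:+.2f}, caring {c:.2f}")
# 0:   belief +0.00, caring 0.00 -> "not sure, and not interested"
# 90:  belief +1.00, caring 1.00 -> committed pro
# 180: belief +0.00, caring 2.00 -> torn but highly engaged
# 270: belief -1.00, caring 1.00 -> committed anti (the hipster position)
```

Every step along the circle is small, and every position comes with a caring level that justifies holding it, which is exactly why no threshold is needed.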
Wednesday, September 8, 2010
A Critique Of The 10,000 Hour Rule
Suzanne Lainson offers one here,
[Anders] Ericsson says that if you have experienced people who don't do any better than the average person, then they aren't experts. This seems to provide a good loophole to explain why average people can sometimes beat those with more experience. What he seems to be saying is that his theories are right, and when there appear to be exceptions, the exceptions don't count.

Now that's what I call a takedown! The rest of the post is a veritable smattering of research bits, certainly not for the faint of nucleus accumbens, but it is interesting. She concludes that deliberately attempting to practice something for ten thousand hours isn't worth it, because it might not be sufficient for success, and random unplanned experiences tend to be more life-defining anyway.
Tuesday, September 7, 2010
Three End Of Summer Thoughts
1) As of today there are 800k+ articles currently tagged as "interesting" on del.icio.us, making it one of the site's most popular tags. But of course any link you go through the trouble of tagging is one you found interesting! The only way I can imagine this making sense is if people are using the word "interesting" to distance themselves from the conclusions of an article while admitting that some of its ideas are worth further contemplation. But that is unlikely to be the case for 800k articles, so what gives? (This annoys me more than it probably should.)
2) A co-worker brings in cupcakes early in the morning to share with everyone. I want to wait to have one so I can reward myself for working hard that day. But I'm worried that if I do so there will be none left. Gratification delay and the commons: antagonistic.
3) One question you may need to ask to determine whether you might be living in a simulation: given the ability, would you simulate your own life? Granted, this is frightfully solipsistic, but if I had such a power, I myself can definitely imagine simulating my own life again, tweaking various parameters to test the consequences. But how should I change my behavior based on the non-zero probability that I am living in my own simulation?!
Monday, September 6, 2010
The Arguments For And Against Re-Rating Movies
The Prosecution: Changing one's mind about the quality of a given movie, or for that matter any given work of art, is a disgusting practice that ought to be accompanied by ruthless social disapproval. Re-rating allows and even encourages one to incorporate others' opinions into one's own ratings, heavily biasing them. Naïvely, many assume that this influence will always move ratings upwards and assure themselves that they won't merely follow the opinions of the most popular critics. But the reality of the conformity cycle is much more insidious: you are just as likely to learn that too many others like a movie and thus come to dislike it. There is no defense against these influences once you have been exposed to them; thus rating must happen early and remain steady despite the greatest of protestations. Ladies and gentlemen of the jury, I believe strongly, and upon contemplation I believe you will come to agree, that re-rating really is the bane of a high-functioning rating system.
The Defense: The vitriol of the prosecution's ad hominem attacks against everyday folks who happen to re-rate now and then, justified only by some childish appeal for purity, is dangerously short-sighted. If you don't understand a movie the first time you see it, that's not necessarily the movie's fault; it could be your own fault too. Thus it's totally understandable that, if you come to understand some angle of the movie better after conscious or unconscious contemplation, your rating might change. Moreover, the quality of a movie cannot be fully judged right after watching, because it rests not only on your experience during the movie, but also on the value over replacement of any subsequent thoughts about the movie after watching. Thus a rating must be dynamic; it will change with the ebbs and flows of one's thought processes, the structure and patterns of one's interior life, and yes, maybe even one's interactions with other people. Re-rating is only natural given all of our other human tendencies, and its availability takes much unnecessary pressure and anxiety off of the initial rating. If we want to evolve as people, and more specifically as a society of movie watchers, then we must be willing to accept the consequences of such dynamicity. The defense rests.
The Verdict: Death by reruns of imdb's bottom 100.
Sunday, September 5, 2010
Our Hands Evolved Because Our Feet Evolved
A legitimately fascinating article (here) by Campbell Rolian, Daniel Lieberman, and Benedikt Hallgrímsson makes this claim, with substantially more precision. From the abstract:
Human hands and feet have longer, more robust first digits, and shorter lateral digits compared to African apes. These similarities are often assumed to be independently evolved adaptations for manipulative activities and bipedalism, respectively. However, hands and feet are serially homologous structures that share virtually identical developmental blueprints, raising the possibility that digital proportions coevolved in human hands and feet because of underlying developmental linkages that increase phenotypic covariation between them....

In particular, selection pressures on the feet led to longer, stronger thumbs and slightly shorter other fingers. This makes it easier to hold something with the tips of our other fingers and thumb, in part because they are closer together, and in part because the forces on the thumb are dissipated over a larger surface area. This diagram might help you visualize:
(Figure: modern humans' pad-to-pad precision grasping; doi:10.1371/journal.pone.0011727.g001)
Rolian et al go on to argue that:

[C]hanges in manual digital proportions that enabled digit tips to be brought into opposition may have been a prerequisite for the development of precision gripping capability in australopiths. Also, although pedal changes associated with facultative bipedalism probably provided australopiths with hands capable of producing and using Oldowan stone tools by at least 3.5 million years ago, it should be noted that manufactured stone tools do not appear in the archaeological record until about 1 million years later. Australopiths may have lacked the cognitive capacity for manufacturing tools and/or their technology was entirely nonlithic.... In short, there are several reasons to believe that selection on the foot caused correlated changes in the hand during human evolution, that selection on the hallux was stronger and preceded selection on the lateral toes, and that these changes in manual digital proportions may have facilitated the development of stone tool technology. (more here)

To over-simplify their claim: while feet were being selected for better load bearing and less mechanical work in running, it just so happened that this also made hands better suited for tool use.
So, the use of tools by early humans was due in large part to random chance. Since any event that relies on coincidence is less likely to be replicated, I read this as making the development of intelligence less inevitable. If so, the lack of signs of other intelligent species in the night sky can be explained away in one more way: maybe most biological replicators don't get as lucky as we did in our evolutionary past to get a jumpstart towards tool use. Granted, this is rampant speculation, and it's possible that we would have started using tools anyway.
But still, this makes it at least slightly more likely that we'll end up colonizing the known universe, instead of withering away on this doomed planet. And for today, I'll celebrate that.