In my psychology seminar last week, my group and I presented three computational models of how attention could give rise to consciousness. Based on the papers we assigned, one had strong support from the data (Taylor's CODAM), one had medium support (Wang's LEABRA, with a small amount of data given via Figure 3 here), and one had little to no support from data (the "ascending reticular activating system" in Cavanna's Figure 1 here). At the beginning of class we asked the thirteen other students to rank the models, with 1 being the best. Taylor's model was ranked last, with an average rank of 2.21; Wang's was second, with an average rank of 2.07; and Cavanna's was considered the best, with an average rank of 1.71. In other words, there was an inverse relationship between the amount of data presented for a model and how much people preferred it.
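For what it's worth, the "inverse relationship" here is just a statement about three averages, but a minimal sketch makes the comparison concrete. The 1–3 "data_support" coding below is my own rough labelling of weak/medium/strong support; the average ranks are the ones reported above.

```python
# Illustrative check of the "inverse relationship" claim, using only the
# class averages reported above (rank 1 = most preferred).
models = {
    "Taylor (CODAM)": {"data_support": 3, "avg_rank": 2.21},  # strongest data
    "Wang (LEABRA)":  {"data_support": 2, "avg_rank": 2.07},  # medium data
    "Cavanna (ARAS)": {"data_support": 1, "avg_rank": 1.71},  # weakest data
}

# Sort once by data support (descending) and once by preference (ascending avg rank).
by_data = sorted(models, key=lambda m: -models[m]["data_support"])
by_preference = sorted(models, key=lambda m: models[m]["avg_rank"])

# If the two orderings are exact reverses of each other, preference ran
# opposite to data support across all three models.
print("Ordered by data support:", by_data)
print("Ordered by preference:  ", by_preference)
print("Inverse relationship?   ", by_data == list(reversed(by_preference)))
```

With only three models and thirteen raters this is obviously anecdote rather than evidence, but the orderings do come out as exact mirror images.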
This is one example, but it's indicative of a larger trend I've noticed throughout my classes at Vassar. I notice it in myself, too, but it's something I'm trying to work against. The students here are so critical of every study or claim they hear or read that they're unwilling to be convinced by more data. You can't blame them. It's probably what everyone has told them to do their whole lives: be critical of everything you read, don't trust statistics, and so on. The better advice would be to gauge the veracity and utility of each individual claim based on the data you're given and your prior beliefs about the possible bias of the data source.
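One way to picture that advice is as ordinary Bayesian updating: start from a prior, then let the data shift it by however much your beliefs about the source warrant. The numbers below are made up purely for illustration; only the shape of the calculation matters.

```python
# Toy sketch of weighing data against a prior about the claim and the source.
def posterior(prior, p_data_if_true, p_data_if_false):
    """P(claim | data) via Bayes' rule."""
    evidence = p_data_if_true * prior + p_data_if_false * (1 - prior)
    return p_data_if_true * prior / evidence

prior = 0.5           # how plausible the claim seemed before the study
p_data_if_true = 0.8  # chance of seeing this data if the claim is true
p_data_if_false = 0.3 # chance of seeing it anyway (noisy or biased source)

print(round(posterior(prior, p_data_if_true, p_data_if_false), 2))  # -> 0.73
```

If you think the source is badly biased, you push p_data_if_false up toward p_data_if_true and the data barely moves you, which is fine. What the reflexively critical stance amounts to is refusing to update at all, no matter what the numbers are.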
Compared to the theoretically optimal balance of critical and appreciative thinking, our marketplace of ideas has swung much too far towards the critical side. If people are critical of every new idea they encounter, all that does is bias them towards the status quo or the null hypothesis. I blame this market failure on our social norms: being critical is too often treated as automatically synonymous with being correct. So let's change those social norms... one blog reader at a time.