Eliezer Yudkowsky works on artificial general intelligence, or "friendly" AGI, at the Singularity Institute. He has publicly stated that he thinks the particular problem he's working on is the most important in the world. In one post he noted his belief that, "[T]he ultimate test of a planet's existence probably comes down to Friendly AI, and Friendly AI may come down to nine people in a basement doing math. I keep my hopes up, and think of this as a 'failing Earth' rather than a 'failed Earth'."
On the other hand, seemingly objective sources have indicated their disbelief that this problem is so important. In their recent diavlog, Scott Aaronson told Eliezer that he didn't think computers would be able to match human intelligence for at least another 1000 years. That doesn't bode well for the near-term prospects of an AI-driven singularity. And in their lively OB debate last December, Robin Hanson told Eliezer that he didn't think it was very likely AIs would "go foom," because that wouldn't fit what we know about previous growth rates. Again, not a very strong recommendation. Nonetheless, elsewhere Hanson told Eliezer that he did think somebody might as well work on the friendly AI problem, and that Eliezer is as good a person to do so as any (can't find the link, unfortunately). Aaronson has also said elsewhere that he sympathizes with Eliezer, and that Eliezer is acting quite rationally in obsessing over the singularity, given his beliefs.
So the consensus view (also see here) on friendly AI research is that while it's not the most attention-worthy existential risk, it is worthy of some modicum of attention, and Eliezer is a perfect candidate. My question is... Shouldn't somebody working on friendly AI personally consider it to be the most important question out there, consensus or no consensus? Even if this involves a little bit of self-delusion, wouldn't it be worth it for stimulating that researcher's productivity?
More generally, is it a good thing if any given researcher considers their topic to be the most important in the world? Even if reaching that conclusion involves some drawn-out logic? I would say, in most circumstances, yes. The only downsides are that such researchers might be less likely to switch into something more important, or might be so good at convincing others that they draw research funding away from more objectively important causes. But I'd bet that those downsides are usually outweighed by the potential benefits of being monomaniacal. So I say, by all means, go try to save the world!