Over the last several years, a number of organizations have popped up: GiveWell, 80,000 Hours, The Life You Can Save, Giving What We Can, Centre for Effective Altruism, Future of Humanity Institute, Center for Applied Rationality, Leverage Research, and probably a few I’ve missed.

The purpose of these organizations is to strategically make the world a better place – figure out the best interventions, or the best actions to take, in order to reduce the most suffering and allow humanity to reach its greatest potential.

Perhaps surprisingly, not everyone agrees on the best way to achieve this. A quick look through the above websites turns up a range of answers: donate to proven cost-effective interventions, convince other people to donate a large fraction of their income, worry about existential risks, teach people to think more carefully, develop psychological insights, create friendly artificial intelligence, and so on.

I found this lack of coordination surprising because when people share a common goal, they benefit from working together, and so it seems they would do better if they could reach agreement on the best path forward. On the other hand, the divergence makes some sense: humans often exhibit flaws in rationality that prevent such cooperation – tribalism, status quo bias, and a simple inability to work with certain people. Also, different people have different skills and comparative advantages, so it might genuinely be best to work on many different things.

Anyway, a lot of the folks in these organizations are convinced that improving their own rationality could yield high payoffs in their ability to achieve their goals. Tribalism can also be addressed by getting people from different organizations in the same room and hoping they make friends with each other. So Leverage Research and CFAR got together and created the Effective Altruism Summit. Fifty people – many from the above organizations, but also a bunch of unaffiliated community members (including me) – are going to stay in a big house in California for a week and try to produce a bit more rationality, goal-alignment, and friendship within the effective altruism community.

I guess we’ll see how it goes!

Currently it seems to me that the best way forward is for humanity (or a small group of humans) to become smart and rational enough to develop safe, powerful artificial intelligence. We can’t go too slowly on this: bad things are happening every day (people are dying, and new advances are being made in dangerous technology). But if we rush and screw it up, we’d make things worse. I’m writing this down in order to see if my beliefs change over the course of the Summit.