I remember being in the car with my family (I was about 12, maybe 15?) and thinking about the abortion debate, and realizing that both sides had a point. Killing the baby seemed bad, but having kids you didn’t want also seemed bad. Culturally, I had been taught that abortion was morally allowed (but regrettable), so I tried to reconcile this with not wanting it to be okay to just kill people. Maybe it’s okay because the baby doesn’t feel the pain? Or maybe because nobody has developed emotional ties to the baby yet? It seemed that any rule I picked would require a surprising “transition point” from not-a-person / not-sentient to personhood. None really seemed right, so I eventually gave up on trying to find a simple, consistent rule. I also remember being intrigued by other ethics debates, like assisted suicide, and again never really reached a conclusion.

Similarly, I felt that politics was important, but I didn’t know what to do about it. I felt a duty to vote when I cared about the outcome, but I never really came up with an actual argument that voting was worth my time. I had some intuition, though: if I didn’t bother to vote this time, other people might skip voting for similar reasons, and my reason for not voting didn’t seem very good; so I hoped that choosing to vote would somehow influence others like me to vote as well (or at least indicate that they would make a similar choice). That argument doesn’t really go through, which is why I classified it as an “intuition”.

Through my undergraduate years (2004–2008) I read Paul Graham’s essays on startups. PG is an extraordinarily practical person, and his simple, cogent style appealed to me greatly. It’s his writing that convinced me to do startups (e.g., “How to Make Wealth”). My parents are entrepreneurs, so I had access to entrepreneurial spirit and concepts, but it took reading PG to understand that I could actually, really do startups myself, that it was an advantage to do them while young, and that I didn’t need to be a marketing or business genius, or need permission from anyone, to start working on my own real businesses. PG teaches effectiveness in startups; “Make Something People Want” is the motto of his seed funding organization, Y Combinator. So I dove in and did my own startup. Unfortunately, that first startup after college failed after a year due to my own ineffectiveness. I took a full-time job after that; I knew I would eventually go back into startups, but I didn’t know how or when. During that time, part of me was always searching for ways to fill the “holes” in my prior startup experience. I wanted to know how to raise money, how to fix my procrastination problem & improve my work habits, how to choose what to work on, and how to improve my social effectiveness.

Eventually (2009) I came across what was then called the Singularity Institute (now MIRI). I was very excited, because they had a plan to actually make a positive impact on the world. It seemed vaguely plausible that a very small team of programmers might actually be able to code themselves enough power to solve all the world’s problems through friendly artificial intelligence, and this was quite exciting. I didn’t directly act on this idea, but very soon after finding SI, I found Less Wrong. I think that name was very good & sticky, because I can’t think of another reason I kept coming back to LW over the next couple of months to see if there was new content I would be interested in. I hadn’t found the Sequences yet; it took me about 3 or 4 tries of looking at the LW website, over a period of months, before I found them.

The Sequences, in case you’re not familiar, are a series of blog posts on Less Wrong, mostly by Eliezer Yudkowsky. They’re about how to improve your thinking patterns and how to become aware of & correct for cognitive biases, and there’s a pretty hefty dose of philosophy in there too. When I finally found them, I read them all, over a period of a few weeks – about a million words in total. I had started reading LW in order to obtain the thinking benefits: I wanted to be less wrong, I recognized that there were biases in my brain, and I kept getting little bursts of insight as I read the posts. It was quite addictive. I think I self-improved substantially with respect to thought patterns, and I quite often access concepts I learned on LW. I also found people who claimed to have “solved” procrastination, and I tried to follow their techniques.

But while I was doing this self-improvement through reading, a surprising thing occurred: I developed some metaethics, mostly via the philosophical pieces on LW. Previously, I had rejected attempts to actually take one side or the other on ethical dilemmas. But LW’s philosophy made it seem more “okay” to be generally utilitarian / consequentialist, and my ethics started to lean in that direction. I began to believe:

  • that most moral intuitions are caused by culture & society, rather than "built-in"
  • that despite this, it was still right to trust (to some degree) my moral intuitions as to what is good and right
  • that, on the other hand, good reasoning should be able to convince me to overcome some of my moral intuitions -- in other words, that some amount of bullet-biting will probably be necessary in the long run
  • that people far away are still people and deserve similar moral weight to people nearby
  • that consequentialist reasoning should typically "beat" deontological intuitions in the long term / at scale, but that deontology & virtue ethics tend to be much more useful in most smaller scale applications (e.g., daily situations that we encounter)
  • that I often act in ways that I know are immoral to some degree, and instead of attempting to justify those actions, simply accept that I'm not acting perfectly morally (but I should check consequentialism to see whether the bad actions might have major negative impact, and if so, avoid doing them)
  • that making myself happy / removing obstacles to my own effectiveness is extremely important, since I expect to make a large positive impact with my work, and therefore that I shouldn't accept trade-offs with substantial costs to my happiness or work efficacy.

These ideas led me to accept effective altruism: I should always do the best thing I know of, in order to maximize my positive impact on the world, and I should constantly self-improve, in order to find new “best things” that I could be doing.