Lincoln's blog

Tue, 04 Feb 2014

Interesting Decision Theory question

I was reading "Young Money" by Kevin Roose. In it, he cites the following entry-level finance interview question (paraphrased):

Let's consider the following gamble: I pay you a fixed amount up-front. Then, I flip a fair coin repeatedly until it comes up heads. You pay me 2^N dollars where N is the number of tails -- so if heads came up right away, then you pay $1; if the sequence was Tails-Tails-Heads, $4; and so on.

At what price are you willing to play this game?

Take a few minutes to consider this question. You could even write a simple program to simulate it, if you're feeling so inclined.

I'm going to leave a little space here, but scroll down for my thoughts.








OK, so if you're like me, you immediately broke out pen and paper and computed the expected value of the gamble. And, perhaps, a surprising thing occurred: you realized that the gamble's expected value is negative infinity. (The probability of paying $1 is 1/2; $2, 1/4; $4, 1/8; multiplying this sequence of shrinking probabilities by ever-growing payments gets you a sum of infinitely many $1/2 terms.)
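
Spelled out, with N the number of tails before the first heads, every outcome contributes the same half-dollar to the expectation:

    E[payment] = Σ_{N=0..∞} (1/2)^(N+1) · 2^N = Σ_{N=0..∞} 1/2 = ∞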

OK, so now that you've gotten that far, at what price do you accept the gamble? Take some more time to consider it.








I wrote a program to simulate the gamble 10,000 times. The average per-play cost came out to $13. I ran it again, and it was $7. I kept running it -- it was usually between about $6 and $18, except for the time it was $50. Whoops.
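
For the curious, here's a minimal sketch of that kind of simulation -- not the exact program I ran, just the same idea in Python:

    import random

    def one_play():
        # Flip a fair coin until heads; the payment doubles with each tail.
        n_tails = 0
        while random.random() < 0.5:  # tails
            n_tails += 1
        return 2 ** n_tails  # dollars paid out

    def average_payment(trials=10000):
        return sum(one_play() for _ in range(trials)) / trials

    print(average_payment())

The average bounces around from run to run because rare long tail-streaks dominate the sum -- which is exactly the point.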

My analysis: it's probably correct for most people to accept such gambles at a price high enough that the take-home money would be life-changing. I would obviously accept it for a billion dollars, for instance. I'd probably accept it at a million. It's really hard for our brains to grasp the small chance of getting wiped out, though. As Dave Baxter points out, the subjective value to you of an extra million dollars depends strongly on how much money you already have.

In the real world, your net worth can't actually go too far below zero, because at some point you would just declare bankruptcy. That bounds how bad the outcome can be, and makes the gamble (at certain prices) economically rational.

Aha! Jess Riedel points out that Wikipedia answers the question here: St. Petersburg Paradox.

I guess an interesting takeaway for me was that expected-value calculations don't (& shouldn't) always dominate decision-making.

posted at: 18:28 | path: /rationality | permanent link to this entry

Sun, 01 Dec 2013

GiveDirectly is the #1 charity on GiveWell. Wait, what?

GiveWell, the independent effective-altruist charity evaluator, just announced a new set of charity ratings. Specifically, GiveDirectly is now the top-rated organization. (The previous top pick, the Against Malaria Foundation, failed to meet a milestone, so GiveWell is holding off on recommending funding for it until it shows that it can.)

GiveWell evaluates charities on the basis of expected world-improving value for your money. Their philosophy focuses heavily on proven interventions - they are not likely to recommend far-future speculative causes because there's no way to prove them. They are very thorough and demand high standards of evidence and transparency. GiveWell's recommendations are very widely respected.

GiveDirectly just gives money to the most needy. They find the poorest households in Kenya (by looking for houses that do not have solid walls or roofs) and surprise them with $500. They have an impressively thorough body of evidence that says that this works -- that people spend the money in very good ways.

Anyway, GiveDirectly is now the number one most effective donation target, as evaluated by GiveWell. This is quite surprising to me. Why? Because GiveDirectly seems to be a low bar!

Okay, wait, let me explain. Clearly, it's not really a low bar: the money is going to verifiably poor people, and the research tells us that they spend the money in high-value ways. In fact, it's easy to see that most charities will fall below this bar: I can easily imagine education charities, for example, where most of the money spent goes to children who would otherwise have gotten a good education in other ways.

But all of them? Essentially, GiveWell is saying "we cannot find any proven economies of scale when it comes to making the world a better place."

Let me expand on this. Economies of scale are why companies exist. For example, it would be hard for me to make myself a cellphone from electronic parts. It would cost me hundreds or thousands of dollars just to source the parts, and then I'd have to combine them all into a working device, and it probably wouldn't fit in my pocket. But I can get a working cellphone for fifteen dollars or less -- one to two orders of magnitude cheaper than doing it myself. The cellphone companies make money by producing phones in the millions (or billions); manufacturing techniques allow them to make an extremely large number of phones for very little, and sell them at a profit.

So yes, I'm surprised: since companies have to make a profit to continue existing, it seems like there should be a nonprofit which leverages similar economies of scale, but which could be even cheaper (and therefore more effective) because it doesn't have to make a profit.

And yet, for some reason, this doesn't exist in the charity world. Nobody has found a charitable intervention that scales in the same way that mass production scales. Or, at least, they haven't found one that a) scales; b) is proven to be effective; and c) isn't already a for-profit corporation. It appears that simply giving people cash transfers beats out any kind of replicable, mass-produced charity work. There's no mass-produced thing (a vaccine, or a drug, or a mosquito net, or a wheelbarrow) whose production scales well with charitable dollars.

Scaling: is it possible that a charitable intervention doesn't scale as well as a company? It seems rather unlikely. The production and distribution of things like vaccines and mosquito nets don't seem fundamentally different from all the things that are distributed for profit, like soda and shampoo. Maybe the best charity interventions are more long-term focused than most stuff people buy in stores.

Proven effectiveness: to me, this seems like a plausible problem in the charity sphere. Companies don't have to prove anything except to themselves -- if something is working, they get immediate feedback in the form of dollars and they just have to learn to scale it. But with charity, you don't necessarily know if things are working, if they're getting better, and so on, without trying hard to find out. It's so much easier to just hope/believe/think that your interventions are working than to actually test...

Isn't already a for-profit company: GiveWell doesn't recommend for-profit ventures, I assume because world-improving profitable companies can already get plenty of investors and money elsewhere. So it raises the question -- maybe all the best applications of economies of scale work well in a for-profit structure, and so they're invisible to philanthropists. This also seems somewhat plausible, but I would expect that there would be some companies which couldn't quite make money but would still be worth funding as a nonprofit to distribute their product.

If you aren't pretty sure that one of these is true, then it seems like there is a giant hole where effective altruists should look for promising new nonprofits to start. Something which scales, is expected to have a scientifically demonstrable positive impact, and can't exist as a for-profit. If you could prototype your idea and gather some evidence of impact, and if you otherwise executed well, it seems quite likely that you could quickly get this organization to #1 on GiveWell's recommendations and make a big impact.

So let's brainstorm it. People who get grants from GiveDirectly very often buy metal roofs. Presumably the recipients have to buy them "retail" (whatever that means in Kenya) and either learn to install the roofs themselves, or pay a contractor to install them. But GiveDirectly already knows which households need metal roofs, since that's their criterion for giving out grants. If substantial cost savings can be had by installing them "in bulk," then shouldn't they just go around installing roofs everywhere for free?

Well, this goes against what GiveDirectly wants to do. And maybe they're right. How would we know? Consider how many people there would be under my scheme who don't really want a roof, but would get one anyway if we handed it to them. Clearly, those people would prefer the cash. But mass-producing and mass-installing roofs saves a lot of money for everyone else, so we'd get more roofs per dollar than under GiveDirectly's scheme. The question is whether the cost savings makes up for the wasted roofs.
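
To make that tradeoff concrete, here's a toy model (my own framing; the numbers and variable names are made up for illustration):

    def bulk_beats_cash(retail_cost, bulk_cost, wasted_fraction):
        # Mass installation wins if the cost per *wanted* roof -- after paying
        # for the wasted ones -- still beats what a recipient pays at retail.
        return bulk_cost / (1 - wasted_fraction) < retail_cost

    # e.g., a 30% bulk discount with 20% of roofs wasted: bulk still wins
    print(bulk_beats_cash(retail_cost=100, bulk_cost=70, wasted_fraction=0.2))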

My guess would be that there would be more wasted roofs -- roofs are already mass produced and there are probably cheap contractors to help install them, so they wouldn't get too much benefit from the economies of scale. But the real question isn't whether this works for roofs, it's whether it works for ANY intervention. If you can find a single intervention that works better than handing people cash, you can beat GiveDirectly.

OK, I have two ideas for places to look: interventions which require large groups of people (collective action problems), and interventions which go against cultural norms. Both are likely to be ignored by the for-profit sphere, which is why I picked them out.

posted at: 04:48 | path: /rationality | permanent link to this entry

Sat, 16 Nov 2013

My Effective Altruism Origin Story

I remember being in the car with my family (I was about 12, maybe 15?) and thinking about the abortion debate, and realizing that both sides had a point. Killing the baby seemed bad, but having kids that you didn't want also seemed bad. Culturally, I had been taught that abortion was morally allowed (but regrettable) and so I tried to reconcile this with not wanting it to be okay to just kill people. Maybe it's okay because the baby doesn't feel the pain? Or maybe because nobody had developed emotional ties to the baby? It seemed that any rule I picked would require a surprising "transition point" from not-a-person / not-sentient to personhood. None really seemed right, so I eventually gave up on trying to find a simple consistent rule. I also remember being intrigued by other ethics debates like assisted suicide, and again I never really reached a conclusion on that.

Similarly, I felt that politics was important but I didn't know what to do about it. I felt a duty to go vote when I cared about the outcome, but I never really came up with an actual argument that voting was worth my time. I had some intuition, though: if I didn't bother to vote this time, other people like me might skip voting for similar reasons -- and since my reason for not voting didn't seem very good, I hoped that choosing to vote would somehow influence others like me to vote (or at least indicate that they would make a similar choice). That argument doesn't really go through, which is why I classified it as an "intuition".

Through my undergraduate years (2004-2008) I read Paul Graham's essays on startups. PG is an extraordinarily practical person and his simple, cogent style appealed to me greatly. It's his writing that convinced me to do startups (e.g., "How to Make Wealth"). My parents are entrepreneurs so I had access to entrepreneurial spirit and concepts, but it took me reading PG to understand that I could actually, really do startups myself, that it was an advantage to do them while young, and that I didn't need to be a marketing or business genius, or need permission from anyone, to start working on my own real businesses. PG teaches effectiveness in startups; "Make Something People Want" is the motto of his seed funding organization Y Combinator. So I dove in and did my own startup. Unfortunately, that first startup after college failed after a year due to my own ineffectiveness. I took a full-time job after that -- I knew I would eventually go back into startups but I didn't know how or when. During that time, part of me was always searching for ways to fill the "holes" in my prior startup experience. I wanted to know how to raise money, how to fix my procrastination problem & improve my work habits, how to choose what to work on, and how to improve my social effectiveness.

Eventually (2009) I came across what was then called the Singularity Institute (now MIRI). I was very excited because they had a plan to actually make a positive impact on the world. It seemed vaguely plausible that a very small team of programmers might actually be able to code themselves enough power to solve all the world's problems through friendly artificial intelligence, and this was quite exciting. I didn't directly act on this idea, but very soon after finding SI, I found Less Wrong. I think that name was very good & sticky, because I can't think of another reason that I kept coming back to LW over the next couple of months to see if there was new content I would be interested in. I hadn't found the Sequences yet -- it took me three or four visits to the LW website over a period of months before I found them.

The Sequences, in case you're not familiar, are a series of blog posts on Less Wrong, mostly by Eliezer Yudkowsky. They're about how to improve your thinking patterns, how to become aware of & correct for cognitive biases, and there's a pretty hefty dose of philosophy in there too. When I finally found them, I read them all, over a period of a few weeks. It's about a million words in total. I started reading LW in order to obtain the thinking benefits -- I wanted to be less wrong, and I recognized that there were biases in my brain, and I kept getting little bursts of insight as I read the posts. It was quite addictive. I think I self-improved substantially with respect to thought patterns. I quite often access concepts I learned on LW. I found people who had claimed to "solve" procrastination, and I tried to follow their techniques.

But while I was doing this self-improvement through reading, a surprising thing occurred: I developed some metaethics, mostly via the philosophical pieces on LW. Previously, I had rejected attempts to actually take one side or the other on ethical dilemmas. But LW's philosophy made it seem more "okay" to be generally utilitarian / consequentialist, and my ethics started to lean in that direction. I began to believe:

  • that most moral intuitions are caused by culture & society, rather than "built-in"
  • that despite this, it was still right to trust (to some degree) my moral intuitions as to what is good and right
  • that, on the other hand, good reasoning should be able to convince me to overcome some of my moral intuitions -- in other words, that some amount of bullet-biting will probably be necessary in the long run
  • that people far away are still people and deserve similar moral weight as people nearby
  • that consequentialist reasoning should typically "beat" deontological intuitions in the long term / at scale, but that deontology & virtue ethics tend to be much more useful in most smaller scale applications (e.g., daily situations that we encounter)
  • that I often act in ways that I know are immoral to some degree, and instead of attempting to justify those actions, simply accept that I'm not acting perfectly morally (but I should check consequentialism to see whether the bad actions might have major negative impact, and if so, avoid doing them)
  • that making myself happy / removing obstacles to my own effectiveness is extremely important, since I expect to make a large positive impact with my work, and therefore I shouldn't accept trade-offs that have substantial costs to my happiness or work efficacy.

These ideas led me to accept effective altruism -- I should always do the best thing I know of, in order to maximize my positive impact on the world; and I should constantly self-improve, in order to find new "best things" that I could be doing.

posted at: 02:41 | path: /rationality | permanent link to this entry

Mon, 26 Aug 2013

What's Your Hourly Rate?

It's not uncommon that we need to decide whether to spend some money to save some time. Order food delivery and pay a delivery charge / tip, or travel to the restaurant yourself? Hire someone to build your Ikea furniture, or do it yourself? Get the $240 high-power microwave that lasts 5 years and saves you a minute a day, or the $40 slow one? Take the $80k job that's only 15 minutes away, or the $100k job with an hour commute each way?

In this post, I don't have any particularly strong conclusions, but I'm going to go through and analyze a bunch of examples, in hopes that one of the techniques I present can come in useful in your own life.

We can run the back-of-the-envelope calculations. It'd take me 20 minutes to make the trip to the restaurant and back, and the delivery charge is $2. If my hourly rate is over $6/hr, I should order, all else being equal. Building Ikea furniture might take 3 hours of my time, but the builder only costs $25/hr and is a faster builder so she'll do it for $60. So if my hourly rate is over $20/hr I should hire someone. The fast microwave lasts 5 years, saves me 6 hours a year and costs $200 more, so if my hourly rate is more than $7/hr then I should buy it. The commute is an interesting example which I'll save to analyze later, or you can do it yourself before reading my analysis.
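
If you want to mechanize these back-of-the-envelope numbers, here's a throwaway sketch (my own toy code, nothing canonical):

    def break_even_rate(extra_cost, hours_saved):
        # Hourly rate above which paying extra_cost to save hours_saved is worth it.
        return extra_cost / hours_saved

    print(break_even_rate(2, 20 / 60))   # delivery: $6.00/hr
    print(break_even_rate(60, 3))        # furniture builder: $20.00/hr
    print(break_even_rate(200, 6 * 5))   # microwave: ~$6.67/hr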

Take a look -- the important question in all of these calculations is the same: what's your hourly rate? Yet we observe many well-paid people taking the "do it yourself" / "cheap" option despite having hourly rates above the threshold. Are they all being irrational, or what's going on here?

Let's get the clear rationalizations out of the way: yes, I know you might enjoy building Ikea furniture or getting a walk outside. If this is your objection, just modify the examples appropriately to remove the enjoyment from the task -- e.g., you've already built three identical pieces of Ikea furniture today; it's drizzling outside so the walk won't be enjoyable.

Okay, here's a real objection: "I would build the Ikea furniture on the weekend, and I don't work weekends, so it's not substituting for paid hours." Here's my rejoinder: are you completely insensitive to time on the weekends? Aren't there some things you would rather do than others? Let's say some old friends are in town -- how much would you spend to be able to hang out with them instead of building furniture? Probably a lot, right? I would claim that most people have things almost that cool they could do with that time; they just don't try very hard to come up with a better use of it before deciding to build the furniture.

My second rejoinder to the above objection: just because you are getting paid such-and-such dollars per hour doesn't mean that's what your free time is worth. Free time can be worth more or less than a job. Here's the definition I'm going to use in order to make this clear: your "free time" rate is the lowest amount of money you would charge to do an arbitrary boring (neither pleasant nor unpleasant) task for an hour.

The corollary to this definition: If someone is willing to pay you your "free time" rate for an hour of work, no strings attached, you should probably take their offer, because you can find a way to get that hour back for cheaper (e.g., by buying a nicer microwave with the money). Contrapositively, if you regularly find yourself turning down such jobs, you are either acting irrationally, or your free-time wage is higher than you initially thought it was.

Okay, another objection: The microwave doesn't save me time, since I'm not fully occupied while waiting for my food to cook, and so I could read Twitter on my phone. I would have done this anyway to procrastinate at some point during the day, so I just "moved" that time; I didn't save it.

I think this objection has merit, but I don't always go through my whole Twitter feed during the day, and might go a whole day without reading Twitter, so if that happens I did waste that time. So maybe we should value blocks of time differently according to their "shape" -- longer blocks are more valuable, because you can get into a focused state, or travel out of your house, or whatever, and more gets done with that time.

Next question. I don't have a lot of money, and there seem to be a lot of time/money tradeoffs trying to get my dollars. It would cost me my whole savings to upgrade my microwave, stove, knife set, dishwasher, and then I'd be out of money to hire the cleaning staff, furniture-building staff, grocery delivery staff, and so on. Where do I stop? Why pick the microwave over these other things?

I'm not quite sure. One possible answer: "this is what savings is FOR, and you should spend your money in the most effective way you know how, which probably means buying the things that give you the best value in time-for-money. With the time you save, you can get another job and earn back the savings, which can then be reinvested into your time..."

But, you say, savings is useful for a lot more than simply reinvestment. You can pay emergency medical bills, send kids through college, buy a house, save for retirement, or do other useful things which take large chunks of money. Okay, but let's talk about these. I don't have kids, don't intend to buy a house or car, and am reasonably healthy. Emergency medical bills, in the event they arise, can go on credit cards (or, in the worst case, I can declare bankruptcy). Should I be saving for retirement, though? A lot of people seem to recommend this. But it's unclear if this is a good idea, because I'm trading off against time now -- and it seems pretty likely that time now is more useful.

There must be other uses for large piles of savings: Traveling, angel investing. Or, if you're a different kind of person, perhaps you'd spend your extra cash on strippers and cocaine. I'm sure there are a ton of expensive valuable things which I will bucket into "Everything Else". I'd guess that someone like me would probably do well with $2-10k in savings. Figure out what your optimal savings account size is, and beyond that it seems like savings is pretty well spent on things which save lots of time. (Don't forget that you could also be donating large amounts of money to effective charities. I am not including that option in this analysis, mainly because I'm focused on time/money tradeoffs here.)

OK, I want to go back and analyze the commute example from the top of the post. I'll repeat it here -- let's say you work a high-tech job at $100k/yr, or $50/hr, and that you commute 1 hour each way to get there. Let's also say you chose this job over a lower-paying nearby one -- $80k/yr ($40/hr) with a 15 minute commute, and assume the jobs were otherwise equally attractive. You're paying an extra 90 minutes a day for an extra $80, so the value of your free time must be under $53/hr. Actually, it's much worse than that: in the US, the marginal tax rate at these salaries is quite high (~25%) so you're really only taking home an extra $60 for your time. So choosing the higher-paying job actually only makes sense if the (observed) value of your free time is below $40/hr.
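
To check that arithmetic in code (assuming 250 working days a year, which is what makes $100k/yr come out to $50/hr, and the ~25% marginal rate above):

    extra_pay_per_day = (100_000 - 80_000) / 250           # ~$80 per workday
    extra_commute_hours = 2 * 1.0 - 2 * 0.25               # 1.5 hours per day
    print(extra_pay_per_day / extra_commute_hours)         # ~$53/hr pre-tax
    print(0.75 * extra_pay_per_day / extra_commute_hours)  # ~$40/hr after tax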

This brings me to the last point I would like to make, which is that the hourly rate you're actually getting paid isn't tied that closely to the value of your free time. In fact, it's neither an upper nor lower bound. I'll give some more examples: let's say you're a lawyer making $150/hr. But your firm only has so much work to give you, and you have a lot of kids who will need to go through college, and not much savings. You are probably going to value adding to your savings over saving time in the short term, and so you'll bother to drive 40 minutes to the grocery store to save $20 on groceries -- the value of your free time is not more than $30/hr despite your high salary.

On the other hand, let's say you're an entrepreneur working 50 hours a week at a current salary of $15/hr because your company is barely funded enough to pay you even that much. But you're the leader in a fast-growing market, and a huge piece of the potential value of your company comes from remaining the market leader over the next year. Hammering out the assumptions: let's say the hours you work over the next year contribute equally to remaining the market leader, which would make the difference between a $70M company and a $100M company. Let's say you own a third of it and do a third of the work -- so the marginal value to you is $10M over the next year, divided by 7500 hours worked (by all three of you) in the year. This comes out to your free time being worth at least $1333/hr. Which means you should spend a lot of energy figuring out how to sustainably get more than 50 work hours in a week :)
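
Spelling out that arithmetic (assuming 50 working weeks in the year):

    value_difference = 100e6 - 70e6   # staying market leader: $30M at stake
    my_share = value_difference / 3   # I own a third: $10M
    total_hours = 3 * 50 * 50         # three founders, 50 hrs/wk, 50 weeks
    print(my_share / total_hours)     # ~$1333/hr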

Feedback on this post -- corrections, suggestions for improved analyses, and so on, are very much welcomed. (By email: lincoln / techhouse.org)

posted at: 01:54 | path: /rationality | permanent link to this entry

Sun, 14 Apr 2013

Spending Tradeoffs

I've found myself choosing more expensive options on a bunch of things lately. Are these tradeoffs correct? What goes into your tradeoff calculations?

Tradeoffs where I choose the more expensive option: (In these examples, the more expensive option is first.)

  • Guacamole on your burrito or no? (+$1/meal) - High deliciousness.
  • Manhattan or Brooklyn? (+$300/mo) - Regular payoff (convenience, excitement, awesomeness). Not sure about this one.
  • Get takeout or make dinner? (+$2/meal) - High value of my time (going to the store & cooking) outweighs cost.
  • Organic butter or regular? (+$3/lb) - Ostensible health benefits, minor taste benefits, and cost is low.
  • 27" or 21" iMac? (+$500) - Regular payoff in productivity.
  • Solid-state drive or platter? (+$150) - Regular payoff in productivity.
  • Amazon Prime or no? (+$80/yr) - Regular payoff in not having to think about how to obtain something.
  • Buy smoothie or make it? (+$4/day) - Prefer flavor; I don't like going to the grocery store. Not sure about this one.
  • Nice or cheap tea? (+$.50/cup) - High deliciousness. Low cost.
  • Withings scale (automatically uploads your weight) or cheap scale? (+$100) - Lowers barrier to tracking my weight daily, possibly resulting in health benefits.

Tradeoffs where I choose the cheaper option:

  • Men's dress shoes: Johnston & Murphy, or Rockport? (+$100) - Benefits (comfort) significant when used, but I only use them a couple times a year.
  • Gaming mouse or cheapo mouse? (+$40) - Benefits (accuracy, comfort) are not significant, despite daily use.
  • Frame that poster for my wall, or don't bother? (+$100) - Do not expect long-term benefits.
  • Business class or coach? (+$500 or more) - Benefits (comfort for a few hours) are not significant enough.
  • Designer jeans or cheap jeans? (+$100) - Benefits (comfort, style) are not significant enough, despite daily use. Not sure about this one since I've never worn designer jeans.

posted at: 17:18 | path: /rationality | permanent link to this entry

Mon, 21 May 2012

A Collection of Anti-Procrastination Techniques

I read How to Beat Procrastination when it was written, and have applied some of it to my own life. That post is somewhat theoretical, and significant parts smell a bit like BS. But that's okay, because I sort of believe the theory -- namely that you can look at the four factors Luke mentions (positive: value, expectancy; negative: impulsiveness, delay) and figure out which ones apply to the situation you're in, and then fix it, and you have a shot at reducing the procrastination behavior.

Without further ado:

  1. Setting a Timer - set your timer for 30 minutes. If you work straight through with minimal interruption, give yourself a reward. (Candy, coffee, Reddit, break, etc.) This increases the short-term value of the work you're doing. If you notice you wasted time during the 30-minute session, turn off the timer, take a break (get water but not food) and try setting the timer again.

  2. Keep a Planning Doc - I noticed I was procrastinating because I didn't know how to achieve the thing I wanted to achieve, or because it became too complicated for me to keep in my head. Expectancy was low -- since it was complicated or confusing, I had a low probability of doing the task correctly. For some reason, my brain always jumps to "must step away from computer / take a break" when this happens. However, when I'm on the clock (because I set a timer in #1), I would usually notice before getting distracted, and so I would recognize it as an instance of "needs to plan". Planning in my head doesn't work, so I keep a planning document (in a text editor) open at all times, and I write my plan in it.

    Planning gives you clock time credit, so feel good if you switch away from your actual work to write something in the planning doc. It is usually worth it, because then when you switch back to your actual work, you always have this context right there. It's been very helpful to have enough screen real estate for the planning doc to always be up, so I don't have to alt-tab to refer to it.

  3. Music - I noticed music was preventing me from achieving complicated goals because it was very distracting. So I turned it off, and had some very successful and productive timed sessions. However, when I was demotivated and didn't want to start the clock, music was helpful in motivating me. So there's an interesting middle ground there. My current strategy is to listen to upbeat music without trying to do anything else when I feel tired or demotivated, and once I get motivated enough, turn it off and set the timer.

  4. Website Blocker - block all the soul-sucking websites. I occasionally disable this when I'm letting myself goof off, but disabling it is enough steps that I realize I'm doing it, which is not always the case if there's no blocker.

  5. Stimulants - Caffeine raises my mood and improves my motivation levels. Modafinil appears to improve my motivation levels slightly, but it doesn't raise my mood at all (in fact, it sometimes seems to give me various minor aches and pains), so I take it much less often.

  6. Sleep - When I'm tired, I'm thoroughly demotivated to do much of anything that seems like work. Modafinil and caffeine can help with this, but they both appear to be short-term solutions, in that they allow me to move my sleep requirements around, but not to save overall time. However, melatonin is another story: it knocks me out quickly and thoroughly, so I get more restful sleep and I naturally wake up earlier than I otherwise would have. Napping is something that can help here too; I am still working on napping in the middle of the day or at weird times. (I still feel awkward doing so, even when it would obviously be beneficial.)

I'll add to this as I learn more things.

posted at: 17:51 | path: /rationality | permanent link to this entry

Fri, 02 Mar 2012

Reflecting on Rationality & Startups

It's been about six months since the end of Rationality Mega-Camp. Since then, what's been going on in my brain?

Well, I did a semester of graduate school, and started a company as well. I did a reasonably passable job on my class and research projects, while building the core of what is now a funded startup. I was quite busy with work, yet I still maintained a reasonable social & hobby life: brewing 60 gallons of beer, starting an indoor tomato garden, getting a decent amount of exercise, and spending social time with friends.

Blink. Wow. Now that I am stepping back to consider it, I can't remember a time when I felt even half as effective as I have consistently felt over the last six months. There have been ups and downs, but overall I've been both happy and productive -- I am pretty damn satisfied with my effectiveness.

The real question I am interested in is: how much of this level of effectiveness was due to RBC / rationality training?

I don't know how to directly make progress on answering this question, so I'm going to consider what skills from RBC I use often.

Models

The most important one seems to be that RBC taught me to produce better models of the world, and rely on them more. The catchphrase here is "nothing works or fails as if by magic," but that's not really very descriptive.

I'll give an example. Before, when evaluating a business idea, I would run down a mental checklist of "is it a real thing that you could produce? does it have a market? can you get people to hear about it and pay for it?" But that procedure wasn't very effective at answering the question, because my brain would often answer "yes" or "no" based on some confused process: for example, comparing that strategy against similar strategies that had worked for others in the past, without attempting to understand why they worked or failed.

Now, when I hear the same idea, I don't start with the checklist. Instead I actually reason about all the causal processes involved: "OK, so you want to build a widget. People want this widget because of X, which is a real problem that I agree people have. You expect them to hear about it because of Y, but I don't understand how it will spread to more people after that." Instead of a rote process, I am following a much more analytical process which relies on my models of the world, and I expect those models to be accurate and to lead to correct answers. When I am surprised by a result, I propagate that back to the models, so they are constantly improving.

Before, I was basically doing Cargo Cult Science. That's the Richard Feynman essay where he describes people going through the motions -- they see someone else achieving something and want the same, so they mimic the actions. My brain was mimicking the actions of past successful strategies without understanding why they worked. Now, I don't do this anymore.

I'm not sure if this even quite captures the depth of the insight I'm trying to describe. It might not be possible to convey in a single blog post. Your best bet for gaining it might be to read a lot of Robin Hanson and Tyler Cowen.

Anyway, nothing works or fails by magic, and most practically relevant causal processes seem to be grokkable by humans. So it's a matter of constantly learning and grokking more models. My brain does the learning part automatically, and as soon as it had enough of a framework to build on, it started filling in the holes. So the upshot is that I feel like I am a lot better at predicting the success or failure of various things than I used to be.

RBC did not directly attempt to teach this skill, but it was an attitude that a lot of people in the community had, and I am pretty sure I can credit RBC with teaching it to me -- simply by participating in discussions with SI folks and the surrounding community. People who I associate with very high levels of this skill are Michael Vassar, Anna Salamon, and Geoff Anders, all of whom taught at RBC.

Communication skills

Asking for examples! This was pretty much the first thing they taught at RBC and it's incredibly important to my day-to-day communication. I think of Anna every time I ask for an example.

Metadiscussion and metastrategy: these are things that my fellow campers and instructors loved to talk about. Example -- "at the beginning of a brainstorming session, remind people to discuss the problem thoroughly before proposing any solutions"; "during a meeting, regularly ask if the meeting is achieving the goals it was designed for".

Not trusting my brain

I set timers, I put things on calendars, I write things down, I block myself from the internet. All these things I do regularly and it helps. I admit that my brain is not perfect and that it will do the wrong thing if I don't take simple steps to correct it. So I take the damn steps. I learned like a hundred ways that brains can go wrong at RBC, and I apply bits of that knowledge regularly.

So did RBC help?

I still don't know. This post is quite unfinished but I am going to post it and get back to work. I intend to take the other angle later, arguing that despite these skills, RBC is not mainly responsible for my seemingly higher levels of effectiveness. Stay tuned...

posted at: 04:17 | path: /rationality | permanent link to this entry

Sat, 13 Aug 2011

RBC review exercises

The content of this post is on the shared RBC blog.

This is my last post on the shared blog, because Rationality Mega-Camp / Boot Camp is done after this weekend! Further posts by me will be here.

posted at: 18:16 | path: /rationality/bootcamp | permanent link to this entry

Sun, 31 Jul 2011

Fashion trips

The content of this post is on the shared RBC blog. I wrote this on Jul 31, and forgot to link it here last week -- whoops.

posted at: 22:00 | path: /rationality/bootcamp | permanent link to this entry

Sun, 26 Jun 2011

Munchkining

One of the defining philosophies of the local Singularity Institute community here is "Munchkining at life". Where I come from we would call it min/maxing your life, or hacking your life. It refers to winning (making money or happiness) by unconventional or unexpected means. You're following the rules (laws and ethics) but not obeying societal expectations.

Eliezer Yudkowsky and Michael Vassar proclaim (loudly and to anyone who will listen) that there are a huge number of munchkining opportunities for anyone to take. Low-hanging fruit, in other words. They believe this strongly enough that they've told us that one of the goals of Rationality Boot Camp is to produce millionaires, by teaching us the skills we need to notice and take advantage of these munchkin opportunities.

If Michael and Eliezer are correct about the low-hanging fruit, and if they can successfully teach the appropriate skills, then it makes sense that SingInst is paying for us to be here: they expect to get a lot of big donations, either from us directly in a few years, or through us because we end up in powerful positions.

Anyway, we had a session about munchkining today, called "The World is Mad." Eliezer first polled us about how much weight we assigned to the implication that "nobody else is doing X" should mean that X is a bad idea. I gave the weight four out of ten, and the answers were quite varied. Eliezer then gave lots of examples of things people weren't doing which were obviously a good idea (and things which people were doing which were shown to be retarded).

For each of these examples, we rated how much it caused us to believe the "world is mad" hypothesis. For me the most impressive one was the hospital checklist, followed by prison rape. How much does each of these cause you to update?

I guess the experiment is to determine whether people can really internalize that "the world is mad". Eliezer and Michael were saying that they present people with all this data, and yet people still look at what others are doing to determine whether something is a good idea. I can see what they're saying, because despite all this, I haven't updated more than a couple points on that scale. I initially answered 4 out of 10; I would now put my estimation of "the world is mad" at about 6 or 7. I am still not fully updated to that view; it's hard for me to internalize.

Apparently one science fiction author, Jack Vance, treats this view as a skill, "zs'hanh", which is defined as "contemptuous indifference to the activity of others". Eliezer tweaked this definition to "contemptuous indifference to the inactivity of others". I expect to make a lot of progress learning this skill this summer.

posted at: 03:00 | path: /rationality/bootcamp | permanent link to this entry

Sat, 25 Jun 2011

The Elephant and the Rider

Here's a metaphor we've been using a lot at Rationality Boot Camp: People's behavior is controlled by an elephant and a human rider -- the rider represents your rational, cognitive, high-level thought processes, whereas the elephant represents your emotions, habits, and unconscious thoughts. The metaphor works well: the proportions are about right, and when the elephant and the rider conflict, guess who wins.

One practical technique we've been studying is Nonviolent Communication (NVC), a buzzword-driven methodology for communicating with people, which focuses on ways to defuse upsetting situations. Someone called it "elephant whispering". The idea is that if someone is angry with you, or frustrated, or whatever, their elephant is going crazy, and you can help tame it so their rider can get back in control.

The main technique we're learning through NVC is empathy. You can derive utterances which will reliably calm an upset person down and engage their cognition. The elephant apparently responds very well to this sort of empathy. This is as opposed to a number of common ways people try to talk to upset people, which don't usually work as well. Don't do these:

  • Advising: "I think you should..." (I do this one a lot!)
  • One-Upping: "That's nothing. Wait until you hear what happened to me."
  • Educating: "This could turn into a positive experience for you if you just..."
  • Consoling: "It wasn't your fault. You did the best you could."
  • Storytelling: "That reminds me of the time..."
  • Shutting down: "Cheer up. Don't feel so bad."
  • Sympathizing: "Oh, you poor thing."
  • Interrogating: "When did this begin?"
  • Explaining: "I would have called, but..."
  • Correcting: "That's not how it happened."

Instead, the technique says to talk only about "observations, feelings, needs, and requests." Observations are factual statements, like "You left the light on last night" (what not to say: generalizations or evaluations, like "you always leave the light on").

Feelings are observations about emotions. Things like "angry", "confused", "disappointed", "frustrated", "sad", "embarrassed", etc. There are also positive feelings: "hopeful", "intrigued", "proud" and so on.

"Needs" refer to basic human needs which our elephant wants: self-worth, food, rest, love, creativity, order, respect, etc.

And requests are from one person to another, reasonable things to ask of another person, like "turn off the light before you sleep" or "knock before you enter".

You can leave out any of the observations, feelings, needs, or requests if they don't apply. You can also take guesses as to the cause. If you successfully signal curiosity, the other person won't get more upset even if you guessed wrong -- instead, they'll correct your guess, and in the process it will engage their rider and naturally defuse their upsetness.

For example, if your roommate stomps into the kitchen and won't talk to you: "Are you upset because the kitchen isn't clean?" Even if you're wrong, more likely than not, your roommate will figure out her reason for being upset, and should become less upset and more willing to talk.

There are a lot of good examples and more information in wikiHow's article.

My evaluation? When I imagine some of the confrontations I've had, I can easily imagine NVC working. The instructor, Divia, says that her interpersonal relationships have massively improved with the use of NVC. For these reasons I am inclined to believe that it works, with fairly high uncertainty until I have put some of this stuff into practice. In any case, let me know if you've had any experiences which would show that it works or doesn't work.

posted at: 03:58 | path: /rationality/bootcamp | permanent link to this entry

Fri, 24 Jun 2011

Tortuga Meetup

Apparently I've fallen slightly behind on my weekly update schedule. I planned to update yesterday -- at the last minute, of course -- and fell prey to the planning fallacy.

Eliezer showed up at the Tortuga Less Wrong meetup yesterday, and it was a blast.

Tortuga is a communal living group in Mountain View. The Boot Camp is in Berkeley, which isn't exactly close to Mountain View; it's extremely inconvenient and expensive to get there via public transport. Fortunately some people had cars we could pile into.

There were probably 30 or 40 people there. We did an exercise called "The Strangest Thing an AI Could Tell You": come up with the strangest thing an AI could tell you that you would actually believe, rather than concluding that the AI was broken or that you were going crazy.

One of the interesting questions we came up with is: "why does the AI output this particular statement?" Did you ask it "do all humans have a tail that we just can't see" and it said yes? Or did you ask it "tell me something surprising" and it produced this gem? The question doesn't specify, and the answers are quite different depending on the process by which the AI generated the statement.

We spent a while talking about this, and then moved on to random other topics, and it was really a blast. More than half the attendees stuck around for at least four hours. Eventually some people went to hang out in the naked hot tub in the back. I went, of course, and Eliezer also went, and now I have the privilege and bragging rights of having played Truth or Dare in a hot tub naked with Eliezer Yudkowsky.

posted at: 17:53 | path: /rationality/bootcamp | permanent link to this entry

Fri, 17 Jun 2011

Memory

I am learning how to memorize things!

Today our instructor Geoff taught us memorization techniques which really work and have a huge, obvious effect. The two techniques are called "pegs" and "memory palace".

I'd heard of memory palace before. I hadn't heard of pegs. They're both fairly ancient techniques, so nothing groundbreaking here, but they are highly effective and easy to learn.

Today, we first did a pre-test to measure memorization before trying to learn any techniques. Geoff read us a list of twenty random objects and we had to try to recall them in the order given. I remembered eight objects correctly -- the first four, the last two, and a couple in the middle which stood out.

After the pre-test, we started working on memorization techniques. I chose pegs.

The idea with pegs is that you use normal flash-card style memorization techniques to bind numbers to objects. For me, one is "hat" and two is "hen" and three is "ham". These objects can be reused many times, so you only have to memorize the list once. When you're actually faced with a list of things to memorize, you just "hang" each of the memorizees on their associated pegs. So if Geoff reads "1. Zipper," you add a zipper to your hat. Later you just think of the number 1, which leads you to think of a hat, which leads you to remember the zipper on the hat.

The peg objects can be anything you want, but I followed the Major System, which assigns major consonants to each digit. So 1 is 'd' or 't', 2 is 'n', 3 is 'm'. You can look up the full list on Wikipedia, but the idea is that you come up with a memorable word whose only major consonants encode the digits of the number you're trying to remember. (Vowels and 'w', 'h', 'y' don't count as major consonants for this purpose.) This appears to be a very scalable system and I expect to derive good use from it.
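
To give a flavor of how mechanical the system is, here's a toy sketch of the letter-level version (my own hypothetical helper; the real system maps sounds rather than spelling, so e.g. 'sh' and soft 'g' would also encode 6):

    # Simplified Major System: consonant letters -> digits.
    MAJOR = {'s': 0, 'z': 0, 't': 1, 'd': 1, 'n': 2, 'm': 3, 'r': 4,
             'l': 5, 'j': 6, 'k': 7, 'g': 7, 'c': 7, 'q': 7,
             'f': 8, 'v': 8, 'p': 9, 'b': 9}

    def word_to_digits(word):
        # Vowels and w, h, y are skipped; every other mapped letter encodes a digit.
        return ''.join(str(MAJOR[ch]) for ch in word.lower() if ch in MAJOR)

    print(word_to_digits('hat'))  # '1'
    print(word_to_digits('hen'))  # '2'
    print(word_to_digits('ham'))  # '3'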

Anyway, I was able to correctly memorize only nine things (instead of eight) in the post-test, but I was able to recall twelve two hours later -- and I can tell I would have done much better if I had been better at remembering my pegs. I'm going to spend time each day practicing my pegs until they're second nature. I would expect to get 19 or 20 out of 20 once I can quickly come up with the peg image for a given number.

posted at: 01:02 | path: /rationality/bootcamp | permanent link to this entry