Focusing is debugging for the brain


"a high-quality realistic photo of a computer programmer sitting in front of a computer screen with a brain on it" by DALL-E

I’m working through Hammertime, a 30-day program of instrumental rationality exercises. Hammertime introduced me to a technique called Focusing via a LessWrong post called “Focusing, for skeptics”. Focusing involves:

  1. Thinking about a problem
  2. Paying attention to how your body feels. This is a felt sense.
  3. Coming up with a handle for that felt sense: a word, image, or concept.
  4. Comparing the handle to the felt sense, paying attention to whether the handle feels right. If it does, go back to step 3 and narrow down the handle, adding more description and nuance. If it doesn’t, go back to step 3 with a fresh handle. Repeat!
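Since the post is about the parallel with programming, the loop above can be sketched as code. This is a playful toy model, not anything from the Focusing literature: here a “felt sense” is just a set of descriptive words, and a handle “feels right” when all of its words match the felt sense.

```python
# A toy model of the Focusing loop. The "felt sense" is a set of
# descriptive words; a handle is a list of words that describe it.

def feels_right(handle, felt_sense):
    """Step 4: does every word in the handle match the felt sense?"""
    return set(handle) <= felt_sense

def focusing(felt_sense, candidates):
    """Try candidate words, keeping a handle only as long as it feels right."""
    handle = []
    for word in candidates:
        trial = handle + [word]            # step 3: propose a richer handle
        if feels_right(trial, felt_sense): # step 4: compare to the felt sense
            handle = trial                 # narrow down: keep the added nuance
        # otherwise discard the word and try the next candidate
    return handle

sense = {"tight", "heavy", "stuck"}
print(focusing(sense, ["tight", "warm", "stuck"]))  # -> ['tight', 'stuck']
```

The real technique is introspective, of course; the sketch just makes the refine-or-discard structure explicit.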

This is pretty much how I debug a computer program. First, I gather information about the bug. What part of the code is it in? When the bug happens, what state is the program in? This is like paying attention to a felt sense.

Then, I lean on my intuition to surface hypotheses—short sentences that point to the bug’s root cause. This is like coming up with a handle for a felt sense.

Finally, I compare each hypothesis against the data. I might fill in blanks, tweak a hypothesis, or throw it out entirely. This is like comparing a handle to a felt sense.

I’m good at debugging but not at Focusing. It makes sense: I have years of experience ingesting information about computer programs. I’ve spent much less time paying attention to my body. (I haven’t meditated seriously for a couple of years, so I’m out of practice.) Plus, my accumulated coding knowledge fuels my bug intuition. It takes longer to come up with good handles for felt senses because I have less data on them.

One takeaway for me: when Focusing, I should leave behind any preconceived notions of how I feel about a problem. It’s the same trap as in debugging, where it’s easy to stay attached to an existing hypothesis even when the data are against it.

(Edit: I asked ChatGPT to help me edit this blog post for concision. It did remove some unnecessary words.)