Some ways to “clear space”

[Epistemic status: pursuing ideas, no clear conclusion]

Part of the thinking going into my Psychology and Phenomenology of Productivity sequence

Followup to: What to do with should/flinches: TDT-stable internal incentives

So there’s a problem.  When I’m agitated, the thing that most helps is doing Focusing on the agitation, to dialogue with it and get clarity about which goals are threatened. But when I’m most agitated, my mind tends to glance off of my agitation. I can’t stabilize my intention on the agitation enough to start doing Focusing.

So I have a circular dependency. I want to do Focusing, to help the agitation. I can’t do Focusing, because I’m agitated.

I think resolving this is what is meant by the “clearing a space” step in Gendlin’s 6 steps, and may be isomorphic to “unblending”.

Personally, I really want to have a systematic solution to this problem.

These are some things that I know help boost me out of the circular dependency.

  • Grab another person to be my focusing companion. This one helps hugely, for reasons that are unknown to me. (Extra working memory? I don’t think that’s it. Maybe having another person looking at me creates a slight pressure towards coherent trains of thought, instead of my mind/attention jumping around from stimulus to stimulus? That seems closer.)
  • Start writing. This seems like it also anchors my attention, so that it’s easier to be in contact with the anxiety/agitation, without slipping off.

These are some things that might help, but I haven’t tried in depth yet.

  • Some explicit practice with the unblending step of Focusing? (It’s my understanding that some Focusing teachers train this explicitly.)
  • Top-down regulation of my SNS activity, using something like Val’s old Againstness, or my suggested serenity routine from 2014?

 

Note that I need a solution that itself avoids the problem of the circular dependency. Whatever technique I use to make space to do Focusing on the agitation has to be easy to do when agitated, or what’s the point?

But ideally, I could have a TAP sequence that looked like…

[Notice the agitation] -> [Snap my fingers (or something), to reify and time-condense the noticing] -> [Clear space somehow] -> [Do Focusing on the root of my agitation]


What to do with should/flinches: TDT-stable internal incentives

[epistemic status: current hypothesis, backed by some simple theory, and virtually no empirical evidence yet]

{Part of the thinking going into my Psychology and Phenomenology of Productivity sequence}

The situation

I’ve been having an anxious and unproductive week. There’s a project that I intend to be working on, but I’ve been watching myself procrastinate (fairly rare for me these days), and work inefficiently.

More specifically, this situation occurs:

I’m sitting down to start working. Or I am working, and I encounter a point of ambiguity, and my attention flinches away. Or I’m taking a break and intend to start working again.

At that moment, I feel the pressure of the “should”, the knowing that I’m supposed to/I reflectively want to be making progress, and also feel the inclination to flinch away, to distract myself with something, to flick to LessWrong (it used to be youtube, or SMBC, but I blocked those) or to get something to eat. This comes along with a clench in my belly.

The Opportunity and Obligation

This is a moment of awareness. At that moment, I am conscious of my state, I’m conscious of my desire to make progress on the project. If I do flick to LessWrong, or otherwise distract myself, I will lose that conscious awareness. I’ll still feel bad, still have the clench in my belly, but I won’t be consciously aware of the thing I’m avoiding (at least until the next moment like this one). At that moment, I’m at choice about what to do (or at least more at choice). In the next moment, if the default trajectory is followed, I won’t be.

Realizing this puts a different flavor on procrastination. Typically, if I’m procrastinating, I have a vague “just one more” justification. It’s ok to watch just one more youtube clip; I can quit after that one. I can stay in bed for another five minutes, and then get up. But if my level of consciousness of my situation fluctuates, that justification is flatly not true.

I have the opportunity right now, to choose something different. I, in actual fact, will not have that opportunity in five minutes.

That me, right then, in that timeslice, has a specific obligation to the world. [I should maybe write a post about how my morality cashes out to different timeslices having different highly-specific obligations to serve the Good.] In that moment, I, the me that is conscious of the should, have the obligation to seize the opportunity of that increased consciousness and use it to put myself on a trajectory such that the next timeslice can effectively pursue a project that will be a tick of the world iterating to a good, safe, future.

The problem

The naive way to seize on that opportunity is to force myself to do the task.

There’s a problem with that solution, aside even from the fact that it doesn’t seem like it will work (it’s typically a red flag when one’s plan is “I’ll just use will power”). Even if I could reliably seize on my moment of awareness to force myself to overcome the aversion of my flinch response, doing so would disincentivize me from noticing in the first place.

Doing that would be to install a TAP: whenever I notice myself with a should/flinch, I’ll immediately grit my teeth and perform an effortful and painful mental action. This is conditioning my brain to NOT notice such experiences.

Which is to say, the “just do it” policy is not stable. If I successfully implemented it, I would end up strictly worse off, because I’d still be procrastinating, but I would be much less likely to notice my procrastination.

A guess at a solution

After having noticed this dynamic this week, this is the approach that I’m trying: when I notice the experience of an entangled “should” and the flinch away from it, I orient to hold both of them. More specifically, I move into facilitation mode, where my goal is to make sure that the concerns of both parts are heard and taken into account. Not to force any action, but to mediate between the two conflicting threads.

(Taking advantage of fleeting moments of increased consciousness to hold the concerns of two inchoate and conflicting things at once is a bit tricky, but I bet I’ll acquire skill with practice.)

If I were to generalize this goal, it is something like: when I have a moment of unusual awareness of a conflict, I move in the direction of increased awareness.

I’ve only been doing this for a few days, so my n is super small, and full of confounds, but it seems to have led to more time spent dialoguing with parts, and my days this week have been increasingly focused and productive.

 

Culture vs. Mental Habits

[epistemic status: personal view of the rationality community.]

In this “post”, I’m going to outline two dimensions on which one could assess the rationality community and the success of the rationality project. This is hardly the only possible break-down, but it is one that underlies a lot of my thinking about rationality community building, and what I would do, if I decided rationality community building were a strong priority.

I’m going to call those two dimensions Culture and Mental Habits. As we’ll see, these are not cleanly distinct categories, and they tend to bleed into each other. But they have separate enough focuses that one can meaningfully talk about the differences between them.

Culture

By “culture” I mean something like…

  • Which good things are prioritized?
  • Which actions and behaviors are socially rewarded?
  • Which concepts and ideas are in common parlance?

Culture is about groups of people, what those groups share and what they value.

My perception is that on this dimension, the Bay Area rationality community has done extraordinarily well.

Truth-seeking is seen as paramount: individuals are socially rewarded for admitting ignorance and changing their minds. Good faith and curiosity about other people’s beliefs is common.

Analytical and quantitative reasoning is highly respected, and increasingly, so is embodied intuition.

People get status for doing good scholarship (e.g. Sarah Constantin), for insightful analysis of complicated situations (e.g. Scott Alexander), or for otherwise producing good or interesting intellectual content (e.g. Eliezer).

Betting (putting your money where your mouth is) is socially-encouraged. Concepts like “crux” and “rationalist taboo” are well known enough to be frequently invoked in conversation.

Compared to the backdrop of mainline American culture, where admitting that you were wrong means losing face, and trying to figure out what’s true is secondary (if not outright suspicious, since it suggests political non-allegiance), the rationalist bubble’s culture of truth seeking is an impressive accomplishment.

Mental habits

For lack of a better term, I’m going to call this second dimension “mental habits” (or perhaps to borrow Leverage’s term “IPs”).

The thing that I care about in this category is “does a given individual reliably execute some specific cognitive move, when the situation calls for it?” or “does a given individual systematically avoid a given cognitive error?”

Some examples, to gesture at what I mean:

  • Never falling prey to the planning fallacy
  • Never falling prey to sunk costs
  • Systematically noticing defensiveness and responding with deflinching or a similar move
  • Systematically noticing and responding to rationalization phenomenology
  • Implementing the “say oops” skill, when new evidence comes to light that overthrows an important position of yours
  • Systematic avoidance of the sorts of errors I outline in my Cold War Cognitive Errors investigation (this is the only version that is available at this time).

The element of reliability is crucial. There’s a way that culture is about “counting up” (some people know concept X, and use it sometimes) and mental habits is about “counting down” (each person rarely fails to execute relevant mental process Y).

The reliability of mental habits (in contrast with some mental motion that you know how to do and have done once or twice), is crucial, because it puts one in a relevantly different paradigm.

For one thing, there’s a frame under which rationality is about avoiding failure modes: how to succeed in a given domain depends on the domain, but rationality is about how not to fail, generally. Under that frame, executing the correct mental motion 10% of the time is much less interesting and impressive than executing it every time (or even 90% of the time).

If the goal is to avoid the sorts of errors in my cold war post, then it is not even remotely sufficient for individuals to be familiar with the patches: they have to reliably notice the moments of intervention and execute the patches, almost every time, in order to avoid the error in the crucial moment.

Furthermore, systematic execution of a mental TAP allows for more complicated cognitive machines. Lots of complex skills depend on all of the pieces of the skills working.

It seems to me that, along this dimension, the rationality community has done dismally.

Eliezer wrote about Mental Habits of this sort in the sequences and in his other writing, but when I consider even very advanced members of my community, I think very few of them systematically notice rationalization, or will reliably avoid sunk costs, or consistently respond to their own defensiveness.

I see very few people around me who explicitly attempt to train 5-second or smaller rationality skills. (Anna and Matt Fallshaw are exceptions who come to mind).

Anna gave a talk at the CFAR alumni reunion this year, in which she presented two low-level cognitive skills of that sort. There were about 40 people in the room watching the lecture, but I would be mildly surprised if even 2 of those people reliably execute the skills described, in the relevant trigger situations, 6 months after that talk.

But I can imagine a nearby world, where the rationality community was more clearly a community of practice, and most of the people in that room would watch that talk and then train the cognitive habit to that level of reliability.

This is not to say that fast cognitive skills of this sort are what we should be focusing on. I can see arguments that culture really is the core thing. But nevertheless, it seems to me that the rationality community is not excelling on the dimension of training its members in mental TAPs.

[Added note: Brienne’s Tortoise skills is nearly archetypal of what I mean by “mental habits”.]