Ideology/narrative stabilizes path-dependent equilibria

[Epistemic status: sounds on track]

[Note: Anna might have been saying basically this, or something very near to this, for the past six months]


Lately I’ve been reading (well, listening to) the Dictator’s Handbook by Bruce Bueno de Mesquita and Alastair Smith, which is something like a realpolitik analysis of how power works, in general. To summarize in a very compressed way: systems of power are made up of fractal hierarchies of cronies who support a leader (by providing him the means of power: the services of an army, the services of a tax-collector, votes that keep him in office) in return for special favors. Under this model, institutions are pyramids of “if you scratch my back, I’ll scratch yours” relationships.

This overall dynamic (and its consequences) is explained excellently in this 18-minute CGP Grey video. Highly recommended, if you haven’t watched it yet.


One consequence of these dynamics is how coups work. In a dictatorship, if an upstart can secure the support of the army, and seize the means of revenue generation (and perhaps the support of some small number of additional essential backers) he gets to rule.

And this often happens in actual dictatorships. The authors describe the case of Samuel Doe, a sergeant in the Liberian military, who one night, with a small number of conspirators, assassinated the former dictator of Liberia in his bed, seized control of the treasury, and declared himself the new president of Liberia. Basically, because he now had the money, and so would be the one to pay them, the army switched allegiances and legitimized his authority. [Note: I think there are a lot of important details to this story that I don’t understand, which might make my summary here misleading or inaccurate.]

Apparently, this sort of coup is common in dictatorships.


But I’m struck by how impossible it would be for someone to seize the government like that in the United States (at least in 2019). If a sitting president were voted out of office, but declared that he was not going to step down, it is virtually inconceivable that he could get the army and the bureaucracy to rally around him and seize / retain power, in flagrant disregard of the constitutional protocols for the hand-off of power.

De Mesquita and Smith, as well as CGP Grey, discuss some of the structural reasons for this: in technologically advanced liberal democracies, wealth is produced primarily by educated knowledge workers. Therefore, a leader can’t neglect the needs of the population at large the way one can in a dictatorship, or he will cut off the flow of revenue that funds the state apparatus.

But that structural consideration doesn’t seem to be most of the story to me. It seems like the main factor is ideology.


I can barely imagine a cabal of the majority of high-ranking military officials agreeing to back a candidate who lost an election, even if they assessed that backing that candidate would be more profitable for them. My impression of military people in general is that they are extremely proud Americans, for whom the ideals of freedom and democracy are nigh-spiritual in their import. They believe in Democracy, and the rule of law, in something like the way that someone might believe in a religion.

And this is a major stabilizing force of the “Liberal Democracy” attractor. Not only does this commitment to the ideals of America act in the mind of any given high-ranking military officer, making the idea of a coup distasteful to him, there’s an even more important pseudo-common-knowledge effect. Even if a few generals are realpolitik-minded, sociopathic, personal-expected-utility maximizers, the expectation that other military leaders do have that reverence for democracy, and will therefore oppose coups against the constitution, makes organizing a coup harder and riskier. If you even talk about the possibility of seizing the state, instead of deferring to the result of an election, you are likely to be opposed, if not arrested.
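The pseudo-common-knowledge point can be made concrete with a toy expected-utility calculation (a sketch of my own, not from De Mesquita and Smith; all numbers and parameter names are made up for illustration):

```python
# Toy coordination game: a coup succeeds only if enough officers join, so a
# self-interested officer's expected payoff from joining depends on how many
# other officers he expects to join.
from math import comb

def coup_payoff(p_join, n_officers=20, threshold=0.5, win=10.0, lose=-100.0):
    """Expected payoff to one officer who joins a coup.

    p_join:    his belief that any other given officer also joins.
    threshold: fraction of the n_officers needed for the coup to succeed.
    win/lose:  payoff if the coup succeeds / fails (failure ~ arrest).
    """
    others = n_officers - 1
    needed = int(threshold * n_officers) - 1  # others needed besides him
    # Probability that at least `needed` of the others join (binomial tail).
    p_success = sum(
        comb(others, k) * p_join**k * (1 - p_join)**(others - k)
        for k in range(needed, others + 1)
    )
    return p_success * win + (1 - p_success) * lose

# If most officers are believed loyal to the constitution, joining has
# sharply negative expected value; only near-universal expected defection
# makes it attractive.
low = coup_payoff(p_join=0.1)   # coup almost surely fails
high = coup_payoff(p_join=0.9)  # coup almost surely succeeds
```

The point of the sketch: widespread reverence for democracy drives each officer’s estimate of `p_join` down, which makes even a purely selfish officer’s calculation come out against joining, which in turn keeps `p_join` low for everyone else.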

And even if all of the top military leaders somehow managed to coordinate to support a coup, in defiance of an election result, they would run into the same problem one step down on the chain of command. Their immediate subordinates are also committed patriots, and would oppose their superiors’ outright power grab.

The ideology, the belief in democracy, keeps democracy stable.

Realpolitik analysis is an info hazard?

Indeed, we might postulate that if all of the parties involved understood, and took for granted, the realpolitik analysis that who has power is a matter of calculated self-interest and flows of resources (in the style of the Athenians’ reply to the Melians), as opposed to higher ideals like justice or freedom, this would erode the stabilizing force of democracy, which I think is generally preferable to dictatorship.

(Or maybe not: maybe even if everyone bought into the realpolitik analysis, they would still think that democratic institutions were in their personal best interest, and would oppose disruption no less fervently.)

I happen to think that realpolitik analysis is basically correct, but propagating that knowledge may represent a negative externality. (Luckily (?), this kind of ideology has an immune system: people are reluctant to view the world in terms of naked power relations. Believing in Democracy has warm fuzzies about it.)

There’s also the possibility of an uncanny valley effect: if everyone took for granted the realpolitik analysis, the world would be worse off than it is now, but if everyone took that analysis for granted and also took something like TDT for granted, then we would be better off?

When implementation diverges from ideal

The ideology of democracy or patriotism does represent a counter-force against naked, self interested power grabs. But it is a less robust defense against other ideologies.

Even more threatening is when the application of an ideology is in doubt. Suppose that an election is widely believed to have been fraudulent, or the “official” winner of an election is not the candidate who “should have won”. (I’m thinking of a situation in which a candidate wins the popular vote by a huge margin, but still loses the electoral college.) In cases like these, high-ranking members of the military or bureaucracy might feel that the actual apparatus of democracy is no longer embodying the spirit of democracy: representing the will of the people.

In a severe enough situation of this sort, they might feel that the patriotic thing to do is actually to revolt against the current corrupt system, in the service of the true ideal that the system has betrayed. But once this happens, the clear, legitimized succession of power is broken, and who should rule becomes contentious. I expect this to devolve into chaos, in which many would make a power grab by claiming to be the true heir to the American Ideal.

In the worst case, the US degrades into a “Warring States” period, as many warlords vie for power via the use of force and rhetoric.

Some interesting notes

One thing that is interesting to me is the degree to which it only matters if a few groups have this kind of ideology: the military, and some parts of the bureaucracy.

Could we just have patriotism in those sectors, and abandon the ideology of America elsewhere? Interestingly, that sort of looks like what the world is like: the military and some parts of the government (red tribe?) are strongly proud to serve America and defend freedom, while my stereotype of someone who lives in Portland (blue tribe) might wear a button that reads “America was never great” and talk a lot about how America is an empire that does huge amounts of harm in the world, and how democracy is a farce. [Although this may not indicate that they don’t share the ideology of Democracy. They’re signaling sophistication by counter-signaling, but if push came to shove, the Portlander might fight hella hard for Democratic institutions.]

Insofar as we do live in a world where we have the ideology of Democracy in exactly the places where it needs to be to protect our republic, how did that happen? Is it just that people who have that ideology self-select into positions where they can defend it? Or is it that people whose power and standing are based on a system are biased towards thinking that that system is good?

Conclusion: generalizing to other levels of abstraction

I bet this analysis generalizes. That is, it isn’t just that the ideology of democracy stabilizes the democracy attractor. I suspect that that is what narratives / ideologies / ego structures do, in general, across levels of abstraction: they help stabilize equilibria.

I’m not sure how this plays out in human minds. You have a story about who you are and what you’re about and what you value, and a bunch of sub-parts buy into that story (that sounds weird? How do my parts “buy into” or believe (in) my narrative about myself?), and this creates a Nash equilibrium where if one part were to act against the equilibrium, it would be punished, or cut off from some resource flow?

Is that what rationalization is? When a part “buys into” the narrative? What does that even mean? Are human beings made of the same kind of “if you scratch my back, I’ll scratch yours” relationships (between parts) as institutions are made of (between people)? How would that even work? Do they make trades across time in the style of Andrew Critch?

I bet there’s a lot more to understand here.




My current model of Anxiety

[epistemic status: untested first draft model

Part of my Psychological Principles of Productivity series]

This is a brief post on my current working model of what “anxiety” is. (More specifically, this is my current model of what’s going on when I experience a state characterized by high energy, distraction, and a kind of “jittery-ness”/ agitation. I think other people may use the handle “anxiety” for other different states.)

I came up with this a few weeks ago, during that period of anxiety and procrastination. (It was at least partially inspired by my reading a draft of Kaj’s recent post on IFS. I don’t usually have “pain” as an element of my psychological theorizing.)

The model

Basically, the state that I’m calling anxiety is characterized by two responses moving “perpendicular” to each other: increased physiological arousal, mobilizing for action, and a flinch response redirecting attention to decrease pain.

Here’s the causal diagram:



The parts of the model

It starts with some fear or belief about the state of the world. Specifically, this fear is an alief about an outcome that 1) would be bad and 2) is uncertain.

For instance:

  • Maybe I’ve waited too late to start, and I won’t be able to get the paper in by the deadline.
  • Maybe this workshop won’t be good and I’m going to make a fool of myself.
  • Maybe this post doesn’t make as much sense as I thought.

(I’m not sure about this, but I think that the uncertainty is crucial. At least in my experience, at least some of the time, if there’s certainty about the bad outcome, my resources are mobilized to deal with it. This “mobilization and action” has an intensity to it, but it isn’t anxiety.)

This fear is painful, insofar as it represents the possibility of something bad happening to you or your goals.

The fear triggers physiological arousal, or SNS activation. You become “energized”. This is part of your mind getting you ready to act, activating the fight-or-flight response, to deal with the possible bad-thing.

(Note: I originally drew the diagram with the pain causing the arousal. My current guess is that it makes more sense to talk about the fear causing the arousal directly. Pain doesn’t trigger fight-or-flight responses (think about being stabbed, or having a stomach ache). It’s when there’s danger, but not certain harm, that we get ready to move.)

However, because the fear includes pain, there are other parts of the mind that have a flinch response. There’s a sub-verbal reflex away from the painful fear-thought.

In particular, there’s often an urge towards distraction. Distractions like…

  • Flipping to facebook
  • Flipping to LessWrong
  • Flipping to Youtube
  • Flipping to [webcomic of your choice]
  • Flipping over to look at your finances
  • Going to get something to eat
  • Going to the bathroom
  • Walking around “thinking about something”

This is often accompanied by rationalizing thoughts, that is, justifying the distraction behavior to yourself.

So we end up with the fear causing both high levels of physiological SNS activation, and distraction behaviors.
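Since the original diagram image is not reproduced here, the causal structure described above can be sketched as a small graph (my reconstruction from the text; the node names are my own):

```python
# My reconstruction of the post's causal diagram: fear causes arousal
# directly, and separately involves pain, which drives the flinch and
# distraction responses.
causes = {
    "fear": ["arousal (SNS activation)", "pain"],
    "pain": ["flinch"],
    "flinch": ["distraction behaviors"],
    "arousal (SNS activation)": ["jittery high energy"],
    "distraction behaviors": ["reactivity"],
}

def downstream(node, graph):
    """All effects reachable from a node, via depth-first traversal."""
    out = set()
    stack = [node]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

# Fear sits upstream of both responses: the energized one and the
# distracted one.
effects = downstream("fear", causes)
```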


The distraction-seeking is what gives rise to the “reactivity” (I should write about this sometime) of anxiety, and the heightened SNS gives rise to the jittery “high energy” of anxiety.

Of course, these responses work at cross purposes: the SNS energy is mobilizing for action (and will be released when action has been taken and the situation is improved), and the flinch is trying not to think about the bad possibility.

I think the heightened physiological arousal might be part of why anxiety is hard to dialogue with. Doing Focusing requires (? is helped by?) calm and relaxation.

I think this might also explain a phenomenon that I’ve observed in myself: both watching TV and masturbating defuse anxiety. (That is, I can be highly anxious and unproductive, but if I watch YouTube clips for an hour and a half, or masturbate, I’ll feel more settled and able to focus afterwards.)

This might be because both of these activities can grab my attention so that I lose track of the originating fear-thought, but I don’t think that’s right. I think that these activities just defuse the heightened SNS activation, which clears space so that I can orient on making progress.

This suggests that any activity that reduces my SNS activation will be similarly effective. That matches my experience (exercise, for instance, is a standard excellent response to anxiety), but I’ll want to play with modulating my physiological arousal a bit and see.

Note for application

In case this isn’t obvious from the post, this model suggests that you want to learn to notice your flinches and (the easier one) your distraction behaviors, so that they can be triggers for self-dialogue. If you’re looking to increase your productivity, this is one of the huge improvements that is on the table for many people. (I’ll maybe say more about this sometime.)

RAND needed the “say oops” skill

[Epistemic status: a middling argument]

A few months ago, I wrote about how RAND and the “Defense Intellectuals” of the Cold War represent another precious datapoint of “very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk.”

Since then I’ve spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed.

However, this proved somewhat less fruitful than I was hoping, and I’ve put it aside for the time being. I might come back to it in the coming months.

It does seem worth sharing at least one relevant anecdote from Daniel Ellsberg’s excellent book, The Doomsday Machine, along with some analysis, given that I’ve already written it up.

The missile gap

In the late nineteen-fifties it was widely understood that there was a “missile gap”: that the Soviets had many more ICBMs (intercontinental ballistic missiles armed with nuclear warheads) than the US.

Estimates varied widely on how many missiles the Soviets had. The Army and the Navy gave estimates of about 40 missiles, which was about at parity with the US’s strategic nuclear force. The Air Force and the Strategic Air Command, in contrast, gave estimates of as many as 1000 Soviet missiles, 20 times the US’s count.

(The Air Force and SAC were incentivized to inflate their estimates of the Russian nuclear arsenal, because a large missile gap strongly necessitated the creation of more nuclear weapons, which would be under SAC control and entail increases in the Air Force budget. Similarly, the Army and Navy were incentivized to lowball their estimates, because a comparatively weaker Soviet nuclear force made conventional military forces more relevant and implied allocating budget-resources to the Army and Navy.)

So there was some dispute about the size of the missile gap, including an unlikely possibility of nuclear parity with the Soviet Union. Nevertheless, the Soviets’ nuclear superiority was the basis for all planning and diplomacy at the time.

Kennedy campaigned on the basis of correcting the missile gap. Perhaps more critically, all of RAND’s planning and analysis was concerned with the possibility of the Russians launching a nearly-or-actually debilitating first or second strike.

The revelation

In 1961 it came to light, on the basis of new satellite photos, that all of these estimates were dead wrong. It turned out that the Soviets had only 4 nuclear ICBMs, one tenth as many as the US controlled.

The importance of this development should be emphasized. It meant that several of the fundamental assumptions of US nuclear planners were in error.

First of all, it meant that the Soviets were not bent on world domination (as had been assumed). Ellsberg says…

Since it seemed clear that the Soviets could have produced and deployed many, many more missiles in the three years since their first ICBM test, it put in question—it virtually demolished—the fundamental premise that the Soviets were pursuing a program of world conquest like Hitler’s.

That pursuit of world domination would have given them an enormous incentive to acquire at the earliest possible moment the capability to disarm their chief obstacle to this aim, the United States and its SAC. [That] assumption of Soviet aims was shared, as far as I knew, by all my RAND colleagues and with everyone I’d encountered in the Pentagon:

The Assistant Chief of Staff, Intelligence, USAF, believes that Soviet determination to achieve world domination has fostered recognition of the fact that the ultimate elimination of the US, as the chief obstacle to the achievement of their objective, cannot be accomplished without a clear preponderance of military capability.

If that was their intention, they really would have had to seek this capability before 1963. The 1959–62 period was their only opportunity to have such a disarming capability with missiles, either for blackmail purposes or an actual attack. After that, we were programmed to have increasing numbers of Atlas and Minuteman missiles in hard silos and Polaris sub-launched missiles. Even moderate confidence of disarming us so thoroughly as to escape catastrophic damage from our response would elude them indefinitely.

Four missiles in 1960–61 was strategically equivalent to zero, in terms of such an aim.

This revelation about Soviet goals was not only of obvious strategic importance, it also took the wind out of the ideological motivation for this sort of nuclear planning. As Ellsberg relays early in his book, many, if not most, RAND employees were explicitly attempting to defend the US and the world from what was presumed to be an aggressive communist state, bent on conquest. This just wasn’t true.

But it had even more practical consequences: this revelation meant that the Russians had no first-strike (or for that matter, second-strike) capability. They could launch their ICBMs at American cities or military bases, but such an attack had no chance of debilitating US second-strike capacity. It would unquestionably trigger a nuclear counterattack from the US, which, with its 40 missiles, would be able to utterly annihilate the Soviet Union. The only effect of a Russian nuclear attack would be to doom their own country.

[Eli’s research note: What about all the Russian planes and bombs? ICBMs aren’t the only way of attacking the US, right?]

This means that the primary consideration in US nuclear war planning, at RAND and elsewhere, was fallacious. The Soviets could not meaningfully destroy the US.

…the estimate contradicted and essentially invalidated the key RAND studies on SAC vulnerability since 1956. Those studies had explicitly assumed a range of uncertainty about the size of the Soviet ICBM force that might play a crucial role in combination with bomber attacks. Ever since the term “missile gap” had come into widespread use after 1957, Albert Wohlstetter had deprecated that description of his key findings. He emphasized that those were premised on the possibility of clever Soviet bomber and sub-launched attacks in combination with missiles or, earlier, even without them. He preferred the term “deterrent gap.” But there was no deterrent gap either. Never had been, never would be.

To recognize that was to face the conclusion that RAND had, in all good faith, been working obsessively and with a sense of frantic urgency on a wrong set of problems, an irrelevant pursuit in respect to national security.

This realization invalidated virtually all of RAND’s work to date. Virtually every analysis, study, and strategy had been useless, at best.

The reaction to the revelation

How did RAND employees respond to this reveal, that their work had been completely off base?

That is not a recognition that most humans in an institution are quick to accept. It was to take months, if not years, for RAND to accept it, if it ever did in those terms. To some degree, it’s my impression that it never recovered its former prestige or sense of mission, though both its building and its budget eventually became much larger. For some time most of my former colleagues continued their focus on the vulnerability of SAC, much the same as before, while questioning the reliability of the new estimate and its relevance to the years ahead. [Emphasis mine]

For years the specter of a “missile gap” had been haunting my colleagues at RAND and in the Defense Department. The revelation that this had been illusory cast a new perspective on everything. It might have occasioned a complete reassessment of our own plans for a massive buildup of strategic weapons, thus averting an otherwise inevitable and disastrous arms race. It did not; no one known to me considered that for a moment. [Emphasis mine]

According to Ellsberg, many at RAND were unable to adapt to the new reality and continued (fruitlessly) with what they were doing, as if by inertia, when the thing that they needed to do (to use Eliezer’s turn of phrase) was “halt, melt, and catch fire.”

This suggests that one failure of this ecosystem, which was working in the domain of existential risk, was a failure to “say oops”: to notice a mistaken belief, concretely acknowledge that it was mistaken, and to reconstruct one’s plans and worldviews.

Relevance to people working on AI safety

This seems to be at least some evidence (though, only weak evidence, I think), that we should be cautious of this particular cognitive failure ourselves.

It may be worth rehearsing the motion in advance: how will you respond, when you discover that a foundational crux of your planning is actually a mirage, and the world is different than it seems?

What if you discovered that your overall approach to making the world better was badly mistaken?

What if you received a strong argument against the orthogonality thesis?

What about a strong argument for negative utilitarianism?

I think that many of the people around me have effectively absorbed the impact of a major update at least once in their life, on a variety of issues (religion, x-risk, average vs. total utilitarianism, etc), so I’m not that worried about us. But it seems worth pointing out the importance of this error mode.

A note: Ellsberg relays later in the book that, during the Cuban missile crisis, he perceived Kennedy as offering baffling terms to the Soviets: terms that didn’t make sense in light of the actual strategic situation, but might have been sensible under the premise of a Soviet missile gap. Ellsberg wondered, at the time, if Kennedy had also failed to propagate the update regarding the actual strategic situation.

I believed it very unlikely that the Soviets would risk hitting our missiles in Turkey even if we attacked theirs in Cuba. We couldn’t understand why Kennedy thought otherwise. Why did he seem sure that the Soviets would respond to an attack on their missiles in Cuba by armed moves against Turkey or Berlin? We wondered if—after his campaigning in 1960 against a supposed “missile gap”—Kennedy had never really absorbed what the strategic balance actually was, or its implications.

I mention this because additional research suggests that this is implausible: that Kennedy and his staff were aware of the true strategic situation, and that their planning was based on that premise.

Goal-factoring as a tool for noticing narrative-reality disconnect

[The idea of this post, as well as the opening example, were relayed to me by Ben Hoffman, who mentioned it as a thing that Michael Vassar understands well. This was written with Ben’s blessing.]

Suppose you give someone an option of one of three fruits: a radish, a carrot, and an apple. The person chooses the carrot. When you ask them why, they reply “because it’s sweet.”

Clearly, there’s something funny going on here. While the carrot is sweeter than the radish, the apple is sweeter than the carrot. So sweetness must not be the only criterion your fruit-picker is using to make his decision. He/she might be choosing partially on that basis, but there must also be some other, unmentioned factor guiding his/her choice.
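The fruit-picker’s inconsistency can be sketched in a few lines of code (the function name and sweetness scores are my own illustration): if the stated criterion alone would have picked a different option, some unstated factor must be at work.

```python
def hidden_factor_implied(scores, chosen):
    """scores: option -> how well it satisfies the *stated* criterion.

    Returns True if the stated criterion alone cannot explain the choice,
    i.e. some option satisfies it strictly better than the chosen one.
    """
    best = max(scores, key=scores.get)
    return scores[chosen] < scores[best]

sweetness = {"radish": 1, "carrot": 4, "apple": 9}

# "Because it's sweet" can't be the whole story: the apple is sweeter.
assert hidden_factor_implied(sweetness, "carrot")
assert not hidden_factor_implied(sweetness, "apple")
```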

Now imagine someone is describing the project that they’re working on (project X). They explain their reasoning for undertaking this project, the good outcomes that will result from it: reasons a, b, and c.

When someone is presenting their reasoning like this, it can be useful to take a, b, and c as premises, and try to project what seems to you like the best course of action for optimizing those goals. That is, do a quick goal-factoring, to see if you can discover a Y that seems to fulfill goals a, b, and c better than X does.

If you can come up with such a Y, this is suggestive of some unmentioned factor in your interlocutor’s reasoning, just as there was in the choice of your fruit-picker.

Of course this could be innocuous. Maybe Y has some drawback you’re unaware of, and so actually X is the better plan. Maybe the person you’re speaking with just hadn’t thought of Y.

But it also might be that he/she is lying outright about why he/she is doing X. Or maybe he/she has some motive that he/she is not even admitting to him/herself.

Whatever the case, the procedure of taking someone else’s stated reasons as axioms and then trying to build out the best plan that satisfies them is a useful procedure for drawing out dynamics that are driving situations under the surface.

I’ve long used this technique effectively on myself, but I suggest that it might be an important lens for viewing the actions of institutions and other people. It’s often useful to tease out exactly how their declared stories about themselves deviate from their revealed agency, and this is one way of doing that.



When do you need traditions? – A hypothesis

[epistemic status: speculation about domains I have little contact with, and know little about]

I’m rereading Samo Burja’s draft, Great Founder Theory. In particular, I spent some time today thinking about living, dead, and lost traditions and chains of Master-Apprenticeship relationships.

It seems like these chains often form the critical backbone of a continuing tradition (and when they fail, the tradition starts to die). Half of Nobel winners are the students of other Nobel winners.

But it also seems like there are domains that don’t rely, or at least don’t need to rely, on the conveyance of tacit knowledge via Master-Apprenticeship relationships.

For instance, many excellent programmers are self-taught. It doesn’t seem like our civilization’s collective skill in programming depends on current experts passing on their knowledge to the next generation via close in-person contact. As a thought experiment: if all current programmers disappeared today, but the computers and educational materials remained, I expect we would return to our current level of collective programming skill within a few decades.

In contrast, consider math. I know almost nothing about higher mathematics, but I would guess that if all now-living mathematicians disappeared, they’d leave a lot of math behind, but progress on the frontiers of mathematics would halt, and it would take many years, maybe centuries, for mathematical progress to catch up to that frontier again. I make this bold posit on the basis of the advice I’ve heard (and have personally verified) that learning from tutors is way more effective than learning just from textbooks, and that mathematicians do track their lineages.

In any case, it doesn’t seem like great programmers run in lineages the way that Nobel laureates do.

This is in part because programming in particular has some features that lend themselves to autodidacticism: in particular, a novice programmer gets clear and immediate feedback: his/her code either compiles or it doesn’t. But I don’t think this is the full story.

Samo discusses some of the factors that determine this difference in his document: for instance, traditions in domains that provide easy affordances for “checking work” against the territory (such as programming) tend to be more resilient.

But I want to dig into a more specific difference.


A domain of skill entails some process that when applied, produces some output.

Gardening is the process, fruits are the output. Carpentry (or some specific construction procedure) is the process, the resulting chair is the output. Painting is the process, the painting is the output.

To the degree that the output is or embodies the generating process, master-apprenticeship relationships are less necessary.

It’s a well-trodden trope that a program is the programmer’s thinking about a problem. (Paul Graham in Holding a Program in One’s Head: “Your code is your understanding of the problem you’re exploring.”) A comparatively large portion of a programmer’s thought process is represented in his/her program (including the comments). A novice programmer, looking at a program written by a master, can see not just what a well-written program looks like, but also, to a large degree, what sort of thinking produces a well-written program. Much of the tacit knowledge is directly expressed in the final product.

Compare this to, say, a revolutionary scientist. A novice scientist might read the papers of elite groundbreaking science, and the novice might learn something, but so much of the process – the intuition that the topic in question was worth investigating, the subtle thought process that led to the hypothesis, the insight of what experiment would elegantly investigate that hypothesis – is not encoded in the paper, and is not legible to the reader.

I think that this is a general feature of domains. And this feature is predictive of the degree to which skill in a given domain relies strongly on traditions of Master-Apprenticeship.

Other examples:

I have the intuition, perhaps false (are there lineages of award-winning novelists the way there are lineages of Nobel laureates?), that novelists mostly do not learn their craft in apprenticeships to other writers. I suggest that writing is like programming: largely self-taught, except in the sense that one ingests and internalizes large numbers of masterful works. But enough of the skill of writing great novels is contained in the finished work that new novelists can be “trained” this way.

What about Japanese wood-block printing? From the linked video, it seems as if David Bull received about an hour of instruction in wood carving once every seven years or so. But those hours were enormously productive for him. Notably, this sort of wood-carving is a step removed from the final product: one carves the printing block, and then uses the block to make a print. Looking at the finished block, it seems, does not sufficiently convey the techniques used for creating the block. But on top of that, the block is not the final product, only an intermediate step. The novice outside of an apprenticeship may only ever see the prints of a masterpiece, not the blocks that make the prints.

Does this hold up at all?

That’s the theory. However, I can come up with at least a few counterproposals and confounding factors:

Countertheory: The dominating factor is the age of the tradition. Computer Science is only a few decades old, so recreating it can’t take more than a few decades. Let it develop for a few more centuries (without the advent of machine intelligence or other transformative technology), and the Art of Programming will have progressed so far that it does depend on Master/Apprentice relationships, and the loss of all living programmers would be as much of a hit as the loss of all living mathematicians.

This doesn’t seem like it explains novelists, but maybe “good writing” is mostly a matter of fad? (I expect some literary connoisseurs would leap down my throat at that. In any case, it doesn’t seem correct to me.)

Confounder: economic incentive. If we lost all masters of Japanese wood-carving, but there was as much economic incentive for civilization to remaster it as there would be for remastering programming, would it take any longer? I find that dubious.

Why does this matter? 

Well for one thing, if you’re in the business of building traditions to last more than a few decades, it’s pretty important to know when you will need to institute close-contact lineages.

Separately, this seems relevant whenever one is hoping to learn from dead masters.

Darwin surely counts among the great scientific thinkers. He successfully abstracted out a fundamental structuring principle of the natural world. As someone interested in epistemology, it seems promising to read Darwin, in order to tease out how he was thinking. I was previously planning to read The Origin of Species. Now, it seems much more fruitful to read Darwin’s notebooks, which I expect to contain more of his process than his finished works do.