Some notes on Von Neumann, as a human being

I recently read Prisoner’s Dilemma, which is half an introduction to very elementary game theory and half a biography of John Von Neumann, and watched this old PBS documentary about the man.

I’m glad I did. Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I’ve written the words “Von Neumann Level Intelligence” in an AI strategy document, or speculated about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn’t seem to have the entrepreneurial or managerial dispositions.)

Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits.

Watching this first clip, I noticed that I was surprised by a number of things.

  1. That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent.
  2. That he was of middling height (somewhat shorter than the presenter he’s talking to).
  3. The thing he is saying is the sort of thing that I would expect to hear from any scientist in the public eye, “science education is important.” There is something revealing about Von Neumann, despite being the smartest person in the world, saying basically what I would expect Neil DeGrasse Tyson to say in an interview. A lot of the time he was wearing his “scientist / public intellectual” hat, not the “smartest person ever to live” hat.

Some other notes of interest:

He was not a skilled poker player, which punctured my assumption that Von Neumann was omnicompetent. (pg. 5) Nevertheless, poker was among the first inspirations for game theory. (When I told this to Steph, she quipped “Oh. He wasn’t any good at it, so he developed a theory from first principles, describing optimal play?” For all I know, that might be spot on.)

Perhaps relatedly, he claimed he had low sales resistance, and so would have his wife come clothes shopping with him. (pg. 21)


He was sexually crude, and perhaps a bit misogynistic. Eugene Wigner stated that “Johny believed in having sex, in pleasure, but not in emotional attachment. He was interested in immediate pleasure and little comprehension of emotions in relationships and mostly saw women in terms of their bodies.” The journalist Steve Heimes wrote “upon entering an office where a pretty secretary was working, von Neumann habitually would bend way over, more or less trying to look up her dress.” (pg. 28) Not surprisingly, his relationship with his wife, Klara, was tumultuous, to say the least.

He did, however, maintain a strong, lifelong relationship with his mother (who died the same year that he did).

Overall, he gives the impression of being a genius overgrown child.


Unlike many of his colleagues, he seemed not to share the pangs of conscience that afflicted many of the bomb’s creators. Rather than going back to academia following the war, he continued doing work for the government, including the development of the hydrogen bomb.

Von Neumann advocated preventative war: giving the Soviet Union an ultimatum to join a world government, backed by the threat of (and probable enactment of) nuclear attack, while the US still had a nuclear monopoly. He famously said of the matter, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock.”

This attitude was certainly influenced by his work on game theory, but it should also be noted that Von Neumann hated communism.

Richard Feynman reports that Von Neumann, in their walks through the Los Alamos desert, convinced him to adopt an attitude of “social irresponsibility”, that one “didn’t have to be responsible for the world he was in.”


Prisoner’s Dilemma says that he and his collaborators “pursued patents less aggressively than they could have”. Edward Teller commented, “probably the IBM company owes half its money to John Von Neumann.” (pg. 76)

So he was not very entrepreneurial, which is a bit of a shame, because if he had the disposition he probably could have made sooooo much money / really taken substantial steps towards taking over the world. (He certainly had the energy to be an entrepreneur: he slept only a few hours a night, and was working for basically all of his waking hours.)


He famously always wore a grey oxford three-piece suit, including when playing tennis with Stanislaw Ulam, or when riding a donkey down the Grand Canyon. But I am not clear why. Was that more comfortable? Did he think it made him look good? Did he just not want to ever have to think about clothing, and so preferred to be over-hot in the middle of the Los Alamos desert, rather than need to think about whether today was “shirt sleeves weather”?


Von Neumann himself once commented on the strange fact of so many Hungarian geniuses growing up in such a small area, in his generation:

Stanislaw Ulam recalled that when Von Neumann was asked about this “statistically unlikely” Hungarian phenomenon, Von Neumann “would say that it was a coincidence of some cultural factors which he could not make precise: an external pressure on the whole society of this part of Central Europe, a subconscious feeling of extreme insecurity in individuals, and the necessity of producing the unusual or facing extinction.” (pg. 66)


One thing that surprised me most was that it seems that, despite being possibly the smartest person in modernity, he would have benefited from attending a CFAR workshop.

For one thing, at the end of his life, he was terrified of dying. But throughout the course of his life he made many reckless choices with his health.

He ate gluttonously and became fatter and fatter over the course of his life. (One friend remarked that he “could count anything but calories.”)

Furthermore, he seemed to regularly risk his life when driving.

Von Neumann was an aggressive and apparently reckless driver. He supposedly totaled his car every year or so. An intersection in Princeton was nicknamed “Von Neumann corner” for all the auto accidents he had there. Records of accidents and speeding arrests are preserved in his papers. [The book goes on to list a number of such accidents.] (pg. 25)

(Amusingly, Von Neumann’s reckless driving seems to have been due not to drinking and driving, but to singing and driving. “He would sway back and forth, turning the steering wheel in time with the music.”)

I think I would call this a bug.

On another thread, one of his friends (the documentary didn’t identify which) expressed that he was over-impressed by powerful people, and didn’t make effective tradeoffs.

I wish he’d been more economical with his time in that respect. For example, if people called him to Washington or elsewhere, he would very readily go and so on, instead of having these people come to him. It was much more important, I think, he should have saved his time and effort.

He felt, when the government called, [that] one had to go, it was a patriotic duty, and as I said before he was a very devoted citizen of the country. And I think one of the things that particularly pleased him was any recognition that came sort-of from the government. In fact, in that sense I felt that he was sometimes somewhat peculiar that he would be impressed by government officials or generals and so on. If a big uniform appeared that made more of an impression than it should have. It was odd.

But it shows that he was a person of many different and sometimes self contradictory facets, I think.

Stanislaw Ulam speculated, “I think he had a hidden admiration for people and organizations that could be tough and ruthless.” (pg. 179)

From these statements, it seems like Von Neumann leapt at chances to seem useful or important to the government, somewhat unreflectively.

These anecdotes suggest that Von Neumann would have gotten value out of Goal Factoring, or Units of Exchange, or IDC (possibly there was something deeper going on, regarding blind spots around death or status, but I think the point still stands, and he would have benefited from IDC).

Despite being the discoverer/inventor of VNM utility theory, and founding the field of Game Theory (concerned with rational choice), it seems to me that Von Neumann did far less to import the insights of the math into his actual life than, say, Critch.

(I wonder aloud if this is because Von Neumann was born and came of age before the development of cognitive science. I speculate that the importance of actually applying theories of rationality in practice only becomes obvious after Tversky and Kahneman demonstrated that humans are not rational by default. (In evidence against this view: Eliezer seems to have been very concerned with thinking clearly, and being sane, before encountering Heuristics and Biases in his (I believe) mid 20s. He was exposed to Evo Psych, though.))


Also, he converted to Catholicism at the end of his life, based on Pascal’s Wager. He commented “So long as there is the possibility of eternal damnation for nonbelievers it is more logical to be a believer at the end”, and “There probably has to be a God. Many things are easier to explain if there is than if there isn’t.”
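(For reference, a minimal sketch of the expected-utility argument he was gesturing at, in my own notation rather than anything from the book: let $p > 0$ be one’s probability that God exists, $c$ the modest worldly cost of believing, and $M$ the enormous disutility of damnation. Then

$$\mathbb{E}[U(\text{believe})] = p\,U_{\text{salvation}} - c, \qquad \mathbb{E}[U(\text{don't believe})] = p\,(-M) + (1-p)\cdot 0 = -pM.$$

For any fixed $p > 0$, a sufficiently large $M$ makes believing the higher expected-utility option, which is the “more logical to be a believer” step in his comment.)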

(According to Wikipedia, this deathbed conversion did not give him much comfort.)

This suggests that he would have gotten value out of reading the sequences, in addition to attending a CFAR workshop.

 

RAND needed the “say oops” skill

[Epistemic status: a middling argument]

A few months ago, I wrote about how RAND and the “Defense Intellectuals” of the Cold War represent another precious datapoint of “very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk.”

Since then I spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed.

However, this proved somewhat less fruitful than I was hoping, and I’ve put it aside for the time being. I might come back to it in the coming months.

It does seem worth sharing at least one relevant anecdote from Daniel Ellsberg’s excellent book, The Doomsday Machine, along with some analysis, given that I’ve already written it up.

The missile gap

In the late nineteen-fifties it was widely understood that there was a “missile gap”: that the Soviets had many more ICBMs (intercontinental ballistic missiles armed with nuclear warheads) than the US.

Estimates varied widely on how many missiles the Soviets had. The Army and the Navy gave estimates of about 40 missiles, which was about at parity with the US’s strategic nuclear force. The Air Force and the Strategic Air Command, in contrast, gave estimates of as many as 1000 Soviet missiles, 20 times more than the US’s count.

(The Air Force and SAC were incentivized to inflate their estimates of the Russian nuclear arsenal, because a large missile gap strongly justified the creation of more nuclear weapons, which would be under SAC control and entail increases in the Air Force budget. Similarly, the Army and Navy were incentivized to lowball their estimates, because a comparatively weaker Soviet nuclear force made conventional military forces more relevant and implied allocating budget resources to the Army and Navy.)

So there was some dispute about the size of the missile gap, including an unlikely possibility of nuclear parity with the Soviet Union. Nevertheless, presumed Soviet nuclear superiority was the basis for all planning and diplomacy at the time.

Kennedy campaigned on the basis of correcting the missile gap. Perhaps more critically, all of RAND’s planning and analysis was concerned with the possibility of the Russians launching a nearly-or-actually debilitating first or second strike.

The revelation

In 1961 it came to light, on the basis of new satellite photos, that all of these estimates were dead wrong. It turned out that the Soviets had only 4 nuclear ICBMs, one tenth as many as the US controlled.

The importance of this development should be emphasized. It meant that several of the fundamental assumptions of US nuclear planners were in error.

First of all, it meant that the Soviets were not bent on world domination (as had been assumed). Ellsberg says…

Since it seemed clear that the Soviets could have produced and deployed many, many more missiles in the three years since their first ICBM test, it put in question—it virtually demolished—the fundamental premise that the Soviets were pursuing a program of world conquest like Hitler’s.

That pursuit of world domination would have given them an enormous incentive to acquire at the earliest possible moment the capability to disarm their chief obstacle to this aim, the United States and its SAC. [That] assumption of Soviet aims was shared, as far as I knew, by all my RAND colleagues and with everyone I’d encountered in the Pentagon:

The Assistant Chief of Staff, Intelligence, USAF, believes that Soviet determination to achieve world domination has fostered recognition of the fact that the ultimate elimination of the US, as the chief obstacle to the achievement of their objective, cannot be accomplished without a clear preponderance of military capability.

If that was their intention, they really would have had to seek this capability before 1963. The 1959–62 period was their only opportunity to have such a disarming capability with missiles, either for blackmail purposes or an actual attack. After that, we were programmed to have increasing numbers of Atlas and Minuteman missiles in hard silos and Polaris sub-launched missiles. Even moderate confidence of disarming us so thoroughly as to escape catastrophic damage from our response would elude them indefinitely.

Four missiles in 1960–61 was strategically equivalent to zero, in terms of such an aim.

This revelation about Soviet goals was not only of obvious strategic importance, it also took the wind out of the ideological motivation for this sort of nuclear planning. As Ellsberg relays early in his book, many, if not most, RAND employees were explicitly attempting to defend the US and the world from what was presumed to be an aggressive communist state, bent on conquest. This just wasn’t true.

But it had even more practical consequences: this revelation meant that the Russians had no first strike (or for that matter, second strike) capability. They could launch their ICBMs at American cities or military bases, but such an attack had no chance of debilitating US second strike capacity. It would unquestionably trigger a nuclear counterattack from the US, which, with its 40 missiles, would be able to utterly annihilate the Soviet Union. The only effect of a Russian nuclear attack would be to doom their own country.

[Eli’s research note: What about all the Russian planes and bombs? ICBMs aren’t the only way of attacking the US, right?]

This means that the primary consideration in US nuclear war planning at RAND and elsewhere was mistaken: the Soviets could not meaningfully destroy the US.

…the estimate contradicted and essentially invalidated the key RAND studies on SAC vulnerability since 1956. Those studies had explicitly assumed a range of uncertainty about the size of the Soviet ICBM force that might play a crucial role in combination with bomber attacks. Ever since the term “missile gap” had come into widespread use after 1957, Albert Wohlstetter had deprecated that description of his key findings. He emphasized that those were premised on the possibility of clever Soviet bomber and sub-launched attacks in combination with missiles or, earlier, even without them. He preferred the term “deterrent gap.” But there was no deterrent gap either. Never had been, never would be.

To recognize that was to face the conclusion that RAND had, in all good faith, been working obsessively and with a sense of frantic urgency on a wrong set of problems, an irrelevant pursuit in respect to national security.

This realization invalidated virtually all of RAND’s work to date. Virtually every analysis, study, and strategy had been useless, at best.

The reaction to the revelation

How did RAND employees respond to this revelation, that their work had been completely off base?

That is not a recognition that most humans in an institution are quick to accept. It was to take months, if not years, for RAND to accept it, if it ever did in those terms. To some degree, it’s my impression that it never recovered its former prestige or sense of mission, though both its building and its budget eventually became much larger. For some time most of my former colleagues continued their focus on the vulnerability of SAC, much the same as before, while questioning the reliability of the new estimate and its relevance to the years ahead. [Emphasis mine]

For years the specter of a “missile gap” had been haunting my colleagues at RAND and in the Defense Department. The revelation that this had been illusory cast a new perspective on everything. It might have occasioned a complete reassessment of our own plans for a massive buildup of strategic weapons, thus averting an otherwise inevitable and disastrous arms race. It did not; no one known to me considered that for a moment. [Emphasis mine]

According to Ellsberg, many at RAND were unable to adapt to the new reality and continued (fruitlessly) with what they were doing, as if by inertia, when the thing that they needed to do (to use Eliezer’s turn of phrase) was “halt, melt, and catch fire.”

This suggests that one failure of this ecosystem, which was working in the domain of existential risk, was a failure to “say oops”: to notice a mistaken belief, concretely acknowledge that it was mistaken, and to reconstruct one’s plans and worldviews.

Relevance to people working on AI safety

This seems to be at least some evidence (though, only weak evidence, I think), that we should be cautious of this particular cognitive failure ourselves.

It may be worth rehearsing the motion in advance: how will you respond when you discover that a foundational crux of your planning is actually a mirage, and the world is different from how it seems?

What if you discovered that your overall approach to making the world better was badly mistaken?

What if you received a strong argument against the orthogonality thesis?

What about a strong argument for negative utilitarianism?

I think that many of the people around me have effectively absorbed the impact of a major update at least once in their life, on a variety of issues (religion, x-risk, average vs. total utilitarianism, etc), so I’m not that worried about us. But it seems worth pointing out the importance of this error mode.


A note: Ellsberg relays later in the book that, during the Cuban missile crisis, he perceived Kennedy as offering baffling terms to the Soviets: terms that didn’t make sense in light of the actual strategic situation, but might have been sensible under the premise of a Soviet missile gap. Ellsberg wondered, at the time, if Kennedy had also failed to propagate the update regarding the actual strategic situation.

I believed it very unlikely that the Soviets would risk hitting our missiles in Turkey even if we attacked theirs in Cuba. We couldn’t understand why Kennedy thought otherwise. Why did he seem sure that the Soviets would respond to an attack on their missiles in Cuba by armed moves against Turkey or Berlin? We wondered if—after his campaigning in 1960 against a supposed “missile gap”—Kennedy had never really absorbed what the strategic balance actually was, or its implications.

I mention this because additional research suggests that this is implausible: that Kennedy and his staff were aware of the true strategic situation, and that their planning was based on that premise.

Initial thoughts about the early history of Circling

I spent a couple of hours over the past week looking into the origins and early history of Circling, as part of a larger research project.

If you want to read some original sources, this was the most useful and informative post on the topic that I found.

You can also read my curated notes (only the things that were most interesting to me), including my thinking about the Rationality Community.


A surprising amount of the original work was done while people were in college. Notably, Bryan, Decker, and Sarah all taught and developed Circling / AR in the living spaces of their colleges:

“Even before this, Bryan Bayer and Decker Cunov had independently discovered the practice as a tool to resolve conflicts in their shared college household in Missouri,”

“Sara had been a college student, had discovered Authentic Relating Games, had introduced them into her college dorm with great success”

It reminds me that a lot of the existence and growth of EA was driven by student groups. I wonder if most movements are seeded by people in their early 20s, and therefore college campuses have been the background for the origins of most movements throughout the past century.


There’s a way in which the teaching of Circling spread that the teaching of rationality didn’t.

It sounds like many of the people who frequently attended the early weekend programs that Guy and Jerry (and others) were putting on had ambitions to develop and run similar programs of their own one day. And to a large degree, they did. There have been something like 10 to 15 for-pay Circling-based programs, across at least 4 organizations. In contrast, rationality has one CFAR, which primarily runs a single program.

I wonder what accounts for the difference?

Hypotheses:

  • Circlers tend to be poor, whereas rationalists tend to be software engineers. Circlers could dream of doing Circling full time, but there’s not much appeal for rationalists in teaching rationality full time. (That would be a pay cut, and there’s no “activity” that rationalists love and that they would get to do as their job.)
  • Rationality is too discrete and explicit. Once you’ve taught the rationality techniques you know, you’re done (or you have to be in the business of inventing new ones), whereas teaching Circling is more like a service: there’s not a distinct point when the student “has it” and doesn’t need your teaching, but a gradual apprenticeship.
  • Relatedly, maybe there’s just not enough demand for rationality training. A CFAR workshop is, for most rationalists, a thing that you do once, whereas Circlers might attend several Circling immersions or trainings in a year. Rationality can become a culture and a way of life, but CFAR workshops are not. As a result, the demand for rationality training amounts to 1 workshop per community member, instead of something like 50 events per community member.
    • Notably, if CFAR had a slightly different model, this feature could change.
  • Rationality is less of a concrete thing, separate from the CFAR or LessWrong brands.
    • Because of this, I think most people don’t feel enough ownership of “Rationality” as an independent thing. It’s Eliezer’s thing or CFAR’s thing. Not something that is separate from either of them.
    • Actually, the war between the founders might be relevant here. That Guy and Decker were both teaching Circling highlighted that it was separate from any one brand.
    • I wonder what the world would look like if Eliezer had coined a new term for the thing we call rationality, instead of taking on a word that already has meaning in the wider world. I expect there would be less potential for a mass movement, but more of an affordance to teach the thing, and a feeling that one could be an expert at it.
  • Maybe the fact the Circling was independently discovered by Guy and Jerry, and Decker and Bryan, made it obvious that no one owned it.
    • If we caused a second rationality-training organization to crop up, would that cause a profusion of rationality orgs?
  • Circling people acquired enough confidence in their own skills that they felt comfortable charging for them; rationalists don’t.
    • It is more obvious who the people skilled in Circling are, because you can see it in a Circle.
    • Circling has an activity that is engaging to spend many hours at and that includes a feedback loop, so people become skilled at it in a way that rationalists don’t.

There aren’t people who are trying to build Rationality empires the way Jordan is trying to build a Circling empire.


I get the sense that a surprising number of the core people of Circling are what I would call “jocks.” (Though my actual sample is pretty limited.)

  • Guy originally worked as a personal trainer.
  • Sean Wilkinson and John Thompson ran a personal tennis academy before teaching Circling.
  • Jordan was a model.

“Many of us lived together in communal houses and/or were in relationships with other community members.”

They had group houses and called themselves “the community”. I wonder how common those threads are, in subcultures across time (or at least across the past century).

When do you need traditions? – A hypothesis

[epistemic status: speculation about domains I have little contact with, and know little about]

I’m rereading Samo Burja’s draft, Great Founder Theory. In particular, I spent some time today thinking about living, dead, and lost traditions and chains of Master-Apprenticeship relationships.

It seems like these chains often form the critical backbone of a continuing tradition (and when they fail, the tradition starts to die). Half of Nobel winners are the students of other Nobel winners.

But it also seems like there are domains that don’t rely, or at least don’t need to rely, on the conveyance of tacit knowledge via Master-Apprenticeship relationships.

For instance, many excellent programmers are self-taught. It doesn’t seem like our civilization’s collective skill in programming depends on current experts passing on their knowledge to the next generation via close in-person contact. As a thought experiment, if all current programmers disappeared today, but the computers and educational materials remained, I expect we would return to our current level of collective programming skill within a few decades.

In contrast, consider math. I know almost nothing about higher mathematics, but I would guess that if all now-living mathematicians disappeared, they’d leave a lot of math, but progress on the frontiers of mathematics would halt, and it would take many years, maybe centuries, for mathematical progress to catch up to that frontier again. I make this bold posit on the basis of the advice I’ve heard (and have personally verified) that learning from tutors is far more effective than learning just from textbooks, and on the fact that mathematicians do track their lineages.

In any case, it doesn’t seem like great programmers run in lineages the way that Nobel Laureates do.

This is in part because programming has some features that lend themselves to autodidacticism: in particular, a novice programmer gets clear and immediate feedback: his/her code either compiles or it doesn’t. But I don’t think this is the full story.

Samo discusses some of the factors that determine this difference in his document: for instance, traditions in domains that provide easy affordance for “checking work” against the territory (such as programming) tend to be more resilient.

But I want to dig into a more specific difference.

Theory:

A domain of skill entails some process that, when applied, produces some output.

Gardening is the process, fruits are the output. Carpentry (or some specific construction procedure) is the process, the resulting chair is the output. Painting is the process, the painting is the output.

To the degree that the output is or embodies the generating process, master-apprenticeship relationships are less necessary.

It’s a well-trodden trope that a program is the programmer’s thinking about a problem. (Paul Graham in Holding a Program in One’s Head: “Your code is your understanding of the problem you’re exploring.“) A comparatively large portion of a programmer’s thought process is represented in his/her program (including the comments). A novice programmer, looking at a program written by a master, can see not just what a well-written program looks like, but also, to a large degree, what sort of thinking produces a well-written program. Much of the tacit knowledge is directly expressed in the final product.

Compare this to, say, a revolutionary scientist. A novice scientist might read the papers of elite, groundbreaking scientists, and the novice might learn something, but so much of the process – the intuition that the topic in question was worth investigating, the subtle thought process that led to the hypothesis, the insight of what experiment would elegantly investigate that hypothesis – is not encoded in the paper, and is not legible to the reader.

I think that this is a general feature of domains. And this feature is predictive of the degree to which skill in a given domain relies strongly on traditions of Master-Apprenticeship.

Other examples:

I have the intuition, perhaps false (are there lineages of award-winning novelists the way there are lineages of Nobel laureates?), that novelists mostly do not learn their craft in apprenticeships to other writers. I suggest that writing is like programming: largely self-taught, except in the sense that one ingests and internalizes large numbers of masterful works. But enough of the skill of writing great novels is contained in the finished work that new novelists can be “trained” this way.

What about Japanese wood-block printing? From the linked video, it seems as if David Bull received about an hour of instruction in wood carving once every seven years or so. But those hours were enormously productive for him. Notably, this sort of wood-carving is a step removed from the final product: one carves the printing block, and then uses the block to make a print. Looking at the finished block, it seems, does not sufficiently convey the techniques used for creating the block. But on top of that, the block is not the final product, only an intermediate step. The novice outside of an apprenticeship may only ever see the prints of a masterpiece, not the blocks that make the prints.

Does this hold up at all?

That’s the theory. However, I can come up with at least a few counterproposals and confounding factors:

Countertheory: The dominating factor is the age of the tradition. Computer Science is only a few decades old, so recreating it can’t take more than a few decades. Let it develop for a few more centuries (without the advent of machine intelligence or other transformative technology), and the Art of Programming will have progressed so far that it does depend on Master/Apprentice relationships, and the loss of all living programmers would be as much of a hit as the loss of all living mathematicians.

This doesn’t seem like it explains novelists, but maybe “good writing” is mostly a matter of fad? (I expect some literary connoisseurs would leap down my throat at that. In any case, it doesn’t seem correct to me.)

Confounder: economic incentive: If we lost all masters of Japanese wood-carving, but there was as much economic incentive for the civilization to remaster it as there would be for remastering programming, would it take any longer? I find that dubious.

Why does this matter? 

Well for one thing, if you’re in the business of building traditions to last more than a few decades, it’s pretty important to know when you will need to institute close-contact lineages.

Separately, this seems relevant whenever one is hoping to learn from dead masters.

Darwin surely counts among the great scientific thinkers. He successfully abstracted out a fundamental structuring principle of the natural world. As someone interested in epistemology, it seems promising to read Darwin, in order to tease out how he was thinking. I was previously planning to read The Origin of Species. Now, it seems much more fruitful to read Darwin’s notebooks, which I expect to contain more of his process than his finished works do.

 

 

 

Initial Comparison between RAND and the Rationality Cluster

I’m currently reading The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg (the man who leaked the Pentagon Papers), on the suggestion of Anna Salamon.

I’m interested in the Cold War planning communities because they might be relevant to the sort of thinking that is happening, or needs to happen, around AI x-risk today. And indeed, there are substantial resemblances between the RAND corporation and at least some of the orgs that form the core of the contemporary x-risk ecosystem.

For instance…

A narrative of “saving the world”:

[M]y colleagues were driven men. They shared a feeling—soon transmitted to me—that we were in the most literal sense working to save the world. A successful Soviet nuclear attack on the United States would be a catastrophe, and not only for America.

A perception of the inadequacy of the official people in power:

But above all, precisely in my early missile-gap years at RAND and as a consultant in Washington, there was our sense of mission, the burden of believing we knew more about the dangers ahead, and what might be done about them, than did the generals in the Pentagon or SAC, or Congress or the public, or even the president. It was an enlivening burden.

We were rescuing the world from our Soviet counterparts as well as from the possibly fatal lethargy and bureaucratic inertia of the Eisenhower administration and our sponsors in the Air Force.

Furthermore, a major theme of the book is the insanity of US Nuclear Command and Control policies. Ellsberg points repeatedly at the failures of decision-making and morality within the US government.

A sense of intellectual camaraderie:

In the middle of the first session, I ventured—though I was the youngest, assigned to be taking notes, and obviously a total novice on the issues—to express an opinion. (I don’t remember what it was.) Rather than showing irritation or ignoring my comment, Herman Kahn, brilliant and enormously fat, sitting directly across the table from me, looked at me soberly and said, “You’re absolutely wrong.” A warm glow spread throughout my body. This was the way my undergraduate fellows on the editorial board of the Harvard Crimson (mostly Jewish, like Herman and me) had routinely spoken to each other; I hadn’t experienced anything like it for six years. At King’s College, Cambridge, or in the Society of Fellows, arguments didn’t remotely take this gloves-off, take-no-prisoners form. I thought, “I’ve found a home.”

Visceral awareness of existential failure:

At least some of the folks at RAND had a visceral sense of the impending end of the world. They didn’t feel like they were just playing intellectual games.

I couldn’t believe that the world would long escape nuclear holocaust. Alain Enthoven and I were the youngest members of the department. Neither of us joined the extremely generous retirement plan RAND offered. Neither of us believed, in our late twenties, we had a chance of collecting on it.

That last point seems particularly relevant. Folks in our cluster invest in the development and practice of tools like IDC in part because of the psychological pressures that accompany the huge stakes of x-risk.

At least some of the “defense intellectuals” of the Cold War were under similar pressures.[1]

For this reason, the social and intellectual climate around RAND and similar organizations during the Cold War represents an important case study, a second data point for comparison to our contemporaries working on existential risk.

How did RAND employees handle the psychological pressures? Did they spontaneously invent strategies for thinking clearly in the face of the magnitude of the stakes? If so, can we emulate those strategies? If not, does that imply that their thinking about their work was compromised? Or does it suggest that our emphasis on psychological integration methods is misplaced?

And perhaps most importantly, what mistakes did they make? Can we use their example to foresee similar mistakes of our own and avoid them?


[1] – Indeed, it seems like they were under greater pressures. There’s a sense of franticness and urgency that I feel in Ellsberg’s description that I don’t feel around MIRI. But I think that this is due to the time horizons that RAND and co. were operating under compared to those that MIRI is operating under. I expect that as we near the arrival of AGI, there will be a sense of urgency and psychological pressure that is just as great as or greater than that of the Cold War planners.

End note: In addition to all these more concrete parallels, there’s also the intriguing intertwining of existential risk and decision theory in both of the data points of nuclear war planning and AI safety. I wonder if that is merely coincidence or represents some deeper connection.