Cocktail party ideas

You don't have to be at a party to see this phenomenon in action, but there's a curious thing I regularly see at parties in social circles where people value intelligence and cleverness without similarly valuing on-the-ground knowledge or intellectual rigor. People often discuss the standard trendy topics (some recent ones I've observed at multiple parties are how to build a competitor to Google search and how to solve the problem of high transit construction costs) and explain why people working in the field today are doing it wrong and then explain how they would do it instead. I occasionally have good conversations that fit that pattern (with people with very deep expertise in the field who've been working on changing the field for years), but the more common pattern is that someone with cocktail-party level knowledge of a field will give their ideas on how the field can be fixed.

Asking people why they think their solutions would solve valuable problems in the field has become a hobby of mine when I'm at parties where this kind of superficial pseudo-technical discussion dominates. What I've found when I've asked for details is that, in areas where I have some knowledge, people generally don't know what sub-problems need to be solved to solve the problem they're trying to address, making their solutions hopeless. After having done this many times, my opinion is that the root cause is generally that people who have a superficial understanding of a topic assume the topic is only as complex as their understanding of it, instead of realizing that knowing only a bit about a topic means they're missing an understanding of its full complexity.

Since I often attend parties with programmers, this means I often hear programmers retelling their cocktail-party level understanding of another field (the search engine example above notwithstanding). If you want a sample of similar comments online, you can often see these when programmers discuss "trad" engineering fields. An example I enjoyed was this Twitter thread where Hillel Wayne discussed how programmers without knowledge of trad engineering often have incorrect ideas about what trad engineering is like; many of the responses are from programmers with little to no knowledge of trad engineering who reply to Hillel with exactly those misconceptions. When Hillel completed his crossover project, where he interviewed people who've worked in a trad engineering field as well as in software, he got even more such comments. Even when people are warned that naive conceptions of a field are likely to be incorrect, many can't help themselves and immediately reply with their opinions about a field they know basically nothing about.

Anyway, in the crossover project, Hillel compared the perceptions of people who'd actually worked in multiple fields to pop-programmer perceptions of trad engineering. One of the many examples of this that Hillel gives is when people talk about bridge building, where he notes that programmers say things like

The predictability of a true engineer’s world is an enviable thing. But ours is a world always in flux, where the laws of physics change weekly. If we did not quickly adapt to the unforeseen, the only foreseeable event would be our own destruction.

and

No one thinks about moving the starting or ending point of the bridge midway through construction.

But Hillel interviewed a civil engineer who said that they had to move a bridge! Of course, civil engineers don't move bridges as frequently as programmers deal with changes in software but, if you talk to actual, working civil engineers, you'll find that many of them frequently deal with requirements that change after a job has started, in ways not fundamentally different from what programmers have to deal with at their jobs. People who've worked in both fields, or at least talk to people in the other field, tend to think the concerns faced by engineers in both fields are complex, but people with a cocktail-party level of understanding of a field often claim that the field they're not in is simple, unlike their own.

A line I often hear from programmers is that programming is like "having to build a plane while it's flying", implicitly making the case that programming is harder than designing and building a plane since people who design and build planes can do so before the plane is flying1. But, of course, someone who designs airplanes could just as easily say "gosh, my job would be very easy if I could build planes with 4 9s of uptime and my plane were allowed to crash and kill all of the passengers for 1 minute every week". Of course, the constraints on different types of projects and different fields make different things hard, but people often seem to have a hard time seeing constraints other fields have that their field doesn't. One might think that understanding that their own field is more complex than an outsider might naively think would help people understand that other fields may also have hidden complexity, but that doesn't generally seem to be the case.
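As a quick check of the arithmetic in that quip, "4 9s" of uptime means being down 0.01% of the time, and 0.01% of a week is almost exactly one minute:

```python
# "4 9s" = 99.99% uptime; how much downtime does that allow per week?
MINUTES_PER_WEEK = 7 * 24 * 60               # 10,080 minutes
downtime_fraction = 1 - 0.9999               # 0.0001
print(MINUTES_PER_WEEK * downtime_fraction)  # => ~1.0 minutes per week
```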

If we look at the rest of the statement Hillel was quoting (which is from the top & accepted answer to a stack exchange question), the author goes on to say:

It's much easier to make accurate projections when you know in advance exactly what you're being asked to project rather than making guesses and dealing with constant changes.

The vast majority of bridges are using extremely tried and true materials, architectures, and techniques. A Roman engineer could be transported two thousand years into the future and generally recognize what was going on at a modern construction site. There would be differences, of course, but you're still building arches for load balancing, you're still using many of the same materials, etc. Most software that is being built, on the other hand . . .

This is typical of the kind of error people make when discussing cocktail-party ideas. Programmers legitimately gripe when clueless execs who haven't been programmers for a decade request unreasonable changes to a project that's in progress, but this is not so different from, and is actually more likely to be reasonable than, when politicians who've never been civil engineers require project changes on large scale civil engineering projects. It's plausible that, on average, programming projects have more frequent or larger changes than civil engineering projects, but I'd guess that the intra-field variance is at least as large as the inter-field variance.

And, of course, only someone who hasn't done serious engineering work in the physical world could say something like "The predictability of a true engineer’s world is an enviable thing. But ours is a world always in flux, where the laws of physics change weekly", thinking that the (relative) fixity of physical laws means that physical work is predictable. When I worked as a hardware engineer, a large fraction of the effort and complexity of my projects went into dealing with physical uncertainty, and civil engineering is no different (if anything, the tools civil engineers have for dealing with physical uncertainty on large scale projects are much worse, resulting in a larger degree of uncertainty and a reduced ability to prevent the delays it causes).

If we look at how Roman engineering, or even engineering from 300 years ago, differs from modern engineering, a major source of differences is our much better understanding of the uncertainty that comes from the physical world. It used to be unremarkable when a structure failed not long after being built, without any kind of unusual conditions or stimulus (e.g., a building collapse, or a train accident due to incorrectly constructed rail). This is now rare enough that it's major news if it happens in the U.S. or Canada, and this understanding also lets us build gigantic structures in areas where it would previously have been considered difficult or impossible to build even moderate-sized structures.

For example, if you look at a large-scale construction project in the Vancouver area that's sitting on the delta (Delta, Richmond, much of the land going out towards Hope), it's only relatively recently that we developed the knowledge necessary to reliably build some large scale structures (e.g., tall-ish buildings) on that kind of ground, which is one of the many parts of modern civil engineering a Roman engineer wouldn't understand. A lot of this comes from geotechnical engineering, a sub-field of civil engineering (alternatively, arguably its own field, and also arguably a subfield of geological engineering) that deals with the ground: soil mechanics, rock mechanics, geology, hydrology, and so on. One fundamental piece of geotechnical engineering is the idea that you can apply mechanics to reason about soil. The first known application of mechanics to soils was in 1773, and geotechnical engineering as it's thought of today is generally said to have started in 1925. While Roman engineers did a lot of impressive work, the mental models they were operating with precluded understanding much of modern civil engineering.

Naturally, for this knowledge to change what we can build, it has to change how we build. If we look at a construction site on compressible Vancouver delta soils that uses this modern knowledge, by wall clock time, it mostly looks like someone put a pile of sand on the construction site (preload). While a Roman engineer would know what a pile of sand is, they wouldn't know how someone figured out how much sand was needed and how long it needed to stay there (in some cases, Romans would use piles or rafts where we would use preload today, but in many cases, they had no answer to the problems preload solves).
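To give a sense of what goes into that figuring out, here's a minimal sketch, in Python, of the classical one-dimensional consolidation calculation (Terzaghi's theory, the textbook starting point for preload design). All the soil parameters below are invented for illustration; a real project would get them from lab tests and field instrumentation:

```python
import math

# Minimal sketch of 1-D consolidation theory (Terzaghi), the kind of
# calculation behind "how much preload, and for how long". All soil
# numbers below are invented for illustration.

H = 8.0         # thickness of the soft, compressible layer, meters
e0 = 1.2        # initial void ratio
Cc = 0.4        # compression index (from lab consolidation tests)
sigma0 = 50.0   # initial effective vertical stress at mid-layer, kPa
d_sigma = 60.0  # added stress from the preload (pile of sand), kPa
cv = 5.0        # coefficient of consolidation, m^2/year

# Primary consolidation settlement under the surcharge:
settlement = (Cc * H / (1 + e0)) * math.log10((sigma0 + d_sigma) / sigma0)
print(f"expected settlement: {settlement:.2f} m")

# Time to reach ~90% of that settlement. The time factor Tv is about 0.848
# for 90% consolidation; the drainage path is H/2 if water can escape from
# both the top and bottom of the layer.
Tv_90 = 0.848
H_dr = H / 2
t_90 = Tv_90 * H_dr**2 / cv
print(f"time for ~90% consolidation: {t_90:.1f} years")
```

The particular numbers don't matter; the point is that "how much sand, and for how long" falls out of measured soil properties and a theory of soil mechanics that didn't exist until the 20th century.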

Geotechnical engineering and the resultant pile of sand (preload) is just one of the tens of sub-fields you'd need expertise in for a modern, large scale civil engineering project, and one that a Roman engineer would need a fair amount of education to really understand.

Coming back to cocktail party solutions I hear, one common topic is how to fix high construction costs and slow construction. There's a set of trendy ideas that people throw around about why things are so expensive, why projects took longer than projected, etc. Sometimes, these comments are similar to what I hear from practicing engineers who are involved in the projects but, more often than not, the reasons are quite different. When the reasons are the same, it seems that they must be correct by coincidence, since the people offering them don't seem to understand the body of knowledge necessary to reason through the engineering tradeoffs2.

Of course, like cocktail party theorists, civil engineers with expertise in the field also think that modern construction is wasteful, but the reasons they come up with are often quite different from what I hear at parties3. It's easy to come up with cocktail party solutions to problems by not understanding the problem, assuming the problem is artificially simple, and then coming up with a solution to the imagined problem. It's harder to understand the tradeoffs in play among the tens of interacting engineering sub-fields required to do large scale construction projects and have an actually relevant discussion of what the tradeoffs should be and how one might motivate engineers and policy makers to shift where the tradeoffs land.

A widely cited study on the general phenomenon of people having wildly oversimplified and incorrect models of how things work is this study by Rebecca Lawson on people's understanding of how bicycles work, which notes:

Recent research has suggested that people often overestimate their ability to explain how things function. Rozenblit and Keil (2002) found that people overrated their understanding of complicated phenomena. This illusion of explanatory depth was not merely due to general overconfidence; it was specific to the understanding of causally complex systems, such as artifacts (crossbows, sewing machines, microchips) and natural phenomena (tides, rainbows), relative to other knowledge domains, such as facts (names of capital cities), procedures (baking cakes), or narratives (movie plots).

And

It would be unsurprising if nonexperts had failed to explain the intricacies of how gears work or why the angle of the front forks of a bicycle is critical. Indeed, even physicists disagree about seemingly simple issues, such as why bicycles are stable (Jones, 1970; Kirshner, 1980) and how they steer (Fajans, 2000). What is striking about the present results is that so many people have virtually no knowledge of how bicycles function.

In "experiment 2" in the study, people were asked to draw a working bicycle and focus on the mechanisms that make the bicycle work (as opposed to making the drawing look nice) and 60 of the 94 participants had at least one gross error that caused the drawing to not even resemble a working bicycle. If we look at a large-scale real-world civil engineering project, a single relevant subfield, like geotechnical engineering, contains many orders of magnitude more complexity than a bicycle and it's pretty safe to guess that, to the nearest percent, zero percent of lay people (or Roman engineers) could roughly sketch out what the relevant moving parts are.

For a non-civil engineering example, Jamie Brandon quotes this excerpt from Jim Manzi's Uncontrolled, which is a refutation of a "clever" nugget that I've frequently heard trotted out at parties:

The paradox of choice is a widely told folktale about a single experiment in which putting more kinds of jam on a supermarket display resulted in less purchases. The given explanation is that choice is stressful and so some people, facing too many possible jams, will just bounce out entirely and go home without jam. This experiment is constantly cited in news and media, usually with descriptions like "scientists have discovered that choice is bad for you". But if you go to a large supermarket you will see approximately 12 million varieties of jam. Have they not heard of the jam experiment? Jim Manzi relates in Uncontrolled:

First, note that all of the inference is built on the purchase of a grand total of thirty-five jars of jam. Second, note that if the results of the jam experiment were valid and applicable with the kind of generality required to be relevant as the basis for economic or social policy, it would imply that many stores could eliminate 75 percent of their products and cause sales to increase by 900 percent. That would be a fairly astounding result and indicates that there may be a problem with the measurement.

... the researchers in the original experiment themselves were careful about their explicit claims of generalizability, and significant effort has been devoted to the exact question of finding conditions under which choice overload occurs consistently, but popularizers telescoped the conclusions derived from one coupon-plus-display promotion in one store on two Saturdays, up through assertions about the impact of product selection for jam for this store, to the impact of product selection for jam for all grocery stores in America, to claims about the impact of product selection for all retail products of any kind in every store, ultimately to fairly grandiose claims about the benefits of choice to society. But as we saw, testing this kind of claim in fifty experiments in different situations throws a lot of cold water on the assertion.

As a practical business example, even a simplification of the causal mechanism that comprises a useful forward prediction rule is unlikely to be much like 'Renaming QwikMart stores to FastMart will cause sales to rise,' but will instead tend to be more like 'Renaming QwikMart stores to FastMart in high-income neighborhoods on high-traffic roads will cause sales to rise, as long as the store is closed for painting for no more than two days.' It is extremely unlikely that we would know all of the possible hidden conditionals before beginning testing, and be able to design and execute one test that discovers such a condition-laden rule.

Further, these causal relationships themselves can frequently change. For example, we discover that a specific sales promotion drives a net gain in profit versus no promotion in a test, but next year when a huge number of changes occurs - our competitors have innovated with new promotions, the overall economy has deteriorated, consumer traffic has shifted somewhat from malls to strip centers, and so on - this rule no longer holds true. To extend the prior metaphor, we are finding our way through our dark room by bumping our shins into furniture, while unobserved gremlins keep moving the furniture around on us. For these reasons, it is not enough to run an experiment, find a causal relationship, and assume that it is widely applicable. We must run tests and then measure the actual predictiveness of the rules developed from these tests in actual implementation.

So far, we've discussed examples of people with no background in a field explaining how that field works or should work, but the error of stepping back and incorrectly assuming that things are simple also happens to people taking a high-level view of their own field, a view that's disconnected from the details. For example, back when I worked at Centaur and we'd not yet shipped a dual core chip, a nearly graduated PhD student in computer architecture from a top school asked me, "why don't you just staple two cores together to make a dual core chip like Intel and AMD? That's an easy win".

At that time, we'd already been working on going from single core to multi core for more than a year. Making a single core chip multi-core or even multi-processor capable with decent performance requires adding significant complexity to the cache and memory hierarchy, the most logically complex part of the chip. As a rough estimate, I would guess that taking a chip designed for single-core use and making it multi-processor capable at least doubles the amount of testing/verification effort required to produce a working chip (and the majority of the design effort that goes into a chip is testing/verification).

More generally, a computer architect is only as good as their understanding of the tradeoffs their decisions impact, and great ones have a strong understanding of the underlying fields they must interact with. A common reason a computer architect makes a bad decision is that they have a cocktail party level understanding of the fields one or two levels below computer architecture. An example of a bad decision that's occurred multiple times in industry is when a working computer architect decides to add SMT to a chip because it's basically a free win: you pay a few percent extra area and get perhaps 20% better performance. I know of multiple attempts to do this that completely failed for predictable reasons, because the architect failed to account for the complexity and verification cost of adding SMT. Adding SMT adds much more complexity than adding a second core because the SMT logic has to be plumbed through everything, and it causes an explosion in the complexity of verifying the chip for the same reason. Intel famously added SMT to the P4 and did not enable it in the first generation it shipped in because it was too complex to verify in a single generation and had critical, showstopping bugs. With the years they had to shake the bugs out on one generation of the architecture, they fixed their SMT implementation and shipped it in the next generation of chips. This happened again when they migrated to the Core architecture and added SMT to that. A working computer architect should know that this happened twice to Intel, implying that verifying an SMT implementation is hard, and yet there have been multiple instances where someone with a cocktail party level of understanding of the complexity of SMT suggested adding it to a design that did not have the verification budget to ever ship a working chip with SMT.
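To make the "just staple two cores together" problem a bit more concrete, here's a deliberately toy sketch in Python (it doesn't resemble real hardware or a real protocol like MESI) of why a cache that's perfectly correct with one core silently serves stale data with two, and why the fix adds cross-cache state whose interactions are exactly what blows up verification:

```python
# Toy model: a write-through cache that is perfectly correct with one core.
class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}                      # addr -> cached value

    def read(self, addr):
        if addr not in self.lines:           # miss: fill from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]              # hit: serve the cached copy

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value            # write-through to memory

memory = {0x10: 1}
c0, c1 = Cache(memory), Cache(memory)        # naively "staple two cores together"
assert c0.read(0x10) == 1                    # core 0 now holds a cached copy
c1.write(0x10, 2)                            # core 1 updates the same address
print(c0.read(0x10))                         # 1 -- core 0 sees a stale value!

# The minimal fix: every write must also invalidate the line in peer caches
# (a crude stand-in for snooping). Correctness now depends on the
# *interaction* of all the caches, not on each cache in isolation.
class CoherentCache(Cache):
    peers = []

    def write(self, addr, value):
        for peer in CoherentCache.peers:
            if peer is not self:
                peer.lines.pop(addr, None)   # invalidate stale copies
        super().write(addr, value)

memory = {0x10: 1}
d0, d1 = CoherentCache(memory), CoherentCache(memory)
CoherentCache.peers = [d0, d1]
assert d0.read(0x10) == 1
d1.write(0x10, 2)
print(d0.read(0x10))                         # 2 -- the stale copy was invalidated
```

Even this toy glosses over timing: the moment reads, writes, and invalidations can overlap in flight, correctness depends on interleavings across caches, and that combinatorial space is what verification has to cover.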

And, of course, this isn't unique to computer architecture. I used the dual core example because it happens to be top-of-mind for me, but I can think of tens of similar examples off the top of my head and I'm pretty sure I could write up a few hundred if I spent a few days on it. People working in a field still have to be very careful to avoid having an incorrect, too-abstract view of the world that elides details and draws comically wrong inferences as a result. When people outside a field explain how things should work, their explanations are generally even worse than those of someone in the field who missed a critical consideration; they generally present crank ideas.

Bringing together the Roman engineering example and the CPU example, going from 1 core to 2 (and, in general, going from 1 to 2, as in 1 datacenter to 2 datacenters or a monolith to a distributed system) is something every practitioner should understand is hard, even if some don't. Somewhat relatedly, if someone showed off a 4 THz processor that had 1000x the performance of a 4 GHz processor, that's something any practitioner should recognize as alien technology that they definitely do not understand. Only a lay person with no knowledge of the field could reasonably think to themselves, "it's just a processor running at 1000x the clock speed; an engineer who can make a 4 GHz processor would basically understand how a 4 THz processor with 1000x the performance works". We are so far from being able to scale up performance by 1000x by running chips 1000x faster that doing so would require many fundamental breakthroughs in technology and, most likely, the creation of entirely new fields that contain more engineering knowledge than exists in the world today. Similarly, only a lay person could look at Roman engineering and modern civil engineering and think "Romans built things and we build things that are just bigger and more varied; a Roman engineer should be able to understand how we build things today because the things are just bigger". Geotechnical engineering alone contains more engineering knowledge than existed in all engineering fields combined in the Roman era and it's only one of the new fields that had to be invented to allow building structures like we can build today.

Of course, I don't expect random programmers to understand geotechnical engineering, but I would hope that someone who's making a comparison between programming and civil engineering would at least have some knowledge of civil engineering, and not just assume that the amount of knowledge that exists in the field is roughly equal to their own knowledge of it when they know basically nothing about the field.

Although I seem to try a lot harder than most folks to avoid falling into the trap of thinking something is simple because I don't understand it, I still fall prey to this all the time and the best things I've come up with to prevent this, while better than nothing, are not reliable.

One part of this is that I've tried to cultivate noticing "the feeling of glossing over something without really understanding it". I think of this as analogous to (and perhaps it's actually the same thing as) something that's become trendy over the past twenty years: paying attention to how emotions feel in your body and understanding your emotional state by noticing feelings in your body, e.g., a certain flavor of tight feeling in a specific muscle is a sure sign that I'm angry.

There's a specific feeling I get in my body when I have a fuzzy, high-level view of something and am mentally glossing over it. I can easily miss it if I'm not paying attention, and I suspect I can also miss it when I gloss over something in a way where the non-conscious part of the brain that generates the feeling doesn't even know that I'm glossing over something. Although noticing this feeling is inherently unreliable, I think that everything else I might do that's self contained to check my own reasoning fundamentally relies on the same mechanism (e.g., if I have a checklist to determine whether I've glossed over something when reasoning about a topic, some part of that process will still rely on feeling or intuition). I do try to postmortem cases where I missed the feeling to figure out what happened, and that's basically how I figured out that I have a feeling associated with this error in the first place (I thought about what led up to this class of mistake in the past and noticed that I have a feeling that's generally associated with it), but that's never going to be perfect or even very good.

Another component is doing what I think of as "checking inputs into my head". When I was in high school, I noticed that a pretty large fraction of the "obviously wrong" things I said came from letting incorrect information into my head. I didn't and still don't have a good, cheap way to tag a piece of information with how reliable it is, so I find it much easier to either fact-check or discard information on consumption.

Another thing I try to do is get feedback, which is unreliable and also intractable in the general case since the speed of getting feedback is so much slower than the speed of thought that slowing down general thought to the speed of feedback would result in having relatively few thoughts4.

Although, unlike in some areas, there's no mechanical, systematic set of steps that can be taught to solve the problem, I do think this is something that can be practiced and improved, and there are some fields where similar skills are taught (often implicitly). For example, when discussing the prerequisites for an advanced or graduate level textbook, it's not uncommon to see a book say something like "Self contained. No prerequisites other than mathematical maturity". This is a shorthand way of saying "this book doesn't require you to know any particular mathematical knowledge that a high school student wouldn't have picked up, but you do need to have ironed out the kind of fuzzy thinking that almost every untrained person has when interpreting and understanding mathematical statements". Someone with a math degree will have a bunch of explicit knowledge in their head, things like the Cauchy-Schwarz inequality and the Bolzano-Weierstrass theorem, but the important thing for being able to understand the book isn't the explicit knowledge; it's the general way one thinks about math.
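For concreteness, the explicit-knowledge side of that contrast is statements like the Cauchy-Schwarz inequality, which says that in any inner product space,

$$ |\langle u, v \rangle| \le \|u\| \, \|v\|, $$

but what the "mathematical maturity" prerequisite asks for isn't the ability to recall such statements; it's the habit of reading one and knowing precisely what it does and doesn't claim.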

Although there isn't really a term for the equivalent of mathematical maturity in other fields, e.g., people don't generally refer to "systems design maturity" as something they look for in systems design interviews, the analogous skill exists even though it doesn't have a name. And likewise for thinking about topics where one isn't a trained expert, like a non-civil engineer thinking about why a construction project cost what it did and took as long as it did: a sort of general maturity of thought5.

Thanks to Reforge - Engineering Programs and Flatirons Development for helping to make this post possible by sponsoring me at the Major Sponsor tier.

Also, thanks to Pam Wolf, Ben Kuhn, Yossi Kreinin, Fabian Giesen, Laurence Tratt, Danny Lynch, Justin Blank, A. Cody Schuffelen, Michael Camilleri, and Anonymous for comments/corrections discussion.

An anonymous blog reader gave this example of their own battle with cocktail party ideas:

Your most recent post struck a chord with me (again!), as I have recently learned that I know basically nothing about making things cold, even though I've been a low-temperature physicist for nigh on 10 years, now. Although I knew the broad strokes of cooling, and roughly how a dilution refrigerator works, I didn't appreciate the sheer challenge of keeping things at milliKelvin (mK) temperatures. I am the sole physicist on my team, which otherwise consists of mechanical engineers. We have found that basically every nanowatt of dissipation at the mK level matters, as does every surface-surface contact, every material choice, and so on.

Indeed, we can say that the physics of thermal transport at mK temperatures is well understood, and we can write laws governing the heat transfer as a function of temperature in such systems. They are usually written as P = aT^n. We know that different classes of transport have different exponents, n, and those exponents are well known. Of course, as you might expect, the difference between having 'hot' qubits vs qubits at the base temperature of the dilution refrigerator (30 mK) is entirely wrapped up in the details of exactly what value of the pre-factor a happens to be in our specific systems. This parameter can be guessed, usually to within a factor of 10, sometimes to within a factor of 2. But really, to ensure that we're able to keep our qubits cold, we need to measure those pre-factors. Things like type of fastener (4-40 screw vs M4 bolt), number of fasteners, material choice (gold? copper?), and geometry all play a huge role in the actual performance of the system. Oh also, it turns out n changes wildly as you take a metal from its normal state to its superconducting state. Fun!

We have spent over a year carefully modeling our cryogenic systems, and in the process have discovered massive misconceptions held by people with 15-20 years of experience doing low-temperature measurements. We've discovered material choices and design decisions that would've been deemed insane had any actual thermal modeling been done to verify these designs.

The funny thing is, this was mostly fine if we wanted to reproduce the results of academic labs, which mostly favored simpler experiment design, but just doesn't work as we leave the academic world behind and design towards our own purposes.

P.S. Quantum computing also seems to suffer from the idea that controlling 100 qubits (IBM is at 127) is not that different from 1,000 or 1,000,000. I used to think that it was just PR bullshit and the people at these companies responsible for scaling were fully aware of how insanely difficult this would be, but after my own experience and reading your post, I'm a little worried that most of them don't truly appreciate the titanic struggle ahead for us.

This is just a long-winded way of saying that I have held cocktail party ideas about a field in which I have a PhD and am ostensibly an expert, so your post was very timely for me. I like to use your writing as a springboard to think about how to be better, which has been very difficult. It's hard to define what a good physicist is or does, but I'm sure that trying harder to identify and grapple with the limits of my own knowledge seems like a good thing to do.
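To make the reader's power law concrete, here's a minimal sketch in Python of how the uncertainty they describe in the prefactor a propagates to operating temperature. The exponents are the standard ones (roughly n = 2 for electronic conduction, n = 4 for phonon/boundary transport), but the power level and prefactor values are invented for illustration:

```python
# Steady state: dissipated power P escapes through a thermal link obeying
#     P = a * (T_stage**n - T_base**n)
# so the stage equilibrates at T_stage = (P/a + T_base**n) ** (1/n).
# The exponents n are well known; the prefactor a must be measured per system.

T_BASE = 0.030   # 30 mK mixing chamber temperature, in kelvin
P = 1e-9         # 1 nW dissipated at the cold stage (illustrative)

for n, label in [(2, "electronic conduction, n=2"),
                 (4, "phonon/boundary transport, n=4")]:
    for a in (1e-6, 1e-5, 1e-4):             # prefactor guessed only to ~10x
        t_stage = (P / a + T_BASE**n) ** (1 / n)
        print(f"{label}, a={a:.0e} W/K^{n}: stage sits at {t_stage*1e3:.0f} mK")
```

With these made-up numbers, a factor of 10 in a, well within the guessing error the reader describes, is the difference between qubits sitting near base temperature and qubits several times hotter.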

For a broader and higher-level discussion of clear thinking, see Julia Galef's Scout Mindset:

WHEN YOU THINK of someone with excellent judgment, what traits come to mind? Maybe you think of things like intelligence, cleverness, courage, or patience. Those are all admirable virtues, but there’s one trait that belongs at the top of the list that is so overlooked, it doesn’t even have an official name.

So I’ve given it one. I call it scout mindset: the motivation to see things as they are, not as you wish they were.

Scout mindset is what allows you to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course. It’s what prompts you to honestly ask yourself questions like “Was I at fault in that argument?” or “Is this risk worth it?” or “How would I react if someone from the other political party did the same thing?” As the late physicist Richard Feynman once said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

As a tool for improving thought, the book has a number of chapters that give concrete checks one can try, which makes it more (or at least more easily) actionable than this post, which merely suggests that you figure out what it feels like when you're glossing over something. But I don't think the ideas in the book are a substitute for this post, in that the self-checks the book suggests don't directly attack the problem discussed here.

In one chapter, Galef suggests leaning into confusion (e.g., when some seemingly contradictory information gives rise to a feeling of confusion), which I agree with. I would add that there are a lot of other feelings that are useful to observe that don't really have good names. When it comes to evaluating ideas, some that I try to note, besides the already mentioned "feeling that I'm glossing over important details", are "the feeling that a certain approach is likely to pay off if pursued", "the feeling that an approach is really fraught/dangerous", "the feeling that there's critical missing information", and "the feeling that something is really wrong", along with similar feelings that don't have great names.

For a discussion of how the movie Don't Look Up promotes the idea that the world is simple and we can easily find cocktail party solutions to problems, see this post by Scott Alexander.

Also, John Salvatier notes that reality has a surprising amount of detail.


  1. Another one I commonly hear is that, unlike trad engineers, programmers do things that have never been done before. [return]
  2. Discussions about construction delays similarly ignore geotechnical reasons for delays. As with the above, I'm using geotechnical work as an example of a sub-field that explains many delays because it's something I happen to be familiar with, not because it's the most important thing, though it is a major cause of delays and, on many kinds of projects, the largest cause.

    Going back to our example that a Roman engineer might, at best, superficially understand, the reason that we pile dirt onto the ground before building is that much of Vancouver has poor geotechnical conditions for building large structures. The ground is soft and will get unevenly squished down over time if something heavy is built on top of it. The sand is there as a weight, to pre-squish the ground.

    As described in the paragraph above, this sounds straightforward. Unfortunately, it's anything but. As it happens, I've been spending a lot of time driving around with a geophysics engineer (a field that's related to but quite distinct from geotechnical engineering). When we drive over a funny bump or dip in the road, she can generally point out the geotechnical issue or politically motivated decision to ignore the geotechnical engineer's guidance that caused the bump to come into existence. The thing I find interesting about this is that, even though the level of de-risking done for civil engineering projects is generally much higher than is done for the electrical engineering projects I've worked on, where in turn it's much higher than on any software project I've worked on, enough "bugs" still make it into "production" that you can see tens or hundreds of mistakes in a day if you drive around, are knowledgeable, and pay attention.

    Fundamentally, the issue is that humanity does not have the technology to understand the ground at anything resembling a reasonable cost for physically large projects, like major highways. One tool that we have is to image the ground with ground penetrating radar, but this produces highly underdetermined output. Another tool is something like a core drill or soil auger, which is basically digging down into the ground to see what's there. This also has inherently underdetermined output because we only get to see what's going on exactly where we drilled, and the ground sometimes has large spatial variation in its composition that's not obvious from looking at it from the surface. A common example is an unmapped remnant creek bed, which can easily "dodge" the locations where soil is sampled, as in the sketch below. Other tools also exist but they, similarly, leave the engineer with an incomplete and uncertain view of the world when used under practical financial constraints.
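    As an illustration of that "dodging", here's a toy Monte Carlo sketch in Python (all the dimensions are invented for illustration): even with boreholes drilled every 50 meters along an alignment, a 5 meter wide buried channel is missed by every borehole the vast majority of the time:

    ```python
    import random

    # Toy Monte Carlo: how often does a buried channel "dodge" every borehole?
    SITE = 200.0      # length of the alignment being investigated, meters
    SPACING = 50.0    # distance between boreholes, meters
    WIDTH = 5.0       # width of the remnant creek bed, meters
    TRIALS = 100_000

    boreholes = [i * SPACING for i in range(int(SITE / SPACING) + 1)]  # 0, 50, ..., 200

    misses = 0
    for _ in range(TRIALS):
        left = random.uniform(0.0, SITE - WIDTH)   # channel spans [left, left + WIDTH]
        if not any(left <= b <= left + WIDTH for b in boreholes):
            misses += 1

    print(f"channel missed by every borehole in {100 * misses / TRIALS:.0f}% of trials")
    # => roughly 92% with these numbers
    ```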

    When I listen to cocktail party discussions of why a construction project took so long and compare them to what civil engineers tell me caused the delay, the cocktail party discussion almost always exclusively covers reasons that civil engineers tell me are incorrect. There are many reasons for delays, and "unexpected geotechnical conditions" is a common one. Civil engineers are in a bind here since drilling cores is time consuming and expensive, and people get mad when they see that the ground is dug up and no "real work" is happening (and likewise when preload is applied: "why aren't they working on the highway?"), which creates pressure on politicians that indirectly results in timelines that don't allow sufficient time to understand geotechnical conditions. This sometimes results in a geotechnical surprise during a project (typically phrased as "unforeseen geotechnical conditions" in technical reports), which can force major parts of a project to switch to slower and more expensive techniques or, even worse, can necessitate a part of a project being redone, resulting in cost and schedule overruns.

    I've never heard a cocktail party discussion that brings up geotechnical reasons for project delays. Instead, people talk about high-level reasons that sound plausible to a lay person but are completely fabricated and disconnected from reality. But if you want to discuss how things can be built more quickly and cheaply, "progress studies", etc., this cannot reasonably be done without some understanding of the geotechnical tradeoffs in play (as well as the tradeoffs from other civil engineering fields we haven't discussed).

    [return]
  3. One thing we could do to keep costs under control is to do less geotechnical work and ignore geotechnical surprises up to some risk bound. Today, some of the "amount of work" done is determined by regulations and much of it is determined by case law, which gives a rough idea of what work needs to be done to avoid legal liability in case of various bad outcomes, such as a building collapse.

    If, instead of using case law and risk of liability to determine how much geotechnical derisking should be done, we computed this based on QALYs per dollar, we would find that, at the margin, we spend a very large amount of money on geotechnical derisking compared to many other interventions.

    This is not just true of geotechnical work; it's also true of other fields in civil engineering. E.g., builders in places like the U.S. and Canada do much more slump testing than is done in some countries that have a much faster pace of construction, which reduces the risk of a building's untimely demise. It would be both scandalous and a serious liability problem if a building collapsed because its builders didn't do slump testing they would've done in the U.S. or Canada, but buildings usually don't collapse even when builders don't do as much slump testing as tends to be done in the U.S. and Canada.

    Countries that don't build to standards roughly as rigorous as U.S. or Canadian standards sometimes have fairly recently built structures collapse in ways that would be considered shocking in the U.S. and Canada, but the number of lives saved per dollar by the extra rigor is very small compared to other places the money could be spent. Whether or not we should change this with a policy decision is more relevant to building costs and timelines than the fabricated reasons I hear in cocktail party discussions of construction costs, but I've never heard this or other concrete reasons for project cost brought up outside of civil engineering circles.

    Even if we confine ourselves to work that's related to civil engineering, as opposed to taking a broader, more EA-minded approach and looking at QALYs across all possible interventions, and consider the tradeoff between resources spent on derisking during construction vs. resources spent derisking on an ongoing basis (inspections, maintenance, etc.), the relative resource levels weren't determined by a process that should be expected to produce anywhere near an optimal outcome.

    [return]
  4. Some people suggest that writing is a good intermediate step that's quicker than getting external feedback while being more reliable than just thinking about something, but I find writing too slow to be usable as a way to clarify ideas and, after working on identifying when I'm having fuzzy thoughts, I find trying to think through an idea to be more reliable as well as faster. [return]
  5. One part of this that I think is underrated by people who have a self-image of "being smart" is where book learning and thinking about something is sufficient vs. where on-the-ground knowledge of the topic is necessary.

    A fast reader can read the texts one reads for most technical degrees in maybe 40-100 hours. For a slow reader, that could be much slower, but it's still not really that much time. There are some aspects of problems where this is sufficient to understand the problem and come up with good, reasonable solutions. And there are some aspects of problems where this is woefully insufficient and thousands of hours of applied effort are required to really understand what's going on.

    [return]