Tag: Engineering

Why Early Decisions Have the Greatest Impact and Why Growing Too Much Is a Bad Thing

I never went to Engineering school. My undergrad is in Computer Science. Despite that, I've always wanted to learn more about Engineering.

John Kuprenas and Matthew Frederick have put together a book, 101 Things I Learned in Engineering School, which contains some of the big ideas.

In the author's note, Kuprenas writes:

(This book) introduces engineering largely through its context, by emphasizing the common sense behind some of its fundamental concepts, the themes intertwined among its many specialities, and the simple abstract principles that can be derived from real-world circumstances. It presents, I believe, some clear glimpses of the forest as well as the trees within it.

Here are three (of the many) things I noted in the book.

***

#8 An object receives a force, experiences stress, and exhibits strain.


Force, stress, and strain are used somewhat interchangeably in the lay world and may even be used with less than ideal rigor by engineers. However, they have different meanings.

A force, sometimes called “load,” exists external to and acts upon a body, causing it to change speed, direction, or shape. Examples of forces include water pressure on a submarine hull, snow loads on a bridge, and wind loads on the sides of a skyscraper.

Stress is the “experience” of a body—its internal resistance to an external force acting on it. Stress is force per unit area, and is expressed in units such as pounds per square inch.

Strain is the result of stress. It is the measurable percentage of deformation in an object, such as a change in length.
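The force-stress-strain chain above can be sketched with a few lines of arithmetic. The numbers here are illustrative only (a hypothetical rod under tension, not an example from the book):

```python
# Hypothetical rod under tension, chosen to keep the arithmetic obvious.
force_lbs = 10_000    # external force ("load") acting on the body, in pounds
area_sq_in = 2.0      # cross-sectional area resisting it, in square inches

# Stress: force per unit area, here in pounds per square inch (psi).
stress_psi = force_lbs / area_sq_in   # 5000.0 psi

# Strain: deformation as a percentage, e.g. change in length over original length.
original_length_in = 100.0
change_in_length_in = 0.05
strain_pct = change_in_length_in / original_length_in * 100   # about 0.05 percent
```

Same force on half the area doubles the stress, which is why a thin part fails where a thick one survives.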

#48 Early decisions have the greatest impact.


Decisions made just days or weeks into a project—assumptions of end-user needs, commitments to a schedule, the size and shape of a building footprint, and so on—have the most significant impact on design, feasibility, and cost. As decisions are made later and later in the design process, their influence decreases. Minor cost savings can sometimes be realized through value engineering in the later stages of design, but the biggest cost factors are embedded at the outset in a project's DNA.

Everyone seems to understand this point on the surface, and yet few people consider the implications. I know a lot of people who have built their careers on cleaning up their own messes: they make a poor initial decision and then work extra hours, stressed and panicked, to fix it. In the worst organizations, these people are promoted for doing an exceptional job.

Proper management of early decisions produces more free time and lower stress.

#75 A successful system won't necessarily work at a different scale.


An imaginary team of engineers sought to build a “super-horse” that would be twice as tall as a normal horse. When they created it, they discovered it to be a troubled, inefficient beast. Not only was it two times the height of a normal horse, it was twice as wide and twice as long, resulting in an overall mass eight times greater than normal. But the cross-sectional area of its veins and arteries was only four times that of a normal horse, requiring its heart to work twice as hard. The surface area of its feet was four times that of a normal horse, but each foot had to support twice the weight per unit of surface area compared to a normal horse. Ultimately, the sickly animal had to be put down.
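The super-horse is the square-cube law in disguise: scale every linear dimension by a factor k, and areas grow by k², while volume (and therefore mass) grows by k³. A minimal sketch of the arithmetic in the parable:

```python
k = 2  # twice as tall, twice as wide, twice as long

mass_factor = k ** 3          # 8x the mass (volume scales with the cube)
artery_area_factor = k ** 2   # only 4x the blood-carrying cross-section
foot_area_factor = k ** 2     # only 4x the foot surface area

# Weight each unit of foot area must carry, relative to a normal horse:
load_per_unit_area = mass_factor / foot_area_factor   # 2.0 -- twice the pressure
# The heart faces the same ratio: 8x the mass served by 4x the vessel area.
```

The mismatch between k³ loads and k² capacities is exactly why a system that works at one scale can fail at another.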

This becomes interesting when you think of the ideal size for things and how we, as well-intentioned humans, often make things worse. This has a name: iatrogenics.

Let us briefly put an organizational lens on this. Inside organizations, resources are scarce. Generally, the more people you have under you, the more influence and authority you have inside the organization. Unless there is a proper culture and incentive system in place, your incentive is to grow, not shrink. In fact, in all the meetings I've ever been in with senior management, I can't recall anyone who ran a division saying, “I have too many resources.” It's a derivative of Parkinson's Law — only work isn't expanding to fill the time available. Instead, work is expanding to fill the number of people.

Contrast that with Berkshire Hathaway, run by Warren Buffett. In a 2010 letter to shareholders he wrote:

Our flexibility in respect to capital allocation has accounted for much of our progress to date. We have been able to take money we earn from, say, See’s Candies or Business Wire (two of our best-run businesses, but also two offering limited reinvestment opportunities) and use it as part of the stake we needed to buy BNSF.

In the 2014 letter he wrote:

To date, See’s has earned $1.9 billion pre-tax, with its growth having required added investment of only $40 million. See’s has thus been able to distribute huge sums that have helped Berkshire buy other businesses that, in turn, have themselves produced large distributable profits. (Envision rabbits breeding.) Additionally, through watching See’s in action, I gained a business education about the value of powerful brands that opened my eyes to many other profitable investments.

There is an optimal size to See's. Had they retained the $1.9 billion in earnings they distributed to Berkshire, the CEO and management team might have a claim to bigger paychecks, and they'd be managing ~$2 billion in assets instead of $40 million, but the result would have been deeply suboptimal.

Our pursuit of growth beyond a certain point often ensures that one of the biggest forces in the world, time, is working against us. “What is missing,” writes Jeff Stibel in BreakPoint, “is that the unit of measure for progress isn’t size, it’s time.”

***

Other books in the series:
101 Things I Learned in Culinary School
101 Things I Learned in Business School
101 Things I Learned in Law School
101 Things I Learned in Film School

The Three Essential Properties of the Engineering Mind-Set


In his book Applied Minds: How Engineers Think, Guru Madhavan explores the mental tools of engineers that allow engineering feats. His framework is built around a flexible intellectual tool kit called modular systems thinking.

The core of the engineering mind-set is what I call modular systems thinking. It's not a singular talent, but a melange of techniques and principles. Systems-level thinking is more than just being systematic; rather, it's about the understanding that in the ebb and flow of life, nothing is stationary and everything is linked. The relationships among the modules of a system give rise to a whole that cannot be understood by analyzing its constituent parts.

***
Thinking in Systems

Thinking in systems means that you can deconstruct (breaking down a larger system into its modules) and reconstruct (putting it back together).

The focus is on identifying the strong and weak links—how the modules work, don't work, or could potentially work—and applying this knowledge to engineer useful outcomes.

There is no single engineering method, so modular systems thinking varies with context.

Engineering Dubai's Burj Khalifa is different from coding the Microsoft Office Suite. Whether used to conduct wind tunnel tests on World Cup soccer balls or to create a missile capable of hitting another missile midflight, engineering works in various ways. Even within a specific industry, techniques can differ. Engineering an artifact like a turbofan engine is different from assembling a megasystem like an aircraft, and by extension, a system of systems, such as the air traffic network.

***
The Three Essential Properties of the Engineering Mind-Set

1. The ability to see structure where there's nothing apparent.

From haikus to high-rise buildings, our world relies on structures. Just as a talented composer “hears” a sound before it's put down on a score, a good engineer is able to visualize—and produce—structures through a combination of rules, models, and instincts. The engineering mind gravitates to the piece of the iceberg underneath the water rather than its surface. It's not only about what one sees; it's also about the unseen.

A structured systems-level thinking process would consider how the elements of the system are linked in logic, in time, in sequence, and in function—and under what conditions they work and don't work. A historian might apply this sort of structural logic decades after something has occurred, but an engineer needs to do this preemptively, whether with the finest details or top-level abstractions. This is one of the main reasons why engineers build models: so that they can have structured conversations based in reality. Critically, envisioning a structure involves having the wisdom to know when a structure is valuable, and when it isn't.

Consider, for example, the following catechism by George Heilmeier—a former director of the U.S. Defense Advanced Research Projects Agency (DARPA), who also engineered the liquid crystal displays (LCDs) that are part of modern-day visual technologies. His approach to innovation is to employ a checklist-like template suitable for a project with well-defined goals and customers.

  • What are you trying to do? Articulate your objectives using absolutely no jargon.
  • How is it done today, and what are the limits of current practice?
  • What's new in your approach and why do you think it will be successful?
  • Who cares? If you're successful, what difference will it make?
  • What are the risks and the payoffs?
  • How much will it cost? How long will it take?
  • What are the midterm and final “exams” to check for success?

This type of structure “helps ask the right questions in a logical way.”

2. Adeptness at designing under constraints
The real world is full of constraints that make or break potential.

Given the innately practical nature of engineering, the pressures on it are far greater than in other professions. Constraints—whether natural or human-made—don't permit engineers to wait until all phenomena are fully understood and explained. Engineers are expected to produce the best possible results under the given conditions. Even if there are no constraints, good engineers know how to apply constraints to help achieve their goals. Time constraints on engineers fuel creativity and resourcefulness. Financial constraints and the blatant physical constraints hinging on the laws of nature are also common, coupled with an unpredictable constraint—namely, human behavior.

“Imagine if each new version of the Macintosh Operating System, or of Windows, was in fact a completely new operating system that began from scratch. It would bring personal computing to a halt,” Olivier de Weck and his fellow researchers at the Massachusetts Institute of Technology point out. Engineers often augment their software products, incrementally addressing customer preferences and business necessities—which are nothing but constraints. “Changes that look easy at first frequently necessitate other changes, which in turn cause more change. . . . You have to find a way to keep the old thing going while creating something new.” The pressures are endless.

3. Understanding Trade-offs
The ability to hold alternative ideas in your head and make considered judgments.

Engineers make design priorities and allocate resources by ferreting out the weak goals among stronger ones. For an airplane design, a typical trade-off could be to balance the demands of cost, weight, wingspan, and lavatory dimensions within the constraints of the given performance specifications. This type of selection pressure even trickles down to the question of whether passengers like the airplane they're flying in. If constraints are like tightrope walking, then trade-offs are inescapable tugs-of-war among what's available, what's possible, what's desirable, and what the limits are.

Applied Minds: How Engineers Think will help you borrow strategies from engineering and apply them to your most pressing problems.

Laws of Character and Personality

“One of the most valuable personal traits is the ability to get along with all kinds of people.”

The Unwritten Laws of Engineering is a book for engineers whose obstacles in organizations are more personal than technical. First published as a series of three articles in Mechanical Engineering, the “laws” were formulated circa 1944 as a professional code of conduct of sorts. Although fragmentary and incomplete, they are still used by engineers, young and old, to guide their behavior.

This is a rather comprehensive quality, but it defines the prime requisite of personality in any human organization. No doubt this ability can be achieved by various formulas, although it is based mostly upon general, good-natured friendliness, together with consistent observance of the “Golden Rule.” The following “dos and don’ts” are more specific elements of a winning formula:

(1) Cultivate the ability to appreciate the good qualities, rather than dislike the shortcomings, of each individual.

(2) Do not give vent to impatience and annoyance on slight provocation. Some offensive individuals seem to develop a striking capacity for becoming annoyed, which they indulge with little or no restraint.

(3) Do not harbor grudges after disagreements involving honest differences of opinion. Keep your arguments objective and leave personalities out of it. Never foster enemies, for as E. B. White put it: “One of the most time-consuming things is to have an enemy.”

(4) Form the habit of considering the feelings and interests of others.

(5) Do not become unduly preoccupied with your own selfish interests. When you look out for Number One first, your associates will be disinclined to look out for you, because they know you are already doing that. This applies to the matter of credit for accomplishments. But you need not fear being overlooked; about the only way to lose credit for a creditable job is to grab for it too avidly.

(6) Make it a rule to help the other person whenever an opportunity arises. Even if you are mean-spirited enough to derive no personal satisfaction from accommodating others, it’s a good investment. The business world demands and expects cooperation and teamwork among the members of an organization.

(7) Be particularly careful to be fair on all occasions. This means a good deal more than just fair upon demand. All of us are frequently unfair, unintentionally, simply because we do not consider other points of view to ensure that the interests of others are fairly protected. For example, we are often too quick to unjustly criticize another for failing on an assignment when the real fault lies with the manager who failed to provide the tools to do the job. Most important, whenever you enjoy a natural advantage or hold a position from which you could seriously mistreat someone, you must “lean over backwards” to be fair and square.

(8) Do not take yourself or your work too seriously. A sense of humor, under reasonable control, is much more becoming than a chronically sour dead-pan, a perpetual air of tedious seriousness, or a pompous righteousness. It is much better for your blood pressure, and for the morale of the office, to laugh off an awkward situation now and then than to maintain a tense, tragic atmosphere whenever matters take an embarrassing turn. Of course, a serious matter should be taken seriously, but preserving an oppressively heavy and funereal atmosphere does more harm than good.

(9) Put yourself out just a little to be genuinely cordial in greeting people. True cordiality is, of course, spontaneous and should never be affected, but neither should it be inhibited. We all know people who invariably pass us in the hall or encounter us elsewhere without a shadow of recognition. Whether this is due to inhibition or preoccupation, we cannot help thinking that such unsociable chumps would not be missed much if we just didn’t see them. Like anything else, this can be overdone, but most engineers can safely promote more cordiality in themselves.

(10) Give people the benefit of the doubt, especially when you can afford to do so. Mutual distrust and suspicion generate a great deal of unnecessary friction. These are derived chiefly from misunderstandings, pure ignorance, or ungenerously assuming that people are guilty until proven innocent. You will get much better cooperation from others if you assume that they are just as intelligent, reasonable, and decent as you are, even when you know they are not (although setting the odds of that is tricky indeed).

In a closing section of The Unwritten Laws of Engineering, King makes the point that “It is a mistake, of course, to try too hard to get along with everybody merely by being agreeable or even submissive on all occasions. … Do not give ground too quickly just to avoid a fight, when you know you're in the right. If you can be pushed around easily the chances are that you will be pushed around. Indeed, you can earn the respect of your associates by demonstrating your readiness to engage in a good (albeit non-personal) fight when your objectives are worth fighting for.”

As Shakespeare put it in Hamlet, when Polonius offers advice to his son: “Beware of entrance to a quarrel, but being in, bear't that the opposed may beware of thee.”

Margin of Safety: An Introduction to the Mental Model

Previously on Farnam Street, we covered the idea of Redundancy — a central concept in both the world of engineering and in practical life. Today we’re going to explore a related concept: Margin of Safety.

The margin of safety is another concept rooted in engineering and quality control. Let’s start there, then see where else our model might apply in practical life, and lastly, where it might have limitations.

* * *

Consider a highly-engineered jet engine part. If the part were to fail, the engine would also fail, perhaps at the worst possible moment—while in flight with passengers on board. Like most jet engine parts, let us assume the part is replaceable over time—though we don’t want to replace it too often (creating prohibitively high costs), we don’t expect it to last the lifetime of the engine. We design the part for 10,000 hours of average flying time.

That brings us to a central question: After how many hours of service do we replace this critical part? The easily available answer might be 9,999 hours. Why replace it any sooner than we have to? Wouldn’t that be a waste of money?

The first problem is, we know nothing of the composition of the 10,000 hours any individual part has gone through. Were they 10,000 particularly tough hours, filled with turbulent skies? Was it all relatively smooth sailing? Somewhere in the middle?

Just as importantly, how confident are we that the part will really last the full 10,000 hours? What if it had a slight flaw during manufacturing? What if we made an assumption about its reliability that was not conservative enough? What if the material degraded in bad weather to a degree we didn’t foresee?

The challenge is clear, and the implication obvious: we do not wait until the part has been in service for 9,999 hours. Perhaps at 7,000 hours, we seriously consider replacing the part, and we put a hard stop at 7,500 hours.

The difference between waiting until the last minute and replacing it comfortably early gives us a margin of safety. The sooner we replace the part, the more safety we have—by not pushing the boundaries, we leave ourselves a cushion. (Ever notice how your gas tank indicator goes on long before you’re really on empty? It’s the same idea.)
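The replacement policy above reduces to a few lines of arithmetic, using the article's numbers. A minimal sketch:

```python
# The article's jet-part numbers: retire the part early and keep the
# difference between rated life and the hard stop as margin.
rated_life_hours = 10_000     # design life under average flying conditions
consider_replacing_at = 7_000 # start planning the swap here
hard_stop_at = 7_500          # never fly the part past this point

margin_hours = rated_life_hours - hard_stop_at     # 2500 hours of cushion
margin_fraction = margin_hours / rated_life_hours  # 0.25

# That 25% cushion absorbs everything we can't know: manufacturing flaws,
# rough service history, and optimistic reliability assumptions.
```

The cushion is bought with money (parts replaced earlier than strictly necessary) and pays off in failures that never happen.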

The principle is essential in bridge building. Let’s say we calculate that, on an average day, a proposed bridge will be required to support 5,000 tons at any one time. Do we build the structure to withstand 5,001 tons? I'm not interested in driving on that bridge. What if we get a day with much heavier traffic than usual? What if our calculations and estimates are a little off? What if the material weakens over time at a rate faster than we imagined? To account for these, we build the bridge to support 20,000 tons. Only now do we have a margin of safety.
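In engineering terms, the bridge example is a safety factor: designed capacity divided by expected load. A sketch with the article's numbers:

```python
expected_load_tons = 5_000      # calculated peak load on an average day
design_capacity_tons = 20_000   # what we actually build the bridge to carry

safety_factor = design_capacity_tons / expected_load_tons   # 4.0
margin_of_safety = safety_factor - 1   # 3.0 -- capacity beyond the expected load,
                                       # as a multiple of that load

# A bridge built for 5,001 tons would have a safety factor of ~1.0:
# any error in the estimate, the materials, or the traffic breaks it.
```

A safety factor of 4 means every estimate can be wrong by a wide margin before the structure is in danger.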

This fundamental engineering principle is useful in many practical areas of life, even for non-engineers. Let’s look at one we all face.

* * *

Take a couple earning $100,000 per year after taxes, or about $8,300 per month. In designing their life, they must necessarily decide what standard of living to enjoy. (The part which can be quantified, anyway.) What sort of monthly expenses should they allow themselves to accumulate?

One all-too-familiar approach is to build in monthly expenses approaching $8,000. A $4,000 mortgage, $1,000 worth of car payments, $1,000/month for private schools…and so on. The couple rationalizes that they have “earned” the right to live large.

However, what if life throws some massive unexpected expenditures their way, as it often does? What if one of them lost their job and their combined monthly income dropped to $4,000?

The couple must ask themselves whether the ensuing misery is worth the lavish spending. If they kept up their $8,000/month spending habit after a loss of income, they would have to choose between two difficult paths: Rapidly eating into their savings or considerably downsizing their life. Either is likely to cause extreme misery from the loss of long-held luxuries.

Thinking in reverse, how can we avoid the potential misery?

A common refrain is to tell the couple to make sure they’ve stashed away some money in case of emergency, to provide a buffer. Often there is a specific multiple of current spending we’re told to have in reserve—perhaps 6-12 months. In this case, savings of $48,000-$96,000 should suffice.

However, is there a way we can build them a much larger margin for error?

Let’s say the couple decides instead to permanently limit their monthly spending to $4,000 by owning a smaller house, driving less expensive cars, and trusting their public schools. What happens?

Our margin of safety now compounds. Obviously, a savings rate exceeding 50% will rapidly accumulate in their favor — $4,300 put away by the first month, $8,600 by the second month, and so on. The mere act of systematically underspending their income rapidly gives them a cushion without much trying. If an unexpected expenditure comes up, they’ll almost certainly be ready.
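The compounding cushion above is simple enough to sketch. This uses the article's figures and deliberately assumes no investment returns, just raw underspending:

```python
monthly_income = 8_300    # ~$100,000/year after taxes
monthly_spending = 4_000  # the deliberately limited budget

def savings_after(months):
    """Cushion accumulated from underspending alone (no returns assumed)."""
    return (monthly_income - monthly_spending) * months

first_month = savings_after(1)    # 4300
first_year = savings_after(12)    # 51600

# For comparison, the conventional 6-month emergency fund at the old
# $8,000/month burn rate would be $48,000 -- exceeded within the first year.
```

The point isn't the exact figures; it's that a >50% savings rate builds the recommended emergency buffer as a side effect, without a separate savings plan.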

The unseen benefit, and the extra margin of safety in this choice, comes if either spouse loses their income—either by choice (perhaps to care for a child) or by bad luck (health issues). In this case, not only has a high savings rate accumulated in their favor, but because their spending is systematically low, they are able to avoid tapping it altogether! Their savings simply stop growing temporarily while they live on one income. This sort of “belt and suspenders” solution is the essence of margin-of-safety thinking.

(On a side note: Let’s take it even one step further. Say their former $8,000 monthly spending rate meant they probably could not retire until age 70, given their current savings rate, investment choices, and desired lifestyle post-retirement. Reducing their needs to $4,000 not only provides them much needed savings, quickly accelerating their retirement date, but they now need even less to retire on in the first place. Retiring at 70 can start to look like retiring at 45 in a hurry.)

* * *

Clearly, the margin of safety model is very powerful and we’re wise to use it whenever possible to avoid failure. But it has limitations.

One obvious issue, most salient in the engineering world, comes in the tradeoff with time and money. Given an unlimited runway of time and the most expensive materials known to mankind, it’s likely that we could “fail-proof” many products to such a ridiculous degree as to be impractical in the modern world.

For example, it’s possible to imagine Boeing designing a plane that would have a fail rate indistinguishable from zero, with parts being replaced 10% into their useful lives, built with rare but super-strong materials, etc.—so long as the world was willing to pay $25,000 for a coach seat from Boston to Chicago. Given the impracticability of that scenario, our tradeoff has been to accept planes that are not “fail-proof,” but merely extremely unlikely to fail, in order to give the world safe enough air travel at an affordable cost. This tradeoff has been enormously wise and helpful to the world. Simply put, the margin-of-safety idea can be pushed into farce without careful judgment.

* * *

This brings us to another limitation of the model, which is the failure to engage in “total systems” thinking. I'm reminded of a quote I've used before at Farnam Street:

“The reliability that matters is not the simple reliability of one component of a system, but the final reliability of the total control system.”
— Garrett Hardin in Filters Against Folly

Let’s return to the Boeing analogy. Say we did design the safest and most reliable jet airplane imaginable, with parts that would not fail in one billion hours of flight time under the most difficult weather conditions imaginable on Earth—and then let it be piloted by a drug addict high on painkillers.

The problem is that the whole flight system includes much more than just the reliability of the plane itself. Just because we built in safety margins in one area does not mean the system will not fail. This illustrates not so much a failure of the model itself, but a common mistake in the way the model is applied.

* * *

Which brings us to a final issue with the margin of safety model—naïve extrapolation of past data. Let’s look at a common insurance scenario to illustrate this one.

Suppose we have a 100-year-old reinsurance company – PropCo – which reinsures major primary insurers in the event of property damage in California caused by a catastrophe – most worrying being an earthquake and its aftershocks. Throughout its entire (long) history, PropCo had never experienced a yearly loss on this sort of coverage worse than $1 billion. Most years saw no loss worse than $250 million, and in fact, many years had no losses at all – giving them comfortable profit margins.

Thinking like engineers, the directors of PropCo insisted that the company maintain a financial position strong enough to safely cover a loss twice as bad as anything it had ever encountered. Given their historical losses, the directors believed this extra capital would give PropCo a comfortable “margin of safety” against the worst case. Right?

However, our directors missed a few crucial details. The $1 billion loss, the insurer’s worst, had been incurred in the year 1994 during the Northridge earthquake. Since then, the building density of Californian cities had increased significantly, and due to ongoing budget issues and spreading fraud, strict building codes had not been enforced. Considerable inflation in the period since 1994 also ensured that losses per damaged square foot would be far higher than ever faced previously.

With these conditions present, let’s propose that California is hit with an earthquake reading 7.0 on the Richter scale, with an epicenter 10 miles outside of downtown LA. PropCo faces a bill of $5 billion – not twice as bad, but five times as bad as it had ever faced. In this case, PropCo fails.
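PropCo's mistake can be stated in two lines: the buffer was sized from the historical worst case, while the loss distribution itself had shifted. A sketch with the article's numbers:

```python
# All figures in $ billions, from the article's hypothetical.
historical_worst_loss = 1.0                  # Northridge, 1994
capital_buffer = 2 * historical_worst_loss   # "twice as bad as ever encountered"

# Denser cities, unenforced codes, and inflation shift the distribution:
actual_loss = 5.0

shortfall = actual_loss - capital_buffer     # 3.0 -- PropCo fails
margin_held = capital_buffer / historical_worst_loss   # 2.0x looked prudent...
margin_needed = actual_loss / historical_worst_loss    # ...but 5.0x was required
```

The margin was real; the baseline it was measured against was not. A margin of safety is only as good as the estimate it multiplies.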

This illustration (which recurs every so often in the insurance field) shows the limitation of naïvely assuming a margin of safety is present based on misleading or incomplete past data.

* * *

Margin of safety is an important component of many decisions in life. You can think of it as a reservoir that absorbs errors or poor luck. Size matters; at least in this case, bigger is better. And if you need a calculator to figure out how much room you have, you're doing something wrong.

Margin of safety is part of the Farnam Street Latticework of Mental Models.

Eric Drexler on taking action in the face of limited knowledge


Science pursues answers to questions, but not always the questions that engineering must ask.

Eric Drexler, a founding father of nanotechnology, aptly describes the central differences between how science and engineering approach solutions in a world of limited knowledge.

Drexler's explanation, found in his insightful book Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, discusses how there is a certain amount of ignorance that pervades everything. How then, should we respond? Engineers apply a margin of safety.

Drexler writes:

When faced with imprecise knowledge, a scientist will be inclined to improve it, yet an engineer will routinely accept it. Might predictions be wrong by as much as 10 percent, and for poorly understood reasons? The reasons may pose a difficult scientific puzzle, yet an engineer might see no problem at all. Add a 50 percent margin of safety, and move on.

Safety margins are standard parts of design, and imprecise knowledge is but one of many reasons.

Engineers and scientists ask different questions:

… Accuracy can only be judged with respect to a purpose and engineers often can choose to ask questions for which models give good-enough answers.

The moral of the story: Beware of mistaking the precise knowledge that scientists naturally seek for the reliable knowledge that engineers actually need.


Nature presents puzzles that thwart human understanding.

Some of this is necessary fallibility—some things we simply cannot understand or predict. Just because we want to understand something doesn't mean it's within our capacity to do so.

Other problems represent limited understanding and predictability — there are things we simply cannot do yet, for a variety of reasons.

… Predicting the weather, predicting the folding of membrane proteins, predicting how particular molecules will fit together to form a crystal— all of these problems are long-standing areas of research that have achieved substantial but only partial success. In each of these cases, the unpredictable objects of study result from a spontaneous process— evolution, crystallization, atmospheric dynamics— and none has the essential features of engineering design.

What leads to system-level predictability?

— Well-understood parts with predictable local interactions, whether predictability stems from calculation or testing
— Design margins and controlled system dynamics to limit the effects of imprecision and variable conditions
— Modular organization, to facilitate calculation and testing and to insulate subsystems from one another and from the external environment

… When judging engineering concepts, beware of assuming that familiar concerns will cause problems in systems designed to avoid them.

Seeking Unique Answers vs. Seeking Multiple Options

Expanding the range of possibilities plays opposite roles in inquiry and design.

If elephantologists have three viable hypotheses about an animal’s ancestry, at least two hypotheses must be wrong. Discovering yet another possible line of descent creates more uncertainty, not less— now three must be wrong. In science, alternatives represent ignorance.

If automobile engineers have three viable designs for a car’s suspension, all three designs will presumably work. Finding yet another design reduces overall risk and increases the likelihood that at least one of the designs will be excellent. In engineering, alternatives represent options. Not knowing which scientific hypothesis is true isn’t at all like having a choice of engineering solutions. Once again, what may seem like similar questions in science and engineering are more nearly opposite.

Knowledge of options is sometimes mistaken for ignorance of facts.

Remarkably, in engineering, even scientific uncertainty can contribute to knowledge, because uncertainty about scientific facts can suggest engineering options.

Simple, Specific Theories vs. Complex, Flexible Designs

Engineers value what scientists don't: flexibility.

Science likewise has no use for a theory that can be adjusted to fit arbitrary data, because a theory that fits anything forbids nothing, which is to say that it makes no predictions at all. In developing designs, by contrast, engineers prize flexibility — a design that can be adjusted to fit more requirements can solve more problems. The components of the Saturn V vehicle fit together because the design of each component could be adjusted to fit its role.

In science, a theory should be easy to state and within reach of an individual’s understanding. In engineering, however, a fully detailed design might fill a truck if printed out on paper.

This is why engineers must sometimes design, analyze, and judge concepts while working with descriptions that take masses of detail for granted. A million parameters may be left unspecified, but these parameters represent adjustable engineering options, not scientific uncertainty; they represent, not a uselessly bloated and flexible theory, but a stage in a process that routinely culminates in a fully specified product.


Beware of judging designs as if they were theories in science. An esthetic that demands uniqueness and simplicity is simply misplaced.

Curiosity-Driven Investigation vs. Goal-Oriented Development

Organizational structure also differs between scientific and engineering pursuits; the way work is coordinated in one isn't interchangeable with the other.

In science, independent exploration by groups with diverse ideas leads to discovery, while in systems engineering, independent work would lead to nothing of use, because building a tightly integrated system requires tight coordination. Small, independent teams can design simple devices, but never a higher-order system like a passenger jet.

In inquiry, investigator-led, curiosity-driven research is essential and productive. If the goal is to engineer complex products, however, even the most brilliant independent work will reliably produce no results.

The moral of the story: Beware of approaching engineering as if it were science, because this mistake has opportunity costs that reduce the value of science itself.

In closing, Drexler comments on applying the engineering perspective.

Drawing on established knowledge to expand human capabilities, by contrast, requires an intellectual discipline that, in its fullest, high-level form, differs from science in almost every respect.

Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization is worth reading in its entirety.

The Difference Between Science And Engineering


Eric Drexler is often described as “the founding father of nanotechnology.”

His recent book, Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, includes a fascinating explanation of the difference between science and engineering.

At first glance, scientific inquiry and engineering design can seem the same. One important distinction, however, results from the flow of information.

The essence of science is inquiry; the essence of engineering is design. Scientific inquiry expands the scope of human perception and understanding; engineering design expands the scope of human plans and results.

Inquiry and design are perfectly distinct as concepts, but often interwoven in practice, whether within a field, a research program, a development team, or a single creative mind. Meshing design with inquiry can be as vital as hand-eye coordination. Engineering new instruments enables inquiry, while scientific inquiry can enable design. Chemical engineers investigate chemical systems, testing combinations of reactants, temperature, pressure, and time in search of conditions that maximize product yield; they may undertake inquiries every day, yet in the end their experiments support engineering design and analysis. Conversely, experimental physicists undertake engineering when they develop machines like the Large Hadron Collider. With its tunnels, vacuum systems, superconducting magnets, and ten-thousand-ton particle detectors, this machine demanded engineering design on a grand scale, yet all as part of a program of scientific inquiry.

But the close, interweaving links between scientific inquiry and engineering design can obscure how deeply they differ.


Science and engineering interact with the same physical world, but the lens through which you view a problem, inquiry or design, shapes what you see.

The Bottom-Up Structure of Scientific Inquiry

Scientific inquiry builds knowledge from bottom to top, from the ground of the physical world to the heights of well-tested theories, which is to say, to general, abstract models of how the world works. The resulting structure can be divided into three levels linked by two bridges.

At the ground level, we find physical things of interest to science, things like grasses and grazing herds on the African savannah, galaxies and gas clouds seen across cosmological time, and ordered electronic phases that emerge within a thousandth of a degree of absolute zero.

On the bridge to the level above, physical things become objects of study through human perception, extended by instruments like radio telescopes, magnetometers, and binoculars, yielding results to be recorded and shared, extending human knowledge. Observations bring information across the first bridge, from physical things to the realm of symbols and thought.

At this next level of information flow, scientists build concrete descriptions of what they observe. …

On the bridge to the top level of this sketch of science, concrete descriptions drive the evolution of theories, first by suggesting ideas about how the world works, and then by enabling tests of those ideas through an intellectual form of natural selection. As theories compete for attention and use, the winning traits include simplicity, breadth, and precision, as well as the breadth and precision of observational tests— and how well theory and data agree, of course.

Newtonian mechanics serves as the standard example. Its breadth embraces every mass, force, and motion, while its precision is mathematically exact. This breadth and precision are the source of both its power in practice and its failure as an ultimate theory. Newton’s Laws make precise predictions for motions at any speed, enabling precise observations to reveal their flaws.
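The point about precision can be made concrete with a standard physics example (mine, not the book's): Newtonian momentum p = mv versus the relativistic correction p = γmv. Because Newton's formula makes an exact prediction at every speed, sufficiently precise measurements at high speeds reveal its error. A minimal Python sketch, with illustrative speeds:

```python
# Illustrative sketch: Newtonian mechanics predicts momentum p = m*v at any
# speed; special relativity corrects this to p = gamma*m*v. Precisely because
# Newton's prediction is exact, precise high-speed measurements expose its flaw.
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_momentum(m: float, v: float) -> float:
    return m * v

def relativistic_momentum(m: float, v: float) -> float:
    gamma = 1 / math.sqrt(1 - (v / C) ** 2)
    return gamma * m * v

for frac in (0.001, 0.1, 0.9):  # fraction of the speed of light
    v = frac * C
    err = relativistic_momentum(1.0, v) / newtonian_momentum(1.0, v) - 1
    print(f"v = {frac:>5} c  ->  Newtonian prediction low by {err:.2%}")
```

At everyday speeds the discrepancy is far below measurement error, which is why the theory survived for two centuries; at a large fraction of c the same exact formula fails unmistakably.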

Thus, in scientific inquiry, knowledge flows from bottom to top:

  • Through observation and study, physical systems shape concrete descriptions.
  • By suggesting ideas and then testing them, concrete descriptions shape scientific theories.

Here is the schematic structure of scientific inquiry contrasted with the structure of engineering design.

The Antiparallel Structures of Scientific Inquiry and Engineering Design

The Top-Down Structure of Engineering Design

In scientific inquiry information flows from matter to mind, but in engineering design information flows from mind to matter:

  • Inquiry extracts information through instruments; design applies information through tools.
  • Inquiry shapes its descriptions to fit the physical world; design shapes the physical world to fit its descriptions.

At this level, the contrasts are often as concrete as the difference between a microscope in an academic laboratory and a milling machine on a factory floor. At the higher, more abstract levels of science and engineering, the differences are less concrete, yet at least as profound. Here, the contrasts are between designs and theories, intangible yet different products of the mind.

  • Scientists seek unique, correct theories, and if several theories seem plausible, all but one must be wrong, while engineers seek options for working designs, and if several options will work, success is assured.
  • Scientists seek theories that apply across the widest possible range (the Standard Model applies to everything), while engineers seek concepts well-suited to particular domains (liquid-cooled nozzles for engines in liquid-fueled rockets).
  • Scientists seek theories that make precise, hence brittle predictions (like Newton’s), while engineers seek designs that provide a robust margin of safety.
  • In science a single failed prediction can disprove a theory, no matter how many previous tests it has passed, while in engineering one successful design can validate a concept, no matter how many previous versions have failed.
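The margin-of-safety contrast can be illustrated with a toy check (assumed numbers, not from the book): an engineer compares a member's capacity to its expected load and requires their ratio, the factor of safety, to exceed a set threshold, so that imprecision and variable conditions are absorbed rather than predicted exactly.

```python
# Toy design-margin check (illustrative values). Rather than seeking an exact,
# brittle prediction, the engineer demands that capacity exceed load by a
# required factor of safety.

def factor_of_safety(capacity: float, load: float) -> float:
    return capacity / load

def design_is_acceptable(capacity: float, load: float,
                         required_fos: float = 2.0) -> bool:
    return factor_of_safety(capacity, load) >= required_fos

# A cable rated for 50 kN carrying a 20 kN load:
print(design_is_acceptable(50.0, 20.0))  # True  (FoS = 2.5)
# The same cable carrying 30 kN falls below the required margin:
print(design_is_acceptable(50.0, 30.0))  # False (FoS is only about 1.67)
```

A scientist would treat the gap between capacity and load as imprecision to be eliminated; the engineer deliberately designs it in.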

The Strategy of Systems Engineering

With differences this stark, it may seem a surprise that scientific inquiry and engineering design are ever confused, yet to judge by both the popular and scientific press, clear understanding seems uncomfortably rare.

The key to understanding engineering at the systems level— the architectural level— is to understand how abstract engineering choices can be grounded in concrete facts about the physical world. And a key to this, in turn, is to understand how engineers can design systems that are beyond their full comprehension.

Seeking Knowledge vs. Applying Knowledge

Because science and engineering face opposite directions, they ask different questions.

Scientific inquiry faces toward the unknown, and this shapes the structure of scientific thought; although scientists apply established knowledge, the purpose of science demands that they look beyond it.

Engineering design, by contrast, shuns the unknown. In their work, engineers seek established knowledge and apply it in hopes of avoiding surprises. In engineering, the fewer experiments, the better.

Inquiry and design call for different patterns of thought, patterns that can clash. In considering the science in the area around an engineering problem, a scientist may see endless unknowns and assume that scarce knowledge will preclude engineering, while an engineer considering the very same problem and body of knowledge may find ample knowledge to do the job.

Radical Abundance “offers a mind-expanding vision of a world hurtling toward an unexpected future.”