Why Micromanaging Kills Corporate Culture

“The more he kept sweating the details,
the less his people took ownership of their work.”

***

The most important part of a company’s culture is trust. People don’t feel trusted when you micromanage them, and that has disastrous implications.

In It’s Your Ship: Management Techniques from the Best Damn Ship in the Navy, Michael Abrashoff writes:

The difference between thinking as a top performer and thinking like your boss is the difference between individual contribution and real leadership. Some people never make this jump; they keep doing what made them successful, which in a leadership role usually means micromanaging. My predecessor on Benfold (the ship Abrashoff commanded), for instance, was extremely smart—a nuclear engineer and one of the brightest guys in the Navy. He spent his entire career in engineering, and when he took command of Benfold, he became, in effect, the super chief engineer of the ship. According to those who worked for him, he never learned to delegate. The more he kept sweating the details, the less his people took ownership of their work and the ship.

This so often happens in organizations: Micromanagement (or picomanagement, if micro doesn’t quite describe it) kills ownership. And when employees don’t have ownership—skin in the game—everything starts to go to hell. This is one reason government organizations are considered to be dysfunctional — everything is someone else’s responsibility. The incentives are awful.

Consider this anecdote Abrashoff uses to illustrate his point.

A pharmaceutical company I was working with promoted its best salesman to be head of sales. Instead of leading the sales force, he became the super salesman of the company. He had to be in on every deal, large or small. The other salespeople lost interest and stopped feeling as if they were in charge of their own jobs because they knew they couldn’t make a deal without him there to close it. The super salesman would swoop in at the last minute, close the deal, claim all the glory, and the others were left feeling that they were just holding his bat.

This reminds me of something Marshall Goldsmith, author of the impressive What Got You Here Won’t Get You There, once relayed at a conference. He told the story of a typical person in a typical organization presenting an idea to the senior approval body. This person did all the work; it’s their idea, and they know it inside and out. They present, and the senior management team, keen to exercise their egos, starts chiming in with things like “did you think of this …” or “but …” or “however …”. The project gets better with these comments; after all, most people don’t get to that level without being somewhat intelligent. However, the commitment of the person who presented the idea drops dramatically, because it’s no longer their idea. They’ve lost some ownership (how much depends on the conversation). The end result is a better idea with less commitment. And you know what? The outcome is worse than if the management team had just approved the project. Goldsmith was pointing out the obvious, and the world has never looked the same to me since.

Abrashoff aptly concludes:

When people feel they own an organization, they perform with greater care and devotion. They want to do things right the first time, and they don’t have accidents by taking shortcuts for the sake of expedience.

[…]

I am absolutely convinced that with good leadership, freedom does not weaken discipline—it strengthens it. Free people have a powerful incentive not to screw up.

Remember the wisdom of Joseph Tussman. Trust is one of the keys to getting the world to do most of the work for you. Call this an unrecognized simplicity — and one that Ken Iverson exploited to help show why culture eats strategy.

Joseph Tussman: Getting the World to do the Work for You

Nothing better sums up the ethos of Farnam Street than this quote by Joseph Tussman.

***

Joseph Tussman

“What the pupil must learn, if he learns anything at all, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and aligning with those realities. If we do not let the world teach us, it teaches us a lesson.” — Joseph Tussman

The best way to identify how the world really works is to find the general principles that line up with historically significant sample sizes — those that apply, in the words of Peter Kaufman, “across the geological time scale of human, organic, and inorganic history.”

Pair with Andy Benoit’s wisdom and make some time to think about them.

To Sacrifice the Joy of Life is to Miss the Point

Your ability to get things done and be productive is not always a function of hours.

Working more doesn’t always mean you’re working better or harder. It doesn’t mean you’re doing your best. And it certainly doesn’t mean that you’re going to live a more meaningful life. Heck, it doesn’t even mean you’re going to finish your project faster.

In The Four Agreements, Don Miguel Ruiz tells the story of a man who wanted to transcend his suffering, so he went to a Buddhist temple to find a Master who could help. He asked the Master, “Master, if I meditate four hours a day, how long will it take me to transcend?”

The Master looked at him and said, “If you meditate four hours a day, perhaps you will transcend in ten years.”

Thinking he could do better, the man then said, “Oh Master, what if I meditated eight hours a day, how long will it take me to transcend?”

The Master looked at him and said, “If you meditate eight hours a day, perhaps you will transcend in twenty years.”

“But why will it take me longer if I meditate more?” the man asked.

The Master replied, “You are not here to sacrifice your joy or your life. You are here to live, to be happy and to love. If you can do your best in two hours of meditation, but you spend eight hours instead, you will only grow tired, miss the point, and you won’t enjoy your life.”

Working harder often misses the point.

When interviewed, those nearing the end of their lives did not say they wished they’d worked harder. Rather, they encouraged being willing to make sacrifices to spend time doing things that bring enjoyment.

Just to be clear, I’m not in The 4-Hour Workweek camp. Farnam Street Media takes 60+ hours a week.

However, coming up with a work-life balance isn’t a simple ask. As David Whyte argues, it’s a flawed lens to begin with.

Questions about work and its interaction with the joy of living are personal and significant. We often only think about them toward the end of our life, when it’s too late to make changes.

Start asking yourself these questions today. And if you need a break, join me for a thinking/reading week in Hawaii this March.

Venkatesh Rao on The Three Types of Decision Makers, Mental Models, and How to Process Information

Venkatesh Rao

On this episode of The Knowledge Project, I have Venkatesh Rao.

Chris Dixon, a previous guest on the show, suggested I interview Venkatesh, who is a writer, independent researcher and consultant.

Venkatesh is the founder of the blog Ribbonfarm, the technology analysis site Breaking Smart, and the author of a book on decision making called Tempo.

We talk about a host of fascinating subjects, including the three types of decision makers, mental models, the implications of the free-agent economy, and how to process information. I hope you enjoy the conversation as much as I did.

Transcript:
A complete transcript is available for members.


Montaigne’s Rule for Reading: Pursue Pleasure


His rule in reading remained the one he had learned from Ovid: Pursue pleasure. ‘If I encounter difficulties in reading,’ he wrote, ‘I do not gnaw my nails over them; I leave them there. I do nothing without gaiety.’

How to Live: A Life of Montaigne

Michel de Montaigne might have been the original “essayist” — a proto-version of Christopher Hitchens or George Orwell. Well-read, smart, critical, and with a tendency to write in a personal tone, with references to and reflections on his own thoughts and his own life.

Montaigne was known as a well-born French statesman during the time of the Reformation in Europe, when Catholics and Protestants were viciously fighting one another over the “one true church.” (The strong, violent ideologies at play ring familiar to those of us observing extreme religious terrorism today.) A century after the printing press arrived in the West, the Wars of Religion coincided with two historical periods that we now consider monumental: the Renaissance and the Reformation. Such were the times molding a young Montaigne.

The son of a wealthy businessman, Montaigne was born in a chateau near Bordeaux (rough life), although his father did his best to keep him grounded: he forced Michel to spend some of his early years living with peasants in a cottage.

After a fairly rigorous education in the classics initiated by his family, a stint at boarding school, and a formal legal education, Montaigne went on to a career as a court adviser at Bordeaux Parliament, and then retired to his extensive personal library where he would begin to write. His personal essays — on topics ranging from death and the meaning of life to the cultural relativism inherent in judging Brazilian cannibals — would go on to influence every generation hence, starting with Shakespeare.

Montaigne became well known for his devotion to skepticism in the tradition of the Pyrrhonians. In short: a constant withholding of judgment, a deep distrust of his own knowledge, and a desire to avoid ideology and overreaching. In fact, one of the pillars of the Pyrrhonian style of thought was to construct both sides of an argument as cogently as possible before leaning one way or the other, something reminiscent of Charlie Munger’s “work required to hold an opinion” and a foundation of modern legal training. This devotion, combined with the personal feel and wide-ranging topics of his writing, made Montaigne the first of his kind as a writer.

In the wonderful biography How to Live: A Life of Montaigne, by Sarah Bakewell, we learn a bit about the books that influenced Montaigne himself. As would have been the case for most of his contemporaries, his primary influences were classics from Greece and Rome. He started with the 16th century’s version of the Grimm Brothers: Ovid’s Metamorphoses, and then moved on to Virgil’s Aeneid and some modern comedic plays. In other words, Montaigne started out with works of fiction:

One unsuitable text which Montaigne discovered for himself at the age of seven or eight was Ovid’s Metamorphoses. This tumbling cornucopia of stories about miraculous transformations among ancient gods and mortals was the closest thing the Renaissance had to a compendium of fairy tales…In Ovid, people change. They turn into trees, animals, stars, bodies of water, or disembodied voices. They alter sex; they become werewolves. A woman called Scylla enters a poisonous pool and sees each of her limbs turn into a dog-like monster from which she cannot pull away because the monsters are also her….Once a taste of this sort of thing had started him off, Montaigne galloped through other books similarly full of good stories: Virgil’s Aeneid, then Terence, Plautus, and various modern Italian comedies. He learned, in defiance of school policy, to associate reading with excitement.

As he got older, though, Montaigne turned more and more to non-fiction, to works of real life. In his words, reading non-fiction taught you about the ‘diversity and truth of man,’ as well as ‘the variety of ways he is put together, and the accidents that threaten him.’

The best material available to him came from classical writers like Tacitus, the historian of Rome in the early years after Christ; Plutarch, the biographer of eminent Greeks and Romans; and Lucretius, the Roman philosophical poet. In Bakewell’s biography, we learn what he loved about these authors:

He loved how Tacitus treated public events from the point of view of ‘private behavior and inclinations’ and was struck by the historian’s fortune in living through a ‘strange and extreme’ period, just as Montaigne himself did. Indeed, he wrote of Tacitus ‘you would often say that it is us he is describing.’

Turning to biographers, Montaigne liked those who went beyond the external events of a life and tried to reconstruct a person’s inner world from the evidence. No one excelled in this more than his favorite writer of all — the Greek biographer Plutarch, who lived from around AD 46 to around 120 and whose vast Lives presented narratives of notable Greeks and Romans in themed pairs.

Plutarch was to Montaigne what Montaigne was to many later readers: a model to follow, and a treasure-chest of ideas, quotations, and anecdotes to plunder. ‘He is so universal and so full that on all occasions and however eccentric the subject you have taken up, he makes his way into your work.’

[…]

Montaigne also loved the strong sense of Plutarch’s own personality that comes across in his work: ‘I think I know him even into his soul.’ This was what Montaigne looked for in a book, just as people later looked for it in him: the feeling of meeting a real person across the centuries. Reading Plutarch, he lost awareness of the gap in time that divided them — much bigger than the gap between Montaigne and us.

The last point is, of course, sort of fascinating. When we think about Montaigne, he seems a whole world away. 16th century France is a place we fill in our imagination with velvet cloth and kings and queens and peasants and history class. Impossibly far in the past. But that period was only 450 short years ago; Montaigne himself was reading authors 1,500 years or more before him! A far greater gap in time. Yet he felt their insights were as relevant as when they were written — a lesson we should all learn from.

We can also get a glimpse of the kind of reader Montaigne considered himself: A pretty lazy one.

‘I leaf through now one book, now another,’ he wrote, ‘without order and without plan, by disconnected fragments.’ He could sound positively cross if he thought anyone might suspect him of careful scholarship. Once, catching himself having said that books offer consolation, he hastily added, ‘Actually I use them scarcely any more than those who do not know them at all.’ And one of his sentences starts, ‘We who have little contact with books…’

His rule in reading remained the one he had learned from Ovid: pursue pleasure. ‘If I encounter difficulties in reading,’ he wrote, ‘I do not gnaw my nails over them; I leave them there. I do nothing without gaiety.’

Bakewell, and we, suspect he was feigning some humility as far as his laziness goes. Of the second point, on pursuing pleasure, Bakewell writes: ‘Of this, Montaigne made a whole principle of living.’

Still interested? Pick up Montaigne’s Essays and Bakewell’s biography for more.

Karl Popper on The Line Between Science and Pseudoscience

It’s not immediately clear, to the layman, what the essential difference is between science and something masquerading as science: pseudoscience. The distinction gets at the core of what constitutes human knowledge: How do we actually know something to be true? Is it simply because our powers of observation tell us so? Or is there more to it?

Sir Karl Popper, the scientific philosopher, was interested in the same problem. How do we actually define the scientific process? How do we know which theories can be said to be truly explanatory?


He began addressing it in a lecture, which is printed in the book Conjectures and Refutations: The Growth of Scientific Knowledge (also available online):

When I received the list of participants in this course and realized that I had been asked to speak to philosophical colleagues I thought, after some hesitation and consultation, that you would probably prefer me to speak about those problems which interest me most, and about those developments with which I am most intimately acquainted. I therefore decided to do what I have never done before: to give you a report on my own work in the philosophy of science, since the autumn of 1919 when I first began to grapple with the problem, ‘When should a theory be ranked as scientific?’ or ‘Is there a criterion for the scientific character or status of a theory?’

Popper saw a problem: a number of theories he considered non-scientific seemed, on their surface, to have a lot in common with good, hard, rigorous science. But the question of how we decide which theories are compatible with the scientific method, and which are not, was harder than it seemed.

***

It is most common to say that science is done by collecting observations and grinding out theories from them. Charles Darwin once said, after working long and hard at the problem of the Origin of Species,

My mind seems to have become a kind of machine for grinding general laws out of large collections of facts.

This is a popularly accepted notion. We observe, observe, and observe, and we look for theories to best explain the mass of facts. (Although even this is not really true: Popper points out that we must start with some a priori knowledge to be able to generate new knowledge. Observation is always done with some hypotheses in mind–we can’t understand the world from a totally blank slate. More on that another time.)

The problem, as Popper saw it, is that some bodies of knowledge more properly named pseudosciences would be considered scientific if the “Observe & Deduce” operating definition were left alone. For example, a believing astrologist can ably provide you with “evidence” that their theories are sound. The biographical information of a great many people can be explained this way, they’d say.

The astrologist would tell you, for example, how “Leos” seek to be the center of attention: ambitious, strong, seeking the limelight. As proof, they might follow up with a host of real-life Leos: world leaders, celebrities, politicians, and so on. In some sense, the theory would hold up. The observations could be explained by the theory, which is how science works, right?

Sir Karl ran into this problem in a concrete way: he lived at a time when psychoanalytic theories were all the rage, just as Einstein was laying out a new foundation for the physical sciences with the concept of relativity. What made Popper uncomfortable were comparisons between the two. Why did he feel so uneasy putting Marxist theories and Freudian psychology in the same category of knowledge as Einstein’s Relativity? Did all three not have vast explanatory power in the world? Each theory’s proponents certainly believed so, but Popper was not satisfied.

It was during the summer of 1919 that I began to feel more and more dissatisfied with these three theories–the Marxist theory of history, psychoanalysis, and individual psychology; and I began to feel dubious about their claims to scientific status. My problem perhaps first took the simple form, ‘What is wrong with Marxism, psycho-analysis, and individual psychology? Why are they so different from physical theories, from Newton’s theory, and especially from the theory of relativity?’

I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory.

Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still ‘un-analysed’ and crying aloud for treatment.

Here was the salient problem: The proponents of these new sciences saw validations and verifications of their theories everywhere. If you were having trouble as an adult, it could always be explained by something your mother or father had done to you when you were young, some repressed something-or-other that hadn’t been analyzed and solved. They were confirmation bias machines.

What was the missing element? Popper had figured it out before long: The non-scientific theories could not be falsified. They were not testable in a legitimate way. There was no possible objection that could be raised which would show the theory to be wrong.

In a true science, the following statement can be easily made: “If X happens, it would show demonstrably that theory Y is not true.” We can then design an experiment, a physical one or sometimes a simple thought experiment, to figure out whether X actually does happen. It’s the opposite of looking for verification; you must try to show the theory is incorrect, and if you fail to do so, you thereby strengthen it.

Pseudosciences cannot and do not do this–they are not strong enough to hold up. As an example, Popper discussed Freud’s theories of the mind in relation to Alfred Adler’s so-called “individual psychology,” which was popular at the time:

I may illustrate this by two very different examples of human behaviour: that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child. Each of these two cases can be explained with equal ease in Freudian and in Adlerian terms. According to Freud the first man suffered from repression (say, of some component of his Oedipus complex), while the second man had achieved sublimation. According to Adler the first man suffered from feelings of inferiority (producing perhaps the need to prove to himself that he dared to commit some crime), and so did the second man (whose need was to prove to himself that he dared to rescue the child). I could not think of any human behaviour which could not be interpreted in terms of either theory. It was precisely this fact–that they always fitted, that they were always confirmed–which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness.

Popper contrasted these theories against Relativity, which made specific, verifiable predictions, giving the conditions under which the predictions could be shown false. It turned out that Einstein’s predictions came to be true when tested, thus verifying the theory through attempts to falsify it. But the essential nature of the theory gave grounds under which it could have been wrong. To this day, physicists seek to figure out where Relativity breaks down in order to come to a more fundamental understanding of physical reality. And while the theory may eventually be proven incomplete or a special case of a more general phenomenon, it has still made accurate, testable predictions that have led to practical breakthroughs.

Thus, in Popper’s words, science requires testability: “If observation shows that the predicted effect is definitely absent, then the theory is simply refuted.”  This means a good theory must have an element of risk to it. It must be able to be proven wrong under stated conditions.

From there, Popper laid out his essential conclusions, which are useful to any thinker trying to figure out if a theory they hold dear is something that can be put in the scientific realm:

1. It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.

2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.

3. Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.

4. A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of ‘corroborating evidence’.)

7. Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a ‘conventionalist twist’ or a ‘conventionalist stratagem’.)

One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.

Finally, Popper was careful to say that it is not possible to prove that Freudianism was not true, at least in part. But we can say that we simply don’t know whether it’s true, because it does not make specific testable predictions. It may have many kernels of truth in it, but we can’t tell. The theory would have to be restated.

This is the essential “line of demarcation,” as Popper called it, between science and pseudoscience.
