Category: Technology

Why the Printing Press and the Telegraph Were as Impactful as the Internet

What makes a communications technology revolutionary? One answer is to ask whether it fundamentally changes the way society is organized. This can be a very hard question to answer, because truly fundamental changes alter society in such a way that it becomes difficult to speak of past society without imposing our present understanding on it.

In her seminal work, The Printing Press as an Agent of Change, Elizabeth Eisenstein argues just that:

When ideas are detached from the media used to transmit them, they are also cut off from the historical circumstances that shape them, and it becomes difficult to perceive the changing context within which they must be viewed.

Today we rightly think of the internet and the mobile phone as revolutionary, but in their day the printing press and the telegraph had just as heavy an impact on the development of society.

Printing Press

Thinking of the time before the telegraph, when communications had to be hand-delivered, feels quaint. Trying to conceive of the world before the uniformity of communication brought about by the printing press is almost unimaginable.

Eisenstein argues that the printing press “is of special historical significance because it produced fundamental alterations in prevailing patterns of continuity and change.”

Before the printing press there were no books, not in the sense that we understand them. There were manuscripts copied by scribes, full of inconsistencies, embellishments, and modifications that suited whoever the scribe was working for. The printing press halted the evolution of symbols: for the first time, maps and numbers were fixed.

Furthermore, because pre-press scholars had to go to manuscripts, Eisenstein says we should “recognize the novelty of being able to assemble diverse records and reference guides, and of being able to study them without having to transcribe them at the same time” that was afforded by the printing press.

This led to new ways of comparing, and thus developing, knowledge by reducing the friction of getting to the old knowledge:

More abundantly stocked bookshelves obviously increased opportunities to consult and compare different texts. Merely by making more scrambled data available, by increasing the output of Aristotelian, Alexandrian and Arabic texts, printers encouraged efforts to unscramble these data.

Eisenstein argues that many of the great thinkers of the 16th century, such as Descartes and Montaigne, would have been unlikely to produce what they did without the changes wrought by the printing press. She says of Montaigne that “he could see more books by spending a few months in his Bordeaux tower-study than earlier scholars had seen after a lifetime of travel.”

The printing press increased the speed of communication and the spread of knowledge: far fewer man-hours were needed to turn out 50 printed books than 50 scribed manuscripts.

Telegraph

Henry Ford famously said of life before the car, “If I had asked people what they wanted, they would have said faster horses.” This sentiment could equally be applied to the telegraph, a communications technology that came about 400 years after the printing press.

Before the telegraph, the speed of communication was dependent on the speed of the physical object doing the transporting – the horse, or the ship. Societies were thus organized around the speed of communication available to them, from the way business was conducted and wars were fought to the way interpersonal communication was conducted.

Let's consider, for example, the way the telegraph changed the conduct of war.

Prior to the telegraph, countries shared detailed knowledge of their plans with their citizens in order to boost morale, knowing that those plans could reach the enemy no faster than their ships did. Post-telegraph, communications could travel far faster than soldiers: this was something to consider!

In addition, as Tom Standage considers in his book The Victorian Internet, the telegraph altered the command structure in battle. “For who was better placed to make strategic decisions: the commander at the scene or his distant superiors?”

The telegraph brought changes similar in many ways to the printing press: It allowed for an accumulation of knowledge and increased the availability of this knowledge; more people had access to more information.

And society was forever altered as the new speed of communication made it effectively impossible not to use the telegraph, just as it is nearly impossible not to use a mobile phone or the Internet today.

Once the telegraph was widespread, there was no longer a way to do business without using it. Up-to-the-minute stock quotes changed the way businesses evaluated their holdings. Being able to communicate with offices across the country created centralization and middle management. These elements became so much a part of doing business that it became nonsensical to talk about developing any aspect of business independent of the effect of electronic communication.

A Final Thought on Technology Uptake

One can argue that the more revolutionary an invention is, the slower the initial uptake into society, as society must do a fair amount of reorganizing to integrate the invention.

Such was the case for both the telegraph and printing press, as they allowed for things that were never before possible. Not being possible, they were rarely considered. Being rarely considered, there wasn't a large populace pining for them to happen. So when new options presented themselves, no one was rushing to embrace them, because there was no general appreciation of their potential. This is, of course, a fundamental aspect of revolutionary technology. Everyone has to figure out how (and why) to use it.

In The Victorian Internet, Standage says of William Cooke and Samuel Morse, the British and American inventors, respectively, of the telegraph:

[They] had done the impossible and constructed working telegraphs. Surely the world would fall at their feet. Building the prototypes, however, turned out to be the easy part. Convincing people of their significance was far more of a challenge.

It took years for people to see the advantages of the telegraph. Even after the first lines were built, and the accuracy and speed of the communications they could carry were verified, Morse realized that “everybody still thought of the telegraph as a novelty, as nothing more than an amusing subject for a newspaper article, rather than the revolutionary new form of communication that he envisaged.”

The new technology might confer great benefits, but it took a lot of work to build the infrastructure, both physical and mental, needed to take advantage of them.

The printing press faced similar challenges. In fact, books printed from Gutenberg until 1501 have their own term, incunabula, which reflects the transition from manuscript to book. Eisenstein writes: “Printers and scribes copied each other’s products for several decades and duplicated the same texts for the same markets during the age of incunabula.”

The momentum took a while to build. When it did, the changes were remarkable.

But looking at these two technologies serves as a reminder of what revolutionary means in this context: their use by, and value to, society cannot be anticipated. That is why such great and unpredictable shifts occur when they are adopted and integrated into everyday life.

Don’t Let Your (Technology) Tools Use You

“In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

— Herbert Simon

***

A shovel is just a shovel. You shovel things with it. You can break up weeds and dirt. (You can also whack someone with it.) I’m not sure I’ve seen a shovel used for much else.

Modern technological tools aren’t really like that.

What is an iPhone, functionally? Sure, it’s got the phone thing down, but it’s also a GPS, a note-taker, an emailer, a text messager, a newspaper, a video-game device, a taxi-calling service, a flashlight, a web browser, a library, a book…you get the point. It does a lot.

This all seems pretty wonderful. To perform those functions 20 years ago, you needed a map and a sense of direction, a notepad, a personal computer, a cell phone, an actual newspaper, a PlayStation, a phone and the willingness to talk to a person, an actual flashlight, an actual library, an actual book…you get the point. As Marc Andreessen puts it, software is eating the world. One simple (looking) device and a host of software can perform the functions served by a bunch of big clunky tools of the past.

So far, we’ve been convinced that use of the New Tools is mostly “upside,” that our embrace of them should be wholehearted. Much of this is for good reason. Do you remember how awful using a map was? Yuck.

The problem is that our New Tools are winning the battle of attention. We’ve gotten to the point where the tools use us as much as we use them. This new reality means we need to re-examine our relationship with our New Tools.


Down the Rabbit Hole

Here’s a typical situation.

You’re on your computer finishing the client presentation you have to give in two days. Your phone lights up and makes a chiming noise: you’ve got a text message. “Hey, have you seen that new Dracula movie?” asks your friend. It only takes a few messages before the two of you begin to disagree on whether Transylvania is actually a real place. Off to Google!

After a few quick clicks, you get to Wikipedia, which tells you that yes, Transylvania is a region of Romania which the author Bram Stoker used as Count Dracula’s birthplace. Reading the Wikipedia entry costs you about 20 minutes. As you read, you find out that Bram Stoker was actually Irish. Irish! An Irish guy wrote Dracula? How did I not know this? Curiosity stoked, you look up Irish novelists, the history of Gothic literature, the original vampire stories…down and down the rabbit hole you go.

Eventually your thirst for trivia is exhausted, and you close the Wikipedia tab to text your friend how wrong they are in regards to Transylvania. You click the Home button to leave your text conversation, which lets you see the Twitter icon. I wonder how many people retweeted my awesome joke about ventriloquism? You pull it up and start “The Scroll.” Hah! Greg is hilarious. Are you serious, Bill Gates? Damn — I wish I read as much as Shane Parrish. You go and go. Your buddy tweets a link to an interesting-looking article about millennials — “10 Ways Millennials are Ruining the Workplace”. God, they are so self-absorbed. Click.

You decide to check Facebook and see if that girl from the cocktail party on Friday commented on your status. She didn’t, but Wow, Susanne went to Hawaii? You look at 35 pictures Susanne posted in her first three hours in Hawaii. Wait, who’s that guy she’s with? You click his name and go to his Facebook page. On down the rabbit hole you fall…

Now it’s been two hours since you left your presentation to respond to the text message, and you find yourself physically tired from the rapid scanning and clicking, scanning and clicking, scanning and clicking of the past two hours. Sad, you go get a coffee, go for a short walk, and decide: Now, I will focus. No more distraction.

Ten minutes in, your phone buzzes. That girl from the cocktail party commented on your status…

Attention for Sale

We’ve all been there. When we come up for air, it can feel like the aftermath of being swept up in a mob: What did I just do?

The tools we’re now addicted to have been engineered for a simple purpose: To keep us addicted to them. The service they provide is secondary to the addiction. Yes, Facebook is a networking tool. Yes, Twitter is a communication tool. Yes, Instagram is an excellent food-photography tool. But unless they get us hooked and keep us hooked, their business models are broken.

Don’t believe us?

Take stock of the metrics by which people value or assess these companies. Clicks. Views. Engagement. Return visits. Length of stay. The primary source of value for these products is how much you use them and what they can sell to you while you’re there. Increasing their value is a simple (but not easy) proposition: Either get usage up or figure out more effective ways to sell to you while you’re there.

As Herbert Simon might have predicted, our attention is for sale, and we’re ceding it a little at a time as the tools get better and better at fulfilling their function. There’s a version of natural selection going on, in which the only consumer technology products that survive are the enormously addictive ones. The trait that produces maximum fitness is addictiveness itself. If you’re not using a tool constantly, it has no value to advertisers or data sellers, and thus its maker cannot raise the capital to survive. And even if it’s an app or tool that you buy, one that you pay money for upfront, it must hook you on Version 1 if its maker expects you to buy Versions 2, 3, and 4.

This ecosystem ensures that each generation of consumer tech products – hardware or software – gets better and better at keeping you hooked. These services have learned, through a process of evolution, to drown users in positive feedback and create intense habitual usage. They must – because any other outcome is death. Facebook doesn’t want you to go on once a month to catch up on your correspondence. You must be engaged. The service does not care whether it’s unnecessarily eating into your life.

Snap Back to Reality

It’s up to us, then, to take our lives back. We must understand that the New Tools have a tremendous downside, the loss of focused attention, and that we’re giving it up willingly in a sort of Faustian bargain for entertainment, connectedness, and novelty.

Psychologist Mihaly Csikszentmihalyi pioneered the concept of Flow, where we enter an enjoyable state of rapt attention to our work and produce a high level of creative output. It’s a wonderful feeling, but the New Tools have learned to provide the same sensation without the actual results. We don’t end up with a book, or a presentation, or a speech, or a quilt, or a hand-crafted table. We end up two hours later in the day.

***

The first step towards a solution must be to understand the reality of this new ecosystem.

It follows Garrett Hardin’s “First Law of Ecology”: You can never merely do one thing. The New Tools are not like the Old Tools, where you pick up the shovel, do your shoveling, and then put the shovel back in the garage. The iPhone is not designed that way. It’s designed to keep you going, as are most of the other New Tools. You probably won’t send one text. You probably won’t watch one video. You probably won’t read one article. You’re not supposed to!

The rational response to this new reality depends a lot on who you are and what you need the tools for. Some people can get rid of 50% or more of their New Tools very easily. You don’t have to toss out your iPhone for a StarTAC, but because software is doing the real work, you can purposefully reduce the capability of the hardware by reducing your exposure to certain software.

As you shed certain tools, expect a homeostatic response from your network. Don’t be mistaken: If you’re a Snapchatter or an Instagrammer or simply an avid texter, getting rid of those services will give rise to consternation. They are, after all, networking tools. Your network will notice. You’ll need a bit of courage to face your friends and tell them, with a straight face, that you won’t be Instagramming anymore because you’re afraid of falling down the rabbit hole. But if you’ve got the courage, you’ll probably find that after a week or two of adjustment your life will go on just fine.

The second and more mild type of response would be to appreciate the chain-smoking nature of these products and to use them more judiciously. Understand that every time you look at your iPhone or connect to the Internet, the rabbit hole is there waiting for you to tumble down. If you can grasp that, you’ll realize that you need to be suspicious of the “quick check.” Either learn to batch Internet and phone time into concentrated blocks or slowly re-learn how to ignore the desire to follow up on every little impulse that comes to mind. (Or preferably, do both.)

A big part of this is turning off any sort of “push” notification, which must be the most effective attention-diverter ever invented by humanity. A push notification is anything that draws your attention to the tool without your conscious input. It’s when your phone buzzes for a text message, or an image comes on the screen when you get an email, or your phone tells you that you’ve got a Facebook comment. Anything that desperately induces you to engage. You need to turn them off. (Yes, including text message notifications – your friends will get used to waiting).

E-mail can be the worst offender; it’s the earliest and still one of the most effective digital rabbit holes. To push back, close your email client when you’re not using it. That way, you’ll have to open it to send or read an email. Then go ahead and change the settings on your phone’s email client so you have to “fetch” emails yourself, rather than having them pushed at you. Turn off anything that tells you an email has arrived.

Once you stop being notified by your tools, you can start to engage with them on your own terms and focus on your real work for a change; focus on the stuff actually producing some value in your life and in the world. When the big stuff is done, you can give yourself a half-hour or an hour to check your Facebook page, check your Instagram page, follow up on Wikipedia, check your emails, and respond to your text messages. This isn’t as good a solution as deleting many of the apps altogether, but it does allow you to engage with these tools on your own terms.

However you choose to address the world of New Tools, you’re way ahead if you simply recognize their power over your attention. Getting lost in hyperlinks and Facebook feeds doesn’t mean you’re weak, it just means the tools you’re using are designed, at their core, to help you get lost. Instead of allowing yourself to go to work for them, resolve to make them work for you.

How Google’s Self-Driving Car Sees The Road

The least reliable part of the car is the driver. Chris Urmson, who heads Google's driverless car program, gave a talk at TED in March about the company's self-driving cars. This could be the most fascinating thing you see today.

Marshall McLuhan: The Here And Now


“In a culture like ours, long accustomed to splitting and dividing all things as a means of control, it is sometimes a bit of a shock to be reminded that, in operational and practical fact, the medium is the message.”

***

In this passage from Understanding Media, Marshall McLuhan reminds us of the difficulty that frictionless connection brings with it and of how advances in technological media have worked not to preserve but rather to ‘abolish history.’

Perfection of the means of communication has meant instantaneity. Such an instantaneous network of communication is the body-mind unity of each of us. When a city or a society achieves a diversity and equilibrium of awareness analogous to the body-mind network, it has what we tend to regard as a high culture.

But the instantaneity of communication makes free speech and thought difficult if not impossible, and for many reasons. Radio extends the range of the casual speaking voice, but it forbids that many should speak. And when what is said has such range of control, it is forbidden to speak any but the most acceptable words and notions. Power and control are in all cases paid for by loss of freedom and flexibility.

Today the entire globe has a unity in point of mutual interawareness, which exceeds in rapidity the former flow of information in a small city—say Elizabethan London with its eighty or ninety thousand inhabitants. What happens to existing societies when they are brought into such intimate contact by press, picture stories, newsreels, and jet propulsion? What happens when the Neolithic Eskimo is compelled to share the time and space arrangements of technological man? What happens in our minds as we become familiar with the diversity of human cultures which have come into existence under innumerable circumstances, historical and geographical? Is what happens comparable to that social revolution which we call the American melting pot?

When the telegraph made possible a daily cross section of the globe transferred to the page of newsprint, we already had our mental melting pot for cosmic man—the world citizen. The mere format of the page of newsprint was more revolutionary in its intellectual and emotional consequences than anything that could be said about any part of the globe.

When we juxtapose news items from Tokyo, London, New York, Chile, Africa, and New Zealand, we are not just manipulating space. The events so brought together belong to cultures widely separated in time. The modern world abridges all historical times as readily as it reduces space. Everywhere and every age have become here and now. History has been abolished by our new media.

The Glass Cage: Automation and Us


The impact of technology is all around us. Maybe we're at another Gutenberg moment and maybe we're not.

Marshall McLuhan said it best.

When any new form comes into the foreground of things, we naturally look at it through the old stereos. We can’t help that. This is normal, and we’re still trying to see how will our previous forms of political and educational patterns persist under television. We’re just trying to fit the old things into the new form, instead of asking what is the new form going to do to all the assumptions we had before.

He also wrote that “a new medium is never an addition to an old one, nor does it leave the old one in peace.”

In The Glass Cage: Automation and Us, Nick Carr, one of my favorite writers, enters the debate about the impact automation has on us, “examining the personal as well as the economic consequences of our growing dependence on computers.”

We know that the nature of jobs is going to change in the future thanks to technology. Tyler Cowen argues “If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch.”

Carr's book shows another side of the argument: the broader human consequences of living in a world where computers and software do the things we used to do.

Computer automation makes our lives easier, our chores less burdensome. We're often able to accomplish more in less time—or to do things we simply couldn't do before. But automation also has deeper, hidden effects. As aviators have learned, not all of them are beneficial. Automation can take a toll on our work, our talents, and our lives. It can narrow our perspectives and limit our choices. It can open us to surveillance and manipulation. As computers become our constant companions, our familiar, obliging helpmates, it seems wise to take a closer look at exactly how they're changing what we do and who we are.

On the autonomous automobile, for example, Carr argues that while such cars have a ways to go before they start chauffeuring us around, there are broader questions that need to be answered first.

Although Google has said it expects commercial versions of its car to be on sale by the end of the decade, that's probably wishful thinking. The vehicle's sensor systems remain prohibitively expensive, with the roof-mounted laser apparatus alone going for eighty thousand dollars. Many technical challenges remain to be met, such as navigating snowy or leaf-covered roads, dealing with unexpected detours, and interpreting the hand signals of traffic cops and road workers. Even the most powerful computers still have a hard time distinguishing a bit of harmless road debris (a flattened cardboard box, say) from a dangerous obstacle (a nail-studded chunk of plywood). Most daunting of all are the many legal, cultural, and ethical hurdles a driverless car faces. Where, for instance, will culpability and liability reside should a computer-driven automobile cause an accident that kills or injures someone? With the car's owner? With the manufacturer that installed the self-driving system? With the programmers who wrote the software? Until such thorny questions get sorted out, fully automated cars are unlikely to grace dealer showrooms.

Tacit and Explicit Knowledge

Self-driving cars are just one example of a technology that forces us “to change our thinking about what computers and robots can and can't do.”

Up until that fateful October day, it was taken for granted that many important skills lay beyond the reach of automation. Computers could do a lot of things, but they couldn't do everything. In an influential 2004 book, The New Division of Labor: How Computers Are Creating the Next Job Market, economists Frank Levy and Richard Murnane argued, convincingly, that there were practical limits to the ability of software programmers to replicate human talents, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed specifically to the example of driving a car on the open road, a talent that requires the instantaneous interpretation of a welter of visual signals and an ability to adapt seamlessly to shifting and often unanticipated situations. We hardly know how we pull off such a feat ourselves, so the idea that programmers could reduce all of driving's intricacies, intangibilities, and contingencies to a set of instructions, to lines of software code, seemed ludicrous. “Executing a left turn across oncoming traffic,” Levy and Murnane wrote, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” It seemed a sure bet, to them and to pretty much everyone else, that steering wheels would remain firmly in the grip of human hands.

In assessing computers' capabilities, economists and psychologists have long drawn on a basic distinction between two kinds of knowledge: tacit and explicit. Tacit knowledge, which is also sometimes called procedural knowledge, refers to all the stuff we do without actively thinking about it: riding a bike, snagging a fly ball, reading a book, driving a car. These aren't innate skills—we have to learn them, and some people are better at them than others—but they can't be expressed as a simple recipe, a sequence of precisely defined steps. When you make a turn through a busy intersection in your car, neurological studies have shown, many areas of your brain are hard at work, processing sensory stimuli, making estimates of time and distance, and coordinating your arms and legs. But if someone asked you to document everything involved in making that turn, you wouldn't be able to, at least not without resorting to generalizations and abstractions. The ability resides deep in your nervous system outside the ambit of your conscious mind. The mental processing goes on without your awareness.

Much of our ability to size up situations and make quick judgments about them stems from the fuzzy realm of tacit knowledge. Most of our creative and artistic skills reside there too. Explicit knowledge, which is also known as declarative knowledge, is the stuff you can actually write down: how to change a flat tire, how to fold an origami crane, how to solve a quadratic equation. These are processes that can be broken down into well-defined steps. One person can explain them to another person through written or oral instructions: do this, then this, then this.

Because a software program is essentially a set of precise, written instructions—do this, then this, then this—we've assumed that while computers can replicate skills that depend on explicit knowledge, they're not so good when it comes to skills that flow from tacit knowledge. How do you translate the ineffable into lines of code, into the rigid, step-by-step instructions of an algorithm? The boundary between the explicit and the tacit has always been a rough one—a lot of our talents straddle the line—but it seemed to offer a good way to define the limits of automation and, in turn, to mark out the exclusive precincts of the human. The sophisticated jobs Levy and Murnane identified as lying beyond the reach of computers—in addition to driving, they pointed to teaching and medical diagnosis—were a mix of the mental and the manual, but they all drew on tacit knowledge.

Google's car resets the boundary between human and computer, and it does so more dramatically, more decisively, than have earlier breakthroughs in programming. It tells us that our idea of the limits of automation has always been something of a fiction. We're not as special as we think we are. While the distinction between tacit and explicit knowledge remains a useful one in the realm of human psychology, it has lost much of its relevance to discussions of automation.
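To make the tacit/explicit contrast concrete, here is a minimal sketch (my own illustration, not Carr's): explicit knowledge, like the quadratic equation mentioned above, reduces to a short list of unambiguous steps that a program can follow.

import math

def solve_quadratic(a, b, c):
    """Explicit knowledge as code: solve ax^2 + bx + c = 0 step by step."""
    if a == 0:
        raise ValueError("not a quadratic equation")
    discriminant = b * b - 4 * a * c   # Step 1: compute the discriminant.
    if discriminant < 0:
        return ()                      # No real roots.
    root = math.sqrt(discriminant)     # Step 2: take its square root.
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))  # Step 3: apply the formula.

print(solve_quadratic(1, -3, 2))  # (2.0, 1.0): the roots of x^2 - 3x + 2

No comparable recipe exists for Levy and Murnane's left turn across oncoming traffic; that skill sits on the tacit side of the line, which is what made Google's car such a surprise.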

Tomorrowland

That doesn't mean that computers now have tacit knowledge, or that they've started to think the way we think, or that they'll soon be able to do everything people can do. They don't, they haven't, and they won't. Artificial intelligence is not human intelligence. People are mindful; computers are mindless. But when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate our ends without replicating our means. When a driverless car makes a left turn in traffic, it's not tapping into a well of intuition and skill; it's following a program. But while the strategies are different, the outcomes, for practical purposes, are the same. The superhuman speed with which computers can follow instructions, calculate probabilities, and receive and send data means that they can use explicit knowledge to perform many of the complicated tasks that we do with tacit knowledge. In some cases, the unique strengths of computers allow them to perform what we consider to be tacit skills better than we can perform them ourselves. In a world of computer-controlled cars, you wouldn't need traffic lights or stop signs. Through the continuous, high-speed exchange of data, vehicles would seamlessly coordinate their passage through even the busiest of intersections—just as computers today regulate the flow of inconceivable numbers of data packets along the highways and byways of the internet. What's ineffable in our own minds becomes altogether effable in the circuits of a microchip.

Many of the cognitive talents we've considered uniquely human, it turns out, are anything but. Once computers get quick enough, they can begin to replicate our ability to spot patterns, make judgments, and learn from experience.

It's not only vocations that are increasingly being computerized; avocations are too.

Thanks to the proliferation of smartphones, tablets, and other small, affordable, and even wearable computers, we now depend on software to carry out many of our daily chores and pastimes. We launch apps to aid us in shopping, cooking, exercising, even finding a mate and raising a child. We follow turn-by-turn GPS instructions to get from one place to the next. We use social networks to maintain friendships and express our feelings. We seek advice from recommendation engines on what to watch, read, and listen to. We look to Google, or to Apple's Siri, to answer our questions and solve our problems. The computer is becoming our all-purpose tool for navigating, manipulating, and understanding the world, in both its physical and its social manifestations. Just think what happens these days when people misplace their smartphones or lose their connections to the net. Without their digital assistants, they feel helpless.

As Katherine Hayles, a literature professor at Duke University, observed in her 2012 book How We Think, “When my computer goes down or my Internet connection fails, I feel lost, disoriented, unable to work—in fact, I feel as if my hands have been amputated.”

While our dependency on computers is “disconcerting at times,” we welcome it.

We're eager to celebrate and show off our whizzy new gadgets and apps—and not only because they're so useful and so stylish. There's something magical about computer automation. To watch an iPhone identify an obscure song playing over the sound system in a bar is to experience something that would have been inconceivable to any previous generation.

Miswanting

The trouble with automation is “that it often gives us what we don't need at the cost of what we do.”

To understand why that's so, and why we're eager to accept the bargain, we need to take a look at how certain cognitive biases—flaws in the way we think—can distort our perceptions. When it comes to assessing the value of labor and leisure, the mind's eye can't see straight.

Mihaly Csikszentmihalyi, a psychology professor and author of the popular 1990 book Flow, has described a phenomenon that he calls “the paradox of work.” He first observed it in a study conducted in the 1980s with his University of Chicago colleague Judith LeFevre. They recruited a hundred workers, blue-collar and white-collar, skilled and unskilled, from five businesses around Chicago. They gave each an electronic pager (this was when cell phones were still luxury goods) that they had programmed to beep at seven random moments a day over the course of a week. At each beep, the subjects would fill out a short questionnaire. They'd describe the activity they were engaged in at that moment, the challenges they were facing, the skills they were deploying, and the psychological state they were in, as indicated by their sense of motivation, satisfaction, engagement, creativity, and so forth. The intent of this “experience sampling,” as Csikszentmihalyi termed the technique, was to see how people spend their time, on the job and off, and how their activities influence their “quality of experience.”

The results were surprising. People were happier, felt more fulfilled by what they were doing, while they were at work than during their leisure hours. In their free time, they tended to feel bored and anxious. And yet they didn't like to be at work. When they were on the job, they expressed a strong desire to be off the job, and when they were off the job, the last thing they wanted was to go back to work. “We have,” reported Csikszentmihalyi and LeFevre, “the paradoxical situation of people having many more positive feelings at work than in leisure, yet saying that they wish to be doing something else when they are at work, not when they are in leisure.” We're terrible, the experiment revealed, at anticipating which activities will satisfy us and which will leave us discontented. Even when we're in the midst of doing something, we don't seem able to judge its psychic consequences accurately.

Those are symptoms of a more general affliction, on which psychologists have bestowed the poetic name miswanting. We're inclined to desire things we don't like and to like things we don't desire. “When the things we want to happen do not improve our happiness, and when the things we want not to happen do,” the cognitive psychologists Daniel Gilbert and Timothy Wilson have observed, “it seems fair to say we have wanted badly.” And as slews of gloomy studies show, we're forever wanting badly. There's also a social angle to our tendency to misjudge work and leisure. As Csikszentmihalyi and LeFevre discovered in their experiments, and as most of us know from our own experience, people allow themselves to be guided by social conventions—in this case, the deep-seated idea that being “at leisure” is more desirable, and carries more status, than being “at work”—rather than by their true feelings. “Needless to say,” the researchers concluded, “such a blindness to the real state of affairs is likely to have unfortunate consequences for both individual wellbeing and the health of society.” As people act on their skewed perceptions, they will “try to do more of those activities that provide the least positive experiences and avoid the activities that are the source of their most positive and intense feelings.” That's hardly a recipe for the good life.

It's not that the work we do for pay is intrinsically superior to the activities we engage in for diversion or entertainment. Far from it. Plenty of jobs are dull and even demeaning, and plenty of hobbies and pastimes are stimulating and fulfilling. But a job imposes a structure on our time that we lose when we're left to our own devices. At work, we're pushed to engage in the kinds of activities that human beings find most satisfying. We're happiest when we're absorbed in a difficult task, a task that has clear goals and that challenges us not only to exercise our talents but to stretch them. We become so immersed in the flow of our work, to use Csikszentmihalyi's term, that we tune out distractions and transcend the anxieties and worries that plague our everyday lives. Our usually wayward attention becomes fixed on what we're doing. “Every action, movement, and thought follows inevitably from the previous one,” explains Csikszentmihalyi. “Your whole being is involved, and you're using your skills to the utmost.” Such states of deep absorption can be produced by all manner of effort, from laying tile to singing in a choir to racing a dirt bike. You don't have to be earning a wage to enjoy the transports of flow.

More often than not, though, our discipline flags and our mind wanders when we're not on the job. We may yearn for the workday to be over so we can start spending our pay and having some fun, but most of us fritter away our leisure hours. We shun hard work and only rarely engage in challenging hobbies. Instead, we watch TV or go to the mall or log on to Facebook. We get lazy. And then we get bored and fretful. Disengaged from any outward focus, our attention turns inward, and we end up locked in what Emerson called the jail of self-consciousness. Jobs, even crummy ones, are “actually easier to enjoy than free time,” says Csikszentmihalyi, because they have the “built-in” goals and challenges that “encourage one to become involved in one's work, to concentrate and lose oneself in it.” But that's not what our deceiving minds want us to believe. Given the opportunity, we'll eagerly relieve ourselves of the rigors of labor. We'll sentence ourselves to idleness.

Automation offers us innumerable promises. Our lives, we think, will be better if more things are automated. Yet as Carr explores in The Glass Cage, automation extracts a cost, removing “complexity from jobs, diminishing the challenge they present and hence the level of engagement they promote.” This doesn't mean that Carr is anti-automation. He's not. He just wants us to see another side.

“All too often,” Carr warns, “automation frees us from that which makes us feel free.”

Claude Shannon: The Man Who Turned Paper Into Pixels

"The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning."— Claude Shannon (1948)
“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning.”— Claude Shannon (1948)

Claude Shannon is the most important man you've probably never heard of. If Alan Turing is to be considered the father of modern computing, then the American mathematician Claude Shannon is the architect of the Information Age.

The video, created by the British filmmaker Adam Westbrook, echoes Nassim Taleb's point that boosting the signal does not mean you remove the noise; in fact, just the opposite: you amplify it too.

Any time you try to send a message from one place to another, something always gets in the way. The original signal is always distorted. Wherever there is signal there is also noise.

So what do you do? Well, the best anyone could do back then was to boost the signal. But then all you do is boost the noise.

Thing is we were thinking about information all wrong. We were obsessed with what a message meant.

A Renoir and a receipt? They’re different, right? Was there a way to think of them in the same way? Like so many breakthroughs the answer came from an unexpected place. A brilliant mathematician with a flair for blackjack.
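A quick back-of-the-envelope illustration of the amplifier problem (mine, not Westbrook's): Shannon's channel-capacity formula, C = B * log2(1 + S/N), quantifies how much information a noisy channel can carry, and it shows why boosting the signal gets you nowhere, since an amplifier raises the noise by the same factor and leaves the ratio, and therefore the capacity, unchanged.

from math import log2

def capacity_bits_per_second(bandwidth_hz, signal_power, noise_power):
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * log2(1 + signal_power / noise_power)

# A 3 kHz telephone-grade line with a signal-to-noise ratio of 1000 (30 dB).
print(capacity_bits_per_second(3000, 1000, 1))    # roughly 29,900 bits per second
# Amplify signal and noise tenfold: the ratio, and the capacity, are unchanged.
print(capacity_bits_per_second(3000, 10000, 10))  # same result

Shannon's answer was not a louder signal but a smarter encoding of the message, which is where the bit comes in.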

***

The transistor was invented in 1948 at Bell Telephone Laboratories. This remarkable achievement, however, “was only the second most significant development of that year,” writes James Gleick in his fascinating book The Information: A History, a Theory, a Flood. The most important development of 1948, and the one that still underpins modern technology, is the bit.

An invention even more profound and more fundamental came in a monograph spread across seventy-nine pages of The Bell System Technical Journal in July and October. No one bothered with a press release. It carried a title both simple and grand, “A Mathematical Theory of Communication,” and the message was hard to summarize. But it was a fulcrum around which the world began to turn. Like the transistor, this development also involved a neologism: the word bit, chosen in this case not by committee but by the lone author, a thirty-two-year-old named Claude Shannon. The bit now joined the inch, the pound, the quart, and the minute as a determinate quantity—a fundamental unit of measure.

But measuring what? “A unit for measuring information,” Shannon wrote, as though there were such a thing, measurable and quantifiable, as information.

[…]

Shannon’s theory made a bridge between information and uncertainty; between information and entropy; and between information and chaos. It led to compact discs and fax machines, computers and cyberspace, Moore’s law and all the world’s Silicon Alleys. Information processing was born, along with information storage and information retrieval. People began to name a successor to the Iron Age and the Steam Age.
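For the curious, a minimal sketch of what that unit actually measures (my gloss, not Gleick's): Shannon defined the information produced by a source as H = -sum(p * log2(p)) bits, so a fair coin flip carries exactly one bit and a heavily biased one carries much less.

from math import log2

def entropy_bits(probabilities):
    """Shannon entropy: the average information per symbol, in bits."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))   # fair coin: 1.0 bit per flip
print(entropy_bits([0.9, 0.1]))   # biased coin: about 0.47 bits per flip
print(entropy_bits([0.25] * 4))   # one of four equally likely symbols: 2.0 bits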

Gleick also recounts the relationship between Turing and Shannon:

In 1943 the English mathematician and code breaker Alan Turing visited Bell Labs on a cryptographic mission and met Shannon sometimes over lunch, where they traded speculation on the future of artificial thinking machines. (“Shannon wants to feed not just data to a Brain, but cultural things!” Turing exclaimed. “He wants to play music to it!”)

Commenting on the vitality of information, Gleick writes:

(Information) pervades the sciences from top to bottom, transforming every branch of knowledge. Information theory began as a bridge from mathematics to electrical engineering and from there to computing. … Now even biology has become an information science, a subject of messages, instructions, and code. Genes encapsulate information and enable procedures for reading it in and writing it out. Life spreads by networking. The body itself is an information processor. Memory resides not just in brains but in every cell. No wonder genetics bloomed along with information theory. DNA is the quintessential information molecule, the most advanced message processor at the cellular level— an alphabet and a code, 6 billion bits to form a human being. “What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life,’” declares the evolutionary theorist Richard Dawkins. “It is information, words, instructions.… If you want to understand life, don’t think about vibrant, throbbing gels and oozes, think about information technology.” The cells of an organism are nodes in a richly interwoven communications network, transmitting and receiving, coding and decoding. Evolution itself embodies an ongoing exchange of information between organism and environment.

The bit is the very core of the information age.

The bit is a fundamental particle of a different sort: not just tiny but abstract— a binary digit, a flip-flop, a yes-or-no. It is insubstantial, yet as scientists finally come to understand information, they wonder whether it may be primary: more fundamental than matter itself. They suggest that the bit is the irreducible kernel and that information forms the very core of existence.

In the words of John Archibald Wheeler, the last surviving collaborator of both Einstein and Bohr, information gives rise to “every it— every particle, every field of force, even the spacetime continuum itself.”

This is another way of fathoming the paradox of the observer: that the outcome of an experiment is affected, or even determined, when it is observed. Not only is the observer observing, she is asking questions and making statements that must ultimately be expressed in discrete bits. “What we call reality,” Wheeler wrote coyly, “arises in the last analysis from the posing of yes-no questions.” He added: “All things physical are information-theoretic in origin, and this is a participatory universe.” The whole universe is thus seen as a computer —a cosmic information-processing machine.

The greatest gift of Prometheus to humanity was not fire after all: “Numbers, too, chiefest of sciences, I invented for them, and the combining of letters, creative mother of the Muses’ arts, with which to hold all things in memory.”

Information technologies are relative to the times in which they were created, yet absolute in their significance. Gleick writes:

The alphabet was a founding technology of information. The telephone, the fax machine, the calculator, and, ultimately, the computer are only the latest innovations devised for saving, manipulating, and communicating knowledge. Our culture has absorbed a working vocabulary for these useful inventions. We speak of compressing data, aware that this is quite different from compressing a gas. We know about streaming information, parsing it, sorting it, matching it, and filtering it. Our furniture includes iPods and plasma displays, our skills include texting and Googling, we are endowed, we are expert, so we see information in the foreground. But it has always been there. It pervaded our ancestors’ world, too, taking forms from solid to ethereal, granite gravestones and the whispers of courtiers. The punched card, the cash register, the nineteenth-century Difference Engine, the wires of telegraphy all played their parts in weaving the spiderweb of information to which we cling. Each new information technology, in its own time, set off blooms in storage and transmission. From the printing press came new species of information organizers: dictionaries, cyclopaedias, almanacs— compendiums of words, classifiers of facts, trees of knowledge. Hardly any information technology goes obsolete. Each new one throws its predecessors into relief. Thus Thomas Hobbes, in the seventeenth century, resisted his era’s new-media hype: “The invention of printing, though ingenious, compared with the invention of letters is no great matter.” Up to a point, he was right. Every new medium transforms the nature of human thought. In the long run, history is the story of information becoming aware of itself.

The Information: A History, a Theory, a Flood is a fascinating read.

