Tag: David Gelernter

What a Rembrandt Can Teach You About Software and Programmers


A thoughtful passage by David Gelernter in Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean on how looking at a Rembrandt can teach us to better understand not only software but the craft behind it.

Suppose you visit an art museum and walk up to a painting. I say “Ah ha! I see you're admiring some powdered pigments, mixed with oil and smeared onto what appears to be a canvas panel.” You say “No, you moron. I'm admiring a Rembrandt.” Good. You're three-quarters of the way towards a deep understanding of software.

How did this happen?

Well clearly we may, if we choose, regard a painting as a coming-together of two separate elements. The paints and canvas—the physical stuff; and the form-giving mind-plan. I'll call these two elements the body and the disembodied painting respectively.

Both are necessary to the finished product. But they are unequally decisive to its character. If Rembrandt had (while trying to shake out a tablecloth) accidentally chucked his favorite paint set into a canal on the very morning he was destined to make our painting; if he'd accordingly been forced to go down to the basement and hunt up another set—the finished product would be the same. But if he'd altered his mind-plan—the disembodied painting—before setting to work, our finished painting would obviously have been different.

In fact, the disembodied painting is a painting in and of itself—albeit a painting of a special kind, namely an unbodied one. Rembrandt is perfectly entitled to tell his wife “I have a painting in mind” before setting to work. But plainly the mere body is no painting, not in and of itself. If the paints on Rembrandt's table went around telling people “Hey look at us, we're a painting,” no-one would believe them.

This distinction is the key to software and its special character. A running program is a machine of a certain kind, an information machine. The program text—the words and symbols that the programmer composes, that “tell the computer what to do”—is a disembodied information machine. Your computer provides a body.

Unlike Rembrandt's mind plan, a disembodied information machine must be written down precisely and in full. It's a bit like the engineering drawings for a new toaster in this regard; the machine designer leaves nothing to chance. Unlike Rembrandt's mind plan or the toaster drawings, on the other hand, a disembodied information machine can be “embodied” automatically. No skill, judgement or human intervention is required. Merely hand your text to a computer (it's probably stored inside the computer already); the computer itself performs the “embodying.”
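To make the distinction concrete, here is a small Python sketch of my own (not Gelernter's), in which the same program text is first treated as an inert plan and then handed to the interpreter, which embodies it with no human judgement in between.

```python
# A disembodied information machine: a precise, complete plan, written
# down as mere text. Nothing is running yet.
program_text = """
def greet(name):
    return "Hello, " + name

print(greet("Rembrandt"))
"""

# Treated as a document, the plan is just so many characters.
print(len(program_text), "characters of plan, and no machine yet")

# Handing the text to the computer "embodies" it automatically: the
# interpreter needs no skill or judgement to turn the plan into a
# running machine.
exec(program_text)  # prints: Hello, Rembrandt
```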

So: A running program—an information machine or infomachine for short—is the embodiment of a disembodied machine. In saying this, we have said a lot. A fairly simple point first, then a subtle and deeply important one—

Some people believe that, when they see a program running, the machine they are watching is a “computer.” True, but not true enough. The computer, that impressive-looking box with the designer logo, is merely the paint, not the painting. When you say I'm watching this computer do its stuff, you are saying in effect I'm admiring not this Rembrandt but some paint smeared on canvas. Some people imagine the computer as a gifted actor (say) who is handed a program and declaims it feelingly. No: bad image. The computer itself is of the utmost triviality to the workings of the infomachine you are watching. It may decide how fast or slowly the thing runs, and may affect its behaviour just a little around the fringes, but essentially, it is of no logical significance whatever. It is a mere body, and bodies are a dime a dozen.

The second point is harder.

People often find it difficult to keep in mind that, when they see a program text, what they're looking at is a machine. The fact that, for the time being, the machine they're looking at has no body confuses them. With good reason: This is a subtle, maybe a confusing point. They leap to the conclusion that what programmers do amounts to arranging symbols on paper (or in a computer file) in a certain way. They look at a program and see merely a highly specialized kind of document …

This mistake is fatal to any real understanding of what software is.

Understanding software doesn't mean understanding how program texts are arranged, it means understanding what the working infomachine itself is like—what actually happens when you embody the thing and turn it on—what kind of structure you are creating when you organize those squiggles—the shape of the finished product, the way information hums through it, the way it grows, shrinks and changes as it runs, the look and feel of the actual computational landscape. This is where software creativity is exercised. This is where the field evolves, metamorphoses and explodes. Talented software designers work with some image of the actual running program uppermost in mind. Failing to see through the program text to the machine it represents is like trying to understand musical notation without grasping that those little sticks and ellipsoids represent sounds.

This kind of information is hard to convey. You can't directly see a running program. You can sense its workings indirectly, but you can't open the hood and look right at the mechanism. An ironic reversal of the Rembrandt experience: Here the mind-plan is tangible, but the embodied thing itself is not.

Finally, Gelernter concludes:

[I]f you get carried away, and start asserting that “music is the mechanical manipulation of symbols on staff paper,” “programming is mathematics,” you have committed intellectual suicide. You've mistaken the means for the end. You've cut yourself off absolutely from all real inspiration, creativity and growth. And you have failed, profoundly, to understand the character of your field.

A dangerous mistake. Where software is concerned, an all-too-natural one.

A Plunge and Squish View of the Mind


How can we bring our knowledge to bear on a problem? Does this resemble how we accumulate knowledge in the first place? A thoughtful passage by David Gelernter in Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean explores these questions.

In your mind particulars turn into generalities gradually, imperceptibly—like snow at the bottom of a drift turning into ice. If you don't know any general rules, if you've merely experienced something once, then that once will have to do. You may remember one example, or a collection of particular examples, or a general rule. These states blend together: When you've mastered the rule, you can still recall some individual experiences if you need to. Any respectable mind simulation must accommodate all three states. Any one of them might be the final state for some particular (perfectly respectable) mind. (Many people have been to Disneyland once, a fair number have been there a couple of times, and a few, no doubt, have been to Disneyland so often that the individual visits blend together into a single melted ice-cream puddle of a visit to Disneyland rule or script or principle or whatever. All three states are real.)

Plunge-and-squish adapts to whatever you have on hand. If there is a single relevant memory, plunge finds it. If there are several, squish constructs a modest generalization, one that captures the quirks of its particular elements. If there are many, squish constructs a sound, broad-based generalization. You may even wind up with a perma-squish abstraction, if this particular squish happens frequently enough and the elements blend smoothly together. It all happens automatically.

You need plunge and squish.

It's worth pausing here to explain plunge and squish in a little more detail. Plunge is when you take a new case—“one attribute or many attributes, doesn't matter”—and plunge it into the memory pool. “The plunged-in case attracts memories from all over: The ‘force fields’ inside the system get warped in such a way that every stored memory (every case in the database) is re-oriented with respect to the plunged-in ‘bait.’ The most relevant memories approach closest; and the less-relevant ones recede into the distance.” Squish, on the other hand, means “to look at the closest cases that are attracted by a plunge, and compact them together into a single ‘super case.’ We take all these nearby memories (in other words) and superimpose them.”

One more point: Whatever stack of memories you have on hand, you can cut the deck in a million ways. You can reshuffle it endlessly. You can, if you need to, synthesize a general rule at a moment's notice. You see an asphalt spreader on the next block. You develop an expectation: The next block will smell like [the smell of fresh asphalt…]. What happened—did you wrack your brain for that important general principle, squirrelled away for just such an occasion—fact number three million twenty-one thousand and seven—fresh asphalt usually smells like…? Or did you synthesize this rule by doing a plunge-and-squish on the spot?
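For readers who want the mechanics spelled out, here is one way that on-the-spot synthesis might look in code. It is a minimal sketch under my own assumptions (cases as simple attribute-to-number dictionaries, relevance as a crude distance measure, squishing as attribute-wise averaging), not Gelernter's actual system.

```python
import math
from collections import defaultdict

def distance(case_a, case_b):
    """Crude dissimilarity: Euclidean distance over shared attributes,
    plus a penalty for attributes present in one case but not the other."""
    shared = set(case_a) & set(case_b)
    d = math.sqrt(sum((case_a[k] - case_b[k]) ** 2 for k in shared))
    return d + len(set(case_a) ^ set(case_b))

def plunge(bait, memory_pool, k=3):
    """Drop a new case (the 'bait') into the pool; the most relevant
    stored memories sort to the front, the rest recede."""
    return sorted(memory_pool, key=lambda case: distance(bait, case))[:k]

def squish(cases):
    """Superimpose nearby memories into one 'super case' by averaging
    each attribute over the cases that mention it."""
    sums, counts = defaultdict(float), defaultdict(int)
    for case in cases:
        for attr, value in case.items():
            sums[attr] += value
            counts[attr] += 1
    return {attr: sums[attr] / counts[attr] for attr in sums}

# Particular memories, never compiled into a general rule in advance.
memories = [
    {"asphalt": 1.0, "fresh_smell": 0.9, "machinery_noise": 0.7},
    {"asphalt": 1.0, "fresh_smell": 0.8},
    {"snowfall": 1.0, "fresh_smell": 0.1},
]

# Seeing the asphalt spreader supplies the bait; plunge pulls the two
# asphalt memories close, and squish melts them into an expectation.
bait = {"asphalt": 1.0}
expectation = squish(plunge(bait, memories, k=2))
print(expectation)  # roughly {'asphalt': 1.0, 'fresh_smell': 0.85, 'machinery_noise': 0.7}
```

Two particular asphalt memories get melted together into a makeshift expectation on the spot; nothing had to be squirrelled away in advance as fact number three million and whatever.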

Clearly you can cobble together an abstraction, a category or an expectation at a moment's notice. You can create new categories to order whenever they are needed. (Unpleasant vacations? Objects that look like metal but aren't?…) Any realistic mind simulation must know how to do this.

Gotta have plunge; gotta have squish.

And so we arrive, finally, at two radically different pictures of the mind. In the mind-map view, there is a dense intertwined superstructure of categories, rules and generalizations, with the odd specific, particular fact hanging from the branches like the occasional bird-pecked apple. In the plunge-and-squish view, there are slowly-shifting, wandering and reforming snowdrifts instead, built without superstructure out of a billion crystal flakes—a billion particular experiences. New experiences sift constantly downwards onto the snowscape and old ones settle imperceptibly into ice-clear universals, and the whole scene is alive and constantly, subtly changing.

It's too soon to say which view is right. Both approaches need a lot more work. Both have produced interesting results. …

Real vs. Simulated Memories


Software memory is doing more and more for us. Yet it lacks one important element of human memory: emotion.

This thought-provoking excerpt comes from Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean, a book recommended by Marc Andreessen.

When an expert remembers a patient, he doesn't remember a mere list of words. He remembers an experience, a whole galaxy of related perceptions. No doubt he remembers certain words—perhaps a name, a diagnosis, maybe some others. But he also remembers what the patient looked like, sounded like; how the encounter made him feel (confident, confused?) … Clearly these unrecorded perceptions have tremendous information content. People can revisit their experiences, examine their stored perceptions in retrospect. In reducing a “memory” to mere words, and a quick-march parade step of attribute, value, attribute, value at that, we are giving up a great deal. We are reducing a vast mountaintop panorama to a grainy little black-and-white photograph.
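To make that “parade step” concrete, here is what such a reduced case might look like; the record layout and field names are my own hypothetical illustration, not anything from the book.

```python
# A hypothetical case record of the kind the passage criticizes: the
# "memory" of a patient reduced to a march of attribute-value pairs.
# The field names are illustrative assumptions, not taken from the book.
case_417 = {
    "name": "J. Smith",
    "age": 54,
    "chief_complaint": "chest pain",
    "diagnosis": "angina",
    "outcome": "stable",
}
# Nothing here records what the patient looked or sounded like, or how
# the encounter felt; the panorama has become the grainy photograph.
```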

There is, too, a huge distance between simulated remembering—pulling cases out of the database—and the real thing. To a human being, an experience means a set of coherent sensations, which are wrapped up and sent back to the storeroom for later recollection. Remembering is the reverse: A set of coherent sensations is trundled out of storage and replayed—those archived sensations are re-experienced. The experience is less vivid on tape (so to speak) than it was in person, and portions of the original may be smudged or completely missing, but nonetheless—the Rememberer gets, in essence, another dose of the original experience. For human beings, in other words, remembering isn't merely retrieving, it is re-experiencing.

And this fact is important because it obviously impinges (probably in a large way) on how people do their remembering. Why do you “choose” to recall something? Well for one thing, certain memories make you feel good. The original experience included a “feeling good” sensation, and so the tape has “feel good” recorded on it, and when you recall the memory—you feel good. And likewise, one reason you choose (or unconsciously decide) not to recall certain memories is that they have “feel bad” recorded on them, and so remembering them makes you feel bad. (If you don't believe me check with Freud, who based the better part of a profoundly significant career on this observation, more or less.) It's obvious that the emotions recorded in a memory have at least something to do with steering your solitary rambles through Memory Woods.

But obviously, the software version of remembering has no emotional compass. To some extent, that's good: Software won't suppress, repress or forget some illuminating case because (say) it made a complete fool of itself when the case was first presented. Objectivity is powerful.

On the other hand, we are brushing up here against a limitation that has a distinctly fundamental look. We want our Mirror Worlds to “remember” intelligently—to draw just the right precedent or two from a huge database. But human beings draw on reason and emotion when they perform all acts of remembering. An emotion can be a concise, nuanced shorthand for a whole tangle of facts and perceptions that you never bothered to sort out. How did you feel on your first day at work or school, your child's second birthday, last year's first snowfall? Later you might remember that scene; you might be reminded merely by the fact that you now feel the same as you did then. Why do you feel the same? If you think carefully, perhaps you can trace down the objective similarities between the two experiences. But their emotional resemblance was your original clue. And it's quite plausible that “expertise” works this way also, at least occasionally: I'm reminded of a past case, not because of any objective similarity, but rather because I now feel the same as I did then.