Did the Universe Just Happen?

The Atlantic Monthly | April 1988
By Robert Wright

I. Flying Solo


Ed Fredkin is scanning the visual field systematically. He checks the instrument panel regularly. He is cool, collected, in control. He is the optimally efficient pilot.

The plane is a Cessna Stationair Six—a six-passenger single-engine amphibious plane, the kind with the wheels recessed in pontoons. Fredkin bought it not long ago and is still working out a few kinks; right now he is taking it for a spin above the British Virgin Islands after some minor mechanical work.

He points down at several brown-green masses of land, embedded in a turquoise sea so clear that the shadows of yachts are distinctly visible on its sandy bottom. He singles out a small island with a good-sized villa and a swimming pool, and explains that the compound, and the island as well, belong to "the guy that owns Boy George"—the rock star's agent, or manager, or something.

I remark, loudly enough to overcome the engine noise, "It's nice."

Yes, Fredkin says, it's nice. He adds, "It's not as nice as my island."

He's joking, I guess, but he's right. Ed Fredkin's island, which soon comes into view, is bigger and prettier. It is about 125 acres, and the hill that constitutes its bulk is a deep green—a mixture of reeds and cacti, sea grape and turpentine trees, manchineel and frangipani. Its beaches range from prosaic to sublime, and the coral in the waters just offshore attracts little and big fish whose colors look as if they were coordinated by Alexander Julian. On the island's west side are immense rocks, suitable for careful climbing, and on the east side are a bar and restaurant and a modest hotel, which consists of three clapboard buildings, each with a few rooms. Between east and west is Fredkin's secluded island villa. All told, Moskito Island—or Drake's Anchorage, as the brochures call it—is a nice place for Fredkin to spend the few weeks of each year when he is not up in the Boston area tending his various other businesses.

In addition to being a self-made millionaire, Fredkin is a self-made intellectual. Twenty years ago, at the age of thirty-four, without so much as a bachelor's degree to his name, he became a full professor at the Massachusetts Institute of Technology. Though hired to teach computer science, and then selected to guide MIT's now eminent computer-science laboratory through some of its formative years, he soon branched out into more-offbeat things. Perhaps the most idiosyncratic of the courses he has taught is one on "digital physics," in which he propounded the most idiosyncratic of his several idiosyncratic theories. This theory is the reason I've come to Fredkin's island. It is one of those things that a person has to be prepared for. The preparer has to say, "Now, this is going to sound pretty weird, and in a way it is, but in a way it's not as weird as it sounds, and you'll see this once you understand it, but that may take a while, so in the meantime don't prejudge it, and don't casually dismiss it." Ed Fredkin thinks that the universe is a computer.

Fredkin works in a twilight zone of modern science—the interface of computer science and physics. Here two concepts that traditionally have ranked among science's most fundamental—matter and energy—keep bumping into a third: information. The exact relationship among the three is a question without a clear answer, a question vague enough, and basic enough, to have inspired a wide variety of opinions. Some scientists have settled for modest and sober answers. Information, they will tell you, is just one of many forms of matter and energy; it is embodied in things like a computer's electrons and a brain's neural firings, things like newsprint and radio waves, and that is that. Others talk in grander terms, suggesting that information deserves full equality with matter and energy, that it should join them in some sort of scientific trinity, that these three things are the main ingredients of reality.

Fredkin goes further still. According to his theory of digital physics, information is more fundamental than matter and energy. He believes that atoms, electrons, and quarks consist ultimately of bits—binary units of information, like those that are the currency of computation in a personal computer or a pocket calculator. And he believes that the behavior of those bits, and thus of the entire universe, is governed by a single programming rule. This rule, Fredkin says, is something fairly simple, something vastly less arcane than the mathematical constructs that conventional physicists use to explain the dynamics of physical reality. Yet through ceaseless repetition—by tirelessly taking information it has just transformed and transforming it further—it has generated pervasive complexity. Fredkin calls this rule, with discernible reverence, "the cause and prime mover of everything."

AT THE RESTAURANT ON FREDKIN'S ISLAND THE FOOD is prepared by a large man named Brutus and is humbly submitted to diners by men and women native to nearby islands. The restaurant is open-air, ventilated by a sea breeze that is warm during the day, cool at night, and almost always moist. Between the diners and the ocean is a knee-high stone wall, against which waves lap rhythmically. Beyond are other islands and a horizon typically blanketed by cottony clouds. Above is a thatched ceiling, concealing, if the truth be told, a sheet of corrugated steel. It is lunchtime now, and Fredkin is sitting in a cane-and-wicker chair across the table from me, wearing a light cotton sport shirt and gray swimming trunks. He was out trying to windsurf this morning, and he enjoyed only the marginal success that one would predict on the basis of his appearance. He is fairly tall and very thin, and has a softness about him—not effeminacy, but a gentleness of expression and manner—and the complexion of a scholar; even after a week on the island, his face doesn't vary much from white, except for his nose, which is red. The plastic frames of his glasses, in a modified aviator configuration, surround narrow eyes; there are times—early in the morning or right after a nap—when his eyes barely qualify as slits. His hair, perennially semi-combed, is black with a little gray.

Fredkin is a pleasant mealtime companion. He has much to say that is interesting, which is fortunate because generally he does most of the talking. He has little curiosity about other people's minds, unless their interests happen to coincide with his, which few people's do. "He's right above us," his wife, Joyce, once explained to me, holding her left hand just above her head, parallel to the ground. "Right here looking down. He's not looking down saying, 'I know more than you.' He's just going along his own way."

The food has not yet arrived, and Fredkin is passing the time by describing the world view into which his theory of digital physics fits. "There are three great philosophical questions," he begins. "What is life? What is consciousness and thinking and memory and all that? And how does the universe work?" He says that his "informational viewpoint" encompasses all three. Take life, for example. Deoxyribonucleic acid, the material of heredity, is "a good example of digitally encoded information," he says. "The information that implies what a creature or a plant is going to be is encoded; it has its representation in the DNA, right? Okay, now, there is a process that takes that information and transforms it into the creature, okay?" His point is that a mouse, for example, is "a big, complicated informational process."

Fredkin exudes rationality. His voice isn't quite as even and precise as Mr. Spock's, but it's close, and the parallels don't end there. He rarely displays emotion—except, perhaps, the slightest sign of irritation under the most trying circumstances. He has never seen a problem that didn't have a perfectly logical solution, and he believes strongly that intelligence can be mechanized without limit. More than ten years ago he founded the Fredkin Prize, a $100,000 award to be given to the creator of the first computer program that can beat a world chess champion. No one has won it yet, and Fredkin hopes to have the award raised to $1 million.

Fredkin is hardly alone in considering DNA a form of information, but this observation was less common back when he first made it. So too with many of his ideas. When his world view crystallized, a quarter of a century ago, he immediately saw dozens of large-scale implications, in fields ranging from physics to biology to psychology. A number of these have gained currency since then, and he considers this trend an ongoing substantiation of his entire outlook.

Fredkin talks some more and then recaps. "What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity life, DNA—you know, the biochemical functions—are controlled by a digital information process. Then, at another level, our thought processes are basically information processing." That is not to say, he stresses, that everything is best viewed as information. "It's just like there's mathematics and all these other things, but not everything is best viewed from a mathematical viewpoint. So what's being said is not that this comes along and replaces everything. It's one more avenue of modeling reality, and it happens to cover the sort of three biggest philosophical mysteries. So it sort of completes the picture."

Among the scientists who don't dismiss Fredkin's theory of digital physics out of hand is Marvin Minsky, a computer scientist and polymath at MIT, whose renown approaches cultic proportions in some circles. Minsky calls Fredkin "Einstein-like" in his ability to find deep principles through simple intellectual excursions. If it is true that most physicists think Fredkin is off the wall, Minsky told me, it is also true that "most physicists are the ones who don't invent new theories"; they go about their work with tunnel vision, never questioning the dogma of the day. When it comes to the kind of basic reformulation of thought proposed by Fredkin, "there's no point in talking to anyone but a Feynman or an Einstein or a Pauli," Minsky says. "The rest are just Republicans and Democrats." I talked with Richard Feynman, a Nobel laureate at the California Institute of Technology, before his death, in February. Feynman considered Fredkin a brilliant and consistently original, though sometimes incautious, thinker. If anyone is going to come up with a new and fruitful way of looking at physics, Feynman said, Fredkin will.

Notwithstanding their moral support, though, neither Feynman nor Minsky was ever convinced that the universe is a computer. They were endorsing Fredkin's mind, not this particular manifestation of it. When it comes to digital physics, Ed Fredkin is flying solo.

He knows that, and he regrets that his ideas continue to lack the support of his colleagues. But his self-confidence is unshaken. You see, Fredkin has had an odd childhood, and an odd education, and an odd career, all of which, he explains, have endowed him with an odd perspective, from which the essential nature of the universe happens to be clearly visible. "I feel like I'm the only person with eyes in a world where everyone's blind," he says.

II. A Finely Mottled Universe


THE PRIME MOVER OF EVERYTHING, THE SINGLE principle that governs the universe, lies somewhere within a class of computer programs known as cellular automata, according to Fredkin.

The cellular automaton was invented in the early 1950s by John von Neumann, one of the architects of computer science and a seminal thinker in several other fields. Von Neumann (who was stimulated in this and other inquiries by the ideas of the mathematician Stanislaw Ulam) saw cellular automata as a way to study reproduction abstractly, but the word cellular is not meant biologically when used in this context. It refers, rather, to adjacent spaces—cells—that together form a pattern. These days the cells typically appear on a computer screen, though von Neumann, lacking this convenience, rendered them on paper.

In some respects cellular automata resemble those splendid graphic displays produced by patriotic masses in authoritarian societies and by avid football fans at American universities. Holding up large colored cards on cue, they can collectively generate a portrait of, say, Lenin, Mao Zedong, or a University of Southern California Trojan. More impressive still, one portrait can fade out and another crystallize in no time at all. Again and again one frozen frame melts into another. It is a spectacular feat of precision and planning.

But suppose there were no planning. Suppose that instead of arranging a succession of cards to display, everyone learned a single rule for repeatedly determining which card was called for next. This rule might assume any of a number of forms. For example, in a crowd where all cards were either blue or white, each card holder could be instructed to look at his own card and the cards of his four nearest neighbors—to his front, back, left, and right—and do what the majority did during the last frame. (This five-cell group is known as the von Neumann neighborhood.) Alternatively, each card holder could be instructed to do the opposite of what the majority did. In either event the result would be a series not of predetermined portraits but of more abstract, unpredicted patterns. If, by prior agreement, we began with a USC Trojan, its white face might dissolve into a sea of blue, as whitecaps drifted aimlessly across the stadium. Conversely, an ocean of randomness could yield islands of structure—not a Trojan, perhaps, but at least something that didn't look entirely accidental. It all depends on the original pattern of cells and the rule used to transform it incrementally.
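To make the scheme concrete, here is a minimal sketch in Python (mine, not anything from Fredkin or the article) of a single card holder's decision under the follow-the-majority rule; the function name and the two card colors are illustrative choices.

```python
# One card holder's next move under the "follow the majority" rule:
# look at your own card plus the cards of your four nearest neighbors
# (the five-cell von Neumann neighborhood) and show whatever color
# most of those five showed during the last frame.
def next_card(own, front, back, left, right):
    neighborhood = [own, front, back, left, right]
    return "blue" if neighborhood.count("blue") >= 3 else "white"

print(next_card("white", "blue", "blue", "white", "blue"))  # prints "blue"
```

Swapping the two return values gives the contrarian variant, in which each card holder does the opposite of what the majority did.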

This leaves room for abundant variety. There are many ways to define a neighborhood, and for any given neighborhood there are many possible rules, most of them more complicated than blind conformity or implacable nonconformity. Each cell may, for instance, not only count cells in the vicinity but also pay attention to which particular cells are doing what. All told, the number of possible rules is an exponential function of the number of cells in the neighborhood; the von Neumann neighborhood alone has 2³², or around 4 billion, possible rules, and the nine-cell neighborhood that results from adding corner cells offers 2⁵¹², or roughly 1 with 154 zeros after it, possibilities. But whatever neighborhoods, and whatever rules, are programmed into a computer, two things are always true of cellular automata: all cells use the same rule to determine future behavior by reference to the past behavior of neighbors, and all cells obey the rule simultaneously, time after time.
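Spelling out the arithmetic behind those figures (standard counting; the article gives only the totals): a rule must assign an on-or-off outcome to every possible pattern of the neighborhood, so the number of rules is two raised to the number of patterns.

\[
\text{five cells: } 2^{5} = 32 \text{ patterns} \;\Rightarrow\; 2^{32} \approx 4.3 \times 10^{9} \text{ rules}
\]
\[
\text{nine cells: } 2^{9} = 512 \text{ patterns} \;\Rightarrow\; 2^{512} \approx 1.3 \times 10^{154} \text{ rules}
\]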

In the late 1950s, shortly after becoming acquainted with cellular automata, Fredkin began playing around with rules, selecting the powerful and interesting and discarding the weak and bland. He found, for example, that any rule requiring all four of a cell's immediate neighbors to be lit up in order for the cell itself to be lit up at the next moment would not provide sustained entertainment; a single "off" cell would proliferate until darkness covered the computer screen. But equally simple rules could create great complexity. The first such rule discovered by Fredkin dictated that a cell be on if an odd number of cells in its von Neumann neighborhood had been on, and off otherwise. After "seeding" a good, powerful rule with an irregular landscape of off and on cells, Fredkin could watch rich patterns bloom, some freezing upon maturity, some eventually dissipating, others locking into a cycle of growth and decay. A colleague, after watching one of Fredkin's rules in action, suggested that he sell the program to a designer of Persian rugs.
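Fredkin's first interesting rule is simple enough to state in a few lines of code. The sketch below is my reconstruction, not his original program; the grid size, the wraparound edges, and the single-cell seed are illustrative choices. A cell turns on exactly when an odd number of cells in its five-cell von Neumann neighborhood were on.

```python
# Fredkin's parity rule, sketched: a cell is on at the next tick if and only
# if an odd number of cells in its von Neumann neighborhood (itself plus its
# four orthogonal neighbors) are on now.  Edges wrap around.
SIZE = 32

def parity_step(grid):
    new = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            count = (grid[r][c]
                     + grid[(r - 1) % SIZE][c] + grid[(r + 1) % SIZE][c]
                     + grid[r][(c - 1) % SIZE] + grid[r][(c + 1) % SIZE])
            new[r][c] = count % 2          # on if the count is odd
    return new

grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1             # seed: a single lit cell
for _ in range(7):                         # run a handful of ticks
    grid = parity_step(grid)
print("\n".join("".join("#" if x else "." for x in row) for row in grid))
```

Run it and an intricate, symmetric figure grows out of the single seed, which is the point: the rule is trivial, the result is not.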

Today new cellular-automaton rules are formulated and tested by the "information-mechanics group" founded by Fredkin at MIT's computer-science laboratory. The core of the group is an international duo of physicists, Tommaso Toffoli, of Italy, and Norman Margolus, of Canada. They differ in the degree to which they take Fredkin's theory of physics seriously, but both agree with him that there is value in exploring the relationship between computation and physics, and they have spent much time using cellular automata to simulate physical processes. In the basement of the computer-science laboratory is the CAM—the cellular automaton machine, designed by Toffoli and Margolus partly for that purpose. Its screen has 65,536 cells, each of which can assume any of four colors and can change color sixty times a second.

The CAM is an engrossing, potentially mesmerizing machine. Its four colors—the three primaries and black—intermix rapidly and intricately enough to form subtly shifting hues of almost any gradation; pretty waves of deep blue or red ebb and flow with fine fluidity and sometimes with rhythm, playing on the edge between chaos and order.

Guided by the right rule, the CAM can do a respectable imitation of pond water rippling outward circularly in deference to a descending pebble, or of bubbles forming at the bottom of a pot of boiling water, or of a snowflake blossoming from a seed of ice: step by step, a single "ice crystal" in the center of the screen unfolds into a full-fledged flake, a six-edged sheet of ice riddled symmetrically with dark pockets of mist. (It is easy to see how a cellular automaton can capture the principles thought to govern the growth of a snowflake: regions of vapor that find themselves in the vicinity of a budding snowflake freeze—unless so nearly enveloped by ice crystals that they cannot discharge enough heat to freeze.)

These exercises are fun to watch, and they give one a sense of the cellular automaton's power, but Fredkin is not particularly interested in them. After all, a snowflake is not, at the visible level, literally a cellular automaton; an ice crystal is not a single, indivisible bit of information, like the cell that portrays it. Fredkin believes that automata will more faithfully mirror reality as they are applied to its more fundamental levels and the rules needed to model the motion of molecules, atoms, electrons, and quarks are uncovered. And he believes that at the most fundamental level (whatever that turns out to be) the automaton will describe the physical world with perfect precision, because at that level the universe is a cellular automaton, in three dimensions—a crystalline lattice of interacting logic units, each one "deciding" zillions of times per second whether it will be off or on at the next point in time. The information thus produced, Fredkin says, is the fabric of reality, the stuff of which matter and energy are made. An electron, in Fredkin's universe, is nothing more than a pattern of information, and an orbiting electron is nothing more than that pattern moving. Indeed, even this motion is in some sense illusory: the bits of information that constitute the pattern never move, any more than football fans would change places to slide a USC Trojan four seats to the left. Each bit stays put and confines its activity to blinking on and off. "You see, I don't believe that there are objects like electrons and photons, and things which are themselves and nothing else," Fredkin says. "What I believe is that there's an information process, and the bits, when they're in certain configurations, behave like the thing we call the electron, or the hydrogen atom, or whatever."

THE READER MAY NOW HAVE A NUMBER OF questions that unless satisfactorily answered will lead to something approaching contempt for Fredkin's thinking. One such question concerns the way cellular automata chop space and time into little bits. Most conventional theories of physics reflect the intuition that reality is continuous—that one "point" in time is no such thing but, rather, flows seamlessly into the next, and that space, similarly, doesn't come in little chunks but is perfectly smooth. Fredkin's theory implies that both space and time have a graininess to them, and that the grains cannot be chopped up into smaller grains; that people and dogs and trees and oceans, at rock bottom, are more like mosaics than like paintings; and that time's essence is better captured by a digital watch than by a grandfather clock.

The obvious question is, Why do space and time seem continuous if they are not? The obvious answer is, The cubes of space and points of time are very, very small: time seems continuous in just the way that movies seem to move when in fact they are frames, and the illusion of spatial continuity is akin to the emergence of smooth shades from the finely mottled texture of a newspaper photograph.

The obvious answer, Fredkin says, is not the whole answer; the illusion of continuity is yet more deeply ingrained in our situation. Even if the ticks on the universal clock were, in some absolute sense, very slow, time would still seem continuous to us, since our perception, itself proceeding in the same ticks, would be no more finely grained than the processes being perceived. So too with spatial perception: Can eyes composed of the smallest units in existence perceive those units? Could any informational process sense its ultimate constituents? The point is that the basic units of time and space in Fredkin's reality don't just happen to be imperceptibly small. As long as the creatures doing the perceiving are in that reality, the units have to be imperceptibly small.

Though some may find this discreteness hard to comprehend, Fredkin finds a grainy reality more sensible than a smooth one. If reality is truly continuous, as most physicists now believe it is, then there must be quantities that cannot be expressed with a finite number of digits; the number representing the strength of an electromagnetic field, for example, could begin 5.23429847 and go on forever without falling into a pattern of repetition. That seems strange to Fredkin: wouldn't you eventually get to a point, around the hundredth, or thousandth, or millionth decimal place, where you had hit the strength of the field right on the nose? Indeed, wouldn't you expect that every physical quantity has an exactness about it? Well, you might and might not. But Fredkin does expect exactness, and in his universe he gets it.

Fredkin has an interesting way of expressing his insistence that all physical quantities be "rational." (A rational number is a number that can be expressed as a fraction—as a ratio of one integer to another. Expressed as a decimal, a rational number will either end, as 5/2 does in the form of 2.5, or repeat itself endlessly, as 1/7 does in the form of 0.142857142857142 . . .) He says he finds it hard to believe that a finite volume of space could contain an infinite amount of information. It is almost as if he viewed each parcel of space as having the digits describing it actually crammed into it. This seems an odd perspective, one that confuses the thing itself with the information it represents. But such an inversion between the realm of things and the realm of representation is common among those who work at the interface of computer science and physics. Contemplating the essence of information seems to affect the way you think.

The prospect of a discrete reality, however alien to the average person, is easier to fathom than the problem of the infinite regress, which is also raised by Fredkin's theory. The problem begins with the fact that information typically has a physical basis. Writing consists of ink; speech is composed of sound waves; even the computer's ephemeral bits and bytes are grounded in configurations of electrons. If the electrons are in turn made of information, then what is the information made of?

Asking questions like this ten or twelve times is not a good way to earn Fredkin's respect. A look of exasperation passes fleetingly over his face. "What I've tried to explain is that—and I hate to do this, because physicists are always doing this in an obnoxious way—is that the question implies you're missing a very important concept." He gives it one more try, two more tries, three, and eventually some of the fog between me and his view of the universe disappears. I begin to understand that this is a theory not just of physics but of metaphysics. When you disentangle these theories—compare the physics with other theories of physics, and the metaphysics with other ideas about metaphysics—both sound less far-fetched than when jumbled together as one. And, as a bonus, Fredkin's metaphysics leads to a kind of high-tech theology—to speculation about supreme beings and the purpose of life.

III. The Perfect Thing


EDWARD FREDKIN WAS BORN IN 1934, THE LAST OF three children in a previously prosperous family. His father, Manuel, had come to Southern California from Russia shortly after the Revolution and founded a chain of radio stores that did not survive the Great Depression. The family learned economy, and Fredkin has not forgotten it. He can reach into his pocket, pull out a tissue that should have been retired weeks ago, and, with cleaning solution, make an entire airplane windshield clear. He can take even a well-written computer program, sift through it for superfluous instructions, and edit it accordingly, reducing both its size and its running time.

Manuel was by all accounts a competitive man, and he focused his competitive energies on the two boys: Edward and his older brother, Norman. Manuel routinely challenged Ed's mastery of fact, inciting sustained arguments over, say, the distance between the moon and the earth. Norman's theory is that his father, though bright, was intellectually insecure; he seemed somehow threatened by the knowledge the boys brought home from school. Manuel's mistrust of books, experts, and all other sources of received wisdom was absorbed by Ed.

So was his competitiveness. Fredkin always considered himself the smartest kid in his class. He used to place bets with other students on exam scores. This habit did not endear him to his peers, and he seems in general to have lacked the prerequisites of popularity. His sense of humor was unusual. His interests were not widely shared. His physique was not a force to be reckoned with. He recalls, "When I was young—you know, sixth, seventh grade—two kids would be choosing sides for a game of something. It could be touch football. They'd choose everybody but me, and then there'd be a fight as to whether one side would have to take me. One side would say, 'We have eight and you have seven,' and they'd say, 'That's okay.' They'd be willing to play with seven." Though exhaustive in documenting his social alienation, Fredkin concedes that he was not the only unpopular student in school. "There was a socially active subgroup, probably not a majority, maybe forty percent, who were very socially active. They went out on dates. They went to parties. They did this and they did that. The others were left out. And I was in this big left-out group. But I was in the pole position. I was really left out."

Of the hours Fredkin spent alone, a good many were devoted to courting disaster in the name of science. By wiring together scores of large, 45-volt batteries, he collected enough electricity to conjure up vivid, erratic arcs. By scraping the heads off matches and buying sulfur, saltpeter, and charcoal, he acquired a good working knowledge of pyrotechnics. He built small, minimally destructive but visually impressive bombs, and fashioned rockets out of cardboard tubing and aluminum foil. But more than bombs and rockets, it was mechanisms that captured Fredkin's attention. From an early age he was viscerally attracted to Big Ben alarm clocks, which he methodically took apart and put back together. He also picked up his father's facility with radios and household appliances. But whereas Manuel seemed to fix things without understanding the underlying science, his son was curious about first principles.

So while other kids were playing baseball or chasing girls, Ed Fredkin was taking things apart and putting them back together. Children were aloof, even cruel, but a broken clock always responded gratefully to a healing hand. "I always got along well with machines," he remembers.

After graduation from high school, in 1952, Fredkin headed for the California Institute of Technology with hopes of finding a more appreciative social environment. But students at Caltech turned out to bear a disturbing resemblance to people he had observed elsewhere. "They were smart like me," he recalls, "but they had the full spectrum and distribution of social development." Once again Fredkin found his weekends unencumbered by parties. And once again he didn't spend his free time studying. Indeed, one of the few lessons he learned is that college is different from high school: in college if you don't study, you flunk out. This he did a few months into his sophomore year. Then, following in his brother's footsteps, he joined the Air Force and learned to fly fighter planes.

IT WAS THE AIR FORCE THAT FINALLY BROUGHT Fredkin face to face with a computer. He was working for the Air Proving Ground Command, whose function was to ensure that everything from combat boots to bombers was of top quality, when the unit was given the job of testing a computerized air-defense system known as SAGE (for "semi-automatic ground environment"). To test SAGE the Air Force needed men who knew something about computers, and so in 1956 a group from the Air Proving Ground Command, including Fredkin, was sent to MIT's Lincoln Laboratory and enrolled in computer-science courses. "Everything made instant sense to me," Fredkin remembers. "I just soaked it up like a sponge."

SAGE, when ready for testing, turned out to be even more complex than anticipated—too complex to be tested by anyone but genuine experts—and the job had to be contracted out. This development, combined with bureaucratic disorder, meant that Fredkin was now a man without a function, a sort of visiting scholar at Lincoln Laboratory. "For a period of time, probably over a year, no one ever came to tell me to do anything. Well, meanwhile, down the hall they installed the latest, most modern computer in the world—IBM's biggest, most powerful computer. So I just went down and started to program it." The computer was an XD-1. It was slower and less capacious than an Apple Macintosh and was roughly the size of a large house.

When Fredkin talks about his year alone with this dinosaur, you half expect to hear violins start playing in the background. "My whole way of life was just waiting for the computer to come along," he says. "The computer was in essence just the perfect thing." It was in some respects preferable to every other conglomeration of matter he had encountered—more sophisticated and flexible than other inorganic machines, and more logical than organic ones. "See, when I write a program, if I write it correctly, it will work. If I'm dealing with a person, and I tell him something, and I tell him correctly, it may or may not work."

The XD-1, in short, was an intelligence with which Fredkin could empathize. It was the ultimate embodiment of mechanical predictability, the refuge to which as a child he had retreated from the incomprehensibly hostile world of humanity. If the universe is indeed a computer, then it could be a friendly place after all.

During the several years after his arrival at Lincoln Lab, as Fredkin was joining the first generation of hackers, he was also immersing himself in physics—finally learning, through self-instruction, the lessons he had missed by dropping out of Caltech. It is this two-track education, Fredkin says, that led him to the theory of digital physics. For a time "there was no one in the world with the same interest in physics who had the intimate experience with computers that I did. I honestly think that there was a period of many years when I was in a unique position."

The uniqueness lay not only in the fusion of physics and computer science but also in the peculiar composition of Fredkin's physics curriculum. Many physicists acquire as children the sort of kinship with mechanism that he still feels, but in most cases it is later diluted by formal education; quantum mechanics, the prevailing paradigm in contemporary physics, seems to imply that, at its core, reality has truly random elements and is thus inherently unpredictable. But Fredkin escaped the usual indoctrination. To this day he maintains, as did Albert Einstein, that the common interpretation of quantum mechanics is mistaken—that any seeming indeterminacy in the subatomic world reflects only our ignorance of the determining principles, not their absence. This is a critical belief, for if he is wrong and the universe is not ultimately deterministic, then it cannot be governed by a process as exacting as computation.

After leaving the Air Force, Fredkin went to work for Bolt Beranek and Newman, a consulting firm in the Boston area, now known for its work in artificial intelligence and computer networking. His supervisor at BBN, J. C. R. Licklider, says of his first encounter with Fredkin, "It was obvious to me he was very unusual and probably a genius, and the more I came to know him, the more I came to think that that was not too elevated a description." Fredkin "worked almost continuously," Licklider recalls. "It was hard to get him to go to sleep sometimes." A pattern emerged. Licklider would give Fredkin a problem to work on—say, figuring out how to get a computer to search a text in its memory for an only partially specified sequence of letters. Fredkin would retreat to his office and return twenty or thirty hours later with the solution—or, rather, a solution; he often came back with the answer to a question different from the one that Licklider had asked. Fredkin's focus was intense but undisciplined, and it tended to stray from a problem as soon as he was confident that he understood the solution in principle.

This intellectual wanderlust is one of Fredkin's most enduring and exasperating traits. Just about everyone who knows him has a way of describing it: "He doesn't really work. He sort of fiddles." "Very often he has these great ideas and then does not have the discipline to cultivate the idea." "There is a gap between the quality of the original ideas and what follows. There's an imbalance there." Fredkin is aware of his reputation. In self-parody he once brought a cartoon to a friend's attention: A beaver and another forest animal are contemplating an immense man-made dam. The beaver is saying something like, "No, I didn't actually build it. But it's based on an idea of mine."

Among the ideas that congealed in Fredkin's mind during his stay at BBN is the one that gave him his current reputation as (depending on whom you talk to) a thinker of great depth and rare insight, a source of interesting but reckless speculation, or a crackpot.

IV. Tick by Tick, Dot by Dot


THE IDEA THAT THE UNIVERSE IS A COMPUTER WAS inspired partly by the idea of the universal computer. Universal computer, a term that can accurately be applied to everything from an IBM PC to a Cray supercomputer, has a technical, rigorous definition, but here its upshot will do: a universal computer can simulate any process that can be precisely described and perform any calculation that is performable.

This broad power is ultimately grounded in something very simple: the algorithm. An algorithm is a fixed procedure for converting input into output, for taking one body of information and turning it into another. For example, a computer program that takes any number it is given, squares it, and subtracts three is an algorithm. This isn't a very powerful algorithm; by taking a 3 and turning it into a 6, it hasn't created much new information. But algorithms become more powerful with recursion. A recursive algorithm is an algorithm whose output is fed back into it as input. Thus the algorithm that turned 3 into 6, if operating recursively, would continue, turning 6 into 33, then 33 into 1,086, then 1,086 into 1,179,393, and so on.
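In code, recursion of this kind is nothing more than a loop that feeds each output back in as the next input. A minimal sketch (mine, simply transcribing the article's example):

```python
# The article's example algorithm: square the input and subtract three.
def step(n):
    return n * n - 3

# Operate recursively: each output becomes the next input.
value = 3
for _ in range(4):
    value = step(value)
    print(value)   # 6, then 33, then 1086, then 1179393
```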

The power of recursive algorithms is especially apparent in the simulation of physical processes. While Fredkin was at BBN, he would use the company's Digital Equipment Corporation PDP-1 computer to simulate, say, two particles, one that was positively charged and one that was negatively charged, orbiting each other in accordance with the laws of electromagnetism. It was a pretty sight: two phosphor dots dancing, each etching a green trail that faded into yellow and then into darkness. But for Fredkin the attraction lay less in this elegant image than in its underlying logic. The program he had written took the particles' velocities and positions at one point in time, computed those variables for the next point in time, and then fed the new variables back into the algorithm to get newer variables—and so on and so on, thousands of times a second. The several steps in this algorithm, Fredkin recalls, were "very simple and very beautiful." It was in these orbiting phosphor dots that Fredkin first saw the appeal of his kind of universe—a universe that proceeds tick by tick and dot by dot, a universe in which complexity boils down to rules of elementary simplicity.
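A modern sketch of the kind of program Fredkin describes (my reconstruction in Python, not his PDP-1 code; the constants, the time step, and the plain Euler update are illustrative assumptions): take the particles' positions and velocities at one tick, compute the attraction between them, advance everything a tiny step, and feed the new state back in.

```python
# Two opposite charges orbiting each other, advanced tick by tick.
# Units are arbitrary and the integration is crude; the recursive
# structure is the point: new state = f(old state), over and over.
K, M, DT = 1.0, 1.0, 0.001                 # force constant, mass, length of one tick

x1, y1, vx1, vy1 = -0.5, 0.0, 0.0,  0.7    # positive particle
x2, y2, vx2, vy2 =  0.5, 0.0, 0.0, -0.7    # negative particle

for tick in range(20000):
    dx, dy = x2 - x1, y2 - y1
    r2 = dx * dx + dy * dy
    r = r2 ** 0.5
    f = K / r2                             # attractive, inverse-square
    fx, fy = f * dx / r, f * dy / r
    vx1 += fx / M * DT; vy1 += fy / M * DT     # pull particle 1 toward 2
    vx2 -= fx / M * DT; vy2 -= fy / M * DT     # and particle 2 toward 1
    x1 += vx1 * DT; y1 += vy1 * DT
    x2 += vx2 * DT; y2 += vy2 * DT
    if tick % 5000 == 0:
        print(f"tick {tick:6d}: particle 1 at ({x1:+.3f}, {y1:+.3f})")
```

Fredkin's version drew each new state as a phosphor dot on the PDP-1's display; the algorithmic skeleton is the same.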

Fredkin's discovery of cellular automata a few years later permitted him further to indulge his taste for economy of information and strengthened his bond with the recursive algorithm. The patterns of automata are often all but impossible to describe with calculus yet easy to express algorithmically. Nothing is so striking about a good cellular automaton as the contrast between the simplicity of the underlying algorithm and the richness of its result. We have all felt the attraction of such contrasts. It accompanies the comprehension of any process, conceptual or physical, by which simplicity accommodates complexity. Simple solutions to complex problems, for example, make us feel good. The social engineer who designs uncomplicated legislation that will cure numerous social ills, the architect who eliminates several nagging design flaws by moving a single closet, the doctor who traces gastro-intestinal, cardiovascular, and respiratory ailments to a single, correctable cause—all feel the same kind of visceral, aesthetic satisfaction that must have filled the first caveman who literally killed two birds with one stone.

For scientists, the moment of discovery does not simply reinforce the search for knowledge; it inspires further research. Indeed, it directs research. The unifying principle, upon its apprehension, can elicit such devotion that thereafter the scientist looks everywhere for manifestations of it. It was the scientist in Fredkin who, upon seeing how a simple programming rule could yield immense complexity, got excited about looking at physics in a new way and stayed excited. He spent much of the next three decades fleshing out his intuition.

FREDKIN'S RESIGNATION FROM BOLT BERANEK AND Newman did not surprise Licklider. "I could tell that Ed was disappointed in the scope of projects undertaken at BBN. He would see them on a grander scale. I would try to argue—hey, let's cut our teeth on this and then move on to bigger things." Fredkin wasn't biting. "He came in one day and said, 'Gosh, Lick, I really love working here, but I'm going to have to leave. I've been thinking about my plans for the future, and I want to make'—I don't remember how many millions of dollars, but it shook me—'and I want to do it in about four years.' And he did amass however many millions he said he would amass in the time he predicted, which impressed me considerably."

In 1962 Fredkin founded Information International Incorporated—an impressive name for a company with no assets and no clients, whose sole employee had never graduated from college. Triple-I, as the company came to be called, was placed on the road to riches by an odd job that Fredkin performed for the Woods Hole Oceanographic Institute. One of Woods Hole's experiments had run into a complication: underwater instruments had faithfully recorded the changing direction and strength of deep ocean currents, but the information, encoded in tiny dots of light on sixteen-millimeter film, was inaccessible to the computers that were supposed to analyze it. Fredkin rented a sixteen-millimeter movie projector and with a surprisingly simple modification turned it into a machine for translating those dots into terms the computer could accept.

This contraption pleased the people at Woods Hole and led to a contract with Lincoln Laboratory. Lincoln was still doing work for the Air Force, and the Air Force wanted its computers to analyze radar information that, like the Woods Hole data, consisted of patterns of light on film. A makeshift information-conversion machine earned Triple-I $10,000, and within a year the Air Force hired Fredkin to build equipment devoted to the task. The job paid $350,000—the equivalent today of around $1 million. RCA and other companies, it turned out, also needed to turn visual patterns into digital data, and "programmable film readers" that sold for $500,000 apiece became Triple-I's stock-in-trade. In 1968 Triple-I went public and Fredkin was suddenly a millionaire. Gradually he cashed in his chips. First he bought a ranch in Colorado. Then one day he was thumbing through the classifieds and saw that an island in the Caribbean was for sale. He bought it.

In the early 1960s, at the suggestion of the Defense Department's Advanced Research Projects Agency, MIT set up what would become its Laboratory for Computer Science. It was then called Project MAC, an acronym that stood for both "machine-aided cognition" and "multiaccess computer." Fredkin had connections with the project from the beginning. Licklider, who had left BBN for the Pentagon shortly after Fredkin's departure, was influential in earmarking federal money for MAC. Marvin Minsky—who would later serve on Triple-I's board, and by the end of 1967 owned some of its stock—was centrally involved in MAC's inception. Fredkin served on Project MAC's steering committee, and in 1966 he began discussing with Minsky the possibility of becoming a visiting professor at MIT. The idea of bringing a college dropout onto the faculty, Minsky recalls, was not as outlandish as it now sounds; computer science had become an academic discipline so suddenly that many of its leading lights possessed meager formal credentials. In 1968, after Licklider had come to MIT and become the director of Project MAC, he and Minsky convinced Louis Smullin, the head of the electrical-engineering department, that Fredkin was worth the gamble. "We were a growing department and we wanted exciting people," Smullin says. "And Ed was exciting."

Fredkin had taught for barely a year before he became a full professor, and not much later, in 1971, he was appointed the head of Project MAC—a position that was also short-lived, for in the fall of 1974 he began a sabbatical at the California Institute of Technology as a Fairchild Distinguished Scholar. He went to Caltech under the sponsorship of Richard Feynman. The deal, Fredkin recalls, was that he would teach Feynman more about computer science, and Feynman would teach him more about physics. While there, Fredkin developed an idea that has slowly come to be seen as a profound contribution to both disciplines. The idea is also—in Fredkin's mind, at least—corroborating evidence for his theory of digital physics. To put its upshot in brief and therefore obscure terms, Fredkin found that computation is not inherently irreversible and thus it is possible, in principle, to build a computer that doesn't use up energy and doesn't give off heat.

All computers on the market are irreversible. That is, their history of information processing cannot be inferred from their present informational state; you cannot look at the data they contain and figure out how they arrived at it. By the time the average computer tells you that 2 plus 2 equals 4, it has forgotten the question; for all it knows, you asked what 1 plus 3 is. The reason for this ignorance is that computers discharge information once it is no longer needed, so that they won't get clogged up.

In 1961 Rolf Landauer, of IBM's Thomas J. Watson Research Center, established that this destruction of information is the only part of the computational process that unavoidably involves the dissipation of energy. It takes effort, in other words, for a computer to forget things but not necessarily for it to perform other functions. Thus the question of whether you can, in principle, build a universal computer that doesn't dissipate energy in the form of heat is synonymous with the question of whether you can design a logically reversible universal computer, one whose computational history can always be unearthed. Landauer, along with just about everyone else, thought such a computer impossible; all past computer architectures had implied the regular discarding of information, and it was widely believed that this irreversibility was intrinsic to computation. But while at Caltech, Fredkin did one of his favorite things—he showed that everyone had been wrong all along.
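Landauer's result is usually stated as a bound, which the article does not spell out; the standard form is that erasing a single bit at temperature T must dissipate at least

\[
E_{\min} = k_B T \ln 2 \;\approx\; (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \;\approx\; 2.9 \times 10^{-21}\,\mathrm{J}
\]

at room temperature. Every other step of a computation can, in principle, be performed without dissipation, which is why a computer that never throws information away need not give off heat.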

Of the two kinds of reversible computers invented by Fredkin, the better known is called the billiard-ball computer. If it were ever actually built, it would consist of billiard balls ricocheting around in a labyrinth of "mirrors," bouncing off the mirrors at 45-degree angles, periodically banging into other moving balls at 90-degree angles, and occasionally exiting through doorways that occasionally would permit new balls to enter. To extract data from the machine, you would superimpose a grid over it, and the presence or absence of a ball in a given square at a given point in time would constitute information. Such a machine, Fredkin showed, would qualify as a universal computer; it could do anything that normal computers do. But unlike other computers, it would be perfectly reversible; to recover its history, all you would have to do is stop it and run it backward. Charles H. Bennett, of IBM's Thomas J. Watson Research Center, independently arrived at a different proof that reversible computation is possible, though he considers the billiard-ball computer to be in some respects a more elegant solution to the problem than his own.
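The article does not name it, but the logic element most closely associated with this line of work is now called the Fredkin gate, a controlled swap that billiard-ball collisions can implement. A minimal sketch of why such a gate is reversible (my illustration): it is its own inverse, so no input pattern is ever forgotten.

```python
# The Fredkin (controlled-swap) gate: if the control bit c is 1, swap a and b;
# otherwise pass them through.  Three bits in, three bits out, and every
# distinct input yields a distinct output -- nothing is erased.
from itertools import product

def fredkin(c, a, b):
    return (c, b, a) if c == 1 else (c, a, b)

# Reversibility check: applying the gate twice recovers the original bits.
for bits in product((0, 1), repeat=3):
    assert fredkin(*fredkin(*bits)) == bits
print("all eight inputs recovered; the gate is its own inverse")
```

With constant inputs wired in, the same gate can realize AND, OR, and NOT, so giving up erasure costs no computational power.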

The billiard-ball computer will never be built, because it is a platonic device, existing only in a world of ideals. The balls are perfectly round and hard, and the table perfectly smooth and hard. There is no friction between the two, and no energy is lost when balls collide. Still, although these ideals are unreachable, they could be approached eternally through technological refinement, and the heat produced by friction and collision could thus be reduced without limit. Since no additional heat would be created by information loss, there would be no necessary minimum on the total heat emitted by the computer. "The cleverer you are, the less heat it will generate," Fredkin says.

The connection Fredkin sees between the billiard-ball computer and digital physics exemplifies the odd assortment of evidence he has gathered in support of his theory. Molecules and atoms and their constituents, he notes, move around in theoretically reversible fashion, like billiard balls (although it is not humanly possible, of course, actually to take stock of the physical state of the universe, or even one small corner of it, and reconstruct history by tracing the motion of microscopic particles backward). Well, he asks, given the theoretical reversibility of physical reality, doesn't the theoretical feasibility of a reversible computer lend credence to the claim that computation is reality's basis?

No and yes. Strictly speaking, Fredkin's theory doesn't demand reversible computation. It is conceivable that an irreversible process at the very core of reality could give rise to the reversible behavior of molecules, atoms, electrons, and the rest. After all, irreversible computers (that is, all computers on the market) can simulate reversible billiard balls. But they do so in a convoluted way, Fredkin says, and the connection between an irreversible substratum and a reversible stratum would, similarly, be tortuous—or, as he puts it, "aesthetically obnoxious." Fredkin prefers to think that the cellular automaton underlying reversible reality does its work gracefully.

Consider, for example, a variant of the billiard-ball computer invented by Norman Margolus, the Canadian in MIT's information-mechanics group. Margolus showed how a two-state cellular automaton that was itself reversible could simulate the billiard-ball computer using only a simple rule involving a small neighborhood. This cellular automaton in action looks like a jazzed-up version of the original video game, Pong. It is an overhead view of endlessly energetic balls ricocheting off clusters of mirrors and each other. It is proof that a very simple binary cellular automaton can give rise to the seemingly more complex behavior of microscopic particles bouncing off each other. And, as a kind of bonus, these particular particles themselves amount to a computer. Though Margolus discovered this powerful cellular-automaton rule, it was Fredkin who had first concluded that it must exist and persuaded Margolus to look for it. "He has an intuitive idea of how things should be," Margolus says. "And often, if he can't come up with a rational argument to convince you that it should be so, he'll sort of transfer his intuition to you."
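A heavily simplified sketch of the block-partition idea behind such reversible automata (my toy example, not Margolus's actual billiard-ball rule): cut the grid into 2x2 blocks, apply a bijection to each block, and shift the partition by one cell on alternate ticks. Because every step is a bijection, running the inverse steps in reverse order recovers the starting state exactly.

```python
# A toy reversible cellular automaton in the block-partition style: each tick,
# every 2x2 block is rotated a quarter turn clockwise, and the partition is
# offset by one cell on odd ticks.  Each step is a bijection, so the entire
# history can be unearthed by undoing the steps in reverse order.
import random

SIZE = 8   # must be even

def step(grid, offset, clockwise=True):
    new = [row[:] for row in grid]
    for br in range(offset, SIZE + offset, 2):
        for bc in range(offset, SIZE + offset, 2):
            r0, r1 = br % SIZE, (br + 1) % SIZE
            c0, c1 = bc % SIZE, (bc + 1) % SIZE
            a, b = grid[r0][c0], grid[r0][c1]
            c, d = grid[r1][c0], grid[r1][c1]
            if clockwise:     # block  a b  becomes  c a
                              #        c d           d b
                new[r0][c0], new[r0][c1], new[r1][c0], new[r1][c1] = c, a, d, b
            else:             # the inverse: rotate counterclockwise
                new[r0][c0], new[r0][c1], new[r1][c0], new[r1][c1] = b, d, a, c
    return new

random.seed(0)
start = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

grid, offsets = start, []
for t in range(10):                   # run forward ten ticks...
    grid = step(grid, t % 2, clockwise=True)
    offsets.append(t % 2)
for off in reversed(offsets):         # ...then undo them in reverse order
    grid = step(grid, off, clockwise=False)
print("history recovered:", grid == start)
```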

That, really, is what Fredkin is trying to do when he argues that the universe is a computer. He cannot give you a single line of reasoning that leads inexorably, or even very plausibly, to this conclusion. He can tell you about the reversible computer, about Margolus's cellular automaton, about the many physical quantities, like light, that were once thought to be continuous but are now considered discrete, and so on. The evidence consists of many little things—so many, and so little, that in the end he is forced to convey his truth by simile. "I find the supporting evidence for my beliefs in ten thousand different places," he says. "And to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, Where is this animal? I say, Well, he was here, he's about this big, this that and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there." The story changes upon retelling. One day it's Bigfoot that Fredkin's trailing. Another day it's a duck: feathers are everywhere, and the tracks are webbed. Whatever the animal, the moral of the story remains the same: "What I see is so compelling that it can't be a creature of my imagination."

V. Deus ex Machina


THERE WAS SOMETHING BOTHERSOME ABOUT ISAAC Newton's theory of gravitation. The idea that the sun exerts a pull on the earth, and vice versa, sounded vaguely supernatural and, in any event, was hard to explain. How, after all, could such "action at a distance" be realized? Did the earth look at the sun, estimate the distance, and consult the law of gravitation to determine where it should move and how fast? Newton sidestepped such questions. He fudged with the Latin phrase si esset: two bodies, he wrote, behave as if impelled by a force inversely proportional to the square of their distance. Ever since Newton, physics has followed his example. Its "force fields" are, strictly speaking, metaphorical, and its laws purely descriptive. Physicists make no attempt to explain why things obey the law of electromagnetism or of gravitation. The law is the law, and that's all there is to it.

Fredkin refuses to accept authority so blindly. He posits not only laws but also a law-enforcement agency: a computer. Somewhere out there, he believes, is a machinelike thing that actually keeps our individual bits of space abiding by the rule of the universal cellular automaton. With this belief Fredkin crosses the line between physics and metaphysics, between scientific hypothesis and cosmic speculation. If Fredkin had Newton's knack for public relations, if he stopped at saying that the universe operates as if it were a computer, he could boost his stature among physicists while preserving the essence of his theory—the idea that the dynamics of physical reality will ultimately be better captured by a single recursive algorithm than by the mathematics of conventional physics, and that the continuity of time and space implicit in traditional mathematics is illusory.

Actually, some estimable physicists have lately been saying things not wholly unlike this stripped-down version of the theory. T. D. Lee, a Nobel laureate at Columbia University, has written at length about the possibility that time is discrete. And in 1984 Scientific American, not exactly a soapbox for cranks, published an article in which Stephen Wolfram, then of Princeton's Institute for Advanced Study, wrote, "Scientific laws are now being viewed as algorithms. . . . Physical systems are viewed as computational systems, processing information much the way computers do." He concluded, "A new paradigm has been born."

The line between responsible scientific speculation and off-the-wall metaphysical pronouncement was nicely illustrated by an article in which Tommaso Toffoli, the Italian in MIT's information-mechanics group, stayed barely on the responsible side of it. Published in the journal Physica D, the article was called "Cellular automata as an alternative to (rather than an approximation of) differential equations in modeling physics." Toffoli's thesis captured the core of Fredkin's theory yet had a perfectly reasonable ring to it. He simply suggested that the historical reliance of physicists on calculus may have been due not just to its merits but also to the fact that before the computer, alternative languages of description were not practical.

Why does Fredkin refuse to do the expedient thing—leave out the part about the universe actually being a computer? One reason is that he considers reprehensible the failure of Newton, and of all physicists since, to back up their descriptions of nature with explanations. He is amazed to find "perfectly rational scientists" believing in "a form of mysticism: that things just happen because they happen." The best physics, Fredkin seems to believe, is metaphysics.

The trouble with metaphysics is its endless depth. For every question that is answered, at least one other is raised, and it is not always clear that, on balance, any progress has been made. For example, where is this computer that Fredkin keeps talking about? Is it in this universe, residing along some fifth or sixth dimension that renders it invisible? Is it in some meta-universe? The answer is the latter, apparently, and to understand why, we need to return to the problem of the infinite regress, a problem that Rolf Landauer, among others, has cited with respect to Fredkin's theory. Landauer illustrates the problem by telling the old turtle story. A professor has just finished lecturing at some august university about the origin and structure of the universe, and an old woman in tennis shoes walks up to the lectern. "Excuse me, sir, but you've got it all wrong," she says. "The truth is that the universe is sitting on the back of a huge turtle." The professor decides to humor her. "Oh, really?" he asks. "Well, tell me, what is the turtle standing on?" The lady has a ready reply: "Oh, it's standing on another turtle." The professor asks, "And what is that turtle standing on?" Without hesitation, she says, "Another turtle." The professor, still game, repeats his question. A look of impatience comes across the woman's face. She holds up her hand, stopping him in mid-sentence. "Save your breath, sonny," she says. "It's turtles all the way down."

The infinite-regress problem afflicts Fredkin's theory in two ways, one of which we have already encountered: if matter is made of information, what is the information made of? And even if one concedes that it is no more ludicrous for information to be the most fundamental stuff than for matter or energy to be the most fundamental stuff, what about the computer itself? What is it made of? What energizes it? Who, or what, runs it, or set it in motion to begin with?

WHEN FREDKIN IS DISCUSSING THE PROBLEM OF THE infinite regress, his logic seems variously cryptic, evasive, and appealing. At one point he says, "For everything in the world where you wonder, 'What is it made out of?' the only thing I know of where the question doesn't have to be answered with anything else is for information." This puzzles me. Thousands of words later I am still puzzled, and I press for clarification. He talks some more. What he means, as near as I can tell, is what follows.

First of all, it doesn't matter what the information is made of, or what kind of computer produces it. The computer could be of the conventional electronic sort, or it could be a hydraulic machine made of gargantuan sewage pipes and manhole covers, or it could be something we can't even imagine. What's the difference? Who cares what the information consists of? So long as the cellular automaton's rule is the same in each case, the patterns of information will be the same, and so will we, because the structure of our world depends on pattern, not on the pattern's substrate; a carbon atom, according to Fredkin, is a certain configuration of bits, not a certain kind of bits.
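
To make the substrate-independence point concrete, here is a minimal sketch in Python (my illustration, not Fredkin's actual rule): it runs one well-known elementary cellular automaton, Wolfram's Rule 110, over two different kinds of "stuff"—integers and characters—and checks that the evolving patterns coincide, since the rule consults only the configuration of neighboring cells, never what a cell is made of.

    # A minimal sketch, not Fredkin's rule: Wolfram's elementary Rule 110,
    # applied to two different substrates. The rule looks only at the
    # pattern of neighbors, so both substrates evolve identically.
    RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
                (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    def step(cells, one, zero):
        """Advance one generation; 'one'/'zero' say what a bit is 'made of'."""
        n = len(cells)
        def bit(x):                       # map substrate value -> abstract 0/1
            return 1 if x == one else 0
        return [one if RULE_110[(bit(cells[(i - 1) % n]),
                                 bit(cells[i]),
                                 bit(cells[(i + 1) % n]))] else zero
                for i in range(n)]

    # The same initial pattern, realized in two different kinds of "stuff"
    ints = [0] * 20 + [1] + [0] * 20
    chars = ['.'] * 20 + ['#'] + ['.'] * 20

    for _ in range(10):
        ints = step(ints, one=1, zero=0)
        chars = step(chars, one='#', zero='.')

    # Bit for bit, the patterns agree, whatever the bits are made of
    assert [1 if c == '#' else 0 for c in chars] == ints
    print(''.join(chars))

The choice of rule and of substrates here is arbitrary; the point is only that the dynamics depend on pattern, not material.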

Besides, we can never know what the information is made of or what kind of machine is processing it. This point is reminiscent of childhood conversations that Fredkin remembers having with his sister, Joan, about the possibility that they were part of a dream God was having. "Say God is in a room and on his table he has some cookies and tea," Fredkin says. "And he's dreaming this whole universe up. Well, we can't reach out and get his cookies. They're not in our universe. See, our universe has bounds. There are some things in it and some things not." The computer is not; hardware is beyond the grasp of its software. Imagine a vast computer program that contained bodies of information as complex as people, motivated by bodies of information as complex as ideas. These "people" would have no way of figuring out what kind of computer they owed their existence to, because everything they said, and everything they did—including formulating metaphysical hypotheses—would depend entirely on the programming rules and the original input. As long as these didn't change, the same metaphysical conclusions would be reached in an old XD-1 as in a Kaypro 2.

This idea—that sentient beings could be constitutionally numb to the texture of reality—has fascinated a number of people, including, lately, computer scientists. One source of the fascination is the fact that any universal computer can simulate another universal computer, and the simulated computer can, because it is universal, do the same thing. So it is possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious of those containers. To anyone who has lived intimately with, and thought deeply about, computers, says Charles Bennett, of IBM's Watson Lab, this notion is very attractive. "And if you're too attracted to it, you're likely to part company with the physicists." Physicists, Bennett says, find heretical the notion that anything physical is impervious to experiment, removed from the reach of science.

Fredkin's belief in the limits of scientific knowledge may sound like evidence of humility, but in the end it permits great ambition; it helps him go after some of the grandest philosophical questions around. For example, there is a paradox that crops up whenever people think about how the universe came to be. On the one hand, it must have had a beginning. After all, things usually do. Besides, the cosmological evidence suggests a beginning: the big bang. Yet science insists that it is impossible for something to come from nothing; the laws of physics forbid the amount of energy and mass in the universe to change. So how could there have been a time when there was no universe, and thus no mass or energy?

Fredkin escapes from this paradox without breaking a sweat. Granted, he says, the laws of our universe don't permit something to come from nothing. But he can imagine laws that would permit such a thing; in fact, he can imagine algorithmic laws that would permit such a thing. The conservation of mass and energy is a consequence of our cellular automaton's rules, not a consequence of all possible rules. Perhaps a different cellular automaton governed the creation of our cellular automaton—just as the rules for loading software are different from the rules running the program once it has been loaded.

What's funny is how hard it is to doubt Fredkin when with such assurance he makes definitive statements about the creation of the universe—or when, for that matter, he looks you in the eye and tells you the universe is a computer. Partly this is because, given the magnitude and intrinsic intractability of the questions he is addressing, his answers aren't all that bad. As ideas about the foundations of physics go, his are not completely out of the ball park; as metaphysical and cosmogonic speculation goes, his isn't beyond the pale.

But there's more to it than that. Fredkin is, in his own odd way, a rhetorician of great skill. He talks softly, even coolly, but with a low-key power, a quiet and relentless confidence, a kind of high-tech fervor. And there is something disarming about his self-awareness. He's not one of these people who say crazy things without having so much as a clue that you're sitting there thinking what crazy things they are. He is acutely conscious of his reputation; he knows that some scientists are reluctant to invite him to conferences for fear that he'll say embarrassing things. But he is not fazed by their doubts. "You know, I'm a reasonably smart person. I'm not the smartest person in the world, but I'm pretty smart—and I know that what I'm involved in makes perfect sense. A lot of people build up what might be called self-delusional systems, where they have this whole system that makes perfect sense to them, but no one else ever understands it or buys it. I don't think that's a major factor here, though others might disagree." It's hard to disagree, when he so forthrightly offers you the chance.

Still, as he gets further from physics, and more deeply into philosophy, he begins to try one's trust. For example, having tackled the question of what sort of process could generate a universe in which spontaneous generation is impossible, he aims immediately for bigger game: Why was the universe created? Why is there something here instead of nothing?

WHEN THIS SUBJECT COMES UP, WE ARE SITTING IN the Fredkins' villa. The living area has pale rock walls, shiny-clean floors made of large white ceramic tiles, and built-in bookcases made of blond wood. There is lots of air—the ceiling slopes up in the middle to at least twenty feet—and the air keeps moving; some walls consist almost entirely of wooden shutters that, when open, let the sea breeze pass as fast as it will. I am glad of this. My skin, after three days on Fredkin's island, is hot, and the air, though heavy, is cool. The sun is going down.

Fredkin, sitting on a white sofa, is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold.
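
The difference is easy to see in code. In the sketch below (an illustration only—the rule shown, Wolfram's elementary Rule 30, is my stand-in for an unpredictable system, not anything Fredkin proposed), a linear system can be jumped straight to any future time with a closed-form expression, while the cellular automaton has no known shortcut: to learn its state at generation N, you must compute generations 1 through N.

    # A minimal sketch of "analytical" versus "computational" prediction.
    # The linear system has a closed form, so its millionth state costs one
    # multiplication. The cellular automaton (Wolfram's Rule 30, chosen only
    # as an example of an unpredictable rule) must be run step by step.
    RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
               (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    def analytic_state(x0, velocity, t):
        """Closed form: jump straight to time t without visiting 1..t-1."""
        return x0 + velocity * t

    def ca_state(cells, t):
        """No shortcut: grind through every intermediate generation up to t."""
        n = len(cells)
        for _ in range(t):
            cells = [RULE_30[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                     for i in range(n)]
        return cells

    print(analytic_state(x0=0.0, velocity=3.0, t=1_000_000))   # instant
    print(ca_state([0] * 40 + [1] + [0] * 40, t=200))          # watched unfold

This is what Fredkin means by having to do every single step: for rules like this, the fastest way to learn the outcome is the computation itself.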

This indeterminacy is very suggestive. It suggests, first of all, why so many "chaotic" phenomena, like smoke rising from a cigarette, are so difficult to predict using conventional mathematics. (In fact, some scientists have taken to modeling chaotic systems with cellular automata.) To Fredkin, it also suggests that even if human behavior is entirely determined, entirely inevitable, it may be unpredictable; there is room for "pseudo free will" in a completely mechanistic universe. But on this particular evening Fredkin is interested mainly in cosmogony, in the implications of this indeterminacy for the big question: Why does this giant computer of a universe exist?

It's simple, Fredkin explains: "The reason is, there is no way to know the answer to some question any faster than what's going on."

Aware that he may have said something enigmatic, Fredkin elaborates. Suppose, he says, that there is an all-powerful God. "And he's thinking of creating this universe. He's going to spend seven days on the job—this is totally allegorical—or six days on the job. Okay, now, if he's as all-powerful as you might imagine, he can say to himself, 'Wait a minute, why waste the time? I can create the whole thing, or I can just think about it for a minute and just realize what's going to happen so that I don't have to bother.' Now, ordinary physics says, Well, yeah, you got an all-powerful God, he can probably do that. What I can say is—this is very interesting—I can say I don't care how powerful God is; he cannot know the answer to the question any faster than doing it. Now, he can have various ways of doing it, but he has to do every Goddamn single step with every bit or he won't get the right answer. There's no shortcut."

Around sundown on Fredkin's island all kinds of insects start chirping or buzzing or whirring. Meanwhile, the wind chimes hanging just outside the back door are tinkling with methodical randomness. All this music is eerie and vaguely mystical. And so, increasingly, is the conversation. It is one of those moments when the context you've constructed falls apart, and gives way to a new, considerably stranger one. The old context in this case was that Fredkin is an iconoclastic thinker who believes that space and time are discrete, that the laws of the universe are algorithmic, and that the universe works according to the same principles as a computer (he uses this very phrasing in his most circumspect moments). The new context is that Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker estimate pi to nine jillion decimal places.

So, I say, you're arguing that the reason we're here is that some being wanted to theorize about reality, and the only way he could test his theories was to create reality? "No, you see, my explanation is much more abstract. I don't imagine there is a being or anything. I'm just using that to talk to you about it. What I'm saying is that there is no way to know what the future is any faster than running this [the universe] to get to that [the future]. Therefore, what I'm assuming is that there is a question and there is an answer, okay? I don't make any assumptions about who has the question, who wants the answer, anything."

But the more we talk, the closer Fredkin comes to the religious undercurrents he's trying to avoid. "Every astrophysical phenomenon that's going on is always assumed to be just accident," he says. "To me, this is a fairly arrogant position, in that intelligence—and computation, which includes intelligence, in my view—is a much more universal thing than people think. It's hard for me to believe that everything out there is just an accident." This sounds awfully like a position that Pope John Paul II or Billy Graham would take, and Fredkin is at pains to clarify his position: "I guess what I'm saying is—I don't have any religious belief. I don't believe that there is a God. I don't believe in Christianity or Judaism or anything like that, okay? I'm not an atheist, I'm not an agnostic, I'm just in a simple state. I don't know what there is or might be. But what I can say is that it seems likely to me that this particular universe we have is a consequence of something I would call intelligent." Does he mean that there's something out there that wanted to get the answer to a question? "Yeah." Something that set up the universe to see what would happen? "In some way, yes."

VI. The Language Barrier


IN 1974, UPON RETURNING TO MIT FROM CALTECH, Fredkin was primed to revolutionize science. Having done the broad conceptual work (concluding that the universe is a computer), he would enlist the aid of others in taking care of the details—translating the differential equations of physics into algorithms, experimenting with cellular-automaton rules and selecting the most elegant, and, eventually, discovering The Rule, the single law that governs every bit of space and accounts for everything. "He figured that all he needed was some people who knew physics, and that it would all be easy," Margolus says.

One early obstacle was Fredkin's reputation. He says, "I would find a brilliant student; he'd get turned on to this stuff and start to work on it. And then he would come to me and say, 'I'm going to work on something else.' And I would say, 'Why?' And I had a few very honest ones, and they would say, 'Well, I've been talking to my friends about this and they say I'm totally crazy to work on it. It'll ruin my career. I'll be tainted forever.'" Such fears were not entirely unfounded. Fredkin is one of those people who arouse either affection, admiration, and respect, or dislike and suspicion. The latter reaction has come from a number of professors at MIT, particularly those who put a premium on formal credentials, proper academic conduct, and not sounding like a crackpot. Fredkin was never oblivious of the complaints that his work wasn't "worthy of MIT," nor of the movements, periodically afoot, to sever, or at least weaken, his ties to the university. Neither were his graduate students.

Fredkin's critics finally got their way. In the early 1980s, while he was serving briefly as the president of Boston's CBS-TV affiliate, someone noticed that he wasn't spending much time around MIT and pointed to a faculty rule limiting outside professional activities. Fredkin was finding MIT "less and less interesting" anyway, so he agreed to be designated an adjunct professor. As he recalls the deal, he was going to do a moderate amount of teaching and be paid an "appropriate" salary. But he found the real salary insulting, declined payment, and never got around to teaching. Not surprisingly, he was not reappointed adjunct professor when his term expired, in 1986. Meanwhile, he had so nominally discharged his duties as the head of the information-mechanics group that the title was given to Toffoli.

Fredkin doubts that his ideas will achieve widespread acceptance anytime soon. He believes that most physicists are so deeply immersed in their kind of mathematics, and so uncomprehending of computation, as to be incapable of grasping the truth. Imagine, he says, that a twentieth-century time traveler visited Italy in the early seventeenth century and tried to reformulate Galileo's ideas in terms of calculus. Although it would be a vastly more powerful language of description than the old one, conveying its importance to the average scientist would be nearly impossible. There are times when Fredkin breaks through the language barrier, but they are few and far between. He can sell one person on one idea, another on another, but nobody seems to get the big picture. It's like a painting of a horse in a meadow, he says: "Everyone else only looks at it with a microscope, and they say, 'Aha, over here I see a little brown pigment. And over here I see a little green pigment.' Okay. Well, I see a horse."

Fredkin's research has nevertheless paid off in unanticipated ways. Comparing a computer's workings and the dynamics of physics turned out to be a good way to figure out how to build a very efficient computer—one that harnesses the laws of physics with great economy. Thus Toffoli and Margolus have designed an inexpensive but powerful cellular-automata machine, the CAM 6. The "machine" is actually a circuit board that, when inserted in a personal computer, permits it to orchestrate visual complexity at a speed that can be matched only by general-purpose computers costing hundreds of thousands of dollars. Since the circuit board costs only around $1,500, this engrossing machine may well entice young scientific revolutionaries into joining the quest for The Rule. Fredkin speaks of this possibility in almost biblical terms: "The big hope is that there will arise somewhere someone who will have some new, brilliant ideas," he says. "And I think this machine will have a dramatic effect on the probability of that happening."

But even if it does happen, it will not ensure Fredkin a place in scientific history. He is not really on record as believing that the universe is a computer. Although some of his tamer insights have been adopted, fleshed out, and published by Toffoli or Margolus, sometimes in collaboration with him, Fredkin himself has published nothing on digital physics. His stated rationale for not publishing has to do with, of all things, lack of ambition. "I'm just not terribly interested," he says. "A lot of people are fantastically motivated by publishing. It's part of a whole thing of getting ahead in the world." Margolus has another explanation: "Writing something down in good form takes a lot of time. And usually by the time he's done with the first or second draft, he has another wonderful idea that he's off on."

These two theories have merit, but so does a third: Fredkin can't write for academic journals. He doesn't know how. His erratic, hybrid education has left him with a mixture of terminology that neither computer scientists nor physicists recognize as their native tongue. Further, he is not schooled in the rules of scientific discourse; he seems just barely aware of the line between scientific hypothesis and philosophical speculation. He is not politic enough to confine his argument to its essence: that time and space are discrete, and that the state of every point in space at any point in time is determined by a single algorithm. In short, the very background that has allowed Fredkin to see the universe as a computer seems to prevent him from sharing his vision. If he could talk like other scientists, he might see only the things that they see.


Robert Wright is the author of Three Scientists and Their Gods: Looking for Meaning in an Age of Information, The Moral Animal: Evolutionary Psychology and Everyday Life, and Nonzero: The Logic of Human Destiny.
Copyright © 2002 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; April 1988; Did the Universe Just Happen?; Volume 261, No. 4; page 29.
Killexams : The 25 Worst Tech Products of All Time

At PC World, we spend most of our time talking about products that make your life easier or your work more productive. But it's the lousy ones that linger in our memory long after their shrinkwrap has shriveled, and that make tech editors cry out, "What have I done to deserve this?"

Still, even the worst products deserve recognition (or deprecation). So as we put together our list of World Class winners for 2006, we decided also to spotlight the 25 worst tech products that have been released since PC World began publishing nearly a quarter-century ago.

Picking our list wasn't exactly rocket science; it was more like group therapy. PC World staffers and contributors nominated their candidates and then gave each one the sniff test. We sought the worst of the worst--operating systems that operated badly, hardware that never should have left the factory, applications that spied on us and fed our data to shifty marketers, and products that left a legacy of poor performance and bad behavior.

And because one person's dog can be another's dish, we also devised a (Dis)Honorable Mention list for products that didn't quite achieve universal opprobrium.

Of course, most truly awful ideas never make it out of somebody's garage. Our bottom 25 designees are all relatively well-known items, and many had multimillion-dollar marketing campaigns behind them. In other words, they were made by people who should have known better. In fact, three of the ten worst were made by Microsoft. Coincidence? We think not.

The first entry in our Hall of Shame: The ISP that everyone loves to hate...


How do we loathe AOL? Let us count the ways. Since America Online emerged from the belly of a BBS called Quantum "PC-Link" in 1989, users have suffered through awful software, inaccessible dial-up numbers, rapacious marketing, in-your-face advertising, questionable billing practices, inexcusably poor customer service, and enough spam to last a lifetime. And all the while, AOL remained more expensive than its major competitors. This lethal combination earned the world's biggest ISP the top spot on our list of bottom feeders.

AOL succeeded initially by targeting newbies, using brute-force marketing techniques. In the 90s you couldn't open a magazine (PC World included) or your mailbox without an AOL disk falling out of it. This carpet-bombing technique yielded big numbers: At its peak, AOL claimed 34 million subscribers worldwide, though it never revealed how many were just using up their free hours.

Once AOL had you in its clutches, escaping was notoriously difficult. Several states sued the service, claiming that it continued to bill customers after they had requested cancellation of their subscriptions. In August 2005, AOL paid a $1.25 million fine to the state of New York and agreed to change its cancellation policies--but the agreement covered only people in New York.

Ultimately the Net itself--which AOL subscribers were finally able to access in 1995--made the service's shortcomings painfully obvious. Prior to that, though AOL offered plenty of its own online content, it walled off the greater Internet. Once people realized what content was available elsewhere on the Net, they started wondering why they were paying AOL. And as America moved to broadband, many left their sluggish AOL accounts behind. AOL is now busy rebranding itself as a content provider, not an access service.

Though America Online has shown some improvement lately--with better browsers and e-mail tools, fewer obnoxious ads, scads of broadband content, and innovative features such as parental controls--it has never overcome the stigma of being the online service for people who don't know any better.

In order for your browser to display the following paragraph this site must download new software; please wait. Sorry, the requested codec was not found. Please upgrade your system.

A frustrating inability to play media files--due in part to constantly changing file formats--was only part of Real's problem. RealPlayer also had a disturbing way of making itself a little too much at home on your PC--installing itself as the default media player, taking liberties with your Windows Registry, popping up annoying "messages" that were really just advertisements, and so on.

And some of RealNetworks' habits were even more troubling. For example, shortly after RealJukeBox appeared in 1999, security researcher Richard M. Smith discovered that the software was assigning a unique ID to each user and phoning home with the titles of media files played on it--while failing to disclose any of this in its privacy policy. Turns out that RealPlayer G2, which had been out since the previous year, also broadcast unique IDs. After a tsunami of bad publicity and a handful of lawsuits, Real issued a patch to prevent the software from tracking users' listening habits. But less than a year later, Real was in hot water again for tracking the habits of its RealDownload download-management software customers.

To be fair, RealNetworks deserves credit for offering a free media player and for hanging in there against Microsoft's relentless onslaught. We appreciate the fact that there's an alternative to Windows Media Player; we just wish it were a better one.

Back in 1995, when RAM cost $30 to $50 a megabyte and Windows 95 apps were demanding more and more of it, the idea of "doubling" your system memory by installing a $30 piece of software sounded mighty tempting. The 700,000 users who bought Syncronys's SoftRAM products certainly thought so. Unfortunately, that's not what they got.

It turns out that all SoftRAM really did was expand the size of Windows' hard disk cache--something a moderately savvy user could do without any extra software in about a minute. And even then, the performance boost was negligible. The FTC dubbed Syncronys's claims "false and misleading," and the company was eventually forced to pull the product from the market and issue refunds. After releasing a handful of other bad Windows utilities, the company filed for Chapter 7 bankruptcy in 1999. It will not be missed.

This might be the worst version of Windows ever released--or, at least, since the dark days of Windows 2.0. Windows Millennium Edition (aka Me, or the Mistake Edition) was Microsoft's follow-up to Windows 98 SE for home users. Shortly after Me appeared in late 2000, users reported problems installing it, getting it to run, getting it to work with other hardware or software, and getting it to stop running. Aside from that, Me worked great.

To its credit, Me introduced features later made popular by Windows XP, such as system restore. Unfortunately, it could also restore files you never wanted to see again, like viruses that you'd just deleted. Forget Y2K; this was the real millennium bug.


When you stick a music CD into your computer, you shouldn't have to worry that it will turn your PC into a hacker's plaything. But that's exactly what Sony BMG Music Entertainment's music discs did in 2005. The discs' harebrained copy protection software installed a rootkit that made it invisible even to antispyware or antivirus software. Any moderately clever cyber attacker could then use the same rootkit to hide, say, a keylogger to capture your bank account information, or a remote-access Trojan to turn your PC into a zombie.

Security researcher Dan Kaminsky estimated that more than half a million machines were infected by the rootkit. After first downplaying the problem and then issuing a "fix" that made things worse, Sony BMG offered to refund users' money and replace the faulty discs. Since then, the record company has been sued up the wazoo; a federal court judge recently approved a settlement in the national class action suit. Making your machine totally vulnerable to attacks--isn't that Microsoft's job?

Few products get accused of killing Christmas for thousands of kids, but that fate befell Disney's first CD-ROM for Windows. The problem: The game relied on Microsoft's new WinG graphics engine, and video card drivers had to be hand-tuned to work with it, says Alex St. John. He's currently CEO of game publisher WildTangent, but in the early 1990s he was Microsoft's first "game evangelist."

In late 1994, Compaq released a Presario whose video drivers hadn't been tested with WinG. When parents loaded the Lion King disc into their new Presarios on Christmas morning, many children got their first glimpse of the Blue Screen of Death. But this sad story has a happy ending. The WinG debacle led Microsoft to develop a more stable and powerful graphics engine called DirectX. And the team behind DirectX went on to build the Xbox--restoring holiday joy for a new generation of kids.

No list of the worst of the worst would be complete without Windows' idiot cousin, Bob. Designed as a "social" interface for Windows 3.1, Bob featured a living room filled with clickable objects, and a series of cartoon "helpers" like Chaos the Cat and Scuzz the Rat that walked you through a small suite of applications. Fortunately, Bob was soon buried in the avalanche of hype surrounding Windows 95, though some of the cartoons lived on to annoy users of Microsoft Office and Windows XP (Clippy the animated paper clip, anyone?).

Mostly, Bob raised more questions than it answered. Like, had anyone at Microsoft actually used Bob? Did they think anyone else would? And did they deliberately make Bob's smiley face logo look like Bill Gates, or was that just an accident?

Full of features, easy to use, and a virtual engraved invitation to hackers and other digital delinquents, Internet Explorer 6.x might be the least secure software on the planet. How insecure? In June 2004, the U.S. Computer Emergency Readiness Team (CERT) took the unusual step of urging PC users to use a browser--any browser--other than IE. Their reason: IE users who visited the wrong Web site could end up infected with the Scob or Download.Ject keylogger, which could be used to steal their passwords and other personal information. Microsoft patched that hole, and the next one, and the one after that, and so on, ad infinitum.

To be fair, its ubiquity paints a big red target on it--less popular apps don't draw nearly as much fire from hackers and the like. But here's hoping that Internet Explorer 7 springs fewer leaks than its predecessor.

Digital music is such a great idea that even record companies finally, begrudgingly accepted it after years of implacable opposition. In 2002, two online services backed by music industry giants proposed giving consumers a legitimate alternative to illegal file sharing. But the services' stunningly brain-dead features showed that the record companies still didn't get it.

PressPlay charged $15 per month for the right to listen to 500 low-quality audio streams, download 50 audio tracks, and burn 10 tracks to CD. It didn't sound like an awful deal, until you found out that not every song could be downloaded, and that you couldn't burn more than two tracks from the same artist. MusicNet cost $10 per month for 100 streamed songs and 100 downloads, but each downloaded audio file expired after only 30 days, and every time you renewed the song it counted against your allotment.

Neither service's paltry music selections could compete against the virtual feast available through illicit means. Several billion illegal downloads later, an outside company--Apple, with its iTunes Music Service--showed the record companies the right way to market digital music.

In the early days of the PC, dBASE was synonymous with database. By the late 1980s, Ashton-Tate's flagship product owned nearly 70 percent of the PC database market. But dBASE IV changed all that. Impossibly slow and filled with more bugs than a rain forest, the $795 program was an unmitigated disaster.

Within a year of its release, Ashton-Tate's market share had plummeted to the low 40s. A patched-up version, dBASE IV 1.1, appeared two years later, but by then it was too late. In July 1991 the company merged with Borland, which eventually discontinued dBASE in favor of its own database products and sold the rights in 1999 to a new company, dataBased Intelligence, Inc.

The name-your-price model worked for airline tickets, rental cars, and hotels--why not groceries and gas? Unfortunately, even Priceline spokescaptain William Shatner couldn't keep these services in orbit. Grocery shoppers could find real discounts bidding for products online, but only if they weren't picky about brands and were willing to follow Byzantine rules on what they could buy and how they paid for them.

Fuel customers had to pay for petrol online, wait for a Priceline gas card to arrive in the mail, and then find a local station that would honor it--a lot of hassle to save a few pennies per gallon. In less than a year, WebHouse Club, the Priceline affiliate that ran both programs, ran out of gas--and cash--and was forced to shut down.

Back in the mid-90s, so-called "push" technology was all the rage. In place of surfing the Web for news and information, push apps like the PointCast Network would deliver customized information directly to your desktop--along with a healthy serving of ads. But push quickly turned into a drag, as PointCast's endless appetite for bandwidth overwhelmed dial-up connections and clogged corporate networks.

In addition, PointCast's proprietary screensaver/browser had a nasty habit of commandeering your computer and not giving it back. Companies began to ban the application from offices and cubicles, and push got shoved out the door. Ironically, the idea of push has made a comeback of sorts via low-bandwidth RSS feeds. But too late for PointCast, which sent out its last broadcast in early 2000.

Talk about your bastard offspring. IBM's attempt to build an inexpensive computer for homes and schools was an orphan almost from the start. The infamous "Chiclet" keyboard on the PCjr. was virtually unusable for typing, and the computer couldn't run much of the software written for its hugely successful parent, the IBM PC.

A price tag nearly twice that of competing home systems from Commodore and Atari didn't improve the situation. Two years after Junior's splashy debut, IBM sent him to his room and never let him out again.


After a decade as one of the computer industry's major PC builders, the folks at Gateway 2000 wanted to celebrate--not just by popping a few corks, but by offering a specially configured system to show some customer appreciation.

But instead of Cristal champagne, buyers got Boone's Farm--the so-called 6X CD-ROM spun at 4X or slower (a big performance hit in 1995), the video card was a crippled version of what people thought they were getting, and the surround-sound speakers weren't actually surround-capable. Perhaps Gateway was sticking to the traditional gift for a tenth anniversary: It's tin, not gold.

Click-click-click. That was the sound of data dying on thousands of Iomega Zip drives. Though Iomega sold tens of millions of Zip and Jaz drives that worked flawlessly, thousands of the drives died mysteriously, issuing a clicking noise as the drive head became misaligned and clipped the edge of the removable media, rendering any data on that disc permanently inaccessible.

Iomega largely ignored the problem until angry customers filed a class action suit in 1998, which the company settled three years later by offering rebates on future products. And the Zip disk, once the floppy's heir apparent, has largely been eclipsed by thumb drives and cheaper, faster, more capacious rewritable CDs and DVDs.

Thank Comet Cursor for introducing spyware to an ungrateful nation. This simple program had one purpose: to change your mouse cursor into Bart Simpson, Dilbert, or one of thousands of other cutesy icons while you were visiting certain Web sites. But Comet had other habits that were not so cute.

For example, it assigned your computer a unique ID and phoned home whenever you visited a Comet-friendly Web site. When you visited certain sites, it could install itself into Internet Explorer without your knowledge or explicit consent. And it was bundled with RealPlayer 7 (yet another reason to loathe RealPlayer). Some versions would hijack IE's search assistant or cause the browser to crash.

Though Comet's founders insisted that the program was not spyware, thousands of users disagreed. Comet Systems was bought by pay-per-click ad company FindWhat in 2004; earlier this year, Comet's cursor software scurried down a mouse hole, never to be seen again.

Editor's note: After publication of this article, we heard from a founder of Comet Systems who took issue with our characterization of Comet Cursor's behavior. In response we have amended the description of how Comet Cursor got installed on PCs. See PC World's Techlog for more information.

Some buildings are portable, if you have access to a Freightliner. Stonehenge is a portable sun dial, if you have enough people on hand to get things rolling. And in 1989, Apple offered a "portable" Macintosh--a 4-inch-thick, 16-pound beast that severely strained the definition of "laptop"--and the aching backs of its porters.

Huge lead-acid batteries contributed to its weight and bulk; the batteries were especially important because Portable wouldn't run on AC power. Some computers are affordable, too; the Portable met that description only if you had $6500 of extra cash on hand.

Fast, big, and highly unreliable, this 75GB hard drive was quickly dubbed the "Deathstar" for its habit of suddenly failing and taking all of your data with it.

About a year after IBM released the Deskstar, users filed a class action suit, alleging that IBM had misled customers about its reliability. IBM denied all liability, but last June it agreed to pay $100 to Deskstar owners whose drives and data had departed their desks and gone on to a celestial reward. Well before that, IBM had washed its hands of the Deathstar, selling its hard drive division to Hitachi in 2002.

The 14-ounce OQO Model 1 billed itself as the "world's smallest Windows XP computer"--and that was a big part of its problem. You needed a magnifying glass to read icons or text on its 5-by-3-inch screen, and the hide-away keypad was too tiny to accommodate even two adult fingers.

The Model 1 also ran hot to the touch, and at $1900+ it could easily burn a hole in your wallet. Good things often come in small packages, but not this time.

Appearing at the tail end of the dot com craze, the CueCat was supposed to make it easier for magazine and newspaper readers to find advertisers' Web sites (because apparently it was too challenging to type www.pepsi.com into your browser).

The company behind the device, DigitalConvergence, mailed hundreds of thousands of these cat-shaped bar-code scanners to subscribers of magazines and newspapers. Readers were supposed to connect the device to a computer, install some software, scan the barcodes inside the ads, and be whiskered away to advertisers' websites. Another "benefit": The company used the device to gather personally identifiable information about its users.

The CueCat's maker was permanently declawed in 2001, but not before it may have accidentally exposed its user database to hackers.

Some things just aren't meant to be done while walking or driving, and one of them is watching DVDs. Unfortunately, that message was lost on Eyetop.net, makers of the Eyetop Wearable DVD Player.

This system consisted of a standard portable DVD player attached to a pair of heavy-duty shades that had a tiny 320-by-240-pixel LCD embedded in the right eyepiece. You were supposed to carry the DVD player and battery pack in an over-the-shoulder sling, put on the eyeglasses, and then... squint. Or maybe wear a patch on your left eye as you walked and watched at the same time.

Up close, the LCD was supposed to simulate a 14-inch screen. Unfortunately, the only thing the Eyetop stimulated was motion sickness.

Before Xbox, before PlayStation, before DreamCast, there was Apple's Pippin. Wha-huh? That's right--Apple had an Internet-capable game console that connected to your TV. But it ran on a weak PowerPC processor and came with a puny 14.4-kbps modem, so it was stupendously slow offline and online.

Then, too, it was based on the Mac OS, so almost no games were available for it. And it cost nearly $600--nearly twice as much as other, far more powerful game consoles. Underpowered, overpriced, and underutilized--that pretty much describes everything that came out of Apple in the mid-90s.


In the late 90s, companies competed to dangle free PCs in front of you: All you had to do was sign up, and a PC would eventually show up at your door. But one way or another, there was always a catch: You had to sign up for a long-term ISP agreement, or tolerate an endless procession of Web ads, or surrender reams of personal information. Free-PC.com may have been the creepiest of them all. First you filled out an extensive questionnaire on your income, interests, racial and marital status, and more. Then you had to spend at least 10 hours a week on the PC and at least 1 hour surfing the Web using Free-PC's ISP.

In return you got a low-end Compaq Presario with roughly a third of the screen covered in ads. And while you watched the PC (and the ads), Free-PC watched you--recording where you surfed, what software you used, and who knows what else.

We can't say whether this would have led to some Big Brotherish nightmare, because within a year Free-PC.com merged with eMachines. By then, other vendors had similarly concluded that "free" computers just didn't pay.

Few products literally stink, but this one did--or at least it would have, had it progressed beyond the prototype stage.

In 2001, DigiScents unveiled the iSmell, a shark-fin-shaped gizmo that plugged into your PC's USB port and wafted appropriate scents as you surfed smell-enabled Web sites--say, perfume as you were browsing Chanel.com, or cheese doodles at Frito-Lay.com. But skeptical users turned up their noses at the idea, making the iSmell the ultimate in vaporware.

As the first "autostereo" 3D notebook, Sharp's RD3D was supposed to display 3D images without requiring the use of funny glasses. But "auto-headache" was more like it, as the RD3D was painful to look at.

When you pressed the button to enable 3D mode, the notebook's performance slowed, and the 3D effect was noticeable only within a very narrow angle--and if you moved your head, it disappeared. Maybe the funny glasses weren't so bad after all.

They may not have scored a spot in our baker's two dozen of infamy, but these ten products were too flawed to be forgotten.

Apple Newton MessagePad (1994): Yes, we know that the Apple Newton also happens to be number 28 on our list of the 50 greatest gadgets (so no letters, please). But while Apple's innovative concept won kudos, the Newton's execution was lacking, especially in its first version. Aside from its famously awful handwriting recognition, the Newton was too bulky and too expensive for all but Apple acolytes.

Apple Puck Mouse (1998): Introduced with the original iMac, Apple's stylishly round hockey-puck-shaped mouse had only one button (natch), but figuring out where that button was and orienting the mouse without looking down created an ergonomic nightmare. Apple added a small indentation in a later version so you could figure out where to put your finger, but you still had to find the indentation. The puck got chucked a couple years after it was introduced.

Apple Twentieth Anniversary Macintosh (1997): Learning nothing from Gateway 2000's fiasco a couple of years earlier with its 10th Anniversary PC, Apple in 1997 released a specially designed bronze-colored Mac to celebrate its 20th year of making computers. This one came with a Bose sound system and leather palm rests, but it also had a weak processor, no network card, and a slow CD-ROM drive (because a faster one couldn't be mounted vertically in its special case). To participate in the celebration, Mac lovers had to plunk down $7500--three times what the same computer cost in a different case. It may qualify as the priciest case mod of all time. Steve Jobs might have bought one; we doubt whether many others did.

Circuit City DiVX DVDs (1998): Remember the disposable DVD? Circuit City's attempt at starting its own pay-per-view movie service entailed proprietary set-top players and disposable DiVX movie discs that expired 48 hours after you started watching them. The player required a phone line so it could check whether you had permission to watch. But as it turned out, consumers preferred their DVDs without strings, and Circuit City ended up dropping $114 million on its little experiment.

Concord Eye-Q Go Wireless Digital Camera (2004): The first Bluetooth-enabled digital camera cost a little more than otherwise comparable drugstore cameras, but for the premium you got the ability to transfer 7MB of images in a nap-inducing 15 minutes. (Transfer time using an old-fashioned USB cable: 8 seconds.) The Bluetooth was a bust, the camera was crude, and the pictures were awful. Aside from that, it was just fabulous.

Dell SL320i (1993): The Ford Pinto of notebook PCs, this model had the unfortunate habit of combusting and eventually had to be recalled. Laptops from Apple, HP, and Sony, as well as a handful of other Dell models suffered similar overheating problems over the years, but the SL320i blazed the trail.

Motorola Rokr E1 (2005): The world's most popular digital music player meets the world's coolest looking phones; what could possibly go wrong? Well, plenty. The Rokr E1 held only about a hundred songs, file transfers were painfully slow, the iTunes interface was sluggish, and--duh--you couldn't download tunes via a cell connection. This phone ain't rockin', so don't bother knockin'.

3Com Audrey (1999): Some of us had a soft spot in our hearts for Audrey, the Internet appliance--that supple form, the cute way her light blinked green when a new e-mail message arrived. But with limited functionality and no broadband support, she failed to excite the masses, instead becoming a symbol of why Net appliances bombed.

Timex Data Link Watch (1995): This early wristwatch/PDA looked like a Casio on steroids. To download data to it, you held it in front of your CRT monitor while the monitor displayed a pattern of flashing black-and-white stripes (which, incidentally, also turned you into the Manchurian Candidate). Depending on your point of view, it was either seriously cool or deeply disturbing.

WebTV (1995): Getting the Web to display on a typical TV in 1995 was like watching an elephant tap-dance--you were amazed not that it could do it well but that it could do it at all. With the WebTV, Web pages looked horsey, some media formats didn't work at all, and using the remote control to hop from link to link was excruciating.

Contributing editor Dan Tynan writes PC World's Gadget Freak column. He is also the author of Computer Privacy Annoyances (O'Reilly Media 2005).

The Complete List of Losers

  • America Online (1989-2006)
  • RealNetworks RealPlayer (1999)
  • Syncronys SoftRAM (1995)
  • Microsoft Windows Millennium (2000)
  • Sony BMG Music CDs (2005)
  • Disney The Lion King CD-ROM (1994)
  • Microsoft Bob (1995)
  • Microsoft Internet Explorer 6 (2001)
  • Pressplay and Musicnet (2002)
  • dBASE IV (1988)
  • Priceline Groceries and Gas (2000)
  • PointCast (1996)
  • IBM PCjr. (1984)
  • Gateway 2000 10th Anniversary PC (1995)
  • Iomega Zip Drive (1998)
  • Comet Cursor (1997)
  • Apple Macintosh Portable (1989)
  • IBM Deskstar 75GXP (2000)
  • OQO Model 1 (2004)
  • CueCat (2000)
  • Eyetop Wearable DVD Player (2004)
  • Apple Pippin @World (1996)
  • Free PCs (1999)
  • DigiScents iSmell (2001)
  • Sharp RD3D Notebook (2004)
Killexams : Frequently Asked Questions

The M.A. program in Social-Organizational Psychology hosts a number of events for students to promote learning outside the classroom and to foster a sense of community within the program. Every semester, we offer networking opportunities, talks and panels to discuss current issues in the field. 

Our student-run club, The Organization and Human Development Consulting Club (OHDCC), is a rich and vibrant organization that sponsors numerous initiatives that help its members develop professionally as well as feel connected to the Social-Organizational Psychology community here at TC. OHDCC sponsors a student mentoring program where first-year students are paired with more experienced students for advice, friendship, and networking. OHDCC also hosts professional development opportunities including talks and panels with leaders in the field, training opportunities including a “crack-the-case” workshop for help with case-based job interviews, social events both at TC and with other NYC universities, and social service projects within the larger Morningside Heights neighborhood. OHDCC also provides members with an opportunity to develop their own leadership skills via project management and governance within the organization.

Our students also join the Teachers College, Columbia University chapter of Psi Chi, The International Honor Society in Psychology. As indicated on the Psi Chi website (http://www.psichi.org), Psi Chi is the largest student psychological organization and includes over 700,000 members in chapters across the world. In addition, Psi Chi was the first student organization to have a formal affiliation with the American Psychological Association. Our chapter of Psi Chi unites all of the psychology and psychology-related programs at TC. In order to be accepted into Psi Chi, students are required to have completed at least 12 credits in their program and achieved a minimum GPA of 3.5. Several membership drives are held during the Fall and Spring semesters and an induction ceremony is held during the Spring semester. Our chapter of Psi Chi hosts both professional and social events related to psychology.  

In addition, students are encouraged to join student chapters of professional associations as well as take out national memberships in organizations such as METRO Applied Psychology, the Society for Industrial-Organizational Psychology (SIOP), the Society for Human Resource Management (SHRM), and the OD Network. Students are also encouraged to attend and participate in professional conferences as well as take advantage of opportunities within the Columbia University system and the New York City area.

Killexams : Positive feedback: the science of criticism that actually works

Years ago, after I received some negative feedback at work, my husband Laurence told me something that stuck with me: when we receive criticism, we go through three stages. The first, he said, with apologies for the language, is, “Fuck you.” The second is “I suck.” And the third is “Let’s make it better.”

I recognised immediately that this is true, and that I was stuck at stage two. It’s my go-to in times of trouble, an almost comfortable place where I am protected from further disapproval because no matter how bad someone is about to tell me I am, I already know it. Depending on your personality, you may be more likely to stay at stage one, confident in your excellence and cursing the idiocy of your critics. The problem, Laurence continued, is being unable to move on to stage three, the only productive stage.

Recently, I asked my husband if he could remember who had come up with the three-stage feedback model. He said it was Bradley Whitford, the Emmy-award winning actor who played the charismatic Josh Lyman in The West Wing and, among other roles, the scary dad in the 2017 horror movie Get Out. “What? I would definitely have remembered that. There is no way that would have slipped my mind,” I insisted, especially because I had a mini-crush on the Lyman character for four of The West Wing’s seven series.

In 20 seconds flat, I had my laptop open and was putting one of my few superpowers, googling, to use. There it was. Whitford has aired this theory in public at least twice. Once during a 2012 talk at his alma mater, Wesleyan University, and again when he was interviewed on Marc Maron’s podcast in 2018.

To Maron, Whitford put it like this: “If I’m honest, anytime any director has ever said anything to me, I go through three silent beats: Fuck you. I suck. OK, what?” He added: “I really believe that that is a universal response and some people get stuck on ‘I suck’. You know people who live there. Some people live on ‘Fuck you’. Most people pretty quick get to the [third stage].” I realised that while Laurence said the third stage was “Let’s make it better”, Whitford’s original was the more ambiguous “Okay, what?”


Feedback is part of our everyday existence. It is widely viewed as crucial to improving our performance at work, in education and the quality of our relationships. Most white-collar professionals partake in some form of annual appraisal, performance development review or 360-degree feedback, in which peers, subordinates and managers submit praise and criticism. Performance management is a big business; the global market for feedback software alone was worth $1.37bn in 2020.

I decided to try to contact Whitford to find out more. But first, I wanted to know if there was any empirical evidence to back up his idea, and to learn how to leapfrog stages one and two and get to stage three as quickly as possible.


In 2019, I came across a book on a colleague’s desk titled Radical Candor, written by a former Google and Apple executive named Kim Scott. At the time, I was covering my boss’s maternity leave and, as I encountered the niggling issues that beset every team, I became interested, for the first time in my life, in management theory. The book’s title resonated with me. Who wouldn’t want to hear a truly honest assessment of their performance if it would help them improve?

When we feel optimistic about feedback, we imagine the kind of insights a good therapist might offer, gentle but piercing appraisals of our strengths and weaknesses, precious gems of knowledge sharp enough to cut through our self-delusions and insecurities. On a deeper level, many of us crave the thrill of being known, of being truly understood.

Of course, this is not what feedback is actually like.

We overestimate the capacity of our colleagues to calibrate their comments to our individual emotional states. We underestimate how bruising it is to hear that we are not meeting expectations, even when the issues are minor. And we can be surprised by critiques that do not line up with our sense of who we are. If you believe you’re a great listener and your 360-degree feedback comes back with complaints that you monopolise meetings, that may not feel like being known so much as feeling alien to yourself.

And yet we all have blind spots. As the psychologists David Dunning and Justin Kruger showed in a 1999 study, when we are unskilled in a particular field, we are more likely to overrate our ability in that area. Our incompetence makes it all the harder for us to understand how bad we are, a phenomenon now widely known at the Dunning-Kruger effect. This is one reason why feedback can be so necessary.


One of Scott’s fundamental beliefs is that there is nothing kind in keeping quiet about a colleague’s weaknesses. She calls this “ruinous empathy”. Scott is a two-word-catchphrase-generating machine. While aiming to achieve “radical candour”, you need to avoid “manipulative insincerity” and “obnoxious aggression”. The key in giving feedback, she writes in her book, is to “care personally” while “challenging directly”.

One of her favourite examples of radical candour in her own life is from 2004, soon after she joined Google to run sales for its AdSense team. She had just given a presentation to chief executive Eric Schmidt and Google’s founders, and was feeling pretty good, when Sheryl Sandberg, then a vice-president at the company and her boss, took her to one side. After congratulating her, Sandberg said: “You said ‘um’ a lot. Were you aware of it?” Scott brushed the comment off. Sandberg said she could recommend a speech coach and Google would pay. Scott again tried to move on, feeling it was a minor issue.

Sandberg grasped the nettle: “You are one of the smartest people I know, but saying ‘um’ so much makes you sound stupid.” In the book, Scott describes this moment as revelatory. She went to a speech coach and began thinking about how to teach others to adopt a more candid style of management.

When I email Scott to ask if she’ll talk about feedback, she replies promptly. She lives in a quiet, hilly neighbourhood in the San Francisco Bay Area, a 15-minute drive from the Google and Apple campuses, and suggests a video call at 7.30am her time. She logs on from her kitchen, early morning light pouring in through large windows behind her and bouncing off stainless steel surfaces.

A petite 54-year-old with rimless glasses, shoulder-length blonde hair and irrepressible energy, Scott favours a uniform of a T-shirt, jeans and an orange zip-up cardigan. I notice she wears the same cardigan in multiple TED-style talks. She later tells me she has 12 of them, in different weights, for summer, autumn and winter. She’s had so much flak about her clothes throughout her career that she decided to wear the same thing every day.

“I’m going to apologise because there’s going to be some background noise, I’m making eggs for my son,” she says cheerfully. Of course, it’s so early, I say, should we reschedule? “No, no, no . . . I’ve been up for a while, I have to just pay attention to the water boiling, that’s all.” She is cordial but brisk. I realise I am speaking to a highly productive person who is a scheduling master. I feel the urge not to waste her time.

Radical Candor was published in 2017 and became a New York Times bestseller. I begin by explaining the Whitford hypothesis. Does it ring true to her, a workplace guru who has made the art of giving feedback her speciality? “Yes, absolutely,” she says. But she would add an earlier stage: soliciting feedback. A phrase like “Do you have any feedback for me?” is bad, she says, because most people will simply respond “No.” It’s easier to pretend everything’s fine than to enter the awkward zone of giving criticism. “Nobody wants to give you feedback. Except your children.”

A good question, she says, is one that cannot be answered with a yes or no. Her preference is, “What can I do, or stop doing, to make it easier to work with me?” Even this question has been subject to, well, feedback. “Christa Quarles, when she was CEO of OpenTable, said, ‘I hate that question!’” Scott recalls. Quarles, who became friends with Scott after attending one of her talks, prefers asking, “Tell me what I’m doing wrong,” which Scott says is fine too.

Because she now coaches top executives at companies that have included Ford and IBM, Scott comes from a different angle than most. (Her book is subtitled: Be a Kick-Ass Boss Without Losing Your Humanity.) Managers who need feedback must somehow persuade employees to be honest with them despite their authority and the nervousness it can create. For the rest of us, feedback usually comes whether we ask for it or not.

I tell her that since childhood I have struggled not to take it personally and can tear up in the face of criticism, a trait I find infuriating and embarrassing. “I am a weeper myself,” she says, to my surprise, and suddenly switches to a more confiding tone. “My grandmother told me this when I was a child. I forget what I was in trouble for, but I was getting some critical feedback, and she sat me down and said, ‘Look Kim, if you can learn to listen when people are criticising you, and decide what bits are going to help you be better, you’ll be a stronger person.’”

It strikes me as very Kim Scott to describe a childhood scolding as “getting some critical feedback”. But it also pleases me to think there is a direct line from her grandmother’s advice to her successful career. And her grandmother was right. Research shows that a decisive factor in the effectiveness of feedback is whether we see it as an opportunity to grow or as a fixed verdict on our ability.

This holds true even when we are merely anticipating feedback. In a 1995 study by academics from the University of California, Riverside, children were split into two groups to solve maths problems. One was informed the aim was to “help you learn new things”. The other was told: “How you do . . . helps us know how smart you are in math and what kind of grade you might get.” The first group solved more problems.


In 2018, Scott received disruptive feedback when the satirical television show Silicon Valley featured a character who espouses “Rad Can”, a clear reference to her philosophy. The problem was that the character in question was a bully. Scott was on a plane when the episode aired. “I landed in London, and my phone just blew up,” she says. “I was devastated.”

The experience prompted her to write a second edition of the book. In its preface, she notes that some people were using her theories “as a licence to behave like jerks” and suggests readers substitute the word “compassionate” for “radical”. Scott got to stage three in the Whitford model pretty quickly, I suggest. “It really was useful,” she says of the TV episode. “It was painful and it was annoying, but there was something to learn.”

I wonder if there are some personality types that are better at responding in this way, but Scott argues we can all learn to be more resilient. She recommends listening with the intent to understand, not to respond. “Not responding straight away helps me avoid the ‘FU’ part,” she says. She also leans on a technique from psychology in which you observe your emotions with curiosity. “Part of what helps is to identify the feeling in your body. If you feel shame, for me, it’s a tingling feeling in the back of my knees, kind of the same feeling I get if I walk to the edge of a precipice . . . When I recognise I’m having that feeling, then I can take a step back and take a few breaths.”


Shame is the feeling I most associate with negative feedback. When I was 10, my class was told to make small 3D buildings out of paper. I cut carefully around the outlines of a cuboid and a prism, ran a glue stick over tabs at the edges and pressed them together in sequence. Sellotape was also employed. The teacher asked us to bring the models to him. I walked to his desk and handed mine over. He gazed at it in silence. After a long pause, he said: “You’re not very good with your hands, are you?”

For most of human history, this kind of feedback was the norm: direct and, at times, brutal. As recently as a few decades ago, it was also how performance at work was managed. In the early 1970s, the oral historian Studs Terkel interviewed more than 100 Americans about their jobs for his book Working. A steel mill worker named Mike Lefevre described being “chewed out” by his foreman, who told him, “Mike you’re a good worker, but you have a bad attitude.”

A 47-year-old Chicago bus driver recalled the humiliation of being told off by supervisors in public: “Some of them have the habit of wanting to bawl you out there on the street. That’s one of the most upsetting parts of it.” Nancy Rogers, a bank teller, said she was yelled at by her boss and had given some thought to why this might be: “He’s about 50, in a position that he doesn’t really enjoy. He’s been there for a long time and hasn’t really advanced that much.”

‘Ego-involving feedback’ should be avoided, as it prompts the listener to believe they can’t change; that the failure is intrinsic to who they are © James Joyce

Yelling, screaming, bawling out. This is the kind of feedback that has become unacceptable in most workplaces. And not just because it’s hurtful and rude, or because we’ve all become “snowflakes”. It’s unproductive. A large volume of research shows criticism conveyed this way demotivates. Fearful, aggrieved people are less able to focus on the tasks at hand and are more likely to doubt themselves, resent their boss and possibly attempt armchair psychoanalysis, à la Rogers.

The type of criticism Lefevre received can be particularly destructive. Being told you have a bad attitude is what researchers call “ego-involving feedback”, which prompts the listener to believe they can’t change, that the failure is intrinsic to who they are. The teacher who said I wasn’t good with my hands was similarly generalising from a specific task, says Naomi Winstone, a professor of educational psychology at the Surrey Institute of Education. “It’s really terrible as a piece of feedback because it gives the impression that it’s fixed: you will always not be good.”

While research into the giving of feedback has been around since the early 20th century, the question of how we receive it has been less studied. Winstone, a warm, empathetic 39-year-old with a background in cognitive psychology, noticed the relative lack of research in 2013, when, as a director of undergraduate studies, she was tasked with improving students’ experience of assessment and feedback. She felt she could use her training to understand the barriers that keep students from acting on constructive criticism. “We assume that using feedback is just this amazing, in-built skill that we all know how to do effectively. We really don’t,” she says.

Winstone believes the ability to process feedback needs to be developed when we are young, like critical thinking. One of the projects she’s working on is titled “Everybody Hurts”, inspired by an idea first suggested by two medical education specialists in Australia, Margaret Bearman and Elizabeth Molloy. They argued that to help students learn to cope with feedback, teachers should open up about their own failures. Bearman and Molloy named this “intellectual streaking”, but in a confirmation of my theory that anyone working in feedback becomes very responsive to feedback, they renamed it “intellectual candour” after an editor felt the reference to nudity was inappropriate.

Another Australian academic, Phillip Dawson, took intellectual streaking to heart. In 2018 he wrote a blog post, with endearing honesty, that bullet-pointed his typical reaction to negative comments:

  1. Have an immediate affective response. This is usually some sort of hurt, though I’ve also felt anger, elation, stress, pride, shame and confusion.

  2. Hide the comments so they can’t hurt me.

  3. Make a to-do note to give the comments a proper look later on.

  4. [Time passes, often to the point where I now have to look at the comments again]

  5. Experience the same hurt from step 1 all over again.

  6. Use the comments to improve my work.

A soft-spoken 39-year-old professor with curly brown hair, Dawson tells me over video call from Melbourne that he feels shame if he knows he has underperformed at work relative to his ability. But in his free time, he does stand-up comedy and, in that context, his impulse is to go to Whitford’s “stage one”. (He’s too polite to say the F word.) “And it kills me. Because I know that in my professional life, I’m better at it. So I don’t think we have a universal capability with feedback. It’s very contextual.”

Dawson recommends pausing when you receive criticism. Once you feel calm, try rewriting the feedback into a list of actions. “By rewriting, I’m making them tasks I assign myself,” he says. This “defangs” the feedback and allows you to take ownership of the next steps. He also recommends Thanks for the Feedback, a 2015 book by Douglas Stone and Sheila Heen, two lecturers at Harvard Law School who specialise in conflict resolution. They argue that feedback comes in three types: appreciation, coaching and evaluation. Problems arise when we expect one but get another. Often we simply crave a “Well done” or “Thank you”, and it’s jarring when we receive a tough evaluation instead. “I’ve found that to be really useful,” Dawson says, laughing. “It’s OK to want praise!”


I’m starting to feel I’ve got on top of the feedback question when I interview Avraham Kluger, co-author of one of the seminal pieces of research in the history of feedback studies. “I wonder if we could start by talking about your 1996 paper?” I ask. There is a long pause, so long that I wonder if my internet has frozen. I am at home in London. Kluger, a 63-year-old professor of organisational behaviour at Hebrew University Business School, is in Jerusalem.

It turns out the internet’s fine. He was just thinking. Kluger finally responds: “Yeah, I can tell you that. But I want to ask you another question, about the hidden assumptions, or the principal suppositions, behind your question.” There is another pause. “Why do we care about feedback to begin with? Why do we want to give feedback at all?”

I repeat his last question out loud, hesitantly. Is he really challenging the whole premise of feedback? Essentially, yes. We give it, he argues, because we hope to change the behaviour of another person. But often the person already knows there is a problem. “They don’t change because they don’t have the inner resources,” he says. His tone of voice is suddenly scathing, not scathing towards the people who can’t change, but towards those who assume they can do it for them.

Kluger’s journey to becoming a feedback-sceptic took decades. He was born in Tel Aviv in 1958, the son of Holocaust survivors. After studying psychology at university, he took a job in 1984 as a behavioural consultant to a police force in Israel. Hired to apply psychological principles to the management of police officers, he began by interviewing the regional chief of police’s direct reports. The subordinates complained that they received zero feedback from their boss.

Kluger took notes and presented his findings a few weeks later in a senior leadership meeting. Not long after he began speaking, the chief of police interrupted. “It’s over!” he apparently yelled, slamming his fist on the table. “I have been in the police force for 40 years. I came from this rank” – Kluger, re-enacting the scene for me, points to an invisible badge on his upper arm – “to this rank” – pointing to his shoulder – “and I am telling you, a good policeman does not need feedback. If he does need feedback he’s not a good policeman.” The chief turned to his secretary. “What’s next on the agenda?”

In trying to give feedback, Kluger had received some seriously negative feedback. Later, he would decide he’d made two mistakes. Although he had interviewed all of the subordinates, he had not interviewed the chief of police. And he had made his report in public. Criticising someone in front of others inflicts a particular kind of humiliation.

For all its painfulness, the episode was ultimately useful. Kluger became curious about what the academic literature did not understand about feedback and its effects on motivation. The following year he began a PhD to investigate this at the Stevens Institute of Technology in New Jersey. He devised an experiment in which he gave some engineers a set of test questions. One group was told after each question whether they’d got it right or wrong. The other group was given no feedback at all. Once the engineers had finished the questions, Kluger announced that the experiment was over but if anyone wanted to continue working, they could. To his astonishment, the people who had received no feedback at all were the most motivated to continue.


In 1989, Kluger got an assistant professorship at Rutgers University’s School of Management. Among the first people he met was Angelo DeNisi, a gregarious New Yorker from the Bronx. When Kluger told him he was studying the destructive effects of feedback on performance, DeNisi was intrigued. “My career is based on performance appraisal and finding ways to make it more accurate. You’re telling me the assumptions are incorrect?” he asked. “Yes, I’m afraid I am,” replied Kluger.

It was the start of a long friendship. “He’s Angelo, but he was an angel to me, in a way, to my career”, Kluger says. DeNisi was more experienced and had connections. The two reviewed hundreds of feedback experiments going back to 1905. What they found was explosive. In 38 per cent of cases, feedback not only did not improve performance, it actively made it worse. Even positive feedback could backfire. “This was heresy,” DeNisi recalls.

The way he tells it, his main function in getting the research published was to render Kluger’s sometimes impenetrable thinking lucid. “My role was to translate Avi’s ideas to the rest of the world. Avi has a way of thinking, that . . . ” DeNisi says, trailing off. “He’s brilliant, he truly is. But oftentimes his thinking isn’t linear. It goes round and round in circles. I inserted the linear thinking. But the ideas, the heart of the paper, is Avi.”

In 1996, they published their meta-analysis. It won awards and became one of the most-cited in the field. The two men would work together again, but their paths diverged. Kluger moved back to Israel and eventually became disenchanted with the entire subject. He no longer describes himself as a feedback researcher. He came to believe that as a performance management tool, it is so flawed, so risky and so unpredictable, that it is only worth using in limited circumstances, such as when safety rules must be enforced. If a construction worker keeps walking around a site without a helmet, negative feedback is vital, Kluger acknowledges. The most effective way to give it is with great clarity about potential consequences. The worker should be told that the next time they go without a helmet, he or she will be fired.

In 38 per cent of cases, feedback not only did not improve performance, it actively made it worse, according to Avraham Kluger and Angelo DeNisi’s 1996 study © James Joyce

But in many other types of work, the formula for good feedback includes too many variables: the personality of the recipient, their motivations, whether they believe they are capable of implementing change, the abilities of the manager. Kluger now calls himself a researcher of listening. Instead of managers giving top-down feedback, he argues they should spend more time listening to their direct reports. In the process of talking in depth about their work, the subordinate will often recognise issues and decide to correct them on their own.

Based on this theory, Kluger developed something he calls the “feed-forward interview” as an alternative, or prologue, to a performance review. He offers to give me a demo. A week after our first conversation, we meet again over video call. I feel slightly nauseous, wondering what I’ve signed up for.

It is a curiously intimate process. He asks me to recall, in great detail, a time that I felt full of life at work. Full of energy. Maybe even happy. I describe a reporting trip to meet a source and how it felt when I realised I was being told something important, that the person I was speaking to had a story to tell. “What was it like?” he asks. “Like a lightbulb going on,” I reply. Kluger is working from a script, which he adapts to each person he interviews, and some of his techniques are borrowed from therapy. “I want to make sure I heard you,” he says, then repeats back to me what I’ve said. “Let’s explore what made this possible — what was materially important?” Sometimes he gives me better words than the ones I used initially. “You needed autonomy to make this happen, correct?” he asks. “Yes, exactly,” I say.

At the end of the session, he sums up. “I want to suggest that the conditions that we just enumerated are part of the inner code of Esther flourishing at work.” It feels like he’s awarding me a prize. He asks me to visualise this inner code as a lighthouse beaming from the shore, a safe harbour. He holds up a hand and begins opening and closing his fist, to mimic the lighthouse flashing. “Imagine you’re the captain of the ship of your life.” Kluger brings up his other hand to represent a boat. “To what degree are you navigating towards the light of those conditions? Or are you sailing away?”

Being truly listened to is exhilarating. As Kluger intended, I end up seeing work from a new perspective and giving myself some critical feedback about my priorities. But I’m not sure all managers would want their employees to go on a similar journey, one which is potentially unsettling and could lead them to rethink their choices. And it’s not exactly feedback. Of course that’s the point.


Months after I first started thinking about this subject, I have lunch with a friend who tells me a colleague frequently criticises her. It’s demoralising, especially as the person never praises even excellent work. “How should I respond?” my friend asks. I sit back and think. Despite all the time I’ve spent researching feedback, I’m unsure what to advise.

Sometimes feedback is really bias or bullying. If what your boss is delivering is obnoxious aggression, “Locate the exit nearest you,” says Kim Scott © James Joyce

Kim Scott notes there will be times when feedback is wrong. Look for the five or 10 per cent that you can agree with, and fix that problem “theatrically”, she says. Later, once you’re out of the “Fuck you” and “I suck” stages, you should have a respectful conversation, explaining how you disagree. A respectful disagreement can strengthen a bond, she believes. Winstone, the educational psychology professor, suggests going back to the feedback-giver and saying, “This is why I don’t think this is the case. Can we talk about it?”

Sometimes feedback is really bias or bullying. If what your boss is delivering is obnoxious aggression, “Locate the exit nearest you,” Scott advises. “Having a boss that is bullying is damaging to your health. It’s a big deal.”

Much of how we respond to feedback is driven by the nature of our relationship with the person giving it. This is why Kluger believes it’s useless to focus on the recipient of feedback alone. The outcome will always depend on the “dyad” — the sociological term for two people in a particular relationship — and what transpires between them.

Kluger still sometimes sends work-in-progress to his friend and former research partner DeNisi. DeNisi recently told him that a paper was hard to follow and needed more work. Kluger told his wife, who said: “See, that’s why Angelo’s a friend. Because he tells you the truth. You should listen to him.”

“You gave him good feedback!” I tell DeNisi. “Yes, and he listened,” he says, beaming. It reminds me of a piece of research Kluger told me about, which theorises we’re more likely to accept negative feedback if we feel loved by the provider. “I’m not talking about romantic love,” Kluger said. “But if you really feel loved and cared for by the provider, then you’re most likely to accept it and to process it.”

I try every way I can to contact Bradley Whitford. I email his agency and leave a voicemail. One agent emails to tell me I have the wrong person and gives me his publicist’s contact details instead. She doesn’t reply. I write one of those embarrassing public tweets, essentially begging him to talk to me or answer some questions over email. Finally, I receive a response from an assistant: “Thanks so much for thinking of Bradley. He is not available this time around, but I will definitely let you know should anything change.”

I go through the three stages pretty quickly. Whitford has better things to do, and I’m grateful to him anyway. Now when I receive negative feedback, just identifying I’m at stage one or two helps speed me along. And his theory set me on a path that showed me it’s normal to react emotionally to criticism and that it doesn’t mean you can’t learn from it. If you found any of this remotely helpful, you can thank Whitford too. If you didn’t, I welcome your feedback.

Esther Bintliff is deputy editor of FT Weekend Magazine


Source: https://www.ft.com/content/a681ac3c-73b8-459b-843c-0d796f15020e (26 Jul 2022)
Killexams : Windows 98 For Spaceships? Not Quite!

One of the news items that generated the most chatter among Hackaday editors this week was that ESA’s Mars Express mission is receiving a software update. And they’re updating the operating system to…Windows 98.

Microsoft’s late-90s consumer desktop operating system wouldn’t have been the first to come to mind as appropriate for a spacecraft, but ESA were quick to remind us that it was the development toolchain, not the craft itself, that depended upon it. It’s still quite a surprise to find Windows 98 being dusted off for such an unexpected purpose, and it’s led us to consider those now-almost-forgotten operating systems once more, and to question where else it might still be found.

Win, or Lose?

This CD stood on the shoulders of 16-bit giants. Mark Morgan, CC BY 2.0.

For those of you who never used an earlier Windows version, perhaps it’s time for a short and sketchy history lesson. The original IBM PCs and clones shipped with DOS, PC-DOS or MS-DOS, Microsoft’s 16-bit single tasking operating system with a command line interface.

By the late 1980s they had developed the first few versions of Windows, a 16-bit GUI that sat over and further extended DOS with an extra set of APIs. To run these early Windows versions you had first to boot into DOS, and then type win at the command line to start Windows.

Meanwhile in the early 1990s they produced the first in a separate line of operating systems to be called Windows New Technology, or NT. These were native 32-bit operating systems that contained the full set of Windows APIs including 32-bit support natively, and were designed to compete with lower-end UNIX machines at the enterprise level. Alongside this the DOS-based Windows versions gained a set of 32-bit API extensions and eventually evolved into the more modern GUI of Windows 95, 98, and then ME. It was still possible to boot Windows 98 into a DOS prompt and type win to start the desktop, but by this point the underlying technology had been stretched to the limit and the result was often buggy and unreliable. In the early 2000s they were discontinued, and the next Windows NT version dubbed Windows XP was also aimed at the consumer market.

Peering Into The Mind Of A 1990s Scientist

Is that single project in fact a crusty old beige-box Pentium running some ASP? XKCD 2347 (CC BY-NC 2.5).

So we return to Mars Express. By the time the craft was being designed it’s fair to say that Windows NT and its successors were a stable product, more stable by far than the consumer operating systems. Did the fruits of a desktop operating system eventually make it to space because whoever controlled a researcher’s IT budget skimped a bit on the software and PC purchasing? We may never know, but given that it seems to have delivered the goods, perhaps it wasn’t such a bad choice after all.

All of this brings us to the question of where else there might be a copy of Windows 98 lurking. Sure, some of you will have retro gaming PCs and no doubt there will be tales of elderly relatives still using it, not to mention that some pieces of 1990s test equipment ran it. Even the McLaren F1 supercar famously could only be serviced with a particular model of 1990s Compaq laptop. But those aren’t exactly mission critical. Instead we want you to tell us about Windows 98 in the wild where it’s a surprise, or even where it definitely shouldn’t be. Is a nearly quarter-century-old OS that’s been out of support since 2006 propping up an unwieldy tower of services somewhere? We’re honestly not sure whether we want to know or not.

Header: Javi1977, CC0, and Federico Beccari, CC0.

Source: Jenny List, https://hackaday.com/2022/07/06/windows-98-for-spaceships-not-quite/ (3 Aug 2022)
Killexams : How to Invest in Artificial Intelligence -- 3 Companies to Watch

The notion of practical AI has been around for decades: Back in 1950, the pioneering computer expert Alan Turing proposed the notion of a test that could ... successes was IBM's Deep Blue computer ...
Source: https://www.thestreet.com/opinion/how-to-invest-in-artificial-intelligence-3-companies-to-watch-13359989 (23 Jul 2022)

Killexams : Breakthrough quantum algorithm

City College of New York physicist Pouyan Ghaemi and his research team are claiming significant progress in using quantum computers to study and predict how the state of a large number of interacting quantum particles evolves over time. This was done by developing a quantum algorithm that they run on an IBM quantum computer. "To the best of our knowledge, such particular quantum algorithm which can simulate how interacting quantum particles evolve over time has not been implemented before," said Ghaemi, associate professor in CCNY's Division of Science.

Entitled "Probing geometric excitations of fractional quantum Hall states on quantum computers," the study appears in the journal Physical Review Letters.

"Quantum mechanics is known to be the underlying mechanism governing the properties of elementary particles such as electrons," said Ghaemi. "But unfortunately there is no easy way to use equations of quantum mechanics when we want to study the properties of large number of electrons that are also exerting force on each other due to their electric charge.

His team's discovery, however, changes this and raises other exciting possibilities.

"On the other front, recently, there has been extensive technological developments in building the so-called quantum computers. These new class of computers utilize the law of quantum mechanics to preform calculations which are not possible with classical computers."

"We know that when electrons in a material interact with each other strongly, interesting properties such as high-temperature superconductivity could emerge," Ghaemi noted. "Our quantum computing algorithm opens a new avenue to study the properties of materials resulting from strong electron-electron interactions. As a result it can potentially guide the search for useful materials such as high-temperature superconductors."

He added that based on their results, they can now potentially look at using quantum computers to study many other phenomena that result from strong interaction between electrons in solids. "There are many experimentally observed phenomena that could be potentially understood using the development of quantum algorithms similar to the one we developed."
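
The paper itself is not reproduced here, but the general idea behind such algorithms can be illustrated classically. The sketch below is purely illustrative and is not the CCNY team's algorithm: the toy transverse-field Ising chain, the parameter values and the step count are all assumptions made for the example. It shows the standard "Trotterization" trick, in which the time-evolution operator of an interacting system is approximated by a product of simpler steps of the kind a gate-based quantum computer can execute.

```python
# Illustrative sketch only: approximating exp(-iHt) for a small interacting spin
# chain by first-order Trotter steps, and checking the approximation against the
# exact propagator. This is a toy model, not the algorithm from the paper.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def on_sites(ops, n):
    """Tensor product placing the matrices in `ops` (dict: site -> 2x2 matrix) on an n-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, ops.get(k, I2))
    return out

n = 4                 # four interacting spins: a toy stand-in for interacting particles
J, h = 1.0, 0.7       # assumed coupling and transverse-field strengths
H_int = J * sum(on_sites({k: Z, k + 1: Z}, n) for k in range(n - 1))
H_field = h * sum(on_sites({k: X}, n) for k in range(n))
H = H_int + H_field

t, steps = 1.0, 50
U_exact = expm(-1j * H * t)                        # exact time evolution exp(-iHt)
U_step = expm(-1j * H_int * t / steps) @ expm(-1j * H_field * t / steps)
U_trotter = np.linalg.matrix_power(U_step, steps)  # first-order Trotter approximation

psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0                                      # start in a simple product state
fidelity = abs(np.vdot(U_exact @ psi0, U_trotter @ psi0)) ** 2
print(f"Trotter fidelity at t={t} with {steps} steps: {fidelity:.6f}")
```

On a real device each Trotter step would be compiled into single- and two-qubit gates; here the comparison with the exact propagator simply shows that the approximation converges as the number of steps grows.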

The research was done at CCNY -- and involved an interdisciplinary team from the physics and electrical engineering departments -- in collaboration with experts from Western Washington University, Leeds University in the UK, and the Schlumberger-Doll Research Center in Cambridge, Massachusetts. The research was funded by the National Science Foundation and Britain's Engineering and Science Research Council.

Story Source:

Materials provided by City College of New York. Note: Content may be edited for style and length.

Source: https://www.sciencedaily.com/releases/2022/07/220727110714.htm (26 Jul 2022)
Killexams : StartMail Review
Source: https://www.pcmag.com/reviews/startmail (17 Jul 2022)

Killexams : IBM scientists demonstrate memory breakthrough for the first time

For the first time, scientists at IBM Research have demonstrated that a relatively new memory technology, known as phase-change memory (PCM), can reliably store multiple data bits per cell over extended periods of time. This significant improvement advances the development of low-cost, faster and more durable memory applications for consumer devices, including mobile phones and cloud storage, as well as high-performance applications, such as enterprise data storage. With a combination of speed, endurance, non-volatility and density, PCM can enable a paradigm shift for enterprise IT and storage systems within the next five years.

Scientists have long been searching for a universal, non-volatile memory technology with far superior performance to Flash – today's most ubiquitous non-volatile memory technology. The benefits of such a memory technology would allow computers and servers to boot instantaneously and significantly enhance the overall performance of IT systems. A promising contender is PCM, which can write and retrieve data 100 times faster than Flash, enable high storage capacities and not lose data when the power is turned off. Unlike Flash, PCM is also very durable and can endure at least 10 million write cycles, compared to current enterprise-class Flash at 30,000 cycles or consumer-class Flash at 3,000 cycles. While 3,000 cycles will outlive many consumer devices, 30,000 cycles are orders of magnitude too low to be suitable for enterprise applications.

“As organizations and consumers increasingly embrace cloud-computing models and services, whereby most of the data is stored and processed in the cloud, ever more powerful and efficient, yet affordable storage technologies are needed,” states Dr. Haris Pozidis, Manager of Memory and Probe Technologies at IBM Research – Zurich. “By demonstrating a multi-bit phase-change memory technology which achieves for the first time reliability levels akin to those required for enterprise applications, we made a big step towards enabling practical memory devices based on multi-bit PCM.”

Multi-level Phase Change Memory Breakthrough

To achieve this breakthrough demonstration, IBM scientists in Zurich used advanced modulation coding techniques to mitigate the problem of short-term drift in multi-bit PCM, which causes the stored resistance levels to shift over time and in turn creates read errors. Until now, reliable retention of data has been shown only for single-bit-per-cell PCM; no such results for multi-bit PCM have been reported.

PCM leverages the resistance change that occurs in the material - an alloy of various elements - when it changes its phase from crystalline – featuring low resistance – to amorphous – featuring high resistance – to store data bits. In a PCM cell, where a phase-change material is deposited between a top and a bottom electrode, phase change can controllably be induced by applying voltage or current pulses of different strengths. These heat up the material and when distinct temperature thresholds are reached cause the material to change from crystalline to amorphous or vice versa.

In addition, depending on the voltage, more or less material between the electrodes will undergo a phase change, which directly affects the cell's resistance. Scientists exploit that aspect to store not only one bit, but multiple bits per cell. In the present work, IBM scientists used four distinct resistance levels to store the bit combinations "00", "01", "10" and "11".

To achieve the demonstrated reliability, crucial technical advancements in the “read” and “write” process were necessary. The scientists implemented an iterative “write” process to overcome deviations in the resistance due to inherent variability in the memory cells and the phase-change materials:

“We apply a voltage pulse based on the deviation from the desired level and then measure the resistance. If the desired level of resistance is not achieved, we apply another voltage pulse and measure again – until we achieve the exact level,” explains Pozidis.

Despite using the iterative process, the scientists achieved a worst-case write latency of about 10 microseconds, which represents a 100x performance increase over even the most advanced Flash memory on the market today.
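
As a rough illustration of the iterative scheme described above (and only that: the resistance targets, tolerance and pulse model below are invented for the example, and `read_resistance`/`apply_pulse` are hypothetical stand-ins for the analog hardware rather than anything IBM has published), a program-and-verify loop might look like this:

```python
# Schematic sketch of a program-and-verify write loop for a 2-bit PCM cell.
# Not IBM's controller logic; all numbers and helper functions are illustrative.
import random

LEVELS = {0b00: 5e3, 0b01: 2e4, 0b10: 8e4, 0b11: 3e5}  # assumed target resistances in ohms
TOLERANCE = 0.10                                       # accept readings within +/-10% of target

def read_resistance(cell):
    # hypothetical stand-in: real hardware would sense the cell's resistance
    return cell["resistance"]

def apply_pulse(cell, target):
    # hypothetical stand-in: a voltage pulse nudges the cell toward the target,
    # with cell-to-cell and pulse-to-pulse variability
    cell["resistance"] += (target - cell["resistance"]) * random.uniform(0.6, 1.0)

def write_bits(cell, bits, max_iterations=20):
    """Program-and-verify: read back, and keep pulsing until the resistance lands in the target band."""
    target = LEVELS[bits]
    for pulses in range(max_iterations):
        if abs(read_resistance(cell) - target) / target <= TOLERANCE:
            return pulses            # number of pulses that were needed
        apply_pulse(cell, target)
    raise RuntimeError("cell did not converge to the requested level")

cell = {"resistance": 1e4}           # whatever state the cell was left in
needed = write_bits(cell, 0b10)
print(f"stored '10' after {needed} pulse(s); cell now reads {cell['resistance']:.0f} ohms")
```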

For demonstrating reliable read-out of data bits, the scientists needed to tackle the problem of resistance drift. Because of structural relaxation of the atoms in the amorphous state, the resistance increases over time after the phase change, eventually causing errors in the read-out. To overcome that issue, the IBM scientists applied an advanced modulation coding technique that is inherently drift-tolerant.  The modulation coding technique is based on the fact that, on average, the relative order of programmed cells with different resistance levels does not change due to drift.

Using that technique, the IBM scientists were able to mitigate drift and demonstrate long-term retention of bits stored in a subarray of 200,000 cells of their PCM test chip, fabricated in 90-nanometer CMOS technology.
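
The drift-tolerant idea can also be sketched in a toy model. The code below is not IBM's modulation code; under an invented drift model it simply contrasts decoding by absolute resistance thresholds with decoding by the relative order of a small group of cells, to show why rank information survives when absolute levels creep upward.

```python
# Toy comparison of absolute-threshold decoding vs relative-order decoding under drift.
# All resistance values and the drift model are invented for illustration.
import random

def drift(resistances, years=1.0):
    # invented drift model: resistance creeps upward over time, with per-cell noise
    return [r * (1 + 0.8 * years) * random.uniform(0.9, 1.1) for r in resistances]

base_levels = [5e3, 2e4, 8e4, 3e5]   # nominal resistances for the four levels 00..11
symbols = [3, 0, 2, 1]               # a group of four cells, each programmed to a different level
written = [base_levels[s] for s in symbols]
aged = drift(written, years=5)

# Absolute decoding: map each drifted reading back to the nearest nominal level
abs_decoded = [min(range(4), key=lambda lvl: abs(r - base_levels[lvl])) for r in aged]

# Relative-order decoding: lowest reading in the group is level 0, next lowest is level 1, ...
order = sorted(range(len(aged)), key=lambda i: aged[i])
rel_decoded = [0] * len(aged)
for rank, idx in enumerate(order):
    rel_decoded[idx] = rank

print("written levels:  ", symbols)
print("absolute decode: ", abs_decoded)   # typically corrupted after heavy drift
print("relative decode: ", rel_decoded)   # matches what was written, since the ordering is preserved
```

Because every cell in the group drifts by a broadly similar factor, the ordering, and hence the encoded data, is preserved even when the absolute readings have moved far from their nominal levels.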

The PCM test chip was designed and fabricated by scientists and engineers located in Burlington, Vermont; Yorktown Heights, New York and in Zurich. This retention experiment has been under way for more than five months, indicating that multi-bit PCM can achieve a level of reliability that is suitable for practical applications.

The PCM research project at IBM Research – Zurich will continue to be studied at the recently opened Binnig and Rohrer Nanotechnology Center. The center, which is jointly operated by IBM and ETH Zurich as part of a strategic partnership in nanosciences, offers a cutting-edge infrastructure, including a large cleanroom for micro- and nanofabrication as well as six “noise-free” labs, especially shielded laboratories for highly sensitive experiments.

The paper “Drift-tolerant Multilevel Phase-Change Memory” by N. Papandreou, H. Pozidis, T. Mittelholzer, G.F. Close, M. Breitwisch, C. Lam and E. Eleftheriou, was recently presented by Haris Pozidis at the 3rd IEEE International Memory Workshop in Monterey, CA.

Source: https://www.albawaba.com/ibm-scientists-demonstrate-memory-breakthrough-first-time-380893 (29 Jun 2022)
Killexams : The Politics of American Health Care: What Is It Costing You? Atlantic Monthly Sidebar

October 1973
The health care crisis is upon us. In response to soaring costs, a jumbled patchwork of insurance programs, and critical problems in delivering medical care, some kind of national health insurance has seemed in recent years to be an idea whose time has finally come in America. For those not protected by insurance--and often for those who are partially protected--illness means financial disaster. The quality of American medical care is at issue too. After twenty years of unprecedentedly high spending for research our public health standards have fallen far behind some countries with fewer resources. We rank seventeenth in infant mortality, according to a United Nations study; thirtieth in life expectancy for males, behind Spain, Greece, and five Communist nations in Eastern Europe. And yet, writes reporter Godfrey Hodgson, for all our troubles, an opportunity to reform American health care has slipped by. How could this be? And where do we go from here?

by Godfrey Hodgson


"If someone had asked me five years ago," said Dr. Rashi Fein, "to estimate when the United States would have some clearly universal and comprehensive system of national health insurance, I might have answered: in fifteen years."

"Three years ago," he went on, "my estimate would have been fifteen minus eight, or minus ten, or even minus twelve. But if someone were to ask me today when we will have national health insurance, my answer would be: many years more."

Dr. Fein, of the Harvard Medical School, is one of the most highly respected authorities on health policy in the country; the fact that both the Nixon Administration and Senator Edward Kennedy have consulted him gives some index of his standing. But he is not alone in his judgment "A few years ago," I was told by Dr. John Hogness, president of the Institute of Medicine at the National Academy of Sciences, "the feeling among experts was that we would have national health insurance by 1973. Now the feeling among experts is that it will take three to five years."

"Not in this Congress," said Bill Fullerton, top staff aide on health to Chairman Wilbur Mills of the House Ways and Means Committee. "Not in the next Congress. With the confrontation politics we have now, I don't see the Democrats giving the Republicans something to crow about, and I don't see the Republicans giving the Democrats something to crow about either. My personal view is that the earliest it could come would be the second year of the next Administration."

A simple way of stating what has happened is that after the passage of Medicare in 1965, just about everybody in the world of health policy was persuaded that the United States needed something called national health insurance--even if there was little agreement on what that should mean. Then, rather suddenly in the early 1970s, the mood changed. More cautious views reasserted themselves, both among experts and among politicians. The same reversal of mood which dampened optimism about the possibility of social change through political action generally--in the field of education, for example, or in dealing with poverty and urban problems- has now also begun to shape the debate about health care.

It is not just that congressional enactment of any kind of national health insurance, which only a few years ago seemed to be getting closer and closer, is now becoming more and more remote. Any form of national health insurance enacted in the late 1970s is now likely to be a milder measure, involving less drastic change, than seemed inevitable so recently.

One Democrat the Republicans will be especially anxious not to present with anything to crow about before 1976 is Senator Kennedy. But the strongest push in Congress for a truly radical overhaul of the health care system has come from Kennedy, as chairman of the subcommittee on health of the Senate Labor and Public Welfare Committee. Both liberals and organized labor--through the Committee for National Health Insurance, set up by the late Walter Reuther and backed by George Meany and the AFL-CIO as well--have supported Kennedy's "health security" approach. S.3, the Kennedy-Griffiths bill (it is sponsored in the House by Representative Martha Griffiths, Democrat of Michigan) is the most radical of the half dozen or more health insurance bills now before Congress. It offers the fullest range of benefits with the fewest co-insurances, deductibles, and other snags. It is also the firmest on the principle that if government financing is to be made available to the health care system, it must be used as leverage to secure government control: leverage to keep down rising costs and to make the health system more accountable to government in other respects. And with this intent, one of its key proposals has been that, as Kennedy put it in his book In Critical Condition in 1972, "The Federal government would become the health insurance carrier for the entire nation." "Only the government," he wrote, "can operate such an insurance program in the best interest of all the people. We can no longer afford the health insurance industry in America, and we should not waste vast funds bailing it out."

That is a plain enough position, and little more than a year old. Yet Kennedy will almost certainly have to retreat from that position, and--whatever precise form national health insurance eventually takes--insurance companies will almost certainly have an important share of the pie.

This is not because of any change of heart on Kennedy's part. It is a consequence of the facts of life in Congress. Senator Kennedy has emerged as the leading Democratic spokesman on health issues, but he does not have the votes to pass his health security bill, even in the Senate. The question is academic, in any case, since he cannot bring it to the floor for a vote. That prerogative lies with the Senate Finance Committee, headed by Senator Russell Long, Democrat of Louisiana. And Long has his own bill, artfully contrived to sap the strength of Kennedy's proposals.

Senator Long is in favor of what is called "catastrophic insurance" that is, insurance against catastrophically heavy medical bills. Long proposes that the federal government insure people only against being hospitalized for sixty days and for other medical bills over $2000. Kennedy--unwisely, perhaps, from a tactical point of view--has emphasized, in his book and in hearings, the havoc caused by "medical catastrophes."

"Catastrophic insurance" would be inflationary in its effect. This is because, as things stand now, many people go without treatment that costs more than the upper limit imposed by their conventional insurance policy; with a federal insurance program footing the bill, more expensive treatment would be prescribed and paid for. Moreover, a federal insurance scheme limited to the "catastrophic" end of the scale would do nothing to meet Kennedy's central concern that the public establish control over the costs and procedures of a system that would absorb billions of dollars of public money.

"It calms down the flame," Senator Kennedy told me, "but it really doesn't meet the need. You're going to be bankrupting people anyway. It adds up to an inflationary trend. And I have a philosophic belief that you have to use the lever of financing to do something about quality, and something about control."

Yet the appeal of "catastrophic insurance," as the lesser of evils for politicians and organized medicine if the alternative is a fully federalized system, could be very damaging to hopes of more comprehensive reform. In 1972, the Senate Finance Committee was already deadlocked on Senator Russell Long's "catastrophic insurance" legislation. The coalition against it was maintained by Senator Kennedy and his allies, who insisted that, if passed, the Long bill meant good-bye to any broader national health insurance measure for five years. If Long's bill had been reported out of committee, it would almost certainly have passed on the Senate floor. "There just aren't enough souls in the Senate hardy enough to vote against it," one Kennedy aide said bitterly. And if it had passed the Senate, then all would have depended, as it so often does when the shape of major legislation is at stake, on one man: Chairman Wilbur Mills of the House Ways and Means Committee.

And so, in the summer of 1972, Arkansas Democrat Mills and Massachusetts Democrat Kennedy issued a joint statement. They said they would work together to draft joint legislation on national health insurance. Officially, the line from both the Senator's and the Chairman's staff people is noncommittal: "Work is continuing." But in early 1973 Kennedy and Mills met several times, and they now hope to introduce a joint bill this fall, a bill which would in one major respect be weaker than the Kennedy-Griffiths bill.

Kennedy is trying earnestly to find a compromise he can accept without betraying what he regards as the essential features of S.3. And Mills is trying just as hard to find a bill he can pass. Two main issues remain to be resolved.

One is the method of financing. Realistically, national health insurance can be financed in only three ways: through the federal budget; through Social Security, like Medicare; or through what is known as "mandated" coverage. This last approach, favored by the Nixon Administration, would have the great bulk of premiums paid by individuals and their employers to private insurance companies, so that public funds would be needed only for those who could not obtain coverage through their jobs, or for other special reasons. On financing, Mills and Kennedy have not yet made up their minds. "If we had the key to this problem," Senator Kennedy told me, "we'd have the legislation."

The second major issue--the role of insurance companies--has now been resolved. But Kennedy has made an important concession. He is wary of the danger that, whatever regulatory controls Congress sets up over the health insurance industry, "in all too short a time the controlled end up controlling." That is why he favored eliminating the role of the private health insurance industry completely. He now believes, however, that "there may be some role, actuarial or procedural, some arrangement that could be worked out that would give them some role." One point, in short, is now clear. In order to get a version of national health insurance that Mills would report out of committee, Kennedy will have to accommodate, in some way, the commercial health insurance industry. America, one might say with Senator Kennedy, may not be able to afford the health insurance industry; but Senator Kennedy cannot afford to do without the health insurance industry.

The prospect of Senator Kennedy's crusade for national health insurance being transformed into a canny Kennedy-Mills compromise with Aetna, Connecticut General, or even W. Clement Stone (the Chicago insurance man with the "positive mental attitude" who was, so far as we know, Mr. Nixon's biggest campaign contributor in both 1968 and 1972), is distressing for those who believe that the American health care system needs radical reform.

There are other indications, too, that the middle ground both of academic debate and practical politics has taken a step back toward the cautious center. But this recent shift should be seen in perspective: it comes after some startlingly rapid strides to the left.

Between 1968 and 1970--which is to say, almost overnight, as major shifts in political attitudes are measured--certain ideas about health policy that had been considered daring even on the liberal Left suddenly became widely acceptable. There were two in particular: the idea that the federal government should certain the availability of health care to all citizens through a national health insurance system, and the idea that government should actively encourage the replacement of at least some fee-for-service practice by prepaid group health arrangements. As late as 1965, even liberal reformers limited themselves to the complaint that not enough health care was available to enough people. But by 1970, to say that the American health care system was in crisis had indeed become something of a cliche.

In early 1970, President Nixon announced to a press conference that "we face a massive crisis in this [health] area. Unless action is taken within the next two or three years...we will have a breakdown in our medical system." That was well over three years ago and to date there has been no legislative action commensurate with such a prophecy. But neither has there been anything that could truly be called a breakdown.

No doubt at the time the President believed exactly what he said. His assessment was supported by the somber report sent to him by Robert Finch, his first Secretary of Health, Education, and Welfare, and Roger Egeberg, his first Assistant Secretary for Health and Scientific Affairs. The report posed the alternatives in the starkest terms. "What is at stake," it concluded, "is the pluralistic, independent, voluntary nature of our health care system. We will lose it to pressures for monolithic, government-dominated medical care unless we can make the system work for everyone." And this was no idiosyncratic view of Finch's. Two years later his successor at HEW, Elliot Richardson, produced a White Paper which, he insisted, "seeks to modify the entire SYSTEM of health care" (his italics).

The Nixon Administration's policies, even more than its rhetoric, showed its conversion at that time to the argument that the system needed fundamental change. Each of its two main proposals marked a clear breach with the shibboleths of organized medicine. One official who had worked at HEW for a decade said to me with awe in his voice: "The atmosphere around here changed so much in three or four years that you had a Republican President saying, and a Republican Administration doing, things that would have raised the roof if Kennedy or Johnson had done them."

One of the two main thrusts of the Administration's strategy was to encourage the spread of Health Maintenance Organizations (HMO's). The HMO concept was co opted from the prepaid group health idea, of which the California-based Kaiser Permanente plan is the best-known example. The essence of the group health idea is that, instead of paying medical bills when they fall due, you pay a flat annual sum to join a group which contracts to provide all the medical care you may need. Typically, the group either maintains its own hospitals, or makes arrangements with particular hospitals to provide services; it also employs physicians and other staff. The whole idea has long been anathema to organized medicine. For one thing, it turns the rugged, individualist, fee-for-service, small-businessman physician into an employee. Secondly, with the specific incentive of fee-for-service removed, group health schemes have shown consistently lower rates of utilization of advanced medical technology, thus stimulating a passionate debate among medical academics as to whether group health did too little, or fee-for-service doctors did too much.

The notion that Health Maintenance Organizations "might provide a check on the provision of unnecessary services, inflation and inequitable distribution" of doctors received a boost from the work of Dr. Paul Ellwood, Jr., of the American Rehabilitation Foundation in Minneapolis, on the grounds that HMO's would "align [the physician's] economic interests with those of the consumer." In a paper called "The Health Maintenance Strategy," first drafted in March, 1970, and revised in June and October that year, Ellwood and his Colleagues seemed to be consciously selling HMO's as an alternative to national health insurance. "The Nixon Administration must make a major decision on its strategy for dealing with the much-proclaimed health crisis in America. It can either rely on continued or increased Federal intervention...or promote a health maintenance industry...the health maintenance strategy offers...a feasible alternative to a nationalized health system."

This argument was apparently persuasive, for in a speech early in 1971, Mrs. Beverlee Myers, an assistant administrator at HEW, laid down the prerequisites for the types of HMO the federal government would encourage. Significantly, Mrs. Myers defined HMO's more widely than liberals would have liked: besides the Kaiser-Permanente model, with its own hospitals and other specialized facilities, she included organizations modeled on the San Joaquin Medical Care Foundation in California, that is to say, group practice arrangements, set up by doctors, which RETAINED fee-for-service.

One can get the flavor of some of the organizations which qualify as HMO's under the Administration's definition by reading a long article published by Fortune this spring in which the profit-making possibilities of HMO's are glowingly discussed. Their profit ratio, the article argues, could be as high as that of oil companies, and one doctor who successfully sold stock in his HMO received the highest accolade Fortune has to bestow: "a fast-moving entrepreneur." "The most blatant forms of entrepreneurial practice qualify as HMO's under the Administration's proposals," comments Dr. Victor Sidel, of Montefiore Hospital in the Bronx, New York. And Dr. Max Fine, of the Committee for National Health Insurance, described to me with horror the style of some of the HMO's that flourish in Southern California. "They send loudspeaker trucks round the streets," he said. "They offer fried chicken to anyone who joins the group. And they send solicitors round door to door, and pay them three dollars a head for every patient they sign up." One such group health outfit advertises in medical journals, offering free use of a company-owned Mercedes to any doctor who will join. Sports cars for the doctors, and fried chicken for the patients: that was hardly the vision of group health pioneers.

Both with HMO's and with national health insurance, the Republicans borrowed a liberal idea and made something very different out of it. National health insurance, in fact, has become one of those phrases, like the word "liberal," whose meaning has exploded, so that little pieces of it are littered all over the political landscape. There is a world of difference between the health scheme Labor and Senator Kennedy had in mind, and what Republicans have proposed.

There is the matter of the insurance companies. When Elliot Richardson was Secretary of HEW, he admitted in congressional testimony that his version of national health insurance would have the effect of swelling the premium income of private health insurance companies from $23 billion to $30 billion in the first year, or by some 30 percent. Dr. Fine calculates that even this is a substantial underestimate, and that the Administration's plan would have produced a windfall for the insurance companies (including Blue Cross and Blue Shield) of no less than $12 billion--more than 50 percent above its current premium income. The fact that in spite of these remarkable expectations the insurance industry opposes any form of national health insurance as an entering wedge of federal regulation, gives some idea of how ideological, and how profitable, the insurance business may be taken to be.

But liberals' objections to the Administration's proposals are not limited to the observation that they would enrich the insurance industry lavishly. The National Health Insurance Partnership, as the Administration called its national health insurance proposals, would limit the government's role to a minimum. It would merely require that the carrying of a stipulated minimum employer-employee health insurance coverage be made a mandatory condition of all employment. Those who were not covered in this way would be picked up either by a family health insurance plan, replacing Medicaid for the poor; or by "pool coverage," arranged in some not too clearly specified way for the unemployed, the self employed, and the employees of small businesses. The government would also, under the Administration's plan, set national health insurance standards. And here the contrast between coverage proposed in the Kennedy-Griffiths bill and the modest benefits (and thickets of exceptions) promised by the Administration's bill is so great as to be ludicrous.

The principle of national health insurance dates back to the 1930s. It was temporarily interred when congressional efforts to enact the Wagner-Murray-Dingell bill (first proposed in 1943 after Senator Robert Wagner had sponsored a similar bill in 1939) ended in 1950, in the face of fierce opposition from the American Medical Association. The AMA assessed its members twenty-five dollars a head that year to fight "the enslavement of the medical profession."

For nearly two decades the idea lay dormant. Public attention to the problem during the 1950s and the early 1960s focused on two efforts, one congressional, the other undertaken by private enterprise. Politicians abandoned any attempt to involve the government in making medical care available to the general population, and concentrated instead on taking care of the special problems of the old and the very poor. Ideologically, the thrust for the two programs eventually enacted in 1965 came from different directions. Medicare--free medical care for the aged under Social Security--was a liberal program. But Medicaid--federal assistance to the states to provide medical care for the indigent--was attractive to conservatives and states' righters, concerned about the growing burden of providing medical care for welfare recipients.

One important reason for political indifference to the medical needs of working- and middle-class people in those years was the rapid spread of private health insurance, both with commercial insurance companies and with the "nonprofit" Blue Cross and Blue Shield plans. By 1971, more than 76 million Americans were covered by Blue Cross hospitalization insurance; and of these about 66 million had Blue Shield policies for doctors' and surgeons' bills. (By contrast, only some 8 million people belonged to group health plans organized by the Group Health Association of America, of whom some 2 million belonged to the largest plan, Kaiser-Permanente.) The spread of health insurance came about largely because the hospitals, and, to a lesser extent, the doctors, wanted to be assured of payment. Nevertheless the proportion of private hospital expenditure paid by insurance (excluding all government payments) rose from 34.6 percent in 1950 to 73.7 percent in 1968. Inadequate as it was, the spread of private health insurance for years precluded any clamorous demand for national health insurance.

Walter Reuther, shortly before his death, revived public discussion of national health insurance. In the Bronfman lecture to the American Public Health Association in Detroit on November 14, 1968, less than two weeks after President Nixon was elected, he called for "an appropriate system of national health insurance that will provide an adequate and workable financial mechanism to make high quality comprehensive health care available to every American." In January, 1969, Senator Kennedy announced that he intended to sponsor a measure to enact such a system. And then, abruptly, everybody was talking about national health insurance again. As Dr. William McKissick reported in the New England Journal of Medicine:

"The year 1969 witnessed advocacy of universal health insurance from political quarters that ranged from Representative John Dingell...to the American Medical Association...Governor Rockefeller, the Aetna Life Insurance Company, Senator Javits, Senator Kennedy...Walter Reuther..."

and, he might have added, Reuther's adversaries in the AFL-CIO. When President Nixon was converted to the cause, it seemed truly an idea whose time had come.

A bureaucratic detail pinpoints the change in official Administration attitudes toward health care reform better than all the volumes of congressional testimony and the task force reports of the time. For years, every draft of health legislation ritually included boiler-plate language which, it was believed by congressmen and federal officials, had to be inserted to placate the AMA. Section 1801 of the Social Security Act of 1965, which enacted Medicare, is an example: "Nothing in this title," it proclaimed a trifle disingenuously, "shall be construed to authorize any Federal officer or employee to exercise any control over the practice of medicine or the manner in which medical services are provided."

Suddenly, in the changed climate of 1968 and subsequent years, nobody felt the need to write in clauses like that anymore. And not only that. Robert Ball, who quit this year as commissioner of the Social Security Administration, made the point to me that, after 1968 or 1969, most experts would have regarded it as a grave deficiency in legislation if it did not attempt to exercise some control over "the manner in which medical services are provided."

Why did the field of debate about national health policy and the role of the federal government shift so sharply to the left after 1968?

In a single word, the answer given by all the experts I talked to was: Medicare. The impact of the enactment of Medicare and Medicaid in 1965 was both financial and psychological.

The fact that hospitals and doctors were reimbursed on a cost basis, under both programs, accelerated the inflation of medical costs. Medicare and Medicaid went into effect in 1966. Within two years, cost inflation had reached the proportions of a crisis. And that steep, sudden inflation exposed other weaknesses in the health system, and triggered a general reassessment of long-accepted assumptions and values.

At the same time, once the federal government was involved in paying for health care, essentially for the first time, it became both possible and necessary to ask how much further it should be involved. As Wilbur Cohen, Secretary of HEW at the time Medicare was enacted in 1965, put it to me: "The passage of Medicare broke the back of the ideological controversy over the government's role, and opened up the possibility of discussing changes in the delivery system." Robert Ball agrees:

"There has been a most remarkable change in atmosphere. I went all through the fight for Medicare, and it would have been unthinkable at that time to get stronger legislation in terms of affecting the delivery of health care. When Medicare came in it was accepted as an economic measure, as something that would protect people against the costs of medical care. It was considered as part of the pension system. The great change which began fairly soon was that people began to feel that there was a real responsibility to do something about cost; to do something about quality; and to do something about organization. Medicare was a terrific catalyst."

It wasn't long before it became plain that the cost overruns on Medicare were going to be spectacular. Between 1966 and 1968, everything went up: hospital bills, doctors' fees, laboratory charges, insurance premiums, and even--though more modestly--nurses' and orderlies' wages. (At Massachusetts General Hospital in Boston, for example, nurses' wages went up 100 percent between 1959 and 1969; but over the same period interns' salaries went up 1650 percent!) Over the decade of the 1960s, hospital charges rose four times as fast as all other items in the Consumer Price Index; physicians' fees rose twice as fast. And that increase was heavily concentrated in the brief period after the introduction of Medicare. The rate of inflation of hospital costs, for example, increased from an average of 6.9 percent between 1950 and 1960 to an annual average of 14.8 percent between March, 1966, and March, 1970.

Some economists [See Endnote 1] have argued that the primary reason for inflation was increased demand from patients. But the Nixon Administration's own White Paper in 1971 commented that "while undoubtedly there were improvements in the quality of care for at least some of the population, more than 75% of the increase in expenditures for hospital care, and nearly 70% of the increase for physician services, were the consequence of inflation."

By 1969, some hospitals were charging as much as $150 a day for basic care--in effect for little more than a bed, food, and attention from a nurse when she had a moment. John de Lury, of the New York sanitation workers' union, gave a state legislative hearing a harrowing illustration of what the full cost could come to:

"A ten-year-old boy was admitted to the hospital at 3:20 A.M. The boy died at 10:34 the same night. The family of this child was charged $105.80 for drugs, $184.80 for X rays, $220.00 for inhalation therapy, $655.50 for laboratory work. The total bill for the child was $1717.80."

With the government reimbursing whatever hospitals and doctors charged, the cost of both Medicare and Medicaid spiraled out of control. Less than ONE YEAR after Medicare came into operation, Congress had to increase by 25 percent the Social Security tax budgeted to pay for it. The actuarial estimates of both utilization and cost presented to Congress by the Administration when the program was under construction proved to be hopelessly understated. Cost overruns, projected over the next twenty-five years, added up to a stupendous $131 billion.

Medicaid was soon in worse trouble than Medicare. Two reporters, generally sympathetic to the program, wrote that "starting in late 1966 Medicaid hit New York's medical marketplace like a flash-flood." In January, 1967, the federal budget, assuming that Medicaid would be in operation in forty-eight states by the end of the year, predicted that it would cost $2.25 billion. A year later, however, although only thirty-seven states were receiving Medicaid, the real cost came to $3.54 billion.

In human terms, there is no question that both Medicare and Medicaid have done incalculable good. Medicare, covering about 21 million people, paid portions of bills for roughly half of them last year; allowing for various overlaps among programs, Medicaid paid a part of the bills last year for about 16 million of the many more people who were eligible. No one can put a cash value on the lives that have been prolonged and the suffering saved as a result. But there is no denying that by pouring money into the medical system on a cost reimbursement basis, Medicare and Medicaid set off a wild inflation in costs.

The rise, however, had begun long before Medicare was enacted. The rapid spread of health insurance, both with commercial companies and with Blue Cross/Blue Shield, had been triggering an inflationary effect since the middle 1950s. Private insurance policies worked on a cost reimbursement basis, like Medicare, and far from having any deflationary control over medical practice, often encouraged EXPENSIVE treatment. Many policies, for example, would pay for certain kinds of treatment only in a hospital, a proviso that naturally encouraged needless hospitalization.

Secondly, the acceleration of Medicare-induced inflation coincided with the general inflation of the late 1960s. Half of the inflation of medical costs, Wilbur Cohen believes, is due to the general inflation, which in turn he blames on the Vietnam War.

In any case, a flood of new money--Medicare and Medicaid are now paying for well over a third of all health care--was poured into the system at a moment when medical costs had been rising more quickly than most other costs. No serious attempt was made either to increase the number of providers or to hold down costs by imposing controls. The result was predictable.

The inflationary effect worked differently in the two cases of doctors' fees and hospital bills. In the case of physicians' fees, what happened was what economists call a "demand-pull" inflation. Since the supply of doctors remained virtually stable, greatly increased demand meant that doctors could increase their fees (sometimes directly, sometimes by splitting procedures and thereby charging more for what would have been one appointment) and still be sure of the same volume of patients.

Some have argued that hospital costs, too, rose mainly in response to demand-pull, as an army of new users descended on overstressed resources. The argument is attractive to hospital administrators because, if accepted, its logical corollary is that even more money should be spent on hospitals--something administrators approve of for various reasons. But while admissions to hospitals rose by 21 percent over the period from 1961 to 1969, the supply of hospital beds rose even faster, by 25 percent. Medicare and Medicaid did indeed drive hospital costs up, but not by stimulating an excess of demand over supply. In the hospitals, there was cost-push inflation.

In October, 1968, two economists, Paul S. Feldstein and Saul Waldman, correctly summarized what had happened in the Social Security Bulletin:

"In Medicare's first year, the financial position of the hospitals improved considerably, possibly as the result of the following factors:

(a) increases in occupancy rates;

(b) reimbursement to hospitals for the cost of services to some aged patients, previously provided free or at reduced charges;

(c) reduction of losses from uncollectibles...

(d) payment to voluntary and government hospitals under Medicare of an allowance amounting to 2% of allowable costs...

(e) receipt of additional revenue from higher charges."

Why did charges go up? Feldstein and Waldman offered two alternative explanations:

"1. Hospital management may have miscalculated the effect of Medicare and believed higher charges to non-Medicare patients would be needed because it expected less than adequate reimbursement under Medicare.

2. Hospital management may have decided that the early Medicare period, which was a period of unusual change in hospital finances and accounting, was a convenient time to adjust their charge schedules." [They also added a third point: some large hospitals remained in deficit even after increasing charges].

That is a gentle way of putting it. Bill Fullerton, in Wilbur Mills's office, spelled it out more bluntly. "After Medicare," he told me, "the hospitals got paid more than before: they got full cost. And that meant that, despite all their protestations to the contrary, for the first time they were really making money. I talk to plenty of hospital administrators, and they say, 'I have a little list of things I want to do for the hospital. When Medicare came along, I could start checking them off.'"

John de Lury put it even less kindly, and still accurately:

"The hospitals with the high patient costs are the newer ones, those on the make, with brilliant reputations, with teaching affiliations. Above all they are the ones with programs of vast expansion and edifice complexes. Their rates are high to support their vast ambitions, and they are making us pay through the nose."

The strategic accommodation accepted by the Johnson Administration in 1965, in order to pass Medicare, was with the hospital people: with the American Hospital Association and with Blue Cross. (In most American cities, hospital trustees, senior medical staff, and Blue Cross boards are so intertwined that it can truthfully be said the deal was made with a single interest: "the hospital people.") Two main concessions were made. Medicare was to be financed by reimbursing costs. And the hospitals were to be allowed to choose their own "fiscal intermediaries" to check, audit, and authorize disbursement. Some chose private insurance companies. More chose Blue Cross. Given the coinciding interests, attitudes, and personal contacts of many hospital administrators and of the Blue Cross people who were supposed to be riding herd on them, it is not too harsh to say that "the hospital people" were, in effect, allowed to write their own ticket.

They could, and did, expand their buildings, take on new staff, invest in fancy electronic equipment, make generous settlements with the unions--and could be paid whatever the bill came to by the feds, just as long as the friendly fellows at Blue Cross or at the insurance company said it was OK. The result was a bonanza not only for doctors and hospitals but for insurers, electronic data processors, surgical dressing manufacturers, drug companies, and all the other interests which feed at the $70-billion trough of the medical-industrial complex. It was no accident that in the years immediately following passage of Medicare, the hottest of the hot stocks on Wall Street were those of profit-making nursing homes for old people. The promoter of the hottest of them all, the tactfully named Four Seasons Nursing Centers, has just pleaded guilty to the biggest stock fraud in American history: $200 million.

What made the Medicare bonanza so attractive was this no-loss proposition: the federal government was footing the bill, and the providers were adding it up. As two Tufts Medical School professors wrote in 1970: "Medicare has proved a better mechanism for insuring the providers than the patients."

Government audits have tightened up recently, and have shown how inadequate both sections of the insurance industry, the profit-making companies and Blue Cross/Blue Shield, proved at controlling costs. Massachusetts Blue Cross, for example, was given a 15 percent increase in premiums by the regulatory authority on December 1, 1970, and then filed for a further 33 percent increase on May 10, 1971. In the previous two years, modest investigation revealed the executive payroll had almost doubled. Blue Cross of Chicago was criticized by HEW auditors for using Medicare money to pay for first-class air travel and entertainment for its executives. And so it went in the commercial companies, too. In certain cases, federal auditors found that insurance companies acting as fiscal intermediaries programmed their computers to omit from cost-control checking procedures hospitals in which the companies had invested.

Two particularly baroque tales illustrate just how free and easy the spending was: the episode of the monogrammed golf balls, and the rags-to-riches saga of the Medicare Billionaire.

In Virginia, Blue Cross was named as the "intermediary" for Medicare, Part A (hospital services), and Blue Shield was the "fiscal agent" on behalf of the state for Medicaid. They were charged by the law with deciding which "providers"--mainly hospitals in this case--should be paid how much, on a "reasonable cost basis." They were then to receive, disburse, and account for the money, and apply safeguards against "unnecessary utilization of services." The government auditors' report suggests that this hardly turned out to be the main problem.

The "Virginia Blues" took on staff until the federal auditors found, two years later, that staffing was "about 23% in excess of requirements." They bought two big IBM machines, so that while the workload increased by 22.3 percent, money spent on data-processing jumped 1409.8 percent. Yet, the auditors found, the system remained "basically ineffective." Other expenditures seemed even harder to justify. Soon after getting the Medicare contract, the Virginia Blues built a new office building, and spent more than a million dollars on new furniture for it...from the furniture company whose sales manager was chairman of the state Blue Cross board of trustees' building committee. He did not, however, let that position influence him into giving Blue Cross undue bargains. With no competitive bidding, Blue Cross paid $1750 each for secretaries' desks.

Medicare was also billed for at least part of the cost of entertainment for Blue Cross executives and their "clients"--it would be interesting to know whether "clients" means insurees or doctors--including "cocktails, beer, wine, alcoholic beverages, tickets for stage plays and football games and golf fees." A portion of the cost of a company picnic was charged to Medicare, including 1050 buffet dinners, two bartenders, bingo prizes, and the rental of six ponies.

The Social Security Administration's auditors commented (dryly) that "since alcoholic beverages are not considered stimulants of production and do not help to disseminate technical information, they cannot be considered allowable costs to the Medicare program."

Finally there were the golf balls. Medicare was charged with one-third of $2138.50 paid for thirteen dozen golf balls imprinted with the Blue Cross/Blue Shield monogram. Teachers and social workers are not the only ones who sometimes benefit more from programs intended to help the poor than the poor themselves do.

Shortly after his first inauguration, President Nixon announced the names of a select group of trustees for the Nixon Foundation. Most of them were either well known to be old friends of the new President, or at least heavy campaign contributors. More than one owed a good deal to the medical-industrial complex: W. Clement Stone, for example, or Elmer Bobst, the Vitamin King. But one name was then quite obscure: that of H. Ross Perot.

By the end of 1969, after his elevation to the board of the Nixon Foundation, H. Ross Perot had become a world celebrity by chartering two jets (one of them modestly christened "Peace on Earth") and flying off to Southeast Asia in a well publicized attempt to ransom American POW's. By 1970, with stock of the Electronic Data Systems Corporation, which he controlled, selling at over $150 a share on the New York Stock Exchange, Perot's personal wealth was authoritatively estimated at $1.5 billion. No American, Fortune magazine guessed admiringly, had ever made so much money so quickly. But then neither Henry Ford nor John D. Rockefeller, nor even Paul Getty, had had federal Medicare funds to help him.

Perot, at one time an IBM software salesman, made his big leap in 1965 when Texas Blue Cross and Blue Shield subcontracted the data-processing work arising under their Medicare contract to the company he had founded in 1962, the Electronic Data Systems Corporation (EDS). Perot was at that time, and remained for almost a year afterwards, a part-time employee of Texas Blue Cross; he was manager of their data-processing department at a salary of $20,000 a year. Until then, EDS had been small beer. Its turnover had never exceeded $500,000 a year. The Blue Cross contract was worth $5 million; the contract ran for three years, even though Blue Cross's contract with the government, on which it depended, was for only one year. There was no competitive bidding, and while the contract between the Social Security Administration and the Texas Blues had a provision for the examination of records, the contract between Blue Cross and EDS did not. To get ready to handle this tempting contract, EDS had enjoyed a vital helping hand: a loan of $8 million from the Republic National Bank of Dallas, whose chief executive officer was the chairman of Blue Cross.

Once into Medicare and Medicaid work, Perot and EDS never had to look back. The corporation's gross revenue rose from $1.6 million in 1966 to $47.6 million in 1970. The rate of profit on that revenue rose from 15 percent in 1966 to 41 percent in 1969, and then fell back to a mere 29 percent in 1970. By 1971, EDS was doing the electronic data-processing work for Medicare in nine states, including four out of the biggest five. Two-thirds of its gross revenues have come from Medicare and Medicaid. EDS stoutly maintains that it has processed Medicare and Medicaid claims more cheaply than they would otherwise have been processed, and this may be true. In any case, on its own showing, EDS made profits of up to 41 percent on turnover, two-thirds of which came straight out of public funds which were supposed to be disbursed on a "reasonable cost basis." In other words, in a perfectly legal manner, and in less than five years, Ross Perot was able to make himself a very wealthy man in a way that would be regarded in many other countries as incredible: by owning stock in a company that helped other private organizations decide whether the government should or should not pay out public funds for the medical care of the old and the poor.

In an atmosphere where such things were possible, it was probably inevitable that attention should turn from the mechanics of the health care system to its ethics. While the escalation of costs forced even conservatives to concede that the system faced crisis, liberals--who had largely confined themselves to quantitative and economic issues in the past--began to make more searching criticisms.

"The cost question turned the spotlight on the other deficiencies of the system," said Karl Yordy of the National Institute of Medicine. "It was not only the increase in costs produced by the fact that Medicare was on a cost-reimbursing basis," thinks Dr. Jack Geiger, the head of the State University of New York's new medical school at Stony Brook, "it was the rise in costs, plus the failure to deliver health care, that revealed the inadequacies of the system. The push for a reassessment of the system came from the fact that the cost of health care, and the difficulty of finding primary sources of health care, were beginning to hit the white middle class."

Whatever the reason, four new, interrelated lines of criticism began to be heard in the late 1960s with increasing force. Each probed more deeply than the last into the substructure of assumptions that underlay the American medical system.

* A new skepticism appeared about the value of technology in medicine; a new willingness to question an equation that had been virtually unchallenged for a generation: the assumption that good medical care means advanced medical technology.

* Institutions, and in particular the dominant institution of modern medicine in America, the hospital, became objects of increased suspicion.

* The physician's professional authority began to be challenged as never before.

* Ultimately, even the traditional ethics of medicine were called into question on issues of social and political responsibility.

For over twenty years, from 1948 to 1968, Congress had been persuaded to pour money steadily into medical research. Most politicians were aware that "health" was an issue with their constituents. But once the AMA seemed to block any reform of the health care delivery system, the only politically prudent way of showing concern for the health issue was by voting money for research. A formidable coalition pressed the good work forward. It consisted of lobbyists, led by two indomitable ladies, Mary Lasker and Florence Mahoney, and others less free from self-interest; congressmen and senators, led by Representative John Fogarty of Rhode Island and Senator Lister Hill of Alabama; and research administrators. This coalition succeeded in increasing congressional appropriations for medical research, channeled through the National Institutes of Health (NIH), from $7 million in 1947 to over $1 billion twenty years later. In sixteen of those twenty years, Congress actually reversed its normally stern budget-cutting inclinations and appropriated more money than the Administration was asking for. [See Endnote 2]. Not coincidentally, in the late 1950s and the early 1960s, the public began to read more and more about the wonders of arcane medical technology: about miracle drugs, and magic surgery and electronic aids to diagnosis, and eventually about "space age medicine."

Dr. Jack Geiger was strategically situated to observe this phenomenon. He had run OEO-funded health centers in Mississippi and Boston before heading up SUNY's new medical school at Stony Brook. But before going to medical school he had been a science reporter for UPI, where it was his job to chronicle the marvels of the new medical technology. As a result of that experience he is convinced that inflated propaganda during those years led eventually to a backlash of skepticism about what technology could achieve in medicine.

"There was endless publicity about the cure for cancer, the cure for heart disease, and so on," Geiger said, putting audible quotation marks around the word "cure." "People began to feel it was only a matter of time before the brilliant, dedicated doctors discovered a cure for death."

He was hardly exaggerating. In 1961, President Kennedy was interviewed on NBC's Today show by Dave Garroway, who asked him: "Will we eventually cure everybody? Will the health of the nation approach perfection someday, by care and research?" And it wasn't only gushing television interviewers who seemed in the early 1960s to assume that research would "cure everybody" someday. For Mary Lasker, Stephen Strickland has written, "the conquest of disease was...an obtainable goal," while many senators thought that conquest was "assured, if not imminent."

And yet, after twenty years of these unprecedentedly high expenditures on research, American medicine, far from "curing everybody," was unable to prevent public health standards from slipping behind those of many other countries with far smaller resources. By the time the 1970 United Nations Demographic Yearbook was published, the United States was seventeenth in the international league table in infant mortality, behind Hong Kong, Western Samoa, and Fiji as well as most countries of Western Europe. Twelve countries, plus two of the constituent republics of the Soviet Union, the Ukraine and Byelorussia, claimed a higher life expectancy for females, and another six came within one year of the U.S. figure of seventy-four years; while in life expectancy for males, in spite of all the money spent on research into cancer, heart disease, and stroke, the United States ranked thirtieth, behind Spain, Greece, and five countries in Communist Eastern Europe. [See Endnote 3].

A reaction against the prevailing faith in technology was to be expected, and it came. Doctors and laymen began to wonder whether American medicine was not placing too heavy an emphasis on drugs, surgery, and research, at the expense of primary and preventive health care. Medical academics began to show a new interest in foreign medical practice. The National Health Service in Britain, once beyond the American pale because it was "socialized medicine," has probably been more comprehensively studied in the last five years by American than by British scholars. Scandinavian public health has attracted almost as much attention.

These comparative studies came back with some disturbing information. One study [See Endnote 4] found, for example, that twice as much surgery was performed in proportion to population in the United States as in England and Wales. This might have been taken as comforting proof of the old-fashioned reluctance of British surgeons to operate--except for the awkward fact that rates of surgery in American group health plans also turned out to be half those reported for Blue Shield fee-for-service practice. In other words, those American surgeons who earn a fee every time they operate perform twice as many operations as British surgeons. And worse, when American surgeons had a financial incentive to operate, they did so twice as often as when they had none. Another study showed that the incidence of tonsillectomies in California was twice as high as it was in Sweden, a country with outstandingly high standards of health care for children. Numerous other studies, not on an international comparative basis (a study of New York teamsters' and their families' health, for example), supported the suggestion that a disturbingly high proportion of elective surgery was unnecessarily performed.

Another trend fed the new skepticism: a galloping increase in specialization within the health professions. "Virtually all the attributes and the attending problems of modern American medicine," the medical sociologist Rosemary Stevens wrote in 1970, "spring from the gigantic technological achievements...which both precipitated and were facilitated by functional specialization." The general practitioner was becoming extinct. By 1970, there were twice as many men and women training for a single surgical specialty--orthopedic surgery--as for general practice. By 1968, 70 percent of all the physicians in the country claimed to be specialists. A majority of doctors, perhaps, continued to equate increased specialization with increased skill. And yet there was a growing awareness that while specialization, like research, had undoubtedly raised the standard of the best treatment--in the sense that sophisticated diagnosis, medication, and surgery were available--progress in these fields had failed to bring a commensurate improvement in standards of overall care, as compared with those of other countries.

In 1969, for only the second time in two decades, Congress slashed the NIH budget. The cuts were a consequence of two unrelated events: the Vietnam squeeze on domestic spending and stinging criticisms of NIH's financial management by a House subcommittee headed by Representative L. H. Fountain, Democrat of North Carolina. The cuts, while not very large, signaled a change of mood. By the end of the 1960s, the prevailing disposition in Congress, and among those in the general public who thought about medical policy, was to concede that research remained important, but to insist that improving the delivery, and reducing the cost, of health care must have priority over the litany of "cancer, heart disease and stroke."

A similar sequence can be traced in attitudes toward the hospital as an institution. Most high-technology medicine is practiced in hospitals, so that for a time they, and their fund-raising efforts, benefited from the prestige of the new miracles that were being performed in them. Moreover, with the proliferation of specialists, many thoughtful analysts envisioned a greatly expanded role for hospitals as medical centers, coordinating the diagnosis and treatment of both in-patients and outpatients.

But by the end of the decade, attitudes toward hospitals were changing. Too many people were being hospitalized, partly because of technological trends, but even more because of the way insurance policies were being written. The average cost of hospitalization increased by 75 percent between 1966 and 1969.

Just at this moment, for largely professional reasons, a new wave of liberal and radical medical academics (many of them students of "social medicine," a new discipline for which universities and medical schools were setting up departments during these very years) wanted to see the hospitals' role diminished. "The principle of the hospital as the center of care, including ambulatory care, is being challenged," wrote Dr. Jack Geiger at about this time. "People shouldn't go to the hospital unless they have to," said Dr. Victor Sidel of Montefiore in the Bronx. "Why not?" I asked. "Because hospitals are dangerous places." He went on to explain that hospitalization may serve as a depressant where patient morale is concerned, and that the danger of infection persists in even the best-run modern hospital.

Hospitals were also being assailed from without. They were being attacked by poor people, especially poor black people, and by their radical allies, black and white. The charge was that at best they were unresponsive to community needs, and, at worst, arrogant and racist. The attack was made at many levels: in sophisticated academic argument; as part of radical confrontation tactics; and in the heat of spontaneous outbursts of anger and despair.

In 1970, Barbara and John Ehrenreich, who had been associated with a group of radical medical reformers called the Health Policy Advisory Center (Health-PAC), distilled the radical critique into a forceful and provocative book, The American Health Empire. Their thesis is one of alluring simplicity. Liberals, they say, are always lamenting the lack of system in the American health industry, or complaining that "the system doesn't work." They are wrong, say the Ehrenreichs. There is indeed a health system in America, and it works efficiently enough. The catch is that its primary objective is not to provide good health care for the American people. The FIRST priority of the system, say the Ehrenreichs, is to make money; and this the health industry does in a highly efficient manner, for doctors, drug manufacturers, insurance companies, and the rest of the medical-industrial complex, including "nonprofit hospitals, which take their profits in power, in salaries for administrators and medical staff, and in real estate."

The second priority, the book goes on to argue, is research, which the industry needs to make profits, and which requires the clinical data provided by patients, especially in hospitals. The third priority is training more doctors, to perpetuate the system. Here, too, a supply of patients, preferably poor patients with "interesting" conditions, is required. [See Endnote 5]. Some doctors will go on to do research. Some will teach. Most will make money, for themselves and for the medical-industrial complex. And, incidentally, they will have to provide health care for the American people.

This cynical but compelling analysis is supported by a historical interpretation. The Ehrenreichs argue that a three-stage revolution of scale has occurred in American medicine, analogous to the development from small business to corporation to giant multinational conglomerate. First, the individual doctor in private practice began to give way to the hospital: by 1969 less than 29 percent of the nation's health expenditures went to individual doctors. Then, even before this process had run its course, individual hospitals became dependent on the medical schools and teaching hospitals. In the last twenty years, almost all hospitals and health centers in New York, and a very large number of doctors in practice, have come under the medical and organizational influence of Columbia-Presbyterian, Einstein-Montefiore, and the city's five other major teaching hospital complexes. The same process has occurred in Baltimore, Philadelphia, Boston, Houston, Los Angeles, and other major cities. The resulting networks of affiliated institutions the Ehrenreichs call "health empires," and they argue that these empires have an institutional compulsion to expand, in order to attract the talent and the money--both from the federal government and from private foundations and bequests--they need to stay on top.

Radicals like the Ehrenreichs generally argue that these newly emerging giant medical institutions carry within them the seeds of their own destruction. Almost all of them are located in the inner city--Columbia-Presbyterian, for example, on the fringes of Harlem--where the contrast between their wealth and privilege and conditions in the surrounding community will inevitably lead to dialectical contradictions and ultimately to conflict.

To a limited extent, the people of the ghettos began to verify this prediction. However, just as in the ghetto rioting between 1964 and 1968 small, local businesses were attacked while major symbols of social and economic power--like the General Motors building, an oasis in the flames of Detroit--were left untouched, so in medicine it was the old, poor hospitals that bore the brunt of the trouble, not the rich, expanding medical complexes. Confrontations have occurred between administrators and insurgents in several big city hospitals: Lincoln Hospital in the Bronx, Cook County in Chicago, San Francisco General, and D.C. General in Washington, for example. The insurgents in each case were young, mainly white, middle-class radical doctors; or community militants; or some combination of these.

The suspicion that, for all its technical achievement, American medicine was falling away from both professional ethics and the concept of social responsibility was not confined to radicals or to the young. Consider, for example, the following harsh judgment on the quality of treatment dispensed to recipients of Medicaid:

"The task force, along with what is possibly a majority in the health profession and certainly a majority of the population interprets the accurate Federal enactments [those, that is, establishing Medicaid] as intending that access to basic medical care shall be a right or entitlement....That right or entitlement is not fulfilled when millions in the population...are given a kind of service that is woefully inferior by every standard known to man or doctor."

That statement is taken from no radical diatribe. It is a quotation from the official report of a task force set up by the Nixon Administration to study Medicaid, with staff support from HEW, and chaired by the president of the Blue Cross Association of Chicago, Walter McNerney. In a paper written in 1972, Dr. Victor Sidel summed up the anguished reassessment that had been taking place in a thousand lectures, papers, and medical conferences: "Everything that we have learned about how clever we are, how important we are, how relevant we are, how trusted we are, how self-sacrificing we are, will have to be re-examined."

Besides speeches, reports, and articles, the wave of dissatisfaction with medical practice manifested itself in a variety of ways, on the part of both doctors and their patients. Doctors demanded peer review of each other's professional performance. Many medical experts argued for the development of a paraprofessional class, to relieve doctors of some part of their medical monopoly. Most significantly, malpractice suits against doctors rose sharply, and courts increasingly had a tendency to find for the plaintiffs, the patients.

Writing in the Spring, 1973, issue of The Public Interest, sociologist Nathan Glazer discussed this last phenomenon and expressed a concern felt by many Americans in the years of the great medical bonanza. He pointed out that the incidence of malpractice suits in England or in Sweden had not greatly increased, and asked

"why the best-paid doctors in the world--at least some of them--seem to find it necessary to engage in barely legal or illegal practices in order to force their incomes higher? Or is it indeed because they are the best-paid doctors in the world that so many have engaged in profiteering at the public expense?...Is it possible that the somewhat greater distance between healing and monetary payment that generally prevails in England and Sweden contributes to a morally healthier medical profession?"

In that same article, reviewing recent academic studies of the American health system, Professor Glazer reports a "recent backlash of defense for the American health system."

If, in the late 1960s, congressional faith in unlimited research expenditure wavered under the influence of a new concern with the human problems of health care delivery, and a new skepticism about technology, that skepticism was rather short-lived. The "dread disease lobby" soon reasserted itself. In January, 1971, President Nixon proposed (with the support, incidentally, of Senator Kennedy) to set up a new, separate National Cancer Institute. He advocated it with the old can-do rhetoric about the need for the kind of effort "that split the atom and took man to the moon." The rest of the medical research community was strong enough to prevent this dismemberment of NIH. But by the end of 1971, Congress enacted, and the President signed, a new Cancer Act, authorizing $1.6 billion over three years for a new "assault" on cancer. At the medical schools, the high tide of radicalism is said to have crested.

Even before its attention was distracted and its energies paralyzed by Watergate, the Administration had lost any sense of urgency about national health insurance. Indeed it had all but abandoned its plans in their original form. Before he left HEW on his way to the Justice Department via the Pentagon, Elliot Richardson had his staff prepare what came to be called the Mega proposal. This was a secret plan for comprehensive "simplification and reform" of the more than 300 HEW categorical social programs. In general, the plan followed the lines of the Administration's general philosophy of revenue-sharing and "the New Federalism"--not to mention its desire to cut back domestic spending.

The health section, which soon leaked out, proved of special interest. For one thing, it contained a harshly explicit critique of the Administration's own national health insurance proposals. "Many billions," it said, would be needed to convert these into a "true universal entitlement plan with a comprehensive benefit plan." In its place, the Mega paper suggested a new concept: Maximum Liability Health Insurance. In effect this was a much liberalized version of the concept of "catastrophic insurance." The government would provide cover from the "catastrophic" end of the scale down to a maximum liability fixed for each individual according to his income. Below that level he would be on his own, at liberty either to pay his own bills or to insure himself in the market.

This scheme derived largely from the theories of Dr. Martin Feldstein, the Harvard economist who worked as a consultant on the Mega proposal. Feldstein has now spelled out the maximum liability concept under another name ("major risk insurance," or MRI) in an article in the June, 1973, issue of the AMA magazine, Prism.

"Every family would receive a comprehensive insurance policy with an annual direct expense limit...scaled so that families with higher incomes would be responsible for larger amounts of their medical bills...the expense limit might start at $300 per year for a family with an income below $3,000. It might be ten percent of the income of persons earning between $3,000 and $10,000 and $1,000 for incomes above that level. MRI could be improved by introducing a coinsurance feature above that basic deductible...."[See Endnote 6].

Feldstein argues that his scheme would not only prevent financial hardship for individuals but would also save billions of dollars in public money, both by removing the existing tax deductions for medical expenses (though of course these could be removed in conjunction with any other reform) and by making people "more cost conscious in spending their own money and this in turn would help to check inflation."

Feldstein is an economist, but his analysis seems almost oblivious to noneconomic factors. In one of his papers, he attributes rising costs primarily to rising demand on the part of patients, brushing aside the degree to which it is doctors, and not patients, who make the effective decisions about the demand for treatment. He argues that the best way to measure the benefits of hospital care is in dollars! In short, he, and others of his school, analyze the medical system as if it were a marketplace like any other. They advocate "deductibles," "coinsurance," and other devices to make the patient spend his own money when he needs medical care precisely because such devices help to turn medical decisions into economic ones.

The revival of respect for essentially economic prescriptions for the health care crisis was to be expected. But it marked the end of a period in which economic problems caused a revaluation of the noneconomic deficiencies of the medical system.

Both the shift to the left in the late 1960s and the subsequent reaction can be explained by factors intrinsic to the health care system--up to a point. There is, as I have tried to show, a great deal of evidence that the way Medicare and Medicaid were introduced led to a rapid rise in medical inflation, and that inflation led to reassessment, to a sense of crisis, and to a change in the whole climate for reform of the system. The reaction can also be explained in terms of developments within the world of medicine. If rapidly rising costs led to a sense of crisis, then it can hardly be irrelevant that inflation of health costs has slowed down. Harry Schwartz, in his spirited 1972 book called The Case for American Medicine, claims that "between August 1971 and August 1972, the medical price index rose only 2.2%, substantially less than the cost of living rose in the same period." Checking with other health economists, I found that there is some dispute about the extent to which government policy from Phase I to Phase IV braked the inflation of medical costs, but it is not disputed that the sharp rise of the late 1960s is now over. Indeed, to the extent that the sharp inflation was caused by fee increases that would not be repeated for some time to come, it would be odd if the rate of inflation had not slackened off. And again, to the extent that the rise in costs created a new propensity to ask fundamental questions about the system, the passing of the wave in medical costs would naturally be expected to presage some calming of the clamor for reform.

But I suspect that something larger is also involved. The striking thing is how closely the changes in political and intellectual attitudes to the health care system have mirrored the changes in other areas of American life.

A parallel from the history of higher education may illustrate what I mean. In August, 1967, Christopher Jencks and David Riesman finished writing a book about American universities, which they called The Academic Revolution. In January, 1969, they sat down to write a preface to the second edition. In the intervening seventeen months, all hell had broken loose on American campuses. The "revolution" of their title, to the extent that it was more than merely metaphorical, had been a slow, gradual affair, transforming universities over a couple of decades. Now, to the embarrassment of Jencks and Riesman, people were rushing out to buy their book in the hope of getting some understanding of what looked like an all-too literal academic revolution.

This experience set Jencks and Riesman to measuring, in their second preface, the sheer speed of change in American attitudes to higher education since they had started studying it at the end of the 1950s. Then, they recalled,

"both educators and laymen seemed appallingly complacent about American higher education...the only widespread complaints about higher education were that Americans needed more of it...Almost all educators accepted the legitimacy and authority of the academic profession a few years ago, even when they criticized specific aspects of its operation. Today all this has changed...the public is becoming more skeptical about educators' claims...educators are far less sure than they were that their traditions and values are worth defending...a small but growing minority seems convinced that the academic system...is not just blemished but fundamentally rotten."

Change only a few words--"educators" into "doctors," "academic" into "medical," and so on--and this could stand as a summary of what happened in American medicine in the late 1960s. It was as true of American medicine as of higher education in the 1950s and the early 1960s--probably more true--that "the only widespread complaints were that Americans needed more of it."

The crisis of confidence of the late 1960s, in short, seems to have followed the same course in the universities as it did in medicine, and in fact as it did in the life of other social institutions as well. Doubt, criticism, crisis, radical challenges to old assumptions and established structures of authority, and a willingness even on the part of conservatives to move out of entrenched positions and accept a surprising degree of change: these were the characteristics of the years from 1966 to 1970. Then the old assumptions, the defenders of the status quo, and the doubts about the doubts began to have their turn once again.

The reasons for this pattern, I am suggesting, lie only partly inside the medical system itself, in the consequences of Medicare, or in the operation of demand-pull and cost-push. For, much as some of its leaders would like it to, the medical system does not in the end operate in isolation from the general mood of American life. In the most specific ways, medical institutions, like other institutions, felt the impact not only of liberal reform but also of the Vietnam War, of inflation, the civil rights movement, feminism, student dissent, and ghetto anger. It also felt the reaction to the great national crisis of the 1960s: the disillusionment with rising federal budgets, the impatience of the white middle class with strident radicalism, the counterattack of organized interest groups, and the stubbornness with which conservative intellectuals have patched up intellectual defenses of the free enterprise status quo.

Yet most of the things that were wrong with American medicine when President Nixon thought the system faced imminent breakdown are wrong with it now. Americans still spend far more on their health than people anywhere else in the world, and live shorter lives than do the inhabitants of many of the countries Americans' ancestors emigrated from in the nineteenth century.

What are the prospects for change at this time? The initiative for health care reform having passed to Congress, the eventual shape of national health insurance will emerge from the tete-a-tetes between Senator Kennedy and Chairman Mills. Even if they do succeed in drafting a bill by the fall, it will have to wait in line behind both tax and trade legislation in what have become overcrowded sea-lanes leading to the Ways and Means Committee. Even if legislation is drafted late this year, it is realistic to guess that it would not become law before, say, 1977. Perhaps five years have been lost.

This is not the result of any obstructionist desire on the part of Wilbur Mills. The chairman may be an old gray fox, a pragmatist who keeps his cards close to his chest until he plays them. But he is also a positive man, a master craftsman of consensus. He personally wants to pass a historic--which means a substantial--national health insurance bill before he retires. He has steered thousands of pieces of legislation through Congress. But Medicare was to have been his monument. He sees that monument as slightly tarnished, by inflation and by the suggestion of incompleteness. National health insurance would be a fitting crown for one of the great congressional careers.

Mills does not think such a bill can be passed unless it includes a role--some role, however limited--for the insurance industry. On the other great issue, the method of financing, the evidence is that neither Kennedy nor Mills has yet committed himself. And there are good reasons why they should not commit themselves yet. The deficit is so high that it would be hard to finance a major measure through the budget. The Social Security tax is already so high that it would be hard to increase it for a while. And the "mandated" employer-employee approach has several disadvantages--not least that it belongs to the Republicans. No wonder many experts are turning toward some gradualist approach to national health insurance. They want to crawl up to it unobserved, and wave the flag only when there is a victory to announce.

One such gradualist tactic would start with the catastrophic approach and transform it bit by bit. If the Long bill took effect, not at $2000 and sixty days, but at $100 and a week, we would wake up one morning to find that Congress had amended it into national health insurance. In theory, Martin Feldstein's idea of starting at the top and working down could be extended piecemeal even more effectively. A third approach, which both Mills's and Kennedy's staff have examined, would be to start with mothers and children--for example, with federal coverage of prenatal care, delivery, and postnatal care up to the age of five. That would have numerous advantages. Good health care in the first five years of life is cheap--it need cost no more than $5-7 billion a year, Dr. Rashi Fein calculates. It also reduces the need for more expensive care later in life. And motherhood is notoriously hard to oppose politically.

Senator Kennedy admits the attraction of such gradualist ideas. But for him they all suffer from the same philosophic weakness. They give the government no leverage to reform the health care system as a whole and they therefore risk repeating the experience of Medicare. His personal preference is for Social Security as a method of financing, and for an interesting reason. His thinking goes back to what was perhaps the most important single cause of the mood of reaction we have been witnessing recently: the feeling on the part of many middle-class people that they were left out by the liberal reforms of the 1960s. Again, it is worth noticing that the argument is in no way specific to the medical field; it applies to social policy as a whole. Kennedy told me:

"I think there's one lesson we learned in the 1960s and that is that programs which are directed toward alleviating social needs ought to be targeted in a universal way. What is the most successful social program we have in this country? Social Security. And why? Because it's universal. A policeman in Boston, for example, isn't going to be against a particular program that helps the blacks, so long as he's being helped too. Once he feels the pressures on him are being taken care of, I think he's enormously tolerant of doing something for the disadvantaged--so long as he feels that things are moving along for him too."

That is the language of consensus politics. It is, interestingly, identical with the view of Irving Kristol, who, through his editorship of the journal The Public Interest and friendship with Ambassador Daniel Patrick Moynihan, had a considerable influence on the social policies of the first Nixon Administration. And it may well be true that it will take a return to consensus politics, on Capitol Hill and perhaps in the White House too, if the Democrats can recapture it, before a major national health insurance measure is passed.

But the important weakness in the American health care system which the crisis of the late 1960s revealed was not the organizational and financial crisis which a cautious, compromise brand of national health insurance would deal with. It was the entrepreneurial concept of the doctor's social role, the intimate relationship, in Nathan Glazer's formulation, between healing and monetary reward, which has prevented a real, indeed a brilliant, improvement in medical technique from being translated into commensurate improvement in medical care. Cleansed of the entrepreneurial temptations which the Administration's interpretation has allowed, the Health Maintenance Organization could develop into the key institution in a transformation of the economic structure of medicine which would diminish the conflict between the doctor's and the patient's interests. Probably the best hope of spreading the HMO system would be by linking it with a national health insurance system; and there are certainly ways in which this could be done. But it is clear that in the crisis of the 1960s an opportunity for reforming one of the least attractive aspects of American life arrived, and was lost.


ENDNOTES:

1. For example, Dr. Martin Feldstein, in The Rising Cost of Hospital Care, National Center for Health Services Research and Development, 1971.

2. The operation of the medical research lobby was analyzed by Elizabeth Drew in The Atlantic of December, 1967. Mrs. Drew pointed out that cancer, heart disease, and stroke, which received the lion's share of congressional attention, happen to be the major medical threats to elderly, middle-class males, the demographic category to which all the key congressional figures belonged. The story has also been well told in detail by Stephen P. Strickland in Politics, Science and Dread Disease (Harvard University Press, 1972).

3. These international rankings are not absolutely precise, for two reasons. National statistics do not all date from the same year. And not all territories included in the U.N. data are sovereign nations. I have, for example, excluded Northern Ireland and the Ukraine (both of which have higher life expectancy than the United States). Interestingly, two U.S. dependencies, Puerto Rico and the Ryukyu Islands, also have higher male life expectancy than the United States.

4. By Dr. John Bunker, New England Journal of Medicine, 1970.

5. The revelation in 1972 that a federally sponsored research project in Tuskegee, Alabama, had been paying black syphilis patients to forgo treatment in the interests of research illustrated how far medical researchers were prepared to go. It also stimulated debate among civil libertarians and on Capitol Hill about defining new canons of research ethics.

6. The Rising Cost of Hospital Care, National Center for Health Services Research and Development, 1971. Feldstein complains that "the role of increasing demand as the primary cause" of cost increases "is not generally understood," in spite of the fact that he has taken the trouble to demonstrate it with "a formal mathematical model." Yet at times he seems a little confused on this point himself: he concludes his paper by saying that "our current methods of hospital insurance have encouraged hospitals... to increase the sophistication and expensiveness of their product more rapidly than the public actually wants." The trouble, in fact, has come from a special type of "demand," usually known to those unfamiliar with Feldstein's formal mathematical model as "supply."


Copyright © 1973, Godfrey Hodgson. All rights reserved.
The Atlantic Monthly, October, 1973, issue. Volume 232, Number 4 (pages 45-61).