The most recent questions for the 00M-648 exam are available at killexams.com

Killexams.com has many testimonials from successful 00M-648 test takers. Using these valid, latest, 2022-updated 00M-648 VCE practice questions is enough to pass the exam on the very first attempt, or your money back. Several successful 00M-648 test-takers have sent us their experience and the tricks the vendor used in the 00M-648 test.

Exam Code: 00M-648 Practice test 2022 by Killexams.com team
IBM Rational IT Sales Mastery Test v2
IBM Rational exam
Killexams : IBM Rational test - BingNews
https://killexams.com/pass4sure/exam-detail/00M-648

Killexams : Radian Group: Best Near Capital Gain Real Estate Services Prospect
Business on Wall Street in Manhattan

Pgiam/iStock via Getty Images

Investment Thesis

21st-century paces of change in technology and rational behavior (not emotional reactions) seriously disrupt the accepted productive investment strategy of the 20th century. Passive investing can't compete.

One required change is the shortening of forecast horizons. There must be a shift from the multi-year passive approach of buy and hold to the active strategy of specific price-change target achievement or time-limit actions, with reinvestment set to new nearer-term targets.

That change avoids the irretrievable loss of invested time spent destructively by failure to recognize shifting evolution, as in the cases of IBM, Kodak, GM, Xerox, GE, and many others.

It recognizes the progress in medical, communication, and information technologies and enjoys their operational benefits already present in extended lifetimes, trade-commission-free investments, and coming in transportation ownership and energy usage.

But it requires the ability to make valid direct comparisons of value between investment reward prospects and risk exposures in the uncertain future. Since uncertainty expands as the future dimension increases, shorter forecast horizons are a means of improving the reward-to-risk comparison.

That shortening is now best invoked at the investment entry point by using Market-Maker ("MM") expectations for coming prices. When reached, the expanded capital is then reintroduced at the exit/reinvestment point to new promising candidates, with their own specific near-term expectations for target prices.

The MM's constant presence, extensive global communications and human resources dedicated to monitoring industry-focused competitive evolution sharpen MM price expectations. But their job is to get buyers and sellers to agree on exchanging share ownership - without having to take on risk while doing it.

Others in the MM community provide protection for capital of the Transaction negotiators, which gets temporarily exposed to price-change risk. Derivative-securities deals to hedge undesired price changes are regularly created. The deals' prices and contracts provide a window of sorts to view MM price expectations, the best indication of likely near-term outlook.

This article focuses primarily on Radian Group Inc. (NYSE:RDN).

Description of Equity Subject Company

"Radian Group Inc., together with its subsidiaries, engages in the mortgage and real estate services business in the United States. Its Mortgage segment offers credit-related insurance coverage primarily through private mortgage insurance on residential first-lien mortgage loans, as well as other credit risk management, contract underwriting, and fulfillment solutions. This segment primarily serves mortgage originators, such as mortgage banks, commercial banks, savings institutions, credit unions, and community banks. The company's Homegenius segment offers title services. This segment serves consumers, mortgage lenders, mortgage and real estate investors, government-sponsored enterprises, and real estate brokers and agents. The company was founded in 1977 and is headquartered in Wayne, Pennsylvania."

Source: Yahoo Finance

Alternative Investments Compared

The investment selections most frequently visited by users of Yahoo Finance were supplemented by the principal stock holdings of the subject's exchange-traded fund ("ETF") and, as a market proxy, by the SPDR S&P 500 ETF (SPY), to make up a comparison group against which to match RDN.

Following the same analysis as with RDN, historic samplings of today's Risk~Reward balances were taken for each of the alternative investments. They are mapped out in Figure 1.

Figure 1.

MM hedging forecasts

blockdesk.com

(used with permission).

Expected rewards for these securities are the greatest gains from the current closing market price that are seen as worth protecting short positions against. They are measured on the horizontal green scale.

The risk dimension is of actual price drawdowns at their most extreme point while being held in previous pursuit of upside rewards similar to the ones currently being seen. They are measured on the red vertical scale.

Both scales show percent change, from zero to 25%. Any stock or ETF whose present risk exposure exceeds its reward prospect will be above the dotted diagonal line. Capital-gain-attractive to-buy issues lie down and to the right.

Our principal interest is in RDN at location [7], midway between locations [6] and [1]. A "market index" norm of reward~risk tradeoffs is offered by SPY at [4].

Comparing Features of Alternative Investment Stocks

The Figure 1 map provides a good visual comparison of the two most important aspects of every equity investment in the short term. There are other aspects of comparison which this map sometimes does not communicate well, particularly when general market perspectives like those of SPY are involved. Where questions of "how likely" are present, other comparative tables, like Figure 2, may be useful.

Yellow highlighting of the table's cells emphasizes factors important to securities valuations and the security RDN, the most promising for near capital gain as ranked in column [R].

Figure 2

detail comparison data

blockdesk.com

(used with permission)

Why Do All This Math?

Figure 2's purpose is to attempt universally comparable answers, stock by stock, of a) How big the prospective price gain payoff may be, b) how LIKELY the payoff will be a profitable experience, c) how soon it may happen, and d) what price drawdown risk may be encountered during its holding period.

Readers familiar with our analysis methods after quick examination of Figure 2 may wish to skip to the next section viewing Price range forecast trends for RDN.

Column headers for Figure 2 define investment-choice preference elements for each row stock whose symbol appears at the left in column [A]. The elements are derived or calculated separately for each stock, based on the specifics of its situation and current-day MM price-range forecasts. Data in red numerals are negative, usually undesirable to "long" holding positions. Table cells with yellow fills are of data for the stocks of principal interest and of all issues at the ranking column, [R].

The price-range forecast limits of columns [B] and [C] get defined by MM hedging actions to protect firm capital required to be put at risk of price changes from volume trade orders placed by big-$ "institutional" clients.

[E] measures potential upside risks for MM short positions created to fill such orders, and reward potentials for the buy-side positions so created. Prior forecasts like the present provide a history of relevant price draw-down risks for buyers. The most severe ones actually encountered are in [F], during holding periods in an effort to reach [E] gains. Those are where buyers are emotionally most likely to accept losses.

The Range Index [G] tells where today's price lies relative to the MM community's forecast of upper and lower limits of coming prices. Its numeric is the percentage proportion of the full low to high forecast seen below the current market price.
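
To make the Range Index arithmetic concrete, here is a minimal Python sketch that computes [G] from a forecast high, a forecast low, and the current price. The numbers used are hypothetical, not actual Market-Maker hedging-derived figures.

    def range_index(low, high, price):
        """Percentage of the low-to-high price forecast lying below the current price."""
        if high <= low:
            raise ValueError("forecast high must exceed forecast low")
        return 100.0 * (price - low) / (high - low)

    # Hypothetical example: a $20.00 stock with a forecast range of $19.00 to $24.00
    ri = range_index(low=19.00, high=24.00, price=20.00)
    print(round(ri))   # 20 -> one fifth of the range is downside, four fifths upside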

[H] tells what proportion of the [L] sample of prior like-balance forecasts have earned gains by either having price reach its [B] target or be above its [D] entry cost at the end of a 3-month max-patience holding period limit. [I] gives the net gains-losses of those [L] experiences.

What makes RDN most attractive in the group at this point in time is its basic strength in capturing much of the forecast upside [E] in realized payoffs of [I], shown in [N] as a credibility ratio. Only one of its competitors manages to realize profits of half of what has been implied as an upside price gain forecast target.

Further, Reward~Risk tradeoffs involve using the [H] odds for gains with the 100 - H loss odds as weights for N-conditioned [E] and for [F], for a combined-return score [Q]. The typical position holding period [J] on [Q] provides a figure of merit [fom] ranking measure [R] useful in portfolio position preferencing. Figure 2 is row-ranked on [R] among alternative candidate securities, with RDN in top rank.
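
To show how those pieces could fit together numerically, here is one possible Python reading of the description above: the win odds [H] weight the credibility-conditioned upside (E times N), the complementary loss odds weight the drawdown [F], and the result is scaled by the typical holding period [J]. This is my interpretation of the column definitions, for illustration only; the actual blockdesk.com scoring may differ in detail, and the inputs below are invented rather than taken from Figure 2.

    def combined_return_score(E, F, H, N):
        """Odds-weighted blend of upside and drawdown, all in percent terms.
        E: upside forecast, F: worst-case drawdown (negative), H: win odds, N: credibility ratio."""
        win = H / 100.0
        return win * (E * N) + (1.0 - win) * F

    def figure_of_merit(q_score, j_days):
        """One simple reading of 'holding period on Q': score per market day held."""
        return q_score / j_days

    Q = combined_return_score(E=15.0, F=-7.0, H=85.0, N=0.8)
    print(round(Q, 2))                              # 9.15
    print(round(figure_of_merit(Q, j_days=40), 3))  # 0.229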

Along with the candidate-specific stocks these selection considerations are provided for the averages of some 3000 stocks for which MM price-range forecasts are available today, and 20 of the best-ranked (by fom) of those forecasts, as well as the forecast for S&P 500 Index ETF (SPY) as an equity-market proxy.

Current-market index SPY is marginally competitive as an investment alternative. Its Range Index of 27 indicates 3/4ths of its forecast range is to the upside.

As shown in column [T] of Figure 2, those levels vary significantly between stocks. What matters is the net gain between investment gains and losses actually achieved following the forecasts, shown in column [I]. The Win Odds of [H] tell what proportion of the sample RIs of each stock were profitable. Odds below 80% often have proven to lack reliability.

Price Range Forecast Trends for RDN

Figure 3

daily forecast trends

blockdesk.com

(used with permission)

No, this is not a "technical analysis chart" showing only historical data. It is a Behavioral Analysis picture of the Market-Making community's actions in hedging investments of the subject. Those actions define expected coming price change limits shown as vertical bars with a heavy dot at the closing price of the forecast's date.

It is an actual picture of the expected future, not a hope of the recurrence of the past.

The special value of such pictures is their ability to immediately communicate the balance of expectation attitudes between optimism and pessimism. We quantify that balance by calculating what proportion of the price-range uncertainty lies to the downside, between the current market price and the lower expected limit, labeled the Range Index [RI].

Here, the RI at zero indicates no further price decline is likely, but not guaranteed. The odds of 3 months passing without either reaching or exceeding the upper forecast limit or being at that time below the expected lower price (today's) are quite slight.

The probability function of price changes for RDN is pictured by the lower Figure 3 (thumbnail) frequency distribution of the past 5 years of RI values with the zero today indicated.

Conclusion

The multi-path valuations explored by the analysis covered in Figure 2 are rich testimony to the near-future value prospect advantage of a current investment in Radian Group Inc. over and above the other compared alternative investment candidates.

Published Tue, 26 Jul 2022. Source: https://seekingalpha.com/article/4525821-radian-group-best-near-capital-gain-real-estate-services-prospect
Killexams : James Duncan

James Duncan is the head of Epsilon’s Financial Services vertical and brings over 25 years of experience in financial services. James has been on the forefront of many industry transformational efforts in banking and investment management over his career. He has extensive work experience in the areas of strategic planning, sales and marketing, and business optimization for numerous companies in various industries throughout the world. James has a strong public and private company leadership background that includes Executive Vice President of the credit card division within Western Alliance Bancorp, Senior Vice President, Global Relations Manager for Visa where he was responsible for all worldwide business, co-brand and operational aspects for Visa’s largest financial institution member, Director of Worldwide CRM Programs for IBM Rational Software, and management consultant with Coopers & Lybrand. James is the past Chairman of the Board of Junior Achievement of Delaware and holds a B.A. in Political Science from Saint Michael’s College in Vermont.

Published Fri, 24 Mar 2017. Source: https://www.accountingtoday.com/author/james-duncan-ab1655
Killexams : Did the Universe Just Happen?



The Atlantic Monthly | April 1988 | Did the Universe Just Happen? | Wright
 

I. Flying Solo


Ed Fredkin is scanning the visual field systematically. He checks the instrument panel regularly. He is cool, collected, in control. He is the optimally efficient pilot.

The plane is a Cessna Stationair Six—a six-passenger single-engine amphibious plane, the kind with the wheels recessed in pontoons. Fredkin bought it not long ago and is still working out a few kinks; right now he is taking it for a spin above the British Virgin Islands after some minor mechanical work.

He points down at several brown-green masses of land, embedded in a turquoise sea so clear that the shadows of yachts are distinctly visible on its sandy bottom. He singles out a small island with a good-sized villa and a swimming pool, and explains that the compound, and the island as well, belong to "the guy that owns Boy George"—the rock star's agent, or manager, or something.

I remark, loudly enough to overcome the engine noise, "It's nice."

Yes, Fredkin says, it's nice. He adds, "It's not as nice as my island."

He's joking, I guess, but he's right. Ed Fredkin's island, which soon comes into view, is bigger and prettier. It is about 125 acres, and the hill that constitutes its bulk is a deep green—a mixture of reeds and cacti, sea grape and turpentine trees, manchineel and frangipani. Its beaches range from prosaic to sublime, and the coral in the waters just offshore attracts little and big fish whose colors look as if they were coordinated by Alexander Julian. On the island's west side are immense rocks, suitable for careful climbing, and on the east side are a bar and restaurant and a modest hotel, which consists of three clapboard buildings, each with a few rooms. Between east and west is Fredkin's secluded island villa. All told, Moskito Island—or Drake's Anchorage, as the brochures call it—is a nice place for Fredkin to spend the few weeks of each year when he is not up in the Boston area tending his various other businesses.

In addition to being a self-made millionaire, Fredkin is a self-made intellectual. Twenty years ago, at the age of thirty-four, without so much as a bachelor's degree to his name, he became a full professor at the Massachusetts Institute of Technology. Though hired to teach computer science, and then selected to guide MIT's now eminent computer-science laboratory through some of its formative years, he soon branched out into more-offbeat things. Perhaps the most idiosyncratic of the courses he has taught is one on "digital physics," in which he propounded the most idiosyncratic of his several idiosyncratic theories. This theory is the reason I've come to Fredkin's island. It is one of those things that a person has to be prepared for. The preparer has to say, "Now, this is going to sound pretty weird, and in a way it is, but in a way it's not as weird as it sounds, and you'll see this once you understand it, but that may take a while, so in the meantime don't prejudge it, and don't casually dismiss it." Ed Fredkin thinks that the universe is a computer.

Fredkin works in a twilight zone of modern science—the interface of computer science and physics. Here two concepts that traditionally have ranked among science's most fundamental—matter and energy—keep bumping into a third: information. The exact relationship among the three is a question without a clear answer, a question vague enough, and basic enough, to have inspired a wide variety of opinions. Some scientists have settled for modest and sober answers. Information, they will tell you, is just one of many forms of matter and energy; it is embodied in things like a computer's electrons and a brain's neural firings, things like newsprint and radio waves, and that is that. Others talk in grander terms, suggesting that information deserves full equality with matter and energy, that it should join them in some sort of scientific trinity, that these three things are the main ingredients of reality.

Fredkin goes further still. According to his theory of digital physics, information is more fundamental than matter and energy. He believes that atoms, electrons, and quarks consist ultimately of bits—binary units of information, like those that are the currency of computation in a personal computer or a pocket calculator. And he believes that the behavior of those bits, and thus of the entire universe, is governed by a single programming rule. This rule, Fredkin says, is something fairly simple, something vastly less arcane than the mathematical constructs that conventional physicists use to explain the dynamics of physical reality. Yet through ceaseless repetition—by tirelessly taking information it has just transformed and transforming it further—it has generated pervasive complexity. Fredkin calls this rule, with discernible reverence, "the cause and prime mover of everything."

AT THE RESTAURANT ON FREDKIN'S ISLAND THE FOOD is prepared by a large man named Brutus and is humbly submitted to diners by men and women native to nearby islands. The restaurant is open-air, ventilated by a sea breeze that is warm during the day, cool at night, and almost always moist. Between the diners and the ocean is a knee-high stone wall, against which waves lap rhythmically. Beyond are other islands and a horizon typically blanketed by cottony clouds. Above is a thatched ceiling, concealing, if the truth be told, a sheet of corrugated steel. It is lunchtime now, and Fredkin is sitting in a cane-and-wicker chair across the table from me, wearing a light cotton sport shirt and gray swimming trunks. He was out trying to windsurf this morning, and he enjoyed only the marginal success that one would predict on the basis of his appearance. He is fairly tall and very thin, and has a softness about him—not effeminacy, but a gentleness of expression and manner—and the complexion of a scholar; even after a week on the island, his face doesn't vary much from white, except for his nose, which is red. The plastic frames of his glasses, in a modified aviator configuration, surround narrow eyes; there are times—early in the morning or right after a nap—when his eyes barely qualify as slits. His hair, perennially semi-combed, is black with a little gray.

Fredkin is a pleasant mealtime companion. He has much to say that is interesting, which is fortunate because generally he does most of the talking. He has little curiosity about other people's minds, unless their interests happen to coincide with his, which few people's do. "He's right above us," his wife, Joyce, once explained to me, holding her left hand just above her head, parallel to the ground. "Right here looking down. He's not looking down saying, 'I know more than you.' He's just going along his own way."

The food has not yet arrived, and Fredkin is passing the time by describing the world view into which his theory of digital physics fits. "There are three great philosophical questions," he begins. "What is life? What is consciousness and thinking and memory and all that? And how does the universe work?" He says that his "informational viewpoint" encompasses all three. Take life, for example. Deoxyribonucleic acid, the material of heredity, is "a good example of digitally encoded information," he says. "The information that implies what a creature or a plant is going to be is encoded; it has its representation in the DNA, right? Okay, now, there is a process that takes that information and transforms it into the creature, okay?" His point is that a mouse, for example, is "a big, complicated informational process."

Fredkin exudes rationality. His voice isn't quite as even and precise as Mr. Spock's, but it's close, and the parallels don't end there. He rarely displays emotion—except, perhaps, the slightest sign of irritation under the most trying circumstances. He has never seen a problem that didn't have a perfectly logical solution, and he believes strongly that intelligence can be mechanized without limit. More than ten years ago he founded the Fredkin Prize, a $100,000 award to be given to the creator of the first computer program that can beat a world chess champion. No one has won it yet, and Fredkin hopes to have the award raised to $1 million.

Fredkin is hardly alone in considering DNA a form of information, but this observation was less common back when he first made it. So too with many of his ideas. When his world view crystallized, a quarter of a century ago, he immediately saw dozens of large-scale implications, in fields ranging from physics to biology to psychology. A number of these have gained currency since then, and he considers this trend an ongoing substantiation of his entire outlook.

Fredkin talks some more and then recaps. "What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity life, DNA—you know, the biochemical functions—are controlled by a digital information process. Then, at another level, our thought processes are basically information processing." That is not to say, he stresses, that everything is best viewed as information. "It's just like there's mathematics and all these other things, but not everything is best viewed from a mathematical viewpoint. So what's being said is not that this comes along and replaces everything. It's one more avenue of modeling reality, and it happens to cover the sort of three biggest philosophical mysteries. So it sort of completes the picture."

Among the scientists who don't dismiss Fredkin's theory of digital physics out of hand is Marvin Minsky, a computer scientist and polymath at MIT, whose renown approaches cultic proportions in some circles. Minsky calls Fredkin "Einstein-like" in his ability to find deep principles through simple intellectual excursions. If it is true that most physicists think Fredkin is off the wall, Minsky told me, it is also true that "most physicists are the ones who don't invent new theories"; they go about their work with tunnel vision, never questioning the dogma of the day. When it comes to the kind of basic reformulation of thought proposed by Fredkin, "there's no point in talking to anyone but a Feynman or an Einstein or a Pauli," Minsky says. "The rest are just Republicans and Democrats." I talked with Richard Feynman, a Nobel laureate at the California Institute of Technology, before his death, in February. Feynman considered Fredkin a brilliant and consistently original, though sometimes incautious, thinker. If anyone is going to come up with a new and fruitful way of looking at physics, Feynman said, Fredkin will.

Notwithstanding their moral support, though, neither Feynman nor Minsky was ever convinced that the universe is a computer. They were endorsing Fredkin's mind, not this particular manifestation of it. When it comes to digital physics, Ed Fredkin is flying solo.

He knows that, and he regrets that his ideas continue to lack the support of his colleagues. But his self-confidence is unshaken. You see, Fredkin has had an odd childhood, and an odd education, and an odd career, all of which, he explains, have endowed him with an odd perspective, from which the essential nature of the universe happens to be clearly visible. "I feel like I'm the only person with eyes in a world where everyone's blind," he says.

II. A Finely Mottled Universe


THE PRIME MOVER OF EVERYTHING, THE SINGLE principle that governs the universe, lies somewhere within a class of computer programs known as cellular automata, according to Fredkin.

The cellular automaton was invented in the early 1950s by John von Neumann, one of the architects of computer science and a seminal thinker in several other fields. Von Neumann (who was stimulated in this and other inquiries by the ideas of the mathematician Stanislaw Ulam) saw cellular automata as a way to study reproduction abstractly, but the word cellular is not meant biologically when used in this context. It refers, rather, to adjacent spaces—cells—that together form a pattern. These days the cells typically appear on a computer screen, though von Neumann, lacking this convenience, rendered them on paper.

In some respects cellular automata resemble those splendid graphic displays produced by patriotic masses in authoritarian societies and by avid football fans at American universities. Holding up large colored cards on cue, they can collectively generate a portrait of, say, Lenin, Mao Zedong, or a University of Southern California Trojan. More impressive still, one portrait can fade out and another crystallize in no time at all. Again and again one frozen frame melts into another. It is a spectacular feat of precision and planning.

But suppose there were no planning. Suppose that instead of arranging a succession of cards to display, everyone learned a single rule for repeatedly determining which card was called for next. This rule might assume any of a number of forms. For example, in a crowd where all cards were either blue or white, each card holder could be instructed to look at his own card and the cards of his four nearest neighbors—to his front, back, left, and right—and do what the majority did during the last frame. (This five-cell group is known as the von Neumann neighborhood.) Alternatively, each card holder could be instructed to do the opposite of what the majority did. In either event the result would be a series not of predetermined portraits but of more abstract, unpredicted patterns. If, by prior agreement, we began with a USC Trojan, its white face might dissolve into a sea of blue, as whitecaps drifted aimlessly across the stadium. Conversely, an ocean of randomness could yield islands of structure—not a Trojan, perhaps, but at least something that didn't look entirely accidental. It all depends on the original pattern of cells and the rule used to transform it incrementally.

This leaves room for abundant variety. There are many ways to define a neighborhood, and for any given neighborhood there are many possible rules, most of them more complicated than blind conformity or implacable nonconformity. Each cell may, for instance, not only count cells in the vicinity but also pay attention to which particular cells are doing what. All told, the number of possible rules is an exponential function of the number of cells in the neighborhood; the von Neumann neighborhood alone has 2^32, or around 4 billion, possible rules, and the nine-cell neighborhood that results from adding corner cells offers 2^512, or roughly 1 with 154 zeros after it, possibilities. But whatever neighborhoods, and whatever rules, are programmed into a computer, two things are always true of cellular automata: all cells use the same rule to determine future behavior by reference to the past behavior of neighbors, and all cells obey the rule simultaneously, time after time.
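
Those counts follow from simple arithmetic: a neighborhood of n two-state cells can show 2^n distinct patterns, and a rule assigns an on/off outcome to each pattern, giving 2^(2^n) possible rules. A few lines of Python, added here only to verify the figures quoted above, reproduce both numbers.

    # Number of possible two-state rules for a neighborhood of n cells: 2 ** (2 ** n)
    for name, n in [("five-cell von Neumann neighborhood", 5), ("nine-cell neighborhood with corners", 9)]:
        patterns = 2 ** n          # distinct neighborhood configurations
        rules = 2 ** patterns      # one on/off choice per configuration
        print(f"{name}: 2^{patterns} = {rules:.3e} possible rules")
    # five-cell: 2^32, about 4.3 billion; nine-cell: 2^512, about 1.3 x 10^154,
    # i.e. "roughly 1 with 154 zeros after it"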

In the late 1950s, shortly after becoming acquainted with cellular automata, Fredkin began playing around with rules, selecting the powerful and interesting and discarding the weak and bland. He found, for example, that any rule requiring all four of a cell's immediate neighbors to be lit up in order for the cell itself to be lit up at the next moment would not provide sustained entertainment; a single "off" cell would proliferate until darkness covered the computer screen. But equally simple rules could create great complexity. The first such rule discovered by Fredkin dictated that a cell be on if an odd number of cells in its von Neumann neighborhood had been on, and off otherwise. After "seeding" a good, powerful rule with an irregular landscape of off and on cells, Fredkin could watch rich patterns bloom, some freezing upon maturity, some eventually dissipating, others locking into a cycle of growth and decay. A colleague, after watching one of Fredkin's rules in action, suggested that he sell the program to a designer of Persian rugs.
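
For readers who want to watch such a rule run, here is a minimal Python sketch of that first Fredkin rule on a small wrap-around grid: a cell is on at the next tick if an odd number of cells in its five-cell von Neumann neighborhood (itself plus the four orthogonal neighbors) are on now. The grid size and the random seeding are arbitrary choices made for this example, not anything specified in the article.

    import random

    SIZE = 16  # small square grid whose edges wrap around

    def step(grid):
        """One tick of the parity rule: a cell is on next if an odd number of the
        five cells in its von Neumann neighborhood are on now."""
        n = len(grid)
        nxt = [[0] * n for _ in range(n)]
        for r in range(n):
            for c in range(n):
                total = (grid[r][c]
                         + grid[(r - 1) % n][c] + grid[(r + 1) % n][c]
                         + grid[r][(c - 1) % n] + grid[r][(c + 1) % n])
                nxt[r][c] = total % 2
        return nxt

    # Seed an irregular landscape of off and on cells, then let patterns bloom.
    random.seed(0)
    grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
    for _ in range(8):
        grid = step(grid)
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))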

Today new cellular-automaton rules are formulated and tested by the "information-mechanics group" founded by Fredkin at MIT's computer-science laboratory. The core of the group is an international duo of physicists, Tommaso Toffoli, of Italy, and Norman Margolus, of Canada. They differ in the degree to which they take Fredkin's theory of physics seriously, but both agree with him that there is value in exploring the relationship between computation and physics, and they have spent much time using cellular automata to simulate physical processes. In the basement of the computer-science laboratory is the CAM—the cellular automaton machine, designed by Toffoli and Margolus partly for that purpose. Its screen has 65,536 cells, each of which can assume any of four colors and can change color sixty times a second.

The CAM is an engrossing, potentially mesmerizing machine. Its four colors—the three primaries and black—intermix rapidly and intricately enough to form subtly shifting hues of almost any gradation; pretty waves of deep blue or red ebb and flow with fine fluidity and sometimes with rhythm, playing on the edge between chaos and order.

Guided by the right rule, the CAM can do a respectable imitation of pond water rippling outward circularly in deference to a descending pebble, or of bubbles forming at the bottom of a pot of boiling water, or of a snowflake blossoming from a seed of ice: step by step, a single "ice crystal" in the center of the screen unfolds into a full-fledged flake, a six-edged sheet of ice riddled symmetrically with dark pockets of mist. (It is easy to see how a cellular automaton can capture the principles thought to govern the growth of a snowflake: regions of vapor that find themselves in the vicinity of a budding snowflake freeze—unless so nearly enveloped by ice crystals that they cannot discharge enough heat to freeze.)

These exercises are fun to watch, and they give one a sense of the cellular automaton's power, but Fredkin is not particularly interested in them. After all, a snowflake is not, at the visible level, literally a cellular automaton; an ice crystal is not a single, indivisible bit of information, like the cell that portrays it. Fredkin believes that automata will more faithfully mirror reality as they are applied to its more fundamental levels and the rules needed to model the motion of molecules, atoms, electrons, and quarks are uncovered. And he believes that at the most fundamental level (whatever that turns out to be) the automaton will describe the physical world with perfect precision, because at that level the universe is a cellular automaton, in three dimensions—a crystalline lattice of interacting logic units, each one "deciding" zillions of times per second what its state will be at the next point in time. The information thus produced, Fredkin says, is the fabric of reality, the stuff of which matter and energy are made. An electron, in Fredkin's universe, is nothing more than a pattern of information, and an orbiting electron is nothing more than that pattern moving. Indeed, even this motion is in some sense illusory: the bits of information that constitute the pattern never move, any more than football fans would change places to slide a USC Trojan four seats to the left. Each bit stays put and confines its activity to blinking on and off. "You see, I don't believe that there are objects like electrons and photons, and things which are themselves and nothing else," Fredkin says. "What I believe is that there's an information process, and the bits, when they're in certain configurations, behave like the thing we call the electron, or the hydrogen atom, or whatever."

THE READER MAY NOW HAVE A NUMBER OF questions that unless satisfactorily answered will lead to something approaching contempt for Fredkin's thinking. One such question concerns the way cellular automata chop space and time into little bits. Most conventional theories of physics reflect the intuition that reality is continuous—that one "point" in time is no such thing but, rather, flows seamlessly into the next, and that space, similarly, doesn't come in little chunks but is perfectly smooth. Fredkin's theory implies that both space and time have a graininess to them, and that the grains cannot be chopped up into smaller grains; that people and dogs and trees and oceans, at rock bottom, are more like mosaics than like paintings; and that time's essence is better captured by a digital watch than by a grandfather clock.

The obvious question is, Why do space and time seem continuous if they are not? The obvious answer is, The cubes of space and points of time are very, very small: time seems continuous in just the way that movies seem to move when in fact they are frames, and the illusion of spatial continuity is akin to the emergence of smooth shades from the finely mottled texture of a newspaper photograph.

The obvious answer, Fredkin says, is not the whole answer; the illusion of continuity is yet more deeply ingrained in our situation. Even if the ticks on the universal clock were, in some absolute sense, very slow, time would still seem continuous to us, since our perception, itself proceeding in the same ticks, would be no more finely grained than the processes being perceived. So too with spatial perception: Can eyes composed of the smallest units in existence perceive those units? Could any informational process sense its ultimate constituents? The point is that the basic units of time and space in Fredkin's reality don't just happen to be imperceptibly small. As long as the creatures doing the perceiving are in that reality, the units have to be imperceptibly small.

Though some may find this discreteness hard to comprehend, Fredkin finds a grainy reality more sensible than a smooth one. If reality is truly continuous, as most physicists now believe it is, then there must be quantities that cannot be expressed with a finite number of digits; the number representing the strength of an electromagnetic field, for example, could begin 5.23429847 and go on forever without falling into a pattern of repetition. That seems strange to Fredkin: wouldn't you eventually get to a point, around the hundredth, or thousandth, or millionth decimal place, where you had hit the strength of the field right on the nose? Indeed, wouldn't you expect that every physical quantity has an exactness about it? Well, you might and might not. But Fredkin does expect exactness, and in his universe he gets it.

Fredkin has an interesting way of expressing his insistence that all physical quantities be "rational." (A rational number is a number that can be expressed as a fraction—as a ratio of one integer to another. Expressed as a decimal, a rational number will either end, as 5/2 does in the form of 2.5, or repeat itself endlessly, as 1/7 does in the form of 0.142857142857142 . . .) He says he finds it hard to believe that a finite volume of space could contain an infinite amount of information. It is almost as if he viewed each parcel of space as having the digits describing it actually crammed into it. This seems an odd perspective, one that confuses the thing itself with the information it represents. But such an inversion between the realm of things and the realm of representation is common among those who work at the interface of computer science and physics. Contemplating the essence of information seems to affect the way you think.
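
The repetition the parenthetical describes can be demonstrated with nothing more than long division; the short Python sketch below (my own illustration, not anything from the article) prints the expansion of 1/7, notes where its digits begin to repeat, and shows 5/2 terminating.

    def decimal_expansion(numerator, denominator, max_digits=20):
        """Long division that tracks remainders; a repeated remainder means the
        digits repeat from that point on, as they must for any rational number."""
        digits, seen, rem = [], {}, numerator % denominator
        while rem and rem not in seen and len(digits) < max_digits:
            seen[rem] = len(digits)
            rem *= 10
            digits.append(str(rem // denominator))
            rem %= denominator
        head = str(numerator // denominator) + "." + "".join(digits)
        if rem in seen:
            return head + " (digits repeat from position " + str(seen[rem] + 1) + ")"
        return head

    print(decimal_expansion(1, 7))   # 0.142857 (digits repeat from position 1)
    print(decimal_expansion(5, 2))   # 2.5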

The prospect of a discrete reality, however alien to the average person, is easier to fathom than the problem of the infinite regress, which is also raised by Fredkin's theory. The problem begins with the fact that information typically has a physical basis. Writing consists of ink; speech is composed of sound waves; even the computer's ephemeral bits and bytes are grounded in configurations of electrons. If the electrons are in turn made of information, then what is the information made of?

Asking questions like this ten or twelve times is not a good way to earn Fredkin's respect. A look of exasperation passes fleetingly over his face. "What I've tried to explain is that—and I hate to do this, because physicists are always doing this in an obnoxious way—is that the question implies you're missing a very important concept." He gives it one more try, two more tries, three, and eventually some of the fog between me and his view of the universe disappears. I begin to understand that this is a theory not just of physics but of metaphysics. When you disentangle these theories—compare the physics with other theories of physics, and the metaphysics with other ideas about metaphysics—both sound less far-fetched than when jumbled together as one. And, as a bonus, Fredkin's metaphysics leads to a kind of high-tech theology—to speculation about supreme beings and the purpose of life.

III. The Perfect Thing


EDWARD FREDKIN WAS BORN IN 1934, THE LAST OF three children in a previously prosperous family. His father, Manuel, had come to Southern California from Russia shortly after the Revolution and founded a chain of radio stores that did not survive the Great Depression. The family learned economy, and Fredkin has not forgotten it. He can reach into his pocket, pull out a tissue that should have been retired weeks ago, and, with cleaning solution, make an entire airplane windshield clear. He can take even a well-written computer program, sift through it for superfluous instructions, and edit it accordingly, reducing both its size and its running time.

Manuel was by all accounts a competitive man, and he focused his competitive energies on the two boys: Edward and his older brother, Norman. Manuel routinely challenged Ed's mastery of fact, inciting sustained arguments over, say, the distance between the moon and the earth. Norman's theory is that his father, though bright, was intellectually insecure; he seemed somehow threatened by the knowledge the boys brought home from school. Manuel's mistrust of books, experts, and all other sources of received wisdom was absorbed by Ed.

So was his competitiveness. Fredkin always considered himself the smartest kid in his class. He used to place bets with other students on test scores. This habit did not endear him to his peers, and he seems in general to have lacked the prerequisites of popularity. His sense of humor was unusual. His interests were not widely shared. His physique was not a force to be reckoned with. He recalls, "When I was young—you know, sixth, seventh grade—two kids would be choosing sides for a game of something. It could be touch football. They'd choose everybody but me, and then there'd be a fight as to whether one side would have to take me. One side would say, 'We have eight and you have seven,' and they'd say, 'That's okay.' They'd be willing to play with seven." Though exhaustive in documenting his social alienation, Fredkin concedes that he was not the only unpopular student in school. "There was a socially active subgroup, probably not a majority, maybe forty percent, who were very socially active. They went out on dates. They went to parties. They did this and they did that. The others were left out. And I was in this big left-out group. But I was in the pole position. I was really left out."

Of the hours Fredkin spent alone, a good many were devoted to courting disaster in the name of science. By wiring together scores of large, 45-volt batteries, he collected enough electricity to conjure up vivid, erratic arcs. By scraping the heads off matches and buying sulfur, saltpeter, and charcoal, he acquired a good working knowledge of pyrotechnics. He built small, minimally destructive but visually impressive bombs, and fashioned rockets out of cardboard tubing and aluminum foil. But more than bombs and rockets, it was mechanisms that captured Fredkin's attention. From an early age he was viscerally attracted to Big Ben alarm clocks, which he methodically took apart and put back together. He also picked up his father's facility with radios and household appliances. But whereas Manuel seemed to fix things without understanding the underlying science, his son was curious about first principles.

So while other kids were playing baseball or chasing girls, Ed Fredkin was taking things apart and putting them back together. Children were aloof, even cruel, but a broken clock always responded gratefully to a healing hand. "I always got along well with machines," he remembers.

After graduation from high school, in 1952, Fredkin headed for the California Institute of Technology with hopes of finding a more appreciative social environment. But students at Caltech turned out to bear a disturbing resemblance to people he had observed elsewhere. "They were smart like me," he recalls, "but they had the full spectrum and distribution of social development." Once again Fredkin found his weekends unencumbered by parties. And once again he didn't spend his free time studying. Indeed, one of the few lessons he learned is that college is different from high school: in college if you don't study, you flunk out. This he did a few months into his sophomore year. Then, following in his brother's footsteps, he joined the Air Force and learned to fly fighter planes.

IT WAS THE AIR FORCE THAT FINALLY BROUGHT Fredkin face to face with a computer. He was working for the Air Proving Ground Command, whose function was to ensure that everything from combat boots to bombers was of top quality, when the unit was given the job of testing a computerized air-defense system known as SAGE (for "semi-automatic ground environment"). To test SAGE the Air Force needed men who knew something about computers, and so in 1956 a group from the Air Proving Ground Command, including Fredkin, was sent to MIT's Lincoln Laboratory and enrolled in computer-science courses. "Everything made instant sense to me," Fredkin remembers. "I just soaked it up like a sponge."

SAGE, when ready for testing, turned out to be even more complex than anticipated—too complex to be tested by anyone but genuine experts—and the job had to be contracted out. This development, combined with bureaucratic disorder, meant that Fredkin was now a man without a function, a sort of visiting scholar at Lincoln Laboratory. "For a period of time, probably over a year, no one ever came to tell me to do anything. Well, meanwhile, down the hall they installed the latest, most modern computer in the world—IBM's biggest, most powerful computer. So I just went down and started to program it." The computer was an XD-1. It was slower and less capacious than an Apple Macintosh and was roughly the size of a large house.

When Fredkin talks about his year alone with this dinosaur, you half expect to hear violins start playing in the background. "My whole way of life was just waiting for the computer to come along," he says. "The computer was in essence just the perfect thing." It was in some respects preferable to every other conglomeration of matter he had encountered—more sophisticated and flexible than other inorganic machines, and more logical than organic ones. "See, when I write a program, if I write it correctly, it will work. If I'm dealing with a person, and I tell him something, and I tell him correctly, it may or may not work."

The XD-1, in short, was an intelligence with which Fredkin could empathize. It was the ultimate embodiment of mechanical predictability, the refuge to which as a child he had retreated from the incomprehensibly hostile world of humanity. If the universe is indeed a computer, then it could be a friendly place after all.

During the several years after his arrival at Lincoln Lab, as Fredkin was joining the first generation of hackers, he was also immersing himself in physics—finally learning, through self-instruction, the lessons he had missed by dropping out of Caltech. It is this two-track education, Fredkin says, that led him to the theory of digital physics. For a time "there was no one in the world with the same interest in physics who had the intimate experience with computers that I did. I honestly think that there was a period of many years when I was in a unique position."

The uniqueness lay not only in the fusion of physics and computer science but also in the peculiar composition of Fredkin's physics curriculum. Many physicists acquire as children the sort of kinship with mechanism that he still feels, but in most cases it is later diluted by formal education; quantum mechanics, the prevailing paradigm in contemporary physics, seems to imply that at its core, reality, has truly random elements and is thus inherently unpredictable. But Fredkin escaped the usual indoctrination. To this day he maintains, as did Albert Einstein, that the common interpretation of quantum mechanics is mistaken—that any seeming indeterminacy in the subatomic world reflects only our ignorance of the determining principles, not their absence. This is a critical belief, for if he is wrong and the universe is not ultimately deterministic, then it cannot be governed by a process as exacting as computation.

After leaving the Air Force, Fredkin went to work for Bolt Beranek and Newman, a consulting firm in the Boston area, now known for its work in artificial intelligence and computer networking. His supervisor at BBN, J. C. R. Licklider, says of his first encounter with Fredkin, "It was obvious to me he was very unusual and probably a genius, and the more I came to know him, the more I came to think that that was not too elevated a description." Fredkin "worked almost continuously," Licklider recalls. "It was hard to get him to go to sleep sometimes." A pattern emerged. Licklider would give Fredkin a problem to work on—say, figuring out how to get a computer to search a text in its memory for an only partially specified sequence of letters. Fredkin would retreat to his office and return twenty or thirty hours later with the solution—or, rather, a solution; he often came back with the answer to a question different from the one that Licklider had asked. Fredkin's focus was intense but undisciplined, and it tended to stray from a problem as soon as he was confident that he understood the solution in principle.

This intellectual wanderlust is one of Fredkin's most enduring and exasperating traits. Just about everyone who knows him has a way of describing it: "He doesn't really work. He sort of fiddles." "Very often he has these great ideas and then does not have the discipline to cultivate the idea." "There is a gap between the quality of the original ideas and what follows. There's an imbalance there." Fredkin is aware of his reputation. In self-parody he once brought a cartoon to a friend's attention: A beaver and another forest animal are contemplating an immense man-made dam. The beaver is saying something like, "No, I didn't actually build it. But it's based on an idea of mine."

Among the ideas that congealed in Fredkin's mind during his stay at BBN is the one that gave him his current reputation as (depending on whom you talk to) a thinker of great depth and rare insight, a source of interesting but reckless speculation, or a crackpot.

IV. Tick by Tick, Dot by Dot


THE IDEA THAT THE UNIVERSE IS A COMPUTER WAS inspired partly by the idea of the universal computer. Universal computer, a term that can accurately be applied to everything from an IBM PC to a Cray supercomputer, has a technical, rigorous definition, but here its upshot will do: a universal computer can simulate any process that can be precisely described and perform any calculation that is performable.

This broad power is ultimately grounded in something very simple: the algorithm. An algorithm is a fixed procedure for converting input into output, for taking one body of information and turning it into another. For example, a computer program that takes any number it is given, squares it, and subtracts three is an algorithm. This isn't a very powerful algorithm; by taking a 3 and turning it into a 6, it hasn't created much new information. But algorithms become more powerful with recursion. A recursive algorithm is an algorithm whose output is fed back into it as input. Thus the algorithm that turned 3 into 6, if operating recursively, would continue, turning 6 into 33, then 33 into 1,086, then 1,086 into 1,179,393, and so on.
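
A few lines of Python make that example concrete; the printed values match the ones in the paragraph above.

    def square_minus_three(x):
        """The toy algorithm from the text: square the input, then subtract three."""
        return x * x - 3

    value = 3
    for _ in range(4):              # feed each output back in as the next input
        value = square_minus_three(value)
        print(value)                # 6, 33, 1086, 1179393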

The power of recursive algorithms is especially apparent in the simulation of physical processes. While Fredkin was at BBN, he would use the company's Digital Equipment Corporation PDP-1 computer to simulate, say, two particles, one that was positively charged and one that was negatively charged, orbiting each other in accordance with the laws of electromagnetism. It was a pretty sight: two phosphor dots dancing, each etching a green trail that faded into yellow and then into darkness. But for Fredkin the attraction lay less in this elegant image than in its underlying logic. The program he had written took the particles' velocities and positions at one point in time, computed those variables for the next point in time, and then fed the new variables back into the algorithm to get newer variables—and so on and so on, thousands of times a second. The several steps in this algorithm, Fredkin recalls, were "very simple and very beautiful." It was in these orbiting phosphor dots that Fredkin first saw the appeal of his kind of universe—a universe that proceeds tick by tick and dot by dot, a universe in which complexity boils down to rules of elementary simplicity.
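
The original PDP-1 program is not reproduced in the article, so the following Python fragment is only a schematic reconstruction of the kind of loop described: take the particles' positions and velocities at one instant, apply the inverse-square attraction between the opposite charges, and feed the new state back in, tick after tick. The constants, time step, and starting conditions are arbitrary choices for the sketch.

    import math

    def step(p1, p2, v1, v2, dt=0.001, k=1.0):
        """Advance two opposite unit charges one tick: compute the attractive force,
        update the velocities, then the positions, and return the new state."""
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        r = math.hypot(dx, dy)
        f = k / (r * r)                      # inverse-square attraction
        fx, fy = f * dx / r, f * dy / r      # force on particle 1, pointing at particle 2
        v1 = (v1[0] + fx * dt, v1[1] + fy * dt)
        v2 = (v2[0] - fx * dt, v2[1] - fy * dt)
        p1 = (p1[0] + v1[0] * dt, p1[1] + v1[1] * dt)
        p2 = (p2[0] + v2[0] * dt, p2[1] + v2[1] * dt)
        return p1, p2, v1, v2

    # Seed the state, then run the recursion thousands of times, as the text describes.
    state = ((-0.5, 0.0), (0.5, 0.0), (0.0, 0.7), (0.0, -0.7))
    for _ in range(10_000):
        state = step(*state)
    print(state[0], state[1])   # where the two "phosphor dots" have moved to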

Fredkin's discovery of cellular automata a few years later permitted him further to indulge his taste for economy of information and strengthened his bond with the recursive algorithm. The patterns of automata are often all but impossible to describe with calculus yet easy to express algorithmically. Nothing is so striking about a good cellular automaton as the contrast between the simplicity of the underlying algorithm and the richness of its result. We have all felt the attraction of such contrasts. It accompanies the comprehension of any process, conceptual or physical, by which simplicity accommodates complexity. Simple solutions to complex problems, for example, make us feel good. The social engineer who designs uncomplicated legislation that will cure numerous social ills, the architect who eliminates several nagging design flaws by moving a single closet, the doctor who traces gastro-intestinal, cardiovascular, and respiratory ailments to a single, correctable cause—all feel the same kind of visceral, aesthetic satisfaction that must have filled the first caveman who literally killed two birds with one stone.

For scientists, the moment of discovery does not simply reinforce the search for knowledge; it inspires further research. Indeed, it directs research. The unifying principle, upon its apprehension, can elicit such devotion that thereafter the scientist looks everywhere for manifestations of it. It was the scientist in Fredkin who, upon seeing how a simple programming rule could yield immense complexity, got excited about looking at physics in a new way and stayed excited. He spent much of the next three decades fleshing out his intuition.

FREDKIN'S RESIGNATION FROM BOLT BERANEK AND Newman did not surprise Licklider. "I could tell that Ed was disappointed in the scope of projects undertaken at BBN. He would see them on a grander scale. I would try to argue—hey, let's cut our teeth on this and then move on to bigger things." Fredkin wasn't biting. "He came in one day and said, 'Gosh, Lick, I really love working here, but I'm going to have to leave. I've been thinking about my plans for the future, and I want to make'—I don't remember how many millions of dollars, but it shook me—'and I want to do it in about four years.' And he did amass however many millions he said he would amass in the time he predicted, which impressed me considerably."

In 1962 Fredkin founded Information International Incorporated—an impressive name for a company with no assets and no clients, whose sole employee had never graduated from college. Triple-I, as the company came to be called, was placed on the road to riches by an odd job that Fredkin performed for the Woods Hole Oceanographic Institute. One of Woods Hole's experiments had run into a complication: underwater instruments had faithfully recorded the changing direction and strength of deep ocean currents, but the information, encoded in tiny dots of light on sixteen-millimeter film, was inaccessible to the computers that were supposed to analyze it. Fredkin rented a sixteen-millimeter movie projector and with a surprisingly simple modification turned it into a machine for translating those dots into terms the computer could accept.

This contraption pleased the people at Woods Hole and led to a contract with Lincoln Laboratory. Lincoln was still doing work for the Air Force, and the Air Force wanted its computers to analyze radar information that, like the Woods Hole data, consisted of patterns of light on film. A makeshift information-conversion machine earned Triple-I $10,000, and within a year the Air Force hired Fredkin to build equipment devoted to the task. The job paid $350,000—the equivalent today of around $1 million. RCA and other companies, it turned out, also needed to turn visual patterns into digital data, and "programmable film readers" that sold for $500,000 apiece became Triple-I's stock-in-trade. In 1968 Triple-I went public and Fredkin was suddenly a millionaire. Gradually he cashed in his chips. First he bought a ranch in Colorado. Then one day he was thumbing through the classifieds and saw that an island in the Caribbean was for sale. He bought it.

In the early 1960s, at the suggestion of the Defense Department's Advanced Research Projects Agency, MIT set up what would become its Laboratory for Computer Science. It was then called Project MAC, an acronym that stood for both "machine-aided cognition" and "multiaccess computer." Fredkin had connections with the project from the beginning. Licklider, who had left BBN for the Pentagon shortly after Fredkin's departure, was influential in earmarking federal money for MAC. Marvin Minsky—who would later serve on Triple-I's board, and by the end of 1967 owned some of its stock—was centrally involved in MAC's inception. Fredkin served on Project MAC's steering committee, and in 1966 he began discussing with Minsky the possibility of becoming a visiting professor at MIT. The idea of bringing a college dropout onto the faculty, Minsky recalls, was not as outlandish as it now sounds; computer science had become an academic discipline so suddenly that many of its leading lights possessed meager formal credentials. In 1968, after Licklider had come to MIT and become the director of Project MAC, he and Minsky convinced Louis Smullin, the head of the electrical-engineering department, that Fredkin was worth the gamble. "We were a growing department and we wanted exciting people," Smullin says. "And Ed was exciting."

Fredkin had taught for barely a year before he became a full professor, and not much later, in 1971, he was appointed the head of Project MAC—a position that was also short-lived, for in the fall of 1974 he began a sabbatical at the California Institute of Technology as a Fairchild Distinguished Scholar. He went to Caltech under the sponsorship of Richard Feynman. The deal, Fredkin recalls, was that he would teach Feynman more about computer science, and Feynman would teach him more about physics. While there, Fredkin developed an idea that has slowly come to be seen as a profound contribution to both disciplines. The idea is also—in Fredkin's mind, at least—corroborating evidence for his theory of digital physics. To put its upshot in brief and therefore obscure terms, Fredkin found that computation is not inherently irreversible and thus it is possible, in principle, to build a computer that doesn't use up energy and doesn't give off heat.

All computers on the market are irreversible. That is, their history of information processing cannot be inferred from their present informational state; you cannot look at the data they contain and figure out how they arrived at it. By the time the average computer tells you that 2 plus 2 equals 4, it has forgotten the question; for all it knows, you asked what 1 plus 3 is. The reason for this ignorance is that computers discard information once it is no longer needed, so that they won't get clogged up.

In 1961 Rolf Landauer, of IBM's Thomas J. Watson Research Center, established that this destruction of information is the only part of the computational process that unavoidably involves the dissipation of energy. It takes effort, in other words, for a computer to forget things but not necessarily for it to perform other functions. Thus the question of whether you can, in principle, build a universal computer that doesn't dissipate energy in the form of heat is synonymous with the question of whether you can design a logically reversible universal computer, one whose computational history can always be unearthed. Landauer, along with just about everyone else, thought such a computer impossible; all past computer architectures had implied the regular discarding of information, and it was widely believed that this irreversibility was intrinsic to computation. But while at Caltech, Fredkin did one of his favorite things—he showed that everyone had been wrong all along.
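To see the distinction in miniature, here is a small sketch in Python (my own illustration, not anything from the article or the period): an ordinary AND gate throws information away, while a controlled-swap gate, the reversible gate usually credited to Fredkin though it is not described here, is its own inverse, so its history can always be recovered.

# Illustration only: an AND gate destroys information, while a controlled swap,
# the reversible gate usually credited to Fredkin, preserves it and is its own inverse.

def and_gate(a, b):
    # Irreversible: an output of 0 could have come from (0,0), (0,1) or (1,0).
    return a & b

def controlled_swap(c, a, b):
    # Reversible: if the control bit c is 1, swap a and b; otherwise pass through.
    return (c, b, a) if c else (c, a, b)

for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 0)]:
    once = controlled_swap(*bits)
    assert controlled_swap(*once) == bits   # applying the gate twice recovers the input
    print(bits, "->", once)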

Of the two kinds of reversible computers invented by Fredkin, the better known is called the billiard-ball computer. If it were ever actually built, it would consist of billiard balls ricocheting around in a labyrinth of "mirrors," bouncing off the mirrors at 45-degree angles, periodically banging into other moving balls at 90-degree angles, and occasionally exiting through doorways that would also permit new balls to enter. To extract data from the machine, you would superimpose a grid over it, and the presence or absence of a ball in a given square at a given point in time would constitute information. Such a machine, Fredkin showed, would qualify as a universal computer; it could do anything that normal computers do. But unlike other computers, it would be perfectly reversible; to recover its history, all you would have to do is stop it and run it backward. Charles H. Bennett, of IBM's Thomas J. Watson Research Center, independently arrived at a different proof that reversible computation is possible, though he considers the billiard-ball computer to be in some respects a more elegant solution to the problem than his own.

The billiard-ball computer will never be built, because it is a platonic device, existing only in a world of ideals. The balls are perfectly round and hard, and the table perfectly smooth and hard. There is no friction between the two, and no energy is lost when balls collide. Still, although these ideals are unreachable, they could be approached eternally through technological refinement, and the heat produced by friction and collision could thus be reduced without limit. Since no additional heat would be created by information loss, there would be no necessary minimum on the total heat emitted by the computer. "The cleverer you are, the less heat it will generate," Fredkin says.

The connection Fredkin sees between the billiard-ball computer and digital physics exemplifies the odd assortment of evidence he has gathered in support of his theory. Molecules and atoms and their constituents, he notes, move around in theoretically reversible fashion, like billiard balls (although it is not humanly possible, of course, actually to take stock of the physical state of the universe, or even one small corner of it, and reconstruct history by tracing the motion of microscopic particles backward). Well, he asks, given the theoretical reversibility of physical reality, doesn't the theoretical feasibility of a reversible computer lend credence to the claim that computation is reality's basis?

No and yes. Strictly speaking, Fredkin's theory doesn't demand reversible computation. It is conceivable that an irreversible process at the very core of reality could give rise to the reversible behavior of molecules, atoms, electrons, and the rest. After all, irreversible computers (that is, all computers on the market) can simulate reversible billiard balls. But they do so in a convoluted way, Fredkin says, and the connection between an irreversible substratum and a reversible stratum would, similarly, be tortuous—or, as he puts it, "aesthetically obnoxious." Fredkin prefers to think that the cellular automaton underlying reversible reality does its work gracefully.

Consider, for example, a variant of the billiard-ball computer invented by Norman Margolus, the Canadian in MIT's information-mechanics group. Margolus showed how a two-state cellular automaton that was itself reversible could simulate the billiard-ball computer using only a simple rule involving a small neighborhood. This cellular automaton in action looks like a jazzed-up version of the original video game, Pong. It is an overhead view of endlessly energetic balls ricocheting off clusters of mirrors and each other. It is proof that a very simple binary cellular automaton can give rise to the seemingly more complex behavior of microscopic particles bouncing off each other. And, as a kind of bonus, these particular particles themselves amount to a computer. Though Margolus discovered this powerful cellular-automaton rule, it was Fredkin who had first concluded that it must exist and persuaded Margolus to look for it. "He has an intuitive idea of how things should be," Margolus says. "And often, if he can't come up with a rational argument to convince you that it should be so, he'll sort of transfer his intuition to you."
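For readers who want to see the mechanics, the sketch below illustrates the general scheme such rules rely on, usually called the Margolus neighborhood: the grid is partitioned into 2x2 blocks, each block is permuted by a fixed bijection, and the partition offset alternates between steps. The quarter-turn rule used here is a stand-in of my own, not Margolus's actual billiard-ball rule, but it shows why any automaton built this way can be run backward exactly.

import numpy as np

def step(grid, offset, forward=True):
    # Align the 2x2 block partition that starts at (offset, offset) to (0, 0).
    g = np.roll(grid, (-offset, -offset), axis=(0, 1))
    n = g.shape[0]
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            block = g[i:i+2, j:j+2].copy()
            # Quarter-turn each block; any bijection on block states would do.
            g[i:i+2, j:j+2] = np.rot90(block, 1 if forward else -1)
    return np.roll(g, (offset, offset), axis=(0, 1))

rng = np.random.default_rng(0)
start = rng.integers(0, 2, size=(8, 8))
grid = start.copy()
for t in range(10):
    grid = step(grid, offset=t % 2)                  # alternate the partition offset
for t in reversed(range(10)):
    grid = step(grid, offset=t % 2, forward=False)   # undo each step in reverse order
print(np.array_equal(grid, start))                   # True: the history is recoverable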

That, really, is what Fredkin is trying to do when he argues that the universe is a computer. He cannot give you a single line of reasoning that leads inexorably, or even very plausibly, to this conclusion. He can tell you about the reversible computer, about Margolus's cellular automaton, about the many physical quantities, like light, that were once thought to be continuous but are now considered discrete, and so on. The evidence consists of many little things—so many, and so little, that in the end he is forced to convey his truth by simile. "I find the supporting evidence for my beliefs in ten thousand different places," he says. "And to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, Where is this animal? I say, Well, he was here, he's about this big, this that and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there." The story changes upon retelling. One day it's Bigfoot that Fredkin's trailing. Another day it's a duck: feathers are everywhere, and the tracks are webbed. Whatever the animal, the moral of the story remains the same: "What I see is so compelling that it can't be a creature of my imagination."

V. Deus ex Machina


THERE WAS SOMETHING BOTHERSOME ABOUT ISAAC Newton's theory of gravitation. The idea that the sun exerts a pull on the earth, and vice versa, sounded vaguely supernatural and, in any event, was hard to explain. How, after all, could such "action at a distance" be realized? Did the earth look at the sun, estimate the distance, and consult the law of gravitation to determine where it should move and how fast? Newton sidestepped such questions. He fudged with the Latin phrase si esset: two bodies, he wrote, behave as if impelled by a force inversely proportional to the square of their distance. Ever since Newton, physics has followed his example. Its "force fields" are, strictly speaking, metaphorical, and its laws purely descriptive. Physicists make no attempt to explain why things obey the law of electromagnetism or of gravitation. The law is the law, and that's all there is to it.

Fredkin refuses to accept authority so blindly. He posits not only laws but also a law-enforcement agency: a computer. Somewhere out there, he believes, is a machinelike thing that actually keeps our individual bits of space abiding by the rule of the universal cellular automaton. With this belief Fredkin crosses the line between physics and metaphysics, between scientific hypothesis and cosmic speculation. If Fredkin had Newton's knack for public relations, if he stopped at saying that the universe operates as if it were a computer, he could improve his stature among physicists while preserving the essence of his theory—the idea that the dynamics of physical reality will ultimately be better captured by a single recursive algorithm than by the mathematics of conventional physics, and that the continuity of time and space implicit in traditional mathematics is illusory.

Actually, some estimable physicists have lately been saying things not wholly unlike this stripped-down version of the theory. T. D. Lee, a Nobel laureate at Columbia University, has written at length about the possibility that time is discrete. And in 1984 Scientific American, not exactly a soapbox for cranks, published an article in which Stephen Wolfram, then of Princeton's Institute for Advanced Study, wrote, "Scientific laws are now being viewed as algorithms. . . . Physical systems are viewed as computational systems, processing information much the way computers do." He concluded, "A new paradigm has been born."

The line between responsible scientific speculation and off-the-wall metaphysical pronouncement was nicely illustrated by an article in which Tommaso Toffoli, the Italian in MIT's information-mechanics group, stayed barely on the responsible side of it. Published in the journal Physica D, the article was called "Cellular automata as an alternative to (rather than an approximation of) differential equations in modeling physics." Toffoli's thesis captured the core of Fredkin's theory yet had a perfectly reasonable ring to it. He simply suggested that the historical reliance of physicists on calculus may have been due not just to its merits but also to the fact that before the computer, alternative languages of description were not practical.

Why does Fredkin refuse to do the expedient thing—leave out the part about the universe actually being a computer? One reason is that he considers reprehensible the failure of Newton, and of all physicists since, to back up their descriptions of nature with explanations. He is amazed to find "perfectly rational scientists" believing in "a form of mysticism: that things just happen because they happen." The best physics, Fredkin seems to believe, is metaphysics.

The trouble with metaphysics is its endless depth. For every question that is answered, at least one other is raised, and it is not always clear that, on balance, any progress has been made. For example, where is this computer that Fredkin keeps talking about? Is it in this universe, residing along some fifth or sixth dimension that renders it invisible? Is it in some meta-universe? The answer is the latter, apparently, and to understand why, we need to return to the problem of the infinite regress, a problem that Rolf Landauer, among others, has cited with respect to Fredkin's theory. Landauer illustrates the problem by telling the old turtle story. A professor has just finished lecturing at some august university about the origin and structure of the universe, and an old woman in tennis shoes walks up to the lectern. "Excuse me, sir, but you've got it all wrong," she says. "The truth is that the universe is sitting on the back of a huge turtle." The professor decides to humor her. "Oh, really?" he asks. "Well, tell me, what is the turtle standing on?" The lady has a ready reply: "Oh, it's standing on another turtle." The professor asks, "And what is that turtle standing on?" Without hesitation, she says, "Another turtle." The professor, still game, repeats his question. A look of impatience comes across the woman's face. She holds up her hand, stopping him in mid-sentence. "Save your breath, sonny," she says. "It's turtles all the way down."

The infinite-regress problem afflicts Fredkin's theory in two ways, one of which we have already encountered: if matter is made of information, what is the information made of? And even if one concedes that it is no more ludicrous for information to be the most fundamental stuff than for matter or energy to be the most fundamental stuff, what about the computer itself? What is it made of? What energizes it? Who, or what, runs it, or set it in motion to begin with?

WHEN FREDKIN IS DISCUSSING THE PROBLEM OF THE infinite regress, his logic seems variously cryptic, evasive, and appealing. At one point he says, "For everything in the world where you wonder, 'What is it made out of?' the only thing I know of where the question doesn't have to be answered with anything else is for information." This puzzles me. Thousands of words later I am still puzzled, and I press for clarification. He talks some more. What he means, as near as I can tell, is what follows.

First of all, it doesn't matter what the information is made of, or what kind of computer produces it. The computer could be of the conventional electronic sort, or it could be a hydraulic machine made of gargantuan sewage pipes and manhole covers, or it could be something we can't even imagine. What's the difference? Who cares what the information consists of? So long as the cellular automaton's rule is the same in each case, the patterns of information will be the same, and so will we, because the structure of our world depends on pattern, not on the pattern's substrate; a carbon atom, according to Fredkin, is a certain configuration of bits, not a certain kind of bits.

Besides, we can never know what the information is made of or what kind of machine is processing it. This point is reminiscent of childhood conversations that Fredkin remembers having with his sister, Joan, about the possibility that they were part of a dream God was having. "Say God is in a room and on his table he has some cookies and tea," Fredkin says. "And he's dreaming this whole universe up. Well, we can't reach out and get his cookies. They're not in our universe. See, our universe has bounds. There are some things in it and some things not." The computer is not; hardware is beyond the grasp of its software. Imagine a vast computer program that contained bodies of information as complex as people, motivated by bodies of information as complex as ideas. These "people" would have no way of figuring out what kind of computer they owed their existence to, because everything they said, and everything they did—including formulating metaphysical hypotheses—would depend entirely on the programming rules and the original input. As long as these didn't change, the same metaphysical conclusions would be reached in an old XD-1 as in a Kaypro 2.

This idea—that sentient beings could be constitutionally numb to the texture of reality—has fascinated a number of people, including, lately, computer scientists. One source of the fascination is the fact that any universal computer can simulate another universal computer, and the simulated computer can, because it is universal, do the same thing. So it is possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious of those containers. To anyone who has lived intimately with, and thought deeply about, computers, says Charles Bennett, of IBM's Watson Lab, this notion is very attractive. "And if you're too attracted to it, you're likely to part company with the physicists." Physicists, Bennett says, find heretical the notion that anything physical is impervious to experiment, removed from the reach of science.
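A toy illustration of the Russian-dolls point, with everything in it invented for the example: a few-opcode stack machine written in Python, which itself runs on the CPython virtual machine, which in turn runs on a physical processor. The little program at the innermost level has no way of telling how many layers lie beneath it.

def run(program):
    # A few-opcode stack machine; the instruction set is invented for the example.
    stack, pc = [], 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "print":
            print(stack[-1])
        pc += 1
    return stack

prog = [("push", 2), ("push", 2), ("add",), ("print",)]
run(prog)   # prints 4, oblivious to however many layers of machinery lie beneath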

Fredkin's belief in the limits of scientific knowledge may sound like evidence of humility, but in the end it permits great ambition; it helps him go after some of the grandest philosophical questions around. For example, there is a paradox that crops up whenever people think about how the universe came to be. On the one hand, it must have had a beginning. After all, things usually do. Besides, the cosmological evidence suggests a beginning: the big bang. Yet science insists that it is impossible for something to come from nothing; the laws of physics forbid the amount of energy and mass in the universe to change. So how could there have been a time when there was no universe, and thus no mass or energy?

Fredkin escapes from this paradox without breaking a sweat. Granted, he says, the laws of our universe don't permit something to come from nothing. But he can imagine laws that would permit such a thing; in fact, he can imagine algorithmic laws that would permit such a thing. The conservation of mass and energy is a consequence of our cellular automaton's rules, not a consequence of all possible rules. Perhaps a different cellular automaton governed the creation of our cellular automaton—just as the rules for loading software are different from the rules running the program once it has been loaded.

What's funny is how hard it is to doubt Fredkin when with such assurance he makes definitive statements about the creation of the universe—or when, for that matter, he looks you in the eye and tells you the universe is a computer. Partly this is because, given the magnitude and intrinsic intractability of the questions he is addressing, his answers aren't all that bad. As ideas about the foundations of physics go, his are not completely out of the ball park; as metaphysical and cosmogonic speculation goes, his isn't beyond the pale.

But there's more to it than that. Fredkin is, in his own odd way, a rhetorician of great skill. He talks softly, even coolly, but with a low-key power, a quiet and relentless confidence, a kind of high-tech fervor. And there is something disarming about his self-awareness. He's not one of these people who say crazy things without having so much as a clue that you're sitting there thinking what crazy things they are. He is acutely conscious of his reputation; he knows that some scientists are reluctant to invite him to conferences for fear that he'll say embarrassing things. But he is not fazed by their doubts. "You know, I'm a reasonably smart person. I'm not the smartest person in the world, but I'm pretty smart—and I know that what I'm involved in makes perfect sense. A lot of people build up what might be called self-delusional systems, where they have this whole system that makes perfect sense to them, but no one else ever understands it or buys it. I don't think that's a major factor here, though others might disagree." It's hard to disagree, when he so forthrightly offers you the chance.

Still, as he gets further from physics, and more deeply into philosophy, he begins to try one's trust. For example, having tackled the question of what sort of process could generate a universe in which spontaneous generation is impossible, he aims immediately for bigger game: Why was the universe created? Why is there something here instead of nothing?

WHEN THIS SUBJECT COMES UP, WE ARE SITTING IN the Fredkins' villa. The living area has pale rock walls, shiny-clean floors made of large white ceramic tiles, and built-in bookcases made of blond wood. There is lots of air—the ceiling slopes up in the middle to at least twenty feet—and the air keeps moving; some walls consist almost entirely of wooden shutters that, when open, let the sea breeze pass as fast as it will. I am glad of this. My skin, after three days on Fredkin's island, is hot, and the air, though heavy, is cool. The sun is going down.

Fredkin, sitting on a white sofa, is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold.
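The contrast can be made concrete with a small sketch (the falling-body formula and Wolfram's rule 110 are my examples, not ones named here): an analytic law lets you jump straight to any future moment, while the cellular automaton, as far as anyone knows, must be stepped through every intermediate generation.

# Analytic: a falling body obeys d = 0.5 * g * t**2, so jump straight to t = 100
# without visiting t = 1 .. 99.
g = 9.8
print(0.5 * g * 100**2)

# Computational: an elementary cellular automaton (Wolfram's rule 110).  To know
# generation 100 you must, as far as anyone knows, compute the 99 generations in between.
def rule110_step(cells):
    n = len(cells)
    return [(0b01101110 >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40 + [1] + [0] * 40
for _ in range(100):        # no known shortcut past these intermediate states
    cells = rule110_step(cells)
print("".join("#" if c else "." for c in cells))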

This indeterminacy is very suggestive. It suggests, first of all, why so many "chaotic" phenomena, like smoke rising from a cigarette, are so difficult to predict using conventional mathematics. (In fact, some scientists have taken to modeling chaotic systems with cellular automata.) To Fredkin, it also suggests that even if human behavior is entirely determined, entirely inevitable, it may be unpredictable; there is room for "pseudo free will" in a completely mechanistic universe. But on this particular evening Fredkin is interested mainly in cosmogony, in the implications of this indeterminacy for the big question: Why does this giant computer of a universe exist?

It's simple, Fredkin explains: "The reason is, there is no way to know the answer to some question any faster than what's going on."

Aware that he may have said something enigmatic, Fredkin elaborates. Suppose, he says, that there is an all-powerful God. "And he's thinking of creating this universe. He's going to spend seven days on the job—this is totally allegorical—or six days on the job. Okay, now, if he's as all-powerful as you might imagine, he can say to himself, 'Wait a minute, why waste the time? I can create the whole thing, or I can just think about it for a minute and just realize what's going to happen so that I don't have to bother.' Now, ordinary physics says, Well, yeah, you got an all-powerful God, he can probably do that. What I can say is—this is very interesting—I can say I don't care how powerful God is; he cannot know the answer to the question any faster than doing it. Now, he can have various ways of doing it, but he has to do every Goddamn single step with every bit or he won't get the right answer. There's no shortcut."

Around sundown on Fredkin's island all kinds of insects start chirping or buzzing or whirring. Meanwhile, the wind chimes hanging just outside the back door are tinkling with methodical randomness. All this music is eerie and vaguely mystical. And so, increasingly, is the conversation. It is one of those moments when the context you've constructed falls apart, and gives way to a new, considerably stranger one. The old context in this case was that Fredkin is an iconoclastic thinker who believes that space and time are discrete, that the laws of the universe are algorithmic, and that the universe works according to the same principles as a computer (he uses this very phrasing in his most circumspect moments). The new context is that Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker calculate pi to nine jillion decimal places.

So, I say, you're arguing that the reason we're here is that some being wanted to theorize about reality, and the only way he could test his theories was to create reality? "No, you see, my explanation is much more abstract. I don't imagine there is a being or anything. I'm just using that to talk to you about it. What I'm saying is that there is no way to know what the future is any faster than running this [the universe] to get to that [the future]. Therefore, what I'm assuming is that there is a question and there is an answer, okay? I don't make any assumptions about who has the question, who wants the answer, anything."

But the more we talk, the closer Fredkin comes to the religious undercurrents he's trying to avoid. "Every astrophysical phenomenon that's going on is always assumed to be just accident," he says. "To me, this is a fairly arrogant position, in that intelligence—and computation, which includes intelligence, in my view—is a much more universal thing than people think. It's hard for me to believe that everything out there is just an accident." This sounds awfully like a position that Pope John Paul II or Billy Graham would take, and Fredkin is at pains to clarify his position: "I guess what I'm saying is—I don't have any religious belief. I don't believe that there is a God. I don't believe in Christianity or Judaism or anything like that, okay? I'm not an atheist, I'm not an agnostic, I'm just in a simple state. I don't know what there is or might be. But what I can say is that it seems likely to me that this particular universe we have is a consequence of something I would call intelligent." Does he mean that there's something out there that wanted to get the answer to a question? "Yeah." Something that set up the universe to see what would happen? "In some way, yes."

VI. The Language Barrier


IN 1974, UPON RETURNING TO MIT FROM CALTECH, Fredkin was primed to revolutionize science. Having done the broad conceptual work (concluding that the universe is a computer), he would enlist the aid of others in taking care of the details—translating the differential equations of physics into algorithms, experimenting with cellular-automaton rules and selecting the most elegant, and, eventually, discovering The Rule, the single law that governs every bit of space and accounts for everything. "He figured that all he needed was some people who knew physics, and that it would all be easy," Margolus says.

One early obstacle was Fredkin's reputation. He says, "I would find a brilliant student; he'd get turned on to this stuff and start to work on it. And then he would come to me and say, 'I'm going to work on something else.' And I would say, 'Why?' And I had a few very honest ones, and they would say, 'Well, I've been talking to my friends about this and they say I'm totally crazy to work on it. It'll ruin my career. I'll be tainted forever.'" Such fears were not entirely unfounded. Fredkin is one of those people who arouse either affection, admiration, and respect, or dislike and suspicion. The latter reaction has come from a number of professors at MIT, particularly those who put a premium on formal credentials, proper academic conduct, and not sounding like a crackpot. Fredkin was never oblivious of the complaints that his work wasn't "worthy of MIT," nor of the movements, periodically afoot, to sever, or at least weaken, his ties to the university. Neither were his graduate students.

Fredkin's critics finally got their way. In the early 1980s, while he was serving briefly as the president of Boston's CBS-TV affiliate, someone noticed that he wasn't spending much time around MIT and pointed to a faculty rule limiting outside professional activities. Fredkin was finding MIT "less and less interesting" anyway, so he agreed to be designated an adjunct professor. As he recalls the deal, he was going to do a moderate amount of teaching and be paid an "appropriate" salary. But he found the actual salary insulting, declined payment, and never got around to teaching. Not surprisingly, he was not reappointed adjunct professor when his term expired, in 1986. Meanwhile, he had so nominally discharged his duties as the head of the information-mechanics group that the title was given to Toffoli.

Fredkin doubts that his ideas will achieve widespread acceptance anytime soon. He believes that most physicists are so deeply immersed in their kind of mathematics, and so uncomprehending of computation, as to be incapable of grasping the truth. Imagine, he says, that a twentieth-century time traveler visited Italy in the early seventeenth century and tried to reformulate Galileo's ideas in terms of calculus. Although it would be a vastly more powerful language of description than the old one, conveying its importance to the average scientist would be nearly impossible. There are times when Fredkin breaks through the language barrier, but they are few and far between. He can sell one person on one idea, another on another, but nobody seems to get the big picture. It's like a painting of a horse in a meadow, he says. "Everyone else only looks at it with a microscope, and they say, 'Aha, over here I see a little brown pigment. And over here I see a little green pigment.' Okay. Well, I see a horse."

Fredkin's research has nevertheless paid off in unanticipated ways. Comparing a computer's workings and the dynamics of physics turned out to be a good way to figure out how to build a very efficient computer—one that harnesses the laws of physics with great economy. Thus Toffoli and Margolus have designed an inexpensive but powerful cellular-automata machine, the CAM 6. The "machine" is actually a circuit board that when inserted in a personal computer permits it to orchestrate visual complexity at a speed that can be matched only by general-purpose computers costing hundreds of thousands of dollars. Since the circuit board costs only around $1,500, this engrossing machine may well entice young scientific revolutionaries into joining the quest for The Rule. Fredkin speaks of this possibility in almost biblical terms: "The big hope is that there will arise somewhere someone who will have some new, brilliant ideas," he says. "And I think this machine will have a dramatic effect on the probability of that happening."

But even if it does happen, it will not ensure Fredkin a place in scientific history. He is not really on record as believing that the universe is a computer. Although some of his tamer insights have been adopted, fleshed out, and published by Toffoli or Margolus, sometimes in collaboration with him, Fredkin himself has published nothing on digital physics. His stated rationale for not publishing has to do with, of all things, lack of ambition. "I'm just not terribly interested," he says. "A lot of people are fantastically motivated by publishing. It's part of a whole thing of getting ahead in the world." Margolus has another explanation: "Writing something down in good form takes a lot of time. And usually by the time he's done with the first or second draft, he has another wonderful idea that he's off on."

These two theories have merit, but so does a third: Fredkin can't write for academic journals. He doesn't know how. His erratic, hybrid education has left him with a mixture of terminology that neither computer scientists nor physicists recognize as their native tongue. Further, he is not schooled in the rules of scientific discourse; he seems just barely aware of the line between scientific hypothesis and philosophical speculation. He is not politic enough to confine his argument to its essence: that time and space are discrete, and that the state of every point in space at any point in time is determined by a single algorithm. In short, the very background that has allowed Fredkin to see the universe as a computer seems to prevent him from sharing his vision. If he could talk like other scientists, he might see only the things that they see.


Robert Wright is the author of
Three Scientists and Their Gods: Looking for Meaning in an Age of Information, The Moral Animal: Evolutionary Psychology and Everyday Life, and Nonzero: The Logic of Human Destiny.
Copyright © 2002 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; April 1988; Did the Universe Just Happen?; Volume 261, No. 4; page 29.
Behavioral science: A tool for successful cultural change

ICA

The International Compliance Association (ICA) is a professional membership and awarding body. ICA is the leading global provider of professional, certificated qualifications in anti-money laundering; governance, risk, and compliance; and financial crime prevention. ICA members are recognized globally for their commitment to best compliance practice and an enhanced professional reputation. To find out more, visit the ICA website.

As part of a series on culture change for the International Compliance Association, my aim is to demonstrate how studying human behavior can help alleviate some of the challenges of the compliance profession.

However, I must admit this series is also born of frustration. Let me explain.

Behavioral insights

The tools traditionally deployed by organizations to achieve cultural change are blunt instruments. Performance evaluations, bonuses, change management training, staff surveys, and intranet sites crammed full of policies form the core of most firms’ armories.

These tools have been used for many years, and their operational track record isn’t great. Indeed, they can seem like medieval forms of medicine–well-intentioned but without any grounding in science.

Real cultural change requires an understanding of the drivers of human behavior. And the most effective means of grasping these drivers is through behavioral science.

This is where my frustration comes in: Few organizations consult behavioral science when seeking to shape their internal culture. Why are these well-founded techniques not more widely used?

The British government has employed a “Behavioral Insights Team” for more than a decade. This unit works on how best to implement government policies using insights from behavioral science and has applied its findings in a range of interventions, from improving vaccination take-up and general practitioner cancer referrals to boosting exam results and encouraging green investment. The science offers simple and cost-effective interventions that can dramatically improve outcomes and even save lives.

A practical example—policies

Behavioral insights can help design systems that work with human beings rather than against them.

Consider your firm’s policies: do they say things like “documents containing sensitive personal information must not be saved to a shared drive on our network”? Such a policy is setting colleagues up to break the rule—after all, they may not know certain files contain personal data, nor may they know what constitutes “sensitive” personal data. So, expecting them to check the security settings on shared drives seems like a big ask.

If we do not expect the rule to be complied with, then why do we write it?

What if we designed our systems and processes to mitigate these human risks? If we are aware our colleagues tend to store personal data in openly shared areas, why not employ automated controls to subvert that human behavior? File scans for zip codes, automated document mark-up, email data leak protection—all these controls exist and have been used successfully, but for many of you reading this, I’d bet the only controls you have are an aspirational policy and training program for the pesky IT users who keep doing this.
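As a rough illustration of what such a control can look like in practice, here is a minimal sketch in Python. The patterns, file types, and folder are assumptions made up for the example; a real data-leak-prevention tool would be far more careful.

import re
from pathlib import Path

# The patterns, the file types, and the folder below are illustrative
# assumptions, not the rules of any real data-leak-prevention product.
PATTERNS = {
    "US ZIP code": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_shared_drive(root):
    findings = []
    for path in Path(root).rglob("*.txt"):           # keep it simple: text files only
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings

for path, label in scan_shared_drive("/shared"):     # hypothetical mount point
    print(f"possible {label} in {path}: review before leaving it here")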

Applying the science

Very few risk and compliance teams use behavioral science techniques to influence culture. In the same way compliance with government policy can be improved through informed intervention, so can embedding risk and compliance goals within an organization.

In this series, I aim to challenge assumptions about business that I believe are without a solid scientific foundation. These misconceptions include:

  • Staff behave in a way that is consistent with their expressed attitudes;
  • Staff go through a “change curve” when we redesign our strategy or organization;
  • Mistakes, failures, noncompliance, rule breaches, and process slip-ups are mostly caused by “human error”;
  • Employees are rational adults and will align with our policies and culture as long as we clearly communicate them;
  • Ethical business is a concept we can all buy into and strive toward; and
  • Artificial intelligence (AI) can replicate and replace increasingly complex roles in our business.

Now you might consider all or some of the above as undeniable truisms. But I will argue they have no basis in the psychological literature. If anything, the evidence suggests the opposite:

  • Colleagues will say one thing and do another, especially if you ask them moral questions;
  • Staff often don’t move through a “change curve” from anger to acceptance;
  • Attributing a problem to “human error” is diagnostically lazy;
  • Staff do not behave in a rational manner and “treating them like adults” won’t help;
  • Ethical values are irrational and “communication” won’t solve that; and
  • AI absolutely should not replicate what we do; it must be better than us in important ways.

Listening to psychology

Most organizational goals are noble in intention. Helping our people through traumatic change, embedding ethical values, designing work environments with “human factors,” and modernizing organizations with powerful technology are, without doubt, laudable aims. And it is certainly the case that building a compliant, risk-managed, ethical culture is the right thing to strive toward.

We are, however, pursuing these goals in the wrong way. I aim to demonstrate that, by listening to what psychology tells us about being human, we can create human-focused organizations that achieve those goals. To do this, we must abandon axioms and instead listen to what psychology tells us will work.

The International Compliance Association is a sister company to Compliance Week. Both organizations are under the umbrella of Wilmington plc.

A Short History Of AI, And Why It’s Heading In The Wrong Direction

Sir Winston Churchill often spoke of World War 2 as the “Wizard War”. Both the Allies and Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US Military developed the ENIAC, or Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, the ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).

As time marched forward, computers became smaller and faster. The invention of the transistor semiconductor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims of how computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the perception that when computers became faster, as they surely would in the future, they would be able to think like humans do.

But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.

Neural Networks

As AI faded into the sunset in the late 1980s, it allowed Neural Network researchers to get some much needed funding. Neural networks had been around since the 1960s, but were actively squelched by the AI researchers. Starved of resources, not much was heard of neural nets until it became obvious that AI was not living up to the hype. Unlike computers – what original AI was based on – neural networks do not have a processor or a central place to store memory.

Deep Blue computer

Neural networks are not programmed like a computer. They are connected in a way that gives them the ability to learn from their inputs. In this way, they are similar to a mammal brain. After all, in the big picture a brain is just a bunch of neurons connected together in highly specific patterns. The resemblance of neural networks to brains gained them the attention of those disillusioned with computer based AI.

In the mid-1980s, a research team built a neural network by the name of NETtalk that was able to, on the surface at least, learn to read. It was able to do this by learning to map patterns of letters to spoken language. After a little time, it had learned to speak individual words. NETtalk was hailed as a triumph of human ingenuity, capturing news headlines around the world. But from an engineering point of view, what it did was not difficult at all. It did not understand anything. It just matched patterns with sounds. It did learn, however, which is something computer based AI had much difficulty with.
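The flavor of that pattern matching can be conveyed with a deliberately tiny stand-in, sketched below in Python. It is nothing like NETtalk's real network in size or data; it simply learns, from a handful of made-up three-letter windows, whether a "c" sounds like /k/ or /s/, with no understanding anywhere in the loop.

import numpy as np

# Toy stand-in (nothing like NETtalk's real size or data): given a three-letter
# window with "c" in the middle, predict whether the "c" is hard (/k/) or soft (/s/).
windows = ["aca", "oca", "act", "uco", "ace", "ice", "uce", "aci"]
labels  = [  0,     0,     0,     0,     1,     1,     1,     1  ]   # 0 = /k/, 1 = /s/

def encode(w):
    x = np.zeros(26 * 3)
    for i, ch in enumerate(w):
        x[i * 26 + ord(ch) - ord("a")] = 1.0        # one-hot per letter position
    return x

X = np.array([encode(w) for w in windows])
y = np.array(labels, dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (78, 8)), np.zeros(8)   # one small hidden layer
W2, b2 = rng.normal(0, 0.5, 8), 0.0
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):                               # plain backpropagation
    h = sig(X @ W1 + b1)
    p = sig(h @ W2 + b2)
    grad_out = p - y
    W2 -= 0.1 * (h.T @ grad_out)
    b2 -= 0.1 * grad_out.sum()
    grad_h = np.outer(grad_out, W2) * h * (1 - h)
    W1 -= 0.1 * (X.T @ grad_h)
    b1 -= 0.1 * grad_h.sum(axis=0)

for w in ["ici", "eca"]:                            # windows it never saw
    p = sig(sig(encode(w) @ W1 + b1) @ W2 + b2)
    print(w, "/s/" if p > 0.5 else "/k/")           # pattern matching, not understanding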

Eventually, neural networks would suffer a similar fate as computer based AI – a lot of hype and interest, only to fade after they were unable to produce what people expected.

A New Century

The transition into the 21st century saw little in the development of AI. In 1997, IBM's Deep Blue made brief headlines when it beat [Garry Kasparov] at his own game in a series of chess matches. But Deep Blue did not win because it was intelligent. It won because it was simply faster. Deep Blue did not understand chess the same way a calculator does not understand math.

Example of Google’s Inceptionism. The image is taken from the middle of the hierarchy during visual recognition.

Modern times have seen much of the same approach to AI. Google is using neural networks combined with a hierarchical structure and has made some interesting discoveries. One of them is a process called Inceptionism. Neural networks are promising, but they still show no clear path to a true artificial intelligence.

IBM’s Watson was able to best some of Jeopardy’s top players. It’s easy to think of Watson as ‘smart’, but nothing could be further from the truth. Watson retrieves its answers via searching terabytes of information very quickly. It has no ability to actually understand what it’s saying.

One can argue that the process of trying to create AI over the years has influenced how we define it, even to this day. Although we all agree on what the term “artificial” means, defining what “intelligence” actually is presents another layer to the puzzle. Looking at how intelligence was defined in the past will give us some insight into how we have failed to achieve it.

Alan Turing and the Chinese Room

Alan Turing, father to modern computing, developed a simple test to determine if a computer was intelligent. It’s known as the Turing Test, and goes something like this: If a computer can converse with a human such that the human thinks he or she is conversing with another human, then one can say the computer imitated a human, and can be said to possess intelligence. The ELIZA program mentioned above fooled a handful of people with this test. Turing’s definition of intelligence is behavior based, and was accepted for many years. This would change in 1980, when John Searle put forth his Chinese Room argument.

Consider an English speaking man locked in a room. In the room is a desk, and on that desk is a large book. The book is written in English and has instructions on how to manipulate Chinese characters. He doesn’t know what any of it means, but he’s able to follow the instructions. Someone then slips a piece of paper under the door. On the paper is a story and questions about the story, all written in Chinese. The man doesn’t understand a word of it, but is able to use his book to manipulate the Chinese characters. He answers the questions using his book, and passes the paper back under the door.

The Chinese speaking person on the other side reads the answers and determines they are all correct. She comes to the conclusion that the man in the room understands Chinese. It’s obvious to us, however, that the man does not understand Chinese. So what’s the point of the thought experiment?

The man is a processor. The book is a program. The paper under the door is the input. The processor applies the program to the input and produces an output. This simple thought experiment shows that a computer can never be considered intelligent, as it can never understand what it’s doing. It’s just following instructions. The intelligence lies with the author of the book or the programmer. Not the man or the processor.
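A deliberately dumb sketch makes the point concrete (the symbols and rules below are made up; they stand in for the Chinese characters and the book): the "book" is just a table mapping input symbols to output symbols, and the "man" merely follows it.

# The "Chinese" here is replaced by opaque tokens invented for the example; the
# point survives the substitution.  The book maps symbols to symbols, and the
# man just follows it.
BOOK = {
    ("符1", "符2"): "符7",
    ("符3", "符4"): "符8",
}

def man_in_room(question_symbols):
    key = tuple(question_symbols[-2:])       # the rule keys on the last two symbols
    return BOOK.get(key, "符0")              # a default symbol when no rule matches

print(man_in_room(["符9", "符1", "符2"]))    # a fluent-looking answer, zero understanding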

A New Definition of Intelligence

In all of mankind’s pursuit of AI, he has been, and still is, looking for behavior as a definition for intelligence. But John Searle has shown us how a computer can produce intelligent behavior and still not be intelligent. How can the man or processor be intelligent if it does not understand what it’s doing?

All of the above has been said to draw a clear line between behavior and understanding. Intelligence simply cannot be defined by behavior. Behavior is a manifestation of intelligence, and nothing more. Imagine lying still in a dark room. You can think, and are therefore intelligent. But you’re not producing any behavior.

Intelligence should be defined by the ability to understand. [Jeff Hawkins], author of On Intelligence, has developed a way to do this with prediction. He calls it the Memory Prediction Framework. Imagine a system that is constantly trying to predict what will happen next. When a prediction is met, the function is satisfied. When a prediction is not met, focus is pointed at the anomaly until it can be predicted. For example, you hear the jingle of your pet’s collar while you’re sitting at your desk. You turn to the door, predicting you will see your pet walk in. As long as this prediction is met, everything is normal. It is likely you’re unaware of doing this. But if the prediction is violated, it brings the scenario into focus, and you will investigate to find out why you didn’t see your pet walk in.

This process of constantly trying to predict your environment allows you to understand it. Prediction is the essence of intelligence, not behavior. If we can program a computer or neural network to follow the prediction paradigm, it can truly understand its environment. And it is this understanding that will make the machine intelligent.
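A toy sketch of that idea, and only of its flavor, not of Hawkins's actual framework: a predictor that learns which event usually follows which, and "pays attention" only when its prediction fails.

from collections import Counter, defaultdict

# Only the flavor of the idea, not Hawkins's actual framework: learn which event
# usually follows which, and flag attention only when the prediction fails.
counts = defaultdict(Counter)

def observe(prev, nxt):
    predicted = counts[prev].most_common(1)[0][0] if counts[prev] else None
    if predicted is not None and predicted != nxt:
        print(f"surprise: after {prev!r} expected {predicted!r}, got {nxt!r}")
    counts[prev][nxt] += 1                   # keep learning either way

routine = ["jingle", "pet walks in"] * 5 + ["jingle", "nothing happens"]
for prev, nxt in zip(routine, routine[1:]):
    observe(prev, nxt)                       # only the last pair draws attention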

So now it’s your turn. How would you define the ‘intelligence’ in AI?



Week 1, Introduction, The Technics of Simple and Compound Machines

The second lecture introduces the basic characteristics of simple and compound machines, taking as examples the machines described by Georgius Agricola in his De re metallica (1556) and the application of machines to the task of moving the Vatican obelisk as recounted by Domenico Fontana in his work of 1590.

Readings and Sources

Frances and Joseph Gies, Cathedral, Forge, and Waterwheel: Technology and Invention in the Middle Ages (NY: HarperCollins, 1994), summarize the current literature on the machine technology of the Middle Ages. Agricola's work as translated by Herbert and Lou Henry Hoover is still available in a Dover reprint. Also available from Dover is The Various and Ingenious Machines of Agostino Ramelli, trans. by Martha Teach Gnudi, with notes by Eugene S. Ferguson, which is a treasury of Renaissance (i.e. medieval) machine technics.  The catalog for the exhibition Mechanical Marvels: Invention in the Age of Leonardo, organized by the Istituto e Museo di Storia delle Scienze in Florence and the Italian firm Finmeccanica in 1997 offers a rich collection of Renaissance illustrations and photographs of reconstructions of Renaissance machines.  It was accompanied by a compact disc that included the animations of the machines at work.

Week 2, Mill and Manor, Cathedral and Town

With the technical system of the mill covered in Lecture 2 and that of the cathedral laid out in Erlande-Brandenburg's book, the lectures for this week turn to the place of the two systems in their respective social settings. Given the nature of the readings, the lectures emphasize the social and economic presence of the mill and the cathedral. A highly schematic description of the traditional agricultural village sets some of the background for understanding the upheaval brought about by the Industrial Revolution, especially among cottagers and marginal labor.

Readings

The chapters in Holt's study place the medieval English mill in its manorial setting and bring the miller out from behind the caricature presented by the literature of the period. In both cases, the argument offers insight into the sources and methods of cultural history of technology in pre-modern societies.

Sources

The mill is arguably the most sophisticated technical system of the preindustrial world. Certainly it was the most prevalent at the time; Domesday Book recorded almost 6000 watermills in England in 1086, and a century later the windmill began to spread from its apparent origin in East Anglia. Mills dotted the countryside throughout Europe, filled the understructures of the bridge of Paris, and floated in the major rivers. Structurally, the mill seems the evident model for the mechanical clock, devised sometime in the late 12th or early 13th century. Yet, as a technical system and as a social presence the mill has until recently escaped the attention of historians.

Three works now admirably fill the lacuna, the most recent being Richard L. Hills's Power from Wind: A History of Windmill Technology (Cambridge, 1994). Terry S. Reynolds' Stronger Than A Hundred Men: A History of the Vertical Water Wheel (Baltimore: Johns Hopkins, 1983) is a wide-ranging study that moves between technical details and social settings from ancient times down to the nineteenth century and that rests on what seems an exhaustive bibliography. Richard Holt's The Mills of Medieval England (Oxford, 1988) is geographically more focused but socially more detailed. Drawn from extensive archival research, it provides a well illustrated, technically proficient account of the construction and working of water and wind mills, and then sets out the economic, legal, and social structures that tied them to medieval English society. In this latter area, Holt's account significantly revises the long standard interpretation of Marc Bloch in his classic "The Advent and Triumph of the Watermills" (in his Land and Work in Medieval Europe, London 1967; orig. "Avènement et conquête du moulin à eau", Annales d'histoire economique et sociale 36[1935], 583-663), at least for England. John Muendel has written a series of articles on mills in northern Italy, exploring both their technical and their economic structure; see for example "The distribution of mills in the Florentine countryside during the late Middle Ages" in J.A. Raftis, ed., Pathways to Medieval Peasants (Toronto, 1981), 83-115, and "The horizontal mills of Pistoia", Technology and Culture 15(1974), 194-225. Chapter 2 of Lynn White's classic Medieval Technology and Social Change sets the background of the mill in "The Agricultural Revolution of the Middle Ages", and Chapter 3 treats water power, the mill, and machines in general. Marjorie Boyer's "Water mills: a problem for the bridges and boats of medieval France" (History of Technology 7(1982), 1-22), an outgrowth of her study of medieval bridges, calls attention to the urban presence of mills and makes it all the more curious that medieval scholars could talk of the machina mundi without mentioning them.

Two other monographs provide valuable guides to the technical structure of the mill. John Reynolds' Windmills and Watermills (London, 1970) and Rex Wailes's The English Windmill (London, 1954) are both richly illustrated, though Wailes's superb drawings, born of thirty years of visiting mills, often make the operations clearer than do Reynolds' photographs.

Robert Mark's Experiments in Gothic Structures analyses in some detail the technical structure of cathedrals, taking advantage of recent engineering methods such as photoelasticity and finite-element analysis. As the course developed, it became clear that the book is too long and technically detailed. The lecture lays out the main argument, illustrated by slides used in the book and reinforced by Mark's article coauthored with William W. Clark, "Gothic Structural Experimentation", Scientific American 251,5(1984), 176-185. There Mark and Clark document technical communication between builders at Notre Dame in Paris and at the new cathedral in Bourges. Jean Gimpel, The Cathedral Builders (NY, 1961), and Henry Kraus, Gold Was the Mortar: The Economics of Cathedral Building (London, 1978), place the building of cathedrals in the wider social context of the medieval city. In "The Education of the Medieval English Master Masons", Mediaeval Studies 32(1970), 1-26, and "The Geometrical Knowledge of the Medieval Master Masons", Speculum 47(1972), 395-421, Lon Shelby dispels the legends of a secret science by describing the measurements the masons actually carried out in building large structures. David Macaulay's Cathedral brings medieval construction to life in his inimitable drawings, the basis for an animated TV documentary.

The social drama of the building of a cathedral has attracted the attention of several novelists.  Ken Follett's Pillars of the Earth is my favorite, in particular for its technical accuracy and for its focus on the masons whose skill made the buildings possible.

Week 3, Power Machinery, The Steam Engine

The two lectures use slides and models to explain the workings of the new textile machinery and of the steam engine as they were invented and developed in the mid-to-late 18th century. The first focuses on the mechanization of spinning and the different relationship between operator and machine in the jenny and the frame, while the second follows the motives behind Watt's improvements on the Newcomen design and the resulting shift of focus from mines to factories and then to railroads.

Readings

The first chapter of Peter Laslett's The World We Have Lost portrays the social structure of pre-Industrial England, while Richard L. Hills's Power in the Industrial Revolution complements the lectures in several directions: Chapter 2 provides a survey of the process of transforming raw fiber into finished cloth, with an emphasis on the increasing difficulty of translating the manual task into mechanical action; Chapter 10 analyzes the problem of bringing power to the machines, with stress on the difficulties of measurement; and Chapter 11 describes the special difficulties of weaving by power.

Sources

For a more extensive and more recent look at "the world we have lost", see Part I of Patricia Crone's Pre-Industrial Societies (Oxford/Cambridge, MA, 1989), as well as Carlo M. Cipolla's Before the Industrial Revolution: European Society and Economy, 1000-1700 (3rd. ed. NY, 1994).

In addition to Hills's comprehensive and well documented account, Walter English's The Textile Industry: An Account of the Early Inventions of Spinning, Weaving, and Knitting Machines (NY, 1969) contains detailed descriptions, supported by excellent diagrams and illustrations. Also helpful for illustrations are Maurice Daumas (ed.), A History of Technology and Invention (NY, 1978; orig. Histoire générale des techniques, Paris, 1969), vol. III, Part 7, Chaps. 1-2, and Wallace's Rockdale (see following week).

The steam engine must be the best described machine in the history of technology. Particularly good explanations and illustrations can be found in Eugene S. Ferguson's "The Origins of the Steam Engine", Scientific American (Jan. 1964; repr. in Gene I. Rochlin (comp.), Scientific Technology and Social Change: Readings from Scientific American, San Francisco, 1974, Chap. 6); and D.S.L. Cardwell, From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age (Ithaca, 1971). The classic account in English remains Henry W. Dickinson, A Short History of the Steam Engine (London, 1938; 2nd. ed. 1963). On Watt in particular, see the documentary history assembled by Eric Robinson and A.E. Musson, James Watt and the Steam Revolution (London: Adams and Dart, 1969), which includes at the end color reproductions of engineers' wash drawings pertinent to the various patents.

Week 4, The Factory and The Factory System

The two lectures of this week and the first of the next pick up the agricultural society of the late Middle Ages and follow the transition to the new industrial society of the mid-19th century. The central element of that transition is the factory, viewed first as a system of production by machines, then as an organization of human labor, and finally as a new social and economic presence in British politics.

Readings

The readings provide supplementary and contrasting details for the lectures, which perforce are quite general and schematic. Chap. 12 of Hills's book opens the issue of how the machines themselves were produced.  J.T. Ward's The Factory System documents from contemporary sources the transition from domestic to factory production in the textile industry. The readings selected focus on artisans and domestic workers before the introduction of machines and on the initial response to the new system from various perspectives. Ward presents a seemingly bewildering potpourri of details, so it is important to look for the structures that hold them together. The lectures should provide some guidance.

Anthony F.C. Wallace's Rockdale describes the construction (or transformation) of a water-powered mill and reviews the constituent processes of cotton production by machine, supporting his discussion with excellent drawings. He pursues in some detail a question left open by the lectures, namely how manufacturers got hold of the machinery for their factories. Finally, he introduces readers to a group of families who worked in the mills. His verbal descriptions come to life when supplemented by David Macaulay's drawings in Mill (NY, 1983).

Sources

Jennifer Tann's The Development of the Factory (London, 1970) provides the most useful guide to the subject. Tann, who is the editor of The Selected Papers of Boulton & Watt (Vol. I, The Engine Partnership, MIT, 1981), builds her account on the B&W archives, from which she reproduces a rich selection of drawings and layouts of early factories. As she characterizes her work, "One of the themes which emerges from the following pages is that it was the same few manufacturers who adopted the costly innovations such as fire-proof buildings, who installed gas lighting, steam or warm air heating and fire extinguishing apparatus; they were the giants, the ones who are most likely to have left some record of their activities behind, yet in many respects, they were uncharacteristic. They appear to have found little difficulty in recruiting capital yet there were many smaller manufacturers who found difficulty in obtaining long-term loans, to whom a fire-proof factory or gas lighting would have seemed an unobtainable luxury. In this respect some of the most valuable letters are from those manufacturers who decided against buying a Boulton & Watt steam engine which was a good deal more expensive than an atmospheric engine or a simple water wheel."(p.2)

Other useful studies include:

William Fairbairn, Treatise on Mills and Millwork (2 vols., London, 1861-63; 2nd ed. 1864-65; 3rd ed. 1871-74; 4th ed. 1878); as the several editions suggest, this was the fundamental manual of mill design, covering the building, the source of power, the transmission of power, heating, lighting, etc. Rich in illustrations.

Brian Bracegirdle, et al., The Archaeology of the Industrial Revolution (London, 1973), which contains magnificent black/white and color photos and line drawings of mills and steam engines, together with a brief but choice bibliography.

J.M. Richards, The Functional Tradition in Early Industrial Building (London, 1959); The many illustrations of early factories, water and wind mills, etc. are more helpful than the text.

A variety of sources provide glimpses of the workforce that first entered the new factories. Frank E. Huggett gives a short account built around extensive quotations from original sources in The Past, Present and Future of Factory Life and Work: A Documentary Inquiry (London, 1973). For a shorter account of the difficulties of adjustment to the regimen of the factory in its earliest days, based on a sampling of documentary evidence reflecting the experience of literate participants, see Sidney Pollard, "Factory Discipline in the Industrial Revolution", Economic History Review 16(1963), 254-271. Humphrey Jennings has compiled a potpourri of original reports in Pandaemonium: The Coming of the Machine as Seen by Contemporary Observers, 1660-1886 (NY 1985).

Week 5, The Formation of Industrial Society, Industrial Ideologies

In 1815 England was ruled by a constitutional monarch, a hereditary nobility, and a landed gentry under a settlement worked out in 1689 after a half-century of turmoil. National policy was limited to matters of trade and diplomacy, with internal matters left to local government as embodied by Justices of the Peace meeting in Quarter Sessions. Land was the basis of political power, even as it was steadily losing in economic power to commerce and industry, the interests of which were largely unrepresented in Parliament. Over the next fifty years, that balance shifted radically, as the Constitution was reshaped to extend political voice to an urban electorate and to respond at the national level to the social and economic problems posed by rapid industrialization. That the transformation occurred without major violence makes it a remarkable chapter in British history. The first lecture traces the main outline of that process. The second examines contemporary efforts to explain the changes occurring at the time and to determine the basic structure of the newly emerging society. The lecture emphasizes the contrast between the political economists, who viewed industrialization as a perturbation in the dynamical system of the market, and Marx, who saw it as a new stage in the evolution of political society.

Reading

Charles Babbage's On the Economy of Machinery and Manufactures and Karl Marx's "Machinery and Large-Scale Industry" (Capital, Vol.I, Chap.15) offer strongly contrasting views of the nature and future of the new industrial system. The specific chapters in Babbage show him trying to think out systems of production and looking forward to division of mental labor, i.e. management, which will become a theme later in the course. Few people have actually read Marx, who must rank as one of the greatest historians of technology; this is an opportunity to meet him on his home ground. Finally, E.P. Thompson's classic "Time, Work-Discipline, and Industrial Capitalism", Past and Present 38(1967), 56-97, offers a glimpse into the changing lives of industrial workers.

Sources

Histories of the Industrial Revolution in Britain abound. I have drawn mostly on S.G. Checkland, The Rise of Industrial Society in England, 1815-1885 (London, 1971), E.P. Thompson, The Making of the English Working Class (NY, 1963), and Phyllis Deane, The First Industrial Revolution (Cambridge, 1965). A recent study is Maxine Berg, The Age of Manufactures, 1700-1820: Industry, Innovation, and Work in Britain (Totowa, 1985/Oxford, 1986).

As an introduction to the history of economic thought, Robert L. Heilbroner's The Worldly Philosophers (NY, 1953; repr. in several editions since) is succinct and readable. For a reconsideration of Marx's technological determinism, see Donald Mackenzie, "Marx and the Machine", Technology and Culture 25(1984), 473-502, and John M. Sherwood, "Engels, Marx, Malthus, and the Machine", American Historical Review 90,4(1985), 837-65.

Week 6, The Machine in the Garden, John H. Hall and the Origins of the "American System"

England was the prototype for industrialization. The rest of the world could look to that country as an example of what to emulate and what to avoid. Some saw a land of power and prosperity and wondered aloud whether God might after all be an Englishman; others saw "dark, Satanic mills" and the "specter of Manchester" with its filthy slums and human misery. Americans in particular thought hard about industry and whether it could be reconciled with the republican virtues seemingly rooted in an agrarian order. "Let our workshops remain in Europe," urged Jefferson in his Notes on Virginia in 1785, and he was no happier for being wiser about the feasibility of that policy after the War of 1812. Nor did all his fellow countrymen agree in principle. Some saw vast opportunities for industry in a land rich in natural resources, including seemingly endless supplies of wood and of waterpower. The debate between the two views became a continuing theme of American literature, characterized by Leo Marx as The Machine in the Garden (NY, 1964).

The combination of abundant resources and scarce labor meant that industrialization in America would depend on the use of machinery, and from the outset American inventors strove to translate manual tasks into mechanical action. For reasons that so far elude scholarly consensus, Americans' fascination with machines informed their approach to manufacturing to such an extent that British observers in the mid-19th century characterized machine-based production as the "American System". Precisely what was meant by that at the time is not clear, but by the end of the century it came to mean mass production by means of interchangeable parts. The origins of that system lay in the new nation's armories, in particular at Harpers Ferry, where John H. Hall first devised techniques for serial machining of parts within given tolerances.

Reading

New to the course for 2001 is Ruth Schwartz Cowan's A Social History of American Technology, which provides the background for lectures dealing with case studies of the "republican technology" of the Lowell factories and the beginnings of the "American System" of mass production at Harpers Ferry Armory.

Sources

Chapter 2 of John F. Kasson's Civilizing the Machine: Technology and Republican Values in America, 1776-1900 relates Lowell's great experiment in combining automatic textile machinery with a transient female workforce to avoid a permanent urban proletariat. As a social experiment to reconcile industry with democratic values, Lowell has intrigued labor historians almost as much as it did contemporary observers. The most comprehensive account, based on payroll records and tax inventories, is Thomas Dublin, Women at Work: The Transformation Of Work and Community in Lowell, Massachusetts, 1826-1860 (NY, 1979). His Farm to Factory: Women's Letters, 1830-1860 (NY, 1981) transmits the workers' own words about their lives, as does Philip S. Foner's The Factory Girls (Urbana, 1977), meant to counteract the rosy picture painted in the factory-sponsored Lowell Offering, which has recently been reprinted. For a collection of original sources, factory views, and maps, see Gary Kulik, Roger Parks, and Theodore Penn (eds.), The New England Mill Village, 1790-1860 (Documents in American Industrial History, II, Cambridge, MA, 1982).

Merritt Roe Smith's Harpers Ferry Armory remains the standard account of John H. Hall's system for producing rifles with interchangeable parts on a scale large enough to be economical. In addition to the technical details of the machinery and managerial techniques that made an industry of gunsmithing, Smith examines the political and social structure of the master gunmakers and the threats that the new technology posed to their way of life. Elting E. Morison's From Know-How to Nowhere: The Development of American Technology (NY, 1974) is a thoughtful and provocative account of engineering from colonial times to the early 20th century, emphasizing the loss of autonomy and accountability that came with modern industrial research. Two more recent accounts are David Freeman Hawke, Nuts and Bolts of the Past: A History of American Technology (NY, 1988), and Thomas P. Hughes, American Genesis: A Century of Invention and Technological Enthusiasm, 1870-1970 (NY, 1989). Brooke Hindle and Steven Lubar provide a richly illustrated survey of industrialization in America in their Engines of Change: The American Industrial Revolution (Washington, 1986). The book is based on an exhibit at the Smithsonian's National Museum of American History, the pictorial materials for which have been recorded on a videodisc available on request from the Museum. The now standard account of the development of mass production is David A. Hounshell, From the American System to Mass Production: The Development of Manufacturing Technology in the United States (Baltimore, 1984). A shorter version of his main thesis, together with an account of the armory system by Smith, is contained in Otto Mayr and Robert C. Post (eds.), Yankee Enterprise: The Rise of the American System of Manufactures (Washington, 1982).

Week 7, Precision and Production, Ford's Model T: A $500 Car

When Americans first began machine-based production, the United States had no machine-tool industry other than the shops attached to the factories themselves and the shops of traditional artisans such as clockmakers and gunsmiths. Using traditional hand tools, machine builders worked to routine tolerances of 1/100", and precision was achieved by fitting part to part. In 1914, the first full year of assembly-line production at Ford, rough surveys revealed some 500 firms producing over $30,000,000 worth of machine tools ranging from the most general to the most specific. Over the intervening century, routine shop-floor precision increased from 1/100" to 1/10,000", and the finest instruments could measure 1/1,000,000". Such precision does not occur naturally, nor are the means of attaining it self-evident. Indeed, Britain's leading machinist, Joseph Whitworth, testified before Parliament that interchangeability and full machine production were not possible in principle. The achievement of the requisite precision over the course of the century is a remarkable story, still not told outside the specialist literature, and the first lecture is an effort to tell it.

Accuracy to 0.0001", achieved automatically by machines, was a prerequisite of Ford's methods of production and hence of the automobile he designed to meet the needs and means of the millions of potential owners. The second lecture backs up to provide an account of the invention of the internal combustion engine, which, like the steam engine, was originally conceived as a stationary source of power but was adapted to use in a vehicle. After a quick survey of the earliest automobiles, the lecture "reads" Ford's design of the Model T, first with respect to its intended user and then with respect to the methods by which Ford could produce it at an affordable price. Appendix I is a version of the second part of the lecture used with general audiences.

Reading

Cowan's book again provides general background for lectures on the origins of consumer society in the move of machinery from the factory to the home.  Nathan Rosenberg's seminal article, "Technological Change in the Machine Tool Industry, 1840-1910", Journal of Economic History 23(1963), 414-443, reveals the characteristics of machine tools that facilitated, or perhaps even made possible, the rapid diffusion of new techniques and levels of precision. The later lectures on software hark back to Rosenberg's interpretation when exploring the models of production informing current software engineering.

Sources

The essays by A.E. Musson, Paul Uselding, and David Hounshell in Mayr and Post's Yankee Enterprise provide accounts, respectively, of the British background, the development of precision instrumentation, and the development of mass production by means of interchangeable parts. In other years, I have used the early chapters of Hounshell's From the American System to Mass Production. Robert S. Woodbury, who first debunked "The Legend of Eli Whitney and Interchangeable Parts" (Technology and Culture 1(1960), 235-254), made a start on a comprehensive history of machine tools in the 19th century, working on a machine-by-machine basis, which he intended as prelude to a history of precision measurement and interchangeable parts. His histories of the gear-cutting machine, grinding machine, lathe, and milling machine, combined in 1972 as Studies in the History of Machine Tools, provide technical details and illustrations. W. Steeds offers a comprehensive, illustrated account in A History of Machine Tools, 1700-1910 (Oxford, 1969); cf. also L.T.C. Rolt, Tools for the Job: A History of Machine Tools (rev. ed. London, 1986), and Chris Evans, Precision Engineering: an Evolutionary View (Bedford: Cranfield Press, 1989). Machine tools caught the particular attention of the 1880 census, for which Charles H. Fitch compiled under the title Report on Power and Machinery Employed in Manufactures (Washington, 1888) an extensive, richly illustrated inventory of the tools then used in American industry. Frederick A. Halsey's classic Methods of Machine Shop Work (NY, 1914) defines the terms and standards of the industry at the turn of the 20th century. The Armington and Sims Machine Shop at Greenfield Village, Henry Ford Museum, in Dearborn is a restoration of a 19th-century production shop, powered by a steam engine through an overhead belt-and-pulley system.

Perhaps the best short account of the internal combustion engine is Lynwood Bryant's "The Origin of the Automobile Engine" (Scientific American, March 1967; repr. in Gene I. Rochlin (comp.), Scientific Technology and Social Change: Readings from Scientific American, San Francisco, 1974, Chap.9), which focuses on Otto's development of the four-cycle engine on the basis of a specious notion of "stratified charge". For greater detail, see his two articles, "The Silent Otto", Technology and Culture 7(1966), 184-200, and "The Origin of the Four-Stroke Cycle", ibid. 8(1967), 178-198, and for contrast, see his "Rudolf Diesel and His Rational Engine", Scientific American, August 1969 (repr. in Rochlin, Chap.10). On the early development of the automobile, see James J. Flink, America Adopts the Automobile, 1895-1910 (Cambridge, MA, 1970). John B. Rae offers a brief general history in The American Automobile (Chicago, 1965).

Allen Nevins tells the story of the Model T, which realized Ford's vision of a cheap, reliable car for the mass market, in Vol.I of his three-volume Ford: The Times, the Man, the Company (NY, 1954). However, the vehicle tells its own story when viewed through photographs of its multifarious uses, diagrams from the user's manual and parts list, advertisements by suppliers of parts and options, and stories about the "Tin Lizzie". Floyd Clymer's Historical Motor Scrapbook: Ford's Model T (Arleta, CA, 1954) offers an assortment of such materials, along with sections of the Operating Manual and Parts List. Reproductions of the manual and parts list are also available at the Henry Ford Museum in Dearborn. Several companies produced plastic and metal models of the car in varying detail, though nothing beats seeing the car itself, perhaps in the hands of a local antique car buff.

Week 8, Highland Park and the Assembly Line, Ford and the Five-Dollar Day

The first lecture moves from the Model T to the machines Ford designed to produce it and to the organization of those machines in his new assembly-line factory at Highland Park. With the machines in place and the pace of assembly established, Ford faced the problem of keeping increasing numbers of people at work tending the machines and keeping pace with the line. Although most jobs required little or no skill, they did demand sustained attention to repetitive tasks over a continuous period of time. The need to combat a 300% annual turnover among his labor force, combined with $27 million in excess profits in January 1914, induced Ford and his Vice-President James Couzens to introduce the "Bonus Plan", by which the standard wage at Highland Park jumped overnight from $2.30 to $5.00 for an eight-hour day. But the $5 day was only the most striking of Ford's efforts to retain the loyalty of his workers. Through John R. Lee and the Sociological Department, the company had already begun a program of factory outreach, involving itself in the lives of its employees. Although welcomed at first, the essentially paternalistic system led eventually to an oppressive system of control and triggered the union strife of the '30s, which erased Ford's earlier benevolence from popular memory.

Reading

Henry Ford spoke for himself (through a ghost writer) in the article on "Mass Production" that appeared in the 13th edition of the Encyclopedia Britannica, and it is instructive, especially given the retrospectively critical stance of current historians, to see how the system looked through his eyes.  Among those historians is Stephen Meyer, whose book is the fullest historical account of the labor policy surrounding the $5 day.

Sources

Ford's Highland Park Plant, built to produce the Model T by his new methods, caught the attention of industrial engineers when it began full assembly-line operation in 1914. As a result, journals of the day offered extensive descriptions and illustrations of the plant. Perhaps the most informative contemporary source, Horace L. Arnold and Fay L. Faurote, Ford Methods and the Ford Shops, began as a series of articles in Engineering Magazine. Faurote was a member of the Taylor Society, and his account looks at Highland Park from the perspective of Scientific Management, especially in its emphasis on the paperwork involved in management of workers and inventory. David Hounshell's account in Chapters 6 and 7 of From the American System to Mass Production draws liberally from the photo collection of the Ford Archives, and the Smithsonian Institution has a short film loop depicting the assembly line in action. Lindy Biggs's The Rational Factory: Architecture, Technology, and Work in America's Age of Mass Production (Baltimore, 1996) analyzes the Ford plants as buildings in motion.

Allen Nevins's biography Ford: The Times, the Man, the Company is a useful counterbalance to Meyer's interpretation of the motives behind the $5 day. Ely Chinoy's Automobile Workers and the American Dream (Garden City, 1955) pursues the long-term effects of Ford's system of production on the workers it employed.

Week 9, Taylorism and Fordism, Mass Distribution: The Consumer Society

Since at the time Ford's methods were often associated with those proposed by Frederick W. Taylor under the name of "task management" or, more popularly, "Scientific Management", the second lecture examines Taylor's career as a consultant on shop-floor organization and the nature and scope of his Principles of Scientific Management, published at just about the time Ford was laying out Highland Park. In the end, the lecture emphasizes the quite different assumptions of the two men concerning the role of the worker in machine production and hence the essential incompatibility of Taylor's principles with Ford's methods of production. Nonetheless, as Taylor's followers found when they visited Highland Park, in matters of supervision and inventory control the two systems had much in common.

The $500 car (which by 1924 cost $290) was the most recent of a host of machines built for and sold to a new middle-class consumer society, which, through the $5 day, came to include the automobile worker. Mass production went hand-in-hand with mass distribution; indeed, the former made no sense without the latter. The second lecture presents a survey of the developments in communication, transportation, and management that made possible the patterns of consumption and the concomitant restructuring of society and politics noted by the Lynds in Middletown (Muncie, IN) in 1924.

Reading

Cowan provides an overview of the newly emerging consumer society, and selections from Robert S. and Helen Lynd's classic sociological study of Middletown offer a contemporary glimpse of that society as it was taking shape.

Sources

The best source for understanding Frederick W. Taylor is his own tract, The Principles of Scientific Management (NY, 1911; repr. 1939, 1947, 1967). The most recent and complete biography is Robert Kanigel's The One Best Way: Frederick Winslow Taylor and the Enigma of Efficiency (NY, 1997). Daniel Nelson's study of Taylor, Frederick W. Taylor and the Rise of Scientific Management (Madison, 1980) complements his earlier account of factory management, while Samuel Haber's Efficiency and Uplift: Scientific Management in the Progressive Era, 1890-1920 (Chicago, 1964) places Taylor in the context of the conservation and efficiency movements of turn-of-the-century America. Hugh G.J. Aitken's Taylorism at Watertown Arsenal: Scientific Management in Action, 1908-1915 (Cambridge, MA, 1960) remains a classic study of the development and implications of Taylor's ideas. While Ford himself perhaps could honestly claim not to have known about Taylor's methods, Hounshell shows that many of the people who worked with him in designing the assembly line and organizing the Ford workers did have backgrounds in Scientific Management. Alfred D. Chandler's magisterial The Visible Hand: The Managerial Revolution in American Business (Cambridge, 1977) puts Taylor and Ford in the context of the development of new managerial practices in America at the turn of the century. Judith A. Merkle's Management and Ideology: The Legacy of the International Scientific Management Movement (Berkeley: University of California Press, 1980), Stephen P. Waring's Taylorism Transformed: Scientific Management Theory Since 1945 (Chapel Hill: University of North Carolina Press, 1991), and Nelson's A Mental Revolution: Scientific Management Since Taylor (Columbus: Ohio State University Press, 1992) bring the story down to the present.

The second lecture draws heavily from Alfred D. Chandler, Jr., The Visible Hand: The Managerial Revolution in American Business (Cambridge, 1977) and Daniel J. Boorstin, The Americans: The Democratic Experience (NY, 1973). For a more recent, richly illustrated account, see Susan Strasser, Satisfaction Guaranteed: The Making of the American Mass Market (Washington DC, 1989).

Week 10, From the Difference Engine to ENIAC; From Boole to EDVAC

The first lecture traces the dual roots of the stored-program digital electronic computer viewed as the combination of a mechanical calculator and a logic machine. Taking the first designs as both flexible and inchoate, the second examines the groups of people who gave it shape by incorporating it into their enterprises. In particular, the lecture looks at the means by which the nascent computer industry sold the computer to business and industry, thus creating the machine by creating demand for it.

Reading

Aspray and Campbell-Kelly provide perhaps the best historical account of the computer, emphasizing the environments into which it was introduced when it was new and the role they played in shaping its development.

Sources

Another recent history is Paul Ceruzzi's A History of Modern Computing (Cambridge, MA, 1998), which provides considerable detail about the development of the industry. For the development of the machine itself, Stan Augarten's Bit by Bit: An Illustrated History of Computers is a generally reliable, engagingly written, and richly illustrated survey from early methods of counting to the PC and supercomputer. Michael R. Williams's A History of Computing Technology (Prentice-Hall, 1985) takes a more scholarly approach to the same material but emphasizes developments before the computer itself. In 407 pages of text, the slide rule appears at p.111 and ENIAC at p.271; coverage ends with the IBM/360 series. Although once useful as a general account, Herman Goldstine's still oft-cited The Computer from Pascal to Von Neumann (Princeton, 1973) retains its value primarily for its personal account of the ENIAC project and of the author's subsequent work at the Institute for Advanced Study in Princeton.

Martin Davis' The Universal Computer: The Road from Leibniz to Turing (New York, 2000; paperback under the title Engines of Logic, 2001) has recently joined Sybille Krämer's Symbolische Maschinen: die Idee der Formalisierung in geschichtlichem Abriss (Darmstadt, 1988) in tracing the origins of the computer to the development of mathematical logic. William Aspray's dissertation, "From Mathematical Constructivity to Computer Science: Alan Turing, John von Neumann, and the Origins of Computer Science" (Wisconsin, 1980), covers the period from Hilbert's Program to the design of EDVAC, as does Martin Davis's "Mathematical Logic and the Origin of Modern Computers", in Esther R. Phillips (ed.), Studies in the History of Mathematics (MAA Studies in Mathematics, Vol.26; NY, 1987). The nineteenth-century background belongs to the history of mathematics and of logic proper, but the scholarly literature in those fields is spotty. Andrew Hodges's biography, Alan Turing: The Enigma (NY, 1983), is a splendid account of Turing's work and served as the basis for a compelling stage play, Breaking the Code. Aspray's John von Neumann and the Origins of Modern Computing (MIT, 1990) explores in some detail von Neumann's work both in the design and the application of computers.

Those who want to get right down into the workings of the computer should turn to Charles Petzold, Code: The Hidden Language of Computer Hardware and Software (Redmond, WA, 1999). Softer introductions may be found in Alan W. Biermann's Great Ideas in Computer Science: A Gentle Introduction (2nd ed., MIT Press, 1997) and Jay David Bolter's Turing's Man: Western Culture in the Computer Age (Chapel Hill, 1984).

Week 11, The Development of the Computer Industry, The Software Paradox

In keeping with the dual origins of the computer, the development of the industry since the early '50s has two distinct, though related aspects. Through transistors, integrated circuits, and VLSI, computers themselves have increased in power by a factor of 100 every five years, while dropping in price at about the same rate. Rapid progress in the development of hardware has made visionary devices commonplace within a span of five or ten years. IBM, DEC, and Apple represent the successive stages by which computers were transformed from specially designed capital investments to mass-produced consumer items over the span of thirty years.
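
For a sense of scale, a factor of 100 every five years works out to roughly a 2.5x gain per year, or a doubling about every nine months. The short Python sketch below spells out that arithmetic; it is illustrative only and is not drawn from any of the readings or sources cited here.

import math

# A 100x gain in computing power every five years implies the annual multiplier below.
annual_factor = 100 ** (1 / 5)                          # ~2.51x per year
doubling_years = math.log(2) / math.log(annual_factor)  # ~0.75 years
print(f"{annual_factor:.2f}x per year; doubling roughly every {doubling_years * 12:.0f} months")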

The software paradox is simply stated: programmers have successfully automated everyone's job but their own. With the commercialization of the computer came the need to provide customers with the programs that matched its power to their purposes, either directly through application programs or indirectly through programming languages, operating systems, and related tools. Both industry and customers soon found themselves hiring and trying to manage large numbers of programmers who had no previous training either in computers or in systems analysis but whose programming skills gave them effective control over their work. To address the resulting issues of productivity and quality control, computer engineers and managers turned to earlier models of production, in particular through automatic programming and the software equivalent of interchangeable parts. So far, efforts to Taylorize or Fordize the production of programs have been unsuccessful. Nonetheless, they testify to the abiding impression that Taylor and Ford have made on American engineering and thus provide firm historical roots to modern technology.

Reading

The choice of Tracy Kidder's The Soul of a New Machine is aimed directly at continuing the theme of technology and the nature of work. In addition to portraying the complex organization on which modern technological development depends, it raises intriguing questions of power and exploitation.

Sources

The development of the computer industry is only now coming under the scrutiny of historians, and most of the current literature stems from journalists. The foremost exceptions are the recent books by Paul Ceruzzi and by William Aspray and Martin Campbell-Kelly, both of which offer a much needed and long awaited survey of the history of the industry from its pre-computer roots to the present. For a review of the state of the field several years ago, see my article, "The History of Computing in the History of Technology", Annals of the History of Computing 10(1988), 113-125 [pdf], updated in "Issues in the History of Computing", in Thomas J. Bergin and Rick G. Gibson (eds.), History of Programming Languages II (NY: ACM Press, 1996), 772-81. The Annals themselves constitute one of the most important sources. Among the most useful accounts are Augarten's Bit by Bit; Kenneth Flamm, Creating the Computer: Government, Industry, and High Technology (Washington, 1988); Katharine Davis Fishman, The Computer Establishment (NY, 1981); Howard Rheingold, Tools for Thought: The History and Future of Mind-Expanding Technology (NY, 1985); and Pamela McCorduck, Machines Who Think (San Francisco, 1979). David E. Lundstrom's A Few Good Men from Univac (Cambridge, MA, 1987) provides a critical look at the early industry, while the collaborative effort of Charles J. Bashe, Lyle R. Johnson, John H. Palmer, and Emerson W. Pugh on IBM's Early Computers (Cambridge, MA, 1986) provides an exhaustively detailed account, based on company documents, of IBM's entry into the market and the series of machines up to the 360, which is the subject of a second volume now nearing completion. Paul Freiberger and Michael Swaine's Fire in the Valley: The Making of the Personal Computer (Berkeley, 1984) remains one of the best accounts of the early days of the PC industry.

The history of software remains largely unwritten and must be gleaned from the professional literature.  For overviews see my articles, "The Roots of Software Engineering", CWI Quarterly, 3,4(1990), 325-34 [pdf] and "Software: The Self-Programming Machine", in Atsushi Akera and Frederik Nebeker (eds.), From 0 to 1:  An Authoritative History of Modern Computing (New York: Oxford U.P., 2002), as well as the entry "Software History" in Anthony Ralston et al., Encyclopedia of Computer Science, 4th edition (London, 2000).  For "A Gentle Introduction" to what software is about, see Alan W. Biermann, Great Ideas in Computer Science (Cambridge, MA, 1997).

There is a growing number of personal accounts and reminiscences by computer people.  Among the most thought-provoking and least self-serving are Ellen Ullman, Close to the Machine: Technophilia and Its Discontents (San Francisco, 1997), Richard P. Gabriel, Patterns of Software:  Tales from the Software Community (New York, 1996), and Robert N. Britcher, The Limits of Software:  People, Projects, and Perspectives (Reading, MA, 1999).

Week 12, Working Toward Choices, Where Are We Now?

The computer is only one of several technologies, which, spawned or encouraged by the demands of World War II, rapidly transformed American society in the twenty-five years after 1945, bringing a general prosperity thought unimaginable even before the Depression. With that prosperity came new problems and a growing sense that technology threatened society as much as, or even more than, it fostered it. The first lecture reviews the major elements of modern high technology as it has developed since the war, and the second tries to put the issues it raises into the perspective of the course as a whole. In the end, the course has no answers to offer, but only questions that may prove fruitful in seeking them.

Reading

Langdon Winner's "Do Artifacts Have Politics?" argues that indeed they do, that is, that how technologies will be used is part of how they are designed. Although that view does not preclude unintended consequences, it does place responsibility for technologies on the people who create, maintain, and use them. The readings throughout the course offer ample material for putting Winner's thesis to the test.

Perhaps the most famous example discussed by Winner is the story of Robert Moses and the parkway bridges designed too low to allow access by bus to Jones Beach.  The story, taken from Robert Caro's well known biography of Moses, turns out on close examination to be inaccurate in several details.  For a discussion of the story and its use by Winner, see Bernward Joerges, "Do Politics Have Artefacts?" Social Studies of Science 29,3(1999), 411-31 [JSTOR], Steve Woolgar and Geoff Cooper, "Do Artefacts Have Ambivalence? Moses' Bridges, Winner's Bridges, and Other Urban Legends in S&TS", Ibid., 433-49 [JSTOR], and Joerges, "Scams Cannot Be Busted: Reply to Cooper and Woolgar", Ibid., 450-57 [JSTOR].

Sources

Fred C. Allvine and Fred A. Tarpley, Jr. provide a brief survey of the major changes in the U.S. economy during the quarter century following World War II in The New State of the Economy (Cambridge, MA, 1977). Peter Drucker, The Age of Discontinuity (NY, 1968, 2nd ed. 1978), and John Kenneth Galbraith, The New Industrial State (NY, 1967, 3rd ed., 1978), lay particular emphasis on new technologies and their effect on our economic institutions, while Seymour Melman, Profits without Production (NY, 1983), and Michael Piore and Charles Sabel, The Second Industrial Divide: Possibilities for Prosperity (NY, 1984), question how positive those effects have been.

Winner's Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (Cambridge, MA, 1977) is a fully developed statement of the issues discussed in his article. Literature on the political assumptions underlying technology abounds; indeed, some of it provoked this course in the first place. Among the more recent and more interesting are David F. Noble, Forces of Production: A Social History of Industrial Automation (NY/Oxford, 1986), Walter A. McDougall, ...The Heavens and the Earth: A Political History of the Space Age (NY 1985), Shoshana Zuboff, In the Age of the Smart Machine: The Future of Work and Power (NY, 1988), Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in the Cold War (Cambridge, MA: MIT, 1996), and Gene I. Rochlin, Trapped in the Net: The Unanticipated Consequences of Computerization (Princeton: PU Press, 1997). For the history of the Internet, see Janet Abbate, Inventing the Internet (Cambridge, MA: MIT Press, 1999).

As of 1996, the Internet and the World Wide Web, especially when grouped together under the concept of the National Information Superhighway, have become prime subjects for political and cultural analysis along the lines suggested by this week's readings and by the interpretive themes of the course. An article written twenty-five years ago retains its pertinence. In "The Mythos of the Electronic Revolution" (American Scholar 39(1969-70), 219-241, 395-424), James W. Carey and John J. Quirk place the claims of the 1930s for a revolution of electrical power in the framework of Leo Marx's Machine in the Garden and show how the notions of the "electronic village" or the "technotronic era" fashionable in the late '60s are similarly 20th-century evocations of the middle landscape. Cyberspace would seem to be the latest.

Reading Period

Albert Borgmann's Holding Onto Reality: The Nature of Information at the Turn of the Millennium is one of a spate of recent books attempting to place the "information revolution" into some sort of historical perspective.  Others include: Michael E. Hobart and Zachary S. Schiffman, Information Ages: Literacy, Numeracy, and the Computer Revolution (Baltimore, 1998); James J. O'Donnell, Avatars of the Word:  From Papyrus to Cyberspace (Cambridge, MA, 1998). Less historical but nonetheless suggestive are John Seely Brown and Paul Duguid, The Social Life of Information (Boston, 2000) and Jay David Bolter and Richard Grusin, Remediation: Understanding New Media (Cambridge, MA, 1999).  Books on the Internet abound; perhaps the most important is Lawrence Lessig's Code and Other Laws of Cyberspace (New York, 1999), followed now by his The Future of Ideas: The Fate of the Commons in a Connected World (New York, 2001).

Sample Examination Questions

The lectures and readings of this course tend to emphasize the ways in which inventors, entrepreneurs, and workers looked upon machines as determinants of their socio-economic life.  At several points, however, we have caught glimpses of inventors and onlookers who have seen in machines expressions, either direct or symbolic, of the ideals and aspirations of their society.  Using specific examples over the range of the course, explore the role of the creative and esthetic imagination in determining the ways societies shape and respond to their technologies.

"Although historians often speak of 'the industrial revolution' and 'the rise of the factory system' in the singular, there have in reality been not one but three such revolutions.  The first was the mechanization of the textile industry in the late 18th and early 19th centuries, the second was the mechanization of the consumer durables industry in the middle-to-late 19th century through the 'American System', and the third was the growth of 'rational management' in the early 20th century.  In each of these 'revolutions' the technological basis, the economic objective, and the impact upon labor were completely different, and it is historically false to see the three as phases of a single development, as history texts usually do."  Discuss critically.

"But, as is common knowledge, an invention rarely spreads until it is strongly felt to be a social necessity, if only for the reason that its construction then becomes a matter of routine."  The eminent French historian, Marc Bloch, made this general claim in an article about the watermill in the Middle Ages.  Discuss its validity with reference to the automobile and the computer.

You have read examples of two analyses of the character and effects of industrialization in England in the early nineteenth century, namely selections from Babbage's On the Economy of Machinery and Manufactures and a central chapter of Marx's Capital.  How well does each analysis explain the course of the "Lowell Experiment", as described by Kasson, from its initial inspiration to its eventual outcome in the 1840s?

The early textile factories were called "mills".  Given the traditional technical system denoted by the term, what does that usage tell us about initial perceptions of the factory?  In what ways was the usage deceptive from the outset?  Use specific examples to illustrate your analysis.

"One can argue that medieval Europe was a highly sophisticated technological society of a certain sort, involved in a fairly rapid, continuing process of sociotechnical change. One does not have to wait for the industrial revolution ... to see political societies remolded in response to technical innovation." Drawing on specific evidence from the lectures and readings so far, either make that argument or refute it.

"The factory was more than just a larger work unit. It was a system of production, resting on a characteristic definition of the functions and responsibilities of different participants in the productive process." (David Landes) Discuss Landes's assertion with reference to Harpers Ferry Armory, Ford's Highland Park Plant, and Data General's Westborough facility.

Ford's Sociological Department was a formal system of social control in an industrial setting. Compare and contrast this system with the forms of control at work in the Lowell textile mills and in the Eagle project at Data General.

"We do not use technologies so much as live them." Use Meyer's The Five-Dollar Day and the Lynds's Middletown to discuss this claim by Langdon Winner.
 
 

Healthcare Prescriptive Analytics Market Size and Growth 2022 Research Analysis by Volume, Price, Recent Development and Forecast to 2028

Jun 24, 2022 (The Expresswire) -- "Final Report will add the analysis of the impact of COVID-19 on this industry."

The global “Healthcare Prescriptive Analytics Market” forecast 2022-2028 report gives comprehensive coverage of the market across different market segments, deep country-level analysis, and an examination of drivers, restraints, key trends, and opportunities. The Healthcare Prescriptive Analytics market report also focuses on key business financials, product portfolios, expansion strategies, and recent developments. The Healthcare Prescriptive Analytics market size report contains growth rate, revenue, and segmentation by product type, application, end-users, regions, manufacturers, and more.

Market Analysis and Insights: Global Healthcare Prescriptive Analytics Market

Healthcare prescriptive analytics is the final stage and the future of health care analysis. It does not merely predict future outcomes, but suggests the options at hand and then demonstrates the implications of each to make the decision-making process more rational, streamlined and optimized.
This report focuses on the global and United States Healthcare Prescriptive Analytics market, and also covers segmentation data for other regions at the regional and country level.
Due to the COVID-19 pandemic, the global Healthcare Prescriptive Analytics market size is estimated to be worth USD 8735.1 million in 2022 and is forecast to reach a readjusted size of USD 14650 million by 2028, growing at a CAGR of 9.0% during the review period. Fully considering the economic impact of this health crisis, the Software segment (by Type), which accounted for a share of the global Healthcare Prescriptive Analytics market in 2021, is projected to reach USD million by 2028, growing at a revised CAGR in the post-COVID-19 period. By Application, Clinical Data Analytics was the leading segment, accounting for over percent of market share in 2021, and is expected to grow at an altered CAGR throughout the forecast period.
In the United States, the Healthcare Prescriptive Analytics market size is expected to grow from USD million in 2021 to USD million by 2028, at a CAGR of during the forecast period.
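
The 9.0% CAGR quoted above can be reproduced directly from the 2022 and 2028 market-size figures. The short Python sketch below is illustrative only; the variable names are chosen here for clarity and are not part of the report.

start_2022 = 8735.1    # global market size, USD million (2022 estimate from the report)
end_2028 = 14650.0     # global market size, USD million (2028 forecast from the report)
years = 2028 - 2022    # six-year forecast window

cagr = (end_2028 / start_2022) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # prints ~9.0%, matching the quoted growth rate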

Global Healthcare Prescriptive Analytics Scope:

Players, stakeholders, and other participants in the global Healthcare Prescriptive Analytics market will be able to gain the upper hand as they use the report as a powerful resource. The segmental analysis focuses on revenue and forecast by region (country), by Type, and by Application.

The major manufacturers covered in the Healthcare Prescriptive Analytics market report are:

● Allscripts
● Cerner
● IBM
● McKesson
● Medeanalytics
● Optum
● Oracle
● Microsoft
● SAS
● Alteryx
● FICO
● Tibco Software

Global Healthcare Prescriptive Analytics Market: Segment Analysis

The research report includes specific segments by region (country), by manufacturer, by Type, and by Application. The Type segment provides production data for the forecast period 2017 to 2028, and the Application segment provides consumption data for the same period. Understanding the segments helps in identifying the importance of the different factors that aid market growth.

The Healthcare Prescriptive Analytics market is segmented by Types:

● Software
● Hardware
● Other Services

The Healthcare Prescriptive Analytics market is segmented by Applications:

● Clinical Data Analytics
● Financial Data Analytics
● Administrative Data Analytics
● Research Data Analytics
● Others

Global Healthcare Prescriptive Analytics Market: Drivers and Restraints

The research report incorporates analysis of the different factors that augment the market’s growth. It identifies the trends, restraints, and drivers that transform the market in either a positive or negative manner. This section also covers the scope of the different segments and applications that may potentially influence the market in the future. The detailed information is based on current trends and historic milestones. This section also provides an analysis of production volume for the global market and for each type from 2017 to 2028, as well as production volume by region over the same period. Pricing analysis is included for each type from 2017 to 2028, by manufacturer from 2017 to 2022, by region from 2017 to 2022, and for the global market from 2017 to 2028.

A thorough evaluation of the restraints included in the report portrays the contrast to drivers and gives room for strategic planning. Factors that overshadow market growth are pivotal, as understanding them helps in devising strategies for capturing the lucrative opportunities present in the ever-growing market. Additionally, insights from market experts have been incorporated to understand the market better.

Geographical Segmentation:

Geographically, this report is segmented into several key regions, with sales, revenue, market share, and Healthcare Prescriptive Analytics market growth rate in these regions, from 2015 to 2028, covering

● North America (United States, Canada and Mexico)
● Europe (Germany, UK, France, Italy, Russia and Turkey etc.)
● Asia-Pacific (China, Japan, Korea, India, Australia, Indonesia, Thailand, Philippines, Malaysia, and Vietnam)
● South America (Brazil etc.)
● Middle East and Africa (Egypt and GCC Countries)

The study objectives of the Healthcare Prescriptive Analytics market report are:

● To analyze and study the Healthcare Prescriptive Analytics market sales, value, status, and forecast (2022-2028).
● To focus on the key Healthcare Prescriptive Analytics manufacturers and study their sales, value, market share, and development plans for the future.
● To focus on the global key manufacturers, and to define, describe, and analyze the market competition landscape, Healthcare Prescriptive Analytics market trends, and SWOT analysis.
● To define, describe, and forecast the market by type, application, and region.
● To analyze the market potential and advantage, opportunity and challenge, and restraints and risks of the global and key regions.
● To identify significant trends and factors driving or inhibiting market growth.
● To analyze the opportunities in the market for stakeholders by identifying the high-growth segments.
● To strategically analyze each submarket with respect to individual growth trends and their contribution to the market.
● To analyze competitive developments such as expansions, agreements, new product launches, and acquisitions in the market.
● To strategically profile the key players and comprehensively analyze their growth strategies.

The report helps answer the following questions:

● What is the current size of the Healthcare Prescriptive Analytics market in different regions?
● How is the Healthcare Prescriptive Analytics market divided into different product segments?
● How are the overall market and different product segments growing?
● How is the market predicted to develop in the future?
● What is the market potential compared to other countries?

Detailed TOC of Global Healthcare Prescriptive Analytics Market Outlook 2022

1 Healthcare Prescriptive Analytics Market Overview

1.1 Product Overview and Scope

1.2 Segment by Types

1.3 Segment by Application

1.4 Global Market Growth Prospects

1.4.1 Global Healthcare Prescriptive Analytics Revenue Estimates and Forecasts (2016-2027)

1.4.2 Global Production Capacity Estimates and Forecasts (2016-2027)

1.4.3 Global Production Estimates and Forecasts (2016-2027)

1.5 Global Healthcare Prescriptive Analytics Market Size by Region

1.5.1 Global Market Size Estimates and Forecasts by Region: 2016 VS 2021 VS 2027

1.5.2 North America Healthcare Prescriptive Analytics Estimates and Forecasts (2016-2027)

1.5.3 Europe Estimates and Forecasts (2016-2027)

1.5.4 China Estimates and Forecasts (2016-2027)

1.5.5 Japan Estimates and Forecasts (2016-2027)

2 Market Competition by Manufacturers

2.1 Global Production Capacity Market Share by Manufacturers (2017-2022)

2.2 Global Revenue Market Share by Manufacturers (2017-2022)

2.3 Market Share by Company Type (Tier 1, Tier 2 and Tier 3)

2.4 Global Healthcare Prescriptive Analytics Average Price by Manufacturers (2017-2022)

2.5 Manufacturers Healthcare Prescriptive Analytics Production Sites, Area Served, Product Types

2.6 Healthcare Prescriptive Analytics Market Competitive Situation and Trends

2.6.1 Healthcare Prescriptive Analytics Market Concentration Rate

2.6.2 Global 5 and 10 Largest Healthcare Prescriptive Analytics Players Market Share by Revenue

2.6.3 Mergers and Acquisitions, Expansion

3 Production and Capacity by Region

3.1 Global Production Capacity of Healthcare Prescriptive Analytics Market Share by Region (2017-2022)

3.2 Global Healthcare Prescriptive Analytics Revenue Market Share by Region (2017-2022)

3.3 Global Healthcare Prescriptive Analytics Production Capacity, Revenue, Price and Gross Margin (2017-2022)

3.4 North America Healthcare Prescriptive Analytics Production

3.4.1 North America Production Growth Rate (2017-2022)

3.4.2 North America Production Capacity, Revenue, Price and Gross Margin (2017-2022)

3.5 Europe Healthcare Prescriptive Analytics Production

3.5.1 Europe Production Growth Rate (2017-2022)

3.5.2 Europe Production Capacity, Revenue, Price and Gross Margin (2017-2022)

3.6 China Healthcare Prescriptive Analytics Production

3.6.1 China Production Growth Rate (2017-2022)

3.6.2 China Production Capacity, Revenue, Price and Gross Margin (2017-2022)

3.7 Japan Healthcare Prescriptive Analytics Production

3.7.1 Japan Healthcare Prescriptive Analytics Production Growth Rate (2017-2022)

3.7.2 Japan Healthcare Prescriptive Analytics Production Capacity, Revenue, Price and Gross Margin (2017-2022)

4 Global Healthcare Prescriptive Analytics Consumption by Region

4.1 Global Healthcare Prescriptive Analytics Consumption Market Share by Region

4.2 North America

4.3 Europe

4.4 Asia Pacific

4.5 Latin America

5 Healthcare Prescriptive Analytics Market Production, Revenue, Price Trend by Type

5.1 Global Production Market Share by Type (2017-2022)

5.2 Global Revenue Market Share by Type (2017-2022)

5.3 Global Price by Type (2017-2022)

6 Healthcare Prescriptive Analytics Market Consumption Analysis by Application

6.1 Global Consumption Market Share by Application (2017-2022)

6.2 Global Consumption Growth Rate by Application (2017-2022)

7 Key Companies Profiled

7.1 Manufacture 1

7.1.1 Manufacture 1 Healthcare Prescriptive Analytics Corporation Information

7.1.2 Manufacture 1 Healthcare Prescriptive Analytics Product Portfolio

7.1.3 Manufacture 1 Healthcare Prescriptive Analytics Production Capacity, Revenue, Price and Gross Margin (2017-2022)

7.1.4 Manufacture 1 Main Business and Markets Served

7.1.5 Manufacture 1 Recent Developments/Updates

7.2 Manufacture 2

7.2.1 Manufacture 2 Healthcare Prescriptive Analytics Corporation Information

7.2.2 Manufacture 2 Healthcare Prescriptive Analytics Product Portfolio

7.2.3 Manufacture 2 Healthcare Prescriptive Analytics Production Capacity, Revenue, Price and Gross Margin (2017-2022)

7.2.4 Manufacture 2 Main Business and Markets Served

7.2.5 Manufacture 2 Recent Developments/Updates

7.3 Manufacture 3

7.3.1 Manufacture 3 Healthcare Prescriptive Analytics Corporation Information

7.3.2 Manufacture 3 Healthcare Prescriptive Analytics Product Portfolio

7.3.3 Manufacture 3 Healthcare Prescriptive Analytics Production Capacity, Revenue, Price and Gross Margin (2017-2022)

7.3.4 Manufacture 3 Main Business and Markets Served

7.3.5 Manufacture 3 Recent Developments/Updates

8 Healthcare Prescriptive Analytics Manufacturing Cost Analysis

8.1 Healthcare Prescriptive Analytics Key Raw Materials Analysis

8.1.1 Key Raw Materials

8.1.2 Key Raw Materials Price Trend

8.1.3 Key Suppliers of Raw Materials

8.2 Proportion of Manufacturing Cost Structure

8.3 Manufacturing Process Analysis of Healthcare Prescriptive Analytics

8.4 Industrial Chain Analysis

9 Marketing Channel, Distributors and Customers

9.1 Marketing Channel

9.2 Distributors List

9.3 Customers

10 Market Dynamics

10.1 Healthcare Prescriptive Analytics Industry Trends

10.2 Healthcare Prescriptive Analytics Growth Drivers

10.3 Healthcare Prescriptive Analytics Market Challenges

10.4 Healthcare Prescriptive Analytics Market Restraints

11 Production and Supply Forecast

11.1 Global Forecasted Production of Healthcare Prescriptive Analytics by Region (2023-2028)

11.2 North America Production, Revenue Forecast (2023-2028)

11.3 Europe Production, Revenue Forecast (2023-2028)

11.4 China Production, Revenue Forecast (2023-2028)

11.5 Japan Production, Revenue Forecast (2023-2028)

12 Healthcare Prescriptive Analytics Market Consumption and Demand Forecast

12.1 Global Forecasted Demand Analysis of Healthcare Prescriptive Analytics

12.2 North America Forecasted Consumption by Country

12.3 Europe Market Forecasted Consumption by Country

12.4 Asia Pacific Market Forecasted Consumption by Region

12.5 Latin America Forecasted Consumption by Country

13 Forecast by Security Level and by Application (2023-2028)

13.1 Global Production, Revenue and Price Forecast by Security Level (2023-2028)

13.1.1 Global Forecasted Production by Security Level (2023-2028)

13.1.2 Global Forecasted Revenue by Security Level (2023-2028)

13.1.3 Global Forecasted Price by Security Level (2023-2028)

13.2 Global Forecasted Consumption by Application (2023-2028)

14 Research Finding and Conclusion

15 Methodology and Data Source

15.1 Methodology/Research Approach

15.1.1 Research Programs/Design

15.1.2 Market Size Estimation

15.1.3 Market Breakdown and Data Triangulation

15.2 Data Source

15.2.1 Secondary Sources

15.2.2 Primary Sources

15.3 Author List

15.4 Disclaimer

For Detailed TOC - https://www.absolutereports.com/TOC/20794598#TOC

Contact Us:

Absolute Reports

Phone: US +1 424 253 0807

UK +44 203 239 8187

Email: sales@absolutereports.com

Web: https://www.absolutereports.com

Press Release Distributed by The Express Wire

To view the original version on The Express Wire visit Healthcare Prescriptive Analytics Market Size and Growth 2022 Research Analysis by Volume, Price, Recent Development and Forecast to 2028

Source: https://www.marketwatch.com/press-release/healthcare-prescriptive-analytics-market-size-and-growth-2022-research-analysis-by-volume-price-recent-development-and-forecast-to-2028-2022-06-24 (published Thu, 23 Jun 2022)
Killexams : Radware Ltd.: Best Near-Term InfoTech Stock Buy Now
Business on Wall Street in Manhattan

Pgiam/iStock via Getty Images

Investment Thesis

Careful comparisons urge a buy of Radware Ltd. (NASDAQ:RDWR).

21st Century paces of change in technology and rational behavior (not of emotional reactions) seriously disrupt the accepted productive investment strategy of the 20th century.

One required change is the shortening of forecast horizons, with a shift from the multi-year passive approach of buy-and-hold to the active strategy of specific price-change target achievement or time-limit actions, with reinvestment set to new nearer-term targets.

That change avoids the irretrievable loss of invested time spent destructively through failure to recognize shifting evolution, as in the cases of IBM, Kodak, GM, Xerox, GE, and many others.

It recognizes the evolutions in medical, communication, and information technologies and enjoys their operational benefits already present in extended lifetimes, trade-commission-free investments, and coming in transportation ownership and energy usage.

But it requires the ability to make valid direct comparisons of value between investment reward prospects and risk exposures in the uncertain future. Since uncertainty expands as the future dimension increases, shorter forecast horizons are a means of improving the reward-to-risk comparison.

That shortening is now best applied at the investment entry point, using Market-Maker ("MM") expectations for coming prices. When a price target is reached, the proceeds are redeployed at the exit/reinvestment point under a new set of near-term expectations, so the comparisons guiding the next decision are made on the same short horizon.

The MM's constant presence, extensive global communications, and human resources dedicated to monitoring industry-focused competitive evolution sharpen MM price expectations, essential to their risk-avoidance roles.

Those roles require that firm capital be only temporarily risk-exposed, so it is hedged with derivative-securities deals against undesired price changes. The deals' prices and contracts provide a window of sorts into MM price expectations.

Information technology via the Internet makes investment monitoring and management efficient in time and attention, despite the increased frequency of decisions.

Once an investment choice is made and buy-transaction confirmation is received, a GTC sell order for the confirmed number of shares at the target price or better should be placed. Entering trades through the Internet on your laptop, desktop, or cell phone should avoid trade-commission charges. Your broker's internal system should keep you informed of your account's progress.

Keep your own private calendar record of the date 63 market days (about 91 calendar days) beyond the trade's confirmation date as a time-limit alert. If the GTC order has not been executed by then, start your exit and reinvestment decision process.

We find the 3-month time limit to be a good choice, though it may be extended somewhat if desired. Beyond 5-6 months, added time starts to work against the process and is not recommended.
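
For readers who want to automate that calendar alert, the date arithmetic is simple. Below is a minimal Python sketch, purely illustrative and not part of any broker tool; it assumes weekdays approximate market days, ignores exchange holidays, and uses a hypothetical confirmation date.

    # Minimal sketch: compute the holding-period time limit from a trade confirmation date.
    # Assumption: weekdays approximate "market days"; exchange holidays are ignored.
    from datetime import date, timedelta

    def calendar_limit(confirmation, days=91):
        # Time limit measured in calendar days.
        return confirmation + timedelta(days=days)

    def market_day_limit(confirmation, market_days=63):
        # Approximate time limit measured in market (week)days.
        d, counted = confirmation, 0
        while counted < market_days:
            d += timedelta(days=1)
            if d.weekday() < 5:  # Monday=0 ... Friday=4
                counted += 1
        return d

    confirmed = date(2022, 7, 11)          # hypothetical confirmation date
    print(calendar_limit(confirmed))       # 2022-10-10
    print(market_day_limit(confirmed))     # 2022-10-06, holidays ignored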

For investments guided by this article or others by me, the target price will always be the high price in the MM forecast range.

Description of Equity Subject Company

"Radware Ltd., together with its subsidiaries, develops, manufactures, and markets cyber security and application delivery solutions for applications in cloud, physical, and software-defined data centers worldwide. The company sells its products primarily to independent distributors, including value-added resellers, original equipment manufacturers, and system integrators. Radware Ltd. was founded in 1996 and is headquartered in Tel Aviv, Israel."

Source: Yahoo Finance

Radware analyst estimates

Yahoo Finance

These growth estimates have been made by and are collected from Wall Street analysts to suggest what conventional methodology currently produces. The typical variations across forecast horizons of different time periods illustrate the difficulty of making value comparisons when the forecast horizon is not clearly defined.

Risk and Reward Balances Among RDWR Competitors

Figure 1

Risk and Reward Balances Among RDWR Competitors

blockdesk.com

(used with permission)

The risk dimension is of actual price drawdowns at their most extreme point while being held in previous pursuit of upside rewards similar to the ones currently being seen. They are measured on the red vertical scale. Reward expectations are measured on the green horizontal scale.

Both scales are of percent change from zero to 25%. Any stock or ETF whose present risk exposure exceeds its reward prospect will be above the dotted diagonal line. Capital-gain-attractive to-buy issues are in the directions down and to the right.

Our principal interest is in RDWR at location [6], just above the green area marking reward-to-risk trade-offs at a 5-to-1 ratio. A "market index" norm of reward~risk tradeoffs is offered by SPY at [3]. By this Figure 1 view, RDWR is the most appealing for wealth-building investors.

Comparing Competitive Features of Info-Technology Providers

The Figure 1 map provides a good visual comparison of the two most important aspects of every equity investment in the short term. There are other aspects of comparison which this map sometimes does not communicate well, particularly when general market perspectives like those of SPY are involved. Where questions of "how likely" are present, other comparative tables, like Figure 2, may be useful.

Yellow highlighting of the table's cells emphasizes factors important to securities valuations and the security most promising for near-term capital gain, RDWR, as ranked in column [R].

Figure 2

Risk and Reward Balances Among RDWR Competitors

blockdesk.com

(used with permission)

Why Do All This Math?

Figure 2's purpose is to attempt universally comparable answers, stock by stock, of a) How BIG the prospective price gain payoff may be, b) how LIKELY the payoff will be a profitable experience, c) how SOON it may happen, and d) what price drawdown RISK may be encountered during its active holding period.

Readers familiar with our analysis methods after quick examination of Figure 2 may wish to skip to the next section viewing price range forecast trends for RDWR.

Column headers for Figure 2 define investment-choice preference elements for each row stock whose symbol appears at the left in column [A]. The elements are derived or calculated separately for each stock, based on the specifics of its situation and current-day MM price-range forecasts. Data in red numerals are negative, usually undesirable for "long" holding positions. Table cells with yellow fills hold data for the stocks of principal interest and for all issues at the ranking column, [R]. Fills of pink warn of conditions not constructive to buys.

The price-range forecast limits of columns [B] and [C] get defined by MM hedging actions to protect firm capital required to be put at risk of price changes from volume trade orders placed by big-$ "institutional" clients.

[E] measures potential upside risks for MM short positions created to fill such orders, and reward potentials for the buy-side positions so created. Prior forecasts like the present provide a history of relevant price draw-down risks for buyers. The most severe ones actually encountered are in [F], during holding periods in effort to reach [E] gains. Those are where buyers are emotionally most likely to accept losses.
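
To make those two measures concrete, here is a minimal sketch using hypothetical numbers (an assumed entry price, forecast high, and daily price path, not actual MM data): the upside corresponds to [E], and the deepest dip from the entry price during the holding period corresponds to [F].

    # Minimal sketch of upside [E] and worst-case drawdown [F]; all numbers are hypothetical.
    def upside_pct(entry, forecast_high):
        # [E]: percent gain from entry cost to the forecast high.
        return 100.0 * (forecast_high - entry) / entry

    def worst_drawdown_pct(entry, prices):
        # [F]: most negative percent excursion below entry during the holding period.
        return min(100.0 * (p - entry) / entry for p in prices)

    path = [18.36, 17.90, 17.10, 16.80, 17.50, 19.20, 21.00]   # hypothetical daily closes
    print(round(upside_pct(18.36, 21.70), 1))                  # -> 18.2
    print(round(worst_drawdown_pct(18.36, path), 1))           # -> -8.5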

The Range Index [G] tells where today's price lies relative to the MM community's forecast of upper and lower limits of coming prices. Its numeric is the percentage proportion of the full low to high forecast seen below the current market price.
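
Expressed as a formula, the Range Index is simply the portion of the forecast range lying below the current price. The sketch below uses a hypothetical forecast range chosen so the result lands at the RI of 6 discussed under Figure 3; the numbers are illustrative, not the actual RDWR forecast.

    # Minimal sketch of the Range Index [G] as described above; numbers are illustrative only.
    def range_index(price, low, high):
        # Percent of the low-to-high forecast range that lies below the current price.
        return 100.0 * (price - low) / (high - low)

    # Hypothetical forecast range $18.00-$24.00 with the stock at $18.36:
    print(round(range_index(18.36, 18.00, 24.00)))   # -> 6, i.e. 94% of the range is upside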

[H] tells what proportion of the [L] sample of prior like-balance forecasts have earned gains, by either having price reach its [B] target or being above its [D] entry cost at the end of a 3-month max-patience holding-period limit. [I] gives the net gains-losses of those [L] experiences.
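
A tally of such a sample could look like the sketch below. The profit rule (target [B] reached, or price above entry [D] at the 3-month limit) follows the description above; the records themselves are invented for illustration.

    # Minimal sketch of win odds [H] and average net payoff [I] over a sample of prior
    # like-RI forecast outcomes. Each record is (target_hit, pct_gain_at_exit); data invented.
    def sample_stats(outcomes):
        wins = sum(1 for hit, gain in outcomes if hit or gain > 0.0)
        h = 100.0 * wins / len(outcomes)                        # [H] win odds, in percent
        i = sum(gain for _, gain in outcomes) / len(outcomes)   # [I] average net % payoff
        return h, i

    sample = [(True, 12.4), (True, 9.8), (False, -3.1), (True, 15.0), (False, 2.2)]
    h, i = sample_stats(sample)
    print(round(h), round(i, 2))   # -> 80 7.26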

What makes RDWR most attractive in the group at this point in time is its ability to produce capital gains most consistently at its present operating balance between share-price risk and reward, as measured by the Range Index [G]. At an RI of 1, today's price is at the bottom of its forecast range, with all price expectations to the upside. These are not our expectations, but those of Market-Makers acting in support of Institutional Investment organizations as they build the values of their typical multi-billion-$ portfolios. Credibility of the [E] upside prospect, as evidenced in the [I] payoff at +18%, is shown in [N].

Further Reward~Risk tradeoffs involve using the [H] odds for gains, with the 100 - H loss odds as weights, applied to N-conditioned [E] and to [F], producing a combined-return score [Q]. The typical position holding period [J] applied to [Q] provides a figure-of-merit [fom] ranking measure [R] useful in portfolio position preferencing. Figure 2 is row-ranked on [R] among alternative candidate securities, with RDWR in top rank.
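
The article does not spell out the exact arithmetic behind [Q] and the figure of merit, so the sketch below is only one plausible interpretation under stated assumptions: the win odds [H] weight the upside, the loss odds weight the drawdown, and the result is scaled by the typical holding period [J] against a 252-market-day year. All inputs are hypothetical.

    # Minimal sketch of an odds-weighted combined score [Q] and a holding-period-scaled
    # figure of merit. The exact formulas are not given in the text; this is an assumption.
    def combined_score(h_pct, upside_pct, drawdown_pct):
        # [Q]: win-odds-weighted upside blended with loss-odds-weighted drawdown (negative).
        w = h_pct / 100.0
        return w * upside_pct + (1.0 - w) * drawdown_pct

    def figure_of_merit(q_pct, holding_days):
        # Scale [Q] by the typical holding period [J], expressed per 252-market-day year.
        return q_pct * (252.0 / holding_days)

    q = combined_score(h_pct=91.0, upside_pct=18.0, drawdown_pct=-9.0)   # hypothetical inputs
    print(round(q, 1), round(figure_of_merit(q, holding_days=40), 1))    # -> 15.6 98.1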

Along with the candidate-specific stocks, these selection considerations are provided for the averages of some 3,000 stocks for which MM price-range forecasts are available today, and 20 of the best-ranked (by fom) of those forecasts, as well as the forecast for S&P500 Index ETF (SPY) as an equity-market proxy.

Current-market index SPY is not competitive as an investment alternative. Its Range Index of 26 indicates 3/4ths of its forecast range is to the upside, but little more than half of previous SPY forecasts at this range index produced profitable outcomes.

As shown in column [T] of Figure 2, those levels vary significantly between stocks. What matters is the net gain between investment gains and losses actually achieved following the forecasts, shown in column [I]. The Win Odds of [H] tell what proportion of the sample RIs of each stock were profitable. Odds below 80% often have proven to lack reliability.

Recent Forecast Trends of the Primary Subject

Figure 3

RDWR stock forecast

blockdesk.com

RDWR has declined in price to a point where hedging actions show that coming-price expectations resist further declines, with a Range Index of only 6: the current price sits just 6% above the MM community's low forecast, leaving 94% of the range to the upside.

Past experiences at this level have produced profitable position outcomes in 10 out of every 11 opportunities, winners 91% of the time. The small image showing the frequency of RIs makes it clear that the bulk of the past 5 years of daily forecasts have been at higher price and price expectation levels.

Current comparison with other Information Technology competitors ranks RDWR at the top of the list of such alternative investment prospect candidates.

Conclusion

Radware Ltd. currently appears to be the best Information Technology competitor choice for investors seeking near-term capital-gain wealth-building.

Source: https://seekingalpha.com/article/4522535-radware-rdwr-best-near-term-infotech-stock-buy-now (published Sat, 09 Jul 2022)
Killexams : Without Class 12 Board Exams, Here's How DU, JNU, Jamia Will Admit UG Students

Class 12 board exams cancelled. Here's How DU, JNU, JMI will provide admission (representational)

New Delhi:

With the cancellation of Class 12 board exams by the Central Board of Secondary Education (CBSE) and other state and central boards, the next step for students is university admission. Some of the top universities in the country – University of Delhi (DU), Jawaharlal Nehru University (JNU) and Jamia Millia Islamia (JMI) – are likely to begin their undergraduate admission process in the upcoming months.

The cancellation of the board exams will not have much of an impact on JNU and Jamia Millia Islamia, since they conduct entrance tests, while Delhi University will wait for the CBSE Class 12 results to begin merit-based admissions.

DU-affiliated colleges admit students to most of the UG programmes on the basis of merit, while admission to some of the programmes is given on the basis of an entrance exam. Till last year, the National Testing Agency (NTA) had conducted the Delhi University Entrance Test (DUET). According to reports, the Central Universities Common Entrance Test (CUCET) may be used by 45 central universities, including Delhi University, for admission.

"There are strong chances that we might start registration by July 15 if all the boards cancel their exams," DU acting Vice-Chancellor Professor PC Joshi had told PTI.

"The admissions will be merit-based. The various boards are going to provide us some marks. Then there is the CUCET test for which we had already sent a proposal. The ministry has to take a call on whether it has to implement it or not and it will depend on the assessment of the COVID situation," Prof Joshi, who is also a member of the CUCET committee, told PTI.

Welcoming the Centre's decision to cancel Class 12 board exams, JNU Vice-Chancellor Mamidala Jagadesh Kumar had earlier said the decision is "pragmatic and rational" considering the fact that the Covid pandemic is a once-in-a-century occurrence.

"In most Higher Educational Institutes (HEIs) such as JNU, the admission to undergraduate programmes is through an entrance examination. We will conduct the entrance examination whenever it is safe for the students to write it," Prof Kumar told PTI.

If the entrance test is delayed due to the pandemic situation and if the admissions are done at a later date than usual, the university will adjust its academic calendar to make up for the lost time, Mr Kumar added.

JMI media coordinator and public relations officer Ahmad Azeem told PTI that admissions are done on the basis of the entrance test results.

"Under the CBSE evaluation criteria, marks will be awarded. If a student meets the eligibility criteria following the CBSE evaluation, they will be able to take the entrance exam," Mr Azeem said.

IP University Vice-Chancellor Mahesh Verma had said the cancellation of the board exams will not impact the admission process because admissions are done through the online Common Entrance Test (CET). The university has already started its admission process, with the introduction of five new courses.

"We just need the students to have passed their exams and that will make them eligible for the entrance exams…." Mr Verma told PTI.

After the CBSE board exam cancellation, Ambedkar University, which conducts admissions on the basis of merit, said it is a "timely and welcome decision by the government".

"It will help the university to complete the admission process in time and begin the next academic session timely. The CBSE will provide the results of Class 12. The process of admission in the undergraduate programmes will be merit-based, as has been done till the last academic session," it added.

Source: https://www.ndtv.com/education/class-12-board-exams-cancelled-du-admission-2021-jamia-jnu-ambedkar-university-ipu-latest-news (published Tue, 08 Jun 2021)