Daily updated M2140-648 Exam Questions are available at killexams.com

It is not easy to pass the M2140-648 test with course readings alone. You will eventually need killexams.com M2140-648 questions and answers to practice and to learn the tips and tricks used in the M2140-648 practice exam. Rehearse the techniques used in the actual test with the PDF dumps, and then you are ready for the real M2140-648 exam.

Exam Code: M2140-648 Practice test 2022 by Killexams.com team
IBM Rational IT Sales Mastery Test v2
Five Ways To Convince Your Customers To Try Something New

Wendy Chen is the CEO of Omnistream, a retail automation company helping retailers bring joy to consumers.

Every innovator, at some point, faces the same challenge. You’ve built a revolutionary mousetrap, but you need to convince people to actually take a chance on your product—and stop using whatever solution they’re currently using to keep the rodent population under control.

That’s a tough sell because, by definition, your new product is unproven. Even if you’ve been around a while and you have a clear record of success, and even if you can show how much ROI your product will generate on paper, customers quite reasonably worry about the potential for things to go wrong.

To drive things forward, it’s important to build your sales pipeline—and even your product itself—with your customers’ pain points in mind. Here are five ways to convince your customers to bet on innovation and take a chance on your product:

Understand The Friction

It isn’t enough to show your buyer that your product is better than the alternative. You need to understand and account for the friction that keeps them from wanting to make changes. That isn’t just conservatism—it’s a rational disinclination toward any sort of change.

Some industries, some companies and some product categories bring more inherent friction than others. It’s up to you to understand that and find ways to lubricate the wheels and create momentum for change.

Minimize The Risk

The biggest source of friction, of course, is the risk inherent in trying something new. If there’s a working product in place, then making any change brings a non-zero chance that things will stop working—and that usually ends with someone getting fired. Understandably, people in positions to make these decisions often prioritize minimizing risk rather than maximizing value, and it’s up to you to account for that fact.

One smart approach: Instead of trying to sell customers on a widespread rollout, offer to run a low-cost, low-risk pilot project. My company is a retail tech solutions vendor, and we often use pilot projects or small-scale tests with a handful of stores across one or two product categories to convince potential customers to try us out. We then measure the incremental growth and resulting store-level profitability of stores using our solutions against control stores.

Keep Costs Low

Nobody wants to spend money on unproven technology, and no matter how great your product, every customer will view it as unproven until they’ve seen it delivering consistent results for their specific use case. Finding creative ways to keep costs low, especially during the early stages, is vital.

Some SaaS companies now use consumption-based pricing, rather than regular monthly subscriptions, to reassure customers they’ll only pay for what they use. Others, like my company, peg our price to the increased performance we deliver. It's important to do everything necessary to make sure your retail clients succeed, so they know they’re always coming out ahead.

It’s also important to ensure your product plays nicely with legacy infrastructure and complements the customer’s existing investments: It doesn’t matter how great your product is if it requires your customer to completely rebuild their backend IT or POS systems. Simple integration into their existing core systems ensures speedy execution. Another great option is a modular offering, which allows customers to choose only the processes they want, ensuring full integration into their entire existing supply chain, retail planning and forecasting systems.

Help Your Advocates Communicate Your Value

As the saying goes, nobody gets fired for buying IBM. Your goal during the pilot project is to develop advocates for your product—people at all levels, from end users to the C-suite—who are willing to stick their necks out and say your product is worth implementing more broadly.

To do that, you need to ensure you’re delivering at all levels of the organization: change management support for the implementation team, a streamlined experience for users, real benefits (results) for their supervisors and clear metrics that document your product’s value and allow it to be easily communicated up the chain of command.

Make Your Pilot Scalable

Once you’ve secured buy-in for your product, you need to be able to communicate a clear strategy for scaling up the pilot and delivering broader value. This needs to be baked into the DNA of your pilot: If you’ve focused on a handful of stores for one to two product categories, for instance, then make it easy to add a couple more stores or categories—or quickly scale up and add entire regions.

For bonus points, make your product more valuable as it scales. You’ve shown your product works across a couple of locations—but can you offer additional learnings and customer insights as you bring more locations into your network? You’ll also need to show willingness to customize your product in order to serve your customers’ unique needs and edge cases and stay aligned with their own strategy for growth, so they’re motivated to lean into the relationship as they expand.

Enabling Innovation

We’re raised to view innovators as mavericks—people who think differently and change the world by the sheer force of their creativity and contrarianism. But the reality is that innovation is a team sport, and it’s only by convincing other people to join your mission that you’ll be able to win top-to-bottom buy-in and truly bring your product to scale. To succeed as B2B software innovators, we need to spend as much time thinking about how to turn our customers into innovators as we do on planning our own innovations.




Biden wants an industrial renaissance. He can’t do it without immigration reform.

JOHNSTOWN, Ohio — Just 15 minutes outside of downtown Columbus, the suburbs abruptly evaporate. Past a bizarre mix of soybean fields, sprawling office parks and lonely clapboard churches is a field where the Biden administration — with help from one of the world’s largest tech companies — hopes to turn the U.S. into a hub of microchip manufacturing.

In his State of the Union address in March, President Joe Biden called this 1,000-acre spread of corn stalks and farmhouses a “field of dreams.” Within three years, it will house two Intel-operated chip facilities together worth $20 billion — and Intel is promising to invest $80 billion more now that Washington has sweetened the deal with subsidies. It’s all part of a nationwide effort to head off another microchip shortage, shore up the free world’s advanced industrial base in the face of a rising China and claw back thousands of high-end manufacturing jobs from Asia.

But even as Biden signs into law more than $52 billion in “incentives” designed to lure chipmakers to the U.S., an unusual alliance of industry lobbyists, hard-core China hawks and science advocates says the president’s dream lacks a key ingredient — a small yet critical core of high-skilled workers. It’s a politically troubling irony: To achieve the long-sought goal of returning high-end manufacturing to the United States, the country must, paradoxically, attract more foreign workers.

“For high-tech industry in general — which of course, includes the chip industry — the workforce is a huge problem,” said Julia Phillips, a member of the National Science Board. “It’s almost a perfect storm.”

From electrical engineering to computer science, the U.S. currently does not produce enough doctorate and master’s degree holders in the science, technology, engineering and math fields who can go on to work in U.S.-based microchip plants. Decades of declining investment in STEM education mean the U.S. now produces fewer native-born recipients of advanced STEM degrees than most of its international rivals.

Foreign nationals, including many educated in the U.S., have traditionally filled that gap. But a bewildering and anachronistic immigration system, historic backlogs in visa processing and rising anti-immigrant sentiment have combined to choke off the flow of foreign STEM talent precisely when a fresh surge is needed.

Powerful members of both parties have diagnosed the problem and floated potential fixes. But they have so far been stymied by the politics of immigration, where a handful of lawmakers stand in the way of reforms few are willing to risk their careers to achieve. With a short window to attract global chip companies already starting to close, a growing chorus is warning Congress they’re running out of time.

“These semiconductor investments won’t pay off if Congress doesn’t fix the talent bottleneck,” said Jeremy Neufeld, a senior immigration fellow at the Institute for Progress think tank.

Given the hot-button nature of immigration fights, the chip industry has typically been hesitant to advocate directly for reform. But as they pump billions of dollars into U.S. projects and contemplate far more expensive plans, a sense of urgency is starting to outweigh that reluctance.

“We are seeing greater and greater numbers of our employees waiting longer and longer for green cards,” said David Shahoulian, Intel’s head of workforce policy. “At some point it will become even more difficult to attract and retain folks. That will be a problem for us; it will be a problem for the rest of the tech industry.”

“At some point, you’ll just see more offshoring of these types of positions,” Shahoulian said.

A Booming Technology

Microchips (often called “semiconductors” by wonkier types) aren’t anything new. Since the 1960s, scientists — working first for the U.S. government and later for private industry — have tacked transistors onto wafers of silicon or other semiconducting materials to produce computer circuits. What has changed is the power and ubiquity of these chips.

The number of transistors researchers can fit on a chip roughly doubles every two years, a phenomenon known as Moore’s Law. In recent years, that has led to absurdly powerful chips bristling with transistors — IBM’s latest chip packs them at two-nanometer intervals into a space roughly the size of a fingernail. Two nanometers is thinner than a strand of human DNA, or about how long a fingernail grows in two seconds.
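
To make that doubling rate concrete, here is a minimal sketch in Python; the starting transistor count is a made-up round number for illustration, not a figure from this article:

# Moore's Law as cited above: transistor counts roughly double every two years.
start = 1_000_000  # hypothetical transistor count in year 0 (illustrative)
for years in (2, 10, 20):
    count = start * 2 ** (years / 2)  # one doubling per two-year interval
    print(f"after {years:2d} years: ~{count:,.0f} transistors")
# after  2 years: ~2,000,000
# after 10 years: ~32,000,000
# after 20 years: ~1,024,000,000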

A rapid boost in processing power stuffed into ever-smaller packages led to the information technology boom of the 1990s. And things have only accelerated since — microchips remain the primary driver of advances in smartphones and missiles, but they’re also increasingly integrated into household appliances like toaster ovens, thermostats and toilets. Even the most inexpensive cars on the market now contain hundreds of microchips, and electric or luxury vehicles are loaded with thousands.

It all adds up to a commodity widely viewed as the bedrock of the new digital economy. Like fossil fuels before them, any country that controls the production of chips possesses key advantages on the global stage.

Until fairly recently, the U.S. was one of those countries. But while chips are still largely designed in America, its capacity to produce them has declined precipitously. Only 12 percent of the world’s microchip production takes place in the U.S., down from 37 percent in 1990. That percentage declines further when you exclude “legacy” chips with wider spaces between transistors — the vast majority of bleeding-edge chips are manufactured in Taiwan, and most factories not found on that island reside in Asian nations like South Korea, China and Japan.

For a long time, few in Washington worried about America’s flagging chip production. Manufacturing in the U.S. is expensive, and offshoring production to Asia while keeping R&D stateside was a good way to cut costs.

Two things changed that calculus: the Covid-19 pandemic and rising tensions between the U.S. and China.

Abrupt work stoppages sparked by viral spread in Asia sent shockwaves through finely tuned global supply chains. The flow of microchips ceased almost overnight, and then struggled to restart under new Covid surges and ill-timed extreme weather events. Combined with a spike in demand for microelectronics (sparked by generous government payouts to citizens stuck at home), the manufacturing stutter kicked off a chip shortage from which the world is still recovering.

Even before the pandemic, growing animosity between Washington and Beijing caused officials to question the wisdom of ceding chip production to Asia. China’s increasingly bellicose threats against Taiwan caused some to conjure up nightmare scenarios of an invasion or blockade that would sever the West from its supply of chips. The Chinese government was also pouring billions of dollars into a crash program to boost its own lackluster chip industry, prompting fears that America’s top foreign adversary could one day corner the market.

By 2020 the wheels had begun to turn on Capitol Hill. In January 2021, lawmakers passed as part of their annual defense bill the CHIPS for America Act, legislation authorizing federal payouts for chip manufacturers. But they then struggled to finance those subsidies. Although they quickly settled on more than $52 billion for chip manufacturing and research, lawmakers had trouble decoupling those sweeteners from sprawling anti-China “competitiveness” bills that stalled for over a year.

But those subsidies, as well as new tax credits for the chip industry, were finally sent to Biden’s desk in late July. Intel isn’t the only company that’s promised to supercharge U.S. projects once that money comes through — Samsung, for example, is suggesting it will expand its new $17 billion chip plant outside of Austin, Texas, to a nearly $200 billion investment. Lawmakers are already touting the subsidies as a key step toward an American renaissance in high-tech manufacturing.

Quietly, however, many of those same lawmakers — along with industry lobbyists and national security experts — fear all the chip subsidies in the world will fall flat without enough high-skilled STEM workers. And they accuse Congress of failing to seize multiple opportunities to address the problem.

STEM help wanted

In Columbus, just miles from the Johnstown field where Intel is breaking ground, most officials don’t mince words: The tech workers needed to staff two microchip factories, let alone eight, don’t exist in the region at the levels needed.

“We’re going to need a STEM workforce,” admitted Jon Husted, Ohio’s Republican lieutenant governor.

But Husted and others say they’re optimistic the network of higher ed institutions spread across Columbus — including Ohio State University and Columbus State Community College — can beef up the region’s workforce fast.

“I feel like we’re built for this,” said David Harrison, president of Columbus State Community College. He highlighted the repeated refrain from Intel officials that 70 percent of the 3,000 jobs needed to fill the first two factories will be “technician-level” jobs requiring two-year associate degrees. “These are our jobs,” Harrison said.

Harrison is anxious, however, over how quickly he and other leaders in higher ed are expected to convince thousands of students to sign up for the required STEM courses and join Intel after graduation. The first two factories are slated to be fully operational within three years, and will need significant numbers of workers well before then. He said his university still lacks the requisite infrastructure for instruction on chip manufacturing — “we’re missing some wafer processing, clean rooms, those kinds of things” — and explained that funding recently provided by Intel and the National Science Foundation won’t be enough. Columbus State will need more support from Washington.

“I don’t know that there’s a great Plan B right now,” said Harrison, adding that the new facilities will run into “the tens of millions.”

A lack of native STEM talent isn’t unique to the Columbus area. Across the country, particularly in regions where the chip industry is planning to relocate, officials are fretting over a perceived lack of skilled technicians. In February, Taiwan Semiconductor Manufacturing Company cited a shortage of skilled workers when announcing a six-month delay in the move-in date for its new plant in Arizona.

“Whether it’s a licensure program, a two-year program or a Ph.D., at all levels, there is a shortfall in high-tech STEM talent,” said Phillips. The NSB member highlighted the “missing millions of people that are not going into STEM fields — that basically are shut out, even beginning in K-12, because they’re not exposed in a way that attracts them to the field.”

Industry groups, like the National Association of Manufacturers, have long argued a two-pronged approach is necessary when it comes to staffing the high-tech sector: reevaluating immigration policy while also investing heavily in workforce development.

The abandoned House and Senate competitiveness bills both included provisions that would have enhanced federal support for STEM education and training. Among other things, the House bill would have expanded Pell Grant eligibility to students pursuing career-training programs.

“We have for decades incentivized degree attainment and not necessarily skills attainment,” said Robyn Boerstling, NAM’s vice president of infrastructure, innovation and human resources policy. “There are manufacturing jobs today that could be filled with six weeks of training, or six months, or six years; we need all of the above.”

But those provisions were scrapped, after Senate leadership decided a conference between the two chambers on the bills was too unwieldy to reach agreement before the August recess.

Katie Spiker, managing director of government affairs at National Skills Coalition, said the abandoned Pell Grant expansion shows Congress “has not responded to worker needs in the way that we need them to.” Amid criticisms that the existing workforce development system is unwieldy and ineffective, the decision to scrap new upgrades is a continuation of a trend of disinvesting in workers who hope to obtain the skills they need to meet employer demand.

“And it becomes an issue that only compounds itself over time,” Spiker said. “As technology changes, people need to change and evolve their skills.”

“If we’re not getting people skilled up now, then we won’t have people that are going to be able to evolve and skill up into the next generation of manufacturing that we’ll do five years from now.”

Congress finally sent the smaller Chips and Science Act — which includes the chip subsidies and tax credits, $200 million to develop a microchip workforce and a slate of R&D provisions — to the president’s desk in late July. The bill is expected to enhance the domestic STEM pool (at least on the margins). But it likely falls short of the generational investments many believe are needed.

“You could make some dent in it in six years,” said Phillips. “But if you really want to solve the problem, it’s closer to a 20-year investment. And the ability of this country to invest in anything for 20 years is not phenomenal.”

Immigration Arms Race

The microchip industry is in the midst of a global reshuffling that’s expected to last a better part of the decade — and the U.S. isn’t the only country rolling out the red carpet. Europe, Canada, Japan and other regions are also worried about their security, and preparing sweeteners for microchip firms to set up shop in their borders. Cobbling together an effective STEM workforce in a short time frame will be key to persuading companies to choose America instead.

That will be challenging at the technician level, which represents around 70 percent of workers in most microchip factories. But those jobs require only two-year degrees — and over a six-year period, it’s possible a sustained education and recruitment effort can produce enough STEM workers to at least keep the lights on.

It’s a different story entirely for Ph.D.s and master’s degrees, which take much longer to earn and which industry reps say make up a smaller but crucial component of a factory’s workforce.

Gabriela González, Intel’s head of global STEM research, policy and initiatives, said about 15 percent of factory workers must have doctorates or master’s degrees in fields such as material and electrical engineering, computer science, physics and chemistry. Students coming out of American universities with those degrees are largely foreign nationals — and increasingly, they’re graduating without an immigration status that lets them work in the U.S., and with no clear pathway to achieving that status.

A National Science Board estimate from earlier this year shows a steadily rising proportion of foreign-born students with advanced STEM skills. That’s especially true for degrees crucial to the chip industry — nearly 60 percent of computer science Ph.D.s are foreign born, as are more than 50 percent of engineering doctorates.

“We are absolutely reliant on being able to hire foreign nationals to fill those needs,” said Intel’s Shahoulian. Like many in the chip industry, Shahoulian contends there simply aren’t enough high-skilled STEM professionals with legal status to simultaneously serve America’s existing tech giants and an influx of microchip firms.

Some academics, such as Howard University’s Ron Hira, suggest the shortage of workers with STEM degrees is overblown, and industry simply seeks to import cheaper, foreign-born labor. But that view contrasts with those held by policymakers on Capitol Hill or people in the scientific and research communities. In a report published in late July by the Government Accountability Office, all 17 of the experts surveyed agreed the lack of a high-skilled STEM workforce was a barrier to new microchip projects in the U.S. — and most said some type of immigration reform would be needed.

Many, if not most, of the foreign nationals earning advanced STEM degrees from U.S. universities would prefer to stay and work in the country. But America’s immigration system is turning away these workers in record numbers — and at the worst possible time.

Ravi (not his real name, given his tenuous immigration status) is an Indian national. Nearly three years ago, he graduated from a STEM master’s program at a prestigious eastern university before moving to California to work as a design verification lead at an international chip company. He’s applied three times for an H-1B visa, a high-skilled immigration program used extensively by U.S. tech companies. But those visas are apportioned via a lottery, and Ravi lost each time. His current visa only allows him to work through the end of the year — so Ravi is giving up and moving to Canada, where he’s agreed to take a job with another chip company. Given his skill set, he expects to quickly receive permanent legal status.

“The application process is incredibly simple there,” said Ravi, noting that Canadian officials were apologetic over their brief 12-week processing time (they’re swamped by refugee applications, he said).

If given the choice, Ravi said he would’ve probably stayed in California. But his story now serves as a cautionary tale for his younger brother back home. “Once he sort of completed his undergrad back in India, he did mention that he is looking at more immigration-friendly countries,” Ravi said. “He’s giving Canada more thought, at this point, than the United States.”

Ravi’s story is far from unique, particularly for Indian nationals. The U.S. imposes annual per-country caps on green cards — and between a yearly crush of applicants and a persistent processing backlog, Indians (regardless of their education or skill level) can expect to wait as long as 80 years for permanent legal status. A report released earlier this year by the libertarian Cato Institute found more than 1.4 million skilled immigrants are now stuck in green card backlogs, just a slight drop from 2020’s all-time high of more than 1.5 million.

The third rail of U.S. politics

The chip industry has shared its anxiety over America’s slipping STEM workforce with Washington, repeatedly asking Congress to make it easier for high-skilled talent to stay. But unlike their lobbying for subsidies and tax breaks — which has gotten downright pushy at times — they’ve done so very quietly. While chip lobbyists have spent months telling anyone who will listen why the $52 billion in financial incentives are a “strategic imperative,” they’ve only recently been willing to discuss their immigration concerns on the record.

In late July, nine major chip companies planned to send an open letter to congressional leadership warning that the shortage of high-skilled STEM workers “has truly never been more acute” and urging lawmakers to “enact much-needed green card reforms.” But the letter was pulled at the last minute, after some companies worried about wading into a tense immigration debate at the wrong time.

Leaders in the national security community have been less shy. In May, more than four dozen former officials sent a letter to congressional leadership urging them to shore up America’s slipping immigration edge before Chinese technology leapfrogs ours. “With the world’s best STEM talent on its side, it will be very hard for America to lose,” they wrote. “Without it, it will be very hard for America to win.”

The former officials exhorted lawmakers to take up and pass provisions in the House competitiveness bill that would’ve lifted green card caps for foreign nationals with STEM Ph.D.s or master’s degrees. It’d be a relatively small number of people — a February study from Georgetown University’s Center for Security and Emerging Technology suggested the chip industry would only need around 3,500 foreign-born workers to effectively staff new U.S.-based factories.

“This is such a small pool of people that there’s already an artificial cap on it,” said Klon Kitchen, a senior fellow focused on technology and national security at the conservative American Enterprise Institute.

Kitchen suggested the Republican Party’s wariness toward immigration shouldn’t apply to these high-skilled workers, and some elected Republicans agree. Sen. John Cornyn, whose state of Texas is poised to gain from the expansion of chip plants outside Austin, took up the torch — and almost immediately got burned.

Sen. Chuck Grassley, Iowa’s senior Republican senator, blocked repeated attempts by Cornyn, Democrats and others to include the green card provision in the final competitiveness package. Finding relief for a small slice of the immigrant community, Grassley reasoned, “weakens the possibility to get comprehensive immigration reform down the road.” He refused to budge even after Biden administration officials warned him of the national security consequences in a classified June 16 briefing, which was convened specifically for him. The effort has been left for dead (though a push to shoehorn a related provision into the year-end defense bill is ongoing).

Many of Grassley’s erstwhile allies are frustrated with his approach. “We’ve been talking about comprehensive immigration reform for how many decades?” asked Kitchen, who said he’s “not inclined” to let America’s security concerns “tread water in the background” while Congress does nothing to advance broader immigration bills.

Most Republicans in Congress agree with Kitchen. But so far it’s Cornyn, not Grassley, who’s paid a price. After helping broker a deal on gun control legislation in June, Cornyn was attacked by Breitbart and others on his party’s right flank for telling a Democratic colleague immigration would be next.

“Immigration is one of the most contentious issues here in Congress, and we’ve shown ourselves completely incapable of dealing with it on a rational basis,” Cornyn said in July. The senator said he’d largely given up on persuading Grassley to abandon his opposition to new STEM immigration provisions. “I would love to have a conversation about merit-based immigration,” Cornyn said. “But I don’t think, under the current circumstances, that’s possible.”

Cornyn blamed that in part on the far right’s reflexive outrage to any easing of immigration restrictions. “Just about anything you say or do will get you in trouble around here these days,” he said.

Given that reality, few Republicans are willing to stick their necks out on the issue.

“If you look at the messaging coming out of [the National Republican Senatorial Committee] or [the Republican Attorneys General Association], it’s all ‘border, border, border,’” said Rebecca Shi, executive director of the American Business Immigration Coalition. Shi said even moderate Republicans hesitate to publicly advance arguments “championing these sensible visas for Ph.D. STEM talents for integrated circuits for semiconductors.”

“They’re like … ‘I can’t say those phrases until after the elections,’” Shi said.

That skittishness extends to state-level officials — Ohio’s Husted spent some time expounding on the benefits of “bringing talented people here to do the work in America, rather than having companies leave America to have it done somewhere else.” He suggested that boosting STEM immigration would be key to Intel’s success in his state. But when asked whether he’s taken that message to Ohio’s congressional delegation — after all, he said he’d been pestering them to pass the chip subsidies — Husted hedged.

“My job is to do all I can for the people of the state of Ohio. There are other people whose job it is to message those other things,” Husted said. “But if asked, you heard what my answer is.”

Of course, Republicans also pin some of the blame on Democrats. “The administration ignores the fire at the border and the chaos there, which makes it very hard to have a conversation about controlling immigration flows,” Cornyn said.

And while Democratic lawmakers reject that specific concern, some admit their side hasn’t prioritized STEM immigration as it should.

“Neither team has completely clean hands,” said Sen. Mark Warner, the chair of the Senate Intelligence Committee. Warner noted that Democrats have also sought to hold back STEM immigration fixes as “part of a sweetener” so that business-friendly Republicans would in turn back pathways to citizenship for undocumented immigrants. He also dinged the chip companies, claiming the issue is “not always as straightforward” as the industry would like to frame it and that tech companies sometimes hope to pay less for foreign-born talent.

But Warner still supports the effort to lift green card caps for STEM workers. “Without that high-skilled immigration, it’s not like those jobs are going to disappear,” he said. “They’re just gonna move to another country.”

And despite their rhetoric, it’s hard to deny that congressional Republicans are largely responsible for continued inaction on high-skilled immigration — even as their allies in the national security space become increasingly insistent.

Stuck on STEM immigration

Though they’ve had to shrink their ambitions, lawmakers working to lift green card caps for STEM immigrants haven’t given up. A jurisdictional squabble between committees in July prevented advocates from including in the House’s year-end defense bill a provision that would’ve nixed the caps for Ph.D.s in “critical” STEM fields. They’re now hoping to shoehorn the provision into the Senate’s defense bill instead, and have tapped Republican Sen. Thom Tillis of North Carolina as their champion in the upper chamber.

But Tillis is already facing pushback from the right. And despite widespread support, few truly believe there’s enough momentum to overcome Grassley and a handful of other lawmakers willing to block any action.

“Most members on both sides recognize that this is a problem they need to resolve,” said Intel’s Shahoulian. “They’re just not at a point yet where they’re willing to compromise and take the political hits that come with it.”

The global chip industry is moving in the meantime. While most companies are still planning to set up shop in the U.S. regardless of what happens with STEM immigration, Shahoulian said inaction on that front will inevitably limit the scale of investments by Intel and other firms.

“You’re already seeing that dynamic playing out,” he said. “You’re seeing companies set up offices in Canada, set up offices elsewhere, move R&D work elsewhere in the world, because it is easier to retain talent elsewhere than it is here.”

“This is an issue that will progressively get worse,” Shahoulian said. “It’s not like there will be some drop-dead deadline. But yeah, it’s getting difficult.”

Intel is still plowing ahead in Johnstown — backhoes are churning up dirt, farmers have been bought out of homes owned by their families for generations and the extensive water and electric infrastructure required for eight chip factories is being laid. Whether those bets will pay off in the long-term may rest on Congress’ ability to thread the needle on STEM immigration. And there’s little optimism at the moment.

Sen. Maria Cantwell, the chair of the Senate Commerce Committee, said she sometimes wishes she could “shake everybody and tell them to wake up.” But she believes economic and geopolitical realities will force Congress to open the door to high-skilled foreign workers — eventually.

“I think the question is whether you do that now or in 10 years,” Cantwell said. “And you’ll be damn sorry if you wait for 10 years.”

Windows & .NET Watch: When the market speaks

Apple has a greater market capitalization than Microsoft. Market capitalization is basically a function of investors’ collective estimate of future profits. The market is seeing a greater future for Apple than Microsoft. Rational, or reality distortion field?
In 2009, Microsoft made US$14.6 billion in profit on $58.4 billion in revenue. Apple made $8.2 billion on $42.9 billion in sales. That’s 36% more sales than Apple and a profit margin of 25% as opposed to Apple’s 19%.

To reach Microsoft’s profit level at its current margin, Apple would have required sales of $76.4 billion (an increase of 78%). For Microsoft to have made Apple’s profit, its revenue would have had to be “just” $32.8 billion.
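
Those figures follow from simple ratio arithmetic: required revenue is target profit divided by margin. A quick sketch in Python that reproduces the column’s numbers (the four inputs are the 2009 figures cited above; everything else is derived):

# 2009 figures cited above, in US$ billions.
msft_profit, msft_revenue = 14.6, 58.4
aapl_profit, aapl_revenue = 8.2, 42.9

msft_margin = msft_profit / msft_revenue   # ~25%
aapl_margin = aapl_profit / aapl_revenue   # ~19%

# Revenue Apple would need, at its own margin, to match Microsoft's profit:
aapl_needed = msft_profit / aapl_margin    # ~$76.4B, a ~78% increase
# Revenue Microsoft would need, at its margin, to match Apple's profit:
msft_needed = aapl_profit / msft_margin    # ~$32.8B

print(f"margins: MSFT {msft_margin:.0%}, AAPL {aapl_margin:.0%}")
print(f"Apple would need ${aapl_needed:.1f}B (+{aapl_needed / aapl_revenue - 1:.0%})")
print(f"Microsoft would need just ${msft_needed:.1f}B")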

Yet the price-to-earnings ratio shows that investors are willing to pay a lot more for a piece of Apple’s pie. As I write this, AAPL’s P:E is over 22 while MSFT’s is hovering under 14. In other words, owning a dollar of Apple’s yearly earnings will cost you $22. Microsoft’s P:E puts it in the same ballpark as IBM, Coca Cola and Walmart, while investors are betting that Apple’s revenues will increase dramatically. The price-to-earnings ratio, more than share price or even market capitalization, speaks to the different perceptions of the company by investors.

If not revenue increase, the other way to justify a higher P:E is to anticipate better profit margins, but it’s surely harder for Apple, whose revenue comes with manufacturing costs, to dramatically increase margins than for a software company such as Microsoft. (Google, which enjoys a similar P:E to Apple, has an even higher profit margin than Microsoft.) At these volumes, even significant revenue windfalls (such as the bump Microsoft will get this year from Windows 7 and Office 2010, and that Apple will get from the iPad’s launch frenzy) don’t justify such gaps.

All of which is to say that the market is betting that the game is going to change. Microsoft changed the world of general computing several times, but the last time they did so was in the game of office software. While Microsoft’s Internet and cloud initiatives have shown an admirable ability to “turn the ship,” its leadership in those fields, much less its dominance, is not a given. Apple, on the other hand: iPod, iPhone, iPad. Pretty good decade.

But to use the finance industry’s favorite weasel words: “Past performance is no guarantee of future results.” Just ask Microsoft. Or IBM. Or General Motors. The “Great Man” theory of history would attribute Apple’s ascendancy to the return of Steve Jobs and, increasingly, Microsoft’s stagnancy to the retirement of Bill Gates and the leadership of Steve Ballmer.

Recently, Jobs made it clear that a prototype of a touch-screen tablet preceded the creation of the iPhone. Apparently, when Jobs saw the finger-flick scrolling, he realized, “We can build a phone on this.” In retrospect, the decision to let iPhone and Touch pave the way for the iPad was a multibillion-dollar correct call. A $500 single-tasking netbook without a keyboard? Not a good bet five years ago.

Microsoft has only recently “transitioned” J Allard and Robbie Bach away from Microsoft’s Entertainment and Devices Division, the division most likely to spearhead the move beyond the desktop. The Xbox was until recent years a huge money pit, including an Xbox 360 recall estimated to have cost around a billion dollars. The 360 backed the wrong horse in the Blu-ray/HD-DVD race, and the Wii is generally considered the device that changed the console game (people like to move around and be with others while having fun? Who could have anticipated that?). The less said about Windows Mobile the better.

On the other hand, Xbox Live Arcade is arguably the service that proved the “app store” model of low-cost, impulse-bought download-only software, and, objectively, the Zune HD and Windows Phone 7 Series are at least competitive in terms of features, design and user experience. It’s not like the only talented people live in Silicon Valley.

Another theory of history would emphasize the sociological aspects: the internally competitive organization of Microsoft versus the Cult of Steve. Once, Microsoft was thought of as among the most meritocratic of tech companies; one doesn’t get that sense anymore. I’ve argued that Microsoft’s tight message control comes across as arrogant and alienating to the most important—and the most influential—group of developers: those who are engaged, enthusiastic and informed.

If you believe in the “Great Man” theory, that the hiring, firing or retirement of one or two exceptional individuals could change everything, then the gap in price-to-earnings ratios between Apple and Microsoft is irrational. If you believe in the sociological theory, that the ups and downs of product launches and keynotes and this year’s winners and losers are just a distraction from a sea change in the competitive landscape, then a large gap in ratios is rational (if arguable on the competitive merits). I know where I come down. What about you?

Larry O’Brien is a technology consultant, analyst and writer. Read his blog at www.knowing.net.

Did the Universe Just Happen?



The Atlantic Monthly | April 1988
 

I. Flying Solo


Ed Fredkin is scanning the visual field systematically. He checks the instrument panel regularly. He is cool, collected, in control. He is the optimally efficient pilot.

The plane is a Cessna Stationair Six—a six-passenger single-engine amphibious plane, the kind with the wheels recessed in pontoons. Fredkin bought it not long ago and is still working out a few kinks; right now he is taking it for a spin above the British Virgin Islands after some minor mechanical work.

He points down at several brown-green masses of land, embedded in a turquoise sea so clear that the shadows of yachts are distinctly visible on its sandy bottom. He singles out a small island with a good-sized villa and a swimming pool, and explains that the compound, and the island as well, belong to "the guy that owns Boy George"—the rock star's agent, or manager, or something.

I remark, loudly enough to overcome the engine noise, "It's nice."

Yes, Fredkin says, it's nice. He adds, "It's not as nice as my island."

He's joking, I guess, but he's right. Ed Fredkin's island, which soon comes into view, is bigger and prettier. It is about 125 acres, and the hill that constitutes its bulk is a deep green—a mixture of reeds and cacti, sea grape and turpentine trees, manchineel and frangipani. Its beaches range from prosaic to sublime, and the coral in the waters just offshore attracts little and big fish whose colors look as if they were coordinated by Alexander Julian. On the island's west side are immense rocks, suitable for careful climbing, and on the east side are a bar and restaurant and a modest hotel, which consists of three clapboard buildings, each with a few rooms. Between east and west is Fredkin's secluded island villa. All told, Moskito Island—or Drake's Anchorage, as the brochures call it—is a nice place for Fredkin to spend the few weeks of each year when he is not up in the Boston area tending his various other businesses.

In addition to being a self-made millionaire, Fredkin is a self-made intellectual. Twenty years ago, at the age of thirty-four, without so much as a bachelor's degree to his name, he became a full professor at the Massachusetts Institute of Technology. Though hired to teach computer science, and then selected to guide MIT's now eminent computer-science laboratory through some of its formative years, he soon branched out into more-offbeat things. Perhaps the most idiosyncratic of the courses he has taught is one on "digital physics," in which he propounded the most idiosyncratic of his several idiosyncratic theories. This theory is the reason I've come to Fredkin's island. It is one of those things that a person has to be prepared for. The preparer has to say, "Now, this is going to sound pretty weird, and in a way it is, but in a way it's not as weird as it sounds, and you'll see this once you understand it, but that may take a while, so in the meantime don't prejudge it, and don't casually dismiss it." Ed Fredkin thinks that the universe is a computer.

Fredkin works in a twilight zone of modern science—the interface of computer science and physics. Here two concepts that traditionally have ranked among science's most fundamental—matter and energy—keep bumping into a third: information. The exact relationship among the three is a question without a clear answer, a question vague enough, and basic enough, to have inspired a wide variety of opinions. Some scientists have settled for modest and sober answers. Information, they will tell you, is just one of many forms of matter and energy; it is embodied in things like a computer's electrons and a brain's neural firings, things like newsprint and radio waves, and that is that. Others talk in grander terms, suggesting that information deserves full equality with matter and energy, that it should join them in some sort of scientific trinity, that these three things are the main ingredients of reality.

Fredkin goes further still. According to his theory of digital physics, information is more fundamental than matter and energy. He believes that atoms, electrons, and quarks consist ultimately of bits—binary units of information, like those that are the currency of computation in a personal computer or a pocket calculator. And he believes that the behavior of those bits, and thus of the entire universe, is governed by a single programming rule. This rule, Fredkin says, is something fairly simple, something vastly less arcane than the mathematical constructs that conventional physicists use to explain the dynamics of physical reality. Yet through ceaseless repetition—by tirelessly taking information it has just transformed and transforming it further—it has generated pervasive complexity. Fredkin calls this rule, with discernible reverence, "the cause and prime mover of everything."

At the restaurant on Fredkin's island the food is prepared by a large man named Brutus and is humbly submitted to diners by men and women native to nearby islands. The restaurant is open-air, ventilated by a sea breeze that is warm during the day, cool at night, and almost always moist. Between the diners and the ocean is a knee-high stone wall, against which waves lap rhythmically. Beyond are other islands and a horizon typically blanketed by cottony clouds. Above is a thatched ceiling, concealing, if the truth be told, a sheet of corrugated steel. It is lunchtime now, and Fredkin is sitting in a cane-and-wicker chair across the table from me, wearing a light cotton sport shirt and gray swimming trunks. He was out trying to windsurf this morning, and he enjoyed only the marginal success that one would predict on the basis of his appearance. He is fairly tall and very thin, and has a softness about him—not effeminacy, but a gentleness of expression and manner—and the complexion of a scholar; even after a week on the island, his face doesn't vary much from white, except for his nose, which is red. The plastic frames of his glasses, in a modified aviator configuration, surround narrow eyes; there are times—early in the morning or right after a nap—when his eyes barely qualify as slits. His hair, perennially semi-combed, is black with a little gray.

Fredkin is a pleasant mealtime companion. He has much to say that is interesting, which is fortunate because generally he does most of the talking. He has little curiosity about other people's minds, unless their interests happen to coincide with his, which few people's do. "He's right above us," his wife, Joyce, once explained to me, holding her left hand just above her head, parallel to the ground. "Right here looking down. He's not looking down saying, 'I know more than you.' He's just going along his own way."

The food has not yet arrived, and Fredkin is passing the time by describing the world view into which his theory of digital physics fits. "There are three great philosophical questions," he begins. "What is life? What is consciousness and thinking and memory and all that? And how does the universe work?" He says that his "informational viewpoint" encompasses all three. Take life, for example. Deoxyribonucleic acid, the material of heredity, is "a good example of digitally encoded information," he says. "The information that implies what a creature or a plant is going to be is encoded; it has its representation in the DNA, right? Okay, now, there is a process that takes that information and transforms it into the creature, okay?" His point is that a mouse, for example, is "a big, complicated informational process."

Fredkin exudes rationality. His voice isn't quite as even and precise as Mr. Spock's, but it's close, and the parallels don't end there. He rarely displays emotion—except, perhaps, the slightest sign of irritation under the most trying circumstances. He has never seen a problem that didn't have a perfectly logical solution, and he believes strongly that intelligence can be mechanized without limit. More than ten years ago he founded the Fredkin Prize, a $100,000 award to be given to the creator of the first computer program that can beat a world chess champion. No one has won it yet, and Fredkin hopes to have the award raised to $1 million.

Fredkin is hardly alone in considering DNA a form of information, but this observation was less common back when he first made it. So too with many of his ideas. When his world view crystallized, a quarter of a century ago, he immediately saw dozens of large-scale implications, in fields ranging from physics to biology to psychology. A number of these have gained currency since then, and he considers this trend an ongoing substantiation of his entire outlook.

Fredkin talks some more and then recaps. "What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity life, DNA—you know, the biochemical functions—are controlled by a digital information process. Then, at another level, our thought processes are basically information processing." That is not to say, he stresses, that everything is best viewed as information. "It's just like there's mathematics and all these other things, but not everything is best viewed from a mathematical viewpoint. So what's being said is not that this comes along and replaces everything. It's one more avenue of modeling reality, and it happens to cover the sort of three biggest philosophical mysteries. So it sort of completes the picture."

Among the scientists who don't dismiss Fredkin's theory of digital physics out of hand is Marvin Minsky, a computer scientist and polymath at MIT, whose renown approaches cultic proportions in some circles. Minsky calls Fredkin "Einstein-like" in his ability to find deep principles through simple intellectual excursions. If it is true that most physicists think Fredkin is off the wall, Minsky told me, it is also true that "most physicists are the ones who don't invent new theories"; they go about their work with tunnel vision, never questioning the dogma of the day. When it comes to the kind of basic reformulation of thought proposed by Fredkin, "there's no point in talking to anyone but a Feynman or an Einstein or a Pauli," Minsky says. "The rest are just Republicans and Democrats." I talked with Richard Feynman, a Nobel laureate at the California Institute of Technology, before his death, in February. Feynman considered Fredkin a brilliant and consistently original, though sometimes incautious, thinker. If anyone is going to come up with a new and fruitful way of looking at physics, Feynman said, Fredkin will.

Notwithstanding their moral support, though, neither Feynman nor Minsky was ever convinced that the universe is a computer. They were endorsing Fredkin's mind, not this particular manifestation of it. When it comes to digital physics, Ed Fredkin is flying solo.

He knows that, and he regrets that his ideas continue to lack the support of his colleagues. But his self-confidence is unshaken. You see, Fredkin has had an odd childhood, and an odd education, and an odd career, all of which, he explains, have endowed him with an odd perspective, from which the essential nature of the universe happens to be clearly visible. "I feel like I'm the only person with eyes in a world where everyone's blind," he says.

II. A Finely Mottled Universe


The prime mover of everything, the single principle that governs the universe, lies somewhere within a class of computer programs known as cellular automata, according to Fredkin.

The cellular automaton was invented in the early 1950s by John von Neumann, one of the architects of computer science and a seminal thinker in several other fields. Von Neumann (who was stimulated in this and other inquiries by the ideas of the mathematician Stanislaw Ulam) saw cellular automata as a way to study reproduction abstractly, but the word cellular is not meant biologically when used in this context. It refers, rather, to adjacent spaces—cells—that together form a pattern. These days the cells typically appear on a computer screen, though von Neumann, lacking this convenience, rendered them on paper.

In some respects cellular automata resemble those splendid graphic displays produced by patriotic masses in authoritarian societies and by avid football fans at American universities. Holding up large colored cards on cue, they can collectively generate a portrait of, say, Lenin, Mao Zedong, or a University of Southern California Trojan. More impressive still, one portrait can fade out and another crystallize in no time at all. Again and again one frozen frame melts into another. It is a spectacular feat of precision and planning.

But suppose there were no planning. Suppose that instead of arranging a succession of cards to display, everyone learned a single rule for repeatedly determining which card was called for next. This rule might assume any of a number of forms. For example, in a crowd where all cards were either blue or white, each card holder could be instructed to look at his own card and the cards of his four nearest neighbors—to his front, back, left, and right—and do what the majority did during the last frame. (This five-cell group is known as the von Neumann neighborhood.) Alternatively, each card holder could be instructed to do the opposite of what the majority did. In either event the result would be a series not of predetermined portraits but of more abstract, unpredicted patterns. If, by prior agreement, we began with a USC Trojan, its white face might dissolve into a sea of blue, as whitecaps drifted aimlessly across the stadium. Conversely, an ocean of randomness could yield islands of structure—not a Trojan, perhaps, but at least something that didn't look entirely accidental. It all depends on the original pattern of cells and the rule used to transform it incrementally.

This leaves room for abundant variety. There are many ways to define a neighborhood, and for any given neighborhood there are many possible rules, most of them more complicated than blind conformity or implacable nonconformity. Each cell may, for instance, not only count cells in the vicinity but also pay attention to which particular cells are doing what. All told, the number of possible rules is an exponential function of the number of cells in the neighborhood; the von Neumann neighborhood alone has 2^32, or around 4 billion, possible rules, and the nine-cell neighborhood that results from adding corner cells offers 2^512, or roughly 1 with 154 zeros after it, possibilities. But whatever neighborhoods, and whatever rules, are programmed into a computer, two things are always true of cellular automata: all cells use the same rule to determine future behavior by reference to the past behavior of neighbors, and all cells obey the rule simultaneously, time after time.
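
The counting works like this: with two cell states, a five-cell neighborhood can be in 2^5 = 32 distinct configurations, and a rule is simply a table assigning on or off to each configuration, which gives 2^32 possible tables. A short sketch of that arithmetic in Python:

# Counting possible cellular-automaton rules, following the text's arithmetic.
states = 2                               # each cell is either on or off
for cells in (5, 9):                     # von Neumann neighborhood, then nine-cell
    configurations = states ** cells     # distinct neighborhood patterns
    rules = states ** configurations     # each pattern independently maps to on/off
    print(f"{cells}-cell neighborhood: {configurations} patterns, "
          f"~{float(rules):.2e} possible rules ({len(str(rules))} digits)")
# 5-cell: 32 patterns, 2**32, around 4.3 billion rules
# 9-cell: 512 patterns, 2**512, roughly 1 with 154 zeros after it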

In the late 1950s, shortly after becoming acquainted with cellular automata, Fredkin began playing around with rules, selecting the powerful and interesting and discarding the weak and bland. He found, for example, that any rule requiring all four of a cell's immediate neighbors to be lit up in order for the cell itself to be lit up at the next moment would not provide sustained entertainment; a single "off" cell would proliferate until darkness covered the computer screen. But equally simple rules could create great complexity. The first such rule discovered by Fredkin dictated that a cell be on if an odd number of cells in its von Neumann neighborhood had been on, and off otherwise. After "seeding" a good, powerful rule with an irregular landscape of off and on cells, Fredkin could watch rich patterns bloom, some freezing upon maturity, some eventually dissipating, others locking into a cycle of growth and decay. A colleague, after watching one of Fredkin's rules in action, suggested that he sell the program to a designer of Persian rugs.
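
That first rule is precise enough to run: a cell is on at the next step exactly when an odd number of cells in its von Neumann neighborhood (itself plus its four orthogonal neighbors) are on now. Here is a minimal sketch in Python; the grid size, the random seed, and the wraparound edges are illustrative assumptions, not details from the article:

import random

def parity_step(grid):
    # One synchronous update of the parity rule: a cell is on next step iff
    # an odd number of cells in its von Neumann neighborhood (itself plus its
    # north, south, east, and west neighbors) were on. Edges wrap around.
    n = len(grid)
    return [[(grid[r][c]
              + grid[(r - 1) % n][c] + grid[(r + 1) % n][c]
              + grid[r][(c - 1) % n] + grid[r][(c + 1) % n]) % 2
             for c in range(n)]
            for r in range(n)]

# Seed an irregular landscape of off and on cells, as the text describes.
random.seed(1)
size = 16
grid = [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]

for step in range(4):
    print(f"step {step}")
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
    grid = parity_step(grid)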

Today new cellular-automaton rules are formulated and tested by the "information-mechanics group" founded by Fredkin at MIT's computer-science laboratory. The core of the group is an international duo of physicists, Tommaso Toffoli, of Italy, and Norman Margolus, of Canada. They differ in the degree to which they take Fredkin's theory of physics seriously, but both agree with him that there is value in exploring the relationship between computation and physics, and they have spent much time using cellular automata to simulate physical processes. In the basement of the computer-science laboratory is the CAM—the cellular automaton machine, designed by Toffoli and Margolus partly for that purpose. Its screen has 65,536 cells, each of which can assume any of four colors and can change color sixty times a second.

The CAM is an engrossing, potentially mesmerizing machine. Its four colors—the three primaries and black—intermix rapidly and intricately enough to form subtly shifting hues of almost any gradation; pretty waves of deep blue or red ebb and flow with fine fluidity and sometimes with rhythm, playing on the edge between chaos and order.

Guided by the right rule, the CAM can do a respectable imitation of pond water rippling outward circularly in deference to a descending pebble, or of bubbles forming at the bottom of a pot of boiling water, or of a snowflake blossoming from a seed of ice: step by step, a single "ice crystal" in the center of the screen unfolds into a full-fledged flake, a six-edged sheet of ice riddled symmetrically with dark pockets of mist. (It is easy to see how a cellular automaton can capture the principles thought to govern the growth of a snowflake: regions of vapor that find themselves in the vicinity of a budding snowflake freeze—unless so nearly enveloped by ice crystals that they cannot discharge enough heat to freeze.)
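The snowflake principle in that parenthesis can be played out on a toy lattice. The sketch below is a loose, square-grid stand-in (genuine snowflake models use a hexagonal lattice to get six-fold symmetry, and the grid size, seed, and freezing condition here are my own simplifying assumptions): a vapor cell freezes when ice touches it, unless it is so enveloped by ice that it cannot shed enough heat.

```python
def freeze_step(ice):
    # A vapor cell freezes iff exactly one of its four neighbors is ice;
    # with two or more frozen neighbors it is too enveloped to shed heat.
    # The square lattice and wrapped edges are simplifying assumptions.
    n = len(ice)
    new = [row[:] for row in ice]
    for r in range(n):
        for c in range(n):
            if not ice[r][c]:
                frozen = (ice[(r - 1) % n][c] + ice[(r + 1) % n][c]
                          + ice[r][(c - 1) % n] + ice[r][(c + 1) % n])
                if frozen == 1:
                    new[r][c] = 1
    return new

ice = [[0] * 21 for _ in range(21)]
ice[10][10] = 1                      # the seed crystal
for _ in range(8):
    ice = freeze_step(ice)           # the flake unfolds step by step
```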

These exercises are fun to watch, and they give one a sense of the cellular automaton's power, but Fredkin is not particularly interested in them. After all, a snowflake is not, at the visible level, literally a cellular automaton; an ice crystal is not a single, indivisible bit of information, like the cell that portrays it. Fredkin believes that automata will more faithfully mirror reality as they are applied to its more fundamental levels and the rules needed to model the motion of molecules, atoms, electrons, and quarks are uncovered. And he believes that at the most fundamental level (whatever that turns out to be) the automaton will describe the physical world with perfect precision, because at that level the universe is a cellular automaton, in three dimensions—a crystalline lattice of interacting logic units, each one "deciding" zillions of times a second what it will be at the next point in time. The information thus produced, Fredkin says, is the fabric of reality, the stuff of which matter and energy are made. An electron, in Fredkin's universe, is nothing more than a pattern of information, and an orbiting electron is nothing more than that pattern moving. Indeed, even this motion is in some sense illusory: the bits of information that constitute the pattern never move, any more than football fans would change places to slide a USC Trojan four seats to the left. Each bit stays put and confines its activity to blinking on and off. "You see, I don't believe that there are objects like electrons and photons, and things which are themselves and nothing else," Fredkin says. "What I believe is that there's an information process, and the bits, when they're in certain configurations, behave like the thing we call the electron, or the hydrogen atom, or whatever."

THE READER MAY NOW HAVE A NUMBER OF questions that unless satisfactorily answered will lead to something approaching contempt for Fredkin's thinking. One such question concerns the way cellular automata chop space and time into little bits. Most conventional theories of physics reflect the intuition that reality is continuous—that one "point" in time is no such thing but, rather, flows seamlessly into the next, and that space, similarly, doesn't come in little chunks but is perfectly smooth. Fredkin's theory implies that both space and time have a graininess to them, and that the grains cannot be chopped up into smaller grains; that people and dogs and trees and oceans, at rock bottom, are more like mosaics than like paintings; and that time's essence is better captured by a digital watch than by a grandfather clock.

The obvious question is, Why do space and time seem continuous if they are not? The obvious answer is, The cubes of space and points of time are very, very small: time seems continuous in just the way that movies seem to move when in fact they are a rapid succession of still frames, and the illusion of spatial continuity is akin to the emergence of smooth shades from the finely mottled texture of a newspaper photograph.

The obvious answer, Fredkin says, is not the whole answer; the illusion of continuity is yet more deeply ingrained in our situation. Even if the ticks on the universal clock were, in some absolute sense, very slow, time would still seem continuous to us, since our perception, itself proceeding in the same ticks, would be no more finely grained than the processes being perceived. So too with spatial perception: Can eyes composed of the smallest units in existence perceive those units? Could any informational process sense its ultimate constituents? The point is that the basic units of time and space in Fredkin's reality don't just happen to be imperceptibly small. As long as the creatures doing the perceiving are in that reality, the units have to be imperceptibly small.

Though some may find this discreteness hard to comprehend, Fredkin finds a grainy reality more sensible than a smooth one. If reality is truly continuous, as most physicists now believe it is, then there must be quantities that cannot be expressed with a finite number of digits; the number representing the strength of an electromagnetic field, for example, could begin 5.23429847 and go on forever without falling into a pattern of repetition. That seems strange to Fredkin: wouldn't you eventually get to a point, around the hundredth, or thousandth, or millionth decimal place, where you had hit the strength of the field right on the nose? Indeed, wouldn't you expect that every physical quantity has an exactness about it? Well, you might and might not. But Fredkin does expect exactness, and in his universe he gets it.

Fredkin has an interesting way of expressing his insistence that all physical quantities be "rational." (A rational number is a number that can be expressed as a fraction—as a ratio of one integer to another. Expressed as a decimal, a rational number will either end, as 5/2 does in the form of 2.5, or repeat itself endlessly, as 1/7 does in the form of 0.142857142857142 . . .) He says he finds it hard to believe that a finite volume of space could contain an infinite amount of information. It is almost as if he viewed each parcel of space as having the digits describing it actually crammed into it. This seems an odd perspective, one that confuses the thing itself with the information it represents. But such an inversion between the realm of things and the realm of representation is common among those who work at the interface of computer science and physics. Contemplating the essence of information seems to affect the way you think.

The prospect of a discrete reality, however alien to the average person, is easier to fathom than the problem of the infinite regress, which is also raised by Fredkin's theory. The problem begins with the fact that information typically has a physical basis. Writing consists of ink; speech is composed of sound waves; even the computer's ephemeral bits and bytes are grounded in configurations of electrons. If the electrons are in turn made of information, then what is the information made of?

Asking questions like this ten or twelve times is not a good way to earn Fredkin's respect. A look of exasperation passes fleetingly over his face. "What I've tried to explain is that—and I hate to do this, because physicists are always doing this in an obnoxious way—is that the question implies you're missing a very important concept." He gives it one more try, two more tries, three, and eventually some of the fog between me and his view of the universe disappears. I begin to understand that this is a theory not just of physics but of metaphysics. When you disentangle these theories—compare the physics with other theories of physics, and the metaphysics with other ideas about metaphysics—both sound less far-fetched than when jumbled together as one. And, as a bonus, Fredkin's metaphysics leads to a kind of high-tech theology—to speculation about supreme beings and the purpose of life.

III. The Perfect Thing


EDWARD FREDKIN WAS BORN IN 1934, THE LAST OF three children in a previously prosperous family. His father, Manuel, had come to Southern California from Russia shortly after the Revolution and founded a chain of radio stores that did not survive the Great Depression. The family learned economy, and Fredkin has not forgotten it. He can reach into his pocket, pull out a tissue that should have been retired weeks ago, and, with cleaning solution, make an entire airplane windshield clear. He can take even a well-written computer program, sift through it for superfluous instructions, and edit it accordingly, reducing both its size and its running time.

Manuel was by all accounts a competitive man, and he focused his competitive energies on the two boys: Edward and his older brother, Norman. Manuel routinely challenged Ed's mastery of fact, inciting sustained arguments over, say, the distance between the moon and the earth. Norman's theory is that his father, though bright, was intellectually insecure; he seemed somehow threatened by the knowledge the boys brought home from school. Manuel's mistrust of books, experts, and all other sources of received wisdom was absorbed by Ed.

So was his competitiveness. Fredkin always considered himself the smartest kid in his class. He used to place bets with other students on test scores. This habit did not endear him to his peers, and he seems in general to have lacked the prerequisites of popularity. His sense of humor was unusual. His interests were not widely shared. His physique was not a force to be reckoned with. He recalls, "When I was young—you know, sixth, seventh grade—two kids would be choosing sides for a game of something. It could be touch football. They'd choose everybody but me, and then there'd be a fight as to whether one side would have to take me. One side would say, 'We have eight and you have seven,' and they'd say, 'That's okay.' They'd be willing to play with seven." Though exhaustive in documenting his social alienation, Fredkin concedes that he was not the only unpopular student in school. "There was a socially active subgroup, probably not a majority, maybe forty percent, who were very socially active. They went out on dates. They went to parties. They did this and they did that. The others were left out. And I was in this big left-out group. But I was in the pole position. I was really left out."

Of the hours Fredkin spent alone, a good many were devoted to courting disaster in the name of science. By wiring together scores of large, 45-volt batteries, he collected enough electricity to conjure up vivid, erratic arcs. By scraping the heads off matches and buying sulfur, saltpeter, and charcoal, he acquired a good working knowledge of pyrotechnics. He built small, minimally destructive but visually impressive bombs, and fashioned rockets out of cardboard tubing and aluminum foil. But more than bombs and rockets, it was mechanisms that captured Fredkin's attention. From an early age he was viscerally attracted to Big Ben alarm clocks, which he methodically took apart and put back together. He also picked up his father's facility with radios and household appliances. But whereas Manuel seemed to fix things without understanding the underlying science, his son was curious about first principles.

So while other kids were playing baseball or chasing girls, Ed Fredkin was taking things apart and putting them back together. Children were aloof, even cruel, but a broken clock always responded gratefully to a healing hand. "I always got along well with machines," he remembers.

After graduation from high school, in 1952, Fredkin headed for the California Institute of Technology with hopes of finding a more appreciative social environment. But students at Caltech turned out to bear a disturbing resemblance to people he had observed elsewhere. "They were smart like me," he recalls, "but they had the full spectrum and distribution of social development." Once again Fredkin found his weekends unencumbered by parties. And once again he didn't spend his free time studying. Indeed, one of the few lessons he learned is that college is different from high school: in college if you don't study, you flunk out. This he did a few months into his sophomore year. Then, following in his brother's footsteps, he joined the Air Force and learned to fly fighter planes.

IT WAS THE AIR FORCE THAT FINALLY BROUGHT Fredkin face to face with a computer. He was working for the Air Proving Ground Command, whose function was to ensure that everything from combat boots to bombers was of top quality, when the unit was given the job of testing a computerized air-defense system known as SAGE (for "semi-automatic ground environment"). To test SAGE the Air Force needed men who knew something about computers, and so in 1956 a group from the Air Proving Ground Command, including Fredkin, was sent to MIT's Lincoln Laboratory and enrolled in computer-science courses. "Everything made instant sense to me," Fredkin remembers. "I just soaked it up like a sponge."

SAGE, when ready for testing, turned out to be even more complex than anticipated—too complex to be tested by anyone but genuine experts—and the job had to be contracted out. This development, combined with bureaucratic disorder, meant that Fredkin was now a man without a function, a sort of visiting scholar at Lincoln Laboratory. "For a period of time, probably over a year, no one ever came to tell me to do anything. Well, meanwhile, down the hall they installed the latest, most modern computer in the world—IBM's biggest, most powerful computer. So I just went down and started to program it." The computer was an XD-1. It was slower and less capacious than an Apple Macintosh and was roughly the size of a large house.

When Fredkin talks about his year alone with this dinosaur, you half expect to hear violins start playing in the background. "My whole way of life was just waiting for the computer to come along," he says. "The computer was in essence just the perfect thing." It was in some respects preferable to every other conglomeration of matter he had encountered—more sophisticated and flexible than other inorganic machines, and more logical than organic ones. "See, when I write a program, if I write it correctly, it will work. If I'm dealing with a person, and I tell him something, and I tell him correctly, it may or may not work."

The XD-1, in short, was an intelligence with which Fredkin could empathize. It was the ultimate embodiment of mechanical predictability, the refuge to which as a child he had retreated from the incomprehensibly hostile world of humanity. If the universe is indeed a computer, then it could be a friendly place after all.

During the several years after his arrival at Lincoln Lab, as Fredkin was joining the first generation of hackers, he was also immersing himself in physics—finally learning, through self-instruction, the lessons he had missed by dropping out of Caltech. It is this two-track education, Fredkin says, that led him to the theory of digital physics. For a time "there was no one in the world with the same interest in physics who had the intimate experience with computers that I did. I honestly think that there was a period of many years when I was in a unique position."

The uniqueness lay not only in the fusion of physics and computer science but also in the peculiar composition of Fredkin's physics curriculum. Many physicists acquire as children the sort of kinship with mechanism that he still feels, but in most cases it is later diluted by formal education; quantum mechanics, the prevailing paradigm in contemporary physics, seems to imply that at its core reality has truly random elements and is thus inherently unpredictable. But Fredkin escaped the usual indoctrination. To this day he maintains, as did Albert Einstein, that the common interpretation of quantum mechanics is mistaken—that any seeming indeterminacy in the subatomic world reflects only our ignorance of the determining principles, not their absence. This is a critical belief, for if he is wrong and the universe is not ultimately deterministic, then it cannot be governed by a process as exacting as computation.

After leaving the Air Force, Fredkin went to work for Bolt Beranek and Newman, a consulting firm in the Boston area, now known for its work in artificial intelligence and computer networking. His supervisor at BBN, J. C. R. Licklider, says of his first encounter with Fredkin, "It was obvious to me he was very unusual and probably a genius, and the more I came to know him, the more I came to think that that was not too elevated a description." Fredkin "worked almost continuously," Licklider recalls. "It was hard to get him to go to sleep sometimes." A pattern emerged. Licklider would give Fredkin a problem to work on—say, figuring out how to get a computer to search a text in its memory for an only partially specified sequence of letters. Fredkin would retreat to his office and return twenty or thirty hours later with the solution—or, rather, a solution; he often came back with the answer to a question different from the one that Licklider had asked. Fredkin's focus was intense but undisciplined, and it tended to stray from a problem as soon as he was confident that he understood the solution in principle.

This intellectual wanderlust is one of Fredkin's most enduring and exasperating traits. Just about everyone who knows him has a way of describing it: "He doesn't really work. He sort of fiddles." "Very often he has these great ideas and then does not have the discipline to cultivate the idea." "There is a gap between the quality of the original ideas and what follows. There's an imbalance there." Fredkin is aware of his reputation. In self-parody he once brought a cartoon to a friend's attention: A beaver and another forest animal are contemplating an immense man-made dam. The beaver is saying something like, "No, I didn't actually build it. But it's based on an idea of mine."

Among the ideas that congealed in Fredkin's mind during his stay at BBN is the one that gave him his current reputation as (depending on whom you talk to) a thinker of great depth and rare insight, a source of interesting but reckless speculation, or a crackpot.

IV. Tick by Tick, Dot by Dot


THE IDEA THAT THE UNIVERSE IS A COMPUTER WAS inspired partly by the idea of the universal computer. Universal computer, a term that can accurately be applied to everything from an IBM PC to a Cray supercomputer, has a technical, rigorous definition, but here its upshot will do: a universal computer can simulate any process that can be precisely described and perform any calculation that is performable.

This broad power is ultimately grounded in something very simple: the algorithm. An algorithm is a fixed procedure for converting input into output, for taking one body of information and turning it into another. For example, a computer program that takes any number it is given, squares it, and subtracts three is an algorithm. This isn't a very powerful algorithm; by taking a 3 and turning it into a 6, it hasn't created much new information. But algorithms become more powerful with recursion. A recursive algorithm is an algorithm whose output is fed back into it as input. Thus the algorithm that turned 3 into 6, if operating recursively, would continue, turning 6 into 33, then 33 into 1,086, then 1,086 into 1,179,393, and so on.
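The arithmetic is easy to check by machine. A few lines of Python (the loop count of four is arbitrary) reproduce the sequence in the text:

```python
def f(x):
    return x * x - 3   # the toy algorithm: square the input, subtract three

x = 3
for _ in range(4):
    x = f(x)           # recursion: each output becomes the next input
    print(x)           # prints 6, 33, 1086, 1179393
```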

The power of recursive algorithms is especially apparent in the simulation of physical processes. While Fredkin was at BBN, he would use the company's Digital Equipment Corporation PDP-1 computer to simulate, say, two particles, one that was positively charged and one that was negatively charged, orbiting each other in accordance with the laws of electromagnetism. It was a pretty sight: two phosphor dots dancing, each etching a green trail that faded into yellow and then into darkness. But for Fredkin the attraction lay less in this elegant image than in its underlying logic. The program he had written took the particles' velocities and positions at one point in time, computed those variables for the next point in time, and then fed the new variables back into the algorithm to get newer variables—and so on and so on, thousands of times a second. The several steps in this algorithm, Fredkin recalls, were "very simple and very beautiful." It was in these orbiting phosphor dots that Fredkin first saw the appeal of his kind of universe—a universe that proceeds tick by tick and dot by dot, a universe in which complexity boils down to rules of elementary simplicity.
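That feedback loop is easy to restate in modern form. The sketch below pins one charge at the origin for brevity; the unit system, the time step, and the starting conditions are arbitrary assumptions, not details of Fredkin's PDP-1 program.

```python
import math

def tick(pos, vel, dt=0.001):
    # Inverse-square attraction toward a fixed opposite charge at the
    # origin: for unit mass and unit charges, acceleration is -(x, y)/r^3.
    x, y = pos
    r = math.hypot(x, y)
    ax, ay = -x / r**3, -y / r**3
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

pos, vel = (1.0, 0.0), (0.0, 1.0)    # roughly circular orbit
for _ in range(10000):               # tick by tick, dot by dot
    pos, vel = tick(pos, vel)
```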

Fredkin's discovery of cellular automata a few years later permitted him further to indulge his taste for economy of information and strengthened his bond with the recursive algorithm. The patterns of automata are often all but impossible to describe with calculus yet easy to express algorithmically. Nothing is so striking about a good cellular automaton as the contrast between the simplicity of the underlying algorithm and the richness of its result. We have all felt the attraction of such contrasts. It accompanies the comprehension of any process, conceptual or physical, by which simplicity accommodates complexity. Simple solutions to complex problems, for example, make us feel good. The social engineer who designs uncomplicated legislation that will cure numerous social ills, the architect who eliminates several nagging design flaws by moving a single closet, the doctor who traces gastro-intestinal, cardiovascular, and respiratory ailments to a single, correctable cause—all feel the same kind of visceral, aesthetic satisfaction that must have filled the first caveman who literally killed two birds with one stone.

For scientists, the moment of discovery does not simply reinforce the search for knowledge; it inspires further research. Indeed, it directs research. The unifying principle, upon its apprehension, can elicit such devotion that thereafter the scientist looks everywhere for manifestations of it. It was the scientist in Fredkin who, upon seeing how a simple programming rule could yield immense complexity, got excited about looking at physics in a new way and stayed excited. He spent much of the next three decades fleshing out his intuition.

FREDKIN'S RESIGNATION FROM BOLT BERANEK AND Newman did not surprise Licklider. "I could tell that Ed was disappointed in the scope of projects undertaken at BBN. He would see them on a grander scale. I would try to argue—hey, let's cut our teeth on this and then move on to bigger things." Fredkin wasn't biting. "He came in one day and said, 'Gosh, Lick, I really love working here, but I'm going to have to leave. I've been thinking about my plans for the future, and I want to make'—I don't remember how many millions of dollars, but it shook me—'and I want to do it in about four years.' And he did amass however many millions he said he would amass in the time he predicted, which impressed me considerably."

In 1962 Fredkin founded Information International Incorporated—an impressive name for a company with no assets and no clients, whose sole employee had never graduated from college. Triple-I, as the company came to be called, was placed on the road to riches by an odd job that Fredkin performed for the Woods Hole Oceanographic Institution. One of Woods Hole's experiments had run into a complication: underwater instruments had faithfully recorded the changing direction and strength of deep ocean currents, but the information, encoded in tiny dots of light on sixteen-millimeter film, was inaccessible to the computers that were supposed to analyze it. Fredkin rented a sixteen-millimeter movie projector and with a surprisingly simple modification turned it into a machine for translating those dots into terms the computer could accept.

This contraption pleased the people at Woods Hole and led to a contract with Lincoln Laboratory. Lincoln was still doing work for the Air Force, and the Air Force wanted its computers to analyze radar information that, like the Woods Hole data, consisted of patterns of light on film. A makeshift information-conversion machine earned Triple-I $10,000, and within a year the Air Force hired Fredkin to build equipment devoted to the task. The job paid $350,000—the equivalent today of around $1 million. RCA and other companies, it turned out, also needed to turn visual patterns into digital data, and "programmable film readers" that sold for $500,000 apiece became Triple-I's stock-in-trade. In 1968 Triple-I went public and Fredkin was suddenly a millionaire. Gradually he cashed in his chips. First he bought a ranch in Colorado. Then one day he was thumbing through the classifieds and saw that an island in the Caribbean was for sale. He bought it.

In the early 1960s, at the suggestion of the Defense Department's Advanced Research Projects Agency, MIT set up what would become its Laboratory for Computer Science. It was then called Project MAC, an acronym that stood for both "machine-aided cognition" and "multiaccess computer." Fredkin had connections with the project from the beginning. Licklider, who had left BBN for the Pentagon shortly after Fredkin's departure, was influential in earmarking federal money for MAC. Marvin Minsky—who would later serve on Triple-I's board, and by the end of 1967 owned some of its stock—was centrally involved in MAC's inception. Fredkin served on Project MAC's steering committee, and in 1966 he began discussing with Minsky the possibility of becoming a visiting professor at MIT. The idea of bringing a college dropout onto the faculty, Minsky recalls, was not as outlandish as it now sounds; computer science had become an academic discipline so suddenly that many of its leading lights possessed meager formal credentials. In 1968, after Licklider had come to MIT and become the director of Project MAC, he and Minsky convinced Louis Smullin, the head of the electrical-engineering department, that Fredkin was worth the gamble. "We were a growing department and we wanted exciting people," Smullin says. "And Ed was exciting."

Fredkin had taught for barely a year before he became a full professor, and not much later, in 1971, he was appointed the head of Project MAC—a position that was also short-lived, for in the fall of 1974 he began a sabbatical at the California Institute of Technology as a Fairchild Distinguished Scholar. He went to Caltech under the sponsorship of Richard Feynman. The deal, Fredkin recalls, was that he would teach Feynman more about computer science, and Feynman would teach him more about physics. While there, Fredkin developed an idea that has slowly come to be seen as a profound contribution to both disciplines. The idea is also—in Fredkin's mind, at least—corroborating evidence for his theory of digital physics. To put its upshot in brief and therefore obscure terms, Fredkin found that computation is not inherently irreversible and thus it is possible, in principle, to build a computer that doesn't use up energy and doesn't give off heat.

All computers on the market are irreversible. That is, their history of information processing cannot be inferred from their present informational state; you cannot look at the data they contain and figure out how they arrived at it. By the time the average computer tells you that 2 plus 2 equals 4, it has forgotten the question; for all it knows, you asked what 1 plus 3 is. The reason for this ignorance is that computers discharge information once it is no longer needed, so that they won't get clogged up.

In 1961 Rolf Landauer, of IBM's Thomas J. Watson Research Center, established that this destruction of information is the only part of the computational process that unavoidably involves the dissipation of energy. It takes effort, in other words, for a computer to forget things but not necessarily for it to perform other functions. Thus the question of whether you can, in principle, build a universal computer that doesn't dissipate energy in the form of heat is synonymous with the question of whether you can design a logically reversible universal computer, one whose computational history can always be unearthed. Landauer, along with just about everyone else, thought such a computer impossible; all past computer architectures had implied the regular discarding of information, and it was widely believed that this irreversibility was intrinsic to computation. But while at Caltech, Fredkin did one of his favorite things—he showed that everyone had been wrong all along.

Of the two kinds of reversible computers invented by Fredkin, the better known is called the billiard-ball computer. If it were ever actually built, it would consist of billiard balls ricocheting around in a labyrinth of "mirrors," bouncing off the mirrors at 45-degree angles, periodically banging into other moving balls at 90-degree angles, and occasionally exiting through doorways that now and then would admit new balls. To extract data from the machine, you would superimpose a grid over it, and the presence or absence of a ball in a given square at a given point in time would constitute information. Such a machine, Fredkin showed, would qualify as a universal computer; it could do anything that normal computers do. But unlike other computers, it would be perfectly reversible; to recover its history, all you would have to do is stop it and run it backward. Charles H. Bennett, of IBM's Thomas J. Watson Research Center, independently arrived at a different proof that reversible computation is possible, though he considers the billiard-ball computer to be in some respects a more elegant solution to the problem than his own.
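The reversible logic element most closely associated with Fredkin, and one that ball collisions can realize, is the gate that bears his name: a controlled swap. The sketch below (Python, purely illustrative) checks the property that matters here, namely that the gate is its own inverse, so no input pattern is ever erased.

```python
def fredkin(control, a, b):
    # Controlled swap: if the control bit is 1, the two data bits trade
    # places; otherwise everything passes through unchanged.
    return (control, b, a) if control else (control, a, b)

# Running the gate twice restores all eight possible inputs, so the
# computation's history is never lost.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits
```

The gate also conserves the number of 1s it is fed, just as collisions conserve billiard balls.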

The billiard-ball computer will never be built, because it is a platonic device, existing only in a world of ideals. The balls are perfectly round and hard, and the table perfectly smooth and hard. There is no friction between the two, and no energy is lost when balls collide. Still, although these ideals are unreachable, they could be approached ever more closely through technological refinement, and the heat produced by friction and collision could thus be reduced without limit. Since no additional heat would be created by information loss, there would be no necessary minimum on the total heat emitted by the computer. "The cleverer you are, the less heat it will generate," Fredkin says.

The connection Fredkin sees between the billiard-ball computer and digital physics exemplifies the odd assortment of evidence he has gathered in support of his theory. Molecules and atoms and their constituents, he notes, move around in theoretically reversible fashion, like billiard balls (although it is not humanly possible, of course, actually to take stock of the physical state of the universe, or even one small corner of it, and reconstruct history by tracing the motion of microscopic particles backward). Well, he asks, given the theoretical reversibility of physical reality, doesn't the theoretical feasibility of a reversible computer lend credence to the claim that computation is reality's basis?

No and yes. Strictly speaking, Fredkin's theory doesn't demand reversible computation. It is conceivable that an irreversible process at the very core of reality could give rise to the reversible behavior of molecules, atoms, electrons, and the rest. After all, irreversible computers (that is, all computers on the market) can simulate reversible billiard balls. But they do so in a convoluted way, Fredkin says, and the connection between an irreversible substratum and a reversible stratum would, similarly, be tortuous—or, as he puts it, "aesthetically obnoxious." Fredkin prefers to think that the cellular automaton underlying reversible reality does its work gracefully.

Consider, for example, a variant of the billiard-ball computer invented by Norman Margolus, the Canadian in MIT's information-mechanics group. Margolus showed how a two-state cellular automaton that was itself reversible could simulate the billiard-ball computer using only a simple rule involving a small neighborhood. This cellular automaton in action looks like a jazzed-up version of the original video game, Pong. It is an overhead view of endlessly energetic balls ricocheting off clusters of mirrors and each other. It is proof that a very simple binary cellular automaton can give rise to the seemingly more complex behavior of microscopic particles bouncing off each other. And, as a kind of bonus, these particular particles themselves amount to a computer. Though Margolus discovered this powerful cellular-automaton rule, it was Fredkin who had first concluded that it must exist and persuaded Margolus to look for it. "He has an intuitive idea of how things should be," Margolus says. "And often, if he can't come up with a rational argument to convince you that it should be so, he'll sort of transfer his intuition to you."

That, really, is what Fredkin is trying to do when he argues that the universe is a computer. He cannot give you a single line of reasoning that leads inexorably, or even very plausibly, to this conclusion. He can tell you about the reversible computer, about Margolus's cellular automaton, about the many physical quantities, like light, that were once thought to be continuous but are now considered discrete, and so on. The evidence consists of many little things—so many, and so little, that in the end he is forced to convey his truth by simile. "I find the supporting evidence for my beliefs in ten thousand different places," he says. "And to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, Where is this animal? I say, Well, he was here, he's about this big, this that and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there." The story changes upon retelling. One day it's Bigfoot that Fredkin's trailing. Another day it's a duck: feathers are everywhere, and the tracks are webbed. Whatever the animal, the moral of the story remains the same: "What I see is so compelling that it can't be a creature of my imagination."

V. Deus ex Machina


THERE WAS SOMETHING BOTHERSOME ABOUT ISAAC Newton's theory of gravitation. The idea that the sun exerts a pull on the earth, and vice versa, sounded vaguely supernatural and, in any event, was hard to explain. How, after all, could such "action at a distance" be realized? Did the earth look at the sun, estimate the distance, and consult the law of gravitation to determine where it should move and how fast? Newton sidestepped such questions. He fudged with the Latin phrase si esset: two bodies, he wrote, behave as if impelled by a force inversely proportional to the square of their distance. Ever since Newton, physics has followed his example. Its "force fields" are, strictly speaking, metaphorical, and its laws purely descriptive. Physicists make no attempt to explain why things obey the law of electromagnetism or of gravitation. The law is the law, and that's all there is to it.

Fredkin refuses to accept authority so blindly. He posits not only laws but also a law-enforcement agency: a computer. Somewhere out there, he believes, is a machinelike thing that actually keeps our individual bits of space abiding by the rule of the universal cellular automaton. With this belief Fredkin crosses the line between physics and metaphysics, between scientific hypothesis and cosmic speculation. If Fredkin had Newton's knack for public relations, if he stopped at saying that the universe operates as if it were a computer, he could improve his stature among physicists while preserving the essence of his theory—the idea that the dynamics of physical reality will ultimately be better captured by a single recursive algorithm than by the mathematics of conventional physics, and that the continuity of time and space implicit in traditional mathematics is illusory.

Actually, some estimable physicists have lately been saying things not wholly unlike this stripped-down version of the theory. T. D. Lee, a Nobel laureate at Columbia University, has written at length about the possibility that time is discrete. And in 1984 Scientific American, not exactly a soapbox for cranks, published an article in which Stephen Wolfram, then of Princeton's Institute for Advanced Study, wrote, "Scientific laws are now being viewed as algorithms. . . . Physical systems are viewed as computational systems, processing information much the way computers do." He concluded, "A new paradigm has been born."

The line between responsible scientific speculation and off-the-wall metaphysical pronouncement was nicely illustrated by an article in which Tommaso Toffoli, the Italian in MIT's information-mechanics group, stayed barely on the responsible side of it. Published in the journal Physica D, the article was called "Cellular automata as an alternative to (rather than an approximation of) differential equations in modeling physics." Toffoli's thesis captured the core of Fredkin's theory yet had a perfectly reasonable ring to it. He simply suggested that the historical reliance of physicists on calculus may have been due not just to its merits but also to the fact that before the computer, alternative languages of description were not practical.

Why does Fredkin refuse to do the expedient thing—leave out the part about the universe actually being a computer? One reason is that he considers reprehensible the failure of Newton, and of all physicists since, to back up their descriptions of nature with explanations. He is amazed to find "perfectly rational scientists" believing in "a form of mysticism: that things just happen because they happen." The best physics, Fredkin seems to believe, is metaphysics.

The trouble with metaphysics is its endless depth. For every question that is answered, at least one other is raised, and it is not always clear that, on balance, any progress has been made. For example, where is this computer that Fredkin keeps talking about? Is it in this universe, residing along some fifth or sixth dimension that renders it invisible? Is it in some meta-universe? The answer is the latter, apparently, and to understand why, we need to return to the problem of the infinite regress, a problem that Rolf Landauer, among others, has cited with respect to Fredkin's theory. Landauer illustrates the problem by telling the old turtle story. A professor has just finished lecturing at some august university about the origin and structure of the universe, and an old woman in tennis shoes walks up to the lectern. "Excuse me, sir, but you've got it all wrong," she says. "The truth is that the universe is sitting on the back of a huge turtle." The professor decides to humor her. "Oh, really?" he asks. "Well, tell me, what is the turtle standing on?" The lady has a ready reply: "Oh, it's standing on another turtle." The professor asks, "And what is that turtle standing on?" Without hesitation, she says, "Another turtle." The professor, still game, repeats his question. A look of impatience comes across the woman's face. She holds up her hand, stopping him in mid-sentence. "Save your breath, sonny," she says. "It's turtles all the way down."

The infinite-regress problem afflicts Fredkin's theory in two ways, one of which we have already encountered: if matter is made of information, what is the information made of? And even if one concedes that it is no more ludicrous for information to be the most fundamental stuff than for matter or energy to be the most fundamental stuff, what about the computer itself? What is it made of? What energizes it? Who, or what, runs it, or set it in motion to begin with?

WHEN FREDKIN IS DISCUSSING THE PROBLEM OF THE infinite regress, his logic seems variously cryptic, evasive, and appealing. At one point he says, "For everything in the world where you wonder, 'What is it made out of?' the only thing I know of where the question doesn't have to be answered with anything else is for information." This puzzles me. Thousands of words later I am still puzzled, and I press for clarification. He talks some more. What he means, as near as I can tell, is what follows.

First of all, it doesn't matter what the information is made of, or what kind of computer produces it. The computer could be of the conventional electronic sort, or it could be a hydraulic machine made of gargantuan sewage pipes and manhole covers, or it could be something we can't even imagine. What's the difference? Who cares what the information consists of? So long as the cellular automaton's rule is the same in each case, the patterns of information will be the same, and so will we, because the structure of our world depends on pattern, not on the pattern's substrate; a carbon atom, according to Fredkin, is a certain configuration of bits, not a certain kind of bits.

Besides, we can never know what the information is made of or what kind of machine is processing it. This point is reminiscent of childhood conversations that Fredkin remembers having with his sister, Joan, about the possibility that they were part of a dream God was having. "Say God is in a room and on his table he has some cookies and tea," Fredkin says. "And he's dreaming this whole universe up. Well, we can't reach out and get his cookies. They're not in our universe. See, our universe has bounds. There are some things in it and some things not." The computer is not; hardware is beyond the grasp of its software. Imagine a vast computer program that contained bodies of information as complex as people, motivated by bodies of information as complex as ideas. These "people" would have no way of figuring out what kind of computer they owed their existence to, because everything they said, and everything they did—including formulating metaphysical hypotheses—would depend entirely on the programming rules and the original input. As long as these didn't change, the same metaphysical conclusions would be reached in an old XD-1 as in a Kaypro 2.

This idea—that sentient beings could be constitutionally numb to the texture of reality—has fascinated a number of people, including, lately, computer scientists. One source of the fascination is the fact that any universal computer can simulate another universal computer, and the simulated computer can, because it is universal, do the same thing. So it is possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious of those containers. To anyone who has lived intimately with, and thought deeply about, computers, says Charles Bennett, of IBM's Watson Lab, this notion is very attractive. "And if you're too attracted to it, you're likely to part company with the physicists." Physicists, Bennett says, find heretical the notion that anything physical is impervious to experiment, removed from the reach of science.

Fredkin's belief in the limits of scientific knowledge may sound like evidence of humility, but in the end it permits great ambition; it helps him go after some of the grandest philosophical questions around. For example, there is a paradox that crops up whenever people think about how the universe came to be. On the one hand, it must have had a beginning. After all, things usually do. Besides, the cosmological evidence suggests a beginning: the big bang. Yet science insists that it is impossible for something to come from nothing; the laws of physics forbid the amount of energy and mass in the universe to change. So how could there have been a time when there was no universe, and thus no mass or energy?

Fredkin escapes from this paradox without breaking a sweat. Granted, he says, the laws of our universe don't permit something to come from nothing. But he can imagine laws that would permit such a thing; in fact, he can imagine algorithmic laws that would permit such a thing. The conservation of mass and energy is a consequence of our cellular automaton's rules, not a consequence of all possible rules. Perhaps a different cellular automaton governed the creation of our cellular automaton—just as the rules for loading software are different from the rules running the program once it has been loaded.

What's funny is how hard it is to doubt Fredkin when with such assurance he makes definitive statements about the creation of the universe—or when, for that matter, he looks you in the eye and tells you the universe is a computer. Partly this is because, given the magnitude and intrinsic intractability of the questions he is addressing, his answers aren't all that bad. As ideas about the foundations of physics go, his are not completely out of the ball park; as metaphysical and cosmogonic speculation goes, his isn't beyond the pale.

But there's more to it than that. Fredkin is, in his own odd way, a rhetorician of great skill. He talks softly, even coolly, but with a low-key power, a quiet and relentless confidence, a kind of high-tech fervor. And there is something disarming about his self-awareness. He's not one of these people who say crazy things without having so much as a clue that you're sitting there thinking what crazy things they are. He is acutely conscious of his reputation; he knows that some scientists are reluctant to invite him to conferences for fear that he'll say embarrassing things. But he is not fazed by their doubts. "You know, I'm a reasonably smart person. I'm not the smartest person in the world, but I'm pretty smart—and I know that what I'm involved in makes perfect sense. A lot of people build up what might be called self-delusional systems, where they have this whole system that makes perfect sense to them, but no one else ever understands it or buys it. I don't think that's a major factor here, though others might disagree." It's hard to disagree, when he so forthrightly offers you the chance.

Still, as he gets further from physics, and more deeply into philosophy, he begins to try one's trust. For example, having tackled the question of what sort of process could generate a universe in which spontaneous generation is impossible, he aims immediately for bigger game: Why was the universe created? Why is there something here instead of nothing?

WHEN THIS SUBJECT COMES UP, WE ARE SITTING IN the Fredkins' villa. The living area has pale rock walls, shiny-clean floors made of large white ceramic tiles, and built-in bookcases made of blond wood. There is lots of air—the ceiling slopes up in the middle to at least twenty feet—and the air keeps moving; some walls consist almost entirely of wooden shutters that, when open, let the sea breeze pass as fast as it will. I am glad of this. My skin, after three days on Fredkin's island, is hot, and the air, though heavy, is cool. The sun is going down.

Fredkin, sitting on a white sofa, is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold.
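The contrast can be put in a few lines. For a system with a closed form, step n is available directly; for a generic cellular automaton, the only known general route to generation n runs through every generation before it. In this sketch (Python; both examples are arbitrary stand-ins) `step` can be any update rule, such as the parity rule sketched earlier.

```python
def analytic_state(x0, r, n):
    # Closed form: jump straight to step n without visiting steps 1..n-1,
    # as with compound growth x_n = x0 * r**n.
    return x0 * r ** n

def automaton_state(grid, step, n):
    # No shortcut: the future is found only by unfolding it, tick by tick.
    for _ in range(n):
        grid = step(grid)
    return grid
```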

This indeterminacy is very suggestive. It suggests, first of all, why so many "chaotic" phenomena, like smoke rising from a cigarette, are so difficult to predict using conventional mathematics. (In fact, some scientists have taken to modeling chaotic systems with cellular automata.) To Fredkin, it also suggests that even if human behavior is entirely determined, entirely inevitable, it may be unpredictable; there is room for "pseudo free will" in a completely mechanistic universe. But on this particular evening Fredkin is interested mainly in cosmogony, in the implications of this indeterminacy for the big question: Why does this giant computer of a universe exist?

It's simple, Fredkin explains: "The reason is, there is no way to know the answer to some question any faster than what's going on."

Aware that he may have said something enigmatic, Fredkin elaborates. Suppose, he says, that there is an all-powerful God. "And he's thinking of creating this universe. He's going to spend seven days on the job—this is totally allegorical—or six days on the job. Okay, now, if he's as all-powerful as you might imagine, he can say to himself, 'Wait a minute, why waste the time? I can create the whole thing, or I can just think about it for a minute and just realize what's going to happen so that I don't have to bother.' Now, ordinary physics says, Well, yeah, you got an all-powerful God, he can probably do that. What I can say is—this is very interesting—I can say I don't care how powerful God is; he cannot know the answer to the question any faster than doing it. Now, he can have various ways of doing it, but he has to do every Goddamn single step with every bit or he won't get the right answer. There's no shortcut."

Around sundown on Fredkin's island all kinds of insects start chirping or buzzing or whirring. Meanwhile, the wind chimes hanging just outside the back door are tinkling with methodical randomness. All this music is eerie and vaguely mystical. And so, increasingly, is the conversation. It is one of those moments when the context you've constructed falls apart, and gives way to a new, considerably stranger one. The old context in this case was that Fredkin is an iconoclastic thinker who believes that space and time are discrete, that the laws of the universe are algorithmic, and that the universe works according to the same principles as a computer (he uses this very phrasing in his most circumspect moments). The new context is that Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker calculate pi to nine jillion decimal places.

So, I say, you're arguing that the reason we're here is that some being wanted to theorize about reality, and the only way he could test his theories was to create reality? "No, you see, my explanation is much more abstract. I don't imagine there is a being or anything. I'm just using that to talk to you about it. What I'm saying is that there is no way to know what the future is any faster than running this [the universe] to get to that [the future]. Therefore, what I'm assuming is that there is a question and there is an answer, okay? I don't make any assumptions about who has the question, who wants the answer, anything."

But the more we talk, the closer Fredkin comes to the religious undercurrents he's trying to avoid. "Every astrophysical phenomenon that's going on is always assumed to be just accident," he says. "To me, this is a fairly arrogant position, in that intelligence—and computation, which includes intelligence, in my view—is a much more universal thing than people think. It's hard for me to believe that everything out there is just an accident." This sounds awfully like a position that Pope John Paul II or Billy Graham would take, and Fredkin is at pains to clarify his position: "I guess what I'm saying is—I don't have any religious belief. I don't believe that there is a God. I don't believe in Christianity or Judaism or anything like that, okay? I'm not an atheist, I'm not an agnostic, I'm just in a simple state. I don't know what there is or might be. But what I can say is that it seems likely to me that this particular universe we have is a consequence of something I would call intelligent." Does he mean that there's something out there that wanted to get the answer to a question? "Yeah." Something that set up the universe to see what would happen? "In some way, yes."

VI. The Language Barrier


IN 1974, UPON RETURNING TO MIT FROM CALTECH, Fredkin was primed to revolutionize science. Having done the broad conceptual work (concluding that the universe is a computer), he would enlist the aid of others in taking care of the details—translating the differential equations of physics into algorithms, experimenting with cellular-automaton rules and selecting the most elegant, and, eventually, discovering The Rule, the single law that governs every bit of space and accounts for everything. "He figured that all he needed was some people who knew physics, and that it would all be easy," Margolus says.

One early obstacle was Fredkin's reputation. He says, "I would find a brilliant student; he'd get turned on to this stuff and start to work on it. And then he would come to me and say, 'I'm going to work on something else.' And I would say, 'Why?' And I had a few very honest ones, and they would say, 'Well, I've been talking to my friends about this and they say I'm totally crazy to work on it. It'll ruin my career. I'll be tainted forever.'" Such fears were not entirely unfounded. Fredkin is one of those people who arouse either affection, admiration, and respect, or dislike and suspicion. The latter reaction has come from a number of professors at MIT, particularly those who put a premium on formal credentials, proper academic conduct, and not sounding like a crackpot. Fredkin was never oblivious of the complaints that his work wasn't "worthy of MIT," nor of the movements, periodically afoot, to sever, or at least weaken, his ties to the university. Neither were his graduate students.

Fredkin's critics finally got their way. In the early 1980s, while he was serving briefly as the president of Boston's CBS-TV affiliate, someone noticed that he wasn't spending much time around MIT and pointed to a faculty rule limiting outside professional activities. Fredkin was finding MIT "less and less interesting" anyway, so he agreed to be designated an adjunct professor. As he recalls the deal, he was going to do a moderate amount of teaching and be paid an "appropriate" salary. But he found the actual salary insulting, declined payment, and never got around to teaching. Not surprisingly, he was not reappointed adjunct professor when his term expired, in 1986. Meanwhile, he had so nominally discharged his duties as the head of the information-mechanics group that the title was given to Toffoli.

Fredkin doubts that his ideas will achieve widespread acceptance anytime soon. He believes that most physicists are so deeply immersed in their kind of mathematics, and so uncomprehending of computation, as to be incapable of grasping the truth. Imagine, he says, that a twentieth-century time traveler visited Italy in the early seventeenth century and tried to reformulate Galileo's ideas in terms of calculus. Although it would be a vastly more powerful language of description than the old one, conveying its importance to the average scientist would be nearly impossible. There are times when Fredkin breaks through the language barrier, but they are few and far between. He can sell one person on one idea, another on another, but nobody seems to get the big picture. It's like a painting of a horse in a meadow, he says: "Everyone else only looks at it with a microscope, and they say, 'Aha, over here I see a little brown pigment. And over here I see a little green pigment.' Okay. Well, I see a horse."

Fredkin's research has nevertheless paid off in unanticipated ways. Comparing a computer's workings and the dynamics of physics turned out to be a good way to figure out how to build a very efficient computer—one that harnesses the laws of physics with great economy. Thus Toffoli and Margolus have designed an inexpensive but powerful cellular-automata machine, the CAM 6. The "machine" is actually a circuit board that, when inserted in a personal computer, permits it to orchestrate visual complexity at a speed that can be matched only by general-purpose computers costing hundreds of thousands of dollars. Since the circuit board costs only around $1,500, this engrossing machine may well entice young scientific revolutionaries into joining the quest for The Rule. Fredkin speaks of this possibility in almost biblical terms: "The big hope is that there will arise somewhere someone who will have some new, brilliant ideas," he says. "And I think this machine will have a dramatic effect on the probability of that happening."

But even if that hope is realized, it will not ensure Fredkin a place in scientific history. He is not really on record as believing that the universe is a computer. Although some of his tamer insights have been adopted, fleshed out, and published by Toffoli or Margolus, sometimes in collaboration with him, Fredkin himself has published nothing on digital physics. His stated rationale for not publishing has to do with, of all things, lack of ambition. "I'm just not terribly interested," he says. "A lot of people are fantastically motivated by publishing. It's part of a whole thing of getting ahead in the world." Margolus has another explanation: "Writing something down in good form takes a lot of time. And usually by the time he's done with the first or second draft, he has another wonderful idea that he's off on."

These two theories have merit, but so does a third: Fredkin can't write for academic journals. He doesn't know how. His erratic, hybrid education has left him with a mixture of terminology that neither computer scientists nor physicists recognize as their native tongue. Further, he is not schooled in the rules of scientific discourse; he seems just barely aware of the line between scientific hypothesis and philosophical speculation. He is not politic enough to confine his argument to its essence: that time and space are discrete, and that the state of every point in space at any point in time is determined by a single algorithm. In short, the very background that has allowed Fredkin to see the universe as a computer seems to prevent him from sharing his vision. If he could talk like other scientists, he might see only the things that they see.


Robert Wright is the author of Three Scientists and Their Gods: Looking for Meaning in an Age of Information, The Moral Animal: Evolutionary Psychology and Everyday Life, and Nonzero: The Logic of Human Destiny.
Copyright © 2002 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; April 1988; Did the Universe Just Happen?; Volume 261, No. 4; page 29.
Success Stories

Strategy: A History. By LAWRENCE FREEDMAN. Oxford University Press, 2013, 768 pp. $34.95

Lawrence Freedman’s monumental new book is one of the most significant works in the fields of international relations, strategic studies, and history to appear in recent years, so readers should know what it is and what it is not. Despite its size and ambition, this magnum opus is not comprehensive. Strategy is instead a deliberately selective look at an important term that gets bandied about so much as to become almost meaningless. Scholars now have a work that arrests that slackness.

Readers should also know that Freedman’s book does not focus on “grand strategy,” a subject widely studied and a term often used to judge policymaking, since it concerns historical actors pursuing big ends. The index therefore contains no entry for the Roman Empire, and Freedman never discusses the grand strategies of such lasting players as the Ming dynasty, the Ottomans, King Philip II of Spain, the British Empire, or the Catholic Church. He does, however, tackle Satan’s strategy, in a dissection of Paradise Lost. There are diversions into literature, ancient myth, political theory, and the classics, and to the extent that they serve Freedman’s grander purpose of showing what strategy can sometimes be, the detours may be justified. But Freedman certainly likes to pick and choose, a tendency that can sometimes make it difficult for readers to follow the thread of his arguments even as they move into the central sections.

Those sections are threefold -- “Strategies of Force,” “Strategy From Below,” and “Strategy From Above” -- and Strategy is best read as three separate books in one. As he has with everything else in this elaborate study, Freedman has chosen these titles carefully. Still, his idiosyncratic and even peremptory claim on meanings and the logical chain of his chapters remind one of Alice’s encounter with the arbitrary Red Queen: things are as the author says they are, whatever one may happen to think about whether a “from below” strategy is included in his “Strategies of Force” section. Yet the book stands tall compared to the many lesser works on strategy and policy out there, which is why it will still stand out in ten or twenty years’ time.

WAR ON THE MIND

“Strategies of Force,” the largest of Freedman’s sections, comes the closest to a classic discussion of wars, campaigns, generals, and admirals. Yet rather than analyzing strategic campaigning on the battlefield, it mostly covers strategic theory about war. The book’s striking front cover, which shows a model of the Trojan horse, may trick bookstore browsers, but they will not find much about tactics, logistics, or the warrior ethos in the pages that follow. 

“Strategies of Force,” moreover, begins only in 1815, after the Napoleonic era, when full-blown theories of war emerged in the Western world. In one crisp chapter, Freedman introduces the military theorists Carl von Clausewitz and Henri de Jomini, making the case that the two contrasting authors -- the former Prussian, the latter Swiss -- should be regarded as the founding fathers of modern strategic thought, as they both reflected on the larger meanings of the epic struggle for Europe that they witnessed. Whereas Jomini gave planners somewhat more mechanistic rules regarding battlefield leadership, distance, timing, and logistics, Clausewitz taught them to appreciate other, less measurable elements, such as passion, unpredictability, chance, and friction. It was Clausewitz, too, who taught that politics does not stop when the fighting begins and that statesmen had to gear that fighting toward a desired peace. No wonder generals and professors in the nineteenth-century railway age generally preferred Jomini, whereas their successors, shocked by the chaos of World War I, came to favor Clausewitz. Both authors made sense in their time, Freedman argues, and they both have their limits. But he himself prefers the nuance that runs through Clausewitz’s works.

None of the later strategic theorists would surpass this duo, although they would add newer data and experiences. Freedman takes readers through the succeeding schools of thought, offering fine descriptions of such figures as the German military thinkers Hans Delbrück and Helmuth von Moltke the Elder, the analysts of land and naval power Halford Mackinder and Alfred Thayer Mahan, the British strategists of armored warfare Basil Liddell Hart and J. F. C. Fuller, the theorists of the nuclear age Bernard Brodie and Herman Kahn, and, finally, the students of guerrilla conflicts Che Guevara and David Galula.

This list may sound obvious, like the contents of a standard introductory syllabus on strategic theory, but Freedman turns it into something much more valuable through his acute judgments and summaries. For example, although the section on Mahan does explain that author’s belief that the nation with the greatest fleet would control the seas, Freedman gives more space to a lesser-known naval strategist, Julian Corbett, because he prefers the latter’s emphasis on geographic position, communications, and trade to the former’s more simplistic study of great fleets and the Trafalgar-like encounters they engaged in. Generally, Freedman approves of theorists with a Corbettian approach, since no single strategist can comprehend all aspects of war and get it right; once a conflict erupts, calm judgment and careful reasoning will prove more useful than fixed mindsets. Appropriately, this section of Strategy ends with al Qaeda, an adversary that has demonstrated the importance of surprise, confusion, luck, and passion -- and the futility of trying to use a fixed strategy against it. 

THE UNDERDOGS

In “Strategy From Below,” Freedman shifts his attention to what he calls a “strategy for underdogs” -- although he focuses on only the post-1789 ones. Like so many other scholars before him, he accepts that the most important changes in modern history in the West came with the European Enlightenment and the French Revolution. Scholars of early modern history may find this cutoff irritating: surely, Martin Luther’s nailing of his theses to the door of a Wittenberg church in 1517 was nothing if not a strategy from below. The same can be said of the social and political revolutions that convulsed Europe around 1648, many of which took their most extreme forms in such bodies as the Levellers, that radical political movement in England.

Karl Marx, Freedman’s key author here, knew all about these earlier revolutionary movements, of course, but as Freedman explains, Marx saw his own movement as different. In addition to having a scientific precision, it would also have a clear destiny: the destruction of capitalism followed by the uniting of the world’s workers. It is precisely because Marx and his co-writer, Friedrich Engels, worked out a full-blown theory of revolution that Freedman can commence his second section in the early nineteenth century. After all, unlike the Marxists, the Levellers were archaic radicals who claimed no predictive powers. They wanted to smash church rood screens and extend popular sovereignty; the Marxists wanted to abolish class structures altogether. Once true international socialism was established, the thinking went, there would be no more underdogs.

Moreover, no later revolutionary movement would lose its way getting to that harmonious endpoint, because Marx and Engels had provided a road map throughout their writings, if only they were studied carefully and properly followed. But in the early nineteenth century, other socialist writers were providing different road maps, and even some of Marx’s own followers would deviate from his. Freedman displays his impressive acuteness and erudition in describing the various leaders of these movements -- just look at his deft portrait of the nineteenth-century French libertarian Pierre-Joseph Proudhon. Yet the story is a complicated one, and this part of the book flails around a lot.

The entry of Vladimir Lenin into this story, beginning with his return to Russia from exile in April 1917 to kick off the Bolshevik Revolution, returns purpose to the text. When Lenin alighted from his train at Petrograd’s Finland Station that month, he already had his own strategy: he would work with a small but strong Bolshevik core to quicken the Russian people’s revolution, employing the gun as well as the pulpit. The result was breathtaking. Lenin, Leon Trotsky, and their cadre toppled the moderate regime that had already gotten rid of the tsar, and they created a Bolshevik state that managed to stay alive despite massive counterassaults from the West. Previous thinkers had written about change; these ones actually accomplished it. Here at last, the theory of overthrow from below was being turned into practice.

But if Lenin could both alter and accelerate the course of revolution, then so could his many admirers in later ages and foreign lands -- or so they hoped. Provided the end goal was the same, the means could be altered. Even in Lenin’s experience, the socialist cause often experienced setbacks followed by renewal, and sometimes, well-meaning comrades who failed to understand the urgency of things or who were too eager to compromise had to be smashed as thoroughly as the old order. This happened again when Joseph Stalin steadily took control of the Soviet state, exiled Trotsky (and then had him killed), and survived successive threats from the West, Poland, Japan, and even the Nazis. Exigency and flexibility were the name of the game. As early as 1925 (when some Western powers were beginning to recognize the Soviet Union), one could say that a strategy from below had actually worked. As a strategic success story, it ranks as one of the great narratives of the past 200 years.

But Freedman’s jerkiness again intrudes. Just when one expects his revolutionary tale to move on to Mao Zedong in China and Vo Nguyen Giap in Vietnam, Freedman turns instead to the German sociologist Max Weber, the Russian writer Leo Tolstoy, the American social worker Jane Addams, and the American educational reformer John Dewey -- hardly firebrands. (For Mao and Giap, readers must go back to a chapter in the first section on guerrilla warfare.)

The post-Lenin part of “Strategy From Below” thus becomes an overwhelmingly American story, although with many other pieces added in. It mixes an account of the struggles of the United Automobile Workers in the United States with further discussions of revolutionary thinkers; invaluable vignettes of the Black Power crusade, civil rights, and feminism; a nifty chapter on Mohandas Gandhi and the strategy of nonviolence; summaries of the teachings of the philosophers Antonio Gramsci, Thomas Kuhn, and Michel Foucault; and a recounting of the opposition to the Vietnam War. Then, Freedman takes another remarkable turn, to analyze the political strategies of such Republicans as Ronald Reagan and his adviser Lee Atwater and the successful campaigns of Barack Obama.

This section of Freedman’s book does start strong -- how could it not, with Marx as its hero? -- but the many meanderings cost it strength and purpose. Moreover, the story of Marxism as a major historical phenomenon fits uneasily with the other, smaller crusades covered. It is easy to agree that the efforts of, say, the Black Power movement involved a strategy that was rooted in specific circumstances and strove toward specific ends that, once realized, meant success. It was indeed a strategy from below, as were Lenin’s grasp for power in 1917 and many of the revolutionary acts of later radicals. Yet Marxism itself was something larger, a total grand strategy that would never be realized until it had taken hold everywhere.

STRATEGY AT WORK

Freedman’s third book within a book bears the promising title “Strategy From Above,” although anyone looking for a quick survey of how great leaders carried out strategy will not find it here. Instead, this section concerns the rise and evolution of management, in theory and practice, from World War II to the present. “The focus is largely on business,” Freedman admits, and the players are modern managers, entrepreneurs, and theorists. In another example of Red Queen–like arbitrariness, Freedman focuses only on American business and the gurus who have provided it with new ideas. One can almost hear the Sorbonne intellectuals grinding their teeth at this hijacking. Aren’t their very different views of European state capitalism, the social welfare state, and the responsibilities of the firm of any interest? Yes, but only when they appear in an article in the Harvard Business Review.

That said, “Strategy From Above” is very good at what it does. So far as I know, there exists no equally succinct account and critique of American business strategy over the last seven decades. But the narrative goes back even further, to Taylorism -- the scientific approach to manual labor developed by Frederick Taylor in the 1880s -- and the innovations of Henry Ford at the turn of the century. As Freedman shows, Ford’s factories offered proof not only of a model for vastly enhanced production, thanks to the assembly line and the outsourced fabrication of many parts, but also of two larger things: the necessity of maintaining discipline within an organization and the vital role played by managers, who captain the ships on which ordinary workers toil.

The American business model of planned mass production would prove its value most dramatically with the advent of World War II. The U.S. system allowed Rosie the Riveter and her male counterparts to crank out ships, tanks, and airplanes far faster than the United States’ foes could destroy them. Having followed the new industrial model so wholeheartedly, the country emerged from the war as the greatest economic power of all time. All that was needed to keep growing after the war were better ways of explaining the model and newer ideas for improving it. And no book does a better job of describing the way in which management experts such as Alfred Chandler and Peter Drucker thought about companies and people, sometimes brushing aside or co-opting trade unions.

Freedman highlights two trends that changed American business in the postwar years. The first involves companies’ drive to find better and better methods of management in order to enhance their competitiveness -- a seductive idea, given all the evidence suggesting that the best-run firms, such as IBM, are usually the ones to survive and conquer. The high prophets of competitiveness have been extremely influential, and they have been rewarded with astonishing book sales. For example, Competitive Strategy, by the Harvard Business School professor Michael Porter, has now been in print for 34 years.

The second trend is the arrival of rational choice theory at business schools and other parts of the American university. According to this remorselessly logical way of thought, all actors maximize their utility. Its early advocates contended that it could apply to all forms of managing, from running a business to fighting the Cold War, and this simple way of looking at things proved immensely popular, perhaps nowhere more so than in political science departments. In most of Strategy, Freedman sticks to neutral description, but in this particular debate, he seems to enjoy dissecting rational choice. After pointing out how actual behavior deviates so frequently from what these models assume, he gently suggests, “The fact that they might be discussed mathematically did not put these theories on the same level as those in the natural sciences.”

This narrative of evolving ideas about management demonstrates once again Freedman’s hunch that no single comprehensive strategy can ever serve all purposes. In his view, it is futile to even search for such a theory, although that’s unlikely to stop social scientists from looking for one. It is equally likely that historians -- along with others who have witnessed what happens when theoretical strategies get mugged by reality, whether in business or in war -- will continue to prefer messier explanations of how things work. As Freedman argues, no strategy, however well it may work against a particular enemy or sell a particular product, is ever final, with a definite endpoint. Strategy is about how to get there; it is not about there.

TAKING STOCK

Freedman’s book should prove useful to students, fellow scholars, denizens of think tanks, and those working in the strategy departments of large organizations and top-rank investment companies. (It is already required reading in the grand strategy class I co-teach at Yale.) Yet Strategy has two main defects.

First, its contents are unbalanced. The three-legged structure simply cannot stand on its own, because the third part lacks the historical importance of the other two. After all, in the first section, readers learn about Moltke the Elder’s profound thoughts on victory, and in the second section, they read of Stokely Carmichael’s wrenching calls for black power, yet in the third section, they get the Boston Consulting Group. The comedown is great.

Second, despite the universalistic claim of its title, Strategy gets more and more American as it goes on. The third book is simply all about the United States. Freedman justifies his focus by noting that “the United States has been not only the most powerful but also the most intellectually innovative country in recent times.” Perhaps this emphasis reflects the author’s own life and times. Born in England just after World War II, Freedman could hardly have avoided being influenced by the increasing Americanization of his world.

Like me, many readers may wonder where to place Strategy in their libraries. It obviously should not go next to my various encyclopedias, given its selectiveness. Nor does it belong beside my small collection of books on management and business. It could go near my section of works on Marxism, socialism, and revolutionaries, but Strategy covers a far larger territory. By process of elimination, I have had to place it with my books on war. Although it differs from them all, it will stand close to books by Clausewitz, Jomini, and Mahan and not far from the many writings of the British military historian Michael Howard, Freedman’s longtime mentor and predecessor in his chair at King’s College London. That is not a bad place in which to be found.

History 398 - Sources and Further Reading


Week 1, Introduction, The Technics of Simple and Compound Machines

The second lecture introduces the basic characteristics of simple and compound machines, taking as examples the machines described by Georgius Agricola in his De re metallica (1556) and the application of machines to the task of moving the Vatican obelisk as recounted by Domenico Fontana in his work of 1590.
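Since the lecture turns on how compound machines multiply force, a small worked computation may help. This is a minimal sketch in Python; the stage values and the load figure are illustrative assumptions, not Fontana's actual rigging.

    # Ideal mechanical advantage (friction ignored) of a compound machine
    # is the product of the advantages of its stages. Numbers are invented
    # for illustration, not taken from Agricola or Fontana.

    def wheel_and_axle(lever_radius, drum_radius):
        # e.g., a capstan: long bars turning a small rope drum
        return lever_radius / drum_radius

    def pulley_block(supporting_ropes):
        # ideal advantage equals the number of ropes holding the load
        return supporting_ropes

    stages = [wheel_and_axle(3.0, 0.25), pulley_block(4)]

    total = 1.0
    for ma in stages:
        total *= ma

    load = 330_000  # kg; roughly the weight of the obelisk, a bit over 300 tonnes
    print(f"ideal mechanical advantage: {total:.0f}")            # 48
    print(f"force to be applied at the bars: {load / total:,.0f} kg")

The point of the sketch is that multiplication across stages, rather than any single device, is what allowed sixteenth-century engineers to move a monolith of several hundred tons with men and horses.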

Readings and Sources

Frances and Joseph Gies, Cathedral, Forge, and Waterwheel: Technology and Invention in the Middle Ages (NY: HarperCollins, 1994), summarize the current literature on the machine technology of the Middle Ages. Agricola's work as translated by Herbert and Lou Henry Hoover is still available in a Dover reprint. Also available from Dover is The Various and Ingenious Machines of Agostino Ramelli, trans. by Martha Teach Gnudi, with notes by Eugene S. Ferguson, which is a treasury of Renaissance (i.e. medieval) machine technics.  The catalog for the exhibition Mechanical Marvels: Invention in the Age of Leonardo, organized by the Istituto e Museo di Storia delle Scienze in Florence and the Italian firm Finmeccanica in 1997, offers a rich collection of period illustrations and photographs of reconstructions of Renaissance machines. It was accompanied by a compact disc that included animations of the machines at work.

Week 2, Mill and Manor, Cathedral and Town

With the technical system of the mill covered in Lecture 2 and that of the cathedral laid out in Erlande-Brandenburg's book, the lectures for this week turn to the place of the two systems in their respective social settings. Given the nature of the readings, the lectures emphasize the social and economic presence of the mill and the cathedral. A highly schematic description of the traditional agricultural village sets some of the background for understanding the upheaval brought about by the Industrial Revolution, especially among cottagers and marginal labor.

Readings

The chapters in Holt's study place the medieval English mill in its manorial setting and bring the miller out from behind the caricature presented by the literature of the period. In both cases, the argument offers insight into the sources and methods of cultural history of technology in pre-modern societies.

Sources

The mill is arguably the most sophisticated technical system of the preindustrial world. Certainly it was the most prevalent at the time; Domesday Book recorded almost 6000 watermills in England in 1086, and a century later the windmill began to spread from its apparent origin in East Anglia. Mills dotted the countryside throughout Europe, filled the understructures of the bridges of Paris, and floated in the major rivers. Structurally, the mill seems the evident model for the mechanical clock, devised sometime in the late 13th or early 14th century. Yet, as a technical system and as a social presence the mill has until recently escaped the attention of historians.

Three works now admirably fill the lacuna, the most recent being Richard L. Hills's Power from Wind: A History of Windmill Technology (Cambridge, 1994). Terry S. Reynolds' Stronger Than A Hundred Men: A History of the Vertical Water Wheel (Baltimore: Johns Hopkins, 1983) is a wide-ranging study that moves between technical details and social settings from ancient times down to the nineteenth century and that rests on what seems an exhaustive bibliography. Richard Holt's The Mills of Medieval England (Oxford, 1988) is geographically more focused but socially more detailed. Drawn from extensive archival research, it provides a well illustrated, technically proficient account of the construction and working of water and wind mills, and then sets out the economic, legal, and social structures that tied them to medieval English society. In this latter area, Holt's account significantly revises the long standard interpretation of Marc Bloch in his classic "The Advent and Triumph of the Watermills" (in his Land and Work in Medieval Europe, London 1967; orig. "Avènement et conquête du moulin à eau", Annales d'histoire économique et sociale 7(1935), 538-563), at least for England. John Muendel has written a series of articles on mills in northern Italy, exploring both their technical and their economic structure; see for example "The distribution of mills in the Florentine countryside during the late Middle Ages" in J.A. Raftis, ed., Pathways to Medieval Peasants (Toronto, 1981), 83-115, and "The horizontal mills of Pistoia", Technology and Culture 15(1974), 194-225. Chapter 2 of Lynn White's classic Medieval Technology and Social Change sets the background of the mill in "The Agricultural Revolution of the Middle Ages", and Chapter 3 treats water power, the mill, and machines in general. Marjorie Boyer's "Water mills: a problem for the bridges and boats of medieval France" (History of Technology 7(1982), 1-22), an outgrowth of her study of medieval bridges, calls attention to the urban presence of mills and makes it all the more curious that medieval scholars could talk of the machina mundi without mentioning them.

Two other monographs provide valuable guides to the technical structure of the mill. John Reynolds' Windmills and Watermills (London, 1970) and Rex Wailes's The English Windmill (London, 1954) are both richly illustrated, though Wailes's superb drawings, born of thirty years of visiting mills, often make the operations clearer than do Reynolds' photographs.

Robert Mark's Experiments in Gothic Structures analyses in some detail the technical structure of cathedrals, taking advantage of recent engineering methods such as photoelasticity and finite-element analysis. As the course developed, it became clear that the book is too long and technically detailed. The lecture lays out the main argument, illustrated by slides used in the book and reinforced by Mark's article coauthored with William W. Clark, "Gothic Structural Experimentation", Scientific American 251,5(1984), 176-185. There Mark and Clark document technical communication between builders at Notre Dame in Paris and at the new cathedral in Bourges. Jean Gimpel, The Cathedral Builders (NY, 1961), and Henry Kraus, Gold Was the Mortar: The Economics of Cathedral Building (London, 1978), place the building of cathedrals in the wider social context of the medieval city. In "The Education of the Medieval English Master Masons", Mediaeval Studies 32(1970), 1-26, and "The Geometrical Knowledge of the Medieval Master Masons", Speculum 47(1972), 395-421, Lon Shelby dispels the legends of a secret science by describing the measurements the masons actually carried out in building large structures. David Macaulay's Cathedral brings medieval construction to life in his inimitable drawings, the basis for an animated TV documentary.

The social drama of the building of a cathedral has attracted the attention of several novelists.  Ken Follett's Pillars of the Earth is my favorite, in particular for its technical accuracy and for its focus on the masons whose skill made the buildings possible.

Week 3, Power Machinery, The Steam Engine

The two lectures use slides and models to explain the workings of the new textile machinery and of the steam engine as they were invented and developed in the mid-to-late 18th century. The first focuses on the mechanization of spinning and the different relationship between operator and machine in the jenny and the frame, while the second follows the motives behind Watt's improvements on the Newcomen design and the resulting shift of focus from mines to factories and then to railroads.

Readings

The first chapter of Peter Laslett's The World We Have Lost portrays the social structure of pre-Industrial England, while Richard L. Hills's Power in the Industrial Revolution complements the lectures in several directions: Chapter 2 provides a survey of the process of transforming raw fiber into finished cloth, with an emphasis on the increasing difficulty of translating the manual task into mechanical action; Chapter 10 analyzes the problem of bringing power to the machines, with stress on the difficulties of measurement; and Chapter 11 describes the special difficulties of weaving by power.

Sources

For a more extensive and recent look at "the world we have lost", see Part I of Patricia Crone's Pre-Industrial Societies (Oxford/Cambridge, MA, 1989), as well as Carlo M. Cipolla's Before the Industrial Revolution: European Society and Economy, 1000-1700 (3rd. ed. NY, 1994).

In addition to Hills's comprehensive and well documented account, Walter English's The Textile Industry: An Account of the Early Inventions of Spinning, Weaving, and Knitting Machines (NY, 1969) contains detailed descriptions, supported by excellent diagrams and illustrations. Also helpful for illustrations are Maurice Daumas (ed.), A History of Technology and Invention (NY, 1978; orig. Histoire générale des techniques, Paris, 1969), vol. III, Part 7, Chaps. 1-2, and Wallace's Rockdale (see following week).

The steam engine must be the best described machine in the history of technology. Particularly good explanations and illustrations can be found in Eugene S. Ferguson's "The Origins of the Steam Engine", Scientific American (Jan. 1964; repr. in Gene I. Rochlin (comp.), Scientific Technology and Social Change: Readings from Scientific American, San Francisco, 1974, Chap. 6); and D.S.L. Cardwell, From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age (Ithaca, 1971). The classic account in English remains Henry W. Dickinson, A Short History of the Steam Engine (London, 1938; 2nd. ed. 1963). On Watt in particular, see the documentary history assembled by Eric Robinson and A.E. Musson, James Watt and the Steam Revolution (London: Adams and Dart, 1969), which includes at the end color reproductions of engineers' wash drawings pertinent to the various patents.

Week 4, The Factory and The Factory System

The two lectures of this week and the first of the next pick up the agricultural society of the late Middle Ages and follow the transition to the new industrial society of the mid-19th century. The central element of that transition is the factory, viewed first as a system of production by machines, then as an organization of human labor, and finally as a new social and economic presence in British politics.

Readings

The readings provide supplementary and contrasting details for the lectures, which perforce are quite general and schematic. Chap. 12 of Hills's book opens the issue of how the machines themselves were produced.  J.T. Ward's The Factory System documents from contemporary sources the transition from domestic to factory production in the textile industry. The readings selected focus on artisans and domestic workers before the introduction of machines and on the initial response to the new system from various perspectives. Ward presents a seemingly bewildering potpourri of details, so it is important to look for the structures that hold them together. The lectures should provide some guidance.

Anthony F.C. Wallace's Rockdale describes the construction (or transformation) of a water-powered mill and reviews the constituent processes of cotton production by machine, supporting his discussion with excellent drawings. He pursues in some detail a question left open by the lectures, namely how manufacturers got hold of the machinery for their factories. Finally, he introduces readers to a group of families who worked in the mills. His verbal descriptions come to life when supplemented by David Macaulay's drawings in Mill (NY, 1983).

Sources

Jennifer Tann's The Development of the Factory (London, 1970) provides the most useful guide to the subject. Tann, who is the editor of The Selected Papers of Boulton & Watt (Vol. I, The Engine Partnership, MIT, 1981), builds her account on the B&W archives, from which she reproduces a rich selection of drawings and layouts of early factories. As she characterizes her work, "One of the themes which emerges from the following pages is that it was the same few manufacturers who adopted the costly innovations such as fire-proof buildings, who installed gas lighting, steam or warm air heating and fire extinguishing apparatus; they were the giants, the ones who are most likely to have left some record of their activities behind, yet in many respects, they were uncharacteristic. They appear to have found little difficulty in recruiting capital yet there were many smaller manufacturers who found difficulty in obtaining long-term loans, to whom a fire-proof factory or gas lighting would have seemed an unobtainable luxury. In this respect some of the most valuable letters are from those manufacturers who decided against buying a Boulton & Watt steam engine which was a good deal more expensive than an atmospheric engine or a simple water wheel."(p.2)

Other useful studies include:

William Fairbairn, Treatise on Mills and Millwork (2 vols., London, 1861-63; 2nd ed. 1864-65; 3rd ed. 1871-74; 4th ed. 1878); as the several editions suggest, this was the fundamental manual of mill design, covering the building, the source of power, the transmission power, heating, lighting, etc. Rich in illustrations.

Brian Bracegirdle, et al., The Archaeology of the Industrial Revolution (London, 1973), which contains magnificent black/white and color photos and line drawings of mills and steam engines, together with a brief but choice bibliography.

J.M. Richards, The Functional Tradition in Early Industrial Building (London, 1959); The many illustrations of early factories, water and wind mills, etc. are more helpful than the text.

A variety of sources provide glimpses of the workforce that first entered the new factories. Frank E. Huggett gives a short account built around extensive quotations from original sources in The Past, Present and Future of Factory Life and Work: A Documentary Inquiry (London, 1973). For a shorter account of the difficulties of adjustment to the regimen of the factory in its earliest days, based on a sampling of documentary evidence reflecting the experience of literate participants, see Sidney Pollard, "Factory Discipline in the Industrial Revolution", Economic History Review 16(1963), 254-271. Humphrey Jennings has compiled a potpourri of original reports in Pandaemonium: The Coming of the Machine as Seen by Contemporary Observers, 1660-1886 (NY, 1985).

Week 5, The Formation of Industrial Society, Industrial Ideologies

In 1815 England was ruled by a constitutional monarch, a hereditary nobility, and a landed gentry under a settlement worked out in 1689 after a half-century of turmoil. National policy was limited to matters of trade and diplomacy, with internal matters left to local government as embodied by Justices of the Peace meeting in Quarter Sessions. Land was the basis of political power, even as it was steadily losing in economic power to commerce and industry, the interests of which were largely unrepresented in Parliament. Over the next fifty years, that balance shifted radically, as the Constitution was reshaped to extend political voice to an urban electorate and to respond at the national level to the social and economic problems posed by rapid industrialization. That the transformation occurred without major violence makes it a remarkable chapter in British history. The first lecture traces the main outline of that process. The second examines contemporary efforts to explain the changes occurring at the time and to determine the basic structure of the newly emerging society. The lecture emphasizes the contrast between the political economists, who viewed industrialization as a perturbation in the dynamical system of the market, and Marx, who saw it as a new stage in the evolution of political society.

Reading

Charles Babbage's On the Economy of Machinery and Manufactures and Karl Marx's "Machinery and Large-Scale Industry" (Capital, Vol. I, Chap. 15) offer strongly contrasting views of the nature and future of the new industrial system. The specific chapters in Babbage show him trying to think out systems of production and looking forward to division of mental labor, i.e. management, which will become a theme later in the course. Few people have actually read Marx, who must rank as one of the greatest historians of technology; this is an opportunity to meet him on his home ground. Finally, E.P. Thompson's classic "Time, Work-Discipline, and Industrial Capitalism", Past and Present 38(1967), 56-97, offers a glimpse into the changing lives of industrial workers.

Sources

Histories of the Industrial Revolution in Britain abound. I have drawn mostly on S.G. Checkland, The Rise of Industrial Society in England, 1815-1885 (London, 1971), E.P. Thompson, The Making of the English Working Class (NY, 1963), and Phyllis Deane, The First Industrial Revolution (Cambridge, 1965). A recent study is Maxine Berg, The Age of Manufactures, 1700-1820: Industry, Innovation, and Work in Britain (Totowa, 1985/Oxford, 1986).

As an introduction to the history of economic thought, Robert L. Heilbroner's The Worldly Philosophers (NY, 1953; repr. in several editions since) is succinct and readable. For a reconsideration of Marx's technological determinism, see Donald MacKenzie, "Marx and the Machine", Technology and Culture 25(1984), 473-502, and John M. Sherwood, "Engels, Marx, Malthus, and the Machine", American Historical Review 90,4(1985), 837-65.

Week 6, The Machine in the Garden, John H. Hall and the Origins of the "American System"

England was the prototype for industrialization. The rest of the world could look to that country as an example of what to emulate and what to avoid. Some saw a land of power and prosperity and wondered aloud whether God might after all be an Englishman; others saw "dark, Satanic mills" and the "specter of Manchester" with its filthy slums and human misery. Americans in particular thought hard about industry and whether it could be reconciled with the republican virtues seemingly rooted in an agrarian order. "Let our workshops remain in Europe," urged Jefferson in his Notes on Virginia in 1785, and he was no happier for being wiser about the feasibility of that policy after the War of 1812. Nor did all his fellow countrymen agree in principle. Some saw vast opportunities for industry in a land rich in natural resources, including seemingly endless supplies of wood and of waterpower. The debate between the two views became a continuing theme of American literature, characterized by Leo Marx as The Machine in the Garden (NY, 1964).

The combination of abundant resources and scarce labor meant that industrialization in America would depend on the use of machinery, and from the outset American inventors strove to translate manual tasks into mechanical action. For reasons that so far elude scholarly consensus, Americans' fascination with machines informed their approach to manufacturing to such an extent that British observers in the mid-19th century characterized machine-based production as the "American System". Precisely what was meant by that at the time is not clear, but by the end of the century it came to mean mass production by means of interchangeable parts. The origins of that system lay in the new nation's armories, in particular at Harpers Ferry, where John H. Hall first devised techniques for serial machining of parts within given tolerances.

Reading

New to the course for 2001 is Ruth Schwartz Cowan's A Social History of American Technology, which provides the background for lectures dealing with case studies of the "republican technology" of the Lowell factories and the beginnings of the "American System" of mass production at Harpers Ferry Armory.

Sources

Chapter 2 of John F. Kasson's Civilizing the Machine: Technology and Republican Values in America, 1776-1900 relates Lowell's great experiment in combining automatic textile machinery with a transient female workforce to avoid a permanent urban proletariat. As a social experiment to reconcile industry with democratic values, Lowell has intrigued labor historians almost as much as it did contemporary observers. The most comprehensive account, based on payroll records and tax inventories, is Thomas Dublin, Women at Work: The Transformation Of Work and Community in Lowell, Massachusetts, 1826-1860 (NY, 1979). His Farm to Factory: Women's Letters, 1830-1860 (NY, 1981) transmits the workers' own words about their lives, as does Philip S. Foner's The Factory Girls (Urbana, 1977), meant to counteract the rosy picture painted in the factory-sponsored Lowell Offering, which has recently been reprinted. For a collection of original sources, factory views, and maps, see Gary Kulik, Roger Parks, and Theodore Penn (eds.), The New England Mill Village, 1790-1860 (Documents in American Industrial History, II, Cambridge, MA, 1982).

Merritt Roe Smith's Harpers Ferry Armory remains the standard account of John H. Hall's system for producing rifles with interchangeable parts on a scale large enough to be economical.  In addition to the technical details of the machinery and managerial techniques that made an industry of gunsmithing, Smith examines the political and social structure of the master gunmakers and the threats that the new technology posed to their way of life. Elting E. Morison's From Know-How to Nowhere: The Development of American Technology (NY, 1974) is a thoughtful and provocative account of engineering from colonial times to the early 20th century, emphasizing the loss of autonomy and accountability that came with modern industrial research. Two more recent accounts are David Freeman Hawke, Nuts and Bolts of the Past: A History of American Technology (NY, 1988), and Thomas P. Hughes, American Genesis: A Century of Invention and Technological Enthusiasm, 1879-1970 (NY, 1989). Brooke Hindle and Steven Lubar provide a richly illustrated survey of industrialization in America in their Engines of Change: The American Industrial Revolution (Washington, 1986). The book is based on an exhibit at the Smithsonian's National Museum of American History, the pictorial materials for which have been recorded on a videodisc available on request from the Museum. The now standard account of the development of mass production is David A. Hounshell, From the American System to Mass Production: The Development of Manufacturing Technology in the United States (Baltimore, 1984). A shorter version of his main thesis, together with an account of the armory system by Smith, is contained in Otto Mayr and Robert C. Post (eds.), Yankee Enterprise: The Rise of the American System of Manufactures (Washington, 1982).

Week 7, Precision and Production, Ford's Model T: A $500 Car

When Americans first began machine-based production, the United States had no machine-tool industry other than the shops attached to the factories themselves and the shops of traditional artisans such as clockmakers and gunsmiths. Using traditional hand tools, machine builders worked to routine tolerances of 1/100", and precision was achieved by fitting part to part. In 1914, the first full year of assembly-line production at Ford, rough surveys revealed some 500 firms producing over $30,000,000 worth of machine tools ranging from the most general to the most specific. Over the intervening century, routine shop-floor precision increased from 1/100" to 1/10,000", and the finest instruments could measure 1/1,000,000". Such precision does not occur naturally, nor are the means of attaining it self-evident. Indeed, Britain's leading machinist, Joseph Whitworth, testified before Parliament that interchangeability and full machine production were not possible in principle. The achievement of the requisite precision over the course of the century is a remarkable story, still not told outside the specialist literature, and the first lecture is an effort to tell it.

Accuracy to 0.0001", achieved automatically by machines, was a prerequisite of Ford's methods of production and hence of the automobile he designed to meet the needs and means of the millions of potential owners. The second lecture backs up to provide an account of the invention of the internal combustion engine, which, like the steam engine, was originally conceived as a stationary source of power but was adapted to use in a vehicle. After a quick survey of the earliest automobiles, the lecture "reads" Ford's design of the Model T, first with respect to its intended user and then with respect to the methods by which Ford could produce it at an affordable price. Appendix I is a version of the second part of the lecture used with general audiences.

Reading

Cowan's book again provides general background for lectures on the origins of consumer society in the move of machinery from the factory to the home.  Nathan Rosenberg's seminal article, "Technological Change in the Machine Tool Industry, 1840-1910", Journal of Economic History 23(1963), 414-443, reveals the characteristics of machine tools that facilitated, or perhaps even made possible, the rapid diffusion of new techniques and levels of precision. The later lectures on software hark back to Rosenberg's interpretation when exploring the models of production informing current software engineering.

Sources

The essays by A.E. Musson, Paul Uselding, and David Hounshell in Mayr and Post's Yankee Enterprise provide accounts, respectively, of the British background, the development of precision instrumentation, and the development of mass production by means of interchangeable parts. In other years, I have used the early chapters of Hounshell's From the American System to Mass Production. Robert S. Woodbury, who first debunked "The Legend of Eli Whitney and Interchangeable Parts" (Technology and Culture 1(1960), 235-254), made a start on a comprehensive history of machine tools in the 19th century, working on a machine-by-machine basis, which he intended as prelude to a history of precision measurement and interchangeable parts. His histories of the gear-cutting machine, grinding machine, lathe, and milling machine, combined in 1972 as Studies in the History of Machine Tools, provide technical details and illustrations. W. Steeds offers a comprehensive, illustrated account in A History of Machine Tools, 1700-1910 (Oxford, 1969); cf. also L.T.C. Rolt, Tools for the Job: A History of Machine Tools (rev. ed. London, 1986), and Chris Evans, Precision Engineering: an Evolutionary View (Bedford: Cranfield Press, 1989). Machine tools caught the particular attention of the 1880 census, for which Charles H. Fitch compiled under the title Report on Power and Machinery Employed in Manufactures (Washington, 1888) an extensive, richly illustrated inventory of the tools then used in American industry. Frederick A. Halsey's classic Methods of Machine Shop Work (NY, 1914) defines the terms and standards of the industry at the turn of the 20th century. The Armington and Syms Machine Shop at Greenfield Village, Henry Ford Museum, in Dearborn is a restoration of a 19th-century production shop, powered by a steam engine through an overhead belt-and-pulley system.

Perhaps the best short account of the internal combustion engine is Lynwood Bryant's "The Origin of the Automobile Engine" (Scientific American, March 1967; repr. in Gene I. Rochlin (comp.), Scientific Technology and Social Change: Readings from Scientific American, San Francisco, 1974, Chap.9), which focuses on Otto's development of the four-cycle engine on the basis of a specious notion of "stratified charge". For greater detail, see his two articles, "The Silent Otto", Technology and Culture 7(1966), 184-200, and "The Origin of the Four-Stroke Cycle", ibid. 8(1967), 178-198, and for contrast, see his "Rudolf Diesel and His Rational Engine", Scientific American, August 1969 (repr. in Rochlin, Chap.10). On the early development of the automobile, see James J. Flink, America Adopts the Automobile, 1895-1910 (Cambridge, MA, 1970). John B. Rae offers a brief general history in The American Automobile (Chicago, 1965).

Allen Nevins tells the story of the Model T, which realized Ford's vision of a cheap, reliable car for the mass market, in Vol.I of his three-volume Ford: The Times, the Man, the Company (NY, 1954). However, the vehicle tells its own story when viewed through photographs of its multifarious uses, diagrams from the user's manual and parts list, advertisements by suppliers of parts and options, and stories about the "Tin Lizzie". Floyd Clymer's Historical Motor Scrapbook: Ford's Model T (Arleta, CA, 1954) offers an assortment of such materials, along with sections of the Operating Manual and Parts List. Reproductions of the manual and parts list are also available at the Henry Ford Museum in Dearborn. Several companies produced plastic and metal models of the car in varying detail, though nothing beats seeing the car itself, perhaps in the hands of a local antique car buff.

Week 8, Highland Park and the Assembly Line, Ford and the Five-Dollar Day

The first lecture moves from the Model T to the machines Ford designed to produce it and to the organization of those machines in his new assembly-line factory at Highland Park. With the machines in place and the pace of assembly established, Ford faced the problem of keeping increasing numbers of people at work tending the machines and keeping pace with the line. Although most jobs required little or no skill, they did demand sustained attention to repetitive tasks over a continuous period of time. The need to combat a 300% annual turnover among his labor force, combined with $27 million in excess profits in January 1914, induced Ford and his Vice-President James Couzens to introduce the "Bonus Plan", by which the standard wage at Highland Park jumped overnight from $2.30 to $5.00 for an eight-hour day. But the $5 day was only the most striking of Ford's efforts to retain the loyalty of his workers. Through John R. Lee and the Sociological Department, the company had already begun a program of factory outreach, involving itself in the lives of its employees. Although welcomed at first, the essentially paternalistic system led eventually to an oppressive system of control and triggered the union strife of the '30s which erased Ford's earlier benevolence from popular memory.

Reading

Henry Ford spoke for himself (through a ghost writer) in the article on "Mass Production" that appeared in the 13th edition of the Encyclopedia Britannica, and it is instructive, especially given the retrospectively critical stance of current historians, to see how the system looked through his eyes.  Among those historians is Stephen Meyer, whose book is the fullest historical account of the labor policy surrounding the $5 day.

Sources

Ford's Highland Park Plant, built to produce the Model T by his new methods, caught the attention of industrial engineers when it began full assembly-line operation in 1914. As a result, journals of the day offered extensive descriptions and illustrations of the plant. Perhaps the most informative contemporary source, Horace L. Arnold and Fay L. Faurote, Ford Methods and the Ford Shops, began as a series of articles in Engineering Magazine. Faurote was a member of the Taylor Society, and his account looks at Highland Park from the perspective of Scientific Management, especially in its emphasis on the paperwork involved in management of workers and inventory. David Hounshell's account in Chapters 6 and 7 of From the American System to Mass Production draws liberally from the photo collection of the Ford Archives, and the Smithsonian Institution has a short film loop depicting the assembly line in action. Lindy Biggs's The Rational Factory: Architecture, Technology, and Work in America's Age of Mass Production (Baltimore, 1996) analyzes the Ford plants as buildings in motion.

Allen Nevins's biography Ford: The Times, the Man, the Company is a useful counterbalance to Meyer's interpretation of the motives behind the $5 day. Ely Chinoy's Automobile Workers and the American Dream (Garden City, 1955) pursues the long-term effects of Ford's system of production on the workers it employed.

Week 9, Taylorism and Fordism, Mass Distribution: The Consumer Society

Since at the time Ford's methods were often associated with those proposed by Frederick W. Taylor under the name of "task management" or, more popularly, "Scientific Management", the first lecture examines Taylor's career as a consultant on shop-floor organization and the nature and scope of his Principles of Scientific Management, published at just about the time Ford was laying out Highland Park. In the end, the lecture emphasizes the quite different assumptions of the two men concerning the role of the worker in machine production and hence the essential incompatibility of Taylor's principles with Ford's methods of production. Nonetheless, as Taylor's followers found when they visited Highland Park, in matters of supervision and inventory control the two systems had much in common.

The $500 car (which by 1924 cost $290) was the most recent of a host of machines built for and sold to a new middle-class consumer society, which, through the $5 day, came to include the automobile worker. Mass production went hand-in-hand with mass distribution; indeed, the former made no sense without the latter. The second lecture presents a survey of the developments in communication, transportation, and management that made possible the patterns of consumption and the concomitant restructuring of society and politics noted by the Lynds in Middletown (Muncie, IN) in 1924.

Reading

Cowan provides an overview of the newly emerging consumer society, and selections from Robert S. and Helen Lynd's classic sociological study of Middletown offer a contemporary glimpse of that society as it was taking shape.

Sources

The best source for understanding Frederick W. Taylor is his own tract, The Principles of Scientific Management (NY, 1911; repr. 1939, 1947, 1967). The most recent and complete biography is Robert Kanigel's The One Best Way: Frederick Winslow Taylor and the Enigma of Efficiency (NY, 1997). Daniel Nelson's study of Taylor, Frederick W. Taylor and the Rise of Scientific Management (Madison, 1980) complements his earlier account of factory management, while Samuel Haber's Efficiency and Uplift: Scientific Management in the Progressive Era, 1890-1920 (Chicago, 1964) places Taylor in the context of the conservation and efficiency movements of turn-of-the-century America. Hugh G.J. Aitken's Taylorism at Watertown Arsenal: Scientific Management in Action, 1908-1915 (Cambridge, MA, 1960) remains a classic study of the development and implications of Taylor's ideas. While Ford himself perhaps could honestly claim not to have known about Taylor's methods, Hounshell shows that many of the people who worked with him in designing the assembly line and organizing the Ford workers did have backgrounds in Scientific Management. Alfred D. Chandler's magisterial The Visible Hand: The Managerial Revolution in American Business (Cambridge, 1977) puts Taylor and Ford in the context of the development of new managerial practices in America at the turn of the century. Judith A. Merkle's Management and Ideology: The Legacy of the International Scientific Management Movement (Berkeley: University of California Press, 1980), Stephen P. Waring's Taylorism Transformed: Scientific Management Theory Since 1945 (Chapel Hill: University of North Carolina Press, 1991), and Nelson's A Mental Revolution: Scientific Management Since Taylor (Columbus: Ohio State University Press, 1992) bring the story down to the present.

The second lecture draws heavily from Alfred D. Chandler, Jr., The Visible Hand: The Managerial Revolution in American Business (Cambridge, 1977) and Daniel J. Boorstin, The Americans: The Democratic Experience (NY, 1973). For a more recent, richly illustrated account, see Susan Strasser, Satisfaction Guaranteed: The Making of the American Mass Market (Washington, DC, 1989).

Week 10, From the Difference Engine to ENIAC; From Boole to EDVAC

The first lecture traces the dual roots of the stored-program digital electronic computer viewed as the combination of a mechanical calculator and a logic machine. Taking the first designs as both flexible and inchoate, the second lecture examines the groups of people who gave the machine its shape by incorporating it into their enterprises. In particular, the lecture looks at the means by which the nascent computer industry sold the computer to business and industry, thus creating the machine by creating demand for it.
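
The fusion of calculator and logic machine can be made concrete in a few lines of code. The following sketch (in Python, and purely by way of illustration; nothing like it appears in the sources below) builds a binary adder out of nothing but Boolean operations, the step from Boole's algebra of logic to arithmetical machinery that the lecture traces:

    def half_adder(a, b):
        # the sum bit is XOR, the carry bit is AND
        return a ^ b, a & b

    def full_adder(a, b, carry_in):
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2

    def add_bits(x_bits, y_bits):
        # add two equal-length little-endian lists of bits
        carry, out = 0, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        out.append(carry)
        return out

    # 3 (binary 011) + 5 (binary 101), little-endian; result is 8 (binary 1000)
    print(add_bits([1, 1, 0], [1, 0, 1]))   # [0, 0, 0, 1]

Everything here is pure Boolean logic; that arithmetic falls out of it is precisely the insight that made a logic machine worth building.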

Reading

Aspray and Campbell-Kelly provide perhaps the best historical account of the computer, emphasizing the environments into which it was introduced when it was new and the role they played in shaping its development.

Sources

Another recent history is Paul Ceruzzi's A History of Modern Computing (Cambridge, MA, 1998), which provides considerable detail about the development of the industry. For the development of the machine itself, Stan Augarten's Bit by Bit: An Illustrated History of Computers is a generally reliable, engagingly written, and richly illustrated survey from early methods of counting to the PC and supercomputer. Michael R. Williams's A History of Computing Technology (Prentice-Hall, 1985) takes a more scholarly approach to the same material but emphasizes developments before the computer itself. In 407 pages of text, the slide rule appears at p.111 and ENIAC at p.271; coverage ends with the IBM/360 series. Although once useful as a general account, Herman Goldstine's still oft-cited The Computer from Pascal to Von Neumann (Princeton, 1973) retains its value primarily for its personal account of the ENIAC project and of the author's subsequent work at the Institute for Advanced Study in Princeton.

Martin Davis's The Universal Computer:  The Road from Leibniz to Turing (New York, 2000; paperback under the title Engines of Logic, 2001) has recently joined Sybille Krämer's Symbolische Maschinen: die Idee der Formalisierung in geschichtlichem Abriss (Darmstadt, 1988) in tracing the origins of the computer to the development of mathematical logic. William Aspray's dissertation, "From Mathematical Constructivity to Computer Science: Alan Turing, John von Neumann, and the Origins of Computer Science" (Wisconsin, 1980), covers the period from Hilbert's Program to the design of EDVAC, as does Martin Davis's "Mathematical Logic and the Origin of Modern Computers", in Esther R. Phillips (ed.), Studies in the History of Mathematics (MAA Studies in Mathematics, Vol. 26; NY, 1987). The nineteenth-century background belongs to the history of mathematics and of logic proper, but the scholarly literature in those fields is spotty. Andrew Hodges's biography, Alan Turing: The Enigma (NY, 1983), is a splendid account of Turing's work and served as the basis for a compelling stage play, Breaking the Code. Aspray's John von Neumann and the Origins of Modern Computing (MIT, 1990) explores in some detail von Neumann's work both in the design and the application of computers.

Those who want to get right down into the workings of the computer should turn to Charles Petzold, Code: The Hidden Language of Computer Hardware and Software (Redmond, WA, 1999). Softer introductions may be found in Alan W. Biermann's Great Ideas in Computer Science: A Gentle Introduction (2nd ed., MIT Press, 1997) and Jay David Bolter's Turing's Man: Western Culture in the Computer Age (Chapel Hill, 1984).

Week 11, The Development of the Computer Industry, The Software Paradox

In keeping with the dual origins of the computer, the development of the industry since the early '50s has two distinct, though related, aspects. Through transistors, integrated circuits, and VLSI, computers themselves have increased in power by a factor of 100 every five years, while dropping in price at about the same rate. Rapid progress in the development of hardware has made visionary devices commonplace within a span of five or ten years. IBM, DEC, and Apple represent the successive stages by which computers were transformed from specially designed capital investments to mass-produced consumer items over the span of thirty years.
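
The figures are easier to grasp as a compound rate. A back-of-the-envelope computation in Python (an illustration, not a figure drawn from the sources below) makes the arithmetic explicit:

    # convert "a factor of 100 every five years" into an annual rate
    factor, years = 100, 5
    annual = factor ** (1 / years)
    print(f"implied growth: about x{annual:.2f} per year")            # ~x2.51
    print(f"implied growth over 30 years: x{factor ** (30 / years):,.0f}")
    # over 30 years: x1,000,000,000,000

At roughly two and a half times per year, thirty years of such growth multiplies power a trillionfold, which is why devices that seemed visionary in one decade were commonplace in the next.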

The software paradox is simply stated: programmers have successfully automated everyone's job but their own. With the commercialization of the computer came the need to provide customers with the programs that matched its power to their purposes, either directly through application programs or indirectly through programming languages, operating systems, and related tools. Both industry and customers soon found themselves hiring and trying to manage large numbers of programmers who had no previous training either in computers or in systems analysis but whose programming skills gave them effective control over their work. To address the resulting issues of productivity and quality control, computer engineers and managers turned to earlier models of production, in particular through automatic programming and the software equivalent of interchangeable parts. So far, efforts to Taylorize or Fordize the production of programs have been unsuccessful. Nonetheless, they testify to the abiding impression that Taylor and Ford have made on American engineering and thus provide firm historical roots to modern technology.
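
What the software equivalent of interchangeable parts might look like can be suggested schematically. In the following sketch (hypothetical Python, with names invented for the purpose), two components written to a common specification can be substituted without altering the code that assembles them, much as a standardized part drops into any chassis on the line:

    from typing import Protocol

    class Sorter(Protocol):
        # the "specification" every interchangeable part must meet
        def sort(self, items: list) -> list: ...

    class BuiltinSorter:
        def sort(self, items: list) -> list:
            return sorted(items)

    class InsertionSorter:
        def sort(self, items: list) -> list:
            out = []
            for x in items:
                i = 0
                while i < len(out) and out[i] < x:
                    i += 1
                out.insert(i, x)
            return out

    def report(data: list, sorter: Sorter) -> list:
        # the "assembly line" stays fixed; the part plugged into it varies
        return sorter.sort(data)

    print(report([3, 1, 2], BuiltinSorter()))    # [1, 2, 3]
    print(report([3, 1, 2], InsertionSorter()))  # [1, 2, 3]

That such substitution remains difficult to achieve at industrial scale is one way of restating the software paradox.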

Reading

The choice of Tracy Kidder's The Soul of a New Machine is aimed directly at continuing the theme of technology and the nature of work. In addition to portraying the complex organization on which modern technological development depends, it raises intriguing questions of power and exploitation.

Sources

The development of the computer industry is only now coming under the scrutiny of historians, and most of the current literature stems from journalists. The foremost exceptions are the recent books by Paul Ceruzzi and by William Aspray and Martin Campbell-Kelly, both of which offer a much-needed and long-awaited survey of the history of the industry from its pre-computer roots to the present. For a review of the state of the field several years ago, see my article, "The History of Computing in the History of Technology", Annals of the History of Computing 10(1988), 113-125 [pdf], updated in "Issues in the History of Computing", in Thomas J. Bergin and Rick G. Gibson (eds.), History of Programming Languages II (NY: ACM Press, 1996), 772-81. The Annals themselves constitute one of the most important sources. Among the most useful accounts are Augarten's Bit by Bit; Kenneth Flamm, Creating the Computer: Government, Industry, and High Technology (Washington, 1988); Katharine Davis Fishman, The Computer Establishment (NY, 1981); Howard Rheingold, Tools for Thought: The History and Future of Mind-Expanding Technology (NY, 1985); and Pamela McCorduck, Machines Who Think (San Francisco, 1979). David E. Lundstrom's A Few Good Men from Univac (Cambridge, MA, 1987) provides a critical look at the early industry, while the collaborative effort of Charles J. Bashe, Lyle R. Johnson, John H. Palmer, and Emerson W. Pugh on IBM's Early Computers (Cambridge, MA, 1986) provides an exhaustively detailed account, based on company documents, of IBM's entry into the market and the series of machines up to the 360, which is the subject of a second volume now nearing completion. Paul Freiberger and Michael Swaine's Fire in the Valley: The Making of the Personal Computer (Berkeley, 1984) remains one of the best accounts of the early days of the PC industry.

The history of software remains largely unwritten and must be gleaned from the professional literature.  For overviews see my articles, "The Roots of Software Engineering", CWI Quarterly, 3,4(1990), 325-34 [pdf] and "Software: The Self-Programming Machine", in Atsushi Akera and Frederik Nebeker (eds.), From 0 to 1:  An Authoritative History of Modern Computing (New York: Oxford U.P., 2002), as well as the entry "Software History" in Anthony Ralston et al., Encyclopedia of Computer Science, 4th edition (London, 2000).  For "A Gentle Introduction" to what software is about, see Alan W. Biermann, Great Ideas in Computer Science (Cambridge, MA, 1997).

There is a growing number of personal accounts and reminiscences by computer people.  Among the most thought-provoking and least self-serving are Ellen Ullman, Close to the Machine: Technophilia and Its Discontents (San Francisco, 1997), Richard P. Gabriel, Patterns of Software:  Tales from the Software Community (New York, 1996), and Robert N. Britcher, The Limits of Software:  People, Projects, and Perspectives (Reading, MA, 1999).

Week 12, Working Toward Choices, Where Are We Now?

The computer is only one of several technologies which, spawned or encouraged by the demands of World War II, rapidly transformed American society in the twenty-five years after 1945, bringing a general prosperity thought unimaginable even before the Depression. With that prosperity came new problems and a growing sense that technology threatened society as much as, or even more than, it fostered it. The first lecture reviews the major elements of modern high technology as it has developed since the war, and the second tries to put the issues it raises into the perspective of the course as a whole. In the end, the course has no answers to offer, but only questions that may prove fruitful in seeking them.

Reading

Langdon Winner's "Do Artifacts Have Politics?" argues that indeed they do, that is, that how technologies will be used is part of how they are designed. Although that view does not preclude unintended consequences, it does place responsibility for technologies on the people who create, maintain, and use them. The readings throughout the course offer ample material for putting Winner's thesis to the test.

Perhaps the most famous example discussed by Winner is the story of Robert Moses and the parkway bridges designed too low to allow access by bus to Jones Beach.  The story, taken from Robert Caro's well-known biography of Moses, turns out on close examination to be inaccurate in several details.  For a discussion of the story and its use by Winner, see Bernward Joerges, "Do Politics Have Artefacts?" Social Studies of Science 29,3(1999), 411-31 [JSTOR]; Steve Woolgar and Geoff Cooper, "Do Artefacts Have Ambivalence? Moses' Bridges, Winner's Bridges, and Other Urban Legends in S&TS", Ibid., 433-49 [JSTOR]; and Joerges, "Scams Cannot Be Busted: Reply to Cooper and Woolgar", Ibid., 450-57 [JSTOR].

Sources

Fred C. Allvine and Fred A. Tarpley, Jr. provide a brief survey of the major changes in the U.S. economy during the quarter century following World War II in The New State of the Economy (Cambridge, MA, 1977). Peter Drucker, The Age of Discontinuity (NY, 1968, 2nd ed. 1978), and John Kenneth Galbraith, The New Industrial State (NY, 1967, 3rd ed., 1978), lay particular emphasis on new technologies and their effect on our economic institutions, while Seymour Melman, Profits without Production (NY, 1983), and Michael Piore and Charles Sabel, The Second Industrial Divide: Possibilities for Prosperity (NY, 1984), question how positive those effects have been.

Winner's Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (Cambridge, MA, 1977) is a fully developed statement of the issues discussed in his article. Literature on the political assumptions underlying technology abounds; indeed, some of it provoked this course in the first place. Among the more recent and more interesting are David F. Noble, Forces of Production: A Social History of Industrial Automation (NY/Oxford, 1986), Walter A. McDougall, ...The Heavens and the Earth: A Political History of the Space Age (NY, 1985), Shoshana Zuboff, In the Age of the Smart Machine: The Future of Work and Power (NY, 1988), Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in the Cold War (Cambridge, MA: MIT, 1996), and Gene I. Rochlin, Trapped in the Net: The Unanticipated Consequences of Computerization (Princeton: PU Press, 1997). For the history of the Internet, see Janet Abbate, Inventing the Internet (Cambridge, MA: MIT Press, 1999).

As of 1996, the Internet and the World Wide Web, especially when grouped together under the concept of the National Information Superhighway, have become prime subjects for political and cultural analysis along the lines suggested by this week's readings and by the interpretive themes of the course. An article written twenty-five years ago retains its pertinence. In "The Mythos of the Electronic Revolution" (American Scholar 39(1969-70), 219-241, 395-424), James W. Carey and John J. Quirk place the claims of the 1930s for a revolution of electrical power in the framework of Leo Marx's Machine in the Garden, showing how the notions of the "electronic village" or the "technotronic era" fashionable in the late '60s are similarly 20th-century evocations of the middle landscape. Cyberspace would seem to be the latest.

Reading Period

Albert Borgmann's Holding Onto Reality: The Nature of Information at the Turn of the Millennium is one of a spate of recent books attempting to place the "information revolution" into some sort of historical perspective.  Others include Michael E. Hobart and Zachary S. Schiffman, Information Ages: Literacy, Numeracy, and the Computer Revolution (Baltimore, 1998), and James J. O'Donnell, Avatars of the Word:  From Papyrus to Cyberspace (Cambridge, MA, 1998). Less historical but nonetheless suggestive are John Seely Brown and Paul Duguid, The Social Life of Information (Boston, 2000) and Jay David Bolter and Richard Grusin, Remediation: Understanding New Media (Cambridge, MA, 1999).  Books on the Internet abound; perhaps the most important is Lawrence Lessig's Code and Other Laws of Cyberspace (New York, 1999), followed now by his The Future of Ideas: The Fate of the Commons in a Connected World (New York, 2001).

Sample Examination Questions

The lectures and readings of this course tend to emphasize the ways in which inventors, entrepreneurs, and workers looked upon machines as determinants of their socio-economic life.  At several points, however, we have caught glimpses of inventors and onlookers who have seen in machines expressions, either direct or symbolic, of the ideals and aspirations of their society.  Using specific examples over the range of the course, explore the role of the creative and esthetic imagination in determining the ways societies shape and respond to their technologies.

"Although historians often speak of 'the industrial revolution' and 'the rise of the factory system' in the singular, there have in reality been not one but three such revolutions.  The first was the mechanization of the textile industry in the late 18th and early 19th centuries, the second was the mechanization of the consumer durables industry in the middle-to-late 19th century through the 'American System', and the third was the growth of 'rational management' in the early 20th century.  In each of these 'revolutions' the technological basis, the economic objective, and the impact upon labor were completely different, and it is historically false to see the three as phases of a single development, as history texts usually do."  Discuss critically.

"But, as is common knowledge, an invention rarely spreads until it is strongly felt to be a social necessity, if only for the reason that its construction then becomes a matter of routine."  The eminent French historian, Marc Bloch, made this general claim in an article about the watermill in the Middle Ages.  Discuss its validity with reference to the automobile and the computer.

You have read examples of two analyses of the character and effects of industrialization in England in the early nineteenth century, namely selections from Babbage's On the Economy of Machinery and Manufactures and a central chapter of Marx's Capital.  How well does each analysis explain the course of the "Lowell Experiment", as described by Kasson, from its initial inspiration to its eventual outcome in the 1840s?

The early textile factories were called "mills".  Given the traditional technical system denoted by the term, what does that usage tell us about initial perceptions of the factory?  In what ways was the usage deceptive from the outset?  Use specific examples to illustrate your analysis.

"One can argue that medieval Europe was a highly sophisticated technological society of a certain sort, involved in a fairly rapid, continuing process of sociotechnical change. One does not have to wait for the industrial revolution ... to see political societies remolded in response to technical innovation." Drawing on specific evidence from the lectures and readings so far, either make that argument or refute it.

"The factory was more than just a larger work unit. It was a system of production, resting on a characteristic definition of the functions and responsibilities of different participants in the productive process." (David Landes) Discuss Landes's assertion with reference to Harpers Ferry Armory, Ford's Highland Park Plant, and Data General's Westborough facility.

Ford's Sociological Department was a formal system of social control in an industrial setting. Compare and contrast this system with the forms of control at work in the Lowell textile mills and in the Eagle project at Data General.

"We do not use technologies so much as live them." Use Meyer's The Five-Dollar Day and the Lynds's Middletown to discuss this claim by Langdon Winner.
 
 

Thu, 23 Dec 2021 02:31:00 -0600 text/html https://www.princeton.edu/~hos/h398/398sources.v-1 Killexams : Startups News No result found, try new keyword!Showcase your company news with guaranteed exposure both in print and online Ready to embrace the fast-paced future we’re all experiencing? Join us for tech… Outstanding Women in Business are ... Sun, 07 Aug 2022 12:41:00 -0500 text/html https://www.bizjournals.com/news/technology/startups Killexams : What's Next for Pinterest and Bed Bath & Beyond? No result found, try new keyword!It was bought out by JDA and then moved to IBM. It was called the I2 technologies ... but we need to actually create your alternative, a much more rational, reasonable way of doing things. Wed, 06 Jul 2022 05:41:00 -0500 text/html https://www.nasdaq.com/articles/whats-next-for-pinterest-and-bed-bath-beyond M2140-648 exam dump and training guide direct download
Training Exams List