Exam Code: 000-240 Practice test 2022 by Killexams.com team
IBM Sterling Configurator V9.1 Deployment
IBM Configurator study tips
Killexams : Help regarding Study Guides and Links for F5 301

Looking for Study Guides and Links for F5 301 certification dumps? Information is at your fingertips! Here are a few of the best options:

– CBT Nuggets offers a wide range of study guides and links to help you pass your F5 301 certification exams.

– Certkingdom is an online resource that offers both study guides and links to F5 301 test questions.

– Examokit provides comprehensive preparation materials, including study guides and links to test questions, for the F5 301 certification exams.

If you’re looking for help with your F5 certification dumps, then you’ve come to the right place. Our team of experts is here to provide you with the best resources and advice possible.

We have a variety of study guides and links that are perfect for anyone preparing for their F5 301 certification exams. These guides include detailed explanations and examples of each question type so that you can learn how to answer them correctly.

Tips and advice

We also offer tips and advice on how to best use our study materials, as well as how to maximize your chances of passing your exams. So don’t hesitate to contact us today and let us help you achieve your certification goals!

If you’re looking for study guides and links for F5 certification dumps, then you’ve come to the right place! At CertsMaster, we have everything you need to get ready for your certification exams.

Our study guides and links are specifically designed to help you learn and remember the material. We also have a wide range of practice exams that will help you test your knowledge. So don’t wait any longer; start preparing today!

If you’re looking for help with your F5 301a dumps, then you’ve come to the right place. Our team of experts has put together a comprehensive study guide that covers all the essential topics. Additionally, we’ve compiled a list of links to relevant resources that will help you improve your knowledge and skills.

Preparing for your exams

Take a look at our study guide now and start preparing for your exams with confidence!

Are you ready to pass your F5 301 certification exams? If so, then you’ll need to study up! Here are some study guides and links that will help you get ready.

Study guides:

The F5 Networks certification guide: This comprehensive guide covers all the topics on the F5 301 exam. It includes detailed explanations and examples, as well as practice questions that will help you improve your skills.

The IBM FlexNetICS Study Guide: This book is designed to help you pass your FlexNetICS certification exams with ease. It covers everything from installation and configuration to performance optimization and troubleshooting.

The UNIX Switching Certification Guide: This book is a must-read for anyone looking to pass the UNIX Switching certification exams. It provides complete coverage of all the topics on the exam, including installation, design, management and configuration.

Thank you for your enquiry. We are sorry to say that we do not have any links or study guides for F5 301 certification dumps.

Press Release Distributed by The Express Wire

To view the original version on The Express Wire visit Help regarding Study Guides and Links for F5 301

Sun, 17 Jul 2022 21:15:00 -0500 TheExpressWire en-US text/html https://www.digitaljournal.com/pr/help-regarding-study-guides-and-links-for-f5-301
Killexams : IBM Expands Its Power10 Portfolio For Mission Critical Applications

It is sometimes difficult to understand the true value of IBM's Power-based CPUs and associated server platforms, even for IT professionals who deploy and manage servers, and the company has written a lot about it over the past few years. As an industry, we have become accustomed to using x86 as a baseline for comparison: if an x86 CPU has 64 cores, that core count becomes what we use to measure the relative value of other CPUs.

But this is a flawed way of measuring CPUs and a broken system for measuring server platforms. An x86 core is different from an Arm core, which is different from a Power core. While Arm has achieved parity with x86 for some cloud-native workloads, the Power architecture is different. Multi-threading, encryption, AI enablement – many functions are designed into Power in ways that don’t carry the performance penalties seen in other architectures.

I write all this as a set-up for IBM's announcement of expanded support for its Power10 architecture. In the following paragraphs, I will detail IBM's announcement and provide some thoughts on what this could mean for enterprise IT.

What was announced

Before discussing what was announced, it is a good idea to do a quick overview of Power10.

IBM introduced the Power10 CPU architecture at the Hot Chips conference in August 2020. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. Power10 is developed on the open-source Power ISA and comes in two variants – 15x SMT8 cores and 30x SMT4 cores. For those familiar with x86, SMT8 (eight threads per core) seems extreme, as does SMT4 (four threads per core). But this is where the Power ISA is fundamentally different from x86. Power is a highly performant ISA, and the Power10 cores are designed for the most demanding workloads.

One last note on Power10: SMT8 is optimized for higher throughput and lower per-thread computation, while SMT4 attacks the compute-intensive space with lower throughput.

IBM introduced the Power E1080 in September of 2021. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. The E1080 is a system designed for mission and business-critical workloads and has been strongly adopted by IBM's loyal Power customer base.

Because of this success, IBM has expanded the breadth of the Power10 portfolio and how customers consume these resources.

The big reveal in IBM’s recent announcement is the availability of four new servers built on the Power10 architecture. These servers are designed to address customers' full range of workload needs in the enterprise datacenter.

The Power S1014 is the traditional enterprise workhorse that runs the modern business. For x86 IT folks, think of the S1014 as equivalent to the two-socket workhorses that run virtualized infrastructure. One of the things that IBM points out about the S1014 is that this server was designed with lower technical requirements. This statement leads me to believe that the company is perhaps softening the barrier to entry for the S1014 in data centers that are not traditional IBM shops, or for environments that use Power for higher-end workloads but non-Power for traditional infrastructure needs.

The Power S1022 is IBM's scale-out server. Organizations embracing cloud-native, containerized environments will find the S1022 an ideal match. Again, for the x86 crowd – think of the traditional scale-out servers that are perhaps an AMD single socket or Intel dual-socket – the S1022 would be IBM's equivalent.

Finally, the S1024 targets the data analytics space. With lots of high-performing cores and a big memory footprint – this server plays in the area where IBM has done so well.

In addition to these platforms, IBM also introduced the Power E1050. The E1050 seems designed for big data and workloads with significant memory throughput requirements.

The E1050 is where I believe the difference in the Power architecture becomes obvious. It is where midrange starts to bump into high performance, and IBM claims 8-socket performance in this four-socket configuration. IBM says it can deliver performance for those running big data environments, larger data warehouses, and high-performance workloads. Maybe more importantly, the company claims to provide considerable cost savings for workloads that generally require a significant financial investment.

One benchmark that IBM showed was the two-tier SAP standard application benchmark. In this test, the E1050 handily beat an 8-socket x86 server, showing a 2.6x per-core performance advantage. We at Moor Insights & Strategy didn’t run the benchmark or certify it, but the company has been conservative in its disclosures, and I have no reason to dispute it.

But the performance and cost savings are not just associated with these higher-end workloads with narrow applicability. In another comparison, IBM showed the Power S1022 performs 3.6x better than its x86 equivalent for running a containerized environment in Red Hat OpenShift. When all was added up, the S1022 was shown to lower TCO by 53%.

What makes Power-based servers perform so well in SAP and OpenShift?

The value of Power is derived both from the CPU architecture and from the value IBM puts into the system and server design. The company is not afraid to design and deploy enhancements it believes will deliver better performance, higher security, and greater reliability for its customers. In the case of Power10, I believe there are a few design factors that have contributed to the performance and price/performance advantages the company claims, including:

  • The use of Differential DIMM technology to increase memory bandwidth, allowing for better performance from memory-intensive workloads such as in-memory database environments.
  • Built-in AI inferencing engines that increase performance by up to 5x.
  • Transparent memory encryption that imposes no performance tax (note: AMD has had this technology for years, and Intel introduced it about a year ago).

These seemingly minor differences can add up to deliver significant performance benefits for workloads running in the datacenter. But some of this comes down to a very powerful (pardon the redundancy) core design. While x86 dominates the datacenter in unit share, IBM has maintained a loyal customer base because the Power CPUs are workhorses, and Power servers are performant, secure, and reliable for mission critical applications.

Consumption-based offerings

Like other server vendors, IBM sees the writing on the wall and has opened up its offerings to be consumed in a way that is most beneficial to its customers. Traditional acquisition model? Check. Pay as you go with hardware in your datacenter? Also, check. Cloud-based offerings? One more check.

While there is nothing revolutionary about what IBM is doing with how customers consume its technology, it is important to note that IBM is the only server vendor that also runs a global cloud service (IBM Cloud). This should enable the company to pass on savings to its customers while providing greater security and manageability.

Closing thoughts

I like what IBM is doing to maintain and potentially grow its market presence. The new Power10 lineup is designed to meet customers' entire range of performance and cost requirements without sacrificing any of the differentiated design and development that the company puts into its mission critical platforms.

Will this announcement move x86 IT organizations to transition to IBM? Unlikely. Nor do I believe this is IBM's goal. However, I can see how businesses concerned with performance, security, and TCO of their mission and business-critical workloads can find a strong argument for Power. And this can be the beginning of a more substantial Power presence in the datacenter.

Note: This analysis contains insights from Moor Insights & Strategy Founder and Chief Analyst, Patrick Moorhead.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Thu, 14 Jul 2022 01:00:00 -0500 Matt Kimball en text/html https://www.forbes.com/sites/moorinsights/2022/07/14/ibm-expands-its-power10-portfolio-for-mission-critical-applications/
Killexams : Breakthrough quantum algorithm

City College of New York physicist Pouyan Ghaemi and his research team are claiming significant progress in using quantum computers to study and predict how the state of a large number of interacting quantum particles evolves over time. This was done by developing a quantum algorithm that they ran on an IBM quantum computer. "To the best of our knowledge, such particular quantum algorithm which can simulate how interacting quantum particles evolve over time has not been implemented before," said Ghaemi, associate professor in CCNY's Division of Science.

Entitled "Probing geometric excitations of fractional quantum Hall states on quantum computers," the study appears in the journal Physical Review Letters.
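
To make the idea of "simulating how interacting quantum particles evolve over time" concrete, the sketch below builds a generic, first-order Trotterized time-evolution circuit for a small chain of interacting spins using Qiskit, IBM's open-source quantum SDK. It is only an illustration of the general technique, not the algorithm from the Physical Review Letters paper; the Hamiltonian, qubit count, coupling values, and step count are all assumptions chosen for brevity.

    # Illustrative sketch only: first-order Trotter evolution of a small
    # Ising-type spin chain. The model and all parameters are assumptions;
    # this is NOT the algorithm described in the paper above.
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    n_qubits = 4          # number of simulated particles (assumed)
    J, h = 1.0, 0.5       # interaction strength and transverse field (assumed)
    t, steps = 1.0, 10    # total evolution time and number of Trotter steps
    dt = t / steps

    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.h(q)                            # simple initial superposition state
    for _ in range(steps):
        for q in range(n_qubits - 1):      # pairwise ZZ interaction terms
            qc.rzz(2 * J * dt, q, q + 1)
        for q in range(n_qubits):          # single-qubit transverse-field terms
            qc.rx(2 * h * dt, q)

    # Classical check of the circuit's output; on IBM hardware the circuit
    # would instead be transpiled and submitted to a quantum backend.
    state = Statevector.from_instruction(qc)
    print(state.probabilities_dict())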

"Quantum mechanics is known to be the underlying mechanism governing the properties of elementary particles such as electrons," said Ghaemi. "But unfortunately there is no easy way to use equations of quantum mechanics when we want to study the properties of large number of electrons that are also exerting force on each other due to their electric charge.

His team's discovery, however, changes this and raises other exciting possibilities.

"On the other front, recently, there has been extensive technological developments in building the so-called quantum computers. These new class of computers utilize the law of quantum mechanics to preform calculations which are not possible with classical computers."

"We know that when electrons in material interact with each other strongly, interesting properties such as high-temperature superconductivity could emerge," Ghaemi noted. "Our quantum computing algorithm opens a new avenue to study the properties of materials resulting from strong electron-electron interactions. As a result it can potentially guide the search for useful materials such as high temperature superconductors."

He added that based on their results, they can now potentially look at using quantum computers to study many other phenomena that result from strong interaction between electrons in solids. "There are many experimentally observed phenomena that could be potentially understood using the development of quantum algorithms similar to the one we developed."

The research was done at CCNY -- and involved an interdisciplinary team from the physics and electrical engineering departments -- in collaboration with experts from Western Washington University, Leeds University in the UK, and the Schlumberger-Doll Research Center in Cambridge, Massachusetts. The research was funded by the National Science Foundation and Britain's Engineering and Science Research Council.

Story Source:

Materials provided by City College of New York. Note: Content may be edited for style and length.

Tue, 26 Jul 2022 12:00:00 -0500 en text/html https://www.sciencedaily.com/releases/2022/07/220727110714.htm
Killexams : Did the Universe Just Happen? | The Atlantic | April 1988 | Wright


The Atlantic Monthly | April 1988
 

I. Flying Solo


Ed Fredkin is scanning the visual field systematically. He checks the instrument panel regularly. He is cool, collected, in control. He is the optimally efficient pilot.

The plane is a Cessna Stationair Six—a six-passenger single-engine amphibious plane, the kind with the wheels recessed in pontoons. Fredkin bought it not long ago and is still working out a few kinks; right now he is taking it for a spin above the British Virgin Islands after some minor mechanical work.

He points down at several brown-green masses of land, embedded in a turquoise sea so clear that the shadows of yachts are distinctly visible on its sandy bottom. He singles out a small island with a good-sized villa and a swimming pool, and explains that the compound, and the island as well, belong to "the guy that owns Boy George"—the rock star's agent, or manager, or something.

I remark, loudly enough to overcome the engine noise, "It's nice."

Yes, Fredkin says, it's nice. He adds, "It's not as nice as my island."

He's joking, I guess, but he's right. Ed Fredkin's island, which soon comes into view, is bigger and prettier. It is about 125 acres, and the hill that constitutes its bulk is a deep green—a mixture of reeds and cacti, sea grape and turpentine trees, machineel and frangipani. Its beaches range from prosaic to sublime, and the coral in the waters just offshore attracts little and big fish whose colors look as if they were coordinated by Alexander Julian. On the island's west side are immense rocks, suitable for careful climbing, and on the east side are a bar and restaurant and a modest hotel, which consists of three clapboard buildings, each with a few rooms. Between east and west is Fredkin's secluded island villa. All told, Moskito Island—or Drake's Anchorage, as the brochures call it—is a nice place for Fredkin to spend the few weeks of each year when he is not up in the Boston area tending his various other businesses.

In addition to being a self-made millionaire, Fredkin is a self-made intellectual. Twenty years ago, at the age of thirty-four, without so much as a bachelor's degree to his name, he became a full professor at the Massachusetts Institute of Technology. Though hired to teach computer science, and then selected to guide MIT's now eminent computer-science laboratory through some of its formative years, he soon branched out into more-offbeat things. Perhaps the most idiosyncratic of the courses he has taught is one on "digital physics," in which he propounded the most idiosyncratic of his several idiosyncratic theories. This theory is the reason I've come to Fredkin's island. It is one of those things that a person has to be prepared for. The preparer has to say, "Now, this is going to sound pretty weird, and in a way it is, but in a way it's not as weird as it sounds, and you'll see this once you understand it, but that may take a while, so in the meantime don't prejudge it, and don't casually dismiss it." Ed Fredkin thinks that the universe is a computer.

Fredkin works in a twilight zone of modern science—the interface of computer science and physics. Here two concepts that traditionally have ranked among science's most fundamental—matter and energy—keep bumping into a third: information. The exact relationship among the three is a question without a clear answer, a question vague enough, and basic enough, to have inspired a wide variety of opinions. Some scientists have settled for modest and sober answers. Information, they will tell you, is just one of many forms of matter and energy; it is embodied in things like a computer's electrons and a brain's neural firings, things like newsprint and radio waves, and that is that. Others talk in grander terms, suggesting that information deserves full equality with matter and energy, that it should join them in some sort of scientific trinity, that these three things are the main ingredients of reality.

Fredkin goes further still. According to his theory of digital physics, information is more fundamental than matter and energy. He believes that atoms, electrons, and quarks consist ultimately of bits—binary units of information, like those that are the currency of computation in a personal computer or a pocket calculator. And he believes that the behavior of those bits, and thus of the entire universe, is governed by a single programming rule. This rule, Fredkin says, is something fairly simple, something vastly less arcane than the mathematical constructs that conventional physicists use to explain the dynamics of physical reality. Yet through ceaseless repetition—by tirelessly taking information it has just transformed and transforming it further—it has generated pervasive complexity. Fredkin calls this rule, with discernible reverence, "the cause and prime mover of everything."

AT THE RESTAURANT ON FREDKIN'S ISLAND THE FOOD is prepared by a large man named Brutus and is humbly submitted to diners by men and women native to nearby islands. The restaurant is open-air, ventilated by a sea breeze that is warm during the day, cool at night, and almost always moist. Between the diners and the ocean is a knee-high stone wall, against which waves lap rhythmically. Beyond are other islands and a horizon typically blanketed by cottony clouds. Above is a thatched ceiling, concealing, if the truth be told, a sheet of corrugated steel. It is lunchtime now, and Fredkin is sitting in a cane-and-wicker chair across the table from me, wearing a light cotton sport shirt and gray swimming trunks. He was out trying to windsurf this morning, and he enjoyed only the marginal success that one would predict on the basis of his appearance. He is fairly tall and very thin, and has a softness about him—not effeminacy, but a gentleness of expression and manner—and the complexion of a scholar; even after a week on the island, his face doesn't vary much from white, except for his nose, which is red. The plastic frames of his glasses, in a modified aviator configuration, surround narrow eyes; there are times—early in the morning or right after a nap—when his eyes barely qualify as slits. His hair, perennially semi-combed, is black with a little gray.

Fredkin is a pleasant mealtime companion. He has much to say that is interesting, which is fortunate because generally he does most of the talking. He has little curiosity about other people's minds, unless their interests happen to coincide with his, which few people's do. "He's right above us," his wife, Joyce, once explained to me, holding her left hand just above her head, parallel to the ground. "Right here looking down. He's not looking down saying, 'I know more than you.' He's just going along his own way."

The food has not yet arrived, and Fredkin is passing the time by describing the world view into which his theory of digital physics fits. "There are three great philosophical questions," he begins. "What is life? What is consciousness and thinking and memory and all that? And how does the universe work?" He says that his "informational viewpoint" encompasses all three. Take life, for example. Deoxyribonucleic acid, the material of heredity, is "a good example of digitally encoded information," he says. "The information that implies what a creature or a plant is going to be is encoded; it has its representation in the DNA, right? Okay, now, there is a process that takes that information and transforms it into the creature, okay?" His point is that a mouse, for example, is "a big, complicated informational process."

Fredkin exudes rationality. His voice isn't quite as even and precise as Mr. Spock's, but it's close, and the parallels don't end there. He rarely displays emotion—except, perhaps, the slightest sign of irritation under the most trying circumstances. He has never seen a problem that didn't have a perfectly logical solution, and he believes strongly that intelligence can be mechanized without limit. More than ten years ago he founded the Fredkin Prize, a $100,000 award to be given to the creator of the first computer program that can beat a world chess champion. No one has won it yet, and Fredkin hopes to have the award raised to $1 million.

Fredkin is hardly alone in considering DNA a form of information, but this observation was less common back when he first made it. So too with many of his ideas. When his world view crystallized, a quarter of a century ago, he immediately saw dozens of large-scale implications, in fields ranging from physics to biology to psychology. A number of these have gained currency since then, and he considers this trend an ongoing substantiation of his entire outlook.

Fredkin talks some more and then recaps. "What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity life, DNA—you know, the biochemical functions—are controlled by a digital information process. Then, at another level, our thought processes are basically information processing." That is not to say, he stresses, that everything is best viewed as information. "It's just like there's mathematics and all these other things, but not everything is best viewed from a mathematical viewpoint. So what's being said is not that this comes along and replaces everything. It's one more avenue of modeling reality, and it happens to cover the sort of three biggest philosophical mysteries. So it sort of completes the picture."

Among the scientists who don't dismiss Fredkin's theory of digital physics out of hand is Marvin Minsky, a computer scientist and polymath at MIT, whose renown approaches cultic proportions in some circles. Minsky calls Fredkin "Einstein-like" in his ability to find deep principles through simple intellectual excursions. If it is true that most physicists think Fredkin is off the wall, Minsky told me, it is also true that "most physicists are the ones who don't invent new theories"; they go about their work with tunnel vision, never questioning the dogma of the day. When it comes to the kind of basic reformulation of thought proposed by Fredkin, "there's no point in talking to anyone but a Feynman or an Einstein or a Pauli," Minsky says. "The rest are just Republicans and Democrats." I talked with Richard Feynman, a Nobel laureate at the California Institute of Technology, before his death, in February. Feynman considered Fredkin a brilliant and consistently original, though sometimes incautious, thinker. If anyone is going to come up with a new and fruitful way of looking at physics, Feynman said, Fredkin will.

Notwithstanding their moral support, though, neither Feynman nor Minsky was ever convinced that the universe is a computer. They were endorsing Fredkin's mind, not this particular manifestation of it. When it comes to digital physics, Ed Fredkin is flying solo.

He knows that, and he regrets that his ideas continue to lack the support of his colleagues. But his self-confidence is unshaken. You see, Fredkin has had an odd childhood, and an odd education, and an odd career, all of which, he explains, have endowed him with an odd perspective, from which the essential nature of the universe happens to be clearly visible. "I feel like I'm the only person with eyes in a world where everyone's blind," he says.

II. A Finely Mottled Universe


THE PRIME MOVER OF EVERYTHING, THE SINGLE principle that governs the universe, lies somewhere within a class of computer programs known as cellular automata, according to Fredkin.

The cellular automaton was invented in the early 1950s by John von Neumann, one of the architects of computer science and a seminal thinker in several other fields. Von Neumann (who was stimulated in this and other inquiries by the ideas of the mathematician Stanislaw Ulam) saw cellular automata as a way to study reproduction abstractly, but the word cellular is not meant biologically when used in this context. It refers, rather, to adjacent spaces—cells—that together form a pattern. These days the cells typically appear on a computer screen, though von Neumann, lacking this convenience, rendered them on paper.

In some respects cellular automata resemble those splendid graphic displays produced by patriotic masses in authoritarian societies and by avid football fans at American universities. Holding up large colored cards on cue, they can collectively generate a portrait of, say, Lenin, Mao Zedong, or a University of Southern California Trojan. More impressive still, one portrait can fade out and another crystallize in no time at all. Again and again one frozen frame melts into another. It is a spectacular feat of precision and planning.

But suppose there were no planning. Suppose that instead of arranging a succession of cards to display, everyone learned a single rule for repeatedly determining which card was called for next. This rule might assume any of a number of forms. For example, in a crowd where all cards were either blue or white, each card holder could be instructed to look at his own card and the cards of his four nearest neighbors—to his front, back, left, and right—and do what the majority did during the last frame. (This five-cell group is known as the von Neumann neighborhood.) Alternatively, each card holder could be instructed to do the opposite of what the majority did. In either event the result would be a series not of predetermined portraits but of more abstract, unpredicted patterns. If, by prior agreement, we began with a USC Trojan, its white face might dissolve into a sea of blue, as whitecaps drifted aimlessly across the stadium. Conversely, an ocean of randomness could yield islands of structure—not a Trojan, perhaps, but at least something that didn't look entirely accidental. It all depends on the original pattern of cells and the rule used to transform it incrementally.

This leaves room for abundant variety. There are many ways to define a neighborhood, and for any given neighborhood there are many possible rules, most of them more complicated than blind conformity or implacable nonconformity. Each cell may, for instance, not only count cells in the vicinity but also pay attention to which particular cells are doing what. All told, the number of possible rules is an exponential function of the number of cells in the neighborhood; the von Neumann neighborhood alone has 2^32, or around 4 billion, possible rules, and the nine-cell neighborhood that results from adding corner cells offers 2^512, or roughly 1 with 154 zeros after it, possibilities. But whatever neighborhoods, and whatever rules, are programmed into a computer, two things are always true of cellular automata: all cells use the same rule to determine future behavior by reference to the past behavior of neighbors, and all cells obey the rule simultaneously, time after time.
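
The arithmetic behind those counts is easy to verify: a binary rule must assign an output to each of the 2^k possible on/off patterns of a k-cell neighborhood, giving 2^(2^k) rules in all. A few lines of Python, purely illustrative, confirm the figures quoted above.

    # Counting the possible rules for a binary cellular automaton: a rule
    # assigns "on" or "off" to each of the 2**k patterns a k-cell
    # neighborhood can show, so there are 2**(2**k) distinct rules.
    def rule_count(k_cells: int) -> int:
        return 2 ** (2 ** k_cells)

    print(rule_count(5))            # five-cell von Neumann neighborhood:
                                    # 2**32 = 4294967296, around 4 billion
    print(len(str(rule_count(9))))  # nine-cell neighborhood: 2**512 has
                                    # 155 digits, i.e. roughly 1 with 154
                                    # zeros after it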

In the late 1950s, shortly after becoming acquainted with cellular automata, Fredkin began playing around with rules, selecting the powerful and interesting and discarding the weak and bland. He found, for example, that any rule requiring all four of a cell's immediate neighbors to be lit up in order for the cell itself to be lit up at the next moment would not provide sustained entertainment; a single "off" cell would proliferate until darkness covered the computer screen. But equally simple rules could create great complexity. The first such rule discovered by Fredkin dictated that a cell be on if an odd number of cells in its von Neumann neighborhood had been on, and off otherwise. After "seeding" a good, powerful rule with an irregular landscape of off and on cells, Fredkin could watch rich patterns bloom, some freezing upon maturity, some eventually dissipating, others locking into a cycle of growth and decay. A colleague, after watching one of Fredkin's rules in action, suggested that he sell the program to a designer of Persian rugs.
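
In modern terms, that first rule takes only a few lines of code to express. The sketch below is a minimal illustration of the parity rule just described, in which a cell is on at the next tick if an odd number of cells in its von Neumann neighborhood, itself included, were on; the grid size, the single-cell seed, and the wrap-around edges are arbitrary choices for illustration, not details of Fredkin's original program.

    # A minimal sketch of the parity rule described above. Grid size, seed,
    # step count, and toroidal (wrap-around) boundaries are assumptions made
    # for illustration only.
    import numpy as np

    def parity_step(grid: np.ndarray) -> np.ndarray:
        # Sum each cell's von Neumann neighborhood: itself plus the cells
        # above, below, left, and right (np.roll wraps around the edges).
        neighborhood = (
            grid
            + np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)
            + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)
        )
        return neighborhood % 2     # on if the count is odd, off otherwise

    grid = np.zeros((32, 32), dtype=int)
    grid[16, 16] = 1                # seed: a single "on" cell
    for _ in range(8):              # advance the automaton a few ticks
        grid = parity_step(grid)
    print(grid.sum(), "cells are on after 8 steps")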

Today new cellular-automaton rules are formulated and tested by the "information-mechanics group" founded by Fredkin at MIT's computer-science laboratory. The core of the group is an international duo of physicists, Tommaso Toffoli, of Italy, and Norman Margolus, of Canada. They differ in the degree to which they take Fredkin's theory of physics seriously, but both agree with him that there is value in exploring the relationship between computation and physics, and they have spent much time using cellular automata to simulate physical processes. In the basement of the computer-science laboratory is the CAM—the cellular automaton machine, designed by Toffoli and Margolus partly for that purpose. Its screen has 65,536 cells, each of which can assume any of four colors and can change color sixty times a second.

The CAM is an engrossing, potentially mesmerizing machine. Its four colors—the three primaries and black—intermix rapidly and intricately enough to form subtly shifting hues of almost any gradation; pretty waves of deep blue or red ebb and flow with fine fluidity and sometimes with rhythm, playing on the edge between chaos and order.

Guided by the right rule, the CAM can do a respectable imitation of pond water rippling outward circularly in deference to a descending pebble, or of bubbles forming at the bottom of a pot of boiling water, or of a snowflake blossoming from a seed of ice: step by step, a single "ice crystal" in the center of the screen unfolds into a full-fledged flake, a six-edged sheet of ice riddled symmetrically with dark pockets of mist. (It is easy to see how a cellular automaton can capture the principles thought to govern the growth of a snowflake: regions of vapor that find themselves in the vicinity of a budding snowflake freeze—unless so nearly enveloped by ice crystals that they cannot discharge enough heat to freeze.)

These exercises are fun to watch, and they give one a sense of the cellular automaton's power, but Fredkin is not particularly interested in them. After all, a snowflake is not, at the visible level, literally a cellular automaton; an ice crystal is not a single, indivisible bit of information, like the cell that portrays it. Fredkin believes that automata will more faithfully mirror reality as they are applied to its more fundamental levels and the rules needed to model the motion of molecules, atoms, electrons, and quarks are uncovered. And he believes that at the most fundamental level (whatever that turns out to be) the automaton will describe the physical world with perfect precision, because at that level the universe is a cellular automaton, in three dimensions—a crystalline lattice of interacting logic units, each one "deciding" zillions of times per second what its state will be at the next point in time. The information thus produced, Fredkin says, is the fabric of reality, the stuff of which matter and energy are made. An electron, in Fredkin's universe, is nothing more than a pattern of information, and an orbiting electron is nothing more than that pattern moving. Indeed, even this motion is in some sense illusory: the bits of information that constitute the pattern never move, any more than football fans would change places to slide a USC Trojan four seats to the left. Each bit stays put and confines its activity to blinking on and off. "You see, I don't believe that there are objects like electrons and photons, and things which are themselves and nothing else," Fredkin says. "What I believe is that there's an information process, and the bits, when they're in certain configurations, behave like the thing we call the electron, or the hydrogen atom, or whatever."

THE READER MAY NOW HAVE A NUMBER OF questions that unless satisfactorily answered will lead to something approaching contempt for Fredkin's thinking. One such question concerns the way cellular automata chop space and time into little bits. Most conventional theories of physics reflect the intuition that reality is continuous—that one "point" in time is no such thing but, rather, flows seamlessly into the next, and that space, similarly, doesn't come in little chunks but is perfectly smooth. Fredkin's theory implies that both space and time have a graininess to them, and that the grains cannot be chopped up into smaller grains; that people and dogs and trees and oceans, at rock bottom, are more like mosaics than like paintings; and that time's essence is better captured by a digital watch than by a grandfather clock.

The obvious question is, Why do space and time seem continuous if they are not? The obvious answer is, The cubes of space and points of time are very, very small: time seems continuous in just the way that movies seem to move when in fact they are frames, and the illusion of spatial continuity is akin to the emergence of smooth shades from the finely mottled texture of a newspaper photograph.

The obvious answer, Fredkin says, is not the whole answer; the illusion of continuity is yet more deeply ingrained in our situation. Even if the ticks on the universal clock were, in some absolute sense, very slow, time would still seem continuous to us, since our perception, itself proceeding in the same ticks, would be no more finely grained than the processes being perceived. So too with spatial perception: Can eyes composed of the smallest units in existence perceive those units? Could any informational process sense its ultimate constituents? The point is that the basic units of time and space in Fredkin's reality don't just happen to be imperceptibly small. As long as the creatures doing the perceiving are in that reality, the units have to be imperceptibly small.

Though some may find this discreteness hard to comprehend, Fredkin finds a grainy reality more sensible than a smooth one. If reality is truly continuous, as most physicists now believe it is, then there must be quantities that cannot be expressed with a finite number of digits; the number representing the strength of an electromagnetic field, for example, could begin 5.23429847 and go on forever without falling into a pattern of repetition. That seems strange to Fredkin: wouldn't you eventually get to a point, around the hundredth, or thousandth, or millionth decimal place, where you had hit the strength of the field right on the nose? Indeed, wouldn't you expect that every physical quantity has an exactness about it? Well, you might and might not. But Fredkin does expect exactness, and in his universe he gets it.

Fredkin has an interesting way of expressing his insistence that all physical quantities be "rational." (A rational number is a number that can be expressed as a fraction—as a ratio of one integer to another. Expressed as a decimal, a rational number will either end, as 5/2 does in the form of 2.5, or repeat itself endlessly, as 1/7 does in the form of 0.142857142857142 . . .) He says he finds it hard to believe that a finite volume of space could contain an infinite amount of information. It is almost as if he viewed each parcel of space as having the digits describing it actually crammed into it. This seems an odd perspective, one that confuses the thing itself with the information it represents. But such an inversion between the realm of things and the realm of representation is common among those who work at the interface of computer science and physics. Contemplating the essence of information seems to affect the way you think.

The prospect of a discrete reality, however alien to the average person, is easier to fathom than the problem of the infinite regress, which is also raised by Fredkin's theory. The problem begins with the fact that information typically has a physical basis. Writing consists of ink; speech is composed of sound waves; even the computer's ephemeral bits and bytes are grounded in configurations of electrons. If the electrons are in turn made of information, then what is the information made of?

Asking questions like this ten or twelve times is not a good way to earn Fredkin's respect. A look of exasperation passes fleetingly over his face. "What I've tried to explain is that—and I hate to do this, because physicists are always doing this in an obnoxious way—is that the question implies you're missing a very important concept." He gives it one more try, two more tries, three, and eventually some of the fog between me and his view of the universe disappears. I begin to understand that this is a theory not just of physics but of metaphysics. When you disentangle these theories—compare the physics with other theories of physics, and the metaphysics with other ideas about metaphysics—both sound less far-fetched than when jumbled together as one. And, as a bonus, Fredkin's metaphysics leads to a kind of high-tech theology—to speculation about supreme beings and the purpose of life.

III. The Perfect Thing


EDWARD FREDKIN WAS BORN IN 1934, THE LAST OF three children in a previously prosperous family. His father, Manuel, had come to Southern California from Russia shortly after the Revolution and founded a chain of radio stores that did not survive the Great Depression. The family learned economy, and Fredkin has not forgotten it. He can reach into his pocket, pull out a tissue that should have been retired weeks ago, and, with cleaning solution, make an entire airplane windshield clear. He can take even a well-written computer program, sift through it for superfluous instructions, and edit it accordingly, reducing both its size and its running time.

Manuel was by all accounts a competitive man, and he focused his competitive energies on the two boys: Edward and his older brother, Norman. Manuel routinely challenged Ed's mastery of fact, inciting sustained arguments over, say, the distance between the moon and the earth. Norman's theory is that his father, though bright, was intellectually insecure; he seemed somehow threatened by the knowledge the boys brought home from school. Manuel's mistrust of books, experts, and all other sources of received wisdom was absorbed by Ed.

So was his competitiveness. Fredkin always considered himself the smartest kid in his class. He used to place bets with other students on test scores. This habit did not endear him to his peers, and he seems in general to have lacked the prerequisites of popularity. His sense of humor was unusual. His interests were not widely shared. His physique was not a force to be reckoned with. He recalls, "When I was young—you know, sixth, seventh grade—two kids would be choosing sides for a game of something. It could be touch football. They'd choose everybody but me, and then there'd be a fight as to whether one side would have to take me. One side would say, 'We have eight and you have seven,' and they'd say, 'That's okay.' They'd be willing to play with seven." Though exhaustive in documenting his social alienation, Fredkin concedes that he was not the only unpopular student in school. "There was a socially active subgroup, probably not a majority, maybe forty percent, who were very socially active. They went out on dates. They went to parties. They did this and they did that. The others were left out. And I was in this big left-out group. But I was in the pole position. I was really left out."

Of the hours Fredkin spent alone, a good many were devoted to courting disaster in the name of science. By wiring together scores of large, 45-volt batteries, he collected enough electricity to conjure up vivid, erratic arcs. By scraping the heads off matches and buying sulfur, saltpeter, and charcoal, he acquired a good working knowledge of pyrotechnics. He built small, minimally destructive but visually impressive bombs, and fashioned rockets out of cardboard tubing and aluminum foil. But more than bombs and rockets, it was mechanisms that captured Fredkin's attention. From an early age he was viscerally attracted to Big Ben alarm clocks, which he methodically took apart and put back together. He also picked up his father's facility with radios and household appliances. But whereas Manuel seemed to fix things without understanding the underlying science, his son was curious about first principles.

So while other kids were playing baseball or chasing girls, Ed Fredkin was taking things apart and putting them back together. Children were aloof, even cruel, but a broken clock always responded gratefully to a healing hand. "I always got along well with machines," he remembers.

After graduation from high school, in 1952, Fredkin headed for the California Institute of Technology with hopes of finding a more appreciative social environment. But students at Caltech turned out to bear a disturbing resemblance to people he had observed elsewhere. "They were smart like me," he recalls, "but they had the full spectrum and distribution of social development." Once again Fredkin found his weekends unencumbered by parties. And once again he didn't spend his free time studying. Indeed, one of the few lessons he learned is that college is different from high school: in college if you don't study, you flunk out. This he did a few months into his sophomore year. Then, following in his brother's footsteps, he joined the Air Force and learned to fly fighter planes.

IT WAS THE AIR FORCE THAT FINALLY BROUGHT Fredkin face to face with a computer. He was working for the Air Proving Ground Command, whose function was to ensure that everything from combat boots to bombers was of top quality, when the unit was given the job of testing a computerized air-defense system known as SAGE (for "semi-automatic ground environment"). To test SAGE the Air Force needed men who knew something about computers, and so in 1956 a group from the Air Proving Ground Command, including Fredkin, was sent to MIT's Lincoln Laboratory and enrolled in computer-science courses. "Everything made instant sense to me," Fredkin remembers. "I just soaked it up like a sponge."

SAGE, when ready for testing, turned out to be even more complex than anticipated—too complex to be tested by anyone but genuine experts—and the job had to be contracted out. This development, combined with bureaucratic disorder, meant that Fredkin was now a man without a function, a sort of visiting scholar at Lincoln Laboratory. "For a period of time, probably over a year, no one ever came to tell me to do anything. Well, meanwhile, down the hall they installed the latest, most modern computer in the world—IBM's biggest, most powerful computer. So I just went down and started to program it." The computer was an XD-1. It was slower and less capacious than an Apple Macintosh and was roughly the size of a large house.

When Fredkin talks about his year alone with this dinosaur, you half expect to hear violins start playing in the background. "My whole way of life was just waiting for the computer to come along," he says. "The computer was in essence just the perfect thing." It was in some respects preferable to every other conglomeration of matter he had encountered—more sophisticated and flexible than other inorganic machines, and more logical than organic ones. "See, when I write a program, if I write it correctly, it will work. If I'm dealing with a person, and I tell him something, and I tell him correctly, it may or may not work."

The XD-1, in short, was an intelligence with which Fredkin could empathize. It was the ultimate embodiment of mechanical predictability, the refuge to which as a child he had retreated from the incomprehensibly hostile world of humanity. If the universe is indeed a computer, then it could be a friendly place after all.

During the several years after his arrival at Lincoln Lab, as Fredkin was joining the first generation of hackers, he was also immersing himself in physics—finally learning, through self-instruction, the lessons he had missed by dropping out of Caltech. It is this two-track education, Fredkin says, that led him to the theory of digital physics. For a time "there was no one in the world with the same interest in physics who had the intimate experience with computers that I did. I honestly think that there was a period of many years when I was in a unique position."

The uniqueness lay not only in the fusion of physics and computer science but also in the peculiar composition of Fredkin's physics curriculum. Many physicists acquire as children the sort of kinship with mechanism that he still feels, but in most cases it is later diluted by formal education; quantum mechanics, the prevailing paradigm in contemporary physics, seems to imply that at its core, reality, has truly random elements and is thus inherently unpredictable. But Fredkin escaped the usual indoctrination. To this day he maintains, as did Albert Einstein, that the common interpretation of quantum mechanics is mistaken—that any seeming indeterminacy in the subatomic world reflects only our ignorance of the determining principles, not their absence. This is a critical belief, for if he is wrong and the universe is not ultimately deterministic, then it cannot be governed by a process as exacting as computation.

After leaving the Air Force, Fredkin went to work for Bolt Beranek and Newman, a consulting firm in the Boston area, now known for its work in artificial intelligence and computer networking. His supervisor at BBN, J. C. R. Licklider, says of his first encounter with Fredkin, "It was obvious to me he was very unusual and probably a genius, and the more I came to know him, the more I came to think that that was not too elevated a description." Fredkin "worked almost continuously," Licklider recalls. "It was hard to get him to go to sleep sometimes." A pattern emerged. Licklider would give Fredkin a problem to work on—say, figuring out how to get a computer to search a text in its memory for an only partially specified sequence of letters. Fredkin would retreat to his office and return twenty or thirty hours later with the solution—or, rather, a solution; he often came back with the answer to a question different from the one that Licklider had asked. Fredkin's focus was intense but undisciplined, and it tended to stray from a problem as soon as he was confident that he understood the solution in principle.

This intellectual wanderlust is one of Fredkin's most enduring and exasperating traits. Just about everyone who knows him has a way of describing it: "He doesn't really work. He sort of fiddles." "Very often he has these great ideas and then does not have the discipline to cultivate the idea." "There is a gap between the quality of the original ideas and what follows. There's an imbalance there." Fredkin is aware of his reputation. In self-parody he once brought a cartoon to a friend's attention: A beaver and another forest animal are contemplating an immense man-made dam. The beaver is saying something like, "No, I didn't actually build it. But it's based on an idea of mine."

Among the ideas that congealed in Fredkin's mind during his stay at BBN is the one that gave him his current reputation as (depending on whom you talk to) a thinker of great depth and rare insight, a source of interesting but reckless speculation, or a crackpot.

IV. Tick by Tick, Dot by Dot


THE IDEA THAT THE UNIVERSE IS A COMPUTER WAS inspired partly by the idea of the universal computer. Universal computer, a term that can accurately be applied to everything from an IBM PC to a Cray supercomputer, has a technical, rigorous definition, but here its upshot will do: a universal computer can simulate any process that can be precisely described and perform any calculation that is performable.

This broad power is ultimately grounded in something very simple: the algorithm. An algorithm is a fixed procedure for converting input into output, for taking one body of information and turning it into another. For example, a computer program that takes any number it is given, squares it, and subtracts three is an algorithm. This isn't a very powerful algorithm; by taking a 3 and turning it into a 6, it hasn't created much new information. But algorithms become more powerful with recursion. A recursive algorithm is an algorithm whose output is fed back into it as input. Thus the algorithm that turned 3 into 6, if operating recursively, would continue, turning 6 into 33, then 33 into 1,086, then 1,086 into 1,179,393, and so on.
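
The square-and-subtract-three example takes only a few lines to express; the sketch below simply reproduces the sequence from the paragraph above, with the output of each pass fed back in as the next input.

    # The recursive algorithm described above: square the input, subtract
    # three, and feed the output back in as the next input.
    def step(x: int) -> int:
        return x * x - 3

    x = 3
    for _ in range(4):
        x = step(x)
        print(x)    # prints 6, 33, 1086, 1179393 -- the sequence in the text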

The power of recursive algorithms is especially apparent in the simulation of physical processes. While Fredkin was at BBN, he would use the company's Digital Equipment Corporation PDP-1 computer to simulate, say, two particles, one that was positively charged and one that was negatively charged, orbiting each other in accordance with the laws of electromagnetism. It was a pretty sight: two phosphor dots dancing, each etching a green trail that faded into yellow and then into darkness. But for Fredkin the attraction lay less in this elegant image than in its underlying logic. The program he had written took the particles' velocities and positions at one point in time, computed those variables for the next point in time, and then fed the new variables back into the algorithm to get newer variables—and so on and so on, thousands of times a second. The several steps in this algorithm, Fredkin recalls, were "very simple and very beautiful." It was in these orbiting phosphor dots that Fredkin first saw the appeal of his kind of universe—a universe that proceeds tick by tick and dot by dot, a universe in which complexity boils down to rules of elementary simplicity.
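
A modern sketch in the same spirit as that PDP-1 program appears below: take the particles' positions and velocities at one instant, compute them for the next instant under an attractive inverse-square force, and feed the new values back into the same update. The constants, masses, and starting conditions are arbitrary illustrative choices, not a reconstruction of Fredkin's actual code.

    # Two oppositely charged particles orbiting each other, advanced tick by
    # tick by feeding each state back into a simple update rule. All units,
    # constants, and initial conditions are arbitrary illustrative choices.
    import numpy as np

    k, dt, steps = 1.0, 0.001, 20000              # force constant, time step, ticks
    pos = np.array([[-0.5, 0.0], [0.5, 0.0]])     # positions of the two charges
    vel = np.array([[0.0, 0.7], [0.0, -0.7]])     # initial velocities
    charge = np.array([1.0, -1.0])                # one positive, one negative
    mass = np.array([1.0, 1.0])

    for _ in range(steps):
        r = pos[0] - pos[1]                                         # separation vector
        f0 = k * charge[0] * charge[1] * r / np.linalg.norm(r)**3   # force on particle 0
        acc = np.array([f0 / mass[0], -f0 / mass[1]])               # equal and opposite
        vel = vel + acc * dt             # new velocities from the old state...
        pos = pos + vel * dt             # ...fed back in to produce new positions

    print(pos)    # final positions after the last tick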

Fredkin's discovery of cellular automata a few years later permitted him further to indulge his taste for economy of information and strengthened his bond with the recursive algorithm. The patterns of automata are often all but impossible to describe with calculus yet easy to express algorithmically. Nothing is so striking about a good cellular automaton as the contrast between the simplicity of the underlying algorithm and the richness of its result. We have all felt the attraction of such contrasts. It accompanies the comprehension of any process, conceptual or physical, by which simplicity accommodates complexity. Simple solutions to complex problems, for example, make us feel good. The social engineer who designs uncomplicated legislation that will cure numerous social ills, the architect who eliminates several nagging design flaws by moving a single closet, the doctor who traces gastro-intestinal, cardiovascular, and respiratory ailments to a single, correctable cause—all feel the same kind of visceral, aesthetic satisfaction that must have filled the first caveman who literally killed two birds with one stone.

For scientists, the moment of discovery does not simply reinforce the search for knowledge; it inspires further research. Indeed, it directs research. The unifying principle, upon its apprehension, can elicit such devotion that thereafter the scientist looks everywhere for manifestations of it. It was the scientist in Fredkin who, upon seeing how a simple programming rule could yield immense complexity, got excited about looking at physics in a new way and stayed excited. He spent much of the next three decades fleshing out his intuition.

FREDKIN'S RESIGNATION FROM BOLT BERANEK AND Newman did not surprise Licklider. "I could tell that Ed was disappointed in the scope of projects undertaken at BBN. He would see them on a grander scale. I would try to argue—hey, let's cut our teeth on this and then move on to bigger things." Fredkin wasn't biting. "He came in one day and said, 'Gosh, Lick, I really love working here, but I'm going to have to leave. I've been thinking about my plans for the future, and I want to make'—I don't remember how many millions of dollars, but it shook me—'and I want to do it in about four years.' And he did amass however many millions he said he would amass in the time he predicted, which impressed me considerably."

In 1962 Fredkin founded Information International Incorporated—an impressive name for a company with no assets and no clients, whose sole employee had never graduated from college. Triple-I, as the company came to be called, was placed on the road to riches by an odd job that Fredkin performed for the Woods Hole Oceanographic Institute. One of Woods Hole's experiments had run into a complication: underwater instruments had faithfully recorded the changing direction and strength of deep ocean currents, but the information, encoded in tiny dots of light on sixteen-millimeter film, was inaccessible to the computers that were supposed to analyze it. Fredkin rented a sixteen-millimeter movie projector and with a surprisingly simple modification turned it into a machine for translating those dots into terms the computer could accept.

This contraption pleased the people at Woods Hole and led to a contract with Lincoln Laboratory. Lincoln was still doing work for the Air Force, and the Air Force wanted its computers to analyze radar information that, like the Woods Hole data, consisted of patterns of light on film. A makeshift information-conversion machine earned Triple-I $10,000, and within a year the Air Force hired Fredkin to build equipment devoted to the task. The job paid $350,000—the equivalent today of around $1 million. RCA and other companies, it turned out, also needed to turn visual patterns into digital data, and "programmable film readers" that sold for $500,000 apiece became Triple-I's stock-in-trade. In 1968 Triple-I went public and Fredkin was suddenly a millionaire. Gradually he cashed in his chips. First he bought a ranch in Colorado. Then one day he was thumbing through the classifieds and saw that an island in the Caribbean was for sale. He bought it.

In the early 1960s, at the suggestion of the Defense Department's Advanced Research Projects Agency, MIT set up what would become its Laboratory for Computer Science. It was then called Project MAC, an acronym that stood for both "machine-aided cognition" and "multiaccess computer." Fredkin had connections with the project from the beginning. Licklider, who had left BBN for the Pentagon shortly after Fredkin's departure, was influential in earmarking federal money for MAC. Marvin Minsky—who would later serve on Triple-I's board, and by the end of 1967 owned some of its stock—was centrally involved in MAC's inception. Fredkin served on Project MAC's steering committee, and in 1966 he began discussing with Minsky the possibility of becoming a visiting professor at MIT. The idea of bringing a college dropout onto the faculty, Minsky recalls, was not as outlandish as it now sounds; computer science had become an academic discipline so suddenly that many of its leading lights possessed meager formal credentials. In 1968, after Licklider had come to MIT and become the director of Project MAC, he and Minsky convinced Louis Smullin, the head of the electrical-engineering department, that Fredkin was worth the gamble. "We were a growing department and we wanted exciting people," Smullin says. "And Ed was exciting."

Fredkin had taught for barely a year before he became a full professor, and not much later, in 1971, he was appointed the head of Project MAC—a position that was also short-lived, for in the fall of 1974 he began a sabbatical at the California Institute of Technology as a Fairchild Distinguished Scholar. He went to Caltech under the sponsorship of Richard Feynman. The deal, Fredkin recalls, was that he would teach Feynman more about computer science, and Feynman would teach him more about physics. While there, Fredkin developed an idea that has slowly come to be seen as a profound contribution to both disciplines. The idea is also—in Fredkin's mind, at least—corroborating evidence for his theory of digital physics. To put its upshot in brief and therefore obscure terms, Fredkin found that computation is not inherently irreversible and thus it is possible, in principle, to build a computer that doesn't use up energy and doesn't give off heat.

All computers on the market are irreversible. That is, their history of information processing cannot be inferred from their present informational state; you cannot look at the data they contain and figure out how they arrived at it. By the time the average computer tells you that 2 plus 2 equals 4, it has forgotten the question; for all it knows, you asked what 1 plus 3 is. The reason for this ignorance is that computers discard information once it is no longer needed, so that they won't get clogged up.

In 1961 Rolf Landauer, of IBM's Thomas J. Watson Research Center, established that this destruction of information is the only part of the computational process that unavoidably involves the dissipation of energy. It takes effort, in other words, for a computer to forget things but not necessarily for it to perform other functions. Thus the question of whether you can, in principle, build a universal computer that doesn't dissipate energy in the form of heat is synonymous with the question of whether you can design a logically reversible universal computer, one whose computational history can always be unearthed. Landauer, along with just about everyone else, thought such a computer impossible; all past computer architectures had implied the regular discarding of information, and it was widely believed that this irreversibility was intrinsic to computation. But while at Caltech, Fredkin did one of his favorite things—he showed that everyone had been wrong all along.

Of the two kinds of reversible computers invented by Fredkin, the better known is called the billiard-ball computer. If it were ever actually built, it would consist of billiard balls ricocheting around in a labyrinth of "mirrors," bouncing off the mirrors at 45-degree angles, periodically banging into other moving balls at 90-degree angles, and occasionally exiting through doorways that occasionally would permit new balls to enter. To extract data from the machine, you would superimpose a grid over it, and the presence or absence of a ball in a given square at a given point in time would constitute information. Such a machine, Fredkin showed, would qualify as a universal computer; it could do anything that normal computers do. But unlike other computers, it would be perfectly reversible; to recover its history, all you would have to do is stop it and run it backward. Charles H. Bennett, of IBM's Thomas J. Watson Research Center, independently arrived at a different proof that reversible computation is possible, though he considers the billiard-ball computer to be in some respects a more elegant solution to the problem than his own.

The billiard-ball computer will never be built, because it is a platonic device, existing only in a world of ideals. The balls are perfectly round and hard, and the table perfectly smooth and hard. There is no friction between the two, and no energy is lost when balls collide. Still, although these ideals are unreachable, they could be approached eternally through technological refinement, and the heat produced by friction and collision could thus be reduced without limit. Since no additional heat would be created by information loss, there would be no necessary minimum on the total heat emitted by the computer. "The cleverer you are, the less heat it will generate," Fredkin says.

The connection Fredkin sees between the billiard-ball computer and digital physics exemplifies the odd assortment of evidence he has gathered in support of his theory. Molecules and atoms and their constituents, he notes, move around in theoretically reversible fashion, like billiard balls (although it is not humanly possible, of course, actually to take stock of the physical state of the universe, or even one small corner of it, and reconstruct history by tracing the motion of microscopic particles backward). Well, he asks, given the theoretical reversibility of physical reality, doesn't the theoretical feasibility of a reversible computer lend credence to the claim that computation is reality's basis?

No and yes. Strictly speaking, Fredkin's theory doesn't demand reversible computation. It is conceivable that an irreversible process at the very core of reality could give rise to the reversible behavior of molecules, atoms, electrons, and the rest. After all, irreversible computers (that is, all computers on the market) can simulate reversible billiard balls. But they do so in a convoluted way, Fredkin says, and the connection between an irreversible substratum and a reversible stratum would, similarly, be tortuous—or, as he puts it, "aesthetically obnoxious." Fredkin prefers to think that the cellular automaton underlying reversible reality does its work gracefully.

Consider, for example, a variant of the billiard-ball computer invented by Norman Margolus, the Canadian in MIT's information-mechanics group. Margolus showed how a two-state cellular automaton that was itself reversible could simulate the billiard-ball computer using only a simple rule involving a small neighborhood. This cellular automaton in action looks like a jazzed-up version of the original video game, Pong. It is an overhead view of endlessly energetic balls ricocheting off clusters of mirrors and each other. It is proof that a very simple binary cellular automaton can give rise to the seemingly more complex behavior of microscopic particles bouncing off each other. And, as a kind of bonus, these particular particles themselves amount to a computer. Though Margolus discovered this powerful cellular-automaton rule, it was Fredkin who had first concluded that it must exist and persuaded Margolus to look for it. "He has an intuitive idea of how things should be," Margolus says. "And often, if he can't come up with a rational argument to convince you that it should be so, he'll sort of transfer his intuition to you."

That, really, is what Fredkin is trying to do when he argues that the universe is a computer. He cannot give you a single line of reasoning that leads inexorably, or even very plausibly, to this conclusion. He can tell you about the reversible computer, about Margolus's cellular automaton, about the many physical quantities, like light, that were once thought to be continuous but are now considered discrete, and so on. The evidence consists of many little things—so many, and so little, that in the end he is forced to convey his truth by simile. "I find the supporting evidence for my beliefs in ten thousand different places," he says. "And to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, Where is this animal? I say, Well, he was here, he's about this big, this that and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there." The story changes upon retelling. One day it's Bigfoot that Fredkin's trailing. Another day it's a duck: feathers are everywhere, and the tracks are webbed. Whatever the animal, the moral of the story remains the same: "What I see is so compelling that it can't be a creature of my imagination."

V. Deus ex Machina


THERE WAS SOMETHING BOTHERSOME ABOUT ISAAC Newton's theory of gravitation. The idea that the sun exerts a pull on the earth, and vice versa, sounded vaguely supernatural and, in any event, was hard to explain. How, after all, could such "action at a distance" be realized? Did the earth look at the sun, estimate the distance, and consult the law of gravitation to determine where it should move and how fast? Newton sidestepped such questions. He fudged with the Latin phrase si esset: two bodies, he wrote, behave as if impelled by a force inversely proportional to the square of their distance. Ever since Newton, physics has followed his example. Its "force fields" are, strictly speaking, metaphorical, and its laws purely descriptive. Physicists make no attempt to explain why things obey the law of electromagnetism or of gravitation. The law is the law, and that's all there is to it.

Fredkin refuses to accept authority so blindly. He posits not only laws but also a law-enforcement agency: a computer. Somewhere out there, he believes, is a machinelike thing that actually keeps our individual bits of space abiding by the rule of the universal cellular automaton. With this belief Fredkin crosses the line between physics and metaphysics, between scientific hypothesis and cosmic speculation. If Fredkin had Newton's knack for public relations, if he stopped at saying that the universe operates as if it were a computer, he could improve his stature among physicists while preserving the essence of his theory—the idea that the dynamics of physical reality will ultimately be better captured by a single recursive algorithm than by the mathematics of conventional physics, and that the continuity of time and space implicit in traditional mathematics is illusory.

Actually, some estimable physicists have lately been saying things not wholly unlike this stripped-down version of the theory. T. D. Lee, a Nobel laureate at Columbia University, has written at length about the possibility that time is discrete. And in 1984 Scientific American, not exactly a soapbox for cranks, published an article in which Stephen Wolfram, then of Princeton's Institute for Advanced Study, wrote, "Scientific laws are now being viewed as algorithms. . . . Physical systems are viewed as computational systems, processing information much the way computers do." He concluded, "A new paradigm has been born."

The line between responsible scientific speculation and off-the-wall metaphysical pronouncement was nicely illustrated by an article in which Tommaso Toffoli, the Italian in MIT's information-mechanics group, stayed barely on the responsible side of it. Published in the journal Physica D, the article was called "Cellular automata as an alternative to (rather than an approximation of) differential equations in modeling physics." Toffoli's thesis captured the core of Fredkin's theory yet had a perfectly reasonable ring to it. He simply suggested that the historical reliance of physicists on calculus may have been due not just to its merits but also to the fact that before the computer, alternative languages of description were not practical.

Why does Fredkin refuse to do the expedient thing—leave out the part about the universe actually being a computer? One reason is that he considers reprehensible the failure of Newton, and of all physicists since, to back up their descriptions of nature with explanations. He is amazed to find "perfectly rational scientists" believing in "a form of mysticism: that things just happen because they happen." The best physics, Fredkin seems to believe, is metaphysics.

The trouble with metaphysics is its endless depth. For every question that is answered, at least one other is raised, and it is not always clear that, on balance, any progress has been made. For example, where is this computer that Fredkin keeps talking about? Is it in this universe, residing along some fifth or sixth dimension that renders it invisible? Is it in some meta-universe? The answer is the latter, apparently, and to understand why, we need to return to the problem of the infinite regress, a problem that Rolf Landauer, among others, has cited with respect to Fredkin's theory. Landauer illustrates the problem by telling the old turtle story. A professor has just finished lecturing at some august university about the origin and structure of the universe, and an old woman in tennis shoes walks up to the lectern. "Excuse me, sir, but you've got it all wrong," she says. "The truth is that the universe is sitting on the back of a huge turtle." The professor decides to humor her. "Oh, really?" he asks. "Well, tell me, what is the turtle standing on?" The lady has a ready reply: "Oh, it's standing on another turtle." The professor asks, "And what is that turtle standing on?" Without hesitation, she says, "Another turtle." The professor, still game, repeats his question. A look of impatience comes across the woman's face. She holds up her hand, stopping him in mid-sentence. "Save your breath, sonny," she says. "It's turtles all the way down."

The infinite-regress problem afflicts Fredkin's theory in two ways, one of which we have already encountered: if matter is made of information, what is the information made of? And even if one concedes that it is no more ludicrous for information to be the most fundamental stuff than for matter or energy to be the most fundamental stuff, what about the computer itself? What is it made of? What energizes it? Who, or what, runs it, or set it in motion to begin with?

WHEN FREDKIN IS DISCUSSING THE PROBLEM OF THE infinite regress, his logic seems variously cryptic, evasive, and appealing. At one point he says, "For everything in the world where you wonder, 'What is it made out of?' the only thing I know of where the question doesn't have to be answered with anything else is for information." This puzzles me. Thousands of words later I am still puzzled, and I press for clarification. He talks some more. What he means, as near as I can tell, is what follows.

First of all, it doesn't matter what the information is made of, or what kind of computer produces it. The computer could be of the conventional electronic sort, or it could be a hydraulic machine made of gargantuan sewage pipes and manhole covers, or it could be something we can't even imagine. What's the difference? Who cares what the information consists of? So long as the cellular automaton's rule is the same in each case, the patterns of information will be the same, and so will we, because the structure of our world depends on pattern, not on the pattern's substrate; a carbon atom, according to Fredkin, is a certain configuration of bits, not a certain kind of bits.

Besides, we can never know what the information is made of or what kind of machine is processing it. This point is reminiscent of childhood conversations that Fredkin remembers having with his sister, Joan, about the possibility that they were part of a dream God was having. "Say God is in a room and on his table he has some cookies and tea," Fredkin says. "And he's dreaming this whole universe up. Well, we can't reach out and get his cookies. They're not in our universe. See, our universe has bounds. There are some things in it and some things not." The computer is not; hardware is beyond the grasp of its software. Imagine a vast computer program that contained bodies of information as complex as people, motivated by bodies of information as complex as ideas. These "people" would have no way of figuring out what kind of computer they owed their existence to, because everything they said, and everything they did—including formulating metaphysical hypotheses—would depend entirely on the programming rules and the original input. As long as these didn't change, the same metaphysical conclusions would be reached in an old XD-1 as in a Kaypro 2.

This idea—that sentient beings could be constitutionally numb to the texture of reality—has fascinated a number of people, including, lately, computer scientists. One source of the fascination is the fact that any universal computer can simulate another universal computer, and the simulated computer can, because it is universal, do the same thing. So it is possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious of those containers. To anyone who has lived intimately with, and thought deeply about, computers, says Charles Bennett, of IBM's Watson Lab, this notion is very attractive. "And if you're too attracted to it, you're likely to part company with the physicists." Physicists, Bennett says, find heretical the notion that anything physical is impervious to experiment, removed from the reach of science.

Fredkin's belief in the limits of scientific knowledge may sound like evidence of humility, but in the end it permits great ambition; it helps him go after some of the grandest philosophical questions around. For example, there is a paradox that crops up whenever people think about how the universe came to be. On the one hand, it must have had a beginning. After all, things usually do. Besides, the cosmological evidence suggests a beginning: the big bang. Yet science insists that it is impossible for something to come from nothing; the laws of physics forbid the amount of energy and mass in the universe to change. So how could there have been a time when there was no universe, and thus no mass or energy?

Fredkin escapes from this paradox without breaking a sweat. Granted, he says, the laws of our universe don't permit something to come from nothing. But he can imagine laws that would permit such a thing; in fact, he can imagine algorithmic laws that would permit such a thing. The conservation of mass and energy is a consequence of our cellular automaton's rules, not a consequence of all possible rules. Perhaps a different cellular automaton governed the creation of our cellular automaton—just as the rules for loading software are different from the rules running the program once it has been loaded.

What's funny is how hard it is to doubt Fredkin when with such assurance he makes definitive statements about the creation of the universe—or when, for that matter, he looks you in the eye and tells you the universe is a computer. Partly this is because, given the magnitude and intrinsic intractability of the questions he is addressing, his answers aren't all that bad. As ideas about the foundations of physics go, his are not completely out of the ball park; as metaphysical and cosmogonic speculation goes, his isn't beyond the pale.

But there's more to it than that. Fredkin is, in his own odd way, a rhetorician of great skill. He talks softly, even coolly, but with a low-key power, a quiet and relentless confidence, a kind of high-tech fervor. And there is something disarming about his self-awareness. He's not one of these people who say crazy things without having so much as a clue that you're sitting there thinking what crazy things they are. He is acutely conscious of his reputation; he knows that some scientists are reluctant to invite him to conferences for fear that he'll say embarrassing things. But he is not fazed by their doubts. "You know, I'm a reasonably smart person. I'm not the smartest person in the world, but I'm pretty smart—and I know that what I'm involved in makes perfect sense. A lot of people build up what might be called self-delusional systems, where they have this whole system that makes perfect sense to them, but no one else ever understands it or buys it. I don't think that's a major factor here, though others might disagree." It's hard to disagree, when he so forthrightly offers you the chance.

Still, as he gets further from physics, and more deeply into philosophy, he begins to try one's trust. For example, having tackled the question of what sort of process could generate a universe in which spontaneous generation is impossible, he aims immediately for bigger game: Why was the universe created? Why is there something here instead of nothing?

WHEN THIS SUBJECT COMES UP, WE ARE SITTING IN the Fredkins' villa. The living area has pale rock walls, shiny-clean floors made of large white ceramic tiles, and built-in bookcases made of blond wood. There is lots of air—the ceiling slopes up in the middle to at least twenty feet—and the air keeps moving; some walls consist almost entirely of wooden shutters that, when open, let the sea breeze pass as fast as it will. I am glad of this. My skin, after three days on Fredkin's island, is hot, and the air, though heavy, is cool. The sun is going down.

Fredkin, sitting on a white sofa, is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold.
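
A small sketch illustrates the point, using an elementary cellular automaton (Rule 110, chosen here only as an example): there is no known closed form for its future, so the only way to learn what generation N looks like is to compute every generation before it.

    #include <bitset>
    #include <cstdio>

    int main() {
        constexpr int W = 64;
        std::bitset<W> cells;
        cells[W / 2] = 1;                       // start from a single live cell

        const std::bitset<8> rule(110);         // lookup table, indexed by the
                                                // value of the 3-cell neighborhood
        for (int gen = 0; gen < 32; ++gen) {
            for (int i = 0; i < W; ++i)
                std::putchar(cells[i] ? '#' : '.');
            std::putchar('\n');

            std::bitset<W> next;
            for (int i = 0; i < W; ++i) {
                int left  = cells[(i + W - 1) % W];
                int mid   = cells[i];
                int right = cells[(i + 1) % W];
                next[i] = rule[(left << 2) | (mid << 1) | right];
            }
            cells = next;                       // one tick at a time; no shortcut
        }
    }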

This indeterminacy is very suggestive. It suggests, first of all, why so many "chaotic" phenomena, like smoke rising from a cigarette, are so difficult to predict using conventional mathematics. (In fact, some scientists have taken to modeling chaotic systems with cellular automata.) To Fredkin, it also suggests that even if human behavior is entirely determined, entirely inevitable, it may be unpredictable; there is room for "pseudo free will" in a completely mechanistic universe. But on this particular evening Fredkin is interested mainly in cosmogony, in the implications of this indeterminacy for the big question: Why does this giant computer of a universe exist?

It's simple, Fredkin explains: "The reason is, there is no way to know the answer to some question any faster than what's going on."

Aware that he may have said something enigmatic, Fredkin elaborates. Suppose, he says, that there is an all-powerful God. "And he's thinking of creating this universe. He's going to spend seven days on the job—this is totally allegorical—or six days on the job. Okay, now, if he's as all-powerful as you might imagine, he can say to himself, 'Wait a minute, why waste the time? I can create the whole thing, or I can just think about it for a minute and just realize what's going to happen so that I don't have to bother.' Now, ordinary physics says, Well, yeah, you got an all-powerful God, he can probably do that. What I can say is—this is very interesting—I can say I don't care how powerful God is; he cannot know the answer to the question any faster than doing it. Now, he can have various ways of doing it, but he has to do every Goddamn single step with every bit or he won't get the right answer. There's no shortcut."

Around sundown on Fredkin's island all kinds of insects start chirping or buzzing or whirring. Meanwhile, the wind chimes hanging just outside the back door are tinkling with methodical randomness. All this music is eerie and vaguely mystical. And so, increasingly, is the conversation. It is one of those moments when the context you've constructed falls apart, and gives way to a new, considerably stranger one. The old context in this case was that Fredkin is an iconoclastic thinker who believes that space and time are discrete, that the laws of the universe are algorithmic, and that the universe works according to the same principles as a computer (he uses this very phrasing in his most circumspect moments). The new context is that Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker calculate pi to nine jillion decimal places.

So, I say, you're arguing that the reason we're here is that some being wanted to theorize about reality, and the only way he could test his theories was to create reality? "No, you see, my explanation is much more abstract. I don't imagine there is a being or anything. I'm just using that to talk to you about it. What I'm saying is that there is no way to know what the future is any faster than running this [the universe] to get to that [the future]. Therefore, what I'm assuming is that there is a question and there is an answer, okay? I don't make any assumptions about who has the question, who wants the answer, anything."

But the more we talk, the closer Fredkin comes to the religious undercurrents he's trying to avoid. "Every astrophysical phenomenon that's going on is always assumed to be just accident," he says. "To me, this is a fairly arrogant position, in that intelligence—and computation, which includes intelligence, in my view—is a much more universal thing than people think. It's hard for me to believe that everything out there is just an accident." This sounds awfully like a position that Pope John Paul II or Billy Graham would take, and Fredkin is at pains to clarify his position: "I guess what I'm saying is—I don't have any religious belief. I don't believe that there is a God. I don't believe in Christianity or Judaism or anything like that, okay? I'm not an atheist, I'm not an agnostic, I'm just in a simple state. I don't know what there is or might be. But what I can say is that it seems likely to me that this particular universe we have is a consequence of something I would call intelligent." Does he mean that there's something out there that wanted to get the answer to a question? "Yeah." Something that set up the universe to see what would happen? "In some way, yes."

VI. The Language Barrier


IN 1974, UPON RETURNING TO MIT FROM CALTECH, Fredkin was primed to revolutionize science. Having done the broad conceptual work (concluding that the universe is a computer), he would enlist the aid of others in taking care of the details—translating the differential equations of physics into algorithms, experimenting with cellular-automaton rules and selecting the most elegant, and, eventually, discovering The Rule, the single law that governs every bit of space and accounts for everything. "He figured that all he needed was some people who knew physics, and that it would all be easy," Margolus says.

One early obstacle was Fredkin's reputation. He says, "I would find a brilliant student; he'd get turned on to this stuff and start to work on it. And then he would come to me and say, 'I'm going to work on something else.' And I would say, 'Why?' And I had a few very honest ones, and they would say, 'Well, I've been talking to my friends about this and they say I'm totally crazy to work on it. It'll ruin my career. I'll be tainted forever.'" Such fears were not entirely unfounded. Fredkin is one of those people who arouse either affection, admiration, and respect, or dislike and suspicion. The latter reaction has come from a number of professors at MIT, particularly those who put a premium on formal credentials, proper academic conduct, and not sounding like a crackpot. Fredkin was never oblivious of the complaints that his work wasn't "worthy of MIT," nor of the movements, periodically afoot, to sever, or at least weaken, his ties to the university. Neither were his graduate students.

Fredkin's critics finally got their way. In the early 1980s, while he was serving briefly as the president of Boston's CBS-TV affiliate, someone noticed that he wasn't spending much time around MIT and pointed to a faculty rule limiting outside professional activities. Fredkin was finding MIT "less and less interesting" anyway, so he agreed to be designated an adjunct professor. As he recalls the deal, he was going to do a moderate amount of teaching and be paid an "appropriate" salary. But he found the real salary insulting, declined payment, and never got around to teaching. Not surprisingly, he was not reappointed adjunct professor when his term expired, in 1986. Meanwhile, he had so nominally discharged his duties as the head of the information-mechanics group that the title was given to Toffoli.

Fredkin doubts that his ideas will achieve widespread acceptance anytime soon. He believes that most physicists are so deeply immersed in their kind of mathematics, and so uncomprehending of computation, as to be incapable of grasping the truth. Imagine, he says, that a twentieth-century time traveler visited Italy in the early seventeenth century and tried to reformulate Galileo's ideas in terms of calculus. Although it would be a vastly more powerful language of description than the old one, conveying its importance to the average scientist would be nearly impossible. There are times when Fredkin breaks through the language barrier, but they are few and far between. He can sell one person on one idea, another on another, but nobody seems to get the big picture. It's like a painting of a horse in a meadow, he says. "Everyone else only looks at it with a microscope, and they say, 'Aha, over here I see a little brown pigment. And over here I see a little green pigment.' Okay. Well, I see a horse."

Fredkin's research has nevertheless paid off in unanticipated ways. Comparing a computer's workings and the dynamics of physics turned out to be a good way to figure out how to build a very efficient computer—one that harnesses the laws of physics with great economy. Thus Toffoli and Margolus have designed an inexpensive but powerful cellular-automata machine, the CAM 6. The "machine" is actually a circuit board that when inserted in a personal computer permits it to orchestrate visual complexity at a speed that can be matched only by general-purpose computers costing hundreds of thousands of dollars. Since the circuit board costs only around $1,500, this engrossing machine may well entice young scientific revolutionaries into joining the quest for The Rule. Fredkin speaks of this possibility in almost biblical terms: "The big hope is that there will arise somewhere someone who will have some new, brilliant ideas," he says. "And I think this machine will have a dramatic effect on the probability of that happening."

But even if it does happen, it will not ensure Fredkin a place in scientific history. He is not really on record as believing that the universe is a computer. Although some of his tamer insights have been adopted, fleshed out, and published by Toffoli or Margolus, sometimes in collaboration with him, Fredkin himself has published nothing on digital physics. His stated rationale for not publishing has to do with, of all things, lack of ambition. "I'm just not terribly interested," he says. "A lot of people are fantastically motivated by publishing. It's part of a whole thing of getting ahead in the world." Margolus has another explanation: "Writing something down in good form takes a lot of time. And usually by the time he's done with the first or second draft, he has another wonderful idea that he's off on."

These two theories have merit, but so does a third: Fredkin can't write for academic journals. He doesn't know how. His erratic, hybrid education has left him with a mixture of terminology that neither computer scientists nor physicists recognize as their native tongue. Further, he is not schooled in the rules of scientific discourse; he seems just barely aware of the line between scientific hypothesis and philosophical speculation. He is not politic enough to confine his argument to its essence: that time and space are discrete, and that the state of every point in space at any point in time is determined by a single algorithm. In short, the very background that has allowed Fredkin to see the universe as a computer seems to prevent him from sharing his vision. If he could talk like other scientists, he might see only the things that they see.


Robert Wright is the author of
Three Scientists and Their Gods: Looking for Meaning in an Age of Information, The Moral Animal: Evolutionary Psychology and Everyday Life, and Nonzero: The Logic of Human Destiny.
Copyright © 2002 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; April 1988; Did the Universe Just Happen?; Volume 261, No. 4; page 29.
Unikernels Aren't Dead, They're Just Not Containers

Transcript

Buer: I thought I'd start you off by asking you a question. Have you ever considered why our computers don't have a control plane? To give you the background on what I mean by control plane, it's a pretty well-known term if you're dealing with networking equipment. The control plane is the firewall rules for your traffic; they decide how the traffic flows through your system. The data plane, on the other hand, is where the data actually moves. So if you translate these terms to compute, we can have a compute plane and a control plane.

Why don't we distinguish between these? I think it would be a really neat architecture, if we could have something like this, where this is where we compute. This is where we do stuff. Clients talk, they send requests. We respond to them, how much is two plus two? It's four. And this is where we decide how that thing behaves.

So the real question is why have we granted the computers the power to modify themselves? Because generally, we have. We've given them the ability to mess around with their own internals. The reason is, as with everything else in software, because it's always been done like that. History goes back to these two guys, Thompson and Ritchie, working on the PDP-11 implementing Unix. Now, Unix was a third party system and it was running on bare metal. If you were to try to design a system where you separate the compute and the control, you'd probably have to modify the hardware. They weren't capable of modifying the hardware because it was a third party system.

So basically, they ended up with Unix being granted the ability to modify itself. And 20 years later, the same thing happened with the other dominant operating system, Windows, which is also a third party operating system. A few of you might have dealt with other operating systems. I remember, in the early 2000s, I bought a couple of mini-machines from IBM, and then I had to buy another machine because I got these two mini-machines and I turned them on and nothing happened. You need another computer to configure these two because they've actually done the proper work of separating the compute from the control. And we now have the opportunity, we've been given the opportunity, to do something about this. With virtualization, you can actually get away with separating control and compute without actually having to modify hardware.

So welcome to my talk, "Unikernels Aren't Dead. They're Just Not Containers." My name is Per Buer. I run a company called IncludeOS. Before that I ran a company called Varnish Software. And before that, I worked with Tomas over there in a little open source consultancy and product development company called Linpro. We worked exclusively with open source from 1996. So I've been in a very privileged position to work exclusively with open source my entire career. Yes, so what I'll try to do in this talk is to share my experience working with Unikernels. What are they good at? What aren't they good at? What sort of workload could we put on them? Concretely, what is a Unikernel? How does it behave? Where can this be applied and what are the experiences?

Unikernel Primer

I'll start off by giving a Unikernel primer. What is a Unikernel? Well, if you want to make something, you start off by making the operating system a library. You want to have a library function for sending a packet, you want a library function for reading a block from disk, so you take your operating system, you deconstruct it and you put everything in libraries. So we have these functions that are now in the library.

Then as you build your application, you link that functionality into the application itself. So now you have an application that knows how to send packets over the internet. The application knows that. It doesn't need the operating system anymore. It actually has a NIC driver. It actually has memory management linked into it as well. The last thing you do is you add a boot loader or something on top of it, add some code to initialize hardware at start, and that's basically it.

You end up with something that looks like this. This is the memory space, or it's a map of the ELF binary, where at the beginning here, you have the boot loader. There's some application code. Here there is a driver. There's some kernel code there, the memory management. There's a TLS library. Yes. Compare this to your typical stack. On the next slide we have the application and custom application libraries; those link with the system libraries and run on top of a kernel which consists of all kinds of stuff.

Yes. I should also mention the underlying system: I don't make any assumptions about the underlying system. So I think that is perhaps one of the things where we diverge a bit from people that have come before us and done work on Unikernels, in that we have very few opinions. We don't really care where you run this. When we started development, we started development assuming VT-d was there, so hardware virtualization. Hardware-based virtual machines from Intel are more or less identical to physical machines. So the first time we just took our operating system image, dumped it to a USB stick, stuck it in a computer, turned it on, and it booted up quite fine.

We're not required to run in a virtual machine. I'm going to just try to do a very, very quick demonstration of what this actually looks like. Let me see. I'll just try to put it on. Can you see this? Is this readable? Yes. Yes, I'm writing just plain C here really. You can tell by where we got our name. So I think cat is underutilized as an editor. It has even basic line editing. So the VI and the Emacs wars, I stay out of them, and stick with cat. Now, so this is our basic "Hello world." Yes. And the thing is we just grabbed the boot command because actually there was no other command on Linux that's named boot. So we just took that. That sort of built. It needs root privileges because it sets up a network. So this is the hardware initialization thing. And then it says, "Hello World." So that's super, super simple. I think building that image takes approximately three seconds. Most of the operating system is already compiled; it's all in libraries somewhere. We'll just basically link it. Mostly we're I/O bound on how fast we can do this thing.
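
For reference, a minimal IncludeOS-style service looks roughly like the sketch below. It assumes the <os> header and the Service::start() entry point that IncludeOS links against; treat the exact names as an approximation of what was shown on screen.

    // A minimal IncludeOS-style service, roughly what the demo builds.
    // Assumption: the <os> header and the Service::start() entry point.
    #include <os>
    #include <cstdio>

    void Service::start()
    {
      printf("Hello World\n");
    }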

Unikernel Characteristics

Let me see there. Was that reasonably clear? Yes? Let's talk about how these systems behave. What do they do? How do they differ from our more traditional systems? What we've built basically is, well, it's kind of weird to talk about it as an operating system. You might as well talk about it as an operating system kit. So what I did was actually I wrote an application operating system that writes "Hello World." The Hello World operating system. That's what I actually wrote. You can view it in two different ways.

I would say one of the biggest differences is the system is fundamentally immutable. At some point, we sat down and we noticed that we, as operating system developers, we've never given the operating system the ability to modify itself. We hadn't written that. There is no code in the operating system that allows you to replace a function with another. So that makes them, in a very fundamental way, immutable. We have not granted them the power to modify themselves. Modifying an image is just not possible. Or it's really hard.

So it does change things a bit, like if we find an old running VM that's been running for a year, we're not scared of it. When I was a Linux sysadmin, I found a system that had been running for a year or two. We were treating it with a bit of caution and we would absolutely never ever reboot it, because God knows what lives inside there. We don't really know. With the Unikernel systems, you can be reasonably sure that what's running is what was running initially as well. I believe that this leads to greater security. In addition to this fundamental lack of being able to modify itself, there are a few practical implications.

We do have operating system functionality, but the way we access it is through function calls; you call the function pointer that points to the read function. So even if you write a buggy application and you have remote code execution, potentially, in that bug, what are you really going to do with that code? The shell code that you ship, what is it going to do? It'll be, okay, so you can spend a year trying to find the read call or the write call, but what are you going to read or write? There's not necessarily a file system here either, because we typically just boot up a binary ELF image and we don't load it off a file system. We just boot it up and it runs. I think that they're really, really, really immutable.

Another thing we found is that they are perfectly predictable. If an operation takes 5.3 microseconds to execute, it will take 5.3 microseconds every time. I mean, their memory caching will of course change this a bit. But if you compare this to Linux, where you have page faults, you have various internal locks, scheduling jitter, and other factors, that can lead to undefined behavior when it comes to timing. A stack I know of does HFT, or high frequency trading. They have this thing where, yes, it typically takes 2.3 microseconds. But every time all the moons of Jupiter align, it takes 100 milliseconds. 99.9% of the time it's fine. But once in a blue moon, it'll take forever. And that's because there's a page fault, and it can't be resolved because there's a lock that is held and there are a ton of other things that happen.

With Unikernels in general, there are no background processes, there are no tasks, there's nothing. It's only the code that you put in there. Like my "Hello World" example, that's all the code that's there. Yes. They're self-contained and they're simple. The only thing we rely on is the underlying hardware. So unless you actually need to talk to hardware, it's not going to talk to anything else. Earlier today, we talked about Secure Enclaves, in the operating system track at least. Maybe you weren't there if you were in other tracks. Anyway, I believe this is perfect. I know Jesse might not agree, but I believe this is pretty perfect to run inside a Secure Enclave.

For those of you who don't know, a Secure Enclave, the idea is that you have an encrypted, software-defined part of your computer where whatever happens inside of it is completely opaque to the rest of the host. The host has no idea what's running inside there. That's scary as well. But it also is a great way to protect all their secrets. But you need something to run inside it, to run that application code that's running inside there. And a tiny little operating system that only has some sort of predefined interface, so that you could, for instance, give it something: you give it your email, it signs it, spits it back out signed, done.

Also, I would say a fundamental characteristic is a limited compatibility with Linux and, generally, a very limited runtime. None of the Unikernels offer full POSIX compatibility, because they can't; Unikernels are single process, single address space. In order to be fully POSIX compliant, you have to be able to clone your processes and that doesn't make sense. So there's nobody that's fully POSIX compliant. And personally, I don't believe that we should strive to do that either, because POSIX is just an after-the-fact description of Unix, really. They didn't write POSIX and then, let's say, whatever your spin on the operating system would be, describe it and then implement it. No, it was the other way around. We started implementing it and then we described it 15 years later.

I just don't believe that it makes sense for Unikernels to try to do everything that these complex operating systems do. With Linux, you can do really crazy stuff. You could have your application. You could implement it halfway in Haskell and halfway in x86 assembler. That system supports that perfectly. But you have a place to run those crazy things and limiting the scope of what you can do, I believe, is in general a good thing.

Until now, I've talked about Unikernels in general. I will now talk a bit more about more specific things about IncludeOS. There are a couple of things that are different. The most, I would say, best-known Unikernel is probably Mirage, written in OCaml. IncludeOS came from an engineering college, so it's very pragmatic about everything. It's written in C++, C++17, because C++ is performant and it's industry standard, and you can use it to solve real problems.

There are some good things about C++, or I would say many good things, but for an operating system in particular, being implemented in C++ allows you to ingest a lot of other runtimes. One of the things that we hope to do this year is implement support for Node. Since the V8 engine is written in C++, we basically need libuv and then we should be able to compile and run Node. Their event model aligns perfectly with ours, which is not by accident. Yes. We're also multicore. We don't have opinions about threads and multicore systems. Well, in essence, I think we're as much a library operating system as we are a Unikernel. So our library operating system has no limitations on what you can do with it. We provide you with the tools to do whatever you need to do. Yes, and that's basically it.

There are a few practical things as well. So I would say we have a party trick. This is what I do at cocktail parties and stuff. This happened because developing for Unikernels has not always been unicorns and ponies. At some point, we were developing on Google Compute and it was a pain in the ass. It was horrible because every time we would change something, we would need to shut down the VM, replace the image, turn it back up again. That adds five minutes. If you add five minutes on top of compilation, you basically have developers with pitchforks pretty soon.

What we created as a pragmatic approach was this. We call this system live update. It basically relies on the fact that we run in a single address space. On Linux, you don't have a single address space. You have your server, it accepts the connection. You have some state in your application. And the Linux kernel has some state for that same connection. The TCP socket resides both in your application and in the kernel. You don't have access to the bits that are in the kernel. We're in a single address space. We have access to everything. The TCP connection is a C++ object. We can serialize it. That gives us the ability to do the following.

This is basically a map of memory on a system, and we have my application, version 1.0, running on the system. Now, what happens is when this thing boots up, it connects to a node service somewhere on the internet or on the local net and that service guides the system; it's the control plane. It tells the system what it should do. Because we don't have a shell, there's no shell. If you want to change something, you basically have to do it on the application node. And what's happening here is that the system decides that we need to update. So it pushes down an update and it gets split into three chunks, because we couldn't find contiguous memory. Then what happens is that we have functionality to serialize all the state in the application/operating system, like the list of TCP connections, or if it's a firewall, the connection tracking table, open files, or whatever. And we write that somewhere here.

Now, what we do then is that we have a little handcrafted piece of C code that's actually stuck way down in low memory. We'll just overwrite the binary. Then we boot it up, we run it. We basically run through the whole system, except that we now know, because there's stuff here in high memory that indicates it, that this system has run before. And once it's done initializing, it will actually restore the state and it will continue to run. Then we discard the state. And we have now replaced 100% of the running code on the system without downtime. Well, there's some downtime. Downtime is between 5 and 100 milliseconds, depending on how slow your PCI emulation is. I think there's room for optimizations there. But in general, over an internet connection, you should not be able to detect that this happened. Is that reasonably clear? Yes? Is it pretty cool? For me this is the coolest thing I've ever seen.
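
A conceptual sketch of that handoff is shown below. It is not the real IncludeOS LiveUpdate API; the buffer, the struct, and all the names are invented for illustration. The point is only the shape of the mechanism: the old image serializes its state into a reserved region, the binary is replaced, and the new image restores the state when it boots.

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Stand-in for the reserved high-memory area that survives the binary swap.
    static std::vector<uint8_t> g_handoff_area;

    struct TcpConnection {                  // hypothetical piece of application state
        uint32_t peer_ip;
        uint16_t peer_port;
        uint32_t next_seq;
    };

    // Old image: serialize every live connection just before the overwrite.
    void save_state(const std::vector<TcpConnection>& conns) {
        g_handoff_area.resize(conns.size() * sizeof(TcpConnection));
        std::memcpy(g_handoff_area.data(), conns.data(), g_handoff_area.size());
    }

    // New image: if the handoff area holds state, resume instead of starting cold.
    std::vector<TcpConnection> restore_state() {
        std::vector<TcpConnection> conns(g_handoff_area.size() / sizeof(TcpConnection));
        std::memcpy(conns.data(), g_handoff_area.data(), g_handoff_area.size());
        g_handoff_area.clear();             // discard the state once it has been restored
        return conns;
    }

    int main() {
        std::vector<TcpConnection> live = { {0x0A000001, 443, 1000} };
        save_state(live);                   // last act of the old image
        auto resumed = restore_state();     // first act of the new image
        return resumed.size() == live.size() ? 0 : 1;
    }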

Configuration

An interesting thing, of course, is when you do build an operating system based on other principles, you start questioning a lot of the design decisions of your current systems. Why do we have configuration files? Well, we have configuration files mostly because our operating system is vendor supplied. Program code is vendor supplied. You need somewhere to have local adaptations so that you can retain the local adaptations across upgrades. There's a cost to configuration files. Every time you add a non-trivial configuration, you add complexity to the system.

Do you guys remember in the '90s, we used to have a lot of Unix applications that didn't have configuration files? They would have a config.h and you would edit config.h. You would compile and install. And that would actually be it. My 3D printer has the same thing actually. I punch in what stepper drivers I have and how it should behave and compile it. A configuration file isn't always the only way to solve that problem. I mean, there's a lot of great stuff about configuration files, but I'm not necessarily sure they're always the right answer.
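
As a reminder of what that pattern looks like, here is a hypothetical config.h in that '90s style; every setting and name in it is made up.

    /* Hypothetical config.h: local adaptations are baked in at compile time
     * instead of being parsed from a file at runtime. All names are made up. */
    #ifndef CONFIG_H
    #define CONFIG_H

    #define LISTEN_PORT     8080
    #define MAX_CLIENTS     64
    #define ENABLE_LOGGING  1

    #endif /* CONFIG_H */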

NaCl: an Alternative to Configuration Files

So I'm going to show you something that we've done that gave us an alternative way of solving what many people solve with configuration files. So at some point, we had to create a firewall. This was early in 2017. The nice thing about firewalls is you just shovel packets back and forth. If you can route packets, you just disable that and you basically have a firewall, because with firewalls, the default position is to not route the packet. Also, we weren't really that sure how robust our TCP stack was at the time, and it turns out it wasn't very robust. But the thing about firewalls is they can actually just push packets back and forth and you don't exercise your TCP implementation as much as you otherwise would have to. It's a lot simpler to push TCP packets than to receive them.

So that's why we wrote a firewall. We started out by looking at Netfilter. So Netfilter is the firewall that lives inside the Linux kernel. We thought that what we want is semantically quite close to that. And we started doing it, and we started creating these chains of rules and populating them. It struck me that I've seen this thing before, having things move along rules, in a performance-dependent situation. One of the really cool things about Varnish is the way that it is configured. Varnish doesn't have a traditional configuration file. It has a VCL file, and VCL stands for Varnish Configuration Language. So what it does basically is that file is a high-level, non-Turing-complete language. And when we load it, we transpile it to C code and we throw GCC at it. And it creates a shared object. That shared object is then loaded and executed. It's a really interesting pattern which I really implore you to study a bit if you haven't seen it. I think there's great things you can do with it.

But we thought what we're trying to do here is more or less the same thing. We're actually trying to take a packet, write a bunch of rules to describe how the packet flows through, and then take actions. So the language itself looks something like this. There are some definitions up at the top. I don't know if you can see them. Bastion host is defined with an IP address. There's a list of allowed services. Those are just ints. Allowed host is a range. And then there are some rules there, which say if the connection tracking state is established, then syslog it and accept the packets.

This was actually really great. I don't know, I really hate reading iptables scripts because they miss this one crucial thing, this thing: the IF statement. IF statement, best shit ever. It allows you to simplify things. There's so much stuff that becomes simpler if you have IF statements. We can emulate IF statements with sub-chains and stuff, but it's not as nice as this. This is perfectly human readable. You've never seen this before and you can likely understand everything on it. And that's nice. For security, it's also quite important that your people actually understand what they're doing.

Now, we wrote it. I would say it was a naive implementation. The implementation took between two and three months. It hadn't struck me until just last week that perhaps the most important takeaway from it was the fact that we were actually able to implement a firewall in two to three months. The Netfilter team with Rusty Russell and the people around him, I think they spent two years writing Netfilter. And they were quite experienced people. Take Annika, who wrote our firewall: she had never touched networking code before. And she wrote something that was semantically quite close, and that performance-wise beat the crap out of Netfilter.

This graph was created by a student at the local engineering college. It adds rules to the firewall script, more and more rules, and then sees how that impacts performance. Performance here is just throughput. It would maybe have made more sense to do packets per second instead of gigabits, but yes, whatever. This is our firewall and it has a completely flat line. I think there's a 3% slowdown when we have 5,000 firewall rules, which is where the test stopped. This is Linux. This is the source filter, so here we filter on the source of the IP packet, and this is on destination port. Anyone here want to take a guess at why it's slower to filter on destination port rather than IP source address?

I think it's just at least one layer of indirection. TCP is a module in Netfilter. You also have to parse the TCP part of the packet, which you can skip if you're just filtering on source. But you see that it dramatically slows down. And I feel bad about doing this. I really like Netfilter. I was a huge fan when it came out, because it was so good, that much better than ipchains, which was pretty horrible. But I think it was interesting how we were able to write an implementation so quickly without any experience in that field. And it has been almost completely bug free.

I can talk for hours about just that factor and why I believe that we were able to do it so quickly but, yes. I should also note that this thing here is scheduled to be taken up behind the barn and shot soon. I think at least nftables or eBPF will replace iptables pretty soon. And it has the exact same characteristic. It's almost as flat as we are. Of course, it's like 15% further down but, of course, yes. When they built Netfilter, they had to build the runtime for the system. You have to create all these data structures and push the packet through the data structures, and there's all this complexity that you have to do. And we basically just created an ingestion point where there is C++ code that just accepts the packet, runs through it, and spits it out at the other end. It's so much simpler.
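To illustrate why such generated code can stay flat as the rule count grows, here is a hypothetical sketch of what an ingestion function could look like; the types, fields and addresses are invented and are not IncludeOS's actual output. The rules become straight-line branches the packet runs through, with no rule tables or chains to walk at runtime.

    // Invented illustration: the "firewall" is just one generated function.
    #include <cstdint>
    #include <cstdio>

    enum class Verdict { Accept, Drop };

    struct PacketView {                    // assumed minimal view of a parsed packet
        std::uint32_t src_ip;
        std::uint16_t dst_port;
        bool          conntrack_established;
    };

    // The ingestion point: generated if-statements, nothing else.
    Verdict filter(const PacketView& p) {
        constexpr std::uint32_t bastion_host = 0x0A000005;   // 10.0.0.5, hypothetical
        if (p.conntrack_established)
            return Verdict::Accept;
        if (p.src_ip == bastion_host && (p.dst_port == 22 || p.dst_port == 443))
            return Verdict::Accept;
        return Verdict::Drop;                                 // firewall default: drop
    }

    int main() {
        PacketView p{0x0A000005, 443, false};
        std::printf("verdict=%d\n", static_cast<int>(filter(p)));
        return 0;
    }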

When Are Unikernels Relevant?

I'm trying to get to a conclusion here. So when are Unikernels relevant? I think there are a couple of things. For me currently, I think the most exciting thing is the predictability. We're able to build predictable systems that are able to perform the same operation again and again, without this long tail of latency. We've looked at systems that are using FPGAs today where we can perhaps replace FPGAs, because we are as predictable as FPGA-based systems, because we can use small operating systems. We can do all kinds of weird stuff with it. We can turn off interrupts. We don't need the real interrupts in order to run. And likely, there's nothing else happening on that core, so we might as well poll.

It's also quite performant, specifically when you use these tricks, like that source-to-source translation. I think that's interesting. I think security could be an interesting thing as well. It used to be that our software was here and our infrastructure was here. There's this thing now where we are embedding software into our infrastructure. There are people building houses that ship with a Linux box inside them. That house is estimated to last 30, 40, 50 years with the Linux box in it. And that thing controls power, heating, maybe the door locks. I think that's great and everything, but would I buy a house that was controlled by a 25-year-old Linux computer in the basement? Not necessarily sure about that. Of course, I have no idea how well IncludeOS will stand up to 25 years of scrutiny. But I suspect it will do better than the Linux world because of the fundamental immutability of the design.

Of course, there will be denial of service vectors to it, which would be unfortunate if you can't get in and it's minus 15 degrees. But at least an attacker won't necessarily have the ability to make some of the power relays jitter so that you have these power spikes that make the relay catch fire and burn down your house. At least that would be a lot harder. Security, I think, is an important thing. Some people like this: there's no kernel-user boundary. I struggle to come up with a good example here of why that is relevant. There's nothing keeping you from snooping raw Ethernet frames. If that's what you want to do, then that's super simple to do. If you want to hook something that processes the raw Ethernet frames as they come onto the wire, it's super simple. If you want to build a system that consists of two machines on the same network with a good interconnect, that cooperatively work on the same TCP connection, I have no idea why you would do such a messed-up thing. But it's possible, because there are very few boundaries on what you can do.

Now that I've come to the end of the talk, I should, of course, mention that you can interrupt me with questions anytime. Here's some blue-sky stuff. Since we have those 10 minutes: we have this internal concept, which is just a concept at the moment, which we call Shadowkernels. Shadowkernels are basically the ability to run multiple Unikernels on a single VM. We could do some really interesting stuff when we start doing that. We can start to really prop up security even further. The idea here is to load the kernel, kernel zero for Unikernel zero. It runs in privileged mode, ring zero. Then what that does is boot up another kernel on another virtual CPU. That one is unprivileged and runs in ring three. It's completely read-only, and since it runs in ring three, it doesn't have the ability to modify its own page tables. It could also be running in whatever the next generation of Intel Secure Enclaves is, so that this thing can't snoop on it.

And then it will fire up another one, which is also a non-privileged one that runs in read-only memory, which is the load balancer, or the one that actually takes TLS connections and terminates them. This is where your TLS keys reside. This is what talks to strangers on the Internet. And this one is the only one that has the hardware capability to modify these ones. We have write-but-not-execute on stuff. I think that would be an interesting thing. I'll skip this one. And I think I'll just take your questions if there are any.

Questions & Answers

Participant 1: One thing I was wondering about was, I think, you possibly paused for questions at that point when you were talking about the live updates?

Buer: Yes.

Participant 1: The state in that case, was that just a state of where things were laid out in memory, or was it the real state of the application?

Buer: That was the state of the application.

Participant 1: So when I do an update, say, well, I guess a proxy or cache is as good an example as any, it will come back up with the cache hot?

Buer: Yes. The thing is, you'd have to write a method that implements serialization so that that object would be serializable. The operating system will just provide it a pointer to where it should dump its data.

Participant 1: Yes. That's exactly what my question was going to be, because when you upgrade, what if the data structure changes?

Buer: Yes, it does. And of course, if you have breaking changes, you get to resolve them before it works, naturally.
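As a hedged illustration of the serialize-and-hand-over idea discussed in this exchange, the sketch below shows application state being dumped into an OS-provided region and rebuilt afterwards. The interface is invented for illustration and is not IncludeOS's real API.

    // Invented sketch: the application makes its state serializable; the OS hands it
    // a pointer/region to dump that state into before the new image takes over.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    struct CacheState {                            // the application's own state
        std::vector<std::uint32_t> hot_keys;

        // Application-provided serialization into the OS-supplied area.
        std::size_t serialize(std::uint8_t* dst, std::size_t capacity) const {
            const std::size_t bytes = hot_keys.size() * sizeof(std::uint32_t);
            if (bytes > capacity) return 0;
            std::memcpy(dst, hot_keys.data(), bytes);
            return bytes;
        }

        // After the new image boots, the state is rebuilt from the same area.
        static CacheState deserialize(const std::uint8_t* src, std::size_t bytes) {
            CacheState s;
            s.hot_keys.resize(bytes / sizeof(std::uint32_t));
            std::memcpy(s.hot_keys.data(), src, s.hot_keys.size() * sizeof(std::uint32_t));
            return s;
        }
    };

    int main() {
        std::uint8_t handoff_area[64];             // stand-in for the OS-provided region
        CacheState before{{1, 2, 3}};
        const std::size_t n = before.serialize(handoff_area, sizeof(handoff_area));
        CacheState after = CacheState::deserialize(handoff_area, n);
        std::printf("restored %zu keys\n", after.hot_keys.size());
        return 0;
    }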

Participant 1: Can I ask a quick second question? Slightly different topic, but I guess, when you have everything laid out in memory, as you said, isn't that a bit of a security issue as well, because an attacker can figure out where things are in memory?

Buer: I don't think so. I mean, it's virtual memory, so I have no idea how things end up in physical memory. We used to have really simplistic randomization, not ASLR, but the static kind where we would mangle the address space a bit when we compiled. That didn't really add anything. We removed it, but the linker still dramatically changes where everything is every time we rebuild. Things are fairly randomly placed. I think we will try to do proper ASLR at some point, so that every time we reboot we come up with a slightly different layout. I don't see it as a vector. Yes.

Participant 2: What's the relation of IncludeOS to containers? Will it be possible in the future to run containers like Docker on Kubernetes on the OS, or would you run the OS within the container?

Buer: So the question was how we relate to containers. When we started out, there was this infamous blog post about how Unikernels will kill containers in five years. Then Docker had this knee-jerk reaction and went and bought Unikernel Systems. One way of looking at it is to talk about control plane and compute plane, but another way of looking at it is that Unikernels are perfect for these predefined systems. I want a system that behaves like this, this, this, this, and this. This is what I want. Now I build it. That's very much akin to LinuxKit, by the way.

Whereas these very, very generic tools like containers are runtime defined, and I think they rely a lot on that, with tools like Chef and Puppet relying on exactly that. And as for us trying to catch up with Linux and being compatible with Linux, I don't think there's ever been a successful operating system that tried to emulate another. OS/2 died. One of the reasons why it died was because it had this brilliant win32 emulation that would allow vendors to just skip writing support for OS/2, because it emulated Windows so perfectly. FreeBSD emulates Linux. It's probably not going to win.

And I know that there are efforts, I think [inaudible 00:45:23]. I think it's really interesting the way they try to take a Linux ELF binary and create a Unikernel around it. I think that's technologically impressive. But what I'm afraid of is that being 99.9% compatible with another system is probably not going to make it. So for us, I think, it was important to find the exact things that we can do. We can do this, Linux can't do that. So that was the goal of this talk, to try to share that experience with you. If you want this, if you want predictable ultra-low latency, you can try to do it on Linux. It's going to be painful, it's going to cost you a lot of resources, and 1 in 10,000 transactions is still going to be hit over the head with a baseball bat. Yes.

Participant 3: That's a good feed into my question. Where in the real world do you see real pickup or interest? Is it, as you mentioned, build time versus runtime? Is it load balancer or firewall builders, or high frequency trading people? Who expresses real interest in it?

Buer: I think the high frequency trading people are the ones that express real interest. We've tried doing the firewall and there was some limited interest in it. First and foremost, it's super neat the way we do firewalls with this live update thing. But the only reason that it's neat is because I tell you what's happening on the backend. If you just saw that system and you gave it another set of rules and it just changes the rules, that's not really that impressive. But if I tell you that we built a new operating system and it hot-swapped your operating system, it's a lot cooler. But that doesn't really provide you any business value. So I think there's been a lot of work on figuring out exactly what we can do that you can't do on other operating systems. Yes?

Participant 4: Yes. It's kind of tied into who was going to use you.

Buer: Can I supplement that a bit?

Participant 4: Yes, sure.

Buer: I think HFT is one thing. It probably might also be telcos, because there are lots of really latency-sensitive applications in there. I thought that in really big data centers, if you have more than 100,000 cores, there might be latency sensitivity as well. But I'm not entirely sure. If you have more than 100,000 cores in a single network and you have latency problems, I'd like to hear how that manifests itself. But I think the second thing is going to be appliances. The appliances, IoT or whatever you call appliances these days, because they do one thing, "Do one thing." And Unikernels have actually been around forever, just as microcontroller systems: FreeRTOS and those kinds of things. The library operating system was built as a single image. My 3D printer, for all practical purposes, runs a Unikernel at home.

It's just that as people need more and more CPU power, and need to leverage GPUs, microcontrollers aren't going to cut it anymore. And hopefully, some of those people who jump over to a CPU-based platform would like to retain the control that they used to have over their microcontroller systems. And, I think, that could potentially be where we go. It was a very long-winded answer, sorry.

Participant 5: I spent the last half hour trying to figure out what to compare you to. Should I compare you to a container or should I compare you to the JVM? I think the JVM is a bit fairer. Sure, you end up being polyglot; compared to GraalVM, it would actually be multiple languages supported. So do you have any big advantage other than, obviously, speed and the predictability?

Buer: The security. Yes.

Participant 5: Yes. Security needs to be proven though, but you definitely have the advantage that to attack your system, someone needs to figure it out first, obviously...

Buer: Yes. That's just true.

Participant 5: So is it mostly speed that ends up being ...

Buer: I think that predictability is much more important than speed, actually. Because currently, we're not real-time capable. Basically, that is because for an operating system to be real-time capable, you need to have interrupts. And you need to be able to run code immediately, like throw whatever is running on the CPU away and then put on the brakes; literally put on the brakes, because if not, you're going to kill that poor lady that's being detected by the LiDAR. Yes. I think that the stop sign was off. So I think if there are any more questions, please come forward and talk to me. I'd really, really like to hear your questions. I'll be here until Wednesday. So if you have other questions and see me later, please come and find me.


Sun, 12 May 2019 16:33:00 -0500 en text/html https://www.infoq.com/presentations/unikernels-includeos/
Killexams : Intel’s ATX12VO Standard: A Study In Increasing Computer Power Supply Efficiency

The venerable ATX standard was developed in 1995 by Intel, as an attempt to standardize what had until then been a PC ecosystem formed around the IBM AT PC's legacy. The preceding AT form factor was not so much a standard as it was a rough copy of the IBM AT's mainboard layout, along with all of its flaws.

With the ATX standard also came the ATX power supply (PSU), the standard for which defines the standard voltage rails and the function of each additional feature, such as soft power on (PS_ON).  As with all electrical appliances and gadgets during the 1990s and beyond, the ATX PSUs became the subject of power efficiency regulations, which would also lead to the 80+ certification program in 2004.

Starting in 2019, Intel has been promoting the ATX12VO (12 V only) standard for new systems, but what is this new standard about, and will switching everything to 12 V really be worth any power savings?

What ATX12VO Is

As the name implies, the ATX12VO standard is essentially about removing the other voltage rails that currently exist in the ATX PSU standard. The idea is that by providing one single base voltage, any other voltages can be generated as needed using step-down (buck) converters. Since the Pentium 4 era this has already become standard practice for the processor and much of the circuitry on the mainboard anyway.
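As a rough back-of-the-envelope illustration of that step-down approach (ideal converters only, ignoring losses and feedback), the nominal duty cycle for each derived rail follows directly from Vout ≈ D × Vin; the rail values below are purely illustrative.

    // Illustrative only: ideal buck-converter duty cycles for rails derived from 12 V.
    #include <cstdio>

    int main() {
        const double vin = 12.0;
        const double rails[] = {5.0, 3.3, 1.2};   // 1.2 V as a stand-in for a CPU rail
        for (double vout : rails)
            std::printf("%.1f V rail -> ideal duty cycle %.0f %%\n",
                        vout, 100.0 * vout / vin);
        return 0;
    }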

As the ATX PSU standard moved from the old 1.x revisions into the current 2.x revision range, the -5V rail was removed, and the -12V rail made optional. The ATX power connector with the mainboard was increased from 20 to 24 pins to allow for more 12 V capacity to be added. Along with the Pentium 4’s appetite for power came the new 4-pin mainboard connector, which is commonly called the “P4 connector”, but officially the “+12 V Power 4 Pin Connector” in the v2.53 standard. This adds another two 12 V lines.

Power input and output on the ASRock Z490 Phantom Gaming 4SR, an ATX12VO mainboard. (Credit: Anandtech)

In the ATX12VO standard, the -12 V, 5 V, 5 VSB (standby) and 3.3 V rails are deleted. The 24-pin connector is replaced with a 10-pin one that carries three 12 V lines (one more than ATX v2.x) in addition to the new 12 VSB standby voltage rail. The 4-pin 12 V connectors would still remain, and still require one to squeeze one or two of those through impossibly small gaps in the system’s case to get them to the top of the mainboard, near the CPU’s voltage regulator modules (VRMs).

While the PSU itself would be somewhat streamlined, the mainboard would gain these VRM sections for the 5 V and 3.3 V rails, as well as power outputs for SATA, Molex and similar. Essentially the mainboard would take over some of the PSU’s functions.

Why ATX12VO exists

A range of Dell computers and servers which will be subject to California's strict efficiency regulations.

The folk over at GamersNexus have covered their research and the industry's thoughts on the topic of ATX12VO in an article and video that were published last year. To make a long story short, OEM system builders and systems integrators are subject to pretty strong power efficiency regulations, especially in California. Starting in July of 2021, new Tier 2 regulations will come into force that add stricter requirements for OEM and SI computer equipment: see 1605.3(v)(5) (specifically table V-7) for details.

In order to meet these ever more stringent efficiency requirements, OEMs have been creating their own proprietary 12 V-only solutions, as detailed in GamersNexus’ recent video review on the Dell G5 5000 pre-built desktop system. Intel’s ATX12VO standard therefore would seem to be more targeted at unifying these proprietary standards rather than replacing ATX v2.x PSUs in DIY systems. For the latter group, who build their own systems out of standard ATX, mini-ITX and similar components, these stringent efficiency regulations do not apply.

The primary question thus becomes whether ATX12VO makes sense for DIY system builders. While the ability to (theoretically) increase power efficiency especially at low loads seems beneficial, it’s not impossible to accomplish the same with ATX v2.x PSUs. As stated by an anonymous PSU manufacturer in the GamersNexus article, SIs are likely to end up simply using high-efficiency ATX v2.x PSUs to meet California’s Tier 2 regulations.

Evolution vs Revolution

Seasonic’s CONNECT DC-DC module connected to a 12V PSU. (Credit: Seasonic)

Ever since the original ATX PSU standard, the improvements have been gradual and never disruptive. Although some got caught out by the negative voltage rails being left out when trying to power old mainboards that relied on -5 V and -12 V rails being present, in general these changes were minor enough to incorporate these into the natural upgrade cycle of computer systems. Not so with ATX12VO, as it absolutely requires an ATX12VO PSU and mainboard to accomplish the increased efficiency goals.

While the possibility of using an ATX v2.x to ATX12VO adapter exists that passively adapts the 12 V rails to the new 10-pin connector and boosts the 5 VSB line to 12 VSB levels, this actually lowers efficiency instead of increasing it. Essentially, the only way for ATX12VO to make a lot of sense is for the industry to switch over immediately and everyone to upgrade to it as well without reusing non-ATX12VO compatible mainboards and PSUs.

Another crucial point here is that OEMs and SIs are not required to adopt ATX12VO. Much like Intel’s ill-fated BTX alternative to the ATX standard, ATX12VO is a suggested standard that manufacturers and OEMs are free to adopt or ignore at their leisure.

Important here are probably the obvious negatives that ATX12VO introduces:

  • Adding another hot spot to the mainboard and taking up precious board space.
  • Turning mainboard manufacturers into PSU manufacturers.
  • Increasing the cost and complexity of mainboards.
  • Routing peripheral power (including case fans) from the mainboard.
  • Complicating troubleshooting of power issues.
Internals of Seasonic’s CONNECT modular power supply. (Credit: Tom’s Hardware)

Add to this potential alternatives like Seasonic’s CONNECT module. This does effectively the same as the ATX12VO standard, removing the 5 V and 3.3 V rails from the PSU and moving them to an external module, off of the mainboard. It can be fitted into the area behind the mainboard in many computer cases, making for very clean cable management. It also allows for increased efficiency.

As PSUs tend to survive at least a few system upgrades, it could be argued that from an environmental perspective, having the minor rails generated on the mainboard is undesirable. Perhaps the least desirable aspect of ATX12VO is that it reduces the modular nature of ATX-style computers, making them more like notebook-style systems. Instead, a more reasonable solution here might be that of a CONNECT-like solution which offers both an ATX 24-pin and ATX12VO-style 10-pin connectivity option.

Thinking larger

In the larger scheme of power efficiency it can be beneficial to take a few steps back from details like the innards of a computer system and look at e.g. the mains alternating current (AC) that powers these systems. A well-known property of switching mode power supplies (SMPS) like those used in any modern computer is that they’re more efficient at higher AC input voltages.

Power supply efficiency at different input voltages. (Credit: HP)

This can be seen clearly when looking for example at the rating levels for 80 Plus certification. Between 120 VAC and 230 VAC line voltage, the latter is significantly more efficient. To this one can also add the resistive losses from carrying double the amps over the house wiring for the same power draw at 120 V compared to 230 VAC. This is the reason why data centers in North America generally run on 208 VAC according to this APC white paper.
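A rough worked example makes the scale of the wiring losses clear. The 0.1 Ω round-trip wiring resistance used below is an assumed, illustrative figure rather than a measured one, and power factor is ignored.

    // Illustrative comparison of I^2*R wiring losses for the same load at 120 vs 230 VAC.
    #include <cstdio>

    int main() {
        const double load_w = 600.0;            // power drawn by the computer
        const double r_wire = 0.1;              // assumed round-trip wiring resistance, ohms
        const double volts[] = {120.0, 230.0};
        for (double v : volts) {
            const double i = load_w / v;        // line current
            const double loss = i * i * r_wire; // power dissipated in the wiring
            std::printf("%3.0f VAC: %.2f A, about %.1f W lost in the wiring\n", v, i, loss);
        }
        return 0;
    }

At a 600 W load the difference is only a couple of watts, but it comes on top of the converter's own efficiency advantage at the higher input voltage shown in the figures above.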

For crypto miners and similar, wiring up their computer room for 240 VAC (North American hot-neutral-hot) is also a popular topic, as it directly boosts their profits.

Future Outlook

Whether ATX12VO will become the next big thing or fizzle out like BTX and so many other proposed standards is hard to tell. One thing which the ATX12VO standard has against it is definitely that it requires a lot of big changes to happen in parallel, and the creation of a lot of electronic waste through forced upgrades within a short timespan. If we consider that many ATX and SFX-style PSUs are offered with 7-10 year warranties compared to the much shorter lifespan of mainboards, this poses a significant obstacle.

Based on the sounds from the industry, it seems highly likely that much will remain ‘business as usual’. There are many efficient ATX v2.x PSUs out there, including 80 Plus Platinum and Titanium rated ones, and Seasonic’s CONNECT and similar solutions would appeal heavily to those who are into neat cable management. For those who buy pre-built systems, the use of ATX12VO is also not relevant, so long as the hardware is compliant to all (efficiency) regulations. The ATX v2.x standard and 80 Plus certification are also changing to set strict 2-10% load efficiency targets, which is the main target with ATX12VO.

What would be the point for you to switch to ATX12VO, and would you pick it over a solution like Seasonic CONNECT if both offered the same efficiency levels?

(Heading image: Asrock Z490 Phantom Gaming 4SR with SATA power connected, credit: c’t)

Wed, 03 Aug 2022 11:59:00 -0500 Maya Posch en-US text/html https://hackaday.com/2021/06/07/intels-atx12vo-standard-a-study-in-increasing-computer-power-supply-efficiency/
Killexams : OSCON 2019: Data Asset eXchange, Kabanero, WSO2 API Microgateway 3.0, and Red Sky Ops

The O’Reilly Open Source Software Conference (OSCON) is taking place this week in Oregon, gathering together industry leaders to talk about open source, cloud native, data-driven solutions, AI capabilities and product management. 

“OSCON has continued to be the catalyst for open source innovation for twenty years, providing organizations with the latest technological advances and guidance to successfully implement the technology in a way that makes sense for them,” said Rachel Roumeliotis, vice president of content strategy at O’Reilly and OSCON program chair. “To keep OSCON at the forefront of open source innovation for the next twenty years, we’ve shifted the program to focus more on software development with topics such as cloud-native technologies. While not all are open source, they allow software developers to thrive and stay ahead of these shifts.”

A number of companies are also taking OSCON as an opportunity to release new software and solutions. Announcements included: 

IBM’s Data Asset eXchange (DAX)
DAX is an online hub designed to provide developers and data scientists a place to discover free and open datasets under open data licenses. The datasets will use the Linux Foundation’s Community Data License Agreement when possible, and integrate with IBM Cloud and AI services. IBM will also provide new datasets to the online hub regularly. 

“For developers, DAX provides a trusted source for carefully curated open datasets for AI. These datasets are ready for use in enterprise AI applications, with related content such as tutorials to make getting started easier,” the company wrote in a post

DAX joins IBM’s other initiatives to help data scientists and developers discover and access data. IBM Model Asset eXchange (MAX) is geared towards machine learning and deep learning models. The company’s Center for Open-Source Data and AI Technologies will work to make it easier to use DAX and MAX assets. 

New open-source projects
IBM also announced a new open-source project designed for Kubernetes. Kabanero is meant to help developers build cloud-native apps. It features governance and compliance capabilities and the ability to architect, build, deploy and manage the lifecycle of a Kubernetes-based app, IBM explained. 

“Kabanero takes the guesswork out of Kubernetes and DevOps. With Kabanero, you don’t need to spend time mastering DevOps practices and Kubernetes infrastructure topics like networking, ingress and security. Instead, Kabanero integrates the runtimes and frameworks that you already know and use (Node.js, Java, Swift) with a Kubernetes-native DevOps toolchain. Our pre-built deployments to Kubernetes and Knative (using Operators and Helm charts) are built on best practices. So, developers can spend more time developing scalable applications and less time understanding infrastructure,” Nate Ziemann, product manager at IBM, wrote in a post.

The company also announced Appsody, an open source project to help with cloud-native apps in containers; Codewind, an IDE integration for cloud-native development; and Razee, a project for multi-cluster continuous delivery tooling for Kubernetes.

“As companies modernize their infrastructure and adopt a hybrid cloud strategy, they’re increasingly turning to Kubernetes and containers. Choosing the right technology for building cloud-native apps and gaining the knowledge you need to effectively adopt Kubernetes is difficult. On top of that, enabling architects, developers, and operations to work together easily, while having their individual requirements met, is an additional challenge when moving to cloud,” Ziemann wrote. 

WSO2 API Microgateway 3.0 announced
WSO2 is introducing a new version of its WSO2 API Microgateway focused on creating, deploying and securing APIs within distributed microservices architectures. The latest release features developer-first runtime generation, run-time service discovery, support for composing multiple microservices, support for transforming legacy API formats and separation of the WSO2 API Microgateway toolkit.

“API microgateways are a key part of building resilient, manageable microservices architectures,” said Paul Fremantle, WSO2 CTO and co-founder. “WSO2 API Microgateway 3.0 fits effectively into continuous development practices and has the proven scalability and robustness for mission-critical applications.”

Carbon Relay’s new AIOps platform
Red Sky Ops is a new open-source AIOps platform to help organizations with Kubernetes initiatives as well as deploy, scale and manage containerized apps. According to Carbon Relay, this will help DevOps teams manage hundreds of app variables and configurations. The solution uses machine learning to study, replicate and stress-test app environments as well as configure, schedule and allocate resources. 

Carbon Relay has also announced it will be joining the Cloud Native Computing Foundation to better support the Kubernetes community and the use of cloud native technologies.

Sat, 16 Jul 2022 12:00:00 -0500 en-US text/html https://sdtimes.com/os/oscon-2019-data-asset-exchange-kabanero-wso2-api-micrograteway-3-0-and-red-sky-ops/
Killexams : VCS-260: Administration of Veritas InfoScale Availability 7.3 for UNIX/Linux

VCS-260: Administration of Veritas InfoScale Availability 7.3 for UNIX/Linux

Although each product varies in complexity and depth of technical knowledge, all certification exams target customers in an administrative role and cover core elements that measure technical knowledge against factors such as installation, configuration, deployment, management & administration, and basic troubleshooting.

This program consists of a technical test at a product/version level that validates that the successful candidate has the knowledge and skills necessary to successfully administer Veritas InfoScale Availability 7.3 for UNIX/Linux

Passing this test will result in a Veritas Certified Specialist (VCS) certification and counts towards the requirements for a Veritas Certified Professional (VCP) certification in Storage Management and High Availability for UNIX.

Exam details

# of Questions: 75 - 85
Exam Duration: 105 minutes
Passing Score: 71%
Languages: English
Exam Price: $225 USD (or your country’s currency equivalent)

Suggested preparation

Recommended Course:

Note: If you do not have prior experience with this product, it is recommended that you complete an in-person, classroom training or Virtual Academy virtual classroom training class in preparation for the VCS exam. Be aware that attending a training course does not guarantee passage of a certification exam.

Recommended preparation steps:

  1. Exam Preparation Guide (PDF): Download and review the guide to understand the scope of topics covered in the certification exam and how they map to the key lessons and topics in the associated training course(s).
  2. Attend recommended training classes listed above.
  3. Gain hands-on experience with the product. Six to twelve months experience working with InfoScale Availability and/or Veritas Cluster Server for UNIX/Linux in a production or lab environment is recommended.
  4. Sample test (PDF): Test yourself and your exam-taking skills using the sample exam

In addition, you should be familiar with the following product documentation and web sites:

Recommended hands-on experience (real world or virtual):

  • Recommended 9-12 months experience working with InfoScale Availability in a production or lab environment.
  • Recommended 3-6 months experience working with InfoScale Storage in a production or lab environment.
  • Recommended knowledge of UNIX/Linux system and network administration.
  • Recommended knowledge of storage virtualization and high availability concepts.
  • Preparing the environment for Veritas InfoScale Availability
  • Installing and configuring Veritas InfoScale Availability
  • Managing cluster communications and data protection mechanisms
  • Configuring service groups, resources, resource dependencies, agents and resource types
  • Configuring failover policies and service group dependencies
  • Validating the Veritas InfoScale Availability implementation
  • Performing basic troubleshooting
  • Configuring global clustering
  • Managing and administering InfoScale Availability using the Command Line and Veritas InfoScale Operations Manager (VIOM)
  • Deploy, configure and maintain Veritas InfoScale Availability
  • Determine how components external to the cluster may impact high availability
  • Monitor and manage clusters
  • Understand high availability concepts, components and architectures
  • Understand the impact of modifying cluster configurations
  • Understand users and access in a cluster environment
Mon, 24 Sep 2018 20:09:00 -0500 en-US text/html https://www.veritas.com/services/education-services/certification/exams/vcs-260