A Step-by-Step Guide to the Rational Unified Process

The Rational Unified Process (RUP) was developed in the late 1990s to improve software development. This guide will help you understand what it is and how to implement it.

There's nothing worse than putting out a buggy software platform. End users are complaining, people are demanding refunds, and management is not happy. Oh, and you've got a lot of extra work to do to fix it.

Just look at the blowback video games like No Man's Sky and Cyberpunk 2077 have gotten in recent years for releases that critics considered buggy or incomplete. It's taken years of further development after its initial release for No Man's Sky to recover some of its reputation -- time will tell if Cyberpunk 2077 can do the same. Either way, it's not a great position to be in.

When developing new software, getting it right the first time is critical. That's why Rational Software Corp., later acquired by IBM, developed the Rational Unified Process (RUP) in the late 1990s; it remains popular today. RUP provides a simplified way for software development teams to create new products while reducing risk.

So, what exactly is RUP? This guide will break down how it can help with project execution and how to implement it.

Overview: What is the Rational Unified Process (RUP)?

The Rational Unified Process model is an iterative software development procedure that works by dividing the product development process into four distinct phases:

  • Inception
  • Elaboration
  • Construction
  • Transition

Breaking development down this way helps companies organize work around each phase and execute tasks more efficiently. Many businesses implement the RUP project management process as a development best practice.
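RUP prescribes process, not code, but the phase-gate structure is easy to picture in a few lines. Below is a minimal Python sketch, with hypothetical class and milestone names of my own invention, of a project advancing through the four phases only when each phase's exit gate is approved:

```python
from enum import Enum

class RupPhase(Enum):
    INCEPTION = 1      # build the business case
    ELABORATION = 2    # architecture and detailed planning
    CONSTRUCTION = 3   # build and test the product
    TRANSITION = 4     # hand off to end users

class RupProject:
    """Hypothetical tracker for a project moving through the RUP phases."""

    def __init__(self, name: str):
        self.name = name
        self.phase = RupPhase.INCEPTION

    def pass_gate(self, approved: bool) -> RupPhase:
        """Advance to the next phase only if the current phase's exit
        criteria (e.g., an approved business case for inception) are met."""
        if not approved:
            raise RuntimeError(f"{self.phase.name} gate failed: rework or kill the project")
        phases = list(RupPhase)
        i = phases.index(self.phase)
        if i + 1 < len(phases):
            self.phase = phases[i + 1]
        return self.phase

project = RupProject("new-billing-app")
print(project.pass_gate(approved=True))   # RupPhase.ELABORATION
```

The point of the sketch is the one-way gates: a project never reaches construction without first surviving the inception and elaboration reviews.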

Phases of the Rational Unified Process (RUP)

As noted, there are four project phases of RUP, each identifying a specific step in the development of a product.

Inception

The development process begins with the idea for the project, which is known as the inception. The team determines the cost-benefit of this idea and maps out necessary resources, such as technology, assets, funding, manpower, and more.

The primary purpose of this phase is to make the business case for creating the software. The team will look at financial forecasts, as well as create a basic project plan to map out what it would look like to execute the project and generally what it would take to do so. A risk assessment would also factor into the discussion.

During this phase, the project manager may opt to kill the project if it doesn't look worth the company's time before any resources are expended on product development.

What’s happening: The team is creating a justification for the existence of this software project. It’s trying to tell management, “This new software will bring value to the company and the risks appear relatively small in comparison at first glance -- as a result, please let us start planning this out in more detail.”

Elaboration

If the software project passes the “smell” test -- i.e., the company thinks that on first pass the project benefits appear to outweigh the risks -- the elaboration phase is next. In this phase, the team dives deeper into the details of software development and leaves no stone unturned to ensure there are no showstoppers.

The team should map out resources in more detail and create a software development architecture. It considers all potential applications and affiliated costs associated with the project.

What’s happening: During this phase, the project is starting to take shape. The team hasn’t started development yet, but it is laying the final groundwork to get going. The project may still be derailed in this phase, but only if the team uncovers problems not revealed during the inception phase.

Construction

With the project mapped out and resources identified, the team moves on to the construction phase and actually starts building the product. It executes tasks and accomplishes project milestones along the way, reporting back to stakeholders on the project’s progress.

Thanks to the specific resources and detailed project architecture mapped out in the previous phase, the team is prepared to build the software and is better positioned to complete it on time and on budget.

What's happening: The team is creating a prototype of the software that can be reviewed and tested. This is the first phase that involves actually creating the product instead of just planning it.

Transition

The final phase is transition, which is when the software product is transitioned from development to production. At this point, all kinks are ironed out and the product is now ready for the end user instead of just developers.

This phase involves training end users, beta testing the system, evaluating product performance, and doing anything else required by the company before a software product is released.

During this phase, the management team may compare the end result to the original concept in the inception phase to see if the team met expectations or if the project went off track.

What's happening: The team is polishing the project and making sure it's ready for customers to use. Also, the software is now ready for a final evaluation.

4 best practices of the Rational Unified Process (RUP)

RUP is similar to other project planning techniques, like alliance management, the logical framework approach, project crashing, and the agile unified process (a simplified variant of RUP), but it is unique in how it specifically breaks down a project. Here are a few best practices to ensure your team implements RUP properly.

1. Keep the process iterative

By keeping the RUP method iterative -- that is, you break down the project into those four specific and separate chunks -- you reduce the risk of creating bad software. You improve testing and give the project manager more control over the software development as a whole.

2. Use component architectures

Rather than create one big, complicated architecture for the project, give each component its own architecture, which reduces the project's complexity and leaves you less exposed to variability. This also gives you more flexibility and control during development.

3. Be vigilant with quality control

Developing software using the RUP process is all about testing, testing, and more testing. RUP allows you to implement quality control at each stage of the project, and you must take advantage of that to ensure development is completed properly. This will help you detect defects, track them in a database, and ensure the product works properly in subsequent testing before releasing it to the end user.

4. Be flexible

Rigidity doesn’t work with product development, so use RUP’s structure to be flexible. Anticipate challenges and be open to change. Create space within each stage for developers to improvise and make adjustments on the fly. This gives them the opportunity to spot innovative ways of doing things and unleash their creative instincts, which results in a better software product.

Software can help implement RUP in your business

If you’re overwhelmed with planning software development projects, you’re not alone. That’s why project management software is such big business these days. Software can help you implement the RUP process by breaking down your next development project.

Try a few software solutions out with your team and experiment with the RUP process in each of them. See if you can complete an entire project with one software solution and then give another one a try. Once you settle on a solution that fits your team, it will make you much more effective at executing projects.

IBM is Modeling New AI After the Human Brain

Attentive Robots

Currently, artificial intelligence (AI) technologies are able to exhibit seemingly human traits. Some are intentionally humanoid, and others perform tasks that we normally associate strictly with humanity — songwriting, teaching, and visual art.

But as the field progresses, companies and developers are rethinking the basis of artificial intelligence by examining our own intelligence and how we might effectively mimic it using machinery and software. IBM is one such company: it has embarked on the ambitious quest to teach AI to act more like the human brain.


Many existing machine learning systems are built around the need to draw from sets of data. Whether they are problem-solving to win a game of Go or identifying skin cancer from images, this often remains true. This basis is, however, limited — and it is one way these systems differ from the human brain.

We as humans learn incrementally. Simply put, we learn as we go. While we acquire knowledge to pull from as we go along, our brains adapt and absorb information differently from the way that many existing artificial systems are built. Additionally, we are logical. We use reasoning skills and logic to solve problems, something that these systems aren't yet terrific at accomplishing.

IBM is looking to change this. A research team at DeepMind has created a synthetic neural network that reportedly uses rational reasoning to complete tasks.

Rational Machinery

By giving the AI multiple objects and a specific task, "we are explicitly forcing the network to discover the relationships that exist," says Timothy Lillicrap, a computer scientist at DeepMind, in an interview with Science Magazine. In a test of the network back in June, it was questioned about an image with multiple objects. The network was asked, for example: "There is an object in front of the blue thing; does it have the same shape as the tiny cyan thing that is to the right of the gray metal ball?"

In this test, the network correctly identified the object a staggering 96 percent of the time, compared to the measly 42 to 77 percent that more traditional machine learning models achieved. The advanced network was also apt at word problems and continues to be developed and improved upon. In addition to reasoning skills, researchers are advancing the network's ability to pay attention and even make and store memories.
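DeepMind's published description of this system calls it a relation network: every pair of objects in the scene is scored against the question, and the scores are summed before a final readout. The NumPy sketch below is a toy rendering of that idea only; the dimensions, untrained random weights, and variable names are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each row represents one object in the scene, and
# `question` encodes the query. All sizes here are made up.
objects = rng.normal(size=(6, 8))        # 6 objects, 8 features each
question = rng.normal(size=(4,))

W_g = rng.normal(size=(8 + 8 + 4, 16))   # pairwise relation function g
W_f = rng.normal(size=(16, 2))           # readout f over summed relations

def relu(x):
    return np.maximum(x, 0)

# Score every ordered pair of objects together with the question, then
# sum: the sum forces the model to consider relations between all pairs.
pairs = [
    relu(np.concatenate([oi, oj, question]) @ W_g)
    for oi in objects for oj in objects
]
answer_logits = np.sum(pairs, axis=0) @ W_f
print(answer_logits)   # e.g., untrained scores for "yes" / "no"
```

A trained version would learn g and f from question-answer pairs; the structural trick is simply that reasoning about pairwise relations is built into the wiring rather than learned from scratch.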


The future of AI development could be hastened and greatly expanded by using such tactics, according to Irina Rish, an IBM research staff member, in an interview with Engadget: "Neural network learning is typically engineered and it's a lot of work to actually come up with a specific architecture that works best. It's pretty much a trial and error approach ... It would be good if those networks could build themselves."

It might be scary to think of AI networks building and improving themselves, but if monitored, initiated, and controlled correctly, this could allow the field to expand beyond current limitations. Despite the brimming fears of a robot takeover, the advancement of AI technologies could save lives in the medical field, allow humans to get to Mars, and so much more. 


D-Case - Collaboration with Modeling Environment

This "D-Case OSLC Add-on" software package is to enable D-Case Editor to collaborate with modeling environments and software lifecycle management tool.

Information is exchanged with major modeling tools through a basic interface complying with Open Services for Lifecycle Collaboration (OSLC). This "D-Case OSLC Add-on" package includes an OSLC interface module for the D-Case Editor. By installing it on an existing development environment, D-Case contents can be migrated into that environment, and the environment's software tools can refer to them.
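As a concrete illustration of what an OSLC-style interface looks like from the client side: OSLC services are plain HTTP endpoints that serve resources as RDF. The Python sketch below is hypothetical; the host, port, and resource path depend entirely on the installed add-on and are invented here for illustration:

```python
import requests

# Hypothetical endpoint for a D-Case argument resource; the real URL
# depends on how the D-Case OSLC Add-on is deployed.
resource_url = "http://localhost:8080/dcase/services/arguments/1"

response = requests.get(
    resource_url,
    headers={
        "Accept": "application/rdf+xml",   # OSLC resources are served as RDF
        "OSLC-Core-Version": "2.0",
    },
)
response.raise_for_status()
print(response.text)   # RDF/XML description of the D-Case content
```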

Rhapsody Plugins

D-Case Editor Plugins

Source Files (.zip)

Copyright (c) 2013-2014 JST DEOS R&D Center

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Did the Universe Just Happen?

The Atlantic Monthly | April 1988 | Wright
 

I. Flying Solo


Ed Fredkin is scanning the visual field systematically. He checks the instrument panel regularly. He is cool, collected, in control. He is the optimally efficient pilot.

The plane is a Cessna Stationair Six—a six-passenger single-engine amphibious plane, the kind with the wheels recessed in pontoons. Fredkin bought it not long ago and is still working out a few kinks; right now he is taking it for a spin above the British Virgin Islands after some minor mechanical work.

He points down at several brown-green masses of land, embedded in a turquoise sea so clear that the shadows of yachts are distinctly visible on its sandy bottom. He singles out a small island with a good-sized villa and a swimming pool, and explains that the compound, and the island as well, belong to "the guy that owns Boy George"—the rock star's agent, or manager, or something.

I remark, loudly enough to overcome the engine noise, "It's nice."

Yes, Fredkin says, it's nice. He adds, "It's not as nice as my island."

He's joking, I guess, but he's right. Ed Fredkin's island, which soon comes into view, is bigger and prettier. It is about 125 acres, and the hill that constitutes its bulk is a deep green—a mixture of reeds and cacti, sea grape and turpentine trees, manchineel and frangipani. Its beaches range from prosaic to sublime, and the coral in the waters just offshore attracts little and big fish whose colors look as if they were coordinated by Alexander Julian. On the island's west side are immense rocks, suitable for careful climbing, and on the east side are a bar and restaurant and a modest hotel, which consists of three clapboard buildings, each with a few rooms. Between east and west is Fredkin's secluded island villa. All told, Moskito Island—or Drake's Anchorage, as the brochures call it—is a nice place for Fredkin to spend the few weeks of each year when he is not up in the Boston area tending his various other businesses.

In addition to being a self-made millionaire, Fredkin is a self-made intellectual. Twenty years ago, at the age of thirty-four, without so much as a bachelor's degree to his name, he became a full professor at the Massachusetts Institute of Technology. Though hired to teach computer science, and then selected to guide MIT's now eminent computer-science laboratory through some of its formative years, he soon branched out into more-offbeat things. Perhaps the most idiosyncratic of the courses he has taught is one on "digital physics," in which he propounded the most idiosyncratic of his several idiosyncratic theories. This theory is the reason I've come to Fredkin's island. It is one of those things that a person has to be prepared for. The preparer has to say, "Now, this is going to sound pretty weird, and in a way it is, but in a way it's not as weird as it sounds, and you'll see this once you understand it, but that may take a while, so in the meantime don't prejudge it, and don't casually dismiss it." Ed Fredkin thinks that the universe is a computer.

Fredkin works in a twilight zone of modern science—the interface of computer science and physics. Here two concepts that traditionally have ranked among science's most fundamental—matter and energy—keep bumping into a third: information. The exact relationship among the three is a question without a clear answer, a question vague enough, and basic enough, to have inspired a wide variety of opinions. Some scientists have settled for modest and sober answers. Information, they will tell you, is just one of many forms of matter and energy; it is embodied in things like a computer's electrons and a brain's neural firings, things like newsprint and radio waves, and that is that. Others talk in grander terms, suggesting that information deserves full equality with matter and energy, that it should join them in some sort of scientific trinity, that these three things are the main ingredients of reality.

Fredkin goes further still. According to his theory of digital physics, information is more fundamental than matter and energy. He believes that atoms, electrons, and quarks consist ultimately of bits—binary units of information, like those that are the currency of computation in a personal computer or a pocket calculator. And he believes that the behavior of those bits, and thus of the entire universe, is governed by a single programming rule. This rule, Fredkin says, is something fairly simple, something vastly less arcane than the mathematical constructs that conventional physicists use to explain the dynamics of physical reality. Yet through ceaseless repetition—by tirelessly taking information it has just transformed and transforming it further—it has generated pervasive complexity. Fredkin calls this rule, with discernible reverence, "the cause and prime mover of everything."

AT THE RESTAURANT ON FREDKIN'S ISLAND THE FOOD is prepared by a large man named Brutus and is humbly submitted to diners by men and women native to nearby islands. The restaurant is open-air, ventilated by a sea breeze that is warm during the day, cool at night, and almost always moist. Between the diners and the ocean is a knee-high stone wall, against which waves lap rhythmically. Beyond are other islands and a horizon typically blanketed by cottony clouds. Above is a thatched ceiling, concealing, if the truth be told, a sheet of corrugated steel. It is lunchtime now, and Fredkin is sitting in a cane-and-wicker chair across the table from me, wearing a light cotton sport shirt and gray swimming trunks. He was out trying to windsurf this morning, and he enjoyed only the marginal success that one would predict on the basis of his appearance. He is fairly tall and very thin, and has a softness about him—not effeminacy, but a gentleness of expression and manner—and the complexion of a scholar; even after a week on the island, his face doesn't vary much from white, except for his nose, which is red. The plastic frames of his glasses, in a modified aviator configuration, surround narrow eyes; there are times—early in the morning or right after a nap—when his eyes barely qualify as slits. His hair, perennially semi-combed, is black with a little gray.

Fredkin is a pleasant mealtime companion. He has much to say that is interesting, which is fortunate because generally he does most of the talking. He has little curiosity about other people's minds, unless their interests happen to coincide with his, which few people's do. "He's right above us," his wife, Joyce, once explained to me, holding her left hand just above her head, parallel to the ground. "Right here looking down. He's not looking down saying, 'I know more than you.' He's just going along his own way."

The food has not yet arrived, and Fredkin is passing the time by describing the world view into which his theory of digital physics fits. "There are three great philosophical questions," he begins. "What is life? What is consciousness and thinking and memory and all that? And how does the universe work?" He says that his "informational viewpoint" encompasses all three. Take life, for example. Deoxyribonucleic acid, the material of heredity, is "a good example of digitally encoded information," he says. "The information that implies what a creature or a plant is going to be is encoded; it has its representation in the DNA, right? Okay, now, there is a process that takes that information and transforms it into the creature, okay?" His point is that a mouse, for example, is "a big, complicated informational process."

Fredkin exudes rationality. His voice isn't quite as even and precise as Mr. Spock's, but it's close, and the parallels don't end there. He rarely displays emotion—except, perhaps, the slightest sign of irritation under the most trying circumstances. He has never seen a problem that didn't have a perfectly logical solution, and he believes strongly that intelligence can be mechanized without limit. More than ten years ago he founded the Fredkin Prize, a $100,000 award to be given to the creator of the first computer program that can beat a world chess champion. No one has won it yet, and Fredkin hopes to have the award raised to $1 million.

Fredkin is hardly alone in considering DNA a form of information, but this observation was less common back when he first made it. So too with many of his ideas. When his world view crystallized, a quarter of a century ago, he immediately saw dozens of large-scale implications, in fields ranging from physics to biology to psychology. A number of these have gained currency since then, and he considers this trend an ongoing substantiation of his entire outlook.

Fredkin talks some more and then recaps. "What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity life, DNA—you know, the biochemical functions—are controlled by a digital information process. Then, at another level, our thought processes are basically information processing." That is not to say, he stresses, that everything is best viewed as information. "It's just like there's mathematics and all these other things, but not everything is best viewed from a mathematical viewpoint. So what's being said is not that this comes along and replaces everything. It's one more avenue of modeling reality, and it happens to cover the sort of three biggest philosophical mysteries. So it sort of completes the picture."

Among the scientists who don't dismiss Fredkin's theory of digital physics out of hand is Marvin Minsky, a computer scientist and polymath at MIT, whose renown approaches cultic proportions in some circles. Minsky calls Fredkin "Einstein-like" in his ability to find deep principles through simple intellectual excursions. If it is true that most physicists think Fredkin is off the wall, Minsky told me, it is also true that "most physicists are the ones who don't invent new theories"; they go about their work with tunnel vision, never questioning the dogma of the day. When it comes to the kind of basic reformulation of thought proposed by Fredkin, "there's no point in talking to anyone but a Feynman or an Einstein or a Pauli," Minsky says. "The rest are just Republicans and Democrats." I talked with Richard Feynman, a Nobel laureate at the California Institute of Technology, before his death, in February. Feynman considered Fredkin a brilliant and consistently original, though sometimes incautious, thinker. If anyone is going to come up with a new and fruitful way of looking at physics, Feynman said, Fredkin will.

Notwithstanding their moral support, though, neither Feynman nor Minsky was ever convinced that the universe is a computer. They were endorsing Fredkin's mind, not this particular manifestation of it. When it comes to digital physics, Ed Fredkin is flying solo.

He knows that, and he regrets that his ideas continue to lack the support of his colleagues. But his self-confidence is unshaken. You see, Fredkin has had an odd childhood, and an odd education, and an odd career, all of which, he explains, have endowed him with an odd perspective, from which the essential nature of the universe happens to be clearly visible. "I feel like I'm the only person with eyes in a world where everyone's blind," he says.

II. A Finely Mottled Universe


THE PRIME MOVER OF EVERYTHING, THE SINGLE principle that governs the universe, lies somewhere within a class of computer programs known as cellular automata, according to Fredkin.

The cellular automaton was invented in the early 1950s by John von Neumann, one of the architects of computer science and a seminal thinker in several other fields. Von Neumann (who was stimulated in this and other inquiries by the ideas of the mathematician Stanislaw Ulam) saw cellular automata as a way to study reproduction abstractly, but the word cellular is not meant biologically when used in this context. It refers, rather, to adjacent spaces—cells—that together form a pattern. These days the cells typically appear on a computer screen, though von Neumann, lacking this convenience, rendered them on paper.

In some respects cellular automata resemble those splendid graphic displays produced by patriotic masses in authoritarian societies and by avid football fans at American universities. Holding up large colored cards on cue, they can collectively generate a portrait of, say, Lenin, Mao Zedong, or a University of Southern California Trojan. More impressive still, one portrait can fade out and another crystallize in no time at all. Again and again one frozen frame melts into another. It is a spectacular feat of precision and planning.

But suppose there were no planning. Suppose that instead of arranging a succession of cards to display, everyone learned a single rule for repeatedly determining which card was called for next. This rule might assume any of a number of forms. For example, in a crowd where all cards were either blue or white, each card holder could be instructed to look at his own card and the cards of his four nearest neighbors—to his front, back, left, and right—and do what the majority did during the last frame. (This five-cell group is known as the von Neumann neighborhood.) Alternatively, each card holder could be instructed to do the opposite of what the majority did. In either event the result would be a series not of predetermined portraits but of more abstract, unpredicted patterns. If, by prior agreement, we began with a USC Trojan, its white face might dissolve into a sea of blue, as whitecaps drifted aimlessly across the stadium. Conversely, an ocean of randomness could yield islands of structure—not a Trojan, perhaps, but at least something that didn't look entirely accidental. It all depends on the original pattern of cells and the rule used to transform it incrementally.

This leaves room for abundant variety. There are many ways to define a neighborhood, and for any given neighborhood there are many possible rules, most of them more complicated than blind conformity or implacable nonconformity. Each cell may, for instance, not only count cells in the vicinity but also pay attention to which particular cells are doing what. All told, the number of possible rules is an exponential function of the number of cells in the neighborhood: a rule must assign an output to every possible on-off configuration of the neighborhood, so the five-cell von Neumann neighborhood, with its 2⁵ = 32 configurations, alone has 2³², or around 4 billion, possible rules, and the nine-cell neighborhood that results from adding corner cells, with 2⁹ = 512 configurations, offers 2⁵¹², or roughly 1 with 154 zeros after it, possibilities. But whatever neighborhoods, and whatever rules, are programmed into a computer, two things are always true of cellular automata: all cells use the same rule to determine future behavior by reference to the past behavior of neighbors, and all cells obey the rule simultaneously, time after time.

In the late 1950s, shortly after becoming acquainted with cellular automata, Fredkin began playing around with rules, selecting the powerful and interesting and discarding the weak and bland. He found, for example, that any rule requiring all four of a cell's immediate neighbors to be lit up in order for the cell itself to be lit up at the next moment would not provide sustained entertainment; a single "off" cell would proliferate until darkness covered the computer screen. But equally simple rules could create great complexity. The first such rule discovered by Fredkin dictated that a cell be on if an odd number of cells in its von Neumann neighborhood had been on, and off otherwise. After "seeding" a good, powerful rule with an irregular landscape of off and on cells, Fredkin could watch rich patterns bloom, some freezing upon maturity, some eventually dissipating, others locking into a cycle of growth and decay. A colleague, after watching one of Fredkin's rules in action, suggested that he sell the program to a designer of Persian rugs.
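That first rule is simple enough to state in a few lines. The Python sketch below renders it from the description above; the grid size, random seed, and wrap-around edges are my own choices, not details from the text:

```python
import numpy as np

rng = np.random.default_rng(42)
grid = rng.integers(0, 2, size=(64, 64))   # irregular landscape of off/on cells

def step_parity(g):
    """One tick of Fredkin's rule: a cell is on at the next moment if an
    odd number of cells in its von Neumann neighborhood (itself plus the
    cells above, below, left, and right) were on, and off otherwise."""
    count = (
        g
        + np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0)   # up and down
        + np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1)   # left and right
    )
    return count % 2   # odd count -> on, even count -> off

for _ in range(10):   # every cell obeys the rule simultaneously, tick by tick
    grid = step_parity(grid)
print(grid.sum(), "cells on after 10 ticks")
```

Watching it run step by step shows the behavior described: patterns far richer than the one-line rule would suggest.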

Today new cellular-automaton rules are formulated and tested by the "information-mechanics group" founded by Fredkin at MIT's computer-science laboratory. The core of the group is an international duo of physicists, Tommaso Toffoli, of Italy, and Norman Margolus, of Canada. They differ in the degree to which they take Fredkin's theory of physics seriously, but both agree with him that there is value in exploring the relationship between computation and physics, and they have spent much time using cellular automata to simulate physical processes. In the basement of the computer-science laboratory is the CAM—the cellular automaton machine, designed by Toffoli and Margolus partly for that purpose. Its screen has 65,536 cells, each of which can assume any of four colors and can change color sixty times a second.

The CAM is an engrossing, potentially mesmerizing machine. Its four colors—the three primaries and black—intermix rapidly and intricately enough to form subtly shifting hues of almost any gradation; pretty waves of deep blue or red ebb and flow with fine fluidity and sometimes with rhythm, playing on the edge between chaos and order.

Guided by the right rule, the CAM can do a respectable imitation of pond water rippling outward circularly in deference to a descending pebble, or of bubbles forming at the bottom of a pot of boiling water, or of a snowflake blossoming from a seed of ice: step by step, a single "ice crystal" in the center of the screen unfolds into a full-fledged flake, a six-edged sheet of ice riddled symmetrically with dark pockets of mist. (It is easy to see how a cellular automaton can capture the principles thought to govern the growth of a snowflake: regions of vapor that find themselves in the vicinity of a budding snowflake freeze—unless so nearly enveloped by ice crystals that they cannot discharge enough heat to freeze.)

These exercises are fun to watch, and they give one a sense of the cellular automaton's power, but Fredkin is not particularly interested in them. After all, a snowflake is not, at the visible level, literally a cellular automaton; an ice crystal is not a single, indivisible bit of information, like the cell that portrays it. Fredkin believes that automata will more faithfully mirror reality as they are applied to its more fundamental levels and the rules needed to model the motion of molecules, atoms, electrons, and quarks are uncovered. And he believes that at the most fundamental level (whatever that turns out to be) the automaton will describe the physical world with perfect precision, because at that level the universe is a cellular automaton, in three dimensions—a crystalline lattice of interacting logic units, each one "deciding" zillions of times per second whether it will be off or on at the next point in time. The information thus produced, Fredkin says, is the fabric of reality, the stuff of which matter and energy are made. An electron, in Fredkin's universe, is nothing more than a pattern of information, and an orbiting electron is nothing more than that pattern moving. Indeed, even this motion is in some sense illusory: the bits of information that constitute the pattern never move, any more than football fans would change places to slide a USC Trojan four seats to the left. Each bit stays put and confines its activity to blinking on and off. "You see, I don't believe that there are objects like electrons and photons, and things which are themselves and nothing else," Fredkin says. "What I believe is that there's an information process, and the bits, when they're in certain configurations, behave like the thing we call the electron, or the hydrogen atom, or whatever."

THE READER MAY NOW HAVE A NUMBER OF questions that unless satisfactorily answered will lead to something approaching contempt for Fredkin's thinking. One such question concerns the way cellular automata chop space and time into little bits. Most conventional theories of physics reflect the intuition that reality is continuous—that one "point" in time is no such thing but, rather, flows seamlessly into the next, and that space, similarly, doesn't come in little chunks but is perfectly smooth. Fredkin's theory implies that both space and time have a graininess to them, and that the grains cannot be chopped up into smaller grains; that people and dogs and trees and oceans, at rock bottom, are more like mosaics than like paintings; and that time's essence is better captured by a digital watch than by a grandfather clock.

The obvious question is, Why do space and time seem continuous if they are not? The obvious answer is, The cubes of space and points of time are very, very small: time seems continuous in just the way that movies seem to move when in fact they are frames, and the illusion of spatial continuity is akin to the emergence of smooth shades from the finely mottled texture of a newspaper photograph.

The obvious answer, Fredkin says, is not the whole answer; the illusion of continuity is yet more deeply ingrained in our situation. Even if the ticks on the universal clock were, in some absolute sense, very slow, time would still seem continuous to us, since our perception, itself proceeding in the same ticks, would be no more finely grained than the processes being perceived. So too with spatial perception: Can eyes composed of the smallest units in existence perceive those units? Could any informational process sense its ultimate constituents? The point is that the basic units of time and space in Fredkin's reality don't just happen to be imperceptibly small. As long as the creatures doing the perceiving are in that reality, the units have to be imperceptibly small.

Though some may find this discreteness hard to comprehend, Fredkin finds a grainy reality more sensible than a smooth one. If reality is truly continuous, as most physicists now believe it is, then there must be quantities that cannot be expressed with a finite number of digits; the number representing the strength of an electromagnetic field, for example, could begin 5.23429847 and go on forever without falling into a pattern of repetition. That seems strange to Fredkin: wouldn't you eventually get to a point, around the hundredth, or thousandth, or millionth decimal place, where you had hit the strength of the field right on the nose? Indeed, wouldn't you expect that every physical quantity has an exactness about it? Well, you might and might not. But Fredkin does expect exactness, and in his universe he gets it.

Fredkin has an interesting way of expressing his insistence that all physical quantities be "rational." (A rational number is a number that can be expressed as a fraction—as a ratio of one integer to another. Expressed as a decimal, a rational number will either end, as 5/2 does in the form of 2.5, or repeat itself endlessly, as 1/7 does in the form of 0.142857142857142 . . .) He says he finds it hard to believe that a finite volume of space could contain an infinite amount of information. It is almost as if he viewed each parcel of space as having the digits describing it actually crammed into it. This seems an odd perspective, one that confuses the thing itself with the information it represents. But such an inversion between the realm of things and the realm of representation is common among those who work at the interface of computer science and physics. Contemplating the essence of information seems to affect the way you think.

The prospect of a discrete reality, however alien to the average person, is easier to fathom than the problem of the infinite regress, which is also raised by Fredkin's theory. The problem begins with the fact that information typically has a physical basis. Writing consists of ink; speech is composed of sound waves; even the computer's ephemeral bits and bytes are grounded in configurations of electrons. If the electrons are in turn made of information, then what is the information made of?

Asking questions like this ten or twelve times is not a good way to earn Fredkin's respect. A look of exasperation passes fleetingly over his face. "What I've tried to explain is that—and I hate to do this, because physicists are always doing this in an obnoxious way—is that the question implies you're missing a very important concept." He gives it one more try, two more tries, three, and eventually some of the fog between me and his view of the universe disappears. I begin to understand that this is a theory not just of physics but of metaphysics. When you disentangle these theories—compare the physics with other theories of physics, and the metaphysics with other ideas about metaphysics—both sound less far-fetched than when jumbled together as one. And, as a bonus, Fredkin's metaphysics leads to a kind of high-tech theology—to speculation about supreme beings and the purpose of life.

III. The Perfect Thing


EDWARD FREDKIN WAS BORN IN 1934, THE LAST OF three children in a previously prosperous family. His father, Manuel, had come to Southern California from Russia shortly after the Revolution and founded a chain of radio stores that did not survive the Great Depression. The family learned economy, and Fredkin has not forgotten it. He can reach into his pocket, pull out a tissue that should have been retired weeks ago, and, with cleaning solution, make an entire airplane windshield clear. He can take even a well-written computer program, sift through it for superfluous instructions, and edit it accordingly, reducing both its size and its running time.

Manuel was by all accounts a competitive man, and he focused his competitive energies on the two boys: Edward and his older brother, Norman. Manuel routinely challenged Ed's mastery of fact, inciting sustained arguments over, say, the distance between the moon and the earth. Norman's theory is that his father, though bright, was intellectually insecure; he seemed somehow threatened by the knowledge the boys brought home from school. Manuel's mistrust of books, experts, and all other sources of received wisdom was absorbed by Ed.

So was his competitiveness. Fredkin always considered himself the smartest kid in his class. He used to place bets with other students on test scores. This habit did not endear him to his peers, and he seems in general to have lacked the prerequisites of popularity. His sense of humor was unusual. His interests were not widely shared. His physique was not a force to be reckoned with. He recalls, "When I was young—you know, sixth, seventh grade—two kids would be choosing sides for a game of something. It could be touch football. They'd choose everybody but me, and then there'd be a fight as to whether one side would have to take me. One side would say, 'We have eight and you have seven,' and they'd say, 'That's okay.' They'd be willing to play with seven." Though exhaustive in documenting his social alienation, Fredkin concedes that he was not the only unpopular student in school. "There was a socially active subgroup, probably not a majority, maybe forty percent, who were very socially active. They went out on dates. They went to parties. They did this and they did that. The others were left out. And I was in this big left-out group. But I was in the pole position. I was really left out."

Of the hours Fredkin spent alone, a good many were devoted to courting disaster in the name of science. By wiring together scores of large, 45-volt batteries, he collected enough electricity to conjure up vivid, erratic arcs. By scraping the heads off matches and buying sulfur, saltpeter, and charcoal, he acquired a good working knowledge of pyrotechnics. He built small, minimally destructive but visually impressive bombs, and fashioned rockets out of cardboard tubing and aluminum foil. But more than bombs and rockets, it was mechanisms that captured Fredkin's attention. From an early age he was viscerally attracted to Big Ben alarm clocks, which he methodically took apart and put back together. He also picked up his father's facility with radios and household appliances. But whereas Manuel seemed to fix things without understanding the underlying science, his son was curious about first principles.

So while other kids were playing baseball or chasing girls, Ed Fredkin was taking things apart and putting them back together. Children were aloof, even cruel, but a broken clock always responded gratefully to a healing hand. "I always got along well with machines," he remembers.

After graduation from high school, in 1952, Fredkin headed for the California Institute of Technology with hopes of finding a more appreciative social environment. But students at Caltech turned out to bear a disturbing resemblance to people he had observed elsewhere. "They were smart like me," he recalls, "but they had the full spectrum and distribution of social development." Once again Fredkin found his weekends unencumbered by parties. And once again he didn't spend his free time studying. Indeed, one of the few lessons he learned is that college is different from high school: in college if you don't study, you flunk out. This he did a few months into his sophomore year. Then, following in his brother's footsteps, he joined the Air Force and learned to fly fighter planes.

IT WAS THE AIR FORCE THAT FINALLY BROUGHT Fredkin face to face with a computer. He was working for the Air Proving Ground Command, whose function was to ensure that everything from combat boots to bombers was of top quality, when the unit was given the job of testing a computerized air-defense system known as SAGE (for "semi-automatic ground environment"). To test SAGE the Air Force needed men who knew something about computers, and so in 1956 a group from the Air Proving Ground Command, including Fredkin, was sent to MIT's Lincoln Laboratory and enrolled in computer-science courses. "Everything made instant sense to me," Fredkin remembers. "I just soaked it up like a sponge."

SAGE, when ready for testing, turned out to be even more complex than anticipated—too complex to be tested by anyone but genuine experts—and the job had to be contracted out. This development, combined with bureaucratic disorder, meant that Fredkin was now a man without a function, a sort of visiting scholar at Lincoln Laboratory. "For a period of time, probably over a year, no one ever came to tell me to do anything. Well, meanwhile, down the hall they installed the latest, most modern computer in the world—IBM's biggest, most powerful computer. So I just went down and started to program it." The computer was an XD-1. It was slower and less capacious than an Apple Macintosh and was roughly the size of a large house.

When Fredkin talks about his year alone with this dinosaur, you half expect to hear violins start playing in the background. "My whole way of life was just waiting for the computer to come along," he says. "The computer was in essence just the perfect thing." It was in some respects preferable to every other conglomeration of matter he had encountered—more sophisticated and flexible than other inorganic machines, and more logical than organic ones. "See, when I write a program, if I write it correctly, it will work. If I'm dealing with a person, and I tell him something, and I tell him correctly, it may or may not work."

The XD-1, in short, was an intelligence with which Fredkin could empathize. It was the ultimate embodiment of mechanical predictability, the refuge to which as a child he had retreated from the incomprehensibly hostile world of humanity. If the universe is indeed a computer, then it could be a friendly place after all.

During the several years after his arrival at Lincoln Lab, as Fredkin was joining the first generation of hackers, he was also immersing himself in physics—finally learning, through self-instruction, the lessons he had missed by dropping out of Caltech. It is this two-track education, Fredkin says, that led him to the theory of digital physics. For a time "there was no one in the world with the same interest in physics who had the intimate experience with computers that I did. I honestly think that there was a period of many years when I was in a unique position."

The uniqueness lay not only in the fusion of physics and computer science but also in the peculiar composition of Fredkin's physics curriculum. Many physicists acquire as children the sort of kinship with mechanism that he still feels, but in most cases it is later diluted by formal education; quantum mechanics, the prevailing paradigm in contemporary physics, seems to imply that at its core, reality has truly random elements and is thus inherently unpredictable. But Fredkin escaped the usual indoctrination. To this day he maintains, as did Albert Einstein, that the common interpretation of quantum mechanics is mistaken—that any seeming indeterminacy in the subatomic world reflects only our ignorance of the determining principles, not their absence. This is a critical belief, for if he is wrong and the universe is not ultimately deterministic, then it cannot be governed by a process as exacting as computation.

After leaving the Air Force, Fredkin went to work for Bolt Beranek and Newman, a consulting firm in the Boston area, now known for its work in artificial intelligence and computer networking. His supervisor at BBN, J. C. R. Licklider, says of his first encounter with Fredkin, "It was obvious to me he was very unusual and probably a genius, and the more I came to know him, the more I came to think that that was not too elevated a description." Fredkin "worked almost continuously," Licklider recalls. "It was hard to get him to go to sleep sometimes." A pattern emerged. Licklider would supply Fredkin a problem to work on—say, figuring out how to get a computer to search a text in its memory for an only partially specified sequence of letters. Fredkin would retreat to his office and return twenty or thirty hours later with the solution—or, rather, a solution; he often came back with the answer to a question different from the one that Licklider had asked. Fredkin's focus was intense but undisciplined, and it tended to stray from a problem as soon as he was confident that he understood the solution in principle.

This intellectual wanderlust is one of Fredkin's most enduring and exasperating traits. Just about everyone who knows him has a way of describing it: "He doesn't really work. He sort of fiddles." "Very often he has these great ideas and then does not have the discipline to cultivate the idea." "There is a gap between the quality of the original ideas and what follows. There's an imbalance there." Fredkin is aware of his reputation. In self-parody he once brought a cartoon to a friend's attention: A beaver and another forest animal are contemplating an immense man-made dam. The beaver is saying something like, "No, I didn't actually build it. But it's based on an idea of mine."

Among the ideas that congealed in Fredkin's mind during his stay at BBN is the one that gave him his current reputation as (depending on whom you talk to) a thinker of great depth and rare insight, a source of interesting but reckless speculation, or a crackpot.

IV. Tick by Tick, Dot by Dot


THE IDEA THAT THE UNIVERSE IS A COMPUTER WAS inspired partly by the idea of the universal computer. Universal computer, a term that can accurately be applied to everything from an IBM PC to a Cray supercomputer, has a technical, rigorous definition, but here its upshot will do: a universal computer can simulate any process that can be precisely described and perform any calculation that is performable.

This broad power is ultimately grounded in something very simple: the algorithm. An algorithm is a fixed procedure for converting input into output, for taking one body of information and turning it into another. For example, a computer program that takes any number it is given, squares it, and subtracts three is an algorithm. This isn't a very powerful algorithm; by taking a 3 and turning it into a 6, it hasn't created much new information. But algorithms become more powerful with recursion. A recursive algorithm is an algorithm whose output is fed back into it as input. Thus the algorithm that turned 3 into 6, if operating recursively, would continue, turning 6 into 33, then 33 into 1,086, then 1,086 into 1,179,393, and so on.
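In code, the whole example is a one-line algorithm plus a feedback loop; a minimal Python sketch:

```python
def square_minus_three(x):
    """The article's example algorithm: square the input, subtract three."""
    return x * x - 3

# Run it recursively: feed each output back in as the next input.
value = 3
for _ in range(4):
    value = square_minus_three(value)
    print(value)   # 6, 33, 1086, 1179393 -- the sequence given above
```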

The power of recursive algorithms is especially apparent in the simulation of physical processes. While Fredkin was at BBN, he would use the company's Digital Equipment Corporation PDP-1 computer to simulate, say, two particles, one that was positively charged and one that was negatively charged, orbiting each other in accordance with the laws of electromagnetism. It was a pretty sight: two phosphor dots dancing, each etching a green trail that faded into yellow and then into darkness. But for Fredkin the attraction lay less in this elegant image than in its underlying logic. The program he had written took the particles' velocities and positions at one point in time, computed those variables for the next point in time, and then fed the new variables back into the algorithm to get newer variables—and so on and so on, thousands of times a second. The several steps in this algorithm, Fredkin recalls, were "very simple and very beautiful." It was in these orbiting phosphor dots that Fredkin first saw the appeal of his kind of universe—a universe that proceeds tick by tick and dot by dot, a universe in which complexity boils down to rules of elementary simplicity.
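A toy version of that PDP-1 program is easy to reconstruct. The sketch below uses arbitrary units, made-up starting values, and a crude fixed-step update, but it shows the recursive structure Fredkin admired: compute the next state from the current one, then feed the new state back in, tick after tick:

```python
import numpy as np

K = 1.0                                    # Coulomb constant in toy units
q = np.array([+1.0, -1.0])                 # opposite charges attract
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])  # positions of the two "dots"
vel = np.array([[0.0, -0.6], [0.0, 0.6]])  # initial velocities
dt = 0.001                                 # one tick

for _ in range(20_000):
    r = pos[0] - pos[1]                    # vector from particle 1 to particle 0
    f = K * q[0] * q[1] * r / np.linalg.norm(r) ** 3   # force on particle 0
    vel[0] += f * dt                       # masses taken as 1
    vel[1] -= f * dt                       # equal and opposite reaction
    pos += vel * dt                        # new state becomes the next input

print(pos)                                 # the two phosphor dots, 20,000 ticks later
```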

Fredkin's discovery of cellular automata a few years later permitted him further to indulge his taste for economy of information and strengthened his bond with the recursive algorithm. The patterns of automata are often all but impossible to describe with calculus yet easy to express algorithmically. Nothing is so striking about a good cellular automaton as the contrast between the simplicity of the underlying algorithm and the richness of its result. We have all felt the attraction of such contrasts. It accompanies the comprehension of any process, conceptual or physical, by which simplicity accommodates complexity. Simple solutions to complex problems, for example, make us feel good. The social engineer who designs uncomplicated legislation that will cure numerous social ills, the architect who eliminates several nagging design flaws by moving a single closet, the doctor who traces gastro-intestinal, cardiovascular, and respiratory ailments to a single, correctable cause—all feel the same kind of visceral, aesthetic satisfaction that must have filled the first caveman who literally killed two birds with one stone.

For scientists, the moment of discovery does not simply reinforce the search for knowledge; it inspires further research. Indeed, it directs research. The unifying principle, upon its apprehension, can elicit such devotion that thereafter the scientist looks everywhere for manifestations of it. It was the scientist in Fredkin who, upon seeing how a simple programming rule could yield immense complexity, got excited about looking at physics in a new way and stayed excited. He spent much of the next three decades fleshing out his intuition.

FREDKIN'S RESIGNATION FROM BOLT BERANEK AND Newman did not surprise Licklider. "I could tell that Ed was disappointed in the scope of projects undertaken at BBN. He would see them on a grander scale. I would try to argue—hey, let's cut our teeth on this and then move on to bigger things." Fredkin wasn't biting. "He came in one day and said, 'Gosh, Lick, I really love working here, but I'm going to have to leave. I've been thinking about my plans for the future, and I want to make'—I don't remember how many millions of dollars, but it shook me—'and I want to do it in about four years.' And he did amass however many millions he said he would amass in the time he predicted, which impressed me considerably."

In 1962 Fredkin founded Information International Incorporated—an impressive name for a company with no assets and no clients, whose sole employee had never graduated from college. Triple-I, as the company came to be called, was placed on the road to riches by an odd job that Fredkin performed for the Woods Hole Oceanographic Institute. One of Woods Hole's experiments had run into a complication: underwater instruments had faithfully recorded the changing direction and strength of deep ocean currents, but the information, encoded in tiny dots of light on sixteen-millimeter film, was inaccessible to the computers that were supposed to analyze it. Fredkin rented a sixteen-millimeter movie projector and with a surprisingly simple modification turned it into a machine for translating those dots into terms the computer could accept.

This contraption pleased the people at Woods Hole and led to a contract with Lincoln Laboratory. Lincoln was still doing work for the Air Force, and the Air Force wanted its computers to analyze radar information that, like the Woods Hole data, consisted of patterns of light on film. A makeshift information-conversion machine earned Triple-I $10,000, and within a year the Air Force hired Fredkin to build equipment devoted to the task. The job paid $350,000—the equivalent today of around $1 million. RCA and other companies, it turned out, also needed to turn visual patterns into digital data, and "programmable film readers" that sold for $500,000 apiece became Triple-I's stock-in-trade. In 1968 Triple-I went public and Fredkin was suddenly a millionaire. Gradually he cashed in his chips. First he bought a ranch in Colorado. Then one day he was thumbing through the classifieds and saw that an island in the Caribbean was for sale. He bought it.

In the early 1960s, at the suggestion of the Defense Department's Advanced Research Projects Agency, MIT set up what would become its Laboratory for Computer Science. It was then called Project MAC, an acronym that stood for both "machine-aided cognition" and "multiaccess computer." Fredkin had connections with the project from the beginning. Licklider, who had left BBN for the Pentagon shortly after Fredkin's departure, was influential in earmarking federal money for MAC. Marvin Minsky—who would later serve on Triple-I's board, and by the end of 1967 owned some of its stock—was centrally involved in MAC's inception. Fredkin served on Project MAC's steering committee, and in 1966 he began discussing with Minsky the possibility of becoming a visiting professor at MIT. The idea of bringing a college dropout onto the faculty, Minsky recalls, was not as outlandish as it now sounds; computer science had become an academic discipline so suddenly that many of its leading lights possessed meager formal credentials. In 1968, after Licklider had come to MIT and become the director of Project MAC, he and Minsky convinced Louis Smullin, the head of the electrical-engineering department, that Fredkin was worth the gamble. "We were a growing department and we wanted exciting people," Smullin says. "And Ed was exciting."

Fredkin had taught for barely a year before he became a full professor, and not much later, in 1971, he was appointed the head of Project MAC—a position that was also short-lived, for in the fall of 1974 he began a sabbatical at the California Institute of Technology as a Fairchild Distinguished Scholar. He went to Caltech under the sponsorship of Richard Feynman. The deal, Fredkin recalls, was that he would teach Feynman more about computer science, and Feynman would teach him more about physics. While there, Fredkin developed an idea that has slowly come to be seen as a profound contribution to both disciplines. The idea is also—in Fredkin's mind, at least—corroborating evidence for his theory of digital physics. To put its upshot in brief and therefore obscure terms, Fredkin found that computation is not inherently irreversible and thus it is possible, in principle, to build a computer that doesn't use up energy and doesn't give off heat.

All computers on the market are irreversible. That is, their history of information processing cannot be inferred from their present informational state; you cannot look at the data they contain and figure out how they arrived at it. By the time the average computer tells you that 2 plus 2 equals 4, it has forgotten the question; for all it knows, you asked what 1 plus 3 is. The reason for this ignorance is that computers discharge information once it is no longer needed, so that they won't get clogged up.

In 1961 Rolf Landauer, of IBM's Thomas J. Watson Research Center, established that this destruction of information is the only part of the computational process that unavoidably involves the dissipation of energy. It takes effort, in other words, for a computer to forget things but not necessarily for it to perform other functions. Thus the question of whether you can, in principle, build a universal computer that doesn't dissipate energy in the form of heat is synonymous with the question of whether you can design a logically reversible universal computer, one whose computational history can always be unearthed. Landauer, along with just about everyone else, thought such a computer impossible; all past computer architectures had implied the regular discarding of information, and it was widely believed that this irreversibility was intrinsic to computation. But while at Caltech, Fredkin did one of his favorite things—he showed that everyone had been wrong all along.
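Landauer's result comes with a number attached, though the figure itself is rarely quoted. The textbook statement of the bound (a standard physics figure, supplied here for reference; it is not drawn from anything Landauer or Fredkin says in these pages) is that erasing a single bit at absolute temperature T must dissipate at least

    E_min = k × T × ln 2
          ≈ (1.38 × 10^-23 J/K) × (300 K) × 0.693 ≈ 2.9 × 10^-21 joules

at room temperature, where k is Boltzmann's constant. Every other step of a computation can, in principle, be had for free.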

Of the two kinds of reversible computers invented by Fredkin, the better known is called the billiard-ball computer. If it were ever actually built, it would consist of billiard balls ricocheting around in a labyrinth of "mirrors," bouncing off the mirrors at 45-degree angles, periodically banging into other moving balls at 90-degree angles, and occasionally exiting through doorways that would, from time to time, permit new balls to enter. To extract data from the machine, you would superimpose a grid over it, and the presence or absence of a ball in a given square at a given point in time would constitute information. Such a machine, Fredkin showed, would qualify as a universal computer; it could do anything that normal computers do. But unlike other computers, it would be perfectly reversible; to recover its history, all you would have to do is stop it and run it backward. Charles H. Bennett, of IBM's Thomas J. Watson Research Center, independently arrived at a different proof that reversible computation is possible, though he considers the billiard-ball computer to be in some respects a more elegant solution to the problem than his own.
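The logic those careening balls carry out is usually distilled into what has come to be called the Fredkin gate, a "controlled swap." The short Python sketch below is an editorial illustration, not a rendering of any machine described here; the function name and the little test harness are mine, though the gate itself is standard. It demonstrates the property at issue: the gate is its own inverse, so nothing is ever forgotten.

    # The Fredkin gate: a controlled swap. When the control bit c is 1,
    # the data bits a and b trade places; otherwise all three bits pass
    # through untouched.
    def fredkin(c, a, b):
        if c == 1:
            a, b = b, a
        return (c, a, b)

    # Reversibility: the gate undoes itself, so applying it twice restores
    # any of the eight possible inputs. No information is ever discarded.
    for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
        assert fredkin(*fredkin(*bits)) == bits

    # Universality, in miniature: hold the b wire at 0 and the third output
    # becomes (c AND a), while the other outputs retain enough of the input
    # to run the computation backward.
    print(fredkin(1, 1, 0))   # -> (1, 0, 1); the final 1 is 1 AND 1

Ordinary logic gates throw away those extra outputs, which is exactly the information loss Landauer showed must be paid for in heat.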

The billiard-ball computer will never be built, because it is a platonic device, existing only in a world of ideals. The balls are perfectly round and hard, and the table perfectly smooth and hard. There is no friction between the two, and no energy is lost when balls collide. Still, although these ideals are unreachable, they could be approached eternally through technological refinement, and the heat produced by friction and collision could thus be reduced without limit. Since no additional heat would be created by information loss, there would be no necessary minimum on the total heat emitted by the computer. "The cleverer you are, the less heat it will generate," Fredkin says.

The connection Fredkin sees between the billiard-ball computer and digital physics exemplifies the odd assortment of evidence he has gathered in support of his theory. Molecules and atoms and their constituents, he notes, move around in theoretically reversible fashion, like billiard balls (although it is not humanly possible, of course, actually to take stock of the physical state of the universe, or even one small corner of it, and reconstruct history by tracing the motion of microscopic particles backward). Well, he asks, given the theoretical reversibility of physical reality, doesn't the theoretical feasibility of a reversible computer lend credence to the claim that computation is reality's basis?

No and yes. Strictly speaking, Fredkin's theory doesn't demand reversible computation. It is conceivable that an irreversible process at the very core of reality could give rise to the reversible behavior of molecules, atoms, electrons, and the rest. After all, irreversible computers (that is, all computers on the market) can simulate reversible billiard balls. But they do so in a convoluted way, Fredkin says, and the connection between an irreversible substratum and a reversible stratum would, similarly, be tortuous—or, as he puts it, "aesthetically obnoxious." Fredkin prefers to think that the cellular automaton underlying reversible reality does its work gracefully.

Consider, for example, a variant of the billiard-ball computer invented by Norman Margolus, the Canadian in MIT's information-mechanics group. Margolus showed how a two-state cellular automaton that was itself reversible could simulate the billiard-ball computer using only a simple rule involving a small neighborhood. This cellular automaton in action looks like a jazzed-up version of the original video game, Pong. It is an overhead view of endlessly energetic balls ricocheting off clusters of mirrors and each other. It is proof that a very simple binary cellular automaton can give rise to the seemingly more complex behavior of microscopic particles bouncing off each other. And, as a kind of bonus, these particular particles themselves amount to a computer. Though Margolus discovered this powerful cellular-automaton rule, it was Fredkin who had first concluded that it must exist and persuaded Margolus to look for it. "He has an intuitive idea of how things should be," Margolus says. "And often, if he can't come up with a rational argument to convince you that it should be so, he'll sort of transfer his intuition to you."
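Margolus's actual rule is too intricate to reproduce here, but the mechanism it relies on, a grid carved into two-by-two blocks with the partition shifted on alternate steps, is easy to sketch. The toy Python rule below is an invented stand-in, not Margolus's rule: each step simply rotates every block a quarter turn. Because a step merely permutes cells, the automaton is exactly reversible, and undoing the rotations in reverse order recovers the entire past.

    # A toy reversible cellular automaton in the Margolus style. The grid is
    # carved into 2x2 blocks; each step rotates every block a quarter turn.
    # The partition shifts by one cell on alternate steps so that blocks
    # interact, and the grid wraps around at the edges.
    def step(grid, offset, clockwise=True):
        n = len(grid)                        # assume an even-sized square grid
        new = [row[:] for row in grid]
        for i in range(offset, n + offset, 2):
            for j in range(offset, n + offset, 2):
                # the four cells of one block, listed clockwise
                cells = [(i % n, j % n), (i % n, (j + 1) % n),
                         ((i + 1) % n, (j + 1) % n), ((i + 1) % n, j % n)]
                k = 1 if clockwise else -1
                for idx in range(4):
                    r, c = cells[idx]
                    r2, c2 = cells[(idx + k) % 4]
                    new[r2][c2] = grid[r][c]
        return new

    grid = [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0]]
    history = [grid]
    for t in range(4):                       # run forward, alternating partitions
        history.append(step(history[-1], offset=t % 2))

    state = history[-1]                      # now run time backward
    for t in reversed(range(4)):
        state = step(state, offset=t % 2, clockwise=False)
    assert state == grid                     # the past is fully recoverable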

That, really, is what Fredkin is trying to do when he argues that the universe is a computer. He cannot give you a single line of reasoning that leads inexorably, or even very plausibly, to this conclusion. He can tell you about the reversible computer, about Margolus's cellular automaton, about the many physical quantities, like light, that were once thought to be continuous but are now considered discrete, and so on. The evidence consists of many little things—so many, and so little, that in the end he is forced to convey his truth by simile. "I find the supporting evidence for my beliefs in ten thousand different places," he says. "And to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, Where is this animal? I say, Well, he was here, he's about this big, this that and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there." The story changes upon retelling. One day it's Bigfoot that Fredkin's trailing. Another day it's a duck: feathers are everywhere, and the tracks are webbed. Whatever the animal, the moral of the story remains the same: "What I see is so compelling that it can't be a creature of my imagination."

V. Deus ex Machina


THERE WAS SOMETHING BOTHERSOME about Isaac Newton's theory of gravitation. The idea that the sun exerts a pull on the earth, and vice versa, sounded vaguely supernatural and, in any event, was hard to explain. How, after all, could such "action at a distance" be realized? Did the earth look at the sun, estimate the distance, and consult the law of gravitation to determine where it should move and how fast? Newton sidestepped such questions. He fudged with the Latin phrase si esset: two bodies, he wrote, behave as if impelled by a force inversely proportional to the square of their distance. Ever since Newton, physics has followed his example. Its "force fields" are, strictly speaking, metaphorical, and its laws purely descriptive. Physicists make no attempt to explain why things obey the law of electromagnetism or of gravitation. The law is the law, and that's all there is to it.

Fredkin refuses to accept authority so blindly. He posits not only laws but also a law-enforcement agency: a computer. Somewhere out there, he believes, is a machinelike thing that actually keeps our individual bits of space abiding by the rule of the universal cellular automaton. With this belief Fredkin crosses the line between physics and metaphysics, between scientific hypothesis and cosmic speculation. If Fredkin had Newton's knack for public relations, if he stopped at saying that the universe operates as if it were a computer, he could improve his stature among physicists while preserving the essence of his theory—the idea that the dynamics of physical reality will ultimately be better captured by a single recursive algorithm than by the mathematics of conventional physics, and that the continuity of time and space implicit in traditional mathematics is illusory.

Actually, some estimable physicists have lately been saying things not wholly unlike this stripped-down version of the theory. T. D. Lee, a Nobel laureate at Columbia University, has written at length about the possibility that time is discrete. And in 1984 Scientific American, not exactly a soapbox for cranks, published an article in which Stephen Wolfram, then of Princeton's Institute for Advanced Study, wrote, "Scientific laws are now being viewed as algorithms. . . . Physical systems are viewed as computational systems, processing information much the way computers do." He concluded, "A new paradigm has been born."

The line between responsible scientific speculation and off-the-wall metaphysical pronouncement was nicely illustrated by an article in which Tommaso Toffoli, the Italian in MIT's information-mechanics group, stayed barely on the responsible side of it. Published in the journal Physica D, the article was called "Cellular automata as an alternative to (rather than an approximation of) differential equations in modeling physics." Toffoli's thesis captured the core of Fredkin's theory yet had a perfectly reasonable ring to it. He simply suggested that the historical reliance of physicists on calculus may have been due not just to its merits but also to the fact that before the computer, alternative languages of description were not practical.

Why does Fredkin refuse to do the expedient thing—leave out the part about the universe actually being a computer? One reason is that he considers reprehensible the failure of Newton, and of all physicists since, to back up their descriptions of nature with explanations. He is amazed to find "perfectly rational scientists" believing in "a form of mysticism: that things just happen because they happen." The best physics, Fredkin seems to believe, is metaphysics.

The trouble with metaphysics is its endless depth. For every question that is answered, at least one other is raised, and it is not always clear that, on balance, any progress has been made. For example, where is this computer that Fredkin keeps talking about? Is it in this universe, residing along some fifth or sixth dimension that renders it invisible? Is it in some meta-universe? The answer is the latter, apparently, and to understand why, we need to return to the problem of the infinite regress, a problem that Rolf Landauer, among others, has cited with respect to Fredkin's theory. Landauer illustrates the problem by telling the old turtle story. A professor has just finished lecturing at some august university about the origin and structure of the universe, and an old woman in tennis shoes walks up to the lectern. "Excuse me, sir, but you've got it all wrong," she says. "The truth is that the universe is sitting on the back of a huge turtle." The professor decides to humor her. "Oh, really?" he asks. "Well, tell me, what is the turtle standing on?" The lady has a ready reply: "Oh, it's standing on another turtle." The professor asks, "And what is that turtle standing on?" Without hesitation, she says, "Another turtle." The professor, still game, repeats his question. A look of impatience comes across the woman's face. She holds up her hand, stopping him in mid-sentence. "Save your breath, sonny," she says. "It's turtles all the way down."

The infinite-regress problem afflicts Fredkin's theory in two ways, one of which we have already encountered: if matter is made of information, what is the information made of? And even if one concedes that it is no more ludicrous for information to be the most fundamental stuff than for matter or energy to be the most fundamental stuff, what about the computer itself? What is it made of? What energizes it? Who, or what, runs it, or set it in motion to begin with?

WHEN FREDKIN IS DISCUSSING the problem of the infinite regress, his logic seems variously cryptic, evasive, and appealing. At one point he says, "For everything in the world where you wonder, 'What is it made out of?' the only thing I know of where the question doesn't have to be answered with anything else is for information." This puzzles me. Thousands of words later I am still puzzled, and I press for clarification. He talks some more. What he means, as near as I can tell, is what follows.

First of all, it doesn't matter what the information is made of, or what kind of computer produces it. The computer could be of the conventional electronic sort, or it could be a hydraulic machine made of gargantuan sewage pipes and manhole covers, or it could be something we can't even imagine. What's the difference? Who cares what the information consists of? So long as the cellular automaton's rule is the same in each case, the patterns of information will be the same, and so will we, because the structure of our world depends on pattern, not on the pattern's substrate; a carbon atom, according to Fredkin, is a certain configuration of bits, not a certain kind of bits.

Besides, we can never know what the information is made of or what kind of machine is processing it. This point is reminiscent of childhood conversations that Fredkin remembers having with his sister, Joan, about the possibility that they were part of a dream God was having. "Say God is in a room and on his table he has some cookies and tea," Fredkin says. "And he's dreaming this whole universe up. Well, we can't reach out and get his cookies. They're not in our universe. See, our universe has bounds. There are some things in it and some things not." The computer is not; hardware is beyond the grasp of its software. Imagine a vast computer program that contained bodies of information as complex as people, motivated by bodies of information as complex as ideas. These "people" would have no way of figuring out what kind of computer they owed their existence to, because everything they said, and everything they did—including formulating metaphysical hypotheses—would depend entirely on the programming rules and the original input. As long as these didn't change, the same metaphysical conclusions would be reached in an old XD-1 as in a Kaypro 2.

This idea—that sentient beings could be constitutionally numb to the texture of reality—has fascinated a number of people, including, lately, computer scientists. One source of the fascination is the fact that any universal computer can simulate another universal computer, and the simulated computer can, because it is universal, do the same thing. So it is possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious of those containers. To anyone who has lived intimately with, and thought deeply about, computers, says Charles Bennett, of IBM's Watson Lab, this notion is very attractive. "And if you're too attracted to it, you're likely to part company with the physicists." Physicists, Bennett says, find heretical the notion that anything physical is impervious to experiment, removed from the reach of science.

Fredkin's belief in the limits of scientific knowledge may sound like evidence of humility, but in the end it permits great ambition; it helps him go after some of the grandest philosophical questions around. For example, there is a paradox that crops up whenever people think about how the universe came to be. On the one hand, it must have had a beginning. After all, things usually do. Besides, the cosmological evidence suggests a beginning: the big bang. Yet science insists that it is impossible for something to come from nothing; the laws of physics forbid the amount of energy and mass in the universe to change. So how could there have been a time when there was no universe, and thus no mass or energy?

Fredkin escapes from this paradox without breaking a sweat. Granted, he says, the laws of our universe don't permit something to come from nothing. But he can imagine laws that would permit such a thing; in fact, he can imagine algorithmic laws that would permit such a thing. The conservation of mass and energy is a consequence of our cellular automaton's rules, not a consequence of all possible rules. Perhaps a different cellular automaton governed the creation of our cellular automaton—just as the rules for loading software are different from the rules running the program once it has been loaded.

What's funny is how hard it is to doubt Fredkin when with such assurance he makes definitive statements about the creation of the universe—or when, for that matter, he looks you in the eye and tells you the universe is a computer. Partly this is because, given the magnitude and intrinsic intractability of the questions he is addressing, his answers aren't all that bad. As ideas about the foundations of physics go, his are not completely out of the ball park; as metaphysical and cosmogonic speculation goes, his isn't beyond the pale.

But there's more to it than that. Fredkin is, in his own odd way, a rhetorician of great skill. He talks softly, even coolly, but with a low-key power, a quiet and relentless confidence, a kind of high-tech fervor. And there is something disarming about his self-awareness. He's not one of these people who say crazy things without having so much as a clue that you're sitting there thinking what crazy things they are. He is acutely conscious of his reputation; he knows that some scientists are reluctant to invite him to conferences for fear that he'll say embarrassing things. But he is not fazed by their doubts. "You know, I'm a reasonably smart person. I'm not the smartest person in the world, but I'm pretty smart—and I know that what I'm involved in makes perfect sense. A lot of people build up what might be called self-delusional systems, where they have this whole system that makes perfect sense to them, but no one else ever understands it or buys it. I don't think that's a major factor here, though others might disagree." It's hard to disagree, when he so forthrightly offers you the chance.

Still, as he gets further from physics, and more deeply into philosophy, he begins to try one's trust. For example, having tackled the question of what sort of process could generate a universe in which spontaneous generation is impossible, he aims immediately for bigger game: Why was the universe created? Why is there something here instead of nothing?

WHEN THIS SUBJECT COMES UP, we are sitting in the Fredkins' villa. The living area has pale rock walls, shiny-clean floors made of large white ceramic tiles, and built-in bookcases made of blond wood. There is lots of air—the ceiling slopes up in the middle to at least twenty feet—and the air keeps moving; some walls consist almost entirely of wooden shutters that, when open, let the sea breeze pass as fast as it will. I am glad of this. My skin, after three days on Fredkin's island, is hot, and the air, though heavy, is cool. The sun is going down.

Fredkin, sitting on a white sofa, is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold.
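His point is easy to watch in action. The Python sketch below steps a famous one-dimensional automaton, Stephen Wolfram's "rule 30," chosen only as a convenient example (it is not one of Fredkin's rules). Each cell consults just itself and its two neighbors through a fixed eight-entry table, yet no formula is known that jumps ahead; to learn what row one hundred looks like, you compute the ninety-nine rows before it.

    # Rule 30: a one-dimensional, two-state cellular automaton. Each cell's
    # next value is looked up from its 3-bit neighborhood (left, self, right)
    # in the binary expansion of the number 30.
    RULE = 30

    def step(cells):
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                          + cells[(i + 1) % n])) & 1 for i in range(n)]

    row = [0] * 31 + [1] + [0] * 31          # begin with a single live cell
    for _ in range(30):                      # no shortcut: watch it unfold
        print("".join(".#"[c] for c in row))
        row = step(row)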

This indeterminacy is very suggestive. It suggests, first of all, why so many "chaotic" phenomena, like smoke rising from a cigarette, are so difficult to predict using conventional mathematics. (In fact, some scientists have taken to modeling chaotic systems with cellular automata.) To Fredkin, it also suggests that even if human behavior is entirely determined, entirely inevitable, it may be unpredictable; there is room for "pseudo free will" in a completely mechanistic universe. But on this particular evening Fredkin is interested mainly in cosmogony, in the implications of this indeterminacy for the big question: Why does this giant computer of a universe exist?

It's simple, Fredkin explains: "The reason is, there is no way to know the answer to some question any faster than what's going on."

Aware that he may have said something enigmatic, Fredkin elaborates. Suppose, he says, that there is an all-powerful God. "And he's thinking of creating this universe. He's going to spend seven days on the job—this is totally allegorical—or six days on the job. Okay, now, if he's as all-powerful as you might imagine, he can say to himself, 'Wait a minute, why waste the time? I can create the whole thing, or I can just think about it for a minute and just realize what's going to happen so that I don't have to bother.' Now, ordinary physics says, Well, yeah, you got an all-powerful God, he can probably do that. What I can say is—this is very interesting—I can say I don't care how powerful God is; he cannot know the answer to the question any faster than doing it. Now, he can have various ways of doing it, but he has to do every Goddamn single step with every bit or he won't get the right answer. There's no shortcut."

Around sundown on Fredkin's island all kinds of insects start chirping or buzzing or whirring. Meanwhile, the wind chimes hanging just outside the back door are tinkling with methodical randomness. All this music is eerie and vaguely mystical. And so, increasingly, is the conversation. It is one of those moments when the context you've constructed falls apart, and gives way to a new, considerably stranger one. The old context in this case was that Fredkin is an iconoclastic thinker who believes that space and time are discrete, that the laws of the universe are algorithmic, and that the universe works according to the same principles as a computer (he uses this very phrasing in his most circumspect moments). The new context is that Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker estimate pi to nine jillion decimal places.

So, I say, you're arguing that the reason we're here is that some being wanted to theorize about reality, and the only way he could test his theories was to create reality? "No, you see, my explanation is much more abstract. I don't imagine there is a being or anything. I'm just using that to talk to you about it. What I'm saying is that there is no way to know what the future is any faster than running this [the universe] to get to that [the future]. Therefore, what I'm assuming is that there is a question and there is an answer, okay? I don't make any assumptions about who has the question, who wants the answer, anything."

But the more we talk, the closer Fredkin comes to the religious undercurrents he's trying to avoid. "Every astrophysical phenomenon that's going on is always assumed to be just accident," he says. "To me, this is a fairly arrogant position, in that intelligence—and computation, which includes intelligence, in my view—is a much more universal thing than people think. It's hard for me to believe that everything out there is just an accident." This sounds awfully like a position that Pope John Paul II or Billy Graham would take, and Fredkin is at pains to clarify his position: "I guess what I'm saying is—I don't have any religious belief. I don't believe that there is a God. I don't believe in Christianity or Judaism or anything like that, okay? I'm not an atheist, I'm not an agnostic, I'm just in a simple state. I don't know what there is or might be. But what I can say is that it seems likely to me that this particular universe we have is a consequence of something I would call intelligent." Does he mean that there's something out there that wanted to get the answer to a question? "Yeah." Something that set up the universe to see what would happen? "In some way, yes."

VI. The Language Barrier


IN 1974, UPON RETURNING to MIT from Caltech, Fredkin was primed to revolutionize science. Having done the broad conceptual work (concluding that the universe is a computer), he would enlist the aid of others in taking care of the details—translating the differential equations of physics into algorithms, experimenting with cellular-automaton rules and selecting the most elegant, and, eventually, discovering The Rule, the single law that governs every bit of space and accounts for everything. "He figured that all he needed was some people who knew physics, and that it would all be easy," Margolus says.

One early obstacle was Fredkin's reputation. He says, "I would find a brilliant student; he'd get turned on to this stuff and start to work on it. And then he would come to me and say, 'I'm going to work on something else.' And I would say, 'Why?' And I had a few very honest ones, and they would say, 'Well, I've been talking to my friends about this and they say I'm totally crazy to work on it. It'll ruin my career. I'll be tainted forever.'" Such fears were not entirely unfounded. Fredkin is one of those people who arouse either affection, admiration, and respect, or dislike and suspicion. The latter reaction has come from a number of professors at MIT, particularly those who put a premium on formal credentials, proper academic conduct, and not sounding like a crackpot. Fredkin was never oblivious of the complaints that his work wasn't "worthy of MIT," nor of the movements, periodically afoot, to sever, or at least weaken, his ties to the university. Neither were his graduate students.

Fredkin's critics finally got their way. In the early 1980s, while he was serving briefly as the president of Boston's CBS-TV affiliate, someone noticed that he wasn't spending much time around MIT and pointed to a faculty rule limiting outside professional activities. Fredkin was finding MIT "less and less interesting" anyway, so he agreed to be designated an adjunct professor. As he recalls the deal, he was going to do a moderate amount of teaching and be paid an "appropriate" salary. But he found the real salary insulting, declined payment, and never got around to teaching. Not surprisingly, he was not reappointed adjunct professor when his term expired, in 1986. Meanwhile, he had so nominally discharged his duties as the head of the information-mechanics group that the title was given to Toffoli.

Fredkin doubts that his ideas will achieve widespread acceptance anytime soon. He believes that most physicists are so deeply immersed in their kind of mathematics, and so uncomprehending of computation, as to be incapable of grasping the truth. Imagine, he says, that a twentieth-century time traveler visited Italy in the early seventeenth century and tried to reformulate Galileo's ideas in terms of calculus. Although it would be a vastly more powerful language of description than the old one, conveying its importance to the average scientist would be nearly impossible. There are times when Fredkin breaks through the language barrier, but they are few and far between. He can sell one person on one idea, another on another, but nobody seems to get the big picture. It's like a painting of a horse in a meadow, he says: "Everyone else only looks at it with a microscope, and they say, 'Aha, over here I see a little brown pigment. And over here I see a little green pigment.' Okay. Well, I see a horse."

Fredkin's research has nevertheless paid off in unanticipated ways. Comparing a computer's workings and the dynamics of physics turned out to be a good way to figure out how to build a very efficient computer—one that harnesses the laws of physics with great economy. Thus Toffoli and Margolus have designed an inexpensive but powerful cellular-automata machine, the CAM 6. The "machine" is actually a circuit board that when inserted in a personal computer permits it to orchestrate visual complexity at a speed that can be matched only by general-purpose computers costing hundreds of thousands of dollars. Since the circuit board costs only around $1,500, this engrossing machine may well entice young scientific revolutionaries into joining the quest for The Rule. Fredkin speaks of this possibility in almost biblical terms: "The big hope is that there will arise somewhere someone who will have some new, brilliant ideas," he says. "And I think this machine will have a dramatic effect on the probability of that happening."

But even if it does happen, it will not ensure Fredkin a place in scientific history. He is not really on record as believing that the universe is a computer. Although some of his tamer insights have been adopted, fleshed out, and published by Toffoli or Margolus, sometimes in collaboration with him, Fredkin himself has published nothing on digital physics. His stated rationale for not publishing has to do with, of all things, lack of ambition. "I'm just not terribly interested," he says. "A lot of people are fantastically motivated by publishing. It's part of a whole thing of getting ahead in the world." Margolus has another explanation: "Writing something down in good form takes a lot of time. And usually by the time he's done with the first or second draft, he has another wonderful idea that he's off on."

These two theories have merit, but so does a third: Fredkin can't write for academic journals. He doesn't know how. His erratic, hybrid education has left him with a mixture of terminology that neither computer scientists nor physicists recognize as their native tongue. Further, he is not schooled in the rules of scientific discourse; he seems just barely aware of the line between scientific hypothesis and philosophical speculation. He is not politic enough to confine his argument to its essence: that time and space are discrete, and that the state of every point in space at any point in time is determined by a single algorithm. In short, the very background that has allowed Fredkin to see the universe as a computer seems to prevent him from sharing his vision. If he could talk like other scientists, he might see only the things that they see.


Robert Wright is the author of Three Scientists and Their Gods: Looking for Meaning in an Age of Information, The Moral Animal: Evolutionary Psychology and Everyday Life, and Nonzero: The Logic of Human Destiny.
Copyright © 2002 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; April 1988; Did the Universe Just Happen?; Volume 261, No. 4; page 29.
7 Best Income Stocks to Buy Now

While rational traders participate in the equities market to see solid returns on their investments, the current environment encourages everyone to consider the best income stocks to buy now. Of course, no one is going to complain about robust capital gains, at least not until tax season arrives. But the inflationary crisis we're in puts more emphasis on passive income than ever before.

As of this writing, the annual inflation rate for the U.S. is 8.6% for the 12 months ended May 2022, the largest annual increase since December 1981. Naturally, consumers mostly feel the heat when they pump gasoline into their cars or buy groceries for their family. The best income stocks to buy may help mitigate this sticker shock.

Another factor to consider during this period is inflation’s impact on real earnings. Because prices of goods and core utilities are rising, you’re basically receiving a pay cut or hidden tax. Obviously, such a circumstance can be incredibly frustrating, though it cynically adds to the bullish case for the best income stocks to buy now.

Ticker Company Price
CVX Chevron $144.61
ABBV AbbVie $149.74
IBM IBM $130.88
ENB Enbridge $43.15
ADC Agree Realty $76.26
LTC LTC Properties $39.74
SCCO Southern Copper $49.08

Income Stocks to Buy: Chevron (CVX)

Though hydrocarbon-related companies have been unpopular for a very long time, the dirty little secret is that they’re relevant and necessary. As I’ve mentioned several times before, fossil fuels are difficult to quit because of their high energy density. Essentially, for just a gallon of gas, you can move an SUV down the freeway for 20 or 30 miles.

You’re just not going to find that kind of density from electric vehicles, which is one of the most powerful (albeit cynical) arguments bolstering Chevron (NYSE:CVX). One of the big oil giants, Chevron will likely never court the public’s sympathies. Setting that issue aside, though, the company is extraordinarily relevant, especially because Russia’s reckless war in Ukraine has effectively shelved much of the world’s energy supplies.

As The Economist pointed out recently, international policymakers have warned that the Ukraine crisis could last for years. Such a scenario presents myriad questions about societal and economic stability. It also, however, all but guarantees that the lights stay on at Chevron and then some, making it one of the most effective of the best income stocks to buy.

AbbVie (ABBV)

While the vast majority of the global public is ready to put the coronavirus nightmare behind it, Nature.com reported on a concerning new phenomenon. While Covid-19 vaccines are due for an upgrade, “emerging variants and fickle immune reactions mean it’s not clear what new jabs should look like.”

Nevertheless, after two years of lockdowns and various mitigation mandates, the fear of Covid-19 has been fading. Instead, concepts like retail revenge or revenge travel have taken over public sentiment, which suits AbbVie (NYSE:ABBV) just fine. As a pharmaceutical giant that now owns the Botox neurotoxin, AbbVie is an underappreciated investment for the return to normal.

Back during the worst of the pandemic, the nation experienced what a Washington Post op-ed referred to as our pajama moment. Those days are now gone, with powerful voices in business demanding that their workers return to the office. In other words, the emphasis is now back on looking good, which may help lift Botox sales.

In turn, ABBV stock is one of the best income stocks to buy now — featuring a forward yield of 3.7%.

Income Stocks to Buy: IBM (IBM)

Amid the tight competition in the technology sphere, IBM (NYSE:IBM) oftentimes gets overlooked. That's not necessarily fair, considering that the company has been making significant inroads with cloud computing, cybersecurity, artificial intelligence and other groundbreaking innovations. Still, it's tough to shed a less-than-favorable reputation.

However, IBM is so far getting the last laugh. On a year-to-date (or YTD) basis, shares are down 2%, which isn’t exactly riveting stuff. But when stacked up against popular tech plays — many of which are hemorrhaging sizable double-digit figures — IBM might as well be shooting to the moon. Indeed, since December of last year, Big Blue has been quietly making a comeback.

Long-term investors may want to consider IBM simply on the basis that it has its hands in several relevant technologies. Adding in its passive income potential is a sweet bonus, particularly with its forward yield of 5.1%. Sometimes, slow and steady wins the race for the best income stocks to buy.
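For anyone new to the jargon, forward yield is simple arithmetic: the dividends a share is expected to pay over the coming year, divided by today's price. Working backward from the figures above (the per-share dividend below is implied by them, not independently sourced):

    forward yield = expected annual dividends / share price
    5.1% of $130.88 ≈ $6.67 in expected dividends per IBM share per year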

Enbridge (ENB)

While myriad oil and natural gas companies may qualify for the best income stocks to buy now, one of the challenges for companies tied to the upstream business model — or the exploration and initial production of fossil fuels — is that energy pricing can be volatile. For a little bit more stability, you may want to consider midstream operators like Enbridge (NYSE:ENB).

Midstream firms specialize in activities such as processing, storage, transportation and marketing of hydrocarbon products. The beauty about Enbridge is that the company owns and operates the largest network of oil and gas pipelines in North America, making it an ingrained component of the transportation sector and more broadly, national security.

What really makes ENB stock stand out as one of the best income stocks to buy is its generous payout. Featuring a forward yield of 6.2%, Enbridge can help cushion some of the shock associated with inflation. In addition, data on vehicle miles traveled suggests that the company has significant upside ahead.

Income Stocks to Buy: Agree Realty (ADC)

Among the largest real estate investment trusts (REITs), Agree Realty (NYSE:ADC) is particularly attractive for its relevance. While the consumer economy is undoubtedly hurting from the soaring inflation rate, Agree Realty invests in properties net leased to some of the biggest names in commerce such as Walmart (NYSE:WMT) and Home Depot (NYSE:HD).

Put another way, while many analysts expect a recession of some sort, few are calling for a devastating depression that would result in unprecedented cuts to spending. The likely scenario is that consumers will focus more on essential goods, which should benefit Agree Realty.

Another factor that bolsters the case of ADC stock being one of the best income stocks to buy now is that it distributes passive income on a monthly basis. As you know, the frequency of life — mortgage/rent payments, internet service contracts, utility bills — is monthly. Therefore, Agree Realty helps you get the funds you need when you need them the most.

LTC Properties (LTC)

If you’re seeking a diversified portfolio of the best income stocks to buy, LTC Properties (NYSE:LTC) is well worth consideration. For one thing, this REIT also offers monthly payouts, enabling you to align your passive income with the bills that you pay. Furthermore, this payout frequency enables faster compounding, providing a critical tool to combat inflation.

Beyond this administrative point, though, LTC Properties is attractive for its core business. The REIT specializes in senior housing and healthcare, primarily through sale-leasebacks, mortgage financing, joint-ventures, construction financing and structured finance solutions. LTC’s portfolio is roughly divided in half between senior housing and skilled nursing properties.

As myriad publications have mentioned, baby boomers are retiring in large numbers, with this pace of retirement accelerating. The unique factors of the Covid-19 crisis have also led to workers older than 55 representing the majority of participants of the Great Resignation.

Basically, over the next several years, demand for senior care should rise exponentially. Therefore, LTC stock appears a solid long-term bet among the best income stocks to buy.

Income Stocks to Buy: Southern Copper (SCCO)

With the rise of meme stocks and cryptocurrencies, it's apparent that quite a few people have the speculation bug in their bones. Well, I'm the type of person that likes to give the audience what they want. So, if you want to dial up the risk-reward factor for the best income stocks to buy now, you may want to have a look at Southern Copper (NYSE:SCCO).

To be clear, copper prices are slipping badly, largely on global recession fears. With inflation reducing real earnings, consumers are naturally going to reduce their expenditures, first avoiding the discretionary purchases and later much of the lower-priority essentials. Eventually, such cuts are going to impact copper demand, which isn’t great for SCCO stock.

At the same time, copper is critical for the industries and technologies of tomorrow, most notably electric vehicles (or EVs). Moreover, with the electrification of transportation being a vital component of the broader strategy to reduce foreign oil dependencies, SCCO stock might be worth consideration.

Oh yeah, the company features a forward yield of 10%.

On the date of publication, Josh Enomoto did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

A former senior business analyst for Sony Electronics, Josh Enomoto has helped broker major contracts with Fortune Global 500 companies. Over the past several years, he has delivered unique, critical insights for the investment markets, as well as various other industries including legal, construction management, and healthcare.

Computer Science Courses

CSCI 1074

The Digital World: An Introduction to Information and Computing (periodically) - 3 Credits

Satisfies Mathematics Core Requirement

This course is an introductory-level survey of computer science for non-majors. Students study the historical and intellectual sources of the discipline, examine important problems and the techniques used to solve them, and consider their social impact. Example problems include the representation of information (such as text, images, audio and video), how computer hardware and networks work, computer vision, machine learning, and cryptography. In order to enhance their understanding of these topics, students will also be given a gentle introduction to computer programming.

CSCI 1075

The Digital World of Robots (periodically) - 3 Credits

This course is a gentle introduction to computer programming for non-majors. Students will learn about computers and computer software by working with a small personal robot. Students will learn the Python programming language, and write Python programs to control their robot's behavior, explore its environment, and perform various tasks. As we get our robots to do more and more, we learn how software is designed and written to solve real problems.

CSCI 1101

Computer Science I (fall/spring) - 3 Credits

Students enrolling in a section must register in a corresponding discussion group.

This course is an introduction to the world of computer programming and some of the fundamental concepts of computer science. You will learn to write programs in a modern programming language, such as Python or ML. By the end of the course you will be able to design fairly complex programs that do interesting and useful things. You will also study some of the basic notions of computer science, including computer system organization, files, and some algorithms of fundamental importance. The course assumes no previous programming experience. You may enroll in either a Python-based section or an ML-based section. The latter would be an appropriate choice for you if you are more mathematically inclined. Both sections will prepare you well for the follow-on course CSCI 1102.

CSCI 1102

Computer Science II (fall/spring) - 3 Credits

Prerequisite: CSCI 1101

In CSCI 1101 you were introduced to the basics of programming. You wrote some relatively simple programs, and your primary focus was getting your code to work. In this course you will take a more sophisticated look at programming. You will learn several useful ways to organize data within a program (such as lists, stacks, queues, and trees), some of which are quite clever. Each of these data structures has its own advantages and disadvantages, and you will learn how to evaluate tradeoffs in order to determine which one is the best for a particular program. And you will learn to think of programming as a two-stage process: The design stage, in which you figure out what the program ought to be doing and what classes it requires, and the implementation stage, in which you determine which technique(s) should be used to implement each class and write the code for it. The course will use the Java programming language, which will be taught at the beginning of the semester.

CSCI 1103

Computer Science I Honors (fall/spring) - 3 Credits

Description: CSCI 1103 is a good choice for students with strong backgrounds in mathematics. Students who are unsure about the fit should consult with Professor Muller. This is the honors introductory computer science course. The course is organized around three themes: (1) computation as a subject of study, (2) coding as a skill, and (3) computer science as an introduction to the field. The first half of the course explores computation from a simple mathematical perspective. From this point of view, computing can be understood as an extension of basic algebra. Midway through, the course turns to a machine-oriented view, considering storage and processor architecture, mutation and mutation-based repetition idioms. The course explores a number of fundamental algorithms with applications in various disciplines. Good program design methodology is stressed throughout. The course is taught using the OCaml programming language. Students will be well prepared for the follow-on course CSCI 1102 Computer Science II.

CSCI 1154

Intro to Programming and Web Applications (spring) - 3 Credits

In this course, students create interactive web-based applications. We begin by learning how to use HTML and CSS to create simple web pages. Topics include basic databases, SQL queries, and client-side scripts. Sample projects may include shopping-cart-based sales, student registration systems, etc. The course is currently taught using JavaScript and MySQL. No prior programming experience is required.

CSCI 2201

Computer Security (periodically)

The instructor, Etay Maor, is a computer security expert at IBM. Last fall he gave a series of informal lectures to the student ACM group. This course is an expansion of those themes.

CSCI 2227

Introduction to Scientific Computation (fall)

Prerequisite: MATH 1101

This is an introductory course in computer programming for students interested in numerical and scientific computation. Emphasis will be placed on problems drawn from the sciences. Many problems, such as the behavior of complex physical systems, have no closed-form solution, and computational modeling is needed to obtain an approximate solution. The course discusses different approximation methods, how to implement them as computer programs, and the factors that determine the accuracy. Topics include solutions of nonlinear equations, numerical integration, solving systems of linear equations, error optimization, and data visualization. Students will write programs in the MATLAB or Python programming language.

CSCI 2243

Logic and Computation (fall)

Prerequisite: CSCI 1101

This course, together with CSCI 2244, forms a two-semester introduction to the mathematical foundations of computer science. Students who successfully complete these courses will have acquired the necessary mathematical tools used in upper-division computer science courses. This course is concerned with the areas of propositional and predicate logic, proof techniques, basic number theory, and mathematical models of computation (such as formal languages, finite state machines, and Turing machines). Each topic will be illustrated with applications to diverse areas of computer science, such as designing boolean circuits, satisfiability solvers, database query languages, proofs of program correctness, cryptography, and regular expression-based pattern matchers.

CSCI 2244

Randomness and Computation (spring)

Prerequisites: CSCI 1101 and MATH 1100

This course presents the mathematical and computational tools needed to solve problems that involve randomness. For example, an understanding of randomness allows us to efficiently generate the very large prime numbers needed for information security, and to understand the long-term behavior of random sequences used to rank web search results. Multidimensional random variables provide useful models for data mining, computer vision, social networks, and machine learning. Topics include combinatorics and counting, random experiments and probability, computational modeling of randomness, random variables and distributions, Bayes rule, collective behavior of random phenomena, vectors and matrices, and Markov chains. Each topic is illustrated with applications of its use.

CSCI 2254

Web Application Development (spring)

Prerequisites: CSCI 1101 or CSCI 1103

In this course, students create interactive web-based applications. We begin by learning how to use HTML and CSS to create simple web pages. Emphasis then shifts to creating pages that access databases over the web. Topics include basic database design, SQL queries, and client- and server-side scripts. Sample projects may include shopping-cart-based sales, student registration systems, etc. The course is currently taught using JavaScript and MySQL. Programming experience required.

CSCI 2257

Database Systems and Applications (fall/spring)

Prerequisites: CSCI 1101, ISYS 1157, or equivalent. Crosslisted with ISYS 2257

Database systems play a critical role in the corporate world. Activities such as order fulfillment, billing, and inventory management depend on the prompt availability of the appropriate data. The goal of this course is to give you the knowledge and skills to use databases effectively in any business situation. We will explore how to design database tables to meet the needs of the company, access these tables using the SQL language, use database system features to improve the efficiency of database access, and build a web site that enables users to interact with a database via a browser.

CSCI 2267

Technology and Culture (fall/spring)

Crosslisted with ISYS 2267 and SOCY 6670

This interdisciplinary course will first investigate the social, political, psychological, ethical, and spiritual aspects of the Western cultural development with a special emphasis on scientific and technological metaphors and narratives. We will then focus on the contemporary world, examining the impact of our various technological creations on cultural directions, democratic process, the world of work, quality of life, and especially on the emergent meanings for the terms "citizen" and "ethics" in contemporary society. Students will explore technologies in four broad and interrelated domains: (1) Computer, Media, Communications, and Information Technologies; (2) Biotechnology; (3) Globalization; and (4) Environmental Issues.

CSCI 2271

Computer Systems (fall/spring)

Prerequisite: CSCI 1102

This course is concerned with machine-level program and data representation on modern computer systems, how the underlying system uses these representations (in particular, the system stack and memory heap) to support the execution of user code, and the issues associated with the execution of multi-threaded code. Students also learn how various implementation choices can affect the efficiency, reliability, and security of a computing system. This is a hands-on course; programming will be completed in the procedural language C with comparisons to object-oriented languages such as Java.

CSCI 2272

Computer Organization and Lab (fall, 4 credits)

Prerequisite: CSCI 1101

This course studies the internal organization of computers and the processing of machine instructions. Students will obtain a high-level understanding of how to design a general-purpose computer, starting with simple logic gates. subjects include computer representation of numbers, combinational circuit design (decoders, multiplexers), sequential circuit design and analysis, memory design (registers and main memory), and simple processors including data paths, instruction formats, and control units. CSCI 2272 includes laboratory-based computer hardware activities in which the students design and build digital circuits related to the subjects of the course.

CSCI 2291

An Introduction to Data Science (spring)

Prerequisite: CSCI 1101 or an equivalent introduction to CS with programming, and one of MATH 1101/1103/1105 or an equivalent calculus course.

This course provides an introduction to concepts and techniques of computational data modeling and inference that can inform rational decision-making based on data. Topics include data preprocessing, exploratory data analysis and visualization, elements of probability and statistical inference, and predictive and descriptive modeling, with an introduction to machine learning concepts and approaches as time allows. Programming in Python will be required. Prospective students should also be comfortable with mathematical notation and reasoning at the college calculus level.

CSCI 3311

Visualization (fall)

Prerequisites: CSCI 1102

Data can capture a snapshot of the world and allow us to understand ourselves and our communities better. With ever-increasing amounts of data, the ability to understand and communicate data is becoming essential for everyone. Visualization leverages our visual perception to provide a powerful yet accessible way to make sense of large and complex data. It has been widely adopted across disciplines, from science and engineering to business and journalism, to combat the overabundance of information in our society. In this course, students will acquire foundational knowledge about how to design effective visualizations for analysis and presentation, based on theories and principles from graphic design, perceptual psychology, and cognitive science. Students will also learn practical skills for rapidly exploring and communicating data using Tableau and for building interactive visualization products (e.g., articles, tools, and systems) using web-based frameworks including D3.js and Vega-Lite.

 

CSCI 3333

Computer Graphics (periodically)

Prerequisite: CSCI 1102

This course introduces algorithms and techniques involved in representing, animating, and interacting with three-dimensional objects on a computer screen. The course will involve significant programming in Java and OpenGL.

CSCI 3335

Principles of Multimedia Systems (periodically)

This course introduces principles and current technologies of multimedia systems. Topics include multimedia systems design, multimedia hardware and software, and issues in effectively representing, processing, and transmitting multimedia data including text, graphics, sound and music, image, and video. Image, video, and audio standards such as JPEG, MPEG, H.26x, Dolby Digital, and AAC will be reviewed. Applications such as video conferencing, video streaming, and multimedia data indexing and retrieval will also be introduced.

CSCI 3341

Artificial Intelligence (fall, alternate years)

Prerequisites: CSCI 1102, CSCI 2244

This course addresses the modeling and design of intelligent computational software. Artificial intelligence ideas have played a key role in the development of master-level board game players, natural language understanding, self-driving vehicles, and the predictive modeling methods used in data mining. Course topics include perception and action, search techniques such as A* heuristic search and adversarial search, knowledge representation formalisms including logic and probability, and an introduction to machine learning. Programming assignments will be given throughout the course.
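Since A* heuristic search is named above, here is a minimal sketch of the idea in Python; the graph encoding and names are illustrative, not drawn from any particular assignment. For the returned path to be optimal, the heuristic h must never overestimate the true remaining cost.

    import heapq

    def a_star(graph, start, goal, h):
        """graph: dict node -> list of (neighbor, edge_cost); h(node): estimated cost to goal."""
        frontier = [(h(start), 0, start, [start])]   # entries are (f = g + h, g, node, path)
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for nbr, cost in graph.get(node, []):
                g2 = g + cost
                if g2 < best_g.get(nbr, float("inf")):
                    best_g[nbr] = g2
                    heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
        return None, float("inf")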

CSCI 3343

Computer Vision (fall, alternate years)

Prerequisites: CSCI 1102, CSCI 2244

Computers are gaining the ability to "see" much as our own visual system does. Face recognition is embedded in almost all digital cameras. Car detection and tracking are used in self-driving vehicles. Modern search engines are not only able to find similar text patterns but are also able to search for similar objects in huge image databases. This course introduces principles and computational methods of obtaining information from images and videos. Topics include image processing, shape analysis, image matching, segmentation, 3D projective geometry, object tracking, human pose and action, image retrieval, and object recognition.

CSCI 3344

Mobile Application Development (spring)

Prerequisite: CSCI 1102

This is a project-oriented course focusing on the development of applications for smart phones and tablets. The course is currently taught using Google’s Android platform. The course will focus on software and user interface design, emphasizing best practices. The course examines issues arising from the unique characteristics of mobile input devices including touch and gesture input, access to a microphone, camera, and orientation and location awareness. We will also explore engineering aspects of targeting small memory platforms and small screens. Students will be required to design and develop substantial projects by the end of the course.

CSCI 3345

Machine Learning (spring, alternate years)

Prerequisite: CSCI 1102, CSCI 2244

This course provides an introduction to computational mechanisms that improve their performance based on experience. Machine learning can be used in engineered systems for a wide variety of tasks in personalized information filtering, health care, security, games, computer vision, and human-computer interaction, and can provide computational models of information processing in biological and other complex systems. Supervised and unsupervised learning will be discussed, including sample applications, as well as specific learning paradigms such as decision trees, instance-based learning, neural networks and deep learning, Bayesian approaches, meta-learning, and clustering. General concepts to be described include feature space representations, inductive bias, overfitting, and fundamental tradeoffs.
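As one concrete instance of those tradeoffs, here is a short sketch using scikit-learn's decision trees (an assumption of convenience, not necessarily the course's toolkit): capping tree depth is an inductive bias that trades training-set fit against overfitting.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Shallow trees underfit; unbounded trees can memorize noise (overfit)
    for depth in (1, 3, None):
        clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
        print(depth, clf.score(X_tr, y_tr), clf.score(X_te, y_te))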

CSCI 3346

Data Mining (spring, alternate years)

Prerequisite: CSCI 1102, CSCI 2244

The goal of data mining is to discover patterns in data that are informative and useful. This course provides an overview of the field of knowledge discovery and data mining, which deals with the semi-automated analysis of large collections of data that arise in contexts ranging from medical informatics and bioinformatics to e-commerce and security. The course will cover fundamental data mining tasks, relevant concepts and techniques from machine learning and statistics, and data mining applications to real-world domains such as e-mail filtering, gene expression, analysis of biomedical signals, and fraud detection.

CSCI 3347

Robotics (spring, alternate years)

Prerequisite: CSCI 1101

This is a hands-on laboratory course about the programming of robots. Topics covered include locomotion, steering, moving an "arm" and "hand," dealing with sensory input, voice synthesis, and planning. Students will complete several projects using the robots in the Boston College Robotics Laboratory.

 

CSCI 3349

Natural Language Processing (fall)

Prerequisites: CSCI 1102 and CSCI 2244

In this hands-on course, we study natural language processing (NLP), the subfield of artificial intelligence focused on analyzing, producing, and understanding human language. Using models and algorithms from formal language theory, statistics, and machine learning, we will explore methods for gaining insight into the structure and meaning of text. We will apply these methods to tasks such as information extraction, sentiment analysis, and machine translation. Students will work in teams to collect data and to implement their own NLP applications.

CSCI 3353

Object Oriented Design (fall)

Prerequisite: CSCI 1102

CSCI 1102 introduced you to the basic concepts of object-oriented programming: classes, inheritance, and polymorphism. In this course, we look at object-oriented programming from a higher level, and focus on the design of object-oriented software. As an analogy, consider a list—it is a lot easier to understand its operations by drawing pictures than by looking at code. Similarly, you will learn how to draw pictures to describe the design of an object-oriented program. And from these pictures we can develop design rules, such as "separate the model from the view" and "program to interfaces". We will also go over fundamental design patterns that supply us a simple way to talk about complex interactions of classes.
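To make the two quoted design rules concrete, here is a compact sketch, written in Python for brevity even though the course itself works in Java; all class names are invented for illustration. The model owns the state, and any object satisfying the View interface can display it.

    from abc import ABC, abstractmethod

    class CounterModel:
        """Model: owns the state, knows nothing about how it is displayed."""
        def __init__(self):
            self.value = 0
        def increment(self):
            self.value += 1

    class View(ABC):
        """Interface: client code programs against this, not a concrete class."""
        @abstractmethod
        def render(self, model): ...

    class TextView(View):
        def render(self, model):
            print(f"count = {model.value}")

    def run(model, view):
        model.increment()
        view.render(model)   # swapping a GUI view for TextView needs no model change

    run(CounterModel(), TextView())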

Another analogy is the difference between an architect and a building contractor. An architect designs the building, and is responsible for its usability, aesthetics, and feasibility. The contractor follows the plan, making low-level decisions about each component. Both are professionals, but the architect gets to be more creative and is often more highly valued. This course teaches you how to be a software architect.

Homework assignments will involve the design of inter-related classes and their implementation in Java.

CSCI 3356

Software Engineering (spring, alternate years)

Prerequisite: CSCI 3353

This course covers industrial system development using object-oriented techniques. Students will learn a methodical approach to the development of software and will use this methodology to design, implement, test and evolve Java applications. Students will work in teams to develop applications, experiencing the different roles that are required on projects in industry.

CSCI 3357

Database System Implementation (spring, alternate years)

Prerequisite: CSCI 1102

This course will not cover the use of commercial database systems; students interested in that topic should consider taking CSCI 2257.

A database system is an amazingly sophisticated piece of software. It contains (1) a language interpreter, for processing user queries; (2) query rewrite strategies, for transforming inefficient queries into more efficient ones; (3) complex algorithms for indexing data, to support fast access; (4) a separate file system from that of the operating system, for managing the disk efficiently; (5) recovery mechanisms, for ensuring database integrity when the system crashes; and (6) an ability to handle concurrent accesses from multiple users. In this course we examine the various algorithms, data structures, and techniques for implementing these features. And to make these theoretical ideas concrete, we will also examine the Java source code for a real-life database system – first to see how it works, and then to write our own additions and improvements to it.
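For a flavor of one of these pieces, here is a deliberately tiny sketch of item (4): a fixed-size buffer pool that caches disk pages in memory and evicts the least-recently-used one. It is a hypothetical Python illustration; the system examined in the course is a real Java codebase.

    from collections import OrderedDict

    class BufferPool:
        """Toy page cache: holds at most `capacity` pages in memory."""
        def __init__(self, capacity, read_page_from_disk):
            self.capacity = capacity
            self.read_page = read_page_from_disk   # callable: page_id -> page bytes
            self.pages = OrderedDict()

        def get(self, page_id):
            if page_id in self.pages:
                self.pages.move_to_end(page_id)    # mark as most recently used
            else:
                if len(self.pages) >= self.capacity:
                    self.pages.popitem(last=False) # evict the least recently used page
                self.pages[page_id] = self.read_page(page_id)
            return self.pages[page_id]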

The goals of this course go beyond the study of database systems principles. The algorithms you learn can be used in many other systems and applications. And you get to see how a large software system is structured. The course requires extensive Java programming. You do not need experience using a commercial database system; you will learn all necessary database concepts during the course.

CSCI 3359

Distributed Systems (fall, alternate years)

Prerequisite: CSCI 2271

In this course you will learn the major paradigms of distributed computing, including client-server and peer-to-peer models. We will study how each model addresses the problems of communication, synchronization, performance, fault-tolerance, and security. You will learn how to analyze the correctness of distributed protocols and will be required to build distributed applications.

CSCI 3362

Operating Systems (fall, alternate years)

Prerequisite: CSCI 2271

This course will provide a broad introduction to software systems with emphasis on operating system design and implementation. Its objective is to introduce students to operating systems, with a main focus on resource management and interfacing issues with hardware layers. Particular emphasis will be given to process management (processes, threads, CPU scheduling, synchronization, and deadlock) and (virtual) memory management (segmentation, paging, swapping, caching), with focus on the interplay between architectural components and software layers. If there is time, we will investigate and discuss these same issues for distributed systems. The course programming assignments will be in Java/C.
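As a small illustration of the synchronization topic, here is a Python sketch (the course's own assignments are in Java/C) in which a lock prevents the classic lost-update race on a shared counter:

    import threading

    counter = 0
    lock = threading.Lock()

    def add_many(n):
        global counter
        for _ in range(n):
            with lock:           # without the lock, read-modify-write steps can interleave
                counter += 1

    threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)               # 400000, deterministically, because of the lock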

CSCI 3363

Computer Networks (spring, alternate years)

Prerequisite: CSCI 2271

This course studies computer networks and the services built on top of them. Topics include packet-switched and multi-access networks, routing and flow control, congestion control and quality-of-service, resource sharing, Internet protocols (IP, TCP, BGP), the client-server model and RPC, elements of distributed systems (naming, security, caching, consistency), and the design of network services (peer-to-peer networks, file and web servers, content distribution networks). Coursework involves a significant amount of Java/C programming.
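To ground the client-server model named above, here is a minimal TCP echo exchange in Python (the course's programming is in Java/C; the port number and message are arbitrary):

    import socket, threading, time

    def serve_once():
        with socket.create_server(("127.0.0.1", 50007)) as srv:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(1024))   # echo the request back to the client

    threading.Thread(target=serve_once, daemon=True).start()
    time.sleep(0.2)                             # crude wait for the listener to come up

    with socket.create_connection(("127.0.0.1", 50007)) as cli:
        cli.sendall(b"hello")
        print(cli.recv(1024))                   # b'hello'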

CSCI 3366

Principles of Programming Languages (spring, alternate years)

Prerequisite: CSCI 1102, CSCI 2243

Starting with a simple language of expressions, this course develops a sequence of progressively more expressive programming languages keeping in mind the conflicting constraints between the expressiveness of the language and the requirement that it be reliably and efficiently implemented. The course focuses on these essential concepts and the run-time behavior of programs. Type systems play an essential role. By understanding the concepts the student will be able to evaluate the advantages and disadvantages of a language for a given application.
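A minimal sketch of what such a "simple language of expressions" can look like: a tiny Python interpreter over nested tuples, with the syntax invented purely for illustration. Each added form (here, a let-binding) extends the language's expressiveness while keeping evaluation simple.

    def eval_expr(e, env):
        """e: number | variable name | ("+", e1, e2) | ("*", e1, e2) | ("let", name, e1, body)."""
        if isinstance(e, (int, float)):
            return e                                  # literal
        if isinstance(e, str):
            return env[e]                             # variable lookup
        if e[0] == "let":                             # bind a name, then evaluate the body
            _, name, bound, body = e
            return eval_expr(body, {**env, name: eval_expr(bound, env)})
        op, a, b = e
        x, y = eval_expr(a, env), eval_expr(b, env)
        return x + y if op == "+" else x * y

    print(eval_expr(("let", "x", 3, ("+", "x", ("*", "x", 2))), {}))   # 9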

CSCI 3367

Compilers (periodically)

Prerequisite: CSCI 2271

Compilers are programs that implement high-level programming languages by translating programs in such languages into machine code or some other easy-to-process representation. This course deals with the principles and techniques used in the design of compilers. Topics include parsing, static analysis, translation, memory management, and code optimization. This course includes a significant programming project.

CSCI 3372

Computer Architecture and Lab (spring, alternate years, 4 credits)

Prerequisites: CSCI 2272

This course discusses hardware considerations in computer design. Topics include hardware description languages, arithmetic and logic units, input/output circuits, memory hierarchy, instruction programming and control, data paths, pipelining, processor design, and advanced architecture topics. CSCI 3372 includes laboratory-based computer hardware activities in which students design and build digital circuits related to the topics of the course.

CSCI 3381

Cryptography (fall, alternate years)

Prerequisites: CSCI 2243 or MATH 2216 or permission of instructor.

When you log onto a secure web site, for example to pay a bill from your bank account, you need to be assured of certain things: Is your communication private? An eavesdropper should not be able to determine what information you and the bank are exchanging. Does the website you are communicating with really belong to the bank? A third party should not be able to successfully impersonate the bank. Are you you? A third party should not be able to impersonate you and make payments from your account. Are the messages you and the bank receive from each other the same ones that were sent? No one should be able to alter the messages in transit without this being detected.

Behind the scenes, an extraordinary series of computations takes place to ensure that these security requirements are met. This course examines some of the sophisticated ideas from both mathematics and computer science that make it all work. We will begin the course with a look at some classical cryptographic systems that were in use before the advent of computers, then study modern block ciphers: both the general principles behind their construction and use, and some details about widely used systems, the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES). These are symmetric systems, in which the parties share some secret information (a key) used for both encryption and decryption. Cryptography was profoundly changed by the invention, in the late 1970s, of asymmetric, or public-key, cryptosystems, in which the two parties do not need to share a secret in order to communicate securely. We will study public-key cryptosystems like RSA, cryptographic hash functions, schemes for digital signatures, and zero-knowledge identification schemes. We'll finish the course looking at some real-world cryptographic protocols (for example, SSL), more speculative protocols (electronic elections or digital cash), and some different ideas for the construction of cryptosystems (quantum cryptography).
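To make the public-key idea tangible, here is the standard textbook toy RSA example in Python, with parameters far too small to be secure; it illustrates the arithmetic only, not real key generation.

    # Classic toy parameters (insecurely small on purpose)
    p, q = 61, 53
    n = p * q                  # 3233: the public modulus
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)        # 2753: private exponent (modular inverse, Python 3.8+)

    m = 65                     # the message, encoded as a number smaller than n
    c = pow(m, e, n)           # encrypt with the public key (e, n): 2790
    assert pow(c, d, n) == m   # decrypt with the private key d: recovers 65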

CSCI 3383

Algorithms (fall)

Prerequisites: CSCI 1102, CSCI 2243, CSCI 2244

Algorithms are the basis of computing, and their study is, in many ways, the essence of computer science. In this course we study several algorithm-creation techniques, such as "divide and conquer", "dynamic programming", and "be greedy". We shall also learn mathematical tools to help us analyze the efficiency of our algorithms. These techniques are illustrated by the study of interesting algorithms of practical importance.
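As a sketch of the first technique named, divide and conquer, here is merge sort in Python: split the input, solve each half recursively, then combine. Its running time follows the recurrence T(n) = 2T(n/2) + O(n), which solves to O(n log n).

    def merge_sort(a):
        if len(a) <= 1:                # base case: already sorted
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])   # divide
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):                  # combine: merge step
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]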

CSCI 3390

Topics in Computer Science (periodically)

This course can be taken multiple times for credit. It covers new and other interesting topics not included among the department's regular course offerings. Two sections will be offered in spring 2018, as described below.

CSCI 3390-01
Everyone should know how to design parallel algorithms. Even a laptop or cellphone has multiple CPU cores at our disposal these days. In this hands-on, project-oriented course you will learn the main ideas of parallel computing with GPUs. Our focus will be on the CUDA programming language. You will learn about GPU architectures, parallel algorithms, CUDA libraries and GPU computing applications. Prerequisites: CSCI 3383 / 2271 / 2244, and MATH 2210 / 2202, or permission of the instructor.
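Since the code samples in this document use Python, here is a hedged taste of the one-thread-per-element idea using Numba's CUDA dialect rather than raw CUDA C; it assumes a CUDA-capable GPU and the numba and numpy packages.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vec_add(a, b, out):
        i = cuda.grid(1)          # this thread's global index
        if i < out.size:          # guard: the grid may overshoot the array
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.ones(n, dtype=np.float32)
    b = 2 * np.ones(n, dtype=np.float32)
    out = np.zeros(n, dtype=np.float32)

    threads = 256
    blocks = (n + threads - 1) // threads
    vec_add[blocks, threads](a, b, out)   # Numba copies the arrays to and from the GPU
    print(out[:3])                        # [3. 3. 3.]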

CSCI 3390-02
We will study natural language processing, the subfield of artificial intelligence focused on analyzing, producing, and understanding human language. Using models and algorithms from formal language theory, statistics, and machine learning, we will explore methods for gaining insight into the structure and meaning of text. We will apply these methods to tasks such as information extraction, sentiment analysis, and machine translation. Prerequisite: CSCI 1102.

Monarch Casino: Best Gaming Stock Bet, Say Portfolio Wealth Builders
[Image: Business on Wall Street in Manhattan (Pgiam/iStock via Getty Images)]

The primary focus of this article is Monarch Casino & Resort, Inc. (NASDAQ:MCRI)

Investment Thesis

21st Century paces of change in technology and rational behavior (not of emotional reactions) seriously disrupts the commonly accepted productive investment strategy of the 20th century.

One required change is the shortening of forecast horizons, with a shift from the multi-year passive approach of buy and hold to the active strategy of specific price-change target achievement or time-limit actions, with reinvestment set to new nearer-term targets.

That change avoids the irretrievable loss of invested time spent destructively by failure to recognize shifting evolutions like the cases of IBM, Kodak, GM, Xerox, General Electric, and many others.

It recognizes the progress in medical, communication and information technologies and enjoys their operational benefits already present in extended lifetimes, trade-commission-free investments, and coming benefits in transportation utilizations and energy usage.

But it requires the ability to make valid direct comparisons of value between investment reward prospects and risk exposures in the uncertain future. Since uncertainty expands as the future dimension increases, shorter forecast horizons are a means of improving the reward-to-risk comparison.

That shortening is now best attended at the investment entry point by knowing Market-Maker ("MM") expectations for coming prices. When reached, their updates are then reintroduced at the exit/reinvestment point and the term of expectations for the required coming comparisons are recognized as the decision entry point to move forward.

The MM's constant presence, extensive global communications and human resources dedicated to monitoring industry-focused competitive evolution sharpens MM price expectations, essential to their risk-avoidance roles.

Their roles require firm capital be only temporarily risk-exposed, so are hedged by derivative-securities deals to avoid undesired price changes. The deals' prices and contracts provide a window to MM price expectations.

Information technology via the internet makes investment monitoring and management time and attention efficient despite its increase in frequency.

Once an investment choice is made and buy transaction confirmation is received, a target-price GTC sell order for the confirmed number of shares at the target price or better should be placed. Keeping trade actions entered through the internet on your lap/desk-top or cell phone should avoid trade commission charges. Your broker's internal system should keep you informed of your account's progress.

Your own private calendar record should be kept of the date 63 market days (or 91 calendar days) beyond the trade's confirmation date as a time-limit alert to check if the GTC order has not been executed. If not, then start your exit and reinvestment decision process.
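For readers who want to automate that bookkeeping, here is a small Python sketch; the confirmation date is hypothetical, and NumPy's business-day calendar is only a proxy for market days (it skips weekends but not exchange holidays unless you supply them).

    import numpy as np
    from datetime import date, timedelta

    confirm = date(2022, 7, 29)                    # hypothetical trade confirmation date

    alert_calendar = confirm + timedelta(days=91)  # 91 calendar days out
    alert_market = np.busday_offset(np.datetime64(confirm), 63, roll="forward")

    print(alert_calendar, alert_market)            # both land roughly three months out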

The 3-months' time limit is what we find to be a good choice, but may be extended some if desired. Beyond 5-6 months' time investments start to work against the process and are not recommended.

For investments guided by this article or others by me, target prices will always be found as the high price in the MM forecast range.

Description of Equity Subject Company

"Monarch Casino & Resort, Inc., through its subsidiaries, owns and operates the Atlantis Casino Resort Spa, a hotel and casino in Reno, Nevada. The company also owns and operates the Monarch Casino Resort Spa Black Hawk in Black Hawk, Colorado. As of December 31, 2021, its Atlantis Casino Resort Spa featured approximately 61,000 square feet of casino space; 818 guest rooms and suites; 8 food outlets; 2 gourmet coffee and pastry bars; a 30,000 square-foot health spa and salon with an enclosed pool; approximately 52,000 square feet of banquet, convention, and meeting room space. The company's Atlantis Casino Resort Spa also featured approximately 1,400 slot and video poker machines; approximately 37 table games, including blackjack, craps, roulette, and others; a race and sports book; a 24-hour live keno lounge; and a poker room. In addition, its Monarch Casino Resort Spa Black Hawk featured approximately 60,000 square feet of casino space; approximately 1,100 slot machines; approximately 40 table games; 10 bars and lounges; 4 dining options; 516 guest rooms and suites. The company was founded in 1972 and is based in Reno, Nevada."

Source: Yahoo Finance

Estimates by Street Analysts

[Table: analyst growth estimates (source: Yahoo Finance)]

These growth estimates have been made by and are collected from Wall Street analysts to suggest what conventional methodology currently produces. The typical variations across forecast horizons of different time periods illustrate the difficulty of making value comparisons when the forecast horizon is not clearly defined.

Risk and Reward Balances Among MCRI Competitors

Figure 1: MM hedging forecasts (source: blockdesk.com; used with permission)

The risk dimension is of real price draw-downs at their most extreme point while being held in previous pursuit of upside rewards similar to the ones currently being seen. They are measured on the red vertical scale. Reward expectations are measured on the green horizontal scale.

Both scales are of percent change from zero to 25%. Any stock or ETF whose present risk exposure exceeds its reward prospect will be above the dotted diagonal line. Capital-gain-attractive to-buy issues are in the directions down and to the right.

Our principal interest is in MCRI at location [11], at the lower right-hand edge of the competitor crowd. A "market index" norm of reward~risk trade-offs is offered by SPY at [7]. Most appealing by this Figure 1 view for wealth-building investors is MCRI.

Comparing competitive features of Casino Gaming Providers

The Figure 1 map provides a good visual comparison of the two most important aspects of every equity investment in the short term. There are other aspects of comparison which this map sometimes does not communicate well, particularly when general market perspectives like those of SPY are involved. Where questions of "how likely" are present, other comparative tables, like Figure 2, may be useful.

Yellow highlighting of the table's cells emphasizes factors important to securities valuations, and marks the security most promising of near capital gain, MCRI, as ranked in column [R].

Figure 2: Detailed comparative data (source: blockdesk.com; used with permission)

Why do all this math?

Figure 2's purpose is to attempt universally comparable answers, stock by stock, of: a) How BIG the prospective price gain payoff may be; b) how LIKELY the payoff will be a profitable experience; c) how SOON it may happen; and d) what price drawdown RISK may be encountered during its active holding period.

Readers familiar with our analysis methods after quick examination of Figure 2 may wish to skip to the next section viewing price range forecast trends for MCRI.

Column headers for Figure 2 define investment-choice preference elements for each row stock whose symbol appears at the left in column [A]. The elements are derived or calculated separately for each stock, based on the specifics of its situation and current-day MM price-range forecasts. Data in red numerals are negative, usually undesirable to "long" holding positions. Table cells with yellow fills are of data for the stocks of principal interest and of all issues at the ranking column, [R].

The price-range forecast limits of columns [B] and [C] get defined by MM hedging actions to protect firm capital required to be put at risk of price changes from volume trade orders placed by big-$ "institutional" clients.

[E] measures potential upside risks for MM short positions created to fill such orders, and reward potentials for the buy-side positions so created. Prior forecasts like the present provide a history of relevant price draw-down risks for buyers. The most severe ones actually encountered are in [F], during holding periods in effort to reach [E] gains. Those are where buyers are emotionally most likely to accept losses.

The Range Index [G] tells where today's price lies relative to the MM community's forecast of upper and lower limits of coming prices. Its numeric is the percentage proportion of the full low to high forecast seen below the current market price.
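As described, the Range Index reduces to simple arithmetic; here is a one-function Python sketch with hypothetical numbers:

    def range_index(price, low, high):
        """Percent of the low-to-high forecast range lying below the current price."""
        return 100.0 * (price - low) / (high - low)

    # Hypothetical: a price of 101.5 in a 100-110 forecast range gives RI = 15
    print(range_index(101.5, 100.0, 110.0))   # 15.0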

[H] tells what proportion of the [L] sample of prior like-balance forecasts have earned gains by either having price reach its [B] target or be above its [D] entry cost at the end of a 3-month max-patience holding period limit. [ I ] gives the net gains-losses of those [L] experiences.

What makes MCRI most attractive in the group at this point in time is its ability to produce capital gains most consistently at its present operating balance between share price risk and reward at the Range Index [G]. At an RI of 12, today's price is near the bottom of its forecast range, with price expectations to the upside seven times those to the downside. These are not our expectations, but those of Market-Makers acting in support of Institutional Investment organizations as they build the values of their typical multi-billion-$ portfolios. Credibility of the [E] upside prospect as evidenced in the [I] payoff at +18% is shown in [N].

Further Reward~Risk trade-offs involve using the [H] odds for gains with the 100 - H loss odds as weights for N-conditioned [E] and for [F], for a combined-return score [Q]. The typical position holding period [J] on [Q] provides a figure of merit [fom] ranking measure [R] useful in portfolio position preferences. Figure 2 is row-ranked on [R] among alternative candidate securities, with MCRI in top rank.

Along with the candidate-specific stocks these selection considerations are provided for the averages of some 3,000 stocks for which MM price-range forecasts are available today, and 20 of the best-ranked (by fom) of those forecasts, as well as the forecast for S&P500 Index ETF (SPY) as an equity-market proxy.

Current-market index SPY is only moderately competitive as an investment alternative. Its Range Index of 42 indicates half of its forecast range is to the upside, while three quarters of previous SPY forecasts at this range index produced profitable outcomes.

As shown in column [T] of figure 2, those levels vary significantly between stocks. What matters is the net gain between investment gains and losses actually achieved following the forecasts, shown in column [I]. The Win Odds of [H] tell what proportion of the sample RIs of each stock were profitable. Odds below 80% often have proven to lack reliability.

Recent Forecast Trends of the Primary Subject

Figure 3: Daily forecast trends (source: blockdesk.com; used with permission)

Many investors confuse any time-repeating picture of stock prices with typical "technical analysis charts" of past stock price history. These are quite different in their content. Instead, here Figure 3's vertical lines are a daily-updated visual record of price range forecast limits expected in the coming few weeks and months. The heavy dot in each vertical is the stock's closing price on the day the forecast was made.

That market price point makes an explicit definition of the price reward and risk exposure expectations which were held by market participants at the time, with a visual display of their vertical balance between risk and reward.

The measure of that balance is the Range Index (RI).

With today's RI there is 14.8% upside price change in prospect. Of the prior 27 forecasts like today's RI, 25 have been profitable. The market's actions on prior forecasts became accomplishments of +15% gains in 30 market days, or 6 weeks. So history's advantage could be repeated eight times or more in a 252 market-day year, which compounds into a CAGR of +232%.
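The annualization arithmetic behind that figure, written out (using the rounded +15% gain and 30-day holding period; the quoted +232% evidently reflects unrounded inputs):

    \text{CAGR} = (1 + r)^{252/d} - 1, \qquad (1.15)^{252/30} - 1 = (1.15)^{8.4} - 1 \approx 2.24

That is roughly +224% on the rounded figures, in the same ballpark as the article's +232%.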

Also please note the smaller picture at the bottom of Figure 3. It shows the past 5-year distribution of Range Indexes, with the current level visually marked. For MCRI nearly all recent forecasts have been of higher prices and Range Indexes.

Conclusion

Based on direct comparisons of MCRI with other casino gaming establishments, there are strong wealth-building reasons to prefer a capital-gain-seeking buy in Monarch Casino & Resort, Inc. over the examined alternatives.

PPG Industries Stock Bottom-Priced By Portfolio Wealth Builders
[Image: Business on Wall Street in Manhattan (Pgiam/iStock via Getty Images)]

Investment Thesis

21st Century paces of change in technology and rational behavior (not of emotional reactions) seriously disrupts the commonly accepted productive investment strategy of the 20th century.

One required change is the shortening of forecast horizons, with a shift from the multi-year passive approach of buy and hold to the active strategy of specific price-change target achievement or time-limit actions, with reinvestment set to new nearer-term targets.

That change avoids the irretrievable loss of invested time spent destructively by failure to recognize shifting evolution like the cases of IBM, Kodak, GM, Xerox, GE and many others.

It recognizes the progress in medical, communication and information technologies and enjoys their operational benefits already present in extended lifetimes, trade-commission-free investments, and coming benefits in transportation utilization and energy usage.

But it requires the ability to make valid direct comparisons of value between investment reward prospects and risk exposures in the uncertain future. Since uncertainty expands as the future dimension increases, shorter forecast horizons are a means of improving the reward-to-risk comparison.

That shortening is now best attended at the investment entry point by knowing Market-Maker expectations for coming prices. When reached, their updates are then reintroduced at the exit/reinvestment point and the term of expectations for the required coming comparisons are recognized as the decision entry point to move forward.

The MM's constant presence, extensive global communications and human resources dedicated to monitoring industry-focused competitive evolution sharpens MM price expectations, essential to their risk-avoidance roles.

Their roles require firm capital be only temporarily risk-exposed, so are hedged by derivative-securities deals to avoid undesired price changes. The deals' prices and contracts provide a window to MM price expectations.

Information technology via the internet makes investment monitoring and management time and attention efficient despite its increase in frequency.

Once an investment choice is made and buy transaction confirmation is received, a target-price GTC sell order for the confirmed number of shares at the target price or better should be placed. Keeping trade actions entered through the internet on your lap/desk-top or cell phone should avoid trade commission charges. Your broker's internal system should keep you informed of your account's progress.

Your own private calendar record should be kept of the date 63 market days (or 91 calendar days) beyond the trade's confirmation date as a time-limit alert to check if the GTC order has not been executed. If not, then start your exit and reinvestment decision process.

The 3-months time limit is what we find to be a good choice, but may be extended some if desired. Beyond 5-6 months time investments start to work against the process and are not recommended.

For investments guided by this article or others by me, target prices will always be found as the high price in the MM forecast range.

Description of Equity Subject Company

"PPG Industries, Inc. manufactures and distributes paints, coatings, and specialty materials worldwide. The company's Performance Coatings segment offers coatings, solvents, adhesives, sealants, sundries, and software for automotive and commercial transport/fleet repair and refurbishing, light industrial coatings, and specialty coatings for signs; and coatings, sealants, transparencies, transparent armor, adhesives, engineered materials, and packaging and chemical management services for commercial, military, regional jet, and general aviation aircraft. The company was incorporated in 1883 and is headquartered in Pittsburgh, Pennsylvania.."

Source: Yahoo Finance

[Table: PPG Street analyst estimates (source: Yahoo Finance)]

These growth estimates have been made by and are collected from Wall Street analysts to suggest what conventional methodology currently produces. The typical variations across forecast horizons of different time periods illustrate the difficulty of making value comparisons when the forecast horizon is not clearly defined.

Risk and Reward Balances Among NYSE:PPG Competitors

Figure 1: PPG stock hedging forecasts (source: blockdesk.com)

The risk dimension is of real price draw-downs at their most extreme point while being held in previous pursuit of upside rewards similar to the ones currently being seen. They are measured on the red vertical scale. Reward expectations are measured on the green horizontal scale.

Both scales are of percent change from zero to 25%. Any stock or ETF whose present risk exposure exceeds its reward prospect will be above the dotted diagonal line. Capital-gain-attractive to-buy issues are in the directions down and to the right.

Our principal interest is in PPG at location [2], at the right-hand edge of the competitor crowd. A "market index" norm of reward~risk tradeoffs is offered by SPY at [1]. Most appealing by this Figure 1 view for wealth-building investors is PPG.

Comparing competitive features of Specialty Paint Providers

The Figure 1 map provides a good visual comparison of the two most important aspects of every equity investment in the short term. There are other aspects of comparison which this map sometimes does not communicate well, particularly when general market perspectives like those of SPY are involved. Where questions of "how likely" are present, other comparative tables, like Figure 2, may be useful.

Yellow highlighting of the table's cells emphasizes factors important to securities valuations, and marks the security most promising of near capital gain, PPG, as ranked in column [R].

Figure 2: PPG vs. peers, detailed comparative data (source: blockdesk.com; used with permission)

Why do all this math?

Figure 2's purpose is to attempt universally comparable answers, stock by stock, of a) How BIG the prospective price gain payoff may be, b) how LIKELY the payoff will be a profitable experience, c) how SOON it may happen, and d) what price draw-down RISK may be encountered during its active holding period.

Readers familiar with our analysis methods after quick examination of Figure 2 may wish to skip to the next section viewing price range forecast trends for PPG.

Column headers for Figure 2 define investment-choice preference elements for each row stock whose symbol appears at the left in column [A]. The elements are derived or calculated separately for each stock, based on the specifics of its situation and current-day MM price-range forecasts. Data in red numerals are negative, usually undesirable to "long" holding positions. Table cells with yellow fills are of data for the stocks of principal interest and of all issues at the ranking column, [R].

The price-range forecast limits of columns [B] and [C] get defined by MM hedging actions to protect firm capital required to be put at risk of price changes from volume trade orders placed by big-$ "institutional" clients.

[E] measures potential upside risks for MM short positions created to fill such orders, and reward potentials for the buy-side positions so created. Prior forecasts like the present provide a history of relevant price draw-down risks for buyers. The most severe ones actually encountered are in [F], during holding periods in effort to reach [E] gains. Those are where buyers are emotionally most likely to accept losses.

The Range Index [G] tells where today's price lies relative to the MM community's forecast of upper and lower limits of coming prices. Its numeric is the percentage proportion of the full low to high forecast seen below the current market price.

[H] tells what proportion of the [L] sample of prior like-balance forecasts have earned gains by either having price reach its [B] target or be above its [D] entry cost at the end of a 3-month max-patience holding period limit. [ I ] gives the net gains-losses of those [L] experiences.

What makes PPG most attractive in the group at this point in time is its ability to produce capital gains most consistently at its present operating balance between share price risk and reward at the Range Index [G]. At a RI of 1, today's price is at the bottom of its forecast range, with all price expectations only to the upside. Not our expectations, but those of Market-Makers acting in transaction support of Institutional Investment organizations building the values of their typical multi-billion-$ portfolios. Credibility of the [E] upside prospect as evidenced in the [I] payoff at +18% is shown in [N].

Further Reward~Risk tradeoffs involve using the [H] odds for gains with the 100 - H loss odds as weights for N-conditioned [E] and for [F], for a combined-return score [Q]. The typical position holding period [J] on [Q] provides a figure of merit [fom] ranking measure [R] useful in portfolio position preferences. Figure 2 is row-ranked on [R] among alternative candidate securities, with PPG in top rank.

Along with the candidate-specific stocks these selection considerations are provided for the averages of some 3,000 stocks for which MM price-range forecasts are available today, and 20 of the best-ranked (by fom) of those forecasts, as well as the forecast for S&P500 Index ETF (SPY) as an equity-market proxy.

Current-market index SPY is not competitive as an investment alternative. Its Range Index of 26 indicates 3/4ths of its forecast range is to the upside, but little more than half of previous SPY forecasts at this range index produced profitable outcomes.

As shown in column [T] of figure 2, those levels vary significantly between stocks. What matters is the net gain between investment gains and losses actually achieved following the forecasts, shown in column [I]. The Win Odds of [H] tell what proportion of the sample RIs of each stock were profitable. Odds below 80% often have proven to lack reliability.

Recent Forecast Trends of the Primary Subject

Figure 3: PPG daily hedging forecast trend (source: blockdesk.com; used with permission)

Many investors confuse any time-repeating picture of stock prices with typical "technical analysis charts" of past stock price history. These are quite different in their content. Instead, here Figure 3's vertical lines are a daily-updated visual record of price range forecast limits expected in the coming few weeks and months. The heavy dot in each vertical is the stock's closing price on the day the forecast was made.

That market price point makes an explicit definition of the price reward and risk exposure expectations which were held by market participants at the time, with a visual display of their vertical balance between risk and reward.

The measure of that balance is the Range Index (RI).

With today's RI there is 18% upside price change in prospect. Of the prior 43 forecasts like today's RI, 40 have been profitable. The market's actions on prior forecasts became accomplishments of +11% gains in 47 market days. So history's advantage could be repeated five times or more in a 252 market-day year, which compounds into a CAGR of +72%.

Also please note the smaller picture at the bottom of Figure 3. It shows the past 5-year distribution of Range Indexes, with the current level visually marked. For PPG nearly all recent forecasts have been of higher prices and Range Indexes.

Conclusion

Based on direct comparisons with SHW and other paint producers, there are strong wealth-building reasons to prefer a capital-gain-seeking buy in PPG Industries, Inc. (PPG) over the examined alternatives.
