Set yourself up for exam success with the C8010-240 practice test

Killexams.com provides legitimate, up-to-date, and accurate C8010-240 practice material with a 100% pass guarantee. You need to practice questions for at least twenty-four hours to score high on the exam. Your path to passing the C8010-240 exam begins with killexams.com practice test questions.

Exam Code: C8010-240 Practice test 2022 by Killexams.com team
IBM Sterling Configurator V9.1 Deployment
IBM Configurator reality
Solving the Challenges of Remediating Configuration Settings

A data breach can result in catastrophic consequences for any organization. Ensuring that your IT environment is safe from cyber threats can be a real challenge.

To keep intruders out of your networks and data, you need more than up-to-date guidance. You also need to continually assess system configurations for conformance to security best practices and harden thousands of individual settings in your environment.

But where do you start?

Begin with recognized security best practices

The CIS Critical Security Controls (CIS Controls) are a prioritized set of actions that mitigate the most common cyber attacks. They translate cyber threat information into action. The CIS Benchmarks are secure configuration recommendations designed to safeguard systems against today’s evolving cyber threats. Both CIS best practices provide organizations of all sizes with specific and actionable recommendations to enhance cyber defenses. Both are also mapped to or referenced by a number of industry standards and frameworks like NIST, HIPAA, PCI DSS, and more.

Starting with these best practice resources can make the process of securing your systems faster, more reliable, and more cost-effective.

Assess, then remediate

Configuration assessments should be performed regularly to identify possible security concerns. Systems very rarely come securely configured right out of the box. Software updates, while necessary, can make your environment vulnerable to configuration drift. That’s why continuous assessment is essential.
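
To make "continuous assessment" concrete, here is a minimal sketch in Python of the kind of check an assessment tool performs; it is not CIS-CAT, and the two kernel parameters and "recommended" values are assumptions chosen purely for illustration.

# Minimal configuration-drift check (illustrative only; this is not CIS-CAT).
# The parameters and "recommended" values below are assumptions for the example.
from pathlib import Path

RECOMMENDED = {
    "net.ipv4.ip_forward": "0",        # assumed hardening target
    "kernel.randomize_va_space": "2",  # assumed hardening target
}

def current_value(key: str) -> str:
    # Linux exposes kernel parameters under /proc/sys, with dots replaced by slashes.
    return Path("/proc/sys", key.replace(".", "/")).read_text().strip()

for key, expected in RECOMMENDED.items():
    actual = current_value(key)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}  {key}: expected {expected}, found {actual}")

Run on a schedule, even a simple check like this surfaces the configuration drift described above; a real benchmark covers hundreds of such settings per platform.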

CIS-CAT Pro is a tool that can be used to assess configuration at scale. Available to CIS SecureSuite Members, it features two components: CIS-CAT Pro Assessor and CIS-CAT Pro Dashboard.

Assessment without remediation is useless, right?

The latest update to CIS-CAT Pro Assessor includes configuration assessment evidence in the HTML report, which assists in remediation planning.

The reality of remediating configuration settings

To understand what’s so challenging about remediating configuration settings, let’s consider the example of the Microsoft Windows Desktop operating system (OS). The CIS Benchmark for Microsoft Windows 10 has 474 recommendations. If you have 50 instances of that desktop OS in your environment, you’re looking at managing almost 24,000 configuration checks for that platform alone!

And of course, it’s not just the OS that needs configuration. It’s all the other systems, as well. You’re literally looking at thousands of individual judgments and actions needed to secure your environment.

You and your team could do it manually, but to touch every device would be incredibly time-consuming, requiring thousands of personnel-hours. Continuing to remediate systems on a manual basis would far surpass the resources of even the largest IT departments. You could also hire a consulting firm to do it for you. While they’ll likely get the job done, this approach can be expensive.

Thankfully, there are other options.

There’s more than one way to remediate

Any action that corrects a failed/insecure setting is a form of remediation. One of the advantages of using the CIS Benchmarks as your starting point is that you can tailor each Benchmark to your specific needs and circumstances. If a recommended setting is inappropriate for your environment, you can adjust the Benchmark accordingly and note why the exception was required.

CIS-CAT Pro Dashboard provides the ability to create exceptions, giving you even more options for your remediation program. Eventually, however, you will need to adjust the settings in your environment. That’s where an automated tool such as the CIS Build Kits can help.

Remediate system configuration at scale

CIS Build Kits provide the option for a rapid implementation of CIS Benchmark recommendations. Essentially, the CIS Build Kits are pre-configured templates that can be applied via the group policy management console in Windows or shell scripts for Linux/Unix. Applying the Build Kit will change the setting in a target system to the recommended value, providing a “passing” status the next time an assessment is run.
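
For a sense of what that remediation step looks like in practice, the rough Python sketch below applies recommended values and then re-checks them. It is not a CIS Build Kit (those ship as group policy templates and shell scripts), the parameters are again illustrative assumptions, and, as noted below, anything like this belongs in a test environment first.

# Illustrative remediation loop (not a CIS Build Kit): set each parameter to its
# recommended value, then verify it so the next assessment reports "passing".
# Requires root privileges; the parameters below are assumptions for the example.
import subprocess

RECOMMENDED = {
    "net.ipv4.ip_forward": "0",
    "kernel.randomize_va_space": "2",
}

def remediate(key: str, value: str) -> None:
    # Apply the value immediately; persisting it would also mean writing a file
    # under /etc/sysctl.d/ so the setting survives a reboot.
    subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)

def verify(key: str, value: str) -> bool:
    result = subprocess.run(["sysctl", "-n", key],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip() == value

for key, value in RECOMMENDED.items():
    remediate(key, value)
    print(key, "passing" if verify(key, value) else "still failing")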

Combined with the use of other CIS SecureSuite resources, Build Kits reduce the time to implement secure configurations. CIS Build Kits can also be customized to an organization’s particular use case. (Please note that it’s important to run Build Kits in a test environment before deploying.)

CIS SecureSuite: Assess and remediate at scale

Cybersecurity is a responsibility that requires constant attention. It’s not just something you can “set and forget.” Cyber threat actors are always developing new and more sophisticated techniques for attacking established defenses. Any security breach can put your organization and the people who rely on it at risk.

You need to stay up-to-date with current guidance, continually assess your systems, and remediate failed settings.

CIS SecureSuite Membership offers the added value of the CIS Benchmarks in machine-readable formats along with assessment and reporting tools such as CIS-CAT Pro Assessor and CIS-CAT Pro Dashboard, CIS Build Kits, technical support, and more. Attend our CIS Benchmarks Demo Webinar to learn more.



10 Top Data Companies

The term “data company” is certainly broad. It could easily include giant social networks like Meta. The company has perhaps one of the world’s most valuable data sets, which includes about 2.94 billion monthly active users (MAUs). Meta also has many of the world’s elite data scientists on its staff.

But for purposes of this article, the term will be narrower. The focus will be on those operators that build platforms and tools to leverage data – one of the most important technologies in enterprises these days.

Yet even this category still has many companies. For example, if you do a search for data analytics on G2, you will see results for over 2,200 products.

So any list of top data companies will be, well, imperfect. Regardless, there are companies that are really in a league of their own, from established names to fast-growing startups, publicly traded and privately held. Let’s take a look at 10 of them.

Also see our picks for Top Data Startups.

Databricks

In 2012, a group of computer scientists at the University of California, Berkeley, created the open source project, Apache Spark. The goal was to develop a distributed system for data over a cluster of machines.

From the start, the project saw lots of traction, as there was a huge demand for sophisticated applications like deep learning. The project’s founders would then go on to create a company called Databricks.

The platform combines a data warehouse and data lakes, which are natively in the cloud. This allows for much more powerful analytics and artificial intelligence applications. There are more than 7,000 paying customers, such as H&M Group, Regeneron and Shell. Last summer, the ARR (annual recurring revenue) hit $600 million.

About the same time, Databricks raised $1.6 billion in a Series H funding and the valuation was set at a stunning $38 billion. Some of the investors included Andreessen Horowitz, Franklin Templeton and T. Rowe Price Associates. An IPO is expected at some point, but even before the current tech stock downturn, the company seemed in no hurry to test the public markets.

We’ve included Databricks on our lists of the Top Data Lake Solutions, Top DataOps Tools and the Top Big Data Storage Products.

SAS

SAS (Statistical Analysis System), long a private company, is one of the pioneers of data analytics. The origins of the company actually go back to 1966 at North Carolina State University. Professors created a program that performed statistical functions using the IBM System/360 mainframe. But when government funding dried up, SAS would become a company.

It was certainly a good move. SAS would go on to become the gold standard for data analytics. Its platform allows for AI, machine learning, predictive analytics, risk management, data quality and fraud management.

Currently, there are 80,800 customers, including 88 of the top 100 companies on the Fortune 500. There are 11,764 employees, and revenues hit $3.2 billion last year.

SAS is one of the world’s largest privately-held software companies. Last summer, SAS was in talks to sell to Broadcom for $15 billion to $20 billion. But the co-founders decided to stay independent and despite having remained private since the company’s 1976 founding, are planning an IPO by 2024.

It should surprise absolutely no one that SAS made our list of the top data analytics products.

Snowflake

Snowflake, which operates a cloud-based data platform, pulled off the largest IPO for a software company in late 2020. It raised a whopping $3.4 billion. The offering price was $120 and it surged to $254 on the first day of trading, bringing the market value to over $70 billion. Not bad for a company that was about eight years old.

Snowflake stock would eventually go above $350. But of course, with the plunge in tech stocks, the company’s stock price would also come under extreme pressure. It would hit a low of $110 a few weeks ago.

Despite all this, Snowflake continues to grow at a blistering pace. In the latest quarter, the company reported an 85% spike in revenues to $422.4 million and the net retention rate was an impressive 174%. The customer base, which was over 6,300, had 206 companies with capacity arrangements that led to more than $1 million in product revenue in the past 12 months.

Snowflake started as a data warehouse. But the company has since expanded on its offerings to include data lakes, cybersecurity, collaboration, and data science applications. Snowflake has also been moving into on-premises storage, such as querying S3-compatible systems without moving data.

Snowflake is actually in the early stages of the opportunity. According to its latest investor presentation, the total addressable market is about $248 billion.

Like Databricks, Snowflake made our lists of the best Data Lake, DataOps and Big Data Storage tools.

Splunk

Founded in 2003, Splunk is the pioneer in collecting and analyzing large amounts of machine-generated data. This makes it possible to create highly useful reports and dashboards.

A key to the success of Splunk is its vibrant ecosystem, which includes more than 2,400 partners. There is also a marketplace that has over 2,400 apps.

A good part of the focus for Splunk has been on cybersecurity. By using real-time log analysis, a company can detect outliers or unusual activities.

Yet the Splunk platform has shown success in many other categories. For example, the technology helps with cloud migration, application modernization, and IT modernization.

In March, Splunk announced a new CEO, Gary Steele. Prior to this, he was CEO of Proofpoint, a fast-growing cloud-based security company.

On Steele’s first earnings report, he said: “Splunk is a system of record that’s deeply embedded within customers’ businesses and provides the foundation for security and resilience so that they can innovate with speed and agility. All of this translated to a massive, untapped, unique opportunity, from which I believe we can drive long-term durable growth while progressively increasing operating margins and cash flow.”

Cloudera

While there is a secular change towards the cloud, the reality is that many large enterprises still have significant on-premises footprints. A key reason for this is compliance. There is a need to have much more control over data because of privacy requirements.

But there are other areas where data fragmentation is inevitable. This is the case for edge devices and streaming from third parties and partners.

For Cloudera – another one of our top data lake solutions – the company has built a platform that is for the hybrid data strategy. This means that customers can take full advantage of their data everywhere.

Holger Mueller at Constellation Research praises Cloudera’s reliance on the open source Apache Iceberg technology for the Cloudera Data Platform.

“Open source is key when it comes to most infrastructure-as-a-service and platform-as-a-service offerings, which is why Cloudera has decided to embrace Apache Iceberg,” Mueller said. “Cloudera could have gone down a proprietary path, but adopting Iceberg is a triple win. First and foremost, it’s a win for customers, who can store their very large analytical tables in a standards-based, open-source format, while being able to access them with a standard language. It’s also a win for Cloudera, as it provides a key feature on an accelerated timeline while supporting an open-source standard. Last, it’s a win for Apache, as it gets another vendor uptake.”

Last year, Cloudera reported revenues over $1 billion. Among its thousands of customers, they include over 400 governments, the top ten global telcos and nine of the top ten healthcare companies.

Also read: Top Artificial Intelligence (AI) Software for 2022

MongoDB

The founders of MongoDB were not from the database industry. Instead, they were pioneers of Internet ad networks. The team – which included Dwight Merriman, Eliot Horowitz and Kevin Ryan – created DoubleClick, which launched in 1996. As the company quickly grew, they had to create their own custom data stores and realized that traditional relational databases were not up to the job.  

There needed to be a new type of approach, which would scale and allow for quick innovation.  So when they left DoubleClick after selling the company to Google for $3.1 billion, they went on to develop their own database system. It was  based on an open source model and this allowed for quick distribution.

The underlying technology relied on a document model and was called NoSQL. It provided for a more flexible way for developers to code their applications. It was also optimized for enormous transactional workloads.
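
As a small, hedged illustration of what the document model looks like from a developer’s point of view, the sketch below uses the PyMongo driver; the connection string, database, collection, and field names are all placeholders.

# Sketch of MongoDB's document model with PyMongo; connection details are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local server
orders = client["shop"]["orders"]                  # example database and collection

# A single document can nest data that a relational design would spread across tables.
orders.insert_one({
    "customer": {"name": "Ada", "tier": "gold"},
    "items": [{"sku": "A-100", "qty": 2}, {"sku": "B-200", "qty": 1}],
    "total": 59.90,
})

# Query by a nested field; adding new fields later requires no schema migration.
print(orders.find_one({"customer.tier": "gold"}))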

The MongoDB database has since been downloaded more than 265 million times. The company has also added the types of features required by enterprises, such as high performance and security.  

During the latest quarter, revenues hit $285.4 million, up 57% on a year-over-year basis. There are over 33,000 customers.

To keep up the growth, MongoDB is focused on taking market share away from the traditional players like Oracle, IBM and Microsoft. To this end, the company has built the Relational Migrator. It visually analyzes relational schemas and transforms them into NoSQL databases.

Confluent

When engineers Jay Kreps, Jun Rao and Neha Narkhede worked at LinkedIn, they had difficulties creating infrastructure that could handle data in real time. They evaluated off-the-shelf solutions but nothing was up to the job.

So the LinkedIn engineers created their own software platform. It was called Apache Kafka and it was open sourced. The software allowed for high-throughput, low-latency data feeds.
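
To give a flavor of what such a data feed looks like from the application side, here is a brief producer sketch using the open source confluent-kafka Python client; the broker address and topic name are assumptions for the example.

# Minimal Kafka producer sketch; the broker address and topic name are assumptions.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Invoked asynchronously once the broker acknowledges (or rejects) each record.
    if err is not None:
        print("delivery failed:", err)
    else:
        print("delivered to", msg.topic(), "partition", msg.partition())

for i in range(5):
    event = {"page": "/pricing", "user_id": i}
    producer.produce("page-views", key=str(i), value=json.dumps(event), callback=on_delivery)

producer.flush()  # block until all queued records have been delivered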

From the start, Apache Kafka was popular. And the LinkedIn engineers saw an opportunity to build a company around this technology in 2014. They called it Confluent.

The open source strategy was certainly spot on. Over 70% of the Fortune 500 use Apache Kafka.

But Confluent has also been smart in building a thriving developer ecosystem. There are over 60,000 meet-up members across the globe. The result is that developers outside Confluent have continued to build connectors, new functions and patches.

In the most recent quarter, Confluent reported a 64% increase in revenues, to $126 million. There were also 791 customers with $100,000 or more in ARR (annual recurring revenue), up 41% on a year-over-year basis.

Datadog

Founded in 2010, Datadog started as an operator of a real-time unified data platform. But this certainly was not the last of its new applications.

The company has been an innovator – and has also been quite successful getting adoption for its technologies. The other categories Datadog has entered include infrastructure monitoring, application performance monitoring, log analysis, user experience monitoring, and security. The result is that the company is one of the top players in the fast-growing market for observability.

Datadog’s software is not just for large enterprises. In fact, it is available for companies of any size.

Thus, it should be no surprise that Datadog has been a super-fast grower. In the latest quarter, revenues soared by 83% to $363 million. There were also about 2,250 customers with more than $100,000 in ARR, up from 1,406 a year ago.

A key success factor for Datadog has been its focus on breaking down data silos. This has meant much more visibility across organizations.  It has also allowed for better AI.

The opportunity for Datadog is still in the early stages. According to analysis from Gartner, spending on observability is expected to go from $38 billion in 2021 to $53 billion by 2025.

See the Top Observability Tools & Platforms

Fivetran

Traditional data integration relies on Extract, Transform and Load (ETL) tools. But this approach really does not handle modern challenges, such as the sprawl of cloud applications and storage.

What to do? Well, entrepreneurs George Fraser and Taylor Brown set out to create a better way. In 2013, they cofounded Fivetran and got the backing of the famed Y Combinator program.

Interestingly enough, they originally built a tool for Business Intelligence (BI). But they quickly realized that the ETL market was ripe for disruption.

In terms of the product development, the founders wanted to greatly simplify the configuration. The goal was to accelerate the time to value for analytics projects. Actually, they came up with the concept of zero configuration and maintenance. The vision for Fivetran is to make “business data as accessible as electricity.”

Last September, Fivetran announced a stunning round of $565 million in venture capital. The valuation was set at $5.6 billion and the investors included Andreessen Horowitz, General Catalyst, CEAS Investments, and Matrix Partners.

Tecton

Kevin Stumpf and Mike Del Balso met at Uber in 2016 and worked on the company’s AI platform, which was called Michelangelo ML. The technology allowed the company to scale thousands of models in production. Just some of the use cases included fraud detection, arrival predictions and real-time pricing.

This was based on the first feature store. It allowed for quickly spinning up ML features that were based on complex data structures.

However, this technology still relied on a large staff of data engineers and scientists. In other words, a feature store was mostly for the mega tech operators.

But Stumpf and Del Balso thought there was an opportunity to democratize the technology. This became the focus of their startup, Tecton, which they launched in 2019.

The platform has gone through various iterations. Currently, it is essentially a platform to manage the complete lifecycle of ML features. The system handles storing, sharing and reusing feature store capabilities. This allows for the automation of pipelines for batch, streaming and real-time data.

In July, Tecton announced a Series C funding round for $100 million. The lead investor was Kleiner Perkins. There was also participation from Snowflake and Databricks.

Read next: 5 Top VCs For Data Startups

Blockchain ETF

What Is a Blockchain ETF?

Blockchain exchange-traded funds (ETFs) hold stocks of companies that profit from blockchain technology or have business operations tied to it. While regulators have rejected numerous bitcoin ETFs, they have approved a few blockchain-based ETFs. Note that in October 2021, the first cryptocurrency ETF started trading—the ProShares Bitcoin Strategy ETF (BITO). The universe of blockchain ETFs also remains small, with seven such funds currently trading. These funds nonetheless provide investors with access to companies utilizing blockchain technology.

Key Takeaways

  • Blockchain ETFs hold stocks of companies that have blockchain technology-related operations or profit from the blockchain.
  • There are seven current blockchain ETFs.
  • However, there is only one cryptocurrency ETF (ProShares Bitcoin Strategy ETF), which started trading in October 2021.

How a Blockchain ETF Works

Blockchain ETFs offer an efficient investment vehicle to invest in a select basket of blockchain-specific stocks. Such blockchain ETFs track the performance of an underlying index which acts as a benchmark.

For example, there's the Siren Nasdaq NexGen Economy ETF (BLCN) and Amplify Transformational Data Sharing ETF (BLOK). The indexes these ETFs track include companies from the banking and financial sector, technology, IT services, hardware, internet, telecom services, and even biotechnology which may be using some form of data sharing or blockchain-based systems.

For instance, the BLCN ETF holds companies like Cisco Systems Inc. (CSCO), Intel Corp. (INTC), Overstock.com Inc. (OSTK), Microsoft Corp. (MSFT), and Barclays PLC (BCS). The BLOK ETF's holdings include Taiwan Semiconductor Co. (TSM), Nvidia Corp. (NVDA), IBM Corp. (IBM), Overstock.com Inc., and GMO Internet Inc.

As blockchain technology remains open and global, companies from across the world are included in these ETFs. Regionally, both ETFs have the bulk of exposure to North America-based blockchain companies, while the rest is shared by Asian and European companies in varying proportions.

Beyond cryptocurrencies, the blockchain is finding use in various other sectors, such as services, supply chain management, digital apps development, digital entertainment industry, biotechnology, and even agriculture.

Blockchain ETF Example

The BLCN ETF is a passively managed ETF that attempts to track the performance of a specially designed index called the Reality Shares Nasdaq Blockchain Economy Index. This index is comprised of companies involved in research, development, support, or utilization of blockchain technology and associated businesses.

The index methodology assigns a “Blockchain Score” to each potential company stock that may be an eligible candidate for inclusion in this index. This score is based on several factors about how the business of the company is contributing to the blockchain ecosystem, its blockchain product maturity and associated economic impact, investments and expenditures on research and development activities, company results, and innovations.

This factor-based methodology ensures that the potential of a blockchain company and its business is gauged with higher accuracy for realistic economic profits, improved business prospects, and operational competence. The 50 to 100 companies with the top Blockchain Scores qualify for entry into this index, and the same stocks get replicated in the BLCN ETF. The index is rebalanced every six months.
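
As a rough sketch of this kind of score-then-select construction (not Reality Shares' actual model), the snippet below ranks candidates by a hypothetical score and keeps the top names at each semi-annual rebalance; every ticker, score, and cutoff here is invented for illustration.

# Hypothetical score-and-select index construction; tickers, scores, and the cutoff
# are invented for illustration and do not reflect the real Blockchain Score model.
candidate_scores = {
    "AAAA": 71.2, "BBBB": 64.5, "CCCC": 58.9, "DDDD": 80.1, "EEEE": 49.3,
}

MAX_CONSTITUENTS = 3  # the real index keeps 50 to 100 names

def rebalance(scores: dict, limit: int) -> list:
    # Rank by score, highest first, and keep the top `limit` tickers.
    return sorted(scores, key=scores.get, reverse=True)[:limit]

constituents = rebalance(candidate_scores, MAX_CONSTITUENTS)
print(constituents)  # e.g. ['DDDD', 'AAAA', 'BBBB'], refreshed every six months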

On the other hand, the BLOK ETF is an actively managed ETF that aims to invest in global companies deriving significant income from transformational data sharing-related business or are engaged in the research and development, proof-of-concept testing, and/or implementation of similar technology.

Blockchain ETF Risks

Being a theme-based investment, blockchain ETFs carry the inherent risk of non-performance, non-adaptability, or failure of the blockchain ecosystem. While there is an increasing level of acceptance for blockchain systems, the concept is still in a nascent stage and remains dependent on the evolution of the overall ecosystem, the reliability and stability of the blockchain network, its configuration, and its successful adoption.

Another inherent risk is that one may end up betting a significant portion of money on technology-based startups which are prone to failure. While the diversification through ETFs mitigates such stock-specific risk to a good extent, the risk of certain holdings not performing well remains.

Additionally, there is a mixed bag in the top holding companies of such ETFs, which have a big overlap with existing technology and internet companies.

For example, although MicroStrategy and Nvidia are among the top holdings for both BLCN and BLOK, they are essentially technology companies deriving a larger share of their revenues from non-blockchain-based products and services.

Similarly, Cisco and Intel are primarily hardware components companies that derive most of their revenues from networking equipment and computer processors, while having a limited share from hardware that is used in blockchain-based systems.

Blockchain segments may be contributing only a small part of overall revenues to such stocks, making the overall returns vulnerable to the non-performance of their majority non-blockchain segments.

One also needs to be aware of the expense ratio charged by fund houses, and the trading charges levied by such ETF units.

While purchasing such ETFs, one needs to account for the fact that they are betting on a mixed bag of stocks that are expected to benefit in the long run from the overall emergence of blockchain.

Investing in cryptocurrencies and other Initial Coin Offerings ("ICOs") is highly risky and speculative, and this article is not a recommendation by Investopedia or the writer to invest in cryptocurrencies or other ICOs. Since each individual's situation is unique, a qualified professional should always be consulted before making any financial decisions. Investopedia makes no representations or warranties as to the accuracy or timeliness of the information contained herein.

How Shadow IT Can Keep Compliance Efforts In The Dark

Gavin Garbutt, chairman and co-founder of Augmentt. Former CEO and co-founder of N-able.

As the critical need for documented IT compliance with industry regulations and standards continues to grow, even the most meticulous of businesses can’t secure, monitor or configure what they can’t see.

For good reason, compliance regulations require that businesses have guardrails in place to protect and ensure the availability of business-critical and sensitive data. To do that, organizations need detailed knowledge of all the applications that interface with and offer access to that data.

That’s challenging enough on its own, but today’s proliferation of remote work, cloud apps and software as a service (SaaS) has given rise to risky shadow IT—the unseen and unauthorized hardware and applications that employees and departments often deploy independently, opening additional doors for cybercriminals and creating a new level of challenge when it comes to compliance.

Shadow IT creates the possibility that organizations may run afoul of regulations such as PCI-DSS, GDPR, HIPAA, SOX and others, exposing them to severe penalties and fines. It can also lead to an increase in the likelihood of data breaches when IT and security operations lose control over the software and applications used in an environment.

According to the annual IBM report on the topic, the average cost of a data breach rose from $3.86 million in 2020 to $4.24 million in 2021.

The Shadowy Specter Of Non-Compliance

Until recently, regulatory compliance was largely a concern only for businesses in highly regulated industries. That all changed with today’s explosion of data and the irresistible efficiency of cloud apps, giving rise to entities such as European Union’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act, to name just a few. These days, remaining compliant is a priority for almost every kind of business.

How can shadow IT cast a dark cloud over compliance? Here are some scenarios.

• Regulations such as software asset management (SAM) help businesses manage the procurement of software licenses, but shadow IT can endanger proper documentation and approval. The discovery of unapproved software can force regulatory bodies to audit a company’s infrastructure, possibly leading to hefty fines or even jail time.

• Organizations adopt ISO/IEC 20000 to demonstrate quality and security to customers and service providers—an assurance that can go to waste if system documentation doesn’t match up with reality.

• When shadow IT crops up, businesses cannot apply the risk-assessment measures they use for authorized applications, can’t audit unauthorized services to understand risks or document compliance and can’t identify the full scope of impact if a data breach occurs.

• More generally, shadow IT often introduces new audit points, expanding the requirements for proof of compliance. For example, if healthcare institutions share patient data in unauthorized cloud applications, they may be compelled to audit, identify and disclose the breadth and impact of each event.

• Non-compliant applications and policies also pose challenges with regard to increasingly expensive cyber insurance, where carriers are becoming ever more particular about how accurately organizations document their adherence to security regulations.

Why is shedding the necessary light on shadow IT such a challenge in today’s IT paradigm? Let’s look, for instance, at challenges that may occur in a nearly ubiquitous platform like Microsoft 365 (M365). According to Statista, more than one million companies worldwide subscribe to M365, relying on the hugely popular SaaS for its accessibility and scalability.

The majority of the time, bad actors choose to exploit vulnerabilities in Outlook email configurations, but platforms like M365 have other susceptible areas to think about as well, including insufficient or incorrectly configured multi-factor authentication (MFA) settings, malicious application registrations and insecure synchronization in hybrid environments.

On the whole, M365 typically requires an added layer of protection for most organizations, one that’s configured by IT security professionals. Without added measures, for example, M365, by default, allows any user to share files freely and to leave meetings open to anyone.

While Microsoft does provide a tool for visibility of the OAuth permissions granted to end-users for adding applications, generally speaking, Microsoft and Google platforms lack the refined suite of tools to help businesses make automated security and compliance decisions.
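
For example, a security team that wants that visibility in bulk could page through the tenant's delegated OAuth grants via the Microsoft Graph API, as in the hedged Python sketch below; it assumes an access token with sufficient directory read permissions has already been obtained through your own auth flow.

# Sketch: list delegated OAuth permission grants in a Microsoft 365 tenant.
# Assumes GRAPH_ACCESS_TOKEN was obtained separately with adequate read permissions.
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]  # placeholder; acquire via your auth flow
url = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"
headers = {"Authorization": f"Bearer {token}"}

while url:
    response = requests.get(url, headers=headers, timeout=30)
    response.raise_for_status()
    page = response.json()
    for grant in page.get("value", []):
        # clientId is the consented application's service principal; scope lists
        # the delegated permissions it was granted.
        print(grant.get("clientId"), "->", grant.get("scope"))
    url = page.get("@odata.nextLink")  # follow paging until no more results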

A Comprehensive Audit For Ongoing Insight

When bringing a halt to SaaS adoption is not feasible and a certain amount of shadow IT will always be there to jeopardize compliance, what can be done to minimize the shadow IT risks? It is important for today’s organizations to have tools that can first comprehensively audit all SaaS applications in use (authorized and non-authorized) and then monitor and assess SaaS usage on an ongoing basis.

The initial discovery of SaaS usage should reveal a shadow IT baseline, produce short-term actionable data and provide insight into the shadow IT challenges ahead.
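
At its simplest, that baseline is a comparison between the SaaS applications actually observed (from logs, a discovery tool, or a CASB) and the list the business has sanctioned. The sketch below shows the pattern; the application names and counts are placeholders.

# Hypothetical shadow-IT baseline: flag observed SaaS apps that are not sanctioned.
# Application names and user counts are placeholders; real input would come from
# firewall logs, a CASB, or a SaaS discovery tool.
SANCTIONED = {"Microsoft 365", "Salesforce", "Slack"}

observed_usage = [
    {"app": "Microsoft 365", "users": 480},
    {"app": "Slack", "users": 210},
    {"app": "PersonalFileShare", "users": 35},  # not sanctioned -> shadow IT
]

def shadow_it_baseline(usage, sanctioned):
    # Anything seen in the environment but missing from the sanctioned list is shadow IT.
    return [entry for entry in usage if entry["app"] not in sanctioned]

for finding in shadow_it_baseline(observed_usage, SANCTIONED):
    print(f"Unsanctioned app: {finding['app']} ({finding['users']} users)")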

From there, effective ongoing monitoring should deepen knowledge of shadow IT trends within the organization, provide an understanding of the impact of new security policies or application blocking and even pinpoint the most problematic users and applications.

Rather than continue working virtually in the dark, businesses need to develop policies on SaaS software in an enlightened way. This should begin with, for instance, reviewing and evaluating any potential SaaS provider to fully understand how the service is used and which security model is used to deliver it. In cases where the providers do not offer an adequate level of security, bolstering applications with a cloud access security broker (CASB) solution can help address that.

Similarly, when it comes to the increasingly elemental need for enhanced user authentication, businesses should be mindful of how cloud providers often handle authentication in different ways. Some vendors, for instance, support MFA, while others do not. Insisting that MFA be supported across all SaaS apps is a highly recommended security policy going forward.

These kinds of tools and strategies can help provide full visibility on possible knowledge gaps about which SaaS applications should be approved, which might be restricted, and which users are potentially taking their employers out of compliance.




Fail Of The Week: Battery Packin’

[NeXT] got himself an IBM ThinkPad TransNote and yeah, we’re pretty jealous. For the uninitiated, the TransNote was IBM’s foray into intelligent note transcription from roughly fifteen years ago. The ThinkPad doesn’t even have to be on to capture your notes because the proprietary pen has 2MB of flash memory. It won an award and everything. Not the pen, the TransNote.

Unfortunately, the battery life is poor in [NeXT]’s machine. The TransNote was (perhaps) ahead of its time. Since it didn’t last on the market very long, there isn’t a Chinese market for replacement batteries. [NeXT] decided to rebuild the replacement battery pack himself after sending it off with no luck.

The TransNote’s battery pack uses some weird, flat Samsung 103450 cells that are both expensive and rare. [NeXT] eventually found some camera batteries that have a single cell and a charge controller. He had to rearrange the wiring because the tabs were on the same side, but ultimately, they did work. He got the cells together in the right configuration, took steps to prevent shorts, and added the TransNote’s charge controller back into the circuit.

Nothing blew up, and the ThinkPad went through POST just fine. He plugged it in to charge and waited a total of 90 minutes. The charging rate was pretty lousy, though. At 94% charge, the estimated life showed 28 minutes, which is worse than before. What are your thoughts on the outcome and if it were you, what would be the next move?


Fail of the Week is a Hackaday column which runs every Wednesday. Help keep the fun rolling by writing about your past failures and sending us a link to the story — or sending in links to fail write ups you find in your Internet travels.

Did the Universe Just Happen?



The Atlantic Monthly | April 1988
 

I. Flying Solo


Ed Fredkin is scanning the visual field systematically. He checks the instrument panel regularly. He is cool, collected, in control. He is the optimally efficient pilot.

The plane is a Cessna Stationair Six—a six-passenger single-engine amphibious plane, the kind with the wheels recessed in pontoons. Fredkin bought it not long ago and is still working out a few kinks; right now he is taking it for a spin above the British Virgin Islands after some minor mechanical work.

He points down at several brown-green masses of land, embedded in a turquoise sea so clear that the shadows of yachts are distinctly visible on its sandy bottom. He singles out a small island with a good-sized villa and a swimming pool, and explains that the compound, and the island as well, belong to "the guy that owns Boy George"—the rock star's agent, or manager, or something.

I remark, loudly enough to overcome the engine noise, "It's nice."

Yes, Fredkin says, it's nice. He adds, "It's not as nice as my island."

He's joking, I guess, but he's right. Ed Fredkin's island, which soon comes into view, is bigger and prettier. It is about 125 acres, and the hill that constitutes its bulk is a deep green—a mixture of reeds and cacti, sea grape and turpentine trees, machineel and frangipani. Its beaches range from prosaic to sublime, and the coral in the waters just offshore attracts little and big fish whose colors look as if they were coordinated by Alexander Julian. On the island's west side are immense rocks, suitable for careful climbing, and on the east side are a bar and restaurant and a modest hotel, which consists of three clapboard buildings, each with a few rooms. Between east and west is Fredkin's secluded island villa. All told, Moskito Island—or Drake's Anchorage, as the brochures call it—is a nice place for Fredkin to spend the few weeks of each year when he is not up in the Boston area tending his various other businesses.

In addition to being a self-made millionaire, Fredkin is a self-made intellectual. Twenty years ago, at the age of thirty-four, without so much as a bachelor's degree to his name, he became a full professor at the Massachusetts Institute of Technology. Though hired to teach computer science, and then selected to guide MIT's now eminent computer-science laboratory through some of its formative years, he soon branched out into more-offbeat things. Perhaps the most idiosyncratic of the courses he has taught is one on "digital physics," in which he propounded the most idiosyncratic of his several idiosyncratic theories. This theory is the reason I've come to Fredkin's island. It is one of those things that a person has to be prepared for. The preparer has to say, "Now, this is going to sound pretty weird, and in a way it is, but in a way it's not as weird as it sounds, and you'll see this once you understand it, but that may take a while, so in the meantime don't prejudge it, and don't casually dismiss it." Ed Fredkin thinks that the universe is a computer.

Fredkin works in a twilight zone of modern science—the interface of computer science and physics. Here two concepts that traditionally have ranked among science's most fundamental—matter and energy—keep bumping into a third: information. The exact relationship among the three is a question without a clear answer, a question vague enough, and basic enough, to have inspired a wide variety of opinions. Some scientists have settled for modest and sober answers. Information, they will tell you, is just one of many forms of matter and energy; it is embodied in things like a computer's electrons and a brain's neural firings, things like newsprint and radio waves, and that is that. Others talk in grander terms, suggesting that information deserves full equality with matter and energy, that it should join them in some sort of scientific trinity, that these three things are the main ingredients of reality.

Fredkin goes further still. According to his theory of digital physics, information is more fundamental than matter and energy. He believes that atoms, electrons, and quarks consist ultimately of bits—binary units of information, like those that are the currency of computation in a personal computer or a pocket calculator. And he believes that the behavior of those bits, and thus of the entire universe, is governed by a single programming rule. This rule, Fredkin says, is something fairly simple, something vastly less arcane than the mathematical constructs that conventional physicists use to explain the dynamics of physical reality. Yet through ceaseless repetition—by tirelessly taking information it has just transformed and transforming it further—it has generated pervasive complexity. Fredkin calls this rule, with discernible reverence, "the cause and prime mover of everything."

AT THE RESTAURANT ON FREDKIN'S ISLAND THE FOOD is prepared by a large man named Brutus and is humbly submitted to diners by men and women native to nearby islands. The restaurant is open-air, ventilated by a sea breeze that is warm during the day, cool at night, and almost always moist. Between the diners and the ocean is a knee-high stone wall, against which waves lap rhythmically. Beyond are other islands and a horizon typically blanketed by cottony clouds. Above is a thatched ceiling, concealing, if the truth be told, a sheet of corrugated steel. It is lunchtime now, and Fredkin is sitting in a cane-and-wicker chair across the table from me, wearing a light cotton sport shirt and gray swimming trunks. He was out trying to windsurf this morning, and he enjoyed only the marginal success that one would predict on the basis of his appearance. He is fairly tall and very thin, and has a softness about him—not effeminacy, but a gentleness of expression and manner—and the complexion of a scholar; even after a week on the island, his face doesn't vary much from white, except for his nose, which is red. The plastic frames of his glasses, in a modified aviator configuration, surround narrow eyes; there are times—early in the morning or right after a nap—when his eyes barely qualify as slits. His hair, perennially semi-combed, is black with a little gray.

Fredkin is a pleasant mealtime companion. He has much to say that is interesting, which is fortunate because generally he does most of the talking. He has little curiosity about other people's minds, unless their interests happen to coincide with his, which few people's do. "He's right above us," his wife, Joyce, once explained to me, holding her left hand just above her head, parallel to the ground. "Right here looking down. He's not looking down saying, 'I know more than you.' He's just going along his own way."

The food has not yet arrived, and Fredkin is passing the time by describing the world view into which his theory of digital physics fits. "There are three great philosophical questions," he begins. "What is life? What is consciousness and thinking and memory and all that? And how does the universe work?" He says that his "informational viewpoint" encompasses all three. Take life, for example. Deoxyribonucleic acid, the material of heredity, is "a good example of digitally encoded information," he says. "The information that implies what a creature or a plant is going to be is encoded; it has its representation in the DNA, right? Okay, now, there is a process that takes that information and transforms it into the creature, okay?" His point is that a mouse, for example, is "a big, complicated informational process."

Fredkin exudes rationality. His voice isn't quite as even and precise as Mr. Spock's, but it's close, and the parallels don't end there. He rarely displays emotion—except, perhaps, the slightest sign of irritation under the most trying circumstances. He has never seen a problem that didn't have a perfectly logical solution, and he believes strongly that intelligence can be mechanized without limit. More than ten years ago he founded the Fredkin Prize, a $100,000 award to be given to the creator of the first computer program that can beat a world chess champion. No one has won it yet, and Fredkin hopes to have the award raised to $1 million.

Fredkin is hardly alone in considering DNA a form of information, but this observation was less common back when he first made it. So too with many of his ideas. When his world view crystallized, a quarter of a century ago, he immediately saw dozens of large-scale implications, in fields ranging from physics to biology to psychology. A number of these have gained currency since then, and he considers this trend an ongoing substantiation of his entire outlook.

Fredkin talks some more and then recaps. "What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity life, DNA—you know, the biochemical functions—are controlled by a digital information process. Then, at another level, our thought processes are basically information processing." That is not to say, he stresses, that everything is best viewed as information. "It's just like there's mathematics and all these other things, but not everything is best viewed from a mathematical viewpoint. So what's being said is not that this comes along and replaces everything. It's one more avenue of modeling reality, and it happens to cover the sort of three biggest philosophical mysteries. So it sort of completes the picture."

Among the scientists who don't dismiss Fredkin's theory of digital physics out of hand is Marvin Minsky, a computer scientist and polymath at MIT, whose renown approaches cultic proportions in some circles. Minsky calls Fredkin "Einstein-like" in his ability to find deep principles through simple intellectual excursions. If it is true that most physicists think Fredkin is off the wall, Minsky told me, it is also true that "most physicists are the ones who don't invent new theories"; they go about their work with tunnel vision, never questioning the dogma of the day. When it comes to the kind of basic reformulation of thought proposed by Fredkin, "there's no point in talking to anyone but a Feynman or an Einstein or a Pauli," Minsky says. "The rest are just Republicans and Democrats." I talked with Richard Feynman, a Nobel laureate at the California Institute of Technology, before his death, in February. Feynman considered Fredkin a brilliant and consistently original, though sometimes incautious, thinker. If anyone is going to come up with a new and fruitful way of looking at physics, Feynman said, Fredkin will.

Notwithstanding their moral support, though, neither Feynman nor Minsky was ever convinced that the universe is a computer. They were endorsing Fredkin's mind, not this particular manifestation of it. When it comes to digital physics, Ed Fredkin is flying solo.

He knows that, and he regrets that his ideas continue to lack the support of his colleagues. But his self-confidence is unshaken. You see, Fredkin has had an odd childhood, and an odd education, and an odd career, all of which, he explains, have endowed him with an odd perspective, from which the essential nature of the universe happens to be clearly visible. "I feel like I'm the only person with eyes in a world where everyone's blind," he says.

II. A Finely Mottled Universe


THE PRIME MOVER OF EVERYTHING, THE SINGLE principle that governs the universe, lies somewhere within a class of computer programs known as cellular automata, according to Fredkin.

The cellular automaton was invented in the early 1950s by John von Neumann, one of the architects of computer science and a seminal thinker in several other fields. Von Neumann (who was stimulated in this and other inquiries by the ideas of the mathematician Stanislaw Ulam) saw cellular automata as a way to study reproduction abstractly, but the word cellular is not meant biologically when used in this context. It refers, rather, to adjacent spaces—cells—that together form a pattern. These days the cells typically appear on a computer screen, though von Neumann, lacking this convenience, rendered them on paper.

In some respects cellular automata resemble those splendid graphic displays produced by patriotic masses in authoritarian societies and by avid football fans at American universities. Holding up large colored cards on cue, they can collectively generate a portrait of, say, Lenin, Mao Zedong, or a University of Southern California Trojan. More impressive still, one portrait can fade out and another crystallize in no time at all. Again and again one frozen frame melts into another. It is a spectacular feat of precision and planning.

But suppose there were no planning. Suppose that instead of arranging a succession of cards to display, everyone learned a single rule for repeatedly determining which card was called for next. This rule might assume any of a number of forms. For example, in a crowd where all cards were either blue or white, each card holder could be instructed to look at his own card and the cards of his four nearest neighbors—to his front, back, left, and right—and do what the majority did during the last frame. (This five-cell group is known as the von Neumann neighborhood.) Alternatively, each card holder could be instructed to do the opposite of what the majority did. In either event the result would be a series not of predetermined portraits but of more abstract, unpredicted patterns. If, by prior agreement, we began with a USC Trojan, its white face might dissolve into a sea of blue, as whitecaps drifted aimlessly across the stadium. Conversely, an ocean of randomness could yield islands of structure—not a Trojan, perhaps, but at least something that didn't look entirely accidental. It all depends on the original pattern of cells and the rule used to transform it incrementally.

This leaves room for abundant variety. There are many ways to define a neighborhood, and for any given neighborhood there are many possible rules, most of them more complicated than blind conformity or implacable nonconformity. Each cell may, for instance, not only count cells in the vicinity but also pay attention to which particular cells are doing what. All told, the number of possible rules is an exponential function of the number of cells in the neighborhood; the von Neumann neighborhood alone has 2^32, or around 4 billion, possible rules, and the nine-cell neighborhood that results from adding corner cells offers 2^512, or roughly 1 with 154 zeros after it, possibilities. But whatever neighborhoods, and whatever rules, are programmed into a computer, two things are always true of cellular automata: all cells use the same rule to determine future behavior by reference to the past behavior of neighbors, and all cells obey the rule simultaneously, time after time.

In the late 1950s, shortly after becoming acquainted with cellular automata, Fredkin began playing around with rules, selecting the powerful and interesting and discarding the weak and bland. He found, for example, that any rule requiring all four of a cell's immediate neighbors to be lit up in order for the cell itself to be lit up at the next moment would not provide sustained entertainment; a single "off" cell would proliferate until darkness covered the computer screen. But equally simple rules could create great complexity. The first such rule discovered by Fredkin dictated that a cell be on if an odd number of cells in its von Neumann neighborhood had been on, and off otherwise. After "seeding" a good, powerful rule with an irregular landscape of off and on cells, Fredkin could watch rich patterns bloom, some freezing upon maturity, some eventually dissipating, others locking into a cycle of growth and decay. A colleague, after watching one of Fredkin's rules in action, suggested that he sell the program to a designer of Persian rugs.
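
Fredkin's first rule is simple enough to reproduce. The short Python sketch below (an illustration of the idea, not a reconstruction of his original program) turns a cell on when an odd number of cells in its von Neumann neighborhood were on, and off otherwise; swapping in a different update function yields the majority-vote rule described earlier.

# Fredkin's parity rule on a wrap-around grid: a cell is on at the next step when
# an odd number of cells in its von Neumann neighborhood (itself plus its four
# orthogonal neighbors) are on now. Grid size and random seed are arbitrary choices.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    neighborhood_sum = (
        grid
        + np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)  # neighbors above and below
        + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)  # neighbors left and right
    )
    return neighborhood_sum % 2  # odd count -> on, even count -> off

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(16, 16))  # irregular landscape of off and on cells

for _ in range(8):  # every cell obeys the same rule simultaneously, step after step
    grid = step(grid)

print(grid)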

Today new cellular-automaton rules are formulated and tested by the "information-mechanics group" founded by Fredkin at MIT's computer-science laboratory. The core of the group is an international duo of physicists, Tommaso Toffoli, of Italy, and Norman Margolus, of Canada. They differ in the degree to which they take Fredkin's theory of physics seriously, but both agree with him that there is value in exploring the relationship between computation and physics, and they have spent much time using cellular automata to simulate physical processes. In the basement of the computer-science laboratory is the CAM—the cellular automaton machine, designed by Toffoli and Margolus partly for that purpose. Its screen has 65,536 cells, each of which can assume any of four colors and can change color sixty times a second.

The CAM is an engrossing, potentially mesmerizing machine. Its four colors—the three primaries and black—intermix rapidly and intricately enough to form subtly shifting hues of almost any gradation; pretty waves of deep blue or red ebb and flow with fine fluidity and sometimes with rhythm, playing on the edge between chaos and order.

Guided by the right rule, the CAM can do a respectable imitation of pond water rippling outward circularly in deference to a descending pebble, or of bubbles forming at the bottom of a pot of boiling water, or of a snowflake blossoming from a seed of ice: step by step, a single "ice crystal" in the center of the screen unfolds into a full-fledged flake, a six-edged sheet of ice riddled symmetrically with dark pockets of mist. (It is easy to see how a cellular automaton can capture the principles thought to govern the growth of a snowflake: regions of vapor that find themselves in the vicinity of a budding snowflake freeze—unless so nearly enveloped by ice crystals that they cannot discharge enough heat to freeze.)

These exercises are fun to watch, and they give one a sense of the cellular automaton's power, but Fredkin is not particularly interested in them. After all, a snowflake is not, at the visible level, literally a cellular automaton; an ice crystal is not a single, indivisible bit of information, like the cell that portrays it. Fredkin believes that automata will more faithfully mirror reality as they are applied to its more fundamental levels and the rules needed to model the motion of molecules, atoms, electrons, and quarks are uncovered. And he believes that at the most fundamental level (whatever that turns out to be) the automaton will describe the physical world with perfect precision, because at that level the universe is a cellular automaton, in three dimensions—a crystalline lattice of interacting logic units, each one "deciding" zillions of times per second whether it will be off or on at the next point in time. The information thus produced, Fredkin says, is the fabric of reality, the stuff of which matter and energy are made. An electron, in Fredkin's universe, is nothing more than a pattern of information, and an orbiting electron is nothing more than that pattern moving. Indeed, even this motion is in some sense illusory: the bits of information that constitute the pattern never move, any more than football fans would change places to slide a USC Trojan four seats to the left. Each bit stays put and confines its activity to blinking on and off. "You see, I don't believe that there are objects like electrons and photons, and things which are themselves and nothing else," Fredkin says. "What I believe is that there's an information process, and the bits, when they're in certain configurations, behave like the thing we call the electron, or the hydrogen atom, or whatever."

THE READER MAY NOW HAVE A NUMBER OF questions that unless satisfactorily answered will lead to something approaching contempt for Fredkin's thinking. One such question concerns the way cellular automata chop space and time into little bits. Most conventional theories of physics reflect the intuition that reality is continuous—that one "point" in time is no such thing but, rather, flows seamlessly into the next, and that space, similarly, doesn't come in little chunks but is perfectly smooth. Fredkin's theory implies that both space and time have a graininess to them, and that the grains cannot be chopped up into smaller grains; that people and dogs and trees and oceans, at rock bottom, are more like mosaics than like paintings; and that time's essence is better captured by a digital watch than by a grandfather clock.

The obvious question is, Why do space and time seem continuous if they are not? The obvious answer is, The cubes of space and points of time are very, very small: time seems continuous in just the way that movies seem to move when in fact they are frames, and the illusion of spatial continuity is akin to the emergence of smooth shades from the finely mottled texture of a newspaper photograph.

The obvious answer, Fredkin says, is not the whole answer; the illusion of continuity is yet more deeply ingrained in our situation. Even if the ticks on the universal clock were, in some absolute sense, very slow, time would still seem continuous to us, since our perception, itself proceeding in the same ticks, would be no more finely grained than the processes being perceived. So too with spatial perception: Can eyes composed of the smallest units in existence perceive those units? Could any informational process sense its ultimate constituents? The point is that the basic units of time and space in Fredkin's reality don't just happen to be imperceptibly small. As long as the creatures doing the perceiving are in that reality, the units have to be imperceptibly small.

Though some may find this discreteness hard to comprehend, Fredkin finds a grainy reality more sensible than a smooth one. If reality is truly continuous, as most physicists now believe it is, then there must be quantities that cannot be expressed with a finite number of digits; the number representing the strength of an electromagnetic field, for example, could begin 5.23429847 and go on forever without falling into a pattern of repetition. That seems strange to Fredkin: wouldn't you eventually get to a point, around the hundredth, or thousandth, or millionth decimal place, where you had hit the strength of the field right on the nose? Indeed, wouldn't you expect that every physical quantity has an exactness about it? Well, you might and might not. But Fredkin does expect exactness, and in his universe he gets it.

Fredkin has an interesting way of expressing his insistence that all physical quantities be "rational." (A rational number is a number that can be expressed as a fraction—as a ratio of one integer to another. Expressed as a decimal, a rational number will either end, as 5/2 does in the form of 2.5, or repeat itself endlessly, as 1/7 does in the form of 0.142857142857142 . . .) He says he finds it hard to believe that a finite volume of space could contain an infinite amount of information. It is almost as if he viewed each parcel of space as having the digits describing it actually crammed into it. This seems an odd perspective, one that confuses the thing itself with the information it represents. But such an inversion between the realm of things and the realm of representation is common among those who work at the interface of computer science and physics. Contemplating the essence of information seems to affect the way you think.

The prospect of a discrete reality, however alien to the average person, is easier to fathom than the problem of the infinite regress, which is also raised by Fredkin's theory. The problem begins with the fact that information typically has a physical basis. Writing consists of ink; speech is composed of sound waves; even the computer's ephemeral bits and bytes are grounded in configurations of electrons. If the electrons are in turn made of information, then what is the information made of?

Asking questions like this ten or twelve times is not a good way to earn Fredkin's respect. A look of exasperation passes fleetingly over his face. "What I've tried to explain is that—and I hate to do this, because physicists are always doing this in an obnoxious way—is that the question implies you're missing a very important concept." He gives it one more try, two more tries, three, and eventually some of the fog between me and his view of the universe disappears. I begin to understand that this is a theory not just of physics but of metaphysics. When you disentangle these theories—compare the physics with other theories of physics, and the metaphysics with other ideas about metaphysics—both sound less far-fetched than when jumbled together as one. And, as a bonus, Fredkin's metaphysics leads to a kind of high-tech theology—to speculation about supreme beings and the purpose of life.

III. The Perfect Thing


EDWARD FREDKIN WAS BORN IN 1934, THE LAST OF three children in a previously prosperous family. His father, Manuel, had come to Southern California from Russia shortly after the Revolution and founded a chain of radio stores that did not survive the Great Depression. The family learned economy, and Fredkin has not forgotten it. He can reach into his pocket, pull out a tissue that should have been retired weeks ago, and, with cleaning solution, make an entire airplane windshield clear. He can take even a well-written computer program, sift through it for superfluous instructions, and edit it accordingly, reducing both its size and its running time.

Manuel was by all accounts a competitive man, and he focused his competitive energies on the two boys: Edward and his older brother, Norman. Manuel routinely challenged Ed's mastery of fact, inciting sustained arguments over, say, the distance between the moon and the earth. Norman's theory is that his father, though bright, was intellectually insecure; he seemed somehow threatened by the knowledge the boys brought home from school. Manuel's mistrust of books, experts, and all other sources of received wisdom was absorbed by Ed.

So was his competitiveness. Fredkin always considered himself the smartest kid in his class. He used to place bets with other students on test scores. This habit did not endear him to his peers, and he seems in general to have lacked the prerequisites of popularity. His sense of humor was unusual. His interests were not widely shared. His physique was not a force to be reckoned with. He recalls, "When I was young—you know, sixth, seventh grade—two kids would be choosing sides for a game of something. It could be touch football. They'd choose everybody but me, and then there'd be a fight as to whether one side would have to take me. One side would say, 'We have eight and you have seven,' and they'd say, 'That's okay.' They'd be willing to play with seven." Though exhaustive in documenting his social alienation, Fredkin concedes that he was not the only unpopular student in school. "There was a socially active subgroup, probably not a majority, maybe forty percent, who were very socially active. They went out on dates. They went to parties. They did this and they did that. The others were left out. And I was in this big left-out group. But I was in the pole position. I was really left out."

Of the hours Fredkin spent alone, a good many were devoted to courting disaster in the name of science. By wiring together scores of large, 45-volt batteries, he collected enough electricity to conjure up vivid, erratic arcs. By scraping the heads off matches and buying sulfur, saltpeter, and charcoal, he acquired a good working knowledge of pyrotechnics. He built small, minimally destructive but visually impressive bombs, and fashioned rockets out of cardboard tubing and aluminum foil. But more than bombs and rockets, it was mechanisms that captured Fredkin's attention. From an early age he was viscerally attracted to Big Ben alarm clocks, which he methodically took apart and put back together. He also picked up his father's facility with radios and household appliances. But whereas Manuel seemed to fix things without understanding the underlying science, his son was curious about first principles.

So while other kids were playing baseball or chasing girls, Ed Fredkin was taking things apart and putting them back together. Children were aloof, even cruel, but a broken clock always responded gratefully to a healing hand. "I always got along well with machines," he remembers.

After graduation from high school, in 1952, Fredkin headed for the California Institute of Technology with hopes of finding a more appreciative social environment. But students at Caltech turned out to bear a disturbing resemblance to people he had observed elsewhere. "They were smart like me," he recalls, "but they had the full spectrum and distribution of social development." Once again Fredkin found his weekends unencumbered by parties. And once again he didn't spend his free time studying. Indeed, one of the few lessons he learned is that college is different from high school: in college if you don't study, you flunk out. This he did a few months into his sophomore year. Then, following in his brother's footsteps, he joined the Air Force and learned to fly fighter planes.

IT WAS THE AIR FORCE THAT FINALLY BROUGHT Fredkin face to face with a computer. He was working for the Air Proving Ground Command, whose function was to ensure that everything from combat boots to bombers was of top quality, when the unit was given the job of testing a computerized air-defense system known as SAGE (for "semi-automatic ground environment"). To test SAGE the Air Force needed men who knew something about computers, and so in 1956 a group from the Air Proving Ground Command, including Fredkin, was sent to MIT's Lincoln Laboratory and enrolled in computer-science courses. "Everything made instant sense to me," Fredkin remembers. "I just soaked it up like a sponge."

SAGE, when ready for testing, turned out to be even more complex than anticipated—too complex to be tested by anyone but genuine experts—and the job had to be contracted out. This development, combined with bureaucratic disorder, meant that Fredkin was now a man without a function, a sort of visiting scholar at Lincoln Laboratory. "For a period of time, probably over a year, no one ever came to tell me to do anything. Well, meanwhile, down the hall they installed the latest, most modern computer in the world—IBM's biggest, most powerful computer. So I just went down and started to program it." The computer was an XD-1. It was slower and less capacious than an Apple Macintosh and was roughly the size of a large house.

When Fredkin talks about his year alone with this dinosaur, you half expect to hear violins start playing in the background. "My whole way of life was just waiting for the computer to come along," he says. "The computer was in essence just the perfect thing." It was in some respects preferable to every other conglomeration of matter he had encountered—more sophisticated and flexible than other inorganic machines, and more logical than organic ones. "See, when I write a program, if I write it correctly, it will work. If I'm dealing with a person, and I tell him something, and I tell him correctly, it may or may not work."

The XD-1, in short, was an intelligence with which Fredkin could empathize. It was the ultimate embodiment of mechanical predictability, the refuge to which as a child he had retreated from the incomprehensibly hostile world of humanity. If the universe is indeed a computer, then it could be a friendly place after all.

During the several years after his arrival at Lincoln Lab, as Fredkin was joining the first generation of hackers, he was also immersing himself in physics—finally learning, through self-instruction, the lessons he had missed by dropping out of Caltech. It is this two-track education, Fredkin says, that led him to the theory of digital physics. For a time "there was no one in the world with the same interest in physics who had the intimate experience with computers that I did. I honestly think that there was a period of many years when I was in a unique position."

The uniqueness lay not only in the fusion of physics and computer science but also in the peculiar composition of Fredkin's physics curriculum. Many physicists acquire as children the sort of kinship with mechanism that he still feels, but in most cases it is later diluted by formal education; quantum mechanics, the prevailing paradigm in contemporary physics, seems to imply that at its core, reality has truly random elements and is thus inherently unpredictable. But Fredkin escaped the usual indoctrination. To this day he maintains, as did Albert Einstein, that the common interpretation of quantum mechanics is mistaken—that any seeming indeterminacy in the subatomic world reflects only our ignorance of the determining principles, not their absence. This is a critical belief, for if he is wrong and the universe is not ultimately deterministic, then it cannot be governed by a process as exacting as computation.

After leaving the Air Force, Fredkin went to work for Bolt Beranek and Newman, a consulting firm in the Boston area, now known for its work in artificial intelligence and computer networking. His supervisor at BBN, J. C. R. Licklider, says of his first encounter with Fredkin, "It was obvious to me he was very unusual and probably a genius, and the more I came to know him, the more I came to think that that was not too elevated a description." Fredkin "worked almost continuously," Licklider recalls. "It was hard to get him to go to sleep sometimes." A pattern emerged. Licklider would give Fredkin a problem to work on—say, figuring out how to get a computer to search a text in its memory for an only partially specified sequence of letters. Fredkin would retreat to his office and return twenty or thirty hours later with the solution—or, rather, a solution; he often came back with the answer to a question different from the one that Licklider had asked. Fredkin's focus was intense but undisciplined, and it tended to stray from a problem as soon as he was confident that he understood the solution in principle.

This intellectual wanderlust is one of Fredkin's most enduring and exasperating traits. Just about everyone who knows him has a way of describing it: "He doesn't really work. He sort of fiddles." "Very often he has these great ideas and then does not have the discipline to cultivate the idea." "There is a gap between the quality of the original ideas and what follows. There's an imbalance there." Fredkin is aware of his reputation. In self-parody he once brought a cartoon to a friend's attention: A beaver and another forest animal are contemplating an immense man-made dam. The beaver is saying something like, "No, I didn't actually build it. But it's based on an idea of mine."

Among the ideas that congealed in Fredkin's mind during his stay at BBN is the one that gave him his current reputation as (depending on whom you talk to) a thinker of great depth and rare insight, a source of interesting but reckless speculation, or a crackpot.

IV. Tick by Tick, Dot by Dot


THE IDEA THAT THE UNIVERSE IS A COMPUTER WAS inspired partly by the idea of the universal computer. Universal computer, a term that can accurately be applied to everything from an IBM PC to a Cray supercomputer, has a technical, rigorous definition, but here its upshot will do: a universal computer can simulate any process that can be precisely described and perform any calculation that is performable.

This broad power is ultimately grounded in something very simple: the algorithm. An algorithm is a fixed procedure for converting input into output, for taking one body of information and turning it into another. For example, a computer program that takes any number it is given, squares it, and subtracts three is an algorithm. This isn't a very powerful algorithm; by taking a 3 and turning it into a 6, it hasn't created much new information. But algorithms become more powerful with recursion. A recursive algorithm is an algorithm whose output is fed back into it as input. Thus the algorithm that turned 3 into 6, if operating recursively, would continue, turning 6 into 33, then 33 into 1,086, then 1,086 into 1,179,393, and so on.
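
As a concrete illustration (mine, not the article's), the "square it and subtract three" procedure run recursively is only a few lines of Python, and it reproduces the 3, 6, 33, 1,086, 1,179,393 sequence described above:

    def step(x):
        # One pass of the algorithm: square the input and subtract three.
        return x * x - 3

    def run_recursively(x, iterations):
        # Feed each output back in as the next input.
        history = [x]
        for _ in range(iterations):
            x = step(x)
            history.append(x)
        return history

    print(run_recursively(3, 4))   # [3, 6, 33, 1086, 1179393]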

The power of recursive algorithms is especially apparent in the simulation of physical processes. While Fredkin was at BBN, he would use the company's Digital Equipment Corporation PDP-1 computer to simulate, say, two particles, one that was positively charged and one that was negatively charged, orbiting each other in accordance with the laws of electromagnetism. It was a pretty sight: two phosphor dots dancing, each etching a green trail that faded into yellow and then into darkness. But for Fredkin the attraction lay less in this elegant image than in its underlying logic. The program he had written took the particles' velocities and positions at one point in time, computed those variables for the next point in time, and then fed the new variables back into the algorithm to get newer variables—and so on and so on, thousands of times a second. The several steps in this algorithm, Fredkin recalls, were "very simple and very beautiful." It was in these orbiting phosphor dots that Fredkin first saw the appeal of his kind of universe—a universe that proceeds tick by tick and dot by dot, a universe in which complexity boils down to rules of elementary simplicity.
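
A sketch of that kind of update loop, in modern Python rather than PDP-1 code, and with made-up constants and a crude Euler step standing in for whatever Fredkin actually wrote, might look like this:

    # Toy version of the orbit simulation described above: two opposite
    # charges attracting each other, advanced tick by tick with a crude
    # Euler step. All constants are arbitrary; this is not the PDP-1 code.
    def simulate(steps=5, dt=0.01, k=1.0):
        pos = [[-1.0, 0.0], [1.0, 0.0]]      # positions of the two particles
        vel = [[0.0, 0.4], [0.0, -0.4]]      # and their velocities
        for _ in range(steps):
            dx = pos[1][0] - pos[0][0]
            dy = pos[1][1] - pos[0][1]
            r = (dx * dx + dy * dy) ** 0.5
            fx = k * dx / r**3               # attractive force on particle 0,
            fy = k * dy / r**3               # directed toward particle 1
            vel[0][0] += fx * dt; vel[0][1] += fy * dt
            vel[1][0] -= fx * dt; vel[1][1] -= fy * dt
            for p, v in zip(pos, vel):       # new positions from new velocities
                p[0] += v[0] * dt
                p[1] += v[1] * dt
            print(pos)                       # the two "phosphor dots", one tick later

    simulate()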

Fredkin's discovery of cellular automata a few years later permitted him further to indulge his taste for economy of information and strengthened his bond with the recursive algorithm. The patterns of automata are often all but impossible to describe with calculus yet easy to express algorithmically. Nothing is so striking about a good cellular automaton as the contrast between the simplicity of the underlying algorithm and the richness of its result. We have all felt the attraction of such contrasts. It accompanies the comprehension of any process, conceptual or physical, by which simplicity accommodates complexity. Simple solutions to complex problems, for example, make us feel good. The social engineer who designs uncomplicated legislation that will cure numerous social ills, the architect who eliminates several nagging design flaws by moving a single closet, the doctor who traces gastro-intestinal, cardiovascular, and respiratory ailments to a single, correctable cause—all feel the same kind of visceral, aesthetic satisfaction that must have filled the first caveman who literally killed two birds with one stone.
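
Anyone who wants to see the contrast for themselves can run a one-dimensional, two-state cellular automaton; the example below uses Wolfram's rule 30, chosen here purely as a convenient illustration and not as anything Fredkin worked with. The rule is one line; the printed triangle of cells is famously intricate:

    # A one-line rule, a rich pattern: Wolfram's rule 30, used here only as
    # an example of a trivial algorithm producing complicated output.
    def rule30(left, center, right):
        return left ^ (center | right)

    width, steps = 63, 30
    cells = [0] * width
    cells[width // 2] = 1                    # start with a single "on" cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = [rule30(cells[i - 1], cells[i], cells[(i + 1) % width])
                 for i in range(width)]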

For scientists, the moment of discovery does not simply reinforce the search for knowledge; it inspires further research. Indeed, it directs research. The unifying principle, upon its apprehension, can elicit such devotion that thereafter the scientist looks everywhere for manifestations of it. It was the scientist in Fredkin who, upon seeing how a simple programming rule could yield immense complexity, got excited about looking at physics in a new way and stayed excited. He spent much of the next three decades fleshing out his intuition.

FREDKIN'S RESIGNATION FROM BOLT BERANEK AND Newman did not surprise Licklider. "I could tell that Ed was disappointed in the scope of projects undertaken at BBN. He would see them on a grander scale. I would try to argue—hey, let's cut our teeth on this and then move on to bigger things." Fredkin wasn't biting. "He came in one day and said, 'Gosh, Lick, I really love working here, but I'm going to have to leave. I've been thinking about my plans for the future, and I want to make'—I don't remember how many millions of dollars, but it shook me—'and I want to do it in about four years.' And he did amass however many millions he said he would amass in the time he predicted, which impressed me considerably."

In 1962 Fredkin founded Information International Incorporated—an impressive name for a company with no assets and no clients, whose sole employee had never graduated from college. Triple-I, as the company came to be called, was placed on the road to riches by an odd job that Fredkin performed for the Woods Hole Oceanographic Institute. One of Woods Hole's experiments had run into a complication: underwater instruments had faithfully recorded the changing direction and strength of deep ocean currents, but the information, encoded in tiny dots of light on sixteen-millimeter film, was inaccessible to the computers that were supposed to analyze it. Fredkin rented a sixteen-millimeter movie projector and with a surprisingly simple modification turned it into a machine for translating those dots into terms the computer could accept.

This contraption pleased the people at Woods Hole and led to a contract with Lincoln Laboratory. Lincoln was still doing work for the Air Force, and the Air Force wanted its computers to analyze radar information that, like the Woods Hole data, consisted of patterns of light on film. A makeshift information-conversion machine earned Triple-I $10,000, and within a year the Air Force hired Fredkin to build equipment devoted to the task. The job paid $350,000—the equivalent today of around $1 million. RCA and other companies, it turned out, also needed to turn visual patterns into digital data, and "programmable film readers" that sold for $500,000 apiece became Triple-I's stock-in-trade. In 1968 Triple-I went public and Fredkin was suddenly a millionaire. Gradually he cashed in his chips. First he bought a ranch in Colorado. Then one day he was thumbing through the classifieds and saw that an island in the Caribbean was for sale. He bought it.

In the early 1960s, at the suggestion of the Defense Department's Advanced Research Projects Agency, MIT set up what would become its Laboratory for Computer Science. It was then called Project MAC, an acronym that stood for both "machine-aided cognition" and "multiaccess computer." Fredkin had connections with the project from the beginning. Licklider, who had left BBN for the Pentagon shortly after Fredkin's departure, was influential in earmarking federal money for MAC. Marvin Minsky—who would later serve on Triple-I's board, and by the end of 1967 owned some of its stock—was centrally involved in MAC's inception. Fredkin served on Project MAC's steering committee, and in 1966 he began discussing with Minsky the possibility of becoming a visiting professor at MIT. The idea of bringing a college dropout onto the faculty, Minsky recalls, was not as outlandish as it now sounds; computer science had become an academic discipline so suddenly that many of its leading lights possessed meager formal credentials. In 1968, after Licklider had come to MIT and become the director of Project MAC, he and Minsky convinced Louis Smullin, the head of the electrical-engineering department, that Fredkin was worth the gamble. "We were a growing department and we wanted exciting people," Smullin says. "And Ed was exciting."

Fredkin had taught for barely a year before he became a full professor, and not much later, in 1971, he was appointed the head of Project MAC—a position that was also short-lived, for in the fall of 1974 he began a sabbatical at the California Institute of Technology as a Fairchild Distinguished Scholar. He went to Caltech under the sponsorship of Richard Feynman. The deal, Fredkin recalls, was that he would teach Feynman more about computer science, and Feynman would teach him more about physics. While there, Fredkin developed an idea that has slowly come to be seen as a profound contribution to both disciplines. The idea is also—in Fredkin's mind, at least—corroborating evidence for his theory of digital physics. To put its upshot in brief and therefore obscure terms, Fredkin found that computation is not inherently irreversible and thus it is possible, in principle, to build a computer that doesn't use up energy and doesn't give off heat.

All computers on the market are irreversible. That is, their history of information processing cannot be inferred from their present informational state; you cannot look at the data they contain and figure out how they arrived at it. By the time the average computer tells you that 2 plus 2 equals 4, it has forgotten the question; for all it knows, you asked what 1 plus 3 is. The reason for this ignorance is that computers discard information once it is no longer needed, so that they won't get clogged up.

In 1961 Rolf Landauer, of IBM's Thomas J. Watson Research Center, established that this destruction of information is the only part of the computational process that unavoidably involves the dissipation of energy. It takes effort, in other words, for a computer to forget things but not necessarily for it to perform other functions. Thus the question of whether you can, in principle, build a universal computer that doesn't dissipate energy in the form of heat is synonymous with the question of whether you can design a logically reversible universal computer, one whose computational history can always be unearthed. Landauer, along with just about everyone else, thought such a computer impossible; all past computer architectures had implied the regular discarding of information, and it was widely believed that this irreversibility was intrinsic to computation. But while at Caltech, Fredkin did one of his favorite things—he showed that everyone had been wrong all along.
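
One way to make reversibility concrete is the controlled-swap gate that now bears Fredkin's name: three bits go in, three bits come out, nothing is discarded, and applying the gate twice restores the original input. The toy Python below only illustrates that property; it is not drawn from the article.

    def fredkin(c, a, b):
        # Controlled swap: if the control bit c is 1, swap a and b.
        # No bits are thrown away, so the inputs can always be recovered.
        return (c, b, a) if c else (c, a, b)

    for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
        out = fredkin(*bits)
        assert fredkin(*out) == bits         # applying the gate twice undoes it
        print(bits, "->", out)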

Of the two kinds of reversible computers invented by Fredkin, the better known is called the billiard-ball computer. If it were ever actually built, it would consist of billiard balls ricocheting around in a labyrinth of "mirrors," bouncing off the mirrors at 45-degree angles, periodically banging into other moving balls at 90-degree angles, and occasionally exiting through doorways that occasionally would permit new balls to enter. To extract data from the machine, you would superimpose a grid over it, and the presence or absence of a ball in a given square at a given point in time would constitute information. Such a machine, Fredkin showed, would qualify as a universal computer; it could do anything that normal computers do. But unlike other computers, it would be perfectly reversible; to recover its history, all you would have to do is stop it and run it backward. Charles H. Bennett, of IBM's Thomas J. Watson Research Center, independently arrived at a different proof that reversible computation is possible, though he considers the billiard-ball computer to be in some respects a more elegant solution to the problem than his own.

The billiard-ball computer will never be built, because it is a platonic device, existing only in a world of ideals. The balls are perfectly round and hard, and the table perfectly smooth and hard. There is no friction between the two, and no energy is lost when balls collide. Still, although these ideals are unreachable, they could be approached eternally through technological refinement, and the heat produced by friction and collision could thus be reduced without limit. Since no additional heat would be created by information loss, there would be no necessary minimum on the total heat emitted by the computer. "The cleverer you are, the less heat it will generate," Fredkin says.

The connection Fredkin sees between the billiard-ball computer and digital physics exemplifies the odd assortment of evidence he has gathered in support of his theory. Molecules and atoms and their constituents, he notes, move around in theoretically reversible fashion, like billiard balls (although it is not humanly possible, of course, actually to take stock of the physical state of the universe, or even one small corner of it, and reconstruct history by tracing the motion of microscopic particles backward). Well, he asks, given the theoretical reversibility of physical reality, doesn't the theoretical feasibility of a reversible computer lend credence to the claim that computation is reality's basis?

No and yes. Strictly speaking, Fredkin's theory doesn't demand reversible computation. It is conceivable that an irreversible process at the very core of reality could give rise to the reversible behavior of molecules, atoms, electrons, and the rest. After all, irreversible computers (that is, all computers on the market) can simulate reversible billiard balls. But they do so in a convoluted way, Fredkin says, and the connection between an irreversible substratum and a reversible stratum would, similarly, be tortuous—or, as he puts it, "aesthetically obnoxious." Fredkin prefers to think that the cellular automaton underlying reversible reality does its work gracefully.

Consider, for example, a variant of the billiard-ball computer invented by Norman Margolus, the Canadian in MIT's information-mechanics group. Margolus showed how a two-state cellular automaton that was itself reversible could simulate the billiard-ball computer using only a simple rule involving a small neighborhood. This cellular automaton in action looks like a jazzed-up version of the original video game, Pong. It is an overhead view of endlessly energetic balls ricocheting off clusters of mirrors and each other. It is proof that a very simple binary cellular automaton can give rise to the seemingly more complex behavior of microscopic particles bouncing off each other. And, as a kind of bonus, these particular particles themselves amount to a computer. Though Margolus discovered this powerful cellular-automaton rule, it was Fredkin who had first concluded that it must exist and persuaded Margolus to look for it. "He has an intuitive idea of how things should be," Margolus says. "And often, if he can't come up with a rational argument to convince you that it should be so, he'll sort of transfer his intuition to you."
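
The exact rule Margolus found is beyond the scope of this article, but the partitioning scheme he used, now known as the Margolus neighborhood, is easy to sketch: divide the grid into 2x2 blocks, apply an invertible transformation to each block, and shift the block boundaries by one cell on alternate steps. The Python below shows only that machinery, with a deliberately simplified block rule (rotate each block 180 degrees, which sends a lone "particle" gliding diagonally); the real billiard-ball rule adds collision cases not shown here.

    # Margolus-style partitioned update on a small toroidal grid. The block
    # rule used here (rotate each 2x2 block by 180 degrees) is a simplified,
    # invertible stand-in; the real billiard-ball rule also handles collisions.
    N = 8
    grid = [[0] * N for _ in range(N)]
    grid[2][2] = 1                           # one lone "particle"

    def step(grid, offset):
        new = [row[:] for row in grid]
        for by in range(offset, N + offset, 2):
            for bx in range(offset, N + offset, 2):
                ys = [by % N, (by + 1) % N]
                xs = [bx % N, (bx + 1) % N]
                coords = [(y, x) for y in ys for x in xs]
                block = [grid[y][x] for y, x in coords]
                block.reverse()              # 180-degree rotation of the block
                for (y, x), value in zip(coords, block):
                    new[y][x] = value
        return new

    for t in range(4):
        print("\n".join("".join("#" if c else "." for c in row) for row in grid))
        print()
        grid = step(grid, t % 2)             # alternate the block partition each tick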

That, really, is what Fredkin is trying to do when he argues that the universe is a computer. He cannot give you a single line of reasoning that leads inexorably, or even very plausibly, to this conclusion. He can tell you about the reversible computer, about Margolus's cellular automaton, about the many physical quantities, like light, that were once thought to be continuous but are now considered discrete, and so on. The evidence consists of many little things—so many, and so little, that in the end he is forced to convey his truth by simile. "I find the supporting evidence for my beliefs in ten thousand different places," he says. "And to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, Where is this animal? I say, Well, he was here, he's about this big, this that and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there." The story changes upon retelling. One day it's Bigfoot that Fredkin's trailing. Another day it's a duck: feathers are everywhere, and the tracks are webbed. Whatever the animal, the moral of the story remains the same: "What I see is so compelling that it can't be a creature of my imagination."

V. Deus ex Machina


THERE WAS SOMETHING BOTHERSOME ABOUT ISAAC Newton's theory of gravitation. The idea that the sun exerts a pull on the earth, and vice versa, sounded vaguely supernatural and, in any event, was hard to explain. How, after all, could such "action at a distance" be realized? Did the earth look at the sun, estimate the distance, and consult the law of gravitation to determine where it should move and how fast? Newton sidestepped such questions. He fudged with the Latin phrase si esset: two bodies, he wrote, behave as if impelled by a force inversely proportional to the square of their distance. Ever since Newton, physics has followed his example. Its "force fields" are, strictly speaking, metaphorical, and its laws purely descriptive. Physicists make no attempt to explain why things obey the law of electromagnetism or of gravitation. The law is the law, and that's all there is to it.

Fredkin refuses to accept authority so blindly. He posits not only laws but also a law-enforcement agency: a computer. Somewhere out there, he believes, is a machinelike thing that actually keeps our individual bits of space abiding by the rule of the universal cellular automaton. With this belief Fredkin crosses the line between physics and metaphysics, between scientific hypothesis and cosmic speculation. If Fredkin had Newton's knack for public relations, if he stopped at saying that the universe operates as if it were a computer, he could strengthen his stature among physicists while preserving the essence of his theory—the idea that the dynamics of physical reality will ultimately be better captured by a single recursive algorithm than by the mathematics of conventional physics, and that the continuity of time and space implicit in traditional mathematics is illusory.

Actually, some estimable physicists have lately been saying things not wholly unlike this stripped-down version of the theory. T. D. Lee, a Nobel laureate at Columbia University, has written at length about the possibility that time is discrete. And in 1984 Scientific American, not exactly a soapbox for cranks, published an article in which Stephen Wolfram, then of Princeton's Institute for Advanced Study, wrote, "Scientific laws are now being viewed as algorithms. . . . Physical systems are viewed as computational systems, processing information much the way computers do." He concluded, "A new paradigm has been born."

The line between responsible scientific speculation and off-the-wall metaphysical pronouncement was nicely illustrated by an article in which Tommaso Toffoli, the Italian in MIT's information-mechanics group, stayed barely on the responsible side of it. Published in the journal Physica D, the article was called "Cellular automata as an alternative to (rather than an approximation of) differential equations in modeling physics." Toffoli's thesis captured the core of Fredkin's theory yet had a perfectly reasonable ring to it. He simply suggested that the historical reliance of physicists on calculus may have been due not just to its merits but also to the fact that before the computer, alternative languages of description were not practical.

Why does Fredkin refuse to do the expedient thing—leave out the part about the universe actually being a computer? One reason is that he considers reprehensible the failure of Newton, and of all physicists since, to back up their descriptions of nature with explanations. He is amazed to find "perfectly rational scientists" believing in "a form of mysticism: that things just happen because they happen." The best physics, Fredkin seems to believe, is metaphysics.

The trouble with metaphysics is its endless depth. For every question that is answered, at least one other is raised, and it is not always clear that, on balance, any progress has been made. For example, where is this computer that Fredkin keeps talking about? Is it in this universe, residing along some fifth or sixth dimension that renders it invisible? Is it in some meta-universe? The answer is the latter, apparently, and to understand why, we need to return to the problem of the infinite regress, a problem that Rolf Landauer, among others, has cited with respect to Fredkin's theory. Landauer illustrates the problem by telling the old turtle story. A professor has just finished lecturing at some august university about the origin and structure of the universe, and an old woman in tennis shoes walks up to the lectern. "Excuse me, sir, but you've got it all wrong," she says. "The truth is that the universe is sitting on the back of a huge turtle." The professor decides to humor her. "Oh, really?" he asks. "Well, tell me, what is the turtle standing on?" The lady has a ready reply: "Oh, it's standing on another turtle." The professor asks, "And what is that turtle standing on?" Without hesitation, she says, "Another turtle." The professor, still game, repeats his question. A look of impatience comes across the woman's face. She holds up her hand, stopping him in mid-sentence. "Save your breath, sonny," she says. "It's turtles all the way down."

The infinite-regress problem afflicts Fredkin's theory in two ways, one of which we have already encountered: if matter is made of information, what is the information made of? And even if one concedes that it is no more ludicrous for information to be the most fundamental stuff than for matter or energy to be the most fundamental stuff, what about the computer itself? What is it made of? What energizes it? Who, or what, runs it, or set it in motion to begin with?

WHEN FREDKIN IS DISCUSSING THE PROBLEM OF THE infinite regress, his logic seems variously cryptic, evasive, and appealing. At one point he says, "For everything in the world where you wonder, 'What is it made out of?' the only thing I know of where the question doesn't have to be answered with anything else is for information." This puzzles me. Thousands of words later I am still puzzled, and I press for clarification. He talks some more. What he means, as near as I can tell, is what follows.

First of all, it doesn't matter what the information is made of, or what kind of computer produces it. The computer could be of the conventional electronic sort, or it could be a hydraulic machine made of gargantuan sewage pipes and manhole covers, or it could be something we can't even imagine. What's the difference? Who cares what the information consists of? So long as the cellular automaton's rule is the same in each case, the patterns of information will be the same, and so will we, because the structure of our world depends on pattern, not on the pattern's substrate; a carbon atom, according to Fredkin, is a certain configuration of bits, not a certain kind of bits.

Besides, we can never know what the information is made of or what kind of machine is processing it. This point is reminiscent of childhood conversations that Fredkin remembers having with his sister, Joan, about the possibility that they were part of a dream God was having. "Say God is in a room and on his table he has some cookies and tea," Fredkin says. "And he's dreaming this whole universe up. Well, we can't reach out and get his cookies. They're not in our universe. See, our universe has bounds. There are some things in it and some things not." The computer is not; hardware is beyond the grasp of its software. Imagine a vast computer program that contained bodies of information as complex as people, motivated by bodies of information as complex as ideas. These "people" would have no way of figuring out what kind of computer they owed their existence to, because everything they said, and everything they did—including formulating metaphysical hypotheses—would depend entirely on the programming rules and the original input. As long as these didn't change, the same metaphysical conclusions would be reached in an old XD-1 as in a Kaypro 2.

This idea—that sentient beings could be constitutionally numb to the texture of reality—has fascinated a number of people, including, lately, computer scientists. One source of the fascination is the fact that any universal computer can simulate another universal computer, and the simulated computer can, because it is universal, do the same thing. So it is possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious of those containers. To anyone who has lived intimately with, and thought deeply about, computers, says Charles Bennett, of IBM's Watson Lab, this notion is very attractive. "And if you're too attracted to it, you're likely to part company with the physicists." Physicists, Bennett says, find heretical the notion that anything physical is impervious to experiment, removed from the reach of science.

Fredkin's belief in the limits of scientific knowledge may sound like evidence of humility, but in the end it permits great ambition; it helps him go after some of the grandest philosophical questions around. For example, there is a paradox that crops up whenever people think about how the universe came to be. On the one hand, it must have had a beginning. After all, things usually do. Besides, the cosmological evidence suggests a beginning: the big bang. Yet science insists that it is impossible for something to come from nothing; the laws of physics forbid the amount of energy and mass in the universe to change. So how could there have been a time when there was no universe, and thus no mass or energy?

Fredkin escapes from this paradox without breaking a sweat. Granted, he says, the laws of our universe don't permit something to come from nothing. But he can imagine laws that would permit such a thing; in fact, he can imagine algorithmic laws that would permit such a thing. The conservation of mass and energy is a consequence of our cellular automaton's rules, not a consequence of all possible rules. Perhaps a different cellular automaton governed the creation of our cellular automaton—just as the rules for loading software are different from the rules running the program once it has been loaded.

What's funny is how hard it is to doubt Fredkin when with such assurance he makes definitive statements about the creation of the universe—or when, for that matter, he looks you in the eye and tells you the universe is a computer. Partly this is because, given the magnitude and intrinsic intractability of the questions he is addressing, his answers aren't all that bad. As ideas about the foundations of physics go, his are not completely out of the ball park; as metaphysical and cosmogonic speculation goes, his isn't beyond the pale.

But there's more to it than that. Fredkin is, in his own odd way, a rhetorician of great skill. He talks softly, even coolly, but with a low-key power, a quiet and relentless confidence, a kind of high-tech fervor. And there is something disarming about his self-awareness. He's not one of these people who say crazy things without having so much as a clue that you're sitting there thinking what crazy things they are. He is acutely conscious of his reputation; he knows that some scientists are reluctant to invite him to conferences for fear that he'll say embarrassing things. But he is not fazed by their doubts. "You know, I'm a reasonably smart person. I'm not the smartest person in the world, but I'm pretty smart—and I know that what I'm involved in makes perfect sense. A lot of people build up what might be called self-delusional systems, where they have this whole system that makes perfect sense to them, but no one else ever understands it or buys it. I don't think that's a major factor here, though others might disagree." It's hard to disagree, when he so forthrightly offers you the chance.

Still, as he gets further from physics, and more deeply into philosophy, he begins to try one's trust. For example, having tackled the question of what sort of process could generate a universe in which spontaneous generation is impossible, he aims immediately for bigger game: Why was the universe created? Why is there something here instead of nothing?

WHEN THIS SUBJECT COMES UP, WE ARE SITTING IN the Fredkins' villa. The living area has pale rock walls, shiny-clean floors made of large white ceramic tiles, and built-in bookcases made of blond wood. There is lots of air—the ceiling slopes up in the middle to at least twenty feet—and the air keeps moving; some walls consist almost entirely of wooden shutters that, when open, let the sea breeze pass as fast as it will. I am glad of this. My skin, after three days on Fredkin's island, is hot, and the air, though heavy, is cool. The sun is going down.

Fredkin, sitting on a white sofa, is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold.
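
A toy contrast (my example, not Fredkin's): a quantity with a closed-form solution can be evaluated at any future step in a single stroke, while for a cellular automaton such as rule 30 no such shortcut is known, and the only general way to learn the state after n steps is to compute every one of the n steps.

    # Analytic shortcut: the sum 1 + 2 + ... + n has a closed form, so the
    # millionth "step" costs a single multiplication.
    n = 1_000_000
    print(n * (n + 1) // 2)

    # Computational grind: for the center cell of rule 30 after n steps no
    # shortcut is known; the only general method is to run every step.
    def rule30_center(steps):
        cells = {0}                          # positions of the "on" cells
        for _ in range(steps):
            cells = {i for i in range(min(cells) - 1, max(cells) + 2)
                     if ((i - 1) in cells) ^ ((i in cells) or ((i + 1) in cells))}
        return 0 in cells

    print(rule30_center(100))                # must iterate all 100 steps to find out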

This indeterminacy is very suggestive. It suggests, first of all, why so many "chaotic" phenomena, like smoke rising from a cigarette, are so difficult to predict using conventional mathematics. (In fact, some scientists have taken to modeling chaotic systems with cellular automata.) To Fredkin, it also suggests that even if human behavior is entirely determined, entirely inevitable, it may be unpredictable; there is room for "pseudo free will" in a completely mechanistic universe. But on this particular evening Fredkin is interested mainly in cosmogony, in the implications of this indeterminacy for the big question: Why does this giant computer of a universe exist?

It's simple, Fredkin explains: "The reason is, there is no way to know the answer to some question any faster than what's going on."

Aware that he may have said something enigmatic, Fredkin elaborates. Suppose, he says, that there is an all-powerful God. "And he's thinking of creating this universe. He's going to spend seven days on the job—this is totally allegorical—or six days on the job. Okay, now, if he's as all-powerful as you might imagine, he can say to himself, 'Wait a minute, why waste the time? I can create the whole thing, or I can just think about it for a minute and just realize what's going to happen so that I don't have to bother.' Now, ordinary physics says, Well, yeah, you got an all-powerful God, he can probably do that. What I can say is—this is very interesting—I can say I don't care how powerful God is; he cannot know the answer to the question any faster than doing it. Now, he can have various ways of doing it, but he has to do every Goddamn single step with every bit or he won't get the right answer. There's no shortcut."

Around sundown on Fredkin's island all kinds of insects start chirping or buzzing or whirring. Meanwhile, the wind chimes hanging just outside the back door are tinkling with methodical randomness. All this music is eerie and vaguely mystical. And so, increasingly, is the conversation. It is one of those moments when the context you've constructed falls apart, and gives way to a new, considerably stranger one. The old context in this case was that Fredkin is an iconoclastic thinker who believes that space and time are discrete, that the laws of the universe are algorithmic, and that the universe works according to the same principles as a computer (he uses this very phrasing in his most circumspect moments). The new context is that Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker estimate pi to nine jillion decimal places.

So, I say, you're arguing that the reason we're here is that some being wanted to theorize about reality, and the only way he could test his theories was to create reality? "No, you see, my explanation is much more abstract. I don't imagine there is a being or anything. I'm just using that to talk to you about it. What I'm saying is that there is no way to know what the future is any faster than running this [the universe] to get to that [the future]. Therefore, what I'm assuming is that there is a question and there is an answer, okay? I don't make any assumptions about who has the question, who wants the answer, anything."

But the more we talk, the closer Fredkin comes to the religious undercurrents he's trying to avoid. "Every astrophysical phenomenon that's going on is always assumed to be just accident," he says. "To me, this is a fairly arrogant position, in that intelligence—and computation, which includes intelligence, in my view—is a much more universal thing than people think. It's hard for me to believe that everything out there is just an accident." This sounds awfully like a position that Pope John Paul II or Billy Graham would take, and Fredkin is at pains to clarify his position: "I guess what I'm saying is—I don't have any religious belief. I don't believe that there is a God. I don't believe in Christianity or Judaism or anything like that, okay? I'm not an atheist, I'm not an agnostic, I'm just in a simple state. I don't know what there is or might be. But what I can say is that it seems likely to me that this particular universe we have is a consequence of something I would call intelligent." Does he mean that there's something out there that wanted to get the answer to a question? "Yeah." Something that set up the universe to see what would happen? "In some way, yes."

VI. The Language Barrier


IN 1974, UPON RETURNING TO MIT FROM CALTECH, Fredkin was primed to revolutionize science. Having done the broad conceptual work (concluding that the universe is a computer), he would enlist the aid of others in taking care of the details—translating the differential equations of physics into algorithms, experimenting with cellular-automaton rules and selecting the most elegant, and, eventually, discovering The Rule, the single law that governs every bit of space and accounts for everything. "He figured that all he needed was some people who knew physics, and that it would all be easy," Margolus says.

One early obstacle was Fredkin's reputation. He says, "I would find a brilliant student; he'd get turned on to this stuff and start to work on it. And then he would come to me and say, 'I'm going to work on something else.' And I would say, 'Why?' And I had a few very honest ones, and they would say, 'Well, I've been talking to my friends about this and they say I'm totally crazy to work on it. It'll ruin my career. I'll be tainted forever.'" Such fears were not entirely unfounded. Fredkin is one of those people who arouse either affection, admiration, and respect, or dislike and suspicion. The latter reaction has come from a number of professors at MIT, particularly those who put a premium on formal credentials, proper academic conduct, and not sounding like a crackpot. Fredkin was never oblivious of the complaints that his work wasn't "worthy of MIT," nor of the movements, periodically afoot, to sever, or at least weaken, his ties to the university. Neither were his graduate students.

Fredkin's critics finally got their way. In the early 1980s, while he was serving briefly as the president of Boston's CBS-TV affiliate, someone noticed that he wasn't spending much time around MIT and pointed to a faculty rule limiting outside professional activities. Fredkin was finding MIT "less and less interesting" anyway, so he agreed to be designated an adjunct professor. As he recalls the deal, he was going to do a moderate amount of teaching and be paid an "appropriate" salary. But he found the actual salary insulting, declined payment, and never got around to teaching. Not surprisingly, he was not reappointed adjunct professor when his term expired, in 1986. Meanwhile, he had so nominally discharged his duties as the head of the information-mechanics group that the title was given to Toffoli.

Fredkin doubts that his ideas will achieve widespread acceptance anytime soon. He believes that most physicists are so deeply immersed in their kind of mathematics, and so uncomprehending of computation, as to be incapable of grasping the truth. Imagine, he says, that a twentieth-century time traveler visited Italy in the early seventeenth century and tried to reformulate Galileo's ideas in terms of calculus. Although it would be a vastly more powerful language of description than the old one, conveying its importance to the average scientist would be nearly impossible. There are times when Fredkin breaks through the language barrier, but they are few and far between. He can sell one person on one idea, another on another, but nobody seems to get the big picture. It's like a painting of a horse in a meadow, he says. "Everyone else only looks at it with a microscope, and they say, 'Aha, over here I see a little brown pigment. And over here I see a little green pigment.' Okay. Well, I see a horse."

Fredkin's research has nevertheless paid off in unanticipated ways. Comparing a computer's workings and the dynamics of physics turned out to be a good way to figure out how to build a very efficient computer—one that harnesses the laws of physics with great economy. Thus Toffoli and Margolus have designed an inexpensive but powerful cellular-automata machine, the CAM 6. The "machine" is actually a circuit board that when inserted in a personal computer permits it to orchestrate visual complexity at a speed that can be matched only by general-purpose computers costing hundreds of thousands of dollars. Since the circuit board costs only around $1,500, this engrossing machine may well entice young scientific revolutionaries into joining the quest for The Rule. Fredkin speaks of this possibility in almost biblical terms: "The big hope is that there will arise somewhere someone who will have some new, brilliant ideas," he says. "And I think this machine will have a dramatic effect on the probability of that happening."

But even if it does happen, it will not ensure Fredkin a place in scientific history. He is not really on record as believing that the universe is a computer. Although some of his tamer insights have been adopted, fleshed out, and published by Toffoli or Margolus, sometimes in collaboration with him, Fredkin himself has published nothing on digital physics. His stated rationale for not publishing has to do with, of all things, lack of ambition. "I'm just not terribly interested," he says. "A lot of people are fantastically motivated by publishing. It's part of a whole thing of getting ahead in the world." Margolus has another explanation: "Writing something down in good form takes a lot of time. And usually by the time he's done with the first or second draft, he has another wonderful idea that he's off on."

These two theories have merit, but so does a third: Fredkin can't write for academic journals. He doesn't know how. His erratic, hybrid education has left him with a mixture of terminology that neither computer scientists nor physicists recognize as their native tongue. Further, he is not schooled in the rules of scientific discourse; he seems just barely aware of the line between scientific hypothesis and philosophical speculation. He is not politic enough to confine his argument to its essence: that time and space are discrete, and that the state of every point in space at any point in time is determined by a single algorithm. In short, the very background that has allowed Fredkin to see the universe as a computer seems to prevent him from sharing his vision. If he could talk like other scientists, he might see only the things that they see.


Robert Wright is the author of
Three Scientists and Their Gods: Looking for Meaning in an Age of Information, The Moral Animal: Evolutionary Psychology and Everyday Life, and Nonzero: The Logic of Human Destiny.
Copyright © 2002 by The Atlantic Monthly Group. All rights reserved.
The Atlantic Monthly; April 1988; Did the Universe Just Happen?; Volume 261, No. 4; page 29.
Killexams : Breaking down CIS's new software supply chain security guidance

Securing the software supply chain continues to be one of the most discussed subjects currently among IT and cybersecurity leaders. A study by In-Q-Tel researchers shows a rapid rise in software supply chain attacks starting around 2016, going from almost none in 2015 to about 1,500 in 2020. The Cloud Native Computing Foundation’s (CNCF’s) catalog of software supply chain attacks also supports a rise in this attack vector.

As software supply chain practices mature, we’ve seen guidance from groups such as the U.S. National Institute of Standards and Technology (NIST), the Open Source Security Foundation (OpenSSF), and now the Center for Internet Security (CIS) with its recently published Software Supply Chain Security Guide. The CIS guide was created in collaboration with Aqua Security, who even made an open-source tool dubbed “chain-bench” to help audit software supply chain stacks for compliance with the guide.

The intent of the CIS Benchmark for Software Supply Chain Guide is to provide a platform-agnostic, high-level set of best practices that can subsequently be used to build platform-specific guidance for platforms such as GitHub or GitLab. The guide consists of five core areas:

  • Source code
  • Build pipelines
  • Dependencies
  • Artifacts
  • Deployment

It also follows the phases of the software supply chain itself from source to deployment and touches on the various potential threat vectors present throughout that process. The image below is reminiscent of another emerging framework, Supply Chain Levels for Software Artifacts (SLSA).

[Figure: the phases of the software supply chain covered by the guide, from source to deployment. Image credit: Center for Internet Security]

Copyright © 2022 IDG Communications, Inc.

Killexams : The Bus That’s Not A Bus: The Joys Of Hacking PCI Express

PCI Express (PCIe) has been around since 2003, and in that time it has managed to become the primary data interconnect for not only expansion cards, but also high-speed external devices. What also makes PCIe interesting is that it replaces the widespread use of parallel buses with serial links. Instead of having a bus with a common medium (traces) to which multiple devices connect, PCIe uses a root complex that directly connects to PCIe end points.

This is similar to how Ethernet originally used a bus configuration with a common backbone (coax cable), before modern Ethernet (starting in the 90s) moved to a point-to-point configuration, with switches dynamically connecting the individual points (devices). PCIe likewise offers the ability to add switches, which allows more than one PCIe end point (a device or part of a device) to share a PCIe link (which is itself made up of one or more 'lanes').

This change from a parallel bus to serial links simplifies the topology a lot compared to ISA or PCI, where communication time had to be shared with the other devices on the bus and only half-duplex operation was possible. The ability to bundle multiple lanes to provide more or less bandwidth to specific ports or devices also means there is no need for a specialized graphics card slot: a graphics card simply uses, e.g., an x16 PCIe slot with 16 lanes. It does however mean we’re using serial links that run at many GHz and must be implemented as differential pairs to protect signal integrity.

This all may seem a bit beyond the means of the average hobbyist, but there are still ways to have fun with PCIe hacking even if they do not involve breadboarding 7400-logic chips and debugging with a 100 MHz budget oscilloscope, like with ISA buses.

High Clocks Demand Differential Pairs

Compared to 32-bit PCI, PCIe version 1.0 increases the maximum transfer rate from 133 MB/s to 250 MB/s per lane. With four lanes (~1,000 MB/s), this is roughly the same as a 64-bit PCI-X connection at 133 MHz (~1,064 MB/s). Here the PCIe lanes are clocked at 2.5 GHz, with differential send/receive pairs within each lane for full-duplex operation.

Today, PCIe 4 is slowly being adopted as more and more systems are upgraded. This version of the standard runs at 16 GHz, and the already-released PCIe version 5 is clocked at 32 GHz. Although this means a lot of bandwidth (>31 GB/s for an x16 PCIe 4 link), it comes at the cost of generating these rapid transitions, keeping these data links full, and keeping the data intact for more than a few millimeters. That requires a few interesting technologies, primarily differential signaling and SerDes.
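As a quick sanity check on those numbers, the usable per-lane bandwidth follows from the transfer rate and the line-code overhead: PCIe 1.x and 2.x use 8b/10b encoding (20% overhead), while 3.0 and later use the much leaner 128b/130b. A back-of-the-envelope sketch:

```python
# Estimate usable PCIe bandwidth per lane (one direction) from the raw
# transfer rate and the line-code efficiency. Gen 1/2 use 8b/10b encoding,
# gen 3 and later use 128b/130b. Rates are in GT/s.
GEN_RATES = {
    1: (2.5, 8 / 10),
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def lane_bandwidth(gen: int) -> float:
    """Usable bandwidth of a single lane in GB/s, one direction."""
    rate_gt, efficiency = GEN_RATES[gen]
    return rate_gt * efficiency / 8  # 8 bits per byte

for gen in GEN_RATES:
    print(f"PCIe {gen}.0: x1 = {lane_bandwidth(gen):.2f} GB/s, "
          f"x16 = {16 * lane_bandwidth(gen):.1f} GB/s")

# PCIe 4.0 x16 comes out to roughly 31.5 GB/s per direction, which matches
# the ">31 GB/s" figure mentioned above.
```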

Basic visualization of how differential signaling works.

Differential signaling is commonly used in many communication protocols, including RS-422, EIA-485, Ethernet (via twisted-pair wiring), DisplayPort, HDMI and USB, as well as on PCBs, where for example the connection between an Ethernet PHY and its magnetics is implemented as differential pairs. The two conductors of a pair carry the same signal, with one side inverted. Both sides have the same impedance and are affected almost identically by (electromagnetic) noise in the environment. Because the receiver looks only at the difference between the two conductors, the noise that is common to both sides cancels out, while the wanted signal remains.
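The noise rejection is easy to demonstrate numerically: inject the same interference into both conductors, take the difference at the receiver, and the common-mode noise drops out while the wanted signal doubles. A toy sketch:

```python
import random

# Toy model of a differential pair: each bit is driven as +v on one conductor
# and -v on the other; interference couples (ideally) equally into both.
def transmit(bits, v=0.5, noise=0.4):
    samples = []
    for bit in bits:
        signal = v if bit else -v
        common_noise = random.uniform(-noise, noise)  # hits both lines alike
        samples.append((signal + common_noise, -signal + common_noise))
    return samples

def receive(samples):
    # The receiver only looks at the difference P - N: the common-mode noise
    # cancels exactly, and the wanted signal appears at twice the amplitude.
    return [1 if (p - n) > 0 else 0 for p, n in samples]

bits = [random.randint(0, 1) for _ in range(16)]
assert receive(transmit(bits)) == bits
print("recovered correctly:", receive(transmit(bits)) == bits)
```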

The move towards lower signal voltages (in the form of LVDS) in these protocols, together with the ever-increasing clock speeds, makes the use of differential pairs essential. Fortunately they are not extremely hard to implement on, say, a custom PCB design. The hard work of ensuring that the traces in a differential pair have the same length is made easier by common EDA tools (including KiCad, Autodesk Eagle, and Altium) that provide functionality for making the routing of differential pairs a semi-automated affair.

Having It Both Ways: SerDes

Schematic diagram of a SerDes link.

A Serializer/Deserializer (SerDes) is a functional block used to convert between serial data and parallel interfaces. Inside an FPGA or communications ASIC, data is usually moved around on a parallel interface; the parallel data is passed into the SerDes block, where it is serialized for transmission, or vice versa. In PCIe, the SerDes sits in the PMA (physical media attachment) layer, part of the protocol’s physical layer. The exact SerDes implementation differs per ASIC vendor, but the basic functionality is generally the same.
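Functionally, the serializer simply shifts a parallel word out one bit at a time at a much higher rate, and the deserializer reassembles it on the far end; the real blocks also handle clock recovery, line coding and equalization. A toy model that ignores all of that:

```python
# Toy serializer/deserializer for an 8-bit parallel word. Real SerDes blocks
# also embed the clock, apply 8b/10b or 128b/130b line coding and run at
# multiple GHz; this only illustrates the parallel <-> serial conversion.

def serialize(word: int, width: int = 8):
    """Shift a parallel word out LSB-first as a stream of bits."""
    return [(word >> i) & 1 for i in range(width)]

def deserialize(bits):
    """Reassemble the serial bit stream back into a parallel word."""
    word = 0
    for position, bit in enumerate(bits):
        word |= bit << position
    return word

assert deserialize(serialize(0xA5)) == 0xA5
print([hex(deserialize(serialize(w))) for w in (0x00, 0x3C, 0xFF)])
```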

When it comes to producing your own PCIe hardware, an easy way to get started is to use an FPGA with SerDes blocks. One still needs to load the FPGA with a design that includes the actual PCIe data link and transaction layers, but these are often available for free, such as with Xilinx FPGAs.

PCIe HDL Cores

Recent Xilinx FPGAs not only integrate SerDes and PCIe end-point features, but Xilinx also provides free-as-in-beer PCIe IP blocks (limited to x8 at PCIe v2.1) for use with these hardware features that (based on the license) can be used commercially. If one wishes for a slightly less proprietary solution, there are Open Source PCIe cores available as well, such as this PCIe Mini project that was tested on a Spartan 6 FPGA on real hardware and provides a PCIe-to-Wishbone bridge, along with its successor project, which targets Kintex Ultrascale+ FPGAs.

On the other side of the fence, the Intel (formerly Altera) IP page seems to strongly hint at giving their salesperson a call for a personalized quote. Similarly, Lattice has their sales people standing by to take your call for their amazing PCIe IP blocks. Here one can definitely see the issue with a protocol like PCIe: unlike ISA or PCI devices, which could be cobbled together with a handful of 74xx logic chips and the occasional microcontroller or CPLD, PCIe requires fairly specialized hardware.

Even if one buys the physical hardware (e.g. FPGA), use of the SerDes hardware blocks with PCIe functionality may still require a purchase or continuous license (e.g. for the toolchain) depending on the chosen solution. At the moment it seems that Xilinx FPGAs are the ‘go-to’ solution here, but this may change in the future.

Also of note here is that the PCIe protocol itself is officially available only to members of PCI-SIG. This complicates an already massive undertaking if one wanted to implement the gargantuan PCIe specification from scratch, and makes it even more admirable that there are Open Source HDL cores at all for PCIe.

Putting it Together

PCI Express x1 edge connector drawing with pin numbers.

The basic board design for a PCIe PCB is highly reminiscent of that of PCI cards. Both use an edge connector with a similar layout. PCIe edge connectors are 1.6 mm thick, use a 1.0 mm pitch (compared to 1.27 mm for PCI), a 1.4 mm spacing between the contact fingers and the same 20° chamfer angle as PCI edge connectors. A connector has at least 36 pins, but can have 164 pins in an x16 slot configuration.

PCIe card edge connector cross section.

An important distinction with PCIe is that the edge connector has no fixed length, unlike ISA, PCI and similar interfaces, whose connector length is defined by the width of the bus. In the case of PCIe there is no bus, so instead we get the 'core' connector pin-out with a single lane (the x1 connector). To this single lane additional 'blocks' can be added, each adding another lane, and the lanes are bonded together so that the bandwidth of all connected lanes can be used by a single device.

In addition to regular PCIe cards, one can also pick from a range of different PCIe devices, such as Mini-PCIe. Whatever form factor one chooses, the basic circuitry does not change.

This raises the interesting question of what kind of speeds your PCIe device will require. On one hand more bandwidth is nice, on the other hand it also requires more SerDes channels, and not all PCIe slots allow for every card to be installed. While any card of any configuration (x1, x4, x8 or x16) will fit and work in an x16 slot (mechanical), smaller slots may not physically allow a larger card to fit. Some connectors have an ‘open-ended’ configuration, where you can fit for example an x16 card into an x1 slot if so inclined. Other connectors can be ‘modded’ to allow such larger cards to fit unless warranty is a concern.

The flexibility of PCIe means that the bandwidth scales along with the number of bonded lanes as well as the PCIe protocol version. This allows for graceful degradation, where if, say, a PCIe 3.0 card is inserted into a slot that is capable of only PCIe 1.0, the card will still be recognized and work. The available bandwidth will be severely reduced, which may be an issue for the card in question. The same is true with available PCIe lanes, bringing to mind the story of cryptocoin miners who split up x16 PCIe slots into 16 x1 slots, so that they could run an equal number of GPUs or specialized cryptocoin mining cards.
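The outcome of that negotiation is easy to model: the link trains at the lowest protocol generation and the smallest lane count that both the card and the slot support, and the bandwidth follows from there. A rough sketch (using the same per-lane, per-direction figures as in the earlier bandwidth example):

```python
# Rough model of PCIe link training: the link runs at the lowest common
# generation and the smallest common lane count of card and slot.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}  # per direction

def negotiated_link(card_gen, card_lanes, slot_gen, slot_lanes):
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return gen, lanes, PER_LANE_GBPS[gen] * lanes

# A PCIe 3.0 x16 graphics card in a PCIe 1.0 x1 slot (the cryptocoin-miner
# riser scenario): it still works, just very slowly.
gen, lanes, bandwidth = negotiated_link(card_gen=3, card_lanes=16,
                                        slot_gen=1, slot_lanes=1)
print(f"link trains as PCIe {gen}.0 x{lanes}, about {bandwidth:.2f} GB/s per direction")
```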

It’s Full of PCIe

This flexibility of PCIe has also led to PCIe lanes being routed out to strange and wonderful new places. Specifications like Intel’s Thunderbolt (now USB 4) include room for multiple lanes of PCIe 3.0, which enables fast external storage solutions as well as external video cards that work as well as internal ones.

Solid-state storage has moved over from the SATA protocol to NVMe, which essentially defines a storage device that is directly attached to the PCIe controller. This change has allowed NVMe storage devices to be installed or even directly integrated on the main logic board.

Clearly PCIe is the thing to look out for these days. We have even seen that System-on-Chips (SoCs), such as those found on Raspberry Pi 4 boards, now come with a single PCIe lane that has already been hacked to expand those boards in ways previously thought inconceivable. As PCIe becomes more pervasive, this seems like a good time to become more acquainted with it.

Source: Maya Posch, Hackaday: https://hackaday.com/2021/02/03/the-bus-thats-not-a-bus-the-joys-of-hacking-pci-express/
Killexams : Apple: Let's All Pay Tribute To Jony Ive
Image: Apple Debuts Latest Products (Justin Sullivan/Getty Images News)

Thesis

My last article on Apple Inc. (NASDAQ:AAPL) was co-produced with Envision Research earlier this month. That article analyzed why AAPL is a good candidate for an inter-generational account. Shortly after that article, I read the news on July 13 that Jony Ive, AAPL's former Chief Design Officer, has officially ended his relationship with AAPL.

I have been thinking it over and preparing this article since then. This new article is different in that it simply wants to pay tribute to Jony Ive. In my mind, Jony Ive, together with Steve Jobs, transitioned (or more precisely, elevated) AAPL from a tech business to the most successful luxury brand (or fashion brand, if you prefer to call it that).

Tech caters to a pretty strong and eternal human need: the need to do things faster. But luxuries cater to an even stronger human need: the need to be different. And in my view, this is the key difference between AAPL and pretty much all the other big tech names. This differentiation is the key to why I feel comfortable investing in AAPL not only for myself but also in our inter-generational account. Tim Cook went a step further and made AAPL the most efficiently run luxury brand as well.

With this, let's dive in and see Ive's legacy, together with Tim Cook's opportunities and challenges ahead, more closely.

Brief recap: Jony Ive and AAPL

Jony Ive's partnership with AAPL started about 30 years ago. Ive either led the design of, or left his touch on, pretty much every AAPL device - both hardware and software - over the past 3 decades, from the Newton (introduced back in 1993) all the way to the more recent AAPL Watch and augmented reality headset.

In his earlier days with AAPL, it was an understatement to say that design was not the focus of the business, and Jony Ive planned to leave. However, everything changed when Steve Jobs came back to take over AAPL in 1997. Both men shared an almost paranoid pursuit of design (and also dyslexia).

The first-generation iMac, released shortly after Jobs' return, serves as a good example, as you can see from the photo below. The translucent shell may not seem the sleekest design today. But compared to the mainstream designs of the time (e.g., the IBM models), it was an immediate hit and sold more than 800k units in the year of its release.

Image: the first-generation iMac (source: Author)

Elevation from mundane hardware to a luxury brand

As mentioned above, tech caters to the eternal human need to do things faster. But luxuries cater to an even stronger human need: the need to be different. Thanks to Ive and Jobs' wicked talent for design and marketing, AAPL is the one tech company (the only one to my knowledge) that successfully completed the elevation from a tech business into a luxury brand. You can see that its margin is comparable to, or even higher than, that of top luxury brands such as LVMH Moët Hennessy - Louis Vuitton (OTCPK:LVMHF), with both boasting EBITDA margins above 30%. In contrast, even the most successful computer makers (or hardware makers in general) typically earn a fraction of that margin. The elevation becomes even more vivid when we descend from the forest level and look more closely at some of AAPL's specific products in the next section.

Chart: EBITDA margins, AAPL vs. LVMH (source: Seeking Alpha)

The pricing power

Here I want to draw your attention to the following chart and use the Mac Pro as an example to showcase AAPL's pricing power and its fashion nature. It is a busy chart with quite a bit of information. But the gist just jumps out: the pricing is outlandish, yet people are still flocking to buy.

Take the then-new Mac Pro 2019, for example. The base configuration cost $5,999, not only far above the price of the most powerful Mac sold before it, but also far above any other top-line laptop even in today's market. Fast-forward to now, and the trend continues. The newly released 2022 Mac Pro starts with a base price of $3,499. And once you pick a larger memory and hard drive configuration, the price tag rises to $6,099 - even before any software or accessories. Yet users are more than willing to pay such premium pricing - a hallmark of a successful luxury brand.

Chart: Mac Pro pricing over the years (source: 512 Pixels)

According to this Dediu report, with a 7% share of the global market, the Mac series captures almost 60% of all the profit in the global laptop market. And Apple keeps gaining market share, with Mac sales growing at twice the rate of other PC brands in Q4 2022.

As a result, in terms of bottom-line margin, AAPL is even more successful than the top luxury brands, as you can see from the following chart. Its net profit margin consistently exceeds 20%, leading LVMH by a whopping ~10 percentage points over the years. And the most successful computer businesses (or hardware businesses in general) such as DELL again only earn a fraction of what AAPL earns.

Chart: net profit margins, AAPL vs. LVMH vs. DELL (source: Seeking Alpha)

Jony Ive's legacy and Tim Cook's challenges ahead

In his book entitled "After Steve," Tripp Mickle predicted that Jony Ive's departure would be enough to wipe out 10 percent of the company's stock price. This, of course, is not what has happened. And in a way, this is an even better demonstration of Ive's legacy. It signals that Ive and Jobs' design-first spirit has been institutionalized at AAPL and has become part of its DNA.

Tim Cook went a step further and made AAPL also the most efficiently run luxury brand (and one of the most efficient tech businesses, too). Thanks to Cook's unmatched operational prowess, AAPL outperforms both tech businesses and luxury brands in most operating metrics, such as inventory turnover, asset turnover, and days of inventory outstanding. As an example, you can see that AAPL's inventory turnover rates are currently about 17x higher than LVMHF's and more than 3x higher than DELL's.
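For readers less familiar with these metrics, inventory turnover is simply the cost of goods sold divided by average inventory, and days of inventory outstanding is 365 divided by that turnover. The numbers below are hypothetical round figures chosen only to show the arithmetic, not any company's actual financials:

```python
# Illustrative only: hypothetical round numbers, not reported financials.
def inventory_turnover(cogs: float, avg_inventory: float) -> float:
    return cogs / avg_inventory

def days_inventory_outstanding(cogs: float, avg_inventory: float) -> float:
    return 365 / inventory_turnover(cogs, avg_inventory)

# A company selling $220B of goods (at cost) while holding $5B of inventory
# on average turns its stock over 44 times a year, i.e. it holds only about
# eight days of inventory.
cogs, avg_inventory = 220e9, 5e9
print(f"turnover: {inventory_turnover(cogs, avg_inventory):.1f}x per year")
print(f"DIO: {days_inventory_outstanding(cogs, avg_inventory):.1f} days")
```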

Looking forward, globalization has suffered a setback in recent years (due to trade wars, pandemics, the Russia/Ukraine war, et al.), and AAPL will be facing a new set of challenges managing its global logistics chains, as detailed next.

Chart: inventory turnover comparison (source: Seeking Alpha)

Final thoughts and risks

I wish Jony Ive the best of luck with his post-Apple days. I will be keenly looking forward to seeing his new designs for his new LoveFrom clients (which include Airbnb and Ferrari).

At the same time, I also wish AAPL the best in the post-Ive era. I see a very clear division of labor for Apple's design after Ive's departure. Its Chief Operating Officer Jeff Williams will continue to oversee the design teams; its industrial design will be led by Evans Hankey and software design by Alan Dye. As aforementioned, I feel Ive's legacy has been institutionalized at AAPL and has become part of its DNA. And I trust the new team to keep coming out with designs that keep surprising and delighting AAPL users.

In terms of other future challenges, Tim Cook is facing a post-globalization challenge, articulated in the following question from Citigroup analyst Jim Suva during the last earnings call (slightly edited, with emphases added by me). On the one hand, as Cook acknowledged, "in this business, you don't want to hold a ton of inventory". On the other hand, the unwinding of globalization requires AAPL to work more strategically with cycle times and hold strategic inventory in the places where it needs a buffer. It will be a new challenge for Cook (or his successor) to find that balance.

When we hopefully someday get past all of these (COVID, power outages, trade wars, shipping challenges), do you start to reconsider the way you do the supply chain albeit just-in-time ordering and outsourcing so much of your chips? Or do you actually consider like holding more buffer inventory internally?

Source: https://seekingalpha.com/article/4524916-apple-lets-all-pay-tribute-to-jony-ive