000-667 braindumps taken recently from test centers

killexams.com provides the most recent and up-to-date 2022 practice questions with actual 000-667 exam questions and answers for new topics. Practice our 000-667 exam questions and VCE to improve your understanding and pass your 000-667 exam with high marks. We guarantee your success in the test center, covering each objective of the test and building your familiarity with the 000-667 exam. Pass without question with the actual questions.

Exam Code: 000-667 Practice test 2022 by Killexams.com team
Architectural Design of SOA Solutions
IBM Architectural Questions and Answers
Answering the top 10 questions about supercloud

As we exited the isolation economy last year, we introduced supercloud as a term to describe something new that was happening in the world of cloud computing.

In this Breaking Analysis, we address the ten most frequently asked questions we get on supercloud:


1. In an industry full of hype and buzzwords, why does anyone need a new term?

2. Aren’t hyperscalers building out superclouds? We’ll try to answer why the term supercloud connotes something different from a hyperscale cloud.

3. We’ll talk about the problems superclouds solve.

4. We’ll further define the critical aspects of a supercloud architecture.

5. We often get asked: Isn’t this just multicloud? Well, we don’t think so and we’ll explain why.

6. In an earlier episode we introduced the notion of superPaaS  – well, isn’t a plain vanilla PaaS already a superPaaS? Again – we don’t think so and we’ll explain why.

7. Who will actually build (and who are the players currently building) superclouds?

8. What workloads and services will run on superclouds?

9. What are some examples of supercloud?

10. Finally, we’ll answer what you can expect next on supercloud from SiliconANGLE and theCUBE.

Why do we need another buzzword?

Late last year, ahead of Amazon Web Services Inc.’s re:Invent conference, we were inspired by a post from Jerry Chen called Castles in the Cloud. In that blog he introduced the idea that there were submarkets emerging in cloud that presented opportunities for investors and entrepreneurs, that the big cloud vendors weren’t going to suck all the value out of the industry. And so we introduced this notion of supercloud to describe what we saw as a value layer emerging above the hyperscalers’ “capex gift.”

It turns out that we weren’t the only ones using the term, as both Cornell and MIT have used the phrase in somewhat similar but different contexts.

The point is something new was happening in the AWS and other ecosystems. It was more than infrastructure as a service and platform as a service and wasn’t just software as a service running in the cloud.

It was a new architecture that integrates infrastructure, unique platform attributes and software to solve new problems that the cloud vendors in our view weren’t addressing by themselves. It seemed to us that the ecosystem was pursuing opportunities across clouds that went beyond conventional implementations of multi-cloud.

In addition, we felt this trend pointed to structural change going on at the industry level that supercloud metaphorically was highlighting.

So that’s the background on why we felt a new catchphrase was warranted. Love it or hate it… it’s memorable.

Industry structures have always mattered in tech

To that last point about structural industry transformation: Andy Rappaport is sometimes credited with identifying the shift from the vertically integrated mainframe era to the horizontally fragmented personal computer- and microprocessor-based era in his Harvard Business Review article from 1991.

In fact, it was actually David Moschella, an International Data Corp. senior vice president at the time, who introduced the concept in 1987, a full four years before Rappaport’s article was published. Moschella, along with IDC’s head of research Will Zachmann, saw that it was clear Intel Corp., Microsoft Corp., Seagate Technology and others would replace the system vendors’ dominance.

In fact, Zachmann accurately predicted in the late 1980s the demise of IBM, well ahead of its epic downfall when the company lost approximately 75% of its value. At an IDC Briefing Session (now called Directions), Moschella put forth a graphic that looked similar to the first two concepts on the chart below.

We don’t have to review the shift from IBM as the epicenter of the industry to Wintel – that’s well-understood.

What isn’t as widely discussed is a structural concept Moschella put out in 2018 in his book “Seeing Digital,” which introduced the idea of the Matrix shown on the righthand side of this chart. Moschella posited that a new digital platform of services was emerging built on top of the internet, hyperscale clouds and other intelligent technologies that would define the next era of computing.

He used the term matrix because the conceptual depiction included horizontal technology rows, like the cloud… but for the first time included connected industry columns. Moschella pointed out that historically, industry verticals had a closed value chain or stack of research and development, production, distribution, etc., and that expertise in that specific vertical was critical to success. But now, because of digital and data, for the first time, companies were able to jump industries and compete using data. Amazon in content, payments and groceries… Apple in payments and content… and so forth. Data was now the unifying enabler and this marked a changing structure of the technology landscape.

Listen to David Moschella explain the Matrix and its implications on a new generation of leadership in tech.

So the term supercloud is meant to imply more than running in hyperscale clouds. Rather, it’s a new type of digital platform comprising a combination of multiple technologies – enabled by cloud scale – with new industry participants from financial services, healthcare, manufacturing, energy, media and virtually all industries. Think of it as kind of an extension of “every company is a software company.”

Basically, thanks to the cloud, every company in every industry now has the opportunity to build their own supercloud. We’ll come back to that.

Aren’t hyperscale clouds superclouds?

Let’s address what’s different about superclouds relative to hyperscale clouds.

This one’s pretty straightforward and obvious. Hyperscale clouds are walled gardens where they want your data in their cloud and they want to keep you there. Sure, every cloud player realizes that not all data will go to their cloud, so they’re meeting customers where their data lives with initiatives such as Amazon Outposts and Azure Arc and Google Anthos. But at the end of the day, the more homogeneous they can make their environments, the better control, security, costs and performance they can deliver. The more complex the environment, the more difficult to deliver on their promises and the less margin left for them to capture.

Will the hyperscalers get more serious about cross cloud services? Maybe, but they have plenty of work to do within their own clouds. And today at least they appear to be providing the tools that will enable others to build superclouds on top of their platforms. That said, we never say never when it comes to companies such as AWS. And for sure we see AWS delivering more integrated digital services such as Amazon Connect to solve problems in a specific domain, call centers in this case.

What problems do superclouds solve?

We’ve all seen the stats from IDC or Gartner or whomever that customers on average use more than one cloud. And we know these clouds operate in disconnected silos for the most part. That’s a problem because each cloud requires different skills. The development environment is different, as is the operating environment, with different APIs and primitives and management tools that are optimized for each respective hyperscale cloud. Their functions and value props don’t extend to their competitors’ clouds. Why would they?

As a result, there’s friction when moving between different clouds. It’s hard to share data, move work, secure and govern data, and enforce organizational policies and edicts across clouds.

Supercloud is an architecture designed to create a single environment that enables management of workloads and data across clouds in an effort to take out complexity, accelerate application development, streamline operations and share data safely irrespective of location.
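To make the abstraction idea concrete, here is a minimal, purely hypothetical sketch in Python of what a single cross-cloud management surface could look like. None of the class or method names below come from any vendor; they simply illustrate provider-specific adapters hidden behind one common interface.

```python
# Hypothetical sketch of a cross-cloud abstraction layer.
# None of these classes correspond to a real vendor API; they only
# illustrate the idea of one management surface spanning several clouds.
from abc import ABC, abstractmethod


class CloudAdapter(ABC):
    """Wraps one hyperscaler's native primitives behind a common interface."""

    @abstractmethod
    def deploy(self, workload: str, region: str) -> str: ...

    @abstractmethod
    def store(self, dataset: str, region: str) -> str: ...


class AWSAdapter(CloudAdapter):
    def deploy(self, workload, region):
        return f"aws:{region}:{workload}"          # would call AWS-native tooling

    def store(self, dataset, region):
        return f"s3://{region}/{dataset}"


class AzureAdapter(CloudAdapter):
    def deploy(self, workload, region):
        return f"azure:{region}:{workload}"        # would call Azure-native tooling

    def store(self, dataset, region):
        return f"blob://{region}/{dataset}"


class Supercloud:
    """One environment for workloads and data, irrespective of location."""

    def __init__(self, adapters: dict):
        self.adapters = adapters

    def deploy(self, workload: str, cloud: str, region: str) -> str:
        return self.adapters[cloud].deploy(workload, region)


if __name__ == "__main__":
    sc = Supercloud({"aws": AWSAdapter(), "azure": AzureAdapter()})
    print(sc.deploy("analytics-job", "aws", "us-east-1"))
    print(sc.deploy("analytics-job", "azure", "westeurope"))
```

The point of the pattern is that workloads and data are addressed through one surface while each adapter deals with the native primitives of its own cloud.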

Pretty straightforward, but nontrivial, which is why we often ask company chief executives and execs if stock buybacks and dividends will yield as much return as building out superclouds that solve really specific problems and create differentiable value for their firms.

What are the critical attributes of a supercloud?

Let’s dig in a bit more to the architectural aspects of supercloud. In other words… what are the salient attributes that define supercloud?

First, a supercloud runs a set of specific services, designed to solve a unique problem. Superclouds offer seamless, consumption-based services across multiple distributed clouds.

Supercloud leverages the underlying cloud-native tooling of a hyperscale cloud but it’s optimized for a specific objective that aligns with the problem it’s solving. For example, it may be optimized for cost or low latency or sharing data or governance or security or higher performance networking. But the point is, the collection of services delivered is focused on unique value that isn’t being delivered by the hyperscalers across clouds.

A supercloud abstracts the underlying and siloed primitives of the native PaaS layer from the hyperscale cloud and using its own specific platform-as-a-service tooling, creates a common experience across clouds for developers and users. In other words, the superPaaS ensures that the developer and user experience is identical, irrespective of which cloud or location is running the workload.

And it does so in an efficient manner, meaning it has the metadata knowledge and management that can optimize for latency, bandwidth, recovery, data sovereignty or whatever unique value the supercloud is delivering for the specific use cases in the domain.
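As an illustration of that metadata-driven efficiency, here is a small hypothetical sketch of a placement decision. The regions, latency figures and sovereignty rules are invented for the example; a real supercloud control plane would draw them from live metadata.

```python
# Hypothetical sketch of metadata-driven placement, the kind of decision a
# supercloud's control plane might make. The regions, latencies and rules
# here are invented for illustration only.
from typing import Optional

REGION_METADATA = {
    "aws:us-east-1":    {"latency_ms": 18, "jurisdiction": "US"},
    "azure:westeurope": {"latency_ms": 42, "jurisdiction": "EU"},
    "gcp:asia-east1":   {"latency_ms": 95, "jurisdiction": "APAC"},
}


def place(workload: str, sovereignty: Optional[str] = None) -> str:
    """Pick a region: honor data-sovereignty constraints first, then latency."""
    candidates = {
        region: meta for region, meta in REGION_METADATA.items()
        if sovereignty is None or meta["jurisdiction"] == sovereignty
    }
    if not candidates:
        raise ValueError(f"no region satisfies sovereignty={sovereignty}")
    best = min(candidates, key=lambda r: candidates[r]["latency_ms"])
    return f"{workload} -> {best}"


print(place("payments-db", sovereignty="EU"))   # lands in the EU region
print(place("batch-analytics"))                 # lowest-latency region overall
```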

A supercloud comprises a superPaaS capability that allows ecosystem partners to add incremental value on top of the supercloud platform to fill gaps, accelerate features and innovate. A superPaaS can use open tooling but applies those development tools to create a unique and specific experience supporting the design objectives of the supercloud.

Supercloud services can be infrastructure-related, application services, data services, security services, users services, etc., designed and packaged to bring unique value to customers… again that the hyperscalers are not delivering across clouds or on-premises.

Finally, these attributes are highly automated where possible. Superclouds take a page from hyperscalers in terms of minimizing human intervention wherever possible, applying automation to the specific problem they’re solving.

Isn’t supercloud just another term for multicloud?

What we’d say to that is: Perhaps, but not really. Call it multicloud 2.0 if you want to invoke a commonly used format. But as Dell’s Chuck Whitten proclaimed, multicloud by design is different than multicloud by default.

What he means is that, to date, multicloud has largely been a symptom of multivendor… or of M&A. And when you look at most so-called multicloud implementations, you see things like an on-prem stack wrapped in a container and hosted on a specific cloud.

Or increasingly a technology vendor has done the work of building a cloud-native version of its stack and running it on a specific cloud… but historically it has been a unique experience within each cloud with no connection between the cloud silos. And certainly not a common developer experience with metadata management across clouds.

Supercloud sets out to build incremental value across clouds and above hyperscale capex that goes beyond cloud compatibility within each cloud. So if you want to call it multicloud 2.0, that’s fine.

We choose to call it supercloud.

Isn’t plain old PaaS already supercloud?

Well, we’d say no. A supercloud and its corresponding superPaaS layer give the freedom to store, process, manage, secure and connect islands of data across a continuum with a common developer experience across clouds.

Importantly, the sets of services are designed to support the supercloud’s objectives – e.g., data sharing or data protection or storage and retrieval or cost optimization or ultra-low latency, etc. In other words, the services offered are specific to that supercloud and will vary by each offering. OpenShift, for example, can be used to construct a superPaaS but in and of itself isn’t a superPaaS. It’s generic.

The point is that a supercloud and its inherent superPaaS will be optimized to solve specific problems such as low latency for distributed databases or fast backup and recovery and ransomware protection — highly specific use cases that the supercloud is designed to solve for.

SaaS as well is a subset of supercloud. Most SaaS platforms either run in their own cloud or have bits and pieces running in public clouds (e.g. analytics). But the cross-cloud services are few and far between or often nonexistent. We believe SaaS vendors must evolve and adopt supercloud to offer distributed solutions across cloud platforms and stretching out to the near and far edge.

Who is building superclouds?

Another question we often get is: Who has a supercloud and who is building a supercloud? Who are the contenders?

Well, most companies that consider themselves cloud players will, we believe, be building superclouds. Above is a common Enterprise Technology Research graphic we like to show with Net Score or spending momentum on the Y axis and Overlap or pervasiveness in the ETR surveys on the X axis. This is from the April survey of well over 1,000 chief information officers and information technology buyers. And we’ve randomly chosen a number of players we think are in the supercloud mix and we’ve included the hyperscalers because they are the enablers.

We’ve added some of those nontraditional industry players we see building superclouds such as Capital One, Goldman Sachs and Walmart, in deference to Moschella’s observation about verticals. This goes back to every company being a software company. And rather than pattern-matching an outdated SaaS model we see a new industry structure emerging where software and data and tools specific to an industry will lead the next wave of innovation via the buildout of intelligent digital platforms.

We’ve talked a lot about Snowflake Inc.’s Data Cloud as an example of supercloud, as well as the momentum of Databricks Inc. (not shown above). VMware Inc. is clearly going after cross-cloud services. Basically every large company we see is either pursuing supercloud initiatives or thinking about it. Dell Technologies Inc., for example, showed Project Alpine at Dell Technologies World – that’s a supercloud in development. Snowflake introducing a new app dev capability based on its SuperPaaS (our term, of course, it doesn’t use the phrase), MongoDB Inc., Couchbase Inc., Nutanix Inc., Veeam Software, CrowdStrike Holdings Inc., Okta Inc. and Zscaler Inc. Even the likes of Cisco Systems Inc. and Hewlett Packard Enterprise Co., in our view, will be building superclouds.

Although ironically, as an aside, Fidelma Russo, HPE’s chief technology officer, said on theCUBE she wasn’t a fan of cloaking mechanisms. But when we spoke to HPE’s head of storage services, Omer Asad, we felt his team is clearly headed in a direction that we would consider supercloud. It could be semantics or it could be that parts of HPE are in a better position to execute on supercloud. Storage is an obvious starting point. The same can be said of Dell.

Listen to Fidelma Russo explain her aversion to building a manager of managers.

And we’re seeing emerging companies like Aviatrix Systems Inc. (network performance), Starburst Data Inc. (self-service analytics for distributed data), Clumio Inc. (data protection – not supercloud today but working on it) and others building versions of superclouds that solve a specific problem for their customers. And we’ve spoken to independent software vendors such as Adobe Systems Inc., Automatic Data Processing LLC and UiPath Inc., which are all looking at new ways to go beyond the SaaS model and add value within cloud ecosystems, in particular building data services that are unique to their value proposition and will run across clouds.

So yeah – pretty much every tech vendor with any size or momentum and new industry players are coming out of hiding and competing… building superclouds. Many that look a lot like Moschella’s matrix with machine intelligence and artificial intelligence and blockchains and virtual reality and gaming… all enabled by the internet and hyperscale clouds.

It’s moving fast and it’s the future, in our opinion, so don’t get too caught up in the past or you’ll be left behind.

What are some examples of superclouds?

We’ve given many in the past, but let’s try to be a bit more specific. Below we cite a few and we’ll answer two questions in one section here: What workloads and services will run in superclouds and what are some examples?

Analytics. Snowflake is the furthest along with its data cloud in our view. It’s a supercloud optimized for data sharing, governance, query performance, security, ecosystem enablement and ultimately monetization. Snowflake is now bringing in new data types and open-source tooling and it ticks the attribute boxes on supercloud we laid out earlier.

Converged databases. Running transaction and analytics workloads. Take a look at what Couchbase is doing with Capella and how it’s enabling stretching the cloud to the edge with Arm-based platforms and optimizing for low latency across clouds and out to the edge.

Document database workloads. Look at MongoDB – a developer-friendly platform that with Atlas is moving to a supercloud model running document databases very efficiently. Accommodating analytic workloads and creating a common developer experience across clouds.

Data science workloads. For example, Databricks is bringing a common experience for data scientists and data engineers driving machine intelligence into applications and fixing the broken data lake with the emergence of the lakehouse.

General-purpose workloads. For example, VMware’s domain. Very clearly there’s a need to create a common operating environment across clouds and on-prem and out to the edge and VMware is hard at work on that — managing and moving workloads, balancing workloads and being able to recover very quickly across clouds.

Network routing. This is the primary focus of Aviatrix, building what we consider a supercloud and optimizing network performance and automating security across clouds.

Industry-specific workloads. For example, Capital One announcing its cost optimization platform for Snowflake – piggybacking on Snowflake’s supercloud. We believe it’s going to test that concept outside its own organization and expand across other clouds as Snowflake grows its business beyond AWS. Walmart Inc. is working with Microsoft to create an on-prem to Azure experience – yes, that counts. We’ve written about what Goldman is doing and you can bet dollars to donuts that Oracle Corp. will be building a supercloud in healthcare with its Cerner acquisition.

Supercloud is everywhere you look. Sorry, naysayers. It’s happening.

What’s next from theCUBE?

With all the industry buzz and debate about the future, John Furrier and the team at SiliconANGLE have decided to host an event on supercloud. We’re motivated and inspired to further the conversation. TheCUBE on Supercloud is coming.

On Aug. 9 out of our Palo Alto studios we’ll be running a live program on the topic. We’ve reached out to a number of industry participants — VMware, Snowflake, Confluent, Sky High Security, Hashicorp, Cloudflare and Red Hat — to get the perspective of technologists building superclouds.

And we’ve invited a number of vertical industry participants in financial services, healthcare and retail that we’re excited to have on along with analysts, thought leaders and investors.

We’ll have more details in the coming weeks, but for now if you’re interested please reach out to us with how you think you can advance the discussion and we’ll see if we can fit you in.

So mark your calendars and stay tuned for more information.

Keep in touch

Thanks to Alex Myerson, who does the production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.

Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.

Email david.vellante@siliconangle.com, DM @dvellante on Twitter and comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai.

Here’s the full video analysis:

All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.

Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.

Image: Rawpixel.com/Adobe Stock


How 'living architecture' could help the world avoid a soul-deadening digital future

My first Apple laptop felt like a piece of magic made just for me—almost a part of myself. The rounded corners, the lively shading, the delightful animations. I had been using Windows my whole life, starting on my family's IBM 386, and I never thought using a computer could be so fun.

Indeed, Apple co-founder Steve Jobs said that computers were like bicycles for the mind, extending your possibilities and helping you do things not only more efficiently but also more beautifully. Some technologies seem to unlock your humanity and make you feel inspired and alive.

But not all technologies are like this. Sometimes devices do not work reliably or as expected. Often you have to change to conform to the limitations of a system, as when you need to speak differently so a digital voice assistant can understand you. And some platforms bring out the worst in people. Think of anonymous flame wars.

As a researcher who studies technology, design and ethics, I believe that a hopeful way forward comes from the world of architecture. It all started decades ago with an architect's observation that newer buildings tended to be lifeless and depressing, even if they were made using ever fancier tools and techniques.

Tech's wear on humanity

The problems with technology are myriad and diffuse, and widely studied and reported: from short attention spans and tech neck to clickbait and AI bias to trolling and shaming to conspiracy theories and misinformation.

As people increasingly live online, these issues may only get worse. Some recent visions of the metaverse, for example, suggest that humans will come to live primarily in virtual spaces. Already, people worldwide spend on average seven hours per day on digital screens—nearly half of waking hours.

While public awareness of these issues is on the rise, it's not clear whether or how tech companies will be able to address them. Is there a way to ensure that future technologies are more like my first Apple laptop and less like a Twitter pile-on?

Over the past 60 years, the architectural theorist Christopher Alexander pursued questions similar to these in his own field. Alexander, who died in March 2022 at age 85, developed a theory of design that has made inroads in architecture. Translated to the technology field, this theory can provide the principles and process for creating technologies that unlock people's humanity rather than suppress it.

Christopher Alexander discussing place, repetition and adaptation.

How good design is defined

Technology design is beginning to mature. Tech companies and product managers have realized that a well-designed user interface is essential for a product's success, not just nice to have.

As professions mature, they tend to organize their knowledge into concepts. Design patterns are a great example of this. A design pattern is a reusable solution to a problem that designers need to solve frequently.

In user experience design, for instance, such problems include helping users enter their shipping information or get back to the home page. Instead of reinventing the wheel every time, designers can apply a design pattern: clicking the logo at the upper left always takes you home. With design patterns, life is easier for designers, and the end products are better for users.
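To make that concrete, here is a tiny, purely illustrative Python sketch of the "logo takes you home" pattern, written once and reused on every screen. Real interfaces are built in web frameworks, but the reuse idea is the same.

```python
# Illustrative only: a tiny Python rendering of the "logo takes you home"
# design pattern. Real UI code would live in a web framework; the point is
# that the pattern is written once and reused on every screen.

def with_home_logo(render_body):
    """Wrap any page renderer so the top-left logo always links home."""
    def render_page(*args, **kwargs):
        header = '<a href="/" class="logo">ACME</a>'   # the reusable pattern
        return header + "\n" + render_body(*args, **kwargs)
    return render_page


@with_home_logo
def checkout_page(cart_total):
    return f"<main>Checkout: total ${cart_total:.2f}</main>"


@with_home_logo
def profile_page(user):
    return f"<main>Profile for {user}</main>"


print(checkout_page(42.50))
print(profile_page("alice"))
```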

Design patterns facilitate good design in one sense: They are efficient and productive. Yet they do not necessarily lead to designs that are good for people. They can be sterile and generic. How, exactly, to avoid that is a major challenge.

A seed of hope lies in the very place where design patterns originated: the work of Christopher Alexander. Alexander dedicated his life to understanding what makes an environment good for humans—good in a deep, moral sense—and how designers might create structures that are likewise good.

His work on design patterns, dating back to the 1960s, was his initial effort at an answer. The patterns he developed with his colleagues included details like how many stories a good building should have and how many light sources a good room should have.

But Alexander found design patterns ultimately unsatisfying. He took that work further, eventually publishing his theory in his four-volume magnum opus, "The Nature of Order."

While Alexander's work on design patterns is very well known—his 1977 book "A Pattern Language" remains a bestseller—his later work, which he deemed much more important, has been largely overlooked. No surprise, then, that his deepest insights have not yet entered technology design. But if they do, good design could come to mean something much richer.

On creating structures that foster life

Architecture was getting worse, not better. That was Christopher Alexander's conclusion in the mid-20th century.

Much modern architecture is inert and makes people feel dead inside. It may be sleek and intellectual—it may even win awards—but it does not help generate a feeling of life within its occupants. What went wrong, and how might architecture correct its course?

Motivated by this question, Alexander conducted numerous experiments throughout his career, going deeper and deeper. Beginning with his design patterns, he discovered that the designs that stirred up the most feeling in people, what he called living structure, shared certain qualities. This wasn't just a hunch, but a testable empirical theory, one that he validated and refined from the late 1970s until the turn of the century. He identified 15 qualities, each with a technical definition and many examples.

The qualities are:

  • Levels of scale
  • Strong centers
  • Boundaries
  • Alternating repetition
  • Positive space
  • Good shape
  • Local symmetries
  • Deep interlock and ambiguity
  • Contrast
  • Gradients
  • Roughness
  • Echoes
  • The void
  • Simplicity and inner calm
  • Not-separateness

As Alexander writes, living structure is not just pleasant and energizing, though it is also those. Living structure reaches into humans at a transcendent level—connecting people with themselves and with one another—with all humans across centuries and cultures and climates.

Yet modern architecture, as Alexander showed, has very few of the qualities that make living structure. In other words, over the 20th century architects taught one another to do it all wrong. Worse, these errors were crystallized in building codes, zoning laws, awards criteria and education. He decided it was time to turn things around.

Alexander's ideas have been hugely influential in architectural theory and criticism. But the world has not yet seen the paradigm shift he was hoping for.

By the mid-1990s, Alexander recognized that for his aims to be achieved, there would need to be many more people on board—and not just architects, but all sorts of planners, infrastructure developers and everyday people. And perhaps other fields besides architecture. The digital revolution was coming to a head.

Alexander's invitation to technology designers

As Alexander doggedly pursued his research, he started to notice the potential for digital technology to be a force for good. More and more, digital technology was becoming part of the human environment—becoming, that is, architectural.

Meanwhile, Alexander's ideas about design patterns had entered the world of technology design as a way to organize and communicate design knowledge. To be sure, this older work of Alexander's proved very valuable, particularly to software engineering.

Because of his fame for design patterns, in 1996 Alexander was invited to provide a keynote address at a major software engineering conference sponsored by the Association for Computing Machinery.

In his talk, Alexander remarked that the tech industry was making great strides in efficiency and power but perhaps had not paused to ask: "What are we supposed to be doing with all these programs? How are they supposed to help the Earth?"

"For now, you're like guns for hire," Alexander said. He invited the audience to make technologies for good, not just for pay.

Loosening the design process

In "The Nature of Order," Alexander defined not only his theory of living structure, but also a process for creating such structure.

In short, this process involves democratic participation and springs from the bottom up in an evolving progression incorporating the 15 qualities of living structure. The end result isn't known ahead of time—it's adapted along the way. The term "organic" comes to mind, and this is appropriate, because nature almost invariably creates living structure.

But typical architecture—and design in many fields—is, in contrast, top-down and strictly defined from the outset. In this machinelike process, rigid precision is prioritized over local adaptability, project roles are siloed apart and the emphasis is on commercial value and investment over anything else. This is a recipe for lifeless structure.

Alexander's work suggests that if living structure is the goal, the design process is the place to focus. And the technology field is starting to show inklings of change.

In project management, for example, the traditional waterfall approach followed a rigid, step-by-step schedule defined upfront. The turn of the century saw the emergence of a more dynamic approach, dubbed agile, which allows for more adaptability through frequent check-ins and prioritization, progressing in "sprints" of one to two weeks rather than longer phases.

And in design, the human-centered design paradigm is likewise gaining steam. Human-centered design emphasizes, among other elements, continually testing and refining small changes with respect to design goals.

A design process that promotes life

However, Alexander would say that both these trajectories are missing some of his deeper insights about living structure. They may spark more purchases and increase stock prices, but these approaches will not necessarily create technologies that are good for each person and good for the world.

Yet there are some emerging efforts toward this deeper end. For example, design pioneer Don Norman, who coined the term "user experience," has been developing his ideas on what he calls humanity-centered design. This goes beyond human-centered design to focus on ecosystems, take a long-term view, incorporate human values and involve stakeholder communities along the way.

The vision of humanity-centered design calls for sweeping changes in the technology field. This is precisely the kind of reorientation that Alexander was calling for in his 1996 keynote speech. Just as design patterns suggested in the first place, the technology field doesn't need to reinvent the wheel. Technologists and people of all stripes can build up from the tremendous, careful work that Alexander has left.



This article is republished from The Conversation under a Creative Commons license. Read the original article.The Conversation


Can IBM Get Back Into HPC With Power10?

The “Cirrus” Power10 processor from IBM, which we codenamed for Big Blue because it refused to do it publicly and because we understand the value of a synonym here at The Next Platform, shipped last September in the “Denali” Power E1080 big iron NUMA machine. And today, the rest of the Power10-based Power Systems product line is being fleshed out with the launch of entry and midrange machines – many of which are suitable for supporting HPC and AI workloads as well as in-memory databases and other workloads in large enterprises.

The question is, will IBM care about traditional HPC simulation and modeling ever again with the same vigor that it has in past decades? And can Power10 help reinvigorate the HPC and AI business at IBM? We are not sure about the answer to the first question, and got the distinct impression from Ken King, the general manager of the Power Systems business, that HPC proper was not a high priority when we spoke to him back in February about this. But we continue to believe that the Power10 platform has some attributes that make it appealing for data analytics and other workloads that need to be either scaled out across small machines or scaled up across big ones.

Today, we are just going to talk about the five entry Power10 machines, which have one or two processor sockets in a standard 2U or 4U form factor, and then we will follow up with an analysis of the Power E1050, which is a four socket machine that fits into a 4U form factor. And the question we wanted to answer was simple: Can a Power10 processor hold its own against X86 server chips from Intel and AMD when it comes to basic CPU-only floating point computing?

This is an important question because there are plenty of workloads that have not been accelerated by GPUs in the HPC arena, and for these workloads, the Power10 architecture could prove to be very interesting if IBM thought outside of the box a little. This is particularly true when considering the feature called memory inception, which is in effect the ability to build a memory area network across clusters of machines and which we have discussed a little in the past.

We went deep into the architecture of the Power10 chip two years ago when it was presented at the Hot Chip conference, and we are not going to go over that ground again here. Suffice it to say that this chip can hold its own against Intel’s current “Ice Lake” Xeon SPs, launched in April 2021, and AMD’s current “Milan” Epyc 7003s, launched in March 2021. And this makes sense because the original plan was to have a Power10 chip in the field with 24 fat cores and 48 skinny ones, using dual-chip modules, using 10 nanometer processes from IBM’s former foundry partner, Globalfoundries, sometime in 2021, three years after the Power9 chip launched in 2018. Globalfoundries did not get the 10 nanometer processes working, and it botched a jump to 7 nanometers and spiked it, and that left IBM jumping to Samsung to be its first server chip partner for its foundry using its 7 nanometer processes. IBM took the opportunity of the Power10 delay to reimplement the Power ISA in a new Power10 core and then added some matrix math overlays to its vector units to make it a good AI inference engine.

IBM also created a beefier core and dropped the core count back to 16 on a die in SMT8 mode, which is an implementation of simultaneous multithreading that has up to eight processing threads per core, and also was thinking about an SMT4 design which would double the core count to 32 per chip. But we have not seen that today, and with IBM not chasing Google and other hyperscalers with Power10, we may never see it. But it was in the roadmaps way back when.

What IBM has done in the entry machines is put two Power10 chips inside of a single socket to increase the core count, but it is looking like the yields on the chips are not as high as IBM might have wanted. When IBM first started talking about the Power10 chip, it said it would have 15 or 30 cores, which was a strange number, and that is because it kept one SMT8 core or two SMT4 cores in reserve as a hedge against bad yields. In the products that IBM is rolling out today, mostly for its existing AIX Unix and IBM i (formerly OS/400) enterprise accounts, the core counts on the dies are much lower, with 4, 8, 10, or 12 of the 16 cores active. The Power10 cores have roughly 70 percent more performance than the Power9 cores in these entry machines, and that is a lot of performance for many enterprise customers – enough to get through a few years of growth on their workloads. IBM is charging a bit more for the Power10 machines compared to the Power9 machines, according to Steve Sibley, vice president of Power product management at IBM, but the bang for the buck is definitely improving across the generations. At the very low end with the Power S1014 machine that is aimed at small and midrange businesses running ERP workloads on the IBM i software stack, that improvement is in the range of 40 percent, give or take, and the price increase is somewhere between 20 percent and 25 percent depending on the configuration.

Pricing is not yet available on any of these entry Power10 machines, which ship on July 22. When we find out more, we will do more analysis of the price/performance.

There are six new entry Power10 machines, the feeds and speeds of which are shown below:

For the HPC crowd, the Power L1022 and the Power L1024 are probably the most interesting ones because they are designed to only run Linux and, if they are like prior L classified machines in the Power8 and Power9 families, will have lower pricing for CPU, memory, and storage, allowing them to better compete against X86 systems running Linux in cluster environments. This will be particularly important as IBM pushes Red Hat OpenShift as a container platform for not only enterprise workloads but also for HPC and data analytic workloads that are also being containerized these days.

One thing to note about these machines: IBM is using its OpenCAPI Memory Interface, which as we explained in the past is using the “Bluelink” I/O interconnect for NUMA links and accelerator attachment as a memory controller. IBM is now calling this the Open Memory Interface, and these systems have twice as many memory channels as a typical X86 server chip and therefore have a lot more aggregate bandwidth coming off the sockets. The OMI memory makes use of a Differential DIMM form factor that employs DDR4 memory running at 3.2 GHz, and it will be no big deal for IBM to swap in DDR5 memory chips into its DDIMMs when they are out and the price is not crazy. IBM is offering memory features with 32 GB, 64 GB, and 128 GB capacities today in these machines and will offer 256 GB DDIMMs on November 14, which is how you get the maximum capacities shown in the table above. The important thing for HPC customers is that IBM is delivering 409 GB/sec of memory bandwidth per socket and 2 TB of memory per socket.
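That 409 GB/sec figure is easy to sanity-check from the stated DDR4-3200 speed if you assume 16 memory channels per socket, which is our inference from "twice as many memory channels as a typical X86 server chip" rather than an IBM-published number:

```python
# Back-of-the-envelope check on the per-socket memory bandwidth figure.
# ASSUMPTION: 16 memory channels per socket, inferred from "twice as many
# memory channels as a typical X86 server chip" (which commonly has 8).

transfer_rate_gtps = 3.2    # DDR4-3200: 3.2 billion transfers per second
bytes_per_transfer = 8      # 64-bit data path per channel
channels_per_socket = 16    # assumed, see note above

bandwidth_gbps = transfer_rate_gtps * bytes_per_transfer * channels_per_socket
print(f"Peak per-socket bandwidth: {bandwidth_gbps:.1f} GB/sec")   # ~409.6 GB/sec
```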

By the way, the only storage in these machines is NVM-Express flash drives. No disk, no plain vanilla flash SSDs. The machines also support a mix of PCI-Express 4.0 and PCI-Express 5.0 slots, and do not yet support the CXL protocol created by Intel and backed by IBM even though it loves its own Bluelink OpenCAPI interconnect for linking memory and accelerators to the Power compute engines.

Here are the different processor SKUs offered in the Power10 entry machines:

As far as we are concerned, the 24-core Power10 DCM feature EPGK processor in the Power L1024 is the only interesting one for HPC work, aside from what a theoretical 32-core Power10 DCM might be able to do. And just for fun, we sat down and figured out the peak theoretical 64-bit floating point performance, at all-core base and all-core turbo clock speeds, for these two Power10 chips and their rivals in the Intel and AMD CPU lineups. Take a gander at this:

We have no idea what the pricing will be for a processor module in these entry Power10 machines, so we took a stab at what the 24-core variant might cost to be competitive with the X86 alternatives based solely on FP64 throughput and then reckoned the performance of what a full-on 32-core Power10 DCM might be.
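For readers who want to redo the arithmetic, peak theoretical FP64 throughput is just cores times clock speed times FP64 operations per cycle per core. The core counts, clocks and per-core throughput figures in this sketch are our own illustrative assumptions (vector width times two for fused multiply-add), not vendor-confirmed specs, so substitute the values you trust:

```python
# Peak theoretical FP64 throughput: cores x clock (GHz) x FP64 ops/cycle/core.
# The flops_per_cycle and clock values are illustrative assumptions (SIMD
# lanes x 2 for FMA), not vendor-confirmed figures -- swap in your own.

def peak_fp64_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    return cores * clock_ghz * flops_per_cycle

examples = [
    # (label,                          cores, clock GHz, assumed FP64 flops/cycle/core)
    ("24-core Power10 DCM (assumed)",     24, 3.40, 16),
    ("32-core Power10 DCM (assumed)",     32, 3.40, 16),
    ("40-core Ice Lake Xeon (assumed)",   40, 2.60, 32),   # 2x AVX-512 FMA units
    ("64-core Milan Epyc (assumed)",      64, 2.45, 16),   # 2x 256-bit FMA units
]

for label, cores, ghz, fpc in examples:
    tflops = peak_fp64_gflops(cores, ghz, fpc) / 1000
    print(f"{label:36s} ~{tflops:5.2f} TFLOPS peak FP64")
```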

The answer is that IBM can absolutely compete, flops to flops, with the best Intel and AMD have right now. And it has a very good matrix math engine as well, which these chips do not.

The problem is, Intel has “Sapphire Rapids” Xeon SPs in the works, which we think will have four 18-core chiplets for a total of 72 cores, but only 56 of them will be exposed because of yield issues that Intel has with its SuperFIN 10 nanometer (Intel 7) process. And AMD has 96-core “Genoa” Epyc 7004s in the works, too. Power11 is several years away, so if IBM wants to play in HPC, Samsung has to get the yields up on the Power10 chips so IBM can sell more cores in a box. Big Blue already has the memory capacity and memory bandwidth advantage. We will see if its L-class Power10 systems can compete on price and performance once we find out more. And we will also explore how memory clustering might make for a very interesting compute platform based on a mix of fat NUMA and memory-less skinny nodes. We have some ideas about how this might play out.

THE TELEVISION PROGRAM TRANSCRIPTS: PART II

The story so far.... In 1975, Ed Roberts invented the Altair personal computer. It was a pain to use until 19 year-old pre-billionaire Bill Gates wrote the first personal computer language. Still, the public didn't care. Then two young hackers -- Steve Jobs and Steve Wozniak -- built the Apple computer to impress their friends. We were all impressed and Apple was a stunning success. By 1980, the PC market was worth a billion dollars. Now, view on.....

Christine Comaford
We are nerds.

Vern Raburn
Most of the people in the industry were young because the guys who had any real experience were too smart to get involved in all these crazy little machines.

Gordon Eubanks
It really wasn't that we were going to build billion dollar businesses. We were having a good time.

Vern Raburn
I thought this was the most fun you could possibly have with your clothes on.

When the personal computer was invented twenty years it was just that - an invention - it wasn't a business. These were hobbyists who built these machines and wrote this software to have fun but that has really changed and now this is a business this is a big business. It just goes to show you that people can be bought. How the personal computer industry grew from zero to 100 million units is an amazing story. And it wasn't just those early funky companies of nerds and hackers, like Apple, that made it happen. It took the intervention of a company that was trusted by the corporate world. Big business wasn't interested in the personal computer. In the boardrooms of corporate America a computer still meant something the size of a room that cost at least a hundred thousand dollars. Executives would brag that my mainframe is bigger than your mainframe. The idea of a $2,000 computer that sat on your desk in a plastic box was laughable that is until that plastic box had three letters stamped on it - IBM. IBM was, and is, an American business phenomenon. Over 60 years, Tom Watson and his son, Tom Jr., built what their workers called Big Blue into the top computer company in the world. But IBM made mainframe computers for large companies, not personal computers -- at least not yet. For the PC to be taken seriously by big business, the nerds of Silicon Valley had to meet the suits of corporate America. IBM never fired anyone, requiring only that undying loyalty to the company and a strict dress code. IBM hired conservative hard-workers straight from school. Few IBM'ers were at the summer of love. Their turn-ons were giant mainframes and corporate responsibility. They worked nine to five and on Saturdays washed the car. This is intergalactic HQ for IBM - the largest computer company in the world...but in many ways IBM is really more a country than it is a company. It has hundreds of thousands of citizens, it has a bureaucracy, it has an entire culture everything in fact but an army. OK Sam we're ready to visit IBM country, obviously we're dressed for the part. Now when you were in sales training in 1959 for IBM did you sing company songs?

Sam Albert
Former IBM Executive
Absolutely.

BOB: Well just to get us in the mood let's sing one right here.
SAM: You're kidding.
BOB: I have the IBM - the songs of the IBM and we're going to try for number 74, our IBM salesmen sung to the tune of Jingle Bells.

Bob & Sam singing
'IBM, happy men, smiling all the way, oh what fun it is to sell our products night and day. IBM Watson men, partners of TJ. In his service to mankind - that's why we are so gay.'

Sam Albert
Now gay didn't mean what it means today then remember that OK?
BOB: Right ok let's go.
SAM: I guess that was OK.
BOB: Perfect.

Sam Albert
When I started at IBM there was a dress code, that was an informal oral code of white shirts. You couldn't wear anything but a white shirt, generally with a starched collar. I remember attending my first class, and a gentleman said to me as we were entering the building, are you an IBMer, and I said yes. He had a three piece suit on, vests were of the vogue, and he said could you just lift your pants leg please. I said what, and before I knew it he had lifted my pants leg and he said you're not wearing any garters. I said what?! He said your socks, they're not pulled tight to the top, you need garters. And sure enough I had to go get garters.

IBM is like Switzerland -- conservative, a little dull, yet prosperous. It has committees to verify each decision. The safety net is so big that it is hard to make a bad decision - or any decision at all. Rich Seidner, computer programmer and wannabe Paul Simon, spent twenty-five years marching in lockstep at IBM. He feels better now.

Rich Seidner
Former IBM Programmer
I mean it's like getting four hundred thousand people to agree what they want to have for lunch. You know, I mean it's just not going to happen - it's going to be lowest common denominator you know, it's going to be you know hot dogs and beans. So ahm so what are you going to do? So IBM had created this process and it absolutely made sure that quality would be preserved throughout the process, that you actually were doing what you set out to do and what you thought the customer wanted. At one point somebody kind of looked at the process to see well, you know, what's it doing and what's the overhead built into it, what they found is that it would take at least nine months to ship an empty box.

By the late seventies, even IBM had begun to notice the explosive growth of personal computer companies like Apple.

Commercial
The Apple 2 - small inexpensive and simple to use the first computer.....

What's more, it was a computer business they didn't control. In 1980, IBM decided they wanted a piece of this action.

Jack Sams
Former IBM Executive
There were suddenly tens of thousands of people buying machines of that class and they loved them. They were very happy with them and they were showing up in the engineering departments of our clients as machines that were brought in because you can't do the job on your mainframe kind of thing.

Commercial
JB wanted to know why I'm doing better than all the other managers...it's no secret...I have an Apple - sure there's a big computer three flights down but it won't test my options, do my charts or edit my reports like my Apple.

Jack Sams
The people who had gotten it were religious fanatics about them. So the concern was we were losing the hearts and minds and give me a machine to win back the hearts and minds.

In business, as in comedy, timing is everything, and time looked like it might be running out for an IBM PC. I'm visiting an IBMer who took up the challenge. In August 1979, as IBM's top management met to discuss their PC crisis, Bill Lowe ran a small lab in Boca Raton Florida.

Bill Lowe
Hello Bob nice to see you.
BOB: Nice to see you again. I tried to match the IBM dress code how did I do?
BILL: That's terrific, that's terrific.

He knew the company was in a quandary. Wait another year and the PC industry would be too big even for IBM to take on. Chairman Frank Carey turned to the department heads and said HELP!!!

Bill Lowe
Head, IBM PC Development Team 1980
He kind of said well, what should we do, and I said well, we think we know what we would like to do if we were going to proceed with our own product and he said no, he said at IBM it would take four years and three hundred people to do anything, I mean it's just a fact of life. And I said no sir, we can give you a product in a year. And he abruptly ended the meeting, he said you're on Lowe, come back in two weeks and tell me what you need.

An IBM product in a year! Ridiculous! Down in the basement Bill still has the plan. To save time, instead of building a computer from scratch, they would buy components off the shelf and assemble them -- what in IBM speak was called 'open architecture.' IBM never did this. Two weeks later Bill proposed his heresy to the Chairman.

Bill Lowe
And frankly this is it. The key decisions were to go with an open architecture, non IBM technology, non IBM software, non IBM sales and non IBM service. And we probably spent a full half of the presentation carrying the corporate management committee into this concept. Because this was a new concept for IBM at that point.
BOB: Was it a hard sell?
BILL: Mr. Carey bought it. And as result of him buying it, we got through it.

With the backing of the chairman, Bill and his team then set out to break all the IBM rules and go for a record.

Bill Lowe
We'll put it in the IBM section.

Once IBM decided to do a personal computer and to do it in a year - they couldn't really design anything, they just had to slap it together, so that's what we'll do. You have a central processing unit and eh let's see you need a monitor or display and a keyboard. OK a PC, except it's not, there's something missing. Time for the Cringely crash course in elementary computing. A PC is a boxful of electronic switches, a piece of hardware. It's useless until you tell it what to do. It requires a program of instructions...that's software. Every PC requires at least two essential bits of software in order to work at all. First it requires a computer language. That's what you type in to provide instructions to the computer. To tell it what to do. Remember it was a computer language called BASIC that Paul Allen and Bill Gates adapted to the Altair...the first PC. The other bit of software that's required is called an operating system and that's the internal traffic cop that tells the computer itself how the keyboard is connected to the screen or how to store files on a floppy disk instead of just losing them when you turn off the PC at the end of the day. Operating systems tend to have boring unfriendly names like UNIX and CPM and MS-DOS but though they may be boring it's an operating system that made Bill Gates the richest man in the world. And the story of how that came about is, well, pretty interesting. So the contest begins. Who would IBM buy their software from? Let's meet the two contenders -- the late Gary Kildall, then aged 39, a computer Ph.D., and a 24 year old Harvard drop-out - Bill Gates. By the time IBM came calling in 1980, Bill Gates and his small company Microsoft was the biggest provider of computer languages in the fledgling PC industry.

Commercial
'Many different computer manufacturers are making the CPM Operating System standard on most models.'

For their operating system, though, the logical guy for the IBMers to see was Gary Kildall. He ran a company modestly called Intergalactic Digital Research. Gary had invented the PC's first operating system called CP/M. He had already sold 600,000 of them, so he was the big cheese of operating systems.

Gary Kildall
Founder Digital Research
Speaking in 1983
In the early 70s I had a need for an operating system myself and eh it was a very natural thing to write and it turns out other people had a need for an operating system like that and so eh it was a very natural thing I wrote it for my own use and then started selling it.

Gordon Eubanks
In Gary's mind it was the dominant thing and it would always be dominant. Of course Bill did languages and Gary did operating systems, and he really honestly believed that would never change.

But what would change the balance of power in this young industry was the characters of the two protagonists.

Jim Warren
Founder West Coast Computer Faire 1978
So I knew Gary back when he was an assistant professor at Monterrey Post Grad School and I was simply a grad student. And went down, sat in his hot tub, smoked dope with him and thoroughly enjoyed it all, and commiserated and talked nerd stuff. He liked playing with gadgets, just like Woz did and does, just like I did and do.

Gordon Eubanks
He wasn't really interested in how you drive the business, he worked on projects, things that interested him.

Jim Warren
He didn't go rushing off to the patent office and patent CPM and patent every line of code he could, he didn't try to just squeeze the last dollar out of it.

Gordon Eubanks
Gary was not a fighter, Gary avoided conflict, Gary hated conflict. Bill I don't think anyone could say backed away from conflict.

Nobody said future billionaires have to be nice guys. Here, at the Microsoft Museum, is a shrine to Bill's legacy. Bill Gates hardly fought his way up from the gutter. Raised in a prosperous Seattle household, his mother a homemaker who did charity work, his father was a successful lawyer. But beneath the affluence and comfort of a perfect American family, a competitive spirit ran deep.

Vern Raburn
President, The Paul Allen Group
I ended up spending Memorial Day Weekend with him out at his grandmother's house on Hood Canal. She turned everything in to a game. It was a very very very competitive environment, and if you spent the weekend there, you were part of the competition, and it didn't matter whether it was hearts or pickleball or swimming to the dock. And you know and there was always a reward for winning and there was always a penalty for losing.

Christine Comaford
CEO Corporate Computing Intl.
One time, it was funny. I went to Bill's house and he really wanted to show me his jigsaw puzzle that he was working on, and he really wanted to talk about how he did this jigsaw puzzle in like four minutes, and like on the box it said, if you're a genius you will do the jigsaw puzzle in like seven. And he was into it. He was like I can do it. And I said don't, you know, I believe you. You don't need to break it up and do it for me. You know.

Bill Gates can be so focused that the small things in life get overlooked.

Jean Richardson
Former VP, Corporate Comms, Microsoft
If he was busy he didn't bathe, he didn't change clothes. We were in New York and the demo that we had crashed the evening before the announcement, and Bill worked all night with some other engineers to fix it. Well it didn't occur to him to take ten minutes for a shower after that, it just didn't occur to him that that was important, and he badly needed a shower that day.

The scene is set in California...laid back Gary Kildall already making the best selling PC operating system CPM. In Seattle Bill Gates maker of BASIC the best selling PC language but always prepared to seize an opportunity. So IBM had to choose one of these guys to write the operating system for its new personal computer. One would hit the jackpot the other would be forgotten...a footnote in the history of the personal computer and it all starts with a telephone call to an eighth floor office in that building the headquarters of Microsoft in 1980.

Jack Sams
At about noon I guess I called Bill Gates on Monday and said I would like to come out and talk with him about his products.

Steve Ballmer
Vice-President Microsoft
Bill said well, how's next week, and they said we're on an airplane, we're leaving in an hour, we'd like to be there tomorrow. Well, hallelujah. Right oh.

Steve Ballmer was a Harvard roommate of Gates. He'd just joined Microsoft and would end up its third billionaire. Back then he was the only guy in the company with business training. Both Ballmer and Gates instantly saw the importance of the IBM visit.

Bill Gates
You know IBM was the dominant force in computing. A lot of these computer fairs discussions would get around to, you know, I.. most people thought the big computer companies wouldn't recognise the small computers, and it might be their downfall. But now to have one of the big computer companies coming in and saying at least the - the people who were visiting with us that they were going to invest in it, that - that was er, amazing.

Steve Ballmer
And Bill said Steve, you'd better come to the meeting, you're the only other guy here who can wear a suit. So we figure the two of us will put on suits, we'll put on suits and we'll go to this meeting.

Jack Sams
We got there at roughly two o'clock and we were waiting in the front, and this young fella came out to take us back to Mr. Gates office. I thought he was the office boy, and of course it was Bill. He was quite decisive, we popped out the non-disclosure agreement - the letter that said he wouldn't tell anybody we were there and that we wouldn't hear any secrets and so forth. He signed it immediately.

Bill Gates
IBM didn't make it easy. You had to sign all these funny agreements that sort of said I...IBM could do whatever they wanted, whenever they wanted, and use your secrets however they - they felt. But so it took a little bit of faith.

Jack Sams was looking for a package from Microsoft containing both the BASIC computer language and an Operating System. But IBM hadn't done their homework.

Steve Ballmer
They thought we had an operating system. Because we had this Soft Card product that had CPM on it, they thought we could licence them CPM for this new personal computer they told us they wanted to do, and we said well, no, we're not in that business.

Jack Sams
When we discovered we didn't have - he didn't have the rights to do that and that it was not...he said but I think it's ready, I think that Gary's got it ready to go. So I said well, there's no time like the present, call up Gary.

Steve Ballmer
And so Bill right there with them in the room called Gary Kildall at Digital Research and said Gary, I'm sending some guys down. They're going to be on the phone. Treat them right, they're important guys.

The men from IBM came to this Victorian house in Pacific Grove, California, headquarters of Digital Research, headed by Gary and Dorothy Kildall. Just imagine what it's like having IBM come to visit - it's like having the Queen drop by for tea, it's like having the Pope come by looking for advice, it's like a visit from God himself. And what did Gary and Dorothy do? They sent them away.

Jack Sams
Gary had some other plans and so he said well, Dorothy will see you. So we went down the three of us...
Gordon Eubanks
Former Head of Language Division, Digital Research
IBM showed up with an IBM non-disclosure and Dorothy made what I...a decision which I think it's easy in retrospect to say was dumb.

Jack Sams
We popped out our letter that said please don't tell anybody we're here, and we don't want to hear anything confidential. And she read it and said and I can't sign this.

Gordon Eubanks
She did what her job was, she got the lawyer to look at the nondisclosure. The lawyer, Gerry Davis, who's still in Monterey, threw up on this non-disclosure. It was uncomfortable for IBM, they weren't used to waiting. And it was an unfortunate situation - here you are in a tiny Victorian house, it's overrun with people, chaotic.

Jack Sams
So we spent the whole day in Pacific Grove debating with them and with our attorneys and her attorneys and everybody else about whether or not she could even talk to us about talking to us, and we left.

This is the moment Digital Research dropped the ball. IBM, distinctly unimpressed with their reception, went back to Microsoft.

BOB: It seems to me that Digital Research really screwed up.
STEVE BALLMER: I think so - I think that's spot on. They made a big mistake. We referred IBM to them and they failed to execute.

Bill Gates isn't the man to give a rival a second chance. He saw the opportunity of a lifetime.

Bill Gates
Digital Research didn't seize that, and we knew it was essential, if somebody didn't do it, the project was going to fall apart.

Steve Ballmer
We just got carried away and said look, we can't afford to lose the language business. That was the initial thought - we can't afford to have IBM not go forward. This is the most exciting thing that's going to happen in PCs.

Bill Gates
And we were already out on a limb, because we had licensed them not only Basic, but Fortran, Cobol, Assembler er, Typing Tutor and Adventure. And basically every - every product the company had we had committed to do for IBM in a very short time frame.

But there was a problem. IBM needed an operating system fast and Microsoft didn't have one. What they had was a stroke of luck - the ingredient everyone needs to be a billionaire. Unbelievably, the solution was just across town. Paul Allen, Gates's programming partner since high school, had found another operating system.

Paul Allen
There's a local company here in Seattle called Seattle Computer Products, by a guy named Tim Patterson, and he had done an operating system, a very rudimentary operating system that was kind of like CPM.

Steve Ballmer
And we just told IBM look, we'll go and get this operating system from this small local company, we'll take care of it, we'll fix it up, and you can still do a PC.

Tim Patterson's operating system, which saved the deal with IBM, was, well, adapted from Gary Kildall's CPM.

Tim Patterson
Programmer
So I took a CPM manual that I'd gotten from the Retail Computer Store for five dollars in 1976 or something, and used that as the basis for what would be the application program interface, the API for my operating system. And so using these ideas that came from different places I started in April and it was about half time for four months before I had my first working version.

This is it, the operating system Tim Patterson wrote. He called it QDOS, the quick and dirty operating system. Microsoft and IBM called it PC DOS 1.0 and under any name it looks an awful lot like CPM. On this computer here I have running PC DOS and CPM 86 and frankly it's very hard to tell the difference between the two. The command structures are the same, so are the directories, in fact the only obvious external difference is the floppy drive is labelled A in PC DOS and C in CPM. Some difference and yet one generated billions in revenue and the other disappeared. As usual in the PC business the prize didn't go to the inventor but to the exploiter of the invention. In this case that wasn't Gary Kildall, it wasn't even Tim Patterson.

There was still one problem. Tim Patterson worked for Seattle Computer Products, or SCP. They still owned the rights to QDOS - rights that Microsoft had to have.

Vern Raburn
Former Vice-President Microsoft
But then we went back and said to them look, you know, we want to buy this thing, and SCP was like most little companies, you know. They always needed cash and so that was when they went in to the negotiation.

Paul Allen
And so ended up working out a deal to buy the operating system from him for whatever usage we wanted for fifty thousand dollars.

Hey, let's pause there. To savour an historic moment.

Paul Allen
For whatever usage we wanted for fifty thousand dollars.

It had to be the deal of the century, if not the millennium. It was certainly the deal that made Bill Gates and Paul Allen multi-billionaires and allowed Paul Allen to buy toys like these, his own NBA basketball team and arena. Microsoft bought outright for fifty thousand dollars the operating system they needed and they turned around and licensed it to the world for up to fifty dollars per PC. Think of it - one hundred million personal computers running MS DOS software funnelling billions into Microsoft - a company that back then was fifty kids managed by a twenty-five year old who needed to wash his hair. Nice work if you can get it and Microsoft got it. There are no two places further apart in the USA than south-eastern Florida and Washington State, where Microsoft is based. This - this is Florida, Boca Raton and this building right here is where the IBM PC was developed. Here the nerds from Seattle joined forces with the suits of corporate America and in that first honeymoon year they pulled off a fantastic achievement.

Dan Bricklin
After we got a package in the mail from the people down in Florida...

As August 1981 approached, the deadline for the launch of the IBM Acorn, the PC industry held its breath.

Dan Bricklin
Supposedly, maybe at this very moment eh, IBM is announcing the personal computer. We don't know that yet.

Software writers like Dan Bricklin, the creator of the first spreadsheet VisiCalc waited by the phones for news of the announcement. This is a moment of PC history. IBM secrecy had codenamed the PC 'The Floridian Project.' Everyone in the PC business knew IBM would change their world forever. They also knew that if their software was on the IBM PC, they would make fortunes.

Dan Bricklin
Please note that the attached information is not to be disclosed prior to any public announcement. (It's on the ticker) It's on the ticker OK so now you can tell people.

What we're watching are the first few seconds of a $100 billion industry.

Promo
After years of thinking big, today IBM came up with something small. Big Blue is looking for a slice of Apple's market share. Bits and bytes mean nothing... try this one. Now they're going to sell $1,000 computers to millions of customers. 'I have seen the future,' said one analyst, 'and it computes.'

Commercial
Today an IBM computer has reached a personal......

Nobody was ever fired for buying IBM. Now companies could put PCs with the name they trusted on desks from Wisconsin to Wall Street.

Bob Metcalfe
Founder 3COM
When the IBM PC came and the PC became a serious business tool, a lot of them, the first of them went into those buildings over there and that was the real ehm when the PC industry started taking off, it happened there too.

Commercial
Can learn to use it with ease...

Sparky Sparks
Former IBM Executive
What IBM said was it's okay corporate America for you to now start buying and using PCs. And if it's okay for corporate America, it's got to be okay for everybody.

For all the hype, the IBM PC wasn't much better than what came before. So while the IBM name could create immense demand, it took a killer application to sustain it. The killer app for the IBM PC was yet another spreadsheet. Based on VisiCalc, but called Lotus 1-2-3, it made its creators the first of many to get rich on IBM's success. Within a year Lotus was worth $150 million. Wham! Bam! Thank you IBM!

Commercial
Time to rock time for code...

IBM had forecast sales of half a million computers by 1984. In those 3 years, they sold 2 million.

Jack Sams
Euphoric I guess is the right word. Everybody was believed that they were not going to... At that point two million or three million, you know, they were now thinking in terms of a hundred million and they were probably off the scale in the other direction.

What did all this mean to Bill Gates, whose operating system, DOS, was at the heart of every IBM PC sold? Initially, not much, because of the deal with IBM. But it did give him a vital bridgehead to other players in the PC marketplace, which meant trouble in the long run for Big Blue.

Bill Gates
The key to our...the structure of our deal was that IBM had no control over...over our licensing to other people. A lesson of the computer industry in mainframes was that er, over time, people built compatible machines or clones, whatever term you want to use, and so really, the primary upside on the deal we had with IBM, because they had a fixed fee er, we got about $80,000 - we got some other money for some special work we did er, but no royalty from them. And that's the DOS and Basic as well. And so we were hoping a lot of other people would come along and do compatible machines. We were expecting that that would happen because we knew Intel wanted to sell the chip to a lot more than just IBM and so it was great when people did start showing up and ehm having an interest in the licence.

IBM now had fifty per cent market share and was defining what a PC meant. There were other PCs that were sorta like the IBM PC, kinda like it. But what the public wanted was IBM PCs. So to be successful other manufacturers would have to build computers exactly like the IBM. They wanted to copy the IBM PC, to clone it. How could they do that legally, well welcome to the world of reverse engineering. This is what reverse engineering can get you if you do it right. It's the modest Aspen, Colorado ski shack of Rod Canion, one of the founders of Compaq, the company set up to compete head-on with the IBM PC. Back in 1982, Rod and three fellow engineers from Texas Instruments sketched out a computer design on a place mat at the House of Pies restaurant in Houston, Texas. They decided to manufacture and market a portable version of the IBM PC using the curious technique of reverse engineering.

Rod Canion
Co-founder Compaq
Reverse engineering is figuring out after something has already been created how it ticks, what makes it work, usually for the purpose of creating something that works the same way or at least does something like the thing you're trying to reverse engineer.

Here's how you clone a PC. IBM had made it easy to copy. The microprocessor was available off the shelf from Intel and the other parts came from many sources. Only one part was IBM's alone, a vital chip that connected the hardware with the software. Called the ROM-BIOS, this was IBM's own design, protected by copyright and Big Blue's army of lawyers. Compaq had to somehow copy the chip without breaking the law.

Rod Canion
First you have to decide how the ROM works, so what we had to do was have an engineer sit down with that code and through trial and error write a specification that said here's how the BIOS ROM needs to work. It couldn't be close it had to be exact so there was a lot of detailed testing that went on.

You test how that all-important chip behaves, and make a list of what it has to do - now it's time to meet my lawyer, Claude.

Claude Stern
Silicon Valley Attorney
BOB: I've examined the internals of the ROM BIOS and written this book of specifications now I need some help because I've done as much as I can do, and you need to explain what's next.
CLAUDE: Well, the first thing I'm going to do is I'm going to go through the book of specifications myself, but the first thing I can tell you Robert is that you're out of it now. You are contaminated, you are dirty. You've seen the product that's the original work of authorship, you've seen the target product, so now from here on in we're going to be working with people who are not dirty. We're going to be working with so called virgins, who are going to be operating in the clean room.
BOB: I certainly don't qualify there.
CLAUDE: I imagine you don't. So what we're going to do is this. We're going to hire a group of engineers who have never seen the IBM ROM BIOS. They have never seen it, they have never operated it, they know nothing about it.

Claude interrogates Mark
CLAUDE: Have you ever before attempted to disassemble decompile or to in any way shape or form reverse engineer any IBM equipment?
MARK: Oh no.
CLAUDE: And have you ever tried to disassemble....

This is the Silicon Valley virginity test. And good virgins are hard to find.

CLAUDE: You understand that in the event that we discover that the information you are providing us is inaccurate you are subject to discipline by the company and that can include but not limited to termination immediately do you understand that?
MARK: Yes I do.
CLAUDE: OK.

After the virgins are deemed intact, they are forbidden contact with the outside world while they build a new chip -- one that behaves exactly like the one in the specifications. In Compaq's case, it took 15 senior programmers several months and cost $1 million to do the reverse engineering. In November 1982, Rod Canion unveiled the result.

Bill Murto
What I've brought today is a Compaq portable computer.

When Bill Murto, another Compaq founder, got a plug on a cable TV show, their selling point was clear: 100 percent IBM compatibility.

Bill Murto
It turns out that all major popular software runs on the IBM personal computer or the Compaq portable computer.
Q: That extends through all software written for IBM?
A: Eh Yes.
Q: It all works on the Compaq?

The Compaq was an instant hit. In their first year, on the strength of being exactly like IBM but a little cheaper, they sold 47,000 PCs.

Rod Canion
In our first year of sales we set an American business record. I guess maybe a world business record. Largest first year sales in history. It was a hundred and eleven million dollars.

So Rod Canion ends up in Aspen, famous for having the most expensive real estate in America and I try not to look envious while Rod tells me which executive jet he plans to buy next.
ROD: And finally I picked the Lear 31.
BOB: Oh really?
ROD: Now that was a fun airplane.
BOB: Oh yeh.

Poor Big Blue! Suddenly everybody was cashing in on IBM's success. The most obvious winner at first was Intel, maker of the PC's microprocessor chip. Intel was selling chips like hotcakes to clonemakers -- and making them smaller, quicker and cheaper. This was unheard of! What kind of an industry had Big Blue gotten themselves into?

Jim Cannavino
Former Head, IBM PC Division
Things get less expensive every year. People aren't used to that in general. I mean, you buy a new car, you buy one now and four years later you go and buy one it costs more than the one you bought before. Here is this magical piece of an industry - you go buy one later it costs less and it does more. What a wonderful thing. But it causes some funny things to occur when you think about an industry. An industry where prices are coming down, where you have to sell it and use it right now, because if you wait later it's worth less.

Where Compaq led, others soon followed. IBM was now facing dozens of rivals - soon to be familiar names began to appear, like AST, Northgate and Dell. It was getting spectacularly easy to build a clone. You could get everything off the shelf, including a guaranteed-virgin ROM BIOS chip. Every Tom, Dick & Bob could now make an IBM compatible PC and take another bite out of Big Blue's business. OK we're at Dominos Computers in Los Altos, California, Silicon Valley and this is Yukio and we're going to set up the Bob and Yukio Personal Computer Company making IBM PC clones. You're the expert, I of course brought all the money so what is it that we're going to do?

Yukio
OK first of all we need a motherboard.
BOB: What's a motherboard?
YUKIO: That's where the CPU is set in...that's the central processor unit.
BOB: OK.
YUKIO: In fact I have one right here. OK so this is the video board...
BOB: That drives the monitor.
YUKIO: Right.
BOB: Terror?
BILL LOWE: Oh, of course. I mean we were able to sell a lot of products but it was getting difficult to make money.
YUKIO: And this is the controller card which would control the hard drive and the floppy drive.
BOB: OK.

Rod Canion
And the way we did it was by having low overhead. IBM had low cost of product but a lot of overhead - they were a very big company.

YUKIO: Right this is a high density recorder.
BOB: So this is a hard disk drive.

Rod Canion
And by keeping our overhead low even though our margins were low we were able to make a profit.

YUKIO: OK I have one right here.
BOB: Hey...OK we have a keyboard which plugs in right over here.
YUKIO: Right...
BOB: People build them themselves - how long does it take?
YUKIO: About an hour.
BOB: About an hour.

And where did every two-bit clone-maker buy his operating system? Microsoft, of course. IBM never imagined Bill Gates would sell DOS to anyone else. Who was there? But by the mid 80's it was boom time for Bill. The teenage entrepreneur had predicted a PC on every desk and in every home, running Microsoft software. It was actually coming true. As Microsoft mushroomed there was no way that Bill Gates could personally dominate thousands of employees but that didn't stop him. He still had a need to be both industry titan and top programmer. So he had to come up with a whole new corporate culture for Microsoft. He had to find a way to satisfy both his adolescent need to dominate and his adult need to inspire. The average Microsoftee is male and about 25. When he's not working, well he's always working. All his friends are Microsoft programmers too. He has no life outside the office but all the sodas are free. From the beginning, Microsoft recruited straight out of college. They chose people who had no experience of life in other companies. In time they'd be called Microserfs.

Charles Simonyi
Chief Programmer, Microsoft
It was easier to to to create a new culture with people who are fresh out of school rather than people who came from, from from eh other companies and and and other cultures. You can rely on it you can predict it you can measure it you can optimise it you can make a machine out of it.

Christine Comaford
I mean everyone like lived together, ate together dated each other you know. Went to the movies together it was just you know very much a it was like a frat or a dorm.

Steve Ballmer
Everybody's just push push push - is it right, is it right, do we have it right keep on it - no that's not right ugh and you're very frank about that - you loved it and it wasn't very formal and hierarchical because you were just so desirous to do the right thing and get it right. Why - it reflects Bill's personality.

Jean Richardson
And so a lot of young, I say people, but mostly it was young men, who just were out of school saw him as this incredible role model or leader, almost a guru I guess. And they could spend hours with him and he valued their contributions and there was just a wonderful camaraderie that seemed to exist between all these young men and Bill, and this strength that he has and his will and his desire to be the best and to be the winner - he is just like a cult leader, really.

As the frenzied 80's came to a close IBM reached a watershed - they had created an open PC architecture that anyone could copy. This was intentional but IBM always thought their inside track would keep them ahead - wrong. IBM's glacial pace and high overhead put them at a disadvantage to the leaner clone makers - everything was turning into a nightmare as IBM lost its dominant market share. So in a big gamble they staked their PC future on a new system, a new line of computers with proprietary closed hardware and their very own operating system. It was war.

Presentation
Start planning for operating system 2 today.

IBM planned to steal the market from Gates with a brand new operating system, called - drum roll please - OS/2. IBM would design OS/2. Yet they asked Microsoft to write the code. Why would Microsoft help create what was intended to be the instrument of their own destruction? Because Microsoft knew IBM was the source of their success and they would tolerate almost anything to stay close to Big Blue.

Steve Ballmer
It was just part of, as we used to call it, the time riding the bear. You just had to try to stay on the bear's back and the bear would twist and turn and try to buck you and throw you, but darn, we were going to ride the bear because the bear was the biggest, the most important you just had to be with the bear, otherwise you would be under the bear in the computer industry, and IBM was the bear, and we were going to ride the back of the bear.

Bill Gates
It's easy for people to forget how pervasive IBM's influence over this industry was. When you talked to people who've come in to the industry recently there's no way you can get that in to their - in to their head, that was the environment.

The relationship between IBM and Microsoft was always a culture clash. IBMers were buttoned-up organization men. Microsoftees were obsessive hackers. With the development of OS/2 the strains really began to show.

Steve Ballmer
In IBM there's a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code. How big a project is it? Oh, it's sort of a 10K-LOC project. This is a 20K-LOCer. And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid. How much money we made off OS 2, how much they did. How many K-LOCs did you do? And we kept trying to convince them - hey, if we have - a developer's got a good idea and he can get something done in 4K-LOCs instead of 20K-LOCs, should we make less money? Because he's made something smaller and faster, less KLOC. K-LOCs, K-LOCs, that's the methodology. Ugh anyway, that always makes my back just crinkle up at the thought of the whole thing.

Jim Cannavino
When I took over in '89 there was an enormous amount of resources working on OS 2, both in Microsoft and the IBM company. Bill Gates and I met on that several times. And we pretty quickly came to the conclusion together that that was not going to be a success, the way it was being managed. It was also pretty clear that the negotiating and the contracts had given most of that control to Microsoft.

It was no longer just a question of styles. There was now a clear conflict of business interest. OS/2 was planned to undermine the clone market, where DOS was still Microsoft's major money-maker. Microsoft was DOS. But Microsoft was helping develop the opposition? Bad idea. To keep DOS competitive, Gates had been pouring resources into a new programme called Windows. It was designed to provide a nice user-friendly facade to boring old DOS. Selling it was another job for shy, retiring Steve Ballmer.

Steve Ballmer (Commercial)
How much do you think this advanced operating environment is worth - wait just one minute before you answer - watch as Windows integrates Lotus 1, 2, 3 with Miami Vice. Now we can take this...

Just as Bill Gates saw OS/2 as a threat, IBM regarded Windows as another attempt by Microsoft to hold on to the operating system business.

Bill Gates
We created Windows in parallel. We kept saying to IBM, hey, Windows is the way to go, graphics is the way to go, and we got virtually everyone else, enthused about Windows. So that was a divergence that we kept thinking we could get IBM to - to come around on.

Jim Cannavino
It was clear that IBM had a different vision of its relationship with Microsoft than Microsoft had of its vision with IBM. Was that Microsoft's fault? You know, maybe some, but IBM's not blameless there either. So I don't view any of that as anything but just poor business on IBM's part.

Bill Gates is a very disciplined guy. He puts aside everything he wants to read and twice a year goes away for secluded reading weeks - the decisive moment in the Microsoft/IBM relationship came during just such a retreat. In front of a log fire Bill concluded that it was no longer in Microsoft's long term interests to blindly follow IBM. If Bill had to choose between OS/2, IBM's new operating system, and Windows, he'd choose Windows.

Steve Ballmer
We said ooh, IBM's probably not going to like this. This is going to threaten OS 2. Now we told them about it, right away we told them about it, but we still did it. They didn't like it, we told em about it, we told em about it, we offered to licence it to em.

Bill Gates
We always thought the best thing to do is to try and combine IBM promoting the software with us doing the engineering. And so it was only when they broke off communication and decided to go their own way that we thought, okay, we're on our own, and that was definitely very, very scary.

Steve Ballmer
We were in a major negotiation in early 1990, right before the Windows launch. We wanted to have IBM on stage with us to launch Windows 3.0, but they wouldn't do the kind of deal that would allow us to profit; it would allow them essentially to take over Windows from us, and we walked away from the deal.

Jack Sams, who started IBM's relationship with Microsoft with that first call to Bill Gates in 1980, could only look on as the partnership disintegrated.

Jack Sams
Then they at that point I think they agreed to disagree on the future progress of OS 2 and Windows. And internally we were told thou shalt not ship any more products on Windows. And about that time I got the opportunity to take early retirement so I did.

Bill's decision by the fireplace ended the ten-year IBM/Microsoft partnership and turned IBM into an also-ran in the PC business. Did David beat Goliath? The Boca Raton, Florida birthplace of the IBM PC is deserted - a casualty of diminishing market share. Today, IBM is again what it was before - a profitable, dominant mainframe computer company. For a while IBM dominated the PC market. They legitimised the PC business, created the standards most of us now use, and introduced the PC to the corporate world. But in the end they lost out. Maybe it was to a faster, more flexible business culture. Or maybe they just threw it away. That's the view of a guy who's been competing with IBM for 20 years, Silicon Valley's most outspoken software billionaire, Larry Ellison.

Larry Ellison
Founder, Oracle
I think IBM made the single worst mistake in the history of enterprise on earth.
Q: Which was?
LARRY: Which was the manufacture - being the first manufacturer and distributor of the Microsoft/Intel PC which they mistakenly called the IBM PC. I mean they were the first manufacturer and distributor of that technology I mean it's just simply astounding that they could ah basically give a third of their market value to Intel and a third of their market value to Microsoft by accident - I mean no-one, no-one I mean those two companies today are worth close to you know approaching a hundred billion dollars I mean not many of us get a chance to make a $100 billion mistake.

As fast as IBM abandons its buildings, Microsoft builds new ones. In 1980 IBM was 3000 times the size of Microsoft. Though still a smaller company, today Wall Street says Microsoft is worth more. Both have faced anti-trust investigations about their monopoly positions. For years IBM defined successful American corporate culture - as a machine of ordered bureaucracy. Here in the corridors of Microsoft it's a different style, it's personal. This company - in its drive, its hunger to succeed - is a reflection of one man, its founder, Bill Gates.

Jean Richardson
Bill wanted to win. Incredible desire to win and to beat other people. At Microsoft we, the whole idea was that we would put people under, you know. Unfortunately that's happened a lot.

Esther Dyson
Computer Industry Analyst
Bill Gates is special. You wouldn't have had a Microsoft with take a random other person like Gary Kildall. On the other hand, Bill Gates was also lucky. But Bill Gates knows that, unlike a lot of other people in the industry, and he's paranoid. Every morning he gets up and he doesn't feel secure, he feels nervous about this. They're trying hard, they're not relaxing, and that's why they're so successful.

Christine Comaford
And I remember, I was talking to Bill once and I asked him what he feared, and he said that he feared growing old because you know, once you're beyond thirty, this was his belief at the time, you know once you're beyond thirty, you know, you don't have as many good ideas anymore. You're not as smart anymore.

Bill Gates
If you just slow down a little bit who knows who it'll be, probably some company that may not even exist yet, but eh someone else can come in and take the lead.

Christine Comaford
And I said well, you know, you're going to age, it's going to happen, it's kind of inevitable, what are you going to do about it? And he said I'm just going to hire the smartest people and I'm going to surround myself with all these smart people, you know. And I thought that was kind of interesting. It was almost - it was like he was like oh, I can't be immortal, but like maybe this is the second best and I can buy that, you know.

Bill Gates
If you miss what's happening then the same kind of thing that happened to IBM or many other companies could happen to Microsoft very easily. So no-one's got a guaranteed position in the high technology business, and the more you think about, you know, how could we move faster, what could we do better, are there good ideas out there that we should be going beyond, it's important. And I wouldn't trade places with anyone, but the reason I like my job so much is that we have to constantly stay on top of those things.

The Windows software system that ended the alliance between Microsoft and IBM pushed Gates past all his rivals. Microsoft had been working on the software for years, but it wasn't until 1990 that they finally came up with a version that not only worked properly, it blew their rivals away. And where did the idea for this software come from? Well, not from Microsoft, of course. It came from the hippies at Apple. Lights! Camera! Boot up! In 1984, they made a famous TV commercial. Apple had set out to create the first user-friendly PC just as IBM and Microsoft were starting to make a machine for businesses. When the TV commercial aired, Apple launched the Macintosh.

Commercial
Glorious anniversary of the information...

The computer and the commercial were aimed directly at IBM - which the kids in Cupertino thought of as Big Brother. But Apple had targeted the wrong people. It wasn't Big Brother they should have been worrying about, it was big Bill Gates.

Commercial
We are one people....

To find out why, join me for the concluding episode of Triumph of the Nerds.

Commercial
...........we shall prevail.

Source: https://www.pbs.org/nerds/part2.html
How IBM Could Become A Digital Winner

Last week, after IBM’s report of positive quarterly earnings, CEO Arvind Krishna and CNBC’s Jim Cramer shared their frustration that IBM’s stock “got clobbered.” IBM’s stock price immediately fell by 10%, while the S&P 500 remained steady (Figure 1).

While a five-day stock price fluctuation is by itself meaningless, questions remain about IBM’s longer-term picture. “These are great numbers,” declared Krishna.

“You gave solid revenue growth and solid earnings,” Cramer sympathized. “You far exceeded expectations. Maybe someone is changing the goal posts here?”

The Goal Posts To Become A Digital Winner

It is also possible that Krishna and Cramer missed where today’s goal posts are located. Strong quarterly numbers do not a digital winner make. They may induce the stock market to regard a firm as a valuable cash cow, like other remnants of the industrial era. But to become a digital winner, a firm must take the kind of steps that Satya Nadella took at Microsoft to become a digital winner: kill its dogs, commit to a mission of customer primacy, identify real growth opportunities, transform its culture, make empathy central, and unleash its agilists. (Figure 2)

Since becoming CEO, Nadella has been brilliantly successful at Microsoft, growing market capitalization by more than a trillion dollars.

Krishna’s Two Years As CEO

Krishna has been IBM CEO since April 2020. He began his career at IBM in 1990, and had been managing IBM’s cloud and research divisions since 2015. He was a principal architect of the Red Hat acquisition.

There are remarkable parallels between the careers of Krishna and Nadella.

· Both are Indian-American engineers, who were born in India.

· Both worked at the firm for several decades before they became CEOs.

· Prior to becoming CEOs, both were in charge of cloud computing.

Both inherited companies in trouble. Microsoft was stagnating after CEO Steve Ballmer, while IBM was in rapid decline after CEO Ginni Rometty: the once famous “Big Blue” had become known as a “Big Bruise.”

Although it is still early days in Krishna’s CEO tenure, IBM has under-performed the S&P 500 since he took over (Figure 3).

More worrying is the fact that Krishna has not yet completed the steps that Nadella took in his first 27 months. (Figure 1).

1. Jettison Losing Baggage

Nadella wrote off the Nokia phone and declared that Windows would no longer be Microsoft’s flagship business. This freed up energy and resources to focus on creating winning businesses.

By contrast, Krishna has yet to jettison, IBM’s most distracting baggage:

· Commitment to maximizing shareholder value (MSV): For the two prior decades, IBM was the public champion of MSV, first under CEO Palmisano (2001-2011), and again under Rometty (2012-2020)—a key reason behind IBM’s calamitous decline (Figure 2). Krishna has yet to explicitly renounce IBM’s MSV heritage.

· Top-down bureaucracy: The necessary accompaniment of MSV is top-down bureaucracy, which flourished under CEOs Palmisano and Rometty. Here too, bureaucratic processes must be explicitly eradicated, otherwise they become permanent weeds.

· The ‘Watson problem’: IBM’s famous computer, Watson, may have won ‘Jeopardy!’ but it continues to have problems in the business marketplace. In January 2022, IBM reported that it had sold Watson Health assets to an investment firm for around $1 billion, after acquisitions that had cost some $4 billion. Efforts to monetize Watson continue.

· Infrastructure Services: By spinning off its managed infrastructure services business as a publicly listed company (Kyndryl), IBM created nominal separation, but Kyndryl immediately lost 57% of its share value.

· Quantum Computing: IBM pours resources into research on quantum computing and touts its potential to revolutionize computing. However, unsolved technical problems, such as maintaining coherence and entanglement at scale, mean that any meaningful benefits are still some years away.

· Self-importance: Perhaps the heaviest baggage that IBM has yet to jettison is the over-confidence reflected in sales slogans like “no one ever got fired for hiring IBM”. The subtext is that firms “can leave IT to IBM” and that the safe choice for any CIO is to stick with IBM. It’s a status quo mindset—the opposite of the clients that IBM needs to attract.

2. Commit To A Clear Customer-Obsessed Mission

At the outset of his tenure as CEO of Microsoft, Nadella spent the first nine months getting consensus on a simple customer-driven mission statement.

Krishna did write to staff on day one as CEO, and he added at the end: “Third, we all must be obsessed with continually delighting our clients. At every interaction, we must strive to offer them the best experience and value. The only way to lead in today’s ever-changing marketplace is to constantly innovate according to what our clients want and need.” This would have been more persuasive if it had come at the beginning of the letter, and if there had been stronger follow-up.

What is IBM’s mission? No clear answer appears from IBM’s own website. The best one gets from About IBM is the fuzzy do-gooder declaration: “IBMers believe in progress — that the application of intelligence, reason and science can improve business, society and the human condition.” Customer primacy is not explicit, thereby running the risk that IBM’s 280,000 employees will assume that the noxious MSV goal is still in play.

3. Focus On Major Growth Opportunities

At Microsoft, Nadella dismissed competing with Apple on phones or with Google on Search. He defined the two main areas of opportunity—mobility and the cloud.

Krishna has identified the Hybrid Cloud and AI as IBM’s main opportunities. Thus, Krishna wrote in his letter to staff on day one as CEO: “Hybrid cloud and AI are two dominant forces driving change for our clients and must have the maniacal focus of the entire company.”

However, both fields are now very crowded. IBM is now a tiny player in Cloud in comparison to Amazon, Microsoft, and Google. In conversations, Krishna portrays IBM as forging working partnerships with the big Cloud players, and “integrating their offerings in IBM’s hybrid Cloud.” One risk here is whether the big Cloud players will facilitate this. The other risk is that IBM will attract only lower-performing firms that use IBM as a crutch so that they can cling to familiar legacy programs.

4. Address Culture And The Importance Of Empathy Upfront

At Microsoft, Nadella addressed culture upfront, rejecting its notoriously confrontational culture and setting about instilling a collaborative, customer-driven culture throughout the firm.

Although Krishna talks openly to the press, he has not, to my knowledge, frontally addressed the “top-down,” “we know best” culture that prevailed at IBM under his predecessor CEOs. He has, to his credit, pledged “neutrality” with respect to the innovative, customer-centric Red Hat, rather than applying the “Blue washing” that the old IBM systematically imposed on its acquisitions to bring them into line with its top-down culture, and he is said to have honored that pledge so far. But there is little indication that IBM is ready to adopt Red Hat’s innovative culture for itself. It is hard to see these two opposed cultures remaining “neutral” forever. Given the size differential between IBM and Red Hat, the likely winner is easy to predict, unless Krishna makes a more determined effort to transform IBM’s culture.

5. Empower The Hidden Agilists

As in any large tech firm, when Nadella and Krishna took over their respective companies, there were large hidden armies of agilists waiting in the shadows, hamstrung by top-down bureaucracies. At Microsoft, Nadella’s commitment to “agile, agile, agile,” combined with a growth mindset, enabled a fast start. At IBM, if Krishna has any passion for agile, he has not yet shared it widely.

Bottom Line

Although IBM has made progress under Krishna, it is not yet on a path to become a clear digital winner.

And read also:

Is Your Firm A Cash-Cow Or A Growth-Stock?

Why Companies Must Learn To Discuss The Undiscussable

Source: Steve Denning, Forbes: https://www.forbes.com/sites/stevedenning/2022/07/25/how-ibm-could-become-a-digital-winner/
Nanosheet FETs Drive Changes In Metrology And Inspection

In the Moore’s Law world, it has become a truism that smaller nodes lead to larger problems. As fabs turn to nanosheet transistors, it is becoming increasingly challenging to detect line-edge roughness and other defects due to the depths and opacities of these and other multi-layered structures. As a result, metrology is taking even more of a hybrid approach, with some well-known tools moving from the lab to the fab.

Nanosheets are the successor to finFETs, an architecture evolution prompted by the industry’s continuing desire to increase speed, capacity, and power. They also help solve short-channel effects, which lead to current leakage. The great vulnerability of advanced planar MOSFET structures is that they are never fully “off.” Due to their configuration, in which the metal-oxide gate sits on top of the channel (conducting current between source and drain terminals), some current continues to flow even when voltage isn’t applied to the gate.

FinFETs raise the channel into a “fin.” The gate is then arched over that fin, allowing it to connect on three sides. Nevertheless, the bottom of the gate and the bottom of the fin are level with each other, so some current can still sneak through. The gate-all-around design turns the fin into multiple, stacked nanosheets, which horizontally “pierce” the gate, giving coverage on all four sides and containing the current. An additional benefit is the nanosheets’ width can be varied for device optimization.

Fig. 1: Comparison of finFET and gate-all-around with nanosheets. Source: Lam Research

Unfortunately, with one problem solved, others emerge. “With nanosheet architecture, a lot of defects that could kill a transistor are not line-of-sight,” said Nelson Felix, director of process technology at IBM. “They’re on the underside of nanosheets, or other hard-to-access places. As a result, the traditional methods to very quickly find defects without any prior knowledge don’t necessarily work.”

So while this may appear linear from an evolutionary perspective, many process and materials challenges have to be solved. “Because of how the nanosheets are formed, it’s not as straightforward as it was in the finFET generation to create a silicon-germanium channel,” Felix said.

Hybrid combinations
Several techniques are being utilized, ranging from faster approaches like optical microscopy to scanning electron microscopes (SEMs), atomic force microscopes (AFMs), X-ray, and even Raman spectroscopy.

Well-known optical vendors like KLA provide the first-line tools, employing techniques such as scatterometry and ellipsometry, along with high-powered e-beam microscopes.

With multiple gate stacks, optical CD measurement needs to separate one level from the next, according to Nick Keller, senior technologist, strategic marketing for Onto Innovation. “In a stacked nanosheet device, the physical dimensions of each sheet need to be measured individually — especially after selective source-drain recess etch, which determines drive current, and the inner spacer etch, which determines source-to-gate capacitance, and also affects transistor performance. We’ve done demos with all the key players and they’re really interested in being able to differentiate individual nanosheet widths.”

Onto’s optical critical dimension (OCD) solution combines spectroscopic reflectometry and spectroscopic ellipsometry with an AI analysis engine, called AI-Diffract, to provide angstrom-level CD measurements with superior layer contrast versus traditional OCD tools.

Fig. 2: A model of a GAA device generated using AI Diffract software, showing the inner spacer region (orange) of each nanosheet layer. Source: Onto Innovation

Techniques like spectroscopic ellipsometry or reflectometry from gratings (scatterometry) can measure CDs and investigate feature shapes. KLA describes scatterometry as using broadband light to illuminate a target to derive measurements. The reflected signal is fed into algorithms that compare the signal to a library of models created based on known material properties and other data to see 3D structures. The company’s latest OCD and shape metrology system identifies subtle variations (in CD, high k and metal gate recess, side wall angle, resist height, hard mask height, pitch walking) across a range of process layers. An improved stage and new measurement modules help accelerate throughput.
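To make that library-matching idea concrete, here is a minimal Python sketch of the general pattern: a measured spectrum is compared against a precomputed library of simulated spectra, and the geometry value of the closest model is reported. The function, the toy "spectra," and the CD values are hypothetical illustrations only, not KLA's or Onto's actual algorithms, which use far richer physical models and AI-based regression rather than brute-force search.

```python
import numpy as np

def best_match_cd(measured, library):
    """Return the CD value whose simulated spectrum best fits the measurement.

    `library` maps a candidate CD (in nm) to its simulated reflectance
    spectrum, sampled on the same wavelength grid as `measured`.
    """
    best_cd, best_err = None, np.inf
    for cd, spectrum in library.items():
        err = np.sum((measured - spectrum) ** 2)  # least-squares spectral distance
        if err < best_err:
            best_cd, best_err = cd, err
    return best_cd

# Toy usage: fake oscillatory "spectra" whose fringe period stands in for CD.
wavelengths = np.linspace(250.0, 800.0, 200)
library = {cd: np.cos(2 * np.pi * wavelengths / (50.0 + cd))
           for cd in np.arange(10.0, 20.0, 0.5)}
measured = library[14.5] + np.random.normal(0.0, 0.01, wavelengths.size)
print(best_match_cd(measured, library))  # expected: 14.5
```

In production the search is replaced by regression over a parameterized model (and, increasingly, machine-learned surrogates), but the underlying idea of fitting a measured spectrum to simulated candidates is the same.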

Chipmakers rely on AI engines and deep computing in metrology just to handle the data streams. “They do the modeling data for what we should be looking at that day, and that helps us out,” said Subodh Kulkarni, CEO of CyberOptics. “But they want us to provide them speedy resolution and accuracy. That’s incredibly difficult to deliver. We’re ultimately relying on things like the resolution of CMOS and the bandwidth of GPUs to crunch all that data. So in a way, we’re relying on those chips to develop inspection solutions for those chips.”

In addition to massive data crunching, data from different tools must be combined seamlessly. “Hybrid metrology is a prevailing trend, because each metrology technique is so unique and has such defined strengths and weaknesses,” said Lior Levin, director of product marketing at Bruker. “No single metrology can cover all needs.”

The hybrid approach is well accepted. “System manufacturers are putting two distinct technologies into one system,” said Hector Lara, Bruker’s director and business manager for Microelectronics AFM. He says Bruker has decided against that approach based on real-world experience, which has shown it leads to sub-optimal performance.

On the other hand, hybrid tools can save time and allow a smaller footprint in fabs. Park Systems, for example, integrates AFM precision with white light interferometry (WLI) into a single instrument. Its purpose, according to Stefan Kaemmer, president of Park Systems Americas, is in-line throughput. While the WLI can quickly spot a defect, “You can just move the demo over a couple of centimeters to the AFM head and not have to take the time to unload it and then load it on another tool,” Kaemmer said.

Bruker, meanwhile, offers a combination of X-ray diffraction (XRD)/X-ray reflectometry (XRR) and X-ray fluorescence (XRF)/XRR for 3D logic applications. However, “for the vast majority of applications, the approach is a very specialized tool with a single metrology,” Levin said. “Then you hybridize the data. That’s the best alternative.”

What AFMs provide
AFMs are finding traction in nanosheet inspection because of their ability to distinguish fine details, a capability already proven in 3D NAND and DRAM production. “In AFM, we don’t really find the defects,” Kaemmer explained. “Predominantly, we read the defect map coming typically from some KLA tool and then we go to whatever the customer picks to closely examine. Why that’s useful is the optical tool tells you there’s a defect, but one defect could actually be three smaller defects that are so close together the optical tool can’t differentiate them.”

The standard joke about AFMs is that their operation was easier to explain when they were first developed nearly forty years ago. In 1985, when record players were in every home, it required little to imagine an instrument in which a sharp tip extended from a cantilevered arm felt its way along a surface to produce signals. With electromagnetic (and sometimes chemical) modifications, that is essentially the hardware design of all modern AFMs. There are now many variations of tip geometries, from pyramids to cones, in a range of materials including silicon, diamond, and tungsten.

In one mode of operation, tapping, the cantilever is put into oscillation at its natural resonant frequency, giving the AFM controlling systems greater precision of force control and resulting in a nanometer-scale spatial topographic rendering of the semiconductor structure. The second, sub-resonant mode enables the greatest force control during tip/sample interaction. That approach becomes invaluable for high-aspect structures, rendering high-accuracy depth measurements and, in some structures, sidewall angles and roughness.
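For orientation, the resonant frequency used in tapping mode follows from the cantilever's stiffness and effective mass in the textbook point-mass approximation; the spring constant and mass below are assumed, typical-order values rather than any vendor's specification:

$$
f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m^{*}}}
\quad\Longrightarrow\quad
f_0 \approx \frac{1}{2\pi}\sqrt{\frac{40\ \text{N/m}}{1\times 10^{-11}\ \text{kg}}} \approx 3.2\times 10^{5}\ \text{Hz} \approx 320\ \text{kHz}
$$

That order of magnitude is typical of tapping-mode probes, and it is why small tip-sample forces show up as easily detected changes in the oscillation amplitude.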

Today’s commercial production tools are geared to specific applications, such as defect characterization or surface profile measurement. Unlike optical microscopes, where improvements center on resolution, AFMs are looking at subtle profile changes in bond pads for hybrid bonding, for instance, or to reveal defect characteristics like molecular adhesion.

“Bonding is really a sweet spot for AFM,” said Sean Hand, senior staff applications scientist at Bruker. “It’s really planar, it’s flat, we’re able to see the nanoscale roughness, and the nanoscale slope changes that are important.”

Additionally, because tips can exert enough force to move particles, AFMs can both find errors and correct them. They have been used in production to remove debris and make pattern adjustments on lithography masks for nearly two decades. Figure 3 (below) shows probe-based particle removal during the lithography process for advanced node development. Contaminants are removed from EUV masks, allowing the photomask to be quickly returned to production use. That extends the life of the reticle, and avoids surface degradation caused by wet cleaning.

AFM-based particle removal is a significantly lower-cost dry cleaning process and adds no residual contamination to the photomask surface, which can degrade mask life. Surface interaction is local to the defect, which minimizes the potential for contamination of other mask areas. The high precision of the process allows for cleaning within fragile mask features without risk of damage.

Fig. 3: Example of pattern repair. Source: Bruker

AFMs also are used to evaluate the many photoresist candidates for high-NA EUV, including metal oxide resists and more traditional chemically amplified resists. “With the thin resist evaluation of high NA EUV studies, now you have thin resist trenches that are much more shallow,” said Anne-Laure Charley, R&D metrology manager at Imec. “And that becomes a very nice use case for AFM.”

The drawback to AFMs, however, is that they are limited to surface characterization. They cannot measure the thickness of layers, and can be limited in terms of deep 3D profile information. Charley recently co-authored a paper that explores a deep-learning-enabled correction for the problem of vertical (z) drift in AFMs. “If you have a structure with a small trench opening, but which is very deep, you will not be able to enter with the tip at the bottom of the trench, and you will not then be able to characterize the full etch depth and also the profile at the bottom of the trench,” she said.
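The deep-learning drift correction in the paper mentioned above is beyond the scope of a short example, but the underlying problem can be illustrated with a much cruder stand-in: per-scan-line leveling. The sketch below is a generic flattening trick assumed purely for illustration; it is not the method of Cerbu et al., which learns the correction rather than assuming it.

```python
import numpy as np

def remove_line_drift(image):
    """Subtract each scan line's median height to suppress slow z-drift.

    `image` is a 2D array of heights (rows = scan lines). Line-by-line
    leveling removes offsets that accumulate from line to line, at the
    cost of also flattening real large-scale topography.
    """
    offsets = np.median(image, axis=1, keepdims=True)
    return image - offsets

# Toy usage: a nominally flat surface plus 0.2 nm of drift per scan line.
rows, cols = 256, 256
drift = 0.2 * np.arange(rows).reshape(-1, 1)
raw = np.random.normal(0.0, 0.05, (rows, cols)) + drift
leveled = remove_line_drift(raw)
print(raw.std(), leveled.std())  # drift dominates the first value, not the second
```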

Raman spectroscopy
Raman spectroscopy, which relies on the analysis of inelastically scattered light, is a well-established offline technique for materials characterization that is making its way inline into fabs. According to IBM’s Felix, it is likely to come online to answer the difficult questions of 3D metrology. “There’s a suite of wafer characterization techniques that historically have been offline techniques. For example, Raman spectroscopy lets you really probe what the bonding looks like,” he said. “But with nanosheet, this is no longer a data set you can just spot-check and have it be only one-way information. We have to use that data in a much different way. Bringing these techniques into the fab and being able to use them non-destructively on a wafer that keeps moving is really what’s required because of the complexity of the material set and the geometries.”

XRD/XRF
In addition to AFM, other powerful techniques are being pulled into the nanosheet metrology arsenal. Bruker, for example, is employing X-ray diffraction (XRD), the crystallography technique with which Rosalind Franklin created the famous “Photograph 51” to show the helical structure of DNA in 1952.

According to Levin, during the height of finFET development, companies adopted XRD technology, but mainly for R&D. “It looks like in this generation of devices, X-ray metrology adoption is much higher.”

“For the gate all around, we have both XRD — the most advanced XRD, the high brightness source XRD, for measurement of the nanosheet stack — combined with XRF,” said Levin. “Both of them are to measure the residue part, making sure everything is connected, as well as those recessed etch steps. An XRF can provide a very accurate volumetric measurement. It can measure single atoms. So in a very sensitive manner, you can measure the recessed etch of the material that is remaining after the recessed etch. And it’s a direct measurement that doesn’t require any calibration. The signal you get is directly proportional to what you’re looking to measure. So there’s significant adoption of these two techniques for GAA initial development.”

Matthew Wormington, chief technologist at Bruker Semi X-ray, gave more details: “High resolution X-ray diffraction and X-ray reflectometry are two techniques that are very sensitive to the individual layer thicknesses and to the compositions, which are key for controlling some of the key parameters downstream in the 3D process. The gate-all-around structure is built on engineered substrates. The first step is planar structures, a periodic array of silicon and silicon germanium layers. X-ray measurement is critical in that very key step because everything is built on top of that. It’s a key enabling measurement. So the existing techniques become much more valuable, because if you don’t get your base substrate correct — not just the silicon but the SiGe/Si multilayer structure — everything following it is challenged.”
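
To give a feel for why reflectometry is so sensitive to layer thickness: to first order, the angular spacing of the interference (Kiessig) fringes in an X-ray reflectivity curve is inversely proportional to the film thickness. The Python sketch below shows only that textbook approximation; it ignores refraction corrections near the critical angle, and the numbers are illustrative assumptions rather than values from Bruker.

    import math

    def thickness_from_fringes_nm(wavelength_nm, fringe_period_deg):
        """First-order film thickness estimate from X-ray reflectivity fringes:
        t ~ wavelength / (2 * delta_theta), with delta_theta in radians."""
        delta_theta = math.radians(fringe_period_deg)
        return wavelength_nm / (2.0 * delta_theta)

    # With Cu K-alpha radiation (0.154 nm), fringes spaced about 0.088 degrees
    # apart correspond to a film roughly 50 nm thick.
    print(round(thickness_from_fringes_nm(0.154, 0.088), 1))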

Conclusion
The introduction of nanosheet transistors and other 3D structures is calling for wider usage of tools like AFM, X-ray systems, ellipsometry and Raman spectroscopy. And new processes, like hybrid bonding, lead to older techniques being brought in for new applications. Imec’s Charley said, “There are some specific challenges that we see linked to stacking of wafers. You eventually need to measure through silicon, because when you start to stack two wafers on top of each other, you need to measure or inspect through the backside, and eventually you still have a relatively thick silicon. And that implies working with different wavelengths, in particular infrared. So vendors are developing specific overlay tools using infrared for these kinds of use cases.”

As for who will ultimately drive the research, it depends on when you ask that question. “The roadmap for technology is always bi-directional,” said Levin. “It’s hard to quantify, but roughly half comes from the technology side, from what is possible, and half comes from what’s needed in the marketplace. Every two or three years we have a new generation of tools.”

REFERENCES
D. Cerbu et al., “Deep Learning-Enabled Vertical Drift Artefact Correction for AFM Images,” Proc. SPIE Metrology, Inspection, and Process Control XXXVI, May 2022; doi: 10.1117/12.2614029

A.A. Sifat, J. Jahng, and E.O. Potma, “Photo-Induced Force Microscopy (PiFM) — Principles and Implementations,” Chem. Soc. Rev., 2022,51, 4208-4222. https://pubs.rsc.org/en/content/articlelanding/2022/cs/d2cs00052k

Mary A. Breton, Daniel Schmidt, Andrew Greene, Julien Frougier, and Nelson Felix, “Review of nanosheet metrology opportunities for technology readiness,” J. of Micro/Nanopatterning, Materials, and Metrology, 21(2), 021206 (2022). https://doi.org/10.1117/1.JMM.21.2.021206

Daniel Schmidt, Curtis Durfee, Juntao Li, Nicolas Loubet, Aron Cepler, Lior Neeman, Noga Meir, Jacob Ofek, Yonatan Oren, and Daniel Fishman, “In-line Raman spectroscopy for gate-all-around nanosheet device manufacturing,” J. of Micro/Nanopatterning, Materials, and Metrology, 21(2), 021203 (2022). https://doi.org/10.1117/1.JMM.21.2.021203

Tue, 09 Aug 2022 03:01:00 -0500 en-US text/html https://semiengineering.com/nanosheet-fets-drive-changes-in-metrology-and-inspection/
Killexams : Has the cloud caught up with the mainframe?

Yes, you read that right. For much of the last couple of decades, it’s felt as if everyone has been talking about the impending demise of the mainframe, whilst simultaneously attempting to emulate as many of its key operational characteristics as possible.

Originally this emulation was via industry-standard servers, but in the last few years “the cloud” has taken up this challenge. It began with cloud computing promising the same level of scalability, flexibility and operational efficiency that mainframe systems have long provided, and, on scalability, going somewhat further. For a while these were more words than reality, but now cloud capabilities are (finally) getting close to what mainframe users have long taken for granted.

More recently, attention in cloud circles has turned to other – what we might regard as core – mainframe attributes such as security, privacy, resilience and failover. Whether you believe the marketing of cloud providers on this is up to you (as it is with any vendor marketing messages). But ensuring such things certainly requires very careful reading of the service level guarantees and the contractual small print.

Today much of the focus of cloud services has switched to support for specialist workloads, and again, we see cloud following in the footsteps of the mainframe by using dedicated offload engines designed to optimise workload performance, and in many cases to minimise software licensing costs as well. But it’s always seemed as if cloud has been in catch-up mode, and the mainframe has remained in the lead. Which leads to the question, has the cloud now caught up?

Has the cloud caught up?

In many ways, the answer is “yes”, but it is a qualified yes. When it comes to scalability, throughput, operational efficiency, and arguably even resilience and failover, cloud has caught up with the mainframe of the 1990s or early 2000s. But there are other factors to bear in mind, as the mainframe has not stood still.

For example, it is fair to say that cloud providers have made great strides on security and privacy, but in reality the mainframe is still recognised as the gold standard, with security baked into every layer in the systems stack.

Then there are questions such as latency and data location. With the mainframe, there is no doubt where the data resides and who can access it. Managing these details and the associated operational policies has been part of the platform for over fifty years. When it comes to latency, the mainframe is probably sitting very close to the data you are working with, making latency as low as possible in terms of system response times, something reinforced when considering the system’s very powerful processors and sophisticated, mature partitioning capabilities.

And the mainframe environment is getting even stronger when you look at the announcements made at the recent launch of the IBM z16. These include quantum-safe cryptography to protect against the development of quantum computers able to break current encryption standards, on-chip AI acceleration to boost ML and AI execution, and flexible capacity combined with on-demand workload transfer across multiple locations to further reduce the chance of service disruption.

But there are places where things are arguably closer, one of which is in the area of workload optimisation, although the two environments are developing in different ways. For example, the mainframe strives to deliver a consistent environment that can handle a wide range of workloads, all managed through the same set of frameworks and tools. The cloud, on the other hand, allows you to spin up dedicated specialised environments, e.g. for AI or analytics.

What about developers?

Which leaves the question: where is “the cloud” ahead of the mainframe? The obvious place to start is the geographic distribution of the major public clouds, which spread across the globe with resources that no mainframe or mainframe cluster can match. But this advantage is no longer quite so huge, given that IBM will shortly be making “mainframe as a service” available from its IBM Cloud data centres around the world.

Not quite as a corollary, it is also fair to say that cloud was ahead for a while with regard to modern software delivery methods such as DevOps and the implementation of various agile delivery solutions. But we must recognise that it hasn’t taken long for the gap to close, because the fundamental principles underlying things like DevOps, containers, microservices and APIs have been intrinsic to the mainframe environment for decades, indeed pretty much since its beginning. In addition, IBM and the other software vendors in the mainframe ecosystem, such as Broadcom and BMC, have developed their offerings to such a degree that today there’s almost absolute parity.

In essence, today’s mainframe environment is one where the latest generation of developers should not feel out of place. It uses the same standards-based, open tools they use daily. And with mainframe-as-a-service soon to be available, devs will be able to build code wherever they like and run it on the mainframe with a few clicks and no need to build a complex environment.

This is good news for the mainframe, but having the technological capabilities is less than half of the challenge. What’s really needed is for the mainframe to catch the eye of modern developers. IBM needs to ensure that developers understand that the mainframe is not a new and alien place, but instead is ready for them to exploit using the tools they are already comfortable with.

Some final thoughts

When you stand back and consider the modern mainframe, particularly the LinuxONE version and the new z16, it’s pretty clear that any claims of the mainframe being out of date or legacy stem from a fundamental lack of awareness. Indeed, the mainframe has continued to lead the way in many critical areas, delivering IT cost-effectively and securely at scale. The bottom line is, it’s not that the mainframe has been trying to keep up with industry developments, it’s that the mainframe is still very much leading the way.

Fri, 05 Aug 2022 00:04:00 -0500 en text/html https://www.computerweekly.com/blog/Write-side-up-by-Freeform-Dynamics/Has-the-cloud-caught-up-with-the-mainframe
Killexams : Beyond CRUD – Why Data-Driven Insight is Taking its Next Leap Forward

In this special guest feature, Darin Briskman, Director of Technology, Imply, discusses the evolution of relational CRUD (create, read, update, delete) databases, from data warehousing into the modern era, and why architectures need to move beyond CRUD. Darin helps developers create modern analytics applications. He began his career at NASA in the 1980s (ask him about rockets!), and has been working with large and interesting data sets ever since. Most recently, he has held various technical and leadership roles at Couchbase, Amazon Web Services, and Snowflake. When he’s not writing code, Darin likes to juggle and blow glass (usually not while juggling), and he works to help children on the autism spectrum learn to use their special abilities to work better with the neuronormative.

Making better decisions using data-driven insight is now front and center in the race for growth and success. Whether the objective is tracking customer behavior, improving R&D or achieving competitive advantage, organizations all across the world are racing to harness the value of data, investing in people, techniques, and technology. 

The value received from data-driven insight has been steadily increasing. According to Accenture, only 3 of the 10 most valuable enterprises were actively taking a data-driven approach in 2008, but now 7 out of 10 do. Across organizations large and small, the annual growth rate achieved by those who are ‘data driven’ sits at over 30% – over 10 times the average business growth rate in the US.

Getting to this point has been an interesting journey. Fifty years ago, IBM published the foundational model for Relational Database Management Systems (RDBMS), with the familiar tables, rows and columns format still widely used to this day. Further innovation, such as the development of Structured Query Language (SQL), made it much easier to manage CRUD (Create, Read, Update, and Delete) data. This made it more practical to build and maintain large data sets, driving growth of databases and computing.
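
As a concrete refresher, the four CRUD operations map directly onto everyday SQL statements. The minimal, self-contained Python sketch below uses SQLite purely as an illustration; the table and column names are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Create: define a table and insert a row
    cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")
    cur.execute("INSERT INTO customers (name, region) VALUES (?, ?)", ("Acme Corp", "EMEA"))

    # Read: query the row back
    print(cur.execute("SELECT id, name, region FROM customers").fetchall())

    # Update: change an attribute
    cur.execute("UPDATE customers SET region = ? WHERE name = ?", ("APAC", "Acme Corp"))

    # Delete: remove the row
    cur.execute("DELETE FROM customers WHERE name = ?", ("Acme Corp",))

    conn.commit()
    conn.close()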

Fast forward to the Internet era and organizations across the world capitalized on ubiquitous connectivity to bring about exponential growth in the collection, storage and management of data. This also revealed the capacity and cost shortcomings of the CRUD processing technologies available at the time.

More recently still, the arrival of cloud computing has broadened the availability of affordable technology infrastructure, bringing with it another step forward in the development and delivery of analytics. In contrast to legacy on-premise strategies, where infrastructure and applications were expensive to implement and scale, the cloud has allowed IT teams to add or remove both compute and storage on demand. The result? Analytics is now both much more scalable and much less expensive as new offerings from upstart vendors now service the increasingly demanding requirements of data-centric organizations the world over.

Cloud computing also enables huge growth in the availability and power of affordable applications that level the playing field so everyone can support huge numbers of users. But this wasn’t enough to enable interactive conversations with high volume data streams from the web, the Internet of things, and other sources.

The answer is to analyze the data stream instead of converting everything to relational CRUD. For this to happen, organizations need a modern database, which comes in the form of Apache Druid®, an open source project that has been adopted across a wide range of use cases and teams looking to analyze streams or a combination of stream data and historical data. Its advantages over traditional approaches – from faster time to production to better productivity and performance – have made Druid a leader for modern analytics applications.
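
Druid exposes a SQL-over-HTTP interface, so an application can ask aggregate questions of a stream-fed table directly. The Python sketch below assumes a locally running Druid router on its default port; the datasource and column names are invented for the example, so treat it as the shape of the interaction rather than a drop-in query.

    import requests

    # Assumed local Druid router; /druid/v2/sql is Druid's SQL API endpoint
    DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

    query = """
    SELECT TIME_FLOOR(__time, 'PT1M') AS minute, COUNT(*) AS events
    FROM "clickstream"
    WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
    GROUP BY 1
    ORDER BY 1
    """

    resp = requests.post(DRUID_SQL_URL, json={"query": query}, timeout=30)
    resp.raise_for_status()
    for row in resp.json():
        print(row["minute"], row["events"])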

Looking ahead

So where are we today and what further developments in data-driven insights can we expect to emerge in the near term? Many organizations are seeing a growing need for solutions that can deliver sub-second response times for questions across billions of data points (and for both historic and streaming data). With perhaps hundreds of people asking questions at the same time, concurrent performance is also crucial to deliver the kind of capabilities organizations need.

These capabilities must be delivered via affordable solutions where cost translates into better decision-making. Even though storage and computing still add an important cost base to any data insight solution, they are not as significant as developer time. Technology advances have rapidly reduced the costs of infrastructure and software, but humans aren’t any smarter than we were fifty years ago, so the percentage of IT costs that pays developers and other humans is continually increasing. Solutions that accelerate developer productivity and help leaders make better decisions more quickly deliver value that matters.

In recent decades, data insight has advanced almost beyond recognition. From relational databases’ pioneering impact to modern data warehousing, there is no let-up in the need for innovation. The CRUD approach, which has served as the foundation for data analytics up to this point, is no longer enough on its own, and data architectures must evolve to handle streaming data and meet the needs of organizations in the years ahead.

Granted, there remain relevant uses of analytics with relational CRUD – many organizations still require quarterly and annual reporting, for instance – a requirement that is well suited to CRUD. But increasingly, teams need to conduct meaningful interactive conversations with data, and attempting this with a CRUD data pipeline simply costs too much and takes too long.

The solution is a new class of real-time analytic databases that mix CRUD and streams for high concurrency and sub-second response rates across billions of data points. Organizations implementing these capabilities are already embracing the new era of data-driven insight, powered by technology built from the ground up that meets growing needs for faster, better decisions.

Tue, 09 Aug 2022 03:20:00 -0500 Editorial Team en-US text/html https://insidebigdata.com/2022/08/09/beyond-crud-why-data-driven-insight-is-taking-its-next-leap-forward/
Killexams : IBM Aims to Capture Growing Market Opportunity for Data Observability with Databand.ai Acquisition

Acquisition helps enterprises catch "bad data" at the source

Extends IBM's leadership in observability to the full stack of capabilities for IT -- across infrastructure, applications, data and machine learning

ARMONK, N.Y., July 6, 2022 /PRNewswire/ -- IBM (NYSE: IBM) today announced it has acquired Databand.ai, a leading provider of data observability software that helps organizations fix issues with their data, including errors, pipeline failures and poor quality — before it impacts their bottom line. Today's news further strengthens IBM's software portfolio across data, AI and automation to address the full spectrum of observability and helps businesses ensure that trustworthy data is being put into the right hands of the right users at the right time.


Databand.ai is IBM's fifth acquisition in 2022 as the company continues to bolster its hybrid cloud and AI skills and capabilities. IBM has acquired more than 25 companies since Arvind Krishna became CEO in April 2020.

As the volume of data continues to grow at an unprecedented pace, organizations are struggling to manage the health and quality of their data sets, which is necessary to make better business decisions and gain a competitive advantage. A rapidly growing market opportunity, data observability is quickly emerging as a key solution for helping data teams and engineers better understand the health of data in their system and automatically identify, troubleshoot and resolve issues, like anomalies, breaking data changes or pipeline failures, in near real-time. According to Gartner, poor data quality costs organizations an average of $12.9 million every year. To help mitigate this challenge, the data observability market is poised for strong growth.[1]

Data observability takes traditional data operations to the next level by using historical trends to compute statistics about data workloads and data pipelines directly at the source, determining if they are working, and pinpointing where any problems may exist. When combined with a full stack observability strategy, it can help IT teams quickly surface and resolve issues from infrastructure and applications to data and machine learning systems.
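
The statistical core of that idea can be sketched very simply: compare a pipeline's current metrics against their own history and flag outliers. The Python sketch below is a minimal illustration, not Databand.ai's implementation; it applies a z-score test to daily row counts, and the numbers are invented.

    from statistics import mean, stdev

    def is_anomalous(history, today, threshold=3.0):
        """Flag today's value if it deviates from the historical mean
        by more than `threshold` standard deviations."""
        if len(history) < 2:
            return False  # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today != mu
        return abs(today - mu) / sigma > threshold

    # Daily row counts ingested by a pipeline over the past week
    history = [98_200, 101_500, 99_800, 100_900, 102_300, 99_100, 100_400]
    print(is_anomalous(history, today=100_700))  # False -- an ordinary day
    print(is_anomalous(history, today=12_000))   # True  -- likely missing data upstream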

Databand.ai's open and extendable approach allows data engineering teams to integrate easily and gain observability across their data infrastructure. This acquisition will unlock more resources for Databand.ai to expand its observability capabilities for broader integrations across more of the open source and commercial solutions that power the modern data stack. Enterprises will also have full flexibility in how they run Databand.ai, whether as a service (SaaS) or as a self-hosted software subscription.

The acquisition of Databand.ai builds on IBM's research and development investments as well as strategic acquisitions in AI and automation. By using Databand.ai with IBM Observability by Instana APM and IBM Watson Studio, IBM is well-positioned to address the full spectrum of observability across IT operations.

For example, Databand.ai capabilities can alert data teams and engineers when the data they are using to fuel an analytics system is incomplete or missing. In common cases where data originates from an enterprise application, Instana can then help users quickly explain exactly where the missing data originated from and why an application service is failing. Together, Databand.ai and IBM Instana provide a more complete and explainable view of the entire application infrastructure and data platform system, which can help organizations prevent lost revenue and reputation.

"Our clients are data-driven enterprises who rely on high-quality, trustworthy data to power their mission-critical processes. When they don't have access to the data they need in any given moment, their business can grind to a halt," said Daniel Hernandez, General Manager for Data and AI, IBM. "With the addition of Databand.ai, IBM offers the most comprehensive set of observability capabilities for IT across applications, data and machine learning, and is continuing to provide our clients and partners with the technology they need to deliver trustworthy data and AI at scale."

Data observability solutions are also a key part of an organization's broader data strategy and architecture. The acquisition of Databand.ai further extends IBM's existing data fabric solution  by helping ensure that the most accurate and trustworthy data is being put into the right hands at the right time – no matter where it resides.

"You can't protect what you can't see, and when the data platform is ineffective, everyone is impacted –including customers," said Josh Benamram, Co-Founder and CEO, Databand.ai. "That's why global brands such as FanDuel, Agoda and Trax Retail already rely on Databand.ai to remove bad data surprises by detecting and resolving them before they create costly business impacts. Joining IBM will help us scale our software and significantly accelerate our ability to meet the evolving needs of enterprise clients."

Headquartered in Tel Aviv, Israel, Databand.ai employees will join IBM Data and AI, further building on IBM's growing portfolio of Data and AI products, including its IBM Watson capabilities and IBM Cloud Pak for Data. Financial details of the deal were not disclosed. The acquisition closed on June 27, 2022.

To learn more about Databand.ai and how this acquisition enhances IBM's data fabric solution and builds on its full stack of observability software, you can read our blog about the news or visit here: https://www.ibm.com/analytics/data-fabric.

About Databand.ai

Databand.ai is a product-driven technology company that provides a proactive data observability platform, which empowers data engineering teams to deliver reliable and trustworthy data. Databand.ai removes bad data surprises such as data incompleteness, anomalies, and breaking data changes by detecting and resolving issues before they create costly business impacts. Databand.ai's proactive approach ties into all stages of your data pipelines, beginning with your source data, through ingestion, transformation, and data access. Databand.ai serves organizations throughout the globe, including some of the world's largest companies in entertainment, technology, and communications. Our focus is on enabling customers to extract the maximum value from their strategic data investments. Databand.ai is backed by leading VCs Accel, Blumberg Capital, Lerer Hippeau, Differential Ventures, Ubiquity Ventures, Bessemer Venture Partners, Hyperwise, and F2. To learn more, visit www.databand.ai.

About IBM

IBM is a leading global hybrid cloud and AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to affect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com.

Media Contact:
Sarah Murphy
IBM Communications
Srmurphy@us.ibm.com

[1] Source: Smarter with Gartner, "How to Strengthen Your Data Quality," Manasi Sakpal, [July 14, 2021]

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.


SOURCE IBM

Wed, 06 Jul 2022 00:58:00 -0500 en-US text/html https://finance.yahoo.com/news/ibm-aims-capture-growing-market-120000648.html