Learning from failure is a hallmark of the technology business. Nick Baker, a 37-year-old system architect at Microsoft, knows that well. A British transplant at the software giant's Silicon Valley campus, he went from failed project to failed project in his career. He worked on such dogs as Apple Computer's defunct video card business, 3DO's failed game consoles, a chip startup that screwed up a deal with Nintendo, the never-successful WebTV and Microsoft's canceled Ultimate TV satellite TV recorder.
But Baker finally has a hot seller with the Xbox 360, Microsoft's video game console launched worldwide last holiday season. The adventure on which he embarked four years ago would ultimately prove that failure is often the best teacher. His new gig would once again provide copious evidence that flexibility and understanding of detailed customer needs will beat a rigid business model every time. And so far the score is Xbox 360, one, and the delayed PlayStation 3, nothing.
The Xbox 360 console is Microsoft's living room Trojan horse, purchased as a game box but capable of so much more in the realm of digital entertainment. Since the day after Microsoft terminated the Ultimate TV box in February 2002, Baker has been working on the Xbox 360 silicon architecture team at Microsoft's campus in Mountain View, CA. He is one of the 3DO survivors who now gets a shot at revenge against the Japanese companies that vanquished his old firm.
"It feels good," says Baker. "I can play it at home with the kids. It's family-friendly, and I don't have to play on the Nintendo anymore."
Baker is one of the people behind the scenes who pulled together the Xbox 360 console by engineering some of the most complicated chips ever designed for a consumer entertainment device. The team labored for years and made critical decisions that enabled Microsoft to beat Sony and Nintendo to market with a new box, despite a late start with the Xbox in the previous product cycle. Their story, captured here and in a forthcoming book by the author of this article, illustrates the ups and downs in any big project.
When Baker and his pal Jeff Andrews joined games programmer Mike Abrash in early 2002, they had clear marching orders. Their bosses — Microsoft CEO Steve Ballmer; Robbie Bach, running the Xbox division; Xbox hardware chief Todd Holmdahl; Greg Gibson, head of Xbox 360 system architecture; and silicon chief Larry Yang — all dictated what Microsoft needed this time around.
They couldn't be late. They had to make hardware that could become much cheaper over time, and they had to pack as much performance into a game console as they could without overheating the box.
The group of silicon engineers started first among the 2,000 people in the Xbox division on a project that Baker had code-named Trinity. But they couldn't use that name, because someone else at Microsoft had taken it. So they named it Xenon, for the colorless and odorless gas, because it sounded cool enough. Their first order of business was to study computing architectures, from those of the best supercomputers to those of the most power-efficient portable gadgets. Although Microsoft had chosen Intel and NVIDIA to make the chips for the original Xbox, the engineers now talked to a broad spectrum of semiconductor makers.
"For us, 2002 was about understanding what the technology could do," says Greg Gibson, system designer.
Sony teamed up with IBM and Toshiba to create a full-custom microprocessor from the ground up. They planned to spend $400 million developing the Cell architecture and even more fabricating the chips. Microsoft didn't have the time or the chip engineers to match an effort on that scale, but Todd Holmdahl and Larry Yang saw a chance to beat Sony. They could marshal a host of virtual resources and create a semicustom design that combined both off-the-shelf technology and their own ideas for game hardware. Microsoft would lead the integration of the hardware, own the intellectual property, set the cost-reduction schedules, and manage its vendors closely.
They believed this approach would get them to market by 2005, which was when they estimated Sony would be ready with the PlayStation 3. (As it turned out, Microsoft's dreams were answered when Sony, in March, postponed the PlayStation 3 launch until November.)
More important, owning the chips' intellectual property could help Microsoft dramatically cut the costs that had plagued the original Xbox. Microsoft had lost an estimated $3.7 billion over four years, or a whopping $168 per box. By cutting costs, Microsoft could erase a lot of red ink.
Baker and Andrews quickly decided they wanted to create a balanced design, trading off power efficiency and performance. So they envisioned a multicore microprocessor, one with as many as 16 cores — or miniprocessors — on one chip. They wanted a graphics chip with 60 shaders, or parallel processors for rendering distinct features in graphic animations.
Laura Fryer, manager of the Xbox Advanced Technology Group in Redmond, WA, solicited feedback on the new microprocessor. She said game developers were wary of managing multiple software threads associated with multiple cores, because the switch created a juggling task they didn't have to do on the original Xbox or the PC. But they appreciated the power efficiency and added performance they could get.
Microsoft's current vendors, Intel and NVIDIA, didn't like the idea that Microsoft would own the IP they created. For Intel, allowing Microsoft to take the x86 design to another manufacturer was as troubling as signing away the rights to Windows would be to Microsoft. NVIDIA was willing to do the work, but if it had to deviate from its road map for PC graphics chips in order to tailor a chip for a game box, then it wanted to get paid for it. Microsoft didn't want to pay that high a price. "It wasn't a good deal," says Jen-Hsun Huang, CEO of NVIDIA. Microsoft had also been through a painful arbitration on pricing for the original Xbox graphics chips.
IBM, on the other hand, had started a chip engineering services business and was perfectly willing to customize a PowerPC design for Microsoft, says Jim Comfort, an IBM vice president. At first IBM didn't believe that Microsoft wanted to work together, given a history of rancor dating back to the DOS and OS/2 operating systems in the 1980s. Moreover, IBM was working for Microsoft rivals Sony and Nintendo. But Microsoft pressed IBM for its views on multicore chips and discovered that Big Blue was ahead of Intel in thinking about these kinds of designs.
When Bill Adamec, a Microsoft program manager, traveled to IBM's chip design campus in Rochester, MN, he did a double take when he arrived at the meeting room where 26 engineers were waiting for him. Although IBM had reservations about Microsoft's schedule, the company was clearly serious.
Meanwhile, ATI Technologies assigned a small team to conceive a proposal for a game console graphics chip. Instead of pulling out a derivative of a PC graphics chip, ATI's engineers decided to design a brand-new console graphics chip that relied on embedded memory to feed a lot of data to the graphics chip while keeping the main data pathway clear of traffic — critical for avoiding bottlenecks that would slow down the system.
By the fall of 2002, Microsoft's chip architects decided they favored the IBM and ATI solutions. They met with Ballmer and Gates, who wanted to be involved in the critical design decisions at an early juncture. Larry Yang recalls, "We asked them if they could stomach a relationship with IBM." Their affirmative answer pleased the team.
By early 2003, the list of potential chip suppliers had been narrowed down. At that point, Robbie Bach, the chief Xbox officer, took his team to a retreat at the Salish Lodge, on the edge of Washington's beautiful Snoqualmie Falls, made famous by the "Twin Peaks" television show. The team hashed out a battle plan. They would own the IP for silicon that could take the costs of the box down quickly. They would launch the box in 2005 at the same time as Sony would launch its box, or even earlier. The last time, Sony had had a 20-month head start with the PlayStation 2. By the time Microsoft sold its first 1.4 million Xboxes, Sony had sold more than 25 million PlayStation 2s.
Those goals fit well with the choice of IBM and ATI for the two pieces of silicon that would account for more than half the cost of the box. Each chip provider moved forward, based on a "statement of work," but Gibson kept his options open, and it would be months before the team finalized a contract. Both IBM and ATI could pull blocks of IP from their existing products and reuse them in the Microsoft chips. Engineering teams from both companies began working on joint projects such as the data pathway that connected the chips. ATI had to make contingency plans, in case Microsoft chose Intel over IBM, and IBM also had to consider the possibility that Microsoft might choose NVIDIA.
Through the summer, Microsoft executives and marketers created detailed plans for the console launch. They decided to build security into the microprocessor to prevent hacking, which had proved to be a major embarrassment on the original Xbox. Marketers such as David Reid all but demanded that Microsoft try to develop the new machine in a way that would allow the games for the original Xbox to run on it. So-called backward compatibility wasn't necessarily exploited by customers, but it was a big factor in deciding which box to buy. And Bach insisted that Microsoft had to make gains in Japan and Europe by launching in those regions at the same time as in North America.
For a period in July 2003, Bob Feldstein, the ATI vice president in charge of the Xenon graphics chip, thought NVIDIA had won the deal, but in August Microsoft signed a deal with ATI and announced it to the world. The ATI chip would have 48 shaders, or processors that would handle the nuances of color shading and surface features on graphics objects, and would come with 10 Mbytes of embedded memory.
IBM followed with a contract signing a month later. The deal was more complicated than ATI's, because Microsoft had negotiated the right to take the IBM design and have it manufactured in an IBM-licensed foundry being built by contract chip maker Chartered Semiconductor. The chip would have three cores and run at 3.2 GHz. It was a little short of the 3.5 GHz that IBM had originally pitched, but it wasn't off by much.
By October 2003, the entire Xenon team had made its pitch to Gates and Ballmer. They faced some tough questions. Gates wanted to know if there was any chance the box would run the complete Windows operating system. The top executives ended up giving the green light to Xenon without a Windows version.
The ranks of Microsoft's hardware team swelled to more than 200, with half of the team members working on silicon integration. Many of these people were like Baker and Andrews, stragglers who had come from failed projects such as 3DO and WebTV. About 10 engineers worked on "Ana," a Microsoft video encoder chip, while others managed the schedule and cost reduction with IBM and ATI. Others supported suppliers, such as Silicon Integrated Systems, the provider of the "south bridge," the communications and input/output chip. The rest of the team helped handle relationships with vendors for the other 1,700 parts in the game console.
Ilan Spillinger headed the IBM chip program, which carried the code name Waternoose, after the spiderlike creature from the film "Monsters, Inc." He supervised IBM's chief engineer, Dave Shippy, and worked closely with Microsoft's Andrews on every aspect of the design program.
Everything happened in parallel. For much of 2003, a team of industrial designers created the look and feel of the box. They tested the design on gamers, and the feedback suggested that the design seemed like something either Apple or Sony had created. The marketing team decided to call the machine the Xbox 360, because it put the gamer at the center. A small software team led by Tracy Sharp developed the operating system in Redmond. Microsoft started investing heavily in games. By February 2004, Microsoft sent game developers the first development kits, which were based on Apple Power Mac G5 computers with PowerPC processors similar to the console's planned CPU. And in early 2004, Greg Gibson's evaluation team began testing subsystems to make sure they would all work together when the final design came together.
IBM assigned 421 engineers from six or seven sites to the project, which was a proving ground for its design services business. The effort paid off, with an early test chip that came out in August 2004. With that chip, Microsoft was able to begin debugging the operating system. ATI taped out its first design in September 2004, and IBM taped out its full chip in October 2004. Both chips ran game code early on, which was good, considering that it's very hard to get chips working at all when they first come out of the factory.
IBM executed without many setbacks. The company fixed bugs with two revisions of the chip's layers and was able to debug the design in the factory quickly, because IBM's fab engineers could work on one part of the chip while the Chartered engineers debugged a different part. They fed the information to each other, speeding the cycle of revisions. By Jan. 30, 2005, IBM taped out the final version of the microprocessor.
ATI, meanwhile, had a more difficult time. The company had assigned 180 engineers to the project. Although games ran on the chip early, problems came up in the lab. Feldstein said that in one game, one frame of animation would freeze as every other frame went by. It took six weeks to uncover the bug and find a fix. Delays in debugging threatened to throw the beta-development-kit program off schedule. That meant thousands of game developers might not get the systems they needed on time. If that happened, the Xbox 360 might launch without enough games, a disaster in the making.
The pressure was intense. But Neil McCarthy, a Microsoft engineer in Mountain View, designed a modification of the metal layers of the graphics chip. By doing so, he enabled Microsoft to get working chips from the interim design. ATI's foundry, Taiwan Semiconductor Manufacturing Co., churned out enough chips to seed the developer systems. The beta kits went out in the spring of 2005.
Meanwhile, Microsoft's brass panicked at the prospect that Sony would trump the Xbox 360 by coming out with more memory in the PlayStation 3. So in the spring of 2005, Microsoft made what would become a fateful decision. It decided to double the amount of memory in the box, from 256 Mbytes to 512 Mbytes of graphics Double Data Rate 3 (GDDR3) chips. The decision would cost Microsoft $900 million over five years, so the company had to pare back spending in other areas to stay on its profit targets.
Microsoft started tying up all the loose ends. It rehired Seagate Technology, which it had hired for the original Xbox, to make hard disk drives for the box, but this time Microsoft decided to have two SKUs — one with a hard drive, for the enthusiasts, and one without, for the budget-conscious. It brought aboard both Flextronics and Wistron, the current makers of the Xbox, as contract manufacturers. But it also laid plans to have Celestica build a third factory for building the Xbox 360.
Just as everyone started to worry about the schedule going off course, ATI spun out the final graphics chip design in mid-July 2005. Everyone breathed a sigh of relief, and they moved on to the tough work of ramping up manufacturing. There was enough time for both ATI and IBM to build a stockpile of chips for the launch, which was set for Nov. 22 in North America, Dec. 2 in Europe and Dec. 10 in Japan.
Flextronics debugged the assembly process first. Nick Baker traveled to China to debug the initial boxes as they came off the line. Although assembly was scheduled to start in August, it didn't get started until September. Because the machines were being built in southern China, they had to be shipped over a period of six weeks by boat to the regions. Each factory could build only as many as 120,000 machines a week, running at full tilt. The slow start, combined with the multiregion launch, created big risks for Microsoft.
The hardware team was on pins and needles. The most complicated chips came in on time and were remarkable achievements. Typically, the initial design of chips this complicated takes more than two years, but both companies were actually manufacturing inside that window.
Then something unexpected hit. Both Samsung and Infineon Technologies had committed to making the GDDR3 memory for Microsoft. But some of Infineon's chips fell short of the 700 MHz specified by Microsoft, and using such chips could have slowed games down noticeably. Microsoft's engineers decided to sort the chips, setting aside the subpar ones. Because 700-MHz GDDR3 chips were just ramping up, there was no way to get more. Each system used eight chips, and the resulting shortage constrained the supply of Xbox 360s.
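The sorting step the engineers settled on is a classic binning problem, and can be sketched in a few lines of Python. The tested frequencies below are invented for illustration; only the 700-MHz spec and the eight-chips-per-console figure come from the article.

```python
# Illustrative sketch of memory-chip binning. All lot data is made up;
# only SPEC_MHZ (700 MHz) and CHIPS_PER_CONSOLE (8) come from the article.
SPEC_MHZ = 700
CHIPS_PER_CONSOLE = 8

def bin_chips(tested_mhz):
    """Split a lot of GDDR3 chips into usable vs. subpar by tested speed."""
    usable = [f for f in tested_mhz if f >= SPEC_MHZ]
    subpar = [f for f in tested_mhz if f < SPEC_MHZ]
    return usable, subpar

def consoles_buildable(usable_count):
    """Whole consoles that a given count of in-spec chips can populate."""
    return usable_count // CHIPS_PER_CONSOLE

lot = [712, 695, 704, 700, 688, 710, 701, 699, 705, 703, 700, 702]
usable, subpar = bin_chips(lot)
print(len(usable), len(subpar), consoles_buildable(len(usable)))  # 9 3 1
```

The eight-chips-per-system divisor is what made the shortage bite: every subpar chip in a lot reduced the pool, but supply fell in whole-console increments.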
Microsoft blamed the resulting shortfall of Xbox 360s on a variety of component shortages. Some users complained of overheating systems. But overall, the company said, the launch was still a great achievement. In its first holiday season, Microsoft sold 1.5 million Xbox 360s, compared to 1.4 million original Xboxes in the holiday season of 2001. But the shortage continued past the holidays.
Leslie Leland, hardware evaluation director, says she felt "terrible" about the shortage and that Microsoft would strive to get a box into the hands of every consumer who wanted one. But Greg Gibson, system designer, says that Microsoft could have worse problems on its hands than a shortage. The IBM and ATI teams had outdone themselves.
The project was by far the most successful Nick Baker had ever worked on. One night, hoisting a beer and looking at a finished console, he said it felt good.
J Allard, the head of the Xbox platform business, praised the chip engineers such as Baker: "They were on the highest wire with the shortest net."
This story first appeared in the May issue of Electronic Business magazine.
As we exited the isolation economy last year, we introduced supercloud as a term to describe something new that was happening in the world of cloud computing.
In this Breaking Analysis, we address the ten questions we get asked most frequently about supercloud:
1. In an industry full of hype and buzzwords, why does anyone need a new term?
2. Aren’t hyperscalers building out superclouds? We’ll try to answer why the term supercloud connotes something different from a hyperscale cloud.
3. We’ll talk about the problems superclouds solve.
4. We’ll further define the critical aspects of a supercloud architecture.
5. We often get asked: Isn’t this just multicloud? Well, we don’t think so and we’ll explain why.
6. In an earlier episode we introduced the notion of superPaaS – well, isn’t a plain vanilla PaaS already a superPaaS? Again – we don’t think so and we’ll explain why.
7. Who will actually build (and who are the players currently building) superclouds?
8. What workloads and services will run on superclouds?
9. What are some examples of supercloud?
10. Finally, we’ll answer what you can expect next on supercloud from SiliconANGLE and theCUBE.
Late last year, ahead of Amazon Web Services Inc.’s re:Invent conference, we were inspired by a post from Jerry Chen called Castles in the Cloud. In that blog he introduced the idea that there were submarkets emerging in cloud that presented opportunities for investors and entrepreneurs, that the big cloud vendors weren’t going to suck all the value out of the industry. And so we introduced this notion of supercloud to describe what we saw as a value layer emerging above the hyperscalers’ “capex gift.”
It turns out that we weren't the only ones using the term, as both Cornell and MIT have used the phrase in similar, though not identical, contexts.
The point is something new was happening in the AWS and other ecosystems. It was more than infrastructure as a service and platform as a service and wasn’t just software as a service running in the cloud.
It was a new architecture that integrates infrastructure, unique platform attributes and software to solve new problems that the cloud vendors in our view weren’t addressing by themselves. It seemed to us that the ecosystem was pursuing opportunities across clouds that went beyond conventional implementations of multi-cloud.
In addition, we felt this trend pointed to structural change going on at the industry level that supercloud metaphorically was highlighting.
So that’s the background on why we felt a new catchphrase was warranted. Love it or hate it… it’s memorable.
To that last point about structural industry transformation: Andy Rappaport is sometimes credited with identifying the shift from the vertically integrated mainframe era to the horizontally fragmented personal computer- and microprocessor-based era in his Harvard Business Review article from 1991.
In fact, it was actually David Moschella, an International Data Corp. senior vice president at the time, who introduced the concept in 1987, a full four years before Rappaport's article was published. Moschella, along with IDC's head of research Will Zachmann, saw clearly that Intel Corp., Microsoft Corp., Seagate Technology and others would displace the dominant system vendors.
In fact, Zachmann accurately predicted in the late 1980s the demise of IBM, well ahead of its epic downfall when the company lost approximately 75% of its value. At an IDC Briefing Session (now called Directions), Moschella put forth a graphic that looked similar to the first two concepts on the chart below.
We don’t have to review the shift from IBM as the epicenter of the industry to Wintel – that’s well-understood.
What isn’t as widely discussed is a structural concept Moschella put out in 2018 in his book “Seeing Digital,” which introduced the idea of the Matrix shown on the righthand side of this chart. Moschella posited that a new digital platform of services was emerging built on top of the internet, hyperscale clouds and other intelligent technologies that would define the next era of computing.
He used the term matrix because the conceptual depiction included horizontal technology rows, like the cloud… but for the first time included connected industry columns. Moschella pointed out that historically, industry verticals had a closed value chain or stack of research and development, production, distribution, etc., and that expertise in that specific vertical was critical to success. But now, because of digital and data, for the first time, companies were able to jump industries and compete using data. Amazon in content, payments and groceries… Apple in payments and content… and so forth. Data was now the unifying enabler and this marked a changing structure of the technology landscape.
So the term supercloud is meant to imply more than running in hyperscale clouds. Rather, it’s a new type of digital platform comprising a combination of multiple technologies – enabled by cloud scale – with new industry participants from financial services, healthcare, manufacturing, energy, media and virtually all industries. Think of it as kind of an extension of “every company is a software company.”
Basically, thanks to the cloud, every company in every industry now has the opportunity to build their own supercloud. We’ll come back to that.
Let’s address what’s different about superclouds relative to hyperscale clouds.
This one’s pretty straightforward and obvious. Hyperscale clouds are walled gardens where they want your data in their cloud and they want to keep you there. Sure, every cloud player realizes that not all data will go to their cloud, so they’re meeting customers where their data lives with initiatives such Amazon Outposts and Azure Arc and Google Anthos. But at the end of the day, the more homogeneous they can make their environments, the better control, security, costs and performance they can deliver. The more complex the environment, the more difficult to deliver on their promises and the less margin left for them to capture.
Will the hyperscalers get more serious about cross cloud services? Maybe, but they have plenty of work to do within their own clouds. And today at least they appear to be providing the tools that will enable others to build superclouds on top of their platforms. That said, we never say never when it comes to companies such as AWS. And for sure we see AWS delivering more integrated digital services such as Amazon Connect to solve problems in a specific domain, call centers in this case.
We’ve all seen the stats from IDC or Gartner or whomever that customers on average use more than one cloud. And we know these clouds operate in disconnected silos for the most part. That’s a problem because each cloud requires different skills. The development environment is different, as is the operating environment, with different APIs and primitives and management tools that are optimized for each respective hyperscale cloud. Their functions and value props don’t extend to their competitors’ clouds. Why would they?
As a result, there’s friction when moving between different clouds. It’s hard to share data, move work, secure and govern data, and enforce organizational policies and edicts across clouds.
Supercloud is an architecture designed to create a single environment that enables management of workloads and data across clouds in an effort to take out complexity, accelerate application development, streamline operations and share data safely irrespective of location.
Pretty straightforward, but nontrivial, which is why we often ask company executives whether stock buybacks and dividends will yield as much return as building out superclouds that solve really specific problems and create differentiable value for their firms.
Let’s dig in a bit more to the architectural aspects of supercloud. In other words… what are the salient attributes that define supercloud?
First, a supercloud runs a set of specific services, designed to solve a unique problem. Superclouds offer seamless, consumption-based services across multiple distributed clouds.
Supercloud leverages the underlying cloud-native tooling of a hyperscale cloud but it’s optimized for a specific objective that aligns with the problem it’s solving. For example, it may be optimized for cost or low latency or sharing data or governance or security or higher performance networking. But the point is, the collection of services delivered is focused on unique value that isn’t being delivered by the hyperscalers across clouds.
A supercloud abstracts the underlying, siloed primitives of each hyperscale cloud's native PaaS layer and, using its own platform-as-a-service tooling, creates a common experience across clouds for developers and users. In other words, the superPaaS ensures that the developer and user experience is identical, irrespective of which cloud or location runs the workload.
And it does so in an efficient manner, meaning it has the metadata knowledge and management that can optimize for latency, bandwidth, recovery, data sovereignty or whatever unique value the supercloud is delivering for the specific use cases in the domain.
A supercloud comprises a superPaaS capability that allows ecosystem partners to add incremental value on top of the supercloud platform to fill gaps, accelerate features and innovate. A superPaaS can use open tooling but applies those development tools to create a unique and specific experience supporting the design objectives of the supercloud.
Supercloud services can be infrastructure services, application services, data services, security services, user services and so on, designed and packaged to bring customers unique value that, again, the hyperscalers are not delivering across clouds or on-premises.
Finally, these attributes are highly automated where possible. Superclouds take a page from hyperscalers in terms of minimizing human intervention wherever possible, applying automation to the specific problem they’re solving.
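To make the abstraction attribute above concrete, here is a minimal sketch of the common-experience idea: one storage API for developers, with per-cloud adapters underneath and a metadata index that records placement. Every class name, the placement policy and the in-memory adapters are invented for illustration; real adapters would wrap each hyperscaler's actual SDK.

```python
# Hypothetical sketch of a superPaaS-style storage facade. Names and the
# placement policy are invented; the per-cloud adapters are in-memory fakes.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Per-cloud primitive: each hyperscaler exposes its own flavor of this."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class FakeAwsStore(ObjectStore):      # stand-in for an S3-backed adapter
    def __init__(self): self._blobs = {}
    def put(self, key, data): self._blobs[key] = data
    def get(self, key): return self._blobs[key]

class FakeAzureStore(ObjectStore):    # stand-in for a Blob-Storage-backed adapter
    def __init__(self): self._blobs = {}
    def put(self, key, data): self._blobs[key] = data
    def get(self, key): return self._blobs[key]

class SuperPaasStorage:
    """One API for developers; a placement policy decides which cloud holds the data,
    and a metadata index remembers where each object lives."""
    def __init__(self, clouds, placement):
        self._clouds = clouds        # cloud name -> ObjectStore adapter
        self._placement = placement  # key -> cloud name (the 'optimized for' policy)
        self._index = {}             # metadata: where each key was placed
    def put(self, key, data):
        cloud = self._placement(key)
        self._clouds[cloud].put(key, data)
        self._index[key] = cloud
    def get(self, key):
        return self._clouds[self._index[key]].get(key)

# Example policy: keep EU-prefixed data on one cloud for sovereignty reasons.
store = SuperPaasStorage(
    {"aws": FakeAwsStore(), "azure": FakeAzureStore()},
    placement=lambda key: "azure" if key.startswith("eu/") else "aws",
)
store.put("eu/customer-42", b"sovereignty-scoped record")
store.put("us/customer-7", b"us record")
print(store.get("eu/customer-42"))  # prints b'sovereignty-scoped record'
```

The developer calls one `put`/`get` API and never sees which cloud is underneath; the policy function is where a real supercloud would encode its optimization target, whether that is latency, cost or data sovereignty.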
What we’d say to that is: Perhaps, but not really. Call it multicloud 2.0 if you want to invoke a commonly used format. But as Dell’s Chuck Whitten proclaimed, multicloud by design is different than multicloud by default.
What he means is that, to date, multicloud has largely been a symptom of multivendor… or of M&A. And when you look at most so-called multicloud implementations, you see things like an on-prem stack wrapped in a container and hosted on a specific cloud.
Or increasingly a technology vendor has done the work of building a cloud-native version of its stack and running it on a specific cloud… but historically it has been a unique experience within each cloud with no connection between the cloud silos. And certainly not a common developer experience with metadata management across clouds.
Supercloud sets out to build incremental value across clouds and above hyperscale capex that goes beyond cloud compatibility within each cloud. So if you want to call it multicloud 2.0, that’s fine.
We choose to call it supercloud.
Well, we’d say no. That supercloud and its corresponding superPaaS layer gives the freedom to store, process, manage, secure and connect islands of data across a continuum with a common developer experience across clouds.
Importantly, the sets of services are designed to support the supercloud’s objectives – e.g., data sharing or data protection or storage and retrieval or cost optimization or ultra-low latency, etc. In other words, the services offered are specific to that supercloud and will vary by each offering. OpenShift, for example, can be used to construct a superPaaS but in and of itself isn’t a superPaaS. It’s generic.
The point is that a supercloud and its inherent superPaaS will be optimized to solve specific problems such as low latency for distributed databases or fast backup and recovery and ransomware protection — highly specific use cases that the supercloud is designed to solve for.
Nor is SaaS, by itself, a supercloud. Most SaaS platforms either run in their own cloud or have bits and pieces running in public clouds (analytics, for example). But the cross-cloud services are few and far between, or often nonexistent. We believe SaaS vendors must evolve and adopt supercloud to offer distributed solutions across cloud platforms, stretching out to the near and far edge.
Another question we often get is: Who has a supercloud and who is building a supercloud? Who are the contenders?
Well, most companies that consider themselves cloud players will, we believe, be building superclouds. Above is a common Enterprise Technology Research graphic we like to show, with Net Score or spending momentum on the Y axis and Overlap or pervasiveness in the ETR surveys on the X axis. This is from the April survey of well over 1,000 chief information officers and information technology buyers. We've chosen a number of players we think are in the supercloud mix, and we've included the hyperscalers because they are the enablers.
We’ve added some of those nontraditional industry players we see building superclouds such as Capital One, Goldman Sachs and Walmart, in deference to Moschella’s observation about verticals. This goes back to every company being a software company. And rather than pattern-matching an outdated SaaS model we see a new industry structure emerging where software and data and tools specific to an industry will lead the next wave of innovation via the buildout of intelligent digital platforms.
We’ve talked a lot about Snowflake Inc.’s Data Cloud as an example of supercloud, as well as the momentum of Databricks Inc. (not shown above). VMware Inc. is clearly going after cross-cloud services. Basically every large company we see is either pursuing supercloud initiatives or thinking about it. Dell Technologies Inc., for example, showed Project Alpine at Dell Technologies World – that’s a supercloud in development. Snowflake is introducing a new app-dev capability based on its superPaaS (our term, of course; it doesn’t use the phrase). So are MongoDB Inc., Couchbase Inc., Nutanix Inc., Veeam Software, CrowdStrike Holdings Inc., Okta Inc. and Zscaler Inc. Even the likes of Cisco Systems Inc. and Hewlett Packard Enterprise Co., in our view, will be building superclouds.
Ironically, as an aside, Fidelma Russo, HPE’s chief technology officer, said on theCUBE that she wasn’t a fan of cloaking mechanisms. But when we spoke to HPE’s head of storage services, Omer Asad, we felt his team was clearly headed in a direction that we would consider supercloud. It could be semantics, or it could be that parts of HPE are in a better position to execute on supercloud. Storage is an obvious starting point. The same can be said of Dell.
Listen to Fidelma Russo explain her aversion to building a manager of managers.
And we’re seeing emerging companies like Aviatrix Systems Inc. (network performance), Starburst Data Inc. (self-service analytics for distributed data), Clumio Inc. (data protection – not supercloud today but working on it) and others building versions of superclouds that solve a specific problem for their customers. And we’ve spoken to independent software vendors such as Adobe Systems Inc., Automatic Data Processing LLC and UiPath Inc., which are all looking at new ways to go beyond the SaaS model and add value within cloud ecosystems, in particular building data services that are unique to their value proposition and will run across clouds.
So yeah – pretty much every tech vendor with any size or momentum, along with new industry players coming out of hiding, is competing… building superclouds. Many look a lot like Moschella’s matrix: machine intelligence and artificial intelligence and blockchains and virtual reality and gaming… all enabled by the internet and hyperscale clouds.
It’s moving fast and it’s the future, in our opinion, so don’t get too caught up in the past or you’ll be left behind.
We’ve given many examples in the past, but let’s try to be a bit more specific. Below we cite a few, answering two questions in one section: What workloads and services will run in superclouds, and what are some examples?
Analytics. Snowflake is the furthest along with its data cloud in our view. It’s a supercloud optimized for data sharing, governance, query performance, security, ecosystem enablement and ultimately monetization. Snowflake is now bringing in new data types and open-source tooling and it ticks the attribute boxes on supercloud we laid out earlier.
Converged databases running transaction and analytics workloads. Take a look at what Couchbase is doing with Capella: stretching the cloud to the edge with Arm-based platforms and optimizing for low latency across clouds and out to the edge.
Document database workloads. Look at MongoDB – a developer-friendly platform that, with Atlas, is moving to a supercloud model: running document databases very efficiently, accommodating analytic workloads and creating a common developer experience across clouds.
Data science workloads. For example, Databricks is bringing a common experience for data scientists and data engineers driving machine intelligence into applications and fixing the broken data lake with the emergence of the lakehouse.
General-purpose workloads. For example, VMware’s domain. Very clearly there’s a need to create a common operating environment across clouds and on-prem and out to the edge and VMware is hard at work on that — managing and moving workloads, balancing workloads and being able to recover very quickly across clouds.
Network routing. This is the primary focus of Aviatrix, building what we consider a supercloud and optimizing network performance and automating security across clouds.
Industry-specific workloads. For example, Capital One announced its cost-optimization platform for Snowflake, piggybacking on Snowflake’s supercloud. We believe it’s going to test that concept outside its own organization and expand across other clouds as Snowflake grows its business beyond AWS. Walmart Inc. is working with Microsoft to create an on-prem-to-Azure experience – yes, that counts. We’ve written about what Goldman is doing, and you can bet dollars to donuts that Oracle Corp. will be building a supercloud in healthcare with its Cerner acquisition.
Supercloud is everywhere you look. Sorry, naysayers. It’s happening.
With all the industry buzz and debate about the future, John Furrier and the team at SiliconANGLE have decided to host an event on supercloud. We’re motivated and inspired to further the conversation. TheCUBE on Supercloud is coming.
On Aug. 9 out of our Palo Alto studios we’ll be running a live program on the topic. We’ve reached out to a number of industry participants — VMware, Snowflake, Confluent, Sky High Security, Hashicorp, Cloudflare and Red Hat — to get the perspective of technologists building superclouds.
And we’ve invited a number of vertical industry participants in financial services, healthcare and retail that we’re excited to have on along with analysts, thought leaders and investors.
We’ll have more details in the coming weeks, but for now if you’re interested please reach out to us with how you think you can advance the discussion and we’ll see if we can fit you in.
So mark your calendars and stay tuned for more information.
Thanks to Alex Myerson, who does the production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.
Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.
Email email@example.com, DM @dvellante on Twitter and comment on our LinkedIn posts.
Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at firstname.lastname@example.org.
All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.
Modern-day hard disk drives (HDDs) hold an interesting juxtaposition: they are simultaneously the pinnacle of mass-produced, high-precision mechanical engineering and the most scorned storage technology. Despite being given derogatory names such as ‘spinning rust’, most of these drives manage a lifetime of spinning ultra-smooth magnetic storage platters only nanometers removed from the read and write heads, whose arms twitch around on actuators that position each head precisely above the correct microscopic magnetic track within milliseconds.
Despite decade after decade of ever more of these magnetic tracks being crammed onto every square millimeter of these platters, and the simple read and write heads being replaced every few years by more complicated ones, hard drive reliability has gone up. The second-quarter report from storage company Backblaze on its HDDs shows that the annual failure rate has dropped significantly compared to last year.
The question is whether this means that HDDs stand to become only more reliable over time, and how upcoming technologies like MAMR and HAMR may affect these metrics over the coming decades.
The first HDDs were sold in the 1950s, with IBM’s IBM 350 storing a total of 3.75 MB on fifty 24″ (610 mm) discs, inside a cabinet measuring 152x172x74 cm. Fast-forward to today, and a top-of-the-line HDD in 3.5″ form factor (~14.7×10.2×2.6 cm) can store up to around 18 TB with conventional (non-shingled) recording.
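As a quick sanity check on those figures, the jump from the IBM 350 to a modern 18 TB drive works out to roughly a factor of five million in capacity, in a box thousands of times smaller. A back-of-the-envelope sketch in Python, using only the dimensions quoted above:

```python
# Rough comparison of the IBM 350 (1956) with a modern 18 TB 3.5" drive.
# All figures come from the text above; the arithmetic is just a sanity check.

ibm350_capacity_mb = 3.75                    # total capacity, MB
modern_capacity_tb = 18                      # conventional (non-shingled) recording

ibm350_bytes = ibm350_capacity_mb * 1e6
modern_bytes = modern_capacity_tb * 1e12

capacity_ratio = modern_bytes / ibm350_bytes  # ~4.8 million times the storage

# Cabinet vs. 3.5" form factor volume (cm^3), dimensions from the text
ibm350_volume = 152 * 172 * 74                # ~1.93 million cm^3
modern_volume = 14.7 * 10.2 * 2.6             # ~390 cm^3

volume_ratio = ibm350_volume / modern_volume  # roughly 5,000 times smaller

print(f"{capacity_ratio:,.0f}x the capacity in a {volume_ratio:,.0f}x smaller box")
```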
Whereas the IBM 350 spun its platters at 1,200 RPM, HDDs of the past decades have focused on shrinking the platters while increasing the spindle speed (5,400 – 15,000 RPM). Other improvements have moved the read and write heads ever closer to the platter surface.
The IBM 1301 DSU (Disk Storage Unit) from 1961 was a major innovation in that it used a separate arm with read and write heads for each platter. It also innovated by using aerodynamic forces to let the arms fly over the platter surface on a cushion of air, enabling a far smaller distance between heads and platter surface.
After 46 years of development, IBM sold its HDD business to Hitachi in 2003. By that time, storage capacity had increased 48,000-fold, in a much smaller volume. Like 29,161 times smaller. Power usage had dropped from over 2.3 kW to around 10 W (for desktop models), while the price per megabyte had dropped from $68,000 to $0.002. At the same time, the number of platters shrank from dozens to at most a couple.
Miniaturization has always been the name of the game, whether it was about mechanical constructs, electronics or computer technology. The hulking, vacuum tube or relay-powered computer monsters of the 1940s and 1950s morphed into less hulking transistor-powered computer systems before turning into today’s sleek, ASIC-powered marvels. At the same time, HDD storage technology underwent a similar change.
The control electronics for HDDs experienced all the benefits of increased use of VLSI circuitry, along with increasingly precise, low-power servo technology. As improvements from materials science enabled lighter, smoother (glass or aluminium) platters with improved magnetic coatings, areal density kept shooting up. With the properties of all the individual components (ASIC packaging, solder alloys, actuators, aerodynamics of HDD arms, etc.) better understood, major revolutions turned into incremental improvements.

Although extreme miniaturization of HDDs has been attempted at least twice, in the form of the HP Kittyhawk microdrive (1.3″) in 1992 and the 1″ Microdrive in 1999, the market would eventually settle on the 3.5″ and 2.5″ form factors. The Microdrive form factor was marketed as an alternative to NAND Flash-based CompactFlash cards, featuring higher capacity and essentially unlimited writes, making it useful for embedded systems.
As happened in other fields, the physical limits on write speeds and random access times would eventually mean that HDDs are most useful where large amounts of storage for little money and high durability are essential. This allowed the HDD market to optimize for desktop and server systems, as well as surveillance and backup (competing with tape).
Although the mechanical parts of an HDD are often considered the weakest spot, there are a number of possible causes, including:
HDDs are given impact ratings for both the powered-down state and operation (platters spinning, heads not parked). If these ratings are exceeded, the actuators that move the arms can be damaged, or the heads can crash onto the platter surface. If these tolerances are not exceeded, normal wear is the most likely cause of failure, which is captured in the manufacturer’s MTBF (Mean Time Between Failures) number.
This MTBF number is derived, as is industry standard, by extrapolating from the wear observed after a certain test period. With the MTBF for HDDs generally quoted as between 100,000 and 1 million hours, testing over the entire period would require the drive to be active for roughly 11 to 114 years. The number thus assumes the drive operates under the recommended operating conditions, as it does at a storage company like Backblaze.
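To make that concrete: under the usual constant-failure-rate (exponential) assumption behind a quoted MTBF — our modeling assumption here, not something drive manufacturers spell out — the spec translates into an annualized failure rate like this:

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days

def afr_from_mtbf(mtbf_hours: float) -> float:
    """Annualized failure rate implied by an MTBF, assuming a constant
    (exponential) failure rate, i.e. ignoring infant mortality and wear-out."""
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# The 100,000 to 1 million hour range quoted above:
for mtbf in (100_000, 1_000_000):
    print(f"MTBF {mtbf:>9,} h -> AFR {afr_from_mtbf(mtbf):.2%}")
```

A 1-million-hour MTBF thus implies an AFR of just under 0.9%, which is in the same ballpark as the fleet-wide rates Backblaze reports.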
Obviously, exposing an HDD to extreme shock (e.g. dropping it on a concrete floor) or extreme power events (power surges, ESD, etc.) will shorten its lifespan. Less obvious are manufacturing flaws, which can occur with any product and are the reason why there is an ‘acceptable failure rate’ for most products.
Despite the great MTBF numbers for HDDs and the obvious efforts by Backblaze to keep each of their nearly 130,000 drives happily spinning along until being retired to HDD Heaven (usually in the form of a mechanical shredder), they reported an annualized failure rate (AFR) of 1.07% for the first quarter of 2020. Happily, this is the lowest failure rate for them since they began to publish these reports in 2013. The Q1 2019 AFR was 1.56%, for example.
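Fleet reports like Backblaze’s derive their AFR from accumulated drive-days of operation rather than from individual drives’ calendar time. A minimal sketch of that calculation (the fleet numbers below are made up for illustration, not Backblaze’s actual data):

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """AFR the way fleet reports like Backblaze's compute it:
    failures per drive-year of accumulated operation."""
    drive_years = drive_days / 365
    return failures / drive_years

# Hypothetical quarter: ~10,000 drives spinning for ~91 days each, 27 failures
afr = annualized_failure_rate(failures=27, drive_days=10_000 * 91)
print(f"AFR: {afr:.2%}")  # about 1.1%
```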
As we have covered previously, during the manufacturing and handling of integrated circuits (ICs), flaws can be introduced that only become apparent later during the product’s lifespan. Over time, issues like electromigration, thermal stress and mechanical stress can cause failures in a circuit, from bond wires inside the IC packaging snapping off, electromigration destroying solder joints as well as circuits inside IC packaging (especially after ESD events).
The mechanical elements of an HDD depend on precise manufacturing tolerances, as well as proper lubrication. In the past, stuck heads (‘stiction’) could be an issue, whereby the properties of the lubricant changed over time to the point where the arms could no longer move out of their parked position. Improved lubricants have more or less solved this issue by now.
Yet every step in a manufacturing process has a certain chance of introducing flaws, which ultimately add up to something that can spoil the nice, shiny MTBF number and land the product on the wrong side of the ‘bathtub curve’ of failure rates. This curve is characterized by an early spike in failures due to serious manufacturing defects, a decline after that, and a rise again in wear-out failures as the end of the MTBF lifespan approaches.
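That bathtub shape can be modeled as the sum of three hazard terms: decaying infant mortality, a constant random-failure floor, and accelerating wear-out. A toy sketch with made-up parameters (not fitted to any real drive population):

```python
import math

def bathtub_hazard(t_years: float) -> float:
    """Toy bathtub-curve hazard rate in failures/year: early defects that
    decay away, a constant random-failure floor, and wear-out ramping up."""
    infant = 0.05 * math.exp(-t_years / 0.5)  # manufacturing defects surfacing early
    random_floor = 0.01                        # steady-state failure floor
    wearout = 0.002 * t_years ** 2             # mechanical wear accelerating
    return infant + random_floor + wearout

for t in (0.1, 1, 3, 5, 8):
    print(f"year {t:>3}: {bathtub_hazard(t):.3f} failures/year")
```

With these illustrative parameters the hazard is high in the first months, bottoms out after a year or so, then climbs again as wear-out takes over.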
HDDs as we know them today are indicative of a mature manufacturing process, with many of the old issues that plagued them over the past decades fixed or mitigated. Relatively major changes, such as the shift to helium-filled drives, have not had a noticeable effect on failure rates so far. Other changes, such as the shift from perpendicular magnetic recording (PMR, or CMR) to heat-assisted magnetic recording (HAMR), should not have a noticeable effect on HDD longevity, barring any issues with the new technology itself.
Basically, HDD technology’s future appears to be boring in all the right ways for anyone who likes to have a lot of storage capacity for little money that should last for at least a solid decade. The basic principle behind HDDs, namely that of storing magnetic orientations on a platter, happens to be one that could essentially be taken down to single molecules. With additions like HAMR, the long-term stability of these magnetic orientations should improve significantly as well.
This is a massive benefit over NAND Flash, which instead uses small capacitors to store charges, and uses a write method that physically damages these capacitors. The physical limits here are much more severe, which has led to ever more complicated constructs, such as quad-level (QLC) Flash, which has to differentiate between 16 possible voltage states in each cell. This complexity has led to QLC-based storage drives being barely faster than a 5,400 RPM HDD in many scenarios, especially when it comes to latency.
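The relation driving that complexity is simple: a cell storing n bits must distinguish 2^n voltage states, so each added bit per cell doubles the number of levels squeezed into the same voltage window:

```python
# Voltage states a NAND cell must distinguish to store n bits per cell.
# Each extra bit doubles the levels, which is why QLC is so much touchier
# (and slower to write) than SLC.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} voltage states")
```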
The first HDD I used in a system of my own was probably a 20 or 30 MB Seagate in the IBM PS/2 (386SX) my father gave me after his work had switched over to new PCs and probably wanted to free up some space in their storage area. Back in the MS-DOS days this was sufficient for DOS, a stack of games, WordPerfect 5.1 and much more. By the end of the ’90s this was of course a laughable amount, and we were talking about gigabytes, not megabytes, when it came to HDDs.
Despite having gone through many PCs and laptops since then, I have ironically only ever had an SSD outright give up and die on me. This, along with data from the industry — such as these Backblaze reports — makes me feel pretty confident that the last HDD won’t spin down for a while yet. Maybe this will change when something like 3D XPoint memory becomes affordable and large enough.
Until then, keep spinning.
Autism is known as a spectrum disorder because every autistic person is different, with unique strengths and challenges.
Varney says many autistic people experienced education as a system that focused on these challenges, which can include social difficulties and anxiety.
He is pleased this is changing, with recent reforms embracing autistic students’ strengths.
But the unemployment rate of autistic people remains disturbingly high. ABS data from 2018 shows 34.1 per cent of autistic people are unemployed – three times higher than that of people with any type of disability and almost eight times that of those without a disability.
“A lot of the time people hear that someone’s autistic and they assume incompetence,” says Varney, who was this week appointed the chair of the Victorian Disability Advisory Council.
“But we have unique strengths, specifically hyper focus, great creativity, and we can think outside the box, which is a great asset in workplaces.”
In Israel, the defence force has a specialist intelligence unit made up exclusively of autistic soldiers, whose skills are deployed in analysing, interpreting and understanding satellite images and maps.
Locally, organisations that actively recruit autistic talent include software giant SAP, Westpac, IBM, ANZ, the Australian Tax Office, Telstra, NAB and PricewaterhouseCoopers.
Chris Pedron is a junior data analyst at Australian Spatial Analytics, a social enterprise that says on its website “neurodiversity is our advantage – our team is simply faster and more precise at data processing”.
He was hired after an informal chat. (Australian Spatial Analytics also often provides interview questions 48 hours in advance.)
Pedron says the traditional recruitment process can work against autistic people because there are a lot of unwritten social cues, such as body language, which he doesn’t always pick up on.
“If I’m going in and I’m acting a bit physically standoffish, I’ve got my arms crossed or something, it’s not that I’m not wanting to be there, it’s just that new social interaction is something that causes anxiety.”
Pedron also finds eye contact uncomfortable and has had to train himself over the years to concentrate on a point on someone’s face.
Australian Spatial Analytics addresses a skills shortage by delivering a range of data services that were traditionally outsourced offshore.
Projects include digital farm maps for the grazing industry, technical documentation for large infrastructure and map creation for land administration.
Pedron has always found it easy to map things out in his head. “A lot of the work done here at ASA is geospatial so having autistic people with a very visual mindset is very much an advantage for this particular job.”
Pedron listens to music on headphones in the office, which helps him concentrate, and stops him from being distracted. He says the simpler and clearer the instructions, the easier it is for him to understand. “The less I have to read between the lines to understand what is required of me the better.”
Australian Spatial Analytics is one of three jobs-focused social enterprises launched by Queensland charity White Box Enterprises.
It has grown from three to 80 employees in 18 months and – thanks to philanthropist Naomi Milgrom, who has provided office space in Cremorne – has this year expanded to Melbourne, enabling Australian Spatial Analytics to create 50 roles for Victorians by the end of the year.
Chief executive Geoff Smith hopes they are at the front of a wave of employers recognising that hiring autistic people can make good business sense.
“Rather than focus on the deficits of the person, focus on the strengths. A quarter of National Disability Insurance Scheme plans name autism as the primary disability, so society has no choice – there’s going to be such a huge number of people who are young and looking for jobs who are autistic. There is a skills shortage as it is, so you need to look at neurodiverse talent.”
In 2017, IBM launched a campaign to hire more neurodiverse (a term that covers a range of conditions including autism, Attention Deficit Hyperactivity Disorder, or ADHD, and dyslexia) candidates.
The initiative was in part inspired by software and data quality engineering services firm Ultranauts, which boasted at an event that it “ate IBM’s lunch at testing by using an all-autistic staff”.
The following year Belinda Sheehan, a senior managing consultant at IBM, was tasked with rolling out a pilot at its client innovation centre in Ballarat.
“IBM is very big on inclusivity,” says Sheehan. “And if we don’t have diversity of thought, we won’t have innovation. So those two things go hand in hand.”
Sheehan worked with Specialisterne Australia, a social enterprise that assists businesses in recruiting and supporting autistic people, to find talent using a non-traditional recruitment process that included a week-long task.
Candidates were asked to work together to find a way for a record shop to connect with customers when the bricks and mortar store was closed due to COVID.
Ten employees were eventually selected. They started in July 2019 and work in roles across IBM, including data analysis, testing, user experience design, data engineering, automation, blockchain and software development. Another eight employees were hired in July 2021.
Sheehan says clients have been delighted with their ideas. “The UX [user experience] designer, for example, comes in with such a different lens. Particularly as we go to artificial intelligence, you need those different thinkers.”
One client said if they had to describe the most valuable contribution to the project in two words it would be “ludicrous speed”. Another said: “automation genius.”
IBM has sought to make the office more inclusive by creating calming, low sensory spaces.
It has formed a business resource group for neurodiverse employees and their allies, with four squads focusing on recruitment, awareness, career advancement and policies and procedures.
And it has hired a neurodiversity coach to work with individuals and managers.
Sheehan says that challenges have included some employees getting frustrated because they did not have enough work.
“These individuals want to come to work and get the work done – they are not going off for a coffee and chatting.”
Increased productivity is a good problem to have, Sheehan says, but as a manager, she needs to come up with ways they can enhance their skills in their downtime.
There have also been difficulties around different communication styles, with staff finding some autistic employees a bit blunt.
Sheehan encourages all staff to do a neurodiversity 101 training course run by IBM.
“Something may be perceived as rude, but we have to turn that into a positive. It’s good to have someone who is direct, at least we all know what that person is thinking.”
Chris Varney is delighted to see neurodiversity programs in some industries but points out that every autistic person has different interests and abilities.
Some are non-verbal, for example, and not all have the stereotypical autism skills that make them excel at data analysis.
“We’ve seen a big recognition that autistic people are an asset to banks and IT firms, but there’s a lot more work to be done,” Varney says.
“We need to see jobs for a diverse range of autistic people.”
The story so far.... In 1975, Ed Roberts invented the Altair personal computer. It was a pain to use until 19-year-old pre-billionaire Bill Gates wrote the first personal computer language. Still, the public didn't care. Then two young hackers -- Steve Jobs and Steve Wozniak -- built the Apple computer to impress their friends. We were all impressed and Apple was a stunning success. By 1980, the PC market was worth a billion dollars. Now, watch on.....
We are nerds.
Most of the people in the industry were young because the guys who had any real experience were too smart to get involved in all these crazy little machines.
It really wasn't that we were going to build billion dollar businesses. We were having a good time.
I thought this was the most fun you could possibly have with your clothes on.
When the personal computer was invented twenty years ago it was just that - an invention - it wasn't a business. These were hobbyists who built these machines and wrote this software to have fun, but that has really changed. Now this is a business, a big business. It just goes to show you that people can be bought.

How the personal computer industry grew from zero to 100 million units is an amazing story. And it wasn't just those early funky companies of nerds and hackers, like Apple, that made it happen. It took the intervention of a company that was trusted by the corporate world.

Big business wasn't interested in the personal computer. In the boardrooms of corporate America a computer still meant something the size of a room that cost at least a hundred thousand dollars. Executives would brag that my mainframe is bigger than your mainframe. The idea of a $2,000 computer that sat on your desk in a plastic box was laughable - that is, until that plastic box had three letters stamped on it: IBM.

IBM was, and is, an American business phenomenon. Over 60 years, Tom Watson and his son, Tom Jr., built what their workers called Big Blue into the top computer company in the world. But IBM made mainframe computers for large companies, not personal computers -- at least not yet. For the PC to be taken seriously by big business, the nerds of Silicon Valley had to meet the suits of corporate America.

IBM never fired anyone, requiring only undying loyalty to the company and a strict dress code. IBM hired conservative hard-workers straight from school. Few IBMers were at the summer of love. Their turn-ons were giant mainframes and corporate responsibility. They worked nine to five and on Saturdays washed the car.

This is intergalactic HQ for IBM - the largest computer company in the world...but in many ways IBM is really more a country than it is a company.
It has hundreds of thousands of citizens, it has a bureaucracy, it has an entire culture everything in fact but an army. OK Sam we're ready to visit IBM country, obviously we're dressed for the part. Now when you were in sales training in 1959 for IBM did you sing company songs?
Former IBM Executive
BOB: Well just to get us in the mood let's sing one right here.
SAM: You're kidding.
BOB: I have the IBM - the songs of the IBM and we're going to try for number 74, our IBM salesmen sung to the tune of Jingle Bells.
Bob & Sam singing
‘IBM, happy men, smiling all the way, oh what fun it is to sell our products night and day. IBM, Watson men, partners of TJ. In his service to mankind - that's why we are so gay.’
Now gay didn't mean what it means today then remember that OK?
BOB: Right ok let's go.
SAM: I guess that was OK.
When I started at IBM there was a dress code, that was an informal oral code of white shirts. You couldn't wear anything but a white shirt, generally with a starched collar. I remember attending my first class, and a gentleman said to me as we were entering the building, are you an IBMer, and I said yes. He had a three piece suit on, vests were of the vogue, and he said could you just lift your pants leg please. I said what, and before I knew it he had lifted my pants leg and he said you're not wearing any garters. I said what?! He said your socks, they're not pulled tight to the top, you need garters. And sure enough I had to go get garters.
IBM is like Switzerland -- conservative, a little dull, yet prosperous. It has committees to verify each decision. The safety net is so big that it is hard to make a bad decision - or any decision at all. Rich Seidner, computer programmer and wannabe Paul Simon, spent twenty-five years marching in lockstep at IBM. He feels better now.
Former IBM Programmer
I mean it's like getting four hundred thousand people to agree what they want to have for lunch. You know, I mean it's just not going to happen - it's going to be lowest common denominator you know, it's going to be you know hot dogs and beans. So ahm so what are you going to do? So IBM had created this process and it absolutely made sure that quality would be preserved throughout the process, that you actually were doing what you set out to do and what you thought the customer wanted. At one point somebody kind of looked at the process to see well, you know, what's it doing and what's the overhead built into it, what they found is that it would take at least nine months to ship an empty box.
By the late seventies, even IBM had begun to notice the explosive growth of personal computer companies like Apple.
The Apple II - small, inexpensive and simple to use - the first computer.....
What's more, it was a computer business they didn't control. In 1980, IBM decided they wanted a piece of this action.
Former IBM Executive
There were suddenly tens of thousands of people buying machines of that class and they loved them. They were very happy with them and they were showing up in the engineering departments of our clients as machines that were brought in because you can't do the job on your mainframe kind of thing.
JB wanted to know why I'm doing better than all the other managers...it's no secret...I have an Apple - sure there's a big computer three flights down but it won't test my options, do my charts or edit my reports like my Apple.
The people who had gotten it were religious fanatics about them. So the concern was we were losing the hearts and minds and provide me a machine to win back the hearts and minds.
In business, as in comedy, timing is everything, and time looked like it might be running out for an IBM PC. I'm visiting an IBMer who took up the challenge. In August 1979, as IBM's top management met to discuss their PC crisis, Bill Lowe ran a small lab in Boca Raton Florida.
Hello Bob nice to see you.
BOB: Nice to see you again. I tried to match the IBM dress code how did I do?
BILL: That's terrific, that's terrific.
He knew the company was in a quandary. Wait another year and the PC industry would be too big even for IBM to take on. Chairman Frank Carey turned to the department heads and said HELP!!!
Head, IBM PC Development Team 1980
He kind of said well, what should we do? And I said well, we think we know what we would like to do if we were going to proceed with our own product. And he said no - at IBM it would take four years and three hundred people to do anything; I mean, it's just a fact of life. And I said no sir, we can provide you with a product in a year. And he abruptly ended the meeting. He said, you're on, Lowe - come back in two weeks and tell me what you need.
An IBM product in a year! Ridiculous! Down in the basement Bill still has the plan. To save time, instead of building a computer from scratch, they would buy components off the shelf and assemble them -- what in IBM speak was called 'open architecture.' IBM never did this. Two weeks later Bill proposed his heresy to the Chairman.
And frankly this is it. The key decisions were to go with an open architecture, non-IBM technology, non-IBM software, non-IBM sales and non-IBM service. And we probably spent a full half of the presentation carrying the corporate management committee into this concept, because this was a new concept for IBM at that point.
BOB: Was it a hard sell?
BILL: Mr. Carey bought it. And as a result of him buying it, we got through it.
With the backing of the chairman, Bill and his team then set out to break all the IBM rules and go for a record.
We'll put it in the IBM section.
Once IBM decided to do a personal computer and to do it in a year, they couldn't really design anything - they just had to slap it together, so that's what we'll do. You have a central processing unit, and, eh, let's see, you need a monitor or display and a keyboard. OK, a PC - except it's not, there's something missing.

Time for the Cringely crash course in elementary computing. A PC is a boxful of electronic switches, a piece of hardware. It's useless until you tell it what to do. It requires a program of instructions - that's software. Every PC requires at least two essential bits of software in order to work at all. First it requires a computer language. That's what you type in to provide instructions to the computer, to tell it what to do. Remember, it was a computer language called BASIC that Paul Allen and Bill Gates adapted to the Altair, the first PC. The other bit of software that's required is called an operating system, and that's the internal traffic cop that tells the computer itself how the keyboard is connected to the screen, or how to store files on a floppy disk instead of just losing them when you turn off the PC at the end of the day. Operating systems tend to have boring, unfriendly names like UNIX and CP/M and MS-DOS, but though they may be boring, it's an operating system that made Bill Gates the richest man in the world. And the story of how that came about is, well, pretty interesting.

So the contest begins. Who would IBM buy their software from? Let's meet the two contenders - the late Gary Kildall, then aged 39, a computer Ph.D., and a 24-year-old Harvard drop-out, Bill Gates. By the time IBM came calling in 1980, Bill Gates and his small company Microsoft were the biggest provider of computer languages in the fledgling PC industry.
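The operating system's "traffic cop" job described above can be sketched in a few lines of modern Python. This is a toy illustration only - the class names, file name, and behavior are all invented, and nothing here resembles real DOS internals - but it shows the one service the narration singles out: a program asks the OS to put a file on disk so it survives switching the machine off.

```python
# A toy illustration (not any real OS) of the "traffic cop" role:
# the operating system is what lets a program save a file to disk so it
# survives after the power goes off, instead of living only in memory.

class ToyDisk:
    """Stands in for a floppy disk: contents persist between 'sessions'."""
    def __init__(self):
        self.sectors = {}

class ToyOS:
    """A minimal operating-system-like layer: programs ask it to store and
    fetch files by name; they never touch the 'disk' directly."""
    def __init__(self, disk):
        self.disk = disk

    def save_file(self, name, data):
        self.disk.sectors[name] = data

    def load_file(self, name):
        return self.disk.sectors[name]

disk = ToyDisk()
os_session_1 = ToyOS(disk)
os_session_1.save_file("REPORT.TXT", "quarterly numbers")

# "Turn the PC off and on again": a new OS session over the same disk.
os_session_2 = ToyOS(disk)
print(os_session_2.load_file("REPORT.TXT"))  # the file survived the reboot
```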
'Many different computer manufacturers are making the CP/M Operating System standard on most models.'
For their operating system, though, the logical guy for the IBMers to see was Gary Kildall. He ran a company modestly called Intergalactic Digital Research. Gary had invented the PC's first operating system, called CP/M. He had already sold 600,000 of them, so he was the big cheese of operating systems.
Founder, Digital Research
Speaking in 1983
In the early 70s I had a need for an operating system myself and eh it was a very natural thing to write and it turns out other people had a need for an operating system like that and so eh it was a very natural thing I wrote it for my own use and then started selling it.
In Gary's mind it was the dominant thing and it would always be dominant - of course, Bill did languages and Gary did operating systems, and he really honestly believed that would never change.
But what would change the balance of power in this young industry was the characters of the two protagonists.
Founder West Coast Computer Faire 1978
So I knew Gary back when he was an assistant professor at the Monterey Post Grad School and I was simply a grad student. And went down, sat in his hot tub, smoked dope with him and thoroughly enjoyed it all, and commiserated and talked nerd stuff. He liked playing with gadgets, just like Woz did and does, just like I did and do.
He wasn't really interested in how you drive the business, he worked on projects, things that interested him.
He didn't go rushing off to the patent office and patent CPM and patent every line of code he could, he didn't try to just squeeze the last dollar out of it.
Gary was not a fighter, Gary avoided conflict, Gary hated conflict. Bill I don't think anyone could say backed away from conflict.
Nobody said future billionaires have to be nice guys. Here, at the Microsoft Museum, is a shrine to Bill's legacy. Bill Gates hardly fought his way up from the gutter. Raised in a prosperous Seattle household, his mother was a homemaker who did charity work; his father was a successful lawyer. But beneath the affluence and comfort of a perfect American family, a competitive spirit ran deep.
President, The Paul Allen Group
I ended up spending Memorial Day Weekend with him out at his grandmother's house on Hood Canal. She turned everything in to a game. It was a very very very competitive environment, and if you spent the weekend there, you were part of the competition, and it didn't matter whether it was hearts or pickleball or swimming to the dock. And you know and there was always a reward for winning and there was always a penalty for losing.
CEO Corporate Computing Intl.
One time, it was funny. I went to Bill's house and he really wanted to show me his jigsaw puzzle that he was working on, and he really wanted to talk about how he did this jigsaw puzzle in like four minutes, and like on the box it said, if you're a genius you will do the jigsaw puzzle in like seven. And he was into it. He was like I can do it. And I said don't, you know, I believe you. You don't need to break it up and do it for me. You know.
Bill Gates can be so focused that the small things in life get overlooked.
Former VP, Corporate Comms, Microsoft
If he was busy he didn't bathe, he didn't change clothes. We were in New York and the demo that we had crashed the evening before the announcement, and Bill worked all night with some other engineers to fix it. Well it didn't occur to him to take ten minutes for a shower after that, it just didn't occur to him that that was important, and he badly needed a shower that day.
The scene is set. In California, laid-back Gary Kildall, already making the best-selling PC operating system, CP/M. In Seattle, Bill Gates, maker of BASIC, the best-selling PC language, but always prepared to seize an opportunity. So IBM had to choose one of these guys to write the operating system for its new personal computer. One would hit the jackpot; the other would be forgotten, a footnote in the history of the personal computer. And it all starts with a telephone call to an eighth-floor office in that building, the headquarters of Microsoft, in 1980.
At about noon I guess I called Bill Gates on Monday and said I would like to come out and talk with him about his products.
Bill said well, how's next week, and they said we're on an airplane, we're leaving in an hour, we'd like to be there tomorrow. Well, hallelujah. Right oh.
Steve Ballmer was a Harvard roommate of Gates. He'd just joined Microsoft and would end up its third billionaire. Back then he was the only guy in the company with business training. Both Ballmer and Gates instantly saw the importance of the IBM visit.
You know IBM was the dominant force in computing. A lot of these computer fairs discussions would get around to, you know, I.. most people thought the big computer companies wouldn't recognise the small computers, and it might be their downfall. But now to have one of the big computer companies coming in and saying at least the - the people who were visiting with us that they were going to invest in it, that - that was er, amazing.
And Bill said Steve, you'd better come to the meeting, you're the only other guy here who can wear a suit. So we figure the two of us will put on suits, we'll put on suits and we'll go to this meeting.
We got there at roughly two o'clock and we were waiting in the front, and this young fella came out to take us back to Mr. Gates office. I thought he was the office boy, and of course it was Bill. He was quite decisive, we popped out the non-disclosure agreement - the letter that said he wouldn't tell anybody we were there and that we wouldn't hear any secrets and so forth. He signed it immediately.
IBM didn't make it easy. You had to sign all these funny agreements that sort of said I...IBM could do whatever they wanted, whenever they wanted, and use your secrets however they - they felt. But so it took a little bit of faith.
Jack Sams was looking for a package from Microsoft containing both the BASIC computer language and an Operating System. But IBM hadn't done their homework.
They thought we had an operating system. Because we had this Soft Card product that had CPM on it, they thought we could licence them CPM for this new personal computer they told us they wanted to do, and we said well, no, we're not in that business.
When we discovered we didn't have - he didn't have the rights to do that and that it was not...he said but I think it's ready, I think that Gary's got it ready to go. So I said well, there's no time like the present, call up Gary.
And so Bill right there with them in the room called Gary Kildall at Digital Research and said Gary, I'm sending some guys down. They're going to be on the phone. Treat them right, they're important guys.
The men from IBM came to this Victorian house in Pacific Grove, California, headquarters of Digital Research, headed by Gary and Dorothy Kildall. Just imagine what it's like having IBM come to visit - it's like having the Queen drop by for tea, it's like having the Pope come by looking for advice, it's like a visit from God himself. And what did Gary and Dorothy do? They sent them away.
Gary had some other plans and so he said well, Dorothy will see you. So we went down the three of us...
Former Head of Language Division, Digital Research
IBM showed up with an IBM non-disclosure and Dorothy made what I...a decision which I think it's easy in retrospect to say was dumb.
We popped out our letter that said please don't tell anybody we're here, and we don't want to hear anything confidential. And she read it and said and I can't sign this.
She did what her job was - she got the lawyer to look at the non-disclosure. The lawyer, Gerry Davis, who's still in Monterey, threw up on this non-disclosure. It was uncomfortable for IBM; they weren't used to waiting. And it was an unfortunate situation - here you are in a tiny Victorian house, it's overrun with people, chaotic.
So we spent the whole day in Pacific Grove debating with them and with our attorneys and her attorneys and everybody else about whether or not she could even talk to us about talking to us, and we left.
This is the moment Digital Research dropped the ball. IBM, distinctly unimpressed with their reception, went back to Microsoft.
BOB: It seems to me that Digital Research really screwed up.
STEVE BALLMER: I think so - I think that's spot on. They made a big mistake. We referred IBM to them and they failed to execute.
Bill Gates isn't the man to give a rival a second chance. He saw the opportunity of a lifetime.
Digital Research didn't seize that, and we knew it was essential - if somebody didn't do it, the project was going to fall apart.
We just got carried away and said look, we can't afford to lose the language business. That was the initial thought - we can't afford to have IBM not go forward. This is the most exciting thing that's going to happen in PCs.
And we were already out on a limb, because we had licensed them not only Basic, but Fortran, Cobol, Assembler, er, Typing Tutor and Adventure. And basically every - every product the company had we had committed to do for IBM in a very short time frame.
But there was a problem. IBM needed an operating system fast and Microsoft didn't have one. What they had was a stroke of luck - the ingredient everyone needs to be a billionaire. Unbelievably, the solution was just across town. Paul Allen, Gates's programming partner since high school, had found another operating system.
There's a local company here in Seattle called Seattle Computer Products, by a guy named Tim Paterson, and he had done an operating system, a very rudimentary operating system, that was kind of like CP/M.
And we just told IBM look, we'll go and get this operating system from this small local company, we'll take care of it, we'll fix it up, and you can still do a PC.
Tim Paterson's operating system, which saved the deal with IBM, was, well, adapted from Gary Kildall's CP/M.
So I took a CP/M manual that I'd gotten from the Retail Computer Store - five dollars, in 1976 or something - and used that as the basis for what would be the application program interface, the API, for my operating system. And so, using these ideas that came from different places, I started in April, and it was about half time for four months before I had my first working version.
This is it, the operating system Tim Paterson wrote. He called it QDOS, the Quick and Dirty Operating System. Microsoft and IBM called it PC DOS 1.0, and under any name it looks an awful lot like CP/M. On this computer here I have running PC DOS and CP/M-86, and frankly it's very hard to tell the difference between the two. The command structures are the same, so are the directories; in fact, the only obvious external difference is the floppy drive is labelled A in PC DOS and C in CP/M-86. Some difference - and yet one generated billions in revenue and the other disappeared. As usual in the PC business, the prize didn't go to the inventor but to the exploiter of the invention. In this case that wasn't Gary Kildall - it wasn't even Tim Paterson.
There was still one problem. Tim Paterson worked for Seattle Computer Products, or SCP. They still owned the rights to QDOS - rights that Microsoft had to have.
Former Vice-President Microsoft
But then we went back and said to them look, you know, we want to buy this thing, and SCP was like most little companies, you know. They always needed cash and so that was when they went in to the negotiation.
And so we ended up working out a deal to buy the operating system from him, for whatever usage we wanted, for fifty thousand dollars.
Hey, let's pause there. To savour an historic moment.
For whatever usage we wanted for fifty thousand dollars.
It had to be the deal of the century, if not the millennium. It was certainly the deal that made Bill Gates and Paul Allen multi-billionaires and allowed Paul Allen to buy toys like these - his own NBA basketball team and arena. Microsoft bought outright, for fifty thousand dollars, the operating system they needed, and they turned around and licensed it to the world for up to fifty dollars per PC. Think of it: one hundred million personal computers running MS-DOS software, funnelling billions into Microsoft - a company that back then was fifty kids managed by a twenty-five-year-old who needed to wash his hair. Nice work if you can get it, and Microsoft got it.

There are no two places further apart in the USA than south-eastern Florida and Washington State, where Microsoft is based. This - this is Florida, Boca Raton, and this building right here is where the IBM PC was developed. Here the nerds from Seattle joined forces with the suits of corporate, and in that first honeymoon year they pulled off a fantastic achievement.
After we got a package in the mail from the people down in Florida...
As August 1981 approached, the deadline for the launch of the IBM Acorn, the PC industry held its breath.
Supposedly, maybe at this very moment eh, IBM is announcing the personal computer. We don't know that yet.
Software writers like Dan Bricklin, the creator of the first spreadsheet, VisiCalc, waited by the phones for news of the announcement. This is a moment of PC history. IBM secrecy had codenamed the PC 'The Floridian Project.' Everyone in the PC business knew IBM would change their world forever. They also knew that if their software was on the IBM PC, they would make fortunes.
Please note that the attached information is not to be disclosed prior to any public announcement... (It's on the ticker.) It's on the ticker! OK, so now you can tell people.
What we're watching are the first few seconds of a $100 billion industry.
After years of thinking big, today IBM came up with something small. Big Blue is looking for a slice of Apple's market share. Bits and bytes mean nothing? Try this one: now they're going to sell $1,000 computers to millions of customers. 'I have seen the future,' said one analyst, 'and it computes.'
Today an IBM computer has reached a personal......
Nobody was ever fired for buying IBM. Now companies could put PCs with the name they trusted on desks from Wisconsin to Wall Street.
When the IBM PC came and the PC became a serious business tool, a lot of them - the first of them - went into those buildings over there, and that was the real, ehm... when the PC industry started taking off, it happened there too.
Can learn to use it with ease...
Former IBM Executive
What IBM said was it's okay corporate America for you to now start buying and using PCs. And if it's okay for corporate America, it's got to be okay for everybody.
For all the hype, the IBM PC wasn't much better than what came before. So while the IBM name could create immense demand, it took a killer application to sustain it. The killer app for the IBM PC was yet another spreadsheet. Based on VisiCalc, but called Lotus 1-2-3, its creators were the first of many to get rich on IBM's success. Within a year Lotus was worth $150 million. Wham! Bam! Thank you IBM!
Time to rock time for code...
IBM had forecast sales of half a million computers by 1984. In those 3 years, they sold 2 million.
Euphoric, I guess, is the right word. Everybody believed that they were not going to... At that point, two million or three million, you know - they were now thinking in terms of a hundred million, and they were probably off the scale in the other direction.
What did all this mean to Bill Gates, whose operating system, DOS, was at the heart of every IBM PC sold? Initially, not much, because of the deal with IBM. But it did provide him a vital bridgehead to other players in the PC marketplace, which meant trouble in the long run for Big Blue.
The key to our...the structure of our deal was that IBM had no control over...over our licensing to other people. A lesson from the computer industry in mainframes was that, er, over time, people built compatible machines, or clones, whatever term you want to use. And so really, the primary upside on the deal we had with IBM - because they had a fixed fee, er, we got about $80,000 - we got some other money for some special work we did, er, but no royalty from them. And that's for DOS and Basic as well. And so we were hoping a lot of other people would come along and do compatible machines. We were expecting that that would happen, because we knew Intel wanted to vend the chip to a lot more than just IBM, and so it was great when people did start showing up and, ehm, having an interest in the licence.
IBM now had fifty per cent market share and was defining what a PC meant. There were other PCs that were sorta like the IBM PC, kinda like it. But what the public wanted was IBM PCs. So to be successful, other manufacturers would have to build computers exactly like the IBM. They wanted to copy the IBM PC, to clone it. How could they do that legally? Well, welcome to the world of reverse engineering.

This is what reverse engineering can get you if you do it right. It's the modest Aspen, Colorado ski shack of Rod Canion, one of the founders of Compaq, the company set up to compete head-on with the IBM PC. Back in 1982, Rod and three fellow engineers from Texas Instruments sketched out a computer design on a place mat at the House of Pies restaurant in Houston, Texas. They decided to manufacture and market a portable version of the IBM PC using the curious technique of reverse engineering.
Reverse engineering is figuring out after something has already been created how it ticks, what makes it work, usually for the purpose of creating something that works the same way or at least does something like the thing you're trying to reverse engineer.
Here's how you clone a PC. IBM had made it easy to copy. The microprocessor was available off the shelf from Intel and the other parts came from many sources. Only one part was IBM's alone, a vital chip that connected the hardware with the software. Called the ROM-BIOS, this was IBM's own design, protected by copyright and Big Blue's army of lawyers. Compaq had to somehow copy the chip without breaking the law.
First you have to figure out how the ROM works. So what we had to do was have an engineer sit down with that code and, through trial and error, write a specification that said: here's how the BIOS ROM needs to work. It couldn't be close, it had to be exact, so there was a lot of detailed testing that went on.
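The spec-and-test loop Canion describes is essentially black-box testing: one team records only observable behavior into a specification, another team writes a fresh implementation from that spec alone, and exhaustive testing checks the two match exactly. Here is a minimal Python sketch of the idea; every call name and value is invented for illustration and bears no relation to the real IBM BIOS.

```python
# Illustrative sketch of the clean-room idea, not Compaq's actual process.
# A "dirty" engineer records only observable behavior into a spec;
# a "virgin" engineer writes a new implementation from the spec alone,
# and detailed testing checks the two match exactly.

# Behavioral spec: for each documented BIOS-style call, the exact
# expected result. (Entries here are invented for illustration.)
SPEC = {
    ("get_memory_size", ()): 640,          # KB reported
    ("keyboard_ready", ()): False,
    ("read_char", ("buffer_empty",)): None,
}

def clean_room_implementation(call, args):
    """Reimplementation written only from the spec, never from the original."""
    if call == "get_memory_size":
        return 640
    if call == "keyboard_ready":
        return False
    if call == "read_char" and args == ("buffer_empty",):
        return None
    raise NotImplementedError(call)

def verify(spec, implementation):
    """'It couldn't be close, it had to be exact' - every case must match."""
    return [
        (call, args)
        for (call, args), expected in spec.items()
        if implementation(call, args) != expected
    ]

# An empty failure list means the clone's behavior is exact.
assert verify(SPEC, clean_room_implementation) == []
```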
You test how that all-important chip behaves, and make a list of what it has to do - now it's time to meet my lawyer, Claude.
Silicon Valley Attorney
BOB: I've examined the internals of the ROM BIOS and written this book of specifications. Now I need some help, because I've done as much as I can do, and you need to explain what's next.
CLAUDE: Well, the first thing I'm going to do is I'm going to go through the book of specifications myself. But the first thing I can tell you, Robert, is that you're out of it now. You are contaminated, you are dirty. You've seen the product that's the original work of authorship, you've seen the target product, so now, from here on in, we're going to be working with people who are not dirty. We're going to be working with so-called virgins, who are going to be operating in the clean room.
BOB: I certainly don't qualify there.
CLAUDE: I imagine you don't. So what we're going to do is this. We're going to hire a group of engineers who have never seen the IBM ROM BIOS. They have never seen it, they have never operated it, they know nothing about it.
Claude interrogates Mark
CLAUDE: Have you ever before attempted to disassemble decompile or to in any way shape or form reverse engineer any IBM equipment?
MARK: Oh no.
CLAUDE: And have you ever tried to disassemble....
This is the Silicon Valley virginity test. And good virgins are hard to find.
CLAUDE: You understand that in the event that we discover that the information you are providing us is inaccurate, you are subject to discipline by the company, and that can include, but is not limited to, immediate termination. Do you understand that?
MARK: Yes I do.
After the virgins are deemed intact, they are forbidden contact with the outside world while they build a new chip - one that behaves exactly like the one in the specifications. In Compaq's case, it took 15 senior programmers several months and cost $1 million to do the reverse engineering. In November 1982, Rod Canion unveiled the result.
What I've brought today is a Compaq portable computer.
When Bill Murto, another Compaq founder, got a plug on a cable TV show, their selling point was clear: 100 percent IBM compatibility.
It turns out that all major popular software runs on the IBM personal computer or the Compaq portable computer.
Q: That extends through all software written for IBM?
A: Eh Yes.
Q: It all works on the Compaq?
The Compaq was an instant hit. In their first year, on the strength of being exactly like IBM but a little cheaper, they sold 47,000 PCs.
In our first year of sales we set an American business record. I guess maybe a world business record. Largest first year sales in history. It was a hundred and eleven million dollars.
So Rod Canion ends up in Aspen, famous for having the most expensive real estate in America and I try not to look envious while Rod tells me which executive jet he plans to buy next.
ROD: And finally I picked the Lear 31.
BOB: Oh really?
ROD: Now that was a fun airplane.
BOB: Oh yeh.
Poor Big Blue! Suddenly everybody was cashing in on IBM's success. The most obvious winner at first was Intel, maker of the PC's microprocessor chip. Intel was selling chips like hotcakes to clonemakers - and making them smaller, quicker and cheaper. This was unheard of! What kind of an industry had Big Blue gotten themselves into?
Former Head, IBM PC Division
Things get less expensive every year. People aren't used to that in general. I mean, you buy a new car, you buy one now and four years later you go and buy one it costs more than the one you bought before. Here is this magical piece of an industry - you go buy one later it costs less and it does more. What a wonderful thing. But it causes some funny things to occur when you think about an industry. An industry where prices are coming down, where you have to sell it and use it right now, because if you wait later it's worth less.
Where Compaq led, others soon followed. IBM was now facing dozens of rivals - soon-to-be-familiar names began to appear, like AST, Northgate and Dell. It was getting spectacularly easy to build a clone. You could get everything off the shelf, including a guaranteed-virgin ROM BIOS chip. Every Tom, Dick & Bob could now make an IBM-compatible PC and take another bite out of Big Blue's business.

OK, we're at Dominos Computers in Los Altos, California - Silicon Valley - and this is Yukio, and we're going to set up the Bob and Yukio Personal Computer Company, making IBM PC clones. You're the expert; I, of course, brought all the money. So what is it that we're going to do?
OK first of all we need a motherboard.
BOB: What's a motherboard?
YUKIO: That's where the CPU is set in...that's the central processor unit.
YUKIO: In fact I have one right here. OK so this is the video board...
BOB: That drives the monitor.
BILL LOWE: Oh, of course. I mean we were able to sell a lot of products but it was getting difficult to make money.
YUKIO: And this is the controller card which would control the hard drive and the floppy drive.
And the way we did it was by having low overhead. IBM had low cost of product but a lot of overhead - they were a very big company.
YUKIO: Right this is a high density recorder.
BOB: So this is a hard disk drive.
And by keeping our overhead low even though our margins were low we were able to make a profit.
YUKIO: OK I have one right here.
BOB: Hey...OK we have a keyboard which plugs in right over here.
BOB: People build them themselves - how long does it take?
YUKIO: About an hour.
BOB: About an hour.
And where did every two-bit clone-maker buy his operating system? Microsoft, of course. IBM never imagined Bill Gates would sell DOS to anyone else. Who else was there? But by the mid-80's it was boom time for Bill. The teenage entrepreneur had predicted a PC on every desk and in every home, running Microsoft software. It was actually coming true.

As Microsoft mushroomed there was no way that Bill Gates could personally dominate thousands of employees, but that didn't stop him. He still had a need to be both industry titan and top programmer. So he had to come up with a whole new corporate culture for Microsoft. He had to find a way to satisfy both his adolescent need to dominate and his adult need to inspire.

The average Microsoftee is male and about 25. When he's not working, well, he's always working. All his friends are Microsoft programmers too. He has no life outside the office, but all the sodas are free. From the beginning, Microsoft recruited straight out of college. They chose people who had no experience of life in other companies. In time they'd be called Microserfs.
Chief Programmer, Microsoft
It was easier to - to - to create a new culture with people who are fresh out of school rather than people who came from - from, from, eh, other companies and - and - and other cultures. You can rely on it, you can predict it, you can measure it, you can optimise it, you can make a machine out of it.
I mean everyone, like, lived together, ate together, dated each other, you know, went to the movies together. It was just, you know, very much - it was like a frat or a dorm.
Everybody's just push push push - is it right, is it right, do we have it right keep on it - no that's not right ugh and you're very frank about that - you loved it and it wasn't very formal and hierarchical because you were just so desirous to do the right thing and get it right. Why - it reflects Bill's personality.
And so a lot of young, I say people, but mostly it was young men, who just were out of school saw him as this incredible role model or leader, almost a guru I guess. And they could spend hours with him and he valued their contributions and there was just a wonderful camaraderie that seemed to exist between all these young men and Bill, and this strength that he has and his will and his desire to be the best and to be the winner - he is just like a cult leader, really.
As the frenzied 80's came to a close, IBM reached a watershed. They had created an open PC architecture that anyone could copy. This was intentional, but IBM always thought their inside track would keep them ahead. Wrong. IBM's glacial pace and high overhead put them at a disadvantage to the leaner clone makers, and everything was turning into a nightmare as IBM lost its dominant market share. So, in a big gamble, they staked their PC future on a new system - a new line of computers with proprietary closed hardware and their very own operating system. It was war.
Start planning for operating system 2 today.
IBM planned to steal the market from Gates with a brand new operating system, called - drum roll please - OS/2. IBM would design OS/2, yet they asked Microsoft to write the code. Why would Microsoft help create what was intended to be the instrument of their own destruction? Because Microsoft knew IBM was the source of their success, and they would tolerate almost anything to stay close to Big Blue.
It was just part of, as we used to call it at the time, riding the bear. You just had to try to stay on the bear's back, and the bear would twist and turn and try to buck you and throw you, but darn, we were going to ride the bear, because the bear was the biggest, the most important - you just had to be with the bear, otherwise you'd be under the bear in the computer industry. And IBM was the bear, and we were going to ride the back of the bear.
It's easy for people to forget how pervasive IBM's influence over this industry was. When you talked to people who've come in to the industry recently there's no way you can get that in to their - in to their head, that was the environment.
The relationship between IBM and Microsoft was always a culture clash. IBMers were buttoned-up organization men. Microsoftees were obsessive hackers. With the development of OS/2 the strains really began to show.
In IBM there's a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code. How big a project is it? Oh, it's sort of a 10K-LOC project. This is a 20K-LOCer. And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid - how much money we made off OS/2, how much they did. How many K-LOCs did you do? And we kept trying to convince them - hey, if a developer's got a good idea and he can get something done in 4K-LOCs instead of 20K-LOCs, should we make less money? Because he's made something smaller and faster - less K-LOC. K-LOCs, K-LOCs, that's the methodology. Ugh! Anyway, that always makes my back just crinkle up at the thought of the whole thing.
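The perverse incentive Ballmer describes is simple arithmetic: pay by the K-LOC and the bloated program out-earns the tight one that does the same job. A hypothetical sketch - the line counts, payment rate, and `kloc` helper are all invented for illustration, not anything IBM or Microsoft actually used:

```python
# Hypothetical illustration of the K-LOC billing problem:
# paying per thousand lines of code rewards the bigger program,
# not the better one.

def kloc(source: str) -> float:
    """Count K-LOCs: thousands of non-blank lines of code."""
    lines = [line for line in source.splitlines() if line.strip()]
    return len(lines) / 1000

# Two imaginary deliverables with identical functionality.
tight_version = "\n".join(f"line {i}" for i in range(4_000))     # 4 K-LOC
bloated_version = "\n".join(f"line {i}" for i in range(20_000))  # 20 K-LOC

RATE_PER_KLOC = 100  # invented payment rate, dollars per K-LOC

for name, src in [("tight", tight_version), ("bloated", bloated_version)]:
    print(f"{name}: {kloc(src):.0f} K-LOC -> ${kloc(src) * RATE_PER_KLOC:,.0f}")
# The smaller, faster program earns a fifth as much - exactly the complaint.
```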
When I took over in '89 there was an enormous amount of resources working on OS/2, both in Microsoft and the IBM company. Bill Gates and I met on that several times. And we pretty quickly came to the conclusion together that that was not going to be a success, the way it was being managed. It was also pretty clear that the negotiating and the contracts had given most of that control to Microsoft.
It was no longer just a question of styles. There was now a clear conflict of business interest. OS/2 was planned to undermine the clone market, where DOS was still Microsoft's major money-maker. Microsoft was DOS. But Microsoft was helping develop the opposition? Bad idea. To keep DOS competitive, Gates had been pouring resources into a new programme called Windows. It was designed to provide a nice user-friendly facade to boring old DOS. Selling it was another job for shy, retiring Steve Ballmer.
Steve Ballmer (Commercial)
How much do you think this advanced operating environment is worth - wait just one minute before you answer - watch as Windows integrates Lotus 1-2-3 with Miami Vice. Now we can take this...
Just as Bill Gates saw OS/2 as a threat, IBM regarded Windows as another attempt by Microsoft to hold on to the operating system business.
We created Windows in parallel. We kept saying to IBM, hey, Windows is the way to go, graphics is the way to go, and we got virtually everyone else, enthused about Windows. So that was a divergence that we kept thinking we could get IBM to - to come around on.
It was clear that IBM had a different vision of its relationship with Microsoft than Microsoft had of its relationship with IBM. Was that Microsoft's fault? You know, maybe some, but IBM's not blameless there either. So I don't view any of that as anything but just poor business on IBM's part.
Bill Gates is a very disciplined guy. He puts aside everything he wants to read and twice a year goes away for secluded reading weeks. The decisive moment in the Microsoft/IBM relationship came during just such a retreat. In front of a log fire, Bill concluded that it was no longer in Microsoft's long-term interests to blindly follow IBM. If Bill had to choose between OS/2, IBM's new operating system, and Windows, he'd choose Windows.
We said, ooh, IBM's probably not going to like this. This is going to threaten OS/2. Now, we told them about it right away, but we still did it. They didn't like it; we told 'em about it, we told 'em about it, we offered to license it to 'em.
We always thought the best thing to do is to try and combine IBM promoting the software with us doing the engineering. And so it was only when they broke off communication and decided to go their own way that we thought, okay, we're on our own, and that was definitely very, very scary.
We were in a major negotiation in early 1990, right before the Windows launch. We wanted to have IBM on stage with us to launch Windows 3.0, but they wouldn't do the kind of deal that would allow us to profit; it would allow them essentially to take over Windows from us, and we walked away from the deal.
Jack Sams, who started IBM's relationship with Microsoft with that first call to Bill Gates in 1980, could only look on as the partnership disintegrated.
At that point, I think, they agreed to disagree on the future progress of OS/2 and Windows. And internally we were told, thou shalt not ship any more products on Windows. And about that time I got the opportunity to take early retirement, so I did.
Bill's decision by the fireplace ended the ten-year IBM/Microsoft partnership and turned IBM into an also-ran in the PC business. Did David beat Goliath? The Boca Raton, Florida birthplace of the IBM PC is deserted, a casualty of diminishing market share. Today, IBM is again what it was before: a profitable, dominant mainframe computer company. For a while IBM dominated the PC market. They legitimised the PC business, created the standards most of us now use, and introduced the PC to the corporate world. But in the end they lost out. Maybe it was to a faster, more flexible business culture. Or maybe they just threw it away. That's the view of a guy who's been competing with IBM for 20 years, Silicon Valley's most outspoken software billionaire, Larry Ellison.
I think IBM made the single worst mistake in the history of enterprise on earth.
Q: Which was?
LARRY: Which was being the first manufacturer and distributor of the Microsoft/Intel PC, which they mistakenly called the IBM PC. I mean, they were the first manufacturer and distributor of that technology. It's just simply astounding that they could basically provide a third of their market value to Intel and a third of their market value to Microsoft by accident. I mean, those two companies today are worth, you know, approaching a hundred billion dollars. Not many of us get a chance to make a $100 billion mistake.
As fast as IBM abandons its buildings, Microsoft builds new ones. In 1980 IBM was 3000 times the size of Microsoft. Though still a smaller company, today Wall Street says Microsoft is worth more. Both have faced anti-trust investigations about their monopoly positions. For years IBM defined successful American corporate culture - as a machine of ordered bureaucracy. Here in the corridors of Microsoft it's a different style, it's personal. This company - in its drive, its hunger to succeed - is a reflection of one man, its founder, Bill Gates.
Bill wanted to win. Incredible desire to win and to beat other people. At Microsoft, the whole idea was that we would put people under, you know. Unfortunately that's happened a lot.
Computer Industry Analyst
Bill Gates is special. You wouldn't have had a Microsoft with a random other person like Gary Kildall. On the other hand, Bill Gates was also lucky. But Bill Gates knows that, unlike a lot of other people in the industry, and he's paranoid. Every morning he gets up and he doesn't feel secure; he feels nervous. They're trying hard, they're not relaxing, and that's why they're so successful.
And I remember, I was talking to Bill once and I asked him what he feared, and he said that he feared growing old because, you know, once you're beyond thirty (this was his belief at the time) you don't have as many good ideas anymore. You're not as smart anymore.
If you just slow down a little bit who knows who it'll be, probably some company that may not even exist yet, but eh someone else can come in and take the lead.
And I said, well, you know, you're going to age, it's going to happen, it's kind of inevitable, what are you going to do about it? And he said, I'm just going to hire the smartest people and surround myself with all these smart people, you know. And I thought that was kind of interesting. It was like, oh, I can't be immortal, but maybe this is the second best and I can buy that, you know.
If you miss what's happening then the same kind of thing that happened to IBM or many other companies could happen to Microsoft very easily. So no-one's got a guaranteed position in the high technology business, and the more you think about, you know, how could we move faster, what could we do better, are there good ideas out there that we should be going beyond, it's important. And I wouldn't trade places with anyone, but the reason I like my job so much is that we have to constantly stay on top of those things.
The Windows software system that ended the alliance between Microsoft and IBM pushed Gates past all his rivals. Microsoft had been working on the software for years, but it wasn't until 1990 that they finally came up with a version that not only worked properly, it blew their rivals away. And where did the idea for this software come from? Not from Microsoft, of course. It came from the hippies at Apple. Lights! Camera! Boot up! In 1984, they made a famous TV commercial. Apple had set out to create the first user-friendly PC just as IBM and Microsoft were starting to make a machine for businesses. When the TV commercial aired, Apple launched the Macintosh.
Glorious anniversary of the information...
The computer and the commercial were aimed directly at IBM - which the kids in Cupertino thought of as Big Brother. But Apple had targeted the wrong people. It wasn't Big Brother they should have been worrying about, it was big Bill Gates.
We are one people....
To find out why, join me for the concluding episode of Triumph of the Nerds.
...we shall prevail.
IBM Research’s Deep Search product uses natural language processing (NLP) to “ingest and analyze massive amounts of data—structured and unstructured.” Over the years, Deep Search has seen a wide range of scientific uses, from Covid-19 research to molecular synthesis. Now, IBM Research is streamlining the scientific applications of Deep Search by open-sourcing part of the product through the release of Deep Search for Scientific Discovery (DS4SD).
DS4SD includes specific segments of Deep Search aimed at document conversion and processing. First is the Deep Search Experience, a document conversion service that includes a drag-and-drop interface and interactive conversion to allow for quality checks. The second element of DS4SD is the Deep Search Toolkit, a Python package that allows users to “programmatically upload and convert documents in bulk” by pointing the toolkit to a folder whose contents will then be uploaded and converted from PDFs into “easily decipherable” JSON files. The toolkit integrates with existing services, and IBM Research is welcoming contributions to the open-source toolkit from the developer community.
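The bulk-conversion workflow described above (point the toolkit at a folder, upload, get JSON back) can be sketched as follows. This is a standalone illustration, not the Deep Search Toolkit's actual API: the real toolkit talks to a hosted Deep Search service, and the `convert_document` stub and other names below are hypothetical.

```python
import json
import tempfile
from pathlib import Path

def convert_document(pdf_path: Path) -> dict:
    # Stand-in for the remote conversion call; in the real toolkit a
    # Deep Search service parses the PDF. Here we emit a minimal
    # JSON-style record so the folder workflow is runnable end to end.
    return {"source": pdf_path.name, "format": "json", "main-text": []}

def convert_folder(folder: Path) -> list:
    # Upload-and-convert loop: every PDF in `folder` becomes one record.
    return [convert_document(p) for p in sorted(folder.glob("*.pdf"))]

# Example: convert a scratch folder containing two (empty) PDFs.
with tempfile.TemporaryDirectory() as d:
    folder = Path(d)
    (folder / "paper1.pdf").touch()
    (folder / "paper2.pdf").touch()
    print(json.dumps(convert_folder(folder), indent=2))
```

The point of the folder-level function is the one the article highlights: conversion is batched, so a directory of PDFs becomes a uniform list of JSON records in a single call.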
IBM Research paints DS4SD as a boon for handling unstructured data (data not contained in a structured database). This data, IBM Research said, holds a “lot of value” for scientific research; by way of example, they cited IBM’s own Project Photoresist, which in 2020 used Deep Search to comb through more than 6,000 patents, documents, and material data sheets in the hunt for a new molecule. IBM Research says that Deep Search offers up to a 1,000× data ingestion speedup and up to a 100× data screening speedup compared to manual alternatives.
The launch of DS4SD follows the launch of GT4SD—IBM Research’s Generative Toolkit for Scientific Discovery—in March of this year. GT4SD is an open-source library to accelerate hypothesis generation for scientific discovery. Together, DS4SD and GT4SD constitute the first steps in what IBM Research is calling its Open Science Hub for Accelerated Discovery. IBM Research says more is yet to come, with “new capabilities, such as AI models and high quality data sources” to be made available through DS4SD in the future. Deep Search has also added “over 364 million” public documents (like patents and research papers) for users to leverage in their research—a big change from the previous “bring your own data” nature of the tool.
The Deep Search Toolkit is accessible here.
In the rush to build, test and deploy AI systems, businesses often lack the resources and time to fully validate their systems and ensure they’re bug-free. In a 2018 report, Gartner predicted that 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them. Even Big Tech companies aren’t immune to the pitfalls — for one client, IBM ultimately failed to deliver an AI-powered cancer diagnostics system that wound up costing $62 million over 4 years.
Inspired by “bug bounty” programs, Jeong-Suh Choi and Soohyun Bae founded Bobidi, a platform aimed at helping companies validate their AI systems by exposing the systems to the global data science community. With Bobidi, Bae and Choi sought to build a product that lets customers connect AI systems with the bug-hunting community in a “secure” way, via an API.
The idea is to let developers test AI systems and biases — that is, the edge cases where the systems perform poorly — to reduce the time needed for validation, Choi explained in an email interview. Bae was previously a senior engineer at Google and led augmented reality mapping at Niantic, while Choi was a senior manager at eBay and headed the “people engineering” team at Facebook. The two met at a tech industry function about 10 years ago.
“By the time bias or flaws are revealed from the model, the damage is already irrevocable,” Choi said. “For example, natural language processing algorithms [like OpenAI’s GPT-3] are often found to be making problematic comments, or mis-responding to those comments, related to hate speech, discrimination, and insults. Using Bobidi, the community can ‘pre-test’ the algorithm and find those loopholes, which is actually very powerful as you can test the algorithm with a lot of people under certain conditions that represent social and political contexts that change constantly.”
To test models, the Bobidi “community” of developers builds a validation dataset for a given system. As developers attempt to find loopholes in the system, customers get an analysis that includes patterns of false negatives and positives and the metadata associated with them (e.g., the number of edge cases).
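The shape of such an analysis can be sketched in a few lines of Python. This is purely illustrative: Bobidi has not published its report format, and the attempt structure below is a hypothetical simplification for a binary classifier.

```python
from collections import Counter

def summarize_attempts(attempts):
    # Each attempt is a (predicted, actual) pair of booleans from one
    # community test; mismatches are the edge cases the model got wrong.
    summary = Counter()
    for predicted, actual in attempts:
        if predicted == actual:
            summary["correct"] += 1
        elif predicted and not actual:
            summary["false_positive"] += 1
        else:
            summary["false_negative"] += 1
    return dict(summary)

attempts = [(True, True), (True, False), (False, True), (False, False)]
print(summarize_attempts(attempts))
# {'correct': 2, 'false_positive': 1, 'false_negative': 1}
```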
Exposing sensitive systems and models to the outside world might give some companies pause, but Choi asserts that Bobidi "auto-expires" models after a certain number of days so that they can't be reverse-engineered. Customers pay for the service based on the number of "legit" attempts made by the community, which works out to a dollar ($0.99) per 10 attempts.
Choi notes that the amount of money developers can make through Bobidi — $10 to $20 per hour — is substantially above the minimum wage in many regions around the world. Assuming Choi’s estimations are rooted in fact, Bobidi bucks the trend in the data science industry, which tends to pay data validators and labelers poorly. The annotators of the widely used ImageNet computer vision dataset made a median wage of $2 per hour, one study found, with only 4% making more than $7.25 per hour.
Pay structure aside, crowd-powered validation isn’t a new idea. In 2017, the Computational Linguistics and Information Processing Laboratory at the University of Maryland launched a platform called Break It, Build It that let researchers submit models to users tasked with coming up with examples to defeat them. Elsewhere, Meta maintains a platform called Dynabench that has users “fool” models designed to analyze sentiment, answer questions, detect hate speech and more.
But Bae and Choi believe the “gamified” approach will help Bobidi stand out from the pack. While it’s early days, the vendor claims to have customers in augmented reality and computer vision startups, including Seerslab, Deepixel and Gunsens.
The traction was enough to convince several investors to pledge money toward the venture. Today, Bobidi closed a $5.8 million seed round with participation from Y Combinator, We Ventures, Hyundai Motor Group, Scrum Ventures, New Product Experimentation (NPE) at Meta, Lotte Ventures, Atlas Pac Capital and several undisclosed angel investors.
Of note, Bobidi is among the first investments for NPE, which shifted gears last year from building consumer-facing apps to making seed-stage investments in AI-focused startups. When contacted for comment, head of NPE investments Sunita Parasuraman said via email: “We’re thrilled to back the talented founders of Bobidi, who are helping companies better validate AI models with an innovative solution driven by people around the globe.”
“Bobidi is a mashup between community and AI, a unique combination of expertise that we share,” Choi added. “We believe that the era of big data is ending and we’re about to enter the new era of quality data. It means we are moving from the era where the focus was to build the best model given the datasets, to the new era where people are tasked to find the best dataset given the model: the complete opposite approach.”
Choi said that the proceeds from the seed round will be put toward hiring — Bobidi currently has 12 employees — and building “customer insights experiences” and various “core machine learning technologies.” The company hopes to triple the size of its team by the end of the year despite economic headwinds.
Expect some major changes now that IBM is spending $1 billion to acquire Merge Healthcare, says IHS analyst Stephen Holloway.
IBM and its Watson supercomputer are set to have a big impact on radiology and medical imaging once IBM's $1 billion purchase of Merge Healthcare closes, according to a new analyst report.
Chicago-based Merge Healthcare's medical imaging management platform is used at more than 7500 U.S. healthcare sites, as well as many of the world's leading clinical research institutes and pharmaceutical firms. Stephen Holloway, associate director for IHS Inc., notes that the deal will provide Watson access to more than a half billion medical images stored in Merge's enterprise archive storage platform.
The goal at IBM (Armonk, NY) is to enable Merge's customers to use the Watson Health Cloud to analyze and cross-reference medical images against a deep trove of lab results, electronic health records, genomic tests, clinical studies, and other health-related data sources. IBM officials think there is a desire in the healthcare field for such imaging analytics. According to IBM, radiologists in some hospital emergency rooms are presented with as many as 100,000 images a day.
Holloway lists three ways IBM's Watson could spur what he describes as a new era of radiology:
IBM represents a deep-pocketed entrant into a market that has been dominated by six companies: GE Healthcare, Philips Healthcare, Siemens Healthcare, Toshiba Medical Systems, Hitachi Medical, and Samsung. Most of these vendors already have their own radiology IT platforms that they've bundled with the hardware, Holloway says.
The arrival of image storage and management software vendors such as Merge and Lexmark Healthcare has already eroded traditional imaging vendor share in recent years. "If IBM can make Watson AI products for image analytics clinically relevant and seamlessly integrate these tools into the EMR, control of the radiology IT market will increasingly shift away from traditional radiology IT vendors. It may even force a departure of industrial medical imaging suppliers away from IT software altogether, as most do not have the big data or analytics capability to compete," Holloway says.
In the short term, Watson will likely provide decision-support tools, similar to the computer aided diagnosis software for breast imaging that has assisted radiologist reporting. But in the long term, look out for Watson joining the dots by drawing on a wealth of other medical diagnostic information gathered from the health and medical record data of a huge population.
"If this happens, radiologists may increasingly find themselves redefining their role in care provision," Holloway says.
Bringing artificial intelligence into healthcare could spark a whole host of ethical and legal issues, according to Holloway.
"Will AI decision-support tools remain just so, as decision support tool, or will over-time the judgement of physicians be called into question? With increasing electronic tracking of care management and metrics to ensure quality of care and drive efficiency, will reliance on such analytics override physician diagnosis?" Holloway says.
Watson's advice could even conceivably become evidence in a lawsuit against a physician over an incorrect diagnosis.
"What is certainly clear though, is that radiology will likely never be the same again," Holloway says.
Chris Newmarker is senior editor of Qmed and MPMN. Follow him on Twitter at @newmarker.
For the first time, scientists at IBM Research have demonstrated that a relatively new memory technology, known as phase-change memory (PCM), can reliably store multiple data bits per cell over extended periods of time. This significant improvement advances the development of low-cost, faster and more durable memory applications for consumer devices, including mobile phones and cloud storage, as well as high-performance applications, such as enterprise data storage. With a combination of speed, endurance, non-volatility and density, PCM can enable a paradigm shift for enterprise IT and storage systems within the next five years.
Scientists have long been searching for a universal, non-volatile memory technology with performance far superior to Flash, today's most ubiquitous non-volatile memory technology. Such a memory technology would allow computers and servers to boot instantaneously and significantly enhance the overall performance of IT systems. A promising contender is PCM, which can write and retrieve data 100 times faster than Flash, enable high storage capacities and not lose data when the power is turned off. Unlike Flash, PCM is also very durable and can endure at least 10 million write cycles, compared to current enterprise-class Flash at 30,000 cycles or consumer-class Flash at 3,000 cycles. While 3,000 cycles will outlive many consumer devices, 30,000 cycles are orders of magnitude too low to be suitable for enterprise applications.
“As organizations and consumers increasingly embrace cloud-computing models and services, whereby most of the data is stored and processed in the cloud, ever more powerful and efficient, yet affordable storage technologies are needed,” states Dr. Haris Pozidis, Manager of Memory and Probe Technologies at IBM Research – Zurich. “By demonstrating a multi-bit phase-change memory technology which achieves for the first time reliability levels akin to those required for enterprise applications, we made a big step towards enabling practical memory devices based on multi-bit PCM.”
Multi-level Phase Change Memory Breakthrough
To achieve this breakthrough demonstration, IBM scientists in Zurich used advanced modulation coding techniques to mitigate the problem of short-term drift in multi-bit PCM, which causes the stored resistance levels to shift over time and in turn creates read errors. Until now, reliable retention of data had only been shown for single-bit-per-cell PCM; no such results on multi-bit PCM had been reported.
PCM stores data bits by exploiting the resistance change that occurs when the material, an alloy of various elements, changes phase from crystalline (low resistance) to amorphous (high resistance). In a PCM cell, where a phase-change material is deposited between a top and a bottom electrode, the phase change can be controllably induced by applying voltage or current pulses of different strengths. These heat up the material and, when distinct temperature thresholds are reached, cause it to change from crystalline to amorphous or vice versa.
In addition, depending on the voltage, more or less of the material between the electrodes will undergo a phase change, which directly affects the cell's resistance. Scientists exploit that aspect to store not just one bit, but multiple bits per cell. In the present work, IBM scientists used four distinct resistance levels to store the bit combinations “00”, “01”, “10” and “11”.
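In code, the two-bits-per-cell scheme amounts to a map from bit pairs to nominal resistance levels, with read-out picking the nearest level. The resistance values below are invented for illustration; real level spacing depends on the cell geometry and material.

```python
import math

# Hypothetical nominal levels (ohms); real values depend on the device.
LEVELS = {"00": 1e3, "01": 1e4, "10": 1e5, "11": 1e6}

def read_bits(measured_ohms):
    # Decode a measured resistance to the nearest nominal level,
    # compared on a log scale since the levels span several decades.
    return min(LEVELS, key=lambda bits: abs(math.log10(measured_ohms)
                                            - math.log10(LEVELS[bits])))

print(read_bits(8.2e3))  # '01': closer to 1e4 than to 1e3 on a log scale
```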
To achieve the demonstrated reliability, crucial technical advancements in the “read” and “write” process were necessary. The scientists implemented an iterative “write” process to overcome deviations in the resistance due to inherent variability in the memory cells and the phase-change materials:
“We apply a voltage pulse based on the deviation from the desired level and then measure the resistance. If the desired level of resistance is not achieved, we apply another voltage pulse and measure again – until we achieve the exact level,” explains Pozidis.
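The program-and-verify loop Pozidis describes can be sketched as below. The cell model is a deliberately crude stand-in (each pulse moves the resistance a fraction of the way toward the target); the real pulse-response physics is far more complex.

```python
def iterative_write(cell, target_ohms, tolerance=0.05, max_pulses=20):
    # Program-and-verify: size each pulse by the deviation from the
    # target resistance, re-measure, and repeat until within tolerance.
    for _ in range(max_pulses):
        deviation = (target_ohms - cell.measure()) / target_ohms
        if abs(deviation) <= tolerance:
            return cell.measure()
        cell.apply_pulse(deviation)  # larger deviation -> stronger pulse
    raise RuntimeError("cell did not converge to the target level")

class ToyCell:
    # Crude, noise-free stand-in for a PCM cell.
    def __init__(self, ohms):
        self.ohms = ohms
    def measure(self):
        return self.ohms
    def apply_pulse(self, deviation):
        self.ohms *= 1 + 0.5 * deviation

cell = ToyCell(1e3)
final = iterative_write(cell, target_ohms=1e4)
print(round(final))  # lands within 5% of the 10 kOhm target
```

The loop structure is why the article speaks of a worst-case latency: the number of pulses varies per cell, and the quoted 10 microseconds bounds the slowest case.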
Despite using the iterative process, the scientists achieved a worst-case write latency of about 10 microseconds, which represents a 100x performance increase over even the most advanced Flash memory on the market today.
For demonstrating reliable read-out of data bits, the scientists needed to tackle the problem of resistance drift. Because of structural relaxation of the atoms in the amorphous state, the resistance increases over time after the phase change, eventually causing errors in the read-out. To overcome that issue, the IBM scientists applied an advanced modulation coding technique that is inherently drift-tolerant. The modulation coding technique is based on the fact that, on average, the relative order of programmed cells with different resistance levels does not change due to drift.
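A toy version of the drift-tolerance argument: if drift scales every cell's resistance by a similar factor, absolute thresholds fail but relative order survives, so a rank-based decoder still recovers the levels. This sketch assumes equal numbers of cells per level, a simplification the real coding scheme does not require.

```python
def decode_by_rank(resistances, levels=4):
    # Sort cells by resistance and assign levels by relative order
    # instead of comparing against fixed (drift-sensitive) thresholds.
    order = sorted(range(len(resistances)), key=lambda i: resistances[i])
    per_level = len(resistances) // levels
    decoded = [0] * len(resistances)
    for rank, idx in enumerate(order):
        decoded[idx] = rank // per_level
    return decoded

# Cells programmed to levels 0..3, then drifted upward by a common
# factor: the absolute values change, the relative order does not.
programmed = [1e3, 1e4, 1e5, 1e6]
drifted = [r * 2.7 for r in programmed]
print(decode_by_rank(drifted))  # [0, 1, 2, 3]
```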
Using that technique, the IBM scientists were able to mitigate drift and demonstrate long-term retention of bits stored in a subarray of 200,000 cells of their PCM test chip, fabricated in 90-nanometer CMOS technology.
The PCM test chip was designed and fabricated by scientists and engineers located in Burlington, Vermont; Yorktown Heights, New York and in Zurich. This retention experiment has been under way for more than five months, indicating that multi-bit PCM can achieve a level of reliability that is suitable for practical applications.
The PCM research project at IBM Research – Zurich will continue at the recently opened Binnig and Rohrer Nanotechnology Center. The center, jointly operated by IBM and ETH Zurich as part of a strategic partnership in nanosciences, offers cutting-edge infrastructure, including a large cleanroom for micro- and nanofabrication as well as six “noise-free” labs, specially shielded laboratories for highly sensitive experiments.
The paper “Drift-tolerant Multilevel Phase-Change Memory” by N. Papandreou, H. Pozidis, T. Mittelholzer, G.F. Close, M. Breitwisch, C. Lam and E. Eleftheriou, was recently presented by Haris Pozidis at the 3rd IEEE International Memory Workshop in Monterey, CA.