(Reuters) - A Houston federal judge awarded BMC Software Inc over $21 million in attorneys' fees and other litigation costs from International Business Machines Corp on Monday, months after BMC won $1.6 billion from the tech company in a dispute over mainframe software.
U.S. District Judge Gray Miller said BMC was entitled to the award, between the $30.2 million BMC had requested and $13.9 million IBM had proposed, based on the terms of the agreement that IBM breached.
An IBM spokesperson declined to comment.
BMC attorney Sean Gorman of Bracewell LLP said Tuesday that the "significant" award "underscores our success for BMC against a well-funded, intense and aggressive defendant."
Miller awarded BMC over $1.6 billion in May after finding IBM broke an agreement by inducing BMC customer AT&T Corp to swap out its mainframe software for IBM's competing software.
The companies' contract allowed IBM to service BMC's software on client mainframes for free. IBM said it would not replace BMC clients' software with its own in return.
Miller said that IBM had convinced BMC through fraud to sign a contract that allowed it to "exercise rights without paying for them, secure other contractual benefits, and ultimately acquire one of BMC's core customers."
Miller awarded BMC $717.7 million in unpaid license fees and doubled it based on IBM's misconduct, which he said "offends the sense of justice and propriety that the public expects from American businesses." Miller also awarded BMC $168.2 million in interest.
The agreement entitled BMC to an additional $21.6 million, Miller ruled Monday, citing a contract provision that the losing party in any litigation between the companies would pay the winner's reasonable attorneys' fees and costs.
Miller found the number of hours Bracewell billed was reasonable, as was the rate it charged. He reduced the award from the amount BMC requested based on the time Bracewell spent on BMC's unsuccessful trade-secret claims and other considerations.
The case is BMC Software Inc v. International Business Machines Corp, U.S. District Court for the Southern District of Texas, No. 4:17-cv-02254.
For BMC: Sean Gorman and Christopher Dodson of Bracewell
For IBM: Richard Werder of Quinn Emanuel Urquhart & Sullivan
Last week, after IBM’s report of positive quarterly earnings, CEO Arvind Krishna and CNBC’s Jim Cramer shared their frustration that IBM’s stock “got clobbered.” IBM’s stock price immediately fell by 10%, while the S&P 500 remained steady (Figure 1).
While a five-day stock price fluctuation is by itself meaningless, questions remain about IBM’s longer-term picture. “These are great numbers,” declared Krishna.
“You gave solid revenue growth and solid earnings,” Cramer sympathized. “You far exceeded expectations. Maybe someone is changing the goal posts here?”
It is also possible that Krishna and Cramer missed where today’s goal posts are located. Strong quarterly numbers do not a digital winner make. They may induce the stock market to regard a firm as a valuable cash cow, like other remnants of the industrial era. But to become a digital winner, a firm must take the kind of steps that Satya Nadella took at Microsoft: kill its dogs, commit to a mission of customer primacy, identify real growth opportunities, transform its culture, make empathy central, and unleash its agilists. (Figure 2)
Since becoming CEO, Nadella has been brilliantly successful at Microsoft, growing market capitalization by more than a trillion dollars.
Krishna has been IBM CEO since April 2020. He began his career at IBM in 1990, and had been managing IBM’s cloud and research divisions since 2015. He was a principal architect of the Red Hat acquisition.
There are remarkable parallels between the careers of Krishna and Nadella.
· Both are Indian-American engineers who were born in India.
· Both worked at the firm for several decades before they became CEOs.
· Prior to becoming CEOs, both were in charge of cloud computing.
Both inherited companies in trouble. Microsoft was stagnating after CEO Steve Ballmer, while IBM was in rapid decline after CEO Ginni Rometty: the once-famous “Big Blue” had become known as a “Big Bruise.”
Although it is still early days in Krishna’s CEO tenure, IBM has underperformed the S&P 500 since he took over (Figure 3).
More worrying is the fact that Krishna has not yet completed the steps that Nadella took in his first 27 months. (Figure 1).
Nadella wrote off the Nokia phone business and declared that Microsoft would no longer treat its flagship Windows as the center of the business. This freed up energy and resources to focus on creating winning businesses.
By contrast, Krishna has yet to jettison IBM’s most distracting baggage:
· Commitment to maximizing shareholder value (MSV): For the two prior decades, IBM was the public champion of MSV, first under CEO Sam Palmisano (2001-2011), and again under Rometty (2012-2020)—a key reason behind IBM’s calamitous decline (Figure 2). Krishna has yet to explicitly renounce IBM’s MSV heritage.
· Top-down bureaucracy: The necessary accompaniment of MSV is top-down bureaucracy, which flourished under CEOs Palmisano and Rometty. Here too, bureaucratic processes must be explicitly eradicated, otherwise they become permanent weeds.
· The ‘Watson problem’: IBM’s famous computer, Watson, may have won ‘Jeopardy!’ but it continues to have problems in the business marketplace. In January 2022, IBM reported that it had sold Watson Health assets to an investment firm for around $1 billion, after acquisitions that had cost some $4 billion. Efforts to monetize Watson continue.
· Infrastructure Services: By spinning off its managed infrastructure services business as a publicly listed company (Kyndryl), IBM created nominal separation, but Kyndryl immediately lost 57% of its share value.
· Quantum Computing: IBM pours resources into research on quantum computing and touts its potential to revolutionize computing. However, unsolved technical problems, notably “decoherence” at scale, mean that any meaningful benefits are still some years away.
· Self-importance: Perhaps the heaviest baggage that IBM has yet to jettison is the over-confidence reflected in sales slogans like “no one ever got fired for hiring IBM.” The subtext is that firms “can leave IT to IBM” and that the safe choice for any CIO is to stick with IBM. It is a status quo mindset, the opposite of the mindset of the clients that IBM needs to attract.
At the outset of his tenure as CEO of Microsoft, Nadella spent the first nine months getting consensus on a simple customer-driven mission statement.
Krishna did address clients in his letter to staff on day one as CEO, adding at the end: “Third, we all must be obsessed with continually delighting our clients. At every interaction, we must strive to offer them the best experience and value. The only way to lead in today’s ever-changing marketplace is to constantly innovate according to what our clients want and need.” This would have been more persuasive if it had come at the beginning of the letter, and if there had been stronger follow-up.
What is IBM’s mission? No clear answer appears from IBM’s own website. The best one gets from About IBM is the fuzzy do-gooder declaration: “IBMers believe in progress — that the application of intelligence, reason and science can improve business, society and the human condition.” Customer primacy is not explicit, thereby running the risk that IBM’s 280,000 employees will assume that the noxious MSV goal is still in play.
At Microsoft, Nadella dismissed competing with Apple on phones or with Google on Search. He defined the two main areas of opportunity—mobility and the cloud.
Krishna has identified the Hybrid Cloud and AI as IBM’s main opportunities. Thus, Krishna wrote in his newsletter to staff on day one as CEO: “Hybrid cloud and AI are two dominant forces driving change for our clients and must have the maniacal focus of the entire company.”
However, both fields are now very crowded. IBM is now a tiny player in Cloud in comparison to Amazon, Microsoft, and Google. In conversations, Krishna portrays IBM as forging working partnerships with the big Cloud players, and “integrating their offerings in IBM’s hybrid Cloud.” One risk is that the big Cloud players may not facilitate this. The other risk is that IBM will attract only lower-performing firms that use IBM as a crutch so that they can cling to familiar legacy programs.
At Microsoft, Nadella addressed culture upfront, rejecting Microsoft’s notoriously confrontational culture, and set about instilling a collaborative customer-driven culture throughout the firm.
Although Krishna talks openly to the press, he has not, to my knowledge, frontally addressed the “top-down,” “we know best” culture that prevailed at IBM under his predecessor CEOs. He has, to his credit, pledged “neutrality” with respect to the innovative, customer-centric Red Hat, rather than applying the “Blue washing” that the old IBM systematically applied to its acquisitions to bring them into line with its top-down culture, and he is said to have honored that pledge—so far. But there is little indication that IBM is ready to adopt Red Hat’s innovative culture for itself. It is hard to see these two opposed cultures remaining “neutral” forever. Given the size differential between IBM and Red Hat, the likely winner is easy to predict, unless Krishna makes a more determined effort to transform IBM’s culture.
As in any large tech firm, when Nadella and Krishna took over their respective firms, there were large hidden armies of agilists waiting in the shadows but hamstrung by top-down bureaucracies. At Microsoft, Nadella’s commitment to “agile, agile, agile,” combined with a growth mindset, enabled a fast start. At IBM, if Krishna has any passion for Agile, he has not yet shared it widely.
Although IBM has made progress under Krishna, it is not yet on a path to become a clear digital winner.
Bracewell attorneys were awarded over $21 million in fees and costs on behalf of BMC Software Inc., in connection with a $1.6 billion judgment against IBM for fraudulent inducement and violation of a licensing agreement.
U.S. District Judge Gray H. Miller for the Southern District of Texas found that, as the prevailing party, BMC was entitled to recover legal fees and costs of $21,615,144 for the “extraordinary result” of the Bracewell team.
I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.
Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.
Edge In, not Cloud Out
In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.
A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
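The hub-and-spoke control-plane idea described above can be sketched in a few lines of Python. This is an illustrative toy model, not IBM's actual platform API; the class and method names (`Hub`, `Spoke`, `rollout`) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Spoke:
    """An edge location (e.g., a factory floor or retail branch) that runs apps locally."""
    name: str
    apps: dict = field(default_factory=dict)

    def deploy(self, app_name: str, version: str) -> str:
        # In a real platform this step would pull and start a container image.
        self.apps[app_name] = version
        return "running"

@dataclass
class Hub:
    """Central control plane that orchestrates deployments across connected spokes."""
    spokes: list = field(default_factory=list)

    def register(self, spoke: Spoke) -> None:
        self.spokes.append(spoke)

    def rollout(self, app_name: str, version: str) -> dict:
        # Push one application spec to every spoke; collect per-location status.
        return {s.name: s.deploy(app_name, version) for s in self.spokes}

hub = Hub()
hub.register(Spoke("factory-floor-1"))
hub.register(Spoke("retail-branch-7"))
statuses = hub.rollout("defect-detector", "v1.2")
print(statuses)
```

The point of the pattern is that one control action at the hub fans out to many edge locations, while each spoke keeps its own local state.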
“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.
IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed using a single unified control plane.
IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).
IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.
It is important to mention that IBM is designing its edge platforms with labor costs and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.
Why edge is important
Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, including by industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.
Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud are then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.
IBM at the Edge
In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.
Example #1 – McDonald’s drive-thru
Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.
McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
Example #2 – Boston Dynamics and Spot the agile mobile robot
According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help boost future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.
To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot, the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot's wireless mobility uses self-contained AI/ML that doesn't require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.
IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.
IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
IBM market opportunities
Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.
Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.
Challenges with scaling
“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”
Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.
IBM AI entry points at the edge
IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.
IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.
Three prior industrial revolutions, beginning in the 1700s, have led up to our current in-progress fourth revolution, Industry 4.0, which promotes digital transformation.
Manufacturing is the fastest growing and the largest of IBM's four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.
For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM, and the partnership has established a set of joint objectives.
Maximo Application Suite
IBM's Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.
IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.
Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
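The drift monitoring described above can be illustrated with a deliberately crude sketch: compare the live feature distribution at the edge against the training-time baseline and flag the model for retraining when the shift is large. The functions, data, and threshold below are hypothetical, not IBM's actual drift metric.

```python
import statistics

def drift_score(baseline, live):
    """Crude drift signal: how far the live data's mean has shifted from the
    training-time baseline, in units of the baseline's standard deviation."""
    shift = abs(statistics.mean(live) - statistics.mean(baseline))
    return shift / (statistics.stdev(baseline) or 1e-9)

def needs_retraining(baseline, live, threshold=0.5):
    # Flag the model for Day-2 retraining when the shift exceeds the threshold.
    return drift_score(baseline, live) > threshold

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]   # feature values seen at training time
stable   = [10.05, 9.95, 10.0, 10.1]            # live data, same distribution
shifted  = [11.0, 11.2, 10.9]                   # live data after the environment changed

print(needs_retraining(baseline, stable))   # False
print(needs_retraining(baseline, shifted))  # True
```

Production systems use richer distribution tests than a mean shift, but the control loop is the same: monitor, score, and trigger retraining when the score crosses a threshold.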
Day-2 AI Operations (retraining and scaling)
Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.
IBM recognizes the advantages of performing Day-2 AI operations, which include scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.
A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).
“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”
Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
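The federated learning idea Dr. Fuller points to can be sketched with a minimal FedAvg-style loop: each spoke computes a model update on its own data, and the hub only ever sees the updated weights, never the raw data. This is a simplified single-parameter illustration under stated assumptions (a linear model y = w·x and plain averaging), not IBM's implementation.

```python
def local_update(w, local_data, lr=0.02):
    """One gradient step for the model y = w * x, computed entirely on a
    spoke's local data; the raw data never leaves the edge location."""
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, spokes_data):
    """The hub broadcasts the global weight, each spoke returns an updated
    weight, and the hub aggregates by simple averaging (FedAvg-style)."""
    updates = [local_update(global_w, data) for data in spokes_data]
    return sum(updates) / len(updates)

# Two spokes whose local measurements both follow y ≈ 2x
spoke_a = [(1.0, 2.0), (2.0, 4.0)]
spoke_b = [(3.0, 6.0), (4.0, 8.0)]

w = 0.0
for _ in range(50):
    w = federated_round(w, [spoke_a, spoke_b])
print(round(w, 2))  # converges toward 2.0
```

Only weights cross the network in each round, which is what makes the approach attractive when privacy or bandwidth rules out centralizing edge data.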
Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.
The graphic above compares the status quo method of performing Day-2 operations, using centralized applications and a centralized data plane, with the more efficient managed hub-and-spoke method, which uses distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.
Data Fabric Extensions to Hub and Spokes
IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM's hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated at a higher level. This architecture has four important data management capabilities.
In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.
Multicloud and Edge platform
In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still run servers, but as a single-node rather than clustered deployment.
For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device deployments.
Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.
Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.
First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).
Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.
Telco network intelligence and slice management with AI/ML
Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge, and 5G brings these providers several benefits.
The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
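The slicing concept above, multiple virtual networks with different characteristics on one physical network, can be made concrete with a small sketch. The slice names, fields, and selection rule are illustrative only, not any 3GPP or IBM API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkSlice:
    """A virtual end-to-end logical network with its own service characteristics."""
    name: str
    max_latency_ms: float
    min_bandwidth_mbps: float

def pick_slice(slices, latency_need_ms, bandwidth_need_mbps):
    """Return the first slice whose characteristics satisfy the application's
    requirements, or None if no slice qualifies."""
    for s in slices:
        if s.max_latency_ms <= latency_need_ms and s.min_bandwidth_mbps >= bandwidth_need_mbps:
            return s
    return None

# Two slices sharing one physical network: one tuned for latency, one for bandwidth
slices = [
    NetworkSlice("low-latency", max_latency_ms=5.0, min_bandwidth_mbps=50.0),
    NetworkSlice("high-bandwidth", max_latency_ms=50.0, min_bandwidth_mbps=1000.0),
]

# A latency-sensitive edge application needs <= 10 ms and >= 20 Mbps
chosen = pick_slice(slices, latency_need_ms=10.0, bandwidth_need_mbps=20.0)
print(chosen.name)  # low-latency
```

Real slice selection and assurance are far richer (user groups, isolation, SLA monitoring), but the core idea is matching application requirements to per-slice characteristics.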
An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.
5G network slicing and slice management
Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.
Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.
Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”
In IBM's current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product, using AI and automation to orchestrate, operate, and optimize multivendor network functions and services.
Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.
5G radio access
Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.
The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces, optimizing the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.
The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows, with several areas currently under development.
IBM Cloud and Infrastructure
The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM's integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub-and-spoke model.
IBM's focus on "edge in" means it can provide the infrastructure through offerings like the example shown above: software-defined storage for a federated-namespace data lake that spans other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.
As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).
Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.
IBM's goal is not to move the entirety of its cloud infrastructure to the edge. That would add little value: the edge would simply function as a spoke operating on actions and configurations dictated by the hub.
IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.
Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.
Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.
However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.
It is reassuring that IBM has a plan and that its plan is sound.
Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.
IBM said 60pc of organisations raised their product or service prices due to a data breach.
Consumers are feeling the effects of data breaches as the average cost of a breach has reached a record high of $4.35m, according to the latest IBM Security report.
The report suggests data breach costs have increased by nearly 13pc over the last two years. It also highlights the lingering impact these breaches can have, as nearly 50pc of the costs are incurred more than a year after the breach.
Rising costs are also causing impacts for consumers, as 60pc of surveyed organisations raised their product or service prices due to a data breach. IBM noted that this is occurring at a time when the cost of goods is soaring worldwide amid inflation and supply chain issues.
Compromised credentials continued to be the most common cause of a breach, standing at 19pc. This was followed by phishing at 16pc, which was also the most costly cause of a breach, leading to $4.91m in average breach costs for responding organisations.
IBM’s report last year noted that the rapid shift to remote working and operations during the pandemic had an impact on the average cost of a data breach.
IBM found that ransomware and destructive attacks represented 28pc of breaches among critical infrastructure organisations studied. This includes companies in financial services, industry, transport and healthcare.
Despite the risks that a data breach poses for these organisations and global warnings about cyberattacks in this space, only 21pc of critical infrastructure organisations studied have adopted a zero-trust security model.
IBM said 17pc of critical infrastructure breaches were caused due to a business partner being compromised first.
Healthcare in particular is facing the pressure of rising data breach costs. This sector saw the highest-cost breaches for the 12th year in a row. Average data breach costs for healthcare organisations increased by nearly $1m to reach a record high of $10.1m.
A report last month by cybersecurity firm Rapid7 found that financial data is leaked most often from ransomware attacks, followed by customer or patient data.
In cases of ransomware attacks, cybersecurity experts generally advise against paying a ransom. IBM’s report suggests that companies see little benefit when they choose to pay a ransomware attacker’s demands.
The report found businesses that paid ransom demands saw only $610,000 less in average breach costs compared to those that chose not to pay, not including the ransom amount.
However, when accounting for the average ransom payment – estimated to be $812,000 in 2021 – the report suggests businesses that pay could net higher total costs, while also potentially funding future cyberattacks.
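The report's arithmetic can be sketched in a few lines. This is an illustrative calculation using only the two averages cited above; the variable names are ours, not IBM's:

```python
# Averages cited in the IBM Security report, as reported in the article.
avg_savings_from_paying = 610_000   # lower average breach cost for organizations that paid
avg_ransom_payment = 812_000        # estimated average ransom payment in 2021

# Once the ransom itself is included, paying yields a higher total cost on average.
net_extra_cost_of_paying = avg_ransom_payment - avg_savings_from_paying
print(f"Net additional cost of paying: ${net_extra_cost_of_paying:,}")
# → Net additional cost of paying: $202,000
```

On these averages alone, a paying organization would come out roughly $202,000 worse off, before considering the report's point that payments may also fund future attacks.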
IBM found that businesses that adopted a hybrid cloud model observed lower breach costs compared to businesses with a solely public or private cloud model.
Hybrid cloud environments were also the most prevalent infrastructure among studied organisations, at 45pc.
The report highlighted that 45pc of studied breaches occurred in the cloud, emphasising the importance of cloud security. However, 43pc of organisations in the report stated they are only in the early stages or have not started implementing security practices to protect their cloud environments.
More than 60pc of studied organisations said they are not sufficiently staffed to meet their security needs. These organisations averaged $550,000 more in breach costs than those that said they are sufficiently staffed.
“The more businesses try to perfect their perimeter instead of investing in detection and response, the more breaches can fuel cost of living increases,” said IBM Security X-Force global head Charles Henderson.
“This report shows that the right strategies coupled with the right technologies can help make all the difference when businesses are attacked.”
As with many goods and services, healthcare has not been immune to inflationary pressures. U.S. health systems faced a combination of rising supply and labor expenses in recent months, according to Healthcare Dive, even as patient volumes have increased. As a result, providers are likely to pass on increased costs to commercial insurers during upcoming contract negotiations, Fierce Healthcare reported last week.
But from the employer perspective, employee benefits programs — including health benefits — remain a key component of talent management in a difficult hiring market, according to a McKinsey & Co. report from May. The consultancy also found that many employers have turned to HDHPs, among other strategies, as a way to address rising costs.
Yet increasing employee deductibles creates a “fundamental tension” between employers’ dual goals of securing workers’ well-being and controlling costs, according to EBRI.
“On the one hand, employers are more frequently implementing financial wellness programs as a means to boost their employees’ financial wellbeing,” Jake Spiegel, research associate at EBRI, said in the statement. “On the other hand, in an effort to wrangle health care cost increases, employers often turn to raising their health plan’s deductible, potentially offsetting the positive impact of any financial wellness initiatives.”
In their report, EBRI’s researchers also noted the role that health savings accounts, which may be offered in conjunction with an HDHP, may play in balancing increased out-of-pocket costs. Those enrolled in an HSA-eligible HDHP may be able to cover those costs using HSA contributions made by themselves and their employers. A previous EBRI report highlighted the role that pre-deductible coverage of chronic condition medications may play in HSA-eligible plans.
Aside from increasing patient deductibles, there are a variety of other cost-saving healthcare measures employers may seek. An executive for insurer NFP previously told HR Dive that these options can include care navigation services, virtual care options and value-based care arrangements, among others.
At this point we’re all familiar with the global chip shortage. It’s affected every single industry in the world, it seems. Now IBM has come up with a new way to manufacture silicon wafers that it says could ease the strain a bit. It partnered with Tokyo Electron (TEL) to create a new method for stacking silicon wafers vertically. Although IBM’s most advanced research node is currently 2nm, it doesn’t state which process it’s using for this technique, mentioning only that it is stacking 300mm (12-inch) wafers.
IBM’s announcement claims it’s the first of its kind for a wafer of this size. The goal is to advance Moore’s Law by making wafer stacking a simpler process, allowing IBM to fit more transistors into a given volume via stacking. It notes that 3D stacking has traditionally only been used in “high end operations” such as High-Bandwidth Memory (HBM). AMD has notably also done it recently with the L3 cache on its Ryzen 7 5800X3D CPU, and it was the first GPU company to employ HBM on a GPU, with its Fiji and Fury families back in 2015.
IBM’s new process is essentially a novel way to join silicon wafers together. Traditional chip-stacking requires through-silicon vias (TSVs) between the layers. This allows electricity to flow upwards into the stack, and for both layers to work in tandem. This requires the backside of the layer to be thinned to reveal the TSVs for the other layer to connect to them. The layers in a stack are very thin, measuring less than 100 microns. Due to their fragility, they require a carrier wafer to support them.
Typically these carrier wafers are made of glass. The carrier wafer is bonded to the device wafer to make sure it can go through production without being damaged. Once production is finished, the carrier is removed with a UV laser. In some cases a silicon carrier can be used instead, but separating it from the layer requires mechanical force, which can endanger the integrity of the wafer it’s supposed to be protecting. This is where IBM’s new invention comes into play: it has figured out a way to debond two silicon wafers using an infrared laser, at a wavelength to which silicon is transparent.
This will allow two silicon wafers to be stacked without the use of glass carriers. Instead, manufacturers can skip that step and go straight to silicon-to-silicon bonding. IBM says that, in addition to simplifying the process by removing this extra step, there are other advantages as well: it will help eliminate tool compatibility and chucking issues, introduce fewer defects, and allow for inline testing of thinned wafers. These benefits will enable “advanced chiplet production,” according to IBM. It also says the technology scales very well.
IBM and TEL have been working on this technology since 2018, so it’s been in the hopper for a little while. This could be a crucial development for the industry given where things are headed in silicon fabrication. As node sizes shrink down to sub-2nm, packaging and stacking technologies will become a crucial advantage for companies looking to increase performance when “moving to a smaller node” is no longer an option.
Intel is already looking to begin advanced 3D stacking with Meteor Lake, using its Foveros technology. AMD is way ahead of the game on that front, as mentioned previously, though so far it is only stacking L3 cache on its Zen 3 CPUs. There are rumors it will repeat that with Zen 4 in so-called Raphael-X products. It remains unclear if stacking will also be employed in its upcoming RDNA3 GPUs.
IBM says it’s built a beta tooling facility in Albany, NY to work on its new technology. In the future it will be expanding its work. Its goal is to eventually create a full 3D chip stack using this technology. The company says this advancement will help with supply chain issues, while also allowing for performance benefits too. “We hope our work will help cut down on the number of products needed in the semiconductor supply chain, while also helping drive processing power improvements for years to come,” it stated.
The guides leverage Astadia’s 25+ years of expertise in partnering with organizations to reduce costs, risks and timeframes when migrating their IBM mainframe applications to cloud platforms
BOSTON, August 03, 2022--(BUSINESS WIRE)--Astadia is pleased to announce the release of a new series of Mainframe-to-Cloud reference architecture guides. The documents cover how to refactor IBM mainframe applications to Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and Oracle Cloud Infrastructure (OCI). The documents offer a deep dive into the migration process for all major target cloud platforms using Astadia’s FastTrack software platform and methodology.
As enterprises and government agencies are under pressure to modernize their IT environments and make them more agile, scalable and cost-efficient, refactoring mainframe applications in the cloud is recognized as one of the most efficient and fastest modernization solutions. By making the guides available, Astadia equips business and IT professionals with a step-by-step approach on how to refactor mission-critical business systems and benefit from highly automated code transformation, data conversion and testing to reduce costs, risks and timeframes in mainframe migration projects.
"Understanding all aspects of legacy application modernization and having access to the most performant solutions is crucial to accelerating digital transformation," said Scott G. Silk, Chairman and CEO. "More and more organizations are choosing to refactor mainframe applications to the cloud. These guides are meant to assist their teams in transitioning fast and safely by benefiting from Astadia’s expertise, software tools, partnerships, and technology coverage in mainframe-to-cloud migrations," said Mr. Silk.
The new guides are part of Astadia’s free Mainframe-to-Cloud Modernization series, an ample collection of guides covering various mainframe migration options, technologies, and cloud platforms. The series covers IBM (NYSE:IBM) Mainframes.
In addition to the reference architecture diagrams, these comprehensive guides include various techniques and methodologies that may be used in forming a complete and effective Legacy Modernization plan. The documents analyze the important role of the mainframe platform, and how to preserve previous investments in information systems when transitioning to the cloud.
In each of the IBM Mainframe Reference Architecture white papers, readers will explore:
Benefits, approaches, and challenges of mainframe modernization
Understanding typical IBM Mainframe Architecture
An overview of Azure/AWS/Google Cloud/Oracle Cloud
Detailed diagrams of IBM mappings to Azure/AWS/ Google Cloud/Oracle Cloud
How to ensure project success in mainframe modernization
The guides are available for download here:
To access more mainframe modernization resources, visit the Astadia learning center on www.astadia.com.
Astadia is the market-leading software-enabled mainframe migration company, specializing in moving IBM and Unisys mainframe applications and databases to distributed and cloud platforms in unprecedented timeframes. With more than 30 years of experience, and over 300 mainframe migrations completed, enterprises and government organizations choose Astadia for its deep expertise, range of technologies, and the ability to automate complex migrations, as well as testing at scale. Learn more on www.astadia.com.
Women have been disproportionately saddled with the impact COVID has had on their families and careers.
Nearly 5.4 million women have lost or left their jobs since February 2020, according to data from the National Women’s Law Center. Creating more inclusive work cultures, offering family-friendly benefits, and providing support in the form of mentorship and sponsorship can help women stay on track and make up for pandemic-era losses.
In this week’s top stories, workplace insights platform Comparably recently released its list of top-ranked CEOs, chosen by their female employees. Leaders from Hubspot, IBM, Adobe and others all made the list. For women, it’s not just who they work for, but where: lending firm Clarify Capital ranked the best and worst states for women-owned businesses, based on factors like the percentage of women-owned businesses, the gender pay gap and female unemployment rate in those states.
Expecting women to make it on their own won’t help close the wage gap or get women back to work. Three leadership experts share why sponsorship is a key component to getting more women into leadership roles. While a mentor helps women with their personal and professional goals, a sponsor takes responsibility for promoting an employee to a higher level position.
“Coaching is about development. Mentoring is about guidance. Sponsorship is about pulling someone up and advocating for them,” says Rubina F. Malik, a learning and development adviser at Malik Global Solutions. “More CEOs and higher-ups need to be allies for women. Put them in the spotlight and get them opportunities to be seen.”