I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.
Dr. Fuller is responsible for providing AI- and platform-based innovation for enterprise digital transformation, spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.
Edge In, not Cloud Out
In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.
A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.
IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, with everything managed through a single unified control plane.
IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).
IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.
It is important to mention that IBM is designing its edge platforms with labor costs and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.
Why edge is important
Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge, including by industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.
Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
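To make the latency point concrete, here is a minimal back-of-envelope sketch in Python. The payload size, link speed, round-trip time, and inference times are illustrative assumptions for a hypothetical camera workload, not IBM figures.

# Back-of-envelope comparison: send data to the cloud for inference vs. process it at the edge.
# All numbers below are illustrative assumptions.
PAYLOAD_MB = 5.0        # e.g., a burst of camera frames to analyze
UPLINK_MBPS = 50.0      # edge-site uplink to the cloud
WAN_RTT_MS = 60.0       # network round-trip time to the cloud region
CLOUD_INFER_MS = 20.0   # model inference time on cloud hardware
EDGE_INFER_MS = 35.0    # inference time on a smaller edge accelerator

def cloud_path_ms():
    transfer_ms = (PAYLOAD_MB * 8 / UPLINK_MBPS) * 1000  # time to upload the payload
    return transfer_ms + WAN_RTT_MS + CLOUD_INFER_MS

def edge_path_ms():
    return EDGE_INFER_MS  # the data never leaves the site

print(f"cloud path: {cloud_path_ms():.0f} ms, edge path: {edge_path_ms():.0f} ms")

With these assumed numbers the cloud path takes roughly 880 ms while the edge path takes 35 ms, and the 5 MB payload never has to cross the WAN at all.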
Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to things such as reporting, data summaries, and AI models, without ever exposing the raw data.
IBM at the Edge
In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.
Example #1 – McDonald’s drive-thru
An ordering system using AI and NLP for QSR applications has a global market. (Image: Tim Malone, licensed under CC BY-SA 2.5)
Dr. Fuller’s first example centered on the Quick Service Restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.
McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
Example #2 – Boston Dynamics and Spot the agile mobile robot
The author with Boston Dynamics “Spot the agile mobile robot” at IBM Think 2022
According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.
Mobile readings with Boston Dynamics mobile robot
To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot, the agile mobile robot: a walking, sensing, and actuation platform. Like other edge applications, the robot runs self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine whether required safety equipment is being worn.
IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.
Thermal Inspection of Planar & Non-Planar Assets
IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
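The kind of analysis described above can be sketched very simply: scan a thermal frame for regions that exceed a temperature threshold and flag them for follow-up. The short Python sketch below runs on a synthetic NumPy "thermal image"; the frame, threshold, and flagging logic are illustrative assumptions, not IBM's or National Grid's implementation.

import numpy as np

# Synthetic 2D thermal frame: temperatures in degrees C (illustrative data only).
rng = np.random.default_rng(0)
frame = rng.normal(loc=45.0, scale=3.0, size=(120, 160))
frame[60:66, 80:88] += 40.0  # inject an artificial hot spot on one connector

HOT_THRESHOLD_C = 75.0  # illustrative alert threshold

def find_hot_spots(thermal, threshold):
    """Return (row, col, temperature) for every pixel above the threshold."""
    rows, cols = np.where(thermal > threshold)
    return [(int(r), int(c), float(thermal[r, c])) for r, c in zip(rows, cols)]

hot = find_hot_spots(frame, HOT_THRESHOLD_C)
if hot:
    peak = max(hot, key=lambda p: p[2])
    # In a production system this detection would be converted into a work order
    # in an asset-management system (the article mentions IBM Maximo in that role).
    print(f"{len(hot)} hot pixels, peak {peak[2]:.1f} C at row/col {peak[:2]}")
else:
    print("no hot spots detected")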
IBM market opportunities
Edge Market & Use Cases
Drive-thru orders and mobile robots are just two examples of the millions of potential AI applications that exist at the edge, driven by several billion connected devices.
Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, companies commonly struggle with scalability, data governance, and full-stack solution management.
Challenges with scaling
Challenges in scaling AI Application deployments
“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”
Scaling edge models is complicated because there are so many edge locations, with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.
IBM AI entry points at the edge
IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.
IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent factory operations and telco. Red Hat is also active in the connected vehicles space.
Industry 4.0
Three industrial revolutions, beginning in the 1700s, preceded the current, in-progress fourth revolution, Industry 4.0, which centers on digital transformation.
Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.
Major Automotive OEM
For its Industry 4.0 use case development, IBM, through its product, development, research, and consulting teams, is working with a major automotive OEM. The partnership has established a set of joint objectives.
Maximo Application Suite
IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and it even uses Maximo within its own manufacturing operations.
IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.
Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
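As a concrete illustration of what drift monitoring can look like, the sketch below compares the distribution of one input feature at training time against recent production data using a two-sample Kolmogorov-Smirnov test and raises an alert when they diverge. The data, feature, and alert threshold are illustrative assumptions, not a specific IBM capability.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Feature values seen at training time vs. values observed recently in production.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)  # shifted: simulated drift

# Two-sample KS test: a small p-value suggests the two samples come from different distributions.
result = ks_2samp(training_feature, production_feature)

P_VALUE_ALERT = 0.01  # illustrative alert threshold
if result.pvalue < P_VALUE_ALERT:
    print(f"drift suspected (KS={result.statistic:.3f}, p={result.pvalue:.2e}); flag model for review or retraining")
else:
    print(f"no significant drift detected (KS={result.statistic:.3f}, p={result.pvalue:.2e})")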
Day-2 AI Operations (retraining and scaling)
Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.
IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.
A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).
“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”
Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
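To make the federated idea concrete, here is a minimal federated-averaging (FedAvg-style) sketch for a linear model: each spoke runs a few local training steps on its own data, and only model parameters, never raw data, travel to the hub for aggregation. The synthetic setup is an illustrative assumption, not IBM's implementation.

import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])  # ground-truth weights for the synthetic task

def make_spoke_data(n):
    """Synthetic local dataset for one spoke (edge location)."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, steps=20):
    """A few steps of local gradient descent on a least-squares loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

spokes = [make_spoke_data(n) for n in (200, 500, 300)]  # three spokes of different sizes
w_global = np.zeros(2)

for round_idx in range(5):  # federated rounds coordinated by the hub
    local_weights, sizes = [], []
    for X, y in spokes:
        local_weights.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    # The hub aggregates a weighted average of parameters; raw data stays at each spoke.
    w_global = np.average(local_weights, axis=0, weights=sizes)
    print(f"round {round_idx}: w = {np.round(w_global, 3)}")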
Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications that scales Day-2 AI operations using a hub-and-spoke model.
Data and AI Platform: Scaling Day-2 AI Operations
The graphic above compares the status quo method of performing Day-2 operations, using centralized applications and a centralized data plane, with the more efficient managed hub-and-spoke method, which uses distributed applications and a distributed data plane. The hub allows everything to be managed from a single pane of glass.
Data Fabric Extensions to Hub and Spokes
Extending Data Fabric to Hub and Spokes: Key Capabilities
IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities, summarized in the graphic above.
In addition to AI deployments, the hub-and-spoke architecture and the capabilities mentioned above can be employed more generally to tackle challenges many enterprises face in consistently managing an abundance of devices within and across their locations. Managing the software delivery lifecycle or addressing security vulnerabilities across a vast estate are cases in point.
Multicloud and Edge platform
Multicloud and Edge Platform
In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for locations that still run servers but as a single-node deployment rather than a cluster.
For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, providing full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device-class deployments.
Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from plain containers to full-blown Kubernetes application management, spanning MicroShift, OpenShift, and IBM Edge Application Manager.
Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.
First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), scaling the number of edge locations the product can manage by two to three orders of magnitude. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux and by adding Integrity Shield to protect policies in RHACM.
Red Hat is partnering with IBM Research to advance technologies that protect platform integrity and the integrity of client workloads through the entire software supply chain. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.
Telco network intelligence and slice management with AI/ML
Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge, and 5G brings these providers a range of benefits.
The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
An important aspect of enabling AI at the edge is that IBM must provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.
5G network slicing and slice management
5G Network Slice Management
Network slices are an essential part of IBM's edge infrastructure and must be automated, orchestrated, and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize slice quality of service, measured in terms of bandwidth, latency, or other metrics.
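As a simplified illustration of what slice-level QoS management involves, the sketch below checks measured slice metrics against their targets and flags slices needing corrective action (for example, scaling the network functions that serve them). The slice definitions, metrics, and thresholds are illustrative assumptions, not IBM's network automation product.

from dataclasses import dataclass

@dataclass
class SliceStatus:
    name: str
    target_latency_ms: float
    target_bandwidth_mbps: float
    measured_latency_ms: float
    measured_bandwidth_mbps: float

    def violations(self):
        """List which QoS targets this slice is currently missing."""
        issues = []
        if self.measured_latency_ms > self.target_latency_ms:
            issues.append(f"latency {self.measured_latency_ms:.0f} ms exceeds target {self.target_latency_ms:.0f} ms")
        if self.measured_bandwidth_mbps < self.target_bandwidth_mbps:
            issues.append(f"bandwidth {self.measured_bandwidth_mbps:.0f} Mbps below target {self.target_bandwidth_mbps:.0f} Mbps")
        return issues

# Illustrative snapshot of two slices with different characteristics.
slices = [
    SliceStatus("low-latency-robotics", 10, 50, measured_latency_ms=14, measured_bandwidth_mbps=60),
    SliceStatus("high-bandwidth-video", 50, 400, measured_latency_ms=30, measured_bandwidth_mbps=380),
]

for s in slices:
    problems = s.violations()
    if problems:
        # A real controller would trigger scaling or re-optimization of the slice here.
        print(f"slice {s.name}: action needed ({'; '.join(problems)})")
    else:
        print(f"slice {s.name}: within targets")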
5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.
Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.
Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusions and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”
In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product, using AI and automation to orchestrate, operate, and optimize multivendor network functions and services.
Future use of these capabilities by existing IBM clients that run Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.
5G radio access
Intelligence @ the Edge of 5G networks
Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the Distributed Unit (DU) and Centralized Unit (CU) from the Baseband Unit used in 4G and connects them with open interfaces.
The O-RAN system is more flexible. It uses AI over those open interfaces to optimize how a device is categorized by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.
The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows across several areas currently under development.
IBM Cloud and Infrastructure
The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM's integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge.
Secure Decentralized Edge Data Lake
IBM's focus on “edge in” means it can provide infrastructure such as the example shown above: software-defined storage for a federated-namespace data lake that surrounds other hyperscalers' clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.
As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).
Wrap up
Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.
IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value; it would simply reproduce a hub-to-spoke model in which the edge operates on actions and configurations dictated by the hub.
IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.
Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.
Developing a complete and comprehensive AI/ML edge architecture, and in fact an entire ecosystem, is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.
However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.
It is reassuring that IBM has a plan and that its plan is sound.
Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.
Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.
IBM has published details on a collection of techniques it hopes will usher in quantum advantage, the inflection point at which the utility of quantum computers exceeds that of traditional machines.
The focus is on a process known as error mitigation, which is designed to improve the consistency and reliability of circuits running on quantum processors by eliminating sources of noise.
IBM says that advances in error mitigation will allow quantum computers to scale steadily in performance, in a pattern similar to that exhibited over the years by classical computing.
Although plenty has been said about the potential of quantum computers, which exploit a phenomenon known as superposition to perform calculations extremely quickly, the reality is that current systems are incapable of outstripping traditional supercomputers on a consistent basis.
A lot of work is going into improving performance by increasing the number of qubits on a quantum processor, but researchers are also investigating opportunities related to qubit design, the pairing of quantum and classical computers, new refrigeration techniques and more.
IBM, for its part, has now said it believes an investment in error mitigation will bear the most fruit at this stage in the development of quantum computing.
“Indeed, it is widely accepted that one must first build a large fault-tolerant quantum processor before any of the quantum algorithms with proven super-polynomial speed-up can be implemented. Building such a processor therefore is the central goal for our development,” explained IBM in a blog post.
“However, latest advances in techniques we refer to broadly as quantum error mitigation allow us to lay out a smoother path towards this goal. Along this path, advances in qubit coherence, gate fidelities, and speed immediately translate to measurable advantage in computation, akin to the steady progress historically observed with classical computers.”
The post is geared towards a highly technical audience and goes into great detail, but the main takeaway is this: the ability to quiet certain sources of error will allow for increasingly complex quantum workloads to be executed with reliable results.
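One widely studied error-mitigation technique is zero-noise extrapolation: run the same circuit at several deliberately amplified noise levels, then extrapolate the measured expectation value back toward the zero-noise limit. The sketch below fits a simple linear extrapolation to made-up measurements; the numbers are illustrative and the linear fit is only one of several extrapolation choices, not a statement of IBM's exact methods.

import numpy as np

# Zero-noise extrapolation (ZNE), illustrated with made-up data:
# measure an observable at several artificially amplified noise levels,
# then extrapolate the trend back to zero noise.
noise_scale = np.array([1.0, 2.0, 3.0])              # 1.0 = native noise; >1 = amplified
measured_expectation = np.array([0.78, 0.61, 0.45])  # illustrative noisy measurements

# Fit a straight line E(s) = a*s + b and evaluate it at s = 0.
a, b = np.polyfit(noise_scale, measured_expectation, deg=1)
zne_estimate = b  # value of the fit at zero noise

print(f"raw (native-noise) value: {measured_expectation[0]:.2f}")
print(f"zero-noise extrapolated estimate: {zne_estimate:.2f}")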
According to IBM, the latest error mitigation techniques go “beyond just theory”, with the advantage of these methods having already been demonstrated on some of the most powerful quantum hardware currently available.
“At IBM Quantum, we plan to continue developing our hardware and software with this path in mind,” the company added.
“At the same time, together with our partners and the growing quantum community, we will continue expanding the list of problems that we can map to quantum circuits and develop better ways of comparing quantum circuit approaches to traditional classical methods to determine if a problem can demonstrate quantum advantage. We fully expect that this continuous path that we have outlined will bring us practical quantum computing.”
NEW YORK, Aug. 9, 2022 /PRNewswire/ -- The Insight Partners published its latest research study, "Predictive Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Component [Solution (Risk Analytics, Marketing Analytics, Sales Analytics, Customer Analytics, and Others) and Service], Deployment Mode (On-Premise and Cloud-Based), Organization Size [Small and Medium Enterprises (SMEs) and Large Enterprises], and Industry Vertical (IT & Telecom, BFSI, Energy & Utilities, Government and Defence, Retail and e-Commerce, Manufacturing, and Others)". According to the study, the global predictive analytics market size is projected to grow from $12.49 billion in 2022 to $38.03 billion by 2028, at a CAGR of 20.4% over the forecast period.
Download PDF Brochure of Predictive Analytics Market Size - COVID-19 Impact and Global Analysis with Strategic Developments at: https://www.theinsightpartners.com/sample/TIPTE100000160/
Predictive Analytics Market Report Scope & Strategic Insights:
Report Coverage | Details
Market Size Value in 2022 | US$ 12.49 Billion
Market Size Value by 2028 | US$ 38.03 Billion
Growth rate | CAGR of 20.4% from 2022 to 2028
Forecast Period | 2022-2028
Base Year | 2022
No. of Pages | 229
No. of Tables | 142
No. of Charts & Figures | 100
Historical data available | Yes
Segments covered | Component, Deployment Mode, Organization Size, and Industry Vertical
Regional scope | North America; Europe; Asia Pacific; Latin America; MEA
Country scope | US, UK, Canada, Germany, France, Italy, Australia, Russia, China, Japan, South Korea, Saudi Arabia, Brazil, Argentina
Report coverage | Revenue forecast, company ranking, competitive landscape, growth factors, and trends
Predictive Analytics Market: Competitive Landscape and Key Developments
IBM Corporation; Microsoft Corporation; Oracle Corporation; SAP SE; Google LLC; SAS Institute Inc.; Salesforce.com, inc.; Amazon Web Services; Hewlett Packard Enterprise Development LP (HPE); and NTT DATA Corporation are among the leading players profiled in this report of the predictive analytics market. Several other essential predictive analytics market players were analyzed for a holistic view of the predictive analytics market and its ecosystem. The report provides detailed predictive analytics market insights, which help the key players strategize their growth.
Inquiry Before Purchase: https://www.theinsightpartners.com/inquiry/TIPTE100000160/
In 2022, Microsoft partnered with Teradata, a provider of a multi-cloud platform for enterprise analytics, for the integration of Teradata's Vantage data platform into Microsoft Azure.
In 2021, IBM and Black & Veatch collaborated to assist customers in keeping their assets and equipment working at peak performance and reliability by integrating AI with real-time data analytics.
In 2020, Microsoft partnered with SAS for the extension of their business solutions. As a part of this move, the companies will migrate SAS analytical products and solutions to Microsoft Azure as a preferred cloud provider for SAS cloud.
Increase in Uptake of Predictive Analytics Tools Propels Predictive Analytics Market Growth:
Predictive analytics tools use data to state the probabilities of the possible outcomes in the future. Knowing these probabilities can help users plan many aspects of their business. Predictive analytics is part of a larger set of data analytics; other aspects of data analytics include descriptive analytics, which helps users understand what their data represent; diagnostic analytics, which helps identify the causes of past events; and prescriptive analytics, which provides users with practical advice to make better decisions.
Have a question? Speak to Research Analyst: https://www.theinsightpartners.com/speak-to-analyst/TIPTE100000160
Prescriptive analytics is similar to predictive analytics. Predictive modeling is the most technical aspect of predictive analytics. Data analysts perform modeling with statistics and other historical data. The model then estimates the likelihood of different outcomes. In e-commerce, predictive modeling tools help analyze customer data. They can predict how many people are likely to buy a certain product, as well as the return on investment (ROI) of targeted marketing campaigns. Some software-as-a-service (SaaS) tools may collect data directly from online stores, such as Amazon Marketplace.
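As a small, concrete example of the predictive modeling described above, the sketch below trains a logistic-regression model on synthetic historical customer data and estimates the probability that new customers will buy. The features, data, and model choice are illustrative assumptions, not any vendor's product.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic "historical" data: [past_purchases, minutes_on_site] per customer,
# with a label indicating whether the customer bought the product.
n = 1_000
X = np.column_stack([
    rng.poisson(lam=2.0, size=n),        # past purchases
    rng.exponential(scale=8.0, size=n),  # minutes spent on the site
])
logit = -2.0 + 0.6 * X[:, 0] + 0.1 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated purchase outcomes

model = LogisticRegression().fit(X, y)

# Score two new customers and report their estimated purchase probabilities.
new_customers = np.array([[0, 2.0], [5, 20.0]])
for features, prob in zip(new_customers, model.predict_proba(new_customers)[:, 1]):
    print(f"customer {features}: estimated purchase probability {prob:.2f}")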
Predictive analytics tools may benefit social media marketing by guiding users to plan the type of content to post; these tools also recommend the best time and day to post. Manufacturing industries need predictive analytics to manage inventory, supply chains, and staff hiring processes. Transport planning and execution are performed more efficiently with predictive analytics tools. For instance, SAP is a leading multinational software company, and its SAP Predictive Analytics was one of the leading data analytics platforms in the world. Now, the software is gradually being integrated into SAP's larger Analytics Cloud platform, which does more business intelligence (BI) than SAP Predictive Analytics did. SAP Analytics Cloud, which works on all devices, utilizes artificial intelligence (AI) to improve business planning and forecasting. This analytics platform can be easily extended to businesses of all sizes.
North America is one of the most vital regions for the uptake and growth of new technologies due to favorable government policies that boost innovation, the presence of a substantial industrial base, and high purchasing power, especially in developed countries such as the US and Canada. The industrial sector in the US is a prominent market for security analytics. The country has a large number of predictive analytics platform developers. The COVID-19 pandemic forced companies to adopt a work-from-home culture, increasing the demand for big data and data analytics.
Avail Lucrative DISCOUNTS on "Predictive Analytics Market" Research Study: https://www.theinsightpartners.com/discount/TIPTE100000160/
The pandemic created an enormous challenge for businesses in North America to continue operating despite massive shutdowns of offices and other facilities. Furthermore, the surge in digital traffic presented an opportunity for numerous online frauds, phishing attacks, denial of inventory, and ransomware attacks. Due to the increased risk of cybercrimes, enterprises began adopting advanced predictive analytics-based solutions to detect and manage any abnormal behavior in their networks. Thus, with the growing number of remote working facilities, the need for predictive analytics solutions also increased in North America during the COVID-19 pandemic.
Predictive Analytics Market: Industry Overview
The predictive analytics market is segmented on the basis of component, deployment mode, organization size, industry vertical, and geography. The predictive analytics market analysis, by component, is segmented into solutions and services. The predictive analytics market based on solution is segmented into risk analytics, marketing analytics, sales analytics, customer analytics, and others. The predictive analytics market analysis, by deployment mode, is bifurcated into cloud and on-premises. The predictive analytics market, by organization size, is segmented into large enterprises, and small and medium-sized enterprises (SMEs). The predictive analytics market, by vertical, is segmented into BFSI, manufacturing, retail and e-Commerce, IT and telecom, energy and utilities, government and defense, and others.
In terms of geography, the predictive analytics market is categorized into five regions—North America, Europe, Asia Pacific (APAC), the Middle East & Africa (MEA), and South America (SAM). The predictive analytics market in North America is sub segmented into the US, Canada, and Mexico. Predictive analytics software is increasingly being adopted in multiple organizations, and cloud-based predictive analytics software solutions are gaining significance in SMEs in North America. The highly competitive retail sector in this region is harnessing the potential of this technique to efficiently transform store layouts and enhance the customer experience in various businesses. In a few North American countries, retailers use smart carts with locator beacons, pin-sized cameras installed near shelves, or the store's Wi-Fi network to determine the footfall in the store, provide directions to a specific product section, and check key areas visited by customers. This process can also provide basic demographic data for parameters such as gender and age.
Directly Purchase Premium Copy of Predictive Analytics Market Growth Report (2022-2028) at: https://www.theinsightpartners.com/buy/TIPTE100000160/
Wal-Mart, Costco, Kroger, The Home Depot, and Target have their origin in North America. The amount of data generated by stores surges with the rise in sales. Without implementing analytics solutions, it becomes difficult to manage such vast data that include records, behaviors, etc., of all customers. Players such as Euclid Analytics offer spatial analytics platforms for retailers operating offline to help them track customer traffic, loyalty, and other indicators associated with customer visits. Euclid's solutions include preconfigured sensors connected to switches that are linked through a network. These sensors can detect customer calls from devices that have Wi-Fi turned on. Additionally, IBM's Sterling Store Engagement solution provides a real-time view of store inventory, and order data through an intuitive user interface that can be accessed by store owners from counters and mobile devices.
Heavy investments in healthcare sectors, advancements in technologies to help manage a large number of medical records, and the use of Big Data analytics to efficiently predict at-risk patients and create effective treatment plans are further contributing to the growth of the predictive analytics market in North America. Predictive analytics helps assess patterns in patients' medical records, thereby allowing healthcare professionals to develop effective treatment plans and improve outcomes. During the COVID-19 pandemic, healthcare predictive analytics solutions provided hospitals with insightful predictions of the number of hospitalizations for various treatments, which significantly helped them deal with the influx of a large number of patients. However, the high costs of installation and a shortage of skilled workers may limit the use of predictive analytics solutions in both the retail and healthcare sectors.
Browse Adjoining Reports:
Procurement Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Application (Supply Chain Analytics, Risk Analytics, Spend Analytics, Demand Forecasting, Contract Management, Vendor Management); Deployment (Cloud, On Premises); Industry Vertical (Retail and E Commerce, Manufacturing, Government and Defense, Healthcare and Life sciences, Telecom and IT, Energy and Utility, Banking Financial Services and Insurance) and Geography
Risk Analytics Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Component (Software, Services); Type (Strategic Risk, Financial Risk, Operational Risk, Others); Deployment Mode (Cloud, On-Premise); Industry Vertical (BFSI, IT and Telecom, Manufacturing, Retail and Consumer Goods, Transportation and Logistics, Government and Defense, Energy and Utilities, Healthcare and Life Sciences, Others) and Geography
Preventive Risk Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Component (Solution, Services); Deployment Type (On-Premise, Cloud); Organization Size (SMEs, Large Enterprises); Type (Strategic Risks, Financial Risks, Operational Risks, Compliance Risks); Industry (BFSI, Energy and Utilities, Government and Defense, Healthcare, Manufacturing, IT and Telecom, Retail, Others) and Geography
Business Analytics Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Application (Supply Chain Analytics, Spatial Analytics, Workforce Analytics, Marketing Analytics, Behavioral Analytics, Risk And Credit Analytics, and Pricing Analytics); Deployment (On-Premise, Cloud, and Hybrid); End-user (BFSI, IT & Telecom, Manufacturing, Retail, Energy & Power, and Healthcare)
Big Data Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Component (Software and Services), Analytics Tool (Dashboard and Data Visualization, Data Mining and Warehousing, Self-Service Tool, Reporting, and Others), Application (Customer Analytics, Supply Chain Analytics, Marketing Analytics, Pricing Analytics, Workforce Analytics, and Others), and End Use Industry (Pharmaceutical, Semiconductor, Battery Manufacturing, Electronics, and Others)
Data Analytics Outsourcing Market to 2027 - Global Analysis and Forecasts by Type (Descriptive Data Analytics, Predictive Data Analytics, and Prescriptive Data Analytics); Application (Sales Analytics, Marketing Analytics, Risk & Finance Analytics, and Supply Chain Analytics); and End-user (BFSI, Healthcare, Retail, Manufacturing, Telecom, and Media & Entertainment)
Sales Performance Management Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Solution (Incentive Compensation Management, Territory Management, Sales Monitoring and Planning, and Sales Analytics), Deployment Type (On-premise, Cloud), Services (Professional Services, Managed Services), End User (BFSI, Manufacturing, Energy and Utility, and Healthcare)
Customer Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Component (Solution, Services); Deployment Type (On-premises, Cloud); Enterprise Size (Small and Medium-sized Enterprises, Large Enterprises); End-user (BFSI, IT and Telecom, Media and Entertainment, Consumer Goods and Retail, Travel and Hospitality, Others) and Geography
Life Science Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Type (Predictive Analytics, Prescriptive Analytics, Descriptive Analytics); Component (Services, Software); End User (Pharmaceutical & Biotechnology Companies, Research Centers, Medical Device Companies, Third-Party Administrators)
About Us:
The Insight Partners is a one stop industry research provider of actionable intelligence. We help our clients in getting solutions to their research requirements through our syndicated and consulting research services. We specialize in industries such as Semiconductor and Electronics, Aerospace and Defense, Automotive and Transportation, Biotechnology, Healthcare IT, Manufacturing and Construction, Medical Device, Technology, Media and Telecommunications, Chemicals and Materials.
Contact Us:
If you have any queries about this report or if you would like further information, please contact us:
Contact Person: Sameer Joshi
E-mail: [email protected]
Phone: +1-646-491-9876
Press Release: https://www.theinsightpartners.com/pr/predictive-analytics-market
Logo: https://mma.prnewswire.com/media/1586348/The_Insight_Partners_Logo.jpg
A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google was also quick to outline an aggressive implementation plan for its cloud service, building on post-quantum work it started a decade ago.
It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in one. Google is also among those who contributed to SPHINCS+.
A long process that started in 2016 with 69 original candidates has ended with the selection of four algorithms that will become NIST standards, which will play a critical role in protecting encrypted data from the vast power of quantum computers.
NIST's four choices include CRYSTALS-Kyber, a public-key encapsulation mechanism (KEM) for general encryption, such as that used when establishing secure website connections. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.
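The KEM pattern itself is simple regardless of which algorithm sits underneath: the recipient publishes a public key, the sender uses it to encapsulate a fresh shared secret and sends back a ciphertext, and the recipient decapsulates the same secret with its private key. The sketch below shows that flow using a deliberately toy finite-field Diffie-Hellman construction; it has nothing to do with Kyber's lattice internals, is not post-quantum or secure, and exists only to illustrate the keygen/encapsulate/decapsulate interface. Real systems would use a vetted Kyber (ML-KEM) implementation.

import hashlib
import secrets

# Toy KEM built from finite-field Diffie-Hellman, for interface illustration only.
# NOT Kyber, NOT post-quantum, NOT secure: the parameters are far too small.
P = 2**127 - 1  # a Mersenne prime, chosen only so the demo runs instantly
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return pk, sk

def _derive(shared_point):
    return hashlib.sha256(shared_point.to_bytes(16, "big")).digest()

def encapsulate(pk):
    """Sender: produce a ciphertext plus a fresh shared secret from the public key."""
    eph = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, eph, P)
    shared_secret = _derive(pow(pk, eph, P))
    return ciphertext, shared_secret

def decapsulate(sk, ciphertext):
    """Recipient: recover the same shared secret using the private key."""
    return _derive(pow(ciphertext, sk, P))

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)
assert decapsulate(sk, ct) == ss_sender
print("both sides hold the same shared secret:", ss_sender.hex()[:16], "...")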
Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Lyubashevsky was predictably pleased by the algorithms selected, but he had only anticipated NIST would pick two digital signature candidates rather than three.
Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."
After NIST identified the algorithms, IBM moved forward by incorporating them into its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST-selected algorithms.
IBM had championed three of the algorithms that NIST selected, so it had already included them in the z16, which it unveiled before the NIST decision. Last week, IBM made it official that the z16 supports the selected algorithms.
Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTALS-Kyber and Dilithium, according to Dames.
"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."
A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classical computing systems and quantum computers.
"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."
Dames noted that clients might use Dilithium to generate digital signatures on documents. "Think about code-signing servers, things like that, or document-signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.
During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.
During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS had implemented an open-source hybrid post-quantum key exchange in s2n-tls, its implementation of the Transport Layer Security (TLS) protocol used across different AWS services. AWS has contributed the hybrid key-exchange design as a draft standard to the Internet Engineering Task Force (IETF).
Salter explained that the hybrid key exchange brings together its traditional key exchanges while enabling post-quantum security. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a public key exchange on top of it. And using both of those, you have traditional security, plus post quantum security."
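The combining step Salter describes can be sketched concisely: each key exchange yields its own shared secret, and the two secrets are fed together into a key-derivation function, so the final key stays safe as long as at least one of the exchanges remains unbroken. The sketch below uses an HKDF built from Python's standard hmac and hashlib modules on placeholder secrets; it illustrates the general hybrid pattern, not AWS's s2n-tls code.

import hashlib
import hmac
import os

def hkdf_sha256(key_material, info, length=32):
    """Minimal HKDF (RFC 5869) extract-and-expand using HMAC-SHA256."""
    prk = hmac.new(b"\x00" * 32, key_material, hashlib.sha256).digest()  # extract with a zero salt
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder shared secrets standing in for the outputs of a classical key exchange
# (e.g., ECDHE) and a post-quantum encapsulation (e.g., Kyber).
classical_shared_secret = os.urandom(32)
post_quantum_shared_secret = os.urandom(32)

# Hybrid combination: concatenate both secrets, then derive the session key.
# An attacker must break BOTH exchanges to recover the derived key.
session_key = hkdf_sha256(classical_shared_secret + post_quantum_shared_secret,
                          info=b"illustrative hybrid key schedule")
print("derived session key:", session_key.hex())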
Last week, Amazon announced that it had deployed hybrid post-quantum TLS with CRYSTALS-Kyber in s2n-tls for connections to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon extended that support to AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.
While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.
"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."
Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.
Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.
Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."
The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.
"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like things like that," IBM's Dames said.
Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our nice progress on this process, as well as participating in that. And we see that as a very good sign."
With tensions between the United States and China mounting as fallout from Nancy Pelosi’s provocative Taiwan visit, the technology war between them is also taking a new turn. Both houses of Congress have approved the CHIPS and Science Act, a $280 billion plan to boost chip manufacturing in the United States. Currently, 75% of chip manufacturing is in East Asia, centred around Taiwan, South Korea and China. The United States aims to re-shore the semiconductor industry. It hopes to revive the fortunes of its chip manufacturers, like the once-upon-a-time king of chip-making Intel, which is currently fighting not to become another has-been like IBM.
While the plan offers carrots to the semiconductor industry, which some call “corporate handouts”, it comes with a substantial stick. Any company availing of the $52.7 billion subsidy to locate chip manufacturing in the United States is prohibited from expanding or upgrading advanced chip-making facilities in China. As a result, companies like Samsung and SK Hynix, two major chip manufacturers who have made substantial investments in China, will have to choose between walking away from their investments or not availing of the subsidy.
And China was not sitting on its hands, waiting for the United States to ratchet up sanctions on its high-tech ambitions. Recognising the semiconductor industry, particularly advanced chip-making, as a key area of struggle, it has significantly advanced its capabilities. Shanghai-based chip manufacturer SMIC released 7nm chips in the market twelve months back. Currently, only Taiwan’s TSMC and South Korea’s Samsung have succeeded in manufacturing 7-nm chips. Dylan Patel, a leading tech analyst, has written, “China’s SMIC is shipping a foundry process with commercially available chips in the open market which are more advanced than any American or European company.... The most advanced American or European foundry-produced chips are based on GlobalFoundries 12nm.” SMIC is the fifth-largest chip manufacturer.
The astonishing progress in the computational power of electronic chips comes from the ability to pack more and more components into a silicon chip. This is rooted in Moore’s Law, which has operated for five decades. A measure of the increased density of chip components is the reduction in the size of the transistors created within the silicon. Therefore, 14nm, 7nm, and 5nm indicate the extent of miniaturisation of components. They are also a measure of the density of devices on the chip: as a rough idealisation, halving the feature size roughly quadruples the number of devices that fit in the same area, and the more components a chip packs, the more its computing power grows.
Lithography, a critical process in chip-making, creates patterns on silicon wafers using ultraviolet (UV) light. The thinner the line the lithographic machine creates, the more devices a silicon chip can pack.
I have written earlier about chip manufacturing and the importance of tools, specifically the Extreme UV or EUV machines from ASML, a requisite to move beyond 14nm chips. It is not that older Deep Ultraviolet Lithography or DUV machines cannot create higher densities. But the productivity of a DUV machine to produce 10nm or 7nm chips is lower than when EUV technology is used. Going to 5nm or 3nm is impossible without EUV machines.
ASML in the Netherlands is the only manufacturer of EUV machines. The light source in its EUV machines, which create the patterns on chips, is made by an ASML-owned company that is American. Technically, that company falls under United States regulations. Though ASML was quite unhappy to lose a part of its China market, it has accepted that it will not supply EUV machines to China. For now, it can continue supplying DUV machines to China, but this may also change in the future.
The United States had bought into the idea that without EUV machines, Chinese manufacturers would fail to produce chips below 14nm. The SMIC 7nm chip blows a big hole in this assumption.
DUV tools can pack a high density of devices onto a chip but require many more passes and more complex operations to get results. That is how even lithographic machines meant for 28nm chips could produce 14nm chips. For some time, Intel and others have been trying to use DUV technology to create 10nm or 7nm chips, but SMIC is the first to have used DUV machines to create 7nm chips successfully.
It does not put SMIC in the same bracket as Taiwan’s TSMC or South Korea’s Samsung, which use EUV technology. Still, it puts SMIC ahead of the rest of the pack. It allows China to compete in the market for products that carry 7nm chips despite hundreds of its leading companies, including Huawei and SMIC, coming under United States sanctions. The United States’ interpretation of its powers, under the US Foreign Direct Product Rule, is that if any company uses American technology, it must obey the American sanctions regime. That is why ASML machines have come under the United States sanctions regime, as have products manufactured using those machines. Because of this, TSMC and Samsung—which use ASML’s EUV machines—also cannot export advanced chips to entities in China.
There is criticism that SMIC’s 7nm chip is only a copy of the TSMC chip and therefore not a major advance. It is indeed a simple chip meant for cryptocurrency mining, but according to TechInsights, its importance is that it is a stepping stone for a “true 7nm process”.
On the flip side, China cannot go to 5nm or 3nm technology without EUV lithographic machines. Currently, it can import DUV machines from ASML. Two Japanese companies, Canon and Nikon, also manufacture DUV machines.
But where is China itself in manufacturing lithographic machines? It has had the indigenous capability to make chip-manufacturing machines for some time; Shanghai Micro Electronics Equipment or SMEE is the leading manufacturer. SMEE announced that it would release its first 28nm DUV machine—which can be used to create 14nm chips—in 2022. As SMIC has shown, such DUV machines can even be pushed to make 7nm chips. There is still no announcement from SMEE of a supply date for its DUV machine, but the machine would be crucial for China’s ability to set up large-scale local chip manufacturing units.
The semiconductor industry is at a crossroads. The global semiconductor supply chain is at risk of splitting into two competing blocs, one led by the United States and the other by China. The semiconductor industry in the United States has argued that in the case of a split, the country will lose its commanding lead in several technology areas within five or ten years. This is because a huge part of the industry’s profits, and therefore its R&D investments, is financed from Chinese sales. Losing that market would mean a temporary setback for China but a permanent loss of the lead position for the United States. This is why ASML CEO Peter Wennink has said that the export restrictions regime the United States is forcing on the industry will not work.
The bulk of the chip market is not for the most advanced chips. According to a Boston Consulting Group-Semiconductor Industry Association (BCG-SIA) report published in late 2020, chips below 10nm are only 2% of the market, though they are the most glamorous and figure in the latest laptops and mobile phones. The bulk of the market is for chips for which China already has the technology or can play catch-up, thanks to continuing investments in both R&D and in building entire supply chains, from chip fabrication units to DUV machines.
According to the BCG-SIA report, the smart way for the West to “combat” China would be to restrict sanctions to military technology and use the profits from the rest to finance the R&D expenditure of their companies. Without these profits, American companies will not be able to fund their future development.
But with “politics in command” in the United States and bipartisan war hysteria being whipped up, the United States seems to prefer the carrot-and-stick approach: carrots for investing in local chip-making and the stick for any company setting up production in China. If the Covid-19 pandemic damaged the semiconductor supply chain and led to a chip shortage in 2021, the next supply chain shock will come from the United States sanctions regime. The other weakness of the United States’ strategy is the belief that it can confine the trade war to sectors in which it has a technological edge. It leaves open the possibility of asymmetric responses from China. “May you live in interesting times” is supposedly a traditional Chinese curse. The world appears to be entering such a phase, starting with the US-China chip war.
(Bloomberg) -- IBM’s Red Hat named Matt Hicks, head of products and technologies, as its new leader, solidifying a bet that hybrid-cloud offerings will fuel the company’s growth.
Hicks takes over as the software unit’s chief executive officer and president from Paul Cormier, who will serve as chairman. “Paul and I have planned this for a while,” Hicks said Tuesday in an interview. “There’ll be a lot of similarities in what I did yesterday and what I’ll be doing tomorrow.”
International Business Machines Corp. acquired Red Hat for about $34 billion in 2019 as a central component of Chief Executive Arvind Krishna’s plan to steer the century-old company into the fast-growing cloud-computing market. As a division, Red Hat has seen steady revenue growth near 20%, far outpacing IBM as a whole.
IBM hopes to distinguish itself in the crowded cloud market by targeting a hybrid model, which helps clients store and analyze information across their own data centers, private cloud services and servers run by major public providers such as Amazon.com Inc. and Microsoft Corp. IBM has been a rare pocket of stability in the latest stock market meltdown. The shares have gained 4.1% this year, closing at $139.18 Tuesday in New York, compared with a 28% decline for the tech-heavy Nasdaq 100.
“Together, we can really lead a new era of hybrid computing,” said Hicks, who joined Red Hat in 2006. “Red Hat has the technology expertise and open source model -- IBM has the reach.”
Hicks said demand for hybrid cloud and software services should remain strong despite questions about the global economic outlook, touting recent deals with General Motors Co. and ABB Ltd. The telecommunications and automotive industries are two areas he is targeting for expansion because they require geographically distributed data.
©2022 Bloomberg L.P.
During the first six months of 2021, the FBI’s Internet Crime Complaint Center (IC3) reported more than 2,000 ransomware complaints, resulting in nearly $17M in losses — a 62% year-over-year increase. In addition to strong endpoint protection, the best way retailers can minimize the damage from ransomware (and other malware attacks) is by having a strong backup and disaster recovery (DR) plan in place to recover from attacks that successfully encrypt data.
Without a robust backup and DR strategy, the fallout from an attack could bankrupt a company or damage its brand reputation for years. Retailers are an attractive target for threat actors looking to steal credit card data or skim off purchases. A successful ransomware attack means lost sales while applications are down, costs for investigation, remediation and insurance, reputational damage and long-term customer loss. In a 2020 Arcserve report on ransomware and consumer loyalty, 59% of customers said they wouldn’t do business with an organization that had experienced a cybersecurity attack in the last six months.
Being able to restore systems from a recent backup spares organizations the hard choice between paying the ransom and losing their data. Not only will a strong business continuity and disaster recovery plan maintain business operations in the event of a breach, but it can also help save a brand’s reputation and keep customers happy and loyal.
Fortunately, retail is one of the more innovative industries when it comes to digital transformation and use of the cloud (due in part to steep competition from cloud-native ecommerce sites and startups). Today’s consumers expect services such as frictionless payments, mobile shopping and buy now, pay later, and retailers are listening: in a recent Comcast Business study surveying more than 200 retail IT executives, 43% reported digital business growth as their number one priority going forward.
Running backup and disaster recovery in the cloud is a great option for most retailers. It provides easy scalability and flexibility as workloads grow and can provide greater resilience by housing backups in a separate region from production workloads. It also makes it easier to comply with data sovereignty laws and regulations, and it allows businesses to embark on their digital transformation journey right away.
In addition to all these benefits, it decreases costs in most situations. Most of the time, users pay only for the minimum resources needed to replicate data to the cloud-based backup server, and they can “turn it up” when needed. It also removes the cost of buying hardware, various data center costs, and the salaries of the employees needed to manage and update a physical backup server.
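As a rough illustration of the pay-for-what-you-use point, the following sketch compares a hypothetical always-on on-premises backup server with a cloud replication target that runs small most of the month and is scaled up only for a DR test or an actual restore. Every figure here is invented for illustration and is not Skytap’s or any other vendor’s pricing:

# Hypothetical cost comparison; all prices are invented for illustration.
HOURS_PER_MONTH = 730

# On-premises: hardware amortisation + data-centre overhead + admin time ($/month, hypothetical)
onprem_monthly = 1200 + 400 + 800

# Cloud: a small instance replicates data most of the time and is "turned up" briefly.
small_rate, large_rate = 0.10, 1.50   # hypothetical $/hour
hours_scaled_up = 20                  # e.g. one DR test plus one restore
cloud_monthly = (HOURS_PER_MONTH - hours_scaled_up) * small_rate + hours_scaled_up * large_rate

print(f"On-prem: ${onprem_monthly:,.0f}/month  Cloud: ${cloud_monthly:,.0f}/month")

With these made-up numbers the cloud target costs a fraction of the on-premises server, but the real point is structural: the expensive capacity is paid for only during the hours it is actually needed.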
Even with retailers’ push for digital transformation, many backup and DR programs at enterprises more than 10 years old stay “stuck” on-premises because their ERP systems or custom, long-running business-critical applications (like point-of-sale systems) were written for on-premises server hardware. Software written for the IBM i or AIX operating systems on IBM Power servers cannot be migrated to the cloud without being rewritten (these servers use a different chipset and network architecture). Rewriting 15- or 20-year-old software that the business depends on, especially if it has been heavily customized, is more risk than most IT teams want to take on.
But recent developments remove this risk: specialized solutions now allow IBM Power applications to run ‘as-is’ in the public cloud. Retailers can set up backup servers in the cloud even for these “cloud stubborn” IBM Power applications and take advantage of the benefits detailed above.
All in all, the cloud provides a cost-effective, flexible, secure option for backups and disaster recovery and can help mitigate the impact of a ransomware attack and ensure business continuity. With retailers continuing to be prime targets for ransomware and the high stakes of a successful attack, I urge retail IT teams to seriously evaluate their backup and DR program and assess if the cloud is a good fit for them.
Matthew Romero is the Technical Product Evangelist at Skytap, the leading cloud service to run IBM Power and x86 workloads natively in the public cloud. Romero has extensive expertise supporting and creating technical content for cloud technologies, Microsoft Azure in particular. He spent nine years at 3Sharp and Indigo Slate managing corporate IT services and building technical demos, and before that spent four years at Microsoft as a program and lab manager in the Server and Tools Business unit.
A report by IBM states that 60 per cent of breached businesses raised product prices post-breach. Consumers are paying the price as data breach costs reach an all-time high.
IBM’s Cost of a Data Breach report revealed costlier and higher-impact data breaches than ever before, with the global average cost of a data breach reaching an all-time high of $4.35 million for surveyed organizations.
With breach costs increasing nearly 13 per cent over the last two years of the report, the findings suggest these incidents may also be contributing to rising costs of goods and services.
In India, the average cost of a data breach reached an all-time high of ₹176 million in 2022. This represents a 6.6 per cent increase from last year, when the average cost of a breach was ₹165 million, and a 25 per cent rise from ₹140 million in the 2020 report.
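The quoted growth rates follow directly from the rupee figures; the short check below reproduces them (the small differences from 6.6 and 25 per cent come from rounding in the published averages):

# Quick check of the quoted increases from the average breach costs (₹ million).
cost_2022, cost_2021, cost_2020 = 176, 165, 140
print(f"2021 -> 2022: {(cost_2022 / cost_2021 - 1) * 100:.1f}% increase")   # ~6.7%, reported as 6.6%
print(f"2020 -> 2022: {(cost_2022 / cost_2020 - 1) * 100:.1f}% increase")   # ~25.7%, reported as 25%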
Viswanath Ramaswamy, Vice President, Technology, IBM Technology Sales, IBM India and South Asia said, “It’s clear, businesses cannot evade cyberattacks. Keeping security capabilities flexible enough to match attacker agility will be the biggest challenge as the industry moves forward.”
To stay on top of growing cybersecurity challenges, investment in zero-trust deployments, mature security practices, and AI-based platforms can make all the difference when businesses are attacked, he added.
The relentlessness of cyberattacks is also shedding light on the “haunting effect” data breaches are having on businesses, with the report finding that 83 per cent of the studied organizations have experienced more than one data breach in their lifetime.
The after-effects of breaches also linger long after they occur: nearly 50 per cent of breach costs are incurred more than a year after the breach, the report said.
Published on July 27, 2022