IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
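The hub-and-spoke relationship described above can be sketched in a few lines. The class and application names here are hypothetical illustrations, not IBM APIs:

```python
# Minimal sketch of a hub orchestrating app rollouts to spoke locations.
# All class and application names are illustrative, not IBM's actual interfaces.

class Spoke:
    def __init__(self, name):
        self.name = name
        self.apps = {}          # app name -> version deployed at this location

class Hub:
    """Central control plane: declare an app once, run it at many spokes."""
    def __init__(self):
        self.spokes = {}

    def register(self, spoke):
        self.spokes[spoke.name] = spoke

    def deploy(self, app, version):
        # One declaration at the hub fans out to every connected spoke.
        for spoke in self.spokes.values():
            spoke.apps[app] = version

hub = Hub()
for site in ("factory-floor-1", "retail-branch-7"):
    hub.register(Spoke(site))
hub.deploy("visual-inspection", "1.2.0")
```

The point of the pattern is that application lifecycle decisions happen once, at the hub, while execution and data stay local to each spoke.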

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principle is scalability, repeatability, and full stack solution management that allows everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge, including industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at / near its collection point at the edge. In the case of cloud, data must be transferred from a local device and into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
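The latency argument is easy to put in numbers. A back-of-the-envelope sketch, with illustrative figures rather than measurements:

```python
# Back-of-the-envelope latency comparison of cloud vs. edge processing.
# All numbers are illustrative assumptions, not benchmarks.

def cloud_round_trip_ms(payload_mb, uplink_mbps, network_rtt_ms, compute_ms):
    transfer_ms = payload_mb * 8 / uplink_mbps * 1000   # push the data up
    return transfer_ms + network_rtt_ms + compute_ms     # ...and wait for the answer

def edge_ms(compute_ms):
    return compute_ms  # data never leaves the site, so only compute time remains

cloud = cloud_round_trip_ms(payload_mb=5, uplink_mbps=100,
                            network_rtt_ms=40, compute_ms=20)   # 460 ms
edge = edge_ms(compute_ms=20)                                   # 20 ms
```

Even with generous network assumptions, the transfer term dominates, which is why executing the transaction at the edge wins for latency-sensitive workloads.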

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud are then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered on the quick service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
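The monitoring-and-retraining loop can be reduced to a toy drift check: compare a live window of feature values against the training distribution and flag the model for retraining when it shifts. The statistic and threshold here are illustrative assumptions, not IBM's method:

```python
# Toy drift detector: flag a model for retraining when the live feature mean
# has moved several training standard deviations away. The z-score statistic
# and threshold are illustrative choices, not IBM's approach.
import statistics

def needs_retraining(train_sample, live_sample, z_threshold=3.0):
    mu = statistics.mean(train_sample)
    sigma = statistics.stdev(train_sample)
    live_mu = statistics.mean(live_sample)
    return abs(live_mu - mu) / sigma > z_threshold

train = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1]   # feature values seen at training time
stable = [10.0, 10.3, 9.9]                    # live window, same distribution
shifted = [14.0, 14.5, 13.8]                  # live window after environment change
```

A real deployment would watch many features and use richer distribution tests, but the shape of the loop is the same: monitor, detect, retrain.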

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s; the current, in-progress fourth revolution, Industry 4.0, promotes a digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speed up in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and the return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation means manually examining thousands of images, a time-consuming process that often results in labeling redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance.
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
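The data-summarization idea in the third bullet can be sketched with farthest-point sampling: greedily pick the most mutually distant feature vectors so near-duplicate images never reach an annotator. This is a toy stand-in, not IBM's actual pipeline:

```python
# Toy data summarization for labeling: farthest-point sampling picks a small,
# diverse subset of feature vectors so redundant near-duplicates are skipped.
# A stand-in illustration, not IBM's summarization method.

def summarize(features, k):
    """Return indices of k mutually distant samples (greedy farthest-point)."""
    chosen = [0]                                   # seed with the first sample
    while len(chosen) < k:
        def dist_to_chosen(i):
            # Squared distance from candidate i to its nearest chosen point.
            return min(sum((a - b) ** 2 for a, b in zip(features[i], features[c]))
                       for c in chosen)
        nxt = max((i for i in range(len(features)) if i not in chosen),
                  key=dist_to_chosen)
        chosen.append(nxt)
    return chosen

# Three near-duplicate points around (0, 0) and one outlier at (10, 10):
feats = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (10.0, 10.0)]
picked = summarize(feats, k=2)   # the outlier beats the duplicates
```

The effect is that the annotation budget goes toward genuinely novel samples rather than thousands of variations of the same scene.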

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
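A minimal federated-averaging sketch shows why no raw data needs to leave a spoke: only locally trained weights cross the wire, aggregated at the hub weighted by sample count. This follows the generic FedAvg recipe, not IBM Federated Learning's actual API:

```python
# Minimal federated averaging: each spoke trains locally and shares only its
# model weights, never raw data. Weighted by sample count per the standard
# FedAvg recipe; an illustration, not IBM Federated Learning's API.

def fed_avg(spoke_updates):
    """spoke_updates: list of (weights, n_samples) pairs from edge locations."""
    total = sum(n for _, n in spoke_updates)
    dim = len(spoke_updates[0][0])
    return [sum(w[i] * n for w, n in spoke_updates) / total for i in range(dim)]

# Two spokes report locally trained weights; only these vectors leave the site.
global_model = fed_avg([([1.0, 2.0], 100), ([3.0, 4.0], 300)])
# Weighted average: ((1*100 + 3*300)/400, (2*100 + 4*300)/400) = (2.5, 3.5)
```

The spoke holding more data pulls the global model harder, while the raw samples stay behind the compliance boundary.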

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory, and compliance as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such resource extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million.
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
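The compression arithmetic in point 3 is worth making concrete. The parameter counts and fp32 weight assumption below are illustrative:

```python
# Rough footprint arithmetic for pipeline compression: shrinking a model from
# hundreds of millions of parameters to a few million. Parameter counts and
# the fp32 (4 bytes/weight) assumption are illustrative.

def model_mb(params, bytes_per_param=4):      # fp32 weights
    return params * bytes_per_param / 1e6

cloud_model = model_mb(300_000_000)   # 1200 MB: fine in a datacenter
edge_model = model_mb(3_000_000)      # 12 MB: fits an industrial PC
reduction = cloud_model / edge_model  # 100x smaller footprint
```

Techniques such as pruning, quantization, and distillation deliver reductions of this order, which is what makes edge deployment of AI pipelines feasible on heterogeneous, resource-limited hardware.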

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for locations that still warrant a server but only need a single-node, non-clustered deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example NVIDIA Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in terms of how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
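A network slice can be thought of as a QoS contract over shared physical infrastructure. A minimal sketch follows, with slice and field names invented for illustration rather than taken from the 3GPP specifications:

```python
# Sketch of slices as independent logical networks over one physical network,
# each with its own service characteristics. Slice and field names are
# invented for illustration, not drawn from the 3GPP specifications.

SLICES = {
    "factory-control": {"max_latency_ms": 5, "min_bandwidth_mbps": 50},
    "video-analytics": {"max_latency_ms": 50, "min_bandwidth_mbps": 500},
}

def pick_slice(required_latency_ms, required_bandwidth_mbps):
    """Route an application onto the first slice meeting its QoS needs."""
    for name, qos in SLICES.items():
        if (qos["max_latency_ms"] <= required_latency_ms
                and qos["min_bandwidth_mbps"] >= required_bandwidth_mbps):
            return name
    return None  # no slice satisfies the request

# A mobile robot needs sub-10 ms control traffic at modest bandwidth:
slice_for_robot = pick_slice(required_latency_ms=10, required_bandwidth_mbps=20)
```

The AI/ML role IBM describes is in continuously tuning and scaling these per-slice guarantees as demand shifts, rather than in the static matching shown here.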

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data Analytics Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through things like the example shown above for software defined storage for federated namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That would add little value: the edge would simply function as a spoke in a hub-to-spoke model, operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Mon, 08 Aug 2022 03:51:00 -0500 Paul Smith-Goodson en text/html https://www.forbes.com/sites/moorinsights/2022/08/08/ibm-research-rolls-out-a-comprehensive-ai-and-ml-edge-research-strategy-anchored-by-enterprise-partnerships-and-use-cases/
CXL Borgs IBM’s OpenCAPI, Weaves Memory Fabrics With 3.0 Spec

System architects are often impatient about the future, especially when they can see something good coming down the pike. And thus, we can expect a certain amount of healthy and excited frustration when it comes to the Compute Express Link, or CXL, interconnect created by Intel, which with the absorption of Gen-Z technology from Hewlett Packard Enterprise and now OpenCAPI technology from IBM will become the standard for memory fabrics across compute engines for the foreseeable future.

The CXL 2.0 specification, which brings memory pooling across the PCI-Express 5.0 peripheral interconnect, will soon be available on CPU engines. Which is great. But all eyes are already turning to the just-released CXL 3.0 specification, which rides atop the PCI-Express 6.0 interconnect coming in 2023 with 2X the bandwidth, and people are already contemplating what another 2X of bandwidth might offer with CXL 4.0 atop PCI-Express 7.0 coming in 2025.

In a way, we expect CXL to follow the path blazed by IBM’s “Bluelink” OpenCAPI interconnect. Big Blue used the Bluelink interconnect in the “Cumulus” and “Nimbus” Power9 processors to provide NUMA interconnects across multiple processors, to run the NVLink protocol from Nvidia to provide memory coherence across the Power9 CPU and the Nvidia “Volta” V100 GPU accelerators, and to provide more generic memory coherent links to other kinds of accelerators through OpenCAPI ports. But the paths that OpenCAPI took and that CXL will take are not exactly the same, obviously. OpenCAPI is kaput and CXL is the standard for memory coherence in the datacenter.

IBM put faster OpenCAPI ports on the “Cirrus” Power10 processors, and they are used to provide those NUMA links as with the Power9 chips as well as a new OpenCAPI Memory Interface that uses the Bluelink SerDes as a memory controller, which runs a bit slower than a DDR4 or DDR5 controller but which takes up a lot less chip real estate and burns less power – and has the virtue of being exactly like the other I/O in the chip. In theory, IBM could have supported the CXL and NVLink protocols running atop its OpenCAPI interconnect on Power10, but there are some sour grapes there with Nvidia that we don’t understand – it seems foolish not to offer memory coherence with Nvidia’s current “Ampere” A100 and impending “Hopper” H100 GPUs. There may be an impedance mismatch between IBM and Nvidia in regards to signaling rates and lane counts between OpenCAPI and NVLink. IBM has PCI-Express 5.0 controllers on its Power10 chips – these are unique controllers and are not the Bluelink SerDes – and therefore could have supported the CXL coherence protocol, but as far as we know, Big Blue has chosen not to do that, either.

Given that we think CXL is the way a lot of GPU accelerators and their memories will link to CPUs in the future, this strategy by IBM seems odd. We are therefore nudging IBM to do a Power10+ processor with support for CXL 2.0 and NVLink 3.0 coherent links as well as with higher core counts and maybe higher clock speeds, perhaps in a year or a year and a half from now. There is no reason IBM cannot get some of the AI and HPC budget given the substantial advantages of its OpenCAPI memory, which is driving 818 GB/sec of memory bandwidth out of a dual chip module with 24 cores. We also expect that future datacenter GPU compute engines from Nvidia will support CXL in some fashion, but exactly how it will sit side-by-side with or merge with NVLink is unclear.

It is also unclear how the Gen-Z intellectual property donated to the CXL Consortium by HPE back in November 2021 and the OpenCAPI intellectual property donated to the organization steering CXL by IBM last week will be used to forge a CXL 4.0 standard, but these two system vendors are offering up what they have to help the CXL effort along. For which they should be commended. That said, we think both Gen-Z and OpenCAPI were way ahead of CXL and could have easily been tapped as in-node and inter-node memory and accelerator fabrics in their own right. HPE had a very elegant set of memory fabric switches and optical transceivers already designed, and IBM is the only CPU provider that offered CPU-GPU coherence across Nvidia GPUs and the ability to hook memory inside the box or across boxes over its OpenCAPI Memory Interface riding atop the Bluelink SerDes. (AMD is offering CPU-GPU coherence across its custom “Trento” Epyc 7003 series processors and its “Aldebaran” Instinct MI250X GPU accelerators in the “Frontier” exascale supercomputer at Oak Ridge National Laboratories.)

We are convinced that the Gen-Z and OpenCAPI technology will help make CXL better, and improve the kinds and varieties of coherence that are offered. CXL initially offered a kind of asymmetrical coherence, where CPUs can read and write to remote memories in accelerators as if they are local but using the PCI-Express bus instead of a proprietary NUMA interconnect – that is a vast oversimplification – rather than having full cache coherence across the CPUs and accelerators, which has a lot of overhead and which would have an impedance mismatch of its own because PCI-Express was, in days gone by, slower than a NUMA interconnect.

But as we have pointed out before, with PCI-Express doubling its speed every two years or so and latencies holding steady as that bandwidth jumps, we think there is a good chance that CXL will emerge as a kind of universal NUMA interconnect and memory controller, much as IBM has done with OpenCAPI, and Intel has suggested this for both CXL memory and CXL NUMA and Marvell certainly thinks that way about CXL memory as well. And that is why with CXL 3.0, the protocol is offering what is called “enhanced coherency,” which is another way of saying that it is precisely the kind of full coherency between devices that, for example, Nvidia offers across clusters of GPUs on an NVSwitch network or IBM offered between Power9 CPUs and Nvidia Volta GPUs. The kind of full coherency that Intel did not want to do in the beginning. What this means is that devices supporting the CXL.memory sub-protocol can access each other’s memory directly, not asymmetrically, across a CXL switch or a direct point-to-point network.

There is no reason why CXL cannot be the foundation of a memory area network as IBM has created with its “memory inception” implementation of OpenCAPI memory on the Power10 chip, either. As Intel and Marvell have shown in their conceptual presentations, the palette of chippery and interconnects is wide open with a standard like CXL, and improving it across many vectors is important. The industry let Intel win this one, and we will be better off in the long run because of it. Intel has largely let go of CXL and now all kinds of outside innovation can be brought to bear.

Ditto for the Universal Chiplet Interconnect Express being promoted by Intel as a standard for linking chiplets inside of compute engine sockets. Basically, we will live in a world where PCI-Express running UCI-Express connects chiplets inside of a socket, PCI-Express running CXL connects sockets and chips within a node (which is becoming increasingly ephemeral), and PCI-Express switch fabrics spanning a few racks or maybe even a row someday use CXL to link CPUs, accelerators, memory, and flash all together into disaggregated and composable virtual hardware servers.

For now, what is on the immediate horizon is CXL 3.0 running atop the PCI-Express 6.0 transport, and here is how CXL 3.0 is stacking up against the prior CXL 1.0/1.1 release and the current CXL 2.0 release on top of PCI-Express 5.0 transports:

When the CXL protocol is running in I/O mode – what is called CXL.io – it is essentially just the same as the PCI-Express peripheral protocol for I/O devices. The CXL.cache and CXL.memory protocols add caching and memory addressing atop the PCI-Express transport, and run at about half the latency of the PCI-Express protocol. To put some numbers on this, as we did back in September 2021 when talking to Intel, the CXL protocol specification requires that a snoop response on a snoop command when a cache line is missed has to be under 50 nanoseconds, pin to pin, and for memory reads, pin to pin, latency has to be under 80 nanoseconds. By contrast, a local DDR4 memory access on a CPU socket is around 80 nanoseconds, and a NUMA access to far memory in an adjacent CPU socket is around 135 nanoseconds in a typical X86 server.
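A quick sketch (pure Python; the numbers are the ones cited in the paragraph above) makes the comparison concrete:

```python
# Latency figures cited above, in nanoseconds (CXL spec bounds, pin to pin,
# versus typical DRAM access times on an X86 server).
LATENCIES_NS = {
    "cxl_snoop_response_max": 50,   # CXL bound for a snoop response on a miss
    "cxl_memory_read_max": 80,      # CXL bound for a memory read
    "local_ddr4_read": 80,          # typical local DDR4 access on a socket
    "numa_remote_read": 135,        # typical far-memory access, adjacent socket
}

def relative_overhead(access: str, baseline: str = "local_ddr4_read") -> float:
    """Latency of an access type expressed as a multiple of local DRAM."""
    return LATENCIES_NS[access] / LATENCIES_NS[baseline]

# A worst-case CXL memory read costs about the same as a local DDR4 access,
# and comes in well under a conventional NUMA hop.
print(relative_overhead("cxl_memory_read_max"))  # 1.0
print(relative_overhead("numa_remote_read"))     # 1.6875
```

That 1.0 ratio is the whole argument for CXL as a memory transport: remote memory at roughly local-DRAM cost.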

With the CXL 3.0 protocol running atop the PCI-Express 6.0 transport, the bandwidth is being doubled across all three protocols without any increase in latency. That bandwidth increase, to 256 GB/sec across x16 lanes (including both directions) is thanks to the 256 byte flow control unit, or flit, fixed packet size (which is larger than the 64 byte packet used in the PCI-Express 5.0 transport) and the PAM-4 pulsed amplitude modulation encoding that doubles up the bits per signal on the PCI-Express transport. The PCI-Express protocol uses a combination of cyclic redundancy check (CRC) and three-way forward error correction (FEC) algorithms to protect the data being transported across the wire, which is a better method than was employed with prior PCI-Express protocols and hence why PCI-Express 6.0 and therefore CXL 3.0 will have much better performance for memory devices.
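The 256 GB/sec figure can be reproduced with back-of-the-envelope arithmetic – PCI-Express 6.0 signals at 64 GT/s per lane per direction, with the PAM-4 doubling already folded into that rate (a sketch; protocol overheads are ignored):

```python
def pcie_x16_bandwidth_gbytes(gt_per_sec: float, lanes: int = 16,
                              both_directions: bool = True) -> float:
    """Raw bandwidth in GB/sec for a PCI-Express link.

    Flit-based encoding in PCI-Express 6.0 carries roughly one bit per
    transfer, so we convert GT/s -> Gb/s -> GB/s directly.
    """
    gbits = gt_per_sec * lanes * (2 if both_directions else 1)
    return gbits / 8

print(pcie_x16_bandwidth_gbytes(64))  # 256.0 GB/sec, the CXL 3.0 figure above
print(pcie_x16_bandwidth_gbytes(32))  # 128.0 GB/sec for PCI-Express 5.0
```

Halving the transfer rate recovers the PCI-Express 5.0 number, which is the doubling the paragraph above describes.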

The CXL 3.0 protocol does have a low latency CRC algorithm that breaks the 256 B flits into 128 B half flits and does its CRC check and transmissions on these subflits, which can reduce latencies in transmissions by somewhere between 2 nanoseconds and 5 nanoseconds.

The neat new thing coming with CXL 3.0 is memory sharing, and this is distinct from the memory pooling that was available with CXL 2.0. Here is what memory pooling looks like:

With memory pooling, you put a glorified PCI-Express switch that speaks CXL between hosts with CPUs and enclosures with accelerators with their own memories or just blocks of raw memory – with or without a fabric manager – and you allocate the accelerators (and their memory) or the memory capacity to the hosts as needed. As the diagram above shows on the right, you can do a point to point interconnect between all hosts and all accelerators or memory devices without a switch, too, if you want to hard code a PCI-Express topology for them to link on.
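A toy model of what a fabric manager does with a pool – assigning regions of device memory to one host at a time – might look like this (hypothetical sketch; the class and method names are invented, not any real CXL fabric-manager API):

```python
class MemoryPool:
    """Toy model of CXL 2.0-style pooling: each chunk of device memory is
    allocated to at most one host at a time (pooling, not sharing)."""

    def __init__(self, total_gb: int):
        self.total_gb = total_gb
        self.allocations = {}  # host name -> GB allocated

    def allocate(self, host: str, gb: int) -> bool:
        """Give a host some of the pool, if enough is free."""
        free = self.total_gb - sum(self.allocations.values())
        if gb > free:
            return False  # pool exhausted until someone releases
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

    def release(self, host: str) -> None:
        """Return a host's memory to the pool for reallocation."""
        self.allocations.pop(host, None)

pool = MemoryPool(total_gb=1024)
assert pool.allocate("host-a", 512)
assert pool.allocate("host-b", 512)
assert not pool.allocate("host-c", 1)   # nothing left
pool.release("host-a")
assert pool.allocate("host-c", 256)     # freed capacity moves to a new host
```

The point of the model: capacity moves between hosts over time, but no byte is owned by two hosts at once – that is what CXL 3.0 sharing adds.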

With CXL 3.0 memory sharing, memory out on a device can be literally shared simultaneously with multiple hosts at the same time. This chart below shows the combination of device shared memory and coherent copies of shared regions enabled by CXL 3.0:

System and cluster designers will be able to mix and match memory pooling and memory sharing techniques with CXL 3.0. CXL 3.0 will allow for multiple layers of switches, too, which was not possible with CXL 2.0, and therefore you can imagine PCI-Express networks with various topologies and layers being able to lash together all kinds of devices and memories into switch fabrics. Spine/leaf networks common among hyperscalers and cloud builders are possible, including devices that just share their cache, devices that just share their memory, and devices that share their cache and memory. (That is Type 1, Type 3, and Type 2 in the CXL device nomenclature.)
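The device taxonomy referenced above boils down to which CXL sub-protocols a device speaks; a compact summary of the three types:

```python
# CXL device types and the sub-protocols each one runs:
#   Type 1: accelerators with a cache but no host-visible memory
#   Type 2: accelerators with both a cache and memory
#   Type 3: memory expanders/pools with no cache of their own
DEVICE_TYPES = {
    1: {"CXL.io", "CXL.cache"},
    2: {"CXL.io", "CXL.cache", "CXL.memory"},
    3: {"CXL.io", "CXL.memory"},
}

def shares_cache(device_type: int) -> bool:
    return "CXL.cache" in DEVICE_TYPES[device_type]

def shares_memory(device_type: int) -> bool:
    return "CXL.memory" in DEVICE_TYPES[device_type]

# Cache only, memory only, or both -- matching Type 1, Type 3, and Type 2.
assert shares_cache(1) and not shares_memory(1)
assert shares_memory(3) and not shares_cache(3)
assert shares_cache(2) and shares_memory(2)
```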

The CXL fabric is what will be truly useful and what is enabled in the 3.0 specification. With a fabric, you get a software-defined, dynamic network of CXL-enabled devices instead of a static network set up with a specific topology linking specific CXL devices. Here is a simple example of a non-tree topology implemented in a fabric that was not possible with CXL 2.0:

And here is the neat bit. The CXL 3.0 fabric can stretch to 4,096 CXL devices. Now, ask yourself this: How many of the big iron NUMA systems and HPC or AI supercomputers in the world have more than 4,096 devices? Not as many as you think. And so, as we have been saying for years now, for a certain class of clustered systems, whether the nodes are loosely or tightly coupled at their memories, a PCI-Express fabric running CXL is just about all they are going to need for networking. Ethernet or InfiniBand will just be used to talk to the outside world. We would expect to see flash devices front-ended by DRAM as a fast cache as the hardware under storage clusters, too. (Optane 3D XPoint persistent memory is no longer an option. But there is always hope for some form of PCM memory or another form of ReRAM. Don’t hold your breath, though.)

As we sit here mulling all of this over, we can’t help thinking about how memory sharing might simplify the programming of HPC and AI applications, especially if there is enough compute in the shared memory to do some collective operations on data as it is processed. There are all kinds of interesting possibilities. . . .

Anyway, making CXL fabrics is going to be interesting, and it will be the heart of many system architectures. The trick will be sharing the memory to drive down the effective cost of DRAM – research by Microsoft Azure showed that on its cloud, memory capacity utilization was only an average of about 40 percent, and half of the VMs running never touched more than half of the memory allocated to their hypervisors from the underlying hardware – to pay for the flexibility that comes through CXL switching and composability for devices with memory and devices as memory.
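The arithmetic behind that utilization figure is simple but sobering (a sketch using the numbers cited above; actual savings depend on pooling granularity and how much stranded memory a fabric can actually reclaim):

```python
def stranded_memory_gb(provisioned_gb: float, utilization: float) -> float:
    """Memory paid for but sitting idle, given average utilization."""
    return provisioned_gb * (1.0 - utilization)

# Per the Microsoft Azure research cited above: ~40 percent average
# memory capacity utilization. For a hypothetical 100 TB of rack DRAM:
idle = stranded_memory_gb(100_000, 0.40)
print(idle)  # ~60,000 GB idle on average -- the pool CXL sharing can tap
```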

What we want, and what we have always wanted, was a memory-centric systems architecture that allows all kinds of compute engines to share data in memory as it is being manipulated and to move that data as little as possible. This is the road to higher energy efficiency in systems, at least in theory. Within a few years, we will get to test this all out in practice, and it is legitimately exciting. All we need now is PCI-Express 7.0 two years earlier and we can have some real fun.

Tue, 09 Aug 2022 06:18:00 -0500 Timothy Prickett Morgan en-US text/html https://www.nextplatform.com/2022/08/09/cxl-borgs-ibms-opencapi-weaves-memory-fabrics-with-3-0-spec/
IBM extends Power10 server lineup for enterprise use cases



IBM is looking to grow its enterprise server business with the expansion of its Power10 portfolio announced today.

IBM Power is a RISC (reduced instruction set computer) based chip architecture that is competitive with other chip architectures including x86 from Intel and AMD. IBM’s Power hardware has been used for decades for running IBM’s AIX Unix operating system, as well as the IBM i operating system that was once known as the AS/400. In more recent years, Power has increasingly been used for Linux and specifically in support of Red Hat and its OpenShift Kubernetes platform that enables organizations to run containers and microservices.

The IBM Power10 processor was announced in August 2020, with the first server platform, the E1080 server, coming a year later in September 2021. Now IBM is expanding its Power10 lineup with four new systems, including the Power S1014, S1024, S1022 and E1050, which are being positioned by IBM to help solve enterprise use cases, including the growing need for machine learning (ML) and artificial intelligence (AI).

What runs on IBM Power servers?

Usage of IBM’s Power servers could well be shifting into territory that Intel today still dominates.

Steve Sibley, vp, IBM Power product management, told VentureBeat that approximately 60% of Power workloads are currently running AIX Unix. The IBM i operating system is on approximately 20% of workloads. Linux makes up the remaining 20% and is on a growth trajectory.

IBM owns Red Hat, which has its namesake Linux operating system supported on Power, alongside the OpenShift platform. Sibley noted that IBM has optimized its new Power10 system for Red Hat OpenShift.

“We’ve been able to demonstrate that you can deploy OpenShift on Power at less than half the cost of an Intel stack with OpenShift because of IBM’s container density and throughput that we have within the system,” Sibley said.

A look inside IBM’s four new Power servers

Across the new servers, the ability to access more memory at greater speed than previous generations of Power servers is a key feature. The improved memory is enabled by support of the Open Memory Interface (OMI) specification that IBM helped to develop, and is part of the OpenCAPI Consortium.

“We have Open Memory Interface technology that provides increased bandwidth but also reliability for memory,” Sibley said. “Memory is one of the common areas of failure in a system, particularly when you have lots of it.”

The new servers announced by IBM all use technology from the open-source OpenBMC project that IBM helps to lead. OpenBMC provides secure code for managing the baseboard of the server in an optimized approach for scalability and performance.

E1050

Among the new servers announced today by IBM is the E1050, which is a 4RU (4 rack unit) sized server, with 4 CPU sockets, that can scale up to 16TB of memory, helping to serve large data- and memory-intensive workloads.

S1014 and S1024

The S1014 and the S1024 are also both 4RU systems, with the S1014 providing a single CPU socket and the S1024 integrating a dual-socket design. The S1014 can scale up to 2TB of memory, while the S1024 supports up to 8TB.

S1022

Rounding out the new services is the S1022, which is a 1RU server that IBM is positioning as an ideal platform for OpenShift container-based workloads.

Bringing more Power to AI and ML

AI and ML workloads are a particularly good use case for all the Power10 systems, thanks to optimizations that IBM has built into the chip architecture.

Sibley explained that all Power10 chips benefit from IBM’s Matrix Math Acceleration (MMA) capability. The enterprise use cases that Power10-based servers can help to support include organizations that are looking to build out risk analytics, fraud detection and supply chain forecasting AI models, among others.
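At its core, what a matrix math unit accelerates is the small-tile multiply-accumulate at the heart of inference workloads. An illustrative plain-Python sketch of the operation (the real Power10 MMA tile shapes and instruction semantics differ; this only shows the computation being accelerated):

```python
def matmul_tile(a, b):
    """Multiply-accumulate two small tiles: C = A @ B.

    a is m x k, b is k x n, both as nested lists. A hardware matrix unit
    performs this kind of tile update in a handful of cycles, rather than
    the triple loop a scalar core would run.
    """
    m, k, n = len(a), len(b), len(b[0])
    c = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for p in range(k):
            for j in range(n):
                c[i][j] += a[i][p] * b[p][j]
    return c

# A 2x2 tile: [[1,2],[3,4]] @ [[5,6],[7,8]] = [[19,22],[43,50]]
print(matmul_tile([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

Risk models and inference pipelines spend most of their cycles in exactly this kernel, which is why putting it in silicon pays off.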

IBM’s Power10 systems support and have been optimized for multiple popular open-source machine learning frameworks including PyTorch and TensorFlow.

“The way we see AI emerging is that a vast majority of AI in the future will be done on the CPU from an inference standpoint,” Sibley said.


Mon, 11 Jul 2022 09:01:00 -0500 Sean Michael Kerner en-US text/html https://venturebeat.com/programming-development/ibm-extends-power10-server-lineup-for-enterprise-use-cases/
Can IBM Get Back Into HPC With Power10?

The “Cirrus” Power10 processor from IBM, which we codenamed for Big Blue because it refused to do it publicly and because we understand the value of a synonym here at The Next Platform, shipped last September in the “Denali” Power E1080 big iron NUMA machine. And today, the rest of the Power10-based Power Systems product line is being fleshed out with the launch of entry and midrange machines – many of which are suitable for supporting HPC and AI workloads as well as in-memory databases and other workloads in large enterprises.

The question is, will IBM care about traditional HPC simulation and modeling ever again with the same vigor that it has in past decades? And can Power10 help reinvigorate the HPC and AI business at IBM? We are not sure about the answer to the first question, and got the distinct impression from Ken King, the general manager of the Power Systems business, that HPC proper was not a high priority when we spoke to him back in February about this. But we continue to believe that the Power10 platform has some attributes that make it appealing for data analytics and other workloads that need to be either scaled out across small machines or scaled up across big ones.

Today, we are just going to talk about the five entry Power10 machines, which have one or two processor sockets in a standard 2U or 4U form factor, and then we will follow up with an analysis of the Power E1050, which is a four socket machine that fits into a 4U form factor. And the question we wanted to answer was simple: Can a Power10 processor hold its own against X86 server chips from Intel and AMD when it comes to basic CPU-only floating point computing?

This is an important question because there are plenty of workloads that have not been accelerated by GPUs in the HPC arena, and for these workloads, the Power10 architecture could prove to be very interesting if IBM thought outside of the box a little. This is particularly true when considering the feature called memory inception, which is in effect the ability to build a memory area network across clusters of machines and which we have discussed a little in the past.

We went deep into the architecture of the Power10 chip two years ago when it was presented at the Hot Chip conference, and we are not going to go over that ground again here. Suffice it to say that this chip can hold its own against Intel’s current “Ice Lake” Xeon SPs, launched in April 2021, and AMD’s current “Milan” Epyc 7003s, launched in March 2021. And this makes sense because the original plan was to have a Power10 chip in the field with 24 fat cores and 48 skinny ones, using dual-chip modules, using 10 nanometer processes from IBM’s former foundry partner, Globalfoundries, sometime in 2021, three years after the Power9 chip launched in 2018. Globalfoundries did not get the 10 nanometer processes working, and it botched a jump to 7 nanometers and spiked it, and that left IBM jumping to Samsung to be its first server chip partner for its foundry using its 7 nanometer processes. IBM took the opportunity of the Power10 delay to reimplement the Power ISA in a new Power10 core and then added some matrix math overlays to its vector units to make it a good AI inference engine.

IBM also created a beefier core and dropped the core count back to 16 on a die in SMT8 mode, which is an implementation of simultaneous multithreading that has up to eight processing threads per core, and also was thinking about an SMT4 design which would double the core count to 32 per chip. But we have not seen that today, and with IBM not chasing Google and other hyperscalers with Power10, we may never see it. But it was in the roadmaps way back when.

What IBM has done in the entry machines is put two Power10 chips inside of a single socket to increase the core count, but it is looking like the yields on the chips are not as high as IBM might have wanted. When IBM first started talking about the Power10 chip, it said it would have 15 or 30 cores, which was a strange number, and that is because it kept one SMT8 core or two SMT4 cores in reserve as a hedge against bad yields. In the products that IBM is rolling out today, mostly for its existing AIX Unix and IBM i (formerly OS/400) enterprise accounts, the core counts on the dies are much lower, with 4, 8, 10, or 12 of the 16 cores active. The Power10 cores have roughly 70 percent more performance than the Power9 cores in these entry machines, and that is a lot of performance for many enterprise customers – enough to get through a few years of growth on their workloads. IBM is charging a bit more for the Power10 machines compared to the Power9 machines, according to Steve Sibley, vice president of Power product management at IBM, but the bang for the buck is definitely improving across the generations. At the very low end with the Power S1014 machine that is aimed at small and midrange businesses running ERP workloads on the IBM i software stack, that improvement is in the range of 40 percent, give or take, and the price increase is somewhere between 20 percent and 25 percent depending on the configuration.

Pricing is not yet available on any of these entry Power10 machines, which ship on July 22. When we find out more, we will do more analysis of the price/performance.

There are six new entry Power10 machines, the feeds and speeds of which are shown below:

For the HPC crowd, the Power L1022 and the Power L1024 are probably the most interesting ones because they are designed to only run Linux and, if they are like prior L classified machines in the Power8 and Power9 families, will have lower pricing for CPU, memory, and storage, allowing them to better compete against X86 systems running Linux in cluster environments. This will be particularly important as IBM pushes Red Hat OpenShift as a container platform for not only enterprise workloads but also for HPC and data analytic workloads that are also being containerized these days.

One thing to note about these machines: IBM is using its OpenCAPI Memory Interface, which as we explained in the past is using the “Bluelink” I/O interconnect for NUMA links and accelerator attachment as a memory controller. IBM is now calling this the Open Memory Interface, and these systems have twice as many memory channels as a typical X86 server chip and therefore have a lot more aggregate bandwidth coming off the sockets. The OMI memory makes use of a Differential DIMM form factor that employs DDR4 memory running at 3.2 GHz, and it will be no big deal for IBM to swap in DDR5 memory chips into its DDIMMs when they are out and the price is not crazy. IBM is offering memory features with 32 GB, 64 GB, and 128 GB capacities today in these machines and will offer 256 GB DDIMMs on November 14, which is how you get the maximum capacities shown in the table above. The important thing for HPC customers is that IBM is delivering 409 GB/sec of memory bandwidth per socket and 2 TB of memory per socket.
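The 409 GB/sec figure is consistent with simple channel arithmetic (a sketch; the 16-channel count and the per-channel rate are assumptions based on DDR4-3200-class OMI DDIMMs, not numbers from the article):

```python
def omi_socket_bandwidth_gbytes(channels: int, gbytes_per_channel: float) -> float:
    """Aggregate memory bandwidth per socket across OMI channels."""
    return channels * gbytes_per_channel

# Assumed: 16 OMI channels per socket at ~25.6 GB/sec each (DDR4-3200 class)
print(omi_socket_bandwidth_gbytes(16, 25.6))  # 409.6, matching "409 GB/sec"
```

Twice as many channels as a typical X86 socket is where the aggregate bandwidth edge comes from, not faster individual channels.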

By the way, the only storage in these machines is NVM-Express flash drives. No disk, no plain vanilla flash SSDs. The machines also support a mix of PCI-Express 4.0 and PCI-Express 5.0 slots, and do not yet support the CXL protocol created by Intel and backed by IBM even though it loves its own Bluelink OpenCAPI interconnect for linking memory and accelerators to the Power compute engines.

Here are the different processor SKUs offered in the Power10 entry machines:

As far as we are concerned, the 24-core Power10 DCM feature EPGK processor in the Power L1024 is the only interesting one for HPC work, aside from what a theoretical 32-core Power10 DCM might be able to do. And just for fun, we sat down and figured out the peak theoretical 64-bit floating point performance, at all-core base and all-core turbo clock speeds, for these two Power10 chips and their rivals in the Intel and AMD CPU lineups. Take a gander at this:

We have no idea what the pricing will be for a processor module in these entry Power10 machines, so we took a stab at what the 24-core variant might cost to be competitive with the X86 alternatives based solely on FP64 throughput and then reckoned the performance of what a full-on 32-core Power10 DCM might be.

The answer is that IBM can absolutely compete, flops to flops, with the best Intel and AMD have right now. And it has a very good matrix math engine as well, which these chips do not.
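The peak numbers in such a table come from the standard formula – cores × clock × FLOPs per core per cycle (a sketch; the 16 FLOPs/cycle value assumes two 256-bit-wide FMA pipes per core and is our assumption for illustration, not a figure from the table):

```python
def peak_fp64_gflops(cores: int, clock_ghz: float, flops_per_cycle: int = 16) -> float:
    """Peak theoretical FP64 throughput for a CPU, in GFLOPS.

    flops_per_cycle counts FP64 operations per core per cycle; 16 assumes
    two 256-bit FMA units (4 lanes x 2 ops x 2 pipes).
    """
    return cores * clock_ghz * flops_per_cycle

# Example: a hypothetical 64-core chip at a 2.45 GHz all-core clock
print(peak_fp64_gflops(64, 2.45))  # 2508.8 GFLOPS, roughly 2.5 TFLOPS
```

Plugging in base versus turbo clocks gives the two columns of any such comparison; the interesting part is how few cores Power10 needs to match an X86 part with wider SIMD but lower per-core throughput.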

The problem is, Intel has “Sapphire Rapids” Xeon SPs in the works, which we think will have four 18-core chiplets for a total of 72 cores, but only 56 of them will be exposed because of yield issues that Intel has with its SuperFIN 10 nanometer (Intel 7) process. And AMD has 96-core “Genoa” Epyc 7004s in the works, too. Power11 is several years away, so if IBM wants to play in HPC, Samsung has to get the yields up on the Power10 chips so IBM can sell more cores in a box. Big Blue already has the memory capacity and memory bandwidth advantage. We will see if its L-class Power10 systems can compete on price and performance once we find out more. And we will also explore how memory clustering might make for a very interesting compute platform based on a mix of fat NUMA and memory-less skinny nodes. We have some ideas about how this might play out.

Mon, 11 Jul 2022 12:01:00 -0500 Timothy Prickett Morgan en-US text/html https://www.nextplatform.com/2022/07/12/can-ibm-get-back-into-hpc-with-power10/
IBM Expands Power10 Server Family to Help Clients Respond Faster to Rapidly Changing Business Demands

New Power10 scale-out and midrange models extend IBM's capabilities to deliver flexible and secured infrastructure for hybrid cloud environments

ARMONK, N.Y., July 12, 2022 /PRNewswire/ -- IBM (NYSE: IBM) today announced a significant expansion of its Power10 server line with the introduction of mid-range and scale-out systems to modernize, protect and automate business applications and IT operations. The new Power10 servers combine performance, scalability, and flexibility with new pay-as-you-go consumption offerings for clients looking to deploy new services quickly across multiple environments.



Digital transformation is driving organizations to modernize both their applications and IT infrastructures. IBM Power systems are purpose-built for today's demanding and dynamic business environments, and these new systems are optimized to run essential workloads such as databases and core business applications, as well as maximize the efficiency of containerized applications. An ecosystem of solutions with Red Hat OpenShift also enables IBM to collaborate with clients, connecting critical workloads to new, cloud-native services designed to maximize the value of their existing infrastructure investments.

The new servers join the popular Power10 E1080 server introduced in September 2021 to deliver a secured, resilient hybrid cloud experience that can be managed with other x86 and multi-cloud management software across clients' IT infrastructure. This expansion of the IBM Power10 family with the new midrange and scale-out servers brings high-end server capabilities throughout the product line. Not only do the new systems support critical security features such as transparent memory encryption and advanced processor/system isolation, but also leverage the OpenBMC project from the Linux Foundation for high levels of security for the new scale-out servers.

Highlights of the announcements include:

  • New systems: The expanded IBM Power10 portfolio, built around the next-generation IBM Power10 processor with 2x more cores and more than 2x memory bandwidth than previous Power generations, now includes the Power10 Midrange E1050, delivering record-setting 4-socket compute, Java, and ERP performance capabilities. New scale-out servers include the entry-level Power S1014, as well as S1022, and S1024 options, bringing enterprise capabilities to SMBs and remote-office/branch office environments, such as Capacity Upgrade on Demand (CuOD).

  • Cloud on premises with new flexible consumption choices: IBM has recently announced new flexible consumption offerings with pay-as-you-go options and by-the-minute metering for IBM Power Private Cloud, bringing more opportunities to help lower the cost of running OpenShift solutions on Power when compared against alternative platforms. These new consumption models build on options already available with IBM Power Virtual Server to enable greater flexibility in clients' hybrid journeys. Additionally, the highly anticipated IBM i subscription delivers a comprehensive platform solution with the hardware, software and support/services included in the subscription service.

  • Business transformation with SAP®: IBM continues its innovations for SAP solutions. The new midrange E1050 delivers scale (up to 16 TB) and performance for a 4-socket system for clients who run BREAKTHROUGH with IBM for RISE with SAP. In addition, an expansion of the premium provider option is now available to provide more flexibility and computing power with an additional choice to run workloads on IBM Power on Red Hat Enterprise Linux on IBM Cloud.

"Today's highly dynamic environment has created volatility, from materials to people and skills, all of which impact short-term operations and long-term sustainability of the business," said Steve Sibley, Vice President, IBM Power Product Management. "The right IT investments are critical to business and operational resilience. Our new Power10 models offer clients a variety of flexible hybrid cloud choices with the agility and automation to best fit their needs, without sacrificing performance, security or resilience."

The expansion of the IBM Power10 family has been engineered to establish one of the industry's most flexible and broadest range of servers for data-intensive workloads such as SAP S/4HANA – from on-premises workloads to hybrid cloud. IBM now offers more ways to implement dynamic capacity – with metering across all operating environments including IBM i, AIX, Linux and OpenShift supporting modern and traditional applications on the same platforms – as well as integrated infrastructure automation software for improved visibility and management.

The new systems with IBM Power Virtual Server also help clients operate a secured hybrid cloud experience that delivers high performance and architectural consistency across their IT infrastructure. The systems are uniquely designed to protect sensitive data from core to cloud, and enable virtual machines and containerized workloads to run simultaneously on the same systems. Critical business workloads that have traditionally needed to reside on-premises can now be moved into the cloud as workloads and needs demand. This flexibility can help clients mitigate the risk and time associated with rewriting applications for a different platform.

"As organizations around the world continue to adapt to unpredictable changes in consumer behaviors and needs, they need a platform that can deliver their applications and insights securely where and when they need them," said Peter Rutten, IDC Worldwide Infrastructure Research Vice President. "IBM Power continues its laser focus on helping clients respond faster to dynamically changing environments and business demands, while protecting information security and distilling new insights from data, all with high reliability and availability."

Ecosystem of ISVs and Channel Partners Enhance Capabilities for IBM Power10

Critical in the launch of the expanded Power10 family is a robust ecosystem of ISVs, Business Partners, and lifecycle services. Ecosystem partners such as SVA and Solutions II provide examples of how the IBM Ecosystem collaborates with clients to build hybrid environments, connecting essential workloads to the cloud to maximize the value of their existing infrastructure investments:

"SVA customers have appreciated the enormous flexibility of IBM Power systems through Capacity Upgrade On-Demand in the high-end systems for many years," said Udo Sachs, Head of Competence Center Power Systems at SVA. "The flexible consumption models using prepaid capacity credits have been well-received by SVA customers, and now the monthly pay-as-you-go option for the scale-out models makes the platform even more attractive. When it comes to automation, IBM helps us to roll out complex workloads such as entire SAP landscapes at the push of a button by supporting Ansible on all OS derivatives, including AIX, IBM i and Linux, as well as ready-to-use modules for deploying the complete Power infrastructure."

"Solutions II provides technology design, deployment, and managed services to hospitality organizations that leverage mission critical IT infrastructure to execute their mission, often requiring 24/7 operation," said Dan Goggiano, Director of Gaming, Solutions II. "System availability is essential to maintaining our clients' revenue streams, and in our experience, they rely on the stability and resilience of IBM Power systems to help solidify their uptime. Our clients are excited that the expansion of the Power10 family further extends these capabilities and bolsters their ability to run applications securely, rapidly, and efficiently."

For more information on IBM Power and the new servers and consumption models announced today, visit: https://www.ibm.com/it-infrastructure/power

About IBM

IBM is a leading global hybrid cloud and AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com.

SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE in Germany and other countries. Please see https://www.sap.com/copyright for additional trademark information and notices.

1Comparison based on best performing 4-socket systems (IBM Power E1050 3.15-3.9 GHz, 96 core and Inspur NF8480M6 2.90 GHz, Intel Xeon Platinum 8380H) using published results at https://www.spec.org/cpu2017/results/rint2017.html as of 22 June 2022. For more information about SPEC CPU 2017, see https://www.spec.org/cpu2017/.

2Comparison based on best performing 4-socket systems (IBM Power E1050 3.15-3.9 GHz, 96 core; and Inspur NF8480M6 2.90 GHz, Intel Xeon Platinum 8380H) using published results at https://www.spec.org/cpu2017/results/rint2017.html as of 22 June 2022. For more information about SPEC CPU 2017, see https://www.spec.org/cpu2017/.

3Comparison based on best performing 4-socket systems (1) IBM Power E1050; two-tier SAP SD standard application benchmark running SAP ERP 6.0 EHP5; Power10 2.95 GHz processor, 4,096 GB memory, 4p/96c/768t, 134,016 SD benchmark users, 736,420 SAPS, AIX 7.3, DB2 11.5, Certification # 2022018; and (2) Dell EMC PowerEdge 840; two-tier SAP SD standard application benchmark running SAP ERP 6.0 EHP5; Intel Xeon Platinum 8280 2.7 GHz, 4p/112c/224t, 69,500 SD benchmark users (380,280 SAPS), SUSE Linux Enterprise Server 12 and SAP ASE 16, Certification # 2019045. All results can be found at sap.com/benchmark. Valid as of 7 July 2022.

Contact:
Ben Stricker
ben.stricker@ibm.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/ibm-expands-power10-server-family-to-help-clients-respond-faster-to-rapidly-changing-business-demands-301584186.html

SOURCE IBM

How one research center is driving AI innovation for academics and enterprise partners alike

A new research center for artificial intelligence and machine learning has sprung up at the University of Oregon, thanks to a collaboration between IBM and the Oregon Advanced Computing Institute for Science and Society. The Oregon Center for Enterprise AI eXchange (CE-AIX) leverages the university's high-performance computing technology and enterprise servers from IBM to create new training opportunities and collaborations with industry.

"The new lab facility will be a valuable resource for worldwide universities and enterprise companies wanting to take advantage of IBM Enterprise Servers POWER9 and POWER10 combined with IBM Spectrum storage, along with AIX and RHEL with OpenShift," said Ganesan Narayanasamy, IBM's leader for academic and research worldwide.

Narayanasamy said the new center extends state-of-the-art facilities and other Silicon Valley-style services to researchers, system developers, and other users looking to take advantage of open-source high-performance computing resources. The center's high-performance computing training has already given thousands of students hands-on exposure and practice, and it is expected to serve as a global hub that will help prepare the next generation of computer scientists, according to the center's director, Sameer Shende.

"We aim to expand the skillset of researchers and students in the area of commercial application of artificial intelligence and machine learning, as well as high-performance computing technologies," Shende said.

Thanks to a long-term loan agreement with IBM, the center has access to powerful enterprise servers and other capabilities. It was envisioned to bring together data scientists from businesses in different domains, such as financial services, manufacturing, and transportation, along with IBM research and development engineers, IBM partner data scientists, and university students and researchers.

The new center also has the potential to be leveraged by everyone from global transportation companies seeking to design more efficient trucking routes to clean energy firms looking to design better wind turbines based on models of airflow patterns. At the University of Oregon, there are potential applications in data science, machine learning, environmental hazards monitoring, and other emerging areas of research and innovation.

"Enterprise AI is a team sport," said Raj Krishnamurthy, an IBM chief architect for enterprise AI and co-director of the new center. "As businesses continue to operationalize AI in mission-critical systems, the use cases and methodologies developed from collaboration in this center will further promote the adoption of trusted AI techniques in the enterprise."

Ultimately, the center will contribute to the University of Oregon's overall research excellence, said AR Razdan, who serves as the university's vice president for research and innovation.

"The center marks another great step forward in [the university's] ongoing efforts to bring together interdisciplinary teams of researchers and innovators," Razdan said.

This post was created by IBM with Insider Studios.

IBM Flash Storage and Cyber Resiliency

Flash storage has historically had a reputation for delivering large amounts of storage capacity and high performance in a relatively small package. But with the current threat landscape, it has become important to focus on the resilience of flash. 

IBM's 2021 Cost of a Data Breach Report found that the average cost of a data breach is more than four million dollars, and recovery from such an event can take days or even weeks. IBM is responding to the need for protection and rapid recovery from ransomware and other cyber threats by releasing new data resilience capabilities for its FlashSystem family of all-flash arrays.

Flash storage with the power of data protection

Even if your company has a robust security strategy, you still need to be prepared if and when an attack succeeds. IBM empowers organizations to recover from this eventuality by enhancing its FlashSystem storage with IBM Safeguarded Copy. 

Safeguarded Copy enables flash storage to play a role in recovery by automatically creating point-in-time snapshots on production storage on an administrator-defined schedule. Once snapshots have been created, they cannot be changed or deleted. These protections prevent malware and internal threats from tampering with backups.
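To illustrate the idea (this is a conceptual sketch only, not IBM's implementation or API), an immutable snapshot store can be modeled as something that lets snapshots be created and read but refuses any modification or deletion:

```python
import hashlib

class SafeguardedStore:
    """Toy model of immutable point-in-time snapshots (illustrative only)."""

    def __init__(self):
        self._snapshots = {}  # timestamp -> (data, checksum)

    def take_snapshot(self, timestamp, data: bytes):
        # A snapshot, once written, is frozen: no overwrites allowed.
        if timestamp in self._snapshots:
            raise ValueError("snapshot already exists and is immutable")
        self._snapshots[timestamp] = (data, hashlib.sha256(data).hexdigest())

    def delete(self, timestamp):
        # Deletion is always refused -- the point of a safeguarded copy.
        raise PermissionError("Safeguarded snapshots cannot be deleted")

    def read(self, timestamp) -> bytes:
        return self._snapshots[timestamp][0]

store = SafeguardedStore()
store.take_snapshot("2022-07-12T02:00", b"production volume state")
try:
    store.delete("2022-07-12T02:00")
except PermissionError as e:
    print(e)  # snapshots stay put even if an attacker tries to remove them
```

The essential property is that the write path and the delete path are separated: production workloads keep creating snapshots on schedule, while nothing, including malware running with admin credentials, can retire them early.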

With Safeguarded Copy, companies can recover from an attack quickly and completely. Safeguarded Copy snapshots reside on the same FlashSystem storage as operational data, which dramatically reduces recovery time when compared to tiered or offsite copy-based recovery solutions.

Rapid recovery with IBM FlashSystem Cyber Vault

IBM has also enhanced its FlashSystem storage with IBM FlashSystem Cyber Vault, enabling it to move quickly through all three stages of responding to an attack: detection, response and recovery.

Cyber Vault runs continuously, monitoring snapshots as Safeguarded Copy creates them and using standard database tools and other software to verify that those snapshots haven't been compromised. If Cyber Vault finds that snapshots have been corrupted, it interprets that as a sign of an attack. By quickly determining which snapshots are safe to use, Cyber Vault can reduce recovery time from days to hours.
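This is also why recovery gets faster: once each snapshot carries an integrity verdict, choosing a restore point is just a newest-first scan for the latest snapshot that still validates. A hypothetical sketch, with simple checksums standing in for the database-level checks described above:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def latest_clean_snapshot(snapshots):
    """snapshots: list of (timestamp, data, recorded_checksum), oldest first.
    Returns the timestamp of the newest snapshot whose contents still
    match the checksum recorded when it was taken, or None if all fail."""
    for ts, data, recorded in reversed(snapshots):
        if checksum(data) == recorded:
            return ts
    return None

snaps = [
    ("02:00", b"clean", checksum(b"clean")),
    ("03:00", b"clean", checksum(b"clean")),
    ("04:00", b"ENCRYPTED-BY-RANSOMWARE", checksum(b"clean")),  # corrupted
]
print(latest_clean_snapshot(snaps))  # -> 03:00
```

In a real deployment the validation work has already been done continuously in the background, so at recovery time the scan above is effectively a lookup rather than a fresh forensic pass over every copy.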

Flash storage designed for resiliency

IBM has added members to its FlashSystem family that are built to deliver on performance while also providing resilience: FlashSystem 9500 and 7300. 

The FlashSystem 9500 is IBM's flagship enterprise storage array, designed for environments that need the highest capability and resilience. It offers twice the performance, connectivity and capacity of its predecessor and 50 percent more cache. The 9500 also provides data resilience with numerous safeguards, including multi-factor authentication (MFA) and secure boot to help ensure only IBM-authorized software runs on the system. Additionally, IBM's FlashCore Modules (FCMs) offer real-time hardware-based encryption and up to a 7x increase in endurance compared to commodity SSDs.

The IBM FlashSystem 7300 offers about 25 percent better performance than the previous generation of FlashSystem storage. It has a smaller footprint than the 9500 but runs the same software and features, including 3:1 real-time compression and hardware encryption. The FlashSystem 7300 supports up to 2.2PB effective capacity per 2U control enclosure. 
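An "effective capacity" figure like the 2.2PB above is raw capacity multiplied by an assumed data-reduction ratio. As a rough illustration of the arithmetic (the drive count and raw size below are invented for the example, not the 7300's actual configuration):

```python
def effective_capacity_pb(drives, raw_tb_per_drive, reduction_ratio):
    """Effective capacity in PB, given a data-reduction ratio like 3 (for 3:1)."""
    raw_pb = drives * raw_tb_per_drive / 1000.0
    return raw_pb * reduction_ratio

# e.g. a hypothetical 2U enclosure of 24 high-density modules at ~30.7 TB raw,
# with the 3:1 real-time compression cited above:
print(round(effective_capacity_pb(24, 30.7, 3), 2))  # -> 2.21
```

The practical caveat is that the reduction ratio depends on the data: already-compressed or encrypted workloads reduce far less, so effective capacity is a planning estimate rather than a guarantee.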

The IBM FlashSystem family offers two- and three-site replication along with configuration options that can include an optional 100 percent data availability guarantee for business continuity.

Explore next-generation flash storage

The IBM FlashSystem family is continuously evolving with expanded capabilities around capacity, performance and data protection. 

WWT can help your company evaluate and choose the right flash storage solution to meet your needs. WWT is an IBM-designated global and regional systems integrator (SI) and solution provider, and we know how important data protection is for modern companies. We encourage your organization to take a holistic approach to data resilience.

Killexams : IBM Expands Power10 Server Family to Help Clients Respond Faster to Rapidly Changing Business Demands

New Power10 scale-out and midrange models extend IBM's capabilities to deliver flexible and secured infrastructure for hybrid cloud environments

ARMONK, N.Y., July 12, 2022 /PRNewswire/ -- IBM (NYSE: IBM) today announced a significant expansion of its Power10 server line with the introduction of mid-range and scale-out systems to modernize, protect and automate business applications and IT operations. The new Power10 servers combine performance, scalability, and flexibility with new pay-as-you-go consumption offerings for clients looking to deploy new services quickly across multiple environments.

IBM Corporation logo. (PRNewsfoto/IBM)

IBM announced an expansion of its Power10 server line with mid-range and scale-out systems.

Digital transformation is driving organizations to modernize both their applications and IT infrastructures. IBM Power systems are purpose-built for today's demanding and dynamic business environments, and these new systems are optimized to run essential workloads such as databases and core business applications, as well as maximize the efficiency of containerized applications. An ecosystem of solutions with Red Hat OpenShift also enables IBM to collaborate with clients, connecting critical workloads to new, cloud-native services designed to maximize the value of their existing infrastructure investments.

The new servers join the popular Power10 E1080 server introduced in September 2021 to deliver a secured, resilient hybrid cloud experience that can be managed with other x86 and multi-cloud management software across clients' IT infrastructure. This expansion of the IBM Power10 family with the new midrange and scale-out servers brings high-end server capabilities throughout the product line. Not only do the new systems support critical security features such as transparent memory encryption and advanced processor/system isolation, but also leverage the OpenBMC project from the Linux Foundation for high levels of security for the new scale-out servers.

Highlights of the announcements include:

  • New systems: The expanded IBM Power10 portfolio, built around the next-generation IBM Power10 processor with 2x more cores and more than 2x memory bandwidth than previous Power generations, now includes the Power10 Midrange E1050, delivering record-setting 4-socket compute1, Java2, and ERP3 performance capabilities. New scale-out servers include the entry-level Power S1014, as well as S1022, and S1024 options, bringing enterprise capabilities to SMBs and remote-office/branch office environments, such as Capacity Upgrade on Demand (CuOD).
  • Cloud on premises with new flexible consumption choices: IBM has recently announced new flexible consumption offerings with pay-as-you-go options and by-the-minute metering for IBM Power Private Cloud, bringing more opportunities to help lower the cost of running OpenShift solutions on Power when compared against alternative platforms. These new consumption models build on options already available with IBM Power Virtual Server to enable greater flexibility in clients' hybrid journeys. Additionally, the highly anticipated IBM i subscription delivers a comprehensive platform solution with the hardware, software and support/services included in the subscription service.
  • Business transformation with SAP®: IBM continues its innovations for SAP solutions. The new midrange E1050 delivers scale (up to 16 TB) and performance for a 4-socket system for clients who run BREAKTHROUGH with IBM for RISE with SAP. In addition, an expansion of the premium provider option is now available to provide more flexibility and computing power with an additional choice to run workloads on IBM Power on Red Hat Enterprise Linux on IBM Cloud.

"Today's highly dynamic environment has created volatility, from materials to people and skills, all of which impact short-term operations and long-term sustainability of the business," said Steve Sibley, Vice President, IBM Power Product Management. "The right IT investments are critical to business and operational resilience. Our new Power10 models offer clients a variety of flexible hybrid cloud choices with the agility and automation to best fit their needs, without sacrificing performance, security or resilience."

The expansion of the IBM Power10 family has been engineered to establish one of the industry's most flexible and broadest range of servers for data-intensive workloads such as SAP S/4HANA – from on-premises workloads to hybrid cloud. IBM now offers more ways to implement dynamic capacity – with metering across all operating environments including IBM i, AIX, Linux and OpenShift supporting modern and traditional applications on the same platforms – as well as integrated infrastructure automation software for improved visibility and management.

The new systems with IBM Power Virtual Server also help clients operate a secured hybrid cloud experience that delivers high performance and architectural consistency across their IT infrastructure. The systems are uniquely designed so as to protect sensitive data from core to cloud, and enable virtual machines and containerized workloads to run simultaneously on the same systems. For critical business workloads that have traditionally needed to reside on-premises, they can now be moved into the cloud as workloads and needs demand. This flexibility can help clients mitigate risk and time associated with rewriting applications for a different platform.

"As organizations around the world continue to adapt to unpredictable changes in consumer behaviors and needs, they need a platform that can deliver their applications and insights securely where and when they need them," said Peter Rutten, IDC Worldwide Infrastructure Research Vice President. "IBM Power continues its laser focus on helping clients respond faster to dynamically changing environments and business demands, while protecting information security and distilling new insights from data, all with high reliability and availability."

Ecosystem of ISVs and Channel Partners Enhance Capabilities for IBM Power10

Critical in the launch of the expanded Power10 family is a robust ecosystem of ISVs, Business Partners, and lifecycle services. Ecosystem partners such as SVA and Solutions II provide examples of how the IBM Ecosystem collaborates with clients to build hybrid environments, connecting essential workloads to the cloud to maximize the value of their existing infrastructure investments:

"SVA customers have appreciated the enormous flexibility of IBM Power systems through Capacity Upgrade On-Demand in the high-end systems for many years," said Udo Sachs, Head of Competence Center Power Systems at SVA. "The flexible consumption models using prepaid capacity credits have been well-received by SVA customers, and now the monthly pay-as-you-go option for the scale-out models makes the platform even more attractive. When it comes to automation, IBM helps us to roll out complex workloads such as entire SAP landscapes at the push of a button by supporting Ansible on all OS derivatives, including AIX, IBM i and Linux, as well as ready-to-use modules for deploying the complete Power infrastructure."

"Solutions II provides technology design, deployment, and managed services to hospitality organizations that leverage mission critical IT infrastructure to execute their mission, often requiring 24/7 operation," said Dan Goggiano, Director of Gaming, Solutions II. "System availability is essential to maintaining our clients' revenue streams, and in our experience, they rely on the stability and resilience of IBM Power systems to help solidify their uptime. Our clients are excited that the expansion of the Power10 family further extends these capabilities and bolsters their ability to run applications securely, rapidly, and efficiently."

For more information on IBM Power and the new servers and consumption models announced today, visit: https://www.ibm.com/it-infrastructure/power

About IBM

IBM is a leading global hybrid cloud, AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com.

SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE in Germany and other countries. Please see https://www.sap.com/copyright for additional trademark information and notices.

1Comparison based on best performing 4-socket systems (IBM Power E1050 3.15-3.9 GHz, 96 core; and Inspur NF8480M6 2.90 GHz, Intel Xeon Platinum 8380H) using published results at https://www.spec.org/cpu2017/results/rint2017.html as of 22 June 2022. For more information about SPEC CPU 2017, see https://www.spec.org/cpu2017/.

2Comparison based on best performing 4-socket systems (IBM Power E1050 3.15-3.9 GHz, 96 core; and Inspur NF8480M6 2.90 GHz, Intel Xeon Platinum 8380H) using published results at https://www.spec.org/cpu2017/results/rint2017.html as of 22 June 2022. For more information about SPEC CPU 2017, see https://www.spec.org/cpu2017/.

3Comparison based on best performing 4-socket systems: (1) IBM Power E1050; two-tier SAP SD standard application benchmark running SAP ERP 6.0 EHP5; Power10 2.95 GHz processor, 4,096 GB memory, 4p/96c/768t, 134,016 SD benchmark users, 736,420 SAPS, AIX 7.3, DB2 11.5, Certification # 2022018; and (2) Dell EMC PowerEdge 840; two-tier SAP SD standard application benchmark running SAP ERP 6.0 EHP5; Intel Xeon Platinum 8280 2.7 GHz, 4p/112c/224t, 69,500 SD benchmark users (380,280 SAPS), SUSE Linux Enterprise Server 12 and SAP ASE 16, Certification # 2019045. All results can be found at sap.com/benchmark. Valid as of 7 July 2022.

Contact:
Ben Stricker
ben.stricker@ibm.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/ibm-expands-power10-server-family-to-help-clients-respond-faster-to-rapidly-changing-business-demands-301584186.html

SOURCE IBM

Mon, 11 Jul 2022 16:03:00 -0500 — https://stockhouse.com/news/press-releases/2022/07/12/ibm-expands-power10-server-family-to-help-clients-respond-faster-to-rapidly