Thanks to 100% valid and up to date C5050-287 Latest Topics by killexams.com

killexams.com gives the latest Pass4sure C5050-287 exam prep with actual C5050-287 practice tests. Practice these genuine questions and answers to improve your knowledge and breeze through your C5050-287 test with a great score. We assure you 100% that if you memorize and practice these C5050-287 real questions, you will pass with a great score.

Exam Code: C5050-287 Practice exam 2022 by Killexams.com team
C5050-287 Foundations of IBM Cloud Reference Architecture V5

Exam Title : IBM Certified Solution Advisor - Cloud Reference Architecture V5
Exam ID : C5050-287
Exam Duration : 90 mins
Questions in exam : 60
Passing Score : 38 / 60
Official Training : Cloud Computing Fundamentals
Exam Center : Pearson VUE
Real Questions : IBM Foundations of IBM Cloud Reference Architecture Real Questions
VCE practice test : IBM C5050-287 Certification VCE Practice Test

Cloud Computing Concepts and Benefits
- Define the cloud computing business advantages.
- Define cloud architecture.
- Describe considerations such as risk, cost and compliance around cloud computing.
- Define automation as it pertains to cloud computing.
- Define why standardization is important to cloud computing.
- Define service catalog as it pertains to cloud computing.
- Define a public cloud.
- Define a private cloud.
- Define a hybrid cloud.
- Define the difference between a private cloud, a public cloud, and a hybrid cloud.
- Define Software as a Service (SaaS).
- Define Platform as a Service (PaaS).
- Define Infrastructure as a Service (IaaS).
- Define DevOps as it pertains to cloud computing.
- Explain Maturity as it relates to SaaS, PaaS, and IaaS.
- Explain the benefits of patterns as a description of cloud services.
- Define software defined environments as they relate to cloud computing.
- Summarize how business processes can be automated in a cloud environment.

Cloud Computing Design Principles
- Demonstrate base knowledge needed to advise on creating a cloud infrastructure.
- Explain Cloud networking principles.
- Explain Cloud storage principles such as block, object, file, and storage area networks.
- Describe security strategies in a cloud computing environment.
- Design principles for cloud-ready applications, including patterns, Chef, Puppet, and Heat templates.
- Design principles for cloud native applications such as open standards and 12-Factor app.
- Design principles for DevOps.
- Designing consumable applications for the cloud.
- Define hybrid integration capabilities.
- Define API Economy in Cloud Computing.
- Define how solutions in the cloud can be more effective.
- Explain to the customer how some popular billing models work and how they pertain to the software the customer has.
- Describe principles for governance, compliance and service management.

IBM Cloud Reference Architecture
- Explain the four defining principles of IBM Cloud.
- Explain the benefits of using the IBM Cloud Reference Architecture (ICRA).
- Explain the Cloud Platform Services for ICRA including containers, foundational services, etc.
- Explain the Cloud Service Provider Adoption Pattern for ICRA.
- Describe the ICRA Building SaaS cloud adoption pattern.
- Explain the Hybrid patterns for ICRA.
- Describe the solution integration process detailed in the ICRA to take an existing environment to an IBM Cloud Computing environment.
- Design a secure cloud service model using ICRA.
- Describe high availability and disaster recovery as it pertains to cloud computing.
- Describe actors and roles as defined in ICRA (Cloud Service Consumers, Cloud Service Creators, Cloud Service Provider, Cloud Services and the Common Cloud Management Platform).
- Describe how IBM Service Management can effectively manage a customer's cloud environment.
- Describe the IBM API management capabilities.
- Describe the role of governance in the ICRA.
- Describe non-functional requirements (NFRs) as described by ICRA.
- Explain the role of mobile as part of the ICRA.
- Explain the Cognitive pattern as part of the ICRA.
- Explain the IOT pattern as part of the ICRA.
- Explain the DevOps pattern as part of the ICRA.
- Explain the Big Data and Analytics pattern as part of the ICRA.

IBM Cloud Solutions
- Describe the IBM capabilities for Cloud Managed Services.
- Describe the IBM capabilities for hybrid integration.
- Describe the IBM capabilities for video services.
- Describe the IBM capabilities for cloud brokerage.
- Describe the IBM capabilities for DevOps.
- Describe the IBM capabilities for cloud native applications.
- Describe the IBM capabilities for service management.
- Describe the IBM capabilities for storage.
- Describe the IBM capabilities for business process management.
- Describe the IBM capabilities for the IBM Marketplace.

Foundations of IBM Cloud Reference Architecture V5
IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Partnerships & Use Cases

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
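
To make the hub-and-spoke idea concrete, the sketch below shows, in minimal Python, a hub that tracks registered spoke locations and pushes an application to each of them. The class names and deploy logic are illustrative assumptions for this article, not an IBM or Red Hat API.

```python
# Hypothetical sketch of a hub pushing an edge application to spoke locations.
# Names and structures are illustrative only, not an IBM or Red Hat API.
from dataclasses import dataclass, field

@dataclass
class Spoke:
    name: str            # e.g., "factory-floor-7" or "retail-branch-12"
    location: str
    deployed: dict = field(default_factory=dict)  # app name -> version

class Hub:
    """Central control plane that orchestrates deployments to spokes."""
    def __init__(self):
        self.spokes = {}

    def register(self, spoke: Spoke):
        self.spokes[spoke.name] = spoke

    def deploy(self, app_name: str, version: str, target_names=None):
        targets = target_names or list(self.spokes)
        for name in targets:
            spoke = self.spokes[name]
            # A real control plane would call the spoke's agent, e.g., via Kubernetes APIs.
            spoke.deployed[app_name] = version
            print(f"deployed {app_name}:{version} to {spoke.name} ({spoke.location})")

hub = Hub()
hub.register(Spoke("factory-floor-7", "Detroit"))
hub.register(Spoke("retail-branch-12", "Austin"))
hub.deploy("defect-detector", "1.4.2")
```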

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM’s overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge by industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
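
A back-of-the-envelope calculation makes the point; the millisecond figures below are assumptions chosen for illustration, not measurements from IBM.

```python
# Back-of-the-envelope comparison of edge vs. cloud processing latency.
# All numbers are illustrative assumptions, not measured values.
network_round_trip_ms = 80   # device <-> cloud transfer, both directions
cloud_inference_ms = 20      # model inference in the cloud
edge_inference_ms = 35       # same model on a smaller edge accelerator

cloud_total = network_round_trip_ms + cloud_inference_ms   # 100 ms
edge_total = edge_inference_ms                             # 35 ms
print(f"cloud path: {cloud_total} ms, edge path: {edge_total} ms, "
      f"saving: {cloud_total - edge_total} ms per transaction")
```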

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
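
To illustrate the kind of task such a system performs, the toy sketch below turns a transcribed drive-thru utterance into a structured order. The menu, prices, and parsing logic are invented for the example; the real system relies on trained speech and NLP models rather than keyword matching.

```python
# Toy keyword-based parser turning a spoken-order transcript into a structured order.
# Menu, prices, and logic are invented for illustration; a production system would use
# a trained speech-to-text model plus an intent/slot NLP model instead.
import re

MENU = {"hamburger": 2.49, "cheeseburger": 2.99, "fries": 1.89, "cola": 1.49}
NUMBER_WORDS = {"a": 1, "one": 1, "two": 2, "three": 3, "four": 4}

def parse_order(transcript: str) -> dict:
    words = re.findall(r"[a-z]+", transcript.lower())
    order, qty = {}, 1
    for word in words:
        if word in NUMBER_WORDS:
            qty = NUMBER_WORDS[word]          # remember the quantity we just heard
        else:
            item = word if word in MENU else word.rstrip("s")  # crude plural handling
            if item in MENU:
                order[item] = order.get(item, 0) + qty
                qty = 1
    return order

order = parse_order("I'd like two cheeseburgers, one fries and a cola please")
total = sum(MENU[item] * qty for item, qty in order.items())
print(order, f"total: ${total:.2f}")
```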

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. Spot monitored connectors on both flat and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions since the 1700s; the current, in-progress fourth revolution, Industry 4.0, is defined by digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production; the quicker the time-to-value and the return-on-investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling redundant data. Using ML-based automation for data summarization accelerates the process and produces better model performance (a minimal sketch of this approach follows this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
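
The sketch below illustrates the data-summarization idea from the list above, assuming image embeddings are already available as feature vectors: cluster the unlabeled pool and send only one representative per cluster to human annotators. It uses scikit-learn's KMeans and is an illustration, not IBM's pipeline.

```python
# Illustrative data summarization for labeling: cluster unlabeled feature vectors
# and pick one representative per cluster, instead of annotating everything.
# Assumes embeddings already exist; not IBM's actual pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 128))   # stand-in for image feature vectors

k = 50                                      # annotation budget: 50 images
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

# For each cluster, choose the sample closest to the centroid as the one to label.
to_label = []
for c in range(k):
    members = np.where(kmeans.labels_ == c)[0]
    dists = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
    to_label.append(members[np.argmin(dists)])

print(f"selected {len(to_label)} representative samples out of {len(embeddings)}")
```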

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
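
A minimal sketch of such a drift check is shown below, comparing a recent window of one input feature against its training-time distribution with a two-sample Kolmogorov-Smirnov test. The window size and p-value threshold are arbitrary assumptions.

```python
# Illustrative input-drift check: compare a recent window of a feature against the
# training-time reference distribution. Threshold and window size are arbitrary.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.normal(loc=0.0, scale=1.0, size=10_000)  # feature values at training time
recent = rng.normal(loc=0.4, scale=1.0, size=2_000)      # feature values in production

stat, p_value = ks_2samp(reference, recent)
DRIFT_P_THRESHOLD = 0.01
if p_value < DRIFT_P_THRESHOLD:
    print(f"drift suspected (KS statistic={stat:.3f}, p={p_value:.1e}); "
          "flag model for retraining")
else:
    print("no significant drift detected")
```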

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
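
The sketch below captures the core of federated averaging with a toy linear model: each spoke trains on its own private data, only the weights travel to the hub, and the hub averages them into a new global model. It is a teaching example, not IBM Federated Learning.

```python
# Toy federated averaging (FedAvg) for a linear model y = w*x + b.
# Each "spoke" trains locally on private data; only weights travel to the hub.
import numpy as np

rng = np.random.default_rng(42)
true_w, true_b = 3.0, -1.0

def make_spoke_data(n):
    x = rng.uniform(-1, 1, size=n)
    return x, true_w * x + true_b + rng.normal(0, 0.1, size=n)

spokes = [make_spoke_data(200) for _ in range(5)]   # 5 edge locations, data stays local

def local_train(w, b, x, y, lr=0.1, epochs=20):
    for _ in range(epochs):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)
        grad_b = 2 * np.mean(pred - y)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

w, b = 0.0, 0.0                                     # global model held by the hub
for round_ in range(10):                            # federated rounds
    updates = [local_train(w, b, x, y) for x, y in spokes]
    w = np.mean([u[0] for u in updates])            # hub averages the weights
    b = np.mean([u[1] for u in updates])

print(f"learned w={w:.2f}, b={b:.2f} (true values 3.00, -1.00)")
```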

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

Compared with the status quo method of performing Day-2 operations using centralized applications and a centralized data plane, the managed hub and spoke method distributes both the applications and the data plane and is more efficient. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate - optimized to meet data lifecycle, regulatory and compliance, as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million (a minimal pruning sketch follows this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
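
The sketch below illustrates the parameter-reduction idea from item 3 using simple magnitude pruning on one weight matrix; the 90% sparsity level is an arbitrary assumption, and a real pipeline would combine pruning with quantization and distillation.

```python
# Illustrative magnitude pruning: zero out the smallest-magnitude weights to shrink
# a model's effective footprint for edge deployment. Sparsity level is arbitrary.
import numpy as np

rng = np.random.default_rng(7)
weights = rng.normal(size=(512, 512))          # one dense layer of a larger model

SPARSITY = 0.90                                # drop 90% of parameters
threshold = np.quantile(np.abs(weights), SPARSITY)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

kept = np.count_nonzero(pruned)
print(f"kept {kept} of {weights.size} weights ({kept / weights.size:.0%}); "
      "stored sparsely, the layer is roughly 10x smaller")
```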

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still need a server but only as a single-node, rather than clustered, deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
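
A deliberately simplified sketch of that closed loop appears below: slice metrics are compared against SLA targets and a scaling action is suggested when a target is missed. The metric names, thresholds, and actions are invented for illustration and do not reflect IBM Cloud Pak for Network Automation.

```python
# Toy closed-loop check of slice quality of service against SLA targets.
# Metric values, targets, and the scaling actions are invented for illustration.
slices = {
    "low-latency-slice": {"latency_ms": 12.0, "throughput_mbps": 80.0},
    "broadband-slice":   {"latency_ms": 45.0, "throughput_mbps": 310.0},
}
sla = {
    "low-latency-slice": {"latency_ms": 10.0, "throughput_mbps": 50.0},
    "broadband-slice":   {"latency_ms": 60.0, "throughput_mbps": 300.0},
}

for name, metrics in slices.items():
    target = sla[name]
    if metrics["latency_ms"] > target["latency_ms"]:
        print(f"{name}: latency {metrics['latency_ms']} ms exceeds "
              f"{target['latency_ms']} ms target -> allocate more RAN/core resources")
    elif metrics["throughput_mbps"] < target["throughput_mbps"]:
        print(f"{name}: throughput below target -> scale out user-plane functions")
    else:
        print(f"{name}: within SLA")
```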

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data Analytics Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunities for value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide infrastructure such as software-defined storage for a federated-namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, and data storage and processing is close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That has little value and would simply function as a hub to spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Astadia Publishes Mainframe to Cloud Reference Architecture Series

Press release content from Business Wire. The AP news staff was not involved in its creation.

BOSTON--(BUSINESS WIRE)--Aug 3, 2022--

Astadia is pleased to announce the release of a new series of Mainframe-to-Cloud reference architecture guides. The documents cover how to refactor IBM mainframes applications to Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and Oracle Cloud Infrastructure (OCI). The documents offer a deep dive into the migration process to all major target cloud platforms using Astadia’s FastTrack software platform and methodology.

As enterprises and government agencies are under pressure to modernize their IT environments and make them more agile, scalable and cost-efficient, refactoring mainframe applications in the cloud is recognized as one of the most efficient and fastest modernization solutions. By making the guides available, Astadia equips business and IT professionals with a step-by-step approach on how to refactor mission-critical business systems and benefit from highly automated code transformation, data conversion and testing to reduce costs, risks and timeframes in mainframe migration projects.

“Understanding all aspects of legacy application modernization and having access to the most performant solutions is crucial to accelerating digital transformation,” said Scott G. Silk, Chairman and CEO. “More and more organizations are choosing to refactor mainframe applications to the cloud. These guides are meant to assist their teams in transitioning fast and safely by benefiting from Astadia’s expertise, software tools, partnerships, and technology coverage in mainframe-to-cloud migrations,” said Mr. Silk.

The new guides are part of Astadia’s free Mainframe-to-Cloud Modernization series, an ample collection of guides covering various mainframe migration options, technologies, and cloud platforms. The series covers IBM (NYSE:IBM) Mainframes.

In addition to the reference architecture diagrams, these comprehensive guides include various techniques and methodologies that may be used in forming a complete and effective Legacy Modernization plan. The documents analyze the important role of the mainframe platform, and how to preserve previous investments in information systems when transitioning to the cloud.

In each of the IBM Mainframe Reference Architecture white papers, readers will explore:

  • Benefits, approaches, and challenges of mainframe modernization
  • Understanding typical IBM Mainframe Architecture
  • An overview of Azure/AWS/Google Cloud/Oracle Cloud
  • Detailed diagrams of IBM mappings to Azure/AWS/ Google Cloud/Oracle Cloud
  • How to ensure project success in mainframe modernization

The guides are available for download here:

To access more mainframe modernization resources, visit the Astadia learning center on www.astadia.com.

About Astadia

Astadia is the market-leading software-enabled mainframe migration company, specializing in moving IBM and Unisys mainframe applications and databases to distributed and cloud platforms in unprecedented timeframes. With more than 30 years of experience, and over 300 mainframe migrations completed, enterprises and government organizations choose Astadia for its deep expertise, range of technologies, and the ability to automate complex migrations, as well as testing at scale. Learn more on www.astadia.com.

View source version on businesswire.com:https://www.businesswire.com/news/home/20220803005031/en/

CONTACT: Wilson Rains, Chief Revenue Officer

Wilson.Rains@astadia.com

+1.877.727.8234

KEYWORD: UNITED STATES NORTH AMERICA MASSACHUSETTS

INDUSTRY KEYWORD: DATA MANAGEMENT TECHNOLOGY OTHER TECHNOLOGY SOFTWARE NETWORKS INTERNET

SOURCE: Astadia

Copyright Business Wire 2022.

PUB: 08/03/2022 10:00 AM/DISC: 08/03/2022 10:02 AM

http://www.businesswire.com/news/home/20220803005031/en

Learning from Failure

Learning from failure is a hallmark of the technology business. Nick Baker, a 37-year-old system architect at Microsoft, knows that well. A British transplant at the software giant's Silicon Valley campus, he went from failed project to failed project in his career. He worked on such dogs as Apple Computer's defunct video card business, 3DO's failed game consoles, a chip startup that screwed up a deal with Nintendo, the never-successful WebTV and Microsoft's canceled Ultimate TV satellite TV recorder.

But Baker finally has a hot seller with the Xbox 360, Microsoft's video game console launched worldwide last holiday season. The adventure on which he embarked four years ago would ultimately prove that failure is often the best teacher. His new gig would once again provide copious evidence that flexibility and understanding of detailed customer needs will beat a rigid business model every time. And so far the score is Xbox 360, one, and the delayed PlayStation 3, nothing.

The Xbox 360 console is Microsoft's living room Trojan horse, purchased as a game box but capable of so much more in the realm of digital entertainment in the living room. Since the day after Microsoft terminated the Ultimate TV box in February 2002, Baker has been working on the Xbox 360 silicon architecture team at Microsoft's campus in Mountain View, CA. He is one of the 3DO survivors who now gets a shot at revenge against the Japanese companies that vanquished his old firm.

"It feels good," says Baker. "I can play it at home with the kids. It's family-friendly, and I don't have to play on the Nintendo anymore."

Baker is one of the people behind the scenes who pulled together the Xbox 360 console by engineering some of the most complicated chips ever designed for a consumer entertainment device. The team labored for years and made critical decisions that enabled Microsoft to beat Sony and Nintendo to market with a new box, despite a late start with the Xbox in the previous product cycle. Their story, captured here and in a forthcoming book by the author of this article, illustrates the ups and downs in any big project.

When Baker and his pal Jeff Andrews joined games programmer Mike Abrash in early 2002, they had clear marching orders. Their bosses — Microsoft CEO Steve Ballmer, at the top of Microsoft; Robbie Bach, running the Xbox division; Xbox hardware chief Todd Holmdahl; Greg Gibson, for Xbox 360 system architecture; and silicon chief Larry Yang — all dictated what Microsoft needed this time around.

They couldn't be late. They had to make hardware that could become much cheaper over time and they had to pack as much performance into a game console as they could without overheating the box.

Trinity Taken

The group of silicon engineers started first among the 2,000 people in the Xbox division on a project that Baker had code-named Trinity. But they couldn't use that name, because someone else at Microsoft had taken it. So they named it Xenon, for the colorless and odorless gas, because it sounded cool enough. Their first order of business was to study computing architectures, from those of the best supercomputers to those of the most power-efficient portable gadgets. Although Microsoft had chosen Intel and NVIDIA to make the chips for the original Xbox the first time around, the engineers now talked to a broad spectrum of semiconductor makers.

"For us, 2002 was about understanding what the technology could do," says Greg Gibson, system designer.

Sony teamed up with IBM and Toshiba to create a full-custom microprocessor from the ground up. They planned to spend $400 million developing the cell architecture and even more fabricating the chips. Microsoft didn't have the time or the chip engineers to match the effort on that scale, but Todd Holmdahl and Larry Yang saw a chance to beat Sony. They could marshal a host of virtual resources and create a semicustom design that combined both off-the-shelf technology and their own ideas for game hardware. Microsoft would lead the integration of the hardware, own the intellectual property, set the cost-reduction schedules, and manage its vendors closely.

They believed this approach would get them to market by 2005, which was when they estimated Sony would be ready with the PlayStation 3. (As it turned out, Microsoft's dreams were answered when Sony, in March, postponed the PlayStation 3 launch until November.)

More important, using an IP ownership strategy with the chips could dramatically cut Microsoft's costs on the original Xbox. Microsoft had lost an estimated $3.7 billion over four years, or roughly a whopping $168 per box. By cutting costs, Microsoft could erase a lot of red ink.

Balanced Design

Baker and Andrews quickly decided they wanted to create a balanced design, trading off power efficiency and performance. So they envisioned a multicore microprocessor, one with as many as 16 cores — or miniprocessors — on one chip. They wanted a graphics chip with 60 shaders, or parallel processors for rendering distinct features in graphic animations.

Laura Fryer, manager of the Xbox Advanced Technology Group in Redmond, WA, solicited feedback on the new microprocessor. She said game developers were wary of managing multiple software threads associated with multiple cores, because the switch created a juggling task they didn't have to do on the original Xbox or the PC. But they appreciated the power efficiency and added performance they could get.

Microsoft's current vendors, Intel and NVIDIA, didn't like the idea that Microsoft would own the IP they created. For Intel, allowing Microsoft to take the x86 design to another manufacturer was as troubling as signing away the rights to Windows would be to Microsoft. NVIDIA was willing to do the work, but if it had to deviate from its road map for PC graphics chips in order to tailor a chip for a game box, then it wanted to get paid for it. Microsoft didn't want to pay that high a price. "It wasn't a good deal," says Jen-Hsun Huang, CEO of NVIDIA. Microsoft had also been through a painful arbitration on pricing for the original Xbox graphics chips.

IBM, on the other hand, had started a chip engineering services business and was perfectly willing to customize a PowerPC design for Microsoft, says Jim Comfort, an IBM vice president. At first IBM didn't believe that Microsoft wanted to work together, given a history of rancor dating back to the DOS and OS/2 operating systems in the 1980s. Moreover, IBM was working for Microsoft rivals Sony and Nintendo. But Microsoft pressed IBM for its views on multicore chips and discovered that Big Blue was ahead of Intel in thinking about these kinds of designs.

When Bill Adamec, a Microsoft program manager, traveled to IBM's chip design campus in Rochester, NY, he did a double take when he arrived at the meeting room where 26 engineers were waiting for him. Although IBM had reservations about Microsoft's schedule, the company was clearly serious.

Meanwhile, ATI Technologies assigned a small team to conceive a proposal for a game console graphics chip. Instead of pulling out a derivative of a PC graphics chip, ATI's engineers decided to design a brand-new console graphics chip that relied on embedded memory to feed a lot of data to the graphics chip while keeping the main data pathway clear of traffic — critical for avoiding bottlenecks that would slow down the system.

Stomaching IBM

By the fall of 2002, Microsoft's chip architects decided they favored the IBM and ATI solutions. They met with Ballmer and Gates, who wanted to be involved in the critical design decisions at an early juncture. Larry Yang recalls, "We asked them if they could stomach a relationship with IBM." Their affirmative answer pleased the team.

By early 2003, the list of potential chip suppliers had been narrowed down. At that point, Robbie Bach, the chief Xbox officer, took his team to a retreat at the Salish Lodge, on the edge of Washington's beautiful Snoqualmie Falls, made famous by the "Twin Peaks" television show. The team hashed out a battle plan. They would own the IP for silicon that could take the costs of the box down quickly. They would launch the box in 2005 at the same time as Sony would launch its box, or even earlier. The last time, Sony had had a 20-month head start with the PlayStation 2. By the time Microsoft sold its first 1.4 million Xboxes, Sony had sold more than 25 million PlayStation 2s.

Those goals fit well with the choice of IBM and ATI for the two pieces of silicon that would account for more than half the cost of the box. Each chip provider moved forward, based on a "statement of work," but Gibson kept his options open, and it would be months before the team finalized a contract. Both IBM and ATI could pull blocks of IP from their existing products and reuse them in the Microsoft chips. Engineering teams from both companies began working on joint projects such as the data pathway that connected the chips. ATI had to make contingency plans, in case Microsoft chose Intel over IBM, and IBM also had to consider the possibility that Microsoft might choose NVIDIA.

Hacking Embarrassment

Through the summer, Microsoft executives and marketers created detailed plans for the console launch. They decided to build security into the microprocessor to prevent hacking, which had proved to be a major embarrassment on the original Xbox. Marketers such as David Reid all but demanded that Microsoft try to develop the new machine in a way that would allow the games for the original Xbox to run on it. So-called backward compatibility wasn't necessarily exploited by customers, but it was a big factor in deciding which box to buy. And Bach insisted that Microsoft had to make gains in Japan and Europe by launching in those regions at the same time as in North America.

For a period in July 2003, Bob Feldstein, the ATI vice president in charge of the Xenon graphics chip, thought NVIDIA had won the deal, but in August Microsoft signed a deal with ATI and announced it to the world. The ATI chip would have 48 shaders, or processors that would handle the nuances of color shading and surface features on graphics objects, and would come with 10 Mbytes of embedded memory.

IBM followed with a contract signing a month later. The deal was more complicated than ATI's, because Microsoft had negotiated the right to take the IBM design and have it manufactured in an IBM-licensed foundry being built by contract chip maker Chartered Semiconductor. The chip would have three cores and run at 3.2 GHz. It was a little short of the 3.5 GHz that IBM had originally pitched, but it wasn't off by much.

By October 2003, the entire Xenon team had made its pitch to Gates and Ballmer. They faced some tough questions. Gates wanted to know if there was any chance the box would run the complete Windows operating system. The top executives ended up giving the green light to Xenon without a Windows version.

The ranks of Microsoft's hardware team swelled to more than 200, with half of the team members working on silicon integration. Many of these people were like Baker and Andrews, stragglers who had come from failed projects such as 3DO and WebTV. About 10 engineers worked on "Ana," a Microsoft video encoder chip, while others managed the schedule and cost reduction with IBM and ATI. Others supported suppliers, such as Silicon Integrated Systems, the provider of the "south bridge," the communications and input/output chip. The rest of the team helped handle relationships with vendors for the other 1,700 parts in the game console.

Ilan Spillinger headed the IBM chip program, which carried the code name Waternoose, after the spiderlike creature from the film "Monsters, Inc." He supervised IBM's chief engineer, Dave Shippy, and worked closely with Microsoft's Andrews on every aspect of the design program.

Games at Center

Everything happened in parallel. For much of 2003, a team of industrial designers created the look and feel of the box. They tested the design on gamers, and the feedback suggested that the design seemed like something either Apple or Sony had created. The marketing team decided to call the machine the Xbox 360, because it put the gamer at the center. A small software team led by Tracy Sharp developed the operating system in Redmond. Microsoft started investing heavily in games. By February 2004, Microsoft sent out the first kits to game developers for making games on Apple Macintosh G5 computers. And in early 2004, Greg Gibson's evaluation team began testing subsystems to make sure they would all work together when the final design came together.

IBM assigned 421 engineers from six or seven sites to the project, which was a proving ground for its design services business. The effort paid off, with an early test chip that came out in August 2004. With that chip, Microsoft was able to begin debugging the operating system. ATI taped out its first design in September 2004, and IBM taped out its full chip in October 2004. Both chips ran game code early on, which was good, considering that it's very hard to get chips working at all when they first come out of the factory.

IBM executed without many setbacks. As it revised the chip, it fixed bugs with two revisions of the chip's layers. The company was able to debug the design in the factory quickly, because IBM's fab engineers could work on one part while the Chartered engineers could debug a different part of the chip. They fed the information to each other, speeding the cycle of revisions. By Jan. 30, 2005, IBM taped out the final version of the microprocessor.

ATI, meanwhile, had a more difficult time. The company had assigned 180 engineers to the project. Although games ran on the chip early, problems came up in the lab. Feldstein said that in one game, one frame of animation would freeze as every other frame went by. It took six weeks to uncover the bug and find a fix. Delays in debugging threatened to throw the beta-development-kit program off schedule. That meant thousands of game developers might not get the systems they needed on time. If that happened, the Xbox 360 might launch without enough games, a disaster in the making.

The pressure was intense. But Neil McCarthy, a Microsoft engineer in Mountain View, designed a modification of the metal layers of the graphics chip. By doing so, he enabled Microsoft to get working chips from the interim design. ATI's foundry, Taiwan Semiconductor Manufacturing Co., churned out enough chips to seed the developer systems. The beta kits went out in the spring of 2005.

Meanwhile, Microsoft's brass was thinking that Sony would trump the Xbox 360 by coming out with more memory in the PlayStation 3. So in the spring of 2005, Microsoft made what would become a fateful decision. It decided to double the amount of memory in the box, from 256 Mbytes to 512 Mbytes of graphics Double Data Rate 3 (GDDR3) chips. The decision would cost Microsoft $900 million over five years, so the company had to pare back spending in other areas to stay on its profit targets.

Microsoft started tying up all the loose ends. It rehired Seagate Technology, which it had hired for the original Xbox, to make hard disk drives for the box, but this time Microsoft decided to have two SKUs — one with a hard drive, for the enthusiasts, and one without, for the budget-conscious. It brought aboard both Flextronics and Wistron, the current makers of the Xbox, as contract manufacturers. But it also laid plans to have Celestica build a third factory for building the Xbox 360.

Just as everyone started to worry about the schedule going off course, ATI spun out the final graphics chip design in mid-July 2005. Everyone breathed a sigh of relief, and they moved on to the tough work of ramping up manufacturing. There was enough time for both ATI and IBM to build a stockpile of chips for the launch, which was set for Nov. 22 in North America, Dec. 2 in Europe and Dec. 10 in Japan.

Flextronics debugged the assembly process first. Nick Baker traveled to China to debug the initial boxes as they came off the line. Although assembly was scheduled to start in August, it didn't get started until September. Because the machines were being built in southern China, they had to be shipped over a period of six weeks by boat to the regions. Each factory could build only as many as 120,000 machines a week, running at full tilt. The slow start, combined with the multiregion launch, created big risks for Microsoft.

An Unexpected Turn

The hardware team was on pins and needles. The most-complicated chips came in on time and were remarkable achievements. Typically, it took more than two years to do the initial designs of complicated chip projects, but both companies were actually manufacturing inside that time window.

Then something unexpected hit. Both Samsung and Infineon Technologies had committed to making the GDDR3 memory for Microsoft. But some of Infineon's chips fell short of the 700 MHz specified by Microsoft. Using such chips could have slowed games down noticeably. Microsoft's engineers decided to start sorting the chips, not using the subpar ones. Because GDDR3 700 MHz chips were just ramping up, there was no way to get more chips. Each system used eight chips. The shortage constrained the supply of Xbox 360s.

Microsoft blamed the resulting shortfall of Xbox 360s on a variety of component shortages. Some users complained of overheating systems. But overall, the company said, the launch was still a great achievement. In its first holiday season, Microsoft sold 1.5 million Xbox 360s, compared to 1.4 million original Xboxes in the holiday season of 2001. But the shortage continued past the holidays.

Leslie Leland, hardware evaluation director, says she felt "terrible" about the shortage and that Microsoft would strive to get a box into the hands of every consumer who wanted one. But Greg Gibson, system designer, says that Microsoft could have worse problems on its hands than a shortage. The IBM and ATI teams had outdone themselves.

The project was by far the most successful Nick Baker had ever worked on. One night, hoisting a beer and looking at a finished console, he said it felt good.

J Allard, the head of the Xbox platform business, praised the chip engineers such as Baker: "They were on the highest wire with the shortest net."

Get more information on Takahashi's book.

This story first appeared in the May issue of Electronic Business magazine.

IBM report shows healthcare has a growing cybersecurity gap



While enterprises are setting records in cybersecurity spending, the cost and severity of breaches continue to soar. IBM’s latest data breach report provides insights into why there’s a growing disconnect between enterprise spending on cybersecurity and record costs for data breaches. 

This year, 2022, is on pace to be a record-breaking year for enterprise breaches globally, with the average cost of a data breach reaching $4.35 million. That’s 12.7% higher than the average cost of a data breach in 2020, which was $3.86 million. It also found a record 83% of enterprises reporting more than one breach and that the average time to identify a breach is 277 days. As a result, enterprises need to look at their cybersecurity tech stacks to see where the gaps are and what can be improved.  

Enhanced security around privileged access credentials and identity management is an excellent first place to start. More enterprises need to define identities as their new security perimeter. IBM’s study found that 19% of all breaches begin with compromised privileged credentials. Breaches caused by compromised credentials lasted an average of 327 days. Privileged access credentials are also bestsellers on the Dark Web, with high demand for access to financial services’ IT infrastructure.  

The study also shows how dependent enterprises remain on implicit trust across their security and broader IT infrastructure tech stacks. The gaps in cloud security, identity and access management (IAM) and privileged access management (PAM) allow expensive breaches to happen. Seventy-nine percent of critical infrastructure organizations didn't deploy a zero-trust architecture, even though zero trust can reduce average breach losses by nearly $1 million. 

To reduce the incidence of breaches, enterprises need to treat implicit trust as the unlocked back door that gives cybercriminals access to their systems, credentials and most valuable confidential data. 

What enterprises can learn from IBM’s data on healthcare breaches 

The report quantifies how wide healthcare's cybersecurity gap has grown. IBM estimates the average cost of a healthcare data breach is now $10.1 million, a record and nearly $1 million over last year's $9.23 million. Healthcare has had the highest average breach cost for twelve consecutive years, increasing 41.6% since 2020. 

The findings suggest that the skyrocketing cost of breaches adds inflationary fuel to the fire, as runaway prices are financially squeezing global consumers and companies. Sixty percent of organizations participating in IBM's study say they raised their product and service prices due to the breach, as supply chain disruptions, the war in Ukraine and tepid demand for products continue. Consumers are already struggling to meet healthcare costs, which will likely increase by 6.5% next year.

The study also found that nearly 30% of breach costs are incurred 12 to 24 months after the breach, translating into permanent price increases for consumers. 

“It is clear that cyberattacks are evolving into market stressors that are triggering chain reactions, [and] we see that these breaches are contributing to those inflationary pressures,” says John Hendley, head of strategy for IBM Security’s X-Force research team.  

Getting quick wins in encryption

For healthcare providers with limited cybersecurity budgets, prioritizing these three areas can reduce the cost of a breach while making progress toward zero-trust initiatives. Getting identity and access management (IAM) right is core to a practical zero-trust framework, one that can quickly adapt to protect both human and machine identities. IBM's study found that of the zero-trust components measured, IAM is the most effective in reducing breach costs. Leading IAM vendors include Akamai, Fortinet, Ericom, Ivanti, Palo Alto Networks and others. Ericom's ZTEdge platform is noteworthy for combining ML-enabled identity and access management, zero-trust network access (ZTNA), microsegmentation and secure web gateway (SWG) with remote browser isolation (RBI) and Web Application Isolation.


IBM Uses Power10 CPU As An I/O Switch

Back in early July, we covered the launch of IBM’s entry and midrange Power10 systems and mused about how Big Blue could use these systems to reinvigorate an HPC business rather than just satisfy the needs of the enterprise customers who run transaction processing systems and are looking to add AI inference to their applications through matrix math units on the Power10 chip.

We are still gathering up information on how the midrange Power E1050 stacks up on SAP HANA and other workloads, but in poking around the architecture of the entry single-socket Power S1014 and the dual-socket S1022 and S1024 machines, we found something interesting that we thought we should share with you. We didn’t see it at first, and you will understand immediately why.

Here is the block diagram we got our hands on from IBM’s presentations to its resellers for the Power S1014 machine:

You can clearly see an I/O chip that adds some extra PCI-Express traffic lanes to the Power10 processor complex, right?

Same here with the block diagram of the Power S1022 (2U chassis) machines, which use the same system boards:

There are a pair of I/O switches in there, as you can see, which is not a big deal. Intel has co-packaged PCH chipsets in the same package as the Xeon CPUs with the Xeon D line for years, starting with the “Broadwell-DE” Xeon D processor in May 2015. IBM has used PCI-Express switches in the past to stretch the I/O inside a single machine beyond what comes off natively from the CPUs, such as with the Power IC922 inference engine Big Blue launched in January 2020, which you can see here:

The two PEX blocks in the center are PCI-Express switches, either from Broadcom or MicroChip if we had to guess.

But, that is not what is happening with the Power10 entry machines. Rather, IBM has created a single dual-chip module with two whole Power10 chips inside of it, and in the case of the low-end machines where AIX and IBM i customers don’t need a lot of compute but they do need a lot of I/O, the second Power10 chip has all of its cores turned off and it is acting like an I/O switch for the first Power10 chip that does have cores turned on.

You can see this clearly in this more detailed block diagram of the Power S1014 machine:

And in a more detailed block diagram of the two-socket Power S1022 motherboard:

This is the first time we can recall seeing something like this, but obviously any processor architecture could support the same functions.

In the two-socket Power S1024 and Power L1024 machines

What we find particularly interesting is the idea that those Power10 “switch” chips – the ones with no cores activated – could in theory also have eight OpenCAPI Memory Interface (OMI) ports turned on, doubling the memory capacity of the systems using skinnier and slightly faster 128 GB memory sticks, which run at 3.2 GHz, rather than having to move to denser 256 GB memory sticks that run at a slower 2.67 GHz when they are available next year. And in fact, you could take this all one step further and turn off all of the Power10 cores and turn on all of the 16 OMI memory slots across each DCM and create a fat 8 TB or 16 TB memory server that through the Power10 memory area network – what IBM calls memory inception – could serve as the main memory for a bunch of Power10 nodes with no memory of their own.

We wonder if IBM will do such a thing, and also ponder what such a cluster of memory-less server nodes talking to a centralized memory node might do with SAP HANA, Spark, data analytics, and other memory intensive work like genomics. The Power10 chip has a 2 PB upper memory limit, and that is the only cap on where this might go.
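
To put rough numbers on that speculation, here is a back-of-the-envelope sketch in Python. The slot counts and memory stick sizes come from the configurations mused about above (16 OMI slots per dual-chip module, populated with 128 GB or 256 GB sticks); the DCM counts per system are illustrative assumptions, not IBM specifications.

```python
# Back-of-the-envelope memory capacity for a hypothetical Power10 "memory server".
# Slot counts and DIMM sizes follow the speculation in the text; DCM counts are assumptions.

def dcm_capacity_gb(omi_slots_per_dcm: int, dimm_gb: int) -> int:
    """Raw capacity hanging off one dual-chip module (DCM)."""
    return omi_slots_per_dcm * dimm_gb

def system_capacity_tb(dcms: int, omi_slots_per_dcm: int = 16, dimm_gb: int = 256) -> float:
    """Total capacity in TB for a system built from `dcms` dual-chip modules."""
    return dcms * dcm_capacity_gb(omi_slots_per_dcm, dimm_gb) / 1024

if __name__ == "__main__":
    print(system_capacity_tb(dcms=1))                # 4.0 TB: one DCM, 16 x 256 GB
    print(system_capacity_tb(dcms=2))                # 8.0 TB: a two-socket box
    print(system_capacity_tb(dcms=4))                # 16.0 TB: the "fat memory server" case
    print(system_capacity_tb(dcms=2, dimm_gb=128))   # 4.0 TB: same box with 128 GB sticks
```

Even the 16 TB case sits far below the 2 PB architectural ceiling mentioned above, which is what makes the memory inception idea interesting.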

There is another neat thing IBM could do here, too. Imagine if the Power10 compute chip in a DCM had no I/O at all but just lots of memory attached to it and the secondary Power10 chip had only a few cores and all of the I/O of the complex. That would, in effect, make the second Power10 chip a DPU for the first one.

The engineers at IBM are clearly thinking outside of the box; it will be interesting to see if the product managers and marketeers do so.

IBM Annual Cost of Data Breach Report 2022: Record Costs Usually Passed On to Consumers, “Long Breach” Expenses Make Up Half of Total Damage

IBM’s annual Cost of Data Breach Report for 2022 is packed with revelations, and as usual none of them are good news. Headlining the report is the record-setting cost of data breaches, with the global average now at $4.35 million. The report also reveals that much of that expense comes with the data breach version of “long Covid,” expenses that are realized more than a year after the attack.

Most organizations (60%) are passing these added costs on to consumers in the form of higher prices. And while 83% of organizations now report experiencing at least one data breach, only a small minority are adopting zero trust strategies.

Security AI and automation greatly reduce expected damage

The IBM report draws on input from 550 global organizations surveyed about the period between March 2021 and March 2022, in partnership with the Ponemon Institute.

Though the average cost of a data breach is up, it is only by about 2.6%; the average in 2021 was $4.24 million. This represents a total climb of 13% since 2020, however, reflecting the general spike in cyber crime seen during the pandemic years.

Organizations are also increasingly not opting to absorb the cost of data breaches, with the majority (60%) compensating by raising consumer prices separate from any other recent increases due to inflation or supply chain issues. The report indicates that this may be an underreported upward influence on prices of consumer goods, as 83% of organizations now say that they have been breached at least once.

Brad Hong, Customer Success Manager for Horizon3.ai, sees a potential consumer backlash on the horizon once public awareness of this practice grows: “It’s already a breach of confidence to lose the confidential data of customers, and sure there’s bound to be an organization across those surveyed who genuinely did put in the effort to protect against and curb attacks, but for those who did nothing, those who, instead of creating a disaster recovery plan, just bought cyber insurance to cover the org’s operational losses, and those who simply didn’t care enough to heed the warnings, it’s the coup de grâce to then pass the cost of breaches to the same customers who are now the victims of a data breach. I’d be curious to know what percent of the 60% of organizations who increased the price of their products and services are using the extra revenue for a war chest or to actually reinforce their security—realistically, it’s most likely just being used to fill a gap in lost revenue for shareholders’ sake post-breach. Without government regulations outlining restrictions on passing cost of breach to consumer, at the least, not without the honest & measurable efforts of a corporation as their custodian, what accountability do we all have against that one executive who didn’t want to change his/her password?”

Breach costs also have an increasingly long tail, as nearly half now come over a year after the date of the attack. The largest of these are generally fines that are levied after an investigation, and decisions or settlements in class action lawsuits. While the popular new “double extortion” approach of ransomware attacks can drive long-term costs in this way, the study finds that companies paying ransom demands to settle the problem quickly aren’t necessarily seeing a large amount of overall savings: their average breach cost drops by just $610,000.

Sanjay Raja, VP of Product with Gurucul, expands on how knock-on data breach damage can continue for years: “The follow-up attack effect, as described, is a significant problem as the playbooks and solutions provided to security operations teams are overly broad and lack the necessary context and response actions for proper remediation. For example, shutting down a user or application or adding a firewall block rule or quarantining a network segment to negate an attack is not a sustainable remediation step to protect an organization on an ongoing basis. It starts with a proper threat detection, investigation and response solution. Current SIEMs and XDR solutions lack the variety of data, telemetry and combined analytics to not only identify an attack campaign and even detect variants on previously successful attacks, but also provide the necessary context, accuracy and validation of the attack to build both a precise and complete response that can be trusted. This is an even greater challenge when current solutions cannot handle complex hybrid multi-cloud architectures leading to significant blind spots and false positives at the very start of the security analyst journey.”

Rising cost of data breach not necessarily prompting dramatic security action

In spite of over four out of five organizations now having experienced some sort of data breach, only slightly over 20% of critical infrastructure companies have moved to zero trust strategies to secure their networks. Cloud security is lagging as well, with a little under half (43%) of all respondents saying that their security practices in this area are either “early stage” or do not yet exist.

Those that have onboarded security automation and AI elements are the only group seeing massive savings: their average cost of data breach is $3.05 million lower. This particular study does not track average ransom demands, but refers to Sophos research that puts the most recent number at $812,000 globally.

The study also notes serious problems with incident response plans, especially troubling in an environment in which the average ransomware attack is now carried out in four days or less and the “time to ransom” has dropped to a matter of hours in some cases. 37% of respondents say that they do not test their incident response plans regularly. 62% say that they are understaffed to meet their cybersecurity needs, and these organizations tend to suffer over half a million more dollars in damages when they are breached.

Of course, the cost of data breaches is not distributed evenly by geography or industry type. Some organizations are taking much bigger hits than others, reflecting trends established in prior reports. The health care industry is now absorbing a little over $10 million in damage per breach, with the average cost of a data breach rising by $1 million from 2021. And companies in the United States face greater data breach costs than their counterparts around the world, at over $8 million per incident.

Shawn Surber, VP of Solutions Architecture and Strategy with Tanium, provides some insight into the unique struggles that the health care industry faces in implementing effective cybersecurity: “Healthcare continues to suffer the greatest cost of breaches but has among the lowest spend on cybersecurity of any industry, despite being deemed ‘critical infrastructure.’ The increased vulnerability of healthcare organizations to cyber threats can be traced to outdated IT systems, the lack of robust security controls, and insufficient IT staff, while valuable medical and health data— and the need to pay ransoms quickly to maintain access to that data— make healthcare targets popular and relatively easy to breach. Unlike other industries that can migrate data and sunset old systems, limited IT and security budgets at healthcare orgs make migration difficult and potentially expensive, particularly when an older system provides a small but unique function or houses data necessary for compliance or research, but still doesn’t make the cut to transition to a newer system. Hackers know these weaknesses and exploit them. Additionally, healthcare orgs haven’t sufficiently updated their security strategies and the tools that manufacturers, IT software vendors, and the FDA have made haven’t been robust enough to thwart the more sophisticated techniques of threat actors.”

Familiar incident types also lead the list of the causes of data breaches: compromised credentials (19%), followed by phishing (16%). Breaches initiated by these methods also tended to be a little more costly, at an average of $4.91 million per incident.


Cutting the cost of data breach

Though the numbers are never as neat and clean as averages would indicate, it would appear that the cost of data breaches is cut dramatically for companies that implement solid automated “deep learning” cybersecurity tools, zero trust systems and regularly tested incident response plans. Mature cloud security programs are also a substantial cost saver.

IBM Watson Helps Create Sculpture Inspired by Gaudi

Watson is already very good at recognizing images. Drop in an image of a building and it will tell you the type of building and even what materials it's likely made of. But New York City-based design studio SOFTlab wanted to know if Watson could do more than just recognize art. Could Watson help create it?

It turns out it can. IBM is calling it the first “thinking sculpture” – an art piece that helped pick its own materials, shapes, and colors.

Antoni Gaudí was a 19th century Spanish architect whose avant-garde work has become synonymous with the look and feel of Barcelona. Inspired by naturally-occurring forms, Gaudí was known for his unique treatment of materials, including ceramics, that has given his pieces, including his most well-known work – the Sagrada Família – their distinctive look.

As MWC 2017 is being held in Barcelona this week, SOFTlab decided creating a sculpture inspired by Gaudí's work would be the perfect task to set Watson to for the event. The team at SOFTlab fed Watson a plethora of academic and artistic work around Gaudí and the city of Barcelona, including images, articles, literature, and even music – teaching it to become an expert on Gaudí and his design process. From there Watson was able to identify themes and patterns in Gaudí's work, including his use of materials, and was then able to suggest designs based on its knowledge.

“Watson was able to recognize structures, elements, and features in [Gaudí's] art and his work,” Jonas Nwuke, Manager, IBM Watson, told Design News. “It gets to the essence of an image and when it looks at another it tries to make sense of that image through what it's been taught.”

Essentially, by showing Watson labeled examples of two categories (i.e., images of Gaudí structures and of non-Gaudí structures) via its Visual Recognition API, the system learns to distinguish between them. The more examples it has, the better it gets. It can then take in new images and figure out which category they belong in. The other half of the work was performed by Watson's AlchemyLanguage API, which analyzes text and language for keywords, taxonomy, and concepts that it is taught. Again, the more text about Gaudí the system is exposed to, the better it gets at recognizing words, phrases, and even emotions associated with his work.
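
For readers curious what that two-category training step looks like in code, here is a rough sketch against the Watson Visual Recognition v3 REST API as it was publicly documented around 2017 (the service has since been retired). The endpoint, version date, field names, and the gaudi.zip / not_gaudi.zip example archives are assumptions for illustration only, not SOFTlab's actual implementation.

```python
# Rough sketch: training a two-category Watson Visual Recognition classifier
# against the v3 REST API (circa 2017, now retired). Endpoint, version date,
# and file names are illustrative assumptions, not SOFTlab's actual code.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential
URL = "https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers"

with open("gaudi.zip", "rb") as positives, open("not_gaudi.zip", "rb") as negatives:
    response = requests.post(
        URL,
        params={"api_key": API_KEY, "version": "2016-05-20"},
        files={
            # One zip of example images per positive class, plus a zip of counter-examples.
            "gaudi_positive_examples": positives,
            "negative_examples": negatives,
        },
        data={"name": "gaudi-vs-not-gaudi"},
    )

classifier = response.json()
print(classifier.get("classifier_id"), classifier.get("status"))
```

Once the classifier reports a ready status, new images can be scored against it the same way, by posting them to the service's classify endpoint.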

While certain patterns in Gaudí's structures, such as waves and arches, would be clear to any architect well-versed in his work, Watson was also able to draw on its existing database and find less obvious connections in forms found in things like crabs, spiders, shells, and even candies. It also helped the designers with their material selection based on their criteria, helping them arrive at the color scheme (ultramarine blue, jade green, yellow and orange) as well as the iridescent dichroic film material used throughout the sculpture.

As an added layer, the sculpture is also being fed social media data from MWC attendees via Twitter and it is able to move and reshape itself based on the emotions it reads from the tweets by utilizing Watson's Tone Analyzer API.
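
The article does not describe SOFTlab's control code, but the general pattern it implies, scoring incoming text for emotion and then driving an actuation parameter from the dominant tone, can be sketched as follows. The tone names and the amplitude mapping are purely hypothetical placeholders, not the sculpture's real logic.

```python
# Hypothetical sketch: mapping tone-analysis scores from tweets to a movement parameter.
# Tone names and amplitude values are invented for illustration only.

# Example output shape from a tone-analysis service: emotion name -> score in [0, 1].
tone_scores = {"joy": 0.72, "anger": 0.10, "sadness": 0.05, "fear": 0.13}

# Map each emotion to an actuation amplitude (0 = still, 1 = full range of motion).
AMPLITUDE_BY_TONE = {"joy": 1.0, "anger": 0.8, "fear": 0.5, "sadness": 0.2}

def movement_amplitude(scores: dict) -> float:
    """Pick the dominant emotion and scale its amplitude by how strongly it registered."""
    dominant, score = max(scores.items(), key=lambda kv: kv[1])
    return AMPLITUDE_BY_TONE.get(dominant, 0.3) * score

print(movement_amplitude(tone_scores))  # joy dominates: 1.0 * 0.72 = 0.72
```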

“As we've opened up the Watson platform for developers and makers what we found was there were some creative pursuits that presented themselves,” Nwuke said. “Our engineering team got involved in creating food recipes, music, fashion, even movie trailers ... 2017 has become the year that we are going to see what Watson can do in the architectural field.”

Nwuke said a lot of what IBM looks at when bringing Watson into the real world is very constrained. This collaboration with SOFTlab presented an opportunity to see how well Watson could be applied to a purely creative endeavour. And though this particular instance was centered around Gaudí, Nwuke added that Watson could be trained to become an expert on any artist and could even be trained on multiple artists in order to mix and match influences.

The same concept could be extended into areas of design including product engineering. Perhaps a design engineer wants to create a product inspired by a certain artist, form, or even other product, or maybe they're looking to find patterns and associations in existing product designs. Watson could be taught to become an expert on a particular product and design and assist engineers in the design process, including material selection.


Nwuke pointed to another project, OmniEarth, as an example of how robust and flexible Watson's visual recognition is. OmniEarth is leveraging Watson's services to analyze satellite images for water conservation, by being able to classify irrigable, irrigated and non-irrigated areas, agricultural zones, lawns, and even swimming pools.

But the goal is not to have Watson design something, according to Nwuke. It's part of an initiative IBM is calling “augmented intelligence.” “The endgame is not to replace [architects], it's to provide a way to augment them,” Nwuke said.

Chris Wiltz is the Managing Editor of Design News.  

IBM extends Power10 server lineup for enterprise use cases



IBM is looking to grow its enterprise server business with the expansion of its Power10 portfolio announced today.

IBM Power is a RISC (reduced instruction set computer) based chip architecture that is competitive with other chip architectures, including x86 from Intel and AMD. IBM's Power hardware has been used for decades to run IBM's AIX Unix operating system, as well as the IBM i operating system that was once known as the AS/400. In more recent years, Power has increasingly been used for Linux, and specifically in support of Red Hat and its OpenShift Kubernetes platform that enables organizations to run containers and microservices.

The IBM Power10 processor was announced in August 2020, with the first server platform, the E1080 server, coming a year later in September 2021. Now IBM is expanding its Power10 lineup with four new systems, including the Power S1014, S1024, S1022 and E1050, which are being positioned by IBM to help solve enterprise use cases, including the growing need for machine learning (ML) and artificial intelligence (AI).

What runs on IBM Power servers?

Usage of IBM’s Power servers could well be shifting into territory that Intel today still dominates.

Steve Sibley, vp, IBM Power product management, told VentureBeat that approximately 60% of Power workloads are currently running AIX Unix. The IBM i operating system is on approximately 20% of workloads. Linux makes up the remaining 20% and is on a growth trajectory.

IBM owns Red Hat, which has its namesake Linux operating system supported on Power, alongside the OpenShift platform. Sibley noted that IBM has optimized its new Power10 system for Red Hat OpenShift.

“We’ve been able to demonstrate that you can deploy OpenShift on Power at less than half the cost of an Intel stack with OpenShift because of IBM’s container density and throughput that we have within the system,” Sibley said.

A look inside IBM’s four new Power servers

Across the new servers, the ability to access more memory at greater speed than previous generations of Power servers is a key feature. The improved memory is enabled by support for the Open Memory Interface (OMI) specification, which IBM helped to develop as part of the OpenCAPI Consortium.

“We have Open Memory Interface technology that provides increased bandwidth but also reliability for memory,” Sibley said. “Memory is one of the common areas of failure in a system, particularly when you have lots of it.”

The new servers announced by IBM all use technology from the open-source OpenBMC project that IBM helps to lead. OpenBMC provides secure code for the server's baseboard management controller, in an approach optimized for scalability and performance.

E1050

Among the new servers announced today by IBM is the E1050, which is a 4RU (4 rack unit) sized server, with 4 CPU sockets, that can scale up to 16TB of memory, helping to serve large data- and memory-intensive workloads.

S1014 and S1024

The S1014 and the S1024 are also both 4RU systems, with the S1014 providing a single CPU socket and the S1024 integrating a dual-socket design. The S1014 can scale up to 2TB of memory, while the S1024 supports up to 8TB.

S1022

Rounding out the new servers is the S1022, a 2RU server that IBM is positioning as an ideal platform for OpenShift container-based workloads.

Bringing more Power to AI and ML

AI and ML workloads are a particularly good use case for all the Power10 systems, thanks to optimizations that IBM has built into the chip architecture.

Sibley explained that all Power10 chips benefit from IBM's Matrix Math Acceleration (MMA) capability. The enterprise use cases that Power10-based servers can help to support include organizations that are looking to build out risk analytics, fraud detection and supply chain forecasting AI models, among others.

IBM’s Power10 systems support and have been optimized for multiple popular open-source machine learning frameworks including PyTorch and TensorFlow.

“The way we see AI emerging is that a vast majority of AI in the future will be done on the CPU from an inference standpoint,” Sibley said.
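
As a concrete illustration of the CPU-side inference Sibley describes, here is a minimal PyTorch sketch. It is generic CPU code that would run unchanged on Power10 or x86; the toy fraud-scoring model and its dimensions are invented for illustration, and any MMA-specific acceleration would come from the optimized framework builds rather than from anything visible in this code.

```python
# Minimal CPU inference sketch with PyTorch: generic code, no Power10-specific calls.
# The MMA acceleration discussed above is applied by optimized framework builds, not here.
import torch
import torch.nn as nn

# A toy fraud-scoring model: 32 transaction features in, one fraud probability out.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),
)
model.eval()  # inference mode

batch = torch.randn(128, 32)  # a batch of 128 synthetic transactions

with torch.no_grad():         # no gradient tracking needed for inference
    scores = model(batch)

print(scores.shape)           # torch.Size([128, 1])
```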


CIOReview Names Cobalt Iron Among 10 Most Promising IBM Solution Providers 2022

LAWRENCE, Kan.--(BUSINESS WIRE)--Jul 28, 2022--

Cobalt Iron Inc., a leading provider of SaaS-based enterprise data protection, today announced that the company has been deemed one of the 10 Most Promising IBM Solution Providers 2022 by CIOReview Magazine. The annual list of companies is selected by a panel of experts and members of CIOReview Magazine’s editorial board to recognize and promote innovation and entrepreneurship. A technology partner for IBM, Cobalt Iron earned the distinction based on its Compass® enterprise SaaS backup platform for monitoring, managing, provisioning, and securing the entire enterprise backup landscape.


According to CIOReview, “Cobalt Iron has built a patented cyber-resilience technology in a SaaS model to alleviate the complexities of managing large, multivendor setups, providing an effectual humanless backup experience. This SaaS-based data protection platform, called Compass, leverages strong IBM technologies. For example, IBM Spectrum Protect is embedded into the platform from a data backup and recovery perspective. ... By combining IBM’s technologies and the intellectual property built by Cobalt Iron, the company delivers a secure, modernized approach to data protection, providing a ‘true’ software as a service.”

Through proprietary technology, the Compass data protection platform integrates with, automates, and optimizes best-of-breed technologies, including IBM Spectrum Protect, IBM FlashSystem, IBM Red Hat Linux, IBM Cloud, and IBM Cloud Object Storage. Compass enhances and extends IBM technologies by automating more than 80% of backup infrastructure operations, optimizing the backup landscape through analytics, and securing backup data, making it a valuable addition to IBM’s data protection offerings.

CIOReview also praised Compass for its simple and intuitive interface to display a consolidated view of data backups across an entire organization without logging in to every backup product instance to extract data. The machine learning-enabled platform also automates backup processes and infrastructure, and it uses open APIs to connect with ticket management systems to generate tickets automatically about any backups that need immediate attention.
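
To illustrate the integration pattern being described, namely polling backup status through an API and opening a ticket for anything that needs attention, here is a generic sketch. Both endpoints and all field names are hypothetical placeholders rather than Cobalt Iron's or any ticketing vendor's actual API.

```python
# Generic sketch of the "failed backup -> automatic ticket" pattern described above.
# Both URLs and all field names are hypothetical placeholders, not a real product API.
import requests

BACKUP_STATUS_URL = "https://backup.example.com/api/jobs?status=failed"   # hypothetical
TICKET_URL = "https://tickets.example.com/api/incidents"                  # hypothetical

failed_jobs = requests.get(BACKUP_STATUS_URL, timeout=30).json()

for job in failed_jobs:
    ticket = {
        "summary": f"Backup job {job['id']} failed on {job['client']}",
        "severity": "high" if job.get("consecutive_failures", 0) > 1 else "medium",
        "details": job.get("error_message", "no error message reported"),
    }
    requests.post(TICKET_URL, json=ticket, timeout=30)  # one ticket per failed job
```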

To ensure the security of data backups, Cobalt Iron has developed an architecture and security feature set called Cyber Shield for 24/7 threat protection, detection, and analysis that improves ransomware responsiveness. Compass is also being enhanced to use several patented techniques that are specific to analytics and ransomware. For example, analytics-based cloud brokering of data protection operations helps enterprises make secure, efficient, and cost-effective use of their cloud infrastructures. Another patented technique — dynamic IT infrastructure optimization in response to cyberthreats — offers unique ransomware analytics and automated optimization that will enable Compass to reconfigure IT infrastructure automatically when it detects cyberthreats, such as a ransomware attack, and dynamically adjust access to backup infrastructure and data to reduce exposure.

Compass is part of IBM’s product portfolio through the IBM Passport Advantage program. Through Passport Advantage, IBM sellers, partners, and distributors around the world can sell Compass under IBM part numbers to any organizations, particularly complex enterprises, that greatly benefit from the automated data protection and anti-ransomware solutions Compass delivers.

CIOReview’s report concludes, “With such innovations, all eyes will be on Cobalt Iron for further advancements in humanless, secure data backup solutions. Cobalt Iron currently focuses on IP protection and continuous R&D to bring about additional cybersecurity-related innovations, promising a more secure future for an enterprise’s data.”

About Cobalt Iron

Cobalt Iron was founded in 2013 to bring about fundamental changes in the world’s approach to secure data protection, and today the company’s Compass® is the world’s leading SaaS-based enterprise data protection system. Through analytics and automation, Compass enables enterprises to transform and optimize legacy backup solutions into a simple cloud-based architecture with built-in cybersecurity. Processing more than 8 million jobs a month for customers in 44 countries, Compass delivers modern data protection for enterprise customers around the world. www.cobaltiron.com

IBM Expands Power10 Server Family To Help Clients Respond Faster To Rapidly Changing Business Demands


New Power10 scale-out and midrange models extend IBM's capabilities to deliver flexible and secured infrastructure for hybrid cloud environments

ARMONK, N.Y., July 12, 2022 /PRNewswire/ -- IBM (NYSE: IBM) today announced a significant expansion of its Power10 server line with the introduction of mid-range and scale-out systems to modernize, protect and automate business applications and IT operations. The new Power10 servers combine performance, scalability, and flexibility with new pay-as-you-go consumption offerings for clients looking to deploy new services quickly across multiple environments.


 Digital transformation is driving organizations to modernize both their applications and IT infrastructures. IBM Power systems are purpose-built for today's demanding and dynamic business environments, and these new systems are optimized to run essential workloads such as databases and core business applications, as well as maximize the efficiency of containerized applications. An ecosystem of solutions with Red Hat OpenShift also enables IBM to collaborate with clients, connecting critical workloads to new, cloud-native services designed to maximize the value of their existing infrastructure investments.


The new servers join the popular Power10 E1080 server introduced in September 2021 to deliver a secured, resilient hybrid cloud experience that can be managed with other x86 and multi-cloud management software across clients' IT infrastructure. This expansion of the IBM Power10 family with the new midrange and scale-out servers brings high-end server capabilities throughout the product line. Not only do the new systems support critical security features such as transparent memory encryption and advanced processor/system isolation, but they also leverage the OpenBMC project from the Linux Foundation to provide high levels of security for the new scale-out servers.

Highlights of the announcements include:

  • New systems: The expanded IBM Power10 portfolio, built around the next-generation IBM Power10 processor with 2x more cores and more than 2x memory bandwidth than previous Power generations, now includes the Power10 Midrange E1050, delivering record-setting 4-socket compute[1], Java[2], and ERP[3] performance capabilities. New scale-out servers include the entry-level Power S1014, as well as S1022 and S1024 options, bringing enterprise capabilities to SMBs and remote-office/branch office environments, such as Capacity Upgrade on Demand (CuOD).
  • Cloud on premises with new flexible consumption choices: IBM has recently announced new flexible consumption offerings with pay-as-you-go options and by-the-minute metering for IBM Power Private Cloud, bringing more opportunities to help lower the cost of running OpenShift solutions on Power when compared against alternative platforms. These new consumption models build on options already available with IBM Power Virtual Server to enable greater flexibility in clients' hybrid journeys. Additionally, the highly anticipated IBM i subscription delivers a comprehensive platform solution with the hardware, software and support/services included in the subscription service.
  • Business transformation with SAP®: IBM continues its innovations for SAP solutions. The new midrange E1050 delivers scale (up to 16 TB) and performance for a 4-socket system for clients who run BREAKTHROUGH with IBM for RISE with SAP. In addition, an expansion of the premium provider option is now available to provide more flexibility and computing power with an additional choice to run workloads on IBM Power on Red Hat Enterprise Linux on IBM Cloud.

'Today's highly dynamic environment has created volatility, from materials to people and skills, all of which impact short-term operations and long-term sustainability of the business,' said Steve Sibley, Vice President, IBM Power Product Management. 'The right IT investments are critical to business and operational resilience. Our new Power10 models offer clients a variety of flexible hybrid cloud choices with the agility and automation to best fit their needs, without sacrificing performance, security or resilience.'

The expansion of the IBM Power10 family has been engineered to establish one of the industry's broadest and most flexible ranges of servers for data-intensive workloads such as SAP S/4HANA – from on-premises workloads to hybrid cloud. IBM now offers more ways to implement dynamic capacity – with metering across all operating environments, including IBM i, AIX, Linux and OpenShift, supporting modern and traditional applications on the same platforms – as well as integrated infrastructure automation software for improved visibility and management.

The new systems with IBM Power Virtual Server also help clients operate a secured hybrid cloud experience that delivers high performance and architectural consistency across their IT infrastructure. The systems are designed to protect sensitive data from core to cloud, and to enable virtual machines and containerized workloads to run simultaneously on the same systems. Critical business workloads that have traditionally needed to reside on-premises can now be moved into the cloud as needs demand. This flexibility can help clients mitigate the risk and time associated with rewriting applications for a different platform.

'As organizations around the world continue to adapt to unpredictable changes in consumer behaviors and needs, they need a platform that can deliver their applications and insights securely where and when they need them,' said Peter Rutten, IDC Worldwide Infrastructure Research Vice President. 'IBM Power continues its laser focus on helping clients respond faster to dynamically changing environments and business demands, while protecting information security and distilling new insights from data, all with high reliability and availability.'

Ecosystem of ISVs and Channel Partners Enhance Capabilities for IBM Power10

Critical in the launch of the expanded Power10 family is a robust ecosystem of ISVs, Business Partners, and lifecycle services. Ecosystem partners such as SVA and Solutions II provide examples of how the IBM Ecosystem collaborates with clients to build hybrid environments, connecting essential workloads to the cloud to maximize the value of their existing infrastructure investments:

'SVA customers have appreciated the enormous flexibility of IBM Power systems through Capacity Upgrade On-Demand in the high-end systems for many years,' said Udo Sachs, Head of Competence Center Power Systems at SVA . 'The flexible consumption models using prepaid capacity credits have been well-received by SVA customers, and now the monthly pay-as-you-go option for the scale-out models makes the platform even more attractive. When it comes to automation, IBM helps us to roll out complex workloads such as entire SAP landscapes at the push of a button by supporting Ansible on all OS derivatives, including AIX, IBM i and Linux, as well as ready-to-use modules for deploying the complete Power infrastructure.'

'Solutions II provides technology design, deployment, and managed services to hospitality organizations that leverage mission critical IT infrastructure to execute their mission, often requiring 24/7 operation,' said Dan Goggiano, Director of Gaming, Solutions II. 'System availability is essential to maintaining our clients' revenue streams, and in our experience, they rely on the stability and resilience of IBM Power systems to help solidify their uptime. Our clients are excited that the expansion of the Power10 family further extends these capabilities and bolsters their ability to run applications securely, rapidly, and efficiently.' 

For more information on IBM Power and the new servers and consumption models announced today, visit:

  • Read today's blog by IBM Power GM Ken King, Announcing IBM Power10 Scale-Out and Midrange Servers: The Right Compute Architecture for Today's Unpredictable and Dynamic Business Climate.
  • Sign up to attend the July 14 webinar, Creating business agility with IBM Power, to learn more about the latest from IBM Power and hear from clients and IBM experts about how Power helps create digital advantage with hybrid cloud infrastructure to modernize, automate and secure businesses with class-leading reliability.
  • Read more about the expanded IBM Power10 product family.
  • IBM Power Expert Care offers a way of attaching services and support through tiers at the time of product purchase. This offering provides the client an optimum level of support over multiple years for mission-critical requirements of the IT infrastructure. Read more about IBM Power Expert Care.
About IBM

IBM is a leading global hybrid cloud and AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to affect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit  .


[1] Comparison based on best performing 4-socket systems (IBM Power E1050 3.15-3.9 GHz, 96 core, and Inspur NF8480M6 2.90 GHz, Intel Xeon Platinum 8380H) using SPEC CPU 2017 results published as of 22 June 2022.

[2] Comparison based on best performing 4-socket systems (IBM Power E1050 3.15-3.9 GHz, 96 core, and Inspur NF8480M6 2.90 GHz, Intel Xeon Platinum 8380H) using SPEC CPU 2017 results published as of 22 June 2022.

[3] Comparison based on best performing 4-socket systems: (1) IBM Power E1050; two-tier SAP SD standard application benchmark running SAP ERP 6.0 EHP5; Power10 2.95 GHz processor, 4,096 GB memory, 4p/96c/768t, 134,016 SD benchmark users, 736,420 SAPS, AIX 7.3, DB2 11.5, Certification #2022018; and (2) Dell EMC PowerEdge 840; two-tier SAP SD standard application benchmark running SAP ERP 6.0 EHP5; Intel Xeon Platinum 8280 2.7 GHz, 4p/112c/224t, 69,500 SD benchmark users (380,280 SAPS), SUSE Linux Enterprise Server 12 and SAP ASE 16, Certification #2019045. All results can be found at sap.com/benchmark. Valid as of 7 July 2022.
