Are you searching for a 000-M71 dump that works well in the test center?

killexams.com invites you to try its free 000-M71 test questions, which are taken from the full version of the 000-M71 test. Our 000-M71 dump contains the complete question bank. Killexams.com offers three months of free updates to the 000-M71 IBM Information Management Content Management OnDemand Technical Mastery Test v1 question bank. Our certified team is available around the clock at the back end and refreshes the dumps as and when required.

Exam Code: 000-M71 Practice exam 2022 by Killexams.com team
IBM Information Management Content Management OnDemand Technical Mastery Test v1
IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
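As a rough illustration of that pattern, the sketch below models a hub that holds the desired application state and reconciles each spoke against it. This is a minimal, hypothetical Python sketch, not IBM's implementation; the spoke names and application are made up, and a real platform would drive OpenShift or RHACM rollouts rather than update a dictionary.

```python
# Minimal hub-and-spoke sketch: the hub records desired state and reconciles
# each spoke against it. All names and versions are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Spoke:
    name: str                                      # e.g. "factory-floor-7" (hypothetical)
    deployed: dict = field(default_factory=dict)   # app -> version currently running

@dataclass
class Hub:
    desired: dict                                  # app -> version the control plane wants everywhere
    spokes: list

    def reconcile(self):
        """Bring every spoke up to the hub's desired state."""
        for spoke in self.spokes:
            for app, version in self.desired.items():
                if spoke.deployed.get(app) != version:
                    # A real platform would trigger an OpenShift/RHACM rollout here;
                    # this sketch just records the change.
                    spoke.deployed[app] = version
                    print(f"{spoke.name}: deployed {app} v{version}")

hub = Hub(desired={"defect-detector": "1.2"},
          spokes=[Spoke("factory-floor-7"), Spoke("retail-branch-12")])
hub.reconcile()
```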

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed from a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, including data from industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of the cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
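A back-of-the-envelope calculation makes the point concrete. All numbers below are illustrative assumptions rather than measurements, but they show how transfer time alone can dwarf any speed advantage of cloud hardware.

```python
# Sketch of the latency argument above (all numbers are assumptions, not measurements).
payload_mb = 5.0                 # image batch from a camera
uplink_mbps = 50.0               # WAN bandwidth to the cloud
wan_round_trip_ms = 60.0         # network round trip to the cloud region
cloud_inference_ms = 20.0        # inference on a large cloud GPU
edge_inference_ms = 45.0         # inference on a smaller edge accelerator

transfer_ms = (payload_mb * 8 / uplink_mbps) * 1000
cloud_total_ms = transfer_ms + wan_round_trip_ms + cloud_inference_ms
edge_total_ms = edge_inference_ms

print(f"cloud path: {cloud_total_ms:.0f} ms, edge path: {edge_total_ms:.0f} ms")
# Even with slower edge hardware, skipping the network transfer wins,
# and the uplink capacity is freed for other traffic.
```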

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud are then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller's first example centered on the quick service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions since the 1700s; the current, in-progress fourth revolution, Industry 4.0, promotes digital transformation.

Manufacturing is the fastest growing and the largest of IBM's four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling redundant data. Using ML-based automation for data summarization accelerates the process and produces better model performance (see the clustering sketch after this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
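The data-summarization bullet can be illustrated with a small sketch: cluster unlabeled samples by their embeddings and send only one representative per cluster to an annotator. The embeddings, cluster count, and labeling budget below are assumptions for illustration; this is not IBM's pipeline.

```python
# Hedged sketch of data summarization: cluster unlabeled samples and annotate
# only one representative per cluster instead of thousands of near-duplicates.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 128))     # stand-in for image embeddings

k = 50                                        # labeling budget (assumption)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

# pick the sample closest to each cluster center as the one to annotate
to_label = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    center = km.cluster_centers_[c]
    closest = members[np.argmin(np.linalg.norm(embeddings[members] - center, axis=1))]
    to_label.append(int(closest))

print(f"{len(to_label)} representative samples selected from {len(embeddings)}")
```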

Maximo Application Suite

IBM's Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
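A minimal version of such a drift check compares the distribution of a feature seen in production against the training distribution, for example with a two-sample Kolmogorov-Smirnov test. The data and threshold below are illustrative assumptions, not IBM's monitoring logic.

```python
# Minimal drift-check sketch: compare production data against the training
# distribution with a two-sample KS test. Data and threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)
production_feature = rng.normal(loc=0.4, scale=1.2, size=2_000)   # shifted: drift

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"drift detected (KS={statistic:.3f}) -> flag model for retraining")
else:
    print("no significant drift")
```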

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
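The core loop of federated learning is simple to sketch: each spoke trains on its own data, only the model weights travel to the hub, and the hub averages them into a new global model. The toy linear model below is an assumption made to keep the sketch self-contained; production systems use dedicated frameworks rather than hand-rolled NumPy.

```python
# Toy federated-averaging sketch: raw data never leaves a spoke; only weights do.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)        # least-squares gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
true_w = np.array([2.0, -1.0])
spokes = []
for _ in range(3):                               # three edge locations
    X = rng.normal(size=(200, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    spokes.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                              # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in spokes]
    global_w = np.mean(local_ws, axis=0)         # hub averages the updates

print("federated estimate:", np.round(global_w, 2))   # approaches [2, -1]
```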

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM's hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is generated and collected in many locations, creating a need for data lifecycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory and compliance, and local resource requirements. Automation determines which input data should be selected and labeled for retraining and used to further improve the model. Atypical data judged worthy of human attention is also identified.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models cannot afford such extravagance. To reduce the edge compute footprint, model compression can reduce the number of parameters, for example from several hundred million to a few million (see the pruning sketch after this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
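As a concrete, hedged illustration of the compression point (item 3), the sketch below applies magnitude pruning to a single dense layer, zeroing the smallest weights. The layer size and 90% pruning ratio are assumptions; real edge pipelines typically combine pruning with quantization and distillation.

```python
# Illustrative magnitude-pruning sketch: zero the smallest-magnitude weights.
import numpy as np

rng = np.random.default_rng(3)
weights = rng.normal(size=(512, 512))            # one dense layer of a larger model

prune_ratio = 0.90                               # assumption for illustration
threshold = np.quantile(np.abs(weights), prune_ratio)
mask = np.abs(weights) >= threshold
pruned = weights * mask

kept = int(mask.sum())
print(f"kept {kept} of {weights.size} weights ({kept / weights.size:.0%})")
# Stored sparsely, the pruned layer needs a fraction of the original memory,
# which is what makes deployment on constrained edge hardware feasible.
```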

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still require servers but call for a single-node, non-clustered deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example NVidia Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
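In code, managing slice quality of service starts with representing each slice's targets and checking measured KPIs against them, which is the signal an AI/ML controller would act on. The slice names and numbers below are illustrative assumptions, not 3GPP-defined values or IBM product behavior.

```python
# Hedged sketch: declare slice SLOs and check measured KPIs against them.
from dataclasses import dataclass

@dataclass
class SliceSLO:
    name: str
    max_latency_ms: float
    min_throughput_mbps: float

@dataclass
class SliceKPI:
    latency_ms: float
    throughput_mbps: float

def violations(slo: SliceSLO, kpi: SliceKPI) -> list[str]:
    issues = []
    if kpi.latency_ms > slo.max_latency_ms:
        issues.append(f"{slo.name}: latency {kpi.latency_ms} ms > {slo.max_latency_ms} ms")
    if kpi.throughput_mbps < slo.min_throughput_mbps:
        issues.append(f"{slo.name}: throughput {kpi.throughput_mbps} Mbps < {slo.min_throughput_mbps} Mbps")
    return issues

urllc = SliceSLO("factory-URLLC", max_latency_ms=5, min_throughput_mbps=10)
print(violations(urllc, SliceKPI(latency_ms=7.2, throughput_mbps=12)))
```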

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces, optimizing the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML (see the anomaly-detection sketch after this list)
  • Opportunity of value-added functions for O-RAN
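One of the simplest anomaly-detection primitives of the kind listed above is a rolling z-score over a RAN metric stream; anything far outside the recent window gets flagged for root-cause analysis. The window size, threshold, and synthetic throughput data below are assumptions for illustration.

```python
# Minimal anomaly-detection primitive: flag samples with an extreme rolling z-score.
import numpy as np

def rolling_anomalies(series, window=50, z_threshold=4.0):
    flags = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu, sigma = ref.mean(), ref.std() + 1e-9
        if abs(series[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

rng = np.random.default_rng(4)
throughput = rng.normal(100, 5, size=500)        # per-cell throughput samples (synthetic)
throughput[350] = 20                             # sudden drop, e.g. a degraded DU
print("anomalous samples at indices:", rolling_anomalies(throughput))
```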

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through offerings such as software-defined storage for a federated-namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM's goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value and would simply function as a hub-to-spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.


IBM earnings show solid growth but stock slides anyway

IBM Corp. beat second-quarter earnings estimates today, but shareholders were unimpressed, sending the computing giant’s shares down more than 4% in early after-hours trading.

Revenue rose 16%, to $15.54 billion in constant currency terms, and rose 9% from the $14.22 billion IBM reported in the same quarter a year ago after adjusting for the spinoff of managed infrastructure-service business Kyndryl Holdings Inc. Net income jumped 45% year-over-year, to $2.5 billion, and diluted earnings per share of $2.31 a share were up 43% from a year ago.

Analysts had expected adjusted earnings of $2.26 a share on revenue of $15.08 billion.

The strong numbers weren’t a surprise given that IBM had guided expectations toward high single-digit growth. The stock decline was attributed to a lower free cash flow forecast of $10 billion for 2022, which was below the $10 billion-to-$10.5 billion range it had initially forecast. However, free cash flow was up significantly for the first six months of the year.

It’s also possible that a report saying Apple was looking at slowing down hiring, which caused the overall market to fall slightly today, might have spilled over to other tech stocks such as IBM in the extended trading session.

Delivered on promises

On the whole, the company delivered what it said it would. Its hybrid platform and solutions category grew 9% on the back of 17% growth in its Red Hat Business. Hybrid cloud revenue rose 19%, to $21.7 billion. Transaction processing sales rose 19% and the software segment of hybrid cloud revenue grew 18%.

“This quarter says that [Chief Executive Arvind Krishna] and his team continue to get the big calls right both from a platform strategy and also from the investments and acquisitions IBM has made over the last 18 months,” said Bola Rotibi, research director for software development at CCS Insight Ltd. Despite broad fears of a downturn in the economy, “the company is bucking the expected trend and more than meeting expectations,” she said.

Software revenue grew 11.6% in constant currency terms, to $6.2 billion, helped by a 7% jump in sales to Kyndryl. Consulting revenue rose almost 18% in constant currency, to $4.8 billion, while infrastructure revenue grew more than 25%, to $4.2 billion, driven largely by the announcement of a new series of IBM z Systems mainframes, which delivered 69% revenue growth.

With investors on edge about the risk of recession and its potential impact on technology spending, Chief Executive Arvind Krishna delivered an upbeat message. “There’s every reason to believe technology spending in the [business-to-business] market will continue to surpass GDP growth,” he said. “Demand for solutions remains strong. We continue to have double-digit growth in IBM consulting, broad growth in software and, with the z16 launch, strong growth in infrastructure.”

Healthy pipeline

Krishna called IBM’s current sales pipeline “pretty healthy. The second half at this point looks consistent with the first half by product line and geography,” he said. He suggested that technology spending is benefiting from its leverage in reducing costs, making the sector less vulnerable to recession. ”We see the technology as deflationary,” he said. “It acts as a counterbalance to all of the inflation and labor demographics people are facing all over the globe.”

While IBM has been criticized for spending $34 billion to buy Red Hat Inc. instead of investing in infrastructure, the deal appears to be paying off as expected, Rotibi said. Although second-quarter growth in the Red Hat business was lower than the 21% recorded in the first quarter, “all the indices show that they are getting very good value from the portfolio,” she said. Red Hat has boosted IBM’s consulting business but products like Red Hat Enterprise Linux and OpenShift have also benefited from the Big Blue sales force.

With IBM being the first major information technology provider to report results, Pund-IT Inc. Chief Analyst Charles King said the numbers bode well for reports soon to come from other firms. “The strength of IBM’s quarter could portend good news for other vendors focused on enterprises,” he said. “While those businesses aren’t immune to systemic problems, they have enough heft and buoyancy to ride out storms.”

One area that IBM has talked less and less about over the past few quarters is its public cloud business. The company no longer breaks out cloud revenues and prefers to talk instead about its hybrid business and partnerships with major public cloud providers.

Hybrid focus

“IBM’s primary focus has long been on developing and enabling hybrid cloud offerings and services; that’s what its enterprise customers want, and that’s what its solutions and consultants aim to deliver,” King said.

IBM’s recently expanded partnership with Amazon Web Services Inc. is an example of how the company has pivoted away from competing with the largest hyperscalers and now sees them as a sales channel, Rotibi said. “It is a pragmatic recognition of the footprint of the hyperscalers but also playing to IBM’s strength in the services it can build on top of the other cloud platforms, its consulting arm and infrastructure,” she said.

Krishna asserted that, now that the Kyndryl spinoff is complete, IBM is in a strong position to continue on its plan to deliver high-single-digit revenue growth percentages for the foreseeable future. Its consulting business is now focused principally on business transformation projects rather than technology implementation and the people-intensive business delivered a pretax profit margin of 9%, up 1% from last year. “Consulting is a critical part of our hybrid platform thesis,” said Chief Financial Officer James Kavanaugh.

Pund-IT’s King said IBM Consulting “is firing on all cylinders. That includes double-digit growth in its three main categories of business transformation, technology consulting and application operations as well as a notable 32% growth in hybrid cloud consulting.”

Dollar worries

With the U.S. dollar at a 20-year high against the euro and a 25-year high against the yen, analysts on the company’s earnings call directed several questions to the impact of currency fluctuations on IBM’s results.

Kavanaugh said these are unknown waters but the company is prepared. “The velocity of the [dollar’s] strengthening is the sharpest we’ve seen in over a decade; over half of currencies are down double digits against the U.S. dollar,” he said. “This is unprecedented in rate, breadth and magnitude.”

Kavanaugh said IBM is more insulated against currency fluctuations than most companies because it has long hedged against volatility. “Hedging mitigates volatility in the near term,” he said. “It does not eliminate currency as a factor but it allows you time to address your business model for price, for source, for labor pools and for cost structures.”

The company’s people-intensive consulting business also has some built-in protections against a downturn, Kavanaugh said. “In a business where you hire tens of thousands of people, you also churn tens of thousands each year,” he said. “It gives you an automatic way to hit a pause in some of the profit controls because if you don’t see demand you can slow down your supply-side. You can get a 10% to 20% impact that you pretty quickly control.”


Astadia Publishes Mainframe to Cloud Reference Architecture Series

The guides leverage Astadia’s 25+ years of expertise in partnering with organizations to reduce costs, risks and timeframes when migrating their IBM mainframe applications to cloud platforms

BOSTON, August 03, 2022--(BUSINESS WIRE)--Astadia is pleased to announce the release of a new series of Mainframe-to-Cloud reference architecture guides. The documents cover how to refactor IBM mainframe applications to Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and Oracle Cloud Infrastructure (OCI). They offer a deep dive into the migration process to all major target cloud platforms using Astadia’s FastTrack software platform and methodology.

As enterprises and government agencies are under pressure to modernize their IT environments and make them more agile, scalable and cost-efficient, refactoring mainframe applications in the cloud is recognized as one of the most efficient and fastest modernization solutions. By making the guides available, Astadia equips business and IT professionals with a step-by-step approach on how to refactor mission-critical business systems and benefit from highly automated code transformation, data conversion and testing to reduce costs, risks and timeframes in mainframe migration projects.
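Data conversion is one of the more mechanical steps in such migrations. The sketch below shows a single, hypothetical example: decoding an EBCDIC (code page 037) record from a mainframe data set into UTF-8 on the target platform. The record itself is made up, and real conversions also have to handle packed-decimal fields, copybook layouts, and collation differences that a one-line decode does not cover.

```python
# Illustrative sketch of one small data-conversion step in a mainframe migration:
# decode an EBCDIC (code page 037) record into UTF-8. Record content is made up.
ebcdic_record = "ACME CORP 000123".encode("cp037")   # simulate a mainframe record

decoded = ebcdic_record.decode("cp037")              # EBCDIC -> Python str
utf8_record = decoded.encode("utf-8")                # re-encode for the target platform

print(decoded)          # 'ACME CORP 000123'
print(utf8_record)
```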

"Understanding all aspects of legacy application modernization and having access to the most performant solutions is crucial to accelerating digital transformation," said Scott G. Silk, Chairman and CEO. "More and more organizations are choosing to refactor mainframe applications to the cloud. These guides are meant to assist their teams in transitioning fast and safely by benefiting from Astadia’s expertise, software tools, partnerships, and technology coverage in mainframe-to-cloud migrations," said Mr. Silk.

The new guides are part of Astadia’s free Mainframe-to-Cloud Modernization series, an ample collection of guides covering various mainframe migration options, technologies, and cloud platforms. The series covers IBM (NYSE:IBM) Mainframes.

In addition to the reference architecture diagrams, these comprehensive guides include various techniques and methodologies that may be used in forming a complete and effective Legacy Modernization plan. The documents analyze the important role of the mainframe platform, and how to preserve previous investments in information systems when transitioning to the cloud.

In each of the IBM Mainframe Reference Architecture white papers, readers will explore:

  • Benefits, approaches, and challenges of mainframe modernization

  • Understanding typical IBM Mainframe Architecture

  • An overview of Azure/AWS/Google Cloud/Oracle Cloud

  • Detailed diagrams of IBM mappings to Azure/AWS/Google Cloud/Oracle Cloud

  • How to ensure project success in mainframe modernization

The guides are available for download here:

To access more mainframe modernization resources, visit the Astadia learning center on www.astadia.com.

About Astadia

Astadia is the market-leading software-enabled mainframe migration company, specializing in moving IBM and Unisys mainframe applications and databases to distributed and cloud platforms in unprecedented timeframes. With more than 30 years of experience, and over 300 mainframe migrations completed, enterprises and government organizations choose Astadia for its deep expertise, range of technologies, and the ability to automate complex migrations, as well as testing at scale. Learn more on www.astadia.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220803005031/en/

Contacts

Wilson Rains, Chief Revenue Officer
Wilson.Rains@astadia.com
+1.877.727.8234

Amazon, IBM Move Swiftly on Post-Quantum Cryptographic Algorithms Selected by NIST

A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google was also quick to outline an aggressive implementation plan for its cloud service that it started a decade ago.

It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in two. Google contributed to one of the submitted algorithms, SPHINCS+.

A long process that started in 2016 with 69 original candidates ends with the selection of four algorithms that will become NIST standards, which will play a critical role in protecting encrypted data from the vast power of quantum computers.

NIST's four choices include CRYSTALS-Kyber, a public-private key-encapsulation mechanism (KEM) for general asymmetric encryption, such as when connecting to websites. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.
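To make the key-encapsulation idea concrete, the sketch below runs the CRYSTALS-Kyber flow using the open-source liboqs-python bindings. This is an assumption for illustration only; it is not IBM's or AWS's implementation, the parameter set is arbitrary, and method names may differ between library versions.

```python
# Hedged sketch of the Kyber KEM flow with liboqs-python (assumed library;
# not IBM's or AWS's code, and API names may vary by version).
import oqs

kem_alg = "Kyber768"
with oqs.KeyEncapsulation(kem_alg) as receiver:
    public_key = receiver.generate_keypair()          # receiver publishes this

    with oqs.KeyEncapsulation(kem_alg) as sender:
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver
# Both sides now hold the same secret, usable as a symmetric session key.
```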

Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Lyubashevsky was predictably pleased by the algorithms selected, but he had only anticipated NIST would pick two digital signature candidates rather than three.

Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."

IBM's New Mainframe Supports NIST-Selected Algorithms

After NIST identified the algorithms, IBM moved forward by specifying them into its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST APIs.

IBM was championing three of the algorithms that NIST selected, so IBM had already included them in the z16. Since IBM had unveiled the z16 before the NIST decision, the company implemented the algorithms into the new system. IBM last week made it official that the z16 supports the algorithms.

Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTAL-Kyber and Dilithium, according to Dames.

"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."

A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classic computing systems and quantum computers.

"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."

Dames noted that clients might use Dilithium to generate digital signatures on documents. "Think about code signing servers, things like that, or document signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.

AWS Engineers Algorithms Into Services

During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.

During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS had implemented an open-source hybrid post-quantum key exchange in s2n-tls, its implementation of the Transport Layer Security (TLS) protocol used across different AWS services. AWS has contributed the hybrid scheme as a draft standard to the Internet Engineering Task Force (IETF).

Salter explained that the hybrid key exchange brings together its traditional key exchanges while enabling post-quantum security. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a public key exchange on top of it. And using both of those, you have traditional security, plus post quantum security."
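Conceptually, a hybrid key exchange derives the session key from both secrets, so an attacker must break the classical and the post-quantum parts. The Python sketch below illustrates that combination with X25519 plus a stand-in for the Kyber secret; it is not AWS's s2n-tls code (which is written in C), and the random bytes used in place of a real Kyber encapsulation are an explicit assumption.

```python
# Conceptual hybrid key-exchange sketch: derive the session key from BOTH a
# classical (ECDH) secret and a post-quantum secret. The Kyber secret is faked
# with random bytes to keep the example dependency-free.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# classical ECDH share
client_priv = x25519.X25519PrivateKey.generate()
server_priv = x25519.X25519PrivateKey.generate()
ecdh_secret = client_priv.exchange(server_priv.public_key())

kyber_secret = os.urandom(32)        # placeholder for the Kyber shared secret

session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-pq-tls-sketch",
).derive(ecdh_secret + kyber_secret)

print(session_key.hex())
```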

Last week, Amazon announced that it deployed s2n-tls, the hybrid post-quantum TLS with CRYSTALS-Kyber, which connects to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon documented support for AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.

Google's Decade-Long PQC Migration

While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.

"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."

Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.

Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.

Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."

Other Standards Efforts

The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.

"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like things like that," IBM's Dames said.

Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our nice progress on this process, as well as participating in that. And we see that as a very good sign."

IBM report shows cyberattacks growing fast in number, scale

A new report out of IBM shows that when it comes to the rising threat of data breaches, it’s the consumer – not the company – fronting the price tag.

Zero trust: Chaos creates cybercriminal opportunities

If there is any word to best describe the first few years of the decade, it is chaotic. And chaos is where cybercriminals flourish. While many fleets and other transportation industry organizations and businesses are more secure than last decade, there are more threats to the industry, which could impact fleets, their customers, and supply chains.

In the past year, the transportation industry was among the top 10 most targeted sectors by cybercriminals, according to a 2022 IBM Security study. While transportation was the seventh-most cyberattack-targeted industry, industries relying on trucking and other transportation services, such as manufacturing (No. 1), energy (No. 4), and retail/wholesale (No. 5), were victims of ransomware and business email compromise (BEC) attacks, according to the study.

See also: Still waiting on blockchain to catch up with the hype

These attacks, particularly against manufacturing, which accounted for nearly a quarter of all cyberattacks worldwide in 2021, added to the supply chain pressures created during the COVID-19 pandemic.

"Cybercriminals usually chase the money. Now with ransomware, they are chasing leverage," said Charles Henderson, head of IBM X-Force. "Businesses should recognize that vulnerabilities are holding them in a deadlock—as ransomware actors use that to their advantage. This is a non-binary challenge. The attack surface is only growing larger, so instead of operating under the assumption that every vulnerability in their environment has been patched, businesses should operate under an assumption of compromise and enhance their vulnerability management with a zero trust strategy."

Joe Russo, VP of IT and Security at Isaac Instruments, a trucking technology company, said more companies are shifting toward “zero-trust.” It’s a new security approach that assumes a breach has already happened—so it increases the difficulty for an attacker to move through a company’s network.

“Zero trust is something that can help all fleets,” Russo told FleetOwner. Fundamentally, zero trust is understanding where critical data resides and who has access to it. It’s one of the bases for blockchain. Then, he explained, fleets should create robust verification measures throughout a network to ensure only the right people are accessing that crucial data in the right way.
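A minimal sketch of that posture is a per-request policy check that never assumes trust: every access to critical data re-verifies who is asking, whether the device is healthy, and whether the role is allowed that action. The users, roles, and policy below are hypothetical examples, not Isaac Instruments' or any fleet's actual controls.

```python
# Minimal zero-trust sketch: verify identity, device health, and least-privilege
# policy on every request. All names and policies are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool     # e.g. patched, MFA-verified endpoint
    resource: str
    action: str

# who may do what to which critical data set (hypothetical policy)
POLICY = {("dispatch_ops", "driver_logs", "read"),
          ("security_team", "driver_logs", "export")}

def role_of(user: str) -> str:
    return {"alice": "dispatch_ops", "bob": "maintenance"}.get(user, "none")

def authorize(req: Request) -> bool:
    if not req.device_compliant:                      # assume breach: re-verify the device
        return False
    return (role_of(req.user), req.resource, req.action) in POLICY

print(authorize(Request("alice", True, "driver_logs", "read")))    # True
print(authorize(Request("alice", True, "driver_logs", "export")))  # False
print(authorize(Request("bob", True, "driver_logs", "read")))      # False
```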

Transportation industry security improves

IBM’s study found that 4% of all attacks were aimed at the transportation industry, which made it the seventh-most targeted group in 2021. Transportation was No. 9 in 2020. IBM found that as international borders and transportation networks reopened in 2021, it renewed cybercriminal interest in transportation. While transportation ranked lower overall in 2020, it saw more cyberattacks. 

The transportation industry had already started taking cyber issues more seriously last year, according to Ben Barnes, chief information security officer and VP of IT services for transportation solutions provider McLeod Software.

See also: How to reduce the risk of a data breach

“I think we, as an industry, have come a long way in our cybersecurity,” he told FleetOwner. “A lack of cyber adoption was our big hurdle for a long time. I don’t think we suffer that anymore.”

While the transportation industry was once the “low-hanging fruit” for cybercriminals, that is no longer the case, Barnes said. “I think a lot of the attacks in the transportation industry now are very targeted. It’s a high-value market now,” he explained. “High value doesn’t mean profitable, but there’s a lot of revenue; there’s a lot of dollars in transportation that are moving. And that makes us very likable for a thief.”

Malicious insiders—those who intentionally abuse legitimate credentials to steal information—were the top attack type against transportation organizations in 2021, according to the IBM study. These attacks made up 29% of those in the industry. Ransomware, remote access trojans (RATs), data theft, credential harvesting, and server access attacks were also aimed at transportation organizations.

Half of the incidents IBM X-Force remediated at transportation companies originated with phishing emails, followed by stolen credentials (33%), and vulnerability exploitation (17%).

Russo noted that during the pandemic, as more companies were dealing with remote workers and more entry points for attacks, cybersecurity technologies improved. “If there’s a ransomware attack, it can be isolated to just that device so it doesn’t spread,” he explained. “A lot more proactive detection and containment is happening than in the past.”

Transportation targets

While transportation is no longer one of the top five targets for cybercriminals, it’s no reason for fleets and similar businesses to rest, Russo said. 

“With the Russian war in Ukraine, hackers are going after high-value targets, such as financial systems and health care,” Russo explained. “They haven’t gone down the list yet and hit transportation. But everyone must be vigilant—it could hit anytime.” 

See also: Are cybercriminals waiting for an opportune time to attack U.S. trucks?

When the fragility of U.S. supply chains was exposed during the COVID pandemic, cybercriminals were also shown how attacks could affect specific transportation organizations and businesses such as fleets, according to John Sheehy, SVP of research and strategy for IOActive.

“You might be attacked because of who your client is—or who their client is,” Sheehy told FleetOwner. He explained that a criminal looking to infiltrate a high-value target could use a fleet’s weaker cybersecurity as a way to get into a fleet customer’s network. That’s why he believes sharing information about company security breaches can contribute to the common good.

“Empowering them with the information they need to make decisions to protect themselves and their clients is very helpful,” Sheehy said.

Cyberattacks aren’t going away, McLeod’s Barnes said. And like all business practices, companies need to review and revisit their cybersecurity practices regularly. 

“We’re all targets because we’re all part of the transportation sector—but there is strength in collective action,” he said. “The transportation industry needs to work together to combat cybercrime. As more companies take steps to protect their IT systems, the transportation sector will become a less attractive target for cybercriminals. If we can raise awareness and take action to defeat cybercrime, the entire industry will benefit.”

Fri, 29 Jul 2022 01:19:00 -0500 https://www.fleetowner.com/technology/article/21246668/chaos-creates-cybercriminal-opportunities
Killexams : IBM Report: South African data breach costs reach all-time high

IBM Security today released the annual Cost of a Data Breach Report, revealing costlier and higher-impact data breaches than ever before, with the average cost of a data breach in South Africa reaching an all-time high of R49.25 million for surveyed organisations. With breach costs increasing nearly 20% over the last two years of the report, the findings suggest that security incidents became more costly and harder to contain compared to the year prior.

The 2022 report revealed that the average time to detect and contain a data breach was at its highest in seven years for organisations in South Africa, taking 247 days (187 to detect, 60 to contain). Companies that contained a breach in under 200 days saved almost R12 million, while breaches cost organisations an average of R2,650 per lost or stolen record.
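
As a rough back-of-the-envelope illustration of what those averages imply (a hypothetical Python sketch; only the R2,650 per-record figure and the roughly R12 million saving for sub-200-day containment come from the report, while the record counts are invented):

```python
# Back-of-the-envelope use of the report's averages.
# The record counts below are invented for illustration only.
COST_PER_RECORD_ZAR = 2_650                # average cost per lost or stolen record
FAST_CONTAINMENT_SAVING_ZAR = 12_000_000   # approx. saving when contained < 200 days

for records in (10_000, 100_000, 500_000):
    base_cost = records * COST_PER_RECORD_ZAR
    fast_cost = max(base_cost - FAST_CONTAINMENT_SAVING_ZAR, 0)
    print(f"{records:>7} records: ~R{base_cost:,.0f} "
          f"(~R{fast_cost:,.0f} if contained in under 200 days)")
```

Real breach costs are not strictly linear in record count, but the sketch shows how quickly per-record costs and slow containment compound.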

The 2022 Cost of a Data Breach Report is based on in-depth analysis of real-world data breaches experienced by 550 organisations globally between March 2021 and March 2022. The research, which was sponsored and analysed by IBM Security, was conducted by the Ponemon Institute.

“As this year’s report reveals, the right strategies coupled with the right technologies can help make all the difference when organisations are attacked. Businesses today need to continuously look into solutions that reduce complexity and speed up response to cyber threats across the hybrid cloud environment, minimising the impact of attacks,” says Ria Pinto, General Manager and Technology Leader, IBM South Africa.

Some of the key findings in the 2022 IBM report include:

  • Security Immaturity in Clouds – Among organisations studied, those with mature security across their cloud environments saw breach costs roughly R4 million lower than organisations still in the midstage of applying such practices across the organisation.
  • Incident Response Testing is a Multi-Million Rand Cost Saver – Organisations with an incident response (IR) team saved over R3.4 million, while those that extensively tested their IR plan lowered the cost of a breach by over R2.6 million, the study revealed. The study also found that organisations which deployed security AI or analytics incurred over R2 million less in breach costs on average than organisations that had deployed neither technology – making them the top mitigating factors shown to reduce the cost of a breach.
  • Cloud Misconfiguration, Malicious Insider Attacks and Stolen Credentials are Costliest Breach Causes – Cloud misconfiguration reigned as the costliest cause of a breach (R58.6 million), malicious insider attacks came in second (R55 million), and stolen credentials came in third, leading to R53 million in average breach costs for responding organisations.
  • Financial Services organisations experienced the Highest Breach Costs – Financial participants saw the costliest breaches amongst industries with average breach costs reaching a high of R4.9 million per record. This was followed by the industrial sector with losses per record reaching R4.7 million.

 Hybrid Cloud Advantage

Globally, the report also showcased hybrid cloud environments as the most prevalent (45%) infrastructure amongst organisations studied. Global findings revealed that organisations that adopted a hybrid cloud model observed lower breach costs compared to businesses with a solely public or private cloud model. In fact, hybrid cloud adopters studied were able to identify and contain data breaches 15 days faster on average than the global average of 277 days for participants.

The report highlights that 45% of studied breaches globally occurred in the cloud, emphasising the importance of cloud security.

South African businesses studied that had not started to deploy zero trust security practices across their cloud environments suffered losses averaging R56 million. Those in the mature stages of deployment decreased this cost significantly – recording R20 million savings as their total cost of a data breach was found to be R36 million.

The study revealed that more businesses are implementing security practices to protect their cloud environments, lowering breach costs with 44% of reporting organisations stating their zero-trust deployment is in the mature stage and another 42% revealing they are in the midstage.

Thu, 28 Jul 2022 00:16:00 -0500 https://www.biztechafrica.com/article/ibm-report-south-african-data-breach-costs-reach-a/17008/
Killexams : Everything Falcons fans need to know ahead of open practices Friday

FLOWERY BRANCH, Ga. (CBS46) - With the Atlanta Falcons set to open training camp practices that are free and open to fans on Friday, here is the important information you will need to know if you plan to attend.

The first practice open to fans in 2022 is scheduled at the IBM Performance Field in Flowery Branch beginning at 9:30 a.m., team officials said.

Make sure to pay attention to the forecast as Friday is expected to be another hot and humid day in metro Atlanta with clouds building through the afternoon.

In case of stormy weather in your area, bring an umbrella and a blanket, and remember to follow all NFL policies on COVID-19 health and safety. There could be opportunities for autographs or photos, so bring your markers, posters and camera phones.

Make sure to get there early to get a good parking spot.

You can also download the CBS46 mobile news app and check traffic alerts as you head to the team’s Flowery Branch training facility.

The team held day 1 of training camp practices with veterans and rookies on Wednesday, before amping up the intensity and urgency on Thursday.

Falcons team officials say head coach Arthur Smith and general manager Terry Fontenot will speak at an upcoming practice, while Falcons legends, the mascot Freddie Falcon and Falcons cheerleaders will be in attendance. Food trucks and an official team merchandise tent will also be on-site for fans.

For more information on all of the open training camp practices, click here.

In case you’re unable to attend Friday but plan on attending at a future date, here is the 2022 Atlanta Falcons Training Camp Open Dates schedule:

  • Saturday, July 30 | IBM Performance Field | 9:30 a.m.
  • Monday, August 1 | IBM Performance Field | 10 a.m.
  • Tuesday, August 2 | IBM Performance Field | 9:30 a.m.
  • Wednesday, August 3 | IBM Performance Field | 9:30 a.m.
  • Friday, August 5 | IBM Performance Field | 9:30 a.m.
  • Saturday, August 6 | IBM Performance Field | 9:30 a.m.
  • Monday, August 8 | IBM Performance Field | 10 a.m.
  • Tuesday, August 9 | IBM Performance Field | 9:30 a.m.
  • Wednesday, August 10 | IBM Performance Field | 9:30 a.m.
  • Monday, August 15 | Mercedes-Benz Stadium | 6:30 p.m.
  • Wednesday, August 24 | IBM Performance Field | Joint practices with Jacksonville | 1 p.m.
  • Thursday, August 25 | IBM Performance Field | Joint practices with Jacksonville | 1 p.m.
Thu, 28 Jul 2022 12:37:00 -0500 https://www.cbs46.com/2022/07/29/everything-falcons-fans-need-know-ahead-open-practices-friday/
Killexams : IBM’s Red Hat taps product and technology chief as new leader

IBM’s Red Hat named Matt Hicks, head of products and technologies, as its new leader, solidifying a bet that hybrid-cloud offerings will fuel the company’s growth.

Hicks takes over as the software unit’s chief executive officer and president from Paul Cormier, who will serve as chairman. “Paul and I have planned this for a while,” Hicks said Tuesday in an interview. “There’ll be a lot of similarities in what I did yesterday and what I’ll be doing tomorrow.”

International Business Machines (IBM) acquired Red Hat for about $34bn in 2019 as a central component of chief executive Arvind Krishna’s plan to steer the century-old company into the fast-growing cloud-computing market. As a division, Red Hat has seen steady revenue growth near 20 per cent, far outpacing IBM as a whole.

IBM hopes to distinguish itself in the crowded cloud market by targeting a hybrid model, which helps clients store and analyse information across their own data centres, private cloud services and servers run by major public providers such as Amazon.com and Microsoft. IBM has been a rare pocket of stability in the recent stock market meltdown. The shares have gained 4.1 per cent this year, closing at $139.18 Tuesday in New York, compared with a 28 per cent decline for the tech-heavy Nasdaq 100.

“Together, we can really lead a new era of hybrid computing,” said Hicks, who joined Red Hat in 2006. “Red Hat has the technology expertise and open source model – IBM has the reach.”

Hicks said demand for hybrid cloud and software services should remain strong despite questions about the global economic outlook, touting recent deals with General Motors and ABB. The telecommunications and automotive industries are two areas he is targeting for expansion because they require geographically distributed data.

Read: IBM, Saudi’s King Saud University partner to advance skills development

Tue, 12 Jul 2022 17:49:00 -0500 Bloomberg https://gulfbusiness.com/ibms-red-hat-taps-product-and-technology-chief-as-new-leader/
Killexams : Asia Pacific Artificial Intelligence In Fintech Market Report 2022: Featuring Key Players IBM, Oracle, Google, Microsoft & Others

Dublin, Aug. 09, 2022 (GLOBE NEWSWIRE) -- The "Asia Pacific Artificial Intelligence In Fintech Market Size, Share & Industry Trends Analysis Report By Component (Solutions and Services), By Deployment (On-premise and Cloud), By Application, By Country and Growth Forecast, 2022 - 2028" report has been added to ResearchAndMarkets.com's offering.

The Asia Pacific Artificial Intelligence In Fintech Market is expected to witness market growth of 17.7% CAGR during the forecast period (2022-2028).

Artificial intelligence enhances outcomes by employing approaches derived from human intellect but applied at a scale that is not human. Fintech firms have been transformed in recent years as a result of the computational arms race. Additionally, near-endless volumes of data are pushing AI to unprecedented heights, and smart contracts may simply be a continuation of the current market trend.

In the banking industry, AI is used to look at a person's entire financial health, keep up with real-time changes, and offer tailored advice based on fresh incoming data by examining cash accounts, investment accounts, and credit accounts. Banks and fintech companies have profited from AI and machine learning because they can process large amounts of data on clients. This information is then compared to draw conclusions about which services and products clients want, which has helped in the development of customer relationships.
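
As a loose illustration of that comparison step (a deliberately simplified, rule-based Python toy; the account features, thresholds, and product names are invented, and production systems would use trained models over far richer data), matching client profiles to likely products might look like this:

```python
# Toy illustration of "compare client data to infer which products fit".
# Features and thresholds are invented; real systems use trained ML models.
clients = {
    "client_a": {"cash_balance": 1_500,  "investments": 0,       "credit_util": 0.85},
    "client_b": {"cash_balance": 40_000, "investments": 120_000, "credit_util": 0.10},
}

def suggest_products(profile: dict) -> list[str]:
    suggestions = []
    if profile["credit_util"] > 0.7:
        suggestions.append("debt consolidation loan")
    if profile["cash_balance"] > 20_000 and profile["investments"] < 50_000:
        suggestions.append("starter investment account")
    if profile["investments"] > 100_000:
        suggestions.append("wealth management advisory")
    return suggestions or ["basic savings account"]

for name, profile in clients.items():
    print(name, "->", suggest_products(profile))
```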

Hong Kong is a developed metropolis with a high rate of mobile phone use and internet access, providing a solid foundation for the city's fintech ecosystem. As per Invest Hong Kong, the city is home to approximately 600 fintech enterprises and startups. Similarly, 86% of local banks have implemented or plan to implement fintech solutions across all financial services. Consumer fintech adoption in the city ranked in the top five among the world's developed markets. Since 2014, Hong Kong fintech businesses have raised over 1.1 billion dollars in venture funding. Digital payments, securities settlement, wealthtech, electronic Know Your Customer (KYC) and digital identification utilities, insurtech, blockchain, data analytics, and other fintech opportunities abound in Hong Kong.

The HKMA introduced the Fintech Supervisory Sandbox (FSS) in September 2016, allowing banks and their collaborating technology businesses to perform pilot trials of their fintech projects with a small number of consumers without having to meet all of the HKMA's supervisory standards. This arrangement allows banks and tech companies to collect data and user feedback in order to improve their new initiatives, allowing them to deploy new technological solutions faster and at lower cost. Owing to this government support and heavy investment in advanced solutions, growth of the regional artificial intelligence in fintech market is expected to accelerate over the forecast years.

The China market dominated the Asia Pacific Artificial Intelligence In Fintech Market by country in 2021 and is expected to remain the dominant market through 2028, achieving a market value of $1,908.9 million by 2028. The Japan market is poised to grow at a CAGR of 17% during 2022-2028, while the India market is expected to display a CAGR of 18.4% over the same period.

Scope of the Study
Market Segments Covered in the Report:
By Component
  • Solutions
  • Services

By Deployment
  • On-premise
  • Cloud

By Application
  • Business Analytics & Reporting
  • Customer Behavioral Analytics
  • Fraud Detection
  • Virtual Assistant (Chatbots)
  • Quantitative & Asset Management
  • Others

By Country
  • China
  • Japan
  • India
  • South Korea
  • Singapore
  • Malaysia
  • Rest of Asia Pacific

Key Market Players

  • IBM Corporation
  • Oracle Corporation
  • Microsoft Corporation
  • Google LLC
  • Intel Corporation
  • Salesforce.com, Inc.
  • Amazon Web Services, Inc.
  • ComplyAdvantage
  • Amelia US LLC
  • Inbenta Technologies, Inc.

Key Topics Covered:

Chapter 1. Market Scope & Methodology
Chapter 2. Market Overview
Chapter 3. Competition Analysis - Global
Chapter 4. Asia Pacific Artificial Intelligence In Fintech Market by Component
Chapter 5. Asia Pacific Artificial Intelligence In Fintech Market by Deployment
Chapter 6. Asia Pacific Artificial Intelligence In Fintech Market by Application
Chapter 7. Asia Pacific Artificial Intelligence In Fintech Market by Country
Chapter 8. Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/c76s9d

CONTACT: ResearchAndMarkets.com, Laura Wood, Senior Press Manager, press@researchandmarkets.com. For E.S.T office hours call 1-917-300-0470; for U.S./CAN toll free call 1-800-526-8630; for GMT office hours call +353-1-416-8900.
Mon, 08 Aug 2022 21:23:00 -0500 https://ca.finance.yahoo.com/news/asia-pacific-artificial-intelligence-fintech-092300545.html