C2070-994 Latest Topics for your exam success

Killexams.com provides legitimate, up-to-date, and accurate C2070-994 questions and answers with a 100% pass guarantee. Practice the questions for at least twenty-four hours to score high on the exam. Your path to passing the C2070-994 examination begins with killexams.com practice test questions.

Exam Code: C2070-994 Practice test 2022 by Killexams.com team
C2070-994 IBM Datacap V9.0 Solution Designer

Exam Code : C2070-994
Exam Name : IBM Datacap V9.0 Solution Designer
Number of questions: 66
Number of questions to pass: 43
Time allowed: 120 mins
Status: Live

The test contains five sections, totaling 66 multiple-choice questions. The percentages after each section title reflect the approximate distribution of the total question set across the sections.
This multiple-choice test contains questions requiring single and multiple answers. For multiple-answer questions, you need to choose all required options to get the answer correct. You will be advised how many options make up the correct answer.
This test is designed to provide diagnostic feedback on the Examination Score Report, correlating back to the test objectives, informing the test taker how he or she did on each section of the test. As a result, to maintain the integrity of each test, Q&A are not distributed.

Section 1 - Datacap Architecture 17%
Demonstrate understanding of Datacap architecture
Explain general architecture
Explain capacity planning and load balancing configuration
Demonstrate an understanding of distributed environments
Demonstrate an understanding of mobile architecture
Demonstrate an understanding of High Availability and Scaling
Identify minimum installation requirements
Demonstrate knowledge of Datacap databases
Demonstrate knowledge of Datacap application offerings
Demonstrate understanding of wTM
Section 2 - Analyze Business Requirements 19%
Determine what information to gather from images
Identify ingestion sources
Determine business validation rules
Determine workflow steps
Demonstrate understanding of export requirements
Determine security requirements
Determine reporting requirements
Section 3 - Design a Datacap Taskmaster Solution 26%
Demonstrate knowledge of creating Datacap document hierarchy
Analyze job and task design
Demonstrate knowledge of ingestion
Demonstrate knowledge of document conversions
Demonstrate knowledge of page identification
Demonstrate general page id knowledge
Demonstrate knowledge of Fingerprinting
Demonstrate knowledge of recognition methodology
Demonstrate knowledge of validation techniques
Demonstrate knowledge of export approaches
Demonstrate knowledge of FastDoc
Demonstrate knowledge of RuleRunner
Demonstrate knowledge of critical Datacap folder/file configuration
Section 4 - Develop Application and Solution Components 29%
Identify uses of Maintenance Manager
Demonstrate understanding of Report Viewer
Demonstrate understanding of RuleRunner
Demonstrate knowledge of using FastDoc admin
Demonstrate understanding of Datacap Studio
Demonstrate understanding of Studio AppWizard
Demonstrate knowledge of how to develop Task Profiles
Demonstrate knowledge of how to develop rulesets, rules, and functions
Demonstrate knowledge of applying rules to Datacap Document Hierarchy (DCO)
Demonstrate understanding of web administration
Section 5 - Solution Testing and Deployment 9%
Demonstrate knowledge of performing solution testing
Demonstrate knowledge on how to use Datacap Studio to test
Explain steps required to move solution between deployment targets
Demonstrate ability to enable logging for all the components
Demonstrate knowledge of troubleshooting techniques

IBM Datacap V9.0 Solution Designer
IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Partnerships & Use Cases

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
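To make the hub-and-spoke idea concrete, here is a toy Python sketch of a hub reconciling a desired application list against registered spokes. It is purely illustrative; the class names and locations are invented and do not reflect IBM's or Red Hat's actual APIs.

```python
# Minimal sketch (not IBM's implementation): a hub holding desired
# application state and pushing it to registered spoke locations.
from dataclasses import dataclass, field


@dataclass
class Spoke:
    name: str                      # e.g. "factory-floor-7" (hypothetical)
    deployed: dict = field(default_factory=dict)


class Hub:
    """Central control plane that reconciles spokes toward desired state."""

    def __init__(self):
        self.desired = {}          # app name -> version
        self.spokes = []

    def register(self, spoke: Spoke):
        self.spokes.append(spoke)

    def set_desired(self, app: str, version: str):
        self.desired[app] = version

    def reconcile(self):
        # Push any app whose deployed version differs from the desired one.
        for spoke in self.spokes:
            for app, version in self.desired.items():
                if spoke.deployed.get(app) != version:
                    print(f"deploying {app}:{version} to {spoke.name}")
                    spoke.deployed[app] = version


hub = Hub()
hub.register(Spoke("factory-floor-7"))
hub.register(Spoke("retail-branch-42"))
hub.set_desired("defect-detector", "1.4.0")
hub.reconcile()
```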

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principle is scalability, repeatability, and full stack solution management that allows everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor costs and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, including data from industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
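A quick back-of-the-envelope calculation illustrates the point. The numbers below (frame size, uplink bandwidth, round-trip time) are assumptions chosen for illustration, not measured figures from IBM or any provider.

```python
# Back-of-the-envelope comparison (illustrative numbers, not measured figures):
# processing a camera frame locally at the edge vs. shipping it to a cloud region.
frame_bytes = 2 * 1024 * 1024      # 2 MB image (assumption)
uplink_mbps = 50                   # available uplink bandwidth (assumption)
cloud_rtt_ms = 80                  # network round trip to the cloud region (assumption)
inference_ms = 30                  # model inference time, same either way (assumption)

transfer_ms = frame_bytes * 8 / (uplink_mbps * 1_000_000) * 1000
cloud_total_ms = cloud_rtt_ms + transfer_ms + inference_ms
edge_total_ms = inference_ms

print(f"cloud round trip: {cloud_total_ms:.0f} ms, edge: {edge_total_ms:.0f} ms")
# With these assumptions the cloud path is roughly 15x slower and puts about
# 2 MB of uplink traffic on the network per frame that the edge path avoids.
```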

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot, the agile mobile robot: a walking, sensing, and actuation platform. Like all edge applications, the robot's wireless mobility uses self-contained AI/ML that doesn't require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. Spot monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
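A minimal sketch of that monitoring loop is shown below: track live accuracy on spot-checked predictions and flag the model for retraining when it falls below a tolerance band. The window size and thresholds are arbitrary assumptions, and the stream hook is hypothetical.

```python
# Minimal sketch of the "models are rarely finished" point: monitor a model's
# live accuracy on labeled spot checks and flag it for retraining on drift.
from collections import deque


class DriftMonitor:
    def __init__(self, window=500, baseline_accuracy=0.95, tolerance=0.05):
        self.results = deque(maxlen=window)   # rolling record of correct/incorrect
        self.baseline = baseline_accuracy
        self.tolerance = tolerance

    def record(self, prediction, label):
        self.results.append(prediction == label)

    def needs_retraining(self):
        if len(self.results) < self.results.maxlen:
            return False                      # not enough evidence yet
        live_accuracy = sum(self.results) / len(self.results)
        return live_accuracy < self.baseline - self.tolerance


monitor = DriftMonitor()
# for prediction, label in spot_checked_stream:   # hypothetical labeled stream
#     monitor.record(prediction, label)
#     if monitor.needs_retraining():
#         trigger_retraining()                    # hypothetical MLOps pipeline hook
```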

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s; the current, in-progress fourth revolution, Industry 4.0, centers on digital transformation.

Manufacturing is the fastest growing and the largest of IBM's four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production; the quicker the time-to-value and the return-on-investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance (a minimal clustering sketch follows this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
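As a rough illustration of the data-summarization idea referenced above, the sketch below clusters image embeddings and samples a handful of representatives per cluster for annotation. The embeddings are random stand-ins and the cluster and sample counts are assumptions; this is not IBM's implementation.

```python
# Illustrative clustering-based data summarization: pick a few representative
# images per cluster for labeling instead of annotating thousands of near-duplicates.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 128))       # stand-in for image embeddings

k = 50                                        # number of clusters (assumption)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

selected = []
for cluster in range(k):
    members = np.flatnonzero(labels == cluster)
    # Sample up to 5 representatives per cluster for human annotation.
    selected.extend(rng.choice(members, size=min(5, len(members)), replace=False))

print(f"{len(selected)} of {len(features)} images sent for annotation")
```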

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
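The toy example below shows the basic federated-averaging pattern: each spoke computes a local update on its private data, and only the model weights travel to the hub, which averages them. It is a simplified sketch on a synthetic least-squares problem and does not use IBM Federated Learning's actual APIs.

```python
# Toy federated-averaging sketch (assumed setup, not IBM Federated Learning's API):
# each spoke trains on local data and only model weights leave the spoke.
import numpy as np


def local_update(weights, X, y, lr=0.1):
    # One gradient-descent step of least squares on the spoke's private data.
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad


def federated_round(weights, spokes):
    # Spokes send updated weights, never raw data; the hub averages them.
    updates = [local_update(weights, X, y) for X, y in spokes]
    return np.mean(updates, axis=0)


rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
spokes = []
for _ in range(3):                             # three spoke locations
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    spokes.append((X, y))

weights = np.zeros(2)
for _ in range(50):
    weights = federated_round(weights, spokes)
print("learned weights:", weights.round(2))    # approaches [2.0, -1.0]
```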

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data lifecycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory and compliance, and local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Atypical data judged worthy of human attention is also flagged.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can't afford such resource extravagance. To reduce the edge compute footprint, model compression can reduce the number of parameters from, say, several hundred million to a few million (a small pruning sketch follows this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
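To ground the compression point from item 3, here is a magnitude-pruning sketch: keep only the largest-magnitude weights so the parameter count fits an edge footprint. The array is a scaled-down stand-in for a real model and the sparsity target is an assumption, not an IBM figure.

```python
# Illustrative magnitude pruning: zero out the smallest weights so the model
# fits an edge footprint. The 98% sparsity target is an assumption chosen to
# mirror the "hundreds of millions down to a few million parameters" example.
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=200_000)            # scaled-down stand-in model

sparsity = 0.98
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

kept = int((pruned != 0).sum())
print(f"kept {kept:,} of {weights.size:,} weights "
      f"({kept / weights.size:.1%} of the original)")
```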

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still run servers but deploy them as a single node rather than as a cluster.

For smaller footprints such as industrial PCs or computer vision boards (for example NVidia Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
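One simple way to picture slicing is to treat each slice's service characteristics as explicit data that automation can check against live measurements, as in the hedged sketch below. The slice names and numbers are invented for illustration and are not 3GPP or IBM values.

```python
# Minimal sketch: model 5G slice profiles as data so an AI/ML controller has
# explicit targets to optimize against. Values are illustrative only.
from dataclasses import dataclass


@dataclass
class SliceProfile:
    name: str
    max_latency_ms: float        # end-to-end latency budget
    min_bandwidth_mbps: float    # guaranteed throughput


SLICES = [
    SliceProfile("urllc-factory-control", max_latency_ms=5, min_bandwidth_mbps=10),
    SliceProfile("embb-video-offload", max_latency_ms=50, min_bandwidth_mbps=500),
]


def violations(slice_profile, measured_latency_ms, measured_bandwidth_mbps):
    # A controller would feed these violations into its scaling/placement logic.
    issues = []
    if measured_latency_ms > slice_profile.max_latency_ms:
        issues.append("latency budget exceeded")
    if measured_bandwidth_mbps < slice_profile.min_bandwidth_mbps:
        issues.append("bandwidth guarantee missed")
    return issues


print(violations(SLICES[0], measured_latency_ms=7.2, measured_bandwidth_mbps=12))
```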

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections over open interfaces, optimizing how a device is categorized by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on "edge in" means it can provide infrastructure such as software-defined storage for a federated-namespace data lake that spans other hyperscalers' clouds. Additionally, IBM is exploring integrated full stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value; the edge would simply function as a spoke operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Paul Smith-Goodson, Forbes, 8 August 2022: https://www.forbes.com/sites/moorinsights/2022/08/08/ibm-research-rolls-out-a-comprehensive-ai-and-ml-edge-research-strategy-anchored-by-enterprise-partnerships-and-use-cases/
‘India is a perfect example of the application of open hybrid cloud’

NEW DELHI : Matt Hicks is president and chief executive officer of US enterprise open source solutions company Red Hat Inc., which was acquired by International Business Machines Corp. (IBM) in July 2019 for $34 billion but runs as an independent unit. Hicks, who took charge from Paul Cormier (now chairman of Red Hat) this July, is a Red Hat veteran and has been at the forefront of cloud computing for over a decade. In a video interview from the US, he spoke about his relationship with Arvind Krishna, chairman and CEO of IBM, and Cormier, even as he shared Red Hat's overall roadmap, plans for India, and tech trends. Edited excerpts:

You have seen the company grow from being just about enterprise Linux to becoming a multi-billion dollar open source enterprise products firm. Having stepped into Cormier’s shoes, are you planning any change in strategy?

The short answer is ‘No’. I’m pretty lucky that I have worked within about 20 feet of Paul for the last 10 years. So, I’ve had the opportunity to have a hand in the team we’ve built and the strategy we’ve built and the bets and positions we’ve made around open hybrid cloud. In my last role, I was heading all of our products and technology and business unit teams. Hence, I know the team and the strategy. And we will evolve. If we look at the cloud services market that’s moving fast, our commercial models will change there to make sure that as customers have a foot on prem (on premises) and in private cloud, we serve them well. As hybrid extends to edge (computing), it will also change how we approach that market. But our fundamental strategy around open hybrid cloud doesn’t change. So, it’s a nice spot to be here, where I don’t feel compelled to make any change, but focus more on execution. 

Tell us a bit about Red Hat’s focus on India, and your expansion plans in the country.

When we see the growth and opportunity in India, it mimics what we see in a lot of parts of the globe—software-defined innovation that is going to be the thing that lets enterprises compete. That could be in traditional markets where they’re leveraging their data centres; or it could be leveraging public cloud technologies. In certain industries, that software innovation is moving to the devices themselves, which we call edge. India is a perfect example of the application of open hybrid cloud because we can serve all of those use cases—from edge deployments in 5G and the adjacent businesses that will be built around that, to connectivity to the public clouds.

Correia (Marshall Correia is vice-president and general manager, India, South Asia at Red Hat): We have been operating in the country for multiple decades and our interest in India is two-fold. One is go-to-market in India, working with the Indian government, Indian enterprises, private sector as well as public sector enterprises. We have a global delivery presence in cities like Pune and Bengaluru. Whether you look at the front office, back office, or mid-office, we are deeply embedded into it (BSE, National Stock Exchange (NSE), Aadhaar, GST Network (GSTN), Life Insurance Corporation of India (LIC), SBI Insurance and most core banking services across India use Red Hat open source technologies). For instance, we work with Infosys on GSTN. So, I would say there is a little bit of Red Hat played out everywhere (in India) but with some large enterprises, we have a very deep relationship. 

Do you believe Red Hat is meeting IBM’s expectations? How often do you interact with Arvind Krishna, and what do you discuss?

About five years ago, Arvind and I were on stage together, announcing our new friendship around IBM middleware on OpenShift. I talk to him every few days. A lot of this credit goes to Paul. We’ve struck the balance with IBM. Arvind would describe it as Red Hat being “independent" (since) we have to partner with other cloud providers, other consulting providers, (and) other technology providers (including Verizon, Accenture, Deloitte, Tata Consultancy Services, and IBM Consulting). But IBM is very opinionated on Red Hat—they built their middleware to Red Hat, and we are their core choice for hybrid. Red Hat gives them (IBM) a technology base that they can apply their global reach to. IBM has the ability to bring open source Red Hat technology to every corner of the planet. 

How are open source architectures helping data scientists and CXOs with the much-needed edge adopting AI-ML (artificial intelligence and machine learning)?

AI is a really big space, and we have always sort of operated in how to get code built and (get it) into production faster. But now training models that can answer questions with precision are running in parallel. Our passion is to integrate that whole flow of models into production, right next to the apps that you’re already building today—we call this the ML ops (machine learning operations, which is jargon for a set of best practices for businesses to run AI successfully) space.

What that means is that we’re not trying to be the best in natural language processing (NLP) or building foundation AI models on it or convolutional neural networks (CNNs). We want to play in our sweet spot, which is how we arm data science teams to be able to get their models from development to production and time into those apps. This is the work we’ve done on OpenShift data science (managed cloud service for data scientists and developers) with it.

Another piece that’s changing and has been exciting for us, is hardware. As an example, cars today and going forward are moving to running just a computer in them. What we do really well is to put Linux on computers and the computer in your car, and the future will look very similar to the computer in your data centre today. And when we’re able to combine that platform, with bringing these AI models into that environment with the speed that you do with code with application integration, it opens up a lot of exciting opportunities for customers to get that data science model of building into the devices, or as close to customers as they possibly can.

This convergence is important, and it’s not tied to edge. Companies have realized that the closer they can push the interaction to the user, the better the experience it’s going to be.

And that could be in banking or pushing self-service to users‘ phones.

Livemint, 8 August 2022: https://www.livemint.com/companies/people/india-is-a-perfect-example-of-the-application-of-open-hybrid-cloud-11659981260451.html
IBM’s gobbling up AI companies left and right — and we love it

Big Blue’s been on a buying spree lately with Databand.ai, a big data startup, becoming its latest acquisition. Don’t blink. If you do, you might miss another huge IBM buyout.

Up front: Big data is a big deal. Less than a decade ago, many businesses were manually entering data into spreadsheets to meet their insight needs. Today, even the most modest startups can benefit from deep analytics.

Unfortunately, the landscape of companies that provide targeted services for a spectrum of industries is somewhat barren.

Subscribe to our newsletter now for a weekly recap of our favorite AI stories in your inbox.

Simply put, you can’t just integrate a bunch of generic AI models into your IT stack and hope to magically pipeline solutions to your company’s problems.

It takes infrastructure and expertise to turn your hoard of data into action points.

Background: IBM’s spent over a century developing the infrastructure. But expertise is a moving target. To keep up with the modern influx of deep learning and data solutions in a period of technological turbulence, the company’s new CEO has opened up the corporate wallet in hopes of building a data-parsing juggernaut.

Per a recent IBM blog post:

Databand.ai is IBM’s fifth acquisition in 2022 as the company continues to bolster its hybrid cloud and AI skills and capabilities. IBM has acquired more than 25 companies since Arvind Krishna became CEO in April 2020.

This particular acquisition shores up IBM’s ability to provide “data observability” solutions for its clients and customers.

In other words, Databand.ai comes with a suite of products and a team of employees who know how to turn giant troves of data into useful insights.

According to IBM:

A rapidly growing market opportunity, data observability is quickly emerging as a key solution for helping data teams and engineers better understand the health of data in their system and automatically identify, troubleshoot and resolve issues, like anomalies, breaking data changes or pipeline failures, in near real-time.

Quick take: There’s a lot more to the world of big data than you might think. With this acquisition, IBM not only gets software solutions it can integrate to its current cornucopia of management and analytics tools, but it also gets a team that’s ready to hit the ground running for the company’s clients.

Databand.ai just finished a funding round prior to the acquisition wherein it raised over $14 million — that’s a pretty good indication the company’s on solid footing.

Here at Neural, we love it. Databand’s joining a company whose CEO has their finger firmly on the pulse of big data and IBM’s expanding its already industry-leading portfolio of AI-powered solutions.

The Next Web (Neural), 17 July 2022: https://thenextweb.com/news/ibm-positions-itself-as-global-big-data-boss-with-latest-acquisition
How to Use AI in Video Workflows

The key to understanding how to best use artificial intelligence in video workflows is knowing when to implement it for the appropriate applications. Ethan Dreilinger, Client Solutions Engineer, IBM Watson Advertising and The Weather Company, Carlos Hernandez, Chief Revenue Officer, SSIMWAVE, and Gordon Brooks, Executive Chairman and CEO, Zixi, discuss the distinctions between automation and AI, along with how AI can be applied to automation to help maximize and streamline production workflows.

“When I talk to media companies about applying AI, I start at the bottom of a workflow and take those mundane tasks and start to think about how you can express them with AI,” Dreilinger says. He believes that letting machines analyze data is the ideal approach, while people such as Producers should handle creative content and cognitive-oriented tasks. “I think in terms of the bottom rung, really start low and build on top of that and take those low-lying tasks and start to automate them,” he says.

Hernandez asks where Dreilinger draws the distinction between automation and AI. “Not all automation is AI-driven,” he says. “So where would you say it makes sense?”

“AI can feed automation, certainly, but automation just kind of runs on its own,” Dreilinger says. “We have some products in our stack where you can go into an app and the app will start to understand what your preferences are because of AI.” This application of AI can be very useful for end users. “'That should go above this because that person likes pollen count more than precipitation forecast,’” he cites as an example.
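A toy version of that preference-driven ordering might look like the snippet below, where content modules are ranked by weights learned from a user's behavior. The module names and weights are invented for illustration and are not Watson Advertising or Weather Company code.

```python
# Toy sketch of preference-based ranking: modules with higher learned weights
# are shown earlier in the app. Names and weights are hypothetical.
def rank_modules(modules, preference_weights):
    # Unknown modules default to a weight of 0.0 and sink to the bottom.
    return sorted(modules, key=lambda m: preference_weights.get(m, 0.0), reverse=True)


learned = {"pollen_count": 0.92, "precipitation_forecast": 0.41, "uv_index": 0.17}
print(rank_modules(["precipitation_forecast", "pollen_count", "uv_index"], learned))
# -> ['pollen_count', 'precipitation_forecast', 'uv_index']
```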

Hernandez notes that automation and AI are often confused in the market. “We believe that in some cases, machine learning processes make sense, but in others, knowledge-based [uses] are the most appropriate. The key about artificial intelligence is knowing where it has value and where it doesn't," he says.

“It's a tool, like anything else, right?” Dreilinger says. “When you do a home repair job and you need a Phillips head screwdriver, you don't pull out a flathead screwdriver, you get your Phillips head screwdriver. It's a tool to use and you need to know when to use it.”

Gordon Brooks concurs and sums up these points. “It goes through the concept of advanced analytics. It's just one of the tools in that toolbox of doing those advanced analytics, which then drives automation and other things. So, I agree.”

Related Articles

How to Improve Live Video Workflows Through Optimized Root Cause Analysis

Zixi has improved live video workflows through their specialized Software-Defined Video Platform, which uses dynamic machine learning and an automated analytics approach to Root Cause Analysis to assist with faster team problem-solving collaboration

Biggest Challenges of Moving from On-Prem to Cloud Video Workflows

Some streaming pros say cloud production has made 5-10 years of progress in the two years since the pandemic shook live production to its core, but when will cloud offer the sort of no-latency communication, preview, and replay producers expect and rely on in on-prem workflows? And will it require a paradigm shift in the way producers think about hardware purchases and usage? Live X's Corey Behnke, CNN's Ben Ratner, LiveU's Mike Savello, and Signiant's Jon Finegold discuss the challenges of cloud migration and the current state of cloud in this clip from Streaming Media East 2022.

The Biggest Misconceptions About AI Video Workflow Automation

Zixi VP of Business Development Eric Bolten discusses some of the prevailing misconceptions about AI/ML video workflow automation in this clip from Streaming Media Connect 2022.

Streaming Media, 8 August 2022: https://www.streamingmedia.com/Articles/Editorial/Short-Cuts/How-to-Use-AI-in-Video-Workflows-154284.aspx
CIOReview Names Cobalt Iron Among 10 Most Promising IBM Solution Providers 2022

Cobalt Iron Inc., a leading provider of SaaS-based enterprise data protection, today announced that the company has been deemed one of the 10 Most Promising IBM Solution Providers 2022 by CIOReview Magazine. The annual list of companies is selected by a panel of experts and members of CIOReview Magazine's editorial board to recognize and promote innovation and entrepreneurship. A technology partner for IBM, Cobalt Iron earned the distinction based on its Compass® enterprise SaaS backup platform for monitoring, managing, provisioning, and securing the entire enterprise backup landscape.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20220728005043/en/

Cobalt Iron Compass® is a SaaS-based data protection platform leveraging strong IBM technologies for delivering a secure, modernized approach to data protection. (Graphic: Business Wire)

According to CIOReview, "Cobalt Iron has built a patented cyber-resilience technology in a SaaS model to alleviate the complexities of managing large, multivendor setups, providing an effectual humanless backup experience. This SaaS-based data protection platform, called Compass, leverages strong IBM technologies. For example, IBM Spectrum Protect is embedded into the platform from a data backup and recovery perspective. ... By combining IBM's technologies and the intellectual property built by Cobalt Iron, the company delivers a secure, modernized approach to data protection, providing a 'true' software as a service."

Through proprietary technology, the Compass data protection platform integrates with, automates, and optimizes best-of-breed technologies, including IBM Spectrum Protect, IBM FlashSystem, IBM Red Hat Linux, IBM Cloud, and IBM Cloud Object Storage. Compass enhances and extends IBM technologies by automating more than 80% of backup infrastructure operations, optimizing the backup landscape through analytics, and securing backup data, making it a valuable addition to IBM's data protection offerings.

CIOReview also praised Compass for its simple and intuitive interface to display a consolidated view of data backups across an entire organization without logging in to every backup product instance to extract data. The machine learning-enabled platform also automates backup processes and infrastructure, and it uses open APIs to connect with ticket management systems to generate tickets automatically about any backups that need immediate attention.

To ensure the security of data backups, Cobalt Iron has developed an architecture and security feature set called Cyber Shield for 24/7 threat protection, detection, and analysis that improves ransomware responsiveness. Compass is also being enhanced to use several patented techniques that are specific to analytics and ransomware. For example, analytics-based cloud brokering of data protection operations helps enterprises make secure, efficient, and cost-effective use of their cloud infrastructures. Another patented technique - dynamic IT infrastructure optimization in response to cyberthreats - offers unique ransomware analytics and automated optimization that will enable Compass to reconfigure IT infrastructure automatically when it detects cyberthreats, such as a ransomware attack, and dynamically adjust access to backup infrastructure and data to reduce exposure.
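As a hedged illustration of that "reconfigure on detection" idea (not Cobalt Iron's actual Cyber Shield logic), the sketch below switches backup infrastructure to a restricted posture whenever threat indicators are present. The indicator names and posture settings are hypothetical.

```python
# Hedged sketch: tighten access to backup infrastructure automatically when
# ransomware indicators appear. Settings and indicators are hypothetical.
RESTRICTED_MODE = {"immutable_snapshots": True, "delete_operations": "blocked",
                   "admin_access": "mfa_and_approval_required"}
NORMAL_MODE = {"immutable_snapshots": True, "delete_operations": "allowed",
               "admin_access": "mfa_required"}


def backup_posture(threat_indicators):
    # Indicators would come from anomaly analytics, e.g. encryption spikes
    # or unusual change rates in backup data.
    return RESTRICTED_MODE if threat_indicators else NORMAL_MODE


print(backup_posture(threat_indicators=["backup_change_rate_spike"]))
```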

Compass is part of IBM's product portfolio through the IBM Passport Advantage program. Through Passport Advantage, IBM sellers, partners, and distributors around the world can sell Compass under IBM part numbers to any organizations, particularly complex enterprises, that greatly benefit from the automated data protection and anti-ransomware solutions Compass delivers.

CIOReview's report concludes, "With such innovations, all eyes will be on Cobalt Iron for further advancements in humanless, secure data backup solutions. Cobalt Iron currently focuses on IP protection and continuous R&D to bring about additional cybersecurity-related innovations, promising a more secure future for an enterprise's data."

About Cobalt Iron

Cobalt Iron was founded in 2013 to bring about fundamental changes in the world's approach to secure data protection, and today the company's Compass® is the world's leading SaaS-based enterprise data protection system. Through analytics and automation, Compass enables enterprises to transform and optimize legacy backup solutions into a simple cloud-based architecture with built-in cybersecurity. Processing more than 8 million jobs a month for customers in 44 countries, Compass delivers modern data protection for enterprise customers around the world. www.cobaltiron.com

Product or service names mentioned herein are the trademarks of their respective owners.

Link to Word Doc: www.wallstcom.com/CobaltIron/220728-Cobalt_Iron-CIOReview_Top_IBM_Provider_2022.docx

Photo Link: www.wallstcom.com/CobaltIron/Cobalt_Iron_CIO_Review_Top_IBM_Solution_Provider_Award_Logo.pdf

Photo Caption: Cobalt Iron Compass® is a SaaS-based data protection platform leveraging strong IBM technologies for delivering a secure, modernized approach to data protection.

Follow Cobalt Iron

https://twitter.com/cobaltiron
https://www.linkedin.com/company/cobalt-iron/
https://www.youtube.com/user/CobaltIronLLC

TMCnet, 28 July 2022: https://www.tmcnet.com/usubmit/2022/07/28/9646864.htm
IBM Annual Cost of Data Breach Report 2022: Record Costs Usually Passed On to Consumers, “Long Breach” Expenses Make Up Half of Total Damage

IBM’s annual Cost of Data Breach Report for 2022 is packed with revelations, and as usual none of them are good news. Headlining the report is the record-setting cost of data breaches, with the global average now at $4.35 million. The report also reveals that much of that expense comes with the data breach version of “long Covid,” expenses that are realized more than a year after the attack.

Most organizations (60%) are passing these added costs on to consumers in the form of higher prices. And while 83% of organizations now report experiencing at least one data breach, only a small minority are adopting zero trust strategies.

Security AI and automation greatly reduce expected damage

The IBM report draws on input from 550 global organizations surveyed about the period between March 2021 and March 2022, in partnership with the Ponemon Institute.

Though the average cost of a data breach is up, it is only by about 2.6%; the average in 2021 was $4.24 million. This represents a total climb of 13% since 2020, however, reflecting the general spike in cyber crime seen during the pandemic years.

Organizations are also increasingly not opting to absorb the cost of data breaches, with the majority (60%) compensating by raising consumer prices separate from any other recent increases due to inflation or supply chain issues. The report indicates that this may be an underreported upward influence on prices of consumer goods, as 83% of organizations now say that they have been breached at least once.

Brad Hong, Customer Success Manager for Horizon3.ai, sees a potential consumer backlash on the horizon once public awareness of this practice grows: “It’s already a breach of confidence to lose the confidential data of customers, and sure there’s bound to be an organization across those surveyed who genuinely did put in the effort to protect against and curb attacks, but for those who did nothing, those who, instead of creating a disaster recovery plan, just bought cyber insurance to cover the org’s operational losses, and those who simply didn’t care enough to heed the warnings, it’s the coup de grâce to then pass the cost of breaches to the same customers who are now the victims of a data breach. I’d be curious to know what percent of the 60% of organizations who increased the price of their products and services are using the extra revenue for a war chest or to actually reinforce their security—realistically, it’s most likely just being used to fill a gap in lost revenue for shareholders’ sake post-breach. Without government regulations outlining restrictions on passing cost of breach to consumer, at the least, not without the honest & measurable efforts of a corporation as their custodian, what accountability do we all have against that one executive who didn’t want to change his/her password?”

Breach costs also have an increasingly long tail, as nearly half now come over a year after the date of the attack. The largest of these are generally fines that are levied after an investigation, and decisions or settlements in class action lawsuits. While the popular new “double extortion” approach of ransomware attacks can drive long-term costs in this way, the study finds that companies paying ransom demands to settle the problem quickly aren’t necessarily seeing a large amount of overall savings: their average breach cost drops by just $610,000.
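The arithmetic behind that observation, using the figures cited in this article, looks like this:

```python
# Simple comparison using the report's figures: paying the ransom lowered
# average breach cost by $610,000, while the average ransom payment (per the
# Sophos figure cited below) was about $812,000.
avg_savings_when_paying = 610_000
avg_ransom_payment = 812_000

net_effect = avg_savings_when_paying - avg_ransom_payment
print(f"paying changes average total cost by ${net_effect:,}")
# -> a net loss of roughly $202,000 on average, before counting the risk that
#    payment funds further attacks or fails to restore data.
```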

Sanjay Raja, VP of Product with Gurucul, expands on how knock-on data breach damage can continue for years: “The follow-up attack effect, as described, is a significant problem as the playbooks and solutions provided to security operations teams are overly broad and lack the necessary context and response actions for proper remediation. For example, shutting down a user or application or adding a firewall block rule or quarantining a network segment to negate an attack is not a sustainable remediation step to protect an organization on an ongoing basis. It starts with a proper threat detection, investigation and response solution. Current SIEMs and XDR solutions lack the variety of data, telemetry and combined analytics to not only identify an attack campaign and even detect variants on previously successful attacks, but also provide the necessary context, accuracy and validation of the attack to build both a precise and complete response that can be trusted. This is an even greater challenge when current solutions cannot handle complex hybrid multi-cloud architectures leading to significant blind spots and false positives at the very start of the security analyst journey.”

Rising cost of data breach not necessarily prompting dramatic security action

In spite of over four out of five organizations now having experienced some sort of data breach, only slightly over 20% of critical infrastructure companies have moved to zero trust strategies to secure their networks. Cloud security is lagging as well, with a little under half (43%) of all respondents saying that their security practices in this area are either “early stage” or do not yet exist.

Those that have onboarded security automation and AI elements are the only group seeing massive savings: their average cost of a data breach is $3.05 million lower. This particular study does not track average ransom demands, but refers to Sophos research that puts the most recent figure at $812,000 globally.

The study also notes serious problems with incident response plans, especially troubling in an environment in which the average ransomware attack is now carried out in four days or less and the “time to ransom” has dropped to a matter of hours in some cases. 37% of respondents say that they do not test their incident response plans regularly. 62% say that they are understaffed to meet their cybersecurity needs, and these organizations tend to suffer over half a million more dollars in damages when they are breached.

Of course, the cost of data breaches is not distributed evenly by geography or by industry type. Some are taking much bigger hits than others, reflecting trends established in prior reports. The health care industry is now absorbing a little over $10 million in damage per breach, with its average cost rising by $1 million from 2021. And companies in the United States face greater data breach costs than their counterparts around the world, at over $8 million per incident.

Shawn Surber, VP of Solutions Architecture and Strategy with Tanium, provides some insight into the unique struggles that the health care industry faces in implementing effective cybersecurity: “Healthcare continues to suffer the greatest cost of breaches but has among the lowest spend on cybersecurity of any industry, despite being deemed ‘critical infrastructure.’ The increased vulnerability of healthcare organizations to cyber threats can be traced to outdated IT systems, the lack of robust security controls, and insufficient IT staff, while valuable medical and health data— and the need to pay ransoms quickly to maintain access to that data— make healthcare targets popular and relatively easy to breach. Unlike other industries that can migrate data and sunset old systems, limited IT and security budgets at healthcare orgs make migration difficult and potentially expensive, particularly when an older system provides a small but unique function or houses data necessary for compliance or research, but still doesn’t make the cut to transition to a newer system. Hackers know these weaknesses and exploit them. Additionally, healthcare orgs haven’t sufficiently updated their security strategies and the tools that manufacturers, IT software vendors, and the FDA have made haven’t been robust enough to thwart the more sophisticated techniques of threat actors.”

Familiar incident types also lead the list of the causes of data breaches: compromised credentials (19%), followed by phishing (16%). Breaches initiated by these methods also tended to be a little more costly, at an average of $4.91 million per incident.

Cutting the cost of data breach

Though the numbers are never as neat and clean as averages would indicate, it would appear that the cost of data breaches is cut dramatically for companies that implement solid automated “deep learning” cybersecurity tools, zero trust systems and regularly tested incident response plans. Mature cloud security programs are also a substantial cost saver.

Top 10 data lake solution vendors in 2022

As the world becomes increasingly data-driven, businesses must find suitable solutions to help them achieve their desired outcomes. Data lake storage has garnered the attention of many organizations that need to store large amounts of unstructured, raw information until it can be used in analytics applications.

The data lake solution market is expected to grow rapidly in the coming years and is driven by vendors that offer cost-effective, scalable solutions for their customers.

Learn more about data lake solutions, what key features they should have and some of the top vendors to consider this year. 

What is a data lake solution?

A data lake is defined as a single, centralized repository that can store massive amounts of unstructured and semi-structured information in its native, raw form. 

It’s common for an organization to store unstructured data in a data lake if it hasn’t decided how that information will be used. Some examples of unstructured data include images, documents, videos and audio. These data types are useful in today’s advanced machine learning (ML) and advanced analytics applications.

Data lakes differ from data warehouses, which store structured, filtered information for specific purposes in files or folders. Data lakes were created in response to some of the limitations of data warehouses. For example, data warehouses are expensive and proprietary, cannot handle certain business use cases an organization must address, and may lead to unwanted information homogeneity.

On-premise data lake solutions were commonly used before the widespread adoption of the cloud. Now, cloud-based platforms, including those at the edge, are widely regarded as some of the best hosts for data lakes because of their inherent scalability and highly modular services. 

A 2019 report from the Government Accountability Office (GAO) highlights several business benefits of using the cloud, including better customer service and the acquisition of cost-effective options for IT management services.

Cloud data lakes and on-premise data lakes have pros and cons. Businesses should consider cost, scale and available technical resources to decide which type is best.

5 must-have features of a data lake solution

It’s critical to understand what features a data lake offers. Most solutions come with the same core components, but each vendor may have specific offerings or unique selling points (USPs) that could influence a business’s decision.

Below are five key features every data lake should have:

1. Various interfaces, APIs and endpoints

Data lakes that offer diverse interfaces, APIs and endpoints make it much easier to upload, access and move information. These capabilities matter because they let unstructured data serve a wide range of use cases, depending on a business’s desired outcome.

2. Support for or connection to processing and analytics layers

ML engineers, data scientists, decision-makers and analysts benefit most from a centralized data lake solution that stores information for easy access and availability. This characteristic can help data professionals and IT managers work with data more seamlessly and efficiently, thus improving productivity and helping companies reach their goals.

3. Robust search and cataloging features

Imagine a data lake with large amounts of information but no sense of organization. A viable data lake solution must incorporate generic organizational methods and search capabilities, which provide the most value for its users. Other features might include key-value storage, tagging, metadata, or tools to classify and collect subsets of information.
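
To make those cataloging ideas concrete, here is a minimal, vendor-neutral sketch in plain Python of the kind of tag-and-metadata index a data lake layers over its raw objects; the object keys, metadata fields and tags are invented purely for illustration:

# Minimal illustrative catalog: maps object keys to metadata and tags,
# and supports the tag-based search a data lake catalog typically offers.
from collections import defaultdict

catalog = {}                     # object key -> metadata record
tag_index = defaultdict(set)     # tag -> keys of objects carrying that tag

def register(key, metadata, tags):
    """Record a raw object's metadata and index it by tag."""
    catalog[key] = {"metadata": metadata, "tags": set(tags)}
    for tag in tags:
        tag_index[tag].add(key)

def find_by_tag(tag):
    """Return the keys of every registered object carrying the tag."""
    return sorted(tag_index.get(tag, set()))

# Hypothetical entries, for illustration only.
register("raw/claims/2022-07/scan-001.png",
         {"source": "mobile-capture", "content_type": "image/png"},
         tags=["claims", "unstructured", "2022"])
register("raw/call-center/2022-07/call-123.wav",
         {"source": "telephony", "content_type": "audio/wav"},
         tags=["audio", "unstructured", "2022"])

print(find_by_tag("unstructured"))   # prints both object keys

A real data lake pushes this same bookkeeping into managed services, but the shape of the problem (register, tag, search) stays the same.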

4. Security and access control

Security and access control are two must-have features of any digital tool. The threat landscape keeps expanding, making it easier for threat actors to exploit a company’s data and cause irreparable damage. Only authorized users should have access to a data lake, and the solution must have strong security controls to protect sensitive information.

5. Flexibility and scalability

More organizations are growing larger and operating at a much faster rate. Data lake solutions must be flexible and scalable to meet the ever-changing needs of modern businesses working with information.

Top 10 data lake solution vendors in 2022

Some data lake solutions are best suited for businesses in certain industries. In contrast, others may work well for a company of a particular size or with a specific number of employees or customers. This can make choosing a potential data lake solution vendor challenging. 

Companies considering investing in a data lake solution this year should check out some of the vendors below.

1. Amazon Web Services (AWS)

The AWS Cloud provides many essential tools and services that allow companies to build a data lake that meets their needs. The AWS data lake solution is widely used, cost-effective and user-friendly. It leverages the security, durability, flexibility and scalability that Amazon S3 object storage offers to its users. 

The data lake also features Amazon DynamoDB to handle and manage metadata. The AWS data lake offers an intuitive, web-based console user interface (UI) for managing the data lake: teams can set data lake policies, add or remove data packages, create manifests of datasets for analytics purposes, and search data packages.
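
As a rough sketch of how that S3-plus-DynamoDB pairing is often wired together with the AWS SDK for Python (boto3), the snippet below lands a raw file in object storage and registers its metadata for later search; the bucket name, table name and attribute schema are hypothetical, and working AWS credentials and region configuration are assumed:

# Sketch: store a raw file in S3 and record its metadata in DynamoDB.
# Bucket, table and attribute names are made up for illustration.
import boto3

s3 = boto3.client("s3")
catalog = boto3.resource("dynamodb").Table("datalake-catalog")  # hypothetical table

key = "raw/invoices/2022/07/invoice-0001.pdf"
with open("invoice-0001.pdf", "rb") as f:
    s3.put_object(Bucket="example-datalake-raw", Key=key, Body=f)

# Register the object so downstream analytics and search layers can find it.
catalog.put_item(Item={
    "object_key": key,                     # assumed partition key
    "source_system": "accounts-payable",
    "content_type": "application/pdf",
    "tags": ["invoices", "unstructured"],
})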

2. Cloudera

Cloudera is another top data lake vendor that will create and maintain safe, secure storage for all data types. Some of Cloudera SDX’s Data Lake Service capabilities include:

  • Data schema/metadata information
  • Metadata management and governance
  • Compliance-ready access auditing
  • Data access authorization and authentication for improved security

Other benefits of Cloudera’s data lake include product support, downloads, community and documentation. GSK and Toyota leveraged Cloudera’s data lake to garner critical business intelligence (BI) insights and manage data analytics processes.

3. Databricks 

Databricks is another viable vendor, and it also offers a handful of data lake alternatives. The Databricks Lakehouse Platform combines the best elements of data lakes and warehouses to provide reliability, governance, security and performance.

Databricks’ platform helps break down the silos that normally separate and complicate data and frustrate data scientists, ML engineers and other IT professionals. Aside from the platform, Databricks also offers its Delta Lake solution, an open-format storage layer that can improve data lake management processes. 

4. Domo

Domo is a cloud-based software company that provides big data solutions to companies of all sizes. Users have the freedom to choose a cloud architecture that works for their business. Domo is an open platform that can augment existing data lakes, whether they run in the cloud or on-premises. Combined cloud options include:

  • Choosing Domo’s cloud
  • Connecting to any cloud data
  • Selecting a cloud data platform

Domo offers advanced security features, such as BYOK (bring your own key) encryption, data access controls and governance capabilities. Well-known corporations such as Nestle, DHL, Cisco and Comcast leverage the Domo Cloud to better manage their needs.

5. Google Cloud

Google is another big tech player offering customers data lake solutions. Companies can use Google Cloud’s data lake to analyze any data securely and cost-effectively. It can handle large volumes of information and IT professionals’ various processing tasks. Companies that don’t want to rebuild their on-premise data lakes in the cloud can easily lift and shift their information to Google Cloud. 

Key features of Google’s data lakes include fully managed Apache Spark and Hadoop migration services, integrated data science and analytics, and cost management tools. Major companies like Twitter, Vodafone, Pandora and Metro have benefited from Google Cloud’s data lakes.

6. HP Enterprise

Hewlett Packard Enterprise (HPE) is another data lake solution vendor that can help businesses harness the power of their big data. HPE’s solution, called GreenLake, offers organizations a scalable, cloud-based platform that simplifies their Hadoop experience. 

HPE GreenLake is an end-to-end solution that includes software, hardware and HPE Pointnext Services. These services can help businesses overcome IT challenges and spend more time on meaningful tasks. 

7. IBM

Business technology leader IBM also offers data lake solutions for companies. IBM is well known for its cloud computing and data analytics solutions, making it a strong candidate for organizations looking for a suitable data lake. IBM’s cloud-based approach operates on three key principles: embedded governance, automated integration and virtualization.

These are some data lake solutions from IBM: 

  • IBM Db2
  • IBM Db2 BigSQL
  • IBM Netezza
  • IBM Watson Query
  • IBM Watson Knowledge Catalog
  • IBM Cloud Pak for Data

With so many data lakes available, there’s surely one to fit a company’s unique needs. Financial services, healthcare and communications businesses often use IBM data lakes for various purposes.

8. Microsoft Azure

Microsoft offers its Azure Data Lake solution, which features easy storage methods, processing, and analytics using various languages and platforms. Azure Data Lake also works with a company’s existing IT investments and infrastructure to make IT management seamless.

The Azure Data Lake solution is affordable, comprehensive, secure and supported by Microsoft. Companies benefit from 24/7 support and expertise to help them overcome any big data challenges they may face. Microsoft is a leader in business analytics and tech solutions, making it a popular choice for many organizations.
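
For comparison, a minimal sketch of pushing a raw file into Azure Data Lake Storage Gen2 with Microsoft’s Python SDK (azure-storage-file-datalake) might look like the following; the account name, key, file system and paths are placeholders, and this is an assumption-laden illustration rather than Microsoft’s reference pattern:

# Sketch: upload one raw JSON record to an ADLS Gen2 file system.
# Account URL, credential, file system and path are placeholders.
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential="<account-key>",
)
file_system = service.get_file_system_client("raw")

data = b'{"device": 42, "reading": 17.3}'
file_client = file_system.create_file("sensor-logs/2022/07/device-42.json")
file_client.append_data(data, offset=0, length=len(data))
file_client.flush_data(len(data))   # commit the appended bytes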

9. Oracle

Companies can use Oracle’s Big Data Service to build data lakes to manage the influx of information needed to power their business decisions. The Big Data Service is automated and will provide users with an affordable and comprehensive Hadoop data lake platform based on Cloudera Enterprise. 

This solution can be used as a data lake or an ML platform. Oracle’s offering is also notable as one of the best open source-based data lakes available, and it comes with Oracle tools that add even more value. Oracle’s Big Data Service is scalable, flexible, secure and meets data storage requirements at a low cost.

10. Snowflake

Snowflake’s data lake solution is secure, reliable and accessible, and it helps businesses break down silos to improve their strategies. The top features of Snowflake’s data lake include a central platform for all information, fast querying and secure collaboration.

Siemens and Devon Energy are two companies that provide testimonials regarding Snowflake’s data lake solutions and offer positive feedback. Another benefit of Snowflake is its extensive partner ecosystem, including AWS, Microsoft Azure, Accenture, Deloitte and Google Cloud.

The importance of choosing the right data lake solution vendor 

Companies that spend extra time researching which vendors will offer the best enterprise data lake solutions for them can manage their information better. Rather than choose any vendor, it’s best to consider all options available and determine which solutions will meet the specific needs of an organization.

Every business uses information, some more than others. However, the world is becoming highly data-driven — therefore, leveraging the right data solutions will only grow more important in the coming years. This list will help companies decide which data lake solution vendor is right for their operations.

IBM aims for immediate quantum advantage with error mitigation technique

You don’t have to be a physicist to know that noise and quantum computing don’t mix. Any noise, movement or temperature swing causes qubits – the quantum computing equivalent to a binary bit in classical computing – to fail.

That’s one of the main reasons quantum advantage (the point at which quantum surpasses classical computing) and quantum supremacy (when quantum computers solve a problem not feasible for classical computing) feel like longer-term goals and emerging technology. It’s worth the wait, though, as quantum computers promise exponential performance increases over classical computing, which tops out at supercomputing. However, due to the intricacies of quantum physics (e.g., entanglement), quantum computers are also more prone to errors from environmental factors than supercomputers or high-performance computers.

Quantum errors arise from what’s known as decoherence, a process that occurs when noise or nonoptimal temperatures interfere with qubits, changing their quantum states and causing information stored by the quantum computer to be lost.

The road(s) to quantum

Many enterprises view quantum computing as an all-or-nothing proposition: if you want value from a quantum computer, you need fault-tolerant quantum processors and a multitude of qubits. While we wait, we’re stuck in the NISQ era (noisy intermediate-scale quantum), in which quantum hasn’t surpassed classical computers.

That’s an impression IBM hopes to change.

In a blog published today by IBM, its quantum team (Kristan Temme, Ewout van den Berg, Abhinav Kandala and Jay Gambetta) writes that the history of classical computing is one of incremental advances. 

“Although quantum computers have seen tremendous improvements in their scale, quality and speed in latest years, such a gradual evolution seems to be missing from the narrative,” the team wrote.  “However, latest advances in techniques we refer to broadly as quantum error mitigation allow us to lay out a smoother path towards this goal. Along this path, advances in qubit coherence, gate fidelities and speed immediately translate to measurable advantage in computation, akin to the steady progress historically observed with classical computers.”

Finding value in noisy qubits

In a move to get a quantum advantage sooner – and in incremental steps – IBM claims to have created a technique that’s designed to tap more value from noisy qubits and move away from NISQ.

Instead of focusing solely on fault-tolerant computers, IBM’s goal is continuous and incremental improvement, Jerry Chow, the director of hardware development for IBM Quantum, told VentureBeat.

To mitigate errors, Chow points to IBM’s new probabilistic error cancellation, a technique designed to invert the effect of noise in quantum circuits so that results come out as if error-free, even though the circuits themselves are noisy. It does bring a runtime tradeoff, he said, because you give up speed by running many more circuits to gain insight into the noise causing the errors.
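
IBM’s production implementation lives in its own tooling, but the core idea behind probabilistic error cancellation can be sketched independently: write the ideal operation as a signed, quasi-probability mixture of operations the noisy hardware can actually run, sample circuit variants from that mixture, and average the sign-corrected, reweighted results. The toy coefficients and "measurements" below are invented purely to show the bookkeeping and the sampling-overhead cost Chow describes; they are not IBM’s numbers or code:

# Toy probabilistic-error-cancellation bookkeeping (illustrative only).
# Assume the ideal expectation value decomposes as c1*noisy_1 + c2*noisy_2 with c2 < 0.
import numpy as np

rng = np.random.default_rng(0)

coeffs = np.array([1.3, -0.3])      # made-up quasi-probability coefficients
gamma = np.abs(coeffs).sum()        # sampling overhead; variance scales with gamma**2
probs = np.abs(coeffs) / gamma      # probability of drawing each noisy variant
signs = np.sign(coeffs)

def run_noisy_variant(i):
    """Stand-in for running noisy circuit variant i and measuring an observable."""
    noisy_means = [0.52, 0.40]                       # pretend noisy expectation values
    return noisy_means[i] + rng.normal(scale=0.05)   # shot noise

shots = 20_000
draws = rng.choice(len(coeffs), size=shots, p=probs)
samples = np.array([gamma * signs[i] * run_noisy_variant(i) for i in draws])

print(f"mitigated estimate ~ {samples.mean():.3f} (target here is {1.3*0.52 - 0.3*0.40:.3f})")
print(f"sampling overhead gamma = {gamma:.2f}; ~{gamma**2:.1f}x more shots for the same precision")

The overhead term is exactly the runtime tradeoff Chow mentions: the noisier the hardware, the larger gamma grows, and the more circuit executions are needed to keep the mitigated estimate precise.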

The goal of the new technique is to provide a step, rather than a leap, toward quantum supremacy. It’s “a near-term solution,” Chow said, and part of a suite of techniques that will help IBM learn about error correction through error mitigation. “As you increase the runtime, you learn more as you run more qubits,” he explained.

Chow said that while IBM continues to scale its quantum platform, this offers an incremental step. Last year, IBM unveiled its 127-qubit Eagle processor, which is capable of running quantum circuits that can’t be replicated classically. Based on the quantum roadmap it laid out in May, IBM is on track to reach 4,000-plus-qubit quantum devices in 2025.

Not an either-or scenario: Quantum starts now

Probabilistic error cancellation represents a shift for IBM and the quantum field overall. Rather than relying solely on experiments to achieve full error correction under certain circumstances, IBM has focused on a continuous push to address quantum errors today while still moving toward fault-tolerant machines, Chow said. “You need high-quality hardware to run billions of circuits. Speed is needed. The goal is not to do error mitigation  long-term. It’s not all or nothing.”

IBM quantum computing bloggers add that its quantum error mitigation technique “is the continuous path that will take us from today’s quantum hardware to tomorrow’s fault-tolerant quantum computers. This path will let us run larger circuits needed for quantum advantage, one hardware improvement at a time.”

Astadia Publishes Mainframe to Cloud Reference Architecture Series

Press release content from Business Wire. The AP news staff was not involved in its creation.

BOSTON--(BUSINESS WIRE)--Aug 3, 2022--

Astadia is pleased to announce the release of a new series of Mainframe-to-Cloud reference architecture guides. The documents cover how to refactor IBM mainframe applications to Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and Oracle Cloud Infrastructure (OCI). The documents offer a deep dive into the migration process to all major target cloud platforms using Astadia’s FastTrack software platform and methodology.

As enterprises and government agencies are under pressure to modernize their IT environments and make them more agile, scalable and cost-efficient, refactoring mainframe applications in the cloud is recognized as one of the most efficient and fastest modernization solutions. By making the guides available, Astadia equips business and IT professionals with a step-by-step approach on how to refactor mission-critical business systems and benefit from highly automated code transformation, data conversion and testing to reduce costs, risks and timeframes in mainframe migration projects.

“Understanding all aspects of legacy application modernization and having access to the most performant solutions is crucial to accelerating digital transformation,” said Scott G. Silk, Chairman and CEO. “More and more organizations are choosing to refactor mainframe applications to the cloud. These guides are meant to assist their teams in transitioning fast and safely by benefiting from Astadia’s expertise, software tools, partnerships, and technology coverage in mainframe-to-cloud migrations,” said Mr. Silk.

The new guides are part of Astadia’s free Mainframe-to-Cloud Modernization series, an extensive collection of guides covering various mainframe migration options, technologies, and cloud platforms. The series covers IBM (NYSE: IBM) mainframes.

In addition to the reference architecture diagrams, these comprehensive guides include various techniques and methodologies that may be used in forming a complete and effective Legacy Modernization plan. The documents analyze the important role of the mainframe platform, and how to preserve previous investments in information systems when transitioning to the cloud.

In each of the IBM Mainframe Reference Architecture white papers, readers will explore:

  • Benefits, approaches, and challenges of mainframe modernization
  • Understanding typical IBM Mainframe Architecture
  • An overview of Azure/AWS/Google Cloud/Oracle Cloud
  • Detailed diagrams of IBM mappings to Azure/AWS/Google Cloud/Oracle Cloud
  • How to ensure project success in mainframe modernization

The guides are available for download here:

To access more mainframe modernization resources, visit the Astadia learning center on www.astadia.com.

About Astadia

Astadia is the market-leading software-enabled mainframe migration company, specializing in moving IBM and Unisys mainframe applications and databases to distributed and cloud platforms in unprecedented timeframes. With more than 30 years of experience, and over 300 mainframe migrations completed, enterprises and government organizations choose Astadia for its deep expertise, range of technologies, and the ability to automate complex migrations, as well as testing at scale. Learn more on www.astadia.com.

View source version on businesswire.com:https://www.businesswire.com/news/home/20220803005031/en/

CONTACT: Wilson Rains, Chief Revenue Officer

Wilson.Rains@astadia.com

+1.877.727.8234


SOURCE: Astadia

Copyright Business Wire 2022.

InterPro Grows Q2 Bookings, Named One of 10 IBM Solution Providers to Watch in 2022

STONEHAM, Mass., July 12, 2022 (GLOBE NEWSWIRE) -- InterPro Solutions, which offers the first and only suite of mobile solutions designed exclusively for IBM Maximo®, announced today that Q2 was another successful quarter, improving sales bookings by more than 50 percent over Q1, and was selected by CIOCoverage to its list of 10 IBM Solution Providers to Watch in 2022.

IBM Maximo is the top enterprise asset management (EAM) software in the world, used by millions of operations and maintenance professionals to manage complex facilities and field environments. InterPro offers a suite of mobile apps built exclusively for Maximo that O&M teams need to do their jobs efficiently and effectively without the cost, complexity, and service impacts of available alternatives.

In Q2, InterPro grew bookings by more than 13 percent compared to Q2 2021 and by more than 50 percent over Q1 2022. Over the period, InterPro added a number of innovative organizations to its client list, including a major theme park and an Ivy League medical school. InterPro also saw expansions at current clients Maryland Department of Transportation, Duke Energy, Hammerhead Resources, EQT Corporation, West Fraser, Hong Kong Jockey Club, and Maricopa County, among others. For the sixth straight quarter, the company’s sales pipeline reached a new high.

A number of bookings were for a 2021 addition to the EZMax Suite, EZMaxVendor. EZMaxVendor is a cloud solution that enables organizations to manage and schedule external service vendors like they’re an extension of their internal workforce. It eliminates surprises by establishing a shared understanding on work scope, cost, location, start time, and technicians, and automatically saves all work execution details to the organization’s Enterprise Asset Management (EAM) system.

“We went into Q2 with high expectations. Driven by expanded EZMaxMobile footprints at a number of existing clients and continued success with our newer products, we increased our sales by more than 50 percent over Q1 and again saw expansion of our sales pipeline to an all-time high,” said Dan Smith, Vice President, Sales and Marketing at InterPro Solutions.

In April, InterPro announced it was selected by CIOCoverage for its list of 10 IBM Solution Providers to Watch in 2022. CIOCoverage helps CEOs, CXOs, and CIOs stay abreast of the latest digital advancements and technological surges, helping their organizations respond effectively to client expectations and evolve their digital technologies and processes. Its 10 IBM Solution Providers to Watch in 2022 Special Edition highlights a select list of IBM business partners leading that digital advancement.

CIOCoverage described InterPro’s EZMax Suite for IBM Maximo as having “accessible, understandable interfaces, vibrant visuals, and robust functionality allowing for maintenance managers and technicians to work efficiently and effectively.” They continued, “Offering the first and only suite of mobile Enterprise Asset Management (EAM) applications built specifically for IBM Maximo, InterPro Solutions leverages native Maximo rules and permissions, and even extends native Maximo capabilities to reflect how people carry out their work activities.”

“InterPro has developed a suite of Maximo mobile products with unparalleled performance and unmatched mobile functionality,” said Bill Fahey, InterPro Solutions’ Chief Executive Officer. “Our efforts resulted in new clients across a variety of industries, an expanded footprint across many existing clients and industry recognition by CIOCoverage. Having increased our sales bookings by more than 50 percent over Q1 while continuing to grow our sales pipeline, we’re very bullish about the remainder of 2022.”

To learn more about InterPro’s EZMax Suite for Maximo, visit https://interprosoft.com/ezmax-suite/

About InterPro Solutions
InterPro Solutions, an IBM Business Partner, offers the first and only suite of mobile Enterprise Asset Management (EAM) solutions designed exclusively for IBM Maximo – using native Maximo rules, permissions and datastores – eliminating double updates, data lags, and synchronization failures. InterPro’s EZMax Suite expands upon native Maximo capabilities to mirror the way people actually work – with intuitive navigation, rapid app response, and rich functionality – allowing operations and maintenance professionals to effectively communicate with their community members and manage tasks, technicians, and vendors in a way that improves responsiveness to their organizations. To learn more, visit interprosoft.com.


Media contact:
Melissa Tyler
mtyler@interprosoft.com
781-213-1166
