C2010-501 IBM Maximo Asset Management V7.5 Infrastructure Implementation dumps with practice test for good pass marks

We are often reminded that a persistent issue in the IT business is the unavailability of reliable C2010-501 free PDF material. Our test prep practice questions give you everything you need to prepare for the certification test. Our IBM C2010-501 practice questions offer real test questions with verified answers that mirror the actual exam. We at killexams.com are prepared to help you finish your C2010-501 test with high scores.

Exam Code: C2010-501 Practice test 2022 by Killexams.com team
C2010-501 IBM Maximo Asset Management V7.5 Infrastructure Implementation

Exam Title : IBM Certified Infrastructure Deployment Professional - Maximo Asset Management V7.5
Exam ID : C2010-501
Exam Duration : 90 mins
Questions in test : 57
Passing Score : 41 / 57
Official Training : Product Documentation
Exam Center : Pearson VUE
Real Questions : IBM Maximo Asset Management Infrastructure Implementation Real Questions
VCE practice test : IBM C2010-501 Certification VCE Practice Test

Planning
- Given a customer's need to deploy IBM Maximo Asset Management (Maximo), evaluate the environment, user requirements, security considerations, language support, and organizational processes so that an implementable, quantifiable, and scalable build plan for the installation of Maximo has been developed.
- Given that a customer will have Maximo installed on a supported J2EE platform, explain the J2EE configuration concepts so that the customer understands how J2EE can be configured to meet their needs.
- Given that a customer is planning for a Maximo Asset Management installation, explain JVM performance and optimization settings and concepts so that the tools to optimize the system have been explained (an illustrative set of JVM options appears after this list).
- Given that Maximo functionality is to be separated onto different JVMs for performance reasons, explain the JVM roles for Maximo so that an individual understands why roles should be separated onto different JVMs and the benefits of doing so.
- Given that Maximo is to be installed, define the installation system requirements so that the environment is ready for Maximo to be installed.
- Given that Maximo security implementation decisions need to be made, explain the different options so that the correct planning decisions can be made.
- Given that Maximo is to be installed, configure the security requirements needed to install Maximo so that pre-installation Security Configuration has been completed.
- Given that Maximo is to be installed, review the Maximo search types with the customer and explain the impact that they can have on system performance so that search types and their implications have been reviewed with the customer.
- Given a customer's need to deploy Maximo, assess the proposed infrastructure so that the installation of Maximo can be implemented on the proposed infrastructure.
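
As a concrete illustration of the kinds of JVM settings the planning objectives above cover, the fragment below shows heap and garbage-collection options of the sort commonly tuned for a Maximo application server JVM. The values are placeholders for discussion, not IBM-recommended sizes; actual settings depend on user load, JVM vendor, and the sizing guidance for the release.

    # Illustrative JVM options for a Maximo UI JVM (values are examples only)
    -Xms4096m          # initial heap size
    -Xmx4096m          # maximum heap size; fixing min = max avoids resize pauses
    -Xmn1024m          # nursery size for generational collection
    -verbose:gc        # write GC activity to the logs for later analysis
    -Xgcpolicy:gencon  # IBM J9 generational-concurrent GC policy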

Installation
- Given that IBM Maximo Asset Management (Maximo) middleware has been defined and Maximo is to be installed, explain the Maximo installation options so that the Maximo installation options are defined.
- Given that Maximo is to be installed, explain the Maximo installation flow so that the Maximo, fix packs, add-ons and industry solutions installation flow is understood.
- Given that Maximo is to be installed, explain the database script installation process so that the process to update the Maximo database is understood.
- Given that Maximo is to be installed, explain the different Maximo installation components so that the components and how they are used are understood.
- Given that Maximo is to be installed, describe the use of the autonomic deployment engine so that the Maximo installation use of the deployment engine is understood.
- Given that Maximo has been installed, describe the use of the Maximo tools so that the Maximo tools usage is understood.
- Given that a Maximo product is to be installed, perform the tasks to manually install Maximo so that Maximo is installed with the current fix pack.
- Given that the middleware has been installed, validate core technology configurations so that users can connect to the installed Maximo system via the middleware.
- Given that the Maximo product is in the process of installation or has been installed, describe the installation and tool log files so that the appropriate course of action can be taken.
- Given that the Maximo product is installed, verify that the Maximo database has been installed correctly so that the Maximo system is ready to be configured.
- Given that Maximo is to be upgraded, explain the Maximo upgrade process from 7.1 to 7.5 so that the customer understands the upgrade process.
- Given that the IBM product is installed, verify which version of Maximo has been installed so that Maximo is installed to the targeted level.

Configuration
- Given IBM Maximo Asset Management (Maximo) is already installed, enable Application Server Security within Maximo configuration and J2EE server so that users can connect to Maximo by using LDAP authentication.
- Given Maximo is already installed and is ready to integrate with other systems, validate the Maximo Integration Framework (MIF) configuration is correct and functional so that Maximo can send and receive transactions to and from external systems.
- Given Maximo is already installed, know, explain, and utilize the maximo.properties file so that the Maximo system can be configured to use alternate middleware and database connection points (see the sketch after this list).
- Given that Maximo is already installed, manually build a Maximo EAR file and deploy it so that the Maximo application has been updated on J2EE server.
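
As referenced in the objective above, the maximo.properties file is where the server's middleware and database connection points live. The fragment below is a minimal sketch using standard property keys (mxe.name, mxe.db.url, and related mxe.db.* entries); all values are hypothetical and would be replaced with site-specific endpoints.

    # Minimal maximo.properties sketch - values are hypothetical
    mxe.name=MXServer
    mxe.db.url=jdbc:db2://db2host.example.com:50005/maxdb75
    mxe.db.user=maximo
    mxe.db.password=changeme
    mxe.db.schemaowner=maximo
    # Pointing mxe.db.url at a different server is how an alternate
    # database connection point is configured.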

Performance Tuning and Problem Determination
- Given that IBM Maximo Asset Management (Maximo) is installed, review and explain the Maximo performance log settings so that the Maximo system is running at an optimal performance level.
- Given that Maximo is installed, utilize basic database functionality to analyze installation/performance issues so that the database is normalized and tuned to top performance levels.
- Given that the Maximo product is installed, determine if queries are efficient so that Maximo components queries are optimized.
- Given that Maximo is installed, review the Start Center portlets so that users achieve a balance between system performance and key data accessibility.
- Given that the application server instance requires performance analysis, assess the application server performance so that application server performance is analyzed and corrective actions can be taken.
- Given that Thread Dumps and Heap Dumps are to be analyzed, perform the analysis tasks so that the Heap Dumps are analyzed and Garbage Collection is tuned in the middleware.

IBM Maximo Asset Management V7.5 Infrastructure Implementation

IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
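
To make the hub-and-spoke idea concrete, here is a minimal Python sketch of a hub acting as a control plane that pushes an edge application out to registered spokes. It is a toy model of the topology described above, not IBM code; all names are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Spoke:
        """An edge location (factory floor, retail branch) managed by the hub."""
        name: str
        apps: set = field(default_factory=set)

    class Hub:
        """Central control plane that deploys edge apps to registered spokes."""
        def __init__(self):
            self.spokes = {}

        def register(self, spoke):
            self.spokes[spoke.name] = spoke

        def deploy(self, app, targets=None):
            # Roll out to every spoke unless a subset is named.
            for name in (targets or self.spokes):
                self.spokes[name].apps.add(app)

    hub = Hub()
    hub.register(Spoke("factory-floor-7"))
    hub.register(Spoke("retail-branch-42"))
    hub.deploy("defect-detection-model")  # one control plane, many locations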

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge by industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
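
The pattern just described, detecting a hot spot and immediately opening a work order, can be sketched in a few lines of Python. Maximo does expose REST interfaces, but the endpoint, authentication, and field names below are assumptions made for illustration, not the documented API.

    import requests  # endpoint and payload below are illustrative assumptions

    HOT_SPOT_THRESHOLD_C = 90.0  # hypothetical alarm threshold

    def check_thermal_reading(asset_id, temperature_c):
        """Open a work order when a connector runs hotter than the threshold."""
        if temperature_c <= HOT_SPOT_THRESHOLD_C:
            return None
        payload = {
            "assetnum": asset_id,
            "description": f"Hot spot detected: {temperature_c:.1f} C",
            "priority": 1,
        }
        # Hypothetical URL; a real integration would use Maximo's
        # documented REST API and proper authentication.
        return requests.post("https://maximo.example.com/api/workorders",
                             json=payload, timeout=10)

    check_thermal_reading("XFMR-104", 97.3)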

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
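
One simple way to see what "monitoring and retraining" means in practice is a statistical drift check: compare the live feature distribution at the edge against the training baseline, and flag the model for retraining when they diverge. The sketch below uses a standardized mean shift; it is a generic technique, not IBM's specific method, and the threshold is deployment-dependent.

    import numpy as np

    def drift_score(train_feature, live_feature):
        """Standardized shift of the live mean relative to training statistics."""
        mu, sigma = train_feature.mean(), train_feature.std() + 1e-9
        return abs(live_feature.mean() - mu) / sigma

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 10_000)  # data the model was trained on
    live = rng.normal(0.6, 1.0, 1_000)    # shifted data now arriving at the edge

    if drift_score(train, live) > 0.5:    # threshold chosen per deployment
        print("Drift detected: schedule retraining")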

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s up to our current in-progress fourth revolution, Industry 4.0, that promotes a digital transformation.

Manufacturing is the fastest growing and the largest of IBM's four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production; the quicker the time-to-value and the return-on-investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation means manually examining thousands of images, a time-consuming process that tends to produce redundant labels. Using ML-based automation for data summarization will accelerate the process and produce better model performance (a sketch of this selection idea follows this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
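
The data summarization idea in the third bullet, picking a small but diverse subset of images to label, can be approximated with ordinary clustering: embed the images, cluster the embeddings, and label one representative per cluster. This sketch uses scikit-learn's KMeans as a stand-in; IBM's actual summarization method is not described in the article.

    import numpy as np
    from sklearn.cluster import KMeans

    def select_for_labeling(embeddings, budget):
        """Pick `budget` diverse samples: the one nearest each cluster center."""
        km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
        picks = []
        for c in range(budget):
            members = np.flatnonzero(km.labels_ == c)
            dists = np.linalg.norm(
                embeddings[members] - km.cluster_centers_[c], axis=1)
            picks.append(members[dists.argmin()])
        return np.array(picks)

    embeddings = np.random.rand(5000, 128)          # stand-in image embeddings
    to_label = select_for_labeling(embeddings, 50)  # label 50 instead of 5000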

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
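
A minimal sketch of the federated idea: each spoke takes a training step on its own private data, and only the resulting model weights travel to the hub, which averages them. This is the classic federated averaging recipe in toy form (linear regression, one step per round), not IBM's implementation.

    import numpy as np

    def local_update(w, X, y, lr=0.1):
        """One gradient step of linear regression on a spoke's private data."""
        grad = 2 * X.T @ (X @ w - y) / len(y)
        return w - lr * grad

    def federated_round(global_w, spokes):
        """Spokes train locally; only weights, never raw data, reach the hub."""
        local_ws = [local_update(global_w.copy(), X, y) for X, y in spokes]
        sizes = [len(y) for _, y in spokes]
        return np.average(local_ws, axis=0, weights=sizes)  # weighted FedAvg

    rng = np.random.default_rng(1)
    true_w = np.array([2.0, -1.0])
    spokes = []
    for _ in range(5):                      # five spokes with private data
        X = rng.normal(size=(200, 2))
        spokes.append((X, X @ true_w + rng.normal(0, 0.1, 200)))

    w = np.zeros(2)
    for _ in range(100):
        w = federated_round(w, spokes)
    print(w)  # approaches true_w without pooling any spoke's data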

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate - optimized to meet data lifecycle, regulatory, and compliance as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can't afford such resource extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million (a small pruning sketch follows this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
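
As noted in the third item above, a simple instance of model compression is magnitude pruning: keep only the largest weights and zero the rest, shrinking the deployable footprint. The sketch below prunes a random dense layer; it illustrates the general technique, not IBM's compression pipeline.

    import numpy as np

    def magnitude_prune(weights, keep_ratio):
        """Zero all but the largest-magnitude weights (unstructured pruning)."""
        k = max(1, int(weights.size * keep_ratio))
        threshold = np.sort(np.abs(weights), axis=None)[-k]
        return np.where(np.abs(weights) >= threshold, weights, 0.0)

    layer = np.random.randn(1024, 1024)               # stand-in dense layer
    pruned = magnitude_prune(layer, keep_ratio=0.02)  # keep 2% of parameters
    print((pruned != 0).mean())                       # ~0.02 of original size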

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for locations that still run servers but call for a single-node, non-clustered deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example NVidia Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AL/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
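
In miniature, slice management of the kind described here is a constrained selection problem: measure each slice's latency and bandwidth, then place a service on a slice that meets its targets. The Python sketch below shows only that selection logic as a toy; real 5G slice orchestration follows 3GPP interfaces and is far richer than this.

    from dataclasses import dataclass

    @dataclass
    class Slice:
        name: str
        latency_ms: float      # measured slice latency
        bandwidth_mbps: float  # measured slice throughput

    def pick_slice(slices, max_latency_ms, min_bandwidth_mbps):
        """Pick a feasible slice, preferring the least over-provisioned one."""
        feasible = [s for s in slices
                    if s.latency_ms <= max_latency_ms
                    and s.bandwidth_mbps >= min_bandwidth_mbps]
        return min(feasible, key=lambda s: s.bandwidth_mbps, default=None)

    slices = [Slice("urllc-1", 4.0, 80.0), Slice("embb-1", 25.0, 900.0)]
    choice = pick_slice(slices, max_latency_ms=10.0, min_bandwidth_mbps=50.0)
    print(choice.name)  # urllc-1: meets both targets with the least headroom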

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data Analytics Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through things like the example shown above for software defined storage for federated namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That has little value and would simply function as a hub-to-spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

IBM’s gobbling up AI companies left and right — and we love it

Big Blue’s been on a buying spree lately with Databand.ai, a big data startup, becoming its latest acquisition. Don’t blink. If you do, you might miss another huge IBM buyout.

Up front: Big data is a big deal. Less than a decade ago, many businesses were manually entering data into spreadsheets to meet their insight needs. Today, even the most modest startups can benefit from deep analytics.

Unfortunately, the landscape of companies that provide targeted services for a spectrum of industries is somewhat barren.


Simply put, you can’t just integrate a bunch of generic AI models into your IT stack and hope to magically pipeline solutions to your company’s problems.

It takes infrastructure and expertise to turn your hoard of data into action points.

Background: IBM’s spent over a century developing the infrastructure. But expertise is a moving target. To keep up with the modern influx of deep learning and data solutions in a period of technological turbulence, the company’s new CEO has opened up the corporate wallet in hopes of building a data-parsing juggernaut.

Per a recent IBM blog post:

Databand.ai is IBM’s fifth acquisition in 2022 as the company continues to bolster its hybrid cloud and AI skills and capabilities. IBM has acquired more than 25 companies since Arvind Krishna became CEO in April 2020.

This particular acquisition shores up IBM’s ability to provide “data observability” solutions for its clients and customers.

In other words, Databand.ai comes with a suite of products and a team of employees who know how to turn giant troves of data into useful insights.

According to IBM:

A rapidly growing market opportunity, data observability is quickly emerging as a key solution for helping data teams and engineers better understand the health of data in their system and automatically identify, troubleshoot and resolve issues, like anomalies, breaking data changes or pipeline failures, in near real-time.

Quick take: There’s a lot more to the world of big data than you might think. With this acquisition, IBM not only gets software solutions it can integrate into its current cornucopia of management and analytics tools, but it also gets a team that’s ready to hit the ground running for the company’s clients.

Databand.ai just finished a funding round prior to the acquisition wherein it raised over $14 million — that’s a pretty good indication the company’s on solid footing.

Here at Neural, we love it. Databand’s joining a company whose CEO has their finger firmly on the pulse of big data and IBM’s expanding its already industry-leading portfolio of AI-powered solutions.

The origin of Neo4j
Killexams : The origin of Neo4j

“The first code for Neo4j and the property graph database was written in IIT Bombay”, said the Chief Marketing Officer at Neo4j, Chandra Rangan.

In an exclusive interview with Analytics India Magazine, Rangan said that the first piece of code was sketched by Emil Eifrem — who is the founder and CEO of Neo4j — on a flight to Bombay, where he worked with an intern from IIT Bombay to develop the graph database platform.

Rangan joined Neo4j as the chief marketing officer (CMO) on May 10, 2022. Prior to this, he worked at Google, running Google Cloud Platform product marketing and, more recently, product-led growth, strategy, and operations for Google Maps Platform. Rangan has over two decades of technology infrastructure experience across marketing leadership, strategy, and operations at Hewlett Packard Enterprise, Gartner, Symantec, McKinsey, and IBM. 

Founded in 2007, Neo4j has more than 700 employees globally. In June 2022, the company raised about $325 million in a Series F funding round led by Eurazeo, alongside participation from GV (formerly Google Ventures) and other existing investors like One Peak, Creandum, Greenbridge Partners, DTCP, and Lightrock.

This is one of the largest investments in a private database company, and it raised Neo4j’s valuation to over $2 billion. That is even bigger than MongoDB, which raised a total of $311 million before going public and about $192 million in its IPO, for a valuation of $1.2 billion.

Bets big on India 

With its latest funding round, Neo4j is looking to invest in expanding its footprint globally, and India is one of its top choices, thanks to a larger developer ecosystem, alongside a burgeoning startup ecosystem and IT service providers using its platform to offer solutions to global customers. 

Neo4j’s community edition, which is open source, is widely adopted by developers in the country. “We have an overall community of almost a quarter million users who are familiar with our platform”, said Rangan, explaining that it is one of the largest developer communities in the country. With the fresh infusion of funds, the company looks to tap into the market, expand its services, sales, and support, and invest in the right strategies going forward.

As part of its expansion plans, Neo4j started hiring for sales leadership and country manager roles last year and will continue that momentum this year. “This is a big bet for us in multiple ways”, added Rangan, pointing at the company’s Indian roots and all the innovations in the country.

Besides India, Neo4j has a strong presence in Silicon Valley and Sweden and has a huge developer ecosystem in the US, China, Europe, South East Asia and others. 

Strategies for expansion 

Over the years, Neo4j has grown through developers and some of the early adopters of its platform. “Fortunately, developers interested in graph databases will typically start with us”, said Rangan affirmatively.

Further, explaining the conversion cycle, he said that once they know about graph databases, they later join the community edition. Then, once they get comfortable with the use cases and start putting this into production, they eventually get into a paid version for the advanced security, support, scalability, and commercial constructs. 

“In India, that’s the similar motion we are seeing”, said Rangan. He revealed that they already have a huge developer community. Banking on this community, they plan to invest in continuing the engagement with the community in a meaningful way. 

Of late, the company has also started hiring several community leaders to encourage proactive engagement within the community. In addition, it is also investing heavily in sales and marketing engines, including technical sales, which work closely with organisations in building the use cases, alongside the implementation of services and support. 

What makes Neo4j special? 

One thing that makes Neo4j stand apart from other players is its intuitiveness in helping deploy applications faster because of its flexible schema. This helps developers to add properties, nodes, and more. “It gives tremendous flexibility for developers so they can get to the outcome much more quickly”, said Rangan. 

But what about the learning curve? Rangan said, “Literally, for a new developer, if they start learning graphs for the first time, it is very intuitive.” He explained that the learning curve is not that steep and doesn’t take long. “But, for folks who have been working in the development space and building applications and are very familiar and comfortable with RDBMS, i.e., rows and tables. Strangely enough, the learning curve is a little higher and steeper”, added Rangan, explaining that such developers have to unlearn modelling tables in order to model intuitively. He said the best way to overcome that learning curve is to try it out.

“So, when you think about the learning curve, it is a very easy learning curve, especially if you can put aside the former way of thinking about things like rows and tables and go back to first principles.”—Chandra Rangan. 
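
Trying it out is straightforward with the official Neo4j Python driver. The snippet below creates two nodes and a relationship, then queries them back; the connection URI and credentials are placeholders for a local instance. Notice that no schema is declared up front, which is the flexibility Rangan describes.

    from neo4j import GraphDatabase  # official Neo4j Python driver

    # Placeholder connection details for a local Neo4j instance.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "secret"))

    with driver.session() as session:
        # Nodes, relationships, and properties are created on the fly;
        # there is no table design step.
        session.run(
            "MERGE (p:Person {name: $name}) "
            "MERGE (c:Company {name: $company}) "
            "MERGE (p)-[:WORKS_AT]->(c)",
            name="Ada", company="Neo4j")
        result = session.run(
            "MATCH (p:Person)-[:WORKS_AT]->(c:Company) "
            "RETURN p.name AS person, c.name AS company")
        for record in result:
            print(record["person"], "works at", record["company"])

    driver.close()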

Discovering use cases with Neo4j 

The International Consortium of Investigative Journalists (ICIJ) released the full list of companies and individuals in the Panama Papers, implicating at least 140 politicians from more than 50 countries in tax evasion schemes. The journalists used Neo4j to map the relationships in their data and found common touchpoints and names of people holding multiple offshore accounts and evading tax.

“We believe a whole bunch of sectors can actually get value. We have seen new sectors kind of pop up on a pretty regular basis”, said Rangan while citing various use cases in financial service sectors (fraud detection), healthcare (vaccine distribution), pharmaceuticals (drug discovery), supply chain and logistics (mapping automation), tech companies (managing IT networks), retail (recommendation systems), and more. 

Chandra Rangan further explained that people are still discovering what they can use graph databases for and how useful it is in some sense. He said that it is unleashing a whole bunch of innovations. “So, we are hoping for a lot of that to happen here in India because of the developer community”, he added. 

What’s next? 

Rangan said Neo4j would be aggressively investing in the community and ecosystem here in India. Besides this, he said they are investing in building a marketing and sales team, which has grown significantly in the last year. In addition, Neo4j is also investing in building a partner ecosystem to support a wider range of customers. 

“Depending on how quickly we can grow or cannot grow—again, responsible growth—we want to grow as fast as possible. But, we also want to make sure as we hire people as we establish the relationship, we are investing enough time, effort, and money to make sure that these relationships are successful”, concluded Rangan.

IILM University signs MoU with IBM, illuminating students about new-age technologies and the exclusive IBM Digital Badge
Killexams : IILM University signs MoU with IBM, illuminating students about new-age technologies and the exclusive IBM Digital Badge

Greater Noida: IILM University, Greater Noida, signed a Memorandum of Understanding (MoU) with IBM Innovation Centre for Education in August 2022. The MoU was signed by Vice-Chancellor, IILM University, Dr. Taruna Gautam and Program Director, IBM Innovation Centre for Education, Mr. Vithal Madyalkar. Those who attended the formal signing ceremony included Mr. R. Hari, IBM leader for Business Development & Academia relationships, Dr. Raveendranath Nayak, Director-IILM Graduate School of Management, and Dr. Shilpy Agrawal, Head of Computer Science and Engineering Department, IILM University, Greater Noida.

Commenting on the collaboration between the two knowledge hubs, Dr. Taruna Gautam, Vice-Chancellor, IILM University, Greater Noida, said, “We are extremely excited about the new development as it aligns with our core aim to raise a race of competent professionals and make them future-ready. As part of the newly formed alliance, IBM would offer the university students much-needed applied IT knowledge, establishing a structured learning pathway. IBM’s Innovation Centre for Education Programs would impart students with information about the emerging technologies and in-demand industry domains like Cloud Computing and Virtualization, Data Sciences & Business Analytics, Graphics and Gaming Technology, Artificial Intelligence, Machine Learning, Blockchain, Cyber Security and Forensics, IT Infrastructure management, and Internet of Things.”

The students will also get a chance to enhance their skills pertaining to information technology required for operating different business domains such as Telecom informatics, Banking, Financial services and Insurance informatics, e-commerce & Retail Informatics, and Healthcare Informatics.

IBM Innovation Centre for Education offers various unique, time-tested initiatives and skills developed by IBM Trained & Certified faculty & Technology Experts. The in-depth and applied courseware powered by IBM will be exclusively available to the students at IILM University. The new progression is in line with the NEP 2020 norms, promoting the project and lab-based learning combined with Instructor-led classroom training.

The program will help students gain not only a competitive edge over others during interviews, internships, and national and international contests, but also IBM’s globally-recognized Digital Badge, in addition to the degree offered by the University.

A majority of companies have raised prices because of a data breach
Killexams : A majority of companies have raised prices because of a data breach

IBM Security on Wednesday released its annual Cost of a Data Breach Report, which found that the cost of a breach reached an all-time high of $4.35 million in 2022.

The report also found that breach costs increased nearly 13% over the last two years, an indication that these cyber incidents may also contribute to the rising costs of goods and services.

Some 60% of organizations surveyed by IBM raised their prices because of a breach — at a time when the economy has experienced the worst inflationary spiral since the early 1980s.

“The more businesses try to perfect their perimeter instead of investing in detection and response, the more breaches can fuel cost of living increases." said Charles Henderson, global head of IBM Security X-Force. "This report shows that the right strategies coupled with the right technologies can help make all the difference when businesses are attacked."

The average cost of a breach increases every year IBM releases its report, said Hank Schless, senior manager, security solutions at Lookout. Schless said the value of sensitive data has increased, and as a byproduct of that, the long-term damage to a company that experiences a breach is getting ever more costly.

“The numbers found in this report should be a wakeup call to anyone who thinks data security and infrastructure integrity can take a back seat to other priorities,” Schless said. “The findings in this report show how challenging it is for organizations to keep their security practices up with the speed of cloud adoption. This pain is only aggravated for organizations that weren’t born in the cloud and need to go through a massive infrastructure transformation to move their data from legacy on-premises servers to the cloud.”

Jerrod Piker, product marketing manager at Deep Instinct, countered the findings of the IBM report by saying that as the cost of a data breach has reached an all-time high, artificial intelligence and automation are helping the cause by reducing these costs by an average of $3 million for organizations that have fully implemented these technologies in their security environments.

Piker said some AI approaches, such as machine learning, offer improved threat detection capabilities to help close the gaps left by traditional security tools like firewalls and antivirus. However, more recent innovations, such as deep learning, are moving the needle even further, offering long-lasting protection against even the most advanced and evasive attacks without the need for constant human interaction and model retraining.

“At the end of the day, attackers are always going to go after the low-hanging fruit first, and it’s up to us as security professionals to help ensure that organizations are armed with the most advanced tools available to stay ahead of the bad guys and keep their data where it belongs,” Piker said.

CIOReview Names Cobalt Iron Among 10 Most Promising IBM Solution Providers 2022
Killexams : CIOReview Names Cobalt Iron Among 10 Most Promising IBM Solution Providers 2022

Cobalt Iron Inc., a leading provider of SaaS-based enterprise data protection, today announced that the company has been deemed one of the 10 Most Promising IBM Solution Providers 2022 by CIOReview Magazine. The annual list of companies is selected by a panel of experts and members of CIOReview Magazine's editorial board to recognize and promote innovation and entrepreneurship. A technology partner for IBM, Cobalt Iron earned the distinction based on its Compass® enterprise SaaS backup platform for monitoring, managing, provisioning, and securing the entire enterprise backup landscape.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20220728005043/en/

Cobalt Iron Compass® is a SaaS-based data protection platform leveraging strong IBM technologies for delivering a secure, modernized approach to data protection. (Graphic: Business Wire)


According to CIOReview, "Cobalt Iron has built a patented cyber-resilience technology in a SaaS model to alleviate the complexities of managing large, multivendor setups, providing an effectual humanless backup experience. This SaaS-based data protection platform, called Compass, leverages strong IBM technologies. For example, IBM Spectrum Protect is embedded into the platform from a data backup and recovery perspective. ... By combining IBM's technologies and the intellectual property built by Cobalt Iron, the company delivers a secure, modernized approach to data protection, providing a 'true' software as a service."

Through proprietary technology, the Compass data protection platform integrates with, automates, and optimizes best-of-breed technologies, including IBM Spectrum Protect, IBM FlashSystem, IBM Red Hat Linux, IBM Cloud, and IBM Cloud Object Storage. Compass enhances and extends IBM technologies by automating more than 80% of backup infrastructure operations, optimizing the backup landscape through analytics, and securing backup data, making it a valuable addition to IBM's data protection offerings.

CIOReview also praised Compass for its simple and intuitive interface that displays a consolidated view of data backups across an entire organization without logging in to every backup product instance to extract data. The machine learning-enabled platform also automates backup processes and infrastructure, and it uses open APIs to connect with ticket management systems to generate tickets automatically about any backups that need immediate attention.
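
The open-API-to-ticket pattern described above reduces, at its core, to a small automation loop: poll backup job results and file a ticket for anything unhealthy. The sketch below shows that loop with an invented ticket endpoint; it illustrates the pattern, not Compass's actual interfaces.

    import requests  # the ticket endpoint below is invented for illustration

    def raise_tickets(backup_jobs):
        """File a ticket for each backup job that failed or was missed."""
        for job in backup_jobs:
            if job["status"] in ("failed", "missed"):
                requests.post(
                    "https://tickets.example.com/api/issues",  # hypothetical
                    json={
                        "title": f"Backup needs attention: {job['name']}",
                        "severity": ("high" if job["status"] == "failed"
                                     else "medium"),
                        "details": job.get("error", "no details reported"),
                    },
                    timeout=10)

    raise_tickets([
        {"name": "db-nightly", "status": "failed",
         "error": "media server offline"},
        {"name": "files-hourly", "status": "ok"},
    ])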

To ensure the security of data backups, Cobalt Iron has developed an architecture and security feature set called Cyber Shield for 24/7 threat protection, detection, and analysis that improves ransomware responsiveness. Compass is also being enhanced to use several patented techniques that are specific to analytics and ransomware. For example, analytics-based cloud brokering of data protection operations helps enterprises make secure, efficient, and cost-effective use of their cloud infrastructures. Another patented technique - dynamic IT infrastructure optimization in response to cyberthreats - offers unique ransomware analytics and automated optimization that will enable Compass to reconfigure IT infrastructure automatically when it detects cyberthreats, such as a ransomware attack, and dynamically adjust access to backup infrastructure and data to reduce exposure.

Compass is part of IBM's product portfolio through the IBM Passport Advantage program. Through Passport Advantage, IBM sellers, partners, and distributors around the world can sell Compass under IBM part numbers to any organizations, particularly complex enterprises, that greatly benefit from the automated data protection and anti-ransomware solutions Compass delivers.

CIOReview's report concludes, "With such innovations, all eyes will be on Cobalt Iron for further advancements in humanless, secure data backup solutions. Cobalt Iron currently focuses on IP protection and continuous R&D to bring about additional cybersecurity-related innovations, promising a more secure future for an enterprise's data."

About Cobalt Iron

Cobalt Iron was founded in 2013 to bring about fundamental changes in the world's approach to secure data protection, and today the company's Compass® is the world's leading SaaS-based enterprise data protection system. Through analytics and automation, Compass enables enterprises to transform and optimize legacy backup solutions into a simple cloud-based architecture with built-in cybersecurity. Processing more than 8 million jobs a month for customers in 44 countries, Compass delivers modern data protection for enterprise customers around the world. www.cobaltiron.com

Product or service names mentioned herein are the trademarks of their respective owners.

Link to Word Doc: www.wallstcom.com/CobaltIron/220728-Cobalt_Iron-CIOReview_Top_IBM_Provider_2022.docx

Photo Link: www.wallstcom.com/CobaltIron/Cobalt_Iron_CIO_Review_Top_IBM_Solution_Provider_Award_Logo.pdf

Photo Caption: Cobalt Iron Compass® is a SaaS-based data protection platform leveraging strong IBM technologies for delivering a secure, modernized approach to data protection.

Follow Cobalt Iron

https://twitter.com/cobaltiron
https://www.linkedin.com/company/cobalt-iron/
https://www.youtube.com/user/CobaltIronLLC

Source: https://www.tmcnet.com/usubmit/2022/07/28/9646864.htm (28 Jul 2022)
Killexams : IBM acquires Databand.ai

IBM recently announced it has acquired Databand.ai, a leading provider of data observability software that helps organizations fix issues with their data, including errors, pipeline failures, and poor quality, before they impact the bottom line. The news further strengthens IBM's software portfolio across data, AI, and automation to address the full spectrum of observability, and it helps businesses ensure that trustworthy data is put into the hands of the right users at the right time.

Databand.ai is IBM's fifth acquisition in 2022 as the company continues to bolster its hybrid cloud and AI skills and capabilities. IBM has acquired more than 25 companies since Arvind Krishna became CEO in April 2020.

As the volume of data continues to grow at an unprecedented pace, organizations are struggling to manage the health and quality of their data sets, which is necessary to make better business decisions and gain a competitive advantage.

Data observability takes traditional data operations to the next level by using historical trends to compute statistics about data workloads and data pipelines directly at the source, determining whether they are working and pinpointing where problems may exist. Combined with a full-stack observability strategy, it can help IT teams quickly surface and resolve issues everywhere from infrastructure and applications to data and machine learning systems.
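To make the idea concrete, here is a minimal sketch of the statistical core of data observability: baseline a pipeline's historical run durations and flag runs that deviate sharply. The function name and the three-sigma threshold are illustrative assumptions, not Databand.ai's actual implementation.

```python
# A minimal sketch of the statistical idea behind data observability:
# compare each new pipeline run against historical trends and flag
# runs that deviate sharply from the baseline.
from statistics import mean, stdev

def flag_anomalous_run(history_seconds: list[float], latest_seconds: float,
                       z_threshold: float = 3.0) -> bool:
    """Return True if the latest run duration deviates more than
    z_threshold standard deviations from the historical mean."""
    if len(history_seconds) < 2:
        return False  # not enough history to form a baseline
    mu = mean(history_seconds)
    sigma = stdev(history_seconds)
    if sigma == 0:
        return latest_seconds != mu
    return abs(latest_seconds - mu) / sigma > z_threshold

# Example: a pipeline that normally takes ~10 minutes suddenly takes 40.
history = [598.0, 612.0, 605.0, 590.0, 620.0]
print(flag_anomalous_run(history, 2400.0))  # True -> surface for investigation
```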

Databand.ai's open and extendable approach allows data engineering teams to easily integrate and gain observability into their data infrastructure. The acquisition would unlock more resources for Databand.ai to expand its observability capabilities for broader integrations across more of the open source and commercial solutions that power the modern data stack. Enterprises would also have full flexibility in how they run Databand.ai, whether as a service (SaaS) or as a self-hosted software subscription.


The acquisition of Databand.ai builds on IBM's research and development investments as well as strategic acquisitions in AI and automation. By using Databand.ai with IBM Observability by Instana APM and IBM Watson Studio, IBM is well-positioned to address the full spectrum of observability across IT operations.

"Our clients are data-driven enterprises who rely on high-quality, trustworthy data to power their mission-critical processes. When they don't have access to the data they need at any given moment, their business could grind to a halt," said Daniel Hernandez, general manager for Data and AI, IBM. "With the addition of Databand.ai, IBM offers the most comprehensive set of observability capabilities for IT across applications, data and machine learning, and is continuing to provide our clients and partners with the technology they need to deliver trustworthy data and AI at scale."

Headquartered in Tel Aviv, Israel, Databand.ai employees will join IBM Data and AI, further building on IBM's growing portfolio of Data and AI products, including its IBM Watson capabilities and IBM Cloud Pak for Data. Financial details of the deal were not disclosed. The acquisition closed on June 27, 2022.

Source: https://www.manilatimes.net/2022/07/17/business/sunday-business-it/ibm-acquires-databandai/1851170 (16 Jul 2022)
Killexams : Astadia Publishes Mainframe to Cloud Reference Architecture Series

BOSTON--(BUSINESS WIRE)--Aug 3, 2022--

Astadia is pleased to announce the release of a new series of mainframe-to-cloud reference architecture guides. The documents cover how to refactor IBM mainframe applications to Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and Oracle Cloud Infrastructure (OCI), offering a deep dive into the migration process for all major target cloud platforms using Astadia's FastTrack software platform and methodology.

As enterprises and government agencies are under pressure to modernize their IT environments and make them more agile, scalable and cost-efficient, refactoring mainframe applications in the cloud is recognized as one of the most efficient and fastest modernization solutions. By making the guides available, Astadia equips business and IT professionals with a step-by-step approach on how to refactor mission-critical business systems and benefit from highly automated code transformation, data conversion and testing to reduce costs, risks and timeframes in mainframe migration projects.

“Understanding all aspects of legacy application modernization and having access to the most performant solutions is crucial to accelerating digital transformation,” said Scott G. Silk, Chairman and CEO. “More and more organizations are choosing to refactor mainframe applications to the cloud. These guides are meant to assist their teams in transitioning fast and safely by benefiting from Astadia’s expertise, software tools, partnerships, and technology coverage in mainframe-to-cloud migrations,” said Mr. Silk.

The new guides are part of Astadia's free Mainframe-to-Cloud Modernization series, a comprehensive collection of guides covering various mainframe migration options, technologies, and cloud platforms. The series covers IBM (NYSE: IBM) mainframes.

In addition to the reference architecture diagrams, these comprehensive guides include various techniques and methodologies that may be used in forming a complete and effective Legacy Modernization plan. The documents analyze the important role of the mainframe platform, and how to preserve previous investments in information systems when transitioning to the cloud.

In each of the IBM Mainframe Reference Architecture white papers, readers will explore:

  • Benefits, approaches, and challenges of mainframe modernization
  • Understanding typical IBM Mainframe Architecture
  • An overview of Azure/AWS/Google Cloud/Oracle Cloud
  • Detailed diagrams of IBM mappings to Azure/AWS/Google Cloud/Oracle Cloud
  • How to ensure project success in mainframe modernization

The guides are available for download here:

To access more mainframe modernization resources, visit the Astadia learning center at www.astadia.com.

About Astadia

Astadia is the market-leading software-enabled mainframe migration company, specializing in moving IBM and Unisys mainframe applications and databases to distributed and cloud platforms in unprecedented timeframes. With more than 30 years of experience and over 300 mainframe migrations completed, enterprises and government organizations choose Astadia for its deep expertise, range of technologies, and ability to automate complex migrations, as well as testing at scale. Learn more at www.astadia.com.

View source version on businesswire.com:https://www.businesswire.com/news/home/20220803005031/en/

CONTACT: Wilson Rains, Chief Revenue Officer

Wilson.Rains@astadia.com

+1.877.727.8234


Source: https://apnews.com/press-release/BusinessWire/technology-f50b643965d24115b2c526c8f96321a6 (3 Aug 2022)
Killexams : Red Hat in, Kyndryl out as IBM New Zealand reports a positive 2021

IBM New Zealand's financial results for the year to the end of December 2021 reflected massive global changes at the company known as "Big Blue".

For the first time, Red Hat, which IBM bought in 2019, has been consolidated into the local subsidiary's numbers, while the local managed infrastructure services business, spun out in early September 2021 as part of the New York-listed Kyndryl, has been reported separately.

The end result for IBM NZ was a large increase in revenue from continuing operations, with sales surging to $172.4 million in 2021 from a restated $124.9 million in 2020.

Red Hat NZ, which was consolidated into IBM's numbers from 1 January 2021, also reported its local revenue for the year separately at $23.2 million, up from $14.7 million in 2020.

The business now known as Kyndryl earned $85.7 million in 2020 and $53.1 million in the ten months to its separation in 2021.