We have legitimate and state-of-the-art 000-175 braindumps and practice tests. killexams.com provides the specific and latest 000-175 PDF downloads with braindumps that contain all the material you need to pass the 000-175 test. With the aid of our 000-175 test dumps, you do not need to risk your chances on reading reference books; you only need to spend 10-20 hours memorizing our 000-175 questions and answers.
Exam Code: 000-175 Practice test 2022 by Killexams.com team
IBM WebSphere Lombardi Edition V7.2, Development (Entry)
https://killexams.com/pass4sure/exam-detail/000-175
IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Partnerships & Use Cases
I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.
Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.
Edge In, not Cloud Out
In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.
A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.
IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed using a single unified control plane.
IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).
IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.
It is important to note that IBM is designing its edge platforms with labor cost and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.
Why edge is important
Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, including industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.
Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
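To make that trade-off concrete, here is a minimal back-of-the-envelope model. The bandwidth, round-trip, and compute figures are illustrative assumptions, not measurements from IBM or anyone else:

```python
# Illustrative latency comparison for processing one sensor payload at the
# edge versus round-tripping it to a cloud region. All figures are assumed.

def cloud_latency_ms(payload_mb, uplink_mbps=50, network_rtt_ms=60, compute_ms=5):
    """Upload the payload, process it in the cloud, return the result."""
    transfer_ms = payload_mb * 8 / uplink_mbps * 1000  # megabits over Mbps, in ms
    return transfer_ms + network_rtt_ms + compute_ms

def edge_latency_ms(payload_mb, compute_ms=15):
    """Process locally: no network transfer, but assume slower edge hardware."""
    return compute_ms

for mb in (0.1, 1.0, 10.0):
    print(f"{mb:5.1f} MB  cloud: {cloud_latency_ms(mb):7.1f} ms   edge: {edge_latency_ms(mb):5.1f} ms")
```

Even with slower local compute, the edge path wins as payloads grow because it never pays the transfer cost.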
Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.
IBM at the Edge
In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.
Example #1 – McDonald’s drive-thru
An ordering system using AI and NLP for QSR applications has a global market. A car orders lunch at the McDonald's drive-thru in Charnwood, Australian Capital Territory. (Photo: Tim Malone, licensed under CC BY-SA 2.5)
Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.
McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
Example #2 – Boston Dynamics and Spot the agile mobile robot
The author with Boston Dynamics' Spot the agile mobile robot at IBM Think 2022. (Photo: Moor Insights & Strategy)
According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.
Mobile readings with the Boston Dynamics mobile robot. (Image: IBM)
To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot's wireless mobility uses self-contained AI/ML that doesn't require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine if required safety equipment is being worn.
IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.
Thermal inspection of planar and non-planar assets. (Image: IBM)
IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
IBM market opportunities
Edge Market & Use Cases. (Image: IBM)
Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.
Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.
Challenges with scaling
Challenges in scaling AI application deployments. (Image: IBM)
“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”
Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.
IBM AI entry points at the edge
IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.
IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.
Industry 4.0
There have been three prior industrial revolutions, beginning in the 1700s; the current, in-progress fourth revolution, Industry 4.0, promotes digital transformation.
Manufacturing is the fastest growing and the largest of IBM's four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.
Major Automotive OEM partnership. (Image: IBM)
For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:
Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and the return on investment (ROI).
Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in the labeling of redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance (a minimal sketch of this idea follows this list).
Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
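As a rough illustration of the data summarization objective above, the sketch below clusters an unlabeled pool of embeddings and nominates one representative sample per cluster for human annotation. This is a generic technique offered under stated assumptions; IBM's actual summarization method is not described in the article:

```python
# Minimal sketch of ML-based data summarization for labeling: cluster the
# unlabeled pool and send only the sample nearest each cluster center to an
# annotator, avoiding redundant labels. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pool = rng.normal(size=(5000, 64))   # stand-in for image embeddings

k = 50                               # labeling budget: 50 annotations
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pool)

# Pick the most central member of each cluster as the annotation candidate.
to_label = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(pool[members] - km.cluster_centers_[c], axis=1)
    to_label.append(members[np.argmin(dists)])

print(f"Selected {len(to_label)} of {len(pool)} samples for annotation")
```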
Maximo Application Suite
IBM's Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.
IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.
Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
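To illustrate what drift monitoring can look like in practice, here is a minimal check using the population stability index (PSI), a common drift statistic. The 0.2 retraining threshold is a widely used rule of thumb, not an IBM figure:

```python
# Minimal drift check: compare the live distribution of a model input
# feature against its training baseline using the population stability
# index (PSI). A high PSI flags the model for retraining.
import numpy as np

def psi(baseline, current, bins=10):
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b, _ = np.histogram(baseline, bins=edges)
    c, _ = np.histogram(current, bins=edges)
    b = np.clip(b / b.sum(), 1e-6, None)   # avoid log(0)
    c = np.clip(c / c.sum(), 1e-6, None)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(2)
train_feature = rng.normal(0.0, 1.0, 10_000)   # training-time distribution
live_feature = rng.normal(0.4, 1.2, 10_000)    # drifted production stream

score = psi(train_feature, live_feature)
print(f"PSI = {score:.3f}", "-> retrain" if score > 0.2 else "-> OK")
```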
Day-2 AI Operations (retraining and scaling)
Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.
IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.
A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).
“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”
Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
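As a toy illustration of the federated idea, the sketch below implements the core federated-averaging step on a linear model: each spoke fits its own private data and shares only weights, which the hub averages. This is a heavy simplification of real federated learning systems, including IBM's:

```python
# Toy federated averaging (FedAvg): spokes train locally and share only
# model weights; raw data never leaves the spoke. Linear model for brevity.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def local_update(w, n=200, lr=0.1, steps=20):
    # Private data generated (and kept) at the spoke.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n   # least-squares gradient
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(5):                          # five federation rounds
    # The hub aggregates weights from four spokes, never the data itself.
    spoke_weights = [local_update(w_global.copy()) for _ in range(4)]
    w_global = np.mean(spoke_weights, axis=0)

print("federated estimate:", np.round(w_global, 3))
```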
Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.
Data and AI Platform: Scaling Day 2 - AI Operations
IBM
The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.
Data Fabric Extensions to Hub and Spokes
Extending Data Fabric to Hub and Spokes: Key Capabilities
IBM
IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM's hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.
First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data lifecycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory, and compliance requirements as well as local resource constraints. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Atypical data judged worthy of human attention is also identified.
The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can't afford such extravagance. To reduce the edge compute footprint, model compression can cut the number of parameters, for example from several hundred million to a few million (a minimal pruning sketch follows this set of capabilities).
Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
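The pruning sketch promised above: magnitude pruning zeroes the smallest weights of a layer so the model can be stored and executed in a smaller footprint. Real compression pipelines combine this with quantization and distillation; the 95% sparsity target here is an arbitrary illustration, not an IBM figure:

```python
# Sketch of parameter reduction via magnitude pruning: zero the smallest
# weights so the layer can be stored sparsely on a constrained edge device.
import numpy as np

rng = np.random.default_rng(3)
weights = rng.normal(size=(1024, 1024))   # one dense layer, ~1M parameters

sparsity = 0.95                           # drop 95% of parameters
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

kept = np.count_nonzero(pruned)
print(f"kept {kept:,} of {weights.size:,} weights ({kept / weights.size:.1%})")
```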
In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.
Multicloud and Edge platform
Multicloud and Edge Platform. (Image: IBM)
In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for locations that still run servers but call for a single-node rather than a clustered deployment.
For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device-type deployments.
Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.
Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.
First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).
Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.
Telco network intelligence and slice management with AI/ML
Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:
Reduced operating costs
Improved efficiency
Increased distribution and density
Lower latency
The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.
5G network slicing and slice management
5G Network Slice Management. (Image: IBM)
Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
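A hypothetical sketch of what an automated slice-management rule could look like: compare each slice's observed latency to its SLA and scale resources when the SLA is violated. The slice names, metric values, and proportional-scaling rule are invented for illustration; IBM's actual approach uses learned AI/ML policies rather than a fixed rule:

```python
# Hypothetical control loop for slice management. All values are invented.
slices = {
    "low-latency":    {"sla_ms": 10, "observed_ms": 14, "units": 4},
    "high-bandwidth": {"sla_ms": 50, "observed_ms": 32, "units": 8},
}

for name, s in slices.items():
    if s["observed_ms"] > s["sla_ms"]:
        # Proportional scale-up; a real system would use a learned policy.
        s["units"] = int(s["units"] * s["observed_ms"] / s["sla_ms"]) + 1
        print(f"{name}: SLA violated, scaling to {s['units']} units")
    else:
        print(f"{name}: within SLA")
```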
5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.
Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.
Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”
In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:
End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
Improved operational efficiency and reduced cost
Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.
5G radio access
Intelligence @ the Edge of 5G networks. (Image: IBM)
Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.
The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.
The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:
Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
Utilization of the antenna control plane to optimize throughput
Primitives for forecasting, anomaly detection and root cause analysis using ML
Opportunity of value-added functions for O-RAN
IBM Cloud and Infrastructure
The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.
Secure Decentralized Edge Data Lake. (Image: IBM)
IBM's focus on "edge in" means it can provide the infrastructure through offerings like the example shown above: software-defined storage for a federated-namespace data lake that spans other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.
As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).
Wrap up
Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.
IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That has little value and would simply function as a hub to spoke model operating on actions and configurations dictated by the hub.
IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.
Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.
Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.
However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.
It is reassuring that IBM has a plan and that its plan is sound.
Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.
Mon, 08 Aug 2022 03:51:00 -0500 | Paul Smith-Goodson | https://www.forbes.com/sites/moorinsights/2022/08/08/ibm-research-rolls-out-a-comprehensive-ai-and-ml-edge-research-strategy-anchored-by-enterprise-partnerships-and-use-cases/

IBM claims to have mapped out a route to quantum advantage
IBM has published details on a collection of techniques it hopes will usher in quantum advantage, the inflection point at which the utility of quantum computers exceeds that of traditional machines.
The focus is on a process known as error mitigation, which is designed to improve the consistency and reliability of circuits running on quantum processors by eliminating sources of noise.
IBM says that advances in error mitigation will allow quantum computers to scale steadily in performance, in a similar pattern exhibited over the years in the field of classical computing.
Although plenty has been said about the potential of quantum computers, which exploit a phenomenon known as superposition to perform calculations extremely quickly, the reality is that current systems are incapable of outstripping traditional supercomputers on a consistent basis.
A lot of work is going into improving performance by increasing the number of qubits on a quantum processor, but researchers are also investigating opportunities related to qubit design, the pairing of quantum and classical computers, new refrigeration techniques and more.
IBM, for its part, has now said it believes an investment in error mitigation will bear the most fruit at this stage in the development of quantum computing.
“Indeed, it is widely accepted that one must first build a large fault-tolerant quantum processor before any of the quantum algorithms with proven super-polynomial speed-up can be implemented. Building such a processor therefore is the central goal for our development,” explained IBM in a blog post.
“However, recent advances in techniques we refer to broadly as quantum error mitigation allow us to lay out a smoother path towards this goal. Along this path, advances in qubit coherence, gate fidelities, and speed immediately translate to measurable advantage in computation, akin to the steady progress historically observed with classical computers.”
The post is geared towards a highly technical audience and goes into great detail, but the main takeaway is this: the ability to quiet certain sources of error will allow for increasingly complex quantum workloads to be executed with reliable results.
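The article does not name specific techniques, but zero-noise extrapolation (ZNE) is a representative error-mitigation method that shows the general idea: run a circuit at several deliberately amplified noise levels, then extrapolate the measured expectation value back to the zero-noise limit. The numbers below are simulated stand-ins for hardware measurements:

```python
# Zero-noise extrapolation sketch (Richardson-style): fit measured
# expectation values taken at amplified noise levels and evaluate the
# fit at zero noise. Illustrative values, not real hardware data.
import numpy as np

noise_factors = np.array([1.0, 2.0, 3.0])   # deliberate noise amplification
measured = np.array([0.81, 0.66, 0.54])     # expectation values, decaying with noise

coeffs = np.polyfit(noise_factors, measured, deg=2)
mitigated = np.polyval(coeffs, 0.0)          # extrapolate to the zero-noise limit

print(f"raw (factor 1): {measured[0]:.3f}, mitigated estimate: {mitigated:.3f}")
```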
According to IBM, the latest error mitigation techniques go “beyond just theory”, with the advantage of these methods having already been demonstrated on some of the most powerful quantum hardware currently available.
“At IBM Quantum, we plan to continue developing our hardware and software with this path in mind,” the company added.
“At the same time, together with our partners and the growing quantum community, we will continue expanding the list of problems that we can map to quantum circuits and develop better ways of comparing quantum circuit approaches to traditional classical methods to determine if a problem can demonstrate quantum advantage. We fully expect that this continuous path that we have outlined will bring us practical quantum computing.”
Thu, 21 Jul 2022 08:08:00 -0500 | Joel Khalili | https://www.techradar.com/news/ibm-claims-to-have-mapped-out-a-route-to-quantum-advantage

IBM Whale Trades For August 05

Someone with a lot of money to spend has taken a bearish stance on IBM (NYSE:IBM). And retail traders should know. We noticed this today when the big position showed up on publicly available options ...

Fri, 05 Aug 2022 07:16:15 -0500 | https://www.msn.com/en-us/money/savingandinvesting/ibm-whale-trades-for-august-05/ar-AA10m9ED

IBM Unveils $1 Billion Platform-as-a-Service Investment

IBM says that with more than 200 application and middleware patterns available from IBM and IBM ... It has an integrated development environment which makes it possible to develop in it but ...

Fri, 22 Jul 2022 12:00:00 -0500 | https://www.thestreet.com/technology/ibm-unveils-1-billion-platform-as-a-service-investment-12438325

IBM Rolls Out New Power10 Servers And Flexible Consumption Models
The high-end Power10 server launched last year has enjoyed “fantastic” demand, according to IBM. Let’s look into how IBM Power has maintained its unique place in the processor landscape.
This article is a bit of a walk down memory lane for me, as I recall 4 years working as the VP of Marketing at IBM Power back in the 90s. The IBM Power development team is unique as many of the engineers came from a heritage of developing processors for the venerable and durable mainframe (IBMz) and the IBM AS400. These systems were not cheap, but they offered enterprises advanced features that were not available in processors from SUN or DEC, and are still differentiated versus the industry standard x86.
While a great deal has changed in the industry since I left IBM, the Power processor remains the king of the hill when it comes to performance, security, reliability, availability, OS choice, and flexible pricing models in an open platform. The new Power10 processor-based systems are optimized to run both mission-critical workloads like core business applications and databases, as well as maximize the efficiency of containerized and cloud-native applications.
What has IBM announced?
IBM introduced the high-end Power10 server last September and is now broadening the portfolio with four new systems: the scale-out 2U Power S1014, Power S1022, and Power S1024, along with a 4U midrange server, the Power E1050. These new systems, built around the Power10 processor, have twice the cores and memory bandwidth of the previous generation to bring high-end advantages to the entire Power10 product line. Supporting AIX, Linux, and IBM i operating systems, these new servers provide Enterprise clients a resilient platform for hybrid cloud adoption models.
The latest IBM Power10 processor design includes the Dual Chip Module (DCM) and the entry Single Chip Module (SCM) packaging, which is available in various configurations from four cores to 24 cores per socket. Native PCIe 5th generation connectivity from the processor socket delivers higher performance and bandwidth for connected adapters. And IBM Power10 remains the only 8-way simultaneous multi-threaded core in the industry.
The IBM Power10 chip is a tour-de-force of advanced technologies. (Image: IBM)
An example of the advanced technology offered in Power10 is the Open Memory Interface (OMI) connected differential DIMM (DDIMM) memory cards delivering increased performance, resilience, and security over industry-standard memory technologies, including the implementation of transparent memory encryption. The Power10 servers include PowerVM Enterprise Edition to deliver virtualized environments and support a frictionless hybrid cloud deployment model.
Surveys say IBM Power experiences 3.3 minutes or less of unplanned outage due to security issues, while an ITIC survey of 1,200 corporations across 28 vertical markets gives IBM Power a 99.999% or greater availability rating. Power10 also stepped up the AI Inferencing game with 5X faster inferencing per socket versus Power9 with each Power10 processor core sporting 4 Matrix Math Accelerators.
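For context, the survey's 3.3-minute figure refers specifically to security-related outages; the general arithmetic converting an availability rating into expected downtime looks like this (a "five nines" rating allows roughly five minutes per year):

```python
# Worked arithmetic: convert an availability rating into expected
# unplanned downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.5%} available -> {downtime:7.1f} min/year of downtime")
```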
But perhaps even more telling of the IBM Power strategy is the consumption-based pricing in the Power Private Cloud with Shared Utility Capacity commercial model allowing customers to consume resources more flexibly and efficiently for all supported operating systems. As x86 continued to lower server pricing over the last two decades, IBM has rolled out innovative pricing models to keep these advanced systems more affordable in the face of ever-increasing cloud adoption and commoditization.
Conclusions
While most believe that IBM has left the hardware business, the company’s investments in underlying hardware technology at the IBM Research Labs, and the continual enhancements to IBM Power10 and IBM z demonstrate that the firm remains committed to advanced hardware capabilities while eschewing the battles for commoditized (and lower margin) hardware such as x86, Arm, and RISC-V.
Enterprises demanding more powerful, flexible, secure, and, yes, even affordable innovation would do well to familiarize themselves with IBM's latest advanced hardware designs.
Mon, 18 Jul 2022 04:29:00 -0500 | Karl Freund | https://www.forbes.com/sites/karlfreund/2022/07/18/ibm-rolls-out-new-power10-servers-and-flexible-consumption-models/

Machine developments could spur IBM growth

Interest in injection blowmolding machinery is rising as machine makers meet processors' wishes for more machines able to process polyethylene terephthalate (PET), the in-demand polymer for many applications, and as more moldmakers begin to serve the market. The end of a U.S. patent also freed potential injection blowmolding (IBM) processors there from concerns about legal battles when processing PET.
Typically IBM is a three-station process: preform molding, rotation of preforms to a blowing station, and parts removal from core rods. Parts formed, usually for packaging, offer excellent appearance, tight and consistent tolerances, and the process creates no scrap. Nonetheless, the global market for IBM machines is only 120 to 140 machines a year, according to Luca Bertolotti, technical manager at manufacturer Uniloy Milacron, in its Magenta, Italy, plant.
But as processor interest grows, he says Uniloy is expanding its range to include larger machines with preform mold clamp force range increasing from the current 45 to 75 tonnes, to new machines with 90 to 150 tonnes of clamp force. "This will allow higher cavitation" for increased outputs, he explains. He expects most processors will continue to process the same small (to 200 ml) products and not use the additional force for larger parts.
Some already offer such large machines, including Novapax (Leer, Germany), with IBM machines having preform clamp forces of 40, 65, 85 or 125 tonnes, and blowmold clamp force up to 65 tonnes. Its machines also are PET-capable. Novapax's machines have four stations instead of three, which Novapax claims lets processors more easily run existing molds with the extra fourth station and makes for more efficient hydraulics. Ossberger (Weissenberg, Germany) makes large IBM units but these typically see use in automotive parts processing (see August 2002 MP/MPI).
Jomar Corp. (Pleasantville, NJ) offers IBM units with preform clamps from 12 to 170 tonnes. President Bill Petrino notes the firm recently added proportional hydraulics to its larger models, something it already offered on the smaller ones. Proportional hydraulics allow for more precise process control. "In the U.S. [proportional hydraulics] proves a good selling point; outside the U.S., processors often are not so eager to spend the additional $15,000 or so," he says.
Adds Petrino, "Jomar machines have always had vertical plastifiers, but in the last year we've added horizontal" for those customers more accustomed to horizontal extruders, as used on competitors' models. Still, Jomar claims vertical extruders help processors save up to 33% on energy consumption. Uniloy argues for horizontal as a means of improving pack pressure, and easing maintenance and services.
Jomar began work on a PET-capable IBM unit in 2000 as processor Wheaton Plastics' (Millville, NJ) U.S. patent on injection blowmolding of PET expired. Wheaton had made its own IBM machines, with electric servodrives, for processing PET. "We think we've got this [IBM of PET] down," Petrino says, as the firm has developed its own extrusion screw for plastifying PET and uses closed-loop controls to control the temperature of core rods so that temperatures remain within the narrow range suitable for PET processing. Higher temperatures can cause the material to crystallize and lose its clarity. "The PET kit can be retrofitted to other IBMs," he says.
Bertolotti says one of the biggest developments in the past few years has been the increase in the availability, and the decrease in the price, of the injection molds used on these machines. "It's possible now to buy much better molds at lower cost" than a few years ago, he says.
Last year the world's largest independent manufacturer of blowmolds, Wentworth Technologies Co. Ltd. (Burlington, ON), acquired IBM moldmaking leader Jersey Mold (Millville, NJ) to throw its hat into the expanding IBM ring. Tooling is a major financial issue with these machines since, generally, two sets of molds cost as much as the machine itself. He says that as the selection and quality of tooling has improved, an increasing number of applications are shifting from extrusion blowmolding to injection blow, as the latter offers processors greater flexibility to fill more small orders.
Others question that assertion, among them Joe Spohr, Sr. VP global business development at extrusion blowmolding machine maker Graham Machinery Group (York, PA), who responds: "Hardly. I think injection blowmolding had its heyday 10 years ago. IBM cannot do multilayer and it doesn't offer the outputs required." Uniloy is a major supplier of extrusion blowmolding equipment and ranks among Graham's fiercest competitors.
Though the trend is clearly from polypropylene (PP) or polyethylene (PE) to PET, the latter material still accounts for a small share of the market, says Petrino. When processing PP or PE, Bertolotti recommends specifying injection molding grades rather than extrusion blowmolding ones. "The good thing is, [materials] research is increasing for this process; there has been a lot of development with barrier resins, especially Barex," he says. Barex, supplied by BP and based on acrylonitrile co-monomers, is used to make single-layer bottles with high barrier performance.
Demand for IBM machines has been solid enough to attract the attention of more manufacturers. Novapax entered the market in 1998, and more recently Parker Plastic Machinery Co. Ltd. (Taichung, Taiwan) did so. In May, Meccanoplastica (Campi Bisenzio, Italy) exhibited the first commercially available electric-servodrive-powered injection blow machine (see July 2003 MP/MPI). There have also been some novel attempts at combining preform injection and blowmolding in a single step. Merle Norman Cosmetics (Los Angeles, CA) molds preforms and blows them in one step into transparent bottles using copolyester supplied by Eastman Chemical (Kingsport, TN), making bottles with walls .20 inch thick.
Thu, 14 Jul 2022 12:01:00 -0500 | https://www.plasticstoday.com/machine-developments-could-spur-ibm-growth

IBM launches Db2 operator for Kubernetes on AWS

But, despite large chunks of the western economies relying on the aging database, IBM does not like to shout about it. Nonetheless, The Reg managed to squirrel out some news from the recent ...

Wed, 20 Jul 2022 07:30:35 -0500 | https://www.msn.com/en-us/money/technologyinvesting/ibm-launches-db2-operator-for-kubenetes-on-aws/ar-AAZN0dA

IBM Whale Trades For July 19
A whale with a lot of money to spend has taken a noticeably bearish stance on IBM.
Looking at the options history for IBM, we detected 33 unusual trades.
If we consider the specifics of each trade, it is accurate to state that 42% of the investors opened trades with bullish expectations and 57% with bearish.
From the overall spotted trades, 24 are puts, totaling $1,311,439, and 9 are calls, totaling $355,127.
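As a quick arithmetic check on that sentiment split, the premium-weighted put/call ratio can be computed directly from the totals above:

```python
# Reproduce the sentiment math from the totals quoted above.
put_total, call_total = 1_311_439, 355_127   # premium in dollars
put_count, call_count = 24, 9                # number of flagged trades

print(f"put/call premium ratio: {put_total / call_total:.2f}")       # ~3.69
print(f"puts are {put_count / (put_count + call_count):.0%} of the 33 trades")
```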
What's The Price Target?
Taking into account the Volume and Open Interest on these contracts, it appears that whales have been targeting a price range from $100.0 to $141.0 for IBM over the last 3 months.
Volume & Open Interest Development
In terms of liquidity and interest, the mean open interest for IBM options trades today is 1111.11 with a total volume of 35,589.00.
In the following chart, we are able to follow the development of volume and open interest of call and put options for IBM's big money trades within a strike price range of $100.0 to $141.0 over the last 30 days.
IBM Option Volume And Open Interest Over Last 30 Days
Biggest Options Spotted:
| Symbol | PUT/CALL | Trade Type | Sentiment | Exp. Date | Strike Price | Total Trade Price | Open Interest | Volume |
|--------|----------|------------|-----------|-----------|--------------|-------------------|---------------|--------|
| IBM | PUT | SWEEP | BEARISH | 11/18/22 | $125.00 | $381.0K | 1.8K | 794 |
| IBM | PUT | SWEEP | BULLISH | 07/22/22 | $140.00 | $99.9K | 1.8K | 994 |
| IBM | PUT | SWEEP | BULLISH | 07/22/22 | $140.00 | $90.7K | 1.8K | 561 |
| IBM | CALL | TRADE | NEUTRAL | 01/20/23 | $100.00 | $61.5K | 255 | 45 |
| IBM | PUT | SWEEP | BEARISH | 07/22/22 | $136.00 | $59.9K | 1.0K | 488 |
Where Is IBM Standing Right Now?
With a volume of 20,708,588, the price of IBM is down 5.99% at $129.85.
RSI indicators hint that the underlying stock may be approaching oversold.
Next earnings are expected to be released in 92 days.
What The Experts Say On IBM:
BMO Capital has decided to maintain their Market Perform rating on IBM, which currently sits at a price target of $148.
Morgan Stanley has decided to maintain their Overweight rating on IBM, which currently sits at a price target of $155.
Options are a riskier asset compared to just trading the stock, but they have higher profit potential. Serious options traders manage this risk by educating themselves daily, scaling in and out of trades, following more than one indicator, and following the markets closely.
If you want to stay updated on the latest options trades for IBM, Benzinga Pro gives you real-time options trades alerts.
Tue, 19 Jul 2022 07:12:00 -0500 | https://www.benzinga.com/markets/options/22/07/28121917/ibm-whale-trades-for-july-19

FG, IBM Sign MoU for Digital Skill Development
Emma Okonji
The federal government, through the Ministry of Communications and Digital Economy, has signed a memorandum of understanding (MoU) with tech giant International Business Machines (IBM) West Africa.
The deal is for partnership and collaboration in the area of digital skills development in Nigeria.
The Minister of Communications and Digital Economy, Dr. Isa Pantami, who signed on behalf of the federal government, said the partnership would give impetus to the digital, innovation, and entrepreneurship skills component of President Muhammadu Buhari's economic development plan.
He described the partnership as a quantum leap in the digital economy strategy of the ministry.
The MoU, which was signed in Abuja recently, is scheduled to take off in February 2020.
The MoU provides a platform to empower Nigerian youths with digital literacy skills, enable innovation, design and development of indigenous solutions, self -sufficiency and make Nigeria a hub for critical skills for Africa and the world at large.
Under the partnership, and in line with the Minister's digital literacy initiative and drive, IBM would, through its Digital Nation Africa initiative, provide free training to Nigerians for a period of 12 to 16 weeks in diverse areas of information technology (IT).
The MoU seeks to create awareness and support in the development and use of digital tools and applications to improve the delivery of government services; create a pool of Nigerians with digital skills validated by globally recognised certifications; bridge the gap between academia and industry through sensitisation on digital tools and skills; and lower the access barrier to digital tools for citizens.
Addressing IBM representatives led by the Country General Manager, Pantami expressed satisfaction at the organisation's response to the digital economy policy, noting that it had sufficiently keyed in to bridge the divide between academia and industry, education and entrepreneurship.
Pantami noted that, “to achieve a digital economy, digital skills are central, and this has been adequately captured in the second pillar of the Digital Economy Strategy Policy Document as approved and launched by the President on the 28th of November 2019.”
The minister further disclosed that broadband is the lifeline of a successful digital economy implementation, and this again has been reflected in the seventh pillar of the strategy document.
“The importance of broadband penetration in achieving a digital economy has given rise to the National Broadband Committee to ensure that we thoroughly address the impediments to broadband penetration and achieving a Digital Economy,” Pantami said.
The minister urged institutions of learning to give priority to skills, especially digital skills, over paper qualifications.
According to him, “Digital skills are more relevant in today’s world of emerging technologies, therefore we must encourage innovation and drive digital literacy and skills among the populace.”
In his remarks, the Country General Manager at IBM, Mr. Dipo Faulkner, said: “IBM works with governments and key Ministries to address the societal impact of digital technology, leveraging our investment in education with platforms such as IBM Digital Nation Africa. This new collaboration furthers our aims of scaling digital job skills across Africa.”
Tue, 12 Jul 2022 12:00:00 -0500 | https://www.thisdaylive.com/index.php/2020/01/20/fg-ibm-sign-mou-for-digital-skill-development/

IBM Research Open-Sources Deep Search Tools
IBM Research’s Deep Search product uses natural language processing (NLP) to “ingest and analyze massive amounts of data—structured and unstructured.” Over the years, Deep Search has seen a wide range of scientific uses, from Covid-19 research to molecular synthesis. Now, IBM Research is streamlining the scientific applications of Deep Search by open-sourcing part of the product through the release of Deep Search for Scientific Discovery (DS4SD).
DS4SD includes specific segments of Deep Search aimed at document conversion and processing. First is the Deep Search Experience, a document conversion service that includes a drag-and-drop interface and interactive conversion to allow for quality checks. The second element of DS4SD is the Deep Search Toolkit, a Python package that allows users to “programmatically upload and convert documents in bulk” by pointing the toolkit to a folder whose contents will then be uploaded and converted from PDFs into “easily decipherable” JSON files. The toolkit integrates with existing services, and IBM Research is welcoming contributions to the open-source toolkit from the developer community.
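A sketch of what bulk conversion might look like in code. The package, class, and function names below are assumptions based on the description in this article rather than verified API; consult the toolkit's documentation for the real interface:

```python
# Hypothetical usage sketch of the Deep Search Toolkit described above.
# All names here are assumed for illustration; check the official docs
# for the actual API before use.
from pathlib import Path

import deepsearch as ds  # assumed import name for the toolkit

api = ds.CpsApi.from_env()       # assumed: read credentials from environment
PROJ_KEY = "1234567890abcdef"    # placeholder project key

# Point the toolkit at a folder of PDFs; it uploads and converts in bulk.
documents = ds.convert_documents(
    api=api, proj_key=PROJ_KEY, source_path=Path("papers/"), progress_bar=True
)
# Retrieve the converted JSON representations of each document.
documents.download_all(result_dir=Path("converted_json/"))
```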
IBM Research paints DS4SD as a boon for handling unstructured data (data not contained in a structured database). This data, IBM Research said, holds a “lot of value” for scientific research; by way of example, they cited IBM’s own Project Photoresist, which in 2020 used Deep Search to comb through more than 6,000 patents, documents, and material data sheets in the hunt for a new molecule. IBM Research says that Deep Search offers up to a 1,000× data ingestion speedup and up to a 100× data screening speedup compared to manual alternatives.
The launch of DS4SD follows the launch of GT4SD—IBM Research’s Generative Toolkit for Scientific Discovery—in March of this year. GT4SD is an open-source library to accelerate hypothesis generation for scientific discovery. Together, DS4SD and GT4SD constitute the first steps in what IBM Research is calling its Open Science Hub for Accelerated Discovery. IBM Research says more is yet to come, with “new capabilities, such as AI models and high quality data sources” to be made available through DS4SD in the future. Deep Search has also added “over 364 million” public documents (like patents and research papers) for users to leverage in their research—a big change from the previous “bring your own data” nature of the tool.