IBM unveils a bold new ‘quantum error mitigation’ strategy

IBM today announced a new strategy for the implementation of several “error mitigation” techniques designed to bring about the era of fault-tolerant quantum computers.

Up front: Anyone still clinging to the notion that quantum circuits are too noisy for useful computing is about to be proven wrong.

A decade ago, the idea of a working quantum computing system seemed far-fetched to most of us. Today, researchers around the world connect to IBM’s cloud-based quantum systems with such frequency that, according to IBM’s director of quantum infrastructure, some three billion quantum circuits are completed every day.

Subscribe to our newsletter now for a weekly recap of our favorite AI stories in your inbox.

IBM and other companies are already using quantum technology to do things that either couldn’t be done by classical binary computers or would take too much time or energy. But there’s still a lot of work to be done.

The dream is to create a useful, fault-tolerant quantum computer capable of demonstrating clear quantum advantage — the point where quantum processors are capable of doing things that classical ones simply cannot.

Background: Here at Neural, we identified quantum computing as the most important technology of 2022 and that’s unlikely to change as we continue the perennial march forward.

The short and long of it is that quantum computing promises to do away with our current computational limits. Rather than replacing the CPU or GPU, it’ll add the QPU (quantum processing unit) to our tool belt.

What this means is up to the individual use case. Most of us don’t need quantum computers because our day-to-day problems aren’t that difficult.

But, for industries such as banking, energy, and security, the existence of new technologies capable of solving problems more complex than today’s technology can represents a paradigm shift the likes of which we may not have seen since the advent of steam power.

If you can imagine a magical machine capable of increasing efficiency across numerous high-impact domains — it could save time, money, and energy at scales that could ultimately affect every human on Earth — then you can understand why IBM and others are so intent on building QPUs that demonstrate quantum advantage.

The problem: Building pieces of hardware capable of manipulating quantum mechanics as a method by which to perform a computation is, as you can imagine, very hard.

IBM’s spent the past decade or so figuring out how to solve the foundational problems plaguing the field — including the basic infrastructure, cooling, and power source requirements necessary just to get started in the labs.

Today, IBM’s quantum roadmap shows just how far the industry has come.

But to get where it’s going, we need to solve one of the few remaining foundational problems related to the development of useful quantum processors: they’re noisy as heck.

The solution: Noisy qubits are the quantum computer engineer’s current bane. Essentially, the more processing power you try to squeeze out of a quantum computer, the noisier its qubits get (qubits are the quantum analog of classical bits).

Until now, the bulk of the work in squelching this noise has involved scaling qubits so that the signal the scientists are trying to read is strong enough to squeeze through.

In the experimental phases, solving noisy qubits was largely a game of Whack-a-Mole. As scientists came up with new techniques — many of which were pioneered in IBM laboratories — they passed them along to researchers for novel applications.

But, these days, the field has advanced quite a bit. The art of error mitigation has evolved from targeted one-off solutions to a full suite of techniques.

Per IBM:

Current quantum hardware is subject to different sources of noise, the most well-known being qubit decoherence, individual gate errors, and measurement errors. These errors limit the depth of the quantum circuit that we can implement. However, even for shallow circuits, noise can lead to faulty estimates. Fortunately, quantum error mitigation provides a collection of tools and methods that allow us to evaluate accurate expectation values from noisy, shallow depth quantum circuits, even before the introduction of fault tolerance.

In recent years, we developed and implemented two general-purpose error mitigation methods, called zero noise extrapolation (ZNE) and probabilistic error cancellation (PEC).

Both techniques involve extremely complex applications of quantum mechanics, but they basically boil down to finding ways to eliminate or squelch the noise coming off quantum systems and/or to amplify the signal that scientists are trying to measure for quantum computations and other processes.
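As an illustration only (a toy numerical sketch, not IBM's implementation; `zne_estimate` is a hypothetical helper), ZNE boils down to running the same circuit at deliberately amplified noise levels and extrapolating the measured expectation value back to the zero-noise limit:

```python
import numpy as np

def zne_estimate(noise_factors, noisy_values, degree=2):
    """Richardson-style zero-noise extrapolation: fit a polynomial to
    expectation values measured at amplified noise levels, then
    evaluate the fit at zero noise."""
    coeffs = np.polyfit(noise_factors, noisy_values, degree)
    return np.polyval(coeffs, 0.0)

# Stand-in for hardware runs: the true expectation value is 1.0, and the
# measured value decays as the circuit's noise is amplified by factor c.
factors = np.array([1.0, 2.0, 3.0])
measured = 1.0 * np.exp(-0.1 * factors)
print(zne_estimate(factors, measured))  # close to 1.0, despite every run being noisy
```

No single run is noise-free, but the trend across noise levels recovers a far better estimate than any individual measurement.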

Neural’s take: We spoke to IBM’s director of quantum infrastructure, Jerry Chow, who seemed pretty excited about the new paradigm.

He explained that the techniques being touted in the new press release were already in production. IBM’s already demonstrated massive improvements in their ability to scale solutions, repeat cutting-edge results, and speed up classical processes using quantum hardware.

The bottom line is that quantum computers are here, and they work. Currently, it’s a bit hit or miss whether they can solve a specific problem better than classical systems, but the last remaining hard obstacle is fault-tolerance.

IBM’s new “error mitigation” strategy signals a change from the discovery phase of fault-tolerance solutions to implementation.

We tip our hats to the IBM quantum research team. Learn more here at IBM’s official blog.

Source: The Next Web, 28 Jul 2022: https://thenextweb.com/news/ibm-unveils-bold-new-quantum-error-mitigation-strategy
IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principle is scalability, repeatability, and full stack solution management that allows everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor costs and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge, including by industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
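The back-of-the-envelope arithmetic below makes that concrete; every number is hypothetical, chosen only to show how transfer time over a constrained uplink can dominate end-to-end latency:

```python
def total_latency_ms(payload_mb, link_mbps, rtt_ms, compute_ms):
    """Simplified end-to-end time: payload transfer + network round trip
    + processing. Real systems add queuing, retries, and serialization."""
    transfer_ms = payload_mb * 8 / link_mbps * 1000
    return transfer_ms + rtt_ms + compute_ms

# Hypothetical numbers: a 5 MB sensor batch over a 50 Mbps uplink with a
# 40 ms RTT to the cloud, versus a 1 Gbps local link and 1 ms RTT at the edge.
cloud = total_latency_ms(5, 50, rtt_ms=40, compute_ms=10)
edge = total_latency_ms(5, 1000, rtt_ms=1, compute_ms=25)  # slower local accelerator
print(cloud, edge)  # the edge path wins even though its compute is slower
```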

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s, up to our current in-progress fourth revolution, Industry 4.0, which promotes digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production; the quicker the time-to-value and the return-on-investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation means manually examining thousands of images, a time-consuming process that often results in labeling redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance.
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
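The data-summarization step in the third objective above can be sketched as greedy farthest-point sampling over image embeddings, which skips near-duplicates instead of labeling them twice. This is one illustrative technique, not necessarily the one IBM uses:

```python
import numpy as np

def farthest_point_subset(features, k, seed=0):
    """Greedy diverse-subset selection: repeatedly pick the point farthest
    from everything chosen so far, so near-duplicate samples are skipped
    rather than sent to annotators twice."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(features)))]
    dists = np.linalg.norm(features - features[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

# 200 synthetic embeddings in two tight clusters; a diverse sample of 4
# should touch both clusters instead of drawing near-duplicates from one.
pts = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (100, 2)),
                 np.random.default_rng(2).normal(5.0, 0.1, (100, 2))])
sample = farthest_point_subset(pts, k=4)
```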

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
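One common way to put a number on drift (an illustrative choice, not a statement about IBM's tooling) is the Population Stability Index, which compares the binned distribution of a production feature against the training data:

```python
import numpy as np

def psi(reference, live, bins=10):
    """Population Stability Index: compares the binned distribution of a
    feature in production ('live') against the training data ('reference').
    A common rule of thumb (an assumption; tune per model): PSI > 0.2
    signals major drift and a retraining candidate."""
    interior = np.quantile(reference, np.linspace(0, 1, bins + 1))[1:-1]
    def frac(x):
        counts = np.bincount(np.digitize(x, interior), minlength=bins)
        return counts / len(x) + 1e-6  # small floor avoids log(0)
    ref, liv = frac(reference), frac(live)
    return float(np.sum((liv - ref) * np.log(liv / ref)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)
print(psi(train, rng.normal(0, 1, 5_000)))  # small: distribution stable
print(psi(train, rng.normal(1, 1, 5_000)))  # large: drift, flag for retraining
```

A monitor like this runs cheaply at the spoke and only escalates (or ships summaries to the hub) when the score crosses a threshold.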

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
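Its core idea can be sketched as a FedAvg-style aggregation round, in which spokes send model weights rather than raw data and the hub averages them by data volume (a minimal sketch; real systems add secure aggregation, compression, and scheduling):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg-style round: each spoke trains locally and sends only
    its model weights; the hub averages them, weighted by local data
    volume. Raw data never leaves the spoke."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical: three factory spokes report locally trained model weights.
spokes = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]  # samples held at each spoke
print(federated_average(spokes, sizes))  # [3.5 4.5]
```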

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is generated and collected in many locations, creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory, and compliance as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Atypical data judged worthy of human attention is also identified.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such resource extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million.
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
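The compression mentioned in the third point can be illustrated with simple magnitude pruning, one of several techniques for cutting parameter counts; all numbers here are hypothetical:

```python
import numpy as np

def prune_by_magnitude(weights, keep_fraction):
    """Zero out the smallest-magnitude parameters, keeping only the
    largest `keep_fraction`; the sparse result can be stored and served
    with a much smaller edge footprint."""
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))
    threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=1000)  # stand-in for a weight tensor
pruned = prune_by_magnitude(w, keep_fraction=0.01)
print(np.count_nonzero(w), np.count_nonzero(pruned))  # 1000 10
```

The same ratio applied to a model with several hundred million parameters leaves a few million, matching the reduction the text describes (real pipelines would combine pruning with quantization, distillation, and fine-tuning to recover accuracy).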

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still run servers but call for a single-node, rather than clustered, deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
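A minimal sketch of SLO-driven slice management follows; the slice names, fields, and thresholds are all illustrative, not 3GPP-defined:

```python
# Hypothetical per-slice service-level objectives.
SLICE_SLOS = {
    "urllc-robotics": {"max_latency_ms": 5, "min_mbps": 50},
    "embb-video": {"max_latency_ms": 50, "min_mbps": 300},
}

def slo_violations(measurements):
    """Compare live per-slice metrics against each slice's SLO; an AI/ML
    controller would act on the returned flags (re-route traffic,
    rebalance, or scale RAN resources)."""
    violated = []
    for name, m in measurements.items():
        slo = SLICE_SLOS[name]
        if m["latency_ms"] > slo["max_latency_ms"] or m["mbps"] < slo["min_mbps"]:
            violated.append(name)
    return violated

live = {
    "urllc-robotics": {"latency_ms": 7, "mbps": 80},  # latency SLO breached
    "embb-video": {"latency_ms": 20, "mbps": 350},    # healthy
}
print(slo_violations(live))  # ['urllc-robotics']
```

In practice the "act on flags" step is where the ML lives: forecasting demand per slice and reallocating resources before an SLO is breached rather than after.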

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN
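The forecasting and anomaly-detection primitives in the list above can be grounded with a simple baseline: a rolling z-score over a telemetry series (illustrative only; production systems would use far richer models):

```python
import numpy as np

def rolling_zscore_anomalies(series, window=20, threshold=4.0):
    """Flag points deviating strongly from the trailing window's mean;
    a baseline anomaly detector for CU/DU metric and log-rate telemetry."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = hist.mean(), hist.std() + 1e-9  # avoid divide-by-zero
        if abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

rng = np.random.default_rng(0)
metric = rng.normal(100.0, 1.0, 200)  # e.g. per-second throughput samples
metric[150] += 15.0                   # injected fault
print(rolling_zscore_anomalies(metric))
```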

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through things like the example shown above for software defined storage for federated namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing sit close to where data is created. Because there is no need to move data to the cloud for processing, analytics and AI capabilities can be applied in real time, providing immediate solutions and driving business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value; the edge would simply function as a spoke in a hub-and-spoke model, operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture (and, in fact, an entire ecosystem) is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Paul Smith-Goodson, Mon, 08 Aug 2022, 03:51 -0500: https://www.forbes.com/sites/moorinsights/2022/08/08/ibm-research-rolls-out-a-comprehensive-ai-and-ml-edge-research-strategy-anchored-by-enterprise-partnerships-and-use-cases/
CXL Borgs IBM’s OpenCAPI, Weaves Memory Fabrics With 3.0 Spec

System architects are often impatient about the future, especially when they can see something good coming down the pike. And thus, we can expect a certain amount of healthy and excited frustration when it comes to the Compute Express Link, or CXL, interconnect created by Intel, which with the absorption of Gen-Z technology from Hewlett Packard Enterprise and now OpenCAPI technology from IBM will become the standard for memory fabrics across compute engines for the foreseeable future.

The CXL 2.0 specification, which brings memory pooling across the PCI-Express 5.0 peripheral interconnect, will soon be available on CPU engines. Which is great. But all eyes are already turning to the just-released CXL 3.0 specification, which rides atop the PCI-Express 6.0 interconnect coming in 2023 with 2X the bandwidth, and people are already contemplating what another 2X of bandwidth might offer with CXL 4.0 atop PCI-Express 7.0 coming in 2025.

In a way, we expect CXL to follow the path blazed by IBM’s “Bluelink” OpenCAPI interconnect. Big Blue used the Bluelink interconnect in the “Cumulus” and “Nimbus” Power9 processors to provide NUMA interconnects across multiple processors, to run the NVLink protocol from Nvidia to provide memory coherence across the Power9 CPU and the Nvidia “Volta” V100 GPU accelerators, and to provide more generic memory coherent links to other kinds of accelerators through OpenCAPI ports. But the paths that OpenCAPI and CXL take will not be exactly the same, obviously. OpenCAPI is kaput and CXL is the standard for memory coherence in the datacenter.

IBM put faster OpenCAPI ports on the “Cirrus” Power10 processors, and they are used to provide those NUMA links as with the Power9 chips as well as a new OpenCAPI Memory Interface that uses the Bluelink SerDes as a memory controller, which runs a bit slower than a DDR4 or DDR5 controller but which takes up a lot less chip real estate and burns less power – and has the virtue of being exactly like the other I/O in the chip. In theory, IBM could have supported the CXL and NVLink protocols running atop its OpenCAPI interconnect on Power10, but there are some sour grapes there with Nvidia that we don’t understand – it seems foolish not to offer memory coherence with Nvidia’s current “Ampere” A100 and impending “Hopper” H100 GPUs. There may be an impedance mismatch between IBM and Nvidia in regards to signaling rates and lane counts between OpenCAPI and NVLink. IBM has PCI-Express 5.0 controllers on its Power10 chips – these are unique controllers and are not the Bluelink SerDes – and therefore could have supported the CXL coherence protocol, but as far as we know, Big Blue has chosen not to do that, either.

Given that we think CXL is the way a lot of GPU accelerators and their memories will link to CPUs in the future, this strategy by IBM seems odd. We are therefore nudging IBM to do a Power10+ processor with support for CXL 2.0 and NVLink 3.0 coherent links as well as with higher core counts and maybe higher clock speeds, perhaps in a year or a year and a half from now. There is no reason IBM cannot get some of the AI and HPC budget given the substantial advantages of its OpenCAPI memory, which is driving 818 GB/sec of memory bandwidth out of a dual chip module with 24 cores. We also expect that future datacenter GPU compute engines from Nvidia will support CXL in some fashion, but exactly how it will sit side-by-side with or merge with NVLink is unclear.

It is also unclear how the Gen-Z intellectual property donated to the CXL Consortium by HPE back in November 2021 and the OpenCAPI intellectual property donated to the organization steering CXL by IBM last week will be used to forge a CXL 4.0 standard, but these two system vendors are offering up what they have to help the CXL effort along. For which they should be commended. That said, we think both Gen-Z and OpenCAPI were way ahead of CXL and could have easily been tapped as in-node and inter-node memory and accelerator fabrics in their own right. HPE had a very elegant set of memory fabric switches and optical transceivers already designed, and IBM is the only CPU provider that offered CPU-GPU coherence across Nvidia GPUs and the ability to hook memory inside the box or across boxes over its OpenCAPI Memory Interface riding atop the Bluelink SerDes. (AMD is offering CPU-GPU coherence across its custom “Trento” Epyc 7003 series processors and its “Aldebaran” Instinct MI250X GPU accelerators in the “Frontier” exascale supercomputer at Oak Ridge National Laboratories.)

We are convinced that the Gen-Z and OpenCAPI technology will help make CXL better, and improve the kinds and varieties of coherence that are offered. CXL initially offered a kind of asymmetrical coherence, where CPUs can read and write to remote memories in accelerators as if they are local but using the PCI-Express bus instead of a proprietary NUMA interconnect – that is a vast oversimplification – rather than having full cache coherence across the CPUs and accelerators, which has a lot of overhead and which would have an impedance mismatch of its own because PCI-Express was, in days gone by, slower than a NUMA interconnect.

But as we have pointed out before, with PCI-Express doubling its speed every two years or so and latencies holding steady as that bandwidth jumps, we think there is a good chance that CXL will emerge as a kind of universal NUMA interconnect and memory controller, much as IBM has done with OpenCAPI, and Intel has suggested this for both CXL memory and CXL NUMA and Marvell certainly thinks that way about CXL memory as well. And that is why with CXL 3.0, the protocol is offering what is called “enhanced coherency,” which is another way of saying that it is precisely the kind of full coherency between devices that, for example, Nvidia offers across clusters of GPUs on an NVSwitch network or IBM offered between Power9 CPUs and Nvidia Volta GPUs. The kind of full coherency that Intel did not want to do in the beginning. What this means is that devices supporting the CXL.memory sub-protocol can access each other’s memory directly, not asymmetrically, across a CXL switch or a direct point-to-point network.

There is no reason why CXL cannot be the foundation of a memory area network as IBM has created with its “memory inception” implementation of OpenCAPI memory on the Power10 chip, either. As Intel and Marvell have shown in their conceptual presentations, the palette of chippery and interconnects is wide open with a standard like CXL, and improving it across many vectors is important. The industry let Intel win this one, and we will be better off in the long run because of it. Intel has largely let go of CXL and now all kinds of outside innovation can be brought to bear.

Ditto for the Universal Chiplet Interconnect Express being promoted by Intel as a standard for linking chiplets inside of compute engine sockets. Basically, we will live in a world where PCI-Express running UCI-Express connects chiplets inside of a socket, PCI-Express running CXL connects sockets and chips within a node (which is becoming increasingly ephemeral), and PCI-Express switch fabrics spanning a few racks or maybe even a row someday use CXL to link CPUs, accelerators, memory, and flash all together into disaggregated and composable virtual hardware servers.

For now, what is on the immediate horizon is CXL 3.0 running atop the PCI-Express 6.0 transport, and here is how CXL 3.0 is stacking up against the prior CXL 1.0/1.1 release and the current CXL 2.0 release on top of PCI-Express 5.0 transports:

When the CXL protocol is running in I/O mode – what is called CXL.io – it is essentially just the same as the PCI-Express peripheral protocol for I/O devices. The CXL.cache and CXL.memory protocols add caching and memory addressing atop the PCI-Express transport, and run at about half the latency of the PCI-Express protocol. To put some numbers on this, as we did back in September 2021 when talking to Intel, the CXL protocol specification requires that a snoop response on a snoop command when a cache line is missed has to be under 50 nanoseconds, pin to pin, and for memory reads, pin to pin, latency has to be under 80 nanoseconds. By contrast, a local DDR4 memory access on a CPU socket is around 80 nanoseconds, and a NUMA access to far memory in an adjacent CPU socket is around 135 nanoseconds in a typical X86 server.
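Putting those figures side by side (these are the spec bounds and typical values quoted above, not new measurements):

```python
# Latency figures quoted in the text, in nanoseconds. The CXL numbers are
# spec bounds (pin to pin); the DDR4/NUMA numbers are typical X86 values.
LAT_NS = {
    "CXL snoop response (spec bound)": 50,
    "CXL memory read (spec bound)": 80,
    "Local DDR4 access": 80,
    "Remote NUMA access (adjacent socket)": 135,
}

local = LAT_NS["Local DDR4 access"]
for name, ns in LAT_NS.items():
    print(f"{name}: {ns} ns ({ns / local:.2f}x local DRAM)")
```

The striking comparison is the middle two rows: the CXL memory-read bound matches a typical local DDR4 access, and comes in well under a conventional NUMA hop.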

With the CXL 3.0 protocol running atop the PCI-Express 6.0 transport, the bandwidth is being doubled on all three types of drivers without any increase in latency. That bandwidth increase, to 256 GB/sec across x16 lanes (including both directions) is thanks to the 256 byte flow control unit, or flit, fixed packet size (which is larger than the 64 byte packet used in the PCI-Express 5.0 transport) and the PAM-4 pulsed amplitude modulation encoding that doubles up the bits per signal on the PCI-Express transport. The PCI-Express protocol uses a combination of cyclic redundancy check (CRC) and three-way forward error correction (FEC) algorithms to protect the data being transported across the wire, which is a better method than was employed with prior PCI-Express protocols and hence why PCI-Express 6.0 and therefore CXL 3.0 will have much better performance for memory devices.
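A quick back-of-the-envelope check of that 256 GB/sec figure (raw signaling arithmetic, ignoring flit and FEC overhead, so not effective throughput):

```python
# PCIe 6.0 signals at 64 GT/s per lane: PAM-4 carries two bits per symbol,
# doubling PCIe 5.0's 32 GT/s at the same symbol rate. One transfer = one bit.
gt_per_lane = 64                              # gigatransfers/s per lane
lanes = 16
gb_per_s_one_dir = gt_per_lane * lanes / 8    # bits -> bytes
gb_per_s_both = 2 * gb_per_s_one_dir          # links run full duplex
print(gb_per_s_one_dir, gb_per_s_both)        # 128.0 256.0
```

That is where the doubling over CXL 2.0 on PCI-Express 5.0 (64 GB/sec per direction) comes from.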

The CXL 3.0 protocol does have a low latency CRC algorithm that breaks the 256 B flits into 128 B half flits and does its CRC check and transmissions on these subflits, which can reduce latencies in transmissions by somewhere between 2 nanoseconds and 5 nanoseconds.

The neat new thing coming with CXL 3.0 is memory sharing, and this is distinct from the memory pooling that was available with CXL 2.0. Here is what memory pooling looks like:

With memory pooling, you put a glorified PCI-Express switch that speaks CXL between hosts with CPUs and enclosures with accelerators with their own memories or just blocks of raw memory – with or without a fabric manager – and you allocate the accelerators (and their memory) or the memory capacity to the hosts as needed. As the diagram above shows on the right, you can do a point to point interconnect between all hosts and all accelerators or memory devices without a switch, too, if you want to hard code a PCI-Express topology for them to link on.
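Conceptually (this is a toy model, not a real CXL fabric-manager API; the class and method names are invented), CXL 2.0-style pooling behaves like a broker that hands exclusive slices of a shared pool to hosts:

```python
# Toy model of CXL 2.0-style memory pooling: a fabric manager assigns
# regions of a shared pool to hosts, each region owned by one host at a time.
class FabricManager:
    def __init__(self, pool_gb):
        self.free_gb = pool_gb
        self.allocations = {}          # host -> GB currently assigned

    def allocate(self, host, gb):
        if gb > self.free_gb:
            raise MemoryError(f"only {self.free_gb} GB left in pool")
        self.free_gb -= gb
        self.allocations[host] = self.allocations.get(host, 0) + gb

    def release(self, host):
        self.free_gb += self.allocations.pop(host, 0)

fm = FabricManager(pool_gb=1024)
fm.allocate("host-A", 256)
fm.allocate("host-B", 512)
print(fm.free_gb)          # 256
fm.release("host-A")
print(fm.free_gb)          # 512
```

CXL 3.0's memory sharing, described next, relaxes the exclusivity: multiple hosts can map the same region simultaneously.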

With CXL 3.0 memory sharing, memory out on a device can be literally shared simultaneously with multiple hosts at the same time. This chart below shows the combination of device shared memory and coherent copies of shared regions enabled by CXL 3.0:

System and cluster designers will be able to mix and match memory pooling and memory sharing techniques with CXL 3.0. CXL 3.0 will allow for multiple layers of switches, too, which was not possible with CXL 2.0, and therefore you can imagine PCI-Express networks with various topologies and layers being able to lash together all kinds of devices and memories into switch fabrics. Spine/leaf networks common among hyperscalers and cloud builders are possible, including devices that just share their cache, devices that just share their memory, and devices that share their cache and memory. (That is Type 1, Type 3, and Type 2 in the CXL device nomenclature.)

The CXL fabric is what will be truly useful and what is enabled in the 3.0 specification. With a fabric, you get a software-defined, dynamic network of CXL-enabled devices instead of a static network set up with a specific topology linking specific CXL devices. Here is a simple example of a non-tree topology implemented in a fabric that was not possible with CXL 2.0:

And here is the neat bit. The CXL 3.0 fabric can stretch to 4,096 CXL devices. Now, ask yourself this: How many of the big iron NUMA systems and HPC or AI supercomputers in the world have more than 4,096 devices? Not as many as you think. And so, as we have been saying for years now, for a certain class of clustered systems, whether the nodes are loosely or tightly coupled at their memories, a PCI-Express fabric running CXL is just about all they are going to need for networking. Ethernet or InfiniBand will just be used to talk to the outside world. We would expect to see flash devices front-ended by DRAM as a fast cache as the hardware under storage clusters, too. (Optane 3D XPoint persistent memory is no longer an option. But there is always hope for some form of PCM memory or another form of ReRAM. Don’t hold your breath, though.)

As we sit here mulling all of this over, we can’t help thinking about how memory sharing might simplify the programming of HPC and AI applications, especially if there is enough compute in the shared memory to do some collective operations on data as it is processed. There are all kinds of interesting possibilities. . . .

Anyway, making CXL fabrics is going to be interesting, and it will be the heart of many system architectures. The trick will be sharing the memory to drive down the effective cost of DRAM – research by Microsoft Azure showed that on its cloud, memory capacity utilization was only an average of about 40 percent, and half of the VMs running never touched more than half of the memory allocated to their hypervisors from the underlying hardware – to pay for the flexibility that comes through CXL switching and composability for devices with memory and devices as memory.
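Rough arithmetic on why that utilization figure matters (the fleet size and pooled-provisioning target below are hypothetical, chosen only to illustrate the cited 40 percent number):

```python
# Illustrative sizing: a hypothetical 100 TB of fleet DRAM at the ~40%
# average utilization the Microsoft Azure research found.
fleet_dram_tb = 100.0
avg_utilization_pct = 40           # from the Azure research cited above
idle_tb = fleet_dram_tb * (100 - avg_utilization_pct) / 100

# If a shared CXL pool let operators provision for, say, a 70% target
# instead of worst-case per-server peaks (hypothetical target):
provision_target_pct = 70
reclaimable_tb = fleet_dram_tb * (100 - provision_target_pct) / 100

print(idle_tb, reclaimable_tb)     # 60.0 30.0
```

Even a modest reclamation of that stranded capacity is what pays for the flexibility of CXL switching and composability.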

What we want, and what we have always wanted, was a memory-centric systems architecture that allows all kinds of compute engines to share data in memory as it is being manipulated and to move that data as little as possible. This is the road to higher energy efficiency in systems, at least in theory. Within a few years, we will get to test this all out in practice, and it is legitimately exciting. All we need now is PCI-Express 7.0 two years earlier and we can have some real fun.

Timothy Prickett Morgan, Tue, 09 Aug 2022, 06:18 -0500: https://www.nextplatform.com/2022/08/09/cxl-borgs-ibms-opencapi-weaves-memory-fabrics-with-3-0-spec/
IBM Research Albany Nanotech Center Is A Model To Emulate For CHIPS Act

With the passage of the CHIPS+ Act by Congress and its imminent signing by the President of the United States, a lot of attention has been paid to the construction of new semiconductor manufacturing megasites by Intel, TSMC, and Samsung. But beyond the manufacturing side of the semiconductor business, there is a significant need to invest in related areas such as research, talent training, small and medium business development, and academic cooperation. I recently had the opportunity to tour a prime example of such a facility that integrates all these other aspects of chip manufacturing into a tight industry, government, and academic partnership. That partnership has been going on for over 20 years in Albany, New York where IBM Research has a nanotechnology center that is located within the State University of New York (SUNY) Poly's Albany NanoTech Complex. With significant investment by New York State through the NY CREATES development agency, IBM in close partnership with several universities and industry partners is developing state-of-the-art semiconductor process technologies in working labs for the next generation of computer chips.

The center provides a unique facility for semiconductor research – its open environment facilitates collaboration between leading equipment and materials suppliers, researchers, engineers, academics, and EDA vendors. Presently, IBM has a manufacturing and research partnership with Samsung Electronics and a research partnership was announced with Intel last year. Key chipmaking suppliers such as ASML, KLA, and Tokyo Electron (TEL) have equipment installed, and are working actively with IBM developing advanced processes and metrology for leading edge technologies.

These facilities do not come cheap. It takes billions of dollars of investment and many years of research to achieve each new breakthrough. For example, the High-k metal gate took 15 years to go into products; the FinFET transistor, essential today, took 13 years; and the next generation transistor, the gate-all-around/nano sheet, which Samsung is putting into production now, was in development for 14 years. In addition, the cost to manufacture chips at each new process node is increasing 20-30% and the R&D costs are doubling for each node’s development. To continue supporting this strategic development, there needs to be a partnership between industry, academia, and government.

IBM Makes The Investment

You might ask why IBM, which sold off its semiconductor manufacturing facilities over the years, is so involved in this deep and expensive research. Well, for one, IBM is very, very good at semiconductor process development. The company pioneered several critical semiconductor technologies over the decades. But being good at a technology does not pay the bills, so IBM’s second motivation is that the company needs the best technology for its own Power and IBM Z computers. To that end, IBM is primarily focused on developments that support high-performance computing and AI processing.

Additional strategic suppliers and partners help to scale these innovations beyond just IBM's contribution. The best equipment from the world-class equipment suppliers provides a testbed for partners to experiment and advance the state-of-the-art technology. IBM, along with its equipment partners, has built specialized equipment where needed to experiment beyond the capabilities of standard equipment.

But IBM only succeeds if it can transfer the technology from the labs into production. To do so, IBM and Samsung have been working closely on process developments and the technology transfer.


The Albany NanoTech Complex dovetails with the CHIPS Act in that it will allow the United States to develop leadership in manufacturing technologies. It can also allow smaller companies to test innovative technologies in this facility. The present fab building is running 24/7/365 and is highly utilized, but there's space to build another building that could significantly expand the clean room space. There's also a plan for a building that will be able to support the next generation of ASML EUV equipment, called high-NA EUV.

The Future is Vertical

The Albany site also is a center for chiplet technology research. As semiconductor scaling slows, unique packaging solutions for multi-die chips will become the norm for high-performance and power-efficient computing. IBM Research has an active program of developing unique 2.5D and 3D die-stacking technologies. Today the preferred substrate for building these multi-die chips is still made from silicon, based on the availability of tools and manufacturing knowledge. There are still unique process steps that must be developed to handle the specialized processing, including laser debonding techniques.

IBM also works with test equipment manufacturers because building 3D structures with chiplets presents some unique testing challenges. Third party EDA vendors also need to be part of the development process, because the ultimate goal of chiplet-based design is to be able to combine chips from different process nodes and different foundries.

Today chiplet technology is embryonic, but the future will absolutely need this technology to build the next generation of data center hardware. This is a situation where the economics and technology are coming together at the right time.

Summary

The Albany NanoTech Complex is a model for the semiconductor industry and demonstrates one way to bring researchers from various disciplines and various organizations together to advance state-of-the-art semiconductor technology. But this model also needs to scale up and be replicated throughout North America. With more funding and more scale, there also needs to be an appropriately skilled workforce. Here is where the US needs to make investments in STEM education on par with the late-1950s Space Race; sites like Albany, which offer R&D on leading-edge process development, should inspire more students to go into physics, chemistry, and electrical engineering and not into building the next cryptocurrency startup.

Tirias Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for IBM, Intel, GlobalFoundries, Samsung, and other foundries.

Kevin Krewell, Mon, 08 Aug 2022, 11:08 -0500: https://www.forbes.com/sites/tiriasresearch/2022/08/08/ibm-research-albany-nanotech-center-is-a-model-to-emulate-for-chips-act/
How 'living architecture' could help the world avoid a soul-deadening digital future

My first Apple laptop felt like a piece of magic made just for me—almost a part of myself. The rounded corners, the lively shading, the delightful animations. I had been using Windows my whole life, starting on my family's IBM 386, and I never thought using a computer could be so fun.

Indeed, Apple co-founder Steve Jobs said that computers were like bicycles for the mind, extending your possibilities and helping you do things not only more efficiently but also more beautifully. Some technologies seem to unlock your humanity and make you feel inspired and alive.

But not all technologies are like this. Sometimes devices do not work reliably or as expected. Often you have to change to conform to the limitations of a system, as when you need to speak differently so a digital voice assistant can understand you. And some platforms bring out the worst in people. Think of anonymous flame wars.

As a researcher who studies technology, design and ethics, I believe that a hopeful way forward comes from the world of architecture. It all started decades ago with an architect's observation that newer buildings tended to be lifeless and depressing, even if they were made using ever fancier tools and techniques.

Tech's wear on humanity

The problems with technology are myriad and diffuse, and widely studied and reported: from short attention spans and tech neck to clickbait and AI bias to trolling and shaming to conspiracy theories and misinformation.

As people increasingly live online, these issues may only get worse. Some latest visions of the metaverse, for example, suggest that humans will come to live primarily in virtual spaces. Already, people worldwide spend on average seven hours per day on digital screens—nearly half of waking hours.

While public awareness of these issues is on the rise, it's not clear whether or how tech companies will be able to address them. Is there a way to ensure that future technologies are more like my first Apple laptop and less like a Twitter pile-on?

Over the past 60 years, the architectural theorist Christopher Alexander pursued questions similar to these in his own field. Alexander, who died in March 2022 at age 85, developed a theory of design that has made inroads in architecture. Translated to the technology field, this theory can provide the principles and process for creating technologies that unlock people's humanity rather than suppress it.

Christopher Alexander discussing place, repetition and adaptation.

How good design is defined

Technology design is beginning to mature. Tech companies and product managers have realized that a well-designed user interface is essential for a product's success, not just nice to have.

As professions mature, they tend to organize their knowledge into concepts. Design patterns are a great example of this. A design pattern is a reusable solution to a problem that designers need to solve frequently.

In user experience design, for instance, such problems include helping users enter their shipping information or get back to the home page. Instead of reinventing the wheel every time, designers can apply a design pattern: clicking the logo at the upper left always takes you home. With design patterns, life is easier for designers, and the end products are better for users.
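In code terms, a design pattern captures the solution once so it can be reused everywhere. A toy illustration (not from the article) of the logo-always-goes-home convention:

```python
# Illustrative only: a tiny reusable "design pattern" captured as code,
# analogous to the logo-goes-home convention described above.
def logo_link(home_url="/"):
    """Reusable pattern: the site logo always links to the home page."""
    return (
        f'<a href="{home_url}" aria-label="Home">'
        '<img src="/logo.svg" alt="Home"></a>'
    )

print(logo_link())
```

Once the pattern exists, no designer rebuilds it from scratch, and every user gets the same predictable behavior.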

Design patterns facilitate good design in one sense: They are efficient and productive. Yet they do not necessarily lead to designs that are good for people. They can be sterile and generic. How, exactly, to avoid that is a major challenge.

A seed of hope lies in the very place where design patterns originated: the work of Christopher Alexander. Alexander dedicated his life to understanding what makes an environment good for humans—good in a deep, moral sense—and how designers might create structures that are likewise good.

His work on design patterns, dating back to the 1960s, was his initial effort at an answer. The patterns he developed with his colleagues included details like how many stories a good building should have and how many light sources a good room should have.

But Alexander found design patterns ultimately unsatisfying. He took that work further, eventually publishing his theory in his four-volume magnum opus, "The Nature of Order."

While Alexander's work on design patterns is very well known—his 1977 book "A Pattern Language" remains a bestseller—his later work, which he deemed much more important, has been largely overlooked. No surprise, then, that his deepest insights have not yet entered technology design. But if they do, good design could come to mean something much richer.

On creating structures that foster life

Architecture was getting worse, not better. That was Christopher Alexander's conclusion in the mid-20th century.

Much modern architecture is inert and makes people feel dead inside. It may be sleek and intellectual—it may even win awards—but it does not help generate a feeling of life within its occupants. What went wrong, and how might architecture correct its course?

Motivated by this question, Alexander conducted numerous experiments throughout his career, going deeper and deeper. Beginning with his design patterns, he discovered that the designs that stirred up the most feeling in people, what he called living structure, shared certain qualities. This wasn't just a hunch, but a testable empirical theory, one that he validated and refined from the late 1970s until the turn of the century. He identified 15 qualities, each with a technical definition and many examples.

The qualities are:

  • Levels of scale
  • Strong centers
  • Boundaries
  • Alternating repetition
  • Positive space
  • Good shape
  • Local symmetries
  • Deep interlock and ambiguity
  • Contrast
  • Gradients
  • Roughness
  • Echoes
  • The void
  • Simplicity and inner calm
  • Not-separateness

As Alexander writes, living structure is not just pleasant and energizing, though it is also those. Living structure reaches into humans at a transcendent level—connecting people with themselves and with one another—with all humans across centuries and cultures and climates.

Yet modern architecture, as Alexander showed, has very few of the qualities that make living structure. In other words, over the 20th century architects taught one another to do it all wrong. Worse, these errors were crystallized in building codes, zoning laws, awards criteria and education. He decided it was time to turn things around.

Alexander's ideas have been hugely influential in architectural theory and criticism. But the world has not yet seen the paradigm shift he was hoping for.

By the mid-1990s, Alexander recognized that for his aims to be achieved, there would need to be many more people on board—and not just architects, but all sorts of planners, infrastructure developers and everyday people. And perhaps other fields besides architecture. The digital revolution was coming to a head.

Alexander's invitation to technology designers

As Alexander doggedly pursued his research, he started to notice the potential for digital technology to be a force for good. More and more, digital technology was becoming part of the human environment—becoming, that is, architectural.

Meanwhile, Alexander's ideas about design patterns had entered the world of technology design as a way to organize and communicate design knowledge. To be sure, this older work of Alexander's proved very valuable, particularly to software engineering.

Because of his fame for design patterns, in 1996 Alexander was invited to provide a keynote address at a major software engineering conference sponsored by the Association for Computing Machinery.

In his talk, Alexander remarked that the tech industry was making great strides in efficiency and power but perhaps had not paused to ask: "What are we supposed to be doing with all these programs? How are they supposed to help the Earth?"

"For now, you're like guns for hire," Alexander said. He invited the audience to make technologies for good, not just for pay.

Loosening the design process

In "The Nature of Order," Alexander defined not only his theory of living structure, but also a process for creating such structure.

In short, this process involves democratic participation and springs from the bottom up in an evolving progression incorporating the 15 qualities of living structure. The end result isn't known ahead of time—it's adapted along the way. The term "organic" comes to mind, and this is appropriate, because nature almost invariably creates living structure.

But typical architecture—and design in many fields—is, in contrast, top-down and strictly defined from the outset. In this machinelike process, rigid precision is prioritized over local adaptability, project roles are siloed apart and the emphasis is on commercial value and investment over anything else. This is a recipe for lifeless structure.

Alexander's work suggests that if living structure is the goal, the design process is the place to focus. And the technology field is starting to show inklings of change.

In project management, for example, the traditional waterfall approach followed a rigid, step-by-step schedule defined upfront. The turn of the century saw the emergence of a more dynamic approach, dubbed agile, which allows for more adaptability through frequent check-ins and prioritization, progressing in "sprints" of one to two weeks rather than longer phases.

And in design, the human-centered design paradigm is likewise gaining steam. Human-centered design emphasizes, among other elements, continually testing and refining small changes with respect to design goals.

A design process that promotes life

However, Alexander would say that both these trajectories are missing some of his deeper insights about living structure. They may spark more purchases and increase stock prices, but these approaches will not necessarily create technologies that are good for each person and good for the world.

Yet there are some emerging efforts toward this deeper end. For example, design pioneer Don Norman, who coined the term "user experience," has been developing his ideas on what he calls humanity-centered design. This goes beyond human-centered design to focus on ecosystems, take a long-term view, incorporate human values and involve stakeholder communities along the way.

The vision of humanity-centered design calls for sweeping changes in the technology field. This is precisely the kind of reorientation that Alexander was calling for in his 1996 keynote speech. Just as design patterns suggested in the first place, the technology field doesn't need to reinvent the wheel. Technologists and people of all stripes can build up from the tremendous, careful work that Alexander has left.



This article is republished from The Conversation under a Creative Commons license. Read the original article.The Conversation

Citation: How 'living architecture' could help the world avoid a soul-deadening digital future (2022, August 10) retrieved 10 August 2022 from https://techxplore.com/news/2022-08-architecture-world-soul-deadening-digital-future.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

8 Ultimate AI Projects for Beginners

The MarketWatch News Department was not involved in the creation of this content.

Aug 07, 2022 (AmericaNewsHour) -- So you have decided to take an Artificial Intelligence (AI) course to stay relevant in an increasingly digital world. Perhaps you are a fresh grad who wants to improve your chances of getting hired, or you may be a working professional in IT with a passion for technology and innovative applications.

Whatever your reason to upskill, you can gain job-ready skills with a comprehensive Artificial Intelligence Engineer course in Seattle in collaboration with IBM. Learn about the different AI technologies, such as Machine Learning and Deep Learning, and the accompanying programming skills to become an AI expert.

A collaborative course with IBM can help you learn AI and Analytics on the IBM platform while getting hands-on experience with cloud services. What's more, you gain working exposure to IBM Watson and other suites, along with $1,200 worth of IBM Cloud credits. This industry-recognized master's program validates your knowledge of AI on the IBM platform. Simplilearn's AI course in Seattle trains you on Python libraries and in-demand techniques in artificial neural networks and more. You learn to work on AI-driven projects and carve your career in the vast AI space.

At the same time, access to real-life projects across industry verticals helps you hone your theoretical knowledge and skills with a certificate to authenticate the learning path.

A Brief Guide to writing your AI Project

To start an AI project, begin by learning the tools and libraries. Most AI jobs do not require you to understand the full complexity behind Machine Learning models, but rather the ability to build AI solutions, scale them, and deploy them to solve end-user problems.

So you do not need an in-depth knowledge of how the models work. Instead, as a beginner, pick up some valuable tools.

To begin with, learn TensorFlow, the popular Machine Learning tool. Get familiar with the Google Colab website. Here you can start coding your first neural network within minutes, without the need for installing any application on your computer.

Next, import the necessary libraries, like TensorFlow and matplotlib, the go-to data visualization library.

Load the data you plan to work with and split it into training and test datasets. The training dataset trains your neural network; after training completes, you evaluate its performance on the test dataset.

Next, visualize the data.

Define the Machine Learning model and then compile it. As a beginner, the "Sequential" model works great for creating your neural network, allowing you to stack the neural network layers sequentially. Compile the model using optimizers and loss functions. 

Train the model, test, and check the functions for accuracy on the test data. Retrieve the model from the file system, and ensure it works.
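Put together, the workflow above takes only a few lines of Keras. The sketch below is illustrative: it uses a tiny random dataset in place of real data (so the reported accuracy is near chance), and the layer sizes, epoch count, and 3-class setup are arbitrary choices for demonstration.

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic dataset standing in for real data: 8x8 "images", 3 classes.
x = np.random.rand(200, 8, 8).astype("float32")
y = np.random.randint(0, 3, size=200)

# Split into training and test sets (a simple 80/20 split).
x_train, x_test = x[:160], x[160:]
y_train, y_test = y[:160], y[160:]

# A "Sequential" model stacks the layers one after another.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(8, 8)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Compile with an optimizer and a loss function, then train.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, verbose=0)

# Evaluate on the held-out test set.
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"test accuracy: {accuracy:.2f}")
```

Swapping the synthetic arrays for a real dataset (for example, `tf.keras.datasets.mnist.load_data()`) and adjusting the input shape and class count is all it takes to turn this skeleton into a first working project.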

Top 8 Ultimate AI Projects for Beginners

You can work on several AI projects with just a good grasp of libraries.  Even if you are a newbie to AI, you can build simple projects and showcase them in your portfolio to land good jobs.

Building AI projects can help sharpen your skill sets and display your hands-on knowledge to prospective employers.

Here are some simple AI Projects for beginners:

1. Fake News Detection

Fake news has become a matter of concern, especially in an era of social media. False information is often circulated as news making it tough to distinguish between fake and real news. Rumors and fake news can threaten peace in a neighborhood and create panic. So it is necessary to detect fake news early on to prevent it from spreading and causing harm.

Build a fake news detector using the Real and Fake News dataset available on Kaggle. Using a pre-trained open-source Machine Learning/NLP model, you can add to your output layer for text classification and detection.
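A common baseline for this task pairs TF-IDF features with a linear classifier. The sketch below uses scikit-learn with a few made-up headlines standing in for the Kaggle dataset; the headlines, labels, and classifier settings are illustrative assumptions, and a real detector would train on the full corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier

# Hypothetical stand-in for the Kaggle Real and Fake News dataset.
headlines = [
    "scientists publish peer reviewed climate study",
    "government releases official budget figures",
    "miracle cure doctors do not want you to know",
    "shocking secret celebrity hoax revealed click now",
]
labels = ["REAL", "REAL", "FAKE", "FAKE"]

# TF-IDF turns each headline into a weighted word-frequency vector.
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(headlines)

# A linear classifier often used in fake-news tutorials.
clf = PassiveAggressiveClassifier(max_iter=50, random_state=0)
clf.fit(features, labels)

# Classify an unseen headline.
prediction = clf.predict(vectorizer.transform(
    ["shocking miracle hoax you must click"]))[0]
print(prediction)
```

The same two-step pattern (vectorize, then classify) carries over unchanged when you swap in the real dataset or a stronger model.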

2. Translation Services

If Natural Language Processing (NLP) interests you, try building a translation services application.

A translator model extracts features from sentences and assigns each word a weight indicating its importance. Encoder and decoder components are trained together end-to-end.

To build your AI translator app, load a pre-trained transformer model into Python and convert the said text into tokens to feed it into the pre-trained model. Use the GluonNLP library to load the dataset for training and testing.

3. Auto-Correction

The auto-correction application is part of everyday functions, making lives easier by correcting spelling and grammatical errors.

To begin with, build an autocorrect tool in Python using the TextBlob library. Call the correct() function on a piece of text to check for mistakes and replace them with the closest matching words. However, the TextBlob library has limitations and cannot account for the context in which words are used together. You can work around these limitations by building algorithms on top of a pre-trained NLP model already trained to identify the most appropriate words.
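Under the hood, spell correctors of this kind rank candidate words by edit distance and corpus frequency. Here is a minimal standard-library sketch of that idea; the tiny corpus is a hypothetical stand-in, since a real corrector would learn word frequencies from millions of words.

```python
import re
from collections import Counter

# Hypothetical corpus; a real corrector learns frequencies from a large text.
corpus = "the quick brown fox jumps over the lazy dog the fox is quick"
WORDS = Counter(re.findall(r"\w+", corpus.lower()))

def edits1(word):
    """All strings one edit away: deletes, transposes, replaces, inserts."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the known candidate with the highest corpus frequency."""
    if word in WORDS:
        return word
    candidates = [w for w in edits1(word) if w in WORDS] or [word]
    return max(candidates, key=WORDS.get)

print(correct("quik"))  # -> quick
print(correct("foxx"))  # -> fox
```

Growing the corpus and adding two-edit candidates is the usual next step toward a practical tool.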

4. Spam Detection

Bots are a part of social media platforms. They can be irritating when you keep receiving notifications that promote some brand or ideology. For instance, the comment section of most Instagram posts has annoying bots. By building a spam detection model using AI, you can filter the spam from legitimate comments. Scrape the Web for data and access the social media platform's API with Python for gathering unlabelled comments. You can also use training data from Kaggle's YouTube spam dataset and identify keywords that appear in spam comments.

Assign weightage to words in spam comments, and evaluate them against the scraped comments. Eliminate whitespaces and punctuation errors and clean data correctly for fine-tuning algorithm performance on similar texts.
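The weighting-and-threshold scheme described above can be sketched in plain Python. The keyword weights and cutoff below are made-up values for illustration; in practice they would be derived from labeled data such as the Kaggle YouTube spam dataset.

```python
import re
import string

# Hypothetical weights; in practice learned from labeled spam comments.
SPAM_WEIGHTS = {"free": 2.0, "subscribe": 1.5, "click": 1.5,
                "winner": 2.5, "promo": 2.0, "follow": 1.0}
THRESHOLD = 2.5

def clean(comment):
    """Lowercase, strip punctuation, and collapse extra whitespace."""
    comment = comment.lower().translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", comment).strip()

def spam_score(comment):
    """Sum the weights of spam keywords appearing in the comment."""
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in clean(comment).split())

def is_spam(comment):
    return spam_score(comment) >= THRESHOLD

print(is_spam("Click here for FREE promo!!!"))  # True
print(is_spam("Great photo, love the colors"))  # False
```

Note how the cleaning step (whitespace and punctuation removal) happens before scoring, exactly as the tuning advice above suggests.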

Another way is to leverage a pre-trained model like BERT and ALBERT. They also consider the sentence context, coherence, and interpretation. 

5. Handwriting Detection

You can build a system to detect handwritten digits using artificial neural networks. Handwriting differs between individuals, with characters varying in style and shape. You can use a convolutional neural network (CNN) to identify and digitize handwritten characters for precise interpretation. Using the HASYv2 dataset, with 168,000 images across 369 classes, you can build a handwriting recognition system that identifies and digitizes mathematical symbols and handwriting from photos, touchscreen devices, and paper. Handwriting recognition applications are used for bank cheque authentication and digitizing hand-filled forms and notes.
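A CNN for this task can be defined in a few lines of Keras. The sketch below only builds the architecture, assuming 32x32 grayscale inputs (the HASYv2 image size) and one output unit per symbol class; the filter and layer sizes are illustrative choices, and training on the actual dataset is omitted.

```python
import tensorflow as tf

# Convolutional network for 32x32 grayscale symbols, 369 HASYv2 classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(369, activation="softmax"),  # one unit per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The convolution and pooling layers learn local stroke patterns, which is what lets the same model cope with the shape variation between different writers.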

6. Chatbots

Chatbots are a simple AI project using Python to embed within a website or application. They serve customer queries, automate conversations, are available 24/7 for personalized customer experience, and extract insights from customer behavior. You can begin with a simple chatbot and move on to building advanced ones.

Chatbots use NLP to engage with humans across various languages.

Audio signals and human text are broken down into simple components, analyzed, and converted into a machine-understandable representation. You can use pre-trained tools, packages, and speech recognition systems as you move on to build more intelligent and responsive chatbots.
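Before reaching for NLP libraries, a first chatbot can be purely rule-based. The sketch below matches simple intent patterns to canned replies; the patterns and responses are invented for illustration.

```python
import re

# Hypothetical intent patterns mapped to canned replies.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     "Hello! How can I help you today?"),
    (re.compile(r"\b(price|cost)\b", re.I),
     "Our plans start at $10/month."),
    (re.compile(r"\b(hours|open)\b", re.I),
     "We are open 9am-5pm, Monday to Friday."),
]
FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message):
    """Return the first matching canned reply, or a fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("Hi there"))            # greeting intent
print(reply("What does it cost?"))  # pricing intent
print(reply("asdf"))                # fallback
```

Each rule is an intent in miniature; swapping the regex matcher for a trained intent classifier is the natural upgrade path to a smarter bot.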

7. Resume Parser

HR firms and recruiters spend time browsing resumes to find the best fit for a job. Automating the process can, however, save time and resources. A popular method is keyword matching: resumes are shortlisted based on the keywords identified in an application, and where keywords are missing, the candidate's resume is rejected. However, candidates who know about the keyword-matching algorithm can tailor their resumes to get shortlisted, which may not be what the hiring manager wants.

A resume parser built using artificial intelligence and machine learning techniques can scan a resume to identify skilled and best-fit candidates, removing applications filled with unnecessary keywords. The Resume Dataset on Kaggle can be pre-processed to build this model. 
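The keyword-matching baseline this section describes can be sketched in plain Python. The skill list and sample resume below are hypothetical; a production parser would also extract text from PDFs and layer an ML model on top, as described above.

```python
# Hypothetical required skills for a job posting.
REQUIRED_SKILLS = {"python", "machine learning", "sql", "tensorflow"}

def skill_match(resume_text, required=REQUIRED_SKILLS):
    """Return the matched skills and a coverage score between 0 and 1."""
    text = resume_text.lower()
    matched = {skill for skill in required if skill in text}
    return matched, len(matched) / len(required)

resume = """Data analyst with 3 years of Python and SQL experience,
including machine learning pipelines in scikit-learn."""

matched, score = skill_match(resume)
print(sorted(matched))  # ['machine learning', 'python', 'sql']
print(score)            # 0.75
```

Ranking candidates by this coverage score reproduces the shortlisting step; the ML layer's job is then to catch resumes stuffed with keywords but lacking substance.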

8. Plagiarism Checker

Content duplication is a problem, especially for research papers and article submissions.

Besides copyright infringement issues, it leads to reputational damage for businesses and poor rankings in search engines. So educational institutions, journals, magazines, and others with a website presence need to detect plagiarized content.

You can build a plagiarism checker using AI to detect duplicated content. Text-mining techniques in Python, served through a framework such as Flask, are common tools for detecting plagiarism.

Plagiarism checkers are also for end-users like content creators, bloggers, editors, publishers, and writers. They can recognize duplicated content and identify whether a piece is unique or copy-pasted.
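At its core, plagiarism detection compares word overlap between documents. Here is a minimal standard-library sketch using Jaccard similarity; the sample texts are invented, and real checkers would also compare n-grams and search the web for candidate sources.

```python
import re

def word_set(text):
    """Lowercase the text and return its set of distinct words."""
    return set(re.findall(r"\w+", text.lower()))

def jaccard(a, b):
    """Similarity = shared words / total distinct words (0 to 1)."""
    wa, wb = word_set(a), word_set(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

original = "Quantum computers use qubits to perform parallel computation"
copied = "Quantum computers use qubits to perform parallel computation today"
fresh = "Classical machines process bits one operation at a time"

print(round(jaccard(original, copied), 2))  # high overlap: likely copied
print(round(jaccard(original, fresh), 2))   # low overlap: likely original
```

Scores above a chosen threshold flag a passage for human review, which is how end-user tools surface suspect paragraphs rather than issuing verdicts.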

Summary

So if you are planning a career in AI, register for an AI Engineer course and get started on projects to build a great portfolio.

The post 8 Ultimate AI Projects for Beginners appeared first on America News Hour.


Source: MarketWatch, 7 August 2022. https://www.marketwatch.com/press-release/8-ultimate-ai-projects-for-beginners-2022-08-07
IBM Annual Cost of Data Breach Report 2022: Record Costs Usually Passed On to Consumers, “Long Breach” Expenses Make Up Half of Total Damage

IBM’s annual Cost of Data Breach Report for 2022 is packed with revelations, and as usual none of them are good news. Headlining the report is the record-setting cost of data breaches, with the global average now at $4.35 million. The report also reveals that much of that expense comes with the data breach version of “long Covid,” expenses that are realized more than a year after the attack.

Most organizations (60%) are passing these added costs on to consumers in the form of higher prices. And while 83% of organizations now report experiencing at least one data breach, only a small minority are adopting zero trust strategies.

Security AI and automation greatly reduces expected damage

The IBM report draws on input from 550 global organizations surveyed about the period between March 2021 and March 2022, in partnership with the Ponemon Institute.

Though the average cost of a data breach is up, it is only by about 2.6%; the average in 2021 was $4.24 million. This represents a total climb of 13% since 2020, however, reflecting the general spike in cyber crime seen during the pandemic years.

Organizations are also increasingly not opting to absorb the cost of data breaches, with the majority (60%) compensating by raising consumer prices separate from any other recent increases due to inflation or supply chain issues. The report indicates that this may be an underreported upward influence on prices of consumer goods, as 83% of organizations now say that they have been breached at least once.

Brad Hong, Customer Success Manager for Horizon3.ai, sees a potential consumer backlash on the horizon once public awareness of this practice grows: “It’s already a breach of confidence to lose the confidential data of customers, and sure there’s bound to be an organization across those surveyed who genuinely did put in the effort to protect against and curb attacks, but for those who did nothing, those who, instead of creating a disaster recovery plan, just bought cyber insurance to cover the org’s operational losses, and those who simply didn’t care enough to heed the warnings, it’s the coup de grâce to then pass the cost of breaches to the same customers who are now the victims of a data breach. I’d be curious to know what percent of the 60% of organizations who increased the price of their products and services are using the extra revenue for a war chest or to actually reinforce their security—realistically, it’s most likely just being used to fill a gap in lost revenue for shareholders’ sake post-breach. Without government regulations outlining restrictions on passing cost of breach to consumer, at the least, not without the honest & measurable efforts of a corporation as their custodian, what accountability do we all have against that one executive who didn’t want to change his/her password?”

Breach costs also have an increasingly long tail, as nearly half now come over a year after the date of the attack. The largest of these are generally fines that are levied after an investigation, and decisions or settlements in class action lawsuits. While the popular new “double extortion” approach of ransomware attacks can drive long-term costs in this way, the study finds that companies paying ransom demands to settle the problem quickly aren’t necessarily seeing a large amount of overall savings: their average breach cost drops by just $610,000.

Sanjay Raja, VP of Product with Gurucul, expands on how knock-on data breach damage can continue for years: “The follow-up attack effect, as described, is a significant problem as the playbooks and solutions provided to security operations teams are overly broad and lack the necessary context and response actions for proper remediation. For example, shutting down a user or application or adding a firewall block rule or quarantining a network segment to negate an attack is not a sustainable remediation step to protect an organization on an ongoing basis. It starts with a proper threat detection, investigation and response solution. Current SIEMs and XDR solutions lack the variety of data, telemetry and combined analytics to not only identify an attack campaign and even detect variants on previously successful attacks, but also provide the necessary context, accuracy and validation of the attack to build both a precise and complete response that can be trusted. This is an even greater challenge when current solutions cannot handle complex hybrid multi-cloud architectures leading to significant blind spots and false positives at the very start of the security analyst journey.”

Rising cost of data breach not necessarily prompting dramatic security action

In spite of over four out of five organizations now having experienced some sort of data breach, only slightly over 20% of critical infrastructure companies have moved to zero trust strategies to secure their networks. Cloud security is also lagging as well, with a little under half (43%) of all respondents saying that their security practices in this area are either “early stage” or do not yet exist.

Those that have onboarded security automation and AI elements are the only group seeing massive savings: their average cost of data breach is $3.05 million lower. This particular study does not track average ransom demands, but refers to Sophos research that puts the most recent figure at $812,000 globally.

The study also notes serious problems with incident response plans, especially troubling in an environment in which the average ransomware attack is now carried out in four days or less and the “time to ransom” has dropped to a matter of hours in some cases. 37% of respondents say that they do not test their incident response plans regularly. 62% say that they are understaffed to meet their cybersecurity needs, and these organizations tend to suffer over half a million more dollars in damages when they are breached.

Of course, cost of data breaches is not distributed evenly by geography or by industry type. Some are taking much bigger hits than others, reflecting trends established in prior reports. The health care industry is now absorbing a little over $10 million in damage per breach, with the average cost of data breach rising by $1 million from 2021. And companies in the United States face greater data breach costs than their counterparts around the world, at over $8 million per incident.

Shawn Surber, VP of Solutions Architecture and Strategy with Tanium, provides some insight into the unique struggles that the health care industry faces in implementing effective cybersecurity: “Healthcare continues to suffer the greatest cost of breaches but has among the lowest spend on cybersecurity of any industry, despite being deemed ‘critical infrastructure.’ The increased vulnerability of healthcare organizations to cyber threats can be traced to outdated IT systems, the lack of robust security controls, and insufficient IT staff, while valuable medical and health data— and the need to pay ransoms quickly to maintain access to that data— make healthcare targets popular and relatively easy to breach. Unlike other industries that can migrate data and sunset old systems, limited IT and security budgets at healthcare orgs make migration difficult and potentially expensive, particularly when an older system provides a small but unique function or houses data necessary for compliance or research, but still doesn’t make the cut to transition to a newer system. Hackers know these weaknesses and exploit them. Additionally, healthcare orgs haven’t sufficiently updated their security strategies and the tools that manufacturers, IT software vendors, and the FDA have made haven’t been robust enough to thwart the more sophisticated techniques of threat actors.”

Familiar incident types also lead the list of the causes of data breaches: compromised credentials (19%), followed by phishing (16%). Breaches initiated by these methods also tended to be a little more costly, at an average of $4.91 million per incident.


Cutting the cost of data breach

Though the numbers are never as neat and clean as averages would indicate, it would appear that the cost of data breaches is cut dramatically for companies that implement solid automated “deep learning” cybersecurity tools, zero trust systems and regularly tested incident response plans. Mature cloud security programs are also a substantial cost saver.

By Scott Ikeda, CPO Magazine, 1 August 2022. Source: https://www.cpomagazine.com/cyber-security/ibm-annual-cost-of-data-breach-report-2022-record-costs-usually-passed-on-to-consumers-long-breach-expenses-make-up-half-of-total-damage/
CIOReview Names Cobalt Iron Among 10 Most Promising IBM Solution Providers 2022

LAWRENCE, Kan.--(BUSINESS WIRE)--Jul 28, 2022--

Cobalt Iron Inc., a leading provider of SaaS-based enterprise data protection, today announced that the company has been deemed one of the 10 Most Promising IBM Solution Providers 2022 by CIOReview Magazine. The annual list of companies is selected by a panel of experts and members of CIOReview Magazine’s editorial board to recognize and promote innovation and entrepreneurship. A technology partner for IBM, Cobalt Iron earned the distinction based on its Compass ® enterprise SaaS backup platform for monitoring, managing, provisioning, and securing the entire enterprise backup landscape.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20220728005043/en/

Cobalt Iron Compass® is a SaaS-based data protection platform leveraging strong IBM technologies for delivering a secure, modernized approach to data protection. (Graphic: Business Wire)

According to CIOReview, “Cobalt Iron has built a patented cyber-resilience technology in a SaaS model to alleviate the complexities of managing large, multivendor setups, providing an effectual humanless backup experience. This SaaS-based data protection platform, called Compass, leverages strong IBM technologies. For example, IBM Spectrum Protect is embedded into the platform from a data backup and recovery perspective. ... By combining IBM’s technologies and the intellectual property built by Cobalt Iron, the company delivers a secure, modernized approach to data protection, providing a ‘true’ software as a service.”

Through proprietary technology, the Compass data protection platform integrates with, automates, and optimizes best-of-breed technologies, including IBM Spectrum Protect, IBM FlashSystem, IBM Red Hat Linux, IBM Cloud, and IBM Cloud Object Storage. Compass enhances and extends IBM technologies by automating more than 80% of backup infrastructure operations, optimizing the backup landscape through analytics, and securing backup data, making it a valuable addition to IBM’s data protection offerings.

CIOReview also praised Compass for its simple and intuitive interface to display a consolidated view of data backups across an entire organization without logging in to every backup product instance to extract data. The machine learning-enabled platform also automates backup processes and infrastructure, and it uses open APIs to connect with ticket management systems to generate tickets automatically about any backups that need immediate attention.

To ensure the security of data backups, Cobalt Iron has developed an architecture and security feature set called Cyber Shield for 24/7 threat protection, detection, and analysis that improves ransomware responsiveness. Compass is also being enhanced to use several patented techniques that are specific to analytics and ransomware. For example, analytics-based cloud brokering of data protection operations helps enterprises make secure, efficient, and cost-effective use of their cloud infrastructures. Another patented technique — dynamic IT infrastructure optimization in response to cyberthreats — offers unique ransomware analytics and automated optimization that will enable Compass to reconfigure IT infrastructure automatically when it detects cyberthreats, such as a ransomware attack, and dynamically adjust access to backup infrastructure and data to reduce exposure.

Compass is part of IBM’s product portfolio through the IBM Passport Advantage program. Through Passport Advantage, IBM sellers, partners, and distributors around the world can sell Compass under IBM part numbers to any organizations, particularly complex enterprises, that greatly benefit from the automated data protection and anti-ransomware solutions Compass delivers.

CIOReview’s report concludes, “With such innovations, all eyes will be on Cobalt Iron for further advancements in humanless, secure data backup solutions. Cobalt Iron currently focuses on IP protection and continuous R&D to bring about additional cybersecurity-related innovations, promising a more secure future for an enterprise’s data.”

About Cobalt Iron

Cobalt Iron was founded in 2013 to bring about fundamental changes in the world’s approach to secure data protection, and today the company’s Compass ® is the world’s leading SaaS-based enterprise data protection system. Through analytics and automation, Compass enables enterprises to transform and optimize legacy backup solutions into a simple cloud-based architecture with built-in cybersecurity. Processing more than 8 million jobs a month for customers in 44 countries, Compass delivers modern data protection for enterprise customers around the world. www.cobaltiron.com

Product or service names mentioned herein are the trademarks of their respective owners.

Link to Word Doc:www.wallstcom.com/CobaltIron/220728-Cobalt_Iron-CIOReview_Top_IBM_Provider_2022.docx

Photo Link:www.wallstcom.com/CobaltIron/Cobalt_Iron_CIO_Review_Top_IBM_Solution_Provider_Award_Logo.pdf

Photo Caption: Cobalt Iron Compass ® is a SaaS-based data protection platform leveraging strong IBM technologies for delivering a secure, modernized approach to data protection.

Follow Cobalt Iron

https://twitter.com/cobaltiron

https://www.linkedin.com/company/cobalt-iron/

https://www.youtube.com/user/CobaltIronLLC

View source version on businesswire.com:https://www.businesswire.com/news/home/20220728005043/en/

CONTACT: Agency Contact:

Sunny Branson

Wall Street Communications

Tel: +1 801 326 9946

Email:sunny@wallstcom.com

Web:www.wallstcom.comCobalt Iron Contact:

Mary Spurlock

VP of Marketing

Tel: +1 785 979 9461

Email:maspurlock@cobaltiron.com

Web:www.cobaltiron.com

KEYWORD: EUROPE UNITED STATES NORTH AMERICA KANSAS

INDUSTRY KEYWORD: DATA MANAGEMENT SECURITY TECHNOLOGY SOFTWARE NETWORKS INTERNET

SOURCE: Cobalt Iron

Copyright Business Wire 2022.

PUB: 07/28/2022 09:00 AM/DISC: 07/28/2022 09:03 AM

http://www.businesswire.com/news/home/20220728005043/en

Source: The Eagle-Tribune, 28 July 2022. https://www.eagletribune.com/region/cioreview-names-cobalt-iron-among-10-most-promising-ibm-solution-providers-2022/article_56f7dda7-cbd5-586a-9d5f-f882022100da.html
IBM Server VARs Market by Product Type (Reseller, Service Provider, Agent): Sales, Revenue, Manufacturers, Suppliers, Key Players 2022 to 2028


Aug 03, 2022 (Reportmines via Comtex) -- Pre- and post-Covid analysis is covered, and report customization is available.

Market research reports provide insights into the past, present, and forecast of the IBM Server VARs market size. It is now widely accepted that market share is one of the most important determinants of a company's profitability: in most cases, a company with a high market share in the market it serves will be significantly more profitable than a competitor with a smaller share. This report presents estimates for the forecast period 2022 to 2028.

Get demo PDF of IBM Server VARs Market Analysis https://www.reportmines.com/enquiry/request-sample/1648448

The global IBM Server VARs market size is projected to reach multi-million figures by 2028, in comparison to 2021, at an unexpected CAGR during 2022-2028 (ask for a demo report).

This report is 142 pages.

The report provides insights into the market segments. Based on region, the market is segmented into North America (United States, Canada), Europe (Germany, France, U.K., Italy, Russia), Asia-Pacific (China, Japan, South Korea, India, Australia, Taiwan, Indonesia, Thailand, Malaysia), Latin America (Mexico, Brazil, Argentina, Colombia), and Middle East & Africa (Turkey, Saudi Arabia, UAE). The report includes major countries' markets based on type and application. Based on type, the market is segmented into Reseller, Service Provider, and Agent. Based on application, the market is classified into Large Enterprises and SMEs.

The top competitors in the IBM Server VARs Market, as highlighted in the report, are:

  • Deloitte
  • OpenText
  • Cognizant Technology Solutions
  • PCM
  • Salient Process
  • Sea Level Solutions
  • Wipro
  • Infosys
  • Sirius Computer Solutions
  • Accenture
  • CDW Logistics
  • ConvergeOne
  • DATASKILL
  • HCL America Solutions
  • Information Technology Company
  • Insight
  • Integration Management
  • Presidio Networked Solutions
  • QueBIT
  • Softchoice
  • CapGemini
  • Tata Consultancy Services
  • 321Gang
  • 5x Technology
  • Aavitech
  • ABF Systems
  • Acumlus
  • Advanced Computer Concepts
  • Advanced Integrated Solutions Agile Rules Consultants

Purchase this report https://www.reportmines.com/purchase/1648448 (Price 3250 USD for a Single-User License)

Market Segmentation

The worldwide IBM Server VARs market is categorized by type, application, and region.

The IBM Server VARs Market Analysis by types is segmented into:

  • Reseller
  • Service Provider
  • Agent

The IBM Server VARs market industry research by application is categorized as follows:

  • Large Enterprises
  • SMEs

In terms of region, the IBM Server VARs market players are available in the following regions:

  • North America:
    • United States
    • Canada
  • Europe:
    • Germany
    • France
    • U.K.
    • Italy
    • Russia
  • Asia-Pacific:
    • China
    • Japan
    • South Korea
    • India
    • Australia
    • China Taiwan
    • Indonesia
    • Thailand
    • Malaysia
  • Latin America:
    • Mexico
    • Brazil
    • Argentina
    • Colombia
  • Middle East & Africa:
    • Turkey
    • Saudi Arabia
    • UAE
    • Korea

Inquire or Share Your Questions If Any Before the Purchasing This Report https://www.reportmines.com/enquiry/pre-order-enquiry/1648448

Key Benefits for Industry Participants & Stakeholders

This report examines how participants can improve product quality, reduce prices for consumers, maintain or increase market share, and raise the return on shareholders' investment. The IBM Server VARs market research reports help create value with customers and stakeholders by identifying opportunities tied to key market drivers.

The IBM Server VARs market research report contains the following TOC:

  • Report Overview
  • Global Growth Trends
  • Competition Landscape by Key Players
  • Data by Type
  • Data by Application
  • North America Market Analysis
  • Europe Market Analysis
  • Asia-Pacific Market Analysis
  • Latin America Market Analysis
  • Middle East & Africa Market Analysis
  • Key Players Profiles Market Analysis
  • Analysts Viewpoints/Conclusions
  • Appendix

Get a sample of the TOC: https://www.reportmines.com/toc/1648448#tableofcontents

Highlights of The IBM Server VARs Market Report

The IBM Server VARs Market Industry Research Report contains:

  • An analysis of competitive pricing, technically updated products, and other consumer-friendly policies such as easy and installment credit, longer warranties, etc.
  • A thorough analysis of the growth-rate forecast and future investment opportunities.
  • IBM Server VARs market research on the emerging tools, trends, issues, and context necessary for making informed decisions about business and technology.
  • The influence of COVID-19 on the IBM Server VARs market.

COVID-19 Impact Analysis

The COVID-19 pandemic has affected every segment on earth, human and market alike. This market document provides a total market assessment, considering all aspects of COVID-19's impact on the IBM Server VARs market. The major challenge faced by the market is competition during and after the pandemic. Some of the major players active in this market include Deloitte, OpenText, Cognizant Technology Solutions, PCM, Salient Process, Sea Level Solutions, Wipro, Infosys, Sirius Computer Solutions, Accenture, CDW Logistics, ConvergeOne, DATASKILL, HCL America Solutions, Information Technology Company, Insight, Integration Management, Presidio Networked Solutions, QueBIT, Softchoice, CapGemini, Tata Consultancy Services, 321Gang, 5x Technology, Aavitech, ABF Systems, Acumlus, Advanced Computer Concepts, and Advanced Integrated Solutions Agile Rules Consultants.

Get Covid-19 Impact Analysis for IBM Server VARs Market research report https://www.reportmines.com/enquiry/request-covid19/1648448

IBM Server VARs Market Size and Industry Challenges

The IBM Server VARs market research reports describe specific innovation challenges and outline best practices for addressing them. The report contains practical ideas, techniques, and key practices, explaining common obstacles and providing practical solutions.

Reasons to Purchase the IBM Server VARs Market Report

  • Instead of making decisions in the dark, market research informs business decisions and reduces the chances of a plan failing.
  • Segmentation and Scope of the IBM Server VARs Market.
  • It adds credibility to the work you do.
  • It shows that a company's market share is key to its profitability.

Contact Us:

Name: Aniket Tiwari

Email: sales@reportmines.com

Phone: USA:+1 917 267 7384 / IN:+91 777 709 3097

Website: https://www.reportmines.com/

Report Published by: ReportMines

More Reports Published By Us:

IBM Storage VARs Market, Global Outlook and Forecast 2022-2028

Raster Encoder Market, Global Outlook and Forecast 2022-2028

Frequency Conversion Controller Market, Global Outlook and Forecast 2022-2028

Electronic Valve Controller Market, Global Outlook and Forecast 2022-2028

Source: MMG

Press Release Distributed by Lemon PR Wire

To view the original version on Lemon PR Wire visit IBM Server VARs Market Product Type Reseller,Service Provider,Agent Sales, Revenue, Manufacturers, Suppliers, Key Players 2022 to 2028

Tue, 02 Aug 2022 12:00:00 -0500 en-US text/html https://www.marketwatch.com/press-release/ibm-server-vars-market-product-type-resellerservice-provideragent-sales-revenue-manufacturers-suppliers-key-players-2022-to-2028-2022-08-03
Killexams : Containerized Data Center Market Analysis, Research Study With IBM Corporation, Emerson Electric, Cisco Systems

New Jersey, N.J., Aug 03, 2022 The Containerized Data Center Market research report provides all the information related to the industry. It gives an outlook on the market, supplying authentic data that helps clients make essential decisions. It provides an overview of the market, including its definition, applications and developments, and manufacturing technology. The report tracks all the latest developments and innovations in the market, notes the obstacles faced while establishing a business, and offers guidance for overcoming upcoming challenges.

A containerized data center integrates cabinets, refrigeration systems, power distribution cabinets, fire protection systems, security and monitoring, and even data center infrastructure such as UPS units and generators, partially or fully, into a standard shipping container, forming a highly integrated, comprehensive data center.

Get the PDF Sample Copy (including full TOC, graphs, and tables) of this report @:

https://www.a2zmarketresearch.com/sample-request/649742

Competitive landscape:

This Containerized Data Center research report throws light on the major market players thriving in the market; it tracks their business strategies, financial status, and upcoming products.

Some of the top companies influencing this market include: IBM Corporation, Emerson Electric, Cisco Systems, Cirrascale Corporation, Rittal, SGI, Dell, Schneider Electric, Hewlett-Packard, Huawei, Oracle Corporation, Bull SA (Worldline), IO, AIE INFORMATIQUE, Cloud Cube Information Tech, CloudFrame, FuJie Dong, Inspur, ZTE, 21Vianet Group

Market Scenario:

Firstly, this Containerized Data Center research report introduces the market with an overview covering definitions, applications, product launches, developments, challenges, and regions. The market is forecast to show strong development, driven by consumption in various markets. An analysis of current market designs and other basic characteristics is also provided in the report.

Regional Coverage:

The region-wise coverage of the market is mentioned in the report, mainly focusing on the regions:

  • North America
  • South America
  • Asia and Pacific region
  • Middle East and Africa
  • Europe

Segmentation Analysis of the market

The market is segmented on the basis of type, product, end users, raw materials, etc. The segmentation helps to deliver a precise explanation of the market.

Market Segmentation: By Type

  • 20 Feet
  • 53 Feet
  • 41 Feet
  • Custom

Market Segmentation: By Application

  • BFSI
  • IT and Telecoms
  • Government
  • Education
  • Health Care
  • Defence
  • Entertainment and Media
  • Industrial
  • Energy
  • Other

For Any Query or Customization: https://a2zmarketresearch.com/ask-for-customization/649742

An assessment of the market attractiveness with regard to the competition that new players and products are likely to present to older ones has been provided in the publication. The research report also mentions the innovations, new developments, marketing strategies, branding techniques, and products of the key participants present in the global Containerized Data Center market. To present a clear vision of the market the competitive landscape has been thoroughly analyzed utilizing the value chain analysis. The opportunities and threats present in the future for the key market players have also been emphasized in the publication.

This report aims to provide:

  • A qualitative and quantitative analysis of the current trends, dynamics, and estimations from 2022 to 2029.
  • Analysis tools such as SWOT analysis and Porter's five forces analysis are used to explain the bargaining power of buyers and suppliers, helping readers make profit-oriented decisions and strengthen their business.
  • The in-depth analysis of the market segmentation helps to identify the prevailing market opportunities.
  • In the end, this Containerized Data Center report helps to save you time and money by delivering unbiased information under one roof.

Table of Contents

Global Containerized Data Center Market Research Report 2022 – 2029

Chapter 1 Containerized Data Center Market Overview

Chapter 2 Global Economic Impact on Industry

Chapter 3 Global Market Competition by Manufacturers

Chapter 4 Global Production, Revenue (Value) by Region

Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions

Chapter 6 Global Production, Revenue (Value), Price Trend by Type

Chapter 7 Global Market Analysis by Application

Chapter 8 Manufacturing Cost Analysis

Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10 Marketing Strategy Analysis, Distributors/Traders

Chapter 11 Market Effect Factors Analysis

Chapter 12 Global Containerized Data Center Market Forecast

Buy Exclusive Report @: https://www.a2zmarketresearch.com/checkout

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014

[email protected]

+1 775 237 4157

Wed, 03 Aug 2022 00:52:00 -0500 A2Z Market Research en-US text/html https://www.digitaljournal.com/pr/containerized-data-center-market-analysis-research-study-with-ibm-corporation-emerson-electric-cisco-systems