The easiest way to pass the HP2-N46 exam is to download the VCE from Killexams

Work through our HP2-N46 free test PDF and feel confident about the HP2-N46 exam. You will pass the exam with high marks, or you get your money back. We have compiled a database of Selling HP Automation and Cloud Management Software Solutions questions from a real exam question bank to give you the chance to prepare and pass the HP2-N46 exam on your first attempt. Simply set up our exam simulator and get prepared. You will pass the HP2-N46 exam.

Exam Code: HP2-N46 Practice test 2022 by Killexams.com team
Selling HP Automation and Cloud Management Software Solutions
HP Automation learner
Killexams : HP launches HP Anyware for secure remote working

HP Anyware will be available in the coming months. The solution is based on technology from Teradici, which HP acquired last year. HP Anyware will eventually replace HP’s existing zCentral Remote Boost solution.

Teradici is a cornerstone of the upcoming solution. The company provides virtual desktop environments using Cloud Access Software (CAS), allowing companies to remotely host PCs in their on-premises environment and the cloud.

Teradici uses its own PC-over-IP (PCoIP) protocol, which streams the contents of a display rather than the underlying data. Because the traffic travelling over the network is unlike the data exchanged by traditional remote desktop tech, the approach improves security.

HP Anyware is the next release of Teradici’s CAS solution. New functionality includes support for Macs with Arm-based M1 processors. In addition, HP and Teradici optimized the tool for Windows 11.

HP told The Register that HP Anyware will replace zCentral Remote Boost, HP’s existing solution for remote work. HP Anyware will have equivalent functionality by mid-2023, after which zCentral Remote Boost is to be discontinued. Though the older solution will receive security fixes for some time, users will eventually have to migrate to Anyware.

Tip: HPC software company Teradici acquired by HP Inc.

Mon, 25 Jul 2022 22:15:00 -0500 https://www.techzine.eu/news/applications/84203/hp-launches-hp-anyware-for-secure-remote-working/
Killexams : The Secret to Automation? Eat the Elephant in Chunks.

The goal of security automation is to accelerate detection and response, but you’ll waste a lot of time if you try to eat the elephant all at once

One of my favorite phrases when strategizing how to approach a daunting challenge is “eat the elephant in chunks.” Whether you’re talking about running a marathon, going after that big promotion or saving for the future, the most effective and efficient way to achieve a larger goal is by breaking it down into smaller, discrete pieces. The approach is also highly applicable when talking about security automation. 

Security orchestration, automation and response (SOAR) platforms that focus on automating processes are a great example. Organizations were drawn to the promise of SOAR to improve the throughput of analyst work by automatically running a playbook in reaction to an incident or issue without the need for human intervention. SOAR was an important step forward and off to a great start. But over time, organizations started to see the pitfalls of trying to eat the entire elephant all at once instead of in chunks. Here's what I mean.

To run SOAR playbooks, you need to define and document a complex decision tree and then manage and maintain long, unwieldy processes. Engineering work is required to customize playbooks and standardize implementation. Playbooks are executed the same way over and over again, with no regard to the relevance or priority of data being processed. Decision-making criteria and logic are built into the playbooks, so it isn’t possible to adapt with agility to changes in the threat landscape and the environment. Playbooks need to be updated manually—pulling results and new learnings from reports and other sources—which becomes even more difficult and time consuming if the person who created the playbook is no longer with the organization.

Clearly, approaching security automation by trying to eat the entire elephant all at once isn’t effective or efficient. But what happens if, instead, you tackle automation from the standpoint of atomic-level actions (or chunks) that are data-driven and executed directly or from a simple playbook? Let’s look at a couple of use cases.

Spear phishing: An email is received that is targeted at the C-level. With a platform that enables atomic automation, you start with data, which allows for contextualization. If the email has indicators that have a high threat score, you can take immediate action, such as sending those indicators to your endpoint detection and response (EDR) solution for blocking. Or you can look up the indicators in your SIEM to see if there are other events around them. Each atomic action is self-contained and, therefore, simple and quick to define, execute and maintain. You can even put these atomic actions into a straightforward playbook within a few minutes. And because the playbook is data-driven, the actions remain relevant. Bi-directional data flow allows outputs from detection and response to be used as inputs for learning and improvement. If data changes and certain thresholds are hit, additional actions can be set to run automatically.
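To make the contrast with monolithic playbooks concrete, here is a minimal sketch of what such data-driven atomic actions could look like in code. It is illustrative only: the function names, the 80-point threshold, and the client objects are hypothetical placeholders, not ThreatQuotient's or any other vendor's actual API.

```python
# Minimal sketch of data-driven "atomic" automation actions for the
# spear-phishing case described above. All names, the threat-score threshold,
# and the integrations are hypothetical placeholders.

THREAT_SCORE_THRESHOLD = 80  # assumed cut-off for a "high" threat score


def score_indicators(indicators, threat_intel):
    """Look up each indicator in a threat-intelligence source and keep its score."""
    return {ioc: threat_intel.get(ioc, 0) for ioc in indicators}


def block_on_edr(edr_client, indicator):
    """Atomic action: push one high-scoring indicator to the EDR for blocking."""
    edr_client.block(indicator)


def search_siem(siem_client, indicator):
    """Atomic action: query the SIEM for other events involving the indicator."""
    return siem_client.search(f'indicator="{indicator}"')


def handle_phishing_email(indicators, threat_intel, edr_client, siem_client):
    """A simple, data-driven playbook built from self-contained atomic actions."""
    results = {}
    for ioc, score in score_indicators(indicators, threat_intel).items():
        if score >= THREAT_SCORE_THRESHOLD:
            block_on_edr(edr_client, ioc)                 # act only on relevant data
            results[ioc] = search_siem(siem_client, ioc)  # enrich for later review
    return results  # outputs can feed back in as inputs for learning
```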

Event triage: Atomic automation also supports SOC teams that want to streamline how they triage questionable events. Of course, there are cut-and-dried cases where it makes sense to just run a full playbook. But many events aren’t obviously bad, and an analyst may want to review event details before deciding what to do. In this case, once they’ve determined the event is something to address, they can quickly launch atomic actions to the right tools in the SOC. There’s no need to pivot between each separate tool and user interface to execute actions. For example, in a couple of clicks they can block all outbound requests to a bad URL that is hosting malware and launch a scan of all systems that have visited it.

The goal of security automation is to accelerate detection and response, but you’ll waste a lot of time if you try to eat the elephant all at once. With a data-driven approach to automation you can trigger atomic-level actions directly or through simple playbooks to reach that goal faster with greater focus, accuracy and agility. And that’s why you should eat the elephant in chunks.

Marc Solomon is Chief Marketing Officer at ThreatQuotient. He has a strong track record driving growth and building teams for fast growing security companies, resulting in several successful liquidity events. Prior to ThreatQuotient he served as VP of Security Marketing for Cisco following its $2.7 billion acquisition of Sourcefire. While at Sourcefire, Marc served as CMO and SVP of Products. He has also held leadership positions at Fiberlink MaaS360 (acquired by IBM), McAfee (acquired by Intel), Everdream (acquired by Dell), Deloitte Consulting and HP. Marc also serves as an Advisor to a number of technology companies, including Valtix.
Thu, 04 Aug 2022 01:26:00 -0500 https://www.securityweek.com/secret-automation-eat-elephant-chunks
Killexams : IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Partnerships & Use Cases

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts rather than PhDs can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge by sources including industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to things such as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. Spot also monitored connectors on both flat and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s, leading up to our current in-progress fourth revolution, Industry 4.0, which promotes digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speed-up in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production; the quicker the time-to-value and the return-on-investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation means manually examining thousands of images, a time-consuming process that results in labeling of redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance (a minimal sketch of this idea follows this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
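As referenced in the list above, here is a minimal sketch of one way ML-based data summarization can work: cluster the unlabeled pool and send only the most representative sample per cluster for annotation. The feature vectors, labeling budget, and use of k-means are illustrative assumptions, not IBM's actual method.

```python
# Toy illustration of ML-based data summarization for labeling: cluster the
# unlabeled pool and pick the sample closest to each cluster center, so human
# annotators label representative (rather than redundant) data.
import numpy as np
from sklearn.cluster import KMeans


def summarize_for_labeling(features: np.ndarray, budget: int) -> np.ndarray:
    """Return the indices of `budget` representative samples to send for annotation."""
    kmeans = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(features)
    chosen = []
    for center in kmeans.cluster_centers_:
        distances = np.linalg.norm(features - center, axis=1)
        chosen.append(int(np.argmin(distances)))  # the sample nearest this center
    return np.unique(chosen)


# Example: pick 10 images (represented here by random feature vectors) out of 5,000.
pool = np.random.rand(5000, 128)
to_label = summarize_for_labeling(pool, budget=10)
print(to_label)
```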

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
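As a concrete illustration of what such drift monitoring can look like, here is a minimal sketch using the Population Stability Index (PSI), one common statistic for detecting when a feature's production distribution has shifted from its training distribution. The 0.2 alert threshold is a widely used rule of thumb; nothing here reflects IBM's specific tooling.

```python
# Minimal sketch of data-drift monitoring with the Population Stability Index (PSI).
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's training-time distribution to its production distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    exp_pct = np.clip(exp_counts / len(expected), 1e-6, None)  # avoid log(0)
    act_pct = np.clip(act_counts / len(actual), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


training_feature = np.random.normal(0.0, 1.0, 10_000)     # distribution seen at training time
production_feature = np.random.normal(0.5, 1.2, 10_000)   # distribution observed at the edge today

psi = population_stability_index(training_feature, production_feature)
if psi > 0.2:  # common rule-of-thumb alert threshold
    print(f"PSI={psi:.3f}: significant drift detected, consider retraining")
```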

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
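For illustration, the sketch below shows federated averaging (FedAvg), the canonical federated learning scheme, on a toy linear-regression problem: each spoke trains on data that never leaves it, and only model weights travel to the hub to be averaged. This is a conceptual sketch under those assumptions, not the API of IBM Federated Learning or any particular product.

```python
# Toy sketch of federated averaging (FedAvg): spokes train locally on private
# data; the hub only ever sees model weights, never the raw data.
import numpy as np


def local_update(weights, X, y, lr=0.1, epochs=5):
    """One spoke's local training: a few epochs of gradient descent on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


def federated_round(global_weights, spokes):
    """One hub round: each spoke trains locally, then the hub averages the weights."""
    updates = [local_update(global_weights, X, y) for X, y in spokes]
    return np.mean(updates, axis=0)  # weights are shared, raw data is not


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
spokes = []
for _ in range(3):  # three spokes, each holding private local data
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    spokes.append((X, y))

w = np.zeros(2)
for _ in range(20):  # twenty hub aggregation rounds
    w = federated_round(w, spokes)
print("learned weights:", w)  # converges toward [2, -1] without pooling raw data
```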

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The status quo method of performing Day-2 operations uses centralized applications and a centralized data plane; the more efficient managed hub and spoke method uses distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory and compliance, and local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million (a minimal pruning sketch follows this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
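As referenced in item 3 above, here is a toy sketch of one simple compression technique, magnitude-based weight pruning, which zeroes out the smallest weights so a model fits a tighter resource budget. Real pipelines combine this with quantization, distillation and other methods; this is not IBM's specific approach.

```python
# Toy illustration of magnitude-based weight pruning for model compression.
import numpy as np


def prune_by_magnitude(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep only the largest-magnitude fraction of weights; zero out the rest."""
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * keep_ratio))      # number of weights to keep
    cutoff_index = flat.size - k                 # position of the k-th largest value
    threshold = np.partition(flat, cutoff_index)[cutoff_index]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)


layer = np.random.normal(size=(1024, 1024))          # stand-in for one dense layer
pruned = prune_by_magnitude(layer, keep_ratio=0.05)  # keep roughly 5% of parameters

nonzero = np.count_nonzero(pruned)
print(f"parameters kept: {nonzero} of {layer.size} ({nonzero / layer.size:.1%})")
```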

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is the Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for edge locations that still use full servers but call for a single-node, rather than clustered, deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example NVIDIA Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device-type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in terms of how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through capabilities such as software-defined storage for a federated namespace data lake that spans other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That has little value and would simply function as a hub to spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Mon, 08 Aug 2022 03:51:00 -0500 Paul Smith-Goodson https://www.forbes.com/sites/moorinsights/2022/08/08/ibm-research-rolls-out-a-comprehensive-ai-and-ml-edge-research-strategy-anchored-by-enterprise-partnerships-and-use-cases/
Killexams : Ciphr appoints David Burns as chief technology officer

Burns previously held CTO roles at Wejo, Yell and HP Enterprise Services.

UK HR SaaS provider Ciphr has hired former Wejo and HP Enterprise Services executive David Burns as its new CTO.

As the company’s first ever CTO, Burns will oversee Ciphr’s product development and management, leading all technical activities, including development, internal IT, deployment, technical support, and implementation.

Bringing over 30 years’ experience across the wider tech sector, Burns joins from connected vehicle data startup Wejo, where he served as chief technology officer.

Prior to this, he held CTO positions at Key Travel, Yell, CGI (Logica), and HP Enterprise Services.

Burns is the third senior appointment for Ciphr in 2022, following a strong year of growth for the business.

“I’m delighted to join Ciphr as its first ever chief technology officer and work with such a talented team,” said Burns.

“The business has added a lot of functionality to its HCM platform over the past few years, including payroll, talent, and learning, to ensure it can deliver on the increasingly complex requirements of today’s HR and people teams. I’m looking forward to leading Ciphr’s technology strategy to develop and enhance user experience and help our customers get the most from their Ciphr products.”

Chris Berry, founder and CEO of Ciphr, commented: “We are delighted to welcome David to Ciphr and the group’s executive team. He’s an internationally experienced CTO with a career across a range of different service sectors and business models, including listed company and private-equity scenarios, and brings with him a wealth of invaluable skills and experience.

“He’s ideally suited for this newly created role and will be instrumental in ensuring that Ciphr continues to offer an industry-leading user experience across our products as we grow the business.”

About Ciphr

A specialist provider of cloud-based HR, payroll, recruitment and learning software, the Ciphr group offers three people management solutions — Ciphr, Digits LMS and Payroll Business Solutions — globally across the public, private and non-profit sectors.

Backed by ECI Partners, the organisation looks to help HR teams to streamline processes across the entire employee lifecycle, allowing for more time to be spent working strategically — an especially vital endeavour given a common shift to hybrid and remote working since the pandemic.


Mon, 01 Aug 2022 22:22:00 -0500 Aaron Hurst https://www.information-age.com/ciphr-appoints-david-burns-as-chief-technology-officer-123499810/
Killexams : Preparing the workforce for the digital job market

By Brontë H. Lacsamana, Reporter

THE EMPLOYMENT LANDSCAPE has forever changed due to the coronavirus disease 2019 (COVID-19) pandemic, something the new administration must address, especially for a workforce that is still struggling to adapt to the new normal.

In this episode of B-Side, multimedia reporter Brontë H. Lacsamana taps the ideas of Philip A. Gioca, the country manager of online job portal JobStreet Philippines, against a backdrop of digitalization and automation threatening to transform many people’s jobs.

“Early movers and fast movers are becoming the real deal nowadays,” he said, adding that various industries are recognizing the demand for skills.

On June 12, the Department of Labor and Employment (DoLE) offered more than 120,000 jobs nationwide in an Independence Day job fair while JobStreet and the Department of Trade and Industry (DTI) held a virtual career fair from June 13 to 17.

E-commerce platform Shopee started an apprentice program for tech talent in 2021, while the Philippine Business for Education and Citi Foundation launched in 2022 a training program in artificial intelligence and cloud computing, among others.

Though the efforts of such private and public institutions do empower workers, Mr. Gioca noted that it will take a change of mindset for the Philippines to truly adapt to the changing work environment.

“This is the time where employers or companies need to really understand what’s really happening on ground,” he said.

Benefits need to be adjusted for a changing economy.

A recent study by e-commerce website iPrice Group found that the highest-paid entry-level workers in the digital field, such as junior project managers and junior UI/UX (user interface or user experience) designers, can barely afford to rent a one-bedroom apartment in the Philippines’ central business districts (CBDs).

Add all the other stressors: the rising prices of commodities and gasoline, poor public transportation, and expensive internet and healthcare services.

“Benefits have changed to internet subsidy, working freely — meaning flexible in terms of working, in terms of shifts, in terms of timing. (Employees) would also like additional healthcare benefits not just to cover themselves but also the family,” Mr. Gioca said.

Work-from-home and hybrid setups, which have muddled the lines between one’s work space and personal space, also require mental health support.

By understanding what’s worth the while and effort of employees, companies will be more able to attract talent due to an employee-centered work environment, he added.

In its “Southeast Asia: Rising from the Pandemic” report published in March 2022, the Asian Development Bank (ADB) concluded that labor market scarring due to the pandemic worsened an already-wide gap between employee skills and workplace expectations.

Both training programs and social protections are needed to ease this gap, the ADB said.

Human resources (HR) services company Sprout Solutions also found in a survey in June that HR professionals’ wish lists include systems that will automate personnel functions, mental and physical health resources, and allowances for transportation, electricity, and internet.

This “Great Reshuffle” of priorities is not only an opportunity for jobseekers to upgrade skills, but also a chance for employers to re-evaluate the role of teams in the company, according to Mr. Gioca.

He explained that 53% of employees now prefer a remote work setup while 41% would rather move to a more affordable location, such as in the provinces, to save more.

“The big question that’s evolving now is, is my company or my work worth it?” he said.

Future-proofing the workforce to withstand automation.

JobStreet’s 2021 study with Boston Consulting Group on the global talent market found that customer service and administration roles may be obsolete in the next three to five years.

“You need to prepare for contingencies for your employees because, sooner or later, because of digitalization and automation, those roles will diminish,” said Mr. Gioca.

A quarter of jobs in outsourcing and electronics will also be affected, but this will be offset by new roles within those industries, the ADB also said in its report.

In September, leaders from the Philippine business process outsourcing (BPO) industry revealed partnerships with the Department of Information and Communications Technology (DICT) to upskill employees in order to address industry demands.

Mr. Gioca of JobStreet believes the approach should no longer be to focus only on so-called “in demand” industries, since upskilling demand has become unpredictable, ever-evolving, and essential everywhere, whether one is a nurse or in tech.

Upskilling, reskilling, and digital learning must be pushed for the job market to keep up with the times, he said. While industries like information technology (IT), healthcare, and science quickly caught up, others are still struggling.

Jobseekers can also easily access platforms like YouTube, Go1, Coursera, and FutureLearn, due to companies and institutions being aware of the upskilling need.

Online training service Coursera, for example, committed in 2021 to more partnerships with Philippine businesses, schools, and government to build accessible, mobile-friendly learning experiences, after recording 85% year-on-year growth in the Philippines.

As of 2022, 1.5 million Filipinos have registered with Coursera, which offers separate programs for businesses and for campuses.

These massive open online courses (MOOCs) shot up over the pandemic with students and working professionals alike availing of them whether in their free time or as an opportunity presented in the school or workplace, Mr. Gioca said.

“Availability nowadays is not an issue because we have seen in the last two years a proliferation of free online training,” he added.

Don’t forget soft skills.

The Philippines ranked 51st out of 134 economies in the Digital Skills Gap Index 2021 released by multinational publishing group Wiley, based on indicators of how prepared an economy is in the digital skills needed for growth, recovery, and prosperity.

When it comes to what kind of training is needed, hard skills like coding, programming, and updated know-how in technical industries should definitely be improved, but soft skills have become in demand too, according to Mr. Gioca.

Critical thinking and active learning skills, for instance, help build an environment where teams easily learn new technologies and eventually better connect with others online.

“How do you now monitor just at home looking after your teams? How do you problem-solve? These are the things now that are very important,” he said.

Technology company HP Philippines also said in June that hybrid setups mean teams must be well-versed in cybersecurity.

Companies must “upgrade their own hardware and software and brief employees on cyber threats and how to respond to them,” HP Philippines said.

Mr. Gioca added that, without teamwork and critical thinking skills tailored for remote setups, employees might not be equipped to do their best work and face problems.

“You have to reintegrate yourself, your employees, to teams, because they have been individuals, working in an environment.… How do you now communicate and engage them, because you will never see them while they’re working?” he said.

Tue, 26 Jul 2022 03:15:00 -0500 https://www.bworldonline.com/special-reports/2022/07/27/463293/preparing-the-workforce-for-the-digital-job-market/
Killexams : New Oracle Database Platforms And Services Deliver Outstanding Cloud Benefits

Let’s talk about Oracle’s successful and expanding investment in cloud infrastructure. The company just celebrated its 45th anniversary, beat Wall Street’s estimated revenue in its fiscal fourth quarter, and showed its highest organic revenue growth rate in over a decade. The company is clearly doing a lot of things its customers like.

Front-and-center to Oracle’s success is Oracle Cloud Infrastructure (OCI) growth. Over the past year there has been a steady stream of OCI-related announcements. These have included plans to grow from 30 to 44 public cloud regions by the end of 2022 (39 are already in place), smaller Dedicated Region configurations, plans for Sovereign Clouds, new Cloud@Customer offerings, and expansions of OCI’s already impressive portfolio of services. This is perhaps the fastest expansion of cloud services by any service provider, and it helped drive Oracle’s 49% year-over-year IaaS growth and 108% growth in Exadata Cloud@Customer (Q4 FY22 earnings report).

And, if those aren't enough to make you consider OCI for your public cloud, what about the new Oracle Database Service for Microsoft Azure that Larry Ellison and Satya Nadella announced at Microsoft Inspire on July 20th? This new service allows Azure customers to choose where to run Oracle Database for their Azure applications. Azure users can easily set up and use Oracle databases running on optimized OCI infrastructure directly from Azure, without logging into OCI.

The Oracle Database Service for Microsoft Azure is an Oracle-managed service currently available in 11 pairs of OCI and Azure regions worldwide. It uses the existing OCI-Azure Interconnect to offer latency between the two clouds of less than 2 milliseconds over secure, private, high-speed networks. This means that developers and mission-critical applications running on Azure can directly access the performance, availability, and automation advantages of Oracle Autonomous Database Service, Exadata Database Service, and Base Database Service running on OCI.

Oracle’s growth numbers represent a great metric to measure its overall success. However, most IT architects and developers want to understand why Oracle's cloud offerings are better than the likes of Amazon Web Services (AWS) for their Oracle Database workloads.

The answer is simple. While Oracle is undoubtedly a strong competitor when matched head-to-head against nearly every public cloud offering, it offers clear advantages for Oracle Database applications. For example, organizations that use Oracle Database in their on-premises data center can more easily move workloads to OCI because it provides extreme levels of compatibility with on-premises installations and offers organizations the same or greater performance, scale, and availability. You won't find a better example of this than Oracle’s cloud-enabled Exadata X9M platform that’s available natively in OCI or for Azure users through Oracle Database Service for Microsoft Azure.

Last year, Oracle delivered what may be the fastest OLTP database machine with the Exadata X9M. This machine is engineered to do only one thing: run Autonomous Database Service and Exadata Database Service faster and more efficiently than anything else on the market, delivering up to 87% more performance than the previous generation platform.

Wringing every ounce of performance and reliability from a database machine such as Oracle Exadata requires thinking about system architecture from the ground up. It requires a deep knowledge of Oracle Database and the ability to optimize the entire hardware and software stack. This is a job that only Oracle can realistically take on.

Exadata X9M employs a flexible blend of scale-up and scale-out capabilities that support virtually any workload by separately scaling database compute and storage capabilities. Of particular note is how the Exadata X9M provides high performance for both transactional and analytics workloads and efficient database consolidation.

Let’s start with analytics. At the highest level, Exadata X9M enables fast analytics through parallelism and smart storage. Complex queries are automatically broken down into components that are distributed across smart Exadata storage servers. The storage servers then run low-level SQL and machine learning operations against their local data, returning only results to the database servers. This allows applications to use 100s of gigabytes to terabytes per second of throughput—something you won’t find on your typical cloud database.
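To see why this offload model matters, here is a generic toy calculation of predicate pushdown, in which the storage tier filters rows and returns only matches to the database tier. The row size, row count, and selectivity are assumed values chosen for illustration; this sketches the general principle of storage offload, not how Exadata Smart Scan is actually implemented.

```python
# Generic toy model of query offload ("predicate pushdown"): filtering rows at
# the storage tier and returning only matches means far fewer bytes cross the
# interconnect to the database tier. Illustrative only.
import numpy as np

ROW_BYTES = 200          # assumed average row size
ROWS = 10_000_000        # rows held on one storage server
SELECTIVITY = 0.01       # assumed fraction of rows matching the WHERE clause

orders = np.random.rand(ROWS)     # stand-in column used by the predicate
matches = orders < SELECTIVITY    # predicate evaluated at the storage tier

bytes_without_offload = ROWS * ROW_BYTES                 # ship every row, filter at the DB tier
bytes_with_offload = int(matches.sum()) * ROW_BYTES      # ship only matching rows

print(f"without offload: {bytes_without_offload / 1e9:.1f} GB moved")
print(f"with offload:    {bytes_with_offload / 1e9:.2f} GB moved "
      f"({bytes_without_offload / max(bytes_with_offload, 1):.0f}x less data)")
```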

For OLTP, Exadata X9M breaks out some additional secret sauce in the form of scalable database server clusters, persistent memory (PMem) in the smart storage servers, and remote direct memory access over converged Ethernet (RoCE) that links them together. Databases run across hundreds of vCPUs to provide high performance and availability and read data directly from shared PMem on the storage servers. The end result is that Oracle Database achieves SQL read latencies from shared storage of under 19 microseconds, which is more than ten times faster than traditional flash storage.

However, Exadata X9M in OCI doesn’t forego the use of flash memory, it embraces it. Without applications having to do anything, Exadata storage servers automatically move data between terabytes of PMem, tens or hundreds of terabytes of NVMe 4.0 flash, and terabytes to petabytes of disk storage to provide the best performance for different types of workloads. This results in a level of performance that isn’t possible with a traditional on-premises or cloud architecture built using generic servers and storage.

Bringing X9M to the Cloud

There's no question that cloud resources are integral to nearly every enterprise's IT infrastructure. The cloud offers a flexible and scalable consumption model with economics that can be superior to traditional on-premises deployments. While cloud infrastructure can be easily scaled to meet many growing application needs, this is not necessarily true for databases that support mission-critical applications. It's common for organizations to have to refactor applications and redesign databases when they move to the cloud to provide the same levels of performance and availability they had on premises, such as when moving Oracle Database to AWS. However, by deploying Exadata X9M in OCI, Oracle eliminates the expensive and time-consuming need to refactor applications for the cloud.

Oracle Exadata X9M in OCI shines for enterprise applications by delivering an elastic cloud database experience. For example, when running Autonomous Database Service or Exadata Database Service on dedicated X9M infrastructure in OCI, you can use 2 to 32 database servers and 3 to 64 smart storage servers in any combination. This means you can deploy platforms with more database servers for heavy OLTP workloads, more storage servers for data warehouses, or an even mixture of each when consolidating both types of workloads.

You can get the raw numbers for CPUs, storage, and memory for Exadata X9M in OCI from the Oracle website. Still, the critical thing to know is that all configurations deliver the database capabilities that enterprises require. For instance, the “entry” Exadata X9M configuration in OCI supports 19 microsecond SQL Read IO latency, 5.6 M SQL Read IOPS, and 135 GB/second of analytics throughput. Furthermore, with the ability to scale database servers by 16x and storage servers by 21x, we expect that no organizations will run into performance limitations.

Oracle tells us that by putting Exadata X9M into OCI, it now delivers the world's fastest OLTP cloud database performance, and they have the data to back it up. Latency is critical for OLTP workloads, an area where the X9M has no equal. Exadata X9M’s 19 microsecond SQL IO latency is 25x better than when running Oracle Database on AWS Relational Database Service (RDS). The analytics throughput numbers from shared storage are even more impressive, with Oracle claiming that Exadata X9M in OCI delivers up to 384x the analytics throughput of Oracle Database running on AWS RDS.

Oracle has conquered the performance challenges for OLTP and analytics in the cloud and delivers this level of performance with attractive economics. Oracle makes the Exadata X9M for OCI available with a true consumption-based model where you only pay for the size of platform you need and the consumption you use. One key feature of Oracle Autonomous Database running on Exadata X9M is that it can auto-scale consumption by 3x based on the demands of the queries executing at every point in time. This helps you meet peak requirements by scaling up database consumption when needs grow and minimizes costs by scaling it back down later. Oracle cites global customers using these scaling capabilities to economically meet seasonal demands for retail companies and end-of-quarter financial closes for any business.

Analyst Take

Running business workloads in the cloud is popular and continues growing at impressive rates because it solves practical problems for IT practitioners and business users. However, generic cloud infrastructure hasn’t delivered the same level of performance and availability for mission-critical OLTP and analytics workloads that many customers achieved with on-premises platforms.

If your enterprise depends on Oracle Database technology—and 97% of the Global Fortune 100 companies use Oracle Database, with 88% relying on Oracle Exadata for business-critical workloads—you need to seriously consider running your cloud database workloads on Exadata X9M in OCI. Oracle's expanding portfolio of OCI services and delivery platforms, coupled with its unique ability to integrate optimized database platforms like Exadata X9M into OCI redefines what it means to run mission-critical databases in the cloud.

The Exadata X9M is built by the same people who build the Oracle Database, best positioning Oracle to optimize the performance, reliability, and automation required to get the most out of Oracle Database in the cloud. Oracle Exadata X9M is a stellar piece of engineering, bringing together compute and storage in an optimized architecture that delivers levels of throughput and reliability that deserve the superlatives I'm throwing around. And, it's not just me saying it; Oracle's momentum in the cloud bears this out as customers continue to make Exadata their preferred option to run Oracle Database.

When combined with the new Oracle Database Service for Microsoft Azure, Exadata X9M in OCI should cause organizations to rethink strategies focused on using generic cloud infrastructure for critical database applications.


Source: Steve McDowell, Forbes, Mon, 01 Aug 2022 - https://www.forbes.com/sites/moorinsights/2022/08/01/new-oracle-database-platforms-and-services-deliver-outstanding-cloud-benefits/
Killexams : 13 hot chip and semiconductor startups investors are betting on to fill supply-chain gaps and compete with giants like Intel and Nvidia

Nextiles

Nextiles' chip-enabled fitness mat.

Headquarters: New York City

Year founded: 2018

Total funding: $6 million

Valuation: undisclosed

What they do: Nextiles creates fabric-embedded chips to collect fitness and sports data. The firm also has a software platform that allows users to access their data and license it out to third-party companies. 

Why it's a good bet: The firm has gotten interest and investment from major sports players like DraftKings and the NBA, which are betting on embedded chips to bring about smart athletics. There is also potential in cloud-connected health applications and in licensing Nextiles' technology out to sports teams.

Geminus

Greg Fallon, the CEO of Geminus.

Headquarters: Palo Alto, California

Year founded: 2018 

Total funding: $15.6 million

Valuation: $18.30 million

What they do: Geminus creates software that uses AI to simulate how manufacturing systems will behave and operate. The industry uses this technology for creating semiconductors. The software also predicts when systems will degrade, and estimates their potential longevity. 

Why it's a good bet: While the firm isn't exclusively focused on the chip field, new and efficient semiconductor manufacturing processes are in demand amid supply constraints. In June, Lam Capital and the VC arm of SK Telecom, a South Korean telecom firm, led a round in which Geminus raised over $5 million.

Arduino

Arduino CEO, Fabio Violante.

Headquarters: Lugano, Switzerland

Year founded: 2005

Total funding: $33.94 million

Valuation: Undisclosed

What they do: Arduino produces programmable boards for enterprise clients and educators to prototype equipment like chips.  

Why it's a good bet: The firm recently closed a round raising over $30 million, drawing interest and investments from big chip firms like ARM and Bosch. Arduino also is hiring for open positions across the world, including cloud engineers and sales specialists. 

Lightmatter

Nick Harris, the CEO of Lightmatter.

Headquarters: Boston, Massachusetts

Year founded: 2017

Total funding: $113 million

Valuation: $240 million

What they do: An emerging field in semiconductors is photonic computing, also known as optical computing, in which light is used to transmit and process data on a chip. Lightmatter creates photonic-based processors, which are more energy-efficient and faster than traditional modes of processing.

Why it's a good bet: The firm has gotten attention from VCs and big-name firms like HP, Lockheed Martin, and Spark Capital, all of which invested in Lightmatter. The company also just poached Ritesh Jain, Intel's vice president of engineering and a 20-year veteran of the firm who led several key projects like Intel's Aurora supercomputer.

Untether AI

Arun Iyengar, the CEO of Untether, as seen on his LinkedIn page.

Headquarters: Toronto, Ontario

Year founded: 2018

Total funding: $153.52 million

Valuation: Undisclosed

What they do: The Canada-based firm creates chips for AI. Its designs try to bring memory data closer to the processing unit. 

Why it's a good bet: Untether has gotten attention from big firms like Intel and General Motors that have invested in the startup. In April, the Association of Chinese Canadian Entrepreneurs named Untether and Raymond Chik, one of its cofounders, startup of the year for its chip designs. 

Ayar Labs

Charles Wuischpard, the CEO of Ayar Labs.

Headquarters: Santa Clara, California

Year founded: 2015

Total funding: $195.1 million

Valuation: $452 million

What they do: Ayar Labs creates chips using light photonics that help data transfer faster and lower power usage compared with traditional semiconductors.

Why it's a good bet: The firm has attracted the interest of big industry players. For example, Nvidia has announced a collaboration to use Ayar's chips for AI and machine learning. The firm is also collaborating with HP, the business-enterprise firm, to develop chips for data centers. Lockheed Martin, a defense firm, Intel, a chip giant, and VCs like IAG Capital Partners and Alumni Ventures have all invested in the company.

Hailo

Hailo's AI microprocessor.

Headquarters: Tel Aviv, Israel

Year founded: 2017

Total funding: $221.2 million

Valuation: $965 million

What they do: Hailo creates small AI processors known as microprocessors for devices close to where software collects and sends data, known as edge devices. 

Why it's a good bet: The firm has received industry recognition for its chip designs. The Edge AI Vision Alliance, a group made up of industry experts and insiders, named the company's product its 2021 product of the year. Hailo has gotten attention from VCs and investors like NEC Corporation, a semiconductor firm, that have helped propel the company closer to unicorn status.

Menlo Micro

Russ Garcia, the CEO of Menlo Micro, as seen on his LinkedIn page.

Headquarters: Irvine, California

Year founded: 2016

Total funding: $227.7 million

Valuation: $445 million 

What they do: Menlo Micro creates electrical and signal switches that send electrical currents to devices. Customers use the firm's switches for power management — making sure enough power is going where it needs to. The firm's switches are also temperature resistant and buyers can use them for household appliances, chargers, and other electronics.

Why it's a good bet: The firm just closed a $150 million round that Tony Fadell, the cocreator of the iPhone and iPod, and his investment group, Future Shapes, led. Standard Industries and Corning, both building-supply firms, have invested in Menlo Micro as well.

Tenstorrent

Ljubisa Bajic, the CEO and founder of Tenstorrent.

Headquarters: Toronto, Ontario

Year founded: 2016

Total funding: $240 million, according to the firm

Valuation: $1 billion

What they do: Tenstorrent creates application-specific integrated circuits, or chips designed to do one thing. Tenstorrent's chips are used to train machine-learning models and run the resulting algorithms.

Why it's a good bet: The firm has attracted the interest of several well-known VCs like Fidelity Wealth Management and Moore Capital. Earlier this year the firm also poached Matthew Mattina, the head of machine learning at ARM, the semiconductor giant, to join its team.

Ambiq

Scott Hanson, the founder and CTO of Ambiq, as seen on his LinkedIn page.

Headquarters: Austin, Texas

Year founded: 2010

Total funding: $303.6 million

Valuation: $686.8 million

What they do: Ambiq designs integrated circuits, or tiny processors, for wearables, smart cards, and other internet-connected devices that prioritize low power consumption. 

Why it's a good bet: The firm has gotten attention and investment from big-name VCs like Kleiner Perkins and established firms like Cisco and ARM. The company also closed a nearly $200 million Series F earlier this year and is hiring for several open software- and design-engineer positions.

Ampere

Renee James, the CEO and founder of Ampere.

Headquarters: Santa Clara, California

Year founded: 2017

Total funding: $340 million

Valuation: Undisclosed

What they do: Ampere designs microprocessors — tiny computer chips for specific tasks — for servers at cloud-data centers. The firm's chips are designed for clients who need access to memory-intensive programs like AI and automation. 

Why it's a good bet: The founder and CEO, Renee James, the former president of Intel, is planning to take Ampere public later this year or next. The company has gotten attention from big firms like Oracle, which has invested over $400 million in the startup. Ampere has also formed partnerships with both Microsoft and Google to provide processors for their respective cloud services.

Groq

Jonathan Ross, the CEO of Groq, as seen on his LinkedIn page.

Headquarters: Mountain View, California

Year founded: 2016

Total funding: $362.6 million

Valuation: $1.1 billion

What they do: Groq has created a processor architecture with a special focus on AI and machine learning. Unlike traditional processing where each unit has just one function, the firm's tensor-streaming processing allows for units to process multiple functions at once. 

Why it's a good bet: Jonathan Ross, the creator of Google's tensor-processing architecture used for machine learning and AI, leads the firm. Groq has also received attention from VCs like TDX Ventures and Tiger Global, who have pushed its valuation to unicorn status. The company also has several openings for software engineers, product managers, and design engineers. 

Cerebras Systems

Andrew Feldman, the CEO and cofounder of Cerebras.

Headquarters: Sunnyvale, California

Year founded: 2016

Total funding: $723 million

Valuation: $4.3 billion

What they do: Cerebras Systems develops chips for use in artificial intelligence and machine-learning models. The firm creates chips that are the size of entire wafers, the trays that would usually hold several chips. The firm specializes in AI acceleration, meaning its chips try to train AI models to accomplish tasks faster.

Why it's a good bet: AI chips have been an area of focus for VCs, and Cerebras has been setting records and getting attention from enterprise clients. The firm's latest Wafer Scale Engine 2 is the largest processor ever built and set the record for the highest number of AI models trained on a single chip. Its clients include the pharmaceutical companies AstraZeneca and GSK.

Source: Business Insider, Wed, 03 Aug 2022 - https://www.businessinsider.com/13-chip-and-semiconductor-startups-investors-bet-on-2022-7
Killexams : Industrial Automation Market Size is projected to reach at USD 430.9 Billion by 2030, with a CAGR of 9.7%

Acumen Research and Consulting

Acumen Research and Consulting recently published a report titled "Industrial Automation Market Size, Share, Analysis Report and Region Forecast, 2022 - 2030"

TOKYO, Aug. 04, 2022 (GLOBE NEWSWIRE) -- The Global Industrial Automation Market size accounted for USD 189.7 Billion in 2021 and is predicted to be worth USD 430.9 Billion by 2030, with a CAGR of 9.7% during the projected period from 2022 to 2030.

The industrial automation market is expanding quickly as a result of the widespread acceptance of automation technology in industries such as petroleum and natural gas, automotive, manufacturing, petrochemicals and materials, chemicals, and pharmaceuticals. Companies can drastically reduce operational and labor expenses by implementing automation technologies such as sensing devices, robotics, machine vision systems, and enterprise control solutions. Furthermore, the growing use of automation and robotics technologies in the manufacturing and services sectors to meet complicated consumer expectations is expected to propel the expansion of the industrial automation market.

Industrial automation is the integration of all processing systems, machinery, testing facilities, and factories that have become automated as a result of rapid technological advancement. These technologies and platforms are supported by cutting-edge technology such as deep learning, cloud-based services, robotics, and others. Some manufacturers are focused on embracing and implementing industrial automation technologies to boost overall productivity, train their staff, and reduce exorbitant expenses while achieving precision and resilience. Automation helps organizations and manufacturers increase production, improve performance, and reduce mistakes. Furthermore, widespread automation technology in the manufacturing environment, such as software applications and modern instruments, assists in the collection of trustworthy data and statistics that can be utilized to make intelligent decisions, leading to significant cost reductions.

Request For Free sample Report @

https://www.acumenresearchandconsulting.com/request-sample/423

Report Coverage:

Market: Industrial Automation Market

Industrial Automation Market Size 2021: USD 189.7 Billion

Industrial Automation Market Forecast 2030: USD 430.9 Billion

Industrial Automation Market CAGR: 9.7% During 2022 - 2030

Analysis Period: 2018 - 2030

Base Year: 2021

Forecast Data: 2022 - 2030

Segments Covered: By Type, By Technology, By End-User, And By Region

Regional Scope: North America, Europe, Asia Pacific, Latin America, and Middle East & Africa

Key Companies Profiled: Emerson Electric Co., ABB, Siemens, Schneider Electric, Endress Hauser Management AG, Yokogawa India Ltd., Honeywell International Inc., Azbil Corporation, Fuji Electric Co., Ltd, 3D Systems, Inc., HP Development Company, FANUC CORPORATION, Stratasys Ltd., Hitachi, Ltd., and Rockwell Automation, Inc.

Report Coverage: Market Trends, Drivers, Restraints, Competitive Analysis, Player Profiling, Regulation Analysis

Global Industrial Automation Market Dynamics

The increasing demand for quality real-time data analysis across territories, as well as the rising use of cutting-edge technologies across end-use sectors to improve efficiency and performance, is driving market expansion. The growing necessity for periodic inspection and sophisticated data analysis, which gives firms increased visibility into their manufacturing operations and hence increases productivity, is the primary driver of the industrial automation market. Additionally, an increasing reliance on process automation and capital management systems, which give users better visibility into the state of equipment, is boosting industrial automation market demand. The effective exchange of information among organizational divisions allows for the most efficient conversion of raw materials into finished products, making the structure of connected firms another primary driver of the industrial automation market.

Impact of COVID-19

The coronavirus (COVID-19) global pandemic has significantly affected industry sectors across the globe. In addition to hastening deglobalization in industry, COVID-19 has had a significant negative impact on logistics. Companies have faced challenges ranging from sourcing raw materials to distributing final products to retaining workers under prevention measures. As a consequence of this circumstance, automation has been presented as the only viable option. Due to pandemic-related transportation issues, corporations have considered localizing more production for themselves as well as their customers. By improving productivity, new technological advancements are encouraging deglobalization by offsetting higher salaries. As a result, industries all around the world are increasing their expenditure on automation. In recent years, automation has also been implemented in developed countries such as the United States and Germany to boost trade.

Check the detailed table of contents of the report @

https://www.acumenresearchandconsulting.com/table-of-content/industrial-automation-market

Significant growth in manufacturing sectors globally spurs the industrial automation market.

Manufacturing industries are expected to continue to transform in the coming years, with the transition from manual intervention to automated processes boosting market demand. The majority of contemporary industrial processes are already automated to reduce or even eliminate human intervention. Automation solutions are considered essential since traditional industrial processes are unable to fulfill contemporary demands. Furthermore, supportive government regulations in the manufacturing sector, as well as a greater emphasis on socio-economic development in emerging nations, are two significant growth drivers fueling the industrial automation market.

Market Segmentation

The global industrial automation market has been segmented by Acumen Research and Consulting based on type, technology, and end user. By type, the market is separated into programmable automation and fixed automation. By technology, the market is divided into SCADA, DCS, PAC, HMI, and PLC. By end user, the market is classified into machine manufacturing, aerospace & defense, automotive, oil & gas, pharmaceuticals, electronics, and others.

Global Industrial Automation Market Regional Outlook

The global industrial automation market is split into five regions: North America, Latin America, Europe, Asia-Pacific, and the Middle East and Africa. According to the industrial automation market report, the Asia-Pacific is expected to be the prominent region in the worldwide market over the coming years. This expansion can be attributed to the existence of major industry sectors in these regions. Asian countries, especially China, India, and Japan, are the leading manufacturers and end consumers of robotic systems, sensor systems, and computer sensor systems. Additionally, India, Japan, & South Korea have thriving consumer goods, automotive, electronics, as well as pharmaceutical industries. Furthermore, government initiatives and legislation promoting the modernization of manufacturing facilities, as well as expenditures in the IIoT, are important factors that influence the growth of the industrial automation market in these nations.

Buy this premium research report –

https://www.acumenresearchandconsulting.com/buy-now/0/423

Industrial Automation Market Players

Some of the prominent industrial automation market companies are ABB, Siemens, Endress Hauser Management AG, Hitachi, Ltd., Honeywell International Inc., Schneider Electric, Fuji Electric Co., Ltd, HP Development Company, Stratasys Ltd., Emerson Electric Co., Yokogawa India Ltd., 3D Systems, Inc., FANUC CORPORATION, Azbil Corporation, and Rockwell Automation, Inc.

Browse More Research course on Automation Industry:

The Global Pharmacy Automation Market size accounted for USD 5,083 Million in 2021 and is expected to reach USD 10,402 Million by 2030 with a considerable CAGR of 8.6% during the forecast timeframe of 2022 to 2030.

The Global Oil & Gas Automation Market accounted for USD 18,979 Million in 2021 and is estimated to reach USD 33,336 Million by 2030, with a significant CAGR of 6.7% from 2022 to 2030.

The Global Warehouse Automation Market accounted for USD 18,937 Million in 2021 and is estimated to reach USD 64,639 Million by 2030, with a significant CAGR of 14.8% from 2022 to 2030.

About Acumen Research and Consulting:

Acumen Research and Consulting is a global provider of market intelligence and consulting services to information technology, investment, telecommunication, manufacturing, and consumer technology markets. ARC helps investment communities, IT professionals, and business executives to make fact-based decisions on technology purchases and develop firm growth strategies to sustain market competition. With the team size of 100+ Analysts and collective industry experience of more than 200 years, Acumen Research and Consulting assures to deliver a combination of industry knowledge along with global and country level expertise.

For Latest Update Follow Us on Twitter and, LinkedIn

Contact Us:

Mr. Richard Johnson

Acumen Research and Consulting

USA: +13474743864

India: +918983225533

E-mail: sales@acumenresearchandconsulting.com

Source: Yahoo Finance, Thu, 04 Aug 2022 - https://nz.finance.yahoo.com/news/industrial-automation-market-size-projected-230000018.html
Killexams : At the edge, nobody can hear your IoT devices scream

Sponsored Feature If you've ever wondered what edge computing looks like in action, you could do worse than study the orbiting multi-dimensional challenge that is the multi-agency International Space Station (ISS).

It's not exactly news that communication and computing are difficult in a physically isolated environment circling 400km above the earth, but every year scientists keep giving it new and more complex scientific tasks to justify its existence. This quickly becomes a big challenge. Latency is always high and the data from sensors can take minutes to reach earth, slowing decision making on any task to a crawl.

It's why the ISS has been designed with enough computing power onboard to survive these time lags and operate in isolation, complete with the processing and machine learning power to crunch data onboard. This is edge computing at its most daring, dangerous, and scientifically important. Although the ISS might sound like an extreme example, it is by no means alone. The problem of having enough computing power in the right place is becoming fundamental to a growing number of organizations, affecting everything from manufacturing to utilities and cities.

The idea that the edge matters is based on the simple observation that the only way to maintain performance, management and security in modern networking is to move applications and services closer to the problem, away from a notional data center. Where in traditional networks computing power is in centralized data centers, under edge computing the processing and applications move to multiple locations close to users and where data is generated. The datacenter still exists but becomes only one part of a much larger distributed system working as a single entity.

The model sounds simple enough, but it comes with a catch – moving processing power to the edge must be achieved without losing the centralized management and control on which security and compliance depends.

"Whatever organizations are doing, they want the data and service to be closer to the customer or problem that needs solving," says Ian Hood, chief strategist at Red Hat. Red Hat's Enterprise Linux and Red Hat's OpenShift ContainerPlatform local container platform is used by the ISS to support the small, highly portable cross-platform applications running on the onboard HP Spaceborne Computer-2.

"It's about creating a better service by processing the data at the edge rather than waiting for it to be centralized in the datacenter or public cloud.," continues Hood. The edge is being promoted as the solution for service providers and enterprises, but he believes that it's in industrial applications that the concept is having the biggest immediate impact.

"This sector has a lot of proprietary IoT and industrial automation at the edge but it's not very easy for them to manage. Now they're evolving the application they got from equipment makers such as ABB, Bosch, or Siemens to run on a mainstream compute platform."

Hood calls this the industrial 'device edge', an incarnation of edge computing in which large numbers of devices are connected directly to local computing resources rather than having to backhaul traffic to distant datacenters. In Red Hat's OpenShift architecture, this is accommodated in three configurations depending on the amount of compute power and resilience needed:

-          A three-node RHEL 'compact' cluster comprising three servers that act as both control plane and worker nodes. Designed for high availability and sites that might have intermittent or low bandwidth.

-          Single node edge server, the same technology but scaled down to a single server which can keep running even if connectivity fails.

-          Remote worker topology featuring a control plane at a regional datacenter with worker nodes across edge sites. Best suited for environments with stable connectivity; three-node clusters can also be deployed as the control plane in this configuration.

The common thread in all of these is that customers end up with a Kubernetes infrastructure that distributes application clusters to as many edge environments as they desire.
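As a rough illustration of what a single Kubernetes estate spanning many edge sites looks like to an operator, the sketch below uses the official Python kubernetes client to list nodes and group them by a site label. The label name (topology.example.com/site) and the use of a local kubeconfig are assumptions for the example, not OpenShift-specific conventions.

```python
# A minimal sketch, assuming cluster API access via a local kubeconfig and a
# hypothetical "topology.example.com/site" label applied to edge nodes.
from collections import defaultdict
from kubernetes import client, config

SITE_LABEL = "topology.example.com/site"   # assumed label; adjust to your own convention

def nodes_by_site():
    config.load_kube_config()              # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    sites = defaultdict(list)
    for node in v1.list_node().items:
        site = (node.metadata.labels or {}).get(SITE_LABEL, "unlabelled")
        ready = any(c.type == "Ready" and c.status == "True"
                    for c in node.status.conditions)
        sites[site].append((node.metadata.name, "Ready" if ready else "NotReady"))
    return sites

if __name__ == "__main__":
    for site, nodes in nodes_by_site().items():
        print(site, nodes)
```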

Beyond the datacenter

Hood says the challenge of edge computing begins with the fact that the devices themselves are exposed on several levels. Because they are located remotely, they are physically vulnerable to tampering and unauthorized access at the deployment site, for example, which could lead to a loss of control and/or downtime.

"Let's say the customer deploys the edge compute in a public area where someone can access it. That means if someone walks away with it, the system must shut itself down and erase itself. These servers are not in a secured datacenter."

Hitherto, system makers have rarely had to think about this dimension beyond the specialized realm of kiosks, point-of-sale systems, and bank ATMs. However, with edge computing and industrial applications, it suddenly becomes a mainstream worry. If something goes wrong, the server is on its own.

As devices that do their job out of sight in remote locations, it's also possible to lose track of their software state. Industrial operational technology teams must be able to verify that servers and devices are receiving the correct, signed system images and updates while ensuring that communication between the devices and the management center is fully encrypted.
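The requirement for signed system images translates into a small amount of code on the device side. The sketch below is a generic illustration using only the Python standard library: it checks an update file against a detached HMAC-SHA256 signature before allowing it to be applied. The file paths and shared-key scheme are assumptions; a production fleet would more likely use asymmetric signatures anchored in a hardware root of trust.

```python
# Generic illustration of verifying a signed update before applying it.
# File paths and the shared-key scheme are hypothetical examples.
import hashlib
import hmac
from pathlib import Path

def verify_update(image_path: str, signature_path: str, key: bytes) -> bool:
    """Return True only if the image's HMAC-SHA256 matches the detached signature."""
    digest = hmac.new(key, Path(image_path).read_bytes(), hashlib.sha256).hexdigest()
    expected = Path(signature_path).read_text().strip()
    # compare_digest avoids leaking timing information during the comparison
    return hmac.compare_digest(digest, expected)

if __name__ == "__main__":
    key = Path("/etc/edge/update.key").read_bytes()            # assumed key location
    if verify_update("/var/updates/os-image.img",
                     "/var/updates/os-image.img.sig", key):
        print("signature OK - safe to apply update")
    else:
        print("signature mismatch - refusing to apply update")
```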

Other potential security risks associated with edge computing are harder to size given that the vulnerability extends to every element of the system. You could call this edge computing's mental block. Admins find themselves migrating from managing a single big problem to a myriad of smaller ones they can't always keep their eye on.

"The risks start in the hardware platform itself. Then you need to consider the operating system and ask whether it's properly secured. Finally, you must make sure the application code you are using has come from a secure registry where it has been vetted or from a secure third party using the same process."

The biggest worry is simply that the proliferation of devices makes it more likely that an edge device will be misconfigured or left unpatched, which punches small holes in the network. An employee could configure containers with too many privileges or root access, or allow unrestricted communication between different containers, for example.
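One way to catch that kind of misconfiguration is to audit running workloads programmatically. The following sketch uses the Python kubernetes client to flag containers that request privileged mode or explicitly run as root; it is a simple illustration, not a substitute for admission control or OpenShift's built-in security context constraints.

```python
# A minimal audit sketch: flag pods whose containers ask for privileged mode
# or explicitly run as UID 0. Assumes kubeconfig access to the cluster.
from kubernetes import client, config

def find_risky_containers():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    findings = []
    for pod in v1.list_pod_for_all_namespaces().items:
        for c in pod.spec.containers:
            sc = c.security_context
            if sc is None:
                continue
            privileged = bool(sc.privileged)
            runs_as_root = sc.run_as_user == 0
            if privileged or runs_as_root:
                findings.append((pod.metadata.namespace, pod.metadata.name, c.name,
                                 "privileged" if privileged else "runs as root"))
    return findings

if __name__ == "__main__":
    for ns, pod, container, reason in find_risky_containers():
        print(f"{ns}/{pod} container={container}: {reason}")
```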

"Today, most customers still rely on multiple management platforms and proprietary systems. This forces them to use multiple tools and automation to set up edge servers."

Red Hat's answer to this issue is the Ansible Automation Platform, which makes it possible to build repeatable processes across all environments, including the central cloud or datacenter or edge devices. This unified approach benefits every aspect of the way edge servers and devices are managed from setup and provisioning of the OS to patching, compliance routines and security policies. It's hard to imagine how industrial edge computing could work without such a platform but Hood says that organizations today often take a DIY approach.

"If they're not using a tool like Ansible, they'll revert to scripts, hands on keyboards, and multiple OS management systems. And different departments within an organization own different parts of this, for example the division between the IT side and the operations people looking after the industrial systems."

For Hood, migrating to an edge computing model is about choosing a single, consistent application development and deployment platform that ticks every box, from the software and firmware stack managed by the OS to the applications, communication, and deployment systems built on top of it.

"The approach organizations need to take whether they use Red Hat OpenShift or not is that the deployment of infrastructure needs to be a software-driven process that doesn't require a person to configure it. If it's not OpenShift you'll likely find that it's a proprietary solution to this problem."

The Swiss Federal Railways IoT Network

Another Red Hat implementation Hood cites involves a partnership with Swiss Federal Railways (SBB), a transport company deploying a growing family of digital services for its 1.25 million daily passengers and its world-famous timetable where no train must ever run late. Connected components include onboard technology such as LED information displays, seat booking technology, Wi-Fi access points, and CCTV and collision detection systems for safety monitoring.

This large, complex network of devices comprises multiple proprietary interfaces and management routines. Latency quickly became an issue as did the manual management workload of looking after numerous sensors and devices for a workforce which already has its hands full with trains, signaling and tracks.

Instead, SBB turned to Red Hat's Ansible automation which has allowed the service to manage IoT devices and edge servers centrally without having to send technicians to visit each train and edge server one at a time. Through Ansible, SBB was also able to get on top of the problem of exposing too many SSH keys and passwords to employees by centralizing these credentials for automated use. What SBB couldn't contemplate, says Hood, was lowering its management overhead at the expense of making the security infrastructure more cumbersome and potentially less secure.

In Hood's view, SBB demonstrates that it's possible for a company with a complex device base to embrace edge computing without inadvertently creating a new level of vulnerability for itself on top of the problems of everyday cybersecurity defense.  Observes Hood:

"Edge computing is just another place for attackers to go. If you leave the door open someone is guaranteed to walk through it eventually."

Learn more about Red Hat's approach to edge computing and security here.

Sponsored by Red Hat.

Source: The Register, Fri, 22 Jul 2022 - https://www.theregister.com/2022/07/22/at_the_edge_nobody_can/
Killexams : Global Neuromorphic Computing Market is Predicted an Elevation Up to USD 7500 Million By 2027 with a Growing CAGR of 50% | Infinium Global Research

The MarketWatch News Department was not involved in the creation of this content.

Aug 04, 2022 (Heraldkeepers) -- The Neuromorphic Computing Market research report covers global and regional markets with an in-depth analysis of the overall market growth prospects. It also sheds light on the comprehensive competitive landscape of the global market, with a forecast period of 2021 to 2027.

The report further provides a dashboard overview of the key players, covering successful marketing strategies, market contribution, and latest developments in historical and current contexts, along with the forecast period of 2021 to 2027. The neuromorphic computing market was valued at around USD 1,980 million in 2021 and is expected to reach over USD 7,500 million by 2027, growing at a CAGR of around 50% during the projected period.

Get a sample Copy of the Report: https://www.infiniumglobalresearch.com/reports/sample-request/149

Increase in Demand for Artificial Intelligence and Machine Learning is Expected to Boost Market Growth

An increase in demand for artificial intelligence and machine learning is expected to boost the market growth. The increasing need for better performing ICs, rising demand for machine learning tools and solutions, high demand for cognitive and brain robots, increasing demand for real-time analytics coupled with high adoption of advanced automated technology from numerous sectors such as telecommunication, manufacturing, retail, and logistics are also anticipated to act as major growth drivers for the neuromorphic computing market during the forecast period.

Moreover, the emerging applications pertaining to automation and increasing adoption of neuromorphic computing for security purposes will further boost the growth of the market in the near future. However, the lack of R&D and investments slowing down the development of real-world applications together with the shortage of knowledge regarding neuromorphic computing are acting as market limitations for neuromorphic computing during the forecast period.

This report focuses on Neuromorphic Computing market status, future forecast, growth opportunities, key markets, and key players. The report studies various parameters, such as raw materials, cost and technology, and consumer preferences. It also provides important market credentials such as history, various spreads and trends, an overview of the trade, regional markets, and market competitors. It covers capital, revenue, and pricing analysis by business, along with other sections such as plans, support areas, products offered by major manufacturers, alliances, acquisitions, and headquarters locations.

To understand how the Impact of Covid-19 is covered in this Report:

The complete profiles of the companies are included, covering capacity, production, price, revenue, cost, gross margin, sales volume, consumption, growth rate, imports, exports, supply, future strategies, and the technological developments they are making. The report also includes historical Neuromorphic Computing market data along with forecast data through 2027.

Major players are included in the Neuromorphic Computing market report. They are: Intel Corp., IBM Corporation, BrainChip Holdings Ltd., Qualcomm, HP Enterprise, Samsung Electronics Ltd., HRL Laboratories, LLC, Bit Brain Technologies, Nextmind SRL, and Ceryx Medical.

Need Assistance? Send an Enquiry@ https://www.infiniumglobalresearch.com/reports/enquiry/149

The Neuromorphic Computing market is segmented by component (hardware and software), deployment (edge and cloud computing), application (signal, data, and image processing, and object detection), and end-user (consumer electronics, automotive, healthcare, and military & defense).

Geographically, this report is segmented into several key regions, with sales, revenue, market share, and growth rate of Neuromorphic Computing in those regions from 2021 to 2027:

  • North America (US, Canada, and Mexico)
  • Europe (Germany, UK, France, Italy, Russia, Turkey, etc.)
  • Asia Pacific (China, Japan, Korea, India, Australia, Indonesia, Thailand, Philippines, Malaysia, and Vietnam)
  • South America (Brazil, Argentina, Colombia, etc.)
  • The Middle East and Africa (Saudi Arabia, United Arab Emirates, Egypt, Nigeria, and South Africa)

-Market Landscape: Here, competition in the Neuromorphic Computing market is analyzed by value, revenue, sales, and company market share, along with the competitive landscape and the latest patterns, integrations, expansions, and acquisitions of the industry's leading organizations.

-Manufacturers Profiles: Here, the leading players in the Neuromorphic Computing market are profiled by regions served, major products, net margin, revenue, cost, and production.

-Market Status and Outlook by Region: In this segment, the report studies market size, net margin, sales, revenue, production, overall industry share, and CAGR by region. Here, the Neuromorphic Computing market is studied in depth across regions and countries such as North America, Europe, China, India, Japan, and MEA.

-Market Outlook - Production Side: In this part of the report, the authors focus on production and production value forecasts by type, along with estimates for key manufacturers.

-Results and Conclusions of the Research: It is one of the last parts of the report where the researcher’s findings and the conclusion of the exploratory study are presented.

Enquire Here Get Customization & Check Discount for Report @ https://www.infiniumglobalresearch.com/reports/customization/149

Key Stakeholders

- Raw Material Suppliers

- Distributors/Traders/Wholesalers/Suppliers

- Regulatory Agencies, including Government Agencies and NGOs

- Research and Development (R&D) Trade Agencies

- Imports and Exports, Government Agencies, Research Agencies, and Companies Consultants

- Trade associations and industry groups.

- End-use industries

The Study Objectives of this Report are:

To analyze the Neuromorphic Computing Industry status, future forecast, growth opportunity, key market, and key players.

Present the development of the supply of Neuromorphic Computing market products in the United States, Europe, and China.

Strategically profile key players and comprehensively analyze their development plans and strategies.

To define, describe and forecast the market by product type, market, and key regions.

Table of Content

Chapter – 1 Preface

1.1. Report Description

1.2. Research Methods

1.3. Research Approaches

Chapter – 2 Executive Summary

2.1. Neuromorphic Computing Market Highlights

2.2. Neuromorphic Computing Market Projection

2.3. Neuromorphic Computing Market Regional Highlights

Chapter – 3 Global Neuromorphic Computing Market Overview

3.1. Introduction

3.2. Market Dynamics

3.2.1. Drivers

3.2.2. Restraints

3.2.3. Opportunities

3.3. Analysis of COVID-19 impact on the Neuromorphic Computing Market

3.4. Porter’s Five Forces Analysis

3.5. IGR-Growth Matrix Analysis

3.6. Value Chain Analysis of Neuromorphic Computing Market

Chapter – 4 Neuromorphic Computing Market Macro Indicator Analysis

Chapter – 5 Global Neuromorphic Computing Market by Component

5.1. Hardware

5.2. Software

Chapter – 6 Global Neuromorphic Computing Market by Deployment

6.1. Edge Computing

6.2. Cloud Computing

Chapter – 7 Global Neuromorphic Computing Market by Application

7.1. Signal Processing

7.2. Data Processing

7.3. Image Processing

7.4. Object Detection

Chapter – 8 Global Neuromorphic Computing Market by End-user

8.1. Consumer Electronics

8.2. Automotive

8.3. Healthcare

8.4. Military & Defense

Chapter – 9 Global Neuromorphic Computing Market by Region 2021-2027

9.1. North America

9.2. Europe

9.3. Asia-Pacific

9.4. RoW

Chapter – 10 Company Profiles and Competitive Landscape

10.1. Competitive Landscape in the Global Neuromorphic Computing Market

10.2. Companies Profiles

10.2.1. Intel Corp.

10.2.2. IBM Corporation

10.2.3. BrainChip Holdings Ltd.

10.2.4. Qualcomm

10.2.5. HP Enterprise

10.2.6. Samsung Electronics Ltd.

10.2.7. HRL Laboratories, LLC

10.2.8. Bit Brain Technologies

10.2.9. Nextmind SRL

10.2.10. Ceryx Medical

Reasons to Buy this Report:

=> Comprehensive analysis of global as well as regional markets for neuromorphic computing.

=> Complete coverage of all the product types and applications segments to analyze the trends, developments, and forecast of market size up to 2027.

=> Comprehensive analysis of the companies operating in this market. The company profile includes an analysis of the product portfolio, revenue, SWOT analysis, and the latest developments of the company.

=> Infinium Global Research- Growth Matrix presents an analysis of the product segments and geographies that market players should focus on to invest, consolidate, expand and/or diversify.




Source: MarketWatch, Thu, 04 Aug 2022 - https://www.marketwatch.com/press-release/global-neuromorphic-computing-market-is-predicted-an-elevation-up-to-usd-7500-million-by-2027-with-a-growing-cagr-of-50-infinium-global-research-2022-08-04