You will surely pass P2170-015 exam with these Questions and Answers

Our certification specialists say that passing the P2170-015 exam with only the course book is genuinely difficult, because the majority of the questions are not covered in the book. You can go to killexams.com and download a 100 percent free P2170-015 VCE to evaluate before you purchase. Register and download your full copy of the P2170-015 exam questions and enjoy the study material.

Exam Code: P2170-015 Practice exam 2022 by Killexams.com team
IBM IOC Intelligent Water Technical Mastery Test v1
IBM Intelligent information
IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed through a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor costs and the available technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, including industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
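
To make the capacity and latency point concrete, here is a toy back-of-the-envelope model; every number in it is an illustrative assumption, not a measurement from IBM or anyone else.

```python
# Toy latency model for the edge-vs-cloud comparison above.
# All numbers are illustrative assumptions, not measurements.
payload_mb = 50.0        # sensor data per inference batch (assumed)
uplink_mbps = 100.0      # site-to-cloud bandwidth (assumed)
network_rtt_s = 0.06     # round trip to the cloud region (assumed)
inference_s = 0.02       # model compute time, similar hardware either way

# Cloud path: ship the data up, compute, ship the answer back.
cloud_s = (payload_mb * 8) / uplink_mbps + network_rtt_s + inference_s
# Edge path: compute where the data is created; nothing crosses the network.
edge_s = inference_s

print(f"cloud: {cloud_s:.2f} s per batch, edge: {edge_s:.2f} s per batch")
```

Under these assumptions the network transfer, not the model, dominates the cloud path, which is exactly the unnecessary load the paragraph above describes.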

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller's first example centered on the quick-service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot's wireless mobility uses self-contained AI/ML that doesn't require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine whether required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. Spot also monitored connectors on both flat and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s; the current, in-progress fourth revolution, Industry 4.0, centers on digital transformation.

Manufacturing is the fastest growing and the largest of IBM's four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using hundreds of AI/ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with retraining models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and the return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance (a sketch of the idea follows this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
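
The article does not describe IBM's actual summarization method, but a common pattern for this kind of labeling reduction is to cluster image embeddings and annotate only a representative sample from each cluster. A minimal sketch under that assumption, using scikit-learn:

```python
# Hypothetical sketch: pick a small, diverse subset of images for labeling by
# clustering their embeddings and keeping the sample nearest each centroid.
# Illustrates the general idea of ML-based data summarization only; this is
# not IBM's implementation.
import numpy as np
from sklearn.cluster import KMeans

def summarize_for_labeling(embeddings: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of up to `budget` representative samples."""
    kmeans = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
    nearest = [
        int(np.argmin(np.linalg.norm(embeddings - center, axis=1)))
        for center in kmeans.cluster_centers_
    ]
    return np.unique(nearest)  # de-duplicate in the rare collision case

# Example: 10,000 image embeddings, but an annotation budget of only 200.
rng = np.random.default_rng(0)
picked = summarize_for_labeling(rng.normal(size=(10_000, 64)), budget=200)
print(f"{picked.size} images selected for annotation")
```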

Maximo Application Suite

IBM's Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
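
As a concrete illustration of the kind of drift check described above (a sketch, not IBM's actual tooling), a two-sample Kolmogorov-Smirnov test can flag when a feature's live distribution at the edge no longer matches what the model saw in training:

```python
# Minimal drift check: compare a training-time feature distribution against
# live edge data with a two-sample KS test. Real Day-2 tooling is far richer.
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """True if the live distribution differs significantly from the reference."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(1)
train_feature = rng.normal(0.0, 1.0, size=50_000)  # seen during training
edge_feature = rng.normal(0.4, 1.0, size=5_000)    # shifted at the edge
if drifted(train_feature, edge_feature):
    print("Drift detected: flag the model for retraining")
```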

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
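
A minimal sketch of the federated averaging idea, using a toy linear model in plain NumPy; IBM Federated Learning's real framework is far richer, and every name below is illustrative:

```python
# Federated averaging (FedAvg) sketch: each spoke trains on its private data,
# only the model weights travel to the hub, and the hub averages them.
import numpy as np

def local_update(w: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """A few gradient steps of linear regression on one spoke's private data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def federated_round(global_w, spokes):
    # Weight each spoke's update by its sample count; raw data never moves.
    updates = [local_update(global_w, X, y) for X, y in spokes]
    sizes = [len(y) for _, y in spokes]
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(2)
true_w = np.array([3.0, -1.0])
spokes = []
for _ in range(4):  # four edge locations, each with private local data
    X = rng.normal(size=(200, 2))
    spokes.append((X, X @ true_w + rng.normal(scale=0.1, size=200)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, spokes)
print("hub model weights:", w.round(2))  # converges toward [3.0, -1.0]
```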

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

Compare the status quo method of performing Day-2 operations, built on centralized applications and a centralized data plane, with the more efficient managed hub and spoke method, which uses distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric, though it should not be thought of in the context of a traditional hub and spoke. IBM's hub provides centralized capabilities to manage clusters, and multiple hubs can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, it is critical to detect when a model's effectiveness has significantly degraded and whether corrective action is needed.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data lifecycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory, and compliance requirements as well as local resource constraints. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Atypical data that is judged worthy of human attention is also identified.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models cannot afford such extravagance. To shrink the edge compute footprint, model compression can reduce the number of parameters, for example from several hundred million to a few million (a minimal sketch follows this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
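
For the compression step in point 3, one widely used technique (an illustrative choice; the article names the goal, not the method) is magnitude pruning, which zeroes the smallest weights so the model fits an edge footprint:

```python
# Magnitude pruning sketch: keep only the largest-magnitude weights.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest `sparsity` fraction of weights (0.99 keeps 1%)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(3)
w = rng.normal(size=300_000)       # scaled-down stand-in for a large model
w_edge = prune_by_magnitude(w, sparsity=0.99)
print(f"kept {np.count_nonzero(w_edge):,} of {w.size:,} weights")
```

Stored in a sparse format, the surviving weights are what would actually ship to the spokes.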

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still run servers, but as a single-node, non-clustered deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example, the NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, providing full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chain. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Unit) and CU (Centralized Unit) from the 4G-era Baseband Unit and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on "edge in" means it can provide infrastructure such as software-defined storage for a federated-namespace data lake that spans other hyperscalers' clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing sit close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM's goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value; the edge would simply function as a spoke operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

What To Expect From IBM’s Q2 Earnings?

Computing behemoth IBM is slated to report its Q2 2022 results on July 18th. We estimate that IBM's revenue will come in at about $15.3 billion for the quarter, marginally ahead of the consensus estimate of about $15.25 billion. We estimate that earnings will stand at close to $2.33 per share, compared to a consensus of $2.29 per share. So, what are some of the trends that are likely to drive IBM's results?

IBM spun off its slow growth and low-margin managed IT services business last year, with the company now focusing on areas such as cloud and artificial intelligence. We expect IBM's core software and services operations to be the key drivers of the growth over Q2. For perspective, over Q1, software revenue rose by 12.3% year-over-year to $5.8 billion, coming in well ahead of consensus. Consulting revenue also surged 13.3% to $4.8 billion. While the infrastructure business, which includes mainframe hardware, saw slow growth over Q1, we expect it to pick up a bit, as IBM launched a new generation of mainframes earlier this quarter. That said, it's likely that IBM will face some currency-related headwinds, with the U.S. dollar appreciating strongly versus other currencies in recent weeks, driven by recession fears and the Fed's relatively aggressive rate hikes.

We think IBM stock could move a bit higher following its Q2 earnings. At the current market price of about $138 per share, IBM stock trades at just about 14x consensus 2022 earnings and about 13x projected 2023 earnings. This makes the stock a reasonably good value pick in a market where investors are increasingly prioritizing earnings and cash flows. Moreover, IBM could also hold up reasonably well in the event of an economic downturn in the United States. Chief Executive Arvind Krishna has indicated that even if global growth cools, spending on information technology will still come in about 4% to 5% ahead of GDP. We value IBM stock at about $153 per share, roughly 10% ahead of the current market price.
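
As a quick check of the multiples cited above, the implied earnings and upside follow directly from the article's own numbers; the per-share EPS figures below are derived arithmetic, not quoted estimates.

```python
# Back-of-the-envelope check of the valuation figures quoted above.
price = 138.0                  # current market price per share
pe_2022, pe_2023 = 14.0, 13.0  # "about 14x" 2022 and "about 13x" 2023 earnings
fair_value = 153.0             # Trefis price estimate

implied_eps_2022 = price / pe_2022  # ~$9.86 implied consensus 2022 EPS
implied_eps_2023 = price / pe_2023  # ~$10.62 implied 2023 EPS
upside = fair_value / price - 1     # ~10.9%, i.e. "roughly 10% ahead"

print(f"implied 2022 EPS: ${implied_eps_2022:.2f}")
print(f"implied 2023 EPS: ${implied_eps_2023:.2f}")
print(f"upside to price estimate: {upside:.1%}")
```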

See our analysis IBM Valuation: Expensive or Cheap for more details on what’s driving our price estimate for IBM. Also, check out the analysis of IBM Revenue for more details on how IBM revenues are trending.

While IBM stock has fared better than the broader market, rising by about 2% year-to-date, the economic outlook looks increasingly uncertain, with the Fed raising rates amid surging inflation. See how low IBM stock can go by comparing its decline in previous market crashes. Here is a performance summary of all stocks in previous market crashes.

What if you’re looking for a more balanced portfolio instead? Our high-quality portfolio and multi-strategy portfolio have beaten the market consistently since the end of 2016.

Invest with Trefis Market Beating Portfolios

See all Trefis Price Estimates

The right and wrong way to use artificial intelligence

For decades, scientists have been giddy and citizens have been fearful of the power of computers. In 1965 Herbert Simon, a Nobel laureate in economics and also a winner of the Turing Award (considered "the Nobel Prize of computing"), predicted that "machines will be capable, within 20 years, of doing any work a man can do." His misplaced faith in computers is hardly unique. Fifty-seven years later, we are still waiting for computers to become our slaves and masters.

Businesses have spent hundreds of billions of dollars on AI moonshots that have crashed and burned. IBM’s “Dr. Watson” was supposed to revolutionize health care and “eradicate cancer.” Eight years later, after burning through $15 billion with no demonstrable successes, IBM fired Dr. Watson.

In 2016 Turing Award Winner Geoffrey Hinton advised that “We should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists.” Six years later, the number of radiologists has gone up, not down. Researchers have spent billions of dollars working on thousands of radiology image-recognition algorithms that are not as good as human radiologists.

What about those self-driving vehicles, promised by many including Elon Musk in his 2016 boast that “I really consider autonomous driving a solved problem. I think we are probably less than two years away.” Six years later, the most advanced self-driving vehicles are arguably Waymos in San Francisco, which only operate between 10 p.m. and 6 a.m. on the least crowded roads and still have accidents and cause traffic tie-ups. They are a long way from successfully operating in downtown traffic during the middle of the day at a required 99.9999% level of proficiency.

The list goes on. Zillow’s house-flipping misadventure lost billions of dollars trying to revolutionize home-buying before they shuttered it. Carvana’s car-flipping gambit still loses billions.

We have argued for years that we should be developing AI that makes people more productive instead of trying to replace people. Computers have wondrous memories, make calculations that are lightning-fast and error-free, and are tireless, but humans have the real-world experience, common sense, wisdom and critical thinking skills that computers lack. Together, they can do more than either could do on their own.

Effective augmentation appears to be finally happening with medical images. A large-scale study just published in Lancet Digital Health is the first to directly compare AI cancer screening when used alone or to assist humans. The software comes from a German startup, Vara, whose AI is already used in more than 25% of Germany’s breast cancer screening centers.

Researchers from Vara, Essen University and the Memorial Sloan Kettering Cancer Center trained the algorithm on more than 367,000 mammograms, and then tested it on 82,851 mammograms that had been held back for that purpose.

In the first strategy, the algorithm was used alone to analyze the 82,851 mammograms. In the second strategy, the algorithm separated the mammograms into three groups: clearly cancer, clearly no cancer, and uncertain. The uncertain mammograms were then sent to board-certified radiologists who were given no information about the AI diagnosis.

Doctors and AI working together turned out to be better than either working alone. The AI pre-screening reduced the number of images the doctors examined by 37% while lowering the false-positive and false-negative rates by about a third compared to AI alone and by 14%-20% compared to doctors alone. Less work and better results!
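
The two-threshold triage the study describes can be captured in a few lines. The probability cutoffs below are invented for illustration; the paper's actual operating points are not reproduced here.

```python
# Sketch of the triage strategy: the model auto-resolves confident cases and
# routes only uncertain mammograms to radiologists. Thresholds are assumed.
def triage(cancer_probability: float, low: float = 0.02, high: float = 0.98) -> str:
    """Route one screening case based on the model's predicted probability."""
    if cancer_probability <= low:
        return "clearly no cancer: auto-report"
    if cancer_probability >= high:
        return "clearly cancer: flag for review"
    return "uncertain: send to board-certified radiologist"

for p in (0.001, 0.50, 0.995):
    print(f"p={p:.3f} -> {triage(p)}")
```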

As machine learning improves, the AI analysis of X-rays will no doubt become more efficient and accurate. There will come a time when AI can be trusted to work alone. However, that time is likely to be decades in the future and attempts to jump directly to that point are dangerous.

We are optimistic that the productivity of many workers can be improved by similar augmentation strategies — not to mention the fact that many of the tasks that computers excel at are dreadful drudgery; e.g., legal research, inventory control and statistical calculations. But far too many attempts to replace humans entirely have not only been an enormous waste of resources but have also undermined the credibility of AI research. The last thing we need is another AI winter where funding dries up, resources are diverted and the tremendous potential of these technologies is put on hold. We are optimistic that the accumulating failures of moonshots and successes of augmentation strategies will change the way that we think about AI.

Funk is an independent technology consultant who previously taught at National University of Singapore, Hitotsubashi and Kobe Universities in Japan, and Penn State, where he taught courses on the economics of new technologies. Smith is the author of ”The AI Delusion” and co-author (with Jay Cordes) of ”The 9 Pitfalls of Data Science” and ”The Phantom Pattern Problem.”

Asia Pacific Artificial Intelligence In Fintech Market Report 2022: Featuring Key Players IBM, Oracle, Google, Microsoft & Others

Dublin, Aug. 09, 2022 (GLOBE NEWSWIRE) -- The "Asia Pacific Artificial Intelligence In Fintech Market Size, Share & Industry Trends Analysis Report By Component (Solutions and Services), By Deployment (On-premise and Cloud), By Application, By Country and Growth Forecast, 2022 - 2028" report has been added to ResearchAndMarkets.com's offering.

The Asia Pacific Artificial Intelligence In Fintech Market is expected to witness market growth of 17.7% CAGR during the forecast period (2022-2028).

Artificial intelligence enhances outcomes by employing approaches derived from human intellect but applied at a scale that is not human. Fintech firms have been transformed in recent years as a result of the computational arms race. Additionally, near-endless volumes of data are elevating AI to unprecedented heights, and smart contracts may simply be a continuation of the current market trend.

In the banking industry, AI is used to look at a person's entire financial health, keep up with real-time changes, and offer tailored advice based on fresh incoming data by examining cash accounts, investment accounts, and credit accounts. Banks and fintech companies have profited from AI and machine learning because they can process large amounts of data on clients. This information is then compared to draw conclusions about what services and products clients want, which has benefited the development of customer relationships.

Hong Kong is a developed metropolis with a high rate of mobile phone use and internet access, providing a solid foundation for the city's fintech ecosystem. As per Invest Hong Kong, the country is home to approximately 600 fintech enterprises and startups. Similarly, 86% of local banks have implemented or plan to implement fintech solutions across all financial services. Consumer fintech adoption in the city was placed in the top five in the world's developed markets. Since 2014, Hong Kong fintech businesses have raised over 1.1 billion dollars in venture funding. Digital payments, securities settlement, wealthtech, electronic Know Your Customer (KYC) and digital identification utilities, insurtech, blockchain, data analytics, and other fintech opportunities abound in Hong Kong.

The HKMA introduced the Fintech Supervisory Sandbox (FSS) in September 2016, allowing banks and their collaborating technology businesses to perform pilot trials of their fintech projects with a small number of consumers without having to meet all of the HKMA's supervisory standards. This arrangement allows banks and tech companies to collect data and user feedback in order to improve their new initiatives, allowing them to deploy new technological solutions faster and for less money. Owing to this government support and huge investment in advanced solutions, the growth of the regional artificial intelligence in fintech market is expected to escalate in the forecast years.

The China market dominated the Asia Pacific Artificial Intelligence In Fintech Market by Country in 2021, and is expected to continue to be a dominant market till 2028; thereby, achieving a market value of $1,908.9 Million by 2028. The Japan market is poised to grow at a CAGR of 17% during (2022-2028). Additionally, The India market is expected to display a CAGR of 18.4% during (2022-2028).

Scope of the Study
Market Segments Covered in the Report:
By Component

  • Solutions

  • Services

By Deployment

  • On-premise

  • Cloud

By Application

  • Business Analytics & Reporting

  • Customer Behavioral Analytics

  • Fraud Detection

  • Virtual Assistant (Chatbots)

  • Quantitative & Asset Management

  • Others

By Country

  • China

  • Japan

  • India

  • South Korea

  • Singapore

  • Malaysia

  • Rest of Asia Pacific

Key Market Players

  • IBM Corporation

  • Oracle Corporation

  • Microsoft Corporation

  • Google LLC

  • Intel Corporation

  • Salesforce.com, Inc.

  • Amazon Web Services, Inc.

  • ComplyAdvantage

  • Amelia US LLC

  • Inbenta Technologies, Inc.

Key Topics Covered:

Chapter 1. Market Scope & Methodology

Chapter 2. Market Overview

Chapter 3. Competition Analysis - Global

Chapter 4. Asia Pacific Artificial Intelligence In Fintech Market by Component

Chapter 5. Asia Pacific Artificial Intelligence In Fintech Market by Deployment

Chapter 6. Asia Pacific Artificial Intelligence In Fintech Market by Application

Chapter 7. Asia Pacific Artificial Intelligence In Fintech Market by Country

Chapter 8. Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/c76s9d

CONTACT: ResearchAndMarkets.com Laura Wood, Senior Press Manager press@researchandmarkets.com For E.S.T Office Hours Call 1-917-300-0470 For U.S./CAN Toll Free Call 1-800-526-8630 For GMT Office Hours Call +353-1-416-8900
History of Artificial Intelligence

Of the myriad technological advances of the 20th and 21st centuries, one of the most influential is undoubtedly artificial intelligence (AI). From search engine algorithms reinventing how we look for information to Amazon’s Alexa in the consumer sector, AI has become a major technology driving the entire tech industry forward into the future.

Whether you’re a burgeoning start-up or an industry titan like Microsoft, there’s probably at least one part of your company working with AI or machine learning. According to a study from Grand View Research, the global AI industry was valued at $93.5 billion in 2021.

AI as a force in the tech industry exploded in prominence in the 2000s and 2010s, but AI has been around in some form or fashion since at least 1950 and arguably stretches back even further than that.

The broad strokes of AI’s history, such as the Turing Test and chess computers, are ingrained in the popular consciousness, but a rich, dense history lives beneath the surface of common knowledge. This article will distill that history and show you AI’s path from mythical idea to world-altering reality.

Also see: Top AI Software 

From Folklore to Fact

While AI is often considered a cutting-edge concept, humans have been imagining artificial intelligences for millenniums, and those imaginings have had a tangible impact on the advancements made in the field today.

Prominent mythological examples include the bronze automaton Talos, protector of the island of Crete from Greece, and the alchemical homunculi of the Renaissance period. Characters like Frankenstein’s Monster, HAL 9000 of 2001: A Space Odyssey, and Skynet from the Terminator franchise are just some of the ways we’ve depicted artificial intelligence in modern fiction.

One of the fictional concepts with the most influence on the history of AI is Isaac Asimov’s Three Laws of Robotics. These laws are frequently referenced when real-world researchers and organizations create their own laws of robotics.

In fact, when the U.K.’s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published its 5 principles for designers, builders and users of robots, it explicitly cited Asimov as a reference point, though stating that Asimov’s Laws “simply don’t work in practice.”

Microsoft CEO Satya Nadella also made mention of Asimov’s Laws when presenting his own laws for AI, calling them “a good, though ultimately inadequate, start.”

Also see: The Future of Artificial Intelligence

Computers, Games, and Alan Turing

As Asimov was writing his Three Laws in the 1940s, researcher William Grey Walter was developing a rudimentary, analogue version of artificial intelligence. Called tortoises or turtles, these tiny robots could detect and react to light and contact with their plastic shells, and they operated without the use of computers.

Later in the 1960s, Johns Hopkins University built their Beast, another computer-less automaton which could navigate the halls of the university via sonar and charge itself at special wall outlets when its battery ran low.

However, artificial intelligence as we know it today would find its progress inextricably linked to that of computer science. Alan Turing’s 1950 paper Computing Machinery and Intelligence, which introduced the famous Turing Test, is still influential today. Many early AI programs were developed to play games, such as Christopher Strachey’s checkers-playing program written for the Ferranti Mark I computer.

The term “artificial intelligence” itself wasn’t codified until 1956’s Dartmouth Workshop, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathan Rochester, where McCarthy coined the name for the burgeoning field.

The Workshop was also where Allen Newell and Herbert A. Simon debuted their Logic Theorist computer program, which was developed with the help of computer programmer Cliff Shaw. Designed to prove mathematical theorems the same way a human mathematician would, Logic Theorist would go on to prove 38 of the first 52 theorems found in the Principia Mathematica. Despite this achievement, the other researchers at the conference “didn’t pay much attention to it,” according to Simon.

Games and mathematics were focal points of early AI because they were easy to apply the “reasoning as search” principle to. Reasoning as search, also called means-ends analysis (MEA), is a problem-solving method that follows three basic steps:

  • Determine the ongoing state of whatever problem you're observing (you're feeling hungry).
  • Identify the end goal (you no longer feel hungry).
  • Decide the actions you need to take to solve the problem (you make a sandwich and eat it).

The rationale of this early forerunner of AI: if the actions did not solve the problem, find a new set of actions to take, and repeat until you've solved the problem.
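
A minimal sketch of means-ends analysis on a toy numeric puzzle; the states and actions are invented for illustration, but the loop mirrors the three steps above:

```python
# Means-ends analysis sketch: repeatedly take the action that most reduces the
# difference between the current state and the goal state.
def means_ends_analysis(start: int, goal: int, actions: dict[str, int],
                        max_steps: int = 50):
    state, plan = start, []
    for _ in range(max_steps):
        if state == goal:          # steps 1-2: compare state with the end goal
            return plan
        best = min(actions, key=lambda a: abs(state + actions[a] - goal))
        state += actions[best]     # step 3: act, then re-evaluate and repeat
        plan.append(best)
    return None                    # no plan found within the step budget

# Toy example: reach 10 from 0 with moves of +3, +5 and -1.
print(means_ends_analysis(0, 10, {"+3": 3, "+5": 5, "-1": -1}))  # ['+5', '+5']
```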

Neural Nets and Natural Languages

With Cold-War-era governments willing to throw money at anything that might give them an advantage over the other side, AI research experienced a burst of funding from organizations like DARPA throughout the '50s and '60s.

This research spawned a number of advances in machine learning. For example, Simon and Newell’s General Problem Solver, while using MEA, would generate heuristics, mental shortcuts which could block off possible problem-solving paths the AI might explore that weren’t likely to arrive at the desired outcome.

Initially proposed in the 1940s, the first artificial neural network was invented in 1958, thanks to funding from the United States Office of Naval Research.

A major focus of researchers in this period was trying to get AI to understand human language. Daniel Bobrow helped pioneer natural language processing with his STUDENT program, which was designed to solve word problems.

In 1966, Joseph Weizenbaum introduced the first chatbot, ELIZA, an act which Internet users the world over are grateful for. Roger Schank’s conceptual dependency theory, which attempted to convert sentences into basic concepts represented as a set of simple keywords, was one of the most influential early developments in AI research.

Also see: Data Analytics Trends 

The First AI Winter

In the 1970s, the pervasive optimism in AI research from the '50s and '60s began to fade. Funding dried up as sky-high promises were dragged to earth by a myriad of real-world issues facing AI research. Chief among them was a limitation in computational power.

As Bruce G. Buchanan explained in an article for AI Magazine: “Early programs were necessarily limited in scope by the size and speed of memory and processors and by the relative clumsiness of the early operating systems and languages.” This period, as funding disappeared and optimism waned, became known as the AI Winter.

The period was marked by setbacks and interdisciplinary disagreements amongst AI researchers. Marvin Minsky and Frank Rosenblatt’s 1969 book Perceptrons discouraged the field of neural networks so thoroughly that very little research was done in the field until the 1980s.

Then, there was the divide between the so-called “neats” and the “scruffys.” The neats favored the use of logic and symbolic reasoning to train and educate their AI. They wanted AI to solve logical problems like mathematical theorems.

John McCarthy introduced the idea of using logic in AI with his 1959 Advice Taker proposal. In addition, the Prolog programming language, developed in 1972 by Alan Colmerauer and Phillipe Roussel, was designed specifically as a logic programming language and still finds use in AI today.

Meanwhile, the scruffys were attempting to get AI to solve problems that required AI to think like a person. In a 1975 paper, Marvin Minsky outlined a common approach used by scruffy researchers, called “frames.”

Frames are a way that both humans and AI can make sense of the world. When you encounter a new person or event, you can draw on memories of similar people and events to deliver you a rough idea of how to proceed, such as when you order food at a new restaurant. You might not know the menu or the people serving you, but you have a general idea of how to place an order based on past experiences in other restaurants.
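
Minsky's paper describes frames abstractly; a toy rendering in code (the restaurant slots are invented for illustration, not drawn from his paper) might look like this:

```python
# Frame sketch: a frame bundles default expectations ("slots") drawn from past
# experience, and a concrete situation overrides only what is actually observed.
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    slots: dict = field(default_factory=dict)  # defaults from experience

    def instantiate(self, **observed) -> dict:
        """Merge defaults with what is actually observed in a new situation."""
        return {**self.slots, **observed}

restaurant = Frame("restaurant", {
    "greeting": "wait to be seated",
    "ordering": "choose from a menu",
    "payment": "pay after eating",
})

# A brand-new restaurant: unknown staff and menu, but the frame still tells
# us roughly how to behave; only the observed difference is overridden.
print(restaurant.instantiate(ordering="order at the counter"))
```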

From Academia to Industry

The 1980s marked a return to enthusiasm for AI. R1, an expert system implemented by the Digital Equipment Corporation in 1982, was saving the company a reported $40 million a year by 1986. The success of R1 proved AI’s viability as a commercial tool and sparked interest from other major companies like DuPont.

On top of that, Japan’s Fifth Generation project, an attempt to create intelligent computers running on Prolog the same way normal computers run on code, sparked further American corporate interest. Not wanting to be outdone, American companies poured funds into AI research.

Taken altogether, this increase in interest and shift to industrial research resulted in the AI industry ballooning to $2 billion in value by 1988. Adjusting for inflation, that's nearly $5 billion in 2022.

Also see: Real Time Data Management Trends

The Second AI Winter

In the 1990s, however, interest began receding in much the same way it had in the ’70s. In 1987, Jack Schwartz, the then-new director of DARPA, effectively eradicated AI funding from the organization, yet already-earmarked funds didn’t dry up until 1993.

The Fifth Generation Project had failed to meet many of its goals after 10 years of development, and as businesses found it cheaper and easier to purchase mass-produced, general-purpose chips and program AI applications into the software, the market for specialized AI hardware, such as LISP machines, collapsed and caused the overall market to shrink.

Additionally, the expert systems that had proven AI’s viability at the beginning of the decade began showing a fatal flaw. As a system stayed in-use, it continually added more rules to operate and needed a larger and larger knowledge base to handle. Eventually, the amount of human staff needed to maintain and update the system’s knowledge base would grow until it became financially untenable to maintain. The combination of these factors and others resulted in the Second AI Winter.

Also see: Top Digital Transformation Companies

Into the New Millennium and the Modern World of AI

The late 1990s and early 2000s showed signs of the coming AI springtime. Some of AI’s oldest goals were finally realized, such as Deep Blue’s 1997 victory over then-chess world champion Gary Kasparov in a landmark moment for AI.

More sophisticated mathematical tools and collaboration with fields like electrical engineering resulted in AI’s transformation into a more logic-oriented scientific discipline, allowing the aforementioned neats to claim victory over their scruffy counterparts. Marvin Minsky, for his part, declared that the field of AI was and had been “brain dead” for the past 30 years in 2003.

Meanwhile, AI found use in a variety of new areas of industry: Google’s search engine algorithm, data mining, and speech recognition just to name a few. New supercomputers and programs would find themselves competing with and even winning against top-tier human opponents, such as IBM’s Watson winning Jeopardy! in 2011 over Ken Jennings, who’d once won 74 episodes of the game show in a row.

One of the most impactful pieces of AI in recent years has been Facebook's algorithms, which can determine what posts you see and when, in an attempt to curate an online experience for the platform's users. Algorithms with similar functions can be found on websites like YouTube and Netflix, where they predict what content viewers want to watch next based on previous history.

The benefits of these algorithms to anyone but these companies’ bottom lines are up for debate, as even former employees have testified before Congress about the dangers it can cause to users.

Sometimes, these innovations weren't even recognized as AI. As Nick Bostrom put it in a 2006 CNN interview: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."

The trend of not calling useful artificial intelligence AI did not last into the 2010s. Now, start-ups and tech mainstays alike scramble to claim their latest product is fueled by AI or machine learning. In some cases, this desire has been so powerful that some will declare their product is AI-powered, even when the AI’s functionality is questionable.

AI has found its way into many people's homes, whether via the aforementioned social media algorithms or virtual assistants like Amazon's Alexa. Through winters and burst bubbles, the field of artificial intelligence has persevered and become a hugely significant part of modern life, and it is likely to grow exponentially in the years ahead.

Killexams : Asia Pacific Artificial Intelligence In Fintech...

Dublin, Aug. 09, 2022 (GLOBE NEWSWIRE) -- The "Asia Pacific Artificial Intelligence In Fintech Market Size, Share & Industry Trends Analysis Report By Component (Solutions and Services), By Deployment (On-premise and Cloud), By Application, By Country and Growth Forecast, 2022 - 2028" report has been added to ResearchAndMarkets.com's offering.

The Asia Pacific Artificial Intelligence In Fintech Market is expected to witness market growth of 17.7% CAGR during the forecast period (2022-2028).

Artificial intelligence enhances outcomes by employing approaches derived from human intellect but applied at a scale beyond human capacity. Fintech firms have been transformed in recent years as a result of the computational arms race. Additionally, near-endless volumes of data are lifting AI to unprecedented heights, and smart contracts may simply be a continuation of the current market trend.

In the banking industry, AI is used to look at a person's entire financial health, keep up with real-time changes, and offer tailored advice based on fresh incoming data by examining cash accounts, investment accounts, and credit accounts. Banks and fintech companies have profited from AI and machine learning because these technologies can process large amounts of client data. This data is then analyzed to draw conclusions about which services and products clients want, which has aided the development of customer relationships.

Hong Kong is a developed metropolis with a high rate of mobile phone use and internet access, providing a solid foundation for the city's fintech ecosystem. As per Invest Hong Kong, the city is home to approximately 600 fintech enterprises and startups. Similarly, 86% of local banks have implemented or plan to implement fintech solutions across all financial services. Consumer fintech adoption in the city ranked in the top five among the world's developed markets. Since 2014, Hong Kong fintech businesses have raised over $1.1 billion in venture funding. Digital payments, securities settlement, wealthtech, electronic Know Your Customer (KYC) and digital identification utilities, insurtech, blockchain, data analytics, and other fintech opportunities abound in Hong Kong.

The HKMA introduced the Fintech Supervisory Sandbox (FSS) in September 2016, allowing banks and their collaborating technology businesses to run pilot trials of their fintech projects with a small number of consumers without having to meet all of the HKMA's supervisory standards. This arrangement lets banks and tech companies collect data and user feedback to improve their new initiatives, allowing them to deploy new technological solutions faster and at lower cost. Owing to this government support and heavy investment in advanced solutions, growth of the regional artificial intelligence in fintech market is expected to accelerate over the forecast period.

The China market dominated the Asia Pacific Artificial Intelligence In Fintech Market by Country in 2021, and is expected to continue to be a dominant market till 2028; thereby, achieving a market value of $1,908.9 Million by 2028. The Japan market is poised to grow at a CAGR of 17% during (2022-2028). Additionally, The India market is expected to display a CAGR of 18.4% during (2022-2028).
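As a quick aside for readers who want to sanity-check projections like these, a compound annual growth rate (CAGR) simply compounds a base value over the forecast window. The minimal Python sketch below is illustrative only: the 17.7% rate is taken from this report's forecast, while the base value of 1.0 is a placeholder.

```python
def project_value(base: float, cagr: float, years: int) -> float:
    """Compound a base market value forward at a constant annual growth rate."""
    return base * (1 + cagr) ** years

# Illustrative: at a 17.7% CAGR, a market multiplies by roughly 2.66x
# over the six-year 2022-2028 forecast window (base of 1.0 is a placeholder).
print(f"{project_value(1.0, 0.177, 6):.2f}x growth over 6 years")
```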

Scope of the Study
Market Segments Covered in the Report:
By Component

  • Solutions
  • Services

By Deployment

  • On-premise
  • Cloud

By Application

  • Business Analytics & Reporting
  • Customer Behavioral Analytics
  • Fraud Detection
  • Virtual Assistant (Chatbots)
  • Quantitative & Asset Management
  • Others

By Country

  • China
  • Japan
  • India
  • South Korea
  • Singapore
  • Malaysia
  • Rest of Asia Pacific

Key Market Players

  • IBM Corporation
  • Oracle Corporation
  • Microsoft Corporation
  • Google LLC
  • Intel Corporation
  • Salesforce.com, Inc.
  • Amazon Web Services, Inc.
  • ComplyAdvantage
  • Amelia US LLC
  • Inbenta Technologies, Inc.

Key Topics Covered:

Chapter 1. Market Scope & Methodology

Chapter 2. Market Overview

Chapter 3. Competition Analysis - Global

Chapter 4. Asia Pacific Artificial Intelligence In Fintech Market by Component

Chapter 5. Asia Pacific Artificial Intelligence In Fintech Market by Deployment

Chapter 6. Asia Pacific Artificial Intelligence In Fintech Market by Application

Chapter 7. Asia Pacific Artificial Intelligence In Fintech Market by Country

Chapter 8. Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/c76s9d

CONTACT: ResearchAndMarkets.com
         Laura Wood, Senior Press Manager
         press@researchandmarkets.com
         For E.S.T Office Hours Call 1-917-300-0470
         For U.S./CAN Toll Free Call 1-800-526-8630
         For GMT Office Hours Call +353-1-416-8900

Killexams : Cybersixgill Delivers the Industry's First End-to-End Vulnerability Exploit Intelligence Solution

Key Takeaways:

  • Cyber intelligence leader Cybersixgill raises the bar on vulnerability management with new Dynamic Vulnerability Exploit (DVE) Intelligence, combining automation, advanced analytics, and rich vulnerability exploit intelligence to address all phases of the Common Vulnerabilities and Exposures (CVE) lifecycle.

  • Cybersixgill's DVE Intelligence features substantial updates to its previous DVE Score to refine vulnerability assessment and prioritization processes by correlating asset exposure and impact severity data with real-time vulnerability and exploit intelligence.

  • DVE Intelligence provides customers with comprehensive context directly related to the probability of exploitation so they can prioritize CVEs in order of urgency and remediate vulnerabilities before they can be exploited and weaponized in attacks.

LAS VEGAS, NV / ACCESSWIRE / August 9, 2022 / Cybersixgill, the leading threat intelligence provider, announced today its new Dynamic Vulnerability Exploit (DVE) Intelligence solution, delivering the cybersecurity industry's first end-to-end intelligence across the entire Common Vulnerabilities and Exposures (CVE) lifecycle.

With a comprehensive set of features and capabilities not offered by other threat intelligence vendors, including automation, adversary technique mapping, and rich vulnerability exploit intelligence, Cybersixgill's DVE Intelligence streamlines vulnerability analysis to help companies reduce risk by accelerating their time to respond.


According to IBM's X-Force Threat Intelligence Index 2022, vulnerability exploitation has become the most common attack vector for cybercriminals, constituting one of the top five cybersecurity risks. To properly address this situation, organizations need to know their vulnerabilities and the level of risk each vulnerability poses to prioritize remediation activities. Additionally, companies must understand how the risk of any trending vulnerability can impact new applications or hardware investments.

Cybersixgill's DVE Intelligence expands on the company's previous DVE score, which indicates the probability of individual exploits targeting specific companies. DVE Intelligence now refines vulnerability assessment and prioritization processes by correlating asset exposure and impact severity data with real-time vulnerability and exploit intelligence, empowering teams with the critical context they need to prioritize CVEs in order of urgency and remediate vulnerabilities - before they can be exploited and weaponized in attacks.
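To make that correlation concrete, the minimal sketch below shows one way a prioritization score could combine exploit likelihood, asset exposure, and impact severity. The field names and weighting are assumptions made for illustration, not Cybersixgill's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class VulnContext:
    cve_id: str
    exploit_probability: float  # 0-1: likelihood of exploitation in the next 90 days
    asset_exposure: float       # 0-1: share of affected assets reachable from the internet
    impact_severity: float      # 0-10: e.g., a CVSS base score

def priority_score(v: VulnContext) -> float:
    # Hypothetical weighting: exploit likelihood dominates, scaled by exposure
    # and normalized severity. Real products use much richer models.
    return 100 * v.exploit_probability * (0.5 + 0.5 * v.asset_exposure) * (v.impact_severity / 10)

vulns = [
    VulnContext("CVE-2021-44228", exploit_probability=0.95, asset_exposure=0.8, impact_severity=10.0),
    VulnContext("CVE-2022-0001", exploit_probability=0.10, asset_exposure=0.2, impact_severity=6.5),
]
for v in sorted(vulns, key=priority_score, reverse=True):
    print(f"{v.cve_id}: priority {priority_score(v):.1f}")
```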

"Given the high volume of attacks using vulnerability exploitation as the initial means of infiltration, companies require vulnerability management solutions that deliver them the data and context they need to understand where their greatest business risks lie fully," said Gabi Reish, Chief Business Development and Product Officer for Cybersixgill. "Our new DVE Intelligence delivers the broadest range of contextual data - from mapping products to relevant CVEs, to assessing relevant MITRE techniques and offering remediation information. The solution will be of tremendous benefit to organizations as they continue to find ways to Improve security efficiencies and minimize business risk."

Expanding Intelligence Across the CVE Lifecycle: an Industry First

DVE Intelligence expands vulnerability and exploits prioritization and management across the entire CVE lifecycle through the industry's most comprehensive and advanced set of features and capabilities, which include:

  1. Attack surface scanning for specific assets, products (CPEs), and CVEs - The DVE interface enables customers to efficiently identify and scope the particular assets, CVEs, and Common Platform Enumeration (CPEs) that pose the most significant risk to their organization.

  2. Automated mapping of products (CPEs) to relevant CVEs - CPE to CVE matching is critical to reducing false positives, allowing teams to focus only on those vulnerabilities that affect their existing IT assets and infrastructures (a toy version of this matching appears in the sketch after this list).

  3. Mapping of CVEs to MITRE ATT&CK framework - By mapping CVEs to MITRE ATT&CK tactics and techniques, DVE Intelligence provides vital insight into the higher-level objectives of the attacker, as well as the likely method and potential impact of exploitation.

  4. Complete intelligence context - DVE Intelligence delivers comprehensive context collected on threat actors and their discourse, exploit kits, and attribution to malware, APT groups, and ransomware. As part of this context, Cybersixgill also provides a real-time score, available within hours of a CVE's first publication, of the likelihood that the vulnerability will be exploited over the next 90 days.

  5. Delivery of remediation instructions - DVE Intelligence continuously monitors vendor sites and MITRE CVE records, presenting comprehensive remediation information, instructions and links directly within the DVE interface, dramatically reducing Mean Time to Remediate.
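As a toy illustration of the CPE-to-CVE matching described in item 2, the sketch below filters a CVE feed down to entries whose affected-product identifiers intersect an asset inventory. The data structures are invented for the example; real feeds such as the NVD use richer CPE matching semantics (version ranges, wildcards, and applicability logic).

```python
# Minimal CPE-to-CVE matching: keep only CVEs whose affected products
# appear in the organization's asset inventory, cutting false positives.
inventory = {
    "cpe:2.3:a:apache:log4j:2.14.1",
    "cpe:2.3:o:linux:linux_kernel:5.10",
}

cve_feed = [
    {"id": "CVE-2021-44228", "affected_cpes": {"cpe:2.3:a:apache:log4j:2.14.1"}},
    {"id": "CVE-2020-1234", "affected_cpes": {"cpe:2.3:a:vendor:legacy_app:1.0"}},
]

relevant = [cve for cve in cve_feed if cve["affected_cpes"] & inventory]
for cve in relevant:
    print(cve["id"], "affects an inventoried asset")
```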

Unlike most vulnerability prioritization technologies, DVE Intelligence is not dependent on external data sources, which can be slow to rate new threats. The solution equips security teams with the real-time intelligence and context necessary to identify and prioritize vulnerabilities that pose the most substantial risks to the organization, resulting in the following benefits:

  • Increases efficiency by focusing on those vulnerabilities that pose the most significant risk to an organization.

  • Reduces business risk by minimizing mean time to respond and remediate with the earliest insights into the likelihood of exploitation.

  • Rationalizes a company's security stack with a single source of truth, presenting all elements of critical, contextual vulnerability and exploit intelligence data in one unified platform solution.

  • Helps companies comply with industry regulations through quantifiable proof of security processes used to address vulnerabilities and minimize risk.

For more information about Cybersixgill's DVE Intelligence, visit http://cybersixgill.com/dveintelligence.

About Cybersixgill

Cybersixgill continuously collects and exposes the earliest possible indications of risk produced by threat actors moments after they surface on the clear, deep, and dark web. This data is processed, correlated, and enriched using automation to create profiles and patterns of threat actors and their peer networks, including the source and context of each threat. Cybersixgill's extensive body of data can be consumed through a range of seamless integrations with your existing security stack, so you can pre-empt threats before they materialize into attacks. The company serves and partners with global enterprises, financial institutions, MSSPs, and government and law enforcement agencies. For more information, visit https://www.cybersixgill.com/ and follow us on Twitter and LinkedIn.

Media Contacts:

Nancy MacGregor
Trier and Company for Cybersixgill
Mobile: US 1-415-309-5188
Email: nancy@triercompany.com

Danielle Ostrovsky
Hi-Touch PR for Cybersixgill
Mobile: US 1-410-302-9459
Email: ostrovsky@hi-touchpr.com

SOURCE: Cybersixgill

View source version on accesswire.com:
https://www.accesswire.com/711363/Cybersixgill-Delivers-the-Industrys-First-End-to-End-Vulnerability-Exploit-Intelligence-Solution

Killexams : Threat Intelligence Market 2022 Report Examines Latest Trends and Key Drivers Supporting Growth till 2030


Aug 08, 2022 (Heraldkeepers) -- The global threat intelligence market is expected to grow more than US$ 16 billion by 2026, at a CAGR of 8.1% during the forecast period.

Global Threat Intelligence market report covers various regions including North America, Europe, Asia Pacific, and Rest of World. The regional Threat Intelligence market is further bifurcated for major countries including U.S., Canada, Germany, UK, France, Italy, China, India, Japan, Brazil, South Africa and others.

Market Research Engine has published a new report titled "Threat Intelligence Market by Solution (Threat Intelligence Platforms, SIEM, IAM, SVM, Risk and Compliance Management, Incident Forensics), Service (Managed, Professional), Deployment Mode, Organization Size, Vertical, and Region – Global Forecast to 2021-2026 – Executive Data Report."

Browse Full Report: https://www.marketresearchengine.com/threat-intelligence-market-size

The global Threat Intelligence market is segmented by Component into Solutions and Services. By Solution, the market is segregated into Threat intelligence platforms, Security information and event management, Log management, Security and vulnerability management, Identity and access management, Risk and compliance management, Incident forensics, and User and entity behavior analytics. By Service, the market is segmented into Managed Services and Professional Services. By Deployment Mode, the market is segregated into On-premises and Cloud. By Organization Size, the market is segmented into Small and Medium-Sized Enterprises and Large Enterprises. By Vertical, the market is segmented into Government and Defense; Banking, Financial Services, and Insurance; IT and Telecom; Healthcare; Retail; Transportation; Energy and Utilities; Manufacturing; Education; and Others.

Reasons to Buy this Report:

  • Gain comprehensive insights on the industry trends
  • Identify industry opportunities and key growth segments
  • Obtain complete market study on the Threat Intelligence market

Research Methodology:

To calculate the market size, the report considers the revenue generated from sales by Threat Intelligence manufacturers, calculated through primary and secondary research.

The market size estimation also considered leading players' revenues as part of triangulation. The key players considered are Symantec (US), IBM (US), FireEye (US), Check Point (US), Trend Micro (Japan), Dell Technologies (US), McAfee (US), LogRhythm (US), LookingGlass Cyber Solutions (US), Proofpoint (US), Kaspersky (Russia), Group-IB (Russia), AlienVault (US), Webroot (US), Digital Shadows (US), Optiv (US), ThreatConnect (US), CrowdStrike (US), Farsight Security (US), Intel 471 (US), Blueliv (Spain), PhishLabs (US), DomainTools (US), Flashpoint (US), and SurfWatch Labs (US), among others operating in the Threat Intelligence market across the globe, identified through secondary research along with a detailed analysis of the top vendors. The market size calculation also includes types of segmentation determined using secondary sources and verified through primary sources.

The Threat Intelligence Market has been segmented as below:

Threat Intelligence Market, By Component

  • Solutions
  • Services

Threat Intelligence Market, By Solution

  • Threat Intelligence Platforms
  • Security Information and Event Management
  • Log Management
  • Security and Vulnerability Management
  • Identity and Access Management
  • Risk and Compliance Management
  • Incident Forensics
  • User and Entity Behavior Analytics

Threat Intelligence Market, By Service

  • Managed Services
    • Advanced Threat Management
    • Security Intelligence Feeds
  • Professional Services
    • Consulting Services
    • Training and Support

Threat Intelligence Market, By Deployment Mode

  • On-Premises
  • Cloud

Threat Intelligence Market, By Organization Size

  • Small and Medium-Sized Enterprises
  • Large Enterprises

Threat Intelligence Market, By Vertical

  • Government and Defense
  • Banking, Financial Services, and Insurance
  • IT and Telecom
  • Healthcare
  • Retail
  • Transportation
  • Energy and Utilities
  • Manufacturing
  • Education
  • Others

Threat Intelligence Market, By Region

  • North America
  • Europe
  • Asia Pacific
  • Rest of World
Report scope:

The global Threat Intelligence market report provides a detailed study of the underlying factors influencing variations in industry growth trends. The report scope includes market analysis at the regional as well as country level.

Request a Sample Report from here: https://www.marketresearchengine.com/threat-intelligence-market-size

Table of Contents:

1. Introduction
1.1. Key Points
1.2. Report Description
1.3. Markets Covered
1.4. Stakeholders
2. Research Methodology
2.1. Research Scope
2.2. Market Research Process
2.3. Research Data Analysis
2.3.1. Secondary Research
2.3.2. Primary Research
2.3.3. Models for Estimation
2.4. Market Size Estimation
2.4.1. Bottom-Up Approach
2.4.2. Top-Down Approach
3. Executive Summary
4. Threat Intelligence Market, By Component
4.1. Key Points
4.2. Solutions
4.2.1. Market Overview
4.2.2. Market Size & Forecast
4.3. Services
4.3.1. Market Overview
4.3.2. Market Size & Forecast
5. Threat Intelligence Market, By Organization Size
5.1. Key Points
5.2. Small and Medium-Sized Enterprises
5.2.1. Market Overview
5.2.2. Market Size & Forecast
5.3. Large Enterprises
5.3.1. Market Overview
5.3.2. Market Size & Forecast
6. Threat Intelligence Market, By Deployment Mode
6.1. Key Points
6.2. On-Premises
6.2.1. Market Overview
6.2.2. Market Size & Forecast
6.3. Cloud
6.3.1. Market Overview
6.3.2. Market Size & Forecast
7. Threat Intelligence Market, By Solution
7.1. Key Points
7.2. Threat Intelligence Platforms
7.2.1. Market Overview
7.2.2. Market Size & Forecast
7.3. Security Information and Event Management
7.3.1. Market Overview
7.3.2. Market Size & Forecast
7.4. Log Management
7.4.1. Market Overview
7.4.2. Market Size & Forecast
7.5. Security and Vulnerability Management
7.5.1. Market Overview
7.5.2. Market Size & Forecast
7.6. User and Entity Behavior Analytics
7.6.1. Market Overview
7.6.2. Market Size & Forecast
7.7. Incident Forensics
7.7.1. Market Overview
7.7.2. Market Size & Forecast
7.8. Risk and Compliance Management
7.8.1. Market Overview
7.8.2. Market Size & Forecast
7.9. Identity and Access Management
7.9.1. Market Overview
7.9.2. Market Size & Forecast
8. Threat Intelligence Market, By Service
8.1. Key Points
8.2. Professional Services
8.2.1. Consulting Services
8.2.2. Training and Support
8.2.2.1. Market Overview
8.2.2.2. Market Size & Forecast
8.3. Managed Services
8.3.1. Advanced Threat Management
8.3.2. Security Intelligence Feeds
8.3.2.1. Market Overview
8.3.2.2. Market Size & Forecast
9. Threat Intelligence Market, By Vertical
9.1. Key Points
9.2. Government and Defense
9.2.1. Market Overview
9.2.2. Market Size & Forecast
9.3. Banking, Financial Services, and Insurance
9.3.1. Market Overview
9.3.2. Market Size & Forecast
9.4. IT and Telecom
9.4.1. Market Overview
9.4.2. Market Size & Forecast
9.5. Healthcare
9.5.1. Market Overview
9.5.2. Market Size & Forecast
9.6. User and Entity Behavior Analytics
9.6.1. Market Overview
9.6.2. Market Size & Forecast
9.7. Retail
9.7.1. Market Overview
9.7.2. Market Size & Forecast
9.8. Transportation
9.8.1. Market Overview
9.8.2. Market Size & Forecast
9.9. Energy and Utilities
9.9.1. Market Overview
9.9.1.1. Market Size & Forecast
9.10. Manufacturing
9.10.1. Market Overview
9.10.2. Market Size & Forecast
9.11. Education
9.11.1. Market Overview
9.11.2. Market Size & Forecast
9.12. Others
9.12.1. Market Overview
9.12.1.1. Market Size & Forecast
10. Threat Intelligence Market, By Region
10.1. North America
10.1.1. North America Threat Intelligence Market, By Component
10.1.2. North America Threat Intelligence Market, By Organization Size
10.1.3. North America Threat Intelligence Market, By Deployment Mode
10.1.4. North America Threat Intelligence Market, By Solution
10.1.5. North America Threat Intelligence Market, By Service
10.1.6. North America Threat Intelligence Market, By Vertical
10.1.7. By Country
10.1.7.1. U.S
10.1.7.2. Canada
10.1.7.3. Mexico
10.2. Europe
10.2.1. Europe Threat Intelligence Market, By Component
10.2.2. Europe Threat Intelligence Market, By Organization Size
10.2.3. Europe Threat Intelligence Market, By Deployment Mode
10.2.4. Europe Threat Intelligence Market, By Solution
10.2.5. Europe Threat Intelligence Market, By Service
10.2.6. Europe Threat Intelligence Market, By Vertical
10.2.7. By Country
10.2.7.1. U.K
10.2.7.2. Germany
10.2.7.3. Italy
10.2.7.4. France
10.2.7.5. Rest of Europe
10.3. Asia Pacific
10.3.1. Asia Pacific Threat Intelligence Market, By Component
10.3.2. Asia Pacific Threat Intelligence Market, By Organization Size
10.3.3. Asia Pacific Threat Intelligence Market, By Deployment Mode
10.3.4. Asia Pacific Threat Intelligence Market, By Solution
10.3.5. Asia Pacific Threat Intelligence Market, By Service
10.3.6. Asia Pacific Threat Intelligence Market, By Vertical
10.3.7. By Country
10.3.7.1. China
10.3.7.2. Australia
10.3.7.3. Japan
10.3.7.4. South Korea
10.3.7.5. India
10.3.7.6. Rest of Asia Pacific
10.4. Rest of World
10.4.1. Rest of World Threat Intelligence Market, By Component
10.4.2. Rest of World Threat Intelligence Market, By Deployment Mode
10.4.3. Rest of World Threat Intelligence Market, By Organization Size
10.4.4. Rest of World Threat Intelligence Market, By Service
10.4.5. Rest of World Threat Intelligence Market, By Solution
10.4.6. Rest of World Threat Intelligence Market, By Vertical

About MarketResearchEngine.com

Market Research Engine is a global market research and consulting organization. We provide market intelligence on emerging, niche technologies and markets. Our market analysis, powered by rigorous methodology and quality metrics, provides information and forecasts across emerging markets, emerging technologies and emerging business models. Our deep focus on industry verticals and country reports helps our clients identify opportunities and develop business strategies.


Media Contact

Company Name: Market Research Engine

Contact Person: John Bay

Email: john@marketresearchengine.com

Phone: +1-855-984-1862

Country: United States

Website: https://www.marketresearchengine.com/

Killexams : Microsoft goes all-in on threat intelligence and launches two new products



Today’s threat landscape is an unforgiving place. With 1,862 publicly disclosed data breaches in 2021, security teams are looking for new ways to work smarter, rather than harder.  

With an ever-growing number of vulnerabilities and sophisticated threat vectors, security professionals are slowly turning to threat intelligence to develop insights into Tactics, Techniques and Procedures (TTPs) and exploits they can use to proactively harden their organization’s defenses against cybercriminals. 

In fact, research shows that the share of organizations with dedicated threat intelligence teams has increased from 41.1% in 2019 to 47.0% in 2022.

Microsoft is one of the key providers capitalizing on this trend. Just over a year ago, it acquired cyberrisk intelligence provider RiskIQ. Today, Microsoft announced the release of two new products: Microsoft Defender Threat Intelligence (MDTI) and Microsoft External Attack Surface Management. 

The former will provide enterprises with access to real-time threat intelligence updated on a daily basis, while the latter scans the internet to discover agentless and unmanaged internet-facing assets to provide a comprehensive view of the attack surface. 

Using threat intelligence to navigate the security landscape  

One of the consequences of living in a data-driven era is that organizations need to rely on third-party apps and services that they have little visibility over. This new attack surface, when combined with the vulnerabilities of the traditional on-site network, is very difficult to manage. 

Threat intelligence helps organizations respond to threats in this environment because it provides a heads-up on the TTPs and exploits that threat actors use to gain entry to enterprise environments.

As Gartner explains, threat intelligence solutions aim “to provide or assist in the curation of information about the identities, motivations, characteristics and methods of threats, commonly referred to as tactics, techniques and procedures (TTPs).” 

Security teams can leverage the insights obtained from threat intelligence to enhance their prevention and detection capabilities, increasing the effectiveness of processes including incident response, threat hunting and vulnerability management. 
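As a simple illustration of how such intelligence feeds detection, the sketch below checks firewall log lines against a set of known-bad IP indicators. The indicator list and log format are invented for the example; real deployments consume structured feeds (e.g., STIX/TAXII) and match far richer indicator types.

```python
# Hypothetical indicators of compromise (IOCs) drawn from a threat feed.
malicious_ips = {"203.0.113.7", "198.51.100.23"}

log_lines = [
    "2022-08-01T12:00:03Z ALLOW src=10.0.0.5 dst=93.184.216.34",
    "2022-08-01T12:00:09Z ALLOW src=10.0.0.8 dst=203.0.113.7",
]

for line in log_lines:
    dst = line.rsplit("dst=", 1)[-1]  # crude parse of the destination field
    if dst in malicious_ips:
        print(f"ALERT: traffic to known-bad host -> {line}")
```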

“MDTI maps the internet every day, forming a picture of every observed entity or resource and how they are connected. This daily analysis means changes in infrastructure and connections can be visualized,” said CVP of security, compliance, identity and privacy, Vasu Jakkal. 

“Adversaries and their toolkits can effectively be ‘fingerprinted’ and the machines, IPs, domains and techniques used to attack targets can be monitored. MDTI possesses thousands of ‘articles’ detailing these threat groups and how they operate, as well as a wealth of historical data,” Jakkal said. 

In short, the organization aims to equip security teams with the insights they need to enhance their security strategies and protect their attack surface across the Microsoft product ecosystem against malware and ransomware threats.

Evaluating the threat intelligence market 

The announcement comes as the global threat intelligence market is steadily growing, with researchers expecting an increase from $11.6 billion in 2021 to reach a total of $15.8 billion by 2026. 

One of Microsoft's main competitors in the space is IBM, with X-Force Exchange, a threat-intelligence sharing platform where security professionals can search or submit files to scan and gain access to the threat intelligence submitted by other users. IBM recently reported revenue of $16.7 billion.

Another competitor is Anomali, with ThreatStream, an AI-powered threat intelligence management platform designed to automatically collect and process data across hundreds of threat sources. Anomali most recently raised $40 million as part of a Series D funding round in 2018.

Other competitors in the market include Palo Alto Networks' WildFire, the ZeroFOX platform, and Mandiant Advantage Threat Intelligence.

Given the widespread adoption of Microsoft devices among enterprise users, the launch of a new threat intelligence service has the potential to help security teams defend against the biggest threats to the provider's product ecosystem.

Killexams : Global Business Intelligence Market: 49% Businesses are Looking to Expand Business Intelligence Application

SkyQuest Technology Consulting Pvt. Ltd.

Global business intelligence (BI) market was valued at USD 24.90 billion in 2021, and it is expected to reach a value of USD 42.13 billion by 2028, at a CAGR of 7.6 % over the forecast period (2022-2028).

Westford, USA, Aug. 08, 2022 (GLOBE NEWSWIRE) -- Business intelligence is one of the most popular and rapidly growing areas of information technology. This growth is being driven by a number of factors, including the need for organizations to make better decisions faster and the increasing popularity of big data. According to SkyQuest's market analysis, demand for BI is broadly driven by businesses' growing focus on improving operational efficiency, enhancing decision making, improving customer interaction, and enabling more rapid product iteration. The capabilities offered by BI tools can help organizations manage data more effectively, optimize processes and uncover insights that would otherwise go undetected. These capabilities can have a major impact on overall business performance and can provide significant cost savings for companies of all sizes across the global business intelligence market.

It has been observed that over 43% of organizations already have or are deploying some form of BI solution. This includes both large and small organizations, as well as those specializing in different industries. In addition, nearly half of respondents in a recent survey said they would deploy more BI applications in the next 12 months to better understand their businesses.

SkyQuest has published a report on the global business intelligence market that covers various aspects such as detailed market analysis, demand, market trends, pricing analysis, leading providers of business intelligence solutions, their market share analysis, value chain, and competitive landscape. The report would help market players understand current market trends, growth opportunities, challenges and threats in the global market. To know more,

Get a sample copy of this report:

https://skyquestt.com/sample-request/business-intelligence-bi-market

AI is Shaping the Future of Business Intelligence Market

In recent years, artificial intelligence (AI) has been gaining more and more traction in the business world. This technology is helping to shape the future of business intelligence (BI), as AI can speed up the process of extracting insights from data and provide actionable insights to managers.

One of the most significant applications of AI in the global business intelligence market is its ability to automate complex analyses and decision-making processes. In some cases, AI can even take on the role of a data analyst, allowing companies to scale their data analysis capabilities without having to hire additional employees. In addition to automating tasks, AI also helps to identify patterns and trends that may otherwise be difficult to see. This can allow businesses to make smarter decisions faster, which can ultimately lead to improved profits.

One of the most popular BI tools is IBM's Watson cognitive system. This tool can be used to create insights from data by analyzing text, images, videos and other sources. IBM has released several versions of Watson over the past few years, each with capabilities that expand beyond what was available in the previous version. Meanwhile, Microsoft's Azure Cognitive Services offers a wide range of capabilities that can be used for BI and data management.

As businesses shift away from manual data analysis and toward more automated systems, BI tools will continue to become increasingly important. Indeed, it seems likely that BI will become even more central to business operations.

Embedded Analytics Getting Popular Among Businesses

Embedded analytics is a technology that allows businesses to collect and analyze data without having to install separate software. As per SkyQuest analysis, this technology can be found in many different forms, such as web search engines, social media monitoring tools, and even embedded sensors in devices.

A large number of businesses are opting for embedded analytics to collect data from a wide range of sources in the global business intelligence market. Not only can embedded analytics be used in internet-based applications, but it can also be used in on-premises applications and even with mobile devices. Collecting data from a variety of sources means that businesses have more information available to them when they are trying to make decisions. As per a recent study by IDC, four in five firms used more than 100 data sources and just under one-third had more than 1,000. Often, this data exists in silos.

In addition to collecting data, embedded analytics can also help businesses analyze the data. This analysis can help businesses determine how best to use the data and which pieces of the data are most important. Additionally, companies can use embedded analytics to predict future trends based on past data. This prediction is often referred to as “predictive modeling” and is one of the most powerful features of embedded analytics.
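As a minimal illustration of the predictive-modeling idea, the sketch below fits a straight-line trend to a short series of monthly figures and extrapolates one period ahead. The numbers are invented, and production embedded-analytics features use far richer models.

```python
# Least-squares line fit over monthly revenue (index = month number),
# followed by a one-step-ahead forecast -- the simplest trend prediction.
monthly_revenue = [102.0, 108.5, 113.2, 119.8, 124.1]
n = len(monthly_revenue)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(monthly_revenue) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly_revenue)) / sum(
    (x - x_mean) ** 2 for x in xs
)
intercept = y_mean - slope * x_mean
print(f"Forecast for month {n + 1}: {intercept + slope * n:.1f}")
```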

SkyQuest has studied the global business intelligence market and identified current market dynamics and trends. The report provides a detailed analysis of embedded analytics and its increasing adoption among firms by size. This will help market participants understand consumer behavior and formulate growth strategies in line with current market opportunities. To get more insights,

Browse summary of the report and Complete Table of Contents (ToC):

https://skyquestt.com/report/business-intelligence-bi-market

SkyQuest Analysis Suggests 49% of Businesses are Planning to Expand BI Applications, But Pricing to Play Key Role

With the rapid growth of businesses, it is no surprise that data analysis and intelligence are becoming increasingly important across the business intelligence market. In a recent survey, 92% of respondents said that they use business intelligence to analyze their data. However, only 49% said that their organization has a clear strategy for using BI and is aiming to expand the application of BI across the business. As a result, businesses are looking for ways to improve their data-analysis capabilities.

These findings come from a survey conducted by SkyQuest Technology Consulting, which polled 239 business decision-makers in the United States, Asia Pacific, and Europe about their perceptions of BI.

The survey on the global business intelligence market also found that 54% of respondents feel unhappy with their current role in BI and only 30% believe that their skills in BI are valuable to their organization. Furthermore, 43% of respondents said that BI is becoming too expensive for them to afford, while 32% cited its complexity as a reason for not using it. Indeed, 56% of business executives said that cost is a key obstacle to their BI implementation plans.

Interestingly, while only 43% of businesses are currently using BI tools, most believe that this number will increase in the future. Sixty-seven percent of those surveyed believe that BI will become increasingly important over the next three years, while 49 percent think it will be critical within five years.

SkyQuest's report on the business intelligence market can be an essential tool for market players. It helps identify the growth potential, current trends, and future prospects of the market, and it provides detailed analysis of each type of business intelligence product and service. This information would help market players make informed decisions about their investments in this area.

Key Players in Global Business Intelligence Market

  • IBM Corporation (US)

  • Oracle Corporation (US)

  • Microsoft Corporation (US)

  • Google (US)

  • SAP SE (Germany)

  • Cisco Systems Inc. (US)

  • Information Builders (US)

  • Qlik Technologies Inc. (US)

  • SAS Institute Inc. (US)

  • Tableau Software Inc. (US)

  • TIBCO Software Inc. (US)

  • Domo Inc. (US)

Speak to Analyst for your custom requirements:

https://skyquestt.com/speak-with-analyst/business-intelligence-bi-market

Related Reports in SkyQuest’s Library:

Global Machine Learning Market

Global Customer Data Platform Market

Global Cloud Managed Networking Market

Global Metaverse Infrastructure Market

Global Location Based Services Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization and technology services. It has 450+ happy clients globally.

Address:

1 Apache Way, Westford, Massachusetts 01886

Phone:

USA (+1) 617-230-0741

Email: sales@skyquestt.com

