New Jersey, N.J., July 11, 2022: The IIoT in Automotive Market research report provides a comprehensive view of the industry. It offers a market outlook backed by reliable data, helping clients make essential decisions, and covers the market's definition, applications, developments, and manufacturing technology. The report tracks the latest developments and innovations in the market, identifies the obstacles to establishing a business, and offers guidance for overcoming upcoming challenges.
The automotive industry has widely adopted virtual reality (VR) because of its prospective cost and time savings. Automotive applications such as manufacturing workstation optimization, vehicle design, and assembly training make use of VR. Automotive companies use VR environments to evaluate vehicle performance under various conditions, including car crashes. Together, VR and IIoT result in better automobile designs, speed up manufacturing processes, and help deliver to customer-preferred standards. The integration of virtual reality in the automotive industry is identified as one of the key trends contributing to the growth of the IIoT market in the automotive industry.
Get a PDF sample copy (including full TOC, graphs, and tables) of this report @:
This IIoT in Automotive research report throws light on the major market players thriving in the market; it tracks their business strategies, financial status, and upcoming products.
Some of the top companies influencing this market include: Cisco, HCL, IBM, and PTC.
Firstly, this IIoT in Automotive research report introduces the market by providing an overview that includes definition, applications, product launches, developments, challenges, and regions. The market is forecast to show strong development, driven by consumption in various markets. An analysis of current market designs and other basic characteristics is provided in the IIoT in Automotive report.
The region-wise coverage of the market is mentioned in the report, mainly focusing on the regions:
Segmentation Analysis of the market
The market is segmented on the basis of type, product, end users, raw materials, etc. The segmentation helps deliver a precise explanation of the market.
Market Segmentation: By Type
Market Segmentation: By Application
For Any Query or Customization: https://a2zmarketresearch.com/ask-for-customization/654573
An assessment of the market attractiveness with regard to the competition that new players and products are likely to present to older ones has been provided in the publication. The research report also mentions the innovations, new developments, marketing strategies, branding techniques, and products of the key participants present in the global IIoT in Automotive market. To present a clear vision of the market the competitive landscape has been thoroughly analyzed utilizing the value chain analysis. The opportunities and threats present in the future for the key market players have also been emphasized in the publication.
This report aims to provide:
Table of Contents
Global IIoT in Automotive Market Research Report 2022 – 2029
Chapter 1 IIoT in Automotive Market Overview
Chapter 2 Global Economic Impact on Industry
Chapter 3 Global Market Competition by Manufacturers
Chapter 4 Global Production, Revenue (Value) by Region
Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions
Chapter 6 Global Production, Revenue (Value), Price Trend by Type
Chapter 7 Global Market Analysis by Application
Chapter 8 Manufacturing Cost Analysis
Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers
Chapter 10 Marketing Strategy Analysis, Distributors/Traders
Chapter 11 Market Effect Factors Analysis
Chapter 12 Global IIoT in Automotive Market Forecast
Buy Exclusive Report @: https://www.a2zmarketresearch.com/checkout
1887 WHITNEY MESA DR HENDERSON, NV 89014
+1 775 237 4147
(MENAFN- EIN Presswire)
“Rising adoption of machine learning and artificial intelligence in various applications and solutions is a key factor driving growth. Image recognition market size: USD 25.67 Billion in 2020; market growth: at a CAGR of 15.3%; market trends: advancements in technology” — Emergen Research
VANCOUVER, BC, CANADA, June 27, 2022 /EINPresswire.com / -- The global image recognition market size is expected to reach USD 80.29 Billion by 2028, expanding at a steady CAGR of 15.3%, according to the latest analysis by Emergen Research. Increasing usage of smartphones globally and rising adoption of Virtual Reality (VR) and Augmented Reality (AR) are key factors driving growth of the global image recognition market.
Other factors include growing demand for face recognition in tablets, smartphones, and personal computers due to technological advancements. Increasing budgets for homeland security and defense spending by governments in countries such as China, Russia, and India are also contributing to growth of the market to a significant extent. In February 2020, Department of Homeland Security officials detailed the administration's use of facial recognition technologies across the U.S. To date, over 43.7 million individuals have been scanned at border crossings, outbound ships, and elsewhere by various systems. Facial recognition identified 252 persons attempting to use a combined 75 U.S. travel documents belonging to someone else; around 7% were under the age of 18, and 20% had criminal records.
Get a sample of the report @
Image recognition applications include targeted advertising, smart photo libraries, accessibility for the visually impaired, media interactivity, and enhanced research capabilities. Google, Microsoft, Facebook, Pinterest, and Apple are investing in resources and research into image recognition and related applications. Improper data storage, security breaches, and misuse of facial recognition data are some of the primary concerns related to image recognition technology, which could limit adoption in several areas.
The advent of cloud media services and increase in number of mobile devices is fueling market growth to a significant extent. Deployment and preference for cloud-based services has increased to a significant extent due to the COVID-19 pandemic. The major benefits of cloud as a platform in the market are its easy scalability and ease of sharing already collected images and data between all devices.
The key companies studied in the report are:
Request a discount on the report @
Some Key Highlights in the Report
The software segment accounted for largest revenue share in 2020 due to a sudden and rapid increase in adoption of image recognition software in computer graphics, medical imaging, and photo editing, among others. Rapidly growing trends of industry automation and Industry 4.0 are driving adoption of image recognition software, and the trend is expected to continue over the forecast period.
Image recognition or tracking is used in augmented reality to track, detect, and augment 2D images. Image tracking depends on advanced computer vision technology to track and augment images. Jack Daniel's augmented reality app turns whisky bottles into pop-out storybooks. The free app uses a tablet or smartphone camera to recognize the sticker on the bottle and unfolds the whole manufacturing process of the drink in a matching black and white pop-up book.
Image recognition is an important tool in the autonomous vehicles used by Uber and Google. Sensors at the front of a vehicle detect road signs and obstacles, and the recognition technology identifies them. Computer vision systems powered by deep learning are trained with thousands of images of humans, road signs, and obstacles on the road under different weather and lighting conditions. The intelligence of the system continues to increase as new information is fed in.
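The underlying supervised idea can be illustrated with a toy nearest-centroid classifier: average the feature vectors of labeled training examples, then assign a new input to the closest class. The three-element "feature vectors" and labels below are invented for illustration; real systems train deep neural networks on raw pixels.

```python
# Toy sketch of supervised image recognition: a nearest-centroid
# classifier over hand-made feature vectors. The features and labels
# are invented; production systems use deep networks over raw pixels.

def train_centroids(examples):
    """examples: list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, vec):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], vec))

training = [
    ([0.9, 0.1, 0.2], "stop_sign"),
    ([0.8, 0.2, 0.1], "stop_sign"),
    ([0.1, 0.9, 0.8], "pedestrian"),
    ([0.2, 0.8, 0.9], "pedestrian"),
]
centroids = train_centroids(training)
print(classify(centroids, [0.85, 0.15, 0.15]))  # a stop-sign-like input
```

The "training with thousands of images" described above corresponds to computing those per-class summaries; the sketch just does it with averages instead of gradient descent.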
North America accounted for largest revenue share in 2020 due to high level of integration of AI in e-commerce and digital shopping. Companies in the region are quick to adopt advanced technologies such as AI, deep learning, and cloud-based technologies, which is propelling growth of the market.
To know more about the report, click @
Emergen Research has segmented the global image recognition market on the basis of component, application, deployment mode, technique, industry vertical, and region:
Component Outlook (Revenue, USD Billion; 2018–2028)
Application Outlook (Revenue, USD Billion; 2018–2028)
Security and Surveillance
Scanning and Imaging
Marketing and Advertising
Deployment Mode Outlook (Revenue, USD Billion; 2018–2028)
Technique Outlook (Revenue, USD Billion; 2018–2028)
QR/Barcode Recognition
Optical Character Recognition
Industry Vertical Outlook (Revenue, USD Billion; 2018–2028)
Media & Entertainment
Retail & E-commerce
IT & Telecom
Automobile & Transportation
Request customization of the report @
Regional Outlook (Revenue, USD Billion; 2018–2028)
Rest of Europe
Rest of APAC
Rest of LATAM
Middle East & Africa
Rest of MEA
The report addresses the following key points:
The report estimates the expected market size from 2020 to 2028
The report provides a forecast of market drivers, restraints, and future opportunities for the Image Recognition market
The report further analyses the changing market dynamics
Regional analysis and segmentation of the market with analysis of the regions and segments expected to dominate the market growth
Extensive competitive landscape mapping with profiles of the key competitors
In-depth analysis of business strategies and collaborations such as mergers and acquisitions adopted by the key companies
Revenue forecast, country scope, application insights, and product insights
Thank you for reading our report. For any specific details on customization of this report, please get in touch with us. We will ensure the report you get is well-suited to your needs.
About Emergen Research
At Emergen Research, we believe in advancing with technology. We are a growing market research and strategy consulting company with an exhaustive knowledge base of cutting-edge and potentially market-disrupting technologies that are predicted to become more prevalent in the coming decade.
Artificial intelligence’s impact on the world has already been felt in a variety of ways, from chess computers to search engine algorithms to a chatbot so convincing that a Google researcher thinks it’s sentient. So what’s the future of AI?
Obviously, the future of AI can’t be predicted any more than tomorrow’s lottery numbers can. But even as the research in the field drives the technology further and further, we can put our futurist caps on and speculate about what the world might look like in an AI-driven future.
For clarity, we’ll focus on the AI advances in the world of business, yet we’ll also paint a picture of the world at-large as well.
Also see: Top AI Software
Fiction can have an impact on real-world scientific research. Isaac Asimov’s Three Laws of Robotics – laid out in his short story Runaround – have been part of the discussion of ethics in AI since the field began, even if modern ethical discussions tend to view Asimov’s laws as a fair but lacking starting point.
In these fictional portrayals, there is much anxiety about AI’s use as a weapon. Arguably the most famous fictional instance of AI is either HAL 9000 of 2001: A Space Odyssey or the Terminators from the franchise of the same name. Both properties deal with AI trying to kill humans by any means necessary.
However, AI is just as often portrayed as heroic as monstrous, though its weapon status is often still at the forefront. Many readers might remember The Iron Giant, wherein a 50-foot-tall alien robot struggles with its identity and the United States military before ultimately deciding it’d rather be Superman than a weapon.
These anxieties over AI-as-weapon, well-founded or not, are influential on modern AI-related policy. As recently as 2019, the U.N. was discussing the banning of lethal autonomous weapon systems (LAWS), which calls to mind the exact sort of “killer robot” anxieties present in our fiction.
AI has become a prevalent piece of modern life. When you search something on Google, you’ll be dealing with its Multitask Unified Model (MUM) AI algorithm, the latest in a series of AI at the core of Google’s search engine. If you own Amazon’s Alexa or a similar home virtual assistant, you’ve brought an AI into your home.
Focusing on business uses, AI is basically everywhere. Technology like chatbots for customer service, autonomous fraud detection, and automated invoice processing is in use by companies both large and small the world over. In fact, this very article was written in Google Docs, which has a widely used AI-driven Smart Compose feature.
Almost every major tech company in the world has a department actively researching or implementing AI, if not multiple departments. There are countless new AI startups around the world that offer AI-driven software-as-a-service platforms that claim they can save businesses money. The world of business, especially the tech industry, is awash with artificial intelligence and machine learning.
Also see: Digital Transformation Guide: Definition, Types & Strategy
So, why do these companies use AI so much, and why are there so many startups springing up to offer these sorts of AI-powered services to consumers and executives alike? The easy answer is that AI is a trend with ups and downs, and it’s currently trending upward in terms of interest as a viable business technology.
In fact, Grand View Research projects that AI will grow at a compound rate of 38.1% annually from 2022 to 2030.
Beyond trends, there are viable use cases for AI that are driving the future of AI. As far back as the 1980s, major U.S. companies were using expert systems to automate certain tasks to great effect.
For instance, robotic process automation (RPA) uses AI or machine learning to automate simple repetitive tasks, such as the aforementioned invoice processing. When properly implemented, it can be a tremendous cost-saving tool, especially for small-to-midsize businesses (SMBs) that can’t afford to pay a human to perform the same tasks. Expect this use case to grow greatly in the future.
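As a flavor of what such automation does, the sketch below pulls fields out of invoice text with simple rules so no human has to re-key them. The invoice layout and field names are invented for illustration; production RPA platforms combine OCR, templates, and machine learning rather than three regular expressions.

```python
# Minimal sketch of the RPA idea: extract structured fields from
# invoice text with rules. The invoice format and field names are
# invented; real platforms pair OCR and ML with rules like these.
import re

def extract_invoice(text):
    fields = {
        "invoice_no": r"Invoice\s*#\s*(\w+)",
        "total": r"Total:\s*\$?([\d,]+\.\d{2})",
        "due_date": r"Due:\s*(\d{4}-\d{2}-\d{2})",
    }
    out = {}
    for name, pattern in fields.items():
        m = re.search(pattern, text)
        out[name] = m.group(1) if m else None
    return out

sample = "Invoice # INV42\nTotal: $1,250.00\nDue: 2022-08-15"
print(extract_invoice(sample))
```

Each extracted record would then be posted to the accounting system automatically, which is where the cost saving over manual processing comes from.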
Additionally, many companies use algorithms to help optimize a user’s experience, such as customer service chatbots or Google Photos’ automatic image enhancement feature. For the former, it provides 24/7 service, which no single person could provide on their own. For the latter, it can eliminate the human error present in Photos’ manual image enhancement, leading to more consistent improvements to pictures.
Customer-facing AI can have its drawbacks, however. Google Photos’ photo-tagging algorithm has been infamously inaccurate in the past, and anyone who’s had to talk to a chatbot for IT support knows how unhelpful they can be if you don’t know the exact way to communicate with it. But advances in this technology will certainly drive future AI developments.
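Both the 24/7 upside and the brittleness described above are visible in even a minimal rule-based bot: it answers instantly when the user's phrasing matches its rules, and falls flat when the same intent is worded differently. The keywords and canned replies below are invented for illustration.

```python
# Sketch of a rule-based support chatbot. It illustrates both the
# always-on use case and the fragility: only phrasings containing the
# anticipated keywords get a useful answer. Rules are invented.

RULES = [
    ({"password", "reset"}, "You can reset your password at the account page."),
    ({"refund"}, "Refunds are processed within 5 business days."),
]

def reply(message):
    words = set(message.lower().split())
    for keywords, answer in RULES:
        if keywords <= words:  # all keywords must appear in the message
            return answer
    return "Sorry, I didn't understand. Try rephrasing."

print(reply("how do I reset my password"))   # matched by the first rule
print(reply("my login thing is broken"))     # same intent, no match
```

Modern bots layer intent-classification models on top of rules to close exactly this gap, which is one of the advances driving future development.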
Also see: Best Data Analytics Tools
Any discussion of the future in artificial intelligence will inevitably turn to the idea of AI recreating human-like learning and growth patterns or of attaining a version of sentience. Since the field first took off in the 1950s, this concept has dominated the discussion of AI, both within the field and without.
Award-winning computer scientist and chief AI scientist at Meta, Yann LeCun, published a paper in late June 2022 discussing his own vision of how machines can begin thinking like humans. In it, LeCun proposes using the psychological concept of world models to let AI replicate the way humans can intuitively predict the consequences of certain actions.
An example LeCun uses is the difference between a self-driving car and a human driver. A self-driving car might need multiple instances of failure to learn that driving too fast while turning is a bad idea, but the human driver’s instinctive knowledge of physics would tell them that going too fast while turning probably won’t end well.
Throughout the paper, LeCun builds out how this concept could be replicated for an AI. He proposes a six-module architecture for the AI, where each module feeds into each other to replicate the way all parts of the human brain feed into each other to create our observations and models of the world.
While LeCun himself admits to limitations and flaws with his proposal, the paper was written for readers with minimal tech or mathematical knowledge, allowing readers of any industry to be able to understand the potential of AI with human-like thinking patterns.
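The prediction-before-action idea at the heart of this proposal can be caricatured in a few lines of code: consult an internal model of the world to predict each candidate action's consequence, and reject predicted-bad actions instead of learning only from real failures. This is a toy sketch, not LeCun's six-module architecture, and the "world model" here is a single invented physics rule.

```python
# Toy sketch of the world-model idea: predict consequences before
# acting. The physics rule below is deliberately crude and invented;
# a learned world model would be a trained predictive network.

def world_model(speed, turning):
    """Predict the outcome of driving at `speed` while turning."""
    if turning and speed > 40:
        return "skid"
    return "ok"

def choose_speed(candidate_speeds, turning):
    # Keep only actions the model predicts are safe, then go fastest.
    safe = [s for s in candidate_speeds if world_model(s, turning) == "ok"]
    return max(safe)

print(choose_speed([20, 40, 60, 80], turning=True))   # caps speed in a turn
print(choose_speed([20, 40, 60, 80], turning=False))  # no cap on a straight
```

The human driver in LeCun's example is doing the equivalent of `world_model` instinctively; the self-driving car without one has to discover "skid" by experiencing it.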
LeCun is obviously not the only one looking to the future of AI. In fact, researchers at Google subsidiary DeepMind have developed PLATO, an AI that roughly replicates the way infants learn simple physics concepts.
Looking only at advancements within AI itself does not paint a complete picture. Technological advancement doesn't occur in isolated silos, and a cross-disciplinary field like AI is particularly affected by the state of the tech around it.
Cloud computing, for example, has gone a long way in making AI more accessible. The infrastructure and services provided by cloud computing mean practitioners no longer need to build and maintain separate infrastructure for their AI platforms.
This goes both ways as well, as some developers have used AI to help push cloud computing forward. Such integration allows for streamlined data access, access to automated data mining from cloud servers, and other benefits.
Quantum computing, built on the principles found in quantum physics, can allow AI to process larger and more complex datasets than is possible with traditional methods of computing.
IBM is a leader in this field and, in May 2022, unveiled its roadmap to building quantum-centric computers, with some of the first quantum-based software apps expected to begin development by 2025. As quantum computing develops and becomes more accessible, AI is likely to start making leaps in advancement as well.
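To make the appeal concrete: a single qubit holds a superposition of states that a classical bit cannot, which is what lets quantum machines explore large state spaces. The following toy simulates one qubit passing through a Hadamard gate using plain arithmetic. It is a statevector sketch for illustration only, not a program for real quantum hardware or any vendor's SDK.

```python
# Toy statevector simulation: apply a Hadamard gate to a qubit in |0>,
# producing an equal superposition of |0> and |1>. Pure illustration,
# done with ordinary floating-point arithmetic.
import math

state = [1.0, 0.0]            # amplitudes of |0> and |1>
h = 1 / math.sqrt(2)
hadamard = [[h, h], [h, -h]]  # the Hadamard gate as a 2x2 matrix

# Matrix-vector multiply: new_state = H @ state
state = [sum(hadamard[i][j] * state[j] for j in range(2)) for i in range(2)]
probs = [a * a for a in state]  # measurement probabilities
print(probs)  # both outcomes equally likely
```

Simulating n qubits this way needs 2^n amplitudes, which is precisely why real quantum hardware, rather than classical simulation, is interesting for large AI workloads.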
Also see: Why Cloud Means Cloud Native
AI has already had a significant impact on the world around us in the 21st century, but as more research and resources are put into the field’s advancement, we will begin to see even more of AI’s influence in our day-to-day lives.
In the world of healthcare, AI provides medical professionals the ability to process increasingly large datasets. Researchers were using AI modeling to aid in the development of COVID-19 vaccines from the beginning of the pandemic. As AI advances and becomes more accessible, it will likely be used to combat other diseases and ailments.
Manufacturing is a classic example of AI and automation reshaping the world. The idea of computers taking blue collar jobs in this industry is ingrained in the minds of a lot of people in the U.S. And indeed, automation has driven job losses in some industrial scenarios.
In reality, computers aren’t taking everyone’s jobs en masse, but advancements in AI might add still more automation to the process. We could very well see AI not just produce manufactured goods but perform quality checks to ensure products are fit to ship with minimal human oversight.
With many businesses shifting to work-from-home and hybrid arrangements, AI (specifically RPA) can be used to automate some of the more repetitive tasks necessary in an office setting, such as customer support. This can provide human employees more time to analyze and develop creative solutions to the sort of complex problems they were hired to solve.
Banking and financial services already use AI; its impact shows in the way these companies analyze data, provide financial advice, and detect fraud. As AI gets more advanced, we could see banks further leverage it to maintain and facilitate the many services they provide, such as loans and mortgages.
The tech industry as a whole is constantly pushing for progress, and artificial intelligence has been one of the pillars of that progress throughout the 21st century. As advancements are made and research conducted, AI’s influence over the industry and the world will likely only grow.
As noted, we see AI in our everyday lives via search engine algorithms and virtual assistants, but as industries like banking and healthcare adopt more and more AI-powered software and solutions, AI could become the most important field of tech in our world today, provided another AI winter doesn't show up to cool the field off again.
That said, whether it’s quantum computing taking its capabilities to new heights or the enduring dream of human-like intelligence, the future of AI is expected to play a deeply significant role in consumer and business markets.
Also see: Real Time Data Management Trends
Data Visualization Tools Market by Tool (Standalone and Integrated), Organization Size, Deployment Mode, Business Function, Vertical (BFSI, Telecommunications and IT, Healthcare and Life Sciences, Government), and Region – Global Forecast to 2026
The global data visualization tools market size is projected to grow from USD 5.9 billion in 2021 to USD 10.2 billion by 2026, at a Compound Annual Growth Rate (CAGR) of 11.6% during the forecast period. Various factors, such as the growing demand for an interactive view of data for faster business decisions and increasing developments in Augmented Reality (AR) and Virtual Reality (VR) that enable companies to interact with data in 3D formats, are expected to drive demand for data visualization tools.
Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=94728248
Businesses providing software tools are also expected to slow down for a short span of time. However, real-time data-based applications, collaborative applications, analytics, security solutions, and AI are expected to witness increased adoption after the slight slowdown. Verticals such as manufacturing, retail, and energy and utilities have witnessed a moderate slowdown, whereas the BFSI, government, and healthcare and life sciences verticals have witnessed a minimal impact. Data visualization techniques have been front-and-center in efforts to communicate the science around COVID-19 to the very broad audience of policy makers, scientists, healthcare providers, and the public. Companies are focusing on the development of interactive dashboards for analysis of daily cases.
The cloud segment to grow at a higher CAGR during the forecast period
The data visualization tools market by deployment mode has been segmented into on-premises and cloud. The cloud segment is expected to grow at a rapid pace during the forecast period. The high CAGR of the cloud segment can be attributed to easy deployment options and minimal requirements of capital and time; these advantages matter in the COVID-19 lockdown scenario, as social distancing and the shift to online purchasing of goods are expected to drive adoption of cloud-based data visualization tools. Highly secure data encryption, complete data visibility, enhanced control over data location, and the real-time availability of data for extracting insights are responsible for the continued adoption of on-premises data visualization tools.
Data visualization tools are essential to analyze massive amounts of information and make data-driven decisions. They provide an accessible way to see and understand trends, outliers, and patterns in data. With further advancements in technology, dynamic data stories with more automated and consumer-based experiences would replace visual, point-and-click authoring and exploration, shifting the focus from predefined dashboards to in-context data stories. Users would prefer data visualization solutions that stream the most relevant insights to each user based on their context, role, or use. Future advancements in data visualization tools would leverage technologies such as augmented analytics, Natural Language Processing (NLP), streaming anomaly detection, and collaboration.
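Of those technologies, streaming anomaly detection is easy to sketch: keep running statistics over the stream (here, Welford's online mean/variance algorithm) and flag values far from the mean. The 3-sigma threshold and the readings below are invented for illustration.

```python
# Sketch of streaming anomaly detection: maintain running mean and
# variance with Welford's online algorithm, flag points more than
# `threshold` standard deviations from the mean. Data is invented.
import math

class StreamDetector:
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def update(self, x):
        """Return True if x looks anomalous, then fold it into the stats."""
        anomalous = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

d = StreamDetector()
readings = [10, 11, 9, 10, 10, 11, 9, 10, 50]
flags = [d.update(x) for x in readings]
print(flags[-1])  # the spike at 50 is flagged
```

A dashboard would surface such flags as alerts the moment they occur, rather than waiting for a human to spot the spike in a chart.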
Request sample pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=94728248
Some of the key players operating in the data visualization tools market include Salesforce (US), SAP (Germany), Microsoft (US), Oracle (US), IBM (US), AWS (US), Sisense (US), Alteryx (US), SAS Institute (US), Alibaba Cloud (China), Dundas (Canada), TIBCO Software (US), Qlik (US), GoodData (US), Domo (US), Klipfolio (Canada), Datafay (US), Zegami (England), Live Earth (US), Reeport (France), Cluvio (Germany), Whatagraph (The Netherlands), Databox (US), Datapine (Germany), Toucan Toco (France), and Chord (US). These data visualization tools vendors have adopted various organic and inorganic strategies to sustain their positions and increase their market shares in the global data visualization tools market.
Salesforce was founded in 1999 and is headquartered in California, US. In June 2019, Salesforce acquired Tableau, a data visualization platform provider, in a move that would strengthen its position in the digital transformation space. Its integrated platform provides a single shared view of every customer across departments, including marketing, sales, commerce, and services. Salesforce has a community of over 10 million innovators, disruptors, and community shapers known as Trailblazers. The company offers a wide range of products and services across segments, including sales, services, marketing, application, analytics, employee experience, trailblazers and reskilling, and enablement and collaboration, most of which operate on a single trusted cloud platform. Salesforce also offers a cross-cloud technology named Salesforce 360, which helps its clients obtain a single integrated, holistic customer profile for various departments. The company caters to various industries, including BFSI, healthcare and life sciences, government, communications, retail, consumer goods, media, manufacturing, transportation, hospitality, automotive, and education. It has a geographic presence in North America, Europe, APAC, and MEA.
SAP was founded in 1972 and is headquartered in Walldorf, Germany. The company is a leading provider of enterprise application solutions and services. It is also a provider of enterprise resource planning, supply chain management, data integration and quality, and master data management. Its solutions are compliant with GDPR, and they enable enterprises to build intelligent AI- and ML-based software that unites human expertise with machine-generated insights. The company segments its diverse portfolio into applications, technology and services, intelligent spend group, and Qualtrics. It works on an intelligent enterprise framework, which includes experience, intelligence, and operations business models. It is known to offer the SAP HANA platform through the experience model of the framework. The platform enables both transactional processing for data capture and retrieval, and analytical processing for BI and reporting. Its offerings cater to various industry verticals, including BFSI, public services, telecommunications, energy and utilities, transportation and logistics, travel and hospitality, healthcare and life sciences, and media and entertainment. SAP has more than 345,000 customers across 180 countries in the Americas, EMEA, and APAC.
Company Name: MarketsandMarkets™ Research Private Ltd.
Contact Person: Mr. Aashish Mehra
Address: 630 Dundee Road Suite 430
State: IL 60062
Country: United States
Business analytics is an increasingly powerful tool for organisations, but one that is associated with steep learning curves and significant investments in infrastructure.
The idea of using data to drive better decision-making is well established. But the conventional approach – centred around reporting and analysis tools – relies on specialist applications and highly trained staff. Often, firms find they have to build teams of data scientists to gather the data and manage the tools, and to build queries.
This creates bottlenecks in the flow of information, as business units rely on specialist teams to interrogate the data, and to report back. Even though reporting tools have improved dramatically over the past decade, with a move from spreadsheets to visual dashboards, there is still too much distance between the data and the decision-makers.
Companies and organisations also face dealing with myriad data sources. A study from IDC found that close to four in five firms used more than 100 data sources, and just under one-third had more than 1,000. Often, this data exists in silos.
As a result, suppliers have developed embedded analytics to bring users closer to the data and, hopefully, lead to faster and more accurate decision-making. Suppliers in the space include ThoughtSpot, Qlik and Tableau, but business intelligence (BI) and data stalwarts such as Informatica, SAS, IBM and Microsoft also have relevant capabilities.
Embedded analytics adds functionality into existing enterprise software and web applications. That way, users no longer need to swap into another application – typically a dashboard or even a BI tool itself – to look at data. Instead, analytics suppliers provide application programming interfaces (APIs) to link their tools to the host application.
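A common shape for such an API is the host application generating a signed URL for a dashboard, which it then places in an iframe or component. The sketch below is illustrative only: the domain, endpoint, parameter names, and HMAC signing scheme are invented for this example, not any particular supplier's API.

```python
# Sketch of an embedded-analytics integration: the host app signs an
# embed URL so the analytics service can trust the request. Endpoint,
# parameters, and signing scheme are invented for illustration.
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"shared-secret-from-vendor-console"  # hypothetical shared key

def embed_url(dashboard_id, user_id):
    params = {"dashboard": dashboard_id, "user": user_id}
    query = urlencode(sorted(params.items()))
    sig = hmac.new(SECRET, query.encode(), hashlib.sha256).hexdigest()
    return f"https://analytics.example.com/embed?{query}&sig={sig}"

url = embed_url("sales-overview", "u123")
print(url)  # the host page drops this URL into an iframe
```

Because the signature binds the dashboard and user identifiers, the analytics backend can enforce per-user data permissions without the user ever leaving the host application.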
Embedded analytics can be used to give mobile and remote workers access to decision-support information, and potentially even the underlying data, on the move. This goes beyond simple alerting tools: systems with embedded analytics built in allow users to see visualisations and to drill down into live data.
And the technology is even being used to provide context-aware information to consumers. Google, for example, uses analytics to present information about how busy a location or service will be, based on variables such as the time of day.
Indeed, some suppliers describe embedded analytics as a “Google for business” because it allows users to access data without technical know-how or an understanding of analytical queries.
“My definition generally is having analytics available in the system,” says Adam Mayer, technical product director at Qlik. “That’s not your dedicated kind of BI tool, but more to the point, I think it’s when you don’t realise that you’re analysing data. It’s just there.”
The trend towards embedding analytics into other applications or web services reflects the reality that there are many more people in enterprises who could benefit from the insights offered by BI than there are users of conventional BI systems.
Firms also want to improve their return on investment in data collection and storage by giving more of the business access to the information they hold. And with the growth of machine learning and artificial intelligence (AI), some of the heavy lifting associated with querying data is being automated.
“What we are trying to do is deliver non-technical users the ability to engage with data,” says Damien Brophy, VP for Europe, the Middle East and Africa (EMEA) at ThoughtSpot. “We’re bringing that consumer-like, Google-like experience to enterprise data. It is giving thousands of people access to data, as opposed to five or 10 analysts in the business who then produce content for the rest of the business.”
At one level, embedded analytics stands to replace static reports and potentially dashboards too, without the need to switch applications. That way, an HR or supply chain specialist can view and – to a degree – query data from within their HR or enterprise resource planning (ERP) system, for example.
A field service engineer could use an embedded analysis module within a maintenance application to run basic “what if” queries, to check whether it is better to replace a part now or carry out a minor repair and do a full replacement later.
Also, customer service agents are using embedded analytics to help with decision-making and to tailor offers to customers.
Embedded systems are designed to work with live data and even data streams, even where users do not need to drill down into the data. Enterprises are likely to use the same data to drive multiple analysis tools: the analytics, business development or finance teams will use their own tools to carry out complex queries, and a field service or customer service agent might need little more than a red or green traffic light on their screen.
“The basic idea is that every time your traditional reporting process finds the root cause of a business problem, you train your software, either by formal if-then-else rules or via machine learning, to alert you the next time a similar situation is about to arise,” says Duncan Jones, VP and principal analyst at Forrester.
“For instance, suppose you need to investigate suppliers that are late delivering important items. In the old approach, you would create reports about supplier performance, with on-time-delivery KPIs and trends, and you’d pore through them looking for poor performers.
“The new approach is to create that as a view within your home screen or dashboard, continually alerting you to the worst performers or rapidly deteriorating ones, and triggering a formal workflow for you to record the actions you’ve taken – such as contacting that supplier to find out what it is doing to fix its problems.”
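Jones’s if-then-else approach can be sketched in a few lines. The thresholds and data shapes below are illustrative, not industry standards: a supplier is flagged when its on-time-delivery KPI falls below a target, or drops sharply against the previous quarter.

```python
def on_time_rate(deliveries):
    """Fraction of deliveries that arrived on time."""
    return sum(1 for d in deliveries if d["on_time"]) / len(deliveries)

def supplier_alerts(history, threshold=0.85, drop=0.10):
    """Simple if-then-else rules of the kind Jones describes: flag suppliers
    whose on-time-delivery KPI is poor, or deteriorating quarter over quarter.
    The 85% target and 10-point drop are made-up illustrations."""
    alerts = []
    for supplier, quarters in history.items():
        current = on_time_rate(quarters[-1])
        previous = on_time_rate(quarters[-2])
        if current < threshold:
            alerts.append((supplier, "below target", current))
        elif previous - current > drop:
            alerts.append((supplier, "deteriorating", current))
    return alerts

# Toy data: each supplier has two quarters of delivery records.
perfect  = [{"on_time": True}] * 10                               # 1.00
okay     = [{"on_time": True}] * 9 + [{"on_time": False}]         # 0.90
slipping = [{"on_time": True}] * 22 + [{"on_time": False}] * 3    # 0.88
poor     = [{"on_time": True}] * 7 + [{"on_time": False}] * 3     # 0.70

history = {
    "steady":   [okay, okay],         # no alert
    "slipping": [perfect, slipping],  # deteriorating
    "late":     [okay, poor],         # below target
}
alerts = supplier_alerts(history)
```

Each alert tuple could then open a workflow item, which is the “record the actions you’ve taken” step in the quote above.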
This type of alerting helps businesses, because it speeds up the decision-making process by providing better access to data that the organisation already holds.
“It’s partly businesses’ need to move faster, to react more quickly to issues,” says Jones. “It’s also the evolution of the technology, making embedded analytics easier to deliver.”
Embedded analytics suppliers are also taking advantage of the trend for businesses to store more of their data in the cloud, making it easier to link to multiple applications via APIs. Some are going a step further and offering analytical services too: a firm might no longer need expertise in BI, as the provider can offer its own analytical capabilities.
Again, this could be via the cloud, but serving the results back to the users in their own application. And it could even go further by allowing different users to analyse data in their own workflow-native applications.
A “smart” medical device, such as an asthma inhaler, could provide an individual’s clinical data to their doctor, but anonymised and aggregated data to the manufacturer to allow them to plan drug manufacturing capacity better.
“Data now is changing so quickly, you really need intraday reporting,” says Lee Howells, an analytics specialist at PA Consulting. “If we can put that in on a portal and allow people to see it as it happened, or interact with it, they are then able to drill down on it.
“It’s putting that data where employees can use it and those employees can be anyone from the CEO to people on operations.”
But if the advantage of embedded analytics lies in its ability to tailor data to the users’ roles and day-to-day applications, it still relies on the fundamentals of robust BI systems.
Firms considering embedded analytics need to look at data quality, data protection and data governance.
They also need to pay attention to security and privacy: the central data warehouse or data lake might have robust security controls, but does the application connecting via an API? Client software embedding the data should have equal security levels.
And, although cleaning data is always important for effective analytics and business intelligence, it becomes all the more critical when the users are not data scientists. They need to know that they can trust the data, and if the data is imperfect or incomplete, this needs to be flagged.
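One lightweight way to surface that flag is to compute per-field completeness before data reaches the embedded view, and let the UI warn users when a field falls below a cut-off. The 95% threshold here is an arbitrary illustration:

```python
def completeness(records, field):
    """Share of records in which the field is present and non-null."""
    return sum(1 for r in records if r.get(field) is not None) / len(records)

def trust_flags(records, fields, minimum=0.95):
    """Flag any field whose completeness falls below a minimum share, so an
    embedded view can warn non-specialist users. 95% is illustrative only."""
    return {f: ("ok" if completeness(records, f) >= minimum else "incomplete")
            for f in fields}

rows = [
    {"supplier": "Acme",     "lead_time": 4},
    {"supplier": "Globex",   "lead_time": None},
    {"supplier": "Initech",  "lead_time": 6},
    {"supplier": "Umbrella", "lead_time": 5},
]
flags = trust_flags(rows, ["supplier", "lead_time"])
```

A field marked "incomplete" would be rendered with a warning badge rather than hidden, so users still see the data but know not to over-trust it.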
A data scientist working on an analytics team will have an instinctive feel for data quality and reliability, and will understand that data need not be 100% complete to improve decision-making. But a user in the field, or a senior manager, might not.
“Embedded analytics continues the democratisation of data, bringing data and insight directly to the business user within their natural workflow,” says Greg Hanson, VP for EMEA at Informatica.
“This fosters a culture of data-driven decision-making and can speed time to value. However, for CDOs [chief data officers] and CIOs, the crucial question must be: ‘is it accurate, is it trustworthy and can I rely on it?’ For embedded analytics programmes to be a success, organisations need confidence that the data fuelling them is from the right sources, is high quality and the lineage is understood.”
CDOs should also consider starting small and scaling up. The usefulness of real-time data will vary from workflow to workflow. Some suppliers’ APIs will integrate better with the host application than others. And users will need time to become comfortable making decisions based on the data they see, and to develop a feel for when questions are better passed on to the analytics or data science team.
“Organisations, as part of their next step forward, have come to us with their cloud infrastructure or data lakes already in place, and they started to transform their data engineering into something that can be used,” says PA Consulting’s Howells. “Sometimes they put several small use cases in place as proof of concept and the proof of value. Some data isn’t as well used as it could be. I think that’s going to be a continually evolving capability.”
It’s becoming clear that we are starting to move toward a hybrid and multicloud world. Although many organizations will be content to use a limited number of public clouds and keep the bulk of their workloads on a favorite platform, many organizations are starting to diversify their cloud strategies and recognize the need for more integration and portability.
In our latest Future of Hybrid Cloud Automation report, Futuriom looked at multicloud and hybrid cloud strategies and real-world implementations and found that hybrid platforms are indeed being put into production. Multicloud enables organizations to leverage the tools of multiple clouds. Hybrid cloud enables organizations to blend infrastructure and applications into one virtualized cloud, tapping multiple providers.
In our discussions and research, the key benefits of multicloud and hybrid cloud architectures include accelerated digitization, application workload flexibility, control over IT infrastructure, and better control over cloud costs. We found a number of organizations that have successfully moved to hybrid cloud and multicloud architectures over the past few years, including Coca Cola Enterprises, Target, Walmart, Fidelity, Mercedes-Benz, and the BMW Group – just to name a few.
With these trends now in place, we have seen an explosion of new cloud tools adopted for hybrid and multicloud environments – while at the same time established cloud platforms are pivoting to fit into the new hybrid reality. The large public cloud providers are all ramping up tools — including offerings in management, compute, networking, and security – to support hybrid or multicloud operations. In addition, major cloud platform providers such as VMware and IBM have spent years of M&A and strategy to build up cross-platform hybrid cloud tools.
The main thrust of these efforts includes integrating key functions such as cloud management, Kubernetes cluster management, and multicloud networking. In between, there is a wide variety of cloud integration specialists using APIs and connectors to streamline the process of connecting hybrid clouds. In addition to the major players, there are dozens of startups targeting hybrid cloud operations.
Futuriom spent months studying these developments, which we have summarized in our Hybrid Cloud Automation report. Next week, we’ll discuss what we’ve found in multicloud networking integration, security, cost management, and hybrid cloud management areas. But first, let’s take a look at the hybrid cloud movement for the major public cloud players – platforms such as Amazon Web Service (AWS), Google Cloud, IBM Cloud, Microsoft Azure, and Oracle Cloud.
The major cloud providers are constantly adding proprietary management and platform tools (such as networking) to make their clouds “stickier.” Many end users we speak to are leery of the inevitable “lock-in” strategy pursued by the public cloud providers, so they will also look at ways to do this independently.
Amazon has indeed emphasized a “one cloud” strategy to build the broadest and most capable cloud platform, including developing its own Graviton chips. Amazon is also pushing further out to the edge with Outposts, as well as its own private wireless offering for 5G and other wireless services. Customers Amazon has promoted for its “one cloud” strategy include DISH, Goldman Sachs, and United Airlines. Amazon also has an interesting tool called CloudFormation, which allows you to model resources in YAML or JSON, automate them, and then deploy them in your AWS cloud-based infrastructure. This is competitive with tools that are more geared for multicloud, such as HashiCorp Terraform or Morpheus Data.
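For illustration, a minimal CloudFormation template of the kind described, declaring a single resource in YAML; the bucket name is made up and would need to be globally unique:

```yaml
# Minimal CloudFormation template: declares one S3 bucket.
# Deploy with: aws cloudformation deploy --template-file stack.yml --stack-name demo
AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative single-resource stack
Resources:
  ReportBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-report-bucket   # illustrative name
Outputs:
  BucketArn:
    Value: !GetAtt ReportBucket.Arn
```

Terraform expresses the same declare-then-deploy idea in a provider-neutral language, which is why it is often preferred in multicloud settings.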
Microsoft and Google have been less forceful in pushing a one-cloud strategy, but that doesn’t mean they don’t have a lock-in agenda as well. For example, Microsoft has focused on networking tools designed to create end-user addiction to its own cloud network. It’s had great success with its own open-source networking platform, Software for Open Networking in the Cloud (SONiC), and Azure Virtual WAN. Recently, however, Microsoft has paid more marketing attention to hybrid cloud operations and has made Azure Arc a centerpiece of this strategy, saying it offers customers the potential to take advantage of Azure’s security, governance, and management capabilities while at the same time being able to deploy resources on private cloud and on-prem resources. Microsoft has cited Royal Bank of Canada as a customer implementing this hybrid cloud strategy.
Hybrid and multicloud presents both a threat and an opportunity for public cloud providers. For the leaders, particularly #1 Amazon, it’s more of a threat. For the insurgents lower on the totem pole – players like Google, IBM, and Oracle – it’s an opportunity. Microsoft, at #2, can benefit from both perspectives.
Google is also building out its cloud management tools, including Anthos, Looker, BigQuery Omni, and Apigee API management. Of course, Google cites its managed service for Kubernetes (which it invented), Google Kubernetes Engine (GKE), as the ultimate hybrid and multicloud tool. Its BigQuery Omni tool can analyze data stored in multiple clouds and is available on both Azure and AWS. Anthos, the linchpin of Google’s hybrid strategy, enables organizations to run Kubernetes in multiple infrastructure environments.
Customers can manage their Kubernetes infrastructure using GKE, regardless of where the workloads are running. In theory, this provides portability of Kubernetes-based applications among various cloud services. Given this approach, Google seems to be more multicloud-friendly at this point, than, say, Amazon. This makes sense, considering that its main rival Amazon occupies the #1 market-share position in public cloud while Google is an insurgent at #3, according to most public cloud revenue and market-share measurements.
Not to be left out, the commonly recognized #4 and #5 players in public cloud, Oracle and IBM (not necessarily in that order), are also pushing hybrid and multicloud tools. Oracle Cloud Infrastructure (OCI) offers a comprehensive set of multicloud solutions in the form of specialized deployments, database services, monitoring capabilities, and strategic partnerships. It has also teamed with Azure Interconnect to provide organizations with a simple path to a multicloud environment.
Oracle also offers Dedicated Regions that can provide OCI in specific geographic locations, including on premises. Oracle has also teamed with VMware to provide an environment that is portable across clouds. Customers can use Oracle VMware Solution to move or extend on-premises VMware-based workloads to OCI across all regions, including on-premises Dedicated Regions, without rearchitecting applications or retooling operations. Recent customer announcements include TIM Brasil, which uses OCI to build a hybrid cloud strategy.
IBM has a sprawling portfolio of cloud offerings, which is sometimes tough to navigate. The situation was complicated by the recent spinoff of Kyndryl, formerly IBM’s cloud infrastructure division. Kyndryl also has strategic partnerships with Google and Microsoft, among others. IBM’s Red Hat division is a key element of its hybrid strategy, providing a Kubernetes management platform (OpenShift) for compute services on multiple clouds, including IBM Cloud, Azure, and AWS.
IBM’s strategy looks either multicloud-friendly or schizophrenic, depending on how you look at it. The major breakup of the company doesn’t make it much easier to understand. One could argue two ways: IBM’s strategy is either more confusing, because it has so many partnerships with other cloud companies, or it’s the best positioned for hybrid and multicloud, with Red Hat OpenShift as a leader for those implementing multiple cloud environments. Microsoft, for example, provides a fully managed OpenShift on Azure service.
In Futuriom’s opinion, Microsoft seems positioned best for hybrid cloud, with world-class management and networking tools. SONiC and Azure Virtual WAN give it a strong networking platform that customers can leverage. Its move into communications and software platforms with the acquisition of Metaswitch and Affirmed is also interesting, giving it powerful routing and communications services tools that can be used by enterprises and service providers alike. ExpressRoute provides private connectivity among clouds at major datacenters. Azure Arc enables Azure to provide consistent security and operations across hybrid environments.
Another key strength for Microsoft is its natural position in enterprise software, which gives it a huge edge with customer engagements. Amazon’s heritage is in selling to startups and developers, and Google is still challenged to build a strong enterprise sales culture. Amazon’s “one-cloud” strategy is a huge risk from a marketing perspective – a bet that the market will not move quickly to hybrid or multicloud.
Of course, I doubt that any of this will remain static. The competition for cloud business is fierce and the public cloud providers are constantly upgrading their platforms.
The big question that customers should ask going forward in evaluating the multicloud and hybrid cloud offerings of the major cloud providers is: Are they really trying to help me build a hybrid platform, or do they just want to lock me in? For customers, the key considerations in moving forward with a balance of public cloud tools and multicloud tools will be the portability and cost of how they operate in the cloud.
Cloud computing is the future of the constantly evolving federal workforce. To allow for seamless integration with mission systems requiring data proximity and on-premises hosting, many agencies are adopting hybrid and multi-cloud (HMC) environments to help push their capabilities forward while still supporting legacy applications and mission critical applications.
While HMC can easily accommodate mixed workload portfolios, its complexity can present significant security, cost, reliability, and performance challenges to already-strained IT departments. This raises the question of how agencies can fully maximize the benefits of cloud computing at speed and scale, without further burdening their IT teams.
“There's a huge mix of different cloud platforms and application services provided by leading cloud service providers,” said Gary Wang, vice president, Cloud and Application Services for Peraton, which delivers transformative enterprise IT solutions to government agencies. “From a security, governance, productivity and cost saving perspective, managing an HMC environment has become a big challenge.”
Here Wang and Harry Kidder, Peraton’s director, Cloud Technology, discuss the challenges agencies face with HMC management and how hybrid multi-cloud management and zero trust solutions can streamline operations for mission success.
HMC is a Double-edged Sword
HMC environments offer numerous potential benefits, including leveraging the most suitable cloud service features, applying the right environments for the unique application persona, achieving the most cloud cost savings, or avoiding vendor lock-in. While an HMC environment is meant to increase productivity and flexibility, it can be a double-edged sword for agencies that don’t have the capacity to manage complexity.
For a single organization, managing workloads spread across different cloud platforms requires a wide technical skillset in operational management — something Wang noted that agencies have historically struggled to obtain.
According to Wang, many agencies have essentially backed into situations where they are using multiple clouds without realizing that’s what they were doing, or how this could be a risk.
“If you look at how AWS adoption has been going in federal government—by itself, it requires a lot of dedicated practice and resources,” said Wang. “Now, when you throw Microsoft, Google, Oracle, IBM Cloud into this mix, it’s just getting more and more complex. When dealing with multi-cloud you need to have resources, skills, training, operation capacity, and maturity to manage different platforms.”
The Security Struggle is Real
A challenge equally as daunting is keeping hybrid and multi-cloud environments secure. HMC architecture has taken data and workloads beyond traditional physical data centers and network perimeters to the open internet, which increases the number of potential attack vectors. According to Kidder, who spoke at NextGov’s 2022 Cloud Summit, this new security exposure has created the need to completely revamp government cybersecurity practices.
“At any moment, you can have a different number of services running in a single environment,” he said. “Traditional security approaches and tools are no longer applicable in this highly dynamic and elastic environment.”
With cloud, where data can be accessed from anywhere, agencies must rethink how to mitigate cybersecurity risks. “The implementation needs to be a phased approach that takes into consideration the risk priorities, business asset value, budget, and operational maturity,” said Kidder.
Managing security in such complex operational environments, according to Wang, makes updating their technology and applications a constant struggle. “You need a software-defined infrastructure to ensure platform security. It’s a balancing act to protect these systems while modernizing because we want to protect data without sacrificing productivity.”
Trust No One
Kidder is a strong advocate of agencies committing to protect their HMC with a zero trust framework that automates, simplifies, and secures cloud operations in one standardized, unified platform. As applications and services reach beyond traditional data center and network perimeters, a paradigm shift is required to safely mitigate and minimize risk.
“The ideal zero trust architecture is a holistic cybersecurity framework that applies rigorous security controls across networks, identities, application services and data,” said Kidder. “It can also adapt to emerging technologies, as well as to emerging threats and the changing digital landscape.”
Peraton’s HMC management platform provides centralized automation, governance, and end-to-end workload lifecycle management across many cloud platforms. The platform can be equipped with Peraton’s zero trust architecture (ZTA) as-a-service solution, built with commercial and open-source technologies. The solution continuously vets access to cloud services, configurations, threats, and security posture with AI-enabled automation, and provides fine-grained segmentation among services and resources so that risks can be localized, contained, and remediated rapidly.
Don’t Forget Your Purpose
Because proper HMC management and security implementation can be an overwhelming and financially detrimental task, agencies rely on contractors with comprehensive expertise and a deep bench of talent to help them navigate the transition and ongoing management.
Kidder and Wang’s decades-long work in the field has taught them that every agency has specific cloud requirements. To avoid becoming overwhelmed and ensure they remain on the right track, Wang advises agencies to always keep their end state top of mind.
“The reality is that an agency is going to have a very complex environment,” said Wang. “It is going to have different offices that want to use different cloud environments; different network connectivity requirements; and different security requirements.”
“The key is to understand where you want to go,” he said. “From there, seek out the help, knowledge and tools necessary to allow you to be successful in this cloud journey. Otherwise, you can quickly lose sight and lose control of your HMC environment. And that won’t benefit anyone—you, your leaders, or the taxpayer.”
Learn more about how Peraton can help you successfully manage and secure your HMC environment.
DUBLIN, July 8, 2022 /PRNewswire/ -- The "Global Mobile Edge Computing Market by Infrastructure, Deployment Model, Computing as a Service, Network Connectivity, Applications, Analytics Types, Market Segments, and Industry Verticals 2022-2027" report has been added to ResearchAndMarkets.com's offering.
Select Report Findings
Mobile edge computing will be a key enabler of immersive technologies deployed with 5G
Greatest opportunities will be in teleoperation/cloud robotics, telepresence, and virtual reality
The global mobile edge computing market for software and APIs will reach $2.32 billion by 2027
The market for MEC software in support of IoT applications will reach $637 million globally by 2027
The largest industry vertical opportunities for MEC will be manufacturing, healthcare and automotive
In cellular networks, edge computing via MEC is beneficial for LTE but virtually essential for 5G. This is because Mobile Edge Computing facilitates optimization of fifth generation network resources including focusing communications and computational capacity where it is needed the most. The author's research findings indicate a strong relationship between edge computing and 5G. In fact, if it were not for MEC, 5G would continue to rely upon back-haul to centralized cloud resources for storage and computing, diminishing much of the otherwise positive impact of latency reduction enabled by 5G.
Another driver for the multi-access edge computing market is that MEC will facilitate an entirely new class of low-power devices for IoT networks and systems. These devices will rely upon MEC equipment for processing. Stated differently, some IoT devices will be very lightweight computationally speaking, relying upon edge computing nodes for most of their computation needs.
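That offload pattern can be sketched as follows. The edge node and the “network hop” are simulated in-process here, purely as an illustration; a real device would reach a MEC server over the radio access network.

```python
class EdgeNode:
    """Stands in for a MEC server co-located with the base station. In a real
    deployment the device would reach it over the RAN; here the network hop
    is just a method call."""
    def process(self, samples):
        # The heavy lifting the constrained device avoids: a 3-sample
        # moving average, then aggregation into a compact summary.
        smoothed = [sum(samples[max(0, i - 2):i + 1]) / len(samples[max(0, i - 2):i + 1])
                    for i in range(len(samples))]
        return {"mean": sum(smoothed) / len(smoothed), "peak": max(smoothed)}

class LightweightSensor:
    """A low-power IoT device that only buffers and forwards raw readings."""
    def __init__(self, edge):
        self.edge = edge
        self.buffer = []

    def read(self, value):
        self.buffer.append(value)

    def flush(self):
        # Ship raw samples to the edge node and keep only the tiny result.
        result = self.edge.process(self.buffer)
        self.buffer = []
        return result

sensor = LightweightSensor(EdgeNode())
for v in [1.0, 2.0, 3.0, 4.0]:
    sensor.read(v)
summary = sensor.flush()
```

The design choice is the point of the paragraph above: the device stays computationally trivial (append and send), while all filtering and analytics run one hop away at the edge rather than in a distant centralized cloud.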
Mobile Edge Computing Market Drivers
Improved Overall Throughput
Core Congestion Reduction
Application Latency Reduction
Network Awareness and Context
Streaming Data and Real-time Analytics
Network and Application Resiliency
Mobile Edge Computing Market Deployment Alternatives
As the author has stated in the past, the primary standards body for MEC standardization is the European Telecommunications Standards Institute (ETSI), which has done much to move edge computing in mobile/wireless networks forward.
ETSI identifies four physical areas for MEC deployment as follows:
Co-location at Base Station
Co-location at Transmission Node
Co-location at Network Aggregation Point
Co-location with Core Network Functions
Carrier Mobile Edge Computing Market Deployment Considerations
It is important to understand that multi-access edge computing servers and platforms can be deployed in many locations including, but not limited to, an LTE and/or 5G macro base station site, the 3G Radio Network Controller site, a multi-RAT cell aggregation site, or at an aggregation point. Communication Service Providers (CSP) are not accustomed to planning for remote servers.
However, MEC essentially needs many remote data centers. The author predicts that CSPs will need to partner with network integration companies to realize the full vision of MEC. CSPs cannot be bogged down in negotiations, planning, engineering, and deployment of MEC communications/computing platforms every time a new site is acquired.
With the multi-access edge computing market, there is clearly a new computational-communications paradigm in which communications and computing are no longer thought of as separate things. Instead, they are planned, engineered, deployed, and operated together. In parallel with this new paradigm, mobile networks are becoming video networks.
Key Topics Covered:
1. Executive Summary
3. MEC Technology, Platforms, and Architecture
3.1 MEC Platform Architecture Building Blocks
3.2 Edge Cloud Computing Value Chain
3.3 MEC Technology Building Block
3.4 MEC Technology Enablers
3.5 MEC Deployment Considerations
4. MEC Market Drivers and Opportunities
5. MEC Ecosystem
6. MEC Application and Service Strategies
6.1 Optimizing the Mobile Cloud
6.2 Context Aware Services
6.3 Data Services and Analytics
7. Multi-Access Edge Computing Deployment
8. Multi-Access Edge Computing Market Analysis and Forecasts
9. Conclusions and Recommendations
ADLINK Technology Inc.
Advanced Micro Devices
Brocade Communications Systems
Fujitsu Technology Solutions
Hewlett Packard Enterprise (HPE)
Huawei Technologies Co. Ltd.
Integrated Device Technology
Samsung Electronics Co. Ltd.
Vasona Networks (ZephyrTel)
For more information about this report visit https://www.researchandmarkets.com/r/5ko1v8
Research and Markets
Laura Wood, Senior Manager
For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
U.S. Fax: 646-607-1907
Fax (outside U.S.): +353-1-481-1716
View original content:https://www.prnewswire.com/news-releases/global-mobile-edge-computing-market-report-2022-featuring-key-players-ibm-ericsson-samsung-electronics--others-301582861.html
SOURCE Research and Markets
Head of Advanced Planning, Alectra Utilities
Evaluation Manager, Energy Efficiency and Demand Management, Con Edison
Founder & Principal, LO3 Energy
Director, New Energy & Environment, IBM
Grid Edge Content Lead, Wood Mackenzie Power & Renewables
Chief Commercial Officer, Energy Web Foundation (EWF)
Director, Distributed Technologies, Ameren
CEO, Data Gumbo
Principal, Innovation & Partnerships, Exelon Utilities
Manager, Renewable Market Intelligence, NRG, Energy Inc.
ESC Customer & Market Innovation Lead, AVANGRID
Advanced packaging is rapidly becoming a mainstream option for chipmakers as the cost of integrating heterogeneous components on a single die continues to rise.
Despite several years of buzz around this shift, the reality is that it has taken more than a half-century to materialize. Advanced packaging began with IBM flip chips in the 1960s, and it got another boost with multi-chip modules in the 1990s, particularly for the mil/aero market. Still, it never became the first choice of commercial chipmakers because shrinking features was less expensive in terms of silicon area, the ecosystem of tools and IP was well established for scaling, and time-to-profitability was much better defined.
The economics began changing significantly at 16/14nm with the introduction of finFETs and double patterning. Design and manufacturing costs are expected to rise at each new node after that. Shrinking features will require new materials at 5nm for contacts and possibly the interconnects, as well as new transistor structures at either 5nm or 3nm (most likely gate-all-around FETs). And then there is high-numerical-aperture EUV, and new etch, deposition and inspection equipment. Collectively, those steps increase the cost of developing and manufacturing chips at advanced processes at a time when there are fewer market opportunities for chips with high-enough volume to offset the costs.
These factors haven’t exactly been a surprise to the semiconductor industry, although EUV’s continued delays did force design teams to adopt multi-patterning for the metal1 and metal2 layers. Nevertheless, it has taken time to develop viable alternatives, and to prove and refine them. EDA vendors now are offering tools and complete flows to build chips using a variety of packaging options, and there are enough advanced-packaging chips in production in high-visibility markets to prove this option is viable, from vendors such as Apple, AMD, Huawei, Cisco, IBM, and Xilinx, as well as in memory products such as 3D NAND, high-bandwidth memory (HBM), and the Hybrid Memory Cube.
In addition, two of the largest IDMs—Intel and Samsung—now offer low-cost proprietary bridge technology with their foundry services. And all of the major OSATs are offering one or more versions of fan-out wafer-level packaging, in addition to 2.5D and 3D options. This is reflected in growth among all segments of advanced packaging.
Fig. 1: Advanced packaging revenue by platform, in billions of dollars. Source: Yole Developpement Status of the Advanced Packaging Industry 2017 report, May 2017.
Design automation tools
One of the signs of growth in this market is in automation tools. Of the Big Three EDA vendors, Cadence was the first to offer packaging tools. The company jumped into the market back in the 1990s, and it has been developing tools since the beginning of the Millennium based upon the observation that analog doesn’t scale. But it has taken nearly 15 years for that vision to migrate into the mainstream and for other EDA vendors to see this as an opportunity worth the R&D investment.
Mentor, a Siemens Business, earlier this month introduced a flow and new tools for advanced packaging, as well. “The process is now similar to the silicon process,” said Keith Felton, product marketing manager for advanced IC packaging solutions in the company’s Board Systems Division. “We envision multiple design kits will emerge. So you’ll see two kits for fan-out wafer-level packaging, each with slight variations, as well as side-by-side, stacked die, chip on wafer on substrate (CoWoS), high-pin-count flip chips and system-in-package.”
Felton said this will work in conjunction with other tools, including DFM tools and PCB analysis and verification.
“This is all about physics-based simulation,” said John Lee, general manager and vice president at ANSYS. “This is not just about the semiconductor. It’s thermal analysis and mechanical simulation. If you look at TSMC‘s InFO, you have silicon with wafer-level packaging. You need to do simultaneous thermal-level analysis because there are significant physical effects. And that could include 7nm, 10nm, 16nm, or even older process nodes. But the thermal piece affects the reliability of the system. So if you talk about electromigration and you’re not talking about thermal, then you probably have a pessimistic view of the world. And if you have a non-pessimistic view, that can be dangerous.”
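Lee’s point about electromigration and thermal coupling can be made concrete with Black’s equation, which relates interconnect median time to failure to current density and temperature. The parameter values below are textbook-style illustrations, not foundry data; the takeaway is that even 20 K of unmodeled self-heating cuts predicted lifetime by roughly 3x.

```python
import math

# Black's equation: MTTF = A * J**-n * exp(Ea / (k * T))
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def mttf(j_amps_per_cm2, temp_kelvin, a=1.0, n=2.0, ea_ev=0.7):
    """Electromigration median time to failure, in relative units.
    A, n, and Ea are process-dependent; these defaults are illustrative."""
    return a * j_amps_per_cm2 ** -n * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_kelvin))

# Same current density, with and without 20 K of package self-heating:
cool = mttf(1e6, 358.0)    # 85 C: thermally naive analysis
hot  = mttf(1e6, 378.0)    # 105 C: with thermal effects included
overestimate = cool / hot  # > 1 means the naive analysis is optimistic
```

This is the sense in which an electromigration signoff that ignores thermal simulation yields, in Lee’s words, a dangerously non-pessimistic view of the world.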
Aart de Geus, chairman and co-CEO of Synopsys, said the real key is being able to visualize the entire system and to build components and tools that work across a variety of packaging schemes. "So with IP, you have to characterize it so it works under all circumstances. Overall simulations are system simulations, and multiple chips in various forms make up that system. That also includes software. The ability to model and prototype what people build is critical. That includes digital and digital/mixed signal."
He noted that it also includes emulation and software prototyping. “You need to be able to run software on hardware you don’t have yet,” de Geus said, whether that is in a package or on a 7nm SoC.
Adding these kinds of tools for advanced packaging will go a long way toward making those approaches more predictable.
“The EDA tools will make a big difference,” said Jan Vardaman, president of TechSearch International. “You can’t do a lot of things without design tools, and going forward we’re going to see increased use. Wherever you can, you want to go with older nodes in a design if you can partition that properly. For that, design tools are sorely needed.”
“As products evolve from generation to generation, the methodology from our first generation becomes the norm,” said Ou Li, senior director of engineering at ASE. “As we move forward with advanced products, we can fan out what we learn to the rest of the products. Hopefully, with the learning curve and machine learning and throughput, we can accommodate the rest of these products. So the most advanced products are being supported by volume and business scale. For smaller, more fragmented markets, those kinds of things may not be there. But for the product requirements, it may be easier to implement because we’ve learned this with other products.”
Still, market fragmentation has an impact. As designs become more defined by software than by hardware, with differentiation coded into software running on generic hardware platforms, no two designs are the same and the demands from end customers are much more exacting.
“Each different category of product has a different challenge,” said Li. “But for system in package, we have to meet stringent customer requirements. That is a trend for all types of advanced packaging.”
The next step is to start building platforms so the pieces can be swapped in and out more quickly, adding what has been called a “mass customization” approach using packaging.
“The real opportunity is to be able to integrate everything into a platform,” said Scott Sikorski, vice president of worldwide product marketing at STATS ChipPAC. “That will drive the next level of growth. eWLB (embedded wafer-level ball grid array) is a great opportunity to build something that people are already building in a different way.”
How quickly companies adopt that approach remains to be seen. Fan-outs have been in high demand for the past 18 months, but the capacity for developing those types of devices has been limited. That has recently changed as OSATs have increased their capacity.
“Now there is more capacity to develop a lot more devices, and you’ll see a lot more devices soon,” said Sikorski, noting that packaging as an integration platform is also beginning to gain traction. “This is a very low-cost approach because you already have all of the building blocks. We originally thought this would be a PoP format with a through-via structure around the chip. But at the time we suggested it, the supply chain was not ready.”
The learning curve
One of the big advances over the past few years comes from experience in working with advanced packaging for a variety of markets.
“The people doing packaging, test and DFT are now rock stars,” said Mike Gianfagna, vice president of marketing at eSilicon. “Complexity is growing even with packaging. With 2.5D, you have to think about the silicon substrate, thermal and mechanical stress, and more analysis. So the packaging and DFT teams are involved much earlier and all the way through the development process. DFT can impact the entire schedule.”
The goal is to add more predictability into the design process, which takes time to sort out. However, executives and analysts agree that predictability is improving.
“This is still not routine yet because any new technology or technology node has a learning curve,” said Gianfagna. “On almost every chip we’re doing something for the first time. But we’re doing a better job of identifying issues early and understanding the interaction between the chip, memory, high-performance I/O and the substrate.”
Brandon Wang, engineering group director at Cadence, said that all of the major networking companies now have 2.5D designs in the works. “You’ll see the products rolling out next year,” said Wang. “You’ll also see a lot more sensors, particularly MEMS chips, in packages with other chips. The nature of those designs is very different, though, and until recently many of these designs have been very segmented, so it has been hard to create a methodology for all of this. But the direction is that because sensors are so cheap these are going to be more standardized units. It will be more of a platform approach, so if you need something you can get it very quickly.”
To make that happen requires a multi-die co-design, where the sensor parameters are adjusted at the same time as the rest of the electronics.
“We’re going to have sensors everywhere, and you have to co-optimize them,” said Wang. “It will become more electrical-centric. The electronic designers are still focused on yield for the end device, and the platform will enable them to focus on electrical performance and talk to the sensor platform. Every system will have sensors, but you can design a sensor hub that is optimized for specific use cases. That way if you have five sensors, it’s not five times the price. Maybe it’s only 1.3X. And it’s a standard sensor or a sensor hub, so you know how it’s going to behave.”
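Wang’s “five sensors is not five times the price” arithmetic can be sketched as a toy cost model in which the shared hub carries most of the cost and each additional sensor adds only a small increment. Every number below is an illustrative assumption, not data from the article:

```python
# Hypothetical cost model for a sensor hub: the package, test, and
# interface logic are paid for once by the shared hub, and each extra
# sensor adds only a small increment. All figures are made up for
# illustration (arbitrary cost units).

def discrete_cost(n_sensors, sensor=0.5, package_per_sensor=4.0):
    """Each sensor packaged separately carries its own overhead."""
    return n_sensors * (sensor + package_per_sensor)

def hub_cost(n_sensors, sensor=0.5, shared_hub=10.0, integration_per_sensor=0.25):
    """One shared hub; each added sensor costs only die + integration."""
    return shared_hub + n_sensors * (sensor + integration_per_sensor)

one  = hub_cost(1)   # a single hubbed sensor
five = hub_cost(5)   # five sensors on the same hub

print(f"5 discrete sensors: {discrete_cost(5):.2f}")
print(f"5 hubbed sensors:   {five:.2f} ({five / one:.2f}x one sensor)")
```

With these assumed numbers, five hubbed sensors come out at roughly 1.3x the cost of one rather than 5x, in line with the scaling Wang describes; the design choice is simply that fixed costs are amortized across the hub instead of being replicated per sensor.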
The focus on platforms is a key piece of this strategy going forward. It makes it easier to add heterogeneity into designs with more predictable results. But platforms also can greatly reduce the cost of designs because they provide economies of scale and the ability for more companies to compete.
“Customers are looking for more and more guidance from us, and design suggestions,” said Bill Neifert, senior director of market development at ARM. “Last year we introduced design guidance, but it was not just about the processor. It’s also about performance and power numbers. We also have a pre-built software platform to help them get over traditional hurdles.”
One of the issues is that there is no longer a single best way to accomplish something. While in the past, progress was measured by process node, the emphasis on heterogeneity has greatly increased the number of possible choices. Not everything has to be integrated into a single die, and in many cases IP varies significantly from one foundry to the next even at the same process node.
“We’re now involved with our lead partners on every aspect of design,” said Neifert. “That even includes early RTL, although more typically this is done at the IP level than at the subsystem level. And it now includes everything from safety requirements to security. We’re trying to identify any weak link so that when we tie everything together there is not an underlying problem.”
And this is just the beginning. The rollout of EDA tools and flows will add a whole new level of control in these devices.
“You’ll see greater accuracy, smaller feature sizes, and we will be able to start looking at designs in 3D,” said Mentor’s Felton. “You’ll be able to construct blueprints for ‘what if’ scenarios for substrates and you’ll be able to have chip-level models that will include thermal verification.”
The goal is more upfront analysis of different packaging options, which will become particularly important for choosing substrates, packaging types, and IP, as well as the interconnects within and between chips.
“There are different user types involved here,” said Felton. “There are IC designers and architects, who are coming up with the package type, which may be die-stack-die or PoP, and then they hand that off to a second team to do the package design. That requires a dedicated solution and flow. It’s moving it from mechanical to EDA.”
After 52 years of Moore’s Law, chip design and manufacturing on a single die is very predictable. The entire ecosystem is in place and it hums like a finely tuned machine. It will take time for advanced packaging to achieve that level of predictability, but enough of the kinks have been worked out of the system today, and there are enough examples of successful packaging, that it is no longer a huge gamble. And as more tools and predictability are created, pricing will continue to drop for fan-outs and 2.5D implementations.
Most industry insiders believe that a handful of companies will continue to shrink logic at the most advanced nodes, but increasingly they will add more elements into a package around that logic. The future is heterogeneous, and the easiest way to mix those elements will be inside a package, not on a single die.