With the number of lines of code in the average car expected to skyrocket from 10 million in 2010 to 100 million in 2030, there's no getting around the fact that embedded software development and a systems engineering approach have become central not only to automotive design, but to product design in general.
Yet despite the invigorated focus on what is essentially a long-standing design process, organizations still struggle with siloed systems and engineering processes that stand in the way of true systems engineering spanning mechanical, electrical, and software functions. In an attempt to address some of those hurdles, IBM and National Instruments are partnering to break down the silos specifically as they relate to the quality management engineering system workflow, or more colloquially, the marriage between design and test.
"As customers go through iterative development cycles, whether they're building a physical product or a software subsystem, and get to some level of prototype testing, they run into a brick wall around the manual handoff between the development and test side," Mark Lefebvre, director, systems alliances and integrations, for IBM Rational, told us. "Traditionally, these siloed processes never communicate and what happens is they find errors downstream in the software development process when it is more costly to fix."
NI and IBM's answer to this gap? The pair is building a bridge -- specifically, an integration between IBM's Rational Quality Manager test- and quality-management tool and NI's VeriStand and TestStand real-time testing and test-automation environments. The integration, Lefebvre said, is designed to plug the gap and provide full traceability from what's defined on the test floor back to design and development, enabling more iterative testing throughout the lifecycle and uncovering errors earlier in the process, well before costly prototypes are built.
The ability to break down the quality management silos and facilitate earlier collaboration can have a huge impact on cost if you look at the numbers IBM Rational is touting. According to Lefebvre, a bug that costs $1 to fix on a programmer's desktop costs $100 to fix once it makes its way into a complete program and many thousands of dollars once identified after the software has been deployed in the field.
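The escalation Lefebvre describes can be sketched as a back-of-envelope model. The stage costs below use the article's $1 and $100 figures; the $5,000 field cost and the bug counts are illustrative assumptions, not numbers from IBM:

```python
# Relative cost of fixing the same defect at different stages.
# Desktop and integration figures are from the article; the field
# figure stands in for "many thousands of dollars" (assumed).
COST_PER_BUG = {
    "desktop": 1,        # caught on the programmer's desktop
    "integration": 100,  # caught in the complete program
    "field": 5000,       # caught after deployment (assumed value)
}

def total_fix_cost(bugs_by_stage):
    """Sum fix costs given how many bugs are caught at each stage."""
    return sum(COST_PER_BUG[stage] * n for stage, n in bugs_by_stage.items())

# Shifting just 20 of 100 bugs from integration testing back to the
# desktop cuts the total fix cost by roughly a fifth:
late = total_fix_cost({"desktop": 0, "integration": 100, "field": 0})
early = total_fix_cost({"desktop": 20, "integration": 80, "field": 0})
print(late, early)  # 10000 8020
```

The point of the sketch is only that the multipliers dominate: even a modest shift of defect discovery toward the desktop produces a large absolute saving.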
While the integration isn't yet commercialized (Lefebvre said to expect it at the end of the third quarter), there is a proof of concept being tested with five or six big NI/IBM customers. The proof of concept is focused on the development of an electronic control unit (ECU) for a cruise control system that could operate across multiple vehicle platforms. The demonstrated workflow marries the software development test processes to the hardware module test processes, from the requirements stage through quality management, so that if a test fails or changes are made to the code, the results are shared throughout the development lifecycle.
Prior to such an integration, any kind of data sharing was limited to manual processes around Word documents and spreadsheets, Lefebvre said. "Typically, a software engineer would hand carry all the data in a spreadsheet and import it into the test environment. Now there's a pipe connecting the two."
IBM’s gone by just its initials for so long that many of us have to stop and think about what the letters stand for. International Business Machines.
I was reminded of the corporation’s singular focus last week during the TNW 2022 Conference when Seth Dobrin, IBM’s first chief AI officer, took the stage to talk about artificial intelligence.
As Dobrin put it, IBM “doesn’t do consumer AI.” You won’t be downloading IBM’s virtual assistant for your smartphone anytime soon. Big Blue won’t be getting into the selfie app AI filter game.
Simply put, IBM’s here to provide value for its clients and partners and to create AI models that make human lives easier, better, or both.
That’s all pretty easy to say. But how does a company that’s not focused on creating products and services for the individual consumer actually walk that kind of talk?
According to Dobrin, it’s not hard: care about how individual humans will be affected by the models you monetize:
We’re very stringent about the type of data we will ingest and make money from.
During a discussion with the Financial Times’ Tim Bradshaw during the conference, Dobrin used the example of large-parameter models such as GPT-3 and DALL-E 2 as a way to describe IBM’s approach.
He described those models as “toys,” and for good reason: they’re fun to play with, but they’re ultimately not very useful. They’re prone to unpredictability in the form of nonsense, hate speech, and leaked private personal information, which makes them dangerous to deploy outside of laboratories.
However, Dobrin told Bradshaw and the audience that IBM was also working on a similar system. He referred to these agents as “foundational models,” meaning they can be used for multiple applications once developed and trained.
The IBM difference, however, is that the company is taking a human-centered approach to the development of its foundational models.
Under Dobrin’s leadership, the company’s cherry-picking datasets from a variety of sources and then applying internal terms and conditions to them prior to their integration into models or systems.
It’s one thing if GPT-3 accidentally spits out something offensive; such misfires are expected in laboratories. But it’s an entirely different situation when, as a hypothetical example, a bank’s production language model starts outputting nonsense or private information to customers.
Luckily, IBM (a company that works with corporations across a spectrum of industries including banking, transportation, and energy) doesn’t believe in cramming a giant database of unchecked data into a model and hoping for the best.
Which brings us to what’s perhaps the most interesting takeaway from Dobrin’s chat with Bradshaw: “be ready for regulations.”
As the old saying goes: BS in, BS out. If you’re not in control of the data you’re training with, life’s going to get hard for your AI startup come regulation time.
And the Wild West of AI acquisitions is going to come to an end soon as more and more regulatory bodies seek to protect citizens from predatory AI companies and corporate overreach.
If your AI startup creates models that won’t or can’t be compliant in time for use in the EU or US once the regulation hammers fall, your chances of selling them to or getting acquired by a corporation that does business internationally are slim to none.
No matter how you slice it, IBM’s an outlier. It and Dobrin apparently relish the idea of delivering compliance-ready solutions that help protect people’s privacy.
While the rest of big tech spends billions of dollars building eco-harming models that serve no purpose other than to pass arbitrary benchmarks, IBM is more concerned with outcomes than speculation.
And that’s just weird. That’s not how the majority of the industry does business.
IBM and Dobrin are trying to redefine what big tech’s position in the AI sector is. And, it turns out, when your bottom line isn’t driven by advertising revenue, subscriber numbers, or future hype, you can build solutions that are as efficacious as they are ethical.
And that leaves the vast majority of people in the AI startup world with some questions to answer.
Is your startup ready for the future? Are you training models ethically, considering human outcomes, and able to explain the biases baked into your systems? Can your models be made GDPR, EU AI, and Illinois BIPA compliant?
If the current free-for-all dies out and VCs stop throwing money at prediction models and other vaporware or prestidigitation-based products, can your models still provide business value?
There’s probably still a little bit of money to be made for companies and startups who leap aboard the hype train, but there’s arguably a whole lot more to be made for those whose products can actually withstand an AI winter.
Human-centered AI technologies aren’t just a good idea because they make life better for humans, they’re also the only machine learning applications worth betting on over the long haul.
When the dust settles, and we’re all less impressed by the prestidigitation and parlor tricks that big tech’s spending billions of dollars on, IBM will still be out here using our planet’s limited energy resources to develop solutions with individual human outcomes in mind.
That’s the very definition of “sustainability,” and it’s why IBM is poised to become the de facto technological leader in the global artificial intelligence community under Dobrin’s so-far expert leadership.
The MarketWatch News Department was not involved in the creation of this content.
Jul 11, 2022 (Market Insight Reports) -- The latest Hybrid Integration Platform market analysis is designed to help clients improve their market position. To that end, this report provides a detailed analysis of several leading key players in the Hybrid Integration Platform market, including Software AG, Informatica, Dell Boomi, Liaison Technologies, MuleSoft, IBM, TIBCO Software, Oracle, WSO2, SnapLogic, Red Hat, Axway, Flowgear, and others. The report also includes information on upcoming trends and challenges that will influence market growth, to help companies strategize and leverage forthcoming growth opportunities.
Our experts will help you get valuable insights about Hybrid Integration Platform market share, size, and regional growth prospects.
Sample PDF Report at- https://reportsinsights.com/sample/592155
Hybrid Integration Platform Market showcases an in-depth analysis of the overall Hybrid Integration Platform market in terms of market size, upstream situation, price & cost, industry environment, segmentation for Hybrid Integration Platform providers, end-users, geographies, and analysis up to 2028. In addition, the report outlines the factors driving industry growth and the description of market channels.
The report also covers the impact of COVID-19 on the global market.
The market research report covers the analysis of key stakeholders of the Hybrid Integration Platform market. Some of the leading players profiled in the report include: Software AG, Informatica, Dell Boomi, Liaison Technologies, Mulesoft, IBM, TIBCO Software, Oracle, WSO2, Snaplogic, Red Hat, Axway, Flowgear
The research insights presented in this report are backed by key findings gathered from both secondary and primary research, including discussions held with several players in this industry. The report highlights the key players and manufacturers and their latest strategies, including new product launches, partnerships, joint ventures, technology, segmentation by region, industry competition, profit and loss ratios, and investment ideas. It also provides a precise evaluation of manufacturing techniques, advertising techniques, and Hybrid Integration Platform market share, size, growth rate, revenue, sales, and value chain analysis.
The 'Global Hybrid Integration Platform Market Research Report' is a comprehensive and informative study on the current state of the Global Hybrid Integration Platform Market industry with emphasis on the global industry. The report presents key statistics on the market status of the global Hybrid Integration Platform market manufacturers and is a valuable source of guidance and direction for companies and individuals interested in the industry.
Major Product Types covered are:
Business-to-Business (B2B) integration
Major Applications of Hybrid Integration Platform covered are:
Government and public sector
Telecommunication, IT, and IT-Enabled Services (ITES)
Regional Hybrid Integration Platform Market (Regional Output, Demand & Forecast by Countries):-
North America (United States, Canada, Mexico)
South America (Brazil, Argentina, Ecuador, Chile)
Asia Pacific (China, Japan, India, Korea)
Europe (Germany, UK, France, Italy)
Middle East & Africa (Egypt, Turkey, Saudi Arabia, Iran), and more.
Access the full report description, TOC, table of figures, charts, etc. at: https://www.reportsinsights.com/industry-forecast/hybrid-integration-platform-markets-growth-trends-592155
The report is useful in providing answers to several critical questions that are important for the industry stakeholders such as manufacturers and partners, end-users, etc., besides allowing them in strategizing investments and capitalizing on market opportunities.
Reasons to Purchase Global Hybrid Integration Platform Market Report:
1. Important changes in Hybrid Integration Platform market dynamics
2. The current Hybrid Integration Platform market scenario across various countries
3. The current and future outlook of the global Hybrid Integration Platform market in developed and emerging markets
4. Analysis of various perspectives of the market with the help of Porter's five forces analysis
5. The segment that is expected to dominate the global Hybrid Integration Platform market
6. The regions that are expected to witness the fastest growth during the forecast period
7. The latest developments, global Hybrid Integration Platform market shares, and strategies employed by the major market players
8. Former, ongoing, and projected Hybrid Integration Platform market analysis in terms of volume and value
About Reports Insights:
Reports Insights is a leading research firm that offers contextual and data-centric research services to customers across the globe. The firm assists its clients in strategizing business policies and accomplishing sustainable growth in their respective market domains, and provides consulting services, syndicated research reports, and customized research reports.
Learning from failure is a hallmark of the technology business. Nick Baker, a 37-year-old system architect at Microsoft, knows that well. A British transplant at the software giant's Silicon Valley campus, he went from failed project to failed project in his career. He worked on such dogs as Apple Computer's defunct video card business, 3DO's failed game consoles, a chip startup that screwed up a deal with Nintendo, the never-successful WebTV and Microsoft's canceled Ultimate TV satellite TV recorder.
But Baker finally has a hot seller with the Xbox 360, Microsoft's video game console launched worldwide last holiday season. The adventure on which he embarked four years ago would ultimately prove that failure is often the best teacher. His new gig would once again provide copious evidence that flexibility and understanding of detailed customer needs will beat a rigid business model every time. And so far the score is Xbox 360, one, and the delayed PlayStation 3, nothing.
The Xbox 360 console is Microsoft's living room Trojan horse, purchased as a game box but capable of so much more in the realm of digital entertainment in the living room. Since the day after Microsoft terminated the Ultimate TV box in February 2002, Baker has been working on the Xbox 360 silicon architecture team at Microsoft's campus in Mountain View, CA. He is one of the 3DO survivors who now gets a shot at revenge against the Japanese companies that vanquished his old firm.
"It feels good," says Baker. "I can play it at home with the kids. It's family-friendly, and I don't have to play on the Nintendo anymore."
Baker is one of the people behind the scenes who pulled together the Xbox 360 console by engineering some of the most complicated chips ever designed for a consumer entertainment device. The team labored for years and made critical decisions that enabled Microsoft to beat Sony and Nintendo to market with a new box, despite a late start with the Xbox in the previous product cycle. Their story, captured here and in a forthcoming book by the author of this article, illustrates the ups and downs in any big project.
When Baker and his pal Jeff Andrews joined games programmer Mike Abrash in early 2002, they had clear marching orders. Their bosses — Microsoft CEO Steve Ballmer, at the top of Microsoft; Robbie Bach, running the Xbox division; Xbox hardware chief Todd Holmdahl; Greg Gibson, for Xbox 360 system architecture; and silicon chief Larry Yang — all dictated what Microsoft needed this time around.
They couldn't be late. They had to make hardware that could become much cheaper over time and they had to pack as much performance into a game console as they could without overheating the box.
The group of silicon engineers started first among the 2,000 people in the Xbox division on a project that Baker had code-named Trinity. But they couldn't use that name, because someone else at Microsoft had taken it. So they named it Xenon, for the colorless and odorless gas, because it sounded cool enough. Their first order of business was to study computing architectures, from those of the best supercomputers to those of the most power-efficient portable gadgets. Although Microsoft had chosen Intel and NVIDIA to make the chips for the original Xbox the first time around, the engineers now talked to a broad spectrum of semiconductor makers.
"For us, 2002 was about understanding what the technology could do," says Greg Gibson, system designer.
Sony teamed up with IBM and Toshiba to create a full-custom microprocessor from the ground up. They planned to spend $400 million developing the cell architecture and even more fabricating the chips. Microsoft didn't have the time or the chip engineers to match the effort on that scale, but Todd Holmdahl and Larry Yang saw a chance to beat Sony. They could marshal a host of virtual resources and create a semicustom design that combined both off-the-shelf technology and their own ideas for game hardware. Microsoft would lead the integration of the hardware, own the intellectual property, set the cost-reduction schedules, and manage its vendors closely.
They believed this approach would get them to market by 2005, which was when they estimated Sony would be ready with the PlayStation 3. (As it turned out, Microsoft's dreams were answered when Sony, in March, postponed the PlayStation 3 launch until November.)
More important, using an IP ownership strategy with the chips could dramatically cut Microsoft's costs on the original Xbox. Microsoft had lost an estimated $3.7 billion over four years, or roughly a whopping $168 per box. By cutting costs, Microsoft could erase a lot of red ink.
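As a quick back-of-envelope check on those figures (a sketch, not from the article), dividing the reported total loss by the per-box loss implies roughly how many consoles the estimate covers:

```python
# Sanity check on the reported original-Xbox loss figures.
total_loss = 3.7e9      # estimated loss over four years, in dollars
loss_per_box = 168      # estimated loss per console, in dollars
implied_boxes = total_loss / loss_per_box
print(f"{implied_boxes / 1e6:.1f} million consoles")  # 22.0 million consoles
```

That implied volume of roughly 22 million units is consistent with the original Xbox's lifetime sales, which lends the per-box figure some credibility.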
Baker and Andrews quickly decided they wanted to create a balanced design, trading off power efficiency and performance. So they envisioned a multicore microprocessor, one with as many as 16 cores — or miniprocessors — on one chip. They wanted a graphics chip with 60 shaders, or parallel processors for rendering distinct features in graphic animations.
Laura Fryer, manager of the Xbox Advanced Technology Group in Redmond, WA, solicited feedback on the new microprocessor. She said game developers were wary of managing multiple software threads associated with multiple cores, because the switch created a juggling task they didn't have to do on the original Xbox or the PC. But they appreciated the power efficiency and added performance they could get.
Microsoft's current vendors, Intel and NVIDIA, didn't like the idea that Microsoft would own the IP they created. For Intel, allowing Microsoft to take the x86 design to another manufacturer was as troubling as signing away the rights to Windows would be to Microsoft. NVIDIA was willing to do the work, but if it had to deviate from its road map for PC graphics chips in order to tailor a chip for a game box, then it wanted to get paid for it. Microsoft didn't want to pay that high a price. "It wasn't a good deal," says Jen-Hsun Huang, CEO of NVIDIA. Microsoft had also been through a painful arbitration on pricing for the original Xbox graphics chips.
IBM, on the other hand, had started a chip engineering services business and was perfectly willing to customize a PowerPC design for Microsoft, says Jim Comfort, an IBM vice president. At first IBM didn't believe that Microsoft wanted to work together, given a history of rancor dating back to the DOS and OS/2 operating systems in the 1980s. Moreover, IBM was working for Microsoft rivals Sony and Nintendo. But Microsoft pressed IBM for its views on multicore chips and discovered that Big Blue was ahead of Intel in thinking about these kinds of designs.
When Bill Adamec, a Microsoft program manager, traveled to IBM's chip design campus in Rochester, NY, he did a double take when he arrived at the meeting room where 26 engineers were waiting for him. Although IBM had reservations about Microsoft's schedule, the company was clearly serious.
Meanwhile, ATI Technologies assigned a small team to conceive a proposal for a game console graphics chip. Instead of pulling out a derivative of a PC graphics chip, ATI's engineers decided to design a brand-new console graphics chip that relied on embedded memory to feed a lot of data to the graphics chip while keeping the main data pathway clear of traffic — critical for avoiding bottlenecks that would slow down the system.
By the fall of 2002, Microsoft's chip architects decided they favored the IBM and ATI solutions. They met with Ballmer and Gates, who wanted to be involved in the critical design decisions at an early juncture. Larry Yang recalls, "We asked them if they could stomach a relationship with IBM." Their affirmative answer pleased the team.
By early 2003, the list of potential chip suppliers had been narrowed down. At that point, Robbie Bach, the chief Xbox officer, took his team to a retreat at the Salish Lodge, on the edge of Washington's beautiful Snoqualmie Falls, made famous by the "Twin Peaks" television show. The team hashed out a battle plan. They would own the IP for silicon that could take the costs of the box down quickly. They would launch the box in 2005 at the same time as Sony would launch its box, or even earlier. The last time, Sony had had a 20-month head start with the PlayStation 2. By the time Microsoft sold its first 1.4 million Xboxes, Sony had sold more than 25 million PlayStation 2s.
Those goals fit well with the choice of IBM and ATI for the two pieces of silicon that would account for more than half the cost of the box. Each chip supplier moved forward, based on a "statement of work," but Gibson kept his options open, and it would be months before the team finalized a contract. Both IBM and ATI could pull blocks of IP from their existing products and reuse them in the Microsoft chips. Engineering teams from both companies began working on joint projects such as the data pathway that connected the chips. ATI had to make contingency plans, in case Microsoft chose Intel over IBM, and IBM also had to consider the possibility that Microsoft might choose NVIDIA.
Through the summer, Microsoft executives and marketers created detailed plans for the console launch. They decided to build security into the microprocessor to prevent hacking, which had proved to be a major embarrassment on the original Xbox. Marketers such as David Reid all but demanded that Microsoft try to develop the new machine in a way that would allow the games for the original Xbox to run on it. So-called backward compatibility wasn't necessarily exploited by customers, but it was a big factor in deciding which box to buy. And Bach insisted that Microsoft had to make gains in Japan and Europe by launching in those regions at the same time as in North America.
For a period in July 2003, Bob Feldstein, the ATI vice president in charge of the Xenon graphics chip, thought NVIDIA had won the deal, but in August Microsoft signed a deal with ATI and announced it to the world. The ATI chip would have 48 shaders, or processors that would handle the nuances of color shading and surface features on graphics objects, and would come with 10 Mbytes of embedded memory.
IBM followed with a contract signing a month later. The deal was more complicated than ATI's, because Microsoft had negotiated the right to take the IBM design and have it manufactured in an IBM-licensed foundry being built by contract chip maker Chartered Semiconductor. The chip would have three cores and run at 3.2 GHz. It was a little short of the 3.5 GHz that IBM had originally pitched, but it wasn't off by much.
By October 2003, the entire Xenon team had made its pitch to Gates and Ballmer. They faced some tough questions. Gates wanted to know if there was any chance the box would run the complete Windows operating system. The top executives ended up giving the green light to Xenon without a Windows version.
The ranks of Microsoft's hardware team swelled to more than 200, with half of the team members working on silicon integration. Many of these people were like Baker and Andrews, stragglers who had come from failed projects such as 3DO and WebTV. About 10 engineers worked on "Ana," a Microsoft video encoder chip, while others managed the schedule and cost reduction with IBM and ATI. Others supported suppliers, such as Silicon Integrated Systems, the supplier of the "south bridge," the communications and input/output chip. The rest of the team helped handle relationships with vendors for the other 1,700 parts in the game console.
Ilan Spillinger headed the IBM chip program, which carried the code name Waternoose, after the spiderlike creature from the film "Monsters, Inc." He supervised IBM's chief engineer, Dave Shippy, and worked closely with Microsoft's Andrews on every aspect of the design program.
Everything happened in parallel. For much of 2003, a team of industrial designers created the look and feel of the box. They tested the design on gamers, and the feedback suggested that the design seemed like something either Apple or Sony had created. The marketing team decided to call the machine the Xbox 360, because it put the gamer at the center. A small software team led by Tracy Sharp developed the operating system in Redmond. Microsoft started investing heavily in games. By February 2004, Microsoft sent out the first kits to game developers for making games on Apple Macintosh G5 computers. And in early 2004, Greg Gibson's evaluation team began testing subsystems to make sure they would all work together when the final design came together.
IBM assigned 421 engineers from six or seven sites to the project, which was a proving ground for its design services business. The effort paid off, with an early test chip that came out in August 2004. With that chip, Microsoft was able to begin debugging the operating system. ATI taped out its first design in September 2004, and IBM taped out its full chip in October 2004. Both chips ran game code early on, which was good, considering that it's very hard to get chips working at all when they first come out of the factory.
IBM executed without many setbacks. As it revised the chip, it fixed bugs with two revisions of the chip's layers. The company was able to debug the design in the factory quickly, because IBM's fab engineers could work on one part while the Chartered engineers debugged a different part of the chip. They fed the information to each other, speeding the cycle of revisions. By Jan. 30, 2005, IBM taped out the final version of the microprocessor.
ATI, meanwhile, had a more difficult time. The company had assigned 180 engineers to the project. Although games ran on the chip early, problems came up in the lab. Feldstein said that in one game, one frame of animation would freeze as every other frame went by. It took six weeks to uncover the bug and find a fix. Delays in debugging threatened to throw the beta-development-kit program off schedule. That meant thousands of game developers might not get the systems they needed on time. If that happened, the Xbox 360 might launch without enough games, a disaster in the making.
The pressure was intense. But Neil McCarthy, a Microsoft engineer in Mountain View, designed a modification of the metal layers of the graphics chip. By doing so, he enabled Microsoft to get working chips from the interim design. ATI's foundry, Taiwan Semiconductor Manufacturing Co., churned out enough chips to seed the developer systems. The beta kits went out in the spring of 2005.
Meanwhile, Microsoft's brass panicked that Sony would trump the Xbox 360 by coming out with more memory in the PlayStation 3. So in the spring of 2005, Microsoft made what would become a fateful decision: it doubled the amount of memory in the box, from 256 Mbytes to 512 Mbytes of graphics Double Data Rate 3 (GDDR3) chips. The decision would cost Microsoft $900 million over five years, so the company had to pare back spending in other areas to stay on its profit targets.
Microsoft started tying up all the loose ends. It rehired Seagate Technology, which it had hired for the original Xbox, to make hard disk drives for the box, but this time Microsoft decided to have two SKUs — one with a hard drive, for the enthusiasts, and one without, for the budget-conscious. It brought aboard both Flextronics and Wistron, the current makers of the Xbox, as contract manufacturers. But it also laid plans to have Celestica build a third factory for building the Xbox 360.
Just as everyone started to worry about the schedule going off course, ATI spun out the final graphics chip design in mid-July 2005. Everyone breathed a sigh of relief, and they moved on to the tough work of ramping up manufacturing. There was enough time for both ATI and IBM to build a stockpile of chips for the launch, which was set for Nov. 22 in North America, Dec. 2 in Europe and Dec. 10 in Japan.
Flextronics debugged the assembly process first. Nick Baker traveled to China to debug the initial boxes as they came off the line. Although assembly was scheduled to start in August, it didn't get started until September. Because the machines were being built in southern China, they had to be shipped over a period of six weeks by boat to the regions. Each factory could build only as many as 120,000 machines a week, running at full tilt. The slow start, combined with the multiregion launch, created big risks for Microsoft.
The hardware team was on pins and needles. The most-complicated chips came in on time and were remarkable achievements. Typically, it took more than two years to do the initial designs of complicated chip projects, but both companies were actually manufacturing inside that time window.
Then something unexpected hit. Both Samsung and Infineon Technologies had committed to making the GDDR3 memory for Microsoft. But some of Infineon's chips fell short of the 700 MHz specified by Microsoft. Using such chips could have slowed games down noticeably. Microsoft's engineers decided to start sorting the chips, not using the subpar ones. Because GDDR3 700 MHz chips were just ramping up, there was no way to get more chips. Each system used eight chips. The shortage constrained the supply of Xbox 360s.
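The effect of that sorting on supply can be sketched with a simple model. The 8-chips-per-console figure is from the article; the pass rate and batch size below are hypothetical, since no actual Infineon yield figures were disclosed:

```python
def buildable_consoles(chips_produced, pass_rate, chips_per_system=8):
    """Consoles that can be supplied after sorting out memory chips
    that miss the 700 MHz bin; pass_rate is the fraction that qualify."""
    usable = int(chips_produced * pass_rate)
    return usable // chips_per_system

# With a hypothetical 80% pass rate, a chip batch meant to supply
# 125,000 consoles covers only 100,000:
built = buildable_consoles(1_000_000, 0.80)
print(built)  # 100000
```

Because every console needed all eight chips to meet spec, any shortfall in qualifying memory translated directly into consoles that could not be built, which is why the binning decision constrained Xbox 360 supply.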
Microsoft blamed the resulting shortfall of Xbox 360s on a variety of component shortages. Some users complained of overheating systems. But overall, the company said, the launch was still a great achievement. In its first holiday season, Microsoft sold 1.5 million Xbox 360s, compared to 1.4 million original Xboxes in the holiday season of 2001. But the shortage continued past the holidays.
Leslie Leland, hardware evaluation director, says she felt "terrible" about the shortage and that Microsoft would strive to get a box into the hands of every consumer who wanted one. But Greg Gibson, system designer, says that Microsoft could have worse problems on its hands than a shortage. The IBM and ATI teams had outdone themselves.
The project was by far the most successful Nick Baker had ever worked on. One night, hoisting a beer and looking at a finished console, he said it felt good.
J Allard, the head of the Xbox platform business, praised the chip engineers such as Baker: "They were on the highest wire with the shortest net."
This story first appeared in the May issue of Electronic Business magazine.
Digital transformation continues to be a vital undertaking for airlines during this critical recovery period. According to the International Air Transport Association (IATA), total industry losses between 2020 and 2022 are expected to reach $201 billion. As a result, airlines are increasingly leveraging technology to solve pain points for passengers and employees, not only to optimize their operations but also to drive revenue and long-term growth.
With legacy systems to untangle, complex business processes to investigate and reorganize, and new technology to test, making that shift is easier said than done. In this environment, airlines are undertaking more coordinated efforts to make digital transformation a “way of life” through technology like cloud architecture, mobile apps, and artificial intelligence. The key to making this leap is understanding the central business challenges and then continuously refining the direction to target essential needs.
“One of the first questions I ask my airline customers is, ‘What is the root cause of any problems today, in the context of your people, processes, and technology?’” said John Szatkowski, IBM’s global offering leader for travel and transportation. “The value isn’t one specific piece of technology or app, it’s solving a critical problem that gives the airline a better foundation.”
Guiding an airline’s journey into modernization starts with understanding how existing systems are intertwined.
“Some of the major airlines we are working with have thousands of systems that need to be modernized, deprecated, lifted, and shifted as part of a ‘cloud transformation’ project — and we need to figure out what to do with them,” said Szatkowski. “The hardest part is that it’s often a tangled web of systems created over the years. There might be 20 systems related to a process. It’s like a house of cards.”
Transformation doesn’t mean starting over: it means reinforcing what’s already part of an airline’s technology stack and removing inefficiencies. It’s also important not to go it alone. For example, Etihad Airways recently partnered with IBM Cloud to produce a more seamless airport check-in. With 18 existing integration systems, 12 major systems for check-in, and 270 unique processes to cater to, it was a complex migration.
In order to make all of this work, Etihad had to deploy a hybrid cloud strategy. Without the help of technology partners like IBM, the time to research, cost to test and learn, and speed to market would have been on a much longer timeline. In total, the team created the new solution on IBM Cloud in just 15 weeks.
“The technology and moving to the cloud are the easy parts,” said Szatkowski. “The hard parts are the project management, people, and process transformations. That’s where consulting comes into play.”
As organizations with multi-faceted operations, airlines need systems that easily connect to each other and allow employees to handle issues on the move.
For instance, timely flights and smooth passenger flow are critical factors for revenue growth. KLM recently undertook a project to improve the aircraft turnaround experience for its ground-handling employees so they could easily access the information they needed in one place.
Working with an outside partner to assess the challenges and find a solution proved integral. Several ground crew members were invited to participate in a three-day IBM Design Thinking Workshop to discuss problems and solutions. While it sounds obvious to bring in end users and hear about their issues in the field, it’s not always common practice.
“There is nothing better than having the business team listen to their people talk, so we made a point to fly in the ground handlers,” said Erin McClennan, global design director for travel and transportation at IBM. “[Companies] often don’t realize the challenges their employees are facing, and there’s also palpable excitement when we co-create together.”
The workshop was “transformational,” McClennan added. By the end of the second day, the group had a mobile app design to help solve operational issues. By day three, participants received a beta version of the APPron mobile app, which integrates airline, cargo, operations and baggage data — putting the coordinators in control of turnaround.
“We can transform the way employees do their jobs because they have this amazing technology in their pockets,” said McClennan.
By using real-time data effectively, airlines can create connected cabins, improving turnaround times and the passenger experience.
“If a passenger tells a flight attendant that the IFE [in-flight entertainment] is not functioning, the flight attendant can capture that information mid-flight so the maintenance crew can fix it on the turn,” explained McClennan. “The next passenger doesn’t have the same complaint and the flight attendant doesn’t have to have that same difficult conversation — this kind of connecting of systems and real-time data transfer can have a real impact.”
Predictive maintenance is another area that can transform business operations in real time. McClennan and Szatkowski both agreed, however, that effectively implementing a predictive system falls on a spectrum. Needs vary widely from airline to airline depending on their people, processes, and technology mix.
For example, McClennan explained that many airlines are still relying on paper, and sometimes getting started on a path to “full-blown” predictive maintenance is as simple as modernizing systems to reduce paper usage. Others may be ready to build much more complex systems, such as a digital twin of an aircraft or engine that integrates operational, manufacturer, and IoT data sources — which take much longer to build.
“We are really focused on insights and process optimization, which is a consistent aspiration from client to client,” McClennan said. “We can help airlines wherever they are in the process — our focus is to help the client reach the ultimate goal, while still providing value at each step along the way.”
Airlines can harness the power of conversational AI for a variety of applications.
“[Chatbots are] about redirecting the transactional lower priority calls to automation and dynamically rerouting calls that have a higher priority to an agent,” said Szatkowski, adding that this type of service is ideal for airlines as they experience peaks and valleys of demand.
For example, the AI-powered virtual agent IBM Watson (aka “Watson”) learns from customer interactions and knows when to search its knowledge base for answers, when to ask for clarification, and when to transfer users to a human agent. According to a Forrester study commissioned by IBM, chatbot agents with Watson reduced handle time by 10 percent, and an analysis of four companies using the system reported an ROI of more than 300 percent.
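The triage described here (answer from the knowledge base when confident, ask for clarification when unsure, hand off to a human otherwise) can be sketched as a simple confidence-threshold router. The thresholds and function names below are illustrative assumptions, not part of IBM Watson's actual API:

```python
# Hypothetical sketch of confidence-threshold routing in a virtual agent.
# Thresholds and names are invented for illustration; they do not reflect
# IBM Watson's real implementation.

ANSWER_THRESHOLD = 0.8   # confident enough to answer directly
CLARIFY_THRESHOLD = 0.4  # ambiguous: ask the user to rephrase

def route(intent_confidence: float, high_priority: bool) -> str:
    """Decide how an incoming customer query should be handled."""
    if high_priority:
        return "human_agent"        # reroute higher-priority calls to a person
    if intent_confidence >= ANSWER_THRESHOLD:
        return "knowledge_base"     # search the knowledge base for an answer
    if intent_confidence >= CLARIFY_THRESHOLD:
        return "ask_for_clarity"    # ask a clarifying question
    return "human_agent"            # low confidence: transfer to a person

print(route(0.9, False))   # knowledge_base
print(route(0.5, False))   # ask_for_clarity
print(route(0.95, True))   # human_agent
```

The point of the design is the one Szatkowski makes above: transactional, low-priority queries get absorbed by automation, while anything urgent or ambiguous is escalated rather than mishandled.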
In another instance, ANA partnered with IBM to bring customer feedback into a centralized, trackable system. Along with Salesforce Service Cloud, the airline brought together four global contact centers in the U.S. and Japan, providing complete, up-to-date customer views to enable better real-time service across communications channels. As part of the contact center, Watson Speech to Text visualizes customer conversations to help streamline information and enhance the insights gathered.
By using data intelligently to provide a 360-degree view of a passenger at any point on their journey, airlines can tailor proactive services to customers that will drive greater satisfaction and revenue growth.
ICAO (International Civil Aviation Organization) reported an estimated 25 percent to 29 percent decline in passengers in 2022 compared with 2019, but that’s now all starting to come back. As airlines compete ferociously to win back market share amidst spiking demand, their ability to understand customers’ changing expectations during this volatile time will be critical.
“We know so much about passengers, and I don’t think we’re using that data well,” said McClennan. “Data is now more easily connected, and there is a big opportunity to improve passenger experience by putting it into the hands of more employees.”
Consider journey-transforming scenarios such as the following: automated recognition of a flight with an unusually high number of vegetarian passengers on board triggers a timely increase in plant-based meal options, or a push notification upon landing tells passengers their baggage carousel number and how to get there.
As an example of leveraging this data during the booking process, IBM partnered with Malaysia Airlines on a “Personalized Pricing and Offers” email ad campaign based on AI algorithms. Malaysia Airlines customers receiving these personalized recommendations made 34 percent more bookings than those who did not. The uptake was even higher (54 percent more) for business class customers.
These are just a few examples of how the data airlines collect from their customers, with consent, can turn a pedestrian flight into an unforgettable experience.
Ultimately, there’s no one-size-fits-all solution for digital transformation. As airlines consider where they are on the path to digitize their operations, they might have any number of starting points and milestones across departments and disciplines. By working with technology partners who have seen use cases and understand the underlying systems required to solve issues common across large, complex, enterprise organizations, they can feel confident improving efficiency, profitability, and customer satisfaction.
For more information about IBM’s solutions for the travel and transportation industry, visit https://www.ibm.com/industries/travel-transportation.
This content was created collaboratively by IBM and Skift’s branded content studio, SkiftX.
The Best of Breed (BoB) Conference meets the evolving needs of the IT channel’s largest, fastest-growing, and most progressive solution provider organizations and the top technology vendors and distributors. An invitation-only event, the BoB Conference brings together 100+ attendees from CRN’s elite solution provider lists (Solution Provider 500, Tech Elite 250, and Fast Growth 150) to connect and engage over the course of two days. The in-person event features empowering CEO interviews, SP 500 solution provider spotlights, economic and market trend sessions, executive panel discussions and briefings, and peer-to-peer networking.
Get all of CRN's coverage of the event here and follow along on Twitter at #bob21.
10 Big Cybersecurity Bets For 2022 From Optiv CEO Kevin Lynch
From data governance, anti-ransomware and managed XDR to advisory services, managed implementation and faster delivery, here’s where Optiv CEO Kevin Lynch plans to place his bets in 2022.
HPE CEO Antonio Neri’s 10 Boldest Statements From Best Of Breed 2021
Neri talks about the growing importance of data, integrated platforms, and opportunities for partners in 5G, connectivity, and the HPE GreenLake edge-to-cloud platform.
HPE CEO Neri: Steer Clear Of Public Cloud, Slash Costs By Up To 50 Percent
One solution provider tells CRN that a customer is looking at that level of savings by moving workloads out of the public cloud and into HPE’s GreenLake platform.
HPE CEO Antonio Neri: Dell Apex ‘Is VMware—It’s Not Dell’
Neri says that the early version of the Dell Apex solution doesn’t offer as broad a set of as-a-service solutions as HPE’s GreenLake offering, and is predominantly built around the VMware control plane.
HPE CEO Antonio Neri To ‘Personally Lead’ Initiative To Boost GreenLake Experience
The CEO of Hewlett Packard Enterprise says he will oversee a 30-person team working to enhance ‘all aspects of the experience’ involved with the GreenLake as-a-service consumption model.
Hybrid Cloud Doesn’t Have To Be Intimidating, Says IBM
‘We can’t do this alone. With the $1 trillion opportunity around hybrid cloud, the only way we are going to succeed is with you,’ IBM’s Deepa Krishnan tells solution providers.
Cisco CEO Chuck Robbins’ 10 Boldest Statements From Best Of Breed 2021
‘Software … allows us to move faster, innovate more quickly and allows the customer to actually get to the outcome faster. And if we get it right, it’s better for both our business models because [it provides] more predictability. But we have to get it right because it’s complicated to figure all that out,’ Cisco CEO Chuck Robbins tells an audience of solution providers.
Ingram Micro’s Kirk Robinson: New Ownership Means New Channel Investment
‘Platinum [Equity] is in the business of making great companies greater, and they’re fully prepared to leverage their resources and experience to help Ingram Micro grow. And now that we’re U.S.-owned again, we have additional opportunities, not the least of which is in the public sector market where Platinum has proven experience,’ says Kirk Robinson, Ingram Micro’s chief country executive for the U.S.
Ingram Micro Acquires CloudLogic In Big Cloud Services Play
‘CloudLogic not only advises on the best, most efficient, and effective way to run your customers’ applications, but they can also run reports on your customers’ technology, software licensing, and cloud spend through what we call IT Portfolio Optimization,’ says Kirk Robinson, Ingram Micro’s chief country executive for the U.S.
Frank Vitagliano: ‘The Smart Money … Has Gravitated To The Distributors’
‘The smart money in the marketplace … has gravitated to the distributors. What they see is not only the value of what we’re doing today, but also the opportunity to enhance that,’ says Frank Vitagliano, CEO of the Global Technology Distribution Council.
Cisco’s Chuck Robbins On What Subscriptions Mean For Partners: ‘It’s Better For Both Our Business Models’
‘One thing I’ve told the team all along is I don’t know what the solution is, but the answer is we have to do it with our partner community, and that’s just the way it is,’ Cisco CEO Chuck Robbins said at The Channel Company’s 2021 Best of Breed Conference.
Cynet: Automate, Consolidate Security Functions With XDR
‘When you’re selling one consolidated XDR platform, it’s a single setup. Your win rates are higher, and your margins are higher as well,’ says Royi Barnea, Cynet’s head of channel sales in North America.
Customer Engagement Strategies Changing To Meet New Challenges
‘We are in definitely a hybrid place today of kind of a mix of digital and traditional. And that actually isn’t going away. And expectations of customers are absolutely going to continue to remain in that space, and they’re going to want to interact in that fashion,’ says Jade Surrette, chief marketing officer for The Channel Company.
IBM CEO Arvind Krishna’s 10 Boldest Statements From Best Of Breed 2021
IBM CEO Arvind Krishna talked Red Hat integration plans, supply chain issues, and partner opportunities, and took aim at VMware during a Q&A onstage at the Best of Breed Conference 2021.
Arvind Krishna: IBM ‘Dead Serious’ About Partner Push; Upcoming Growth Has ‘Got To Be’ Through Channel
IBM’s CEO tells partners at The Channel Company’s Best of Breed Conference that the IT giant has stepped up to invest in the channel, and offers major opportunities that include hybrid cloud, Red Hat solutions and security. ‘Now, let’s go grow the business together,’ Krishna said.
IBM CEO Arvind Krishna: Chip Shortage ‘More Likely’ Continuing Until 2023 Or 2024
Krishna said he sees any suggestion that a resolution could come by 2022 as ‘optimistic,’ and called upon the U.S. government to do more to support a larger return of semiconductor manufacturing to the country.
‘Geographically Diversify Manufacturing’ To Solve Supply Chain Crisis: Analyst
‘There’s no question we need to geographically diversify manufacturing. We absolutely have put too much dependence on Asia without having a more predictable macroeconomic and geopolitical relationship,’ says Daniel Newman of Futurum Research.
The Location Analytics Software market is expanding at a rapid pace and is expected to show strong growth potential over the forecast period of 2022 to 2028.
The report presents a detailed study of the Location Analytics Software market, covering size, share, growth, technological innovation, marketplace expansion, cost structure, and comprehensive statistical data. It combines in-depth research methodology with market understanding for the forecast period (2022-2028), integrating key market trends and opportunities and their impact on market value. The analysis focuses on individual market segments in order to identify the fastest-growing businesses over the forecast period, and includes a detailed assessment of future demand and supply conditions based on data from emerging markets.
Emerging market trends, drivers, restraints, growth opportunities, and challenges all shift the market’s dynamics, and analyzing them reveals both obstacles and possible new pathways. The factors contributing to market development divide into intrinsic and extrinsic: drivers and restraints are intrinsic factors, while opportunities and challenges are extrinsic. Analyzing both strengthens the assessment of the market’s revenue potential throughout the forecast period and helps targeted markets achieve progressive growth.
>>> Get a sample PDF Of the Report @ https://www.stratagemmarketinsights.com/sample/103296
Location Analytics Software Market: Scope of the Report
The report offers an overall understanding of the market, with detailed information on productivity, industries, and revenues to support business growth, along with an extensive analysis of the patterns and developments that affect the growth of the Location Analytics Software market. The market estimates in the report are the result of inclusive primary and secondary research covering the historical, estimated, and forecast years, and the market values are evaluated against social, economic, and political factors in light of current market dynamics.
Location Analytics Software Market: Competitive Landscape
The market analysis includes a section dedicated to the major players in the Location Analytics Software market, in which our analysts provide insight into the financial statements of each, along with key developments, product benchmarking, and SWOT analysis. The company profile section also includes a business overview and financial information, and the companies covered in this section can be customized according to the client’s requirements.
Prominent Key players of the Location Analytics Software market survey report:
IBM Corporation, Microsoft Corporation, Cisco Systems, Inc., HP Enterprise Company, Google Inc., Oracle Corporation, SAP SE, SAS Institute Inc.
Location Analytics Software Market Outlook (Segmentation Analysis)
Location Analytics Software Market, By Product:
Geocoding and Reverse Geocoding, Data Integration and Extract, Transform, and Load, Reporting and Visualization, Thematic Mapping and Spatial Analysis, Others
Location Analytics Software Market, By Application:
Risk Management, Emergency Response Management, Customer Experience Management, Remote Monitoring, Supply Chain Planning and Optimization, Sales and Marketing Optimization, Predictive Assets Management, Inventory Management, Others
>>> Don’t miss out on business opportunities in Location Analytics Software Market. Please speak to our analyst and gain crucial industry insights that will help your business grow @ https://www.stratagemmarketinsights.com/speakanalyst/103296
This report lets you identify opportunities in the Location Analytics Software market by region:
⦿ North America (the United States, Canada, and Mexico)
⦿ Europe (Germany, UK, France, Italy, Russia and Turkey, etc.)
⦿ Asia-Pacific (China, Japan, Korea, India, Australia, and Southeast Asia (Indonesia, Thailand, Philippines, Malaysia, and Vietnam))
⦿ South America (Brazil etc.)
⦿ The Middle East and Africa (North Africa and GCC Countries)
The Location Analytics Software Market: Research Methodology
Our research methodology combines primary research, secondary research, and expert reviews. Secondary research draws on sources such as company annual reports, research papers, and industry press releases; other sources include industry magazines, trade journals, associations, and government-authorized information, incorporating the most reliable data to showcase opportunities for business expansion in the global Location Analytics Software market.
>>> To Purchase This Premium Report, Click Here @ https://www.stratagemmarketinsights.com/cart/103296
Reasons to Purchase this Report :
• Strong qualitative and quantitative market analysis based on the segment breakdown, considering both economic and non-economic factors.
• Market evaluation based on market value (in USD billion) for each segment.
• Identification of the regions and segments expected to grow fastest and to dominate the market.
• Geographic analysis highlighting region-wise consumption of the product or service and the factors affecting the market in each region.
• A competitive landscape covering the market ranking of the major competitors, along with new service and product launches, partnerships, business expansions, and acquisitions by the profiled companies over the past five years.
• Company profiles including a company overview, company insights, product benchmarking, and SWOT analysis for the major market players.
• The current and future market outlook with respect to the latest developments, including growth opportunities and drivers as well as challenges and restraints in both emerging and developed regions.
• In-depth analysis of the market through Porter’s Five Forces.
• Insight into the market through its value chain.
• An understanding of market dynamics and growth opportunities over the forecast period.
• Six months of post-sales analyst support.
The Location Analytics Software market report provides answers to the following key questions:
• What will be the size of the Location Analytics Software market and its growth rate in the coming year?
• What are the major key factors driving the global Location Analytics Software market?
• What are the key market trends impacting the growth of the Global Location Analytics Software Market?
• What are the trending factors influencing the market shares of major regions around the world?
• Who are the major market players and what are their strategies in the global Location Analytics Software market?
• What are the market opportunities and threats faced by the vendors in the global Location Analytics Software Market?
• What are the industry trends, drivers, and challenges shaping the market’s growth?
• What are the key findings of the five forces analysis of the Global Location Analytics Software Market?
• What is the impact of COVID-19 on the industry today?
Stratagem Market Insights
“Microsoft (US), IBM (US), Google (US), Oracle (US), AWS (US), Meta (US), Artificial Solutions (Sweden), eGain (US), Baidu (China), Inbenta (US), Alvaria (US), SAP (Germany), Creative Virtual (UK), Gupshup (US), Rasa (US), Pandorabots (US), Botego (US), Chatfuel (US), Pypestream (US), Avaamo (US), Webio (Ireland), ServisBOT (US).”
Bot Services Market by Service Type (Platform & Framework), Mode of Channel (Social Media, Website), Interaction Type, Business Function (Sales & Marketing, IT, HR), Vertical (BFSI, Retail & eCommerce) and Region – Global Forecast to 2027
The bot services market is projected to grow from USD 1.6 billion in 2022 to USD 6.7 billion by 2027, at a Compound Annual Growth Rate (CAGR) of 33.2% during the forecast period. Factors such as the rising need for 24x7 customer support at lower operational cost, the integration of chatbots with social media to augment marketing strategies, and innovations in AI and ML chatbot technologies that improve customer experience are expected to drive the adoption of bot services.
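The quoted figures are internally consistent: compounding USD 1.6 billion at 33.2% annually over the five years from 2022 to 2027 lands at roughly USD 6.7 billion. A minimal check:

```python
# Sanity check of the quoted market figures: USD 1.6 bn (2022) growing at a
# 33.2% CAGR should reach roughly USD 6.7 bn by 2027 (five compounding years).

def project(value: float, cagr: float, years: int) -> float:
    """Compound a starting value at a constant annual growth rate."""
    return value * (1 + cagr) ** years

print(round(project(1.6, 0.332, 5), 1))  # 6.7
```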
Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=54449873
According to Microsoft, Azure Bot Service provides an integrated development environment for bot building. Its integration with Power Virtual Agents, a fully hosted low-code platform, enables developers of all technical abilities to build conversational AI bots without the need for any further coding. The integration of Azure Bot Service and Power Virtual Agents enables a multidisciplinary team with a range of expertise and abilities to build bots inside a single software as a service (SaaS) solution.
Healthcare and Life Sciences vertical to witness the highest CAGR during the forecast period
The bot services market is segmented by vertical into BFSI, retail & eCommerce, healthcare & life sciences, media & entertainment, travel & hospitality, IT & telecom, government, and others (automotive, utilities, education, and real estate). The healthcare industry is developing rapidly thanks to major technological advancements aimed at enhancing the overall patient experience. Hospitals and other health institutions are increasingly adopting bot services to improve the experience of patients, doctors, and other staff; bot services can enhance patient experience and build patient loyalty while improving organizational efficiency. Moreover, bots, also known as virtual health assistants, notify patients about their medication plans, address concerns, deliver diagnosis reports, educate them about particular diseases, motivate them to exercise, and personalize the user experience.
Some major players in the bot services market include Microsoft (US), IBM (US), Google (US), Oracle (US), AWS (US), Meta (US), Artificial Solutions (Sweden), eGain (US), Baidu (China), Inbenta (US), Alvaria (US), SAP (Germany), CM.com (Netherlands), Creative Virtual (UK), Kore.ai (US), 7.ai (US), Gupshup (US), Rasa (US), Pandorabots (US), Botego (US), Chatfuel (US), Pypestream (US), Avaamo (US), Webio (Ireland), ServisBOT (US), Morph.ai (India), Cognigy (Germany), Enterprise Bot (Switzerland), Engati (US), and Haptik (US). These players have adopted various organic and inorganic growth strategies, such as new product launches, partnerships and collaborations, and mergers and acquisitions, to expand their presence in the global bot services market.
Request sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=54449873
Artificial Solutions (Sweden) is a leading specialist in Conversational AI solutions and services. The solution offered by the company enables communication with applications, websites, and devices in everyday, human-like natural language via voice, text, touch, or gesture inputs. Artificial Solutions’ conversational AI technology makes it easy to build, implement, and manage a wide range of natural language applications, such as virtual assistants, conversational bots, and speech-based conversational UIs for smart devices. Artificial Solutions offers bot services and solutions to various industries, such as financial services, retail, automotive, telecom, energy and utilities, travel and leisure, and entertainment. Artificial Solutions has won several awards, such as the 2019 Stevie Awards for Sales and Customer Service, the 2018 Speech Industry Awards, and the 2018 AICONICS: Best Intelligent Assistant Innovation. The company’s major customers include AT&T, Shell, Vodafone, TIAA, Volkswagen Group, Deutsche Post, Widiba, Telenor Group, Accenture, KPMG, Cognizant, Wipro, and Publicis Sapient. It has development centers in Barcelona, Hamburg, London, and Stockholm and offices across Europe, Asia Pacific, and South America.
In the bot services market, it provides Teneo, a platform that enables business users and developers to collaborate to create intelligent conversational AI applications. These applications operate across 35 languages, multiple platforms, and channels in record time.
eGain (US) is a leading supplier of cloud customer engagement hub software. For over a decade, eGain products have been used to improve customer experience, streamline service processes, and increase revenue across online, social media, and phone channels, helping hundreds of the world’s leading organizations turn disjointed sales and customer service operations into unified customer engagement hubs (CEHs). Operating in North America, Europe, the Middle East, Africa, and Asia Pacific, eGain Corporation develops, licenses, implements, and supports customer service infrastructure software. It offers a unified cloud software platform to automate, augment, and orchestrate consumer interactions, along with subscription services that give users access to its software via a cloud-based platform, and professional services including consultation, implementation, and training. The company serves the financial services, telecommunications, retail, government, healthcare, and utilities industries.
In the bot services market, the company offers AI Chatbot Virtual Assistant software which improves customer engagement. The VA acts as a guide, helping customers navigate the website and taking them to the relevant places on a page. The virtual assistant provides answers to any queries, even helping in making shopping decisions.
Baidu (China) provides internet search services and is divided into two segments: Baidu Core and iQIYI. The Baidu app lets customers access search, feed, and other services on their mobile devices; Baidu Search gives users access to the company’s search and other services; and Baidu Feed presents users a customized timeline based on their demographics and interests. The company’s products include Baidu Knows, an online community where users can ask questions of other users; Baidu Wiki; Baidu Healthcare Wiki; Baidu Wenku; Baidu Scholar; Baidu Experience; Baidu Post; Baidu Maps, a voice-enabled mobile app that provides travel-related services; Baidu Drive; Baijiahao; and DuerOS, a smart assistant platform. The company also provides online marketing services such as pay-for-performance, an auction-based service that lets customers bid for priority placement of paid sponsored links and reach users searching for information about their products or services; other marketing services include display-based marketing and other online marketing services based on performance criteria other than cost per click. The company offers a mobile ecosystem, which includes Baidu A, a portfolio of applications. Further, the company provides iQIYI, an online entertainment service offering original and licensed content, video content and memberships, and online advertising services.
In the bot services market, Baidu offers Baidu Bot, a search bot software used by Baidu, which collects documents from the web to build a searchable index for the Baidu search engine.
Company Name: MarketsandMarkets™ Research Private Ltd.
Contact Person: Mr. Aashish Mehra
Address: 630 Dundee Road Suite 430
State: IL
Zip: 60062
Country: United States
Managing data storage is ever more complex. IT teams have to wrestle with local, direct-attached storage, storage area networks, network attached storage and cloud storage volumes.
They might be deploying hyper-converged systems, or using on-premise implementations of cloud storage technology. And they are likely to have several incompatible storage protocols at work, especially for unstructured data.
And all of this is happening as the business is demanding more from its data.
This is driving growing interest in global file systems, sometimes also known as distributed file systems.
Global file systems are not new. Back in the 1980s, Carnegie Mellon University developed its Andrew File System, or AFS, which is still in use today. But since then, commercial suppliers have taken the concept further and applied it across cloud and on-premise storage.
Bridging the gap between on-premise and cloud storage promises to simplify IT management and cut costs.
Cloud storage is, by its nature, distributed. The end-user does not know, or need to know, where their data is stored (apart from any compliance-based limitations). Cloud suppliers use object storage technology to split data across multiple servers and even multiple datacentres, to achieve economies of scale.
But most operating systems and applications cannot read and write to object storage directly. They expect network protocols such as NFS or SMB, or access to storage directly or via a SAN. Although suppliers have created storage gateways, and more applications are becoming compatible with object storage (Oracle and OpenStack among them), plenty still are not.
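The core of the mismatch is that file systems are hierarchical while object stores are flat key-value namespaces; a gateway's job is essentially to translate between the two. The sketch below is illustrative only (the `ObjectStore` and `Gateway` classes are hypothetical stand-ins, not any vendor's API) and shows just the path-to-key mapping idea:

```python
class ObjectStore:
    """Stand-in for a cloud object store: a flat namespace of key -> bytes."""
    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes):
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


class Gateway:
    """Translates POSIX-style file paths into object-store keys.

    Real gateways also handle metadata, permissions, locking, and partial
    reads/writes; this sketch shows only the namespace translation.
    """
    def __init__(self, store: ObjectStore, bucket: str):
        self.store = store
        self.bucket = bucket

    def _key(self, path: str) -> str:
        # "/projects/report.docx" -> "corp-files/projects/report.docx"
        return f"{self.bucket}{path}"

    def write_file(self, path: str, data: bytes):
        self.store.put(self._key(path), data)

    def read_file(self, path: str) -> bytes:
        return self.store.get(self._key(path))


store = ObjectStore()
gw = Gateway(store, bucket="corp-files")
gw.write_file("/projects/report.docx", b"quarterly numbers")
print(gw.read_file("/projects/report.docx"))  # b'quarterly numbers'
```

An application sees an ordinary path; the object store sees a flat key. Everything else a production gateway does builds on this translation.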
Global file systems could be the answer. They offer the flexibility, resilience and capacity of the cloud, but retain the simplicity – at least for applications and operating systems – of NAS.
“Most organisations of any size will have data stored in a variety of places and file formats, which can make it very challenging to find and use,” says Bryan Betts, principal analyst at Freeform Dynamics. “Putting a global file system over the top means all your data is equally visible to everyone with access rights, in a single standard format – a virtualised ‘super-filesystem’, if you like.”
Benefits include the economies of scale of the cloud, better redundancy than can usually be achieved from on-premise systems, the ability to add (or remove) storage capacity quickly, and a move from capital spending to operating expenditure.
Other pluses of enterprise file sharing services include easier collaboration and, potentially, better security.
Further down the line, though, firms could even move to a single file system that spans on-premise hardware – where latency and performance are critical – and cloud-based applications.
Global file systems work by combining a central file service – typically on public or private clouds – with local network hardware for caching and to ensure application compatibility. They do this by placing all the storage in a single namespace. This will be the single, “gold” copy of all data.
First, caching and syncing are needed to ensure performance. According to CTERA, one of the suppliers in the space, a large enterprise could be moving more than 30TB of data per site.
Second, the system needs broad compatibility. The global file system needs to support migration from legacy, on-premise NAS hardware. Operating systems and applications need to be able to access the global file system as easily as they previously did with NFS or SMB.
The system also needs to be easy to use, ideally transparent to end-users, and able to scale. Few firms will be able to move everything to a new file system at once, so a global file system that can grow as applications move to it is vital.
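The "gold copy plus local cache" pattern described above can be sketched in a few lines. This is a minimal illustration under stated assumptions (the `CloudGold` and `EdgeCache` names are invented for this example; real products add invalidation, eviction, and conflict handling):

```python
class CloudGold:
    """The single authoritative ("gold") copy of every file, in the cloud."""
    def __init__(self):
        self.files = {}

    def read(self, path):
        return self.files[path]

    def write(self, path, data):
        self.files[path] = data


class EdgeCache:
    """Per-site cache: serves repeat reads locally, forwards writes to gold."""
    def __init__(self, gold: CloudGold):
        self.gold = gold
        self.cache = {}
        self.hits = 0

    def read(self, path):
        if path in self.cache:
            self.hits += 1           # served locally, no WAN round trip
            return self.cache[path]
        data = self.gold.read(path)  # cache miss: fetch from the gold copy
        self.cache[path] = data
        return data

    def write(self, path, data):
        self.gold.write(path, data)  # write-through keeps gold authoritative
        self.cache[path] = data


gold = CloudGold()
site_a, site_b = EdgeCache(gold), EdgeCache(gold)
site_a.write("/specs/v1.pdf", b"rev 1")  # site A publishes a file
site_b.read("/specs/v1.pdf")             # site B: miss, pulled from gold
site_b.read("/specs/v1.pdf")             # site B: hit, served locally
print(site_b.hits)  # 1
```

Every site shares one namespace and one authoritative copy, while repeat reads stay local; that is the trade the global file system is making.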
As a cloud-based service, global file systems appeal to organisations that need to share information between sites, or with users outside the business perimeter – use cases that the pandemic only bolstered.
This, however, leads to overlaps between the capabilities of the global file system, and conventional file-and-sync services. These include the more consumer-oriented services such as Dropbox and OneDrive, often pressed into service to support remote working during Covid-19, as well as SharePoint, Google Drive and enterprise-grade sharing services.
Some global file system suppliers stress that they, too, can provide these services. Certainly, being able to share files externally, or extend desktop search out to cloud-based files, is useful. For most enterprises, however, basic performance, compatibility and ease of migration are likely to rank higher.
“The challenges are, of course, that this can get very big, and if your data is globally distributed, you, or your global file system developer, need to decide how you will deal with things like file locking – to prevent two people or systems updating the same data at the same time – and replication,” says Freeform Dynamics’ Betts.
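The file locking Betts describes is easy to see locally with POSIX advisory locks; doing the same across globally distributed sites is the hard part. A minimal sketch using Python's `fcntl.flock` (Unix-only):

```python
import fcntl
import tempfile

# A scratch file standing in for a shared document.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
path = tmp.name

writer_a = open(path, "r+b")
writer_b = open(path, "r+b")

# Writer A takes an exclusive lock before updating the file.
fcntl.flock(writer_a, fcntl.LOCK_EX | fcntl.LOCK_NB)

# Writer B's non-blocking attempt on the same file now fails, which is
# exactly what prevents two parties updating the same data at once.
try:
    fcntl.flock(writer_b, fcntl.LOCK_EX | fcntl.LOCK_NB)
    conflict_detected = False
except BlockingIOError:
    conflict_detected = True

fcntl.flock(writer_a, fcntl.LOCK_UN)  # release so writer B could proceed
print(conflict_detected)  # True
```

A global file system must provide these semantics between caches on different continents, where lock traffic crosses the WAN, which is why suppliers treat locking and replication strategy as a core design decision rather than a detail.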
The term "artificial intelligence" was coined only about 60 years ago, but today, we have no shortage of experts pondering the future of AI. Chief among the topics considered is the technological singularity, a moment when machines reach a level of intelligence that exceeds that of humans.
While currently confined to science fiction, the singularity no longer seems beyond the realm of possibility. From larger tech companies like Google and IBM to dozens of smaller startups, some of the smartest people in the world are dedicated to advancing the fields of AI and robotics. Now, we have human-looking robots that can hold a conversation, read emotions — or at least try to — and engage in one type of work or another.
Foremost among the experts confident that the singularity is a near-future inevitability is Ray Kurzweil, Google's director of engineering. The highly regarded futurist and "future teller" predicts we'll reach it sometime before 2045.
Meanwhile, SoftBank CEO Masayoshi Son, a quite famous futurist himself, is convinced the singularity will happen this century, possibly as soon as 2047. Between his company's strategic acquisitions, which include robotics startup Boston Dynamics, and billions of dollars in tech funding, it might be safe to say that no other person is as eager to speed up the process.
Not everyone is looking forward to the singularity, though. Some experts are concerned that super-intelligent machines could end humanity as we know it. These warnings come from the likes of physicist Stephen Hawking and Tesla CEO and founder Elon Musk, who has famously taken flak for his "doomsday" attitude towards AI and the singularity.
Clearly, the subject is quite divisive, so Futurism decided to gather the thoughts of other experts in the hopes of separating sci-fi from real developments in AI. Here's how close they think we are to reaching the singularity.
Louis Rosenberg, CEO, Unanimous AI:
My view, as I describe in my TED talk from this summer, is that artificial intelligence will become self-aware and will exceed human abilities, a milestone that many people refer to as the singularity. Why am I so sure this will happen? Simple. Mother nature has already proven that sentient intelligence can be created by enabling massive numbers of simple processing units (i.e., neurons) to form adaptive networks (i.e., brains).
Back in the early 1990s, when I started thinking about this issue, I believed that AI would exceed human abilities around the year 2050. Currently, I believe it will happen sooner than that, possibly as early as 2030. That's very surprising to me, as these types of forecasts usually slip further into the future as the limits of technology come into focus, but this one is screaming towards us faster than ever.
To me, the prospect of a sentient artificial intelligence being created on Earth is no less dangerous than an alien intelligence showing up from another planet. After all, it will have its own values, its own morals, its own sensibilities, and, most of all, its own interests.
To assume that its interests will be aligned with ours is absurdly naive, and to assume that it won't put its interests first — putting our very existence at risk — is to ignore what we humans have done to every other creature on Earth.
Thus, we should be preparing for the imminent arrival of a sentient AI with the same level of caution as the imminent arrival of a spaceship from another solar system. We need to assume this is an existential threat for our species.
What can we do? Personally, I am skeptical we can stop a sentient AI from emerging. We humans are just not able to contain dangerous technologies. It's not that we don't have good intentions; it's that we rarely appreciate the dangers of our creations until they overtly present themselves, at which point it's too late.
Does that mean we're doomed? For a long time I thought we were — in fact, I wrote two sci-fi graphic novels about our imminent demise — but now, I am a believer that humanity can survive if we make ourselves smarter, much smarter, and fast...staying ahead of the machines.
Pierre Barreau, CEO, Aiva Technologies:
I think that the biggest misunderstanding when it comes to how soon AI will reach a "super intelligence" level is the assumption that exponential growth in performance should be taken for granted.
First, on a hardware level, we are hitting the ceiling of Moore’s law as transistors can’t get any smaller. At the same time, we have yet to prove in practice that new computing architectures, such as quantum computing, can be used to continue the growth of computing power at the same rate as we had previously.
Second, on a software level, we still have a long way to go. Most of the best-performing AI algorithms require thousands, if not millions, of examples to train themselves successfully. We humans are able to learn new tasks much more efficiently by only seeing a few examples.
The applications of AI [and] deep learning nowadays are very narrow. AI systems focus on solving very specific problems, such as recognizing pictures of cats and dogs, driving cars, or composing music, but we haven’t yet managed to train a system to do all these tasks at once like a human is capable of doing.
That’s not to say that we shouldn’t be optimistic about the progress of AI. However, I believe that if too much hype surrounds a topic, it’s likely that there will come a point when we will become disillusioned with promises of what AI can do.
If that happens, then another AI winter could appear, which would lead to reduced funding in artificial intelligence. This is probably the worst thing that could happen to AI research, as it could prevent further advances in the field from happening sooner rather than later.
Now, when will the singularity happen? I think it depends what we mean by it. If we’re talking about AIs passing the Turing test and seeming as intelligent as humans, I believe that is something we will see by 2050. That doesn’t mean that the AI will necessarily be more intelligent than us.
If we’re talking about AIs truly surpassing humans in any task, then I think that we still need to understand how our own intelligence works before being able to claim that we have created an artificial one that surpasses ours. A human brain is still infinitely more complicated to comprehend than the most complex deep neural network out there.
Raja Chatila, chair of the IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems and director of the Institute of Intelligent Systems and Robotics (ISIR) at Pierre and Marie Curie University:
The technological singularity concept is not grounded on any scientific or technological fact.
The main argument is the so-called "law of accelerating returns" put forward by several prophets of the singularity, most notably Ray Kurzweil. This law is inspired by Moore's law, which, as you know, is not a scientific law – it is the result of how the industry that manufactures processors and chips delivers more miniaturized and integrated ones by scaling down the transistor, multiplying computing power by a factor of approximately two every two years, as well as increasing memory capacity.
Everyone knows there are limits to Moore's law – when we reach the quantum scale, for example – and that there are architectures that can change this perspective (quantum computing, integration of different functions: "more than Moore," etc.). It's important to remember that Moore's law is not a strict law.
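The factor-of-two-every-two-years observation compounds quickly, which is what gives singularity projections their force. A one-line sketch of the arithmetic, assuming (as Chatila cautions we should not) that the rate simply holds:

```python
def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years`, with one doubling per period."""
    return 2 ** (years / doubling_period)

# Over 20 years at one doubling every two years: 2**10, roughly a
# thousandfold increase in computing power.
print(moore_factor(20))  # 1024.0
```

The projection is only as good as the assumption that the doubling continues, which is precisely the point under dispute.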
However, the proponents of the singularity generalize it to the evolution of species and of technology in general on no rigorous ground. From that, they project that there will be a moment in time in which the increasing power of computers will provide them with a capacity of artificial intelligence, surpassing all human intelligence. Currently, this is predicted by the singularity proponents to happen around 2040 to 2045.
But mere computing power is not intelligence. We have about 100 billion neurons in our brain. It’s their organization and interaction that makes us think and act.
For the time being, all we can do is program explicit algorithms for achieving some computations efficiently (calling this intelligence), be it by specifically defining these computations or through well-designed learning processes, which remain limited to what they’ve been designed to learn.
In conclusion, the singularity is a matter of belief, not science.
Gideon Shmuel, CEO of eyeSight Technologies:
Figuring out how to make machines learn for themselves, in a broad way, may be an hour away in some small lab and may be five years out as a concentrated effort by one of the giants, such as Amazon or Google. The challenge is that once we make this leap and the machines truly learn by themselves, they will be able to do so at an exponential rate, surpassing us within hours or even mere minutes.
I wish I could tell you that, like all other technological advancements, tech is neither good nor bad — it’s just a tool. I wish I could tell you that a tool is as good or as bad as its user. However, all this will not apply any longer. This singularity is not about the human users — it’s about the machines. This will be completely out of our hands, and the only thing that is certain is that we cannot predict the implications.
Plenty of science-fiction books and movies bring up the notion of a super intelligence, figuring out that the best way to save humankind is to destroy it, or lock everyone up, or some other outcome you and I are not going to appreciate.
There is an underlying second-order distinction worth making between AI technologies. If you take eyeSight's domain expertise – embedded computer vision – the risk is rather low. Having a machine or computer learn on its own the meaning of the items and contexts it can see (recognize a person, a chair, a brand, a specific action performed by humans or an interaction, etc.) has nothing to do with the action such a machine can take with respect to this input.
It is in our best interest to have machines that can teach themselves to understand what’s going on and ascribe the right meaning to the happenings. The risk lies with the AI brain that is responsible for taking the sensory inputs and translating them to action.
Actions can be very risky both in the physical realm, through motors (vehicles, gates, cranes, pipe valves, robots, etc.) and in the cyber realm (futzing with information flow, access to information, control of resources, identities, various permissions, etc.).
Should we be afraid of the latter? Personally, I’m shaking.
Patrick Winston, artificial intelligence and computer science professor, MIT Computer Science and Artificial Intelligence Lab (CSAIL):
I was recently asked a variant on this question. People have been saying we will have human-level intelligence in 20 years for the past 50 years. My answer: I'm ok with it. It will be true eventually.
My less flip answer is that, interestingly, [Alan] Turing broached the subject in his original Turing test paper using a nuclear reaction analogy. Since then, others have thought they invented the singularity idea, but it is really an obvious question that anyone who has thought seriously about AI would ask.
My personal answer is that it is not like getting a person to the Moon, which we knew we could do when the space program started. That is, no breakthrough ideas were needed. As far as a technological singularity, that requires one or more breakthroughs, and those are hard/impossible to think of in terms of timelines.
Of course, it depends, in part, on how many have been drawn to think about those hard problems. Now, we have huge numbers studying and working on machine learning and deep learning. Some tiny fraction of those may be drawn to thinking about understanding the nature of human intelligence, and that tiny fraction constitutes a much bigger number than were thinking about human intelligence a decade ago.
So, when will we have our Watson/Crick moment? Forced into a corner, with a knife at my throat, I would say 20 years, and I say that fully confident that it will be true eventually.