A perfect key to success with these P2050-028 test prep materials

If you really want to show your professionalism, just passing the P2050-028 exam is not sufficient. You should have enough Emptoris Strategic Supply Technical Mastery v1 knowledge to help you work in real-world scenarios. Killexams.com focuses on improving your knowledge of the P2050-028 objectives so that you not only pass the exam, but are genuinely ready to work in a practical environment as a professional.

Exam Code: P2050-028 Practice exam 2022 by Killexams.com team
Emptoris Strategic Supply Technical Mastery v1
‘India is a perfect example of the application of open hybrid cloud’

You have seen the company grow from being just about enterprise Linux to becoming a multi-billion dollar open source enterprise products firm. Having stepped into Cormier’s shoes, are you planning any change in strategy?

The short answer is ‘No’. I’m pretty lucky that I have worked within about 20 feet of Paul for the last 10 years. So, I’ve had the opportunity to have a hand in the team we’ve built and the strategy we’ve built and the bets and positions we’ve made around open hybrid cloud. In my last role, I was heading all of our products and technology and business unit teams. Hence, I know the team and the strategy. And we will evolve. If we look at the cloud services market that’s moving fast, our commercial models will change there to make sure that as customers have a foot on prem (on premises) and in private cloud, we serve them well. As hybrid extends to edge (computing), it will also change how we approach that market. But our fundamental strategy around open hybrid cloud doesn’t change. So, it’s a nice spot to be here, where I don’t feel compelled to make any change, but focus more on execution. 

Tell us a bit about Red Hat’s focus on India, and your expansion plans in the country.

When we see the growth and opportunity in India, it mimics what we see in a lot of parts of the globe—software-defined innovation that is going to be the thing that lets enterprises compete. That could be in traditional markets where they’re leveraging their data centres; or it could be leveraging public cloud technologies. In certain industries, that software innovation is moving to the devices themselves, which we call edge. India is a perfect example of the application of open hybrid cloud because we can serve all of those use cases—from edge deployments in 5G and the adjacent businesses that will be built around that, to connectivity to the public clouds.

Correia (Marshall Correia is vice-president and general manager, India, South Asia at Red Hat): We have been operating in the country for multiple decades and our interest in India is two-fold. One is go-to-market in India, working with the Indian government, Indian enterprises, private sector as well as public sector enterprises. We have a global delivery presence in cities like Pune and Bengaluru. Whether you look at the front office, back office, or mid-office, we are deeply embedded into it (BSE, National Stock Exchange (NSE), Aadhaar, GST Network (GSTN), Life Insurance Corporation of India (LIC), SBI Insurance and most core banking services across India use Red Hat open source technologies). For instance, we work with Infosys on GSTN. So, I would say there is a little bit of Red Hat played out everywhere (in India) but with some large enterprises, we have a very deep relationship. 

Do you believe Red Hat is meeting IBM’s expectations? How often do you interact with Arvind Krishna, and what do you discuss?

About five years ago, Arvind and I were on stage together, announcing our new friendship around IBM middleware on OpenShift. I talk to him every few days. A lot of this credit goes to Paul. We’ve struck the balance with IBM. Arvind would describe it as Red Hat being “independent" (since) we have to partner with other cloud providers, other consulting providers, (and) other technology providers (including Verizon, Accenture, Deloitte, Tata Consultancy Services, and IBM Consulting). But IBM is very opinionated on Red Hat—they built their middleware to Red Hat, and we are their core choice for hybrid. Red Hat gives them (IBM) a technology base that they can apply their global reach to. IBM has the ability to bring open source Red Hat technology to every corner of the planet. 

How are open source architectures helping data scientists and CXOs gain the much-needed edge in adopting AI-ML (artificial intelligence and machine learning)?

AI is a really big space, and we have always sort of operated in how to get code built and (get it) into production faster. But now training models that can answer questions with precision are running in parallel. Our passion is to integrate that whole flow of models into production, right next to the apps that you’re already building today—we call this the ML ops (machine learning operations, which is jargon for a set of best practices for businesses to run AI successfully) space.

What that means is that we’re not trying to be the best in natural language processing (NLP) or building foundation AI models on it or convolutional neural networks (CNNs). We want to play in our sweet spot, which is how we arm data science teams to be able to get their models from development to production and tie them into those apps. This is the work we’ve done on OpenShift data science (managed cloud service for data scientists and developers) with it.

Another piece that’s changing and has been exciting for us is hardware. As an example, cars today and going forward are moving to running just a computer in them. What we do really well is put Linux on computers, and the computer in your car in the future will look very similar to the computer in your data centre today. And when we’re able to combine that platform with bringing these AI models into that environment, with the speed that you get with code and application integration, it opens up a lot of exciting opportunities for customers to get that data science model building into the devices, or as close to customers as they possibly can.

This convergence is important, and it’s not tied to edge. Companies have realized that the closer they can push the interaction to the user, the better the experience is going to be.

And that could be in banking or pushing self-service to users’ phones. In autonomous driving, it’s going to be pushing the processing down to your rear-view mirror to make decisions for you. In mining, it might be 5G. At the core of it is how far you can push your differentiated logic closer to your consumer use case. That’s why I think we see the explosion in edge.

As a thought leader, I would like your views on trends like the decentralized web and open source metaverse.

If you look at the Red Hat structure, we have areas where we’re committed to businesses through our business units. But then we also have our office of technology, led by our CTO, Chris Wright, where we track industry trends where we haven’t necessarily taken a business stake or position but want to understand the technology behind it. The cryptographic, blockchain, decentralizing core technology foundations, which we watch very closely, are in this space right now. Because they do change the way you operate. It’s strikingly similar to how open source and coding practices are seen as normal today, but when I started this 20 years ago, it was a much more connected and controlled experience versus a very decentralized one today. So, we track this very closely from a technology perspective, (but) we haven’t yet taken a business position on this.

In this context, do you collaborate with IBM R&D too?

Yeah, we do. We worked closely with the IBM research team run by Dario Gil (senior VP and director of IBM Research) pre-acquisition, and we work even closer with them now. Post-acquisition, the focus on Red Hat and the clarity on IBM’s focus on open hybrid cloud have helped us collaborate even better.

Last, but not least, what is Red Hat’s stance on the patent promise it made in September 2017, given that your company is now an IBM unit (which has over 70,000 active patents)?

We continue to collect our patents in a way that they won’t be leveraged against other users of open source. Red Hat will do it (patent) for the benefit of open source and to make the usage of open source a little safer. My patents, I believe, are included in that, and will continue to be included in that going forward.

Taking The Road To Modernizing Today's Mainframe

Milan Shetti, President and CEO, Rocket Software.

With the rising popularity of cloud-based solutions over the last decade, a growing misconception in the professional world is that mainframe technology is becoming obsolete. This couldn’t be further from the truth. In fact, the results of a recent Rocket survey of over 500 U.S. IT professionals found businesses today still rely heavily on the mainframe over cloud-based or distributed technologies to power their IT infrastructures—including 67 of the Fortune 100.

Despite the allure surrounding digital solutions, a recent IBM study uncovered that 82% of executives agree their business case still supports mainframe-based applications. This is partly due to the increase in disruptive events taking place throughout the world—the Covid-19 pandemic, a weakened global supply chain, cybersecurity breaches and increased regulations across the board—leading companies to continue leveraging the reliability and security of the mainframe infrastructure.

However, the benefits are clear, and the need is apparent for organizations to consider modernizing their mainframe infrastructure and implementing modern cloud-based solutions into their IT environment to remain competitive in today’s digital world.

Overcoming Mainframe Obstacles

Businesses leveraging mainframe technology that hasn’t been modernized may struggle to attract new talent. With new talent entering the professional market primarily trained on cloud-based software, traditional mainframe software and processes create a skills gap that could deter prospective hires and lead to companies missing out on top-tier talent.

Without modernization, many legacy mainframes lack connectivity with modern cloud-based solutions. Although the mainframe provides a steady, dependable operational environment, it’s well known that the efficiency, accuracy and accessibility modern cloud-based solutions create have helped simplify and improve many operational practices. Mainframe infrastructures that can’t integrate innovative tools—like automation—to streamline processes or provide web and mobile access to remote employees—which has become essential following the pandemic—have become impractical for most business operations.

Considering these impending hurdles, organizations are at a crossroads with their mainframe operations. Realistically, there are three roads a business can choose to journey down. The first is to continue “operating as-is,” which is cost-effective but more or less avoids the issue at hand and positions a company to get left in the dust by its competitors. A business can also “re-platform” or completely remove and replace its current mainframe infrastructure in favor of distributed or cloud models. However, this option can be disruptive, pricey and time-consuming and forces businesses to simply toss out most of their expensive technology investments.

The final option is to “modernize in place.” Modernizing in place allows businesses to continue leveraging their technology investments through mainframe modernization. It’s the preferred method of IT professionals—56% compared to 27% continuing to “operate as-is” and 17% opting to “re-platform”—because it’s typically cost-efficient, less disruptive to operations and improves the connectivity and flexibility of the IT infrastructure.

Most importantly, modernizing in place lets organizations integrate cloud solutions directly into their mainframe environment. In this way, teams can seamlessly transition into a more efficient and sustainable hybrid cloud model that helps alleviate the challenges of the traditional mainframe infrastructure.

Modernizing In Place With A Hybrid Cloud Strategy

With nearly three-quarters of executives from some of the largest and most successful businesses in agreement that mainframe-based applications are still central to business strategy, the mainframe isn’t going anywhere. And with many organizations still opting for mainframe-based solutions for data-critical operating systems—such as financial management, customer transaction systems of record, HR systems and supply chain data management systems—mainframe-based applications are actually expected to grow over the next two years. That’s why businesses must look to leverage their years of technology investments alongside the latest tools.

Modernizing in place with a hybrid cloud strategy is one of the best paths for an enterprise to meet the evolving needs of the market and its customers while simultaneously implementing an efficient and sustainable IT infrastructure. It lets companies leverage innovative cloud solutions in their tech stack that help bridge the skills gap to entice new talent while making operations accessible for remote employees.

The integration of automated tools and artificial intelligence capabilities in a hybrid model can help eliminate many manual processes to reduce workloads and improve productivity. The flexibility of a modernized hybrid environment can also allow teams to implement cutting-edge processes like DevOps and CI/CD testing into their operations, helping ensure a continuously optimized operational environment.

With most IT professionals in agreement that hybrid is the answer moving forward, it’s clear that more and more businesses that work within mainframe environments will begin to migrate cloud solutions into their tech stack. Modernizing in place with a hybrid cloud strategy is one great way for businesses to meet market expectations while positioning themselves for future success.


Nanosheet FETs Drive Changes In Metrology And Inspection

In the Moore’s Law world, it has become a truism that smaller nodes lead to larger problems. As fabs turn to nanosheet transistors, it is becoming increasingly challenging to detect line-edge roughness and other defects due to the depths and opacities of these and other multi-layered structures. As a result, metrology is taking even more of a hybrid approach, with some well-known tools moving from the lab to the fab.

Nanosheets are the successor to finFETs, an architecture evolution prompted by the industry’s continuing desire to increase speed, capacity, and power. They also help solve short-channel effects, which lead to current leakage. The great vulnerability of advanced planar MOSFET structures is that they are never fully “off.” Due to their configuration, in which the metal-oxide gate sits on top of the channel (conducting current between source and drain terminals), some current continues to flow even when voltage isn’t applied to the gate.

FinFETs raise the channel into a “fin.” The gate is then arched over that fin, allowing it to connect on three sides. Nevertheless, the bottom of the gate and the bottom of the fin are level with each other, so some current can still sneak through. The gate-all-around design turns the fin into multiple, stacked nanosheets, which horizontally “pierce” the gate, giving coverage on all four sides and containing the current. An additional benefit is the nanosheets’ width can be varied for device optimization.

Fig. 1: Comparison of finFET and gate-all-around with nanosheets. Source: Lam Research

Unfortunately, with one problem solved, others emerge. “With nanosheet architecture, a lot of defects that could kill a transistor are not line-of-sight,” said Nelson Felix, director of process technology at IBM. “They’re on the underside of nanosheets, or other hard-to-access places. As a result, the traditional methods to very quickly find defects without any prior knowledge don’t necessarily work.”

So while this may appear linear from an evolutionary perspective, many process and materials challenges have to be solved. “Because of how the nanosheets are formed, it’s not as straightforward as it was in the finFET generation to create a silicon-germanium channel,” Felix said.

Hybrid combinations
Several techniques are being utilized, ranging from faster approaches like optical microscopy to scanning electron microscopes (SEMs), atomic force microscopes (AFMs), X-ray, and even Raman spectroscopy.

Well-known optical vendors like KLA provide the first-line tools, employing techniques such as scatterometry and ellipsometry, along with high-powered e-beam microscopes.

With multiple gate stacks, optical CD measurement needs to separate one level from the next according to Nick Keller, senior technologist, strategic marketing for Onto Innovation. “In a stacked nanosheet device, the physical dimensions of each sheet need to be measured individually — especially after selective source-drain recess etch, which determines drive current, and the inner spacer etch, which determines source-to-gate capacitance, and also affects transistor performance. We’ve done demos with all the key players and they’re really interested in being able to differentiate individual nanosheet widths.”

Onto’s optical critical dimension (OCD) solution combines spectroscopic reflectometry and spectroscopic ellipsometry with an AI analysis engine, called AI-Diffract, to provide angstrom-level CD measurements with superior layer contrast versus traditional OCD tools.

Fig. 2: A model of a GAA device generated using AI-Diffract software, showing the inner spacer region (orange) of each nanosheet layer. Source: Onto Innovation

Techniques like spectroscopic ellipsometry or reflectometry from gratings (scatterometry) can measure CDs and investigate feature shapes. KLA describes scatterometry as using broadband light to illuminate a target to derive measurements. The reflected signal is fed into algorithms that compare the signal to a library of models created based on known material properties and other data to see 3D structures. The company’s latest OCD and shape metrology system identifies subtle variations (in CD, high k and metal gate recess, side wall angle, resist height, hard mask height, pitch walking) across a range of process layers. An improved stage and new measurement modules help accelerate throughput.
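
To make the model-library idea concrete, here is a toy sketch of the matching step (illustrative only: the simulate_spectrum stand-in and all values are invented, whereas production libraries come from rigorous electromagnetic solvers such as RCWA). The measured spectrum is compared against precomputed spectra, and the closest match yields the CD estimate:

import numpy as np

# Hypothetical library: simulated reflectance spectra keyed by candidate CD (nm).
wavelengths = np.linspace(250, 800, 200)  # nm

def simulate_spectrum(cd_nm):
    # Stand-in for a real diffraction model; spectrum shape varies with CD.
    return 0.5 + 0.3 * np.sin(wavelengths / (20 + 0.1 * cd_nm))

library = {round(cd, 1): simulate_spectrum(cd) for cd in np.arange(10.0, 20.0, 0.1)}

def match_cd(measured):
    # Report the library entry with the smallest RMS error vs. the measurement.
    errors = {cd: np.sqrt(np.mean((spec - measured) ** 2))
              for cd, spec in library.items()}
    return min(errors, key=errors.get)

measured = simulate_spectrum(14.3) + np.random.normal(0, 0.002, wavelengths.size)
print(f"Best-fit CD: {match_cd(measured)} nm")  # expect ~14.3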

Chipmakers rely on AI engines and deep computing in metrology just to handle the data streams. “They do the modeling data for what we should be looking at that day, and that helps us out,” said Subodh Kulkarni, CEO of CyberOptics. “But they want us to provide them speedy resolution and accuracy. That’s incredibly difficult to deliver. We’re ultimately relying on things like the resolution of CMOS and the bandwidth of GPUs to crunch all that data. So in a way, we’re relying on those chips to develop inspection solutions for those chips.”

In addition to massive data crunching, data from different tools must be combined seamlessly. “Hybrid metrology is a prevailing trend, because each metrology technique is so unique and has such defined strengths and weaknesses,” said Lior Levin, director of product marketing at Bruker. “No single metrology can cover all needs.”

The hybrid approach is well accepted. “System manufacturers are putting two distinct technologies into one system,” said Hector Lara, Bruker’s director and business manager for Microelectronics AFM. He says Bruker has decided against that approach based on real-world experience, which has shown it leads to sub-optimal performance.

On the other hand, hybrid tools can save time and allow a smaller footprint in fabs. Park Systems, for example, integrates AFM precision with white light interferometry (WLI) into a single instrument. Its purpose, according to Stefan Kaemmer, president of Park Systems Americas, is in-line throughput. While the WLI can quickly spot a defect, “You can just move the sample over a couple of centimeters to the AFM head and not have to take the time to unload it and then load it on another tool,” Kaemmer said.

Bruker, meanwhile, offers a combination of X-ray diffraction (XRD)/X-ray reflectometry (XRR) and X-ray fluorescence (XRF)/XRR for 3D logic applications. However, “for the vast majority of applications, the approach is a very specialized tool with a single metrology,” Levin said. “Then you hybridize the data. That’s the best alternative.”

What AFMs provide
AFMs are finding traction in nanosheet inspection because of their ability to distinguish fine details, a capability already proven in 3D NAND and DRAM production. “In AFM, we don’t really find the defects,” Kaemmer explained. “Predominantly, we read the defect map coming typically from some KLA tool and then we go to whatever the customer picks to closely examine. Why that’s useful is the optical tool tells you there’s a defect, but one defect could actually be three smaller defects that are so close together the optical tool can’t differentiate them.”

The standard joke about AFMs is that their operation was easier to explain when they were first developed nearly forty years ago. In 1985, when record players were in every home, it took little imagination to picture an instrument in which a sharp tip, extending from a cantilevered arm, felt its way along a surface to produce signals. With electromagnetic (and sometimes chemical) modifications, that is essentially the hardware design of all modern AFMs. There are now many variations of tip geometries, from pyramids to cones, in a range of materials including silicon, diamond, and tungsten.

In one mode of operation, tapping, the cantilever is put into oscillation at its natural resonant frequency, giving the AFM’s control system greater precision of force control and producing a nanometer-scale topographic rendering of the semiconductor structure. The second, sub-resonant mode enables the greatest force control during tip-sample interaction. That approach becomes invaluable for high-aspect-ratio structures, rendering high-accuracy depth measurements and, in some structures, sidewall angles and roughness.
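
To picture how that feedback turns amplitude control into topography, here is a toy simulation (made-up gains and an idealized amplitude-versus-gap curve, not a vendor control loop): the z-actuator is servoed so the oscillation amplitude holds at its setpoint, and the resulting z-trace follows the surface.

import numpy as np

# Synthetic 1D surface: 1 nm peak-to-peak topography.
surface = 2.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, 400))
free_amplitude, setpoint, gain = 10.0, 8.0, 0.4  # nm, nm, feedback gain

def amplitude(gap_nm):
    # Crude model: oscillation is damped as the tip approaches the surface.
    return free_amplitude * np.clip(gap_nm / free_amplitude, 0.0, 1.0)

z = surface[0] + setpoint  # start with the tip hovering above the surface
trace = []
for height in surface:  # scan along the line
    for _ in range(50):  # feedback settles at each pixel
        error = amplitude(z - height) - setpoint
        z -= gain * error  # lower z if amplitude is too high, raise if too low
    trace.append(z)  # the z-trace follows topography plus a constant gap

print("Recovered peak-to-peak: %.2f nm" % (max(trace) - min(trace)))  # ~1.00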

Today’s commercial production tools are geared to specific applications, such as defect characterization or surface profile measurement. Unlike optical microscopes, where improvements center on improved resolution, AFMs are looking at subtle profile changes in bond pads for hybrid bonding, for instance, or to reveal defect characteristics like molecular adhesion.

“Bonding is really a sweet spot for AFM,” said Sean Hand, senior staff applications scientist at Bruker. “It’s really planar, it’s flat, we’re able to see the nanoscale roughness, and the nanoscale slope changes that are important.”

Additionally, because tips can exert enough force to move particles, AFMs can both find errors and correct them. They have been used in production to remove debris and make pattern adjustments on lithography masks for nearly two decades. Figure 3 (below) shows probe-based particle removal during the lithography process for advanced node development. Contaminants are removed from EUV masks, allowing the photomask to be quickly returned to production use. That extends the life of the reticle, and avoids surface degradation caused by wet cleaning.

AFM-based particle removal is a significantly lower-cost dry cleaning process and adds no residual contamination to the photomask surface, which can degrade mask life. Surface interaction is local to the defect, which minimizes the potential for contamination of other mask areas. The high precision of the process allows for cleaning within fragile mask features without risk of damage.

Fig. 3: Example of pattern repair. Source: Bruker

AFMs also are used to evaluate the many photoresist candidates for high-NA EUV, including metal oxide resists and more traditional chemically amplified resists. “With the thin resist evaluation of high-NA EUV studies, now you have thin resist trenches that are much more shallow,” said Anne-Laure Charley, R&D metrology manager at Imec. “And that becomes a very nice use case for AFM.”

The drawback to AFMs, however, is that they are limited to surface characterization. They cannot measure the thickness of layers, and can be limited in terms of deep 3D profile information. Charley recently co-authored a paper that explores a deep-learning-enabled correction for the problem of vertical (z) drift in AFMs. “If you have a structure with a small trench opening, but which is very deep, you will not be able to reach the bottom of the trench with the tip, and you will not then be able to characterize the full trench depth and also the profile at the bottom of the trench,” she said.

Raman spectroscopy
Raman spectroscopy, which relies on the analysis of inelastically scattered light, is a well-established offline technique for materials characterization that is moving its way inline into fabs. According to IBM’s Felix, it is likely to come online to answer the difficult questions of 3D metrology. “There’s a suite of wafer characterization techniques that historically have been offline techniques. For example, Raman spectroscopy lets you really probe what the bonding looks like,” he said. “But with nanosheet, this is no longer a data set you can just spot-check and have it be only one-way information. We have to use that data in a much different way. Bringing these techniques into the fab and being able to use them non-destructively on a wafer that keeps moving is really what’s required because of the complexity of the material set and the geometries.”

XRD/XRF
In addition to AFM, other powerful techniques are being pulled into the nanosheet metrology arsenal. Bruker, for example, is employing X-ray diffraction (XRD), the crystallography technique with which Rosalind Franklin created the famous “Photograph 51” to show the helical structure of DNA in 1952.

According to Levin, during the height of finFET development, companies adopted XRD technology, but mainly for R&D. “It looks like in this generation of devices, X-ray metrology adoption is much higher.”

“For the gate all around, we have both XRD — the most advanced XRD, the high brightness source XRD, for measurement of the nanosheet stack — combined with XRF,” said Levin. “Both of them are to measure the residue part, making sure everything is connected, as well as those recessed edge steps. An XRF can provide a very accurate volumetric measurement. It can measure single atoms. So in a very sensitive manner, you can measure the recessed edge of the material that is remaining after the recessed etch. And it’s a direct measurement that doesn’t require any calibration. The signal you get is directly proportional to what you’re looking to measure. So there’s significant adoption of these two techniques for GAA initial development.”

Matthew Wormington, chief technologist at Bruker Semi X-ray, gave more details: “High resolution X-ray diffraction and X-ray reflectometry are two techniques that are very sensitive to the individual layer thicknesses and to the compositions, which are key for controlling some of the key parameters downstream in the 3D process. The gate-all-around structure is built on engineered substrates. The first step is planar structures, a periodic array of silicon and silicon germanium layers. X-ray measurement is critical in that very key step because everything is built on top of that. It’s a key enabling measurement. So the existing techniques become much more valuable, because if you don’t get your base substrate correct — not just the silicon but the SiGe/Si multilayer structure — everything following it is challenged.”
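
The principle Wormington describes comes down to Bragg's law, n·λ = 2d·sin(θ): layer spacing sets the angles where diffraction peaks appear, so composition and thickness changes in the Si/SiGe stack shift the peaks. A back-of-the-envelope sketch with textbook values (illustrative arithmetic, not tool output):

import math

wavelength = 1.5406  # Cu K-alpha X-ray wavelength, angstroms
a_si = 5.431         # silicon lattice constant, angstroms

def bragg_two_theta(h, k, l, a=a_si, lam=wavelength):
    d = a / math.sqrt(h * h + k * k + l * l)  # (hkl) plane spacing, cubic lattice
    return 2 * math.degrees(math.asin(lam / (2 * d)))

# Si (004), the usual reference reflection for Si/SiGe rocking curves: ~69.1 deg
print("Si (004) 2-theta: %.2f deg" % bragg_two_theta(0, 0, 4))
# A SiGe layer has larger plane spacing, so its peak shows up at a lower angle.
print("5%% larger spacing: %.2f deg" % bragg_two_theta(0, 0, 4, a=a_si * 1.05))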

Conclusion
The introduction of nanosheet transistors and other 3D structures is calling for wider usage of tools like AFM, X-ray systems, ellipsometry and Raman spectroscopy. And new processes, like hybrid bonding, lead to older processes being brought in for new applications. Imec’s Charley said, “There are some specific challenges that we see linked to stacking of wafers. You eventually need to measure through silicon because when you start to stack two wafers on top of each other, you need to measure or inspect through the backside and eventually you still have a relatively thick silicon. And that implies working with different wavelengths, in particular infrared. So vendors are developing specific overlay tools using infrared for these kinds of use cases.”

As for who will ultimately drive the research, it depends on when you ask that question. “The roadmap for technology is always bi-directional,” said Levin. “It’s hard to quantify, but roughly half comes from the technology side from what is possible, and half comes from what’s needed in the marketplace. Every two or three years we have a new generation of tools.”

REFERENCES
D. Cerbu, et. al., “Deep Learning-Enabled Vertical Drift Artefact Correction for AFM Images,” Proc. SPIE Metrology, Inspection, and Process Control XXXVI, May 2022; doi: 10.1117/12.2614029

A.A. Sifat, J. Jahng, and E.O. Potma, “Photo-Induced Force Microscopy (PiFM) — Principles and Implementations,” Chem. Soc. Rev., 2022,51, 4208-4222. https://pubs.rsc.org/en/content/articlelanding/2022/cs/d2cs00052k

Mary A. Breton, Daniel Schmidt, Andrew Greene, Julien Frougier, and Nelson Felix, “Review of nanosheet metrology opportunities for technology readiness,” J. of Micro/Nanopatterning, Materials, and Metrology, 21(2), 021206 (2022). https://doi.org/10.1117/1.JMM.21.2.021206

Daniel Schmidt, Curtis Durfee, Juntao Li, Nicolas Loubet, Aron Cepler, Lior Neeman, Noga Meir, Jacob Ofek, Yonatan Oren, and Daniel Fishman, “In-line Raman spectroscopy for gate-all-around nanosheet device manufacturing,” J. of Micro/Nanopatterning, Materials, and Metrology, 21(2), 021203 (2022). https://doi.org/10.1117/1.JMM.21.2.021203

Altus Group’s Jim Hannon Has a Big Appetite for Proptech Startups

In April, Jim Hannon ascended to CEO at Altus Group after almost two years as president of Altus Analytics, a subsidiary. He’s looking to continue the company’s long policy of aggressive acquisition of proptech startups that feed its valuation, tax appeal, project management and due diligence platform for real estate investors and owners.

Founded in 2005, the publicly traded, Toronto-based Altus Group was an early proponent of providing real estate technology data as what it calls “intelligence as a service.”

Commercial Observer spoke with Hannon in late July from his home in Naples, Fla., about Altus’ role in the real estate investment and ownership world and about his views on proptech in the near and longer term.

The interview has been edited for length and clarity.

Commercial Observer: With a $2 billion market cap, Altus Group is a huge company in the proptech sector, and one with many services. As CEO, what’s your elevator pitch for Altus?

Jim Hannon: In a nutshell, we’re No. 1 in providing valuations via technology advisory services for commercial real estate. We are the No. 1 or 2 player in the core markets that we serve to make it easier to do tax appeals and have successful outcomes in lowering your taxes and getting better returns out of your assets.

We help developers determine when and where, or if, they should invest. And if they choose to invest, we help them project-manage large investments and development. So those are the things we do: valuation, tax appeal, project management, and due diligence. Our clients are investors, asset managers, developers, lenders, and, for the tax business, property owners.

Is Altus too large, or not large enough, for what you’re trying to accomplish as a technology source for your clients?

That’s an interesting observation. I started my career at IBM, so this doesn’t feel very large to me at all. Actually, it’s a very tight-knit community inside Altus. It came together through acquisitions over the years. But it feels like a tightly focused company from my chair compared to the size of the companies that I’ve been at.

How big is Altus in employees and revenue?

We have 2,600 employees. We’re in a blackout period right at the moment, so I can’t get too specific, but I can tell you that last year we did $625 million Canadian in revenue ($485 million today).

As you mentioned, Altus has grown quite a bit through acquisition. What does that look like these days? Is there more opportunity to acquire proptech startups that fit your platform, or have innovative startup opportunities slowed down?

There’s always opportunity to acquire proptech startups. We keep a close eye on the market, as well as on our capital structure, making sure we’re deploying investments in the right areas. 

Last year, we did three significant acquisitions. We purchased a company in Paris called Finance Active. We’re heavily in the valuation business around equity investments in commercial real estate. Finance Active put us into the debt management side of those investments and it significantly increased the size of our international footprint. 

In March of last year, we bought a company called Stratodem, which gave us an analytics engine and thousands of macroeconomic data points to pull into our advanced analytics. And, in November, we purchased a company in New York City called Reonomy, which gave us a significant amount of data on about 53 million commercial real estate assets in the U.S. It also gave us the underlying technology to link attributes of assets to the drivers of performance. 

This year we purchased a tax technology company called Rethink Solutions, which gave us automated workflow and some predictive analytics capabilities for taxes, as well.

What made those proptech companies attractive to Altus?

On the tax side, we want technology that improves workflow, or improves the predictability of a successful outcome of a tax appeal. In the Canadian market, we’re the No. 1 commercial real estate tax appraisal adviser. So, basically, we help make the process of appealing tax assessment easier. In the U.K., we’re No. 1. 

In the U.S., it’s hard to exactly get the size of the market, but our estimate is that we’re No. 2, but still in a single-digit type of market share. It’s a very fragmented market in the U.S. so acquisitions that can help us automate the processes or predict which assets are going to have the highest probability of a successful outcome is interesting technology for us. It allows us to expand our market to clients who want to self-serve, or have a lighter advisory touch if they choose, or if they want to leverage the expertise of our teams.

On the analytics side, our core franchises have been in commercial real estate valuations — mark to market. We are by far the leaders, whether it’s from a technology perspective with our Argus enterprise, our flagship product, or through our advisory services. As we generate valuations, we throw off a tremendous amount of exhaustive data, which allows us to look at the commercial real estate market and say, “OK, what drove performance of various types of assets?”

How do you see the industry in the midst of so much technological change?

The industry is at an inflection point. It feels very similar to me as financial services did over a decade ago, where there’s fantastic technology and expert services to go along with that technology, to say, “What just happened in the market? How do I get a better understanding of what’s going on around me?” The next step is, “Why did that happen?” We can draw correlations using our analytics technology, especially with our recent acquisitions. Then, most importantly, it’s, “What’s going to happen next? Where should I invest? Why should I invest? And how do I think about asset performance across vast portfolios of investments?” That’s where we were going with our acquisitions last year.

What is the most exciting thing you have found in becoming CEO?

It’s the opportunity to be in front of the whole industry. We’re very early in the adoption curve of advanced analytics, in thinking about the investment side of commercial real estate. There are great firms out there, they have their own data strategies, and some of them are significantly larger than we are. 

But this is what we do: Investment firms should have data strategies, and we’re here to enable those data strategies for them. Putting together assets like Stratodem with Reonomy to create advanced offers, and pairing them with Argus and our advisory business, and even the data we split off in our tax franchise, there’s no other company in the world that has our data set and the potential to change this industry like we do. And it was just too much fun of an opportunity to pass on.

On the demand side, how do your clients view the adoption of proptech?

They’re hungry for it. If we put it in context of today’s economic situation: When you look at rising interest rates and headwinds, that’s going to change investment theses and the way owners think about how they maximize their return on their assets. They are focused on the tenant experience, as they should be. I think that side of the business has as much potential as our side, the investment and performance management side. There’s so much opportunity to improve the services inside buildings and to bring all sorts of technology to bear in this current economic cycle.

It’s even more important to be thinking about productivity, efficiency and differentiation. The various proptech companies that are out there, they’re all coming at it with some angle on that. I think the owners understand that investments in technology are going to enable their future growth and the best outcomes with their tenants. We’re seeing strong demand. We’re in about 100 markets overall in six core countries — Canada, U.S., U.K., Germany, France and Australia — and we see the addressable market for those six countries alone at about a $5 billion opportunity. When you add in the rest of the world, our model says that globally it’s a $10 billion market.

What kinds of data questions are clients asking Argus about?

The first set of conversations that I had with CXO-level folks in the industry were surprisingly to me about just the core management of data. “How do I harmonize data from investments in three different countries to get a portfolio view?” I understood that problem. If this is where they’re at now, even the most advanced ones are still trying to figure out how do they corral their data and look at it on a country or global basis.

Then think about all the various attributes of performance. That’s a core problem across the industry, and the technology we’re building organically with the acquisitions that we executed last year directly addresses that problem.

Is there any particular sector of real estate that you’re concentrating on for your clients, whether it be construction or office or residential?

There’s a blurring of the lines that happens. We stick to our core strategy, which is commercial real estate. However, as investors are moving into single-family residential rentals as a commercial asset class, that changes our perspective on what is commercial. The legacy definitions don’t necessarily hold if you’re looking at it from an investor perspective. So that’s not where our core strength is, but we’re building up those analytics capabilities. 

In our Stratodem acquisition, we actually picked up a tremendous amount of data on macro-residential information, which we built into our models. It informs the performance of commercial real estate assets. Across the classes of commercial real estate, we’re building up data and analytics on all of it. We have our tax practices. We look to target and segment into areas of growth like data centers or green energy.

For the rest of this year, or in the near future, how do you view the adoption and use of technology in real estate, and how will that affect Altus’ strategy?

I have to be careful to not answer a specific question about the rest of the year that could in any way come across as guidance. I’ll talk about the industry in general and our positioning. We’re in a great place. In markets that go up or down, you’re going to have investors either looking to buy or looking to sell. We’ve gone through various economic cycles over the last 15 years, and we are very resilient, because buyers and sellers are looking for that next piece of information to determine what they should do next. 

We’ve been there with expert services, information and analytics capabilities, and the adoption of that technology is accelerating. That puts us in a great place as a trusted partner to many of the world’s largest investors.

Enterprise Knowledge Management System Market 2022 Depth Investigation And Analysis Report On Key Players 2030


Aug 01, 2022 (Alliance News via COMTEX) -- Key companies covered in the Enterprise Knowledge Management System research are Alfanar, Chris Lewis Group, Cisco, Enlighted, GoTo Room, IQBoard, Komstadt, Logitech, Microsoft, Poly, Scenariio, Smart Systems (Smarthomes Chattanooga), Tecinteractive, Bloomfire, Callidus Software Inc., Chadha Software Technologies, ComAround, Computer Sciences Corporation (APQC), EduBrite Systems, EGain, Ernst & Young, IBM Global Services, Igloo, KMS Lighthouse, Knosys, Moxie Software, Open Text Corporation, ProProfs, RightAnswers, Transversal, Yonyx, Glean, IntraFind, TIS Control, Vox Audio Visual, Webex, Yealink and other key market players.

The global Enterprise Knowledge Management System market size will reach USD million in 2030, growing at a CAGR of % during the analysis period.

As the global economy recovers in 2021 and the supply of the industrial chain improves, the Enterprise Knowledge Management System market will undergo major changes. According to the latest research, the market size of the Enterprise Knowledge Management System industry in 2021 will increase by USD million compared to 2020, with a growth rate of %.

Request a free sample of this strategic report: https://reportocean.com/industry-verticals/sample-request?report_id=AR9965

The global Enterprise Knowledge Management System industry report provides top-notch qualitative and quantitative information including: Market size (2017-2021 value and 2022 forecast). The report also contains descriptions of key players, including key financial indicators and market competitive pressure analysis.

The report also assesses key opportunities in the market and outlines the factors that are and will drive the growth of the industry. Taking into account previous growth patterns, growth drivers, and current and future trends, we also forecast the overall growth of the global Enterprise Knowledge Management System market during the next few years.

Types:
On-Cloud
On-Premise

Applications:
SMEs
Large Enterprise

The recent analysis by Report Ocean on the global Enterprise Knowledge Management System Market Report 2021 revolves around various aspects of the market, including characteristics, size and growth, segmentation, regional and country breakdowns, competitive landscape, market shares, trends, strategies, etc. It also covers the impact of the COVID-19 outbreak, accompanied by traces of the historic events. The study highlights the list of projected opportunities, sales and revenue on the basis of region and segments. Apart from that, it also documents other subjects such as manufacturing cost analysis, industrial chain, etc. For better demonstration, it throws light on the precisely obtained data with thoroughly crafted graphs, tables, and bar and pie charts.


Key Segments Studied in the Global Enterprise Knowledge Management System Market

Our tailor-made report can help companies and investors make efficient strategic moves by exploring the crucial information on market size, business trends, industry structure, market share, and market predictions.

Apart from the general projections, our report stands out as it includes thoroughly studied variables, such as the COVID-19 containment status, the recovery of the end-use market, and the recovery timeline for 2020/2021.

Analysis of COVID-19 Outbreak Impact Includes:
In light of COVID-19, the report includes a range of factors that impacted the market. It also discusses the trends. Based on the upstream and downstream markets, the report precisely covers all factors, including an analysis of the supply chain, consumer behavior, demand, etc. Our report also describes how vigorously COVID-19 has affected diverse regions and significant nations.

The report includes:

  • Market Behaviour/ Level of Risk and Opportunity
  • End Industry Behaviour/ Opportunity Assessment
  • Expected Industry Recovery Timeline

For more information or any queries, mail sales@reportocean.com

Each report by Report Ocean contains more than 100 pages, specifically crafted with precise tables, charts, and an engaging narrative. The tailor-made reports deliver vast information on the market with high accuracy. The report encompasses: micro and macro analysis, competitive landscape, regional dynamics, operational landscape, legal setup and regulatory frameworks, market sizing and structuring, profitability and cost analysis, demographic profiling and addressable market, existing marketing strategies in the market, segmentation analysis of the market, best practice, GAP analysis, leading market players, benchmarking, and future market trends and opportunities.

Geographical Breakdown: The regional section of the report analyses the market on the basis of region and national breakdowns, which includes size estimations, and accurate data on previous and future growth. It also mentions the effects and the estimated course of Covid-19 recovery for all geographical areas. The report gives the outlook of emerging market trends and the factors driving the growth of the dominating region to give readers an outlook of prevailing trends and help in decision making.

Nations: Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, China, Colombia, Czech Republic, Denmark, Egypt, Finland, France, Germany, Hong Kong, India, Indonesia, Ireland, Israel, Italy, Japan, Malaysia, Mexico, Netherlands, New Zealand, Nigeria, Norway, Peru, Philippines, Poland, Portugal, Romania, Russia, Saudi Arabia, Singapore, South Africa, South Korea, Spain, Sweden, Switzerland, Thailand, Turkey, UAE, UK, USA, Venezuela, Vietnam

Thoroughly Described Qualitative COVID-19 Outbreak Impact, Including Identification and Investigation of: market structure, growth drivers, restraints and challenges, emerging product trends and market opportunities, and Porter's Five Forces. The report also inspects the financial standing of the leading companies, which includes gross profit, revenue generation, sales volume, sales revenue, manufacturing cost, individual growth rate, and other financial ratios. The report basically gives information about market trends, growth factors, limitations, opportunities, challenges, future forecasts, and the prominent and other key market players.


Key questions answered: This study documents the impact of the COVID-19 outbreak. Our professionally crafted report contains precise responses and pinpoints the excellent opportunities for investors to make new investments. It also suggests superior market plan trajectories along with a comprehensive analysis of current market infrastructures, prevailing challenges, opportunities, etc. To help companies design their superior strategies, this report mentions information about end-consumer target groups and their potential operational volumes, along with the potential regions and segments to target and the benefits and limitations of contributing to the market. Any market's robust growth is driven by its driving forces, challenges, key suppliers, key industry trends, etc., which are thoroughly covered in our report. Apart from that, the accuracy of the data can be verified by the effective SWOT analysis incorporated in the study.

A section of the report is dedicated to the details related to import and export, key players, production, and revenue, on the basis of the regional markets. The report is wrapped with information about key manufacturers, key market segments, the scope of products, years considered, and study objectives.

It also guides readers through segmentation analysis based on product type, application, end-users, etc. Apart from that, the study encompasses a SWOT analysis of each player along with their product offerings, production, value, capacity, etc.

Factors covered in the report:
Major Strategic Developments: The report balances quality and quantity. It covers the major strategic market developments, including R&D, M&A, agreements, new product launches, collaborations, partnerships, joint ventures, and geographical expansion, accompanied by a list of the prominent industry players thriving in the market on a national and international level.

Key Market Features:
Major subjects like revenue, capacity, price, rate, production rate, gross production, capacity utilization, consumption, cost, CAGR, import/export, supply/demand, market share, and gross margin are all assessed in the research and mentioned in the study. It also documents a thorough analysis of the most important market factors and their most recent developments, combined with the pertinent market segments and sub-segments.

List of Highlights and Approach:
The report is made using a variety of efficient analytical methodologies that offer readers in-depth research and evaluation of the leading market players and comprehensive insight into what place they hold within the industry. Analytical techniques, such as Porter's five forces analysis, feasibility studies, SWOT analyses, and ROI analyses, are put to use to examine the development of the major market players.


Points Covered in the Enterprise Knowledge Management System Market Report:

...and more in the complete table of contents.

Thank you for reading; we also provide a chapter-by-chapter report or a report based on region, such as North America, Europe, or Asia.

Observability: Why It’s a Red Hot Tech Term

Recently, IBM struck a deal to acquire Databand.ai, which develops software for data observability. The purchase amount was not announced. However, the acquisition does show the importance of observability, as IBM has acquired similar companies during the past couple of years.

“Observability goes beyond traditional monitoring and is especially relevant as infrastructure and application landscapes become more complex,” said Joseph George, Vice President of Product Management, BMC.  “Increased visibility gives stakeholders greater insight into issues and user experience, reducing time spent firefighting, and creating time for more strategic initiatives.”

Observability is an enormous category. It encompasses log analytics, application performance monitoring (APM), and cybersecurity, and the term has been applied in other IT areas like networking. For example, in terms of APM, spending on the technology is expected to hit $6.8 billion by 2024, according to Gartner.

So then, what makes observability unique? And why is it becoming a critical part of the enterprise tech stack? Well, let’s take a look.

How Observability Works

The ultimate goal of observability is to go well beyond traditional monitoring capabilities by giving IT teams the ability to understand the health of a system at a glance.

An observability platform has several important functions. One is to find the root causes of a problem, which could be a security breach or a bug in an application. In some cases, the system will offer a fix. Sometimes an observability platform will make the corrections on its own.

“Observability isn’t a feature you can install or a service you can subscribe to,” said Frank Reno, Senior Product Manager, Humio. “Observability is something you either have, or you don’t. It is only achieved when you have all the data to answer any question about the health of your system, whether predictable or not.”

The traditional approach is to crunch huge amounts of raw telemetry data and analyze it in a central repository. However, this could be difficult to do at the edge, where there is a need for real-time solutions.

“An emerging alternative approach to observability is a ‘small data’ approach, focused on performing real-time analysis on data streams directly at the source and collecting only the valuable information,” said Shannon Weyrick, vice president of research, NS1. “This can provide immediate business insight, tighten the feedback loop while debugging problems, and help identify security weaknesses. It provides consistent analysis regardless of the amount of raw data being analyzed, allowing it to scale with data production.”
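To make the contrast concrete, here is a minimal sketch of source-side reduction in the spirit Weyrick describes: aggregate raw events where they are produced, and forward only cheap summaries plus the few raw records worth keeping. The event fields, thresholds, and the emit() helper are invented for illustration and are not any vendor's API.

```python
from collections import defaultdict

LATENCY_ALERT_MS = 500   # assumed cutoff for "valuable" raw records
WINDOW_EVENTS = 4        # flush a summary every N events (time-based in practice)

def emit(kind, payload):
    # Stand-in for shipping to a central collector over the network.
    print(kind, payload)

def summarize(stream):
    stats = defaultdict(lambda: {"count": 0, "total_ms": 0.0, "max_ms": 0.0})
    seen = 0
    for event in stream:                 # event: {"endpoint": ..., "latency_ms": ...}
        s = stats[event["endpoint"]]
        s["count"] += 1
        s["total_ms"] += event["latency_ms"]
        s["max_ms"] = max(s["max_ms"], event["latency_ms"])
        if event["latency_ms"] > LATENCY_ALERT_MS:
            emit("outlier", event)       # keep the raw record only when it matters
        seen += 1
        if seen % WINDOW_EVENTS == 0:    # forward the cheap summary, drop the raw data
            for endpoint, agg in stats.items():
                emit("summary", {"endpoint": endpoint,
                                 "count": agg["count"],
                                 "avg_ms": agg["total_ms"] / agg["count"],
                                 "max_ms": agg["max_ms"]})
            stats.clear()

summarize([
    {"endpoint": "/api/users", "latency_ms": 12},
    {"endpoint": "/api/users", "latency_ms": 640},   # forwarded raw as an outlier
    {"endpoint": "/api/orders", "latency_ms": 9},
    {"endpoint": "/api/users", "latency_ms": 15},
])
```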


The Levers for Observability

The biggest growth factor for observability is the strategic importance of software. It’s become a must-have for most businesses.

“Software has become the foundation for how organizations interact with their customers, manage their supply chain, and are measured against their competition,” said Patrick Lin, VP of Product Management for Observability, Splunk. “Particularly as teams modernize, there are a lot more things they have to monitor and react to — hybrid environments, more frequent software changes, more telemetry data emitted across fragmented tools, and more alerts. Troubleshooting these software systems has never been harder, and the way monitoring has traditionally been done just doesn’t cut it anymore.”

The typical enterprise has dozens of traditional tools for monitoring infrastructure, applications, and digital experiences. The result is data silos, which blunt the effectiveness of those tools. In some cases, the blind spots lead to catastrophic failures or outages.

But with observability, the data is centralized. This allows for more visibility across the enterprise.

“You get to root causes quickly,” said Lin. “You understand not just when an issue occurs but what caused it and why. You Boost mean time to detection (MTTD) and mean time to resolution (MTTR) by proactively detecting emerging issues before customers are impacted.”
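The two acronyms are simple averages over incident timestamps; a toy calculation with invented incident data makes the distinction concrete (note that some teams measure MTTR from incident start rather than from detection):

```python
# Toy illustration with invented incidents. MTTD averages the gap between
# when an issue starts and when it is detected; MTTR here averages
# detection-to-resolution (definitions vary by team).
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"
incidents = [  # (started, detected, resolved)
    ("2022-07-01 10:00", "2022-07-01 10:12", "2022-07-01 11:02"),
    ("2022-07-03 14:30", "2022-07-03 14:35", "2022-07-03 15:10"),
]

def minutes(a, b):
    return (datetime.strptime(b, FMT) - datetime.strptime(a, FMT)).total_seconds() / 60

mttd = sum(minutes(s, d) for s, d, _ in incidents) / len(incidents)
mttr = sum(minutes(d, r) for _, d, r in incidents) / len(incidents)
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")  # MTTD: 8.5 min, MTTR: 42.5 min
```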


Observability Challenges

Of course, observability is not a silver bullet. The technology certainly has downsides and risks.

In fact, one of the nagging issues is the hype factor, which could ultimately harm the category. “There is a significant amount of observability washing from legacy vendors, driving confusion for end users trying to figure out what observability is and how it can benefit them,” said Nick Heudecker, Senior Director of Market Strategy & Competitive Intelligence, Cribl.

True, this is a problem with any successful technology, but it means customers definitely need to do their due diligence.

Observability also is not a plug-and-play technology. It requires change management, and you need a highly skilled team to get the most from it.

“The biggest downside of observability is that someone — such as an engineer or a person from DevOps or the site reliability engineering (SRE) organization — needs to do the real observing,” said Gavin Cohen, VP of Product, Zebrium. “For example, when there is a problem, observability tools are great at providing access and drill-down capabilities to a huge amount of useful information. But it’s up to the engineer to sift through and interpret that information and then decide where to go next in the hunt to determine the root cause. This takes skill, time, patience and experience.”

With the growth of artificial intelligence (AI) and machine learning (ML), though, this burden can be eased. In other words, next-generation tools can help automate the observer role. “This requires deep intelligence about the systems under observation, such as with sophisticated modeling, granular details and comprehensive AI,” said Kunal Agarwal, founder and CEO, Unravel Data.
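As a hedged illustration of what automating the observer role can look like, the sketch below flags log lines whose normalized template is rare relative to history. This is one textbook approach, not a description of any particular vendor's AI:

```python
import re
from collections import Counter

def template(line: str) -> str:
    # Mask durations and hex ids so recurring lines collapse into one template.
    return re.sub(r"\d+ms|0x[0-9a-f]+", "<*>", line.lower())

history = Counter(template(l) for l in [
    "GET /api/users 200 12ms",
    "GET /api/users 200 15ms",
    "GET /api/orders 200 9ms",
] * 100)  # pretend this is weeks of logs

def surprising(new_lines, min_seen=5):
    """Return lines whose template was (almost) never seen before."""
    return [l for l in new_lines if history[template(l)] < min_seen]

print(surprising(["GET /api/users 500 3012ms",   # new template -> flagged
                  "GET /api/users 200 14ms"]))   # familiar -> ignored
```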


Killexams : The risky new way of building mobile broadband networks, explained by Rakuten Mobile CEO Tareq Amin

In 2019, the Trump administration brokered a deal allowing T-Mobile to buy Sprint as long as it helped Dish Network stand up a new 5G network to keep the number of national wireless carriers at four and preserve competition in the mobile market. You can say a lot about that deal, but it happened. And now, in 2022, Dish’s network — which is called Project Genesis, that’s a real name — is slowly getting off the ground. And it’s built on a new kind of wireless technology called Open Radio Access Network, or ORAN. Dish’s network is only the third ORAN network in the entire world, and if ORAN works, it will radically change how the entire wireless industry operates.

I have wanted to know more about ORAN for a long time. So today, I’m talking to Tareq Amin, CEO of Rakuten Mobile. Rakuten Mobile is a new wireless carrier in Japan. It just launched in 2020. It’s also the world’s first ORAN network, and Tareq basically pushed this whole concept into existence.

Tareq’s big idea, an Open Radio Access Network, is to break apart the hardware and software and make it so that many more vendors can build radio access hardware that Rakuten Mobile can run its own software on. Think about it like a Mac versus a PC: a Mac is Apple hardware running Apple’s software, while a PC can come from anyone and run Windows just fine or run another operating system if you want.

That’s the promise of ORAN: that it will increase competition and lower costs for cellular base station hardware, allow for more software innovation, and generally make networks faster and more reliable because operators like Rakuten Mobile will be in tighter control of the software that runs the networks and move all that software from the hardware itself to cloud services like Amazon AWS.

Since Rakuten Mobile is making all this software that can run on open hardware, they can sell it to other people. So Tareq is also the CEO of Rakuten Symphony, which — you guessed it — is helping Dish run its network here along with another network called 1&1 in Germany.

I really wanted to know if ORAN is going to work, and how Tareq managed to make it happen in such a traditional industry. So we got into it — like, really into it.

Okay, Tareq Amin, CEO of Rakuten Mobile. Here we go.

Tareq Amin is the CEO of Rakuten Mobile and the CEO of Rakuten Symphony. Welcome to Decoder.

Thank you, Nilay. Pleasure being with you.

I am excited to talk to you. Rakuten Mobile is one of the leaders in this next generation of wireless networks being built and I am very curious about it. It is in Japan, but we have a largely US-based audience, so can you explain what Rakuten is? What kind of company is it, and what is its presence like in Japan?

The Rakuten Group as a whole is not a telecom company, but mostly an internet services company. It started as one of the earliest e-commerce technology companies in Japan. Today, it is one of the largest in e-commerce, fintech, banking, travel, et cetera. These significant internet services were primarily built around a massive ecosystem in Japan, and the only missing piece for Rakuten as a group was the mobile connectivity business. That is why I came to Japan, to help build and launch a disruptive architecture for its mobile 4G/5G network.

Let me make a really bad comparison here. This company has been a huge internet services provider for a while. This is kind of like if Yahoo was massively successful and started a wireless network.

Correct. I mean, think of Amazon. What would happen if Amazon launched a mobile network in the US? This is the best analogy I could give, because Rakuten operates at that scale in Japan. This company with a disruptive mindset, disruptive skill set, disruptive culture, and disruptive organization endorsed my super crazy idea of how we should build this next-generation mobile infrastructure. I think that is where I attribute most of the success. The company’s DNA and culture is just remarkably different.

So it’s huge. How is it structured overall? How is Rakuten Mobile a part of that structure?

Of all the entities today, I think the founder and chairman of the company, Mickey [Hiroshi “Mickey” Mikitani], is probably one of the most innovative leaders I have ever had the opportunity to work with. I cannot tell you how much I enjoy the interactions we have with him. He is down to earth and his leadership style is definitely hands-on; he doesn’t really operate at a high level.

The fundamental belief of Rakuten is around synergistic impact for its ecosystem. The company has 71 internet-facing services in Japan — we also operate globally, by the way — and you as a consumer have one membership ID that you benefit from. The points/membership/loyalty is the foundation of what this company works on. Regardless of which services you consume, they are all tied through this unique ID across all 71.

The companies and the organizations internally have subsidiaries and legal structures that would separate all of them, but synergistically, they are all connected through this membership/points/loyalty system. We think it is really critical to grow the synergistic impact of not just one service, but the collective services, to the end consumer.

Today, Rakuten Mobile is a subsidiary of the group, and Rakuten Symphony is more focused on our platform business. It focuses on the globalization of the technology and architecture we have done in Japan, by selling and promoting to global customers.

When you say Symphony, do you mean the wireless network technology or the technology of the whole company?

Symphony itself is much more than just wireless. Of course, it has Edge Cloud connectivity architecture, the wireless technology stack for 4G/5G, and life cycle management for automation operations. In August of last year we launched Rakuten Symphony as a formal entity to take all the technology we have now and promote it to a global customer base.

I think one of the reasons you and I are having this conversation is because Dish Network in the United States is a Symphony customer. They are launching a next-generation 5G network and I have been very curious about how that is going. It sounds like Symphony is a big piece of the puzzle there.

To provide you a bit of background, maybe we should start with the mobile business in Japan, because it is the foundation this idea initially started from. So, I would tell you, I have had a super crazy life. I am really blessed that I had the opportunity to work with amazing leaders and across three continents so far. My previous experiences before coming to Japan, which involved building another large greenfield network in India called Reliance Jio, have taught me quite a bit.

To be very frank with you, it taught me the value of the US dollar. When you go into a country where the economy of units — how much you could charge a consumer — is one to two US dollars, the idea of supply chain procurement and cost has to change. You have to find a way to build cost-efficient networks.

The launch of Reliance Jio was very successful and became a really good Cinderella story for the industry. I am extremely thankful for what Jio has taught me personally, and I have always wondered what I would do differently if I had a second opportunity to build a greenfield.

To provide everybody listening to this podcast some perspective, the mobile technology industry has been about nothing but hardware changes since the inception of the first 1G in 1981. You just take the old hardware and replace it with new hardware. Nothing has changed in the way we deploy networks when the Gs change, even now in 2022. It is still complex and expensive, and I don’t think the essence of AI and autonomy exist in the DNA of these networks. That is why when you look at the cost expenditures to build new technology like 5G, it is so cost-prohibitive.

It was by coincidence that I met the chairman and CEO of Rakuten group, Mickey Mikitani, and I loved everything that Rakuten is all about. Like most people, I didn’t necessarily know who Rakuten was at the time. I only knew of them because I love football (soccer) and they were a big sponsor of FC Barcelona.

When Mickey started explaining the company fabric to me, about its DNA and internet services, I thought about what a significant opportunity he would have if he adopted a different architecture in how these networks are deployed — one that moves away from proprietary hardware. What would happen if we remove the hardware completely and build the world’s first, and only, cloud-native software telco?

Let me be really honest with you, this was just in PPT at the time. I conceived the idea thinking about what I would do differently if I were granted another opportunity like Reliance Jio. One of the first key elements I wanted to change is adopting this unique cloud architecture, because nobody had really deployed an end-to-end horizontal cloud across any telco yet.

The second element — which you have probably heard of because the industry has been talking about it excitedly — is this thing called Open RAN, which is the idea of disaggregating hardware and software. The third element, my ultimate dream, is the enablement of a full autonomous network that is able to run itself, fix itself, and heal itself without human beings.

This is the journey of mobile, and I think this is what differentiates us so much. I can’t say I had a recipe that defined what success would look like, but I was obsessed. Obsessed with creating a world-class organization with a larger ecosystem, and getting everybody motivated about this concept that did not exist four years ago.

Now here we are, post commercial launch. The world is celebrating what we have done. They like and enjoy the ideas around this disaggregated network, and they love the concept of cloud-native architecture. What I love the most is that we opened up a healthy debate across the globe. We really encourage and support what Dish is doing in the United States by deploying Open RAN as an architecture. I think this is absolutely the right platform to build resilient, scalable, cost-effective mobile networks for the future.

That is the high-level story of how this journey started with a super crazy, ambitious idea that nobody thought would succeed. If you go back four years to some of the press releases that were published, I cannot tell you how many times I was told I’m crazy or that I’m going to fail. As I said, we became fanatic about this idea, and that is what drove us all to emotionally connect to the mission, the objective. I am very, very happy to see the results that the team has achieved.

I want to take that in stages. I definitely want to talk about Jio, because it is a really interesting foundational element of this whole story. I want to talk about what you have built with O-RAN, and how that works in the industry. I also want to talk about where it could go as a platform for the network providers. But I have to ask you the Decoder question first. You have described your ideas as super crazy like five times now. You are the CEO of a big wireless provider in Japan, and you are selling that stuff to other CEOs. I have to ask you the Decoder question. How do you make decisions?

I know this might sound a little controversial, but I have to tell you. In any project I have taken, even from my early days, we have always been taught that you have to have a Plan A and a Plan B. This has never worked for me. I have a concept I call, “No Plan B for me.”

I don’t go in thinking, “This project will fail, therefore I need to look at alternatives and options,” so I am absolutely not thinking about making big, bold decisions. I live by a basic philosophy that it is okay to fail sometimes, but let’s fail fast so we can pick ourselves up and progress. I am not saying people shouldn’t have Option A and Option B. I just feel that, for me personally, Option B might provide my mind the opportunity to entertain that there is an escape clause. That may not necessarily be a good thing when working on ambitious projects. I think you need to be committed to your beliefs and ideas.

I have made some tough calls during my career, but for whatever reason, I have never really been thinking about the consequences of failure. Sometimes we learn more from the mistakes we make and from having difficult experiences, whether they are personal or professional. I think my decision-making capability is one that is very bold, trying to make the team believe in the objectives that we are trying to accomplish and not worrying about failure. Sometimes you just need to be focused on the idea and the mission. Yes, the results are important, but that is not the only thing I am married to.

This is how I have operated all my life, and so far, I am really happy with some of the thinking I have adopted. I am not saying people should not have options in their lives, but this idea of “no Plan B” has its merits in certain projects. How can you adapt your leadership style when approaching projects, rather than thinking, “What is the other option?”

I think with deploying millions upon millions of dollars of mobile broadband equipment, it often feels like you have got to be committed. Let’s talk about that, starting with Jio. If the listeners don’t know, Reliance Jio is now the biggest carrier in India. It is extremely popular, but it launched as a pretty disruptive challenger against other carriers of 4G like Airtel. You just gave it away for free for like the first six months, and it has been lower-cost ever since. This is not the new idea though, right? It is not the open hardware-software disaggregated network that you are talking about now. How did you make Jio so cheap at the beginning?

I will tell you a one-minute prelude. I was sitting very comfortably in Newport Beach when I got a call from my friend. He asked me if I would be interested in going to India and being part of a leadership team to build this ambitious, audacious idea for a massive network at scale, in a country that has north of 1.3 billion people. My first reaction was, “What do I know about India? I have colleagues, but I have never really been there.”

It seemed like an interesting opportunity, and he encouraged me to go meet the executive leadership team of Reliance Jio. I remember flying to Dallas to have a conversation with three leaders that I didn’t really know at the time. One of them in particular, I have to tell you, the more he talked, the more I just wanted to listen. I was amazed by his ambition for what he wanted to achieve in the country.

What was his name?

Mukesh Ambani. I have learned quite a bit from him. India was ranked 154th in the world in mobile broadband penetration before Reliance Jio. The idea was, “Can we assemble an organization that brings ubiquitous connectivity anywhere and everywhere you go across the country? Can 1.3 billion people benefit from this massive transformation that offers cutting-edge services?”

At the time, LTE was the service that Jio launched with. I was really amazed by this ambition and how big it was. I said, “This is an opportunity I just cannot pass up.” It was much bigger than the financial reward; it was an opportunity of learning and understanding. I truly enjoy meeting different cultures. The more I interact with people from different parts of the world, the more it fuels the energy inside me.

So I picked myself up and I moved to India. I landed in the old Mumbai airport, and when I powered on my device, I saw a symbol I hadn’t seen in the US for a decade — 2G. I knew the opportunity Jio had if we did this right. I mean, think about it. 2G. What is really the definition of broadband? 256 kilobits per second? That’s not internet services. The foundation of Jio started with this.

I will tell you the big things that I have learned. Most people think the way you achieve the best pricing is through a process called request for proposals and reverse auctions, to bring vendors and partners to compete against each other. Sometimes there is a better way to do this. You find larger companies where the CEOs have emotion and connection to the idea that you are building, and are willing to work with you as a true partner.

One of the key, fundamental pillars I learned from Jio is that not everything is about status quo. How you run provider selection, vendor selection, or requests for proposal, everything starts from the top leadership of partners you select. They need the ability to connect with the emotional journey — because it is an emotional journey after all — to do something at the scale of what Jio wanted to do. One of the biggest lessons I learned is the process of selecting suppliers who are uniquely different.

In terms of building a network at a relatively low cost, I will explain how this Open RAN idea came in. During my tenure at Jio, I really started thinking that in order to build a network at scale, regardless of how cheap your labor is, you need to fundamentally change your operating platforms for digitization. Jio would have north of 100,000 people a day working in the field, deploying sites. How do you manage them — provide them tasks, check on the quality of installation they do, and audit the work before you turn up any of the base stations, sites, or radio units?

I have driven this entire digitization and the digital workflows associated with it to connect everybody in India, whether it is Jio employees, contractors, or distributed organizations. Up to 400,000 people at any instant of time would come to the systems that my team has built. That changed everything. It changed the mentality of how we drive cost efficiency and how we run the operations.

This is where I would tell you that big building blocks started formulating in my mind around automation and its impact on operational efficiency, if you approach it from a fundamentally different point of view than the legacy systems you find in other telcos. Because of the financial pressure on what we call the average revenue per user (ARPU), which is the measurement of how much you charge a mobile customer, I wanted to find a different way to deploy the network.

When you build a network like Jio that has to support 1.3 billion, it’s not just about these big, massive radio sites you deploy. We need things called small cells, which are products that look like Wi-Fi access points, but you deploy lots of them to achieve what we call a heterogeneous design, a design that has big and small sites to meet capacity and coverage requirements.

I prepared an amazing presentation about small cells to the leadership team of Jio and I thought I knocked it out of the park. But then I was asked a question I have never heard in my life. Imagine! I am a veteran in this industry and have been doing this for a very long time. Someone said, “Tareq, I love your strategy. Can you tell me who the chipset provider is for the small cell product?” I’m like, “What are you talking about?” I have never been asked such a question by any operator that I have ever worked for outside of India.

I was told, “Look, Tareq, money doesn’t grow on trees in India. You need to know the cost. To know the cost, you must understand the component cost.” That was the first building block. I said, “Okay, next time I come to this meeting, I am not going to be uneducated anymore.”

I took on a small project which, at the time, did not seem audacious to me. I said, “Look, if I go to an electronics shop in the US, like a Best Buy, I could buy a Wi-Fi access point for $100. If I buy an enterprise access point from a large supplier, it costs $1,000.” I wanted to know what the difference is, so I hired five of the best university graduates one could ask for, and I asked them a trivial question. “Open both boxes, write the part numbers.” I had a really great friend at Qualcomm, and I remember this gentleman saying, “Tareq, you are becoming too dangerous.”

Right. You are the network operator. You’re their margin.

That is where everything started clicking for me. The chairman of Jio was not afraid to think the way I wanted to think, so I told him, “Look, I want to build our own Wi-Fi access point. If we buy an access point at $1,000, I am now convinced I could get you an access point at sub-$100.” A year later, the total cost of the Wi-Fi access point we built in Jio was $35.

This delta between $1,000 and $35 translates to a substantial amount of money saved, and it started by disaggregating everything. That is how Jio enabled its cost structure; it was able to offer service for free because it had amazing partnerships with suppliers that secured great business terms. Simplification of technology, LTE only, and an amazing process for network rollout all played huge factors in lowering the cost and economics for Jio.

Let me ask you more about that. Jio is a transformative network, and is now obviously the most popular in India. You were able to offer a much lower-cost product than the traditional cell providers with what sounds like very clever business moves. You went and negotiated new kinds of provider agreements and you said, “We have to actually integrate our products, find lower chips at cost, and make our own products. We have to build a new, efficient way to deploy the network with our technicians.”

To your credit, those are excellent management moves. At their core though, they are not technology moves. Now that you are onto Rakuten and saying you are going to build O-RAN, that is a technology play. Broadly, it sounds like you are going to take the management playbook that made Jio work, and now you are lowering costs even further with the technology of O-RAN — or you are proving out a technology that will one day enable further lower costs.

There were two things I could not do in Jio, and it’s not really anybody’s fault, the timing just wasn’t right. If you look at building a mobile network, I think everybody now more or less understands that you need antennas, base stations, radio access, and core network infrastructure. But unless you are in this industry, you don’t realize the complexity of the operation tools that one needs in order to run and manage this massive, distributed infrastructure.

The first thing I wanted to change in Jio is the traditional architecture. This management layer is called OSS [operation subsystems], and it is archaic, to put it politely. If you work in an adjacent vertical industry such as hyperscalers, an internet-facing company, you will be scratching your head saying, “I cannot believe this is how networks are managed today.”

Despite the elegance of the Gs and changing from one to five, the process of managing a network is as archaic as you could ever imagine. True customer experience management is still a dream that nobody has enabled. The first thing I wanted to do is to change the paradigm of having thousands of disaggregated toolsets to manage a network into a consolidated platform. It was an idea that I couldn’t drive in Jio, and I will tell you why it matters even more than Open RAN: these are the building blocks for a new architecture, the next generation of OSS.

If we build these operation platforms on a modern architecture that supports real-time telemetry, the idea is to get real-time information about every element and node in your network. Being able to correlate that data and apply AI and machine learning on top of it requires modern platforms. It is so critical to my dream.

Our success will not be celebrated because of Open RAN, but because of the grander vision: having Rakuten talked about as a company that did for autonomy in mobile networks what Tesla has done for the electric vehicle industry. Autonomy in mobile networks is an absolutely amazing opportunity to build a resilient and reliable network that has better security architecture and does not need the complexity of the way we run and manage networks today. That was the first building block.

The impact of these big building blocks is massive. Here is the second thing I couldn’t do in Reliance Jio at the time. If you look at a pie chart on the cost structure for mobile networks, you may say, “Where do we spend money?” Regardless of geography, regardless of country, 70 to 80 percent of your spending always goes into this thing called radio access. Radio access today has been a private club that is really meant for about four or five companies, and that’s it. There is no diversification of the supply chain. You have no option but to buy from Ericsson, Nokia, Huawei, or ZTE. Nobody else could sell you the products of radio access.

The radio access products are the base stations?

Correct. Those are the base stations.

Which are the components of the cell tower?

Yes, and they contribute to about 70 percent of the capex [capital expenditure]. They are the one area that no startup has ever embraced and said, “You know what? Why don’t we try to disaggregate this? Why don’t we start to move away from the traditional architecture for how these base stations are deployed? Instead of running on custom hardware, custom ASICs, let’s use true software that runs on commodity appliances equivalent to what you would find inside data centers.”

This concept has been talked about, but nobody was willing to take the risk in any startup. Maybe it was the belief that your job is secure if you pick a traditional vendor. That is what I was thinking through, four years ago.

This is like “Nobody ever got fired for buying IBM.”

Something like that.

Let me ask you this. Is it because the initial investment is so high? There are not many startup wireless networks in the world. When they do start, they need an enormous amount of capital just to buy the spectrum. Are the stakes too high to take that kind of risk?

I think as an industry, we make the mistake of not rewarding and supporting startups the way we should. Our ability to incubate and build a thriving ecosystem that is built on new innovations, ideas, and startups is still a dream. I do not think anyone in telecom would argue with that. The reality is that everybody wants to see it happening, but we are just not there yet.

It was complex to do what we did in Japan. It was not simple, nor was it easy. When you have a running network carrying massive amounts of traffic, of course there are risks that you are going to have to take. The risk in that case is ensuring that you don’t disrupt your running base with poor quality services. Maybe the fear in people’s minds is that this technology is not ready, or integrating it into their networks is too complex, or they don’t have the right skillset to go into a software-defined world where they will need to upscale or hire new organization.

You said that right now the four vendors are Ericsson, Nokia, Huawei, and ZTE. You have moved to Open RAN, open radio access, in Japan. Do you have more vendors than those four? Are you actually using commodity hardware with the software-defined network? Or is it still those four vendors but now you can run your code on them?

The foundation of success for Rakuten Mobile today started with Rakuten acquiring one of the most disruptive companies in this Open RAN space. We bought a company in Boston called Altiostar, and I thought they had everything one could dream about, except nobody was willing to give them a chance. I diversified my hardware supply chain and purchased hardware through 11 suppliers. I mandated where manufacturing could happen, in terms of product, security, and chipsets. Also, the era that we entered focused on heightened security, especially around 5G. I felt really good about our ability to control manufacturing and the supply chain.

The software Altiostar provided was the radio software for this entire open radio access network in Japan. Altiostar software is now running over 290,000 radiating elements. I mean, this is massive; it serves 98 percent population coverage of Japan.

I give huge credit to the large vendors. Nokia had a very big internal debate when I told them, “I want to buy your hardware, but not your software.” I know their board had to approve it, but this is the beauty of software disaggregation. Now I buy the hardware from Nokia, and Altiostar runs the radio software on that platform. We now have a diversified supply chain and we are no longer counting on just four hardware suppliers. We have a common software stack. And the big building block, this OSS, has enabled our own platforms and tools.

Rakuten has purchased Altiostar from Boston. We have purchased an innovative cloud company in Silicon Valley called Robin.io for our Edge Cloud. We have purchased the OSS company called InnoEye and formulated this integrated technology stack that is now part of Rakuten Symphony.

You have described Rakuten’s network as being in the cloud several times. Very simply, what does it mean for a wireless network to be cloud-based?

To provide you an image, four years ago I was asked to do a keynote in Japan on my first day there. Thanks to my translator, I think people understood the concepts I was explaining to them. I said, “Here is an image of what we don’t want to build.”

If I show you how voice, video, and messaging are delivered, most telecom networks across the world, even today, are still running on boxes of hardware. Having a cloud network means that your workloads move away from proprietary implementations to network functions that are pure software components. These software components run with the beauty of what are called microservices, and with the elegance of things the cloud inherently supports, like capacity management, auto-elasticity, scale in, and scale out.

This is basic terminology. I’m not telling you about things that have been invented by Rakuten Mobile. It is thanks to Google, Microsoft, and Amazon, who have innovated like crazy on the cloud. I have just benefited from the innovation that they have done to deliver on scalability, resiliency, reliability, and a cost efficiency that one could never have imagined.

When it comes to the cost, this is a hyper-efficient operating structure. There are 279,000 radiating elements, and the operational headcount at Rakuten Mobile is still below 250 people.

That’s crazy.

As the network grows, there is no direct proportionality between the number of units in the network and the number of employees running it. There is absolutely no direct correlation whatsoever anymore. To me, that is what cloud is all about. All the things on top of it are modules that you need in order to reach the operational efficiency that we achieved in Japan.
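(To make “auto-elasticity” and “scale in, scale out” concrete: below is a minimal sketch of the proportional scaling rule cloud platforms apply to stateless workloads. The function, thresholds, and bounds are invented for illustration; a real cloud-native network function would lean on its platform's autoscaler rather than hand-rolled code like this.)

```python
def desired_replicas(current: int, utilization: float,
                     target: float = 0.6, lo: int = 2, hi: int = 50) -> int:
    # Scale the replica count so per-replica load approaches the target,
    # clamped to assumed minimum/maximum bounds.
    return max(lo, min(hi, round(current * utilization / target)))

print(desired_replicas(current=10, utilization=0.9))  # -> 15, scale out
print(desired_replicas(current=10, utilization=0.3))  # -> 5, scale in
```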

From an end user perspective, you have now architected this network differently. You have created a small revolution in the wireless industry from the provider level, where you can buy any hardware from 11 suppliers and run your software on it. Does the end user see an appreciable difference in quality? Or does it just lower the cost?

There is a huge difference from the end user point of view. One of the key reasons that Rakuten was encouraged and supported was that we were determined to enter the mobile segment in Japan. We felt that competition was stagnant, and the cost per user was among the highest in the world.

To benefit the end consumer, we took a chapter from Jio’s strategy on lowering the cost burden. We did something that was very simple. At the time, the average plan rate in Japan was sitting at about $100 US per user. We dropped that to $27 US, unlimited, no caps. When you go inside our stores, we changed everything. We said, “Look, you don’t need to think about the plans. There is only one plan. That’s it.”

From a choices point of view, we made life super simple. We bundled local, we bundled international, we bundled everything under one local plan, and we tied it synergistically to the larger ecosystem of Rakuten. You acquire points as you buy things on e-commerce, as you buy things on our travel website, as you buy things from Rakuten Energy, or as you subscribe to Rakuten Bank. You could then use these points to pay off your cellular bill. The $27 could effectively be zero, because of the synergistic impact of other services you consume in Rakuten and the points you acquire from all of them.

Would Rakuten Mobile be profitable at $27 a customer? Is it being subsidized by the larger Rakuten?

We have to be profitable. Spectrum is not auctioned in Japan; we are allocated spectrum, but there are conditions attached. You cannot just run a business that is not profitable standalone. So we will break even in Rakuten Mobile and make it stand alone.

The way I think about it, it is not subsidized by the ecosystem. If I acquire you as a mobile customer, you will potentially buy from e-commerce or travel as well, so I am using connectivity to power purchases across these 70-plus internet services. We are actually contributing to the larger group. As long as the total top-line revenue increases because of mobile’s contribution, the group as a whole is going to be in good shape.

Even with standalone mobile, we are committed to our break-even point. We need to make it a profitable standalone business. The group as a whole has remarkable synergistic impact in our business. That is the benefit in value.

Now there is another benefit in the network architecture. Today we talk about the essence of marketing around Edge. The definition is so simple: it is all about bringing content as close to your device as humanly possible. I would always argue that if you have nothing but virtual machines or network functions that are software, the ability to move these software components from large data centers all the way to the Edge is trivial. Reallocating hardware is far more complex.

When the Edge use cases in Rakuten Mobile get delivered, you are hopefully going to hear some very amazing news about the lowest latency in the world delivered over the 5G network. This is the beginning of what is possible for new use cases for the consumer.

Think of cloud gaming. It has never been successful, at least in wireless, because networks could not sustain the latency that it would require. Speed, in my opinion, is a stupid metric to talk about. We should talk about latency, latency, latency! How do you deliver sub-4-millisecond latency on a wireless network?

It hasn’t happened yet on licensed spectrum, but I think you are going to see it very soon. There is an advantage to this software architecture and the creation of new age applications for cloud gaming. Even as we talk, people are getting excited about the metaverse, which will need these use cases to come alive in the mobile fabric.

So you have talked about Open RAN, how you have built it, how you have architected the network for Rakuten Mobile, how you have new software layers, and how you have new hardware relationships. You are also the CEO of Rakuten Symphony, which is the company inside Rakuten that would then license all these components to other vendors. Dish Network in this country is one of those providers, and they are at the beginning stages of trying to build a brand new greenfield Open RAN 5G network. If you were going to build an Open RAN network in the United States, how would you do it?

My focus would probably be a lot different than many people would think. It is not about technology. I have never in my life approached a problem where I think technology is the issue. We do not provide ourselves enough credit for how creative we are as human beings and our ability to solve complex problems.

The first thing I would start with is structure, organization, and culture. What is the culture you need to have to do amazing, disruptive things? When I moved to Japan, I didn’t know anything about it. I always knew that I wanted to visit, but I didn’t know about the complexities and challenges I would have to face. I mean, imagine being in the heart of Tokyo, being largely driven and supported by an amazing leadership team that says, “The world is your canvas, hire from anywhere.”

I have brought in 17 nationalities — relocated, not as expats, as full-time employees in our office in Japan. Being this diversified, multicultural organization was the key. I did my own recruiting and handpicked my team. My focus was initially to find people with the spirit of warriors, who were willing to take on tough challenges and the bruises that came along with them, and who would not get discouraged by people telling them something would not work.

Long story short, I would not build a network that has looked the same for 30 years. I would not build a network just because Rakuten has done it this way. I think networks of the future must have this essence of software and must have autonomy built into its DNA. This is not just about Open RAN, this is a holistic approach for fundamental transformation in the network architecture.

I ask this question a lot and the answers always surprise me. Most companies that I think of as hardware companies, once they make the investment in software, they end up with more software engineers than hardware engineers. Is that the case for you?

I have no hardware engineers at all. None. I think from the beginning, this was done by design. I knew that I could create an ecosystem in hardware, and I don’t want to be in the hardware business. From a fundamental business model, I had enough credible relationships in this industry to cultivate and create an ecosystem for people that just enjoy being in hardware design. But that is not us; it is not our fabric, not our DNA.

The more I look at the world, the more I see the success of companies that have invested heavily into the right skill sets, whether it is from data science, AI, ML, or the various software organizations that they have built. This is what I thought we needed.

If you go to Rakuten Symphony’s largest R&D center in India, we now have over 3,500 people that only do software. To me, that is an asset that is unprecedented in terms of the extent of capability, what we could build, what we could deliver, and the scale that we could deliver at. I don’t want to invest in hardware. I just think that it is not my business.

Our investment is all about platform. I really enjoy seeing the advancements that we have enabled, though we are still early in this journey. I have a lot of other things I want to accomplish before I say that Symphony has succeeded.

Symphony is a first-of-its-kind company, since it is going to sell a new kind of operating platform to other carriers. Do you have competitors? Do you see this being the next turn of the wireless industry? Are we going to see other platform integrators like Symphony show up and say to carriers, “Hey, we can do this part for you. You can focus on customer service or fighting with the FCC or whatever it is that carriers do”?

To be very honest with you, I love the idea of having more competitors in this space. It challenges my own team to stay on top of their toes, which is really good. At the same time, having more entrants come into the space would help me cultivate the hardware ecosystem today.

Symphony is uniquely positioned; there are not a whole lot of people that could provide the integrated stack that Symphony has. Symphony’s biggest advantage is that it has a running, live lab carrying a large commercial customer base called Rakuten Mobile. Nobody tells me, “Don’t do this or that on Rakuten Mobile.” I could do disruptive ideas or disruptive innovation, and test and validate new products and technologies before giving them to anybody else.

It’s good to be the CEO of both.

I know. This is one of the reasons I accepted and volunteered. I thought for the short term, it would be important to be able to control these two ecosystems, because Japan is a quality-sensitive market. If I build a high-quality network, nobody will doubt whether Symphony’s technology stack is credible, scalable, reliable, or secure. We are uniquely positioned because of our ability to deliver on a robust automation platform, Open RAN software technology architecture, and innovative Edge Cloud software.

I don’t see many in the industry that have the technology capabilities today that Symphony offers. People have bits and pieces of what we have, but when I look at the integrated stack, I’m really happy to see that we have some unique intellectual properties and IPs that are remarkably differentiated from the market today.

So Dish is obviously a client. We will see how their network goes. Are you talking to Telefónica, Verizon, and British Telecom? Are they thinking about O-RAN in this way?

Since it’s public in the US, I can talk about it. As I mentioned before, it is not just about the O-RAN discussion for me, it is about the whole story. We announced in the last Mobile World Congress that AT&T is working with Rakuten Symphony on a few disruptive applications around the digital workflow on the operation for wireless and wireline, the same as Telefónica in the UK and Telefónica in Germany. Our first big breakthrough was an integrated stack.

In the heart of Europe, in Germany, we are the provider for a new greenfield operator called 1&1. I told the CEO of 1&1 that my dream is to build Rakuten 2.0 in Germany, so we are building the entire fabric of this network. It has been an amazing journey to take all the lessons learned from Japan and be able now to bring them to Germany. We are in the early stages, but I am really optimistic to see what the future will hold for Open RAN as a whole for Symphony.

Rakuten Mobile and Rakuten Symphony have opened a well-needed, healthy debate in the industry about radio access provider alternatives and diversification that we need in order to move away into a software-driven network. We feel that is a big accomplishment for us.

As you build out the O-RAN networks, one thing that we know very well in the United States is that our handset vendors — Apple, Samsung, Google, Motorola — are very picky about qualifying their devices for networks.

Oh yes.

Is there a difference in the conversation between a traditional network and an O-RAN network, when you go and talk to the Apples and Samsungs of the world?

Yes. Before we were approved as a mobile company to be able to sell their devices — I have to tell you, it was a pleasure working with the likes of Apple. I’m being really honest about this; I really liked it. Their bar for quality was really high, as was their process for accepting and certifying the quality of a network. I thought that if we got the certification we needed from them, that would be another third-party audit; I would have cleared a big quality hurdle.

The Apple engineering team is really strong. They really understood the technology, which was great. There are a lot of fascinating facets to it. However good the relationship was, we had to pass a set of KPIs and metrics for device certification. This was not trivial. I went through the same journey with Jio, so I had some idea of the acceptance bar set by large device manufacturers. I also knew that this is a process of identifying issues, solving them, coming back to the device vendors, and continuing to iterate on improving the quality.

I went through the same journey at Rakuten Mobile. Just after our commercial launch, we got our commercial certification to sell Apple devices, and that was a big relief for all of us. A big relief, because it means that we had reached a quality level that they deemed minimally acceptable to carry the device.

Of course we monitor the quality every day, so I’m really happy that we have done this. We have proven that the Open RAN network, especially the software that we have built in Japan, is running with amazing reliability. Rather than celebrating our courageous attempt to do something good for everybody, the early days of our journey were all about skepticism. Like, “This will not work. This will not work.”

Was Apple more skeptical of your network going into tests than others since the technology is different?

The device vendors were very supportive. The skepticism came from the fear, uncertainty, and doubt spread by traditional OEMs and vendors who wanted to tell everybody that this technology is horrible. It got to such an extent that I ignored everything. I still do today. You cannot argue with the benefit cloud brought to IT and the enterprise. There is an indisputable benefit. When it comes to telco, why would you argue against the advantage and benefit of moving all your workloads to the cloud?

I think this debate is ending, and it is ending much quicker and in a better place for everybody. I have huge admiration for what Apple has done. It’s a really impressive company. The more that we continue to engage with them, the more we can tell that this company is obsessed with quality. I thought if we cleared the hurdle of getting their acceptance, then it shows another validation for us that we are running a high-quality network. They are a strategic, critical part of our provider ecosystem today in Japan.

Let me flip this question around real quick. One of my favorite things about the Indian smartphone market is how wide open it is on the device side. This is something that happened after Jio rolled out, but I was friends with a former editor of Gadgets 360 in India, Kunal Dua, and he told me, “My team covers 12 to 15 Android phone launches a week.”

The device market is wide open, you can connect anything, there are dual SIMs, and the real consumer experience of picking a phone is of unlimited choice. That is not the case in the United States or in other countries. What do you think the benefits of that are? I am quite honestly jealous that there is that much choice in that market.

I think a couple of things really benefit India quite a bit. When you have massive volume, vendors are eager to enter the market. Certain things have changed in Japan as well. Government policies are mandating support for open device ecosystems.

In our case, we even told them that 100 percent of our device portfolio will support eSIM, which gives you the ability and flexibility to switch carriers within one second. You can just say, “Oh, I don’t like this. I like this.” The freedom of choices is just unparalleled. We, as Rakuten Mobile, changed the business model. We said, “Look, we will enable eSIM. There are no fees for termination of contracts. There are no fees for anything. If you don’t like us, you can leave. If you do like us, you are part of our family.”

We made it really simple, because it is a dream for us to build an open ecosystem. We are trying to see if it is relatively successful to open up a storefront for open device markets, since we own a very large e-commerce website. Come in, purchase, and acquire.

The difference between India and the US is that India does not subsidize the device. As a consumer in the US, you have been trained that you can buy an iPhone by signing a contract, and the iPhone will be subsidized by the carrier. A consumer could benefit from this open device ecosystem, but there would have to be a mentality change. Will a consumer accept the idea that they have to buy a device? From a carrier point of view, I still argue that if they don’t subsidize, maybe they could lower the cost of their tariffs.

It is still an evolution. For us in mobile, we have pretty much adopted what India has done. We said, “bring your own device,” and we promoted all these devices that you are talking about in India. We brought them into our e-commerce site. In Japanese, it is called Ichiba. So we brought them to the Ichiba website, gave them a storefront, let them advertise, and let them market. Our website has a massive number of daily active users that come to it, and we do not necessarily benefit from selling their devices, but we don’t want to subsidize any device. That is subjective.

What is the biggest challenge of O-RAN? You have a long history in this industry. I’m sure many challenges are familiar to you in building a traditional network. What is the biggest, most surprising challenge of building it in this way?

Let me tell you the part that I was surprised about. Some parts were easier, some more difficult. If I take you to a traditional base station and we examine what is really there at the radio site, we will find that almost 95 percent of every deployment is the same. Basically, there is a big refrigerator cabinet, and inside this cabinet there is something called the baseband. This is the brain of the base station. The baseband was built on custom ASICs, hardware that the large companies had to keep investing in to develop.

The first thing that we did was take the software off that custom hardware and run it on more of an off-the-shelf appliance, like a traditional data center server. I recognized that the software only gets better; there are no issues with software. The difficult part was that the hardware components you need for the base station are really complex.

At every site, there is an antenna that has a transmitting unit, called either a remote radiohead, or massive MIMO in 5G. These products need to support a huge diversity of spectrum bands, because in every country there are different spectrum bands and different bandwidth. If you are a traditional provider — say Nokia, Ericsson, Huawei, ZTE — these companies have invested in a large organization, with tens of thousands of people, whose entire job is to create this massive hardware that could support all these diversified spectrum bands.

My number-one challenge with Rakuten Mobile was to find these hardware suppliers, because there are not a whole lot of them for Open RAN. Finding suppliers that could support diversified spectrum requirements, which differ from country to country, turned out to be a really big challenge. The approach that we have taken in Japan is to go to mid-size companies and startups. I funded them and encouraged them to build the hardware that we need.

My biggest challenge and my biggest headache is spending time trying to find a company that has capability and scale to become the hardware provider for Open RAN at the right cost structure. The hardware you need for both 4G and 5G is not to be underestimated. I think it is easier to solve the issues around some of the RF units that one would need for these base stations. This is my personal challenge, and I know the industry as a whole needs to solve for this.

I know these are complicated products, but are these companies thinking that it is a race to the bottom? Most PC vendors ship the same Intel processor, the same basic parts, and they have to differentiate around the edges or do services for recurring revenue. We talk about this on Decoder all the time. The big four that you mentioned sell you the whole stack and then charge for service and support. That is a very high-margin business. If you commoditize the hardware and say, “I am going to run my own software,” do those companies worry it is just a race to the bottom?

Let’s differentiate between large companies and new entrants. I think new entrants in hardware are comfortable and content, understanding the value they provide by being commodity suppliers. Let me provide you an analogy. Apple uses Foxconn to manufacture its devices, and I am sure Foxconn will not tell you they are unhappy about this business model. They have built their entire strategy around high-value engineering and high-yield, high-capacity manufacturing, because that is how they make revenue. They do not bundle support services.

I found that the new age manufacturing companies I was looking for were companies like Foxconn. Companies that understand the new business model that I want to create.

The most amazing thing, which some companies are probably not aware of, is the strength that we have in the United States around silicon companies. They genuinely are among the most innovative in the world in terms of capability. That still exists in the US; we still control this. Today, Qualcomm, Intel, Nvidia, Broadcom, and many other companies provide a lot of the technology needed for these products. We go and build reference designs directly with the silicon companies, and then I take that reference design, go to a contract manufacturer, and say, “Build this reference design.”

This new way of working seems like the future. Hopefully one day the hardware supply-chain ecosystem will have many companies like Foxconn that appreciate the value of building hardware for all suppliers. Maybe Ericsson or Nokia will one day have to evaluate pivoting into a software world that may command a much better valuation.

Look at the stock price of traditional telecom companies today. Look at the stock price of ServiceNow, a digital workflow tool. Look at the difference between them. One is a complete SaaS model; one lives on a traditional business model. I don’t think the market appreciates and recognizes that this may be the right thing to do.

It seems inevitable; it is just a matter of time before traditional vendors start pivoting. I want this hardware to be commoditized. It is very important. The value you compete on has to be software; it cannot be hardware.

Rakuten Mobile is only a couple of years old. It is the fourth carrier in Japan, and you have 5 million subscribers. Japan is a big country; KDDI has 10 times the subscribers. Is the ambition to be the number-one carrier, like Jio became the number-one carrier in India? How would you get there?

I am really proud of what we have done in Japan. Anyone who has been through this journey of building networks knows it is not a trivial process. We had two pragmatic challenges.

First, we had to prove to the world that a new technology actually works and delivers on cost, resiliency, and reliability. That's a check mark; done. That is not just me telling you today; it has been audited by a third party. Look at the performance, quality, and reliability we deliver. Second, if you are in the mobile business, there is one thing that new technology cannot easily solve for you: you need ubiquitous coverage everywhere and anywhere you go.

I am not sure if you have ever visited Tokyo, but you should know it is a concrete jungle. It's amazing. The density that exists in an area like Tokyo, the subways you have to cover, and the amount of capacity you have to cater for are not trivial. In two years, we have been able to build a network covering 96 percent of Japan. I have never seen a network built at this speed and at this scale.

So our ambition is not to be the fourth mobile operator in Japan. It is to be a highly disruptive ecosystem provider that takes the number-one position in this country. The approach we take is very simple: we need to ensure that ubiquitous, high-quality coverage is delivered anywhere you go in Japan. We are almost there.

I'm not just talking about the outdoors: high-rises, indoor, deep indoor, basements, subways. Anywhere and everywhere you go, an amazing network must be delivered. Second is the points/membership/loyalty program that I talked to you about earlier. We think that's a huge differentiator from the competitors: bringing much bigger value, and being obsessed with the customer experience and the services we offer.

From infancy to where we are today, I am really happy with what the team has accomplished, but we still have a lot of work to do to finish the last remaining 3 percent of our build. That 3 percent is extremely important to achieving the quality of coverage we need to be on par, and better.

I know that running my network today costs 40 percent less than it costs any competitor in Japan. That is an advantage around network cost structure that is virtually impossible for anybody in Japan to compete against today. It gives me a leg up on what we can do, what business models we can experiment with, and the actions we will take. You will see us be very decisive in our approach, because we don't want to be just another carrier in Japan. We want to be the leading mobile operator in this country.

All right, Tareq. That was amazing. I feel like I could talk to you for another full hour about this. Thank you so much for being on Decoder.

Thank you.

Don’t pop antibiotics every time you have a cold. But resistance crisis has an AI solution

These technologies are already working together to accelerate the discovery of new antimicrobial medicines. One subset of next-generation AI, dubbed generative models, produces hypotheses about the final molecule needed for a specific new drug. These AI models don't just search for known molecules with relevant properties, such as the ability to bind to and neutralise a virus or a bacterium; they are powerful enough to learn features of the underlying data and can suggest new molecules that have not yet been synthesised. This ability to design, as opposed to merely search, is particularly transformative because the number of possible suitable molecules is greater than the number of atoms in the universe: prohibitively large for search tasks.

Generative AI can navigate this vast chemical space to discover the right molecule faster than any human using conventional methods. AI modelling already supports research that could help patients with Parkinson's disease, diabetes and chronic pain. Antimicrobial peptides (AMPs), for example, small protein-like compounds, are one solution that is the subject of intensive study. These molecules hold great promise as next-generation antibiotics because they are inherently less susceptible to resistance and are produced naturally as part of the innate immune system of living organisms.
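To make the "design, then screen" idea concrete, here is a minimal sketch in Python. It is illustrative only: the uniform random sampler stands in for a trained generative model, and the scoring heuristics (net positive charge and moderate hydrophobic content, two properties commonly associated with AMPs) stand in for the learned activity and toxicity predictors a real discovery pipeline would use.

```python
import random

# Illustrative only: a real system would replace sample_peptide() with a
# trained generative model and score() with learned activity/toxicity models.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
POSITIVE = set("KR")            # lysine, arginine (cationic residues)
NEGATIVE = set("DE")            # aspartate, glutamate (anionic residues)
HYDROPHOBIC = set("AFILMVWY")

def sample_peptide(length: int) -> str:
    """Stand-in for a generative model: uniform random peptide sequences."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def score(peptide: str) -> float:
    """Crude AMP-likeness score: many AMPs are cationic and amphipathic."""
    charge = sum(aa in POSITIVE for aa in peptide) - sum(aa in NEGATIVE for aa in peptide)
    hydrophobic_fraction = sum(aa in HYDROPHOBIC for aa in peptide) / len(peptide)
    # Reward net positive charge; penalise straying from ~40% hydrophobic content.
    return charge - 10.0 * abs(hydrophobic_fraction - 0.4)

def design_candidates(n_samples: int = 50_000, top_k: int = 20) -> list:
    """Generate candidates, keep the top_k by score: design-then-screen."""
    candidates = [sample_peptide(random.randint(12, 25)) for _ in range(n_samples)]
    return sorted(candidates, key=score, reverse=True)[:top_k]

if __name__ == "__main__":
    for peptide in design_candidates():
        print(f"{score(peptide):6.2f}  {peptide}")
```

In a real pipeline, the candidates surviving such an in-silico screen would then go on to simulation and wet-lab validation; that end-to-end path is what produced the 48-day result described below.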

In studies published in Nature Biomedical Engineering in 2021, the AI-assisted search for new, effective, non-toxic peptides produced 20 promising novel candidates in just 48 days, a striking reduction compared with conventional development times for new compounds.

Among these were two novel candidates effective against Klebsiella pneumoniae, a bacterium frequently found in hospitals that causes pneumonia and bloodstream infections and has become increasingly resistant to conventional classes of antibiotics. Obtaining such a result with conventional research methods would take years.

AMPs already in commercial use

Collaborative work between IBM, Unilever, and STFC, which hosts one of IBM Research’s Discovery Accelerators at the Hartree Centre in the UK, has recently helped researchers better understand AMPs. Unilever has already used that new knowledge to create consumer products that boost the effects of these natural-defence peptides.

And in a Biophysical Journal paper, researchers demonstrated how small-molecule additives (organic compounds with low molecular weights) are able to make AMPs much more potent and efficient. Using advanced simulation methods, IBM researchers, together with experimental studies from Unilever, also identified new molecular mechanisms that could be responsible for this enhanced potency. This is a first-of-its-kind proof of principle that scientists will take forward in ongoing collaborations.

Boosting material discovery with AI

Generative models and advanced computer simulations are part of a much larger strategy at IBM Research, dubbed Accelerated Discovery, where we use emerging computing technologies to boost the scientific method and its application to discovery. The aim is to greatly speed up the rate of discovery of new materials and drugs, whether in preparation for the next global crisis or to rapidly address the current one and the inevitable future ones.

This is just one element of the loop comprising the revised scientific method, a cutting-edge transformation of the traditional linear approach to material discovery. Broadly, AI first learns about the desired properties of a new material. Next, another type of AI, IBM's Deep Search, combs through the existing knowledge on manufacturing this specific material, meaning all the previous research tucked away in patents and papers.

Generative models have the potential to create a new molecule

Following this, the generative models create a possible new molecule based on the existing data. Once done, we use a high-performance computer to simulate this new candidate molecule and the reactions it should have with its neighbours, to make sure it performs as expected. In the future, a quantum computer could improve these molecular simulations even further.

The final step is AI-driven lab testing to experimentally validate the predictions and develop real molecules. At IBM, we do this with a tool called RoboRXN, a small, fridge-sized "chemistry lab" that combines AI, cloud computing and robots to help researchers create new molecules anywhere, at any time. The combination of these approaches is well suited to tackling general "inverse design" problems, where the task is to find, or create for the first time, a material with a desired property or function, as opposed to computing or measuring the properties of large numbers of candidates.
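The loop just described can be summarised as a short structural sketch in Python. Every function body below is a placeholder assumption, not an IBM API: the real generative models, HPC simulations and RoboRXN hardware sit behind IBM's own services, and the hypothetical helpers here only show the shape of the feedback loop.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    smiles: str                 # molecule encoded as a SMILES string
    simulated_score: float = 0.0
    validated: bool = False

def generate(knowledge: list) -> Candidate:
    """Placeholder for a generative model conditioned on patents and papers."""
    return Candidate(smiles=knowledge[-1])        # trivially echoes a known molecule

def simulate(candidate: Candidate) -> Candidate:
    """Placeholder for HPC (or, one day, quantum) molecular simulation."""
    candidate.simulated_score = float(len(candidate.smiles))  # stand-in metric
    return candidate

def lab_validate(candidate: Candidate) -> Candidate:
    """Placeholder for AI-driven automated synthesis and testing (e.g. RoboRXN)."""
    candidate.validated = candidate.simulated_score > 5.0
    return candidate

def discovery_loop(knowledge: list, max_iters: int = 10) -> list:
    """Generate -> simulate -> validate, feeding confirmed hits back as knowledge."""
    hits = []
    for _ in range(max_iters):
        candidate = lab_validate(simulate(generate(knowledge)))
        if candidate.validated:
            hits.append(candidate)
            knowledge.append(candidate.smiles)    # close the loop
    return hits

if __name__ == "__main__":
    print(discovery_loop(["CC(=O)OC1=CC=CC=C1C(=O)O"]))   # seed: aspirin's SMILES
```

The point of the skeleton is the feedback edge: each validated result becomes input knowledge for the next generative round, which is what distinguishes this loop from the traditional linear pipeline.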

Proof that AI can go beyond the limits of classical computing

The antibiotics crisis is a particularly urgent example of a global inverse-design challenge in need of a true paradigm shift in the way we discover materials. The rapid progress in quantum computing and the development of quantum machine-learning techniques are now creating realistic prospects of extending the reach of artificial intelligence beyond the limitations of classical computing. Early examples show promise for quantum advantages in model training speed, classification tasks and prediction accuracy.
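For a flavour of what quantum machine learning looks like at its simplest, the sketch below simulates a one-qubit variational classifier classically with NumPy. The circuit, toy data and training rule are textbook illustrations chosen for this article, not taken from the studies above: a data point x is encoded as a rotation RY(x), a trainable rotation RY(theta) follows, the sign of the Z expectation value is the predicted label, and theta is trained with the exact parameter-shift gradient rule.

```python
import numpy as np

def ry(angle: float) -> np.ndarray:
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(angle / 2.0), np.sin(angle / 2.0)
    return np.array([[c, -s], [s, c]])

def expectation_z(x: float, theta: float) -> float:
    """<Z> after applying RY(theta) RY(x) to |0>; equals cos(x + theta)."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return float(state[0] ** 2 - state[1] ** 2)

def parameter_shift_grad(x: float, theta: float) -> float:
    """Exact gradient of <Z> with respect to theta via the parameter-shift rule."""
    return 0.5 * (expectation_z(x, theta + np.pi / 2) - expectation_z(x, theta - np.pi / 2))

# Toy task: label +1 for x near 0, -1 for x near pi.
xs = np.array([0.1, 0.4, 2.7, 3.0])
ys = np.array([1.0, 1.0, -1.0, -1.0])

theta, lr = 1.5, 0.2                      # arbitrary start and learning rate
for _ in range(200):                      # minimise the squared loss sum((<Z> - y)^2)
    grad = sum(2 * (expectation_z(x, theta) - y) * parameter_shift_grad(x, theta)
               for x, y in zip(xs, ys))
    theta -= lr * grad

predictions = [1 if expectation_z(x, theta) > 0 else -1 for x in xs]
print(f"theta = {theta:.3f}, predictions = {predictions}")  # should match ys
```

On real hardware the expectation value would be estimated from repeated measurements rather than computed exactly, but the training structure, differentiating through a parameterised circuit, is the same one the quantum machine-learning literature builds on.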

Overall, combining the most powerful emerging AI techniques (possibly with quantum acceleration) to learn features linked to antimicrobial activity, with physical modelling at the molecular scale to reveal the modes of action, is arguably the most promising route to creating these essential compounds faster than ever before.

This article originally appeared on the World Economic Forum website.

