Last week, after IBM’s report of positive quarterly earnings, CEO Arvind Krishna and CNBC’s Jim Cramer shared their frustration that IBM’s stock “got clobbered.” IBM’s stock price immediately fell by 10%, while the S&P 500 remained steady (Figure 1).
While a five-day stock price fluctuation is by itself meaningless, questions remain about IBM’s longer-term picture. “These are great numbers,” declared Krishna.
“You gave solid revenue growth and solid earnings,” Cramer sympathized. “You far exceeded expectations. Maybe someone is changing the goal posts here?”
It is also possible that Krishna and Cramer missed where today’s goal posts are located. Strong quarterly numbers do not a digital winner make. They may induce the stock market to regard a firm as a valuable cash cow, like other remnants of the industrial era. But to become a digital winner, a firm must take the kind of steps that Satya Nadella took at Microsoft: kill its dogs, commit to a mission of customer primacy, identify real growth opportunities, transform its culture, make empathy central, and unleash its agilists (Figure 2).
Since becoming CEO, Nadella has been brilliantly successful at Microsoft, growing market capitalization by more than a trillion dollars.
Krishna has been IBM CEO since April 2020. He began his career at IBM in 1990, and had been managing IBM’s cloud and research divisions since 2015. He was a principal architect of the Red Hat acquisition.
There are remarkable parallels between the careers of Krishna and Nadella.
· Both are Indian-American engineers who were born in India.
· Both worked at the firm for several decades before they became CEOs.
· Prior to becoming CEOs, both were in charge of cloud computing.
Both inherited companies in trouble. Microsoft was stagnating after CEO Steve Ballmer, while IBM was in rapid decline after CEO Ginni Rometty: the once-famous “Big Blue” had become known as a “Big Bruise.”
Although it is still early days in Krishna’s CEO tenure, IBM has under-performed the S&P 500 since he took over (Figure 3).
More worrying is the fact that Krishna has not yet completed the steps that Nadella took in his first 27 months (Figure 1).
Nadella wrote off the Nokia phone business and declared that Microsoft would no longer treat its flagship Windows as its core business. This freed up energy and resources to focus on creating winning businesses.
By contrast, Krishna has yet to jettison IBM’s most distracting baggage:
· Commitment to maximizing shareholder value (MSV): For the two prior decades, IBM was the public champion of MSV, first under CEO Palmisano (2001-2011), and again under Rometty (2012-2020)—a key reason behind IBM’s calamitous decline (Figure 2). Krishna has yet to explicitly renounce IBM’s MSV heritage.
· Top-down bureaucracy: The necessary accompaniment of MSV is top-down bureaucracy, which flourished under CEOs Palmisano and Rometty. Here too, bureaucratic processes must be explicitly eradicated, otherwise they become permanent weeds.
· The ‘Watson problem’: IBM’s famous computer, Watson, may have won ‘Jeopardy!’ but it continues to have problems in the business marketplace. In January 2022, IBM reported that it had sold Watson Health assets to an investment firm for around $1 billion, after acquisitions that had cost some $4 billion. Efforts to monetize Watson continue.
· Infrastructure Services: By spinning off its infrastructure services business as a publicly listed company (Kyndryl), IBM created nominal separation, but Kyndryl immediately lost 57% of its share value.
· Quantum Computing: IBM pours resources into research on quantum computing and touts its potential to revolutionize computing. However, unsolved technical problems of “decoherence” and “entanglement” mean that any meaningful benefits are still some years away.
· Self-importance: Perhaps the heaviest baggage that IBM has yet to jettison is the over-confidence reflected in sales slogans like “no one ever got fired for hiring IBM”. The subtext is that firms “can leave IT to IBM” and that the safe choice for any CIO is to stick with IBM. It’s a status quo mindset—the opposite of the clients that IBM needs to attract.
At the outset of his tenure as CEO of Microsoft, Nadella spent the first nine months getting consensus on a simple customer-driven mission statement.
Krishna did write to staff on day one as CEO, and he added at the end of the letter: “Third, we all must be obsessed with continually delighting our clients. At every interaction, we must strive to offer them the best experience and value. The only way to lead in today’s ever-changing marketplace is to constantly innovate according to what our clients want and need.” This would have been more persuasive if it had come at the beginning of the letter, and if there had been stronger follow-up.
What is IBM’s mission? No clear answer appears from IBM’s own website. The best one gets from About IBM is the fuzzy do-gooder declaration: “IBMers believe in progress — that the application of intelligence, reason and science can strengthen business, society and the human condition.” Customer primacy is not explicit, thereby running the risk that IBM’s 280,000 employees will assume that the noxious MSV goal is still in play.
At Microsoft, Nadella dismissed competing with Apple on phones or with Google on Search. He defined the two main areas of opportunity—mobility and the cloud.
Krishna has identified the Hybrid Cloud and AI as IBM’s main opportunities. Thus, Krishna wrote in his newsletter to staff on day one as CEO: “Hybrid cloud and AI are two dominant forces driving change for our clients and must have the maniacal focus of the entire company.”
However, both fields are now very crowded. IBM is a tiny player in the cloud in comparison to Amazon, Microsoft, and Google. In conversations, Krishna portrays IBM as forging working partnerships with the big cloud players and “integrating their offerings in IBM’s hybrid cloud.” One risk here is whether the big cloud players will facilitate this. The other risk is that IBM will attract only lower-performing firms that use IBM as a crutch so that they can cling to familiar legacy programs.
At Microsoft, Nadella addressed culture upfront, rejecting Microsoft’s notoriously confrontational culture and setting about instilling a collaborative, customer-driven culture throughout the firm.
Although Krishna talks openly to the press, he has not, to my knowledge, frontally addressed the “top-down,” “we know best” culture that prevailed at IBM under his predecessor CEOs. He has, to his credit, pledged “neutrality” with respect to the innovative, customer-centric Red Hat, rather than applying the “Blue washing” that the old IBM systematically applied to its acquisitions to bring them into line with its top-down culture, and IBM is said to have honored that pledge—so far. But there is little indication that IBM is ready to adopt Red Hat’s innovative culture for itself. It is hard to see these two opposed cultures remaining “neutral” forever. Given the size differential between IBM and Red Hat, the likely winner is easy to predict, unless Krishna makes a more determined effort to transform IBM’s culture.
As in any large tech firm, when Nadella and Krishna took over their respective firms, there were large hidden armies of agilists waiting in the shadows but hamstrung by top-down bureaucracies. At Microsoft, Nadella’s commitment to “agile, agile, agile,” combined with a growth mindset, enabled a fast start. At IBM, if Krishna has any passion for Agile, he has not yet shared it widely.
Although IBM has made progress under Krishna, it is not yet on a path to become a clear digital winner.
You have seen the company grow from being just about enterprise Linux to becoming a multi-billion dollar open source enterprise products firm. Having stepped into Cormier’s shoes, are you planning any change in strategy?
The short answer is ‘No’. I’m pretty lucky that I have worked within about 20 feet of Paul for the last 10 years. So, I’ve had the opportunity to have a hand in the team we’ve built and the strategy we’ve built and the bets and positions we’ve made around open hybrid cloud. In my last role, I was heading all of our products and technology and business unit teams. Hence, I know the team and the strategy. And we will evolve. If we look at the cloud services market that’s moving fast, our commercial models will change there to make sure that as customers have a foot on prem (on premises) and in private cloud, we serve them well. As hybrid extends to edge (computing), it will also change how we approach that market. But our fundamental strategy around open hybrid cloud doesn’t change. So, it’s a nice spot to be here, where I don’t feel compelled to make any change, but focus more on execution.
Tell us a bit about Red Hat’s focus on India, and your expansion plans in the country.
When we see the growth and opportunity in India, it mimics what we see in a lot of parts of the globe—software-defined innovation that is going to be the thing that lets enterprises compete. That could be in traditional markets where they’re leveraging their data centres; or it could be leveraging public cloud technologies. In certain industries, that software innovation is moving to the devices themselves, which we call edge. India is a perfect example of the application of open hybrid cloud because we can serve all of those use cases—from edge deployments in 5G and the adjacent businesses that will be built around that, to connectivity to the public clouds.
Correia (Marshall Correia is vice-president and general manager, India, South Asia at Red Hat): We have been operating in the country for multiple decades and our interest in India is two-fold. One is go-to-market in India, working with the Indian government, Indian enterprises, private sector as well as public sector enterprises. We have a global delivery presence in cities like Pune and Bengaluru. Whether you look at the front office, back office, or mid-office, we are deeply embedded into it (BSE, National Stock Exchange (NSE), Aadhaar, GST Network (GSTN), Life Insurance Corporation of India (LIC), SBI Insurance and most core banking services across India use Red Hat open source technologies). For instance, we work with Infosys on GSTN. So, I would say there is a little bit of Red Hat played out everywhere (in India) but with some large enterprises, we have a very deep relationship.
Do you believe Red Hat is meeting IBM’s expectations? How often do you interact with Arvind Krishna, and what do you discuss?
About five years ago, Arvind and I were on stage together, announcing our new friendship around IBM middleware on OpenShift. I talk to him every few days. A lot of this credit goes to Paul. We’ve struck the balance with IBM. Arvind would describe it as Red Hat being “independent” (since) we have to partner with other cloud providers, other consulting providers, (and) other technology providers (including Verizon, Accenture, Deloitte, Tata Consultancy Services, and IBM Consulting). But IBM is very opinionated on Red Hat—they built their middleware to Red Hat, and we are their core choice for hybrid. Red Hat gives them (IBM) a technology base that they can apply their global reach to. IBM has the ability to bring open source Red Hat technology to every corner of the planet.
How are open source architectures helping data scientists and CXOs with the much-needed edge in adopting AI-ML (artificial intelligence and machine learning)?
AI is a really big space, and we have always sort of operated in how to get code built and (get it) into production faster. But now, training models that can answer questions with precision is running in parallel. Our passion is to integrate that whole flow of models into production, right next to the apps that you’re already building today—we call this the ML ops (machine learning operations, which is jargon for a set of best practices for businesses to run AI successfully) space.
What that means is that we’re not trying to be the best in natural language processing (NLP), building foundation AI models, or convolutional neural networks (CNNs). We want to play in our sweet spot, which is how we arm data science teams to get their models from development to production and into those apps in time. This is the work we’ve done on OpenShift data science (a managed cloud service for data scientists and developers).
Another piece that’s changing, and has been exciting for us, is hardware. As an example, cars today and going forward are moving toward running on a computer. What we do really well is put Linux on computers, and the computer in your car of the future will look very similar to the computer in your data centre today. And when we’re able to combine that platform with bringing these AI models into that environment, with the speed that you get with code and application integration, it opens up a lot of exciting opportunities for customers to get their data science models into the devices, or as close to customers as they possibly can.
This convergence is important, and it’s not tied to the edge. Companies have realized that the closer they can push the interaction to the user, the better the experience is going to be.
And that could be in banking, or pushing self-service to users’ phones. In autonomous driving, it’s going to be pushing the processing down to your rear-view mirror to make decisions for you. In mining, it might be 5G. At the core of it is how far you can push your differentiated logic closer to your consumer use case. That’s why I think we see the explosion in edge.
As a thought leader, I would like your views on trends like the decentralized web and open source metaverse.
If you look at the Red Hat structure, we have areas where we’re committed to businesses through our business units. But then we also have our office of technology, led by our CTO, Chris Wright, where we track industry trends where we haven’t necessarily taken a business stake or position but want to understand the technology behind it. Cryptographic, blockchain-based decentralization technologies, which we watch very closely, are in this space right now, because they do change the way you operate. It’s strikingly similar to how open source and coding practices are seen as normal today, but when I started this 20 years ago, it was a much more connected and controlled experience versus the very decentralized one today. So, we track this very closely from a technology perspective (but) we haven’t yet taken a business position on this.
In this context, do you collaborate with IBM R&D too?
Yeah, we do. We worked closely with the IBM research team run by Dario Gil (senior VP and director of IBM Research) pre-acquisition, and we work even closer with them now. Post-acquisition, the focus on Red Hat and the clarity on IBM’s focus on open hybrid cloud have helped us collaborate even better.
Last but not least, what is Red Hat’s stance on the patent promise it made in September 2017, given that your company is now an IBM unit (which has over 70,000 active patents)?
We continue to collect our patents in a way that they won’t be leveraged against other users of open source. Red Hat will do it (patent) for the benefit of open source and to make the usage of open source a little safer. My patents, I believe, are included in that, and will continue to be included in that going forward.
In the Moore’s Law world, it has become a truism that smaller nodes lead to larger problems. As fabs turn to nanosheet transistors, it is becoming increasingly challenging to detect line-edge roughness and other defects due to the depths and opacities of these and other multi-layered structures. As a result, metrology is taking even more of a hybrid approach, with some well-known tools moving from the lab to the fab.
Nanosheets are the successor to finFETs, an architecture evolution prompted by the industry’s continuing desire to increase speed and capacity while reducing power. They also help solve short-channel effects, which lead to current leakage. The great vulnerability of advanced planar MOSFET structures is that they are never fully “off.” Due to their configuration, in which the metal-oxide gate sits on top of the channel (conducting current between source and drain terminals), like a float in a pool, some current continues to flow even when voltage isn’t applied to the gate.
FinFETs raise the channel into a “fin.” The gate is then arched over that fin, allowing it to connect on three sides. However, the bottom of the gate and the bottom of the fin are level with each other, so some current can still sneak through. The gate-all-around design turns the fin into multiple, stacked nanosheets, which horizontally “pierce” the gate, giving coverage on all four sides and containing the current. An additional benefit is that the nanosheets’ width can be varied for device optimization.
Fig. 1: Comparison of finFET and gate-all-around with nanosheets. Source: Lam Research
Unfortunately, with one problem solved, others emerge. “With nanosheet architecture, a lot of defects that could kill a transistor are not line-of-sight,” said Nelson Felix, director of process technology at IBM. “They’re on the underside of nanosheets, or other hard-to-access places. As a result, the traditional methods to very quickly find defects without any prior knowledge don’t necessarily work.”
So while this may appear linear from an evolutionary perspective, many process and materials challenges have to be solved. “Because of how the nanosheets are formed, it’s not as straightforward as it was in the finFET generation to create a silicon-germanium channel,” Felix said.
Several techniques are being utilized, ranging from faster approaches like optical microscopy to scanning electron microscopes (SEMs), atomic force microscopes (AFMs), X-ray, and even Raman spectroscopy.
Well-known optical vendors like KLA provide the first-line tools, employing techniques such as scatterometry and ellipsometry, along with high-powered e-Beam microscopes.
With multiple gate stacks, optical CD measurement needs to separate one level from the next, according to Nick Keller, senior technologist, strategic marketing at Onto Innovation. “In a stacked nanosheet device, the physical dimensions of each sheet need to be measured individually — especially after selective source-drain recess etch, which determines drive current, and the inner spacer etch, which determines source-to-gate capacitance, and also affects transistor performance. We’ve done demos with all the key players and they’re really interested in being able to differentiate individual nanosheet widths.”
Onto’s optical critical dimension (OCD) solution combines spectroscopic reflectometry and spectroscopic ellipsometry with an AI analysis engine, called AI-Diffract, to provide angstrom-level CD measurements with superior layer contrast versus traditional OCD tools.
Fig. 2: A model of a GAA device generated using AI-Diffract software, showing the inner spacer region (orange) of each nanosheet layer. Source: Onto Innovation
Techniques like spectroscopic ellipsometry or reflectometry from gratings (scatterometry) can measure CDs and investigate feature shapes. KLA describes scatterometry as using broadband light to illuminate a target to derive measurements. The reflected signal is fed into algorithms that compare the signal to a library of models created based on known material properties and other data to see 3D structures. The company’s latest OCD and shape metrology system identifies subtle variations in CD, high-k and metal gate recess, sidewall angle, resist height, hard mask height, and pitch walking across a range of process layers. An improved stage and new measurement modules help accelerate throughput.
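The library-matching step described above can be sketched in a few lines: precompute modeled spectra for candidate geometries, then pick the candidate whose spectrum best fits the measurement. The library format, the stand-in sine “model,” and the parameter names here are illustrative assumptions, not any vendor’s actual API; production systems use rigorous electromagnetic solvers and far larger libraries.

```python
import numpy as np

def match_spectrum(measured, library):
    """Return the geometry parameters whose modeled spectrum best
    matches the measured one (smallest least-squares residual).

    library: list of (params_dict, spectrum_array) pairs, precomputed
    from known material properties (hypothetical format).
    """
    best_params, best_err = None, float("inf")
    for params, spectrum in library:
        err = float(np.sum((measured - spectrum) ** 2))
        if err < best_err:
            best_params, best_err = params, err
    return best_params

# Toy library: "reflectance" vs. wavelength for three candidate CDs,
# using a sine as a stand-in for a real electromagnetic model.
wavelengths = np.linspace(400, 800, 50)
library = [({"cd_nm": cd}, np.sin(wavelengths / cd)) for cd in (20.0, 25.0, 30.0)]

# A noisy "measurement" of the 25 nm structure matches the right entry.
measured = np.sin(wavelengths / 25.0) + np.random.default_rng(0).normal(0, 0.01, 50)
print(match_spectrum(measured, library))  # → {'cd_nm': 25.0}
```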
Chipmakers rely on AI engines and deep computing in metrology just to handle the data streams. “They do the modeling data for what we should be looking at that day, and that helps us out,” said Subodh Kulkarni, CEO of CyberOptics. “But they want us to provide them speedy resolution accuracy. That’s incredibly difficult to deliver. We’re ultimately relying on things like the resolution of CMOS and the bandwidth of GPUs to crunch all that data. So in a way, we’re relying on those chips to develop inspection solutions for those chips.”
In addition to massive data crunching, data from different tools must be combined seamlessly. “Hybrid metrology is a prevailing trend, because each metrology technique is so unique and has such defined strengths and weaknesses,” said Lior Levin, director of product marketing at Bruker. “No single metrology can cover all needs.”
Hybridization also extends to the tools themselves. “System manufacturers are putting two distinct technologies into one system,” said Hector Lara, Bruker’s director and business manager for Microelectronics AFM. Bruker, however, has decided against that approach, based on real-world experience that Lara says has shown it leads to sub-optimal performance.
On the other hand, hybrid tools can save time and allow a smaller footprint in fabs. Park Systems, for example, integrates AFM precision with white light interferometry (WLI) into a single instrument. Its purpose, according to Stefan Kaemmer, president of Park Systems Americas, is in-line throughput. While the WLI can quickly spot a defect, “You can just move the sample over a couple of centimeters to the AFM head and not have to take the time to unload it and then load it on another tool,” Kaemmer said.
Bruker, meanwhile, offers a combination of X-ray diffraction (XRD)/X-ray reflectometry (XRR) and X-ray fluorescence (XRF)/XRR for 3D logic applications. “For the vast majority of applications, the approach is a very specialized tool with a single metrology,” Levin said. “Then you hybridize the data. That’s the best alternative.”
What AFMs provide
AFMs are finding traction in nanosheet inspection because of their ability to distinguish fine details, a capability already proven in 3D NAND and DRAM production. “In AFM, we don’t really find the defects,” Kaemmer explained. “Predominantly, we read the defect map coming typically from some KLA tool and then we go to whatever the customer picks to closely examine. Why that’s useful is the optical tool tells you there’s a defect, but one defect could actually be three smaller defects that are so close together the optical tool can’t differentiate them.”
The standard joke about AFMs is that their operation was easier to explain when they were first developed nearly forty years ago. In 1985, when record players were in every home, it required little imagination to picture an instrument in which a sharp tip, extended from a cantilevered arm, felt its way along a surface to produce signals. With electromagnetic (and sometimes chemical) modifications, that is essentially the hardware design of all modern AFMs. There are now many variations of tip geometries, from pyramids to cones, in a range of materials including silicon, diamond, and tungsten.
There are two basic modes of operation. One is tapping. As the name implies, the cantilever is put into oscillation at its natural resonant frequency, giving the AFM’s controlling systems the greatest precision of force control. The result is a nanometer-scale topographic rendering of the semiconductor structure. The second uses a sub-resonant mode that results in the greatest force control during a tip-sample interaction. That approach is important for high-aspect structures because it renders high-accuracy depth measurements, and in some structures, sidewall angles and roughness.
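Why tapping mode drives the cantilever at its natural resonant frequency can be seen from the textbook model of a driven, damped harmonic oscillator, whose steady-state amplitude peaks near resonance. The numbers below (a roughly 300 kHz cantilever, arbitrary drive units) are illustrative assumptions, not parameters of any specific instrument.

```python
import math

def amplitude(omega, omega0=2 * math.pi * 300e3, gamma=2 * math.pi * 1e3, f_over_m=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator,
    a common first-order model of an AFM cantilever in tapping mode.

    omega0: natural resonant frequency (rad/s), ~300 kHz here.
    gamma:  damping rate (rad/s).
    f_over_m: drive force per unit mass (arbitrary units).
    """
    return f_over_m / math.sqrt((omega0**2 - omega**2) ** 2 + (gamma * omega) ** 2)

# The response is largest when the drive is tuned to resonance, which is
# why the cantilever is oscillated at its natural frequency.
freqs = [2 * math.pi * f for f in (100e3, 300e3, 500e3)]
amps = [amplitude(w) for w in freqs]
assert max(amps) == amplitude(2 * math.pi * 300e3)
```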
Today’s commercial production tools are geared to specific applications, such as defect characterization or surface profile measurement. Unlike optical microscopes, where improvements center on improved resolution, AFMs are looking at subtle profile changes in bond pads for hybrid bonding, for instance, or to reveal defect characteristics like molecular adhesion.
“Bonding is really a sweet spot for AFM,” said Sean Hand, senior staff applications scientist at Bruker. “It’s really planar, it’s flat, we’re able to see the nanoscale roughness, and the nanoscale slope changes that are important.”
Additionally, because tips can exert enough force to move particles, AFMs can both find errors and correct them. For nearly two decades, they have been used in production to remove debris and make pattern adjustments on lithography masks. Figure 3 (below) shows probe-based particle removal during a lithography process for advanced node development. Contaminants are removed from EUV masks, allowing the photomask to be quickly returned to production use. That extends the life of the mask and the reticle, and avoids surface degradation caused by wet cleaning.
AFM-based particle removal is a significantly lower-cost dry cleaning process and adds no residual contamination on the photomask surface, which can degrade mask life. Surface interaction is local to the defect, which minimizes the potential for contamination of other mask areas. The high precision of the process allows for cleaning within fragile mask features without risk of damage.
Fig. 3: Example of pattern repair. Source: Bruker
In advanced lithography, AFMs also are used to evaluate the many photoresist candidates for high-NA EUV, including metal oxide resists and more traditional chemically amplified resists. “With the thin resists being evaluated in high-NA EUV studies, you now have resist trenches that are much shallower,” said Anne-Laure Charley, R&D metrology manager at Imec. “And that becomes a very nice use case for AFM.”
The drawback to AFMs, however, is that they are limited to surface characterization. They cannot measure the thickness of layers, and can be limited in terms of deep 3D profile information. Charley recently co-authored a paper that explores a deep-learning-enabled correction for the problem of vertical (z) drift in AFMs. “If you have a structure with a small trench opening, but which is very deep, you will not be able to reach the bottom of the trench with the tip, and you will then not be able to characterize the full etch depth and also the profile at the bottom of the trench,” she said.
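For intuition about what z-drift correction does, the simplest classical baseline (not the deep-learning method the paper describes, just the kind of flattening it improves on) is to subtract a per-scan-line offset so each line’s median height aligns:

```python
import numpy as np

def remove_line_drift(image):
    """Subtract each scan line's median height so slow vertical (z)
    drift between lines doesn't masquerade as topography.

    This is a classical per-line flattening step, common in AFM image
    processing; it assumes each line is mostly flat background.
    """
    image = np.asarray(image, dtype=float)
    return image - np.median(image, axis=1, keepdims=True)

# Synthetic image: a flat surface plus a drift ramp between scan lines
# (+1 height unit per line). Flattening recovers the flat surface.
drift = np.arange(5, dtype=float).reshape(-1, 1)
img = np.zeros((5, 8)) + drift
flat = remove_line_drift(img)
assert np.allclose(flat, 0.0)
```

A per-line median is robust to a few tall features on each line, but it fails when a line is dominated by real topography, which is one motivation for learned corrections.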
Raman spectroscopy, which relies on the analysis of inelastically scattered light, is a well-established offline technique for materials characterization that is moving its way inline into fabs. According to IBM’s Felix, it is likely to come online to answer the difficult questions of 3D metrology. “There’s a suite of wafer characterization techniques that historically have been offline techniques. For example, Raman spectroscopy lets you really probe what the bonding looks like,” he said. “But with nanosheet, this is no longer a data set you can just spot-check and have it be only one-way information. We have to use that data in a much different way. Bringing these techniques into the fab and being able to use them non-destructively on a wafer that keeps moving is really what’s required because of the complexity of the material set and the geometries.”
In addition to AFM, other powerful techniques are being pulled into the nanosheet metrology arsenal. Bruker, for example, is employing X-ray diffraction (XRD), the crystallography technique with which Rosalind Franklin created the famous “Photograph 51” to show the helical structure of DNA in 1952.
According to Levin, during the height of finFET development, companies adopted XRD technology, but mainly for R&D. “It looks like in this generation of devices, X-ray metrology adoption is much higher.”
“For the gate-all-around, we have both XRD — the most advanced XRD, the high brightness source XRD, for measurement of the nanosheet stack — combined with XRF,” said Levin. “Both of them are used to measure the residue part, making sure everything is connected, as well as those recess etch steps. An XRF can provide a very accurate volumetric measurement. It can measure single atoms. So in a very sensitive manner, you can measure the recess etch, that is, the material remaining after the recess etch. And it’s a direct measurement that doesn’t require any calibration. The signal you get is directly proportional to what you’re looking to measure. So there’s significant adoption of these two techniques for initial GAA development.”
Matthew Wormington, chief technologist at Bruker Semi X-ray, gave more details: “High resolution X-ray diffraction and X-ray reflectometry are two techniques that are very sensitive to the individual layer thicknesses and to the compositions, which are key for controlling some of the X parameters downstream in the 3D process. The gate-all-around structure is built on engineered substrates. The first step is planar structures, a periodic array of silicon and silicon germanium layers. X-ray measurement is critical in that very key step because everything is built on top of that. It’s a key enabling measurement. So the existing techniques become much more valuable, because if you don’t get your base substrate correct — not just the silicon but the SiGe/Si multilayer structure — everything following it is challenged.”
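As a worked example of why X-ray reflectometry is so sensitive to layer thickness: the angular spacing of the interference (Kiessig) fringes in a reflectivity curve is inversely proportional to film thickness, so a quick estimate is t ≈ λ/(2Δθ). This is the textbook small-angle approximation with refraction neglected, not any vendor’s production analysis; the numbers are illustrative.

```python
import math

def thickness_from_fringes(wavelength_nm, delta_theta_deg):
    """Estimate film thickness from the angular period of Kiessig
    fringes in an X-ray reflectivity curve: t ≈ λ / (2·Δθ).

    Small-angle approximation; refraction is neglected, so this is a
    back-of-the-envelope estimate, not a full XRR model fit.
    """
    delta_theta = math.radians(delta_theta_deg)
    return wavelength_nm / (2 * delta_theta)

# Cu K-alpha radiation (λ ≈ 0.154 nm) and a fringe period of 0.05°
# give a film roughly 88 nm thick.
t = thickness_from_fringes(0.154, 0.05)
print(f"{t:.0f} nm")
```

Thinner films produce wider fringe spacing, which is why XRR resolves the individual layers of a Si/SiGe multilayer stack so directly.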
The introduction of nanosheet transistors and other 3D structures is calling for wider usage of tools like AFM, X-ray systems, ellipsometry, and Raman spectroscopy. And new processes, like hybrid bonding, lead to older processes being brought in for new applications. Imec’s Charley said, “There are some specific challenges that we see linked to stacking of wafers. You eventually need to measure through silicon, because when you start to stack two wafers on top of each other, you need to measure or inspect through the backside, and eventually you still have a relatively thick silicon. And that implies working with different wavelengths, in particular infrared. So vendors are developing specific overlay tools using infrared for these kinds of use cases.”
As for who will ultimately drive the research, it depends on when you ask that question. “The roadmap for technology is always bi-directional,” said Levin. “It’s hard to quantify, but roughly half comes from the technology side, from what is possible, and half comes from what’s needed in the marketplace. Every two or three years we have a new generation of tools.”
D. Cerbu, et al., “Deep Learning-Enabled Vertical Drift Artefact Correction for AFM Images,” Proc. SPIE Metrology, Inspection, and Process Control XXXVI, May 2022; doi: 10.1117/12.2614029
A.A. Sifat, J. Jahng, and E.O. Potma, “Photo-Induced Force Microscopy (PiFM) — Principles and Implementations,” Chem. Soc. Rev., 2022,51, 4208-4222. https://pubs.rsc.org/en/content/articlelanding/2022/cs/d2cs00052k
Mary A. Breton, Daniel Schmidt, Andrew Greene, Julien Frougier, and Nelson Felix, “Review of nanosheet metrology opportunities for technology readiness,” J. of Micro/Nanopatterning, Materials, and Metrology, 21(2), 021206 (2022). https://doi.org/10.1117/1.JMM.21.2.021206
Daniel Schmidt, Curtis Durfee, Juntao Li, Nicolas Loubet, Aron Cepler, Lior Neeman, Noga Meir, Jacob Ofek, Yonatan Oren, and Daniel Fishman, “In-line Raman spectroscopy for gate-all-around nanosheet device manufacturing,” J. of Micro/Nanopatterning, Materials, and Metrology, 21(2), 021203 (2022). https://doi.org/10.1117/1.JMM.21.2.021203
Recently, IBM struck a deal to acquire Databand.ai, which develops software for data observability. The purchase price was not announced, but the deal underscores the importance of observability: IBM has acquired similar companies over the past couple of years.
“Observability goes beyond traditional monitoring and is especially relevant as infrastructure and application landscapes become more complex,” said Joseph George, Vice President of Product Management, BMC. “Increased visibility gives stakeholders greater insight into issues and user experience, reducing time spent firefighting, and creating time for more strategic initiatives.”
Observability is an enormous category. It encompasses log analytics, application performance monitoring (APM), and cybersecurity, and the term has been applied to other IT areas like networking. APM spending alone is expected to hit $6.8 billion by 2024, according to Gartner.
So then, what makes observability unique? And why is it becoming a critical part of the enterprise tech stack? Well, let’s take a look.
The ultimate goal of observability is to go well beyond traditional monitoring capabilities by giving IT teams the ability to understand the health of a system at a glance.
An observability platform has several important functions. One is to find the root causes of a problem, which could be a security breach or a bug in an application. In some cases, the system will offer a fix. Sometimes an observability platform will make the corrections on its own.
“Observability isn’t a feature you can install or a service you can subscribe to,” said Frank Reno, Senior Product Manager, Humio. “Observability is something you either have, or you don’t. It is only achieved when you have all the data to answer any question about the health of your system, whether predictable or not.”
The traditional approach is to crunch huge amounts of raw telemetry data and analyze it in a central repository. However, this could be difficult to do at the edge, where there is a need for real-time solutions.
“An emerging alternative approach to observability is a ‘small data’ approach, focused on performing real-time analysis on data streams directly at the source and collecting only the valuable information,” said Shannon Weyrick, vice president of research, NS1. “This can provide immediate business insight, tighten the feedback loop while debugging problems, and help identify security weaknesses. It provides consistent analysis regardless of the amount of raw data being analyzed, allowing it to scale with data production.”
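The “small data” approach Weyrick describes can be sketched in a few lines: keep constant-size running statistics at the telemetry source and forward only the anomalous samples, rather than shipping the raw stream to a central store. A minimal illustration using Welford’s online algorithm (the class name, thresholds, and synthetic latency stream are all hypothetical, not any vendor’s implementation):

```python
import math
import random

class EdgeSummarizer:
    """Streaming 'small data' sketch: keep O(1) running statistics at
    the telemetry source and forward only anomalous samples."""

    def __init__(self, z_threshold=5.0, warmup=30):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0           # running sum of squared deviations (Welford)
        self.z_threshold = z_threshold
        self.warmup = warmup     # observe this many samples before alerting

    def observe(self, x):
        # Update running mean/variance in constant memory.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)
        if self.n <= self.warmup:
            return None
        std = math.sqrt(self._m2 / (self.n - 1))
        if std > 0 and abs(x - self.mean) / std > self.z_threshold:
            return {"value": x, "z": abs(x - self.mean) / std}
        return None

random.seed(7)
source = EdgeSummarizer()
# 500 normal latency samples (~20 ms) plus one injected spike.
stream = [random.gauss(20.0, 2.0) for _ in range(500)] + [95.0]
alerts = [a for a in (source.observe(x) for x in stream) if a]
print(len(alerts), round(source.mean, 1))
```

Only the rare outliers leave the device; the central system receives a handful of alerts plus compact summary statistics instead of hundreds of raw data points.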
The biggest growth factor for observability is the strategic importance of software. It’s become a must-have for most businesses.
“Software has become the foundation for how organizations interact with their customers, manage their supply chain, and are measured against their competition,” said Patrick Lin, VP of Product Management for Observability, Splunk. “Particularly as teams modernize, there are a lot more things they have to monitor and react to — hybrid environments, more frequent software changes, more telemetry data emitted across fragmented tools, and more alerts. Troubleshooting these software systems has never been harder, and the way monitoring has traditionally been done just doesn’t cut it anymore.”
The typical enterprise has dozens of traditional tools for monitoring infrastructure, applications and digital experiences. The result is that there are data silos, which can lessen the effectiveness of those tools. In some cases, it can mean catastrophic failures or outages.
But with observability, the data is centralized. This allows for more visibility across the enterprise.
“You get to root causes quickly,” said Lin. “You understand not just when an issue occurs but what caused it and why. You improve mean time to detection (MTTD) and mean time to resolution (MTTR) by proactively detecting emerging issues before customers are impacted.”
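MTTD and MTTR are simple averages over incident timestamps. A small sketch of the calculation (the incident data is invented for illustration; note that some teams measure MTTR from occurrence rather than from detection):

```python
from datetime import datetime

# Three sample incidents (all timestamps invented):
# (occurred, detected, resolved)
incidents = [
    (datetime(2022, 7, 1, 9, 0),  datetime(2022, 7, 1, 9, 20),  datetime(2022, 7, 1, 11, 0)),
    (datetime(2022, 7, 8, 14, 0), datetime(2022, 7, 8, 14, 5),  datetime(2022, 7, 8, 15, 0)),
    (datetime(2022, 7, 20, 2, 0), datetime(2022, 7, 20, 2, 35), datetime(2022, 7, 20, 6, 0)),
]

def mean_minutes(deltas):
    # Average a list of timedeltas, expressed in minutes.
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

mttd = mean_minutes([det - occ for occ, det, _ in incidents])  # occurrence -> detection
mttr = mean_minutes([res - det for _, det, res in incidents])  # detection -> resolution
print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min")
```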
Of course, observability is not a silver bullet. The technology certainly has downsides and risks.
In fact, one of the nagging issues is the hype factor. This could ultimately harm the category. “There is a significant amount of observability washing from legacy vendors, driving confusion for end users trying to figure out what observability is and how it can benefit them,” said Nick Heudecker, Senior Director of Market Strategy & Competitive Intelligence, Cribl.
True, this is a problem with any successful technology. But customers definitely need to do their due diligence.
Observability also is not a plug-and-play technology. There is a need for change management. And yes, you must have a highly skilled team to get the most from the technology.
“The biggest downside of observability is that someone – such as an engineer or a person from DevOps or the site reliability engineering (SRE) organization — needs to do the actual observing,” said Gavin Cohen, VP of Product, Zebrium. “For example, when there is a problem, observability tools are great at providing access and drill-down capabilities to a huge amount of useful information. But it’s up to the engineer to sift through and interpret that information and then decide where to go next in the hunt to determine the root cause. This takes skill, time, patience and experience.”
With the growth of artificial intelligence (AI) and machine learning (ML), though, this can be addressed. In other words, next-generation tools can help automate the observer role. “This requires deep intelligence about the systems under observation, such as with sophisticated modeling, granular details and comprehensive AI,” said Kunal Agarwal, founder and CEO, Unravel Data.
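One common building block behind this kind of automation is log-pattern anomaly detection: normalize log lines into templates, then flag lines whose template was never seen during normal operation. The toy sketch below is illustrative only (the log lines are invented, and this is not any vendor’s actual algorithm):

```python
import re
from collections import Counter

def template(line):
    # Mask numbers and hex IDs so similar messages share one template.
    return re.sub(r"\b(?:0x[0-9a-f]+|\d+)\b", "<*>", line.lower())

# A healthy baseline of routine log traffic.
baseline = [
    "connected to db-01 in 12 ms",
    "connected to db-01 in 9 ms",
    "request 4411 served in 30 ms",
    "request 4412 served in 28 ms",
] * 50

# Logs captured during an incident: mostly routine, one novel message.
incident = baseline[:10] + ["disk quota exceeded on volume 7"]

counts = Counter(template(line) for line in baseline)
# Counter returns 0 for unseen templates, so this flags novel patterns.
novel = [line for line in incident if counts[template(line)] == 0]
print(novel)
```

Here the routine messages are filtered out automatically and only the never-before-seen line surfaces for the engineer, which is the kind of triage work the quoted tools aim to take off people’s plates.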
Milan Shetti, President and CEO, Rocket Software.
With the rising popularity of cloud-based solutions over the last decade, a growing misconception in the professional world is that mainframe technology is becoming obsolete. This couldn’t be further from the truth. In fact, a recent Rocket survey of over 500 U.S. IT professionals found that businesses today, including 67 of the Fortune 100, still rely heavily on the mainframe over cloud-based or distributed technologies to power their IT infrastructures.
Despite the allure surrounding digital solutions, a recent IBM study uncovered that 82% of executives agree their business case still supports mainframe-based applications. This is partly due to the increase in disruptive events taking place throughout the world—the Covid-19 pandemic, a weakened global supply chain, cybersecurity breaches and increased regulations across the board—leading companies to continue leveraging the reliability and security of the mainframe infrastructure.
Even so, the benefits of modernization are clear, and the need is apparent: to remain competitive in today’s digital world, organizations should consider modernizing their mainframe infrastructure and implementing modern cloud-based solutions in their IT environments.
Overcoming Mainframe Obstacles
Businesses leveraging mainframe technology that hasn’t been modernized may struggle to attract new talent. With new talent entering the professional market trained primarily on cloud-based software, traditional mainframe software and processes create a skills gap that could deter prospective hires and lead to companies missing out on top-tier talent.
Without modernization, many legacy mainframes lack connectivity with modern cloud-based solutions. Although the mainframe provides a steady, dependable operational environment, it’s well known that the efficiency, accuracy and accessibility of modern cloud-based solutions have helped simplify and improve many operational practices. Mainframe infrastructures that can’t integrate innovative tools like automation to streamline processes, or provide web and mobile access to remote employees (which has become essential following the pandemic), have become impractical for most business operations.
Considering these impending hurdles, organizations are at a crossroads with their mainframe operations. Realistically, there are three roads a business can choose to journey down. The first is to continue “operating as-is,” which is cost-effective but more or less avoids the issue at hand and positions a company to get left in the dust by its competitors. A business can also “re-platform” or completely remove and replace its current mainframe infrastructure in favor of distributed or cloud models. However, this option can be disruptive, pricey and time-consuming and forces businesses to simply toss out most of their expensive technology investments.
The final option is to “modernize in place.” Modernizing in place allows businesses to continue leveraging their technology investments through mainframe modernization. It’s the preferred method of IT professionals—56% compared to 27% continuing to “operate as-is” and 17% opting to “re-platform”—because it’s typically cost-efficient, less disruptive to operations and improves the connectivity and flexibility of the IT infrastructure.
Most importantly, modernizing in place lets organizations integrate cloud solutions directly into their mainframe environment. In this way, teams can seamlessly transition into a more efficient and sustainable hybrid cloud model that helps alleviate the challenges of the traditional mainframe infrastructure.
Modernizing In Place With A Hybrid Cloud Strategy
With nearly three-quarters of executives from some of the largest and most successful businesses in agreement that mainframe-based applications are still central to business strategy, the mainframe isn’t going anywhere. And with many organizations still opting for mainframe-based solutions for data-critical operating systems—such as financial management, customer transaction systems of record, HR systems and supply chain data management systems—mainframe-based applications are actually expected to grow over the next two years. That’s why businesses must look to leverage their years of technology investments alongside the latest tools.
Modernizing in place with a hybrid cloud strategy is one of the best paths for an enterprise to meet the evolving needs of the market and its customers while simultaneously implementing an efficient and sustainable IT infrastructure. It lets companies leverage innovative cloud solutions in their tech stack that help bridge the skills gap to entice new talent while making operations accessible for remote employees.
The integration of automated tools and artificial intelligence capabilities in a hybrid model can help eliminate many manual processes to reduce workloads and improve productivity. The flexibility of a modernized hybrid environment can also allow teams to implement cutting-edge practices like DevOps and CI/CD testing, helping ensure a continuously optimized operational environment.
With most IT professionals in agreement that hybrid is the answer moving forward, it’s clear that more and more businesses that work within mainframe environments will begin to migrate cloud solutions into their tech stack. Modernizing in place with a hybrid cloud strategy is one great way for businesses to meet market expectations while positioning themselves for future success.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
The MarketWatch News Department was not involved in the creation of this content.
Aug 01, 2022 (Alliance News via COMTEX) -- Key companies covered in the Enterprise Knowledge Management System research are Alfanar, Chris Lewis Group, Cisco, Enlighted, GoTo Room, IQBoard, Komstadt, Logitech, Microsoft, Poly, Scenariio, Smart Systems (Smarthomes Chattanooga), Tecinteraca, Bloomfire, Callidus Software Inc., Chadha Software Technologies, ComAround, Computer Sciences Corporation (APQC), EduBrite Systems, EGain, Ernst & Young, IBM Global Services, Igloo, KMS Lighthouse, Knosys, Moxie Software, Open Text Corporation, ProProfs, Right Answers, Transversal, Yonyx, Glean, IntraFind, TIS Control, Vox Audio Visual, Webex, Yealink and other key market players.
The global Enterprise Knowledge Management System market size will reach USD million in 2030, growing at a CAGR of % during the analysis period.
As the global economy recovers in 2021 and the supply of the industrial chain improves, the Enterprise Knowledge Management System market will undergo major changes. According to the latest research, the market size of the Enterprise Knowledge Management System industry in 2021 will increase by USD million compared to 2020, with a growth rate of %.
Request a free sample of this strategic report: https://reportocean.com/industry-verticals/sample-request?report_id=AR9965
The global Enterprise Knowledge Management System industry report provides top-notch qualitative and quantitative information, including market size (2017-2021 value and 2022 forecast). The report also contains descriptions of key players, including key financial indicators and market competitive pressure analysis.
The report also assesses key opportunities in the market and outlines the factors that are and will drive the growth of the industry. Taking into account previous growth patterns, growth drivers, and current and future trends, we also forecast the overall growth of the global Enterprise Knowledge Management System market during the next few years.
The recent analysis by Report Ocean on the global Enterprise Knowledge Management System Market Report 2021 revolves around various aspects of the market, including characteristics, size and growth, segmentation, regional and country breakdowns, competitive landscape, market shares, trends and strategies. It also includes the impact of the COVID-19 outbreak, accompanied by traces of historic events. The study highlights projected opportunities, sales and revenue by region and segment. Apart from that, it also documents other subjects, such as manufacturing cost analysis and the industrial chain. For better demonstration, it presents the data in carefully crafted graphs, tables, and bar and pie charts.
Get a report on the Enterprise Knowledge Management System Market (including full TOC, 100+ tables and figures, and charts), covering precise information on the pre- and post-COVID-19 market outbreak by region.
Key Segments Studied in the Global Enterprise Knowledge Management System Market
Our tailormade report can help companies and investors make efficient strategic moves by exploring the crucial information on market size, business trends, industry structure, market share, and market predictions.
Apart from the general projections, our report stands out because it includes thoroughly studied variables, such as the COVID-19 containment status, the recovery of the end-use market, and the recovery timeline for 2020/2021.
Analysis on COVID-19 Outbreak Impact Include:
In light of COVID-19, the report includes a range of factors that impacted the market. It also discusses the trends. Based on the upstream and downstream markets, the report precisely covers all factors, including an analysis of the supply chain, consumer behavior, demand, etc. Our report also describes how vigorously COVID-19 has affected diverse regions and significant nations.
For more information or any queries, email email@example.com
Each report by Report Ocean contains more than 100 pages, specifically crafted with precise tables, charts, and an engaging narrative. These tailor-made reports deliver vast information on the market with high accuracy. The report encompasses: micro and macro analysis, competitive landscape, regional dynamics, operational landscape, legal setup and regulatory frameworks, market sizing and structuring, profitability and cost analysis, demographic profiling and addressable market, existing marketing strategies in the market, segmentation analysis of the market, best practice, gap analysis, leading market players, benchmarking, and future market trends and opportunities.
Geographical Breakdown: The regional section of the report analyses the market on the basis of region and national breakdowns, including size estimations and accurate data on previous and future growth. It also mentions the effects and the estimated course of COVID-19 recovery for all geographical areas. The report gives the outlook of emerging market trends and the factors driving the growth of the dominating region, to provide readers an outlook of prevailing trends and to help in decision making.
Nations:Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, China, Colombia, Czech Republic, Denmark, Egypt, Finland, France, Germany, Hong Kong, India, Indonesia, Ireland, Israel, Italy, Japan, Malaysia, Mexico, Netherlands, New Zealand, Nigeria, Norway, Peru, Philippines, Poland, Portugal, Romania, Russia, Saudi Arabia, Singapore, South Africa, South Korea, Spain, Sweden, Switzerland, Thailand, Turkey, UAE, UK, USA, Venezuela, Vietnam
Thoroughly described qualitative COVID-19 outbreak impact, including identification and investigation of: market structure, growth drivers, restraints and challenges, emerging product trends and market opportunities, and Porter’s Five Forces. The report also inspects the financial standing of the leading companies, which includes gross profit, revenue generation, sales volume, sales revenue, manufacturing cost, individual growth rate, and other financial ratios. The report basically gives information about market trends, growth factors, limitations, opportunities, challenges, future forecasts, and the prominent and other key market players.
Key questions answered: This study documents the effect of the COVID-19 outbreak. Our professionally crafted report contains precise responses and pinpoints excellent opportunities for investors to make new investments. It also suggests superior market plan trajectories along with a comprehensive analysis of current market infrastructures, prevailing challenges, and opportunities. To help companies design their strategies, this report mentions information about end-consumer target groups and their potential operational volumes, along with the potential regions and segments to target and the benefits and limitations of entering the market. Any market’s robust growth is derived from its driving forces, challenges, key suppliers, and key industry trends, all of which are thoroughly covered in our report. Apart from that, the accuracy of the data is supported by the effective SWOT analysis incorporated in the study.
A section of the report is dedicated to the details related to import and export, key players, production, and revenue, on the basis of the regional markets. The report is wrapped with information about key manufacturers, key market segments, the scope of products, years considered, and study objectives.
It also guides readers through segmentation analysis based on product type, application, end-users, etc. Apart from that, the study encompasses a SWOT analysis of each player along with their product offerings, production, value, capacity, etc.
List of Factors Covered in the Report are:
Major Strategic Developments: The report abides by quality and quantity. It covers the major strategic market developments, including R&D, M&A, agreements, new products launch, collaborations, partnerships, joint ventures, and geographical expansion, accompanied by a list of the prominent industry players thriving in the market on a national and international level.
Key Market Features:
Major subjects like revenue, capacity, price, rate, production rate, gross production, capacity utilization, consumption, cost, CAGR, import/export, supply/demand, market share, and gross margin are all assessed in the research and mentioned in the study. It also documents a thorough analysis of the most important market factors and their most recent developments, combined with the pertinent market segments and sub-segments.
List of Highlights & Approach
The report is made using a variety of efficient analytical methodologies that offer readers in-depth research and evaluation of the leading market players and comprehensive insight into their standing within the industry. Analytical techniques, such as Porter’s five forces analysis, feasibility studies, SWOT analyses, and ROI analyses, are used to examine the development of the major market players.
Points Covered in the Enterprise Knowledge Management System Market Report:
........and view more in complete table of Contents
Thank you for reading; we also provide a chapter-by-chapter report or a report based on region, such as North America, Europe, or Asia.
About Report Ocean:
We are the best market research reports provider in the industry. Report Ocean believes in providing quality reports to clients to meet the top line and bottom line goals which will boost your market share in today's competitive environment. Report Ocean is a 'one-stop solution' for individuals, organizations, and industries that are looking for innovative market research reports.
Get in Touch with Us:
Address: 500 N Michigan Ave, Suite 600, Chicago, Illinois 60611 - UNITED STATES
Tel:+1 888 212 3539 (US - TOLL FREE)
By Jamie Wilson, MD & Founder, Cryptoloc Technology Group
As our world gets smaller, and our systems for sharing information become increasingly interconnected, breaches are becoming an inevitability. It’s no longer a matter of if, but when, your data will come under attack – but do you have any idea how precious your data actually is?
The criminals who steal data – whether for the purpose of blackmail, identity theft, extortion or even espionage – are finding themselves competing in an increasingly crowded marketplace. Over the course of the global coronavirus pandemic, as the lines between our personal and professional lives and devices blurred like never before and ransomware proliferated, hackers became more active and empowered than ever.
According to Privacy Affairs’ latest Dark Web Price Index, the stolen data market grew significantly larger in both volume and variety over the last year, with more credit card data, personal information and documents on offer.
As the supply of stolen data has grown, prices for each individual piece of data have plummeted. Hacked credit card details that would have sold for US$240 in 2021 are going for US$120 in 2022, for instance, and stolen online banking logins are down from US$120 to US$65.
But this hasn’t discouraged cybercriminals. Instead, dark web sites have begun resorting to traditional marketing tactics like two-for-one discounts on stolen data, creating a bulk sales mentality that places an even greater imperative on cybercrime cartels to amass large quantities of data.
This makes it even more likely that your data will be stolen, because even if your organisation isn’t specifically targeted, you could be caught up in an increasingly common smash-and-grab raid – like the attack on Microsoft that exposed around a quarter of a million email systems last year.
And while the value of each piece of data on the dark web is decreasing for cybercriminals, cyber attacks are just getting costlier for the businesses the data is stolen from.
Not sure how much your data is worth? The exact answer is impossible to quantify definitively, as it will change from one business and one piece of data to another, but it’s clear that having your data stolen can have devastating consequences.
According to the Cost of a Data Breach Report 2021 from IBM and Ponemon, which studied the impacts of 537 real breaches across 17 countries and regions, the cost of a data breach sits at US$161 per record on average, a 10.3 per cent increase from 2020 to 2021.
For a personally identifiable piece of customer data, the cost goes up to US$180 per record. Not only is this the costliest type of record, it’s also the most commonly compromised, appearing in 44 per cent of all breaches in the study.
For a personally identifiable piece of employee data, the cost sits at US$176 per record. Intellectual property costs US$169 per record, while anonymised customer data will set you back US$157 per record.
But it’s extremely unlikely that a cybercriminal would go to the effort of hacking your business for one piece of data. In that sense, it’s more instructive to look at the average cost of a data breach in total – which currently sits at a staggering US$4.24M.
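The per-record figures above can be combined into a rough exposure estimate. A quick back-of-the-envelope calculation, using the report’s quoted per-record costs with an invented record mix (real breach costs also include detection, notification, and lost business, so a sum like this understates the total):

```python
# Per-record costs quoted from the IBM/Ponemon 2021 report (US$).
PER_RECORD_COST = {
    "customer_pii": 180,
    "employee_pii": 176,
    "intellectual_property": 169,
    "anonymised_customer_data": 157,
}

# Hypothetical mix of records exposed in a single mid-sized breach.
exposed_records = {
    "customer_pii": 12_000,
    "employee_pii": 1_500,
    "intellectual_property": 300,
}

direct_record_cost = sum(
    PER_RECORD_COST[kind] * count for kind, count in exposed_records.items()
)
print(f"Estimated direct record cost: US${direct_record_cost:,}")
```

Even this modest hypothetical mix runs to roughly US$2.5M in direct record costs alone, before the indirect costs discussed below.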
For ransomware breaches, in which cybercriminals encrypt files on a device and demand a ransom in exchange for their decryption, the average cost goes up to US$4.62M, while data breaches caused by business email compromise have an average cost of US$5.01M.
Breaches are costliest in the heavily regulated healthcare industry (US$9.23M) – a logical outcome, given the heightened sensitivity of medical records. By comparison, the ‘cheapest’ breaches are in less regulated industries such as hospitality (US$3.03M).
Mega breaches involving at least 50 million records were excluded from the study to avoid blowing up the average, but a separate section of the report noted that these types of attacks cost 100 times more than the average breach.
The report found the average breach takes 287 days to identify and contain, with the cost increasing the longer the breach remains unidentified. So when it comes to cybercrime, time really is money.
IBM and Ponemon broke the average cost of a breach up into four broad categories – detection and escalation (29 per cent), notification (6 per cent), post-breach response (27 per cent) and lost business cost (38 per cent). Lost business costs include business disruption and revenue losses from system downtime; the cost of lost customers; reputation losses; and diminished goodwill.
A 2019 Deloitte report determined that up to 90 per cent of the total costs in a cyberattack occur beneath the surface – that the disruption to a business’ operations, as well as insurance premium increases, credit rating impact, loss of customer relationships and brand devaluation are the real killers in the long run.
It can take time for the true impacts of a breach to reveal themselves. In 2021, the National Australia Bank revealed it had paid $686,878 in compensation to customers as the result of a 2019 data breach, which led to the personal account details of about 13,000 customers being uploaded to the dark web.
The costs included the reissuance of government identification documents, as well as subscriptions to independent, enhanced fraud detection services for the affected customers. But the bank also had to hire a team of cyber-intelligence experts to investigate the breach, the cost of which remains unknown.
The IBM and Ponemon report confirms that the costs of a data breach won’t all be felt straight away. While the bulk of an average data breach’s cost (53 per cent) is incurred in the first year, another 31 per cent is incurred in the second year, and the final 16 per cent is incurred more than two years after the event.
And with the recent rise of double extortion – in which cyber criminals not only take control of a system and demand payment for its return, but also threaten to leak the data they’ve stolen unless they receive a separate payment – we’re likely to see data breaches exact a heavy toll for even longer time periods moving forward.
Data breaches are becoming costlier and more common, so it’s more important than ever to ensure your data is protected.
Many businesses are turning to cyber insurance to protect themselves. Cyber insurance typically covers costs related to the loss of data, as well as fines and penalties imposed by regulators, public relations costs, and compensation to third parties for failure to protect their data.
But as breaches become a virtual inevitability and claims for catastrophic cyberattacks become more common, insurers are getting cold feet. Premiums are skyrocketing, and insurers are limiting their coverage, with some capping their coverage at about half of what they used to offer and others refusing to offer cyber insurance policies altogether.
Regardless, cyber insurance is not a cyber security policy. Even the most favourable cyber insurance policy doesn’t prevent breaches, but merely attempts to mitigate the impact after the horse has already bolted.
The best approach is to educate your employees and other members of your organisation about cyber security, and put the appropriate controls and best practices in place, including using multi-factor authentication, implementing zero trust policies, and backing up and encrypting data.
The IBM and Ponemon report found that the use of strong encryption, at least AES-256, at rest and in transit, was a top mitigating cost factor. Organisations using strong encryption had an average breach cost that was 29.4 per cent lower than those using low-standard or no encryption.
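The principle is that encrypted records are worthless to a thief without the key. The toy sketch below illustrates that round-trip property with a home-made HMAC-based stream cipher built from the Python standard library; it is for illustration only and is not a substitute for the vetted AES-256 implementations the report refers to:

```python
import hashlib
import hmac
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR `data` against an HMAC-SHA256 keystream (CTR-style).
    Toy cipher for illustration only: real systems should use a
    vetted AES-256 mode (e.g. AES-GCM) from a maintained library."""
    out = bytearray()
    for counter in range((len(data) + 31) // 32):
        # Each 32-byte keystream block is keyed by the secret key and
        # parameterised by a per-record nonce plus a block counter.
        pad = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                       hashlib.sha256).digest()
        chunk = data[counter * 32:(counter + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

key = secrets.token_bytes(32)       # 256-bit key, as in "AES-256"
nonce = secrets.token_bytes(16)     # must be unique per record
record = b"card=4111111111111111;cvv=123"

ciphertext = keystream_xor(key, nonce, record)
recovered = keystream_xor(key, nonce, ciphertext)  # same key round-trips
print(ciphertext != record, recovered == record)
```

Without the key, the ciphertext is indistinguishable from random bytes to an attacker, which is exactly why encrypted records fetch nothing on the stolen-data market.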
When data is safely and securely encrypted, any files a cybercriminal gains access to will be worthless to them without an encryption key. My business, Cryptoloc, has taken this principle even further with our patented three-key encryption technology, which combines three different encryption algorithms into one unique multilayer process.
Built for a world without perimeters, our ISO-certified technology has been deployed across multiple products, including Cryptoloc Secure2Client, which enables users to send fully encrypted documents directly from Microsoft Outlook.
We’ve recently made Secure2Client available on the Salesforce AppExchange, so that marketing, sales, commerce, service and IT teams using Salesforce around the world can encrypt the reports they send to clients and third parties that are sensitive or confidential in nature.
This protects Salesforce users from the potentially catastrophic ramifications of a data breach, while allowing them to continue using the existing application that their business is built around.
We’ve also rolled out a new Ransomware Recovery capability that empowers users to protect and restore their data in real-time in the event of an attack, ensuring they never have to pay a costly ransom for the return of their data.
With Ransomware Recovery, every version of every file a user stores in the Cloud is automatically saved. If they suspect they’ve been the victim of a ransomware attack, they can simply lock down their Cloud temporarily to stop the spread of malware; view their files’ audit trails to determine when the attack occurred; roll back their data to the point before it was corrupted; and then unlock their Cloud.
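That version-every-write flow can be modelled as a small versioned store with point-in-time rollback. The API below is hypothetical (it is not Cryptoloc’s actual interface), but it shows the mechanics: every write appends a version, the store can be locked during incident response, and rollback restores the last version written before the corruption point:

```python
import itertools

class VersionedStore:
    """Minimal sketch of version-every-write storage with lockdown,
    audit trail, and point-in-time rollback (hypothetical API)."""

    def __init__(self):
        self._clock = itertools.count(1)   # monotonically increasing version ids
        self._versions = {}                # path -> [(version, data), ...]
        self.locked = False

    def write(self, path, data):
        if self.locked:
            raise PermissionError("store is locked for incident response")
        self._versions.setdefault(path, []).append((next(self._clock), data))

    def read(self, path):
        return self._versions[path][-1][1]     # newest version wins

    def audit_trail(self, path):
        return [version for version, _ in self._versions[path]]

    def rollback(self, path, before_version):
        # Restore the newest version written before the corruption point.
        good = [d for v, d in self._versions[path] if v < before_version]
        self._versions[path].append((next(self._clock), good[-1]))

store = VersionedStore()
store.write("ledger.xlsx", b"q1 figures")
store.write("ledger.xlsx", b"q1+q2 figures")
store.write("ledger.xlsx", b"ENCRYPTED-BY-RANSOMWARE")  # the malicious write
store.locked = True                    # lock down to stop the spread
store.locked = False                   # ...then unlock once triaged
store.rollback("ledger.xlsx", before_version=3)  # version 3 was the attack
print(store.read("ledger.xlsx"))
```

After the rollback, reads return the last clean version while the full audit trail, including the malicious write, is preserved for investigation.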
This ensures users can recover their data as quickly and effectively as possible, minimising costly disruptions to their business, removing the need for a lengthy and expensive investigation, and ensuring they never have to pay a cent to a cybercriminal to get back the data that’s rightfully theirs.
Yes, cyber attacks are inevitable – but victimhood isn’t. If you take the right precautions, you can prevent costly breaches and maintain control of your precious data.
About the Author
Jamie Wilson is the founder and chairman of Cryptoloc, recognized by Forbes as one of the 20 Best Cybersecurity Startups to watch in 2020. Headquartered in Brisbane, Australia, with offices in Japan, the US, South Africa and the UK, Cryptoloc has developed one of the world’s strongest encryption technologies and cybersecurity platforms, ensuring clients have complete control over their data. Jamie can be reached online at www.linkedin.com/in/jamie-wilson-07424a68 and at www.cryptoloc.com
Debdoot Mukherjee is the Chief Data Scientist and Head of AI at Meesho, the Indian social commerce platform at the forefront of the boundaryless workplace model that became the norm in the aftermath of the Covid-19 pandemic. After completing his postgraduate degree at IIT Delhi, Mukherjee began his career in IBM’s research division, where he built expertise in information retrieval and machine learning. He went on to work in impactful roles at companies including Hike, Myntra and ShareChat before leading the AI and data science division at Meesho.
In an exclusive interview with Analytics India Magazine, Debdoot Mukherjee opened up about his journey into data science, machine learning and everything AI.
AIM: What attracted you to this field?
Debdoot: My first brush with machine learning was during my master’s, where I took a few courses on the subject. As I progressed, my interest in the field kept growing. After graduating, I joined IBM Research, where I got a chance to go deep into new technologies. It became routine: machine learning kept turning out to be a great tool to apply in every project. In the last decade, progress in AI/ML has exceeded everyone’s expectations. Later, when I moved to Myntra, I got the opportunity to apply all the techniques I’d learnt and achieve significant results. That’s what keeps me going in this field.
AIM: Would you say holding a degree in data science/AI is enough?
Debdoot: Machine learning is a field where theoretical knowledge is very important. Choosing the right state-of-the-art ML technique and knowing how to apply it to a problem statement require real clarity on the theoretical foundations of the subject. So, from that standpoint, the degree itself is not what matters; what is of paramount importance is that the foundations are clear, which usually comes from proper formal training. After gaining theoretical knowledge, the next step is to understand the practical applications, which comes from hands-on projects, hackathons and the like. Practising these techniques in industry or academia provides a broad perspective on applications, which leads to out-of-the-box solutions.
AIM: With so many patents to your name, how were you able to come up with such ideas?
Debdoot: It is all part and parcel of working in a research lab. The goal of researchers is to look for and develop ideas that have a significant impact. One is also expected to drive this impact in both the business world and the academic world. Over time, one does develop a playbook for converting ideas into patents.
AIM: How does Meesho leverage AI/ML in its business?
Debdoot: Meesho’s mission is to use AI/ML as an enabler across all pillars of the e-commerce platform. There are many applications on the demand side, where consumers discover products – be it the feed a user lands on, the category listing pages they open, or the search interface itself. AI is integrated into features like computer vision, virtual assistants and search enablement to strengthen the user experience. We are also working on pre-emption: with time and a history of user preferences, we will be able to recommend products a user will need in the future. A lot of this is serendipitous discovery, where, based on a deep understanding of the user, we can recommend products even when there is no clear shopping intent in that category. On the supply side, the scenario is not that different: many applications are led by recommendation systems and ranking models across a variety of touch points.
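The recommendation systems Mukherjee mentions are typically built on signals like co-purchase patterns. The sketch below is a generic item-co-occurrence recommender on invented data, not Meesho’s actual system, but it shows the basic mechanics of ranking products by how often they are bought together:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets (illustrative data only).
baskets = [
    {"saree", "bangles"},
    {"saree", "bangles", "sandals"},
    {"sandals", "kurta"},
    {"saree", "kurta"},
]

# Count how often each ordered pair of products appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(product, k=2):
    """Rank other products by co-occurrence with the one just viewed."""
    scores = {b: n for (a, b), n in co_counts.items() if a == product}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("saree"))  # "bangles" ranks first: it co-occurs most with "saree"
```

Production systems replace raw counts with learned embeddings and ranking models, but the shape of the problem – score candidates against context, then rank – is the same.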
AIM: Your vision for the future?
Debdoot: In this day and age, AI has become a prerequisite for a successful business, as a major part of most business processes has AI/ML techniques integrated into it. However, there are many industries where AI adoption is still in its infancy. Artificial intelligence has the power to transform not only businesses but society at large. AI does well with large, structured data sets, but it struggles to replicate intuition. Natural language processing, object detection and image generation pose challenges that research institutes and scientists are still working to crack. My vision is for AI/ML models to create solutions that assist humans in various tasks, not replace them.
AIM: What is your point of view on AGI? Have we achieved it yet?
Debdoot: I’m pretty sure that we haven’t achieved it yet. However, sentience in its essence is fairly subjective, like emotion, perception and so on. AI has not reached the level of human intelligence as a lot of these machines still fall short in comparison to the human brain. Keeping that in mind, the next phase of development is mimicking the workings of the human brain. The metric might not be the same and for most cases, AI requires a lot of data, pre-conditions, and such. One must look into nature for answers. The solution is natural and causal. So far, the end result has been very good. But, we need to fundamentally change the approach and then you can think of getting closer to AGI.
RESEARCH TRIANGLE PARK – IBM Chair and CEO Arvind Krishna took an upbeat view of the tech giant’s prospects in a conference call with analysts Monday, after an earnings report that topped expectations despite a possible recession and mounting inflation.
“If I look at our pipelines, our pipelines are remaining pretty healthy. So, I would tell you that right now, what we are seeing is that the second half at this point, looks pretty consistent to the first half,” Krishna said, according to a transcript published by The Motley Fool.
“When we look at our pipelines, whether it’s in Red Hat or mainframe software or Automation, Data and AI, Security, and by geography, this is a little bit different, which is why you’ve heard me often say I’m a bit more optimistic than many of my peers, both within the industry and across the board. We see that technology, and [CFO] Jim [Kavanaugh] has also said this, is deflationary. So, in an inflationary environment, when clients take our technology, deploy it and leverage our consulting, it acts as a counterbalance to all of the inflation and all of the labor demographics that people are facing all over the globe.
“So, that is the reason on software. Second, consulting is correlated to the economic cycle, and maybe we see much less of that this time around because of the nature of our consulting. If you look at it, a lot of our consulting is around deploying back-office applications, critical applications, supply chain resilience, worrying about cash conversion, worrying about optimization of the costs within our clients. Those tend to get more attention actually in, I’ll call it, at least a slight down cycle.
“Third, look, consulting is very labor-based. Jim talked a lot about the demand and the supply. But to be completely clear, in a business where you do hire tens of thousands of people because of the scale of it, you do churn in the neighborhood of tens of thousands each year.
“That gives you an automatic way to hit a pause on some of the profit controls, because if you don’t see the demand coming, you’re going to slow down your supply side. … On M&A, actually, I’ll say it before Jim will. We have said that our model is about a point to a point and a half each year based on M&A.
“And if you look at last year, that was pretty consistent. If you look at this year, look, the year is not over, but you should expect it to stay in that range. And it’s a mix between consulting acquisitions and software acquisitions, and the multiples that we’ve been getting them at still imply that range. So, I think we’ll provide you that answer, and we expect it to stay there.”
Read the full transcript online.