Undoubtedly, cloud computing is a mainstay in the enterprise.
Still, the increased adoption of hybrid and public clouds, combined with continued security breaches from both inside and outside forces, leaves many with lingering concerns about cloud security. And rightly so.
This makes it all the more critical to have advanced, 21st-century privacy safeguards in place – even as this has often proved problematic in the security space.
“At a high level, cybersecurity has largely taken an incremental form, leveraging existing traditional tools in response to new attacks,” said Eyal Moshe, CEO of HUB Security.
But this is a “costly and unwinnable” endeavor, he pointed out, given the “determination and resources of malicious players” who can reap massive profits. Therefore, a “security paradigm shift is needed that incorporates traditional defenses but also simultaneously assumes they will not work and that every system is always vulnerable.”
The solution, he and others say: Confidential computing, an emerging cloud computing technology that can isolate and protect data while it is being processed.
Before an app can process data, the data must be decrypted in memory. This leaves it unencrypted – and therefore exposed – just before, during, and just after processing. Hackers can access it there, encryption-free, and it is also vulnerable to root-user compromise (when administrative privileges are given to the wrong person).
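The exposure window described above can be sketched in a few lines of Python. This is a deliberately simplified illustration – the XOR "cipher" and the account string below are invented for the example and are not a real encryption scheme – but it shows why data normally has to sit decrypted in ordinary memory while an application works on it:

```python
# Toy illustration: data can be encrypted at rest and in transit, but an
# app must hold plaintext in memory to compute on it -- the exposure
# window that confidential computing aims to close.
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR transform, used purely to illustrate the lifecycle."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)
ciphertext = xor_bytes(b"account=12345;balance=999", key)  # data at rest

# To process the data, the app must first decrypt it into memory:
plaintext = xor_bytes(ciphertext, key)  # decrypted -- now exposed
result = b"balance" in plaintext        # processing happens on plaintext
plaintext = None                        # exposure ends only after use

print(result)  # True
```

A trusted execution environment addresses this by performing the decryption and processing inside hardware-isolated memory, so the plaintext never appears where other software – or an administrator – can read it.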
“While there have been technologies to protect data in transit or stored data, maintaining security while data is in use has been a particular challenge,” explained Justin Lam, data security research analyst with S&P Global Market Intelligence.
Confidential computing seeks to close this gap, providing cybersecurity for highly sensitive information that requires protection while it is in use. The process "helps to ensure that data remains confidential at all times in trusted environments that isolate data from internal and external threats," Lam explained.
By isolating data within a protected central processing unit (CPU) during processing, confidential computing makes the CPU's resources accessible only to specially authorized programming code – and invisible to "everything and anyone else." As a result, the data is undiscoverable by human users as well as cloud providers, other computer resources, hypervisors, virtual machines and the operating system itself.
This process is enabled through the use of a hardware-based architecture known as a trusted execution environment (TEE). Unauthorized entities cannot view, add, remove or otherwise alter data when it is within the TEE, which denies access attempts and cancels a computation if the system comes under attack.
As Moshe explained, even if computer infrastructure is compromised, “data should still be safe.”
“This involves a number of techniques of encryption, decryption and access controls so information is available only at the time needed, only for the specific user who has the necessary permissions within that secure enclave,” Moshe said.
Still, these enclaves are “not the only weapon in the arsenal.” “Ultra-secure firewalls” that monitor messages coming in and going out are combined with secure remote management, hardware security modules and multifactor authentication. Platforms embed access and approval policies in their own enclaves, including CPUs and/or GPUs for apps, Moshe said.
All told, this creates an accessibility and governance system that can be seamlessly customized without impeding performance, he said. And confidential computing protects against a wide range of threats, including software attacks, protocol attacks, cryptographic attacks, basic physical attacks and memory-dump attacks.
“Enterprises need to demonstrate maximum trustworthiness even when the data is in use,” said Lam, underscoring that this is particularly important when enterprises process sensitive data for another entity. “All parties benefit because the data is handled safely and remains confidential.”
The concept is rapidly gaining traction. As predicted by Everest Group, a “best-case scenario” is that confidential computing will achieve a market value of around $54 billion by 2026, representing a compound annual growth rate (CAGR) of 90% to 95%. The global research firm emphasizes that “it is, of course, a nascent market, so big growth figures are to be expected.”
According to an Everest Group report, all segments – including hardware, software and services – are expected to grow. This exponential expansion is being fueled by enterprise cloud and security initiatives and increasing regulation, particularly in privacy-sensitive industries including banking, finance and healthcare.
Confidential computing is a concept that has “moved quickly from research projects into fully deployed offerings across the industry,” said Rohit Badlaney, vice president of IBM Z Hybrid Cloud, and Hillery Hunter, vice president and CTO of IBM Cloud, in a blog post.
These include deployments from AMD, Intel, Google Cloud, Microsoft Azure, Amazon Web Services, Red Hat and IBM. Companies including Fortinet, Anjuna Security, Gradient Flow and HUB Security also specialize in confidential computing solutions.
Everest Group points to several use cases for confidential computing, including collaborative analytics for anti-money laundering and fraud detection, research and analytics on patient data and drug discovery, and treatment modeling and security for IoT devices.
“Data protection is only as strong as the weakest link in end-to-end defense – meaning that data protection should be holistic,” said Badlaney and Hunter of IBM, which in 2018 released its tools IBM Hyper Protect Services and IBM Cloud Data Shield. “Companies of all sizes require a dynamic and evolving approach to security focused on the long-term protection of data.”
Furthermore, to help facilitate widespread use, the Linux Foundation announced the Confidential Computing Consortium in December 2019. The project community is dedicated to defining and accelerating confidential computing adoption and establishing technologies and open standards for TEE. The project brings together hardware vendors, developers and cloud hosts and includes commitments and contributions from member organizations and open-source projects, according to its website.
“One of the most exciting things about Confidential Computing is that although in early stages, some of the biggest names in technology are already working in the space,” lauds a report from Futurum Research. “Even better, they are partnering and working to use their powers for good.”
Enterprises always want to ensure the security of their data, particularly before transitioning it to a cloud environment. Or, as a blog post from cybersecurity company Fortinet describes it, essentially “trusting in an unseen technology.”
“Confidential computing aims to supply a level of security that acknowledges the fact that organizations are no longer in a position to move freely within their own space,” said Moshe.
Company data centers can be breached by external parties and are also susceptible to insider threats (whether through maliciousness or negligence). With public clouds, meanwhile, common standards can’t always be assured or tested against sophisticated attacks.
Perimeters that provide protection are increasingly easy to breach, Moshe pointed out, especially when web services serve so many clients all at once. Then there’s the increased use of edge computing, which brings with it “massive real-time data processing requirements,” particularly in highly dispersed verticals such as retail and manufacturing.
Lam agreed that confidential computing will be increasingly important going forward to demonstrate regulatory compliance and security best practices. It “creates and attests” trusted environments for programs to execute securely and for data to remain isolated.
“These trusted environments have more tangible importance, as overall cloud computing is increasingly abstracted in virtualized or serverless platforms,” Lam said.
Still, enterprises should not consider confidential computing a cure-all.
Given the growing dynamics and prevalence of the cloud, IoT, edge and 5G, “confidential computing environments will have to be resilient to rapid changes in trust and demand,” he said.
Confidential computing may require future hardware availability and improvements at “significant scale,” he said. And, as is the case with all other security tools, care must be taken to secure other components, policies, identities and processes.
Ultimately, Lam pointed out, like any other security tool, “it’s not a complete or foolproof solution.”
I've written many times about having joined the investment industry in 1969 when the "Nifty Fifty" stocks were in full flower. My first employer, First National City Bank, as well as many of the other "money-center banks" (the leading investment managers of the day), were enthralled with these companies, with their powerful business models and flawless prospects. Sentiment surrounding their stocks was uniformly positive, and portfolio managers found great safety in numbers. For example, a common refrain at the time was "you can't be fired for buying IBM," the era's quintessential growth company.
I've also written extensively about the fate of these stocks. In 1973-74, the OPEC oil embargo and the resultant recession took the S&P 500 Index down a total of 47%. And many of the Nifty Fifty, for which it had been thought that "no price was too high," did far worse, falling from peak p/e ratios of 60-90 to trough multiples in the single digits. Thus, their devotees lost almost all of their money in the stocks of companies that "everyone knew" were great. This was my first chance to see what can happen to assets that are on what I call "the pedestal of popularity."
In 1978, I was asked to move to the bank's bond department to start funds in convertible bonds and, shortly thereafter, high yield bonds. Now I was investing in securities most fiduciaries considered "uninvestable" and which practically no one knew about, cared about, or deemed desirable... and I was making money steadily and safely. I quickly recognized that my strong performance resulted in large part from precisely that fact: I was investing in securities that practically no one knew about, cared about, or deemed desirable. This brought home the key money-making lesson of the Efficient Market Hypothesis, which I had been introduced to at the University of Chicago Business School: If you seek superior investment results, you have to invest in things that others haven't flocked to and caused to be fully valued. In other words, you have to do something different.
In 2006, I wrote a memo called Dare to Be Great. It was mostly about having high aspirations, and it included a rant against conformity and investment bureaucracy, as well as an assertion that the route to superior returns by necessity runs through unconventionality. The element of that memo that people still talk to me about is a simple two-by-two matrix:
| | Conventional Behavior | Unconventional Behavior |
|---|---|---|
| Favorable Outcomes | Average good results | Above-average results |
| Unfavorable Outcomes | Average bad results | Below-average results |
Here's how I explained the situation:
Of course, it's not easy and clear-cut, but I think it's the general situation. If your behavior and that of your managers are conventional, you're likely to get conventional results - either good or bad. Only if the behavior is unconventional is your performance likely to be unconventional... and only if the judgments are superior is your performance likely to be above average.
The consensus opinion of market participants is baked into market prices. Thus, if investors lack insight superior to the average of the people who make up the consensus, they should expect average risk-adjusted performance.
Many years have passed since I wrote that memo, and the investing world has gotten a lot more sophisticated, but the message conveyed by the matrix and the accompanying explanation remains unchanged. Talk about simple - in the memo, I reduced the issue to a single sentence: "This just in: You can't take the same actions as everyone else and expect to outperform."
The best way to understand this idea is by thinking through a highly logical and almost mathematical process (greatly simplified, as usual, for illustrative purposes):
- A certain (but unascertainable) number of dollars will be made over any given period by all investors collectively in an individual stock, a given market, or all markets taken together. That amount will be a function of (a) how companies or assets fare in fundamental terms (e.g., how their profits grow or decline) and (b) how people feel about those fundamentals and treat asset prices.
- On average, all investors will do average.
- If you're happy doing average, you can simply invest in a broad swath of the assets in question, buying some of each in proportion to its representation in the relevant universe or index. By engaging in average behavior in this way, you're guaranteed average performance. (Obviously, this is the idea behind index funds.)
- If you want to be above average, you have to depart from consensus behavior. You have to overweight some securities, asset classes, or markets and underweight others. In other words, you have to do something different.
- The challenge lies in the fact that (a) market prices are the result of everyone's collective thinking and (b) it's hard for any individual to consistently figure out when the consensus is wrong and an asset is priced too high or too low.
- Nevertheless, "active investors" place active bets in an effort to be above average:
  - Investor A decides stocks as a whole are too cheap, and he sells bonds in order to overweight stocks. Investor B thinks stocks are too expensive, so she moves to an underweighting by selling some of her stocks to Investor A and putting the proceeds into bonds.
  - Investor X decides a certain stock is too cheap and overweights it, buying from Investor Y, who thinks it's too expensive and therefore wants to underweight it.
It's essential to note that in each of the above cases, one investor is right and the other is wrong. Now go back to the first bullet point above: Since the total dollars earned by all investors collectively are fixed in amount, all active bets, taken together, constitute a zero-sum game (or negative-sum after commissions and other costs). The investor who is right earns an above-average return, and by definition, the one who's wrong earns a below-average return.
Thus, every active bet placed in the pursuit of above-average returns carries with it the risk of below-average returns. There's no way to make an active bet such that you'll win if it works but not lose if it doesn't. Financial innovations are often described as offering some version of this impossible bargain, but they invariably fail to live up to the hype.
The bottom line of the above is simple: You can't hope to earn above-average returns if you don't place active bets, but if your active bets are wrong, your return will be below average.
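The zero-sum arithmetic above can be checked with a toy calculation. The assets, weights, and returns below are invented purely for illustration: two active investors take offsetting bets whose holdings sum to the index weights, so their average return must equal the index return, before costs:

```python
# Sketch of the zero-sum logic: the total return earned by all investors
# is fixed, so active over/underweights net out to the market return.
# All numbers are invented for illustration.
market = {"A": 0.10, "B": -0.04}   # per-asset returns for the period
index_w = {"A": 0.5, "B": 0.5}     # index (consensus) weights

# Two active investors take opposite bets; their holdings average to the index.
inv1 = {"A": 0.7, "B": 0.3}        # overweights A
inv2 = {"A": 0.3, "B": 0.7}        # underweights A

def ret(weights):
    """Portfolio return given asset weights."""
    return sum(weights[k] * market[k] for k in weights)

index_ret = ret(index_w)
avg_active = (ret(inv1) + ret(inv2)) / 2
print(round(index_ret, 4), round(avg_active, 4))  # both 0.03: bets net to zero
```

Commissions and fees then subtract from both sides of every trade, which is why active bets taken together are a negative-sum game after costs.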
Investing strikes me as being very much like golf, where playing conditions and the performance of competitors can change from day to day, as can the placement of the holes. On some days, one approach to the course is appropriate, but on other days, different tactics are called for. To win, you have to either do a better job than others of selecting your approach or executing on it or both.
The same is true for investors. It's simple: If you hope to distinguish yourself in terms of performance, you have to depart from the pack. But, having departed, the difference will only be positive if your choice of strategies and tactics is correct and/or you're able to execute better.
In 2009, when Columbia Business School Publishing was considering whether to publish my book The Most Important Thing, they asked to see a sample chapter. As has often been my experience, I sat down and described a concept I hadn't previously written about or named. That description became the book's first chapter, addressing one of its most important topics: second-level thinking. It's certainly the concept from the book that people ask me about most often.
The idea of second-level thinking builds on what I wrote in Dare to Be Great. First, I repeated my view that success in investing means doing better than others. All active investors (and certainly money managers hoping to earn a living) are driven by the pursuit of superior returns.
But that universality also makes beating the market a difficult task. Millions of people are competing for each dollar of investment gain. Who'll get it? The person who's a step ahead. In some pursuits, getting up to the front of the pack means more schooling, more time in the gym or the library, better nutrition, more perspiration, greater stamina, or better equipment. But in investing, where these things count for less, it calls for more perceptive thinking... at what I call the second level.
The basic idea behind second-level thinking is easily summarized: In order to outperform, your thinking has to be different and better.
Remember, your goal in investing isn't to earn average returns; you want to do better than average. Thus, your thinking has to be better than that of others - both more powerful and at a higher level. Since other investors may be smart, well-informed, and highly computerized, you must find an edge they don't have. You must think of something they haven't thought of, see things they miss, or bring insight they don't possess. You have to react differently and behave differently. In short, being right may be a necessary condition for investment success, but it won't be sufficient. You have to be more right than others... which by definition means your thinking has to be different.
Having made the case, I went on to distinguish second-level thinkers from those who operate at the first level:
First-level thinking is simplistic and superficial, and just about everyone can do it (a bad sign for anything involving an attempt at superiority). All the first-level thinker needs is an opinion about the future, as in "The outlook for the company is favorable, meaning the stock will go up."
Second-level thinking is deep, complex, and convoluted. The second-level thinker takes a great many things into account:
- What is the range of likely future outcomes?
- What outcome do I think will occur?
- What's the probability I'm right?
- What does the consensus think?
- How does my expectation differ from the consensus?
- How does the current price for the asset comport with the consensus view of the future, and with mine?
- Is the consensus psychology that's incorporated in the price too bullish or bearish?
- What will happen to the asset's price if the consensus turns out to be right, and what if I'm right?
The difference in workload between first-level and second-level thinking is clearly massive, and the number of people capable of the latter is tiny compared to the number capable of the former.
First-level thinkers look for simple formulas and easy answers. Second-level thinkers know that success in investing is the antithesis of simple.
Speaking about difficulty reminds me of an important idea that arose in my discussions with my son Andrew during the pandemic (described in the memo Something of Value, published in January 2021). In the memo's extensive discussion of how efficient most markets have become in recent decades, Andrew makes a terrific point: "Readily available quantitative information with regard to the present cannot be the source of superior performance." After all, everyone has access to this type of information - with regard to public U.S. securities, that's the whole point of the SEC's Reg FD (for fair disclosure) - and nowadays all investors should know how to manipulate data and run screens.
So, then, how can investors who are intent on outperforming hope to reach their goal? As Andrew and I said on a podcast where we discussed Something of Value, they have to go beyond readily available quantitative information with regard to the present. Instead, their superiority has to come from an ability to:
- better understand the significance of the published numbers,
- better assess the qualitative aspects of the company, and/or
- better divine the future.
Obviously, none of these things can be determined with certainty, measured empirically, or processed using surefire formulas. Unlike present-day quantitative information, there's no source you can turn to for easy answers. They all come down to judgment or insight. Second-level thinkers who have better judgment are likely to achieve superior returns, and those who are less insightful are likely to generate inferior performance.
This all leads me back to something Charlie Munger told me around the time The Most Important Thing was published: "It's not supposed to be easy. Anyone who finds it easy is stupid." Anyone who thinks there's a formula for investing that guarantees success (and that they can possess it) clearly doesn't understand the complex, dynamic, and competitive nature of the investing process. The prize for superior investing can amount to a lot of money. In the highly competitive investment arena, it simply can't be easy to be the one who pockets the extra dollars.
There's a concept in the investing world that's closely related to being different: contrarianism. "The investment herd" refers to the masses of people (or institutions) that drive security prices one way or the other. It's their actions that take asset prices to bull market highs and sometimes bubbles and, in the other direction, to bear market territory and occasional crashes. At these extremes, which are invariably overdone, it's essential to act in a contrary fashion.
Joining in the swings described above causes people to own or buy assets at high prices and to sell or fail to buy at low prices. For this reason, it can be important to part company with the herd and behave in a way that's contrary to the actions of most others.
Contrarianism received its own chapter in The Most Important Thing. Here's how I set forth the logic:
- Markets swing dramatically, from bullish to bearish, and from overpriced to underpriced.
- Their movements are driven by the actions of "the crowd," "the herd," and "most people." Bull markets occur because more people want to buy than sell, or the buyers are more highly motivated than the sellers. The market rises as people switch from being sellers to being buyers, and as buyers become even more motivated and the sellers less so. (If buyers didn't predominate, the market wouldn't be rising.)
- Market extremes represent inflection points. These occur when bullishness or bearishness reaches a maximum. Figuratively speaking, a top occurs when the last person who will become a buyer does so. Since every buyer has joined the bullish herd by the time the top is reached, bullishness can go no further, and the market is as high as it can go. Buying or holding is dangerous.
- Since there's no one left to turn bullish, the market stops going up. And if the next day one person switches from buyer to seller, it will start to go down.
- So at the extremes, which are created by what "most people" believe, most people are wrong.
- Therefore, the key to investment success has to lie in doing the opposite: in diverging from the crowd. Those who recognize the errors that others make can profit enormously from contrarianism.
To sum up, if the extreme highs and lows are excessive and the result of the concerted, mistaken actions of most investors, then it's essential to leave the crowd and be a contrarian.
In his 2000 book, Pioneering Portfolio Management, David Swensen, the former chief investment officer of Yale University, explained why investing institutions are vulnerable to conformity with current consensus belief and why they should instead embrace contrarianism. (For more on Swensen's approach to investing, see "A Case in Point" below.) He also stressed the importance of building infrastructure that enables contrarianism to be employed successfully:
Unless institutions maintain contrarian positions through difficult times, the resulting damage imposes severe financial and reputational costs on the institution.
Casually researched, consensus-oriented investment positions provide little prospect for producing superior results in the intensely competitive investment management world.
Unfortunately, overcoming the tendency to follow the crowd, while necessary, proves insufficient to guarantee investment success... While courage to take a different path enhances chances for success, investors face likely failure unless a thoughtful set of investment principles undergirds the courage.
Before I leave the subject of contrarianism, I want to make something else very clear. First-level thinkers - to the extent they're interested in the concept of contrarianism - might believe contrarianism means doing the opposite of what most people are doing, so selling when the market rises and buying when it falls. But this overly simplistic definition of contrarianism is unlikely to be of much help to investors. Instead, the understanding of contrarianism itself has to take place at a second level.
In The Most Important Thing Illuminated, an annotated edition of my book, four professional investors and academics provided commentary on what I had written. My good friend Joel Greenblatt, an exceptional equity investor, provided a very apt observation regarding knee-jerk contrarianism: "... just because no one else will jump in front of a Mack truck barreling down the highway doesn't mean that you should." In other words, the mass of investors aren't wrong all the time, or wrong so dependably that it's always right to do the opposite of what they do. Rather, to be an effective contrarian, you have to figure out:
- what the herd is doing;
- why it's doing it;
- what's wrong, if anything, with what it's doing; and
- what you should do about it.
Like the second-level thought process laid out in bullet points on page four, intelligent contrarianism is deep and complex. It amounts to much more than simply doing the opposite of the crowd. Nevertheless, good investment decisions made at the best opportunities - at the most overdone market extremes - invariably include an element of contrarian thinking.
There are only so many subjects I find worth writing about, and since I know I'll never know all there is to know about them, I return to some from time to time and add to what I've written previously. Thus, in 2014, I followed up on 2006's Dare to Be Great with a memo creatively titled Dare to Be Great II. To begin, I repeated my insistence on the importance of being different:
If your portfolio looks like everyone else's, you may do well, or you may do poorly, but you can't do differently. And being different is absolutely essential if you want a chance at being superior...
I followed that with a discussion of the challenges associated with being different:
Most great investments begin in discomfort. The things most people feel good about - investments where the underlying premise is widely accepted, the recent performance has been positive, and the outlook is rosy - are unlikely to be available at bargain prices. Rather, bargains are usually found among things that are controversial, that people are pessimistic about, and that have been performing badly of late.
But then, perhaps most importantly, I took the idea a step further, moving from daring to be different to its natural corollary: daring to be wrong. Most investment books are about how to be right, not the possibility of being wrong. And yet, the would-be active investor must understand that every attempt at success by necessity carries with it the chance for failure. The two are absolutely inseparable, as I described at the top of page three.
In a market that is even moderately efficient, everything you do to depart from the consensus in pursuit of above-average returns has the potential to result in below-average returns if your departure turns out to be a mistake. Overweighting something versus underweighting it; concentrating versus diversifying; holding versus selling; hedging versus not hedging - these are all double-edged swords. You gain when you make the right choice and lose when you're wrong.
One of my favorite sayings came from a pit boss at a Las Vegas casino: "The more you bet, the more you win when you win." Absolutely inarguable. But the pit boss conveniently omitted the converse: "The more you bet, the more you lose when you lose." Clearly, those two ideas go together.
In a presentation I occasionally make to institutional clients, I employ PowerPoint animation to graphically portray the essence of this situation:
A bubble drops down, containing the words "Try to be right." That's what active investing is all about. But then a few more words show up in the bubble: "Run the risk of being wrong." The bottom line is that you simply can't do the former without also doing the latter. They're inextricably intertwined.
Then another bubble drops down, with the label "Can't lose." There are can't-lose strategies in investing. If you buy T-bills, you can't have a negative return. If you invest in an index fund, you can't underperform the index. But then two more words appear in the second bubble: "Can't win." People who use can't-lose strategies by necessity surrender the possibility of winning. T-bill investors can't earn more than the lowest of yields. Index fund investors can't outperform.
And that brings me to the assignment I imagine receiving from unenlightened clients: "Just apply the first set of words from each bubble: Try to outperform while employing can't-lose strategies." But that combination happens to be unavailable.
The above shows that active investing carries a cost that goes beyond commissions and management fees: heightened risk of inferior performance. Thus, every investor has to make a conscious decision about which course to follow. Pursue superior returns at the risk of coming in behind the pack, or hug the consensus position and ensure average performance. It should be clear that you can't hope to earn superior returns if you're unwilling to bear the risk of sub-par results.
And that brings me to my favorite fortune cookie, which I received with dessert 40-50 years ago. The message inside was simple: The cautious seldom err or write great poetry. In my college classes in Japanese studies, I learned about the koan, which Oxford Languages defines as "a paradoxical anecdote or riddle, used in Zen Buddhism to demonstrate the inadequacy of logical reasoning and to provoke enlightenment." I think of my fortune that way because it raises a question I find paradoxical and capable of leading to enlightenment.
But what does the fortune mean? That you should be cautious because cautious people seldom make mistakes? Or that you shouldn't be cautious, because cautious people rarely accomplish great things?
The fortune can be read both ways, and both conclusions seem reasonable. Thus the key question is, "Which meaning is right for you?" As an investor, do you like the idea of avoiding error, or would you rather try for superiority? Which path is more likely to lead to success as you define it, and which is more feasible for you? You can follow either path, but clearly not both simultaneously.
Thus, investors have to answer what should be a very basic question: Will you (a) strive to be above average, which costs money, is far from sure to work, and can result in your being below average, or (b) accept average performance - which helps you reduce those costs but also means you'll have to look on with envy as winners report mouth-watering successes. Here's how I put it in Dare to Be Great II:
How much emphasis should be put on diversifying, avoiding risk, and ensuring against below-pack performance, and how much on sacrificing these things in the hope of doing better?
And here's how I described some of the considerations:
Unconventional behavior is the only road to superior investment results, but it isn't for everyone. In addition to superior skill, successful investing requires the ability to look wrong for a while and survive some mistakes. Thus each person has to assess whether he's temperamentally equipped to do these things and whether his circumstances - in terms of employers, clients and the impact of other people's opinions - will allow it... when the chips are down and the early going makes him look wrong, as it invariably will.
You can't have it both ways. And as in so many aspects of investing, there's no right or wrong, only right or wrong for you.
The aforementioned David Swensen ran Yale University's endowment from 1985 until his passing in 2021, an unusual 36-year tenure. He was a true pioneer, developing what has come to be called "the Yale Model" or "the Endowment Model." He radically reduced Yale's holdings of public stocks and bonds and invested heavily in innovative, illiquid strategies such as hedge funds, venture capital, and private equity at a time when almost no other institutions were doing so. He identified managers in those fields who went on to generate superior results, several of whom earned investment fame. Yale's resulting performance beat almost all other endowments by miles. In addition, Swensen sent out into the endowment community a number of disciples who produced enviable performances for other institutions. Many endowments emulated Yale's approach, especially beginning around 2003-04 after these institutions had been punished by the bursting of the tech/Internet bubble. But few if any duplicated Yale's success. They did the same things, but not nearly as early or as well.
To sum up all the above, I'd say Swensen dared to be different. He did things others didn't do. He did these things long before most others picked up the thread. He did them to a degree that others didn't approach. And he did them with exceptional skill. What a great formula for outperformance.
In Pioneering Portfolio Management, Swensen provided a description of the challenge at the core of investing - especially institutional investing. It's one of the best paragraphs I've ever read and includes a two-word phrase that for me reads like sheer investment poetry. I've borrowed it countless times:
...Active management strategies demand uninstitutional behavior from institutions, creating a paradox that few can unravel. Establishing and maintaining an unconventional investment profile requires acceptance of uncomfortably idiosyncratic portfolios, which frequently appear downright imprudent in the eyes of conventional wisdom.
As with many great quotes, this one from Swensen says a great deal in just a few words. Let's parse its meaning:
Idiosyncratic - When all investors love something, it's likely their buying will render it highly-priced. When they hate it, their selling will probably cause it to become cheap. Thus, it's preferable to buy things most people hate and sell things most people love. Such behavior is by definition highly idiosyncratic (i.e., "eccentric," "quirky," or "peculiar").
Uncomfortable - The mass of investors take the positions they take for reasons they find convincing. We witness the same developments they do and are impacted by the same news. Yet, we realize that if we want to be above average, our reaction to those inputs - and thus our behavior - should in many instances be different from that of others. Regardless of the reasons, if millions of investors are doing A, it may be quite uncomfortable to do B.
And if we do bring ourselves to do B, our action is unlikely to prove correct right away. After we've sold a market darling because we think it's overvalued, its price probably won't start to drop the next day. Most of the time, the hot asset you've sold will keep rising for a while, and sometimes a good while. As John Maynard Keynes said, "Markets can remain irrational longer than you can remain solvent." And as the old adage goes, "Being too far ahead of your time is indistinguishable from being wrong." These two ideas are closely related to another great Keynes quote: "Worldly wisdom teaches that it is better for the reputation to fail conventionally than to succeed unconventionally." Departing from the mainstream can be embarrassing and painful.
Uninstitutional behavior from institutions - We all know what Swensen meant by the word "institutions": bureaucratic, hidebound, conservative, conventional, risk-averse, and ruled by consensus; in short, unlikely mavericks. In such settings, the cost of being different and wrong can be viewed as highly unacceptable relative to the potential benefit of being different and right. For the people involved, passing up profitable investments (errors of omission) poses far less risk than making investments that produce losses (errors of commission). Thus, investing entities that behave "institutionally" are, by their nature, highly unlikely to engage in idiosyncratic behavior.
Early in his time at Yale, Swensen chose to:
minimize holdings of public stocks;
vastly overweight strategies falling under the heading "alternative investments" (although he started to do so well before that label was created);
in so doing, commit a substantial portion of Yale's endowment to illiquid investments for which there was no market; and
hire managers without lengthy track records on the basis of what he perceived to be their investment acumen.
To use his words, these actions probably appeared "downright imprudent in the eyes of conventional wisdom." Swensen's behavior was certainly idiosyncratic and uninstitutional, but he understood that the only way to outperform was to risk being wrong, and he accepted that risk with great results.
To conclude, I want to describe a recent occurrence. In mid-June, we held the London edition of Oaktree's biannual conference, which followed on the heels of the Los Angeles version. My assigned topic at both conferences was the market environment. I faced a dilemma while preparing for the London conference because so much had changed between the two events: On May 19, the S&P 500 was at roughly 3,900, but by June 21 it was at approximately 3,750, down almost 4% in roughly a month. Here was my issue: Should I update my slides, which had become somewhat dated, or reuse the LA slides to deliver a consistent message to both audiences?
I decided to use the LA slides as the jumping-off point for a discussion of how much things had changed in that short period. The key segment of my London presentation consisted of a stream-of-consciousness discussion of the concerns of the day. I told the attendees that I pay close attention to the questions people ask most often at any given point in time, as the questions tell me what's on people's minds. And the questions I'm asked these days overwhelmingly surround:
the outlook for inflation,
the extent to which the Federal Reserve will raise interest rates to bring it under control, and
whether doing so will produce a soft landing or a recession (and if the latter, how bad).
Afterward, I wasn't completely happy with my remarks, so I rethought them over lunch. And when it was time to resume the program, I went up on stage for another two minutes. Here's what I said:
All the discussion surrounding inflation, rates, and recession falls under the same heading: the short term. And yet:
We can't know much about the short-term future (or, I should say, we can't dependably know more than the consensus).
If we have an opinion about the short term, we can't (or shouldn't) have much confidence in it.
If we reach a conclusion, there's not much we can do about it - most investors can't and won't meaningfully revamp their portfolios based on such opinions.
We really shouldn't care about the short term - after all, we're investors, not traders.
I think it's the last point that matters most. The question is whether you agree or not.
For example, when asked whether we're heading toward a recession, my usual answer is that whenever we're not in a recession, we're heading toward one. The question is when. I believe we'll always have cycles, which means recessions and recoveries will always lie ahead. Does the fact that there's a recession ahead mean we should reduce our investments or alter our portfolio allocation? I don't think so. Since 1920, there have been 17 recessions as well as one Great Depression, a World War and several smaller wars, multiple periods of worry about global cataclysm, and now a pandemic. And yet, as I mentioned in my January memo, Selling Out, the S&P 500 has returned about 10½% a year on average over that century-plus. Would investors have improved their performance by getting in and out of the market to avoid those problem spots... or would doing so have diminished it? Ever since I quoted Bill Miller in that memo, I've been impressed by his formulation that "it's time, not timing" that leads to real wealth accumulation. Thus, most investors would be better off ignoring short-term considerations if they want to enjoy the benefits of long-term compounding.
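The power of that "time, not timing" observation comes straight from the arithmetic of compounding. As a back-of-the-envelope check, the following sketch uses the memo's roughly 10½% average annual figure; everything else is simple illustration, not investment data:

```python
# Illustrative arithmetic only: compound $1 at the S&P 500's roughly 10.5%
# average annual return (the figure cited in the memo) over a century,
# versus spending only half that time in the market.

def compound(rate, years, principal=1.0):
    """Value of `principal` after `years` of annual compounding at `rate`."""
    return principal * (1 + rate) ** years

full_century = compound(0.105, 100)   # roughly $21,700 per dollar invested
half_century = compound(0.105, 50)    # only about $147 per dollar invested
print(f"100 years: ${full_century:,.0f}; 50 years: ${half_century:,.0f}")
```

Halving the time in the market divides the ending value by a factor of nearly 150, not 2, which is precisely the sense in which time, rather than timing, drives long-term wealth accumulation.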
Two of the six tenets of Oaktree's investment philosophy say (a) we don't base our investment decisions on macro forecasts and (b) we're not market timers. I told the London audience our main goal is to buy debt or make loans that will be repaid and to buy interests in companies that will do well and make money. None of that has anything to do with the short term.
From time to time, when we consider it warranted, we do vary our balance between aggressiveness and defensiveness, primarily by altering the size of our closed-end funds, the pace at which we invest, and the level of risk we'll accept. But we do these things on the basis of current market conditions, not expectations regarding future events.
Everyone at Oaktree has opinions on the short-run phenomena mentioned above. We just don't bet heavily that they're right. During our recent meetings with clients in London, Bruce Karsh and I spent a lot of time discussing the significance of the short-term concerns. Here's how he followed up in a note to me:
...Will things be as bad or worse or better than expected? Unknowable... and equally unknowable how much is priced in, i.e. what the market is truly expecting. One would think a recession is priced in, but many analysts say that's not the case. This stuff is hard...!!!
Bruce's comment highlights another weakness of having a short-term focus. Even if we think we know what's in store in terms of things like inflation, recessions, and interest rates, there's absolutely no way to know how market prices comport with those expectations. This is more significant than most people realize. If you've developed opinions regarding the issues of the day, or have access to those of pundits you respect, take a look at any asset and ask yourself whether it's priced rich, cheap, or fair in light of those views. That's what matters when you're pursuing investments that are reasonably priced.
The possibility - or even the fact - that a negative event lies ahead isn't in itself a reason to reduce risk; investors should only do so if the event lies ahead and it isn't appropriately reflected in asset prices. But, as Bruce says, there's usually no way to know.
At the beginning of my career, we thought in terms of investing in a stock for five or six years; something held for less than a year was considered a short-term trade. One of the biggest changes I've witnessed since then is the incredible shortening of time horizons. Money managers know their returns in real-time, and many clients are fixated on how their managers did in the most recent quarter.
No strategy - and no level of brilliance - will make every quarter or every year a successful one. Strategies become more or less effective as the environment changes and their popularity waxes and wanes. In fact, highly disciplined managers who hold most rigorously to a given approach will tend to report the worst performance when that approach goes out of favor. Regardless of the appropriateness of a strategy and the quality of investment decisions, every portfolio and every manager will experience good and bad quarters and years that have no lasting impact and say nothing about the manager's ability. Often this poor performance will be due to unforeseen and unforeseeable developments.
Thus, what does it mean that someone or something has performed poorly for a while? No one should fire managers or change strategies based on short-term results. Rather than taking capital away from underperformers, clients should consider increasing their allocations in the spirit of contrarianism (but few do). I find it incredibly simple: If you wait at a bus stop long enough, you're guaranteed to catch a bus, but if you run from bus stop to bus stop, you may never catch a bus.
I believe most investors have their eye on the wrong ball. One quarter's or one year's performance is meaningless at best and a harmful distraction at worst. But most investment committees still spend the first hour of every meeting discussing returns in the most recent quarter and the year to date. If everyone else is focusing on something that doesn't matter and ignoring the thing that does, investors can profitably diverge from the pack by blocking out short-term concerns and maintaining a laser focus on long-term capital deployment.
A final quote from Pioneering Portfolio Management does a great job of summing up how institutions can pursue the superior performance most want. (Its concepts are also relevant to individuals):
Appropriate investment procedures contribute significantly to investment success, allowing investors to pursue profitable long-term contrarian investment positions. By reducing pressures to produce in the short run, liberated managers gain the freedom to create portfolios positioned to take advantage of opportunities created by short-term players. By encouraging managers to make potentially embarrassing out-of-favor investments, fiduciaries increase the likelihood of investment success.
Oaktree is probably in the extreme minority in its relative indifference to macro projections, especially regarding the short term. Most investors fuss over expectations regarding short-term phenomena, but I wonder whether they actually do much about their concerns and whether it helps.
Many investors - and especially institutions such as pension funds, endowments, insurance companies, and sovereign wealth funds, all of which are relatively insulated from the risk of sudden withdrawals - have the luxury of being able to focus exclusively on the long term... if they will take advantage of it. Thus, my suggestion to you is to depart from the investment crowd, with its unhelpful preoccupation with the short term, and to instead join us in focusing on the things that really matter.
If you’re in the market for a data fabric, you might be interested in a recent Forrester Wave report, published in June, that details the pros and cons of more than a dozen data fabric offerings.
Enterprises that are struggling to manage big data for advanced analytics and AI projects are increasingly turning to data fabrics, which help by centralizing access to the variety of tools and capabilities needed to work with data in a governed, responsible manner (if not centralizing the data itself).
It’s a fairly new space, yet data fabrics from 15 vendors made the cut in the Forrester Wave: Enterprise Data Fabric Q2, 2022. The report was written by analyst Noel Yuhanna, who was involved in defining the new category, with help from Forrester analysts Aaron Katz, Angela Lozada, and Katie Pierpont.
Forrester ranked the data fabric offerings across 26 criteria and, based on the results, separated the vendors into three groups: leaders, strong performers, and contenders. Here’s a brief rundown on the offerings and how they stack up.
Informatica: The integration giant has moved solidly into the data fabric space, where Forrester considers it a leader, with strengths in core data fabric areas like catalog, discovery, transformation, lineage, processing, events, and transaction processing.
“Informatica has a strong product vision that demonstrates a commitment to expanded data fabric use cases,” Yuhanna wrote. The company lags in open source support and partner ecosystem tooling, however.
Oracle: This vendor is leveraging its strength in databases, data management, security, and replication as it moves into the data fabric space, Yuhanna said.
“Oracle’s superior vision focuses on a unified, intelligent, and automated platform to accelerate use cases, leveraging AI/ML, transactions, knowledge graph, data products, and semantics,” the analyst wrote.
Denodo: The longtime data virtualization player is now one of the leaders in the data fabric space by expanding its capabilities in integration, management, and delivery. Its strengths lie in data connectivity, integration, processing, transactional workload, transformation, access, search, and delivery, Yuhanna wrote.
“Denodo’s data fabric roadmap suggests improved data pipeline automation, extended graph capabilities, extended AI/ML capabilities within the platform, and simplified administration for a geodistributed environment,” Yuhanna wrote.
IBM: Big Blue has been a longtime player in the master data management (MDM) space, and it’s moved aggressively to become a data fabric player, according to Yuhanna.
IBM is strong in areas like data modeling, catalog, governance, pipeline, discovery and classification, event and transaction processing, and deployment options, Yuhanna wrote. “However, it lags in data quality, scale, automation, ecosystem tooling, and supporting an end-to-end integrated data fabric solution.”
SAP: The ERP giant has a comprehensive data fabric solution that can handle complex use cases for large companies, either on-prem or with hybrid deployments. Yuhanna said it’s ideal for existing SAP customers that want to bring non-SAP data into the fold.
SAP’s data fabric does many things well, including semantic data modeling, catalog, governance and security, connectivity, discovery and classification, quality, transformation and lineage, events and transactions, access and delivery, and deployment options, according to Yuhanna. But it lags in administration and end-to-end integration.
Talend: The longtime data and application integration vendor is also a leader in the nascent data fabric space, according to the Forrester analysis. “Talend has demonstrated a strong ability to execute on its vision as well as a consistent track record of install base and outstanding client satisfaction,” Yuhanna wrote.
Strengths for Talend lie in data connectivity, processing and persistence, and deployment options. Concerns exist around its catalog and reference data, administration, automation, and offering an integrated end-to-end solution.
Teradata: The data warehousing giant has made inroads into the data fabric space, with strengths in areas like semantics, data quality, and catalogs. Its roadmap is focused on filling feature gaps in areas like data integration, catalog, governance, semantics, automation, and support for new connectors, Yuhanna wrote.
“Although Teradata is on par with the competition around product enhancements, execution roadmap, innovation roadmap, and performance,” he wrote, “its strongest strategic point is its partner ecosystem, which is extensive and covers the needs of most customers.”
Cloudera: The one-time Hadoop darling is a big supporter of hybrid data platforms, which is what data fabrics are all about these days. The company’s data fabric strengths lie in data pipelines and streaming data, data processing and persistence, and data delivery, Yuhanna wrote.
TIBCO Software: The TIBCO Connected Intelligence Platform offers “good capabilities,” according to Forrester, which noted high scores in data connectivity, catalog and reference data, pipelines, streaming data, integration, data processing and persistence, and data delivery.
However, the product lags in several areas, including automation, governance, transformation and lineage, and end-to-end connectivity. TIBCO’s roadmap also is underwhelming, per the analyst group.
Qlik: The business intelligence vendor delivers data fabric capabilities in its Active Intelligence Platform, which received high scores in data connectivity and delivery, governance, data access and search, processing, deployment options, and administration.
But Qlik lags in delivering an end-to-end data fabric solution, with gaps in data modeling, discovery and classification, and transformation, Forrester says. Support was also flagged as a potential issue.
Cinchy: The data fabric from Cinchy benefits from being an “end-to-end” solution “with a high degree of automation” that simplifies development and deployment and offers “excellent data delivery capabilities.” One customer applauded Cinchy’s query, auditing, and workflow capabilities.
But it lags in basic capabilities, such as data discovery and classification, and processing, Forrester says. It also has limited scalability, and the partner ecosystem is small, according to Forrester.
Cambridge Semantics: This data fabric offering is based on Cambridge Semantics’ AnzoGraph database, and offers features like data integration, graph algorithms, analytics, and ML. Data harmonization, including structured and unstructured data, is a strength for Cambridge Semantics, which Forrester says is strong in data discovery and classification, and data access and search.
However, Cambridge Semantics’ vision is “too knowledge-graph-centric, which is no longer differentiating,” Forrester says. The product lags in data governance, transactional, and data event processing capabilities, the analyst group says.
Hitachi Vantara: An established data fabric with foundations in “total visibility with data modernization services,” Hitachi Vantara has some bright spots, Forrester says. Its strengths lie in data connectivity and in data processing and persistence.
However, it lags in several areas, including the data catalog, governance, data quality, administration, data access and search, and end-to-end integration. High complexity was also cited by a customer.
Solix Technologies: This vendor comes from an archival background, and its Common Data Platform (CDP) received good scores from Forrester in areas like data connectivity, discovery, classification, processing, delivery, transformation, lineage, and deployment options.
However, the vendor lags in several areas, including semantic data modeling, data pipeline, transactional capabilities, data quality, end-to-end integration, and scalability. The vendor also faces struggles with “a small market presence, little innovation on its roadmap, and enhancements that largely fill feature gaps,” Forrester says.
HPE: The computer giant’s Ezmeral Data Fabric is critical in helping to ease access to HPE customers’ data stored on-prem, in the cloud, and at the edge. It hits the mark for data catalog, discovery, classification, transformation, lineage, data processing and persistence, data access, search, and data delivery, according to Forrester.
However, HPE (which declined to participate in the full evaluation process, Forrester says) lags in several key areas, including semantic data modeling, pipelines, connectivity, quality, handling events and transactions, administration, and offering an end-to-end integrated solution, the analyst group says.
As the outfit’s founder, CEO and former Formula 1 champion Nico Rosberg looked on from the paddock, scorched by the near 40-degree heat baking the former Nato base in Sardinia, his X Racing (RXR) team crossed the line first at the Island X Prix II of Extreme E Season 2.
It would have been the team’s third victory of Extreme E Season 2, but despite valiant efforts in dragging to the finishing line a car missing its entire front wing, RXR suffered a heavy time penalty after the smash-up that caused the damage to its own car and to that being driven by racing legend Carlos Sainz, father of the namesake Ferrari F1 driver.
And so ended another chapter in Extreme E. But away from the thrills of the race, there was also a serious element to proceedings, with the event designed to highlight and attempt to alleviate sadly real and present environmental damage. And advanced networking technology is front and centre in this aim, not only in making the racing event possible, but also in contributing to solutions to prevent and minimise issues arising from climate change.
Fundamentally, the Extreme E series hopes to raise awareness of the climate emergency by mixing sport with a purpose to inspire change. It aims to leave behind a long-lasting positive impact through its legacy programmes and showcase the performance of all-electric vehicles in extreme conditions. Specifically, the five-race global voyage aims to pave the way to a lower-carbon future through the promotion of electric vehicles, draw attention to the impacts of climate change and the solutions we can all be part of, and accelerate gender equality in motorsport.
The locations Extreme E visits are all, in some way, affected by environmental issues such as desertification, deforestation, melting ice caps, plastic pollution and rising carbon emissions. By holding races in areas that are suffering as a result of the environmental crisis, the organisers hope to raise viewers’ awareness and interest in environmental issues.
In addition to the action on the scorched terrain, the Island X Prix in June 2022 highlighted how Sardinia was one of the areas hardest hit by wildfires in Italy in summer 2021, which devastated forests across the Mediterranean region. The fires blazed through 20,000 hectares of land, displaced more than 1,000 people and killed around 30 million bees. Wildfires such as these are calculated to be responsible for 20% of total global CO2 emissions and cost $5bn to fight.
As a result, the Extreme E team embarked on a legacy sustainability project, including a fire prevention campaign, within the local communities in the area of Montiferru. This was carried out along with Extreme E’s official technology and communication partner Vodafone Business.
The three-year collaboration will see Vodafone Business capabilities in areas such as 5G, mobile private network (MPN), the internet of things (IoT) and mobile edge computing (MEC) integrated into Extreme E’s global operations, with full involvement in the purpose-driven elements of the series and special prominence on Extreme E’s legacy programmes and the science laboratory on board the event’s base on the ship St Helena.
Assessing the extent and nature of the partnership with the new all-electric car racing series, which furthers its reach into motorsport after its involvement with Formula E, Amanda Jobbins, chief marketing officer (CMO) at Vodafone Business, says the enterprise arm of the telecommunications giant is, like Extreme E, on a journey of its own as it pursues its mission to provide businesses with solutions beyond connectivity as part of their digital transformation.
Yet Jobbins emphasises that the firm is in a hurry to maximise the potential of the full Vodafone brand. “You know the old adage, ‘no one ever got fired for buying IBM’? The brand is very important to the positioning for Vodafone Business, because we’re known as a consumer company, but we want customers to realise that we have these enterprise capabilities, we have business solutions for them. So [the partnership] is a fantastic brand opportunity.
“Our Formula E relationship was just a team sponsorship, and we weren’t really getting the amplification out of that [given] there’s so many brands involved with Formula E. We’d heard about Extreme E and we knew a lot of the drivers participating with Nico and [others], so we began a conversation and [found] the purpose pillars of our organisations aligned 100%. It just seemed like a fantastic marriage essentially. We have an opportunity to really help them.”
Through its IoT solutions, Vodafone Business has committed to helping with sustainability efforts, including agriculture, forestation and decarbonisation of energy grids. Specifically in Sardinia, Vodafone Business has deployed long-life, low-power, wide area network (LPWAN) sensors to quickly detect a fire and promptly send an alert to the authorities.
The bespoke technology inside the sensor uses artificial intelligence (AI) to detect gases in the smouldering phase of a fire, resulting in alerts in hours rather than days. The centralised cloud platform leverages big data tools to monitor, correlate, analyse and send these alerts to firefighters, public authorities, forest owners and scientists.
The IoT gas sensors operate without the need for cellular coverage and will be installed in trees to detect the smouldering phase, before the fire takes hold, which will preserve the forest footprint by shortening reaction times in the “golden hour”.
Vodafone believes that using a mesh gateway to connect to a cloud-based alert centre means this is a much faster-acting solution to the problem of wildfire detection than using cameras or satellites. It expects the partnership to benefit the local area and the environment through reduced cost of firefighting, reduced impact on the economy, reduced insurance costs, and saving lives and wildlife species.
Creating a smart forest which can detect ultra-early warning signs of a forest fire is an innovative use of IoT, helping to mitigate the impact of climate change, says Reuben Kingsland, head of product management at Vodafone Business.
“Long-life solar-powered sensors using distributed LoRa gateways connect in a mesh network via border gateways to the cloud which enables large-scale and cost-effective deployment in remote parts of the world,” he notes.
“Large areas of remote forest can be monitored 24/7 by sensors communicating with one another and relaying sensory data back to a Vodafone LTE-enabled gateway which sits at the edge of the forest – which, in turn, connects to the cloud platform. This near-real-time insight enables faster, safer and more cost-effective decisions to be made to prevent the spread of fires.
“As a secondary benefit, general forest health can also be monitored over time as the sensors collect temperature, humidity and air quality day and night.”
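The ultra-early-warning logic described above can be sketched in miniature. The following toy filter is purely illustrative (the function name, thresholds, window size, and alert rule are all invented here, not Vodafone's system), but it captures the core idea: flag a sustained rise in gas readings over a rolling baseline rather than waiting for visible flames.

```python
# Toy sketch of ultra-early smoulder detection -- NOT Vodafone's actual
# system. A sensor alerts when gas readings stay above a multiple of the
# rolling baseline for several consecutive samples, filtering out one-off
# blips. Elevated readings are kept out of the baseline so they don't
# mask a slowly growing fire.

from collections import deque

def smoulder_alert(readings, window=5, factor=3.0, sustained=3):
    """Return the index of the first alert, or None. `readings` are gas levels."""
    baseline = deque(maxlen=window)
    above = 0
    for i, r in enumerate(readings):
        if len(baseline) == window and r > factor * (sum(baseline) / window):
            above += 1                 # elevated reading: count, don't absorb
            if above >= sustained:
                return i
        else:
            above = 0
            baseline.append(r)         # normal reading: update the baseline
    return None

normal = [2, 3, 2, 3, 2, 3, 2, 3]
smoulder = [2, 3, 2, 3, 2, 30, 35, 40]     # sustained spike over baseline
print(smoulder_alert(normal), smoulder_alert(smoulder))  # prints: None 7
```

In a real deployment the interesting engineering lives elsewhere (gas-signature models, mesh relaying, power budgets), but the contrast with camera or satellite detection is the same one Vodafone describes: the signal is chemical and early, not visual and late.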
Meanwhile, back on the track, Vodafone Business is also offering the technology for the race’s Command Centre, where teams monitor the progression of their cars around the circuit. Vodafone Business has supplied a wireless network offering basic connectivity of 200Mbps through three microwave links running from the track back to the Command Centre.
But the event needs the next level of connectivity, notes Jobbins, and Vodafone Business is currently in discussion with the Extreme E head of IT.
“Here at the track, they want a basic connectivity, because it’s only season two. They’re still ironing out the kinks of their own championship. And they’re looking at how to activate it better. Activation for them really requires connectivity and applications – and that’s where we want to help.
“We’re exploring the art of the possible – right now, they don’t have communication with the driver in the car, for example. They’ve got a medical car they’re going to roll out and it will need communication. We could start with the basics and build up: get a solid infrastructure in place, like we have, and then build on the hierarchy of needs. So that would be driver communication, medical car communication, then it’s the next layer, which is going to be around data and analytics.”
Again, in this regard, Vodafone Business sensors and trackers will be used in vehicles, and the data generated could be used in the fan experience, for example. There are also possibilities within asset tracking for the vast array of technology, machinery and other products in use at one of the events.
“I think you have to work overtime to think about how to bring the experience to the people and then also to amplify after the race and in other locations,” Jobbins adds. “I’ve got this concept of a digital twin for the legacy programme. So, we’re hearing about the legacy programme here, and suddenly around wildfires. As I mentioned, wildfires are happening all over the world. Why couldn’t we have a twin experience in those locations at the same time [and act like] a smart city?
“So [with assets] we want to tag it and track it, and run data analytics on it, so that they can optimise the performance of the cars and then optimise the championship. They can get more fans engaged and together we can amplify the message of climate change and why it’s so critical people pay attention now.
“We keep hearing the doomsday messages [and so] don’t wait. We know we’ve got to really keep the pressure on everyone to keep the pressure on all organisations essentially to meet these targets. For us, that first step [is] bringing more innovation into how Extreme E grows the championship, and we want to grow together, going forward.”
Alberto Romero, Cambrian-AI Analyst, contributed to this article.
Mythic is an analog AI processor company conceived to overcome the growing limitations of digital processors. Founded by Mike Henry and Dave Fick, and based in Austin, Texas, and Redwood City, California, Mythic aims to solve the technical and physical bottlenecks that limit current processors through the use of analog compute in a world dominated by digital technology. Mythic wants to prove that, contrary to common belief, analog isn’t a relic of the past, but a promise for the future.
Two main problems inhibit the pace of development of digital hardware: the end of Moore’s Law and the von Neumann architecture. For 60 years we’ve enjoyed ever more powerful hardware, as predicted by Gordon Moore in 1965, but as we approach the minimum theoretical size of transistors, his famous law seems to be coming to an end. Another well-known issue is the von Neumann architecture’s need to move data back and forth between memory and the processor for every computation. This approach is increasingly being replaced by compute-in-memory (CIM) or compute-near-memory approaches that significantly reduce memory bandwidth demands and latency while increasing performance.
Mythic claims it has built a unique, paradigm-shifting solution that promises to tackle digital’s limitations while providing improved specifications compared to the best-in-class digital solutions: an analog compute engine (ACE). Historically, analog computers were replaced by digital due to the latter’s reduced cost and size and their general-purpose nature. However, the current landscape of AI is dominated by deep neural networks (DNNs) which don’t require extreme precision and, more importantly, the majority of the computing bulk goes into a single operation: matrix multiplication. The perfect opportunity for analog compute.
On top of this, Mythic is exploiting the advantages of CIM and a dataflow architecture to obtain impressive early results. The company has taken CIM to the extreme by computing directly in flash memory cells: its analog matrix processors take inputs as voltages, store weights as resistances, and produce outputs as the resulting currents. In addition, the dataflow design keeps these processes running in parallel, allowing for extremely fast and efficient calculations. A clever combination of analog computing, CIM and dataflow architecture defines the Mythic ACE, the company’s main differentiating technology.
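The voltage-in, resistance-stored, current-out scheme is essentially Ohm’s and Kirchhoff’s laws performing a matrix-vector product. The following numpy sketch illustrates the general principle of analog compute-in-memory; all shapes and values are invented for illustration, and this is not Mythic’s actual circuit:

```python
import numpy as np

# Analog compute-in-memory in miniature: each memory cell stores a weight as
# a conductance G = 1/R. Driving input voltages v across the columns and
# summing the per-row cell currents (Kirchhoff's current law) yields
# i = G @ v, i.e., a matrix-vector product performed by the physics itself.
rng = np.random.default_rng(0)

G = rng.uniform(0.1, 1.0, size=(4, 8))  # conductances (siemens), one per cell
v = rng.uniform(0.0, 0.5, size=8)       # input voltages (volts)

i = G @ v                               # output currents (amps), one per row

# Same result as summing Ohm's-law currents cell by cell.
i_cellwise = np.array([sum(G[r, c] * v[c] for c in range(8)) for r in range(4)])
assert np.allclose(i, i_cellwise)
```

In a dataflow design, many such arrays operate concurrently, which is where the parallelism described above comes from.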
Mythic’s ACE meets the requisites of edge AI inference
Mythic’s tech promises high performance at very low power, ultra-low latency, low cost, and small form factor. The basic element is their Analog Matrix Processor (AMP), which features an array of tiles, each containing the ACE complemented by digital elements: SRAM, a vector SIMD unit, a NoC router, and a 32-bit RISC-V nano processor. The innovative design of the ACE eliminates the need for DDR DRAM, reducing latency, cost, and power consumption. AMP chips can be scaled, providing support for large or multiple models. Their first product, the single-chip M1076 AMP (76 AMP tiles), can handle many endpoint applications and can be scaled up to 4-AMP or even 16-AMP configurations on a single PCI Express card, adequate for edge server-level high-performance usage.
The hardware is complemented with a software stack that provides a seamless pipeline going from the graph (ONNX and PyTorch) to an AMP-ready package through a process of optimization (including a quantization to analog INT8) and compilation. Mythic’s platform also supports a library of ready-to-go DNNs, including object detection/classification (YOLO, ResNet, etc.) and pose estimation models (OpenPose).
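To give a flavor of the quantization step in such a pipeline, here is a generic symmetric INT8 scheme. This is a standard textbook approach sketched for illustration, not Mythic’s actual “analog INT8” implementation:

```python
import numpy as np

# Generic symmetric INT8 quantization of a weight matrix -- the kind of step
# a compiler performs before mapping trained weights onto fixed-point or
# analog cells. Illustrative only.
def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0          # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Round-trip error is bounded by half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

DNN inference tolerates this small, bounded error well, which is why INT8 (and analog compute more broadly) is viable for these workloads.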
The company’s full-stack solution leverages the potential of analog processors while maintaining relevant features of the digital world. It makes the M1076 AMP a great option to handle AI workloads for inference at the edge faster and more efficiently — the company claims it provides the “best-in-class TOPS/W” — than its fully-digital counterparts. That, and the company’s broad offering of products and AI models, make it well-positioned to target fast-growing edge AI-focused markets like video surveillance, smart home devices, AR/VR, drones, and robotics.
So far, it seems Mythic has transformed an innovative idea into promising tech to compete for edge inference AI. Now, let’s see the numbers. The company claims the M1076 AMP performs at up to 25 TOPS running at around 3W. Compared to similar digital hardware, that’s a reduction in power consumption of up to 10x. And it can store up to 80M weights on-chip. The MP10304 Quad-AMP PCIe card can deliver up to 100 TOPS at 25W and store 320M weights. When we compare these claims to those of many others, we can’t help but be impressed.
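Taking the company’s claimed figures at face value, the implied efficiency is easy to check:

```python
# Back-of-the-envelope TOPS-per-watt implied by the company's claimed figures.
m1076_tops, m1076_watts = 25, 3        # single-chip M1076 AMP
mp10304_tops, mp10304_watts = 100, 25  # MP10304 Quad-AMP PCIe card

print(f"M1076:   {m1076_tops / m1076_watts:.1f} TOPS/W")    # 8.3 TOPS/W
print(f"MP10304: {mp10304_tops / mp10304_watts:.1f} TOPS/W")  # 4.0 TOPS/W
```

These are claimed peak numbers, not independent benchmarks, but roughly 4–8 TOPS/W at a 3–25W envelope is what makes the parts interesting for power-constrained edge deployments.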
The success of analog AI will depend on achieving high density, high throughput, low latency, and high energy efficiency, while simultaneously delivering accurate predictions. Compared to pure digital implementations, analog circuits are inherently noisy, but despite this challenge, the benefits of analog compute become apparent as processors like the M1076 are able to run larger DNN models featuring higher accuracy, higher resolution or lower latency.
As Mythic continues to refine its hardware and software, we will look forward to seeing benchmarks that can demonstrate the platform’s capabilities and power efficiency. But we have seen enough already to be excited by the potential of this unique approach.
Research drawing on the quantum “anti-butterfly effect” solves a longstanding experimental problem in physics and establishes a method for benchmarking the performance of quantum computers.
“Using the simple, robust protocol we developed, we can determine the degree to which quantum computers can effectively process information, and it applies to information loss in other complex quantum systems, too,” said Bin Yan, a quantum theorist at Los Alamos National Laboratory.
Yan is corresponding author of the paper on benchmarking information scrambling published today in Physical Review Letters. “Our protocol quantifies information scrambling in a quantum system and unambiguously distinguishes it from fake positive signals in the noisy background caused by quantum decoherence,” he said.
Noise in the form of decoherence erases all the quantum information in a complex system such as a quantum computer as it couples with the surrounding environment. Information scrambling through quantum chaos, on the other hand, spreads information across the system, protecting it and allowing it to be retrieved.
Coherence is a quantum state that enables quantum computing, and decoherence refers to the loss of that state as information leaks to the surrounding environment.
“Our method, which draws on the quantum anti-butterfly effect we discovered two years ago, evolves a system forward and backward through time in a single loop, so we can apply it to any system with time-reversing the dynamics, including quantum computers and quantum simulators using cold atoms,” Yan said.
The Los Alamos team demonstrated the protocol with simulations on IBM cloud-based quantum computers.
The inability to distinguish decoherence from information scrambling has stymied experimental research into the phenomenon. First studied in black-hole physics, information scrambling has proved relevant across a wide range of research areas, including quantum chaos in many-body systems, phase transition, quantum machine learning and quantum computing. Experimental platforms for studying information scrambling include superconductors, trapped ions and cloud-based quantum computers.
Yan and co-author Nikolai Sinitsyn published a paper in 2020 proving that evolving quantum processes backwards on a quantum computer to damage information in the simulated past causes little change when returned to the present. In contrast, a classical-physics system smears the information irrecoverably during the back-and-forth time loop.
Building on this discovery, Yan, Sinitsyn and co-author Joseph Harris, a University of Edinburgh graduate student who worked on the current paper as a participant in the Los Alamos Quantum Computing Summer School, developed the protocol. It prepares a quantum system and subsystem, evolves the full system forward in time, causes a change in a different subsystem, then evolves the system backward for the same amount of time. Measuring the overlap of information between the two subsystems shows how much information has been preserved by scrambling and how much lost to decoherence.
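The forward-backward loop can be illustrated with a toy state-vector simulation. This is a generic numpy sketch of the idea, not the authors’ protocol or code; the system size, perturbation, and random evolution are all arbitrary choices:

```python
import numpy as np

# Toy sketch of the loop: evolve forward, perturb a subsystem, evolve
# backward, then measure overlap with the initial state. The simulation is
# exact and noiseless, so any loss of overlap here reflects scrambling by
# the (randomly chosen) dynamics rather than decoherence.
rng = np.random.default_rng(42)

def random_unitary(d):
    # Haar-ish random unitary from the QR decomposition of a Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 3                   # three qubits
d = 2 ** n
U = random_unitary(d)   # stand-in for chaotic forward time evolution

# perturbation: a bit flip (Pauli-X) on the last qubit only
X = np.array([[0.0, 1.0], [1.0, 0.0]])
V = np.kron(np.eye(4), X)

psi = np.zeros(d, dtype=complex)
psi[0] = 1.0            # initial state |000>

psi_out = U.conj().T @ (V @ (U @ psi))      # forward, kick, backward

overlap = abs(np.vdot(psi, psi_out)) ** 2   # information surviving the loop
print(overlap)                              # a value between 0 and 1
```

On real hardware the same loop also picks up decoherence, and distinguishing that contribution from genuine scrambling is exactly what the published protocol provides.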
The Paper: Benchmarking Information Scrambling, by Joseph Harris, Bin Yan and Nikolai Sinitsyn, in Physical Review Letters. DOI: 10.1103/PhysRevLett.129.050602
The Funding: Office of Science, Basic Energy Sciences; Office of Science, Office of Advanced Scientific Computing Research and the Laboratory Directed Research and Development program at Los Alamos National Laboratory.
Travis Humble has been named director of the Quantum Science Center headquartered at the Department of Energy’s Oak Ridge National Laboratory. The QSC is a multi-institutional partnership that spans industry, academia and government institutions and is tasked with uncovering the full potential of quantum materials, sensors and algorithms.
Humble was named deputy director in 2020, when DOE established this five-year, $115 million effort as one of five National Quantum Information Science Research Centers. Following the departure of former QSC Director David Dean, Humble began serving as interim director in January.
“I am excited to be working at the forefront of quantum science and technology with this amazing team of scientists and engineers,” he said. “The QSC provides a wonderful opportunity to leverage our nation’s best and brightest for solving some of the most interesting scientific problems of our time.”
As interim director, Humble has overseen the QSC’s three primary focus areas: quantum materials discovery and development, quantum algorithms and simulation, and quantum devices and sensors for discovery science. In his new role, he will continue collaborating with QSC partner institutions including ORNL, Los Alamos National Laboratory, Fermi National Accelerator Laboratory, Purdue University, Microsoft and IBM.
A distinguished ORNL scientist, Humble also directs the laboratory’s Quantum Computing Institute and the Oak Ridge Leadership Computing Facility’s Quantum Computing User Program. The QSC leverages DOE user facilities, including the OLCF, to solve research problems.
Humble joined ORNL as an intelligence community postdoctoral research fellow in 2005, then became a staff member in 2007. He received a bachelor’s degree in chemistry from the University of North Carolina Wilmington and a master’s degree and doctorate in theoretical chemistry from the University of Oregon.
As QSC director, Humble will prioritize the development of quantum materials for quantum computing and quantum sensing, as well as the application of these technologies to aid scientific discovery, strengthen the nation’s security and energy efficiency, and ensure economic competitiveness. Other goals include demonstrating the advantages of early quantum computers and advancing methods for probing the fundamental physics of quantum matter.
By addressing current quantum challenges and expanding workforce development activities focused on recruitment and training, Humble anticipates that the QSC’s leadership role in the ongoing quantum revolution will continue to grow.
Humble also serves as an assistant professor with the University of Tennessee, Knoxville’s Bredesen Center for Interdisciplinary Research and Graduate Education, editor-in-chief for ACM Transactions on Quantum Computing, associate editor for Quantum Information Processing and co-chair of the Institute of Electrical and Electronics Engineers Quantum Initiative.
Now in his 17th year at ORNL and more passionate about the future of quantum than ever, Humble is positioning the QSC to shape quantum research and technologies at national and international scales.
“Quantum science and technology are transformative paradigms, and we have only scratched the surface of what is possible,” he said. “The QSC will bring new discoveries in materials, computing and sensing that promote a deeper understanding of these ideas and prepare us for the next generation of quantum technologies.”
The QSC, a DOE National Quantum Information Science Research Center led by ORNL, performs cutting-edge research at national laboratories, universities, and industry partners to overcome key roadblocks in quantum state resilience, controllability, and ultimately the scalability of quantum technologies. QSC researchers are designing materials that enable topological quantum computing; implementing new quantum sensors to characterize topological states and detect dark matter; and designing quantum algorithms and simulations to provide a greater understanding of quantum materials, chemistry, and quantum field theories. These innovations enable the QSC to accelerate information processing, explore the previously unmeasurable, and better predict quantum performance across technologies. For more information, visit qscience.org.
UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science
While more than half this collection of Dow Industrials is too pricey and reveals only skinny dividends, four of the five lowest-priced Dogs of the Dow are ready to buy. This month, Intel Corp (INTC), Walgreens Boots Alliance (WBA), Verizon Communications Inc (VZ) and Dow Inc (DOW) live up to the dogcatcher ideal of annual dividends from $1K invested exceeding their single share prices. Furthermore, one more, Cisco Systems Inc (CSCO), showed its price within $6.70 of meeting that goal.
With renewed downside market pressure of 51.5%, it would be possible for all ten (even CVX) to become elite fair-priced dogs with their annual yield (from $1K invested) meeting or exceeding their single share prices by year's end. [See a summary of top ten fair-priced August Dow Dogs in Actionable Conclusion 21 near the middle of this article.]
Four of ten top dividend-yielding Dow dogs (tinted gray in the chart below) were among the top ten gainers for the coming year based on analyst 1-year target prices. So this August 2023 yield-based forecast for Dow dogs, as graded by Wall St. wizard estimates, was 40% accurate.
Estimated dividend-returns from $1000 invested in the ten highest-yielding stocks and their aggregate one-year analyst median target prices, as reported by YCharts, created the 2022-23 data points for the projections below. Note: one-year target prices estimated by lone analysts were not applied. Ten probable profit-generating trades projected to August 3, 2023 were:
Salesforce Inc (CRM) was projected to net $293.56, based on the median of target price estimates from forty-six analysts, less broker fees. The Beta number showed this estimate subject to risk/volatility 8% greater than the market as a whole.
The Walt Disney Co (DIS) was projected to net $287.01, based on the median of target estimates from twenty-eight analysts, less broker fees. The Beta number showed this estimate subject to risk/volatility 21% over the market as a whole.
Boeing Co (BA) was forecast to net $274.21, based on the median of target price estimates from nineteen analysts, less broker fees. The Beta number showed this estimate subject to risk/volatility 38% greater than the market as a whole.
JPMorgan Chase & Co (JPM) was projected to net $246.33, based on the median of target price estimates from twenty-six analysts, plus the estimated annual dividend, less broker fees. The Beta number showed this estimate subject to risk/volatility 12% more than the market as a whole.
Visa Inc (V) was projected to net $226.09, based on dividends, plus the median of target price estimates from thirty-two analysts, less broker fees. The Beta number showed this estimate subject to risk/volatility 11% under the market as a whole.
Caterpillar Inc (CAT) was projected to net $223.62 based on the median of target price estimates from twenty-five analysts, plus dividends, less broker fees. The Beta number showed this estimate subject to risk/volatility 10% more than the market as a whole.
Verizon Communications Inc was projected to net $220.12, based on dividends, plus the median of target price estimates from twenty-two analysts, less broker fees. The Beta number showed this estimate subject to risk/volatility 67% less than the market as a whole.
Nike Inc (NKE) was projected to net $214.39, based on the median of target price estimates from thirty-one analysts, less broker fees. The Beta number showed this estimate subject to risk/volatility 2% greater than the market as a whole.
Dow Inc was projected to net $205.79, based on the median of target prices estimated by nineteen analysts, less broker fees. A Beta number is still not available for DOW.
Cisco Systems Inc (CSCO) was projected to net $200.29, based on dividends, plus the median target price estimates from twenty-four analysts, less broker fees. The Beta number showed this estimate subject to risk/volatility 3% less than the market as a whole.
The average net gain in dividend and price was estimated at 23.91% on $10k invested as $1k in each of these top ten Dow Index stocks. This gain estimate was subject to average risk/volatility 1% greater than the market as a whole.
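The arithmetic behind these per-stock projections can be sketched in a few lines. The helper name, the fee figure, and the example numbers below are invented for illustration and are not the article’s actual inputs:

```python
# Sketch of the projection arithmetic: $1,000 buys 1000/price shares; the
# projected net is the move to the analyst median target, plus a year of
# dividends on those shares, less broker fees. All numbers illustrative.
def projected_net(price: float, target: float, annual_dividend: float,
                  fees: float = 10.0) -> float:
    shares = 1000.0 / price
    return shares * (target - price) + shares * annual_dividend - fees

# e.g. a $40 stock with a $48 median target paying $2.40/yr:
# 25 shares x $8 gain + 25 x $2.40 dividends - $10 fees = $250
print(round(projected_net(40.00, 48.00, 2.40), 2))   # 250.0
```

Note that a high Beta widens the uncertainty band around any such point estimate; the projection itself says nothing about the odds of the target being hit.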
Stocks earned the "dog" moniker by exhibiting three traits: (1) they paid reliable, repeating dividends, (2) their prices fell to where (3) their yield (dividend/price) grew higher than that of their peers. Thus, the highest-yielding stocks in any collection became known as "dogs," though more precisely they might best be called "underdogs."
Top ten Dow dogs as of 8/3/22 represented seven of eleven Morningstar sectors by YCharts and IndexArb. All ten stocks were the same on the two lists and the order was also identical.
Both YCharts and IndexArb put the lone communication services sector member, Verizon, in first place on their lists. In second place was the lone basic materials dog, Dow Inc.
Then three technology dogs were placed in the third, sixth and ninth positions: International Business Machines (IBM), Intel Corp and Cisco Systems Inc, per YCharts and IndexArb.
However, both lists put the first of the two healthcare members, Walgreens Boots Alliance, fourth, and the other health member, Merck & Co Inc (MRK), tenth.
The two lists also showed the lone industrials dog in fifth, 3M Co (MMM), and the lone energy representative in seventh, Chevron (CVX).
Finally, the two lists agreed that eighth place belonged to the financial services firm JPMorgan Chase & Co, completing their August top-ten Dogs of the Dow by yield.
Graphs above show the relative strengths of the top ten Dow dogs by yield as of market close 8/3/2022. The two sets of charts show the variation of dividends calculated by YCharts.com estimates and those from the arbitrage firm IndexArb.com. There was a $5.90 difference in total estimated single share dividends between YCharts and IndexArb top ten, resulting in a $0.42 average cost per dividend dollar differential. These numbers were just enough to show a 1% variance on the pie charts.
This month, six of the top-ten Dow dogs show an overbought condition (in which single share price exceeds the projected annual dividend from $1K invested). A dividend dogcatcher priority is to select stocks whose dividends from $1K invested exceed their single share price. As mentioned above, four of the five lowest-priced Dogs of the Dow, Intel Corp, Verizon Communications, Walgreens Boots Alliance and Dow Inc, live up to that ideal. Furthermore, one more, Cisco Systems, showed a price within $6.70 of meeting that goal as of August 3.
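The dogcatcher criterion reduces to a one-line inequality: the annual dividend thrown off by $1K invested (1000 × dividend ÷ price) must meet or exceed one share’s price, which holds exactly when the price is at or below the square root of 1000 times the dividend. A small sketch, with function names and sample prices/dividends invented for illustration rather than taken from the article’s data:

```python
import math

# The "dogcatcher ideal": annual dividend from $1,000 invested must meet or
# exceed the single share price. Example figures are placeholders.
def dividend_per_1k(price: float, annual_dividend: float) -> float:
    return 1000.0 * annual_dividend / price

def is_ideal(price: float, annual_dividend: float) -> bool:
    return dividend_per_1k(price, annual_dividend) >= price

# A $36 stock paying $1.46/yr: $1K throws off ~$40.56, above the $36 price.
assert is_ideal(36.00, 1.46)
# A $150 stock paying $4.00/yr yields only ~$26.67 per $1K -- far short.
assert not is_ideal(150.00, 4.00)

# Break-even price for a $1.46 annual dividend: sqrt(1000 * 1.46)
print(round(math.sqrt(1000 * 1.46), 2))   # 38.21
```

This is why the criterion favors low-priced, high-dividend names: the break-even grows only with the square root of the dividend, so expensive shares need outsized payouts to qualify.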
This gap between high share price and low dividend per $1K (the overbought condition) means that, no matter which chart you read, 23 of all 27 Dow dividend payers are low-risk, low-opportunity dogs, with the non-dividend payers being particularly dismal. The Dow top-ten average cost per dollar of annual dividend for August 3, 2022 was $24.32 per YCharts or $23.91 by the IndexArb reckoning.
One that cut its dividend after March 2020, Boeing, has re-learned how to fly (and is now certified in some countries) and is thus prepared to take off again, if airlines ever trust planes made in the USA again. The used-plane and Airbus market, however, is soaring. BA may never recover from being in worse shape than GE was when that stock was excused from the Dow index.
As for DIS, the magic kingdom may be close to reinstating a dividend, but don't hold your breath. Furthermore, the newest of the three no-dividend stocks on the block, CRM, is simply overpriced. Those three non-dividend payers are the true down-in-the-dumps dogs of the Dow, despite analysts high-balling their future share price estimates. All three demonstrate a total disregard for shareholders.
Remember, this dogcatcher yield-based stock-picking strategy is contrarian. That means rooting for (buying) the underdog is productive when you don't already own these stocks. If you do hold these stocks, then you must look for opportune pull-backs in price to add to your position and best strengthen your dividend yield. Plenty of pull-back opportunities appear to be ahead.
The charts above retain the current dividend amount and adjust share price to produce a yield (from $1K invested) equal to or exceeding the single share price of each stock. As this illustration shows, four are ideally priced. Besides Intel Corp, Walgreens Boots Alliance, Verizon Communications and Dow Inc breaking into the ideal zone, one more low-priced stock is within $5.90 of making the grade [CSCO].
Five more, however (IBM, MMM, CVX, JPM, MRK), need to trim prices between $35.09 and $79.96 to attain that elusive 50/50 goal.
The alternative, of course, would be that these companies raise their dividends but that is a lot to ask in these highly disrupted, inflationary, yet cash-rich times. Mr. Market is much more effective at moving prices up or down to appropriate amounts.
To quantify top dog rankings, analyst median price target estimates provided a "market sentiment" gauge of upside potential. Added to the simple high-yield "dog" metrics, analyst median price target estimates provided another tool to dig out bargains.
Ten top Dow dogs were culled by yield for this monthly update. Yield (dividend/price) results as calculated by YCharts did the ranking.
As noted above, top-ten Dow dogs selected 7/1/22 by both the YCharts and IndexArb methods, revealing the highest dividend yields, represented seven of the eleven sectors. Consumer Cyclical and Consumer Defensive selections were missing. (Real Estate is not reported, and Utilities has its own Dow index.)
$5000 invested as $1k in each of the five lowest-priced stocks in the top ten Dow Dividend kennel by yield were predicted by analyst 1-year targets to deliver 10.02% more gain than from $5,000 invested in all ten. The seventh lowest priced, JPMorgan Chase & Co, showed top analyst-estimated gains of 24.6%.
The five lowest-priced Dow top-yield dogs for August 3 were: Intel Corp; Walgreens Boots Alliance Inc; Verizon Communications Inc; Cisco Systems Inc; Dow Inc, with prices ranging from $36.52 to $51.49.
Five higher-priced Dow top-yield dogs for August 3 were: Merck & Co Inc; JPMorgan Chase & Co; International Business Machines Corp; 3M Co; Chevron Corp, whose prices ranged from $87.62 to $155.36.
The distinction between five low-priced dividend dogs and the general field of ten reflected Michael B. O'Higgins' "basic method" for beating the Dow. The scale of projected gains based on analyst targets added a unique element of "market sentiment" gauging upside potential. It provided a here-and-now equivalent of waiting a year to find out what might happen in the market.
Caution is advised, since analysts are historically only 20% to 90% accurate on the direction of change and just 0% to 15% accurate on the degree of change. (In 2017 the market somewhat followed analyst sentiment. In 2018 analyst estimates were contrarian indicators of market performance, and they continued to be contrary for the first two quarters of 2019 but switched to conforming for the last two quarters.) In 2020 analyst projections were quite contrarian. In the first half of 2021 most dividend stock price actions exceeded all analyst expectations, and the last half of 2021 was still gangbusters. The 2022 summer sag may free up five or more Dow dogs, sending them into the ideal zone where returns from $1K invested equal (or exceed) their single-share price.
Lest there be any doubt about the recommendations in this article, this month there were four Dow Index stocks showing dividends from $1K invested exceeding their single share prices: Intel Corp, Walgreens Boots Alliance, Verizon Communications, and Dow Inc.
The dogcatcher hands-off recommendations are still in place, starting with the one that cut its dividend in March 2020. While Boeing has re-learned how to fly (and is certified in certain countries), it still has to coax customers to get airborne again. BA faces strong headwinds to stay on the Dow index (despite analyst optimism for the lone American commercial air-crafter).
Also keep hands off the existing non-dividend member of the Dow, Salesforce Inc, until it declares a dividend from $1K invested greater than its single share price.
While subscriptions keep the ship afloat, Disney needs audiences to get strapped back into buying tickets to watch and ride before resuming a dividend. The DIS parks are now open in CA and FL. And what about the cruise ships? Will anybody cruise, play in the parks, or attend movies again? If so, when will the DIS dividend return? It looks as if viewer loyalties have switched to Apple productions and streaming entertainment options. Happily, Disney's franchised offerings compete well in the streaming market.
The net gain/loss estimates above did not factor in any foreign or domestic tax problems resulting from distributions. Consult your tax advisor regarding the source and consequences of "dividends" from any investment.
Stocks listed above were suggested only as possible reference points for your Dow dividend dog stock purchase or sale research process. These were not recommendations.
A team of researchers from the Department of Energy's Oak Ridge National Laboratory and Tennessee Technological University have created a 2D, open-source flood inundation model designed for a multiarchitecture computing system. The Two-dimensional Runoff Inundation Toolkit for Operational Needs, or TRITON, can use multiple graphics processing units, or GPUs, to model flooding more quickly and accurately than existing tools.
Flood modeling is an essential part of emergency preparedness and response. However, models must be both fast and accurate—returning simulation results in a matter of minutes—to be useful tools for decision-making and planning. The higher the model's resolution, the more computational power it takes to run, so organizations may resort to simpler models that sacrifice accuracy for speed. The computational power of GPUs enables calculations by high-resolution models to run more quickly than simpler models that only use CPUs.
As high-performance computing has grown into an indispensable tool for science, it has also become a requirement for modern flood models to leverage the strength of hybrid CPU + GPU architectures. TRITON, the development of which was funded by the Air Force Numerical Weather Modeling Program, is specifically optimized for the multiarchitecture design of supercomputers like the IBM AC922 Summit at the Oak Ridge Leadership Computing Facility.
"The unique thing about TRITON is not just that it uses GPUs—it's not the only GPU-accessible flood model. But it is customized to use multiple GPUs simultaneously, which makes it suitable for solving flood problems on Summit," said Shih-Chieh Kao, an ORNL group leader who led the project.
The team put the model through its paces on Summit to demonstrate its consistency, stability and some of its unique capabilities, such as the runoff hydrograph. This optional input allows TRITON to simulate pluvial floods (local flash floods) in addition to riverine floods. During a riverine flood, a stream or river swells and inundates a floodplain. Using a Federal Emergency Management Agency dataset of 100-year flood zones as a benchmark, simulations that used the runoff hydrograph were more accurate than the basic hydraulic model alone.
"In order to really understand flood impact, we need to understand inundation, which includes how deep a river is and accounts for different flood events: riverine and flash floods. Conventional flood models usually only address riverine floods. TRITON can address both and provide more information about the flood impact," said Kao. "If you have this inundation information, you can overlay it on assets and evaluate which are at risk and which are not."
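The pluvial case can be caricatured in a few lines: rain falls on every grid cell, and standing water then relaxes toward lower neighbors. The toy routing scheme below is purely illustrative (all names and values are invented); TRITON itself solves the full two-dimensional hydraulic equations rather than anything this simple:

```python
import numpy as np

# Toy pluvial-flood routing on a 2D grid (periodic edges for simplicity).
# Each step: rain raises every cell, then each cell sheds a fraction of its
# water-surface difference to lower neighbors.
def step(depth, elevation, rain, k=0.25):
    h = elevation + depth + rain                # water surface after rainfall
    new_depth = depth + rain
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        diff = h - np.roll(h, shift, axis=axis)
        flow = k * np.clip(diff, 0, None) / 4   # outflow where we sit higher
        flow = np.minimum(flow, new_depth / 4)  # can't shed more than we hold
        new_depth = new_depth - flow + np.roll(flow, -shift, axis=axis)
    return new_depth

elev = np.fromfunction(lambda i, j: 0.01 * (i + j), (50, 50))  # gentle slope
depth = np.zeros((50, 50))
for _ in range(100):
    depth = step(depth, elev, rain=0.001)       # 1 mm of rain per step

print(depth.sum())   # ~250: the routing conserves the rainfall volume
assert depth.min() >= 0.0
```

Every cell in a real model runs an update like this at every time step, at far higher resolution and with real physics, which is why the stencil-heavy workload maps so naturally onto many GPUs working in parallel.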
In another test case, the team simulated the 2017 flood in the Houston metropolitan area caused by Hurricane Harvey. The simulation covered 10 days and was modeled on two different hardware configurations: one using multiple CPUs and the other using multiple GPUs. The results soundly demonstrated the advantage of a flood model designed to run on a multi-GPU configuration. Even the smallest hardware configuration—one compute node with six GPUs—completed the simulation faster than the most powerful multi-CPU configuration of 64 nodes.
As an open-source toolkit, TRITON is available for free and can be used on a range of computing platforms—from laptops and desktops to supercomputers. Members of the research team are continuously developing new features and are working on algorithms to scale the current capabilities up to an operational level.
"TRITON will be a foundation for us to keep building on, and we call it a toolkit for a reason. We keep building to make it more useful—that's our vision. As computing power increases, and the prices go down, eventually everyone should have more access to use these capabilities to better simulate floods," said Kao.
Citation: New model harnesses supercomputing power for more accurate flood simulations (2022, July 28) retrieved 10 August 2022 from https://phys.org/news/2022-07-harnesses-supercomputing-power-accurate-simulations.html
Google Cloud Platform (GCP) announced the coming availability of its Arm-based instance, the Tau T2A, last week (currently available in preview) to address the ever-expanding needs of customers developing and deploying scale-out, cloud-native workloads. What does this announcement mean for enterprise IT? Does landing this final major cloud player fully validate Arm in the enterprise? And what motivated Google to jump on the Arm bandwagon? We'll address these questions in the following paragraphs.
What was announced
The T2A virtual machine (VM) is part of the GCP Tau scale-out instance family. Tau is targeted at those cloud-native applications that run containerized or in VMs that don't require extreme compute resources. The Tau family was deployed initially with AMD EPYC (T2D) with fixed configurations to offer this instance type optimized for cost and scale-out performance.
The Tau T2A VM is based on Ampere Computing's Altra CPU. It's important to note that Microsoft Azure has also announced Ampere-based instances, and such instances are generally available at Oracle Cloud Infrastructure, as well as on several Chinese clouds (including TikTok parent ByteDance).
To motivate developers and customers, GCP offers a free 8-core, 32 GB RAM instance of T2A through general availability.
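For readers who want to try that shape, the following `gcloud` sketch creates a T2A VM matching the 8-vCPU, 32 GB configuration. The instance name, zone, and image family here are illustrative assumptions, not part of Google's announcement; T2A availability varies by region, so verify what your project supports before running.

```shell
# Sketch: create an 8-vCPU / 32 GB Tau T2A VM with an Arm64 OS image.
# The name, zone, and Debian image family are placeholders for illustration.
gcloud compute instances create t2a-demo \
    --zone=us-central1-a \
    --machine-type=t2a-standard-8 \
    --image-family=debian-11-arm64 \
    --image-project=debian-cloud
```

Note that an Arm64 image family must be chosen explicitly; a default x86-64 image will not boot on a T2A instance.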
How Google is positioning T2A
One of the things I find with Arm announcements is that sometimes the "why would I use this?" question isn't fully answered. It's almost as if an assumption is made that enterprise IT professionals fully understand the price-performance benefits of Arm and workload affinity.
Through the briefings that Moor Insights & Strategy's Patrick Moorhead and I received, as well as the various public statements from Google, it is refreshing to see the company helping guide its customers. As mentioned, T2A is a VM targeting scale-out workloads that don't require maximum compute resources at the individual instance level. Unsurprisingly, one of the supporting blogs from Google discusses optimizations for the Google Kubernetes Engine (GKE), Google's container environment.
A valuable capability of GKE is its multi-architecture support, meaning containerized workloads can run in x86 and Arm environments simultaneously. While this has many practical benefits, it also makes it easier for IT organizations to dip their collective toes in the "Arm" water, so to speak. It is capabilities such as this (not unique to GCP) that allow organizations to deploy on Arm seamlessly.
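In practice, taking advantage of multi-architecture support starts with publishing container images that cover both architectures. A minimal sketch using Docker Buildx, plus a quick check of node architectures in a GKE cluster (the image name `myrepo/app` is a placeholder):

```shell
# Build and push a single image manifest covering both x86-64 and Arm64,
# so the same image reference works on either node type.
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t myrepo/app:latest \
    --push .

# List cluster nodes with their CPU architecture label to confirm where
# each workload can be scheduled.
kubectl get nodes -L kubernetes.io/arch
```

Kubernetes then pulls the matching variant of the image automatically based on each node's architecture, which is what allows mixed x86/Arm clusters to run the same manifests unchanged.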
It should be noted that T2A also runs Google's Container-Optimized OS, so organizations utilizing the popular Docker containers can expect full support.
Google has also enabled its Batch and Dataflow cloud services to run on T2A. These two services, which target batch processing and streaming analytics respectively, benefit from the Tau family's scale-out nature and from T2A in particular.
While Google provides good guidance for its customers considering exploring or deploying on Arm, the use of T2A can be far broader. Independent of Google, Ampere has developed a robust ecosystem of partners spanning the operating system to the workload. Functions like serverless caching via Momento, SLURM workload scheduling via SchedMD, and HPC through Rescale are all workloads optimized for Ampere. And there are many more.
A few more details on T2A
Google is careful in how it positions its VM instances. When the company released its Tau VM family last year, it was very clear in positioning these as cost-effective, scale-out VMs. As one would expect with "cost-effective," some options customers may prefer are lacking, such as local SSD support and higher-bandwidth networking (32 Gbps supported in T2A vs. 100 Gbps in other instances). Further, once a customer is locked into a T2A VM size (vCPU and RAM), they cannot dynamically add more resources.
Given the workloads targeted, the above makes sense, as customers look to distribute applications across many "good enough" performing VMs that don’t require maximum network throughput.
I like that GCP drives all of its specialized value into the Tau family, including T2A. The security measures, memory (NUMA) optimizations, network optimizations, and so on that GCP has developed are all lit up in T2A. This should assure customers that T2A instances enjoy the same level of support as the highest-performing compute engines.
Has Arm arrived in the enterprise?
The quick and simple answer is yes, though not for every workload. GCP announcing Arm-based instances rounds out support from all the major CSPs. This widespread support hasn't happened because Arm is cool or trendy. Nor has it happened as an exercise to drive better pricing from the x86 players. Arm is being deployed because CSPs can deliver equal or better performance for specific workloads at a lower cost and power envelope. Period. This is basic economics.
While Arm is not going to replace x86 for running virtualized infrastructure on VMware anytime soon, there are still use cases where Arm is a good fit. In its blog promoting T2A, one of GCP's reference customers is Harvard University. The school runs several compute-intensive workloads on SLURM and VirtualFlow, and T2A allows it to run tens of thousands of VMs in parallel, reducing compute time significantly. But here's the key to what Harvard had to say: the migration to T2A was done with minimal effort. Such is the beauty of cloud-native development. The cost and time savings will be immediately recognized.
I like this Harvard reference because it reminds us that Arm is not just for the digitally born companies that have never had an on-premises datacenter. It's for any company embarking on a digital transformation or modernization project.
Further proof of Arm's move into the datacenter can be seen in HPE's announcement of the upcoming ProLiant RL300 Gen11 server based on Ampere's Altra CPU. This is the first mainstream server that HPE has announced ahead of its Gen11 launch, and I expect the market will see competitors roll out their own servers in time.
Is T2A just a competitive response from Google?
I don't believe that Google is interested in investing in and rolling out an Arm-based instance simply to be like every other cloud provider. GCP is run by many intelligent people who firmly understand their customers' wants and needs.
As a company, Google has deep roots in silicon design, development, and optimization. It's no secret that the company works with CPU vendors to deliver Google compute-optimized platforms. I think GCP has done its due diligence in ensuring the Ampere CPU could and would meet its particular needs and those of its customers.
My only question is around the longer-term strategy for Google and Arm. There are two camps in the CSP space: those that design their own silicon (e.g., AWS with Graviton) and those that deploy Ampere. Given Google's history in silicon development, could we see a custom chip in the future? It is a scenario that is entirely plausible.
Google rounds out support for Arm from the major CSPs with its Tau T2A VM offering, based on Ampere Computing’s Altra CPU. While the company is last to market in this regard, it has done a thorough job of positioning Arm relative to x86 and target workloads.
I believe this is just the beginning for Arm at GCP and suspect the company will eventually roll Arm offerings into other compute engine offerings over time. But I think it will do this in a very measured way, looking for areas where Arm can offer a differentiated experience for customers.
It's a good time to be a proponent of Arm. And a better day to be an investor in Ampere Computing. There is no doubt that Arm is here to stay, not as a cheap alternative to x86, but as an architecture that can be optimized for many workloads, with the ability to lead in raw performance, price-performance, and performance-per-watt, at a time when each of these measures is so critical.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.
Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.