For a long time, the quantum computing industry seemed to avoid talking about “quantum advantage” or “quantum supremacy,” the point where quantum computers can solve problems that would simply take too long to solve on classical computers. To some degree, that’s because the industry wanted to avoid the hype that comes with those terms, but IBM today brought quantum advantage back into the conversation by detailing how it plans to use a novel error mitigation technique to chart a path toward running the increasingly large circuits it will take to reach this goal, at least for a certain set of algorithms.
It’s no secret that quantum computers hate nothing more than noise. Qubits are fickle things, after all, and the smallest change in temperature or vibration can make them decohere. There’s a reason the current era of quantum computing is associated with “noisy intermediate-scale quantum (NISQ) technology.”
The engineers at IBM and every other quantum computing company are making slow but steady strides toward reducing that noise at both the hardware and software levels, with IBM’s 65-qubit systems from 2020, for example, now showing twice the coherence time they had at launch. The coherence time of IBM’s transmon superconducting qubits is now over 1 ms.
But IBM is also taking another approach, betting on new error mitigation techniques dubbed probabilistic error cancellation and zero-noise extrapolation. At a very basic level, you can almost think of this as the quantum equivalent of the active noise cancellation in your headphones. The software regularly samples the system for noise and then essentially inverts those noisy circuits to produce virtually error-free results.
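To make the zero-noise extrapolation idea concrete, here is a minimal, self-contained Python sketch with synthetic numbers; the exponential decay model and the “measured” values are illustrative assumptions, not IBM’s data. The same circuit is notionally run at deliberately amplified noise levels, and the results are extrapolated back to the zero-noise limit.

```python
import numpy as np

# Zero-noise extrapolation (ZNE), sketched with synthetic data.
# The circuit is run at several amplified noise levels; a curve is fitted
# to the measured expectation values and evaluated at zero noise.

scale_factors = np.array([1.0, 2.0, 3.0])  # noise amplification factors
# Hypothetical expectation values of some observable, decaying roughly
# exponentially with noise strength (a common modeling assumption).
measured = np.array([0.82, 0.67, 0.55])

# Fit log(<O>) linearly in the scale factor (an exponential ansatz),
# then evaluate the fit at scale factor 0 for the noiseless estimate.
slope, intercept = np.polyfit(scale_factors, np.log(measured), 1)
zero_noise_estimate = np.exp(intercept)

print(f"zero-noise estimate: {zero_noise_estimate:.3f}")  # ~1.0 here
```

In practice the noise is amplified on the hardware itself (for example, by stretching pulses or repeating gates) rather than simulated, and the choice of extrapolation ansatz is itself a source of error.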
IBM has now shown that this isn’t just a theoretical possibility but actually works on its existing systems. One disadvantage is that constantly sampling these noisy circuits incurs quite a bit of overhead, and that overhead is exponential in the number of qubits and the circuit depth. But that’s a trade-off worth making, argues Jerry Chow, the director of Hardware Development for IBM Quantum.
“Error mitigation is about finding ways to deal with the physical errors in certain ways, by learning about the errors and also just running quantum circuits in such a way that allows us to cancel them,” explained Chow. “In some ways, error correction is like the ultimate error mitigation, but the point is that there are techniques that are more near term with a lot of the hardware that we’re building that already provide this avenue. The one that we’re really excited about is called probabilistic error cancellation. And that one really is a way of trading off runtime — trading off running more circuits in order to learn about the noise that might be inherent to the system that is impacting your calculations.”
The system essentially inserts additional gates into existing circuits to sample the noise inherent in the system. And while the overhead increases exponentially with the size of the system, the IBM team believes it’s a weaker exponential than the best classical methods to estimate those same circuits.
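As a rough illustration of the quasi-probability sampling behind probabilistic error cancellation, here is a toy Python sketch for a single qubit under depolarizing noise. The channel, its inverse decomposition, and all parameters are textbook assumptions chosen for illustration, not IBM’s implementation; note how each estimate is scaled by a factor gamma whose growth with gate count is the source of the exponential overhead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-qubit model: track only the Bloch-vector Z component.
# A depolarizing channel with parameter p shrinks <Z> by lam = 1 - 4p/3.
p = 0.05
lam = 1.0 - 4.0 * p / 3.0

# Quasi-probability decomposition of the inverse channel into the identity
# and Pauli conjugations (standard for depolarizing noise); note the
# negative coefficients, which is why this is not a true probability.
q_I = (1.0 + 3.0 / lam) / 4.0
q_P = (1.0 - 1.0 / lam) / 4.0          # coefficient for each of X, Y, Z
quasi = np.array([q_I, q_P, q_P, q_P])
gamma = np.abs(quasi).sum()            # sampling overhead; variance ~ gamma**2

z_ideal = 1.0
estimates = []
for _ in range(20000):
    z = lam * z_ideal                  # one noisy "identity" gate damps <Z>
    # Sample a correction operation with probability |q_i| / gamma.
    i = rng.choice(4, p=np.abs(quasi) / gamma)
    if i in (1, 2):                    # conjugation by X or Y flips <Z>
        z = -z
    estimates.append(np.sign(quasi[i]) * gamma * z)

print(f"raw noisy <Z>: {lam:.3f}, PEC estimate: {np.mean(estimates):.3f}")
```

Averaged over many samples, the signed, gamma-scaled estimates converge on the noiseless value of 1.0, while the raw noisy value stays near 0.93; stringing many such gates together multiplies the gamma factors, which is where the exponential runtime cost comes from.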
As IBM previously announced, it plans to introduce error mitigation and suppression techniques into its Qiskit Runtime by 2024 or 2025 so developers won’t even have to think about these when writing their code.
IBM has published details on a collection of techniques it hopes will usher in quantum advantage, the inflection point at which the utility of quantum computers exceeds that of traditional machines.
The focus is on a process known as error mitigation, which is designed to improve the consistency and reliability of circuits running on quantum processors by eliminating sources of noise.
IBM says that advances in error mitigation will allow quantum computers to scale steadily in performance, in a pattern similar to that exhibited over the years in classical computing.
Although plenty has been said about the potential of quantum computers, which exploit a phenomenon known as superposition to perform calculations extremely quickly, the reality is that current systems are incapable of outstripping traditional supercomputers on a consistent basis.
A lot of work is going into improving performance by increasing the number of qubits on a quantum processor, but researchers are also investigating opportunities related to qubit design, the pairing of quantum and classical computers, new refrigeration techniques and more.
IBM, for its part, has now said it believes an investment in error mitigation will bear the most fruit at this stage in the development of quantum computing.
“Indeed, it is widely accepted that one must first build a large fault-tolerant quantum processor before any of the quantum algorithms with proven super-polynomial speed-up can be implemented. Building such a processor therefore is the central goal for our development,” explained IBM, in a blog post.
“However, recent advances in techniques we refer to broadly as quantum error mitigation allow us to lay out a smoother path towards this goal. Along this path, advances in qubit coherence, gate fidelities, and speed immediately translate to measurable advantage in computation, akin to the steady progress historically observed with classical computers.”
The post is geared towards a highly technical audience and goes into great detail, but the main takeaway is this: the ability to quiet certain sources of error will allow for increasingly complex quantum workloads to be executed with reliable results.
According to IBM, the latest error mitigation techniques go “beyond just theory”, with the advantage of these methods having already been demonstrated on some of the most powerful quantum hardware currently available.
“At IBM Quantum, we plan to continue developing our hardware and software with this path in mind,” the company added.
“At the same time, together with our partners and the growing quantum community, we will continue expanding the list of problems that we can map to quantum circuits and develop better ways of comparing quantum circuit approaches to traditional classical methods to determine if a problem can demonstrate quantum advantage. We fully expect that this continuous path that we have outlined will bring us practical quantum computing.”
You don’t have to be a physicist to know that noise and quantum computing don’t mix. Any noise, movement or temperature swing causes qubits – the quantum computing equivalent to a binary bit in classical computing – to fail.
That’s one of the main reasons quantum advantage (the point at which quantum surpasses classical computing) and quantum supremacy (when quantum computers solve a problem not feasible for classical computing) still feel like longer-term goals. The wait should be worth it, though, as quantum computers promise exponential speedups over classical computing, which tops out at supercomputing. However, due to the intricacies of quantum physics (e.g., entanglement), quantum computers are also more prone to errors from environmental factors than supercomputers or high-performance computers.
Quantum errors arise from what’s known as decoherence, a process that occurs when noise or nonoptimal temperatures interfere with qubits, changing their quantum states and causing information stored by the quantum computer to be lost.
Many enterprises view quantum computing technology as a zero-sum scenario: if you want value from a quantum computer, you need fault-tolerant quantum processors and a multitude of qubits. While we wait for those, we’re stuck in the NISQ era of noisy intermediate-scale quantum machines, in which quantum hasn’t yet surpassed classical computers.
That’s an impression IBM hopes to change.
In a blog post published today by IBM, its quantum team (Kristan Temme, Ewout van den Berg, Abhinav Kandala and Jay Gambetta) writes that the history of classical computing is one of incremental advances.
“Although quantum computers have seen tremendous improvements in their scale, quality and speed in recent years, such a gradual evolution seems to be missing from the narrative,” the team wrote. “However, recent advances in techniques we refer to broadly as quantum error mitigation allow us to lay out a smoother path towards this goal. Along this path, advances in qubit coherence, gate fidelities and speed immediately translate to measurable advantage in computation, akin to the steady progress historically observed with classical computers.”
In a move to get a quantum advantage sooner – and in incremental steps – IBM claims to have created a technique that’s designed to tap more value from noisy qubits and move away from NISQ.
Instead of focusing solely on fault-tolerant computers, IBM’s goal is continuous and incremental improvement, Jerry Chow, the director of hardware development for IBM Quantum, told VentureBeat.
To mitigate errors, Chow points to IBM’s new probabilistic error cancellation, a technique designed to invert noisy quantum circuits to achieve error-free results, even though the circuits themselves are noisy. It does bring a runtime tradeoff, he said, because you’re running more circuits in order to gain insight into the noise causing the errors.
The goal of the new technique is to provide a step, rather than a leap, toward quantum supremacy. It’s “a near-term solution,” Chow said, and part of a suite of techniques that will help IBM learn about error correction through error mitigation. “As you increase the runtime, you learn more as you run more qubits,” he explained.
Chow said that while IBM continues to scale its quantum platform, this offers an incremental step. Last year, IBM unveiled its 127-qubit Eagle processor, which is capable of running quantum circuits that can’t be replicated classically. Based on the quantum roadmap it laid out in May, IBM is on track to reach 4,000-plus-qubit quantum devices in 2025.
Probabilistic error cancellation represents a shift for IBM and the quantum field overall. Rather than relying solely on experiments to achieve full error correction under certain circumstances, IBM has focused on a continuous push to address quantum errors today while still moving toward fault-tolerant machines, Chow said. “You need high-quality hardware to run billions of circuits. Speed is needed. The goal is not to do error mitigation long-term. It’s not all or nothing.”
IBM quantum computing bloggers add that its quantum error mitigation technique “is the continuous path that will take us from today’s quantum hardware to tomorrow’s fault-tolerant quantum computers. This path will let us run larger circuits needed for quantum advantage, one hardware improvement at a time.”
In the past few months, International Business Machines (NYSE:IBM) has turned into one of the best-performing tech names. Since I first covered the company in January of 2021, IBM has returned 17%, compared to merely 8% for the broader equity market.
During this timeframe the spin-off of Kyndryl (KD) was completed and now that the underperforming assets have been unloaded, expectations around the 'New IBM' are running high. Unfortunately, however, the strong share price performance since November of last year has little to do with IBM's fundamentals.
As we see in the graph below, the iShares Edge MSCI USA Momentum Factor ETF (MTUM) also peaked in November of last year, and since then its gap with the iShares Edge MSCI USA Value Factor ETF (VLUE) has been widening.
As expectations of monetary tightening began to surface and inflationary pressures intensified, high-duration and momentum stocks began to underperform lower-duration value companies. I talked about this dynamic in my recent analysis, 'The Cloud Space In Numbers: What Matters The Most', where I showed why the high-growth names were at risk. More specifically, I distinguished between the companies in the bottom left-hand corner and those in the upper right-hand corner in the graph below.
As we see in the graph below, the high flyers, such as Workday (WDAY), Salesforce (CRM) and Adobe (ADBE), have become the worst performers, while companies like IBM and Oracle (ORCL) that were usually associated with low expected growth and low valuation multiples became the new stars.
Although this was good news for value investors as a whole, and a trend that could easily continue, we should distinguish between strong business performance and market-wide forces. Having said that, IBM shareholders should not simply assume that the strong share price performance is a sign of strong execution. Needless to say, Kyndryl's disastrous performance, losing 75% of its value in a matter of months, also lies on the shoulders of IBM's current management.
IBM's recently reported quarterly numbers once again disappointed, and management seems to have largely attributed the slightly lower guidance to movements in the U.S. dollar.
Alongside the lower guidance, gross margins also fell across the board, with the exception of the Financing division, which is small relative to the other business units.
Rising labour and component costs were also to blame during the quarter, and management is addressing these through pricing actions, which will take some time to show results.
Although this is likely true, IBM is also reducing spend on research and development and on selling, general and administrative functions. Such actions are usually taken as a precaution during downturns; however, consistently lower spend in these areas can have grave consequences.
Last but not least, the reported EPS numbers from continuing operations should also be adjusted as I have outlined before.
I usually exclude the royalty income and all income/expenses grouped in the 'other' category. These expenses/income usually have little to do with IBM's ongoing business and as such I deem them to be irrelevant for long-term shareholders.
On an adjusted basis, EPS increased from $1.08 in Q2 2021 to $1.33 in Q2 2022, which, although a notable increase, remains low. As a back-of-the-envelope calculation, if we annualize the last quarterly result, we end up with a total EPS of roughly $5.3, or a forward P/E ratio of almost 25x. Given all the difficulties facing IBM and its growth profile, this still appears too high.
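To spell out that back-of-the-envelope arithmetic (assuming a share price of roughly $130, around where IBM traded at the time):

$$
\text{Annualized EPS} \approx 4 \times \$1.33 = \$5.32, \qquad
\text{Forward P/E} \approx \frac{\$130}{\$5.32} \approx 24.4\text{x}
$$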
As expected, IBM continued its strategy of fueling growth through a frenzy of acquisitions and divestitures. Following the Kyndryl spin-off, the company completed four deals in a matter of just a few months.
As I have said before, none of that bodes well for the prospects of IBM's legacy businesses. Moreover, management does not seem to focus on organic growth numbers in its quarterly reviews, which is even more worrisome.
Even now that the underperforming assets have been off-loaded, IBM's dividend payments remain too high relative to its adjusted income.
* adjusted for Intellectual property and custom development income, Other (income) and expense and Income/(loss) from discontinued operations, net of tax
As previously noted, this puts the company between a rock and a hard place. However, reducing or discontinuing the dividend could potentially result in an exodus of long-term shareholders.
We should also mention that IBM has barely paid any taxes in recent years due to various tax credits (see below). This, however, is gradually changing and will likely provide yet another headwind for EPS numbers in the future.
Even though the narrative around IBM has been largely focused on its business turning around, the company's free cash flow per share continues to decline.
The potential upside from a successful turnaround story gravitating around the hybrid cloud is a major reason why many current and potential IBM shareholders hope for light at the end of the tunnel. However, little seems to have changed at IBM following the spin-off of Kyndryl, and a declining business also creates a significant moral hazard problem for management, in which more risk-taking is incentivized. Combined with the fact that IBM is doing M&A deals almost on a monthly basis, this creates significant risks for long-term owners of the business.
Phishing incidents are on the rise. A report from IBM shows that phishing was the most popular attack vector in 2021, with one in five employees falling victim to phishing techniques.
Although technical solutions protect against phishing threats, no solution is 100% effective. Consequently, companies have no choice but to involve their employees in the fight against hackers. This is where security awareness training comes into play.
Security awareness training gives companies the confidence that their employees will execute the right response when they discover a phishing message in their inbox.
As the saying goes, "knowledge is power," but the effectiveness of knowledge depends heavily on how it is delivered. When it comes to phishing attacks, simulations are among the most effective forms of training because they directly mimic real attacks and elicit the same reactions an employee would have during an actual one. Since employees do not know whether a suspicious email in their inbox is a simulation or a real threat, the training becomes even more valuable.
It is critical to plan, implement and evaluate a cyber awareness training program to ensure it truly changes employee behavior. However, for this effort to be successful, it should involve much more than just emailing employees. Key practices to consider include:
Because employees cannot tell the difference between phishing simulations and real cyberattacks, it's important to remember that simulations can evoke strong emotions and reactions, so awareness training should be conducted thoughtfully. As organizations engage their employees to combat ever-increasing attacks and protect their assets, it is important to keep morale high and create a positive culture of cyber hygiene.
Based on years of experience, cybersecurity firm CybeReady has seen companies fall into the following common mistakes.
The approach of running a phishing simulation as a test to catch and punish "repeat offenders" can do more harm than good.
An educational experience that involves stress is counterproductive and even traumatic. As a result, employees will not go through the training but look for ways to circumvent the system. Overall, the fear-based "audit approach" is not beneficial to the organization in the long run because it cannot provide the necessary training over an extended period.
Solution #1: Be sensitive
Because maintaining positive employee morale is critical to the organization's well-being, provide positive just-in-time training.
Just-in-time training means that once employees have clicked on a link within the simulated attack, they are directed to a short and concise training session. The idea is to quickly educate the employee on their mistake and deliver them essential tips on spotting malicious emails in the future.
This is also an opportunity for positive reinforcement, so be sure to keep the training short, concise, and positive.
Solution #2: Inform relevant departments.
Communicate with relevant stakeholders to ensure they are aware of ongoing phishing simulation training. Many organizations forget to inform relevant stakeholders, such as HR or other employees, that the simulations are being conducted. Learning has the best effect when participants have the opportunity to feel supported, make mistakes, and correct them.
It is important to vary the simulations. Sending the same simulation to all employees, especially at the same time, is not only uninstructive but also yields no valid metrics on organizational risk.
The "warning effect" - the first employee to discover or fall for the simulation warns the others. This prepares your employees to respond to the "threat" by anticipating the simulation, thus bypassing the simulation and the training opportunity.
Another negative impact is social desirability bias, which causes employees to over-report incidents to IT in order to be viewed more favorably. This overloads both the reporting system and the IT department.
This form of simulation also leads to inaccurate results, such as unrealistically low click-through rates and over-reporting rates. Thus, the metrics do not show the real risks of the company or the problems that need to be addressed.
Solution: Drip mode
Drip mode allows sending multiple simulations to different employees at different times. Certain software solutions can even do this automatically by sending a variety of simulations to different groups of employees. It's also important to implement a continuous cycle to ensure that all new employees are properly onboarded and to reinforce that security is important 24/7 - not just checking a box for minimum compliance.
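As a sketch of what drip mode could look like in practice, here is a minimal Python example that staggers randomized simulation templates across employees over a two-week window. The addresses, template names, and scheduling logic are hypothetical, not CybeReady's implementation.

```python
import random
from datetime import datetime, timedelta

# Hypothetical drip-mode scheduler: each employee receives a randomly
# chosen simulation template at a staggered send time, so coworkers
# cannot warn each other about one identical, simultaneous email.

employees = ["ana@example.com", "ben@example.com", "chris@example.com"]
templates = ["invoice-overdue", "password-reset", "shared-document"]

campaign_start = datetime(2022, 8, 1, 9, 0)  # campaign kickoff, 9:00 AM

schedule = []
for employee in employees:
    send_at = campaign_start + timedelta(
        days=random.randrange(14),         # spread sends over two weeks
        minutes=random.randrange(8 * 60),  # within an 8-hour workday
    )
    schedule.append({"to": employee,
                     "template": random.choice(templates),
                     "send_at": send_at})

for job in sorted(schedule, key=lambda j: j["send_at"]):
    print(job["send_at"].strftime("%Y-%m-%d %H:%M"),
          job["to"], job["template"])
```

A real platform would also rotate templates between cycles and fold new hires into the next cycle automatically, in line with the continuous training cadence described above.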
With over 3.4 billion phishing attacks per day, it's safe to assume that at least a million of them differ in complexity, language, approach, or even tactics.
Unfortunately, no single phishing simulation can accurately reflect an organization's risk. Relying on a single phishing simulation result is unlikely to provide reliable results or comprehensive training.
Another important consideration is that different groups of employees respond differently to threats, not only because of their vigilance, training, position, tenure, or even education level but because the response to phishing attacks is also contextual.
Solution: Implement a variety of training programs
Behavior change is an evolutionary process and should therefore be measured over time. Each training session contributes to the progress of the training. Training effectiveness, or in other words, an accurate reflection of real organizational behavior change, can be determined after multiple training sessions and over time.
The most effective solution is to continuously conduct various training programs (at least once a month) with multiple simulations.
It is highly recommended to train employees according to their risk level. A diverse and comprehensive simulation program also provides reliable measurement data based on systematic behavior over time. To validate their efforts at effective training, organizations should be able to obtain a valid indication of their risk at any given point in time while monitoring progress in risk reduction.
Creating such a program may seem overwhelming and time-consuming. That's why we have created a playbook of the 10 key practices you can use to create a simple and effective phishing simulation. Simply download the CybeReady Playbook or meet with one of our experts for a product demo and learn how CybeReady's fully automated security awareness training platform can help your organization achieve the fastest results with virtually zero IT effort.
A new research center for artificial intelligence and machine learning has sprung up at the University of Oregon, thanks to a collaboration between IBM and the Oregon Advanced Computing Institute for Science and Society. The Oregon Center for Enterprise AI eXchange (CE-AIX) leverages the university's high-performance computing technology and enterprise servers from IBM to create new training opportunities and collaborations with industry.
"The new lab facility will be a valuable resource for worldwide universities and enterprise companies wanting to take advantage of IBM Enterprise Servers POWER9 and POWER10 combined with IBM Spectrum storage, along with AIX and RHEL with OpenShift," said Ganesan Narayanasamy, IBM's leader for academic and research worldwide.
Narayanasamy said the new center extends state-of-the-art facilities and other Silicon Valley-style services to researchers, system developers, and other users looking to take advantage of open-source high-performance computing resources. The center has already helped thousands of students gain exposure and practice with its high-performance computing training, and it is expected to serve as a global hub that will help prepare the next generation of computer scientists, according to the center's director Sameer Shende.
"We aim to expand the skillset of researchers and students in the area of commercial application of artificial intelligence and machine learning, as well as high-performance computing technologies," Shende said.
Thanks to a long-term loan agreement with IBM, the center has access to powerful enterprise servers and other capabilities. It was envisioned to bring together data scientists from businesses in different domains, such as financial services, manufacturing, and transportation, along with IBM research and development engineers, IBM partner data scientists, and university students and researchers.
The new center also has the potential to be leveraged by everyone from global transportation companies seeking to design more efficient trucking routes to clean energy firms looking to design better wind turbines based on models of airflow patterns. At the University of Oregon, there are potential applications in data science, machine learning, environmental hazards monitoring, and other emerging areas of research and innovation.
"Enterprise AI is a team sport," said Raj Krishnamurthy, an IBM chief architect for enterprise AI and co-director of the new center. "As businesses continue to operationalize AI in mission-critical systems, the use cases and methodologies developed from collaboration in this center will further promote the adoption of trusted AI techniques in the enterprise."
Ultimately, the center will contribute to the University of Oregon's overall research excellence, said AR Razdan, who serves as the university's vice president for research and innovation.
"The center marks another great step forward in [the university's] ongoing efforts to bring together interdisciplinary teams of researchers and innovators," Razdan said.
This post was created by IBM with Insider Studios.
The guides leverage Astadia’s 25+ years of expertise in partnering with organizations to reduce costs, risks and timeframes when migrating their IBM mainframe applications to cloud platforms
BOSTON, August 03, 2022--(BUSINESS WIRE)--Astadia is pleased to announce the release of a new series of Mainframe-to-Cloud reference architecture guides. The documents cover how to refactor IBM mainframe applications to Microsoft Azure, Amazon Web Services (AWS), Google Cloud, and Oracle Cloud Infrastructure (OCI), offering a deep dive into the migration process for all major target cloud platforms using Astadia’s FastTrack software platform and methodology.
As enterprises and government agencies are under pressure to modernize their IT environments and make them more agile, scalable and cost-efficient, refactoring mainframe applications in the cloud is recognized as one of the most efficient and fastest modernization solutions. By making the guides available, Astadia equips business and IT professionals with a step-by-step approach on how to refactor mission-critical business systems and benefit from highly automated code transformation, data conversion and testing to reduce costs, risks and timeframes in mainframe migration projects.
"Understanding all aspects of legacy application modernization and having access to the most performant solutions is crucial to accelerating digital transformation," said Scott G. Silk, Chairman and CEO. "More and more organizations are choosing to refactor mainframe applications to the cloud. These guides are meant to assist their teams in transitioning fast and safely by benefiting from Astadia’s expertise, software tools, partnerships, and technology coverage in mainframe-to-cloud migrations," said Mr. Silk.
The new guides are part of Astadia’s free Mainframe-to-Cloud Modernization series, an ample collection of guides covering various mainframe migration options, technologies, and cloud platforms. The series covers IBM (NYSE:IBM) Mainframes.
In addition to the reference architecture diagrams, these comprehensive guides include various techniques and methodologies that may be used in forming a complete and effective Legacy Modernization plan. The documents analyze the important role of the mainframe platform, and how to preserve previous investments in information systems when transitioning to the cloud.
In each of the IBM Mainframe Reference Architecture white papers, readers will explore:
Benefits, approaches, and challenges of mainframe modernization
Understanding typical IBM Mainframe Architecture
An overview of Azure/AWS/Google Cloud/Oracle Cloud
Detailed diagrams of IBM mappings to Azure/AWS/Google Cloud/Oracle Cloud
How to ensure project success in mainframe modernization
The guides are available for download here:
To access more mainframe modernization resources, visit the Astadia learning center on www.astadia.com.
Astadia is the market-leading software-enabled mainframe migration company, specializing in moving IBM and Unisys mainframe applications and databases to distributed and cloud platforms in unprecedented timeframes. With more than 30 years of experience, and over 300 mainframe migrations completed, enterprises and government organizations choose Astadia for its deep expertise, range of technologies, and the ability to automate complex migrations, as well as testing at scale. Learn more on www.astadia.com.
Contact: Wilson Rains, Chief Revenue Officer
LAWRENCE, Kan.--(BUSINESS WIRE)--Jul 28, 2022--
Cobalt Iron Inc., a leading provider of SaaS-based enterprise data protection, today announced that the company has been deemed one of the 10 Most Promising IBM Solution Providers 2022 by CIOReview Magazine. The annual list of companies is selected by a panel of experts and members of CIOReview Magazine’s editorial board to recognize and promote innovation and entrepreneurship. A technology partner for IBM, Cobalt Iron earned the distinction based on its Compass ® enterprise SaaS backup platform for monitoring, managing, provisioning, and securing the entire enterprise backup landscape.
Cobalt Iron Compass® is a SaaS-based data protection platform leveraging strong IBM technologies for delivering a secure, modernized approach to data protection. (Graphic: Business Wire)
According to CIOReview, “Cobalt Iron has built a patented cyber-resilience technology in a SaaS model to alleviate the complexities of managing large, multivendor setups, providing an effectual humanless backup experience. This SaaS-based data protection platform, called Compass, leverages strong IBM technologies. For example, IBM Spectrum Protect is embedded into the platform from a data backup and recovery perspective. ... By combining IBM’s technologies and the intellectual property built by Cobalt Iron, the company delivers a secure, modernized approach to data protection, providing a ‘true’ software as a service.”
Through proprietary technology, the Compass data protection platform integrates with, automates, and optimizes best-of-breed technologies, including IBM Spectrum Protect, IBM FlashSystem, IBM Red Hat Linux, IBM Cloud, and IBM Cloud Object Storage. Compass enhances and extends IBM technologies by automating more than 80% of backup infrastructure operations, optimizing the backup landscape through analytics, and securing backup data, making it a valuable addition to IBM’s data protection offerings.
CIOReview also praised Compass for its simple and intuitive interface to display a consolidated view of data backups across an entire organization without logging in to every backup product instance to extract data. The machine learning-enabled platform also automates backup processes and infrastructure, and it uses open APIs to connect with ticket management systems to generate tickets automatically about any backups that need immediate attention.
To ensure the security of data backups, Cobalt Iron has developed an architecture and security feature set called Cyber Shield for 24/7 threat protection, detection, and analysis that improves ransomware responsiveness. Compass is also being enhanced to use several patented techniques that are specific to analytics and ransomware. For example, analytics-based cloud brokering of data protection operations helps enterprises make secure, efficient, and cost-effective use of their cloud infrastructures. Another patented technique — dynamic IT infrastructure optimization in response to cyberthreats — offers unique ransomware analytics and automated optimization that will enable Compass to reconfigure IT infrastructure automatically when it detects cyberthreats, such as a ransomware attack, and dynamically adjust access to backup infrastructure and data to reduce exposure.
Compass is part of IBM’s product portfolio through the IBM Passport Advantage program. Through Passport Advantage, IBM sellers, partners, and distributors around the world can sell Compass under IBM part numbers to any organizations, particularly complex enterprises, that greatly benefit from the automated data protection and anti-ransomware solutions Compass delivers.
CIOReview’s report concludes, “With such innovations, all eyes will be on Cobalt Iron for further advancements in humanless, secure data backup solutions. Cobalt Iron currently focuses on IP protection and continuous R&D to bring about additional cybersecurity-related innovations, promising a more secure future for an enterprise’s data.”
About Cobalt Iron
Cobalt Iron was founded in 2013 to bring about fundamental changes in the world’s approach to secure data protection, and today the company’s Compass ® is the world’s leading SaaS-based enterprise data protection system. Through analytics and automation, Compass enables enterprises to transform and optimize legacy backup solutions into a simple cloud-based architecture with built-in cybersecurity. Processing more than 8 million jobs a month for customers in 44 countries, Compass delivers modern data protection for enterprise customers around the world. www.cobaltiron.com
Product or service names mentioned herein are the trademarks of their respective owners.
Link to Word Doc: www.wallstcom.com/CobaltIron/220728-Cobalt_Iron-CIOReview_Top_IBM_Provider_2022.docx
CONTACT:
Agency Contact: Wall Street Communications
Tel: +1 801 326 9946
Web: www.wallstcom.com

Cobalt Iron Contact: VP of Marketing
Tel: +1 785 979 9461
IBM’s annual Cost of Data Breach Report for 2022 is packed with revelations, and as usual none of them are good news. Headlining the report is the record-setting cost of data breaches, with the global average now at $4.35 million. The report also reveals that much of that expense comes with the data breach version of “long Covid,” expenses that are realized more than a year after the attack.
Most organizations (60%) are passing these added costs on to consumers in the form of higher prices. And while 83% of organizations now report experiencing at least one data breach, only a small minority are adopting zero trust strategies.
The IBM report draws on input from 550 global organizations surveyed about the period between March 2021 and March 2022, in partnership with the Ponemon Institute.
Though the average cost of a data breach is up, it is only by about 2.6%; the average in 2021 was $4.24 million. This represents a total climb of 13% since 2020, however, reflecting the general spike in cyber crime seen during the pandemic years.
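The arithmetic behind those figures, using IBM's published averages (the 2020 report put the global average at $3.86 million):

$$
\frac{4.35 - 4.24}{4.24} \approx 2.6\%, \qquad
\frac{4.35 - 3.86}{3.86} \approx 12.7\% \approx 13\%
$$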
Organizations are also increasingly opting not to absorb the cost of data breaches, with the majority (60%) compensating by raising consumer prices separate from any other recent increases due to inflation or supply chain issues. The report indicates that this may be an underreported upward influence on prices of consumer goods, as 83% of organizations now say that they have been breached at least once.
Brad Hong, Customer Success Manager for Horizon3.ai, sees a potential consumer backlash on the horizon once public awareness of this practice grows: “It’s already a breach of confidence to lose the confidential data of customers, and sure there’s bound to be an organization across those surveyed who genuinely did put in the effort to protect against and curb attacks, but for those who did nothing, those who, instead of creating a disaster recovery plan, just bought cyber insurance to cover the org’s operational losses, and those who simply didn’t care enough to heed the warnings, it’s the coup de grâce to then pass the cost of breaches to the same customers who are now the victims of a data breach. I’d be curious to know what percent of the 60% of organizations who increased the price of their products and services are using the extra revenue for a war chest or to actually reinforce their security—realistically, it’s most likely just being used to fill a gap in lost revenue for shareholders’ sake post-breach. Without government regulations outlining restrictions on passing cost of breach to consumer, at the least, not without the honest & measurable efforts of a corporation as their custodian, what accountability do we all have against that one executive who didn’t want to change his/her password?”
Breach costs also have an increasingly long tail, as nearly half now come over a year after the date of the attack. The largest of these are generally fines that are levied after an investigation, and decisions or settlements in class action lawsuits. While the popular new “double extortion” approach of ransomware attacks can drive long-term costs in this way, the study finds that companies paying ransom demands to settle the problem quickly aren’t necessarily seeing a large amount of overall savings: their average breach cost drops by just $610,000.
Sanjay Raja, VP of Product with Gurucul, expands on how knock-on data breach damage can continue for years: “The follow-up attack effect, as described, is a significant problem as the playbooks and solutions provided to security operations teams are overly broad and lack the necessary context and response actions for proper remediation. For example, shutting down a user or application or adding a firewall block rule or quarantining a network segment to negate an attack is not a sustainable remediation step to protect an organization on an ongoing basis. It starts with a proper threat detection, investigation and response solution. Current SIEMs and XDR solutions lack the variety of data, telemetry and combined analytics to not only identify an attack campaign and even detect variants on previously successful attacks, but also provide the necessary context, accuracy and validation of the attack to build both a precise and complete response that can be trusted. This is an even greater challenge when current solutions cannot handle complex hybrid multi-cloud architectures leading to significant blind spots and false positives at the very start of the security analyst journey.”
In spite of over four out of five organizations now having experienced some sort of data breach, only slightly over 20% of critical infrastructure companies have moved to zero trust strategies to secure their networks. Cloud security is lagging as well, with a little under half (43%) of all respondents saying that their security practices in this area are either “early stage” or do not yet exist.
Those that have onboarded security automation and AI elements are the only group seeing massive savings: their average cost of a data breach is $3.05 million lower. This particular study does not track average ransom demands, but refers to Sophos research that puts the most recent number at $812,000 globally.
The study also notes serious problems with incident response plans, especially troubling in an environment in which the average ransomware attack is now carried out in four days or less and the “time to ransom” has dropped to a matter of hours in some cases. 37% of respondents say that they do not test their incident response plans regularly. 62% say that they are understaffed to meet their cybersecurity needs, and these organizations tend to suffer over half a million more dollars in damages when they are breached.
Of course, cost of data breaches is not distributed evenly by geography or by industry type. Some are taking much bigger hits than others, reflecting trends established in prior reports. The health care industry is now absorbing a little over $10 million in damage per breach, with the average cost of data breach rising by $1 million from 2021. And companies in the United States face greater data breach costs than their counterparts around the world, at over $8 million per incident.
Shawn Surber, VP of Solutions Architecture and Strategy with Tanium, provides some insight into the unique struggles that the health care industry faces in implementing effective cybersecurity: “Healthcare continues to suffer the greatest cost of breaches but has among the lowest spend on cybersecurity of any industry, despite being deemed ‘critical infrastructure.’ The increased vulnerability of healthcare organizations to cyber threats can be traced to outdated IT systems, the lack of robust security controls, and insufficient IT staff, while valuable medical and health data— and the need to pay ransoms quickly to maintain access to that data— make healthcare targets popular and relatively easy to breach. Unlike other industries that can migrate data and sunset old systems, limited IT and security budgets at healthcare orgs make migration difficult and potentially expensive, particularly when an older system provides a small but unique function or houses data necessary for compliance or research, but still doesn’t make the cut to transition to a newer system. Hackers know these weaknesses and exploit them. Additionally, healthcare orgs haven’t sufficiently updated their security strategies and the tools that manufacturers, IT software vendors, and the FDA have made haven’t been robust enough to thwart the more sophisticated techniques of threat actors.”
Familiar incident types also lead the list of the causes of data breaches: compromised credentials (19%), followed by phishing (16%). Breaches initiated by these methods also tended to be a little more costly, at an average of $4.91 million per incident.
Though the numbers are never as neat and clean as averages would indicate, it would appear that the cost of data breaches is cut dramatically for companies that implement solid automated “deep learning” cybersecurity tools, zero trust systems and regularly tested incident response plans. Mature cloud security programs are also a substantial cost saver.