Update: IBM posted solid earnings after the bell on Monday, beating on both the top and bottom lines. EPS came in at $2.31 versus the consensus of $2.29, and revenue was $15.54 billion versus the consensus of $15.18 billion. So far so good, but the company warned that dollar strength would weigh on cash flows, forecasting a potential $3.5 billion hit. One does wonder about its hedging strategy, as the dollar was a fairly clear-cut one-way bet over the past quarter. IBM stock fell in after-hours trading as the dollar warning was digested and is currently down 5% premarket at $131.
International Business Machines (IBM) gets the tech sector earnings up and running when it reports its second-quarter earnings after the close on Monday. This is a make-or-break earnings season for the battered tech sector, and investors will be keeping a close eye on how margins in the sector hold up. Inflationary pressures are mounting, and margin compression is a normal feature at this stage of the economic cycle. Have investors gotten ahead of themselves, though? It may take another quarter before the full effects feed through to corporate bottom lines and balance sheets.
IBM is expected to post earnings per share of $2.29 on revenue of $15.2 billion. This marks a modest decline from Q1 of about 5% on EPS but a more noticeable near-20% drop in quarterly revenue. IBM has been a favorite of dividend investors for decades, and its current dividend yield sits at 4.3%. With interest rates rising, that yield is no longer as attractive as it once was, since Treasuries now yield 3% with the full backing of the US government. Dividend payouts will also be watched. IBM is already trading up in Monday's premarket as investors react favorably to the more benign risk environment and the Fed's likely 75-basis-point rate rise.
One other area of major concern will revolve around how IBM is managing the effects of the strong dollar. This is a problem for any company that earns significant amounts of its revenue from overseas markets, and IBM gets about 50% of its revenue from outside the US. Converting that revenue into an ever-stronger dollar means that, without currency hedging, IBM faces a major headwind from currency conversion. IBM already warned investors of this effect in its Q1 earnings, when revenue grew 11% at constant currency but only 8% after conversion to dollars. Since Q1 the dollar has surged higher still.
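The mechanics of that currency drag can be sketched with a quick back-of-the-envelope calculation. This is an illustrative model only, not IBM's disclosed methodology; the 50% overseas share comes from the article, while the FX move is a hypothetical input chosen to roughly reproduce the 11%-versus-8% gap reported for Q1.

```python
# Illustrative sketch (not IBM's actual hedging or reporting math):
# how dollar strength turns constant-currency growth into lower
# reported growth for a company with significant overseas revenue.

def reported_growth(cc_growth, overseas_share, fx_move):
    """Reported revenue growth after currency translation.

    cc_growth:      constant-currency revenue growth (0.11 = 11%)
    overseas_share: fraction of revenue earned outside the US
    fx_move:        change in foreign-currency value vs USD
                    (negative = dollar stronger)
    """
    domestic = (1 - overseas_share) * (1 + cc_growth)
    overseas = overseas_share * (1 + cc_growth) * (1 + fx_move)
    return domestic + overseas - 1

# With ~50% overseas revenue, a hypothetical ~6% dollar appreciation
# shaves roughly 3 points off an 11% constant-currency print,
# landing near the ~8% reported figure.
print(round(reported_growth(0.11, 0.50, -0.06), 3))  # → 0.077
```

The point of the sketch is that the drag scales with both the overseas share and the size of the FX move, which is why a further dollar surge since Q1 matters so much.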
The area that could provide the biggest uplift is cloud computing. This has been a growth area for Microsoft via its Azure offering and for Amazon through Amazon Web Services. IBM has made significant investments in its cloud business in recent years as it looks to transform away from its legacy business.
For the highly experienced trader, selling volatility into earnings releases can be quite a profitable strategy, as market makers always mark up volatility ahead of earnings. This makes buying options expensive and skews the risk-reward profile markedly downward during earnings season. However, this is only for highly experienced traders, as risk management is key when selling volatility.
The uptrend remains in place with $130 being strong support from the trend line and the 200-day moving average. Of interest is the reaction to the last two earnings releases. Both were better than expected, and in both cases IBM stock spiked up before retracing to test the trend line and holding support.
IBM daily chart
Information on these pages contains forward-looking statements that involve risks and uncertainties. Markets and instruments profiled on this page are for informational purposes only and should not in any way come across as a recommendation to buy or sell these assets. You should do your own thorough research before making any investment decisions. FXStreet does not in any way warrant that this information is free from mistakes, errors, or material misstatements. Nor does it warrant that this information is of a timely nature. Investing in Open Markets involves a great deal of risk, including the loss of all or a portion of your investment, as well as emotional distress. All risks, losses and costs associated with investing, including total loss of principal, are your responsibility. The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of FXStreet nor its advertisers. The author will not be held responsible for information that is found at the end of links posted on this page.
If not otherwise explicitly mentioned in the body of the article, at the time of writing, the author has no position in any stock mentioned in this article and no business relationship with any company mentioned. The author has not received compensation for writing this article, other than from FXStreet.
FXStreet and the author do not provide personalized recommendations. The author makes no representations as to the accuracy, completeness, or suitability of this information. FXStreet and the author will not be liable for any errors, omissions or any losses, injuries or damages arising from this information and its display or use. Errors and omissions excepted.
The author and FXStreet are not registered investment advisors and nothing in this article is intended to be investment advice.
Vish Gain visited Wimbledon to get a sneak peek of how IBM is using data and AI to help the tennis tournament engage with fans in its ‘pursuit of greatness’.
It’s not every day that you get to visit Wimbledon and walk around its hallowed courts during the tournament. An even rarer cohort of individuals gets to visit the underground bunkers where the behind-the-scenes action happens. I was lucky enough to do both last week.
Walking into the premises of the world’s oldest and most prestigious tennis tournament, I wasn’t sure what to expect. I’d watched Wimbledon matches growing up, but witnessing one live was a different ball game altogether, excuse the pun.
But my trip to Wimbledon wasn’t just about watching the action as it happened, but to dig deeper. And by digging deeper, I mean visiting the underground data rooms run by Wimbledon’s technology partner, IBM.
IBM has been a tech partner of Wimbledon since 1990. Since then, the two have been linked inextricably, trying to innovate new ways of engaging Wimbledon’s worldwide audience and using technology to live up to its motto: ‘In pursuit of greatness.’
Data analysis, automation and artificial intelligence are just some of the technologies developed by IBM and its partners that are being deployed to make watching Wimbledon, both in-person and from afar, a more meaningful experience.
“It all starts with the data,” Kevin Farrar, IBM UK sports partnerships lead, told me. “We’ve built this platform of innovation with the club to turn massive amounts of data into engaging and meaningful insights for the fans.”
Farrar works with a team of experts who, in collaboration with other technology partners, collect and process the immense amounts of data generated throughout the tournament.
“We’re collecting the match stats. There’s the direction of serve, how the ball is returned, backhand or forehand, the rally count, how the point is won, if it’s a forced or unforced error,” he whispered to me in a room full of experts wearing headphones watching the matches closely.
This information is collected from thousands of data points, which are then combined with data from other sources, such as Hawk-Eye’s electronic line-calling technology, to produce meaningful insights that are fed into the Wimbledon website and to global broadcasters.
The fruit of this behind-the-scenes work by IBM is best displayed on Wimbledon’s official website, where live updates on matches are combined with AI-powered match insights to make the sport exciting for those not within the premises.
This year, for example, has seen the introduction of the IBM Power Index, an AI-powered daily ranking of player momentum before and during Wimbledon. Using Watson, IBM’s powerful natural language processing system, the Power Index analyses player performance, media commentary and other factors to quantify momentum.
“A lot of people just watch tennis once a year – they watch Wimbledon. They’ll know the big names, but they won’t necessarily know the upcoming players. The Power Index gives a mechanism for them to sort of identify players that are hot at the moment,” Farrar said.
Users of the Wimbledon website or smartphone app can view the Power Index and click on any player they find interesting and want to keep an eye on. They can track the player’s progress and get personalised updates based on what or who they’re interested in.
“It’s an algorithm that takes both structured data and unstructured data,” Farrar explained. “The structured data is the scores and match results. But it’s also looking at the media buzz through trusted data sources, to see what the media is saying about the players.”
The Sherlock-like Watson (although named after early IBM CEO Thomas Watson) is also able to use vast amounts of data and expert input to predict which of the two players in any given match has a higher chance of winning. Fans on the app can weigh in too and see how far they stand from the AI estimate.
Farrar said the reason IBM is doing all this is to engage with fans interested in both technical details as well as the “drama and beauty of it all” through a visual experience. In the 2021 championships, Wimbledon reached approximately 18m people through its digital platforms.
“Sports fans love debate. So, putting something out there in terms of a prediction that Watson has come up with, they’ll have their own views and their own win factors in their mind. It’s about engaging the fans in that social debate and asking them, ‘Well, what do you think?’”
For Deborah Threadgold, IBM Ireland country manager, the relationship between Wimbledon and IBM is a great example of what the company’s strategy is all about.
“When you look at the data piece, when you look at the automation piece, and the security and how it is all sitting on that platform, and how that’s allowing them to innovate, then that’s exactly what IBM brings to all of our clients,” Threadgold told me.
“So even here in Ireland, whether you’re in the sporting industry, or much more broadly, whether you’re in financial services, public sector, whatever it may be, all of those tools and those mechanisms, you can actually reimagine how that works into your own industry.”
Of the four cornerstone annual tennis tournaments, Wimbledon is by far the most traditional with the richest history. It has been played since 1877 at the All England Lawn Tennis and Croquet Club in London.
“Our challenge here is to get that balance right between the tradition and heritage of the club, and the way they present themselves with technology and innovation,” Farrar said. “The brand is very important to them, and we make sure that that remains the case while still innovating every year.”
10 things you need to know direct to your inbox every weekday. Sign up for the Daily Brief, Silicon Republic’s digest of essential sci-tech news.
It is sometimes difficult to understand the true value of IBM's Power-based CPUs and associated server platforms, even for IT professionals who deploy and manage servers, and much has been written about it over the past few years. As an industry, we have become accustomed to using x86 as a baseline for comparison: if an x86 CPU has 64 cores, that becomes the yardstick we use to measure relative value in other CPUs.
But this is a flawed way of measuring CPUs and a broken system for measuring server platforms. An x86 core is different from an Arm core, which is different from a Power core. While Arm has achieved parity with x86 for some cloud-native workloads, the Power architecture is different. Multi-threading, encryption, AI enablement: many such functions are designed into Power without the performance penalty they carry on other architectures.
I write all this as a set-up for IBM's announced expanded support for its Power10 architecture. In the following paragraphs, I will provide the details of IBM's announcement and supply some thoughts on what this could mean for enterprise IT.
What was announced
Before discussing what was announced, it is a good idea to do a quick overview of Power10.
IBM introduced the Power10 CPU architecture at the Hot Chips conference in August 2020. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. Power10 is developed on the open-source Power ISA and comes in two variants: 15 SMT8 cores or 30 SMT4 cores. For those familiar with x86, SMT8 (eight threads per core) seems extreme, as does SMT4. But this is where the Power ISA is fundamentally different from x86: Power is a highly performant ISA, and the Power10 cores are designed for the most demanding workloads.
One last note on Power10: SMT8 is optimized for higher throughput at lower per-thread compute, while SMT4 targets the compute-intensive space at lower throughput.
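A quick bit of arithmetic, using only the core counts stated above, shows what the two variants trade off. This is a back-of-the-envelope sketch, not an IBM spec sheet:

```python
# Sketch using the variant figures from the announcement:
# both Power10 configurations expose the same total hardware
# thread count; they differ in how it is divided across cores.

variants = {
    "SMT8": {"cores": 15, "threads_per_core": 8},  # throughput-optimized
    "SMT4": {"cores": 30, "threads_per_core": 4},  # compute-optimized
}

for name, v in variants.items():
    total = v["cores"] * v["threads_per_core"]
    print(f"{name}: {v['cores']} cores x "
          f"{v['threads_per_core']} threads/core = {total} threads")
# Both land at 120 hardware threads per chip; the choice is between
# fewer, heavily threaded cores and more, lighter-threaded cores.
```

In other words, the variants do not change the chip's total thread budget, only whether it is concentrated in thick SMT8 cores or spread across twice as many SMT4 cores.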
IBM introduced the Power E1080 in September of 2021. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. The E1080 is a system designed for mission and business-critical workloads and has been strongly adopted by IBM's loyal Power customer base.
Because of this success, IBM has expanded the breadth of the Power10 portfolio and how customers consume these resources.
The big reveal in IBM’s latest announcement is the availability of four new servers built on the Power10 architecture. These servers are designed to address customers' full range of workload needs in the enterprise datacenter.
The Power S1014 is the traditional enterprise workhorse that runs the modern business. For x86 IT folks, think of the S1014 as the equivalent of the two-socket workhorses that run virtualized infrastructure. One of the things that IBM points out about the S1014 is that this server was designed with lower technical requirements. This statement leads me to believe that the company is perhaps softening the barrier to entry for the S1014 in data centers that are not traditional IBM shops, or maybe for environments that use Power for higher-end workloads but non-Power for traditional infrastructure needs.
The Power S1022 is IBM's scale-out server. Organizations embracing cloud-native, containerized environments will find the S1022 an ideal match. Again, for the x86 crowd – think of the traditional scale-out servers that are perhaps an AMD single socket or Intel dual-socket – the S1022 would be IBM's equivalent.
Finally, the S1024 targets the data analytics space. With lots of high-performing cores and a big memory footprint – this server plays in the area where IBM has done so well.
In addition to these platforms, IBM also introduced the Power E1050. The E1050 seems designed for big data and workloads with significant memory throughput requirements.
The E1050 is where I believe the difference in the Power architecture becomes obvious. The E1050 is where midrange starts to bump into high performance, and IBM claims 8-socket performance in this four-socket configuration. IBM says it can deliver performance for those running big data environments, larger data warehouses, and high-performance workloads. Maybe more importantly, the company claims considerable cost savings for workloads that generally require a significant financial investment.
One benchmark that IBM showed was the two-tier SAP Standard app benchmark. In this test, the E1050 beat an x86, 8-socket server handily, showing a 2.6x per-core performance advantage. We at Moor Insights & Strategy didn’t run the benchmark or certify it, but the company has been conservative in its disclosures, and I have no reason to dispute it.
But the performance and cost savings are not just associated with these higher-end workloads with narrow applicability. In another comparison, IBM showed the Power S1022 performs 3.6x better than its x86 equivalent for running a containerized environment in Red Hat OpenShift. When all was added up, the S1022 was shown to lower TCO by 53%.
What makes Power-based servers perform so well in SAP and OpenShift?
The value of Power is derived both from the CPU architecture and from the engineering IBM puts into its system and server design. The company is not afraid to design and deploy enhancements it believes will deliver better performance, higher security, and greater reliability for its customers. In the case of Power10, I believe a few design factors have contributed to the performance and price/performance advantages the company claims.
These seemingly minor differences can add up to deliver significant performance benefits for workloads running in the datacenter. But some of this comes down to a very powerful (pardon the redundancy) core design. While x86 dominates the datacenter in unit share, IBM has maintained a loyal customer base because the Power CPUs are workhorses, and Power servers are performant, secure, and reliable for mission critical applications.
Like other server vendors, IBM sees the writing on the wall and has opened up its offerings to be consumed in a way that is most beneficial to its customers. Traditional acquisition model? Check. Pay as you go with hardware in your datacenter? Also, check. Cloud-based offerings? One more check.
While there is nothing revolutionary about what IBM is doing with how customers consume its technology, it is important to note that IBM is the only server vendor that also runs a global cloud service (IBM Cloud). This should enable the company to pass on savings to its customers while providing greater security and manageability.
I like what IBM is doing to maintain and potentially grow its market presence. The new Power10 lineup is designed to meet customers' entire range of performance and cost requirements without sacrificing any of the differentiated design and development that the company puts into its mission critical platforms.
Will this announcement move x86 IT organizations to transition to IBM? Unlikely. Nor do I believe this is IBM's goal. However, I can see how businesses concerned with performance, security, and TCO of their mission and business-critical workloads can find a strong argument for Power. And this can be the beginning of a more substantial Power presence in the datacenter.
Note: This analysis contains insights from Moor Insights & Strategy Founder and Chief Analyst, Patrick Moorhead.
Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.
A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google was also quick to outline an aggressive implementation plan for its cloud service, building on an effort it started a decade ago.
It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in two. Google contributed to one of the submitted algorithms, SPHINCS+.
A long process that started in 2016 with 69 original candidates ends with the selection of four algorithms that will become NIST standards, which will play a critical role in protecting encrypted data from the vast power of quantum computers.
NIST's four choices include CRYSTALS-Kyber, a key-encapsulation mechanism (KEM) for general encryption, such as securing connections to websites. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.
Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Lyubashevsky was predictably pleased by the algorithms selected, but he had only anticipated NIST would pick two digital signature candidates rather than three.
Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."
After NIST identified the algorithms, IBM moved forward by incorporating them into its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST-selected algorithms.
IBM had championed three of the algorithms NIST selected and had already built them into the z16, which it unveiled before the NIST decision. Last week the company made it official that the z16 supports the standardized algorithms.
Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTALS-Kyber and CRYSTALS-Dilithium, according to Dames.
"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."
A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classic computing systems and quantum computers.
"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."
Dames noted that clients might use Dilithium to generate digital signatures on documents. "Think about code signing servers, things like that, or document signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.
During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.
During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS has implemented an open-source hybrid post-quantum key exchange in s2n-tls, its implementation of the Transport Layer Security (TLS) protocol used across different AWS services, and has contributed the hybrid design as a draft standard to the Internet Engineering Task Force (IETF).
Salter explained that the hybrid key exchange brings together its traditional key exchanges while enabling post-quantum security. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a public key exchange on top of it. And using both of those, you have traditional security, plus post quantum security."
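The idea Salter describes can be sketched in a few lines. This is a conceptual illustration only, not the s2n-tls design or the actual TLS key schedule; the secrets are stand-in byte strings, and plain SHA-256 stands in for a real key-derivation function. The point is simply that the session key depends on both shared secrets, so an attacker must break both the classical and the post-quantum exchange:

```python
# Conceptual sketch of hybrid key exchange (NOT the real s2n-tls or
# TLS 1.3 key schedule): combine a classical shared secret with a
# post-quantum one so that breaking either alone is not enough.
import hashlib

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Concatenate both shared secrets and feed them through a KDF;
    # here plain SHA-256 is a stand-in for a proper key schedule.
    return hashlib.sha256(classical_secret + pq_secret).digest()

# Hypothetical stand-ins for secrets negotiated during the handshake:
ecdhe_secret = b"\x01" * 32   # classical (e.g. ECDHE) shared secret
kyber_secret = b"\x02" * 32   # post-quantum (e.g. Kyber) shared secret

key = hybrid_session_key(ecdhe_secret, kyber_secret)
print(len(key))  # 32-byte session key
```

An attacker with a quantum computer who recovers the ECDHE secret still cannot derive the session key without also breaking Kyber, which is the "traditional security, plus post-quantum security" property Salter describes.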
Last week, Amazon announced that it had deployed the hybrid post-quantum TLS with CRYSTALS-Kyber in s2n-tls for connections to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon documented added support for AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.
While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.
"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."
Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.
Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.
Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."
The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.
"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like that," IBM's Dames said.
Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our nice progress on this process, as well as participating in that. And we see that as a very good sign."
Under the agreement, Avnet Inc., through Avnet Cilicon, the Americas-based semiconductor distribution specialist division of Avnet's largest operating group, Avnet Electronics Marketing, will provide engineering design services to customers to help accelerate the adoption of IBM ASIC products and to help reduce customers' time to market. The agreement covers ASIC products and technologies from IBM at the 0.18-micron and 0.25-micron technology nodes.
In addition, Avnet Inc., through Avnet Cilicon, will provide sales and marketing support for IBM ASIC products to its large distribution customer base, along with providing these customers access to Avnet's materials management capabilities for their particular supply chain requirements.
This announcement marks the first time that IBM has opened up its ASIC design methodologies for execution by a channel business partner. As a result, a broader array of customers will now be able to gain access to IBM industry-leading ASIC technology through Avnet Design Centers.
"The expansion of our existing, successful distribution agreement to now include IBM ASIC products and technology is a big win for Avnet and most importantly for the distribution customer base," said Jeff Ittel, president of Avnet Cilicon. "IBM has the world's leading ASIC products and methodologies, which are proven to enable designs that are right the first time and to help reduce time-to-market for its customers' products. These capabilities will now be widely available to the Avnet distribution customer base."
"Avnet is the leading distributor in this segment and brings over 20 years of experience in the ASIC business and over 1000 completed designs by our ASIC design center engineering team, whose services include architectural design, IP integration, verification, test, timing closure and physical layout," Ittel noted.
"This agreement represents a new business model for IBM and a significant opportunity for our ASIC business," said Tom Reeves, vice president, ASIC product group, IBM Systems and Technology Group. "Avnet offers an established customer base and technical design support via four dedicated Design Centers in North America that can help our ASIC business expand into new opportunities."
About IBM Microelectronics
IBM is a recognized innovator in the semiconductor industry, having been first with advances like more power-efficient copper wiring in place of aluminum and faster SOI and silicon germanium transistors. These and other innovations have contributed to IBM's standing as the number one U.S. patent holder for 11 consecutive years. More information about IBM semiconductors can be found at: http://www.ibm.com/chips.
About Avnet Cilicon
Avnet Cilicon is the semiconductor distribution specialist division of Avnet Electronics Marketing in the Americas, an operating group of Avnet, Inc. (NYSE:AVT). Avnet Cilicon combines semiconductor expertise, technical excellence and deep market knowledge to enhance time to revenue for all supply-chain partners in the electronics arena. Avnet Cilicon's core competencies include materials management, technical support through Avnet Design Services, logistics support through Avnet Supply Chain Services, and customer-centric, dedicated sales channels. Avnet Cilicon, combined with Avnet IP&E, Avnet's interconnect, passive and electromechanical component and services division, delivers Support Across the Board. For more information, visit http://www.em.avnet.com/semi.
Today’s instrument is International Business Machines stock, traded on the NYSE under the ticker IBM.
Looking at IBM’s chart, we can see that it is trading close to its 52-week high at the current price of $140.56.
Today, if it manages to break through its resistance level at around $142, we could expect it to re-test its 52-week high at around $146; otherwise, it should fall toward its next support level at about $136.
Now is the time to cast your vote for the DesignCon 2020 Engineer of the Year. This award is given out each year during the DesignCon event and seeks to recognize the best of the best in engineering and new product advancements at the chip, board, or system level, with a special emphasis on signal integrity and power integrity.
Editors of Design News and the staff of DesignCon would like to offer hearty congratulations to the finalists. For this year’s award, the winner (or his/her representative) will be able to direct a $1,000 donation to any secondary educational institution in the United States. The details on each nominee are below as provided in their published biographies and by the person/s who made the nomination. Please cast your vote by following this link.
Voting closes at noon Pacific Time on Friday, December 27. The winner will be announced at DesignCon 2020, January 28-30, at the Santa Clara Convention Center, Santa Clara, CA.
The six finalists for the 2020 DesignCon Engineer of the Year Award are (click each name to see finalist’s bio and community activity):
Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.
See the Official Rules of the Engineer of the Year Award
Please click here to learn more about DesignCon and register to attend
Consultant, SIRF Consultants LLC
Joseph C. (Jay) Diepenbrock holds an Sc.B. (EE) from Brown University and an MSEE from Syracuse University. He worked in a number of development areas at IBM, including IC, analog and RF circuit, and backplane design. He then moved to IBM's Integrated Supply Chain, working on the electrical specification, testing, and modeling of connectors and cables, and was IBM's subject matter expert on high-speed cables. After a long career at IBM, he joined Lorom America as Senior Vice President, High Speed Engineering, and led the Lorom Signal Integrity team, supporting its high-speed product development. He left Lorom in 2015 and is now a signal integrity consultant with SIRF Consultants, LLC.
Holding 12 patents and 30+ publications, and recognized as an expert in SI, Jay is currently the technical editor of the IEEE P370 standard and has worked on numerous other industry standards. He is a Senior Member of the IEEE and was an EMC Society Distinguished Lecturer. Jay has a steadfast commitment to solid engineering and to communicating and teaching about it. He regularly contributes to industry discourse and education at events and in trade publications, and received a Best Paper award at EDICon 2018 for his paper on the IEEE P370 specification. He has had a distinguished career in high-speed product development, including circuit and backplane design, high-speed connectors and cables, and signal integrity consulting. Beyond that, Jay actively volunteers his time for disaster and humanitarian relief, including serving as a driver and member of the IEEE MOVE truck team, which provides emergency communications during and after a disaster. He truly uses his engineering skills to make the world a better place.
Jay is a long-time, active member of the DesignCon Technical Program Committee.
This year at DesignCon, Jay will be presenting the tutorial “Introduction to the IEEE P370 Standard & Its Applications for High Speed Interconnect Characterization” and moderating the panel “Untangling Standards: The Challenges Inside the Box.”
Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.
Senior Key Expert, EBS Product Development, Mentor, A Siemens Business
Dr. Vladimir Dmitriev-Zdorov has developed a number of advanced models and novel simulation methods used in Mentor products. His current work includes development of efficient methods of circuit/system simulation in the time and frequency domains, transformation and analysis of multi-port systems, and statistical and time-domain analysis of SERDES links. He received Ph.D. and D.Sc. degrees (1986, 1998) based on his work on circuit and system simulation methods. The results have been published in numerous papers and conference proceedings, including DesignCon. Several DesignCon papers such as “BER-and COM-Way of Channel-Compliance Evaluation: What are the Sources of Differences?” and “A Causal Conductor Roughness Model and its Effect on Transmission Line Characteristics” have received the Best Paper Award. Dr. Vladimir Dmitriev-Zdorov holds 9 patents.
Vladimir is an active member of the DesignCon Technical Program Committee.
This year at DesignCon, Vladimir will be presenting the technical session, “How to Enforce Causality of Standard & "Custom" Metal Roughness Models” and on the panel “Stump the SI/PI Experts.”
Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.
Fellow, Micron Technology
Tim Hollis is a distinguished member of the Micron Technology technical staff and an advanced signaling R&D lead. His main focus is identifying and directing forward-looking projects for the SI R&D team to pursue and driving a cross-functional working group intended to provide forward-looking technical guidance to upper management.
Tim has shown outstanding technical leadership in solving numerous challenges with regard to high-speed DDR memory interfaces, for both computing and graphics applications. He has contributed papers to DesignCon and received a Best Paper Award in 2018 as lead author for “16Gb/s and Beyond with Single-Ended I/O in High-Performance Graphics Memory.” His 85+ patents reflect his innovative mind and his prodigious contributions to technology.
Tim received a BS in Electrical Engineering from the University of Utah and a Ph.D. in Electrical Engineering from Brigham Young University.
Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.
Principal SI and PI Engineer, Samtec
Istvan Novak is a Principal Signal and Power Integrity Engineer at Samtec, working on advanced signal and power integrity designs. Prior to 2018 he was a Distinguished Engineer at Sun Microsystems, later Oracle. He worked on new technology development, advanced power distribution, and signal integrity design and validation methodologies for Sun's successful workgroup server families. He introduced the industry's first 25um power-ground laminates for large rigid computer boards, and worked with component vendors to create a series of low-inductance and controlled-ESR bypass capacitors. He also served as Sun's representative on the Copper Cable and Connector Workgroup of InfiniBand, and was engaged in the methodologies, designs, and characterization of power-distribution networks from silicon to DC-DC converters. He is a Life Fellow of the IEEE with twenty-five patents to his name, author of two books on power integrity, teaches signal and power integrity courses, and maintains a popular SI/PI website.
Istvan has in many cases single-handedly helped the test and measurement industry develop completely new instruments and methods of measurement. New VNA types, scope probes, and methodologies are on the market today thanks to Istvan's efforts and openness to help others. He was responsible for the power distribution and high-speed signal integrity designs of Sun’s V880, V480, V890, V490, V440, T1000, T2000, T5120 and T5220 midrange server families. Last, but not least, Istvan has been a tremendous contributor to the SI List, educating and helping engineers across the world with their SI/PI problems. Istvan is an active member of the DesignCon Technical Program Committee, sharing his expertise by participating in the review of content for multiple tracks. He is an IEEE Fellow and has been a tutor at the University of Oxford, Oxford, UK for the past 10 years. He has also been a faculty member at CEI Europe AB since 1991 and served as Vice Dean of Faculty, Associate Professor at the Technical University of Budapest.
At DesignCon 2020, Istvan will be participating in the technical session, “Current Distribution, Resistance & Inductance in Power Connectors,” and the panel, “Stump the SI/PI Experts.”
Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.
Business Development Manager, Rohde & Schwarz
Michael Schnecker’s experience in the test and measurement industry includes applications, sales and product development and specialization in signal integrity applications using oscilloscopes and other instruments. Prior to joining Rohde & Schwarz, Mike held positions at LeCroy and Tektronix. While at LeCroy, he was responsible for the deployment of the SDA series of serial data analyzers.
Mike has more than two decades of experience working with oscilloscope measurements. His background in time and frequency domains provides him with unique insight into the challenges engineers face when testing high-speed systems for both power and signal integrity. Interacting with engineers in the industry daily has allowed Mike to master the ability to explain complex measurement science to engineers at any level. He also holds several patents, including methods and apparatus for analyzing serial data streams as well as coherent interleaved sampling. Thus, Mike is recognized as a thought leader and exceptional mentor in the signal and power integrity community.
Mike has a BS from Lehigh University and an MS from Georgia Tech, both in electrical engineering.
This year at DesignCon, Mike will be presenting the tutorial “Signal Integrity: Measurements & Instrumentation” and speaking in the technical session “Real-Time Jitter Analysis Using Hardware Based Clock Recovery & Serial Pattern Trigger.”
Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.
President and Founder, Simberian
Yuriy Shlepnev is President and Founder of Simberian Inc., where he develops the Simbeor electromagnetic signal integrity software. He received an M.S. degree in radio engineering from Novosibirsk State Technical University in 1983, and a Ph.D. degree in computational electromagnetics from Siberian State University of Telecommunications and Informatics. He was the principal developer of an electromagnetic simulator for Eagleware Corporation and a leading developer of electromagnetic software for simulation of signal and power distribution networks at Mentor Graphics. The results of his research are published in multiple papers and conference proceedings.
Yuriy conceived and brought to market a state-of-the-art electromagnetic field solver tool suite, is considered an expert in his field, and regularly posts teaching videos. He is a senior member of the IEEE AP, MTT, EMC, and CPMT societies. He is also a Fellow of Kong’s Electromagnetics Academy and a member of the Applied Computational Electromagnetics Society (ACES).
Yuriy is active in the Technical Program Committee for DesignCon and has served as a track co-chair in the past. At DesignCon this year he will be presenting the tutorial “Design Insights from Electromagnetic Analysis & Measurements of PCB & Packaging Interconnects Operating at 6- to 112-Gbps & Beyond” and speaking in the technical session “Machine Learning Applications for COM Based Simulation of 112Gb Systems.”
Cast your vote for the 2020 Engineer of the Year by noon PT, December 27.
Learn more about DesignCon and register to attend
There are some things that leave indelible impressions in your memory. One of those, for me, was a technical presentation I attended in 1980 by calling in a lot of favors: a presentation by HP at what is now the Stennis Space Center. I was a student, and it took a few phone calls to wrangle an invite, but I wound up in a state-of-the-art conference room with a bunch of NASA engineers watching HP tell us about all their latest and greatest. Not that I could afford any of it, mind you. What really caught my imagination that day was the HP9845C, a color graphics computer with a roughly $40,000 price tag. That was twice the average US salary for 1980. Now, of course, you have a much better computer; in fact, you probably have several much better computers, including your phone. But if you want to relive those days, you can actually recreate the HP9845C’s 1980-vintage graphics glory using, of all things, a game emulator.
Keep in mind that the IBM PC was nearly two years away at this point and, even then, wouldn’t hold a candle to the HP9845C. Like many machines of its era, it ran BASIC natively; in fact, it used special microcode to run BASIC programs relatively quickly on its 16-bit 5.7 MHz CPU. The 560 x 455 pixel graphics system had its own CPU, and you could max the machine out with a decadent 1.5 MB of RAM. (But not, alas, for $40,000, which got you, I think, 128K or so.)
The widespread use of the computer mouse was still in the future, so the HP had that wonderful light pen instead. Mass storage was also no problem: there was a 217 kB tape drive, and while earlier models offered a second drive and a thermal printer as options, these were included in the color “C” model. Like HP calculators, you could slot in different ROMs for different purposes. There were other options, such as a digitizer and even floppy discs.
The machines had a brief life, being superseded quickly by better computers. However, the 9845C managed to play a key role in the making of the 1983 movie WarGames, and its predecessor, the HP9845B, appeared on screen in Raise the Titanic.
According to the HP Museum, the 9845C wasn’t terribly reliable. The tape drives are generally victims of age after 40+ years, but the power supplies and memory also have their share of issues. Luckily, we are going to simulate our HP9845C, so we won’t have to deal with any of those problems.
One other cool feature of just about every HP computer from that era was the soft key system. These were typically built into the monitor or, sometimes, the keyboard and lined up with labels on the screen. So instead of remembering that F2 is the search command (or whatever), there would be a little label on the screen over the button that said “Search.” Great stuff!
When you think about simulating an old computer, you probably think of SimH. However, the HP machines were very graphical in nature, so the author of the HP9845C emulator made a different choice: MAME. You normally think of MAME as a video game emulator. However, if you want color graphics, ROM slots, and a light pen, MAME is a pretty good choice.
As you can see, you get a view of the 9845C monitor replete with soft keys and, if you enable it, even a light pen. You can load different images as ROMs and tapes. The only tricky part is the keyboard: the HP has a custom keyboard that works a bit differently than a PC keyboard.
In particular, the HP computers were typically screen-oriented. So the Enter key was usually distinct from the key that told the computer you were ready for it to process. This leads to some interesting keyboard mappings.
In fact, the page with the most information about the emulator is a little hard to wade through, so this might help. First, scroll down to the bottom and grab the prebuilt emulators for Linux or Windows. You can build from MAME source or use a stock version (assuming your stock version has all the right options), but it is easier to just grab the prebuilt binaries, and they can coexist with other versions of MAME. Even if you want to go a different route eventually, you should probably still start there.
The emulator is called 45c and, on Linux, I had to make it executable myself (chmod +x). Here is a typical command line:
./45c -magt1 tapes/demo1.hti -magt2 tapes/demo2.hti -ramsize 192k '-rom1 advprog' '-rom5 colorgfx' '-rom3 massd' '-rom4 strucprog' &
All of those tape and ROM files are in the distribution archive. You probably don’t need any of the ROMs, but I loaded them anyway. Add -window if you prefer not to run full screen. If you do that, you may also want to add the -nomax option to improve the appearance.
If you want to try the light pen, use the -lightgun -lightgun_device_mouse options to turn your mouse into a light pen. Note that this will grab your mouse, and you may need to use Alt+Tab or some other method to switch away from the emulator.
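Those launch switches are easy to fat-finger, so if you start the emulator often, a small launcher script can help. Here is a minimal Python sketch (a hypothetical helper, not part of the emulator distribution; the binary name and file paths are assumptions) that assembles the same sort of command line shown above:

```python
"""Build a 45c (MAME-based HP9845C emulator) command line.

Hypothetical helper: the ./45c binary and tape/ROM paths are assumptions.
"""
import shlex

def build_45c_cmd(tapes=(), roms=None, ramsize="192k",
                  windowed=False, lightpen=False, binary="./45c"):
    """Assemble the launch command as a list of arguments.

    `tapes` fills -magt1/-magt2; `roms` maps a slot name (e.g. "rom1")
    to an image name, mirroring options like '-rom1 advprog'.
    """
    cmd = [binary]
    for slot, tape in zip(("-magt1", "-magt2"), tapes):
        cmd += [slot, tape]
    cmd += ["-ramsize", ramsize]
    for slot, image in (roms or {}).items():
        cmd.append(f"-{slot} {image}")        # quoted pair, as in '-rom1 advprog'
    if windowed:
        cmd += ["-window", "-nomax"]          # run in a window, not maximized
    if lightpen:
        cmd += ["-lightgun", "-lightgun_device_mouse"]
    return cmd

cmd = build_45c_cmd(tapes=("tapes/demo1.hti", "tapes/demo2.hti"),
                    roms={"rom1": "advprog", "rom5": "colorgfx"},
                    windowed=True, lightpen=True)
print(" ".join(shlex.quote(c) for c in cmd))
```

You can paste the printed string into a shell, or hand the list straight to subprocess.run.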
The keyboard mappings are listed on the web page but here are a few that are handy to know:
So faced with the prompt, you can enter something like:
Then press the numeric enter key to see the result. So this being a BASIC computer, you can enter:
10 PRINT "HOWDY!"
Right? Well, yes, but then you need to press store (Right Shift+Enter).
If you have the tapes loaded as above (you can view the tape catalog with the CAT command), try this:
load "autost"
run
Remember to use the numeric pad enter key after each line, not the normal enter key!
The king of the demos is the Space Shuttle graphic which was cutting edge in 1980. You could change various display and plot options using the soft keys.
Of course, the Space Shuttle is only fun for so long. There are many other demos on the same tape, but eventually you’ll want to play with something more interesting. The HP Museum has a good bit of software you can probably figure out how to load. You can’t get the software, but if you want to see what the state of gaming was on a $70,000 HP9845B in those days, [Terry Burlison] has some recollections and screen shots. You’ll also find tons of documents and other information on the main HP9845 site.
It would be really interesting if the emulator could drive an HP-IB card in the PC or a Pi to drive all your old boat anchor test equipment. That might even let you connect a hard drive. Maybe.
International Business Machines (IBM) is a Zacks Rank #5 (Strong Sell) stock. The company provides advanced information technology solutions, computer systems, quantum computing and supercomputing solutions, enterprise software, storage systems, and microelectronics.
“Big Blue” has struggled over the last decade, so it has tried to adjust and pivot to the cloud. The acquisition of Red Hat helped this effort, but the latest earnings report has disappointed investors.
The stock is now trending lower and looks like it might challenge 2022 lows.
About the Company
IBM is headquartered in Armonk, New York. The company was incorporated in 1911 and employs over 280,000 people.
The company operates through four business segments: Software, Consulting, Infrastructure, and Financing.
IBM is valued at $114 billion and has a forward PE of 13. The stock holds a Zacks Style Score of “C” in Value, “B” in Growth, and “B” in Momentum, and has a dividend yield of 5%.
The company reported earnings last week, with Q2 EPS at $2.31 versus the $2.29 expected. Revenues came in at $15.5B versus the $15.1B expected. IBM affirmed FY22 revenue growth at the high end of its mid-single-digit model, but narrowed its FY22 free cash flow outlook to $10B from $10-10.5B.
Margins were down year over year, from 55.2% to 53.4%, while software, consulting, and infrastructure revenues were all higher year over year.
Here are some comments from CEO Arvind Krishna:
"In the quarter we delivered good revenue performance with balanced growth across our geographies, driven by client demand for our hybrid cloud and AI offerings. The IBM team executed our strategy well.”
Analysts are already starting to cut estimates as a result of the earnings report.
After stabilizing over the last few months, estimates have fallen off a cliff over the last 7 days. For the current quarter, estimates have fallen from $2.57 to $2.07, or about 20%.
Things look to improve next quarter, but we see estimates tracking lower again for next year. Over the last 60 days, numbers have been lowered from $10.81 to $10.26, or 5%.
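The percentage declines quoted are simple arithmetic on the estimate pairs cited in the text; a quick Python check (note the current-quarter drop is actually about 19.5%, which the article rounds to roughly 20%):

```python
# Percentage decline in consensus EPS estimates, per the figures above.
def pct_drop(old, new):
    """Decline from old to new as a whole-number percentage."""
    return round((old - new) / old * 100)

print(pct_drop(2.57, 2.07))    # current-quarter estimate, ~20% in the text
print(pct_drop(10.81, 10.26))  # next-year estimate, 5% in the text
```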
The stock was holding up well before earnings, as it was seeing support at the 50-day moving average. But IBM is now trading under all its moving averages after the earnings report, slicing right through the 200-day at $130.50.
The lows of the year sit just under $120. These could be taken out if the momentum continues, and the bears could then target the 2021 lows around $113.
Looking at Fibonacci levels, a 61.8% retracement drawn from May lows to June highs was holding at $133. However, this support was broken and bears should target the 161.8% extension at $113. This lines up with that 2021 low support.
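For readers unfamiliar with Fibonacci levels, they are fixed ratios of a swing measured down from the high. A small Python sketch using assumed swing points (roughly a $125.4 May low and a $145.4 June high; illustrative values chosen to reproduce the $133 and $113 levels cited, not figures from the text):

```python
# Fibonacci retracement/extension levels measured down from the swing high.
def fib_level(high, low, ratio):
    """Price level at the given ratio of the high-to-low swing."""
    return high - ratio * (high - low)

high, low = 145.4, 125.4                       # assumed June high / May low
print(round(fib_level(high, low, 0.618), 2))   # 61.8% retracement, ~$133
print(round(fib_level(high, low, 1.618), 2))   # 161.8% extension, ~$113
```

A ratio above 100% simply projects the swing past its starting low, which is why the 161.8% extension sits below the May low.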
While Big Blue had some positive aspects to the quarter, investors were disappointed overall. The stock fell over 8% after earnings and looks like it could take out its 2022 lows on any market weakness.
The stock pays a nice dividend, but with the cash flow outlook taken down, investors might start to lose faith in that payout.
For now, a better option in the sector might be Agilysys (AGYS). The stock is a Zacks Rank #2 (Buy) and has held up relatively well over the last six months.
Want the latest recommendations from Zacks Investment Research? Today, you can get 7 Best Stocks for the Next 30 Days. Click to get this free report
International Business Machines Corporation (IBM) : Free Stock Analysis Report
Agilysys, Inc. (AGYS) : Free Stock Analysis Report
To read this article on Zacks.com click here.
60% of breached businesses raised product prices post-breach; vast majority of critical infrastructure lagging in zero trust adoption; $550,000 in extra costs for insufficiently staffed businesses
CAMBRIDGE, Mass., July 27, 2022 /CNW/ -- IBM (NYSE: IBM) Security today released the annual Cost of a Data Breach Report,1 revealing costlier and higher-impact data breaches than ever before, with the global average cost of a data breach reaching an all-time high of $4.35 million for studied organizations. With breach costs increasing nearly 13% over the last two years of the report, the findings suggest these incidents may also be contributing to rising costs of goods and services. In fact, 60% of studied organizations raised their product or services prices due to the breach, when the cost of goods is already soaring worldwide amid inflation and supply chain issues.
The perpetuality of cyberattacks is also shedding light on the "haunting effect" data breaches are having on businesses, with the IBM report finding 83% of studied organizations have experienced more than one data breach in their lifetime. Another factor rising over time is the after-effects of breaches on these organizations, which linger long after they occur, as nearly 50% of breach costs are incurred more than a year after the breach.
The 2022 Cost of a Data Breach Report is based on in-depth analysis of real-world data breaches experienced by 550 organizations globally between March 2021 and March 2022. The research, which was sponsored and analyzed by IBM Security, was conducted by the Ponemon Institute.
Some of the key findings in the 2022 IBM report include:
Critical Infrastructure Lags in Zero Trust – Almost 80% of critical infrastructure organizations studied don't adopt zero trust strategies, seeing average breach costs rise to $5.4 million – a $1.17 million increase compared to those that do. All while 28% of breaches amongst these organizations were ransomware or destructive attacks.
It Doesn't Pay to Pay – Ransomware victims in the study that opted to pay threat actors' ransom demands saw only $610,000 less in average breach costs compared to those that chose not to pay – not including the cost of the ransom. Factoring in the high cost of ransom payments, the financial toll may rise even higher, suggesting that simply paying the ransom may not be an effective strategy.
Security Immaturity in Clouds – Forty-three percent of studied organizations are in the early stages or have not started applying security practices across their cloud environments, observing over $660,000 on average in higher breach costs than studied organizations with mature security across their cloud environments.
Security AI and Automation Leads as Multi-Million Dollar Cost Saver – Participating organizations fully deploying security AI and automation incurred $3.05 million less on average in breach costs compared to studied organizations that have not deployed the technology – the biggest cost saver observed in the study.
"Businesses need to put their security defenses on the offense and beat attackers to the punch. It's time to stop the adversary from achieving their objectives and start to minimize the impact of attacks. The more businesses try to perfect their perimeter instead of investing in detection and response, the more breaches can fuel cost of living increases." said Charles Henderson, Global Head of IBM Security X-Force. "This report shows that the right strategies coupled with the right technologies can help make all the difference when businesses are attacked."
Over-trusting Critical Infrastructure Organizations
Concerns over critical infrastructure targeting appear to be increasing globally over the past year, with many governments' cybersecurity agencies urging vigilance against disruptive attacks. In fact, IBM's report reveals that ransomware and destructive attacks represented 28% of breaches amongst critical infrastructure organizations studied, highlighting how threat actors are seeking to fracture the global supply chains that rely on these organizations. This includes financial services, industrial, transportation and healthcare companies amongst others.
Despite the call for caution, and a year after the Biden Administration issued a cybersecurity executive order that centers around the importance of adopting a zero trust approach to strengthen the nation's cybersecurity, only 21% of critical infrastructure organizations studied adopt a zero trust security model, according to the report. Add to that, 17% of breaches at critical infrastructure organizations were caused due to a business partner being initially compromised, highlighting the security risks that over-trusting environments pose.
Businesses that Pay the Ransom Aren't Getting a "Bargain"
According to the 2022 IBM report, businesses that paid threat actors' ransom demands saw $610,000 less in average breach costs compared to those that chose not to pay – not including the ransom amount paid. However, when accounting for the average ransom payment, which according to Sophos reached $812,000 in 2021, businesses that opt to pay the ransom could net higher total costs - all while inadvertently funding future ransomware attacks with capital that could be allocated to remediation and recovery efforts and looking at potential federal offenses.
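The arithmetic behind that conclusion is worth making explicit: the $610,000 saving is smaller than the average ransom itself, so on these averages, paying leaves a victim worse off on net. A back-of-the-envelope check using the two figures cited above:

```python
# Net effect of paying the ransom, using the figures cited in the text.
savings_from_paying = 610_000    # lower average breach cost when paying
avg_ransom_payment  = 812_000    # average 2021 ransom payment (per Sophos)

net_extra_cost = avg_ransom_payment - savings_from_paying
print(net_extra_cost)  # average extra cost, in dollars, of choosing to pay
```

On these averages, paying nets out to roughly $202,000 in additional cost, before counting any regulatory exposure.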
The persistence of ransomware, despite significant global efforts to impede it, is fueled by the industrialization of cybercrime. IBM Security X-Force discovered the duration of studied enterprise ransomware attacks shows a drop of 94% over the past three years – from over two months to just under four days. These exponentially shorter attack lifecycles can prompt higher impact attacks, as cybersecurity incident responders are left with very short windows of opportunity to detect and contain attacks. With "time to ransom" dropping to a matter of hours, it's essential that businesses prioritize rigorous testing of incident response (IR) playbooks ahead of time. But the report states that as many as 37% of organizations studied that have incident response plans don't test them regularly.
Hybrid Cloud Advantage
The report also showcased hybrid cloud environments as the most prevalent (45%) infrastructure amongst organizations studied. Averaging $3.8 million in breach costs, businesses that adopted a hybrid cloud model observed lower breach costs compared to businesses with a solely public or private cloud model, which experienced $5.02 million and $4.24 million on average respectively. In fact, hybrid cloud adopters studied were able to identify and contain data breaches 15 days faster on average than the global average of 277 days for participants.
The report highlights that 45% of studied breaches occurred in the cloud, emphasizing the importance of cloud security. However, a significant 43% of reporting organizations stated they are just in the early stages or have not started implementing security practices to protect their cloud environments, observing higher breach costs2. Businesses studied that did not implement security practices across their cloud environments required an average 108 more days to identify and contain a data breach than those consistently applying security practices across all their domains.
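The report's footnote figures behind that claim ($4.53 million average breach cost for organizations with early-stage cloud security versus $3.87 million with mature practices) reproduce the "over $660,000" gap mentioned earlier:

```python
# Breach-cost gap between early-stage and mature cloud security programs,
# using the report's footnoted averages.
early_stage_cost = 4_530_000   # average cost, early-stage cloud security
mature_cost      = 3_870_000   # average cost, mature cloud security

gap = early_stage_cost - mature_cost
print(gap)  # dollar gap between the two groups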
Additional findings in the 2022 IBM report include:
Phishing Becomes Costliest Breach Cause – While compromised credentials continued to reign as the most common cause of a breach (19%), phishing was the second (16%) and the costliest cause, leading to $4.91 million in average breach costs for responding organizations.
Healthcare Breach Costs Hit Double Digits for First Time Ever – For the 12th year in a row, healthcare participants saw the costliest breaches amongst industries, with average breach costs in healthcare increasing by nearly $1 million to reach a record high of $10.1 million.
Insufficient Security Staffing – Sixty-two percent of studied organizations stated they are not sufficiently staffed to meet their security needs, averaging $550,000 more in breach costs than those that state they are sufficiently staffed.
To get a copy of the 2022 Cost of a Data Breach Report, please visit: https://www.ibm.com/security/data-breach.
Read more about the report's top findings in this IBM Security Intelligence blog.
Sign up for the 2022 IBM Security Cost of a Data Breach webinar on Wednesday, August 3, 2022, at 11:00 a.m. ET here.
Connect with the IBM Security X-Force team for a personalized review of the findings: https://ibm.biz/book-a-consult.
About IBM Security
IBM Security offers one of the most advanced and integrated portfolios of enterprise security products and services. The portfolio, supported by world-renowned IBM Security X-Force® research, enables organizations to effectively manage risk and defend against emerging threats. IBM operates one of the world's broadest security research, development, and delivery organizations, monitors 150 billion+ security events per day in more than 130 countries, and has been granted more than 10,000 security patents worldwide. For more information, please check www.ibm.com/security, follow @IBMSecurity on Twitter or visit the IBM Security Intelligence blog.
IBM Security Communications
1 Cost of a Data Breach Report 2022, conducted by Ponemon Institute, sponsored, and analyzed by IBM
2 Average cost of $4.53M, compared to average cost $3.87 million at participating organizations with mature-stage cloud security practices
View original content to get multimedia:https://www.prnewswire.com/news-releases/ibm-report-consumers-pay-the-price-as-data-breach-costs-reach-all-time-high-301592749.html
View original content to get multimedia: http://www.newswire.ca/en/releases/archive/July2022/27/c2517.html