It is sometimes difficult to understand the true value of IBM's Power-based CPUs and associated server platforms, even for IT professionals who deploy and manage servers. As an industry, we have become accustomed to using x86 as a baseline for comparison. If an x86 CPU has 64 cores, that core count becomes the yardstick we use to measure the relative value of other CPUs.
But this is a flawed way of measuring CPUs and a broken system for measuring server platforms. An x86 core is different from an Arm core, which is different from a Power core. While Arm has achieved parity with x86 for some cloud-native workloads, the Power architecture is different again. Multi-threading, encryption, AI enablement – many functions are designed into Power without the performance penalties these features impose on other architectures.
I write all this as a set-up for IBM's announced expanded support for its Power10 architecture. In the following paragraphs, I will provide the details of IBM's announcement and supply some thoughts on what this could mean for enterprise IT.
What was announced
Before discussing what was announced, it is a good idea to do a quick overview of Power10.
IBM introduced the Power10 CPU architecture at the Hot Chips conference in August 2020. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. Power10 is developed on the open-source Power ISA and comes in two variants – 15x SMT8 cores and 30x SMT4 cores. For those familiar with x86, SMT8 (eight threads per core) seems extreme, as does SMT4. But this is where the Power ISA is fundamentally different from x86. Power is a highly performant ISA, and the Power10 cores are designed for the most demanding workloads.
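One way to see the trade-off between the two variants: both expose the same total number of hardware threads per chip, just divided across different core counts. A quick back-of-the-envelope check (core and thread counts are from IBM's disclosure; the arithmetic is ours):

```python
# Power10 ships in two variants with different core/thread trade-offs,
# but the total hardware thread count per chip is the same.
variants = {
    "SMT8": {"cores": 15, "threads_per_core": 8},
    "SMT4": {"cores": 30, "threads_per_core": 4},
}

for name, v in variants.items():
    total = v["cores"] * v["threads_per_core"]
    print(f"{name}: {v['cores']} cores x {v['threads_per_core']} threads = {total} threads/chip")
```

Both configurations land at 120 threads per chip; the difference is whether those threads are packed into fewer, fatter cores or spread across more, thinner ones.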
One last note on Power10: SMT8 is optimized for higher throughput at lower per-thread computation, while SMT4 attacks the compute-intensive space at lower throughput.
IBM introduced the Power E1080 in September of 2021. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. The E1080 is a system designed for mission and business-critical workloads and has been strongly adopted by IBM's loyal Power customer base.
Because of this success, IBM has expanded the breadth of the Power10 portfolio and how customers consume these resources.
The big reveal in IBM's recent announcement is the availability of four new servers built on the Power10 architecture. These servers are designed to address customers' full range of workload needs in the enterprise datacenter.
The Power S1014 is the traditional enterprise workhorse that runs the modern business. For x86 IT folks, think of the S1014 as the equivalent of the two-socket workhorses that run virtualized infrastructure. One of the things IBM points out about the S1014 is that this server was designed with lower technical requirements. This statement leads me to believe that the company is perhaps softening the barrier to entry for the S1014 in data centers that are not traditional IBM shops, or for environments that use Power for higher-end workloads but non-Power for traditional infrastructure needs.
The Power S1022 is IBM's scale-out server. Organizations embracing cloud-native, containerized environments will find the S1022 an ideal match. Again, for the x86 crowd – think of the traditional scale-out servers that are perhaps an AMD single socket or Intel dual-socket – the S1022 would be IBM's equivalent.
Finally, the S1024 targets the data analytics space. With lots of high-performing cores and a big memory footprint – this server plays in the area where IBM has done so well.
In addition to these platforms, IBM also introduced the Power E1050. The E1050 seems designed for big data and workloads with significant memory throughput requirements.
The E1050 is where I believe the difference in the Power architecture becomes obvious. The E1050 is where midrange starts to bump into high performance, and IBM claims 8-socket performance in this four-socket configuration. IBM says it can deliver performance for those running big data environments, larger data warehouses, and high-performance workloads. Maybe more importantly, the company claims to provide considerable cost savings for workloads that generally require a significant financial investment.
One benchmark that IBM showed was the two-tier SAP Standard app benchmark. In this test, the E1050 beat an x86, 8-socket server handily, showing a 2.6x per-core performance advantage. We at Moor Insights & Strategy didn’t run the benchmark or certify it, but the company has been conservative in its disclosures, and I have no reason to dispute it.
But the performance and cost savings are not just associated with these higher-end workloads with narrow applicability. In another comparison, IBM showed the Power S1022 performs 3.6x better than its x86 equivalent for running a containerized environment in Red Hat OpenShift. When all was added up, the S1022 was shown to lower TCO by 53%.
What makes Power-based servers perform so well in SAP and OpenShift?
The value of Power is derived both from the CPU architecture and from the engineering IBM puts into the system and server design. The company is not afraid to design and deploy enhancements it believes will deliver better performance, higher security, and greater reliability for its customers. In the case of Power10, I believe a few design factors have contributed to the performance and price/performance advantages the company claims.
These seemingly minor differences can add up to deliver significant performance benefits for workloads running in the datacenter. But some of this comes down to a very powerful (pardon the redundancy) core design. While x86 dominates the datacenter in unit share, IBM has maintained a loyal customer base because the Power CPUs are workhorses, and Power servers are performant, secure, and reliable for mission critical applications.
Like other server vendors, IBM sees the writing on the wall and has opened up its offerings to be consumed in a way that is most beneficial to its customers. Traditional acquisition model? Check. Pay as you go with hardware in your datacenter? Also, check. Cloud-based offerings? One more check.
While there is nothing revolutionary about what IBM is doing with how customers consume its technology, it is important to note that IBM is the only server vendor that also runs a global cloud service (IBM Cloud). This should enable the company to pass on savings to its customers while providing greater security and manageability.
I like what IBM is doing to maintain and potentially grow its market presence. The new Power10 lineup is designed to meet customers' entire range of performance and cost requirements without sacrificing any of the differentiated design and development that the company puts into its mission critical platforms.
Will this announcement move x86 IT organizations to transition to IBM? Unlikely. Nor do I believe this is IBM's goal. However, I can see how businesses concerned with performance, security, and TCO of their mission and business-critical workloads can find a strong argument for Power. And this can be the beginning of a more substantial Power presence in the datacenter.
Note: This analysis contains insights from Moor Insights & Strategy Founder and Chief Analyst, Patrick Moorhead.
Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign, TE Connectivity, TensTorrent, Tobii Technology, Teradata, T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.
As the world becomes increasingly data-driven, businesses must find suitable solutions to help them achieve their desired outcomes. Data lake storage has garnered the attention of many organizations that need to store large amounts of unstructured, raw information until it can be used in analytics applications.
The data lake solution market is expected to grow rapidly in the coming years and is driven by vendors that offer cost-effective, scalable solutions for their customers.
Learn more about data lake solutions, what key features they should have and some of the top vendors to consider this year.
A data lake is defined as a single, centralized repository that can store massive amounts of unstructured and semi-structured information in its native, raw form.
It’s common for an organization to store unstructured data in a data lake if it hasn’t decided how that information will be used. Some examples of unstructured data include images, documents, videos and audio. These data types are useful in today’s advanced machine learning (ML) and advanced analytics applications.
Data lakes differ from data warehouses, which store structured, filtered information for specific purposes in files or folders. Data lakes were created in response to some of the limitations of data warehouses. For example, data warehouses are expensive and proprietary, cannot handle certain business use cases an organization must address, and may lead to unwanted information homogeneity.
On-premise data lake solutions were commonly used before the widespread adoption of the cloud. Now, it's widely understood that some of the best hosts for data lakes are cloud-based platforms because of their inherent scalability and modular services.
A 2019 report from the Government Accountability Office (GAO) highlights several business benefits of using the cloud, including better customer service and the acquisition of cost-effective options for IT management services.
Cloud data lakes and on-premise data lakes have pros and cons. Businesses should consider cost, scale and available technical resources to decide which type is best.
It’s critical to understand what features a data lake offers. Most solutions come with the same core components, but each vendor may have specific offerings or unique selling points (USPs) that could influence a business’s decision.
Below are five key features every data lake should have:
Data lakes that offer diverse interfaces, APIs and endpoints can make it much easier to upload, access and move information. These capabilities are important because they allow unstructured data to serve a wide range of use cases, depending on a business's desired outcome.
ML engineers, data scientists, decision-makers and analysts benefit most from a centralized data lake solution that stores information for easy access and availability. This characteristic can help data professionals and IT managers work with data more seamlessly and efficiently, thus improving productivity and helping companies reach their goals.
Imagine a data lake with large amounts of information but no sense of organization. A viable data lake solution must incorporate organizational methods and search capabilities that provide the most value for its users. Useful features include key-value storage, tagging, metadata, and tools to classify and collect subsets of information.
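To make the organizational point concrete, here is a minimal sketch of what a data lake metadata catalog with tagging and search looks like as a data structure. All names here are illustrative; this is not any vendor's API:

```python
# Minimal sketch of a data lake metadata catalog: raw objects sit under
# opaque keys, and a tag index makes subsets of them discoverable.
from collections import defaultdict

class Catalog:
    def __init__(self):
        self.metadata = {}                 # object key -> metadata dict
        self.tag_index = defaultdict(set)  # tag -> set of object keys

    def register(self, key, tags, **meta):
        """Record an object's tags and arbitrary metadata fields."""
        self.metadata[key] = {"tags": set(tags), **meta}
        for tag in tags:
            self.tag_index[tag].add(key)

    def search(self, tag):
        """Return all object keys carrying the given tag."""
        return sorted(self.tag_index.get(tag, ()))

catalog = Catalog()
catalog.register("raw/cam1/0001.jpg", ["image", "unlabeled"], size_bytes=204800)
catalog.register("raw/calls/0001.wav", ["audio", "unlabeled"], size_bytes=1048576)
print(catalog.search("unlabeled"))
```

Without something like the tag index, a lake full of unstructured objects degenerates into the "data swamp" the paragraph above warns about: the data is all there, but nothing can find it.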
Security and access control are two must-have features with any digital tool. The current cybersecurity landscape is expanding, making it easier for threat actors to exploit a company’s data and cause irreparable damage. Only certain users should have access to a data lake, and the solution must have strong security to protect sensitive information.
Organizations are growing larger and operating faster than ever. Data lake solutions must be flexible and scalable to meet the ever-changing needs of modern businesses working with information.
Some data lake solutions are best suited for businesses in certain industries, while others may work well for a company of a particular size or with a specific number of employees or customers. This can make choosing a data lake solution vendor challenging.
Companies considering investing in a data lake solution this year should check out some of the vendors below.
The AWS Cloud provides many essential tools and services that allow companies to build a data lake that meets their needs. The AWS data lake solution is widely used, cost-effective and user-friendly. It leverages the security, durability, flexibility and scalability that Amazon S3 object storage offers to its users.
The data lake also uses Amazon DynamoDB to manage metadata, and it offers an intuitive, web-based console user interface (UI) for easy administration. The console can manage data lake policies, add or remove data packages, create manifests of datasets for analytics purposes, and search data packages.
Cloudera is another top data lake vendor; its SDX Data Lake Service creates and maintains safe, secure storage for all data types.
Other benefits of Cloudera’s data lake include product support, downloads, community and documentation. GSK and Toyota leveraged Cloudera’s data lake to garner critical business intelligence (BI) insights and manage data analytics processes.
Databricks is another viable vendor, and it also offers a handful of data lake alternatives. The Databricks Lakehouse Platform combines the best elements of data lakes and warehouses to provide reliability, governance, security and performance.
Databricks’ platform helps break down the silos that normally separate and complicate data, which frustrates data scientists, ML engineers and other IT professionals. Aside from the platform, Databricks also offers its Delta Lake solution, an open-format storage layer that can improve data lake management processes.
Domo is a cloud-based software company that provides big data solutions to companies of all sizes. Users have the freedom to choose a cloud architecture that works for their business. Domo is an open platform that can augment existing data lakes, whether they are in the cloud or on-premise, and it supports combined cloud options.
Domo offers advanced security features, such as BYOK (bring your own key) encryption, control data access and governance capabilities. Well-known corporations such as Nestle, DHL, Cisco and Comcast leverage the Domo Cloud to better manage their needs.
Google is another big tech player offering customers data lake solutions. Companies can use Google Cloud’s data lake to analyze any data securely and cost-effectively. It can handle large volumes of information and IT professionals’ various processing tasks. Companies that don’t want to rebuild their on-premise data lakes in the cloud can easily lift and shift their information to Google Cloud.
Some key features of Google’s data lakes include Apache Spark and Hadoop migration, which are fully managed services, integrated data science and analytics, and cost management tools. Major companies like Twitter, Vodafone, Pandora and Metro have benefited from Google Cloud’s data lakes.
Hewlett Packard Enterprise (HPE) is another data lake solution vendor that can help businesses harness the power of their big data. HPE’s solution is called GreenLake — it offers organizations a truly scalable, cloud-based solution that simplifies their Hadoop experiences.
HPE GreenLake is an end-to-end solution that includes software, hardware and HPE Pointnext Services. These services can help businesses overcome IT challenges and spend more time on meaningful tasks.
Business technology leader IBM also offers data lake solutions. IBM is well-known for its cloud computing and data analytics offerings, making it a natural choice for organizations seeking a data lake. IBM's cloud-based approach operates on three key principles: embedded governance, automated integration and virtualization.
IBM offers several data lake solutions to choose from.
With so many data lakes available, there’s surely one to fit a company’s unique needs. Financial services, healthcare and communications businesses often use IBM data lakes for various purposes.
Microsoft offers its Azure Data Lake solution, which features easy storage methods, processing, and analytics using various languages and platforms. Azure Data Lake also works with a company’s existing IT investments and infrastructure to make IT management seamless.
The Azure Data Lake solution is affordable, comprehensive, secure and supported by Microsoft. Companies benefit from 24/7 support and expertise to help them overcome any big data challenges they may face. Microsoft is a leader in business analytics and tech solutions, making it a popular choice for many organizations.
Companies can use Oracle’s Big Data Service to build data lakes to manage the influx of information needed to power their business decisions. The Big Data Service is automated and will provide users with an affordable and comprehensive Hadoop data lake platform based on Cloudera Enterprise.
This solution can be used as a data lake or an ML platform. Oracle's offering also stands out as one of the best open-source-based data lakes available, and it comes with Oracle-specific tools that add even more value. Oracle's Big Data Service is scalable, flexible, secure and meets data storage requirements at a low cost.
Snowflake’s data lake solution is secure, reliable and accessible, and it helps businesses break down silos to improve their strategies. The top features of Snowflake’s data lake include a central platform for all information, fast querying and secure collaboration.
Siemens and Devon Energy are two companies that provide testimonials regarding Snowflake’s data lake solutions and offer positive feedback. Another benefit of Snowflake is its extensive partner ecosystem, including AWS, Microsoft Azure, Accenture, Deloitte and Google Cloud.
Companies that spend extra time researching which vendors will offer the best enterprise data lake solutions for them can manage their information better. Rather than choose any vendor, it’s best to consider all options available and determine which solutions will meet the specific needs of an organization.
Every business uses information, some more than others. However, the world is becoming highly data-driven — therefore, leveraging the right data solutions will only grow more important in the coming years. This list will help companies decide which data lake solution vendor is right for their operations.
The “Cirrus” Power10 processor from IBM, which we codenamed for Big Blue because it refused to do it publicly and because we understand the value of a synonym here at The Next Platform, shipped last September in the “Denali” Power E1080 big iron NUMA machine. And today, the rest of the Power10-based Power Systems product line is being fleshed out with the launch of entry and midrange machines – many of which are suitable for supporting HPC and AI workloads as well as in-memory databases and other workloads in large enterprises.
The question is, will IBM care about traditional HPC simulation and modeling ever again with the same vigor that it has in past decades? And can Power10 help reinvigorate the HPC and AI business at IBM? We are not sure about the answer to the first question, and we got the distinct impression from Ken King, the general manager of the Power Systems business, that HPC proper was not a high priority when we spoke to him back in February. But we continue to believe that the Power10 platform has some attributes that make it appealing for data analytics and other workloads that need to be either scaled out across small machines or scaled up across big ones.
Today, we are just going to talk about the five entry Power10 machines, which have one or two processor sockets in a standard 2U or 4U form factor, and then we will follow up with an analysis of the Power E1050, which is a four-socket machine that fits into a 4U form factor. And the question we wanted to answer was simple: Can a Power10 processor hold its own against X86 server chips from Intel and AMD when it comes to basic CPU-only floating point computing?
This is an important question because there are plenty of workloads that have not been accelerated by GPUs in the HPC arena, and for these workloads, the Power10 architecture could prove to be very interesting if IBM thought outside of the box a little. This is particularly true when considering the feature called memory inception, which is in effect the ability to build a memory area network across clusters of machines and which we have discussed a little in the past.
We went deep into the architecture of the Power10 chip two years ago when it was presented at the Hot Chips conference, and we are not going to go over that ground again here. Suffice it to say that this chip can hold its own against Intel’s current “Ice Lake” Xeon SPs, launched in April 2021, and AMD’s current “Milan” Epyc 7003s, launched in March 2021. And this makes sense because the original plan was to have a Power10 chip in the field with 24 fat cores and 48 skinny ones, using dual-chip modules built on 10 nanometer processes from IBM’s former foundry partner, GlobalFoundries, sometime in 2021, three years after the Power9 chip launched in 2018. GlobalFoundries did not get the 10 nanometer processes working, botched a jump to 7 nanometers and spiked it, and that left IBM jumping to Samsung as its first server chip foundry partner using its 7 nanometer processes. IBM took the opportunity of the Power10 delay to reimplement the Power ISA in a new Power10 core and then added some matrix math overlays to its vector units to make it a good AI inference engine.
IBM also created a beefier core and dropped the core count back to 16 on a die in SMT8 mode, which is an implementation of simultaneous multithreading that has up to eight processing threads per core, and also was thinking about an SMT4 design which would double the core count to 32 per chip. But we have not seen that today, and with IBM not chasing Google and other hyperscalers with Power10, we may never see it. But it was in the roadmaps way back when.
What IBM has done in the entry machines is put two Power10 chips inside of a single socket to increase the core count, but it is looking like the yields on the chips are not as high as IBM might have wanted. When IBM first started talking about the Power10 chip, it said it would have 15 or 30 cores, which was a strange number, and that is because it kept one SMT8 core or two SMT4 cores in reserve as a hedge against bad yields. In the products that IBM is rolling out today, mostly for its existing AIX Unix and IBM i (formerly OS/400) enterprise accounts, the core counts on the dies are much lower, with 4, 8, 10, or 12 of the 16 cores active. The Power10 cores have roughly 70 percent more performance than the Power9 cores in these entry machines, and that is a lot of performance for many enterprise customers – enough to get through a few years of growth on their workloads. IBM is charging a bit more for the Power10 machines compared to the Power9 machines, according to Steve Sibley, vice president of Power product management at IBM, but the bang for the buck is definitely improving across the generations. At the very low end with the Power S1014 machine that is aimed at small and midrange businesses running ERP workloads on the IBM i software stack, that improvement is in the range of 40 percent, give or take, and the price increase is somewhere between 20 percent and 25 percent depending on the configuration.
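Those numbers hang together arithmetically: roughly 70 percent more per-core performance divided by a 20 to 25 percent price increase lands at the roughly 40 percent bang-for-the-buck gain cited. A quick check (the performance and price figures are from the article; the calculation itself is ours):

```python
# Per-core performance uplift and price increase quoted for the entry machines.
perf_gain = 1.70               # Power10 vs Power9 core, ~70 percent faster
price_increase = (1.20, 1.25)  # 20 to 25 percent more, depending on configuration

for p in price_increase:
    perf_per_dollar = perf_gain / p
    print(f"price x{p:.2f}: perf per dollar improves {100 * (perf_per_dollar - 1):.0f}%")
```

The result spans roughly 36 percent to 42 percent, which matches the "40 percent, give or take" characterization.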
Pricing is not yet available on any of these entry Power10 machines, which ship on July 22. When we find out more, we will do more analysis of the price/performance.
There are six new entry Power10 machines, the feeds and speeds of which are shown below:
For the HPC crowd, the Power L1022 and the Power L1024 are probably the most interesting ones because they are designed to only run Linux and, if they are like prior L-classified machines in the Power8 and Power9 families, will have lower pricing for CPU, memory, and storage, allowing them to better compete against X86 systems running Linux in cluster environments. This will be particularly important as IBM pushes Red Hat OpenShift as a container platform not only for enterprise workloads but also for HPC and data analytics workloads, which are also being containerized these days.
One thing to note about these machines: IBM is using its OpenCAPI Memory Interface, which as we explained in the past is using the “Bluelink” I/O interconnect for NUMA links and accelerator attachment as a memory controller. IBM is now calling this the Open Memory Interface, and these systems have twice as many memory channels as a typical X86 server chip and therefore have a lot more aggregate bandwidth coming off the sockets. The OMI memory makes use of a Differential DIMM form factor that employs DDR4 memory running at 3.2 GHz, and it will be no big deal for IBM to swap in DDR5 memory chips into its DDIMMs when they are out and the price is not crazy. IBM is offering memory features with 32 GB, 64 GB, and 128 GB capacities today in these machines and will offer 256 GB DDIMMs on November 14, which is how you get the maximum capacities shown in the table above. The important thing for HPC customers is that IBM is delivering 409 GB/sec of memory bandwidth per socket and 2 TB of memory per socket.
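The 409 GB/sec figure is consistent with 16 memory channels per socket (our inference from the "twice as many channels as a typical X86 server chip" statement, taking eight channels as typical) each running at DDR4-3200-class bandwidth:

```python
# DDR4-3200 channel bandwidth: 3.2 GT/s transfer rate x 8 bytes per transfer.
transfer_rate_gts = 3.2
bytes_per_transfer = 8
per_channel_gbs = transfer_rate_gts * bytes_per_transfer  # 25.6 GB/s

# Channel count is our inference: twice a typical x86 chip's 8 channels.
channels_per_socket = 16
total_gbs = channels_per_socket * per_channel_gbs
print(f"{total_gbs:.1f} GB/s per socket")
```

That works out to 409.6 GB/sec per socket, matching the 409 GB/sec IBM quotes, and it also shows why a DDR5 swap inside the DDIMMs would raise the ceiling further without touching the OMI interface.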
By the way, the only storage in these machines is NVM-Express flash drives. No disk, no plain vanilla flash SSDs. The machines also support a mix of PCI-Express 4.0 and PCI-Express 5.0 slots, and do not yet support the CXL protocol created by Intel and backed by IBM even though it loves its own Bluelink OpenCAPI interconnect for linking memory and accelerators to the Power compute engines.
Here are the different processor SKUs offered in the Power10 entry machines:
As far as we are concerned, the 24-core Power10 DCM feature EPGK processor in the Power L1024 is the only interesting one for HPC work, aside from what a theoretical 32-core Power10 DCM might be able to do. And just for fun, we sat down and figured out the peak theoretical 64-bit floating point performance, at all-core base and all-core turbo clock speeds, for these two Power10 chips and their rivals in the Intel and AMD CPU lineups. Take a gander at this:
We have no idea what the pricing will be for a processor module in these entry Power10 machines, so we took a stab at what the 24-core variant might cost to be competitive with the X86 alternatives based solely on FP64 throughput and then reckoned the performance of what a full-on 32-core Power10 DCM might be.
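The peak-throughput arithmetic behind that kind of comparison is simple: core count times clock speed times FP64 FLOPs per core per cycle (SIMD lanes times two for fused multiply-add). A sketch of the calculation, where the chip parameters used in the example are illustrative placeholders rather than vendor-confirmed figures:

```python
# Peak theoretical FP64 throughput follows from three numbers: core count,
# clock speed, and FP64 FLOPs per core per cycle.
def peak_fp64_gflops(cores, clock_ghz, flops_per_cycle):
    """Peak theoretical FP64 throughput in GFLOPS."""
    return cores * clock_ghz * flops_per_cycle

# Example: a hypothetical 24-core chip at 3.0 GHz all-core base and
# 3.5 GHz all-core turbo, assuming 16 FP64 FLOPs per core per cycle.
for label, ghz in [("base", 3.0), ("turbo", 3.5)]:
    print(f"{label}: {peak_fp64_gflops(24, ghz, 16):.0f} GFLOPS")
```

Running the same formula at all-core base and all-core turbo clocks is exactly how the base and turbo peak numbers in such a table are derived.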
The answer is that IBM can absolutely compete, flops to flops, with the best Intel and AMD have right now. And it has a very good matrix math engine as well, which the X86 chips do not.
The problem is, Intel has “Sapphire Rapids” Xeon SPs in the works, which we think will have four 18-core chiplets for a total of 72 cores, but only 56 of them will be exposed because of yield issues that Intel has with its SuperFIN 10 nanometer (Intel 7) process. And AMD has 96-core “Genoa” Epyc 7004s in the works, too. Power11 is several years away, so if IBM wants to play in HPC, Samsung has to get the yields up on the Power10 chips so IBM can sell more cores in a box. Big Blue already has the memory capacity and memory bandwidth advantage. We will see if its L-class Power10 systems can compete on price and performance once we find out more. And we will also explore how memory clustering might make for a very interesting compute platform based on a mix of fat NUMA and memory-less skinny nodes. We have some ideas about how this might play out.
Organisations have stepped up their focus on backup and recovery as they face an ever-increasing likelihood of falling victim to cyber crime. This is according to speakers in an IBM EMEA webinar on boosting cyber resilience with IBM Storage and Predatar, the cyber recovery platform that adds another dimension to IBM’s Spectrum Protect and Spectrum Protect Plus.
Roland Leins, Business Development Executive for Storage Software at IBM Europe, said data protection had to be transformed as organisations made cyber resiliency a top priority. “Modernising data protection for resilience has become crucial. A common mistake is to architect for backup, but organisations must architect for quick recovery to meet the SLAs for the data – this can be the difference between getting the business up again or going out of business completely,” he said. “Automation is also necessary to ensure that recovery happens in a repeatable, consistent manner to meet the business SLAs.”
Ben Hodge, Head of Marketing at Predatar, said while the NIST best practice framework covers identify, protect, detect, respond and recover, many organisations have focused on identifying, protecting and detecting in the past. “Organisations are increasingly realising it is quite likely their defences will be breached. There’s a refocusing on response and recovery for a fast and effective response. As they refocus, they are realising they have big challenges to overcome. Predatar is all about the response and recovery, working hand in hand with IBM storage and defences. It is the final piece of the puzzle,” he said.
Built for IBM Spectrum Protect and Spectrum Protect Plus environments, the Predatar cyber recovery orchestration platform takes resiliency to the next level with capabilities to rival any enterprise backup and recovery solution. Predatar’s cyber analytics and real-time alerts have been built to help infrastructure and security teams cut through the noise of complex backup systems to show them their recoverability risk factors on a configurable dashboard. Predatar IQ instantly notifies users of anomalies, changes and issues in their backup environment as they occur, while Data Explorer lets organisations explore the environment to discover new recoverability insights.
Hodge noted that around two-thirds of backup recoveries fail to meet the recovery time objectives required for business continuity. “Recovering cleanly is becoming increasingly difficult because of the dwell time – malware can sit inside the storage environment before being discovered, being replicated into backups and storage. If organisations are infected deep and wide, recovery will simply reinfect the environment. Around 10% of backup recoveries fail outright – there could be critical business data in there, and organisations can’t afford to have holes in their data,” he said.
“Our cyber recovery orchestration uses automated workflows, AV tools and XDR/EDR tools to continually recover backup workloads and scan them to ensure they are clean. The only way you can be certain you can recover quickly, cleanly and completely is by testing continually. We also have cyber analytics built in to serve users with data and insights to understand the overall health and recoverability of their backup environments. With machine learning and artificial intelligence overlaid on these analytics, Predatar learns and, over time, becomes smarter and capable of finding more infections, faster. With Spectrum Protect, Predatar runs in the background, running tests, and plugs into SIEM platforms such as QRadar. Predatar continuously searches across backups for known infection signatures to identify dormant malware, and will recover suspicious backup workloads to an isolated CleanRoom, scan them for viruses, clean them and restore them to production. By continually scanning your backups in the background, Predatar finds, quarantines and eliminates dormant viruses and malware before they can wreak havoc.”
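Predatar's internals are proprietary, but the recover-scan-quarantine loop described above can be sketched in a few lines. Everything below (the function name and the callable hooks for restore, scanning and quarantine) is a hypothetical illustration, not Predatar's actual API:

```python
# Hypothetical sketch of a cyber recovery test loop: restore each backup
# workload to an isolated environment, scan it, and quarantine anything
# suspicious. The hook functions are placeholders for real AV/XDR tooling.
def recovery_test_cycle(workloads, restore_to_cleanroom, scan, quarantine):
    results = {}
    for workload in workloads:
        snapshot = restore_to_cleanroom(workload)  # isolated "CleanRoom" restore
        infected = scan(snapshot)                  # AV / XDR-style inspection
        if infected:
            quarantine(workload)                   # keep it away from production
        results[workload] = "quarantined" if infected else "clean"
    return results
```

Run continually from a scheduler, a loop of this shape gives the assurance the quote describes: every backup is exercised and checked before it is ever needed.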
In South Africa through Axiz
Craig Botha, Business Development Manager: Advanced Technologies: IBM at Axiz, says Predatar is a compelling solution for anyone tasked with protecting, backing up and recovering data.
“With Predatar running continuous recovery testing and backup data validation – scheduled testing and randomised testing with ML behaviour-based testing – organisations will be able to recover backups quickly, cleanly and completely. What’s exciting is that IBM has taken tried and trusted technology and packaged it with Predatar for an all-in-one solution: you get the full muscle of QRadar enterprise security information and event management (SIEM) in a modern, midrange storage device. It’s new and exciting thinking from IBM. An impressive differentiator is that it’s continuously learning – the AI built into it is phenomenal,” says Botha. “It gets to a point where it knows exactly which team needs the backup, and which data is most important to the company, and adapts to cater for priorities.”
He notes that Predatar brings key data protection and backup features into one solution, enabling cyber security and storage teams to do more with less. “There’s a massive cyber skills problem, and there’s a lot of burnout among those with too much to do. Restoring data using traditional methods can be a nightmare – backups might not work, or tapes may be damaged. But Predatar is the future come early, making restoring an immutable copy quick and easy. For South African businesses, it addresses challenges around skills and cost,” he says.
“We have been asking for this for some time, and now we have it as part of our portfolio, along with Spectrum Protect and Protect Plus for a complete cyber resiliency story.”
There might be times when you need to use a computer or browse the web without anyone being able to track or see what you've done. Maybe you're birthday shopping for a loved one or researching a book and don't want your search terms visible to prying eyes. We don't know what you're doing, and we're not here to judge, just don't do anything gross or illegal.
Sure, you could use your browser’s incognito mode, but a persistent sleuth could still figure out what you were doing, and sometimes that just won’t do. That’s where Tails comes in. Tails, short for The Amnesic Incognito Live System, is a standalone operating system, and it does precisely what the name suggests (via Privacy Affairs).
The overall process is similar to the one described above for Linux, only with added security. You’ll want software to create a bootable USB flash drive. You can use YUMI, listed above, or find another option depending on your preferences. Then just install Tails on your flash drive.
Once installed on your flash drive, you can boot the OS directly from the drive. It will encrypt all of your files and internet usage while in use. Then, once you remove the flash drive and shut down your computer, any trace of your activities vanishes into the ether as if they never happened. Again, don’t be weird; use it responsibly.
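Under the hood, "installing Tails on your flash drive" is a raw block copy of the downloaded image onto the USB device. A minimal Python sketch of that copy step (the paths are placeholders; on Linux the target would be a device node such as /dev/sdX, and writing to the wrong device destroys its contents, so check with lsblk first):

```python
# Minimal sketch: copy a downloaded OS image byte-for-byte onto a target
# device node (or file). Identify the target carefully before writing --
# this is equivalent to what `dd` does.
import shutil

def write_image(image_path, target_path, chunk=16 * 1024 * 1024):
    with open(image_path, "rb") as src, open(target_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=chunk)  # stream in large chunks
        dst.flush()
```

In practice a dedicated imaging tool (YUMI, balenaEtcher, or `dd` on Linux) does the same job with device detection and verification built in.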
Global Next-Generation Data Storage Market 2022-2028, By Product Type (File Storage, Object Storage, Block Storage), By Application/End User (Small and Medium Enterprises (SMEs), Large Enterprises), and Geography (Asia-Pacific, North America, Europe, South America, and the Middle East and Africa), Segments and Forecasts from 2022 to 2028. The global Next-Generation Data Storage market is estimated to be worth USD 51,670 million in 2021 and is forecast to reach a readjusted size of USD 81,860 million by 2028, at a CAGR of 6.8 percent.
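The quoted figures are internally consistent; assuming seven years of compounding from 2021 to 2028, a quick check:

```python
# Sanity-check the report's numbers (USD million; assumes 7 compounding
# periods, 2021 -> 2028).
base, cagr, years = 51670, 0.068, 7
projected = base * (1 + cagr) ** years
print(round(projected))  # roughly 81,900 -- in line with the quoted 81,860
```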
The Global Next-Generation Data Storage Market Report Overview presents growth scenarios and market opportunities under clearly stated assumptions. The research highlights the unique and essential variables projected to have a substantial impact on the Next-Generation Data Storage market during the forecast period. The study contains a substantial quantity of information that will help new producers gain a better understanding of the industry. It covers market analysis and statistics for Next-Generation Data Storage segments such as type, industry and region, along with market expansion, development, trends, demographics and forecasts. The report analyses the overall market’s demand and supply trends, providing key insights and graphical depictions.
By player, the Next-Generation Data Storage market report covers:
Dell, HPE, NetApp, IBM, Hitachi, Toshiba, Pure Storage, Nutanix, Tintri, Simplivity, Scality
Product Type Outlook (Revenue, USD Billion; 2021–2027)
File Storage, Object Storage, Block Storage
Application/End-User (Revenue, USD Billion; 2021–2027)
Small and Medium Enterprises (SMEs), Large Enterprises
Region Outlook (Revenue, USD Billion; 2021–2027)
● North America- US, Canada, and Mexico
● Europe- Germany, UK, France, Italy, Spain, Benelux, and Rest of Europe
● Asia Pacific- China, India, Japan, South Korea, and Rest of Asia Pacific
● Latin America- Brazil and Rest of Latin America
● Middle East and Africa- Saudi Arabia, UAE, South Africa, Rest of Middle East and Africa
The data is based on current trends as well as historical milestones. This section also includes a breakdown of total production in the global Next-Generation Data Storage market, overall and by type, from 2017 to 2028. Sales by region from 2017 to 2028 are also discussed. The report includes pricing analysis for each region from 2017 to 2022, and worldwide value from 2017 to 2028.
Objectives of the global Next-Generation Data Storage market report
● To identify the main subsegments of the Next-Generation Data Storage market to comprehend its structure.
● To identify, describe and analyse the sales volume, value, market share, competitive landscape, opportunities and threats, and strategic initiatives of the main worldwide Next-Generation Data Storage manufacturers for the next few years.
● To examine the Next-Generation Data Storage market in terms of expected growth, outlook, and market share contribution.
● To analyse commercial developments in the market, such as market expansions, partnerships, new product development, and mergers.
● To develop a strategic analysis of the main players and a thorough analysis of their strategic planning.
Reasons to buy the global Next-Generation Data Storage market report
● This research identifies the region and market sector that are likely to expand the fastest and dominate the Next-Generation Data Storage industry.
● Next-Generation Data Storage market analysis by region, covering consumption in each country as well as the factors that influence the market within each region.
● The Next-Generation Data Storage market environment includes the top players’ market rankings, as well as new service/product announcements, collaborations, company growth, and acquisitions made by the companies profiled in the previous five years.
● For the top Next-Generation Data Storage market players, extensive company profiles with business overviews, company insights, product evaluations, and SWOT analyses are available.
● The company’s present and future outlook in light of recent changes (including growth possibilities and drivers, as well as challenges and restraints, in both advanced and developing regions).
Welcome to the 135th Wimbledon Championships – the smartest, most data-driven and sustainable ever!
In this piece I explore the latest technology-, people- and purpose-inspired innovations following a behind-the-scenes visit with IBM, the official Information Technology partner of the All England Lawn Tennis Club (AELTC) and The Championships for the past 33 years. I also reflect on the changes I have seen first-hand over the last 12 months, especially regarding explainable AI, the quality and interactivity of the fan experience, and sustainable development. All of this is built on the power of trusted partnership: workshopping, testing and developing AI and analytics-based solutions in a year-round shared commitment. Let’s dive into the key advances!
Wimbledon reached approximately 18 million fans through its digital platforms in 2021. And this year, the event has become even more technology-feature-rich by design – whilst always supported by people in partnership. Around each match court there are typically two to three individuals applying their judgment to the data generated – reflecting on whether a shot was a volley or a drop shot, or double-checking whether the ball grazed the racket or that serve was actually an ace. The only thing stopping this at Wimbledon 2022 was, quite literally, a swarm of bees: the affected court’s data entry position was cleared and the ‘sanity-check’ data role was seamlessly taken over by the IBM Technology Command Centre – meaning no data point evaluation was lost!
With an overarching focus on trustworthy AI and the interactivity of the fan experience, this year IBM has moved another ‘step beyond’ its technology foundation of IBM Power Index (IPI) with Watson, Personalized Recommendations and Highlights Reels, and IBM Match Insights with Watson. All of these are underpinned by NLP, Natural Search and a hybrid cloud approach combining IBM Cloud, on-premises systems and private clouds, which I covered previously here. Two new features for 2022 which advance both AI explainability and the fan experience are ‘Win Factors’ and ‘Have Your Say’, as detailed below:
On both the official Wimbledon apps and Wimbledon.com, users can now register their own predictions for match outcomes via Have Your Say. Users can then compare their take with the aggregated predictions of other tennis fans, plus the AI-powered Likelihood to Win predictions generated by IBM Watson. It is a great way to let all voices be heard, enable interactivity and continue the conversation on social channels too – sports fans always love to debate! And you can see the delight on occasions when ‘users beat the tech’ – although it must be said that IBM’s predictions maintain an 80%+ accuracy rate on the centre courts, where predictions are based on the greatest volume, history and range of data sources. So, impressive results all round!
Advancing the existing ‘Match Insights’ feature on Wimbledon.com and the Wimbledon apps, ‘Win Factors’ offers a new level of explainability into the factors the AI analyses to determine insights and predictions, and which can affect player performance. These include the IBM Power Index, ATP/WTA rankings, yearly success, media articles and punditry, performance, head-to-head record, ratio of games won, net sets won and court surface. The feature also highlights the top three factors that influenced the AI’s ultimate Likelihood to Win prediction. This heralds a new level of AI-driven digital experiences with transparency and explainability embedded by design, critical to both user understanding and user trust, as highlighted in new research explored here.
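IBM has not published the model behind Win Factors, but the "top 3 factors" idea itself is simple: rank per-factor contribution scores and surface the largest. A hypothetical illustration (the factor scores below are invented for the example):

```python
# Illustrative only: given hypothetical contribution scores for one match
# prediction, return the n most influential factors, as Win Factors does.
def top_win_factors(contributions, n=3):
    return sorted(contributions, key=contributions.get, reverse=True)[:n]

factors = {  # invented numbers, purely for illustration
    "IBM Power Index": 0.31, "ATP/WTA ranking": 0.22, "yearly success": 0.14,
    "head-to-head": 0.12, "court surface": 0.08, "media punditry": 0.05,
}
print(top_win_factors(factors))  # the three highest-scoring factors
```

The real system pairs this kind of ranking with the underlying data sources, which is what makes the prediction explainable rather than a black box.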
The overarching event theme for The Championships this year is “Environment Positive, Every Day”, with investment in reducing environmental impact a long-held commitment of the AELTC, as detailed in pledges including net-zero emissions from operations, zero-waste status and achieving biodiversity net gain (see https://www.wimbledon.com/en_GB/atoz/sustainability.html). Among a number of green developments at Wimbledon 2022 are electric vehicles, a bug hotel, living walls and reusable cups, plus priority plant-based options in restaurants, coupled with innovation in the food supply chain, especially around transparency and waste reduction.
IBM is already developing new innovation around energy-efficient smart lighting for the event, and it underpins the Wimbledon Championships’ environmental commitment with its ‘sustainable by design’ technology strategy, which covers the complete lifecycle from design choices through to innovation adoption, manufacturing and shipping logistics. Taking materials selection as the foundation of a sustainable lifecycle, IBM has for the last three years sourced all of its 3TG mineral requirements from ethical smelters or refiners, or from 100% recycled or scrap sources.
And with a 27-year history of continually improving the energy efficiency of its products generation after generation, the latest z16 server combines performance advancements – executing some 25 billion encrypted transactions per day – with assured energy ratings and transparency around estimated carbon footprint. This can help predict the full lifecycle emissions of a product and identify the areas with the greatest opportunity for greenhouse gas reduction.
The commitment continues with manufacturing practices deployed to eliminate waste – for example, the use of high-recycled-content polyethylene cushions for IBM z, IBM Power and Storage, reducing the use of virgin materials by some 60% – and by optimising reuse and recycling opportunities. Since 1995, IBM has processed 2.46 billion pounds of products and product waste worldwide, sending just 0.3% to landfill or incineration in 2021. When we consider the 2022 Championships’ aspiration to be “Environment Positive, Every Day”, this level of sustained action on sustainable innovation by Wimbledon’s technology partner of the past 33 years is clearly a match well made!
It has been said that ‘technology is magic’, and my experience behind the scenes at Wimbledon 2022 echoes that ‘art of the possible’ ethos made real, raising the game in AI explainability and trust alongside heightening the dynamism and interactivity of the digital experience – helping to keep fans informed, engaged, curious and deeply involved in real time. It also brings to the fore that technology innovation and becoming data-driven are underpinned by the power and motivation of people, partnership and purpose – Wimbledon may be a two-week event, but it’s an everyday, all-year commitment to sustainable technology change. I can’t wait to see what’s next!
More information on IBM and the Wimbledon Championships is available here.
Dr. Sally Eaves is a highly experienced chief technology officer, professor in advanced technologies, and a Global Strategic Advisor on digital transformation specializing in the application of emergent technologies, notably AI, 5G, cloud, security, and IoT disciplines, for business and IT transformation, alongside social impact at scale.
An international keynote speaker and author, Sally was an inaugural recipient of the Frontier Technology and Social Impact award, presented at the United Nations, and has been described as the "torchbearer for ethical tech", founding Aspirational Futures to enhance inclusion, diversity, and belonging in the technology space and beyond. Sally is also the chair for the Global Cyber Trust at GFCYBER.
The IBM PC spawned the basic architecture that grew into the dominant Wintel platform we know today. Once heavy, cumbersome and power-thirsty, it’s a machine that you can now emulate on a single board with a cheap commodity microcontroller. That’s thanks to work from [Fabrizio Di Vittorio], who has shared a how-to on YouTube.
The full playlist is quite something to watch, showing off a huge number of old-school PC applications and games running on the platform. There’s QBASIC, FreeDOS, Windows 3.0, and yes, of course, Flight Simulator. The latter game was actually considered somewhat of a de facto standard for PC compatibility in the 1980s, so the fact that the ESP32 can run it with [Fabrizio’s] code suggests he’s done well.
It’s amazingly complete, with the ESP32 handling everything from video and sound output to keyboard and mouse input. It’s a testament to the capability of modern microcontrollers that this is such a simple feat in 2021.
We’ve seen the ESP32 emulate 8-bit gaming systems before, too. If you remember [Fabrizio’s] name, it’s probably from his excellent FabGL library. Videos after the break.
Press release content from Business Wire. The AP news staff was not involved in its creation.
DUBLIN--(BUSINESS WIRE)--Jul 14, 2022--
The “Global Cloud Computing Market, By Deployment Type, By Service Model, Platform as a Service, Software as a Service, By Industry Vertical & By Region - Forecast and Analysis 2022 - 2028” report has been added to ResearchAndMarkets.com’s offering.
The Global Cloud Computing Market was valued at USD 442.89 Billion in 2021, and it is expected to reach a value of USD 1369.50 Billion by 2028, at a CAGR of more than 17.50% over the forecast period (2022 - 2028).
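Those endpoint values imply the stated growth rate; assuming seven years of compounding from 2021 to 2028, a quick cross-check:

```python
# Cross-check the implied CAGR from the quoted endpoints (USD billion,
# 2021 -> 2028, 7 compounding periods assumed).
start, end, years = 442.89, 1369.50, 7
implied_cagr = (end / start) ** (1 / years) - 1
print(f"{implied_cagr:.1%}")  # 17.5%, matching the report's figure
```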
Cloud computing is the delivery of hosted services over the internet, including software, servers, storage, analytics, intelligence, and networking. Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS) are the three main types of cloud computing services.
The expanding usage of cloud-based services and the growing number of small and medium businesses around the world are the key factors driving market growth. Enterprises all over the world are embracing cloud-based platforms as a cost-effective way to store and manage data. Commercial data demands a lot of storage space, and with the growing volume of data generated, many businesses have moved their data to cloud storage, using services like Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
The growing need to control and reduce capital expenditure (CAPEX) and operational expenditure (OPEX), as well as the increasing volume of data generated by websites and mobile apps, are further drivers of growth. Emerging technologies like big data, artificial intelligence (AI), and machine learning (ML) are gaining traction, contributing to the growth of the global cloud computing industry. The market is also driven by factors such as data security, faster disaster recovery (DR), and meeting compliance standards.
Aspects covered in this report
The global cloud computing market is segmented on the basis of deployment type, service model, and industry vertical. Based on the deployment type, the market is segmented as: private cloud, public cloud, and hybrid cloud. Based on the service model, the market is segmented as: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Based on industry vertical, the market is segmented as: Government, Military & Defense, Telecom & IT, Healthcare, Retail, and Others. Based on region it is categorized into: North America, Europe, Asia-Pacific, Latin America, and MEA.
For more information about this report visit https://www.researchandmarkets.com/r/m9wewu