The high-end Power10 server launched last year has enjoyed “fantastic” demand, according to IBM. Let’s look into how IBM Power has maintained its unique place in the processor landscape.
This article is a bit of a walk down memory lane for me, as I spent four years as VP of Marketing for IBM Power back in the 90s. The IBM Power development team is unique in that many of its engineers came from a heritage of developing processors for the venerable and durable mainframe (IBM Z) and the IBM AS/400. These systems were not cheap, but they offered enterprises advanced features that were not available in processors from Sun or DEC, and they remain differentiated versus industry-standard x86.
While a great deal has changed in the industry since I left IBM, the Power processor remains the king of the hill when it comes to performance, security, reliability, availability, OS choice, and flexible pricing models in an open platform. The new Power10 processor-based systems are optimized to run both mission-critical workloads like core business applications and databases, as well as maximize the efficiency of containerized and cloud-native applications.
IBM introduced the high-end Power10 server last September and is now broadening the portfolio with four new systems: the scale-out 2U Power S1014, Power S1022, and Power S1024, along with a 4U midrange server, the Power E1050. These new systems, built around the Power10 processor, have twice the cores and memory bandwidth of the previous generation to bring high-end advantages to the entire Power10 product line. Supporting AIX, Linux, and IBM i operating systems, these new servers provide Enterprise clients a resilient platform for hybrid cloud adoption models.
The latest IBM Power10 processor design includes the Dual Chip Module (DCM) and the entry Single Chip Module (SCM) packaging, which is available in configurations from four cores to 24 cores per socket. Native PCIe Gen5 connectivity from the processor socket delivers higher performance and bandwidth for connected adapters. And IBM Power10 remains the only 8-way simultaneously multithreaded core in the industry.
An example of the advanced technology offered in Power10 is the Open Memory Interface (OMI) connected differential DIMM (DDIMM) memory cards delivering increased performance, resilience, and security over industry-standard memory technologies, including the implementation of transparent memory encryption. The Power10 servers include PowerVM Enterprise Edition to deliver virtualized environments and support a frictionless hybrid cloud deployment model.
Surveys say IBM Power experiences 3.3 minutes or less of unplanned downtime due to security issues, and an ITIC survey of 1,200 corporations across 28 vertical markets gives IBM Power a 99.999% or greater availability rating. Power10 also steps up the AI inferencing game with 5X faster inferencing per socket versus Power9, with each Power10 processor core sporting four Matrix Math Accelerators.
But perhaps even more telling of the IBM Power strategy is the consumption-based pricing in the Power Private Cloud with Shared Utility Capacity commercial model allowing customers to consume resources more flexibly and efficiently for all supported operating systems. As x86 continued to lower server pricing over the last two decades, IBM has rolled out innovative pricing models to keep these advanced systems more affordable in the face of ever-increasing cloud adoption and commoditization.
While most believe that IBM has left the hardware business, the company’s investments in underlying hardware technology at the IBM Research Labs, and the continual enhancements to IBM Power10 and IBM z demonstrate that the firm remains committed to advanced hardware capabilities while eschewing the battles for commoditized (and lower margin) hardware such as x86, Arm, and RISC-V.
Enterprises demanding more powerful, flexible, secure, and yes, even affordable innovation would do well to familiarize themselves with IBM’s latest advanced hardware designs.
IBM is continuing its effort to democratize blockchain technology for developers. The company announced the availability of the IBM Blockchain Platform Starter Plan designed to provide developers, startups and enterprises the tools for building blockchain proofs-of-concept and an end-to-end developer experience.
“What do you get when you offer easy access to an enterprise blockchain test environment for three months?” Jerry Cuomo, VP of blockchain technology at IBM, wrote in a blog post. “More than 2,000 developers and tens of thousands of transaction blocks, all sprinting toward production readiness.”
RELATED CONTENT: Unlocking the blockchain potential
IBM has been focused on bringing the blockchain to enterprises for years. Earlier this year, the company announced IBM Blockchain Starter Services, Blockchain Acceleration Services and Blockchain Innovation Services.
The platform is powered by the open-source Hyperledger Fabric framework, and features a test environment, suite of education tools and modules, network provisioning, and $500 in credit for starting up a blockchain network. Hyperledger Fabric is an open-source blockchain framework implementation originally developed by Digital Asset and IBM.
According to the company, the Blockchain Platform was initially built for institutions working collectively towards mission-critical business goals. “And while Starter Plan was originally intended as an entry point for developers to test and deploy their first blockchain applications, users also now include larger enterprises creating full applications powered by dozens of smart contracts, eliminating many of the repetitive legacy processes that have traditionally slowed or prevented business success,” Cuomo explained.
Other features include: access to IBM Blockchain Platform Enterprise Plan capabilities, code samples available on GitHub, and Hyperledger Composer open-source technology.
“Starter Plan was introduced as a way for anyone to access the benefits of the IBM Blockchain Platform regardless of their level of blockchain understanding or production readiness. IBM has worked for several years to commercialize blockchain and harden the technology for the enterprise based on experience with hundreds of clients across industries,” Cuomo wrote.
RHEL 9.0, the latest major release of Red Hat Enterprise Linux, delivers tighter security, as well as improved installation, distribution, and management for enterprise server and cloud environments.
The operating system, codenamed Plow, is a significant upgrade over RHEL 8 and makes it easier for application developers to test and deploy containers.
Available in server and desktop versions, RHEL remains one of the top Linux distributions for running enterprise workloads because of its stability, dependability, and robustness.
It is free for software-development purposes, but instances require registration with the Red Hat Subscription Management (RHSM) service. Red Hat, owned by IBM, provides 24X7 subscription-based customer support as well as professional integration services. With the money Red Hat receives from subscriptions, it supports other open source efforts, including those that provide upstream features that eventually end up in RHEL itself.
RHEL 9 can be run on a variety of physical hardware, as a virtual machine on hypervisors, in containers, or as instances in Infrastructure as a Service (IaaS) public cloud services. It supports legacy x86 hardware as well as 64-bit x86_64-v2, aarch64, and ARMv8.0-A hardware architectures. RHEL 9 supports IBM Power 9, Power 10, and Z-series (z14) hardware platforms.
RHEL also supports a variety of data-storage file systems, including the common Ext4 file system, GFS2 and XFS. Legacy support for Ext2, Ext3, and vfat (FAT32) still exists.
RHEL scales to large amounts of persistent and transient storage, and RHEL 9 raises the maximum memory to 48TB for x86_64 architectures.
The first step is downloading the operating system and following some straightforward installation steps.
When installing RHEL 9, users are prompted for "Software Selection" options, and we chose Server with GUI. Other options include Minimal Install, Server, Workstation, Custom Operating System, and Virtualization Host.
At this point, additional software can be chosen based on the environment and install functions like DNS Name Server, File and Storage Server, Debugging Tools, GNOME, and Guest Agents, if running a hypervisor. These allow tailoring the type of install based on the role of the server. Next, users can select add-ons for additional environment software to install automatically.
Server with GUI or any of the desktop variants of RHEL 9 come with the GNOME 40 desktop environment. (The latest GNOME version is 42.) For a graphical interface, RHEL 9 uses the Wayland 1.19 graphics-display server protocol with NVIDIA drivers. Wayland is the C library communications protocol that specifies how data will be sent to the display server and clients. The latest Wayland release is 1.21 with RHEL again opting for stability and general availability.
RHEL is a solid operating system for application developers who plan to move working code into production. RHEL 9 comes with GNU Compiler Collection (GCC) 11.2.1 with LLVM, glibc 2.34, and binutils 2.35. Link Time Optimization (LTO) is now enabled by default to help make executables smaller and more efficient.
RHEL 9 comes with Python 3.9 installed by default and supports modern programming languages like Rust and Go. RHEL 9 also comes with updated programming languages including Node.js, Ruby 3.0.3, Perl 5.32, and PHP 8.0.
Red Hat offers the OpenShift Container Platform as its primary product for running Linux containers in a Kubernetes management environment. OpenShift runs on RHEL, and RHEL 9 has available Universal Base Image (UBI) images to support building containerized applications. RHEL 9 also has automatic container updates and rollbacks, and the Podman tool can help notify DevOps teams if containers are failing and automatically rollback to known-good configurations.
Linux software-package management systems have been evolving in recent years. The yum (Yellowdog Updater, Modified) software update utility is being deprecated, but the command itself is still supported. The transition to dnf (Dandified YUM) is complete, and the yum command is now just a symbolic link to dnf-3.
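The symlink arrangement can be illustrated in a throwaway directory. This is a mock-up, not a real RHEL system: `dnf-3` here is a stub script standing in for the real binary, purely to show how `yum` and `dnf` end up invoking the same tool:

```shell
# Mock of RHEL 9's /usr/bin/yum -> dnf-3 symlink, using a stub "dnf-3".
demo=$(mktemp -d)
printf '#!/bin/sh\necho "dnf $@"\n' > "$demo/dnf-3"
chmod +x "$demo/dnf-3"
ln -s dnf-3 "$demo/yum"          # mirrors the RHEL 9 symlink
"$demo/yum" install foo          # prints: dnf install foo
readlink "$demo/yum"             # prints: dnf-3
```

On an actual RHEL 9 host, `readlink -f /usr/bin/yum` shows the same resolution, so existing scripts that call `yum` keep working unchanged.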
RHEL 9 comes with Red Hat Package Manager (RPM) 4.16, and the rpm command can still be used to install files with the .rpm file extension. Flatpak (formerly xdg-app) is another method of packaging and distributing software to Linux systems. Flatpak defines the permissions and resource access that apps require.
RHEL 9 also supports the Red Hat Software Collections (RHSCL) for releasing semi-annual stable updates of critical application software. RHSCL provides updates to software-development tools, web services, database software, and other key software for application environments.
Integrity Measurement Architecture (IMA) can detect files that have been maliciously modified and assess the integrity of the Linux kernel. To validate the authenticity and integrity of the OS distribution, RHEL 9 supports IMA along with Extended Verification Module (EVM) to protect file-extended attributes. RHEL 9 Malware Detection, provided with Red Hat Insights, can perform a security assessment by using YARA pattern-matching software to show evidence of malware.
RHEL 9 also provides greater control over root-user password authentication using SSH. It is possible to disable root-user login with basic passwords to help improve server security. Updated classes, permissions, and features of SELinux are part of RHEL 9 to leverage Linux kernel security capabilities.
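As a hedged sketch of that hardening step, the change amounts to one directive in sshd's configuration. We edit a throwaway copy here; on a real RHEL 9 host the file is /etc/ssh/sshd_config, and you would reload sshd afterwards (systemctl reload sshd):

```shell
# Forbid password-based root login while still allowing key-based root SSH.
conf=$(mktemp)
printf 'PermitRootLogin yes\nPasswordAuthentication yes\n' > "$conf"
# "prohibit-password" permits root only with SSH keys, never a basic password.
sed -i 's/^PermitRootLogin.*/PermitRootLogin prohibit-password/' "$conf"
grep '^PermitRootLogin' "$conf"   # prints: PermitRootLogin prohibit-password
```

Setting `PermitRootLogin no` outright is stricter still, at the cost of requiring a sudo-capable account for all administration.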
RHEL 9 also uses OpenSSL 3.0.1, which improves the cryptographic libraries and processes to Excellerate confidentiality and integrity of web communications.
Red Hat systems are often used in environments that require heightened levels of security and must meet certain security compliance requirements. Governments often require Security Technical Implementation Guide (STIG) configuration standards along with validation using Security Content Automation Protocol (SCAP). RHEL 9 supports OpenSCAP 1.3.6 and can use the SCAP Security Guide (SSG) and the RHEL 9 Open Vulnerability Assessment Language (OVAL) signatures to check for compliance.
Red Hat Insights is a management and operations service that reviews RHEL systems and provides compliance, vulnerability, patching, configuration, and optimization advice. The Red Hat Insights Image Builder allows creation of custom RHEL images for simplified deployment to environments including cloud infrastructure.
Red Hat offers Image Builder as-a-Service to customize and standardize a preferred RHEL 9 image and run it in an IaaS cloud service provider. Image Builder can create blueprints to customize the bootable ISO installer image. The new version of Image Builder supports creation of separate logical filesystems. This helps when meeting security-compliance requirements that call for specific directories and file systems to use dedicated partitions for STIGs.
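As a sketch of what such a blueprint looks like, the snippet below writes a minimal one to a temp file. The name and sizes are illustrative assumptions; the `[[customizations.filesystem]]` stanza is the blueprint mechanism for carving out a dedicated partition such as /var/log:

```shell
# Write a hypothetical Image Builder blueprint (TOML) with a dedicated
# /var/log filesystem, the kind of layout STIG-style baselines call for.
bp=$(mktemp -d)/rhel9-stig.toml
cat > "$bp" <<'EOF'
name = "rhel9-stig-base"
description = "RHEL 9 image with a dedicated /var/log partition"
version = "0.0.1"

[[customizations.filesystem]]
mountpoint = "/var/log"
size = "2 GiB"
EOF
grep -c 'customizations.filesystem' "$bp"   # prints: 1
```

On a real system you would push a blueprint like this with `composer-cli blueprints push` before composing the image.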
Web-based monitoring and administration tool Cockpit comes with RHEL 9, making management and operations easier for those new to Red Hat system management.
Red Hat emphasizes uptime and supportability while keeping systems patched. RHEL 9 supports kernel live patch management that allows patching a running Linux kernel without rebooting or restarting processes.
Red Hat systems often run in cloud environments. RHEL 9 includes Resource Optimization for cloud deployments to help size the system appropriately for its workload and to balance performance and costs.
The first step toward using RHEL 9 is installing it in a test environment to get to know how it works. The 60-day demo subscription can get you started. It is important to thoroughly test RHEL 9 before lifting and shifting workloads onto new RHEL 9 systems; upgrading in-place is discouraged.
Next, perform an asset inventory of all the RHEL systems in the environment. It’s okay to admit that there are some old RHEL 6 and 7 systems in the environment in desperate need of upgrades. Some organizations may even have a few RHEL 5 and CentOS 4 systems lurking about their data centers. Those older servers are ideal candidates for RHEL 9 upgrades.
Red Hat contributes to many open-source software projects, and CentOS Stream is now the upstream source for RHEL. Check out CentOS Stream 9 (released December 3, 2021) to preview features that may be coming to RHEL 9.1.
If you want to check out the latest Linux features for free, the Fedora Project (now Fedora 36) may be something to obtain and install. Fedora is intended to have the most leading-edge features and provide a vision for the future progression of the RHEL OS. Red Hat is the primary contributor to the Fedora Project, but it also has worldwide community contributors.
Fedora Workstation 36 (released May 10, 2022) comes with the latest GNOME 42 desktop along with many other new features and software. Fedora 37 will be released in December 2022, an aggressive release schedule that promotes innovation and rapid evolution of new features.
Red Hat provides long-term support for customers who run production applications for many years and require that stability. It also publishes its release schedule and the support life cycle of its operating systems. The schedule had been a new major release every five years, but with RHEL 9 the company has moved to a three-year cadence. Dot releases occur annually, so RHEL 9.1 should be out around May 2023.
Support for major RHEL releases spans 10 years: five years of full support followed by five years of maintenance. For example, RHEL 6 was released in May 2011 and is now in the Extended Life-cycle Support (ELS) phase for customers who purchase that add-on subscription.
RHEL 9 won't enter the ELS phase until May 2032. It's hard to plan that far in advance, but Red Hat has a long tradition of honoring commitments to customers.
Based on the transparency of the release schedule and Red Hat’s history of meeting it, we can expect RHEL 10 to be out sometime in May 2025.
Copyright © 2022 IDG Communications, Inc.
Press release content from Business Wire. The AP news staff was not involved in its creation.
DUBLIN--(BUSINESS WIRE)--Jul 14, 2022--
The “Global Cloud Computing Market, By Deployment Type, By Service Model, Platform as a Service, Software as a Service, By Industry Vertical & By Region - Forecast and Analysis 2022 - 2028” report has been added to ResearchAndMarkets.com’s offering.
The Global Cloud Computing Market was valued at USD 442.89 Billion in 2021, and it is expected to reach a value of USD 1369.50 Billion by 2028, at a CAGR of more than 17.50% over the forecast period (2022 - 2028).
Cloud computing is the delivery of hosted services over the internet, including software, servers, storage, analytics, intelligence, and networking. Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS) are the three types of cloud computing services.
The expanding usage of cloud-based services and the growing number of small and medium businesses around the world are the key drivers of market growth. Enterprises all over the world are embracing cloud-based platforms as a cost-effective way to store and manage data. Commercial data demands a lot of storage space. With the growing volume of data generated, many businesses have moved their data to cloud storage, using services like Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
The growing need to control and reduce Capital Expenditure (CAPEX) and Operational Expenditure (OPEX), as well as the increasing volume of data generated by websites and mobile apps, are further drivers of growth. Emerging technologies like big data, artificial intelligence (AI), and machine learning (ML) are gaining traction, fueling growth in the global cloud computing industry. The market is also driven by factors such as data security, faster disaster recovery (DR), and meeting compliance standards.
Aspects covered in this report
The global cloud computing market is segmented on the basis of deployment type, service model, and industry vertical. Based on the deployment type, the market is segmented as: private cloud, public cloud, and hybrid cloud. Based on the service model, the market is segmented as: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Based on industry vertical, the market is segmented as: Government, Military & Defense, Telecom & IT, Healthcare, Retail, and Others. Based on region it is categorized into: North America, Europe, Asia-Pacific, Latin America, and MEA.
Key Market Trends
For more information about this report visit https://www.researchandmarkets.com/r/m9wewu
View source version on businesswire.com:https://www.businesswire.com/news/home/20220714005444/en/
Laura Wood, Senior Press Manager
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
INDUSTRY KEYWORD: SOFTWARE TECHNOLOGY NETWORKS DATA MANAGEMENT
SOURCE: Research and Markets
Copyright Business Wire 2022.
PUB: 07/14/2022 06:38 AM/DISC: 07/14/2022 06:38 AM
Predictive maintenance based on machine learning has reached the point where it can benefit virtually every manufacturer, big or small, an expert will tell engineers at the upcoming Pacific Design & Manufacturing Show.
Kayed Almasarweh, IBM’s Watson and cognitive solutions lead, contends that machine learning and artificial intelligence can minimize unplanned downtime, eliminate maintenance guesswork, optimize supply chain management, and reduce warranty costs in products, if used correctly. “This is not only for big manufacturing operations; it’s for everybody,” Almasarweh told Design News. “Once you get it implemented with the right data, you can get a return on investment almost immediately.”
Kayed Almasarweh of IBM: “This is not only for big manufacturing operations; it’s for everybody.” (Image source: IBM)
Almasarweh will provide a high-level view of predictive maintenance based on machine learning in a session titled, "Applying IoT and Machine Learning for Predictive Maintenance," at the Anaheim Convention Center on February 6th. At the session, he will discuss challenges, successes, and lessons learned using real-life examples from industry.
Today, Almasarweh said, applications for predictive maintenance are widespread and growing by the day. The technology could be applied to CNC machines, assembly robots, conveyor belts, stamping machines, chain rails, locomotives, trucks, and just about any other imaginable factory asset, he noted. “As long as the asset is used by a business to generate revenue, make products, or move things, it’s a target for predictive maintenance,” he told us.
To a large degree, predictive maintenance is being fueled by the broader availability of data in virtually all kinds of businesses, Almasarweh said. Many businesses have sensors in place, as well as access to data storage in the cloud. Moreover, central processing units (CPUs) and graphics processing units (GPUs) have improved so dramatically in the past decade that many can now analyze hundreds of millions of transactions per second. Together, the sensors, storage, and computing capabilities are creating a foundation for machine learning and artificial intelligence that hasn’t been available previously.
“Data is the heart of being able to do machine learning and AI,” Almasarweh said.
To get started, some manufacturers may need to bring in past information, such as data about previous machine failures and parameters that were captured along the way to those failures. In those cases, the data may be stored on electronic media, or even on paper, in files or folders.
Once the data is available, Almasarweh said, implementation of predictive maintenance involves one of two approaches. Manufacturers can purchase a machine learning model or develop one themselves. Those who develop their own models tend to be larger enterprises with bigger engineering staffs.
Either way, he said, the potential benefits are there. “If your maintenance costs are spinning out of control and it’s keeping your plant manager up at night, then you probably need to look at predictive maintenance,” he said. “It’s something that works and something that keeps getting better with time.”
Senior technical editor Chuck Murray has been writing about technology for 34 years. He joined Design News in 1987 and has covered electronics, automation, fluid power, and automotive technology.
Integrated services company Downer has entered into a 10-year collaboration deal with IBM Consulting to explore possibilities working with artificial intelligence (AI) and other technologies in reducing its carbon footprint across its rail and transit systems.
Downer first began working with IBM in 2017 to modernise its technology platform, embedding digital and intelligent capabilities into its civil infrastructure operations.
The platform now uses IBM Maximo along with IBM Cognos Analytics with Watson to improve the availability, reliability, and safety of its services and the fleets it maintains for customers.
This next phase of Downer’s digital transformation journey will involve a range of IBM technologies and services that work together to provide Downer a single view of the life, health and carbon footprint of all assets within the Rail and Transit Systems division, while working to keep it secure from cyber security threats.
“Sustainability is a critical focus for Downer and our customers. We have a clear roadmap to get to Net Zero by 2050,” Downer head of growth for rail and transit systems Adam Williams said.
“With IBM we have jointly created a single platform that provides a comprehensive suite of capabilities to support the sustainment of rolling stock (all railway vehicles).
“Our Rail and Transit Systems division is evolving towards becoming a supplier of digital services, and this is how we will differentiate ourselves in the marketplace.”
IBM Systems senior vice president Ric Lewis said strategic partnerships were necessary to overcome skills and talent shortages.
“Skills and talent remain the greatest challenge and hindrance to successful implementation of new technologies,” Lewis said.
“IBM’s new ecosystem-led approach represents the biggest change to our go-to-market model in 30 years. We continue to simplify the way our partners work with us, access clients, and deliver consistent client experiences.”
In April, recently crowned vice president and general manager of Asia Pacific (APAC) Paul Burton vowed to treat the regional channel as "king”.
"I experienced first-hand how it is to be an IBM partner and I wasn’t necessarily happy with it, to be frank," he said at the time.
"In my opinion, the channel was not properly cared for and did not have the proper focus. It has always been focused on large clients, large deals, multi-hundred-million-dollar deals.
"That drove everything. But in the last few years, the channel has been king for us, especially in Asia Pacific."
Burton came into the new APAC role with three distinct aims: not to disintermediate partners or compete with them; to ensure working with IBM is mutually beneficial for partners; and not to leave partners on their own, but rather to work and sell with them.
"It’s very much let’s work together – let's create together," he said.
According to Burton, there are now 1,500 partners in APAC consuming IBM's co-marketing dollars.
The Qatar Public Works Authority 'Ashghal' has awarded IBM the contract to provide a smarter road and drainage infrastructure in the gas-rich country.
The new system will enhance the quality of services, safety and environmental sustainability for citizens in the country.
In line with the Qatar National Vision 2030, Ashghal and IBM will deploy an Enterprise Asset Management Solution (EAMS) to effectively manage the operation and maintenance of the country's roads and drainage networks and multiple effluent and water treatment plants.
The solution will enable Ashghal to rapidly evaluate and respond to defects or incidents reported by citizens and to anticipate and prevent problems. Enabled through the use of mobile devices, Ashghal will be able to quickly plan work requirements, determine resource availability, and ensure the right crew responds with the right materials and tools.
The new system will also gather and analyse millions of discrete pieces of information about the country's road and drainage assets through a Geographic Information System (GIS) to allow the location of assets or work to be determined and tracked in real time.
During the project kick-off meeting Ashghal's president Nasser bin Ali Al Mawlawi said the implementation of EAMS is a pivotal step towards enhancing and streamlining the services of Ashghal's roads and drainage operations and maintenance departments.
"Designed with a focus on customer centricity, the solution will help Ashghal align its asset management services with the organisation's overall business goals through a system that will ensure lower asset failure frequency and timely maintenance. With this advanced software, Ashghal will gain real-time visibility into the country's asset usage and better govern and manage the lifecycle of road and drainage networks to achieve higher returns on national investment," he added.
According to him, the ability to draw from multiple sources across Qatar will also provide better insight into the condition of pipes buried deep underground in specific locations and the road network.
"This will help reduce the frequency of maintenance interventions, which in turn will help reduce traffic congestion and increase public safety. It will ensure the road and drainage systems are safer and environmentally sustainable," he added.
With a total land area of 11,500 sq km and a population of 2.2 million, Qatar has experienced rapid economic growth over the last several years. This economic growth has resulted in increased demand for government entities in Qatar to provide a world-class infrastructure.
IBM opened an office in Qatar in April 2012 as part of the company's expansion in the Middle East to meet the growing needs of customers in the region.
"Building a smarter infrastructure is the foundation to establishing a smarter economy. Citizens are also placing increasing demands on their leaders to innovate and progress," remarked Amr Refaat, the general manager, IBM Middle East and Pakistan.
"The roll out of the Enterprise Asset Management Solution is a key demonstration of how Ashghal is already executing on Qatar's journey to a smarter economy leveraging Smart City concepts and enhancing citizen services," he added.
Based on IBM's Maximo Asset Management Software, the solution will transform the way road and drainage asset data, maintenance work, and ultimately customer services are managed within the Authority's Asset Affairs operations.
The Germany data center market is expected to grow at a CAGR of 5.3% over the forecast period of 2022–2031. Big data and IoT technology will increase investments in the data center market as enterprises in Germany are observing high data generation across industries such as BFSI, IT & Telecom, Healthcare, Government & Defense, etc.
– The implementation of the General Data Protection Regulation (GDPR) also acts as a driver for data center investment and regional cloud network development in Germany. For instance, Microsoft opened cloud regions in Switzerland (2018) and Germany (2019), and planned to open a cloud region in Norway (2020).
– Similarly, Google announced it would open a cloud region in Frankfurt, Germany, in 2020. Hence, the implementation of data protection and privacy policies in Europe will contribute to global data center market growth.
– The COVID-19 pandemic and the resulting need to enforce social distancing during lockdowns have driven the public sector in Germany to shift from traditional channels to digital channels, enabling citizens, businesses, and public sector staff to access public services and securely share data from remote locations. The crisis has reinforced the importance of data centers and what they do. Demand for cloud services is soaring in some sectors but withering in verticals that have shifted into survival mode. All of these factors support steady market growth in both the short and long term.
Key Market Trends
Increase in Colocation & Hyperscale Investments To Drive the Growth
– The primary factor driving the growth of data centers in Germany is the increased investment by colocation service providers in the European colocation market. The growing number of cloud service providers and the expanding information technology industry are also boosting the data center market in Germany.
– An increase in data generation every year is forcing many companies to double their on-premise storage from time to time. More companies are opting for the data center as it addresses their storage issues without substantial upfront costs.
– Companies have big-budget, and those who need more space for storing data are going for wholesale data center colocation. For instance, Vantage acquired Etix to expedite wholesale data center capacity delivery for hyper-scale and enterprise customers looking to expand in Frankfurt. The company is planning to invest $2 billion across five markets in Europe, including a “crown jewel” 55MW greenfield campus under construction in Offenbach, Germany, just outside of Frankfurt.
– The rising number of smart hospitals in Germany, driven by growing investments in the country’s digital healthcare infrastructure, along with increasing investment in communications technology, is also boosting the growth of the data center colocation market in Germany.
Growing IT Infrastructure to Fuel the Market Growth
– Germany is the fifth-largest digital economy in the world. Over 80% of enterprises in the manufacturing sector in Germany plan to digitize their value chain by 2024. Increased digital economy initiatives, along with factors such as high industrial tech spend and growth in smart city initiatives, are leading to increased edge data center deployment.
– Public cloud services dominate the data center market in Germany. Government agencies are driving growth in private cloud services as they plan to make greater use of cloud services in public administration during the forecast period. However, hybrid cloud services have stronger growth potential than either private or public cloud services.
– The increased adoption of Big Data and IoT technologies across various industries in Germany has led to high data generation across the region. Such trends create a need for efficient IT infrastructure to manage the enormous amounts of data and thus provide growth opportunities for the data center market in Germany. Berlin, Hamburg, and Munich are the three leading smart cities in Germany using IoT for business and commercial purposes.
– Increased emphasis on digitization, from connectivity to data and service architectures, is also driving the growth of data centers in Germany. However, the high cost associated with data centers acts as a hindrance to market growth.
The Germany data center market is highly concentrated due to high initial investments and low availability of resources, which present challenges to this market. Some of the key players in the market are Cisco Systems Inc., IBM Corporation, and Huawei. Some recent developments in the market include:
– In February 2020, Huawei launched the Intelligent Data Center Service Solution at the Industrial Digital Transformation Conference 2020. This service helps customers design, build, and operate high-reliability (Tier IV), green, and intelligent data centers. With the aid of artificial intelligence, Power Usage Effectiveness (PUE) can be reduced by 8%-15%.
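As a rough illustration of what an 8%-15% PUE reduction means, the sketch below computes PUE (total facility energy divided by IT equipment energy) with hypothetical energy figures; the numbers are illustrative only and are not taken from the Huawei announcement.

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 is the theoretical ideal: all power goes to IT equipment.
# The energy figures below are hypothetical, chosen to give a baseline PUE of 1.5.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a data center over some measurement period."""
    return total_facility_kwh / it_equipment_kwh

baseline = pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000)

# Apply the 8%-15% reduction range cited for the AI-assisted solution.
improved_low = baseline * (1 - 0.08)    # 8% reduction
improved_high = baseline * (1 - 0.15)   # 15% reduction

print(f"baseline PUE:    {baseline:.3f}")
print(f"after  8% cut:   {improved_low:.3f}")
print(f"after 15% cut:   {improved_high:.3f}")
```

On these assumed figures, a baseline PUE of 1.5 would fall to roughly 1.38 at the low end of the cited range and about 1.28 at the high end; lower overhead translates directly into less non-IT energy spent per unit of computing.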
– In December 2019, IBM expanded the availability of IBM Power Systems Virtual Servers on IBM Cloud to an IBM Cloud data center in Germany. In addition to Washington, D.C., and Dallas, TX, the company’s AIX and IBM i users can now provision in Frankfurt, Germany.
SDKI Inc.’s goal is to analyze market scenarios in various countries, including Japan, China, the United States, Canada, the United Kingdom, and Germany. We also focus on providing reliable research insights to clients around the world, including growth indicators, challenges, trends, and competitive environments, through a diverse network of research analysts and consultants. Having gained trust and a customer base in more than 30 countries, SDKI is focused on expanding its foothold in other pristine economies.
MOUNTAIN VIEW, Calif. -- June 9, 2008 -- Synopsys, Inc. (NASDAQ: SNPS), a world leader in software and IP for semiconductor design and manufacturing, today announced the availability of its RTL-to-GDSII low power reference design flow for the 45-nanometer (nm) Common Platform™ technology offering from IBM, Chartered Semiconductor Manufacturing Ltd. and Samsung Electronics Co., Ltd. The reference flow, derived from Synopsys' tapeout-proven Pilot Design Environment, offers a comprehensive design implementation methodology that enables system-on-chip development teams to reduce power and cost while improving performance when designing with the Common Platform technology 45-nm process. The reference flow is built around Synopsys' Eclypse™ Low Power Solution incorporating Galaxy™ Design Platform implementation and signoff tools and the widely adopted Unified Power Format (UPF) language, using the latest technology files from the Common Platform foundries and ARM® Physical IP standard cells, I/Os, memories and the Power Management Kit for the CMOS11LP process.
"The industry continues to face low power design challenges that require leading companies to unite in providing proven methodologies and flows to optimize power management," said Tom Lantzsch, vice president of Marketing, ARM Physical IP Division. "The Synopsys reference flow, enabled by ARM Physical IP including the Power Management Kit, allows designers to easily implement power reduction techniques needed in advanced systems design."
The new low power reference flow takes chip designers through each step of the design process to optimize and implement highly complex 45-nm low power designs. The reference flow enables engineers to express low power design intent using UPF, while supporting detailed implementation and analysis with a full suite of tools from the Galaxy Design Platform, including Design Compiler® synthesis, IC Compiler physical design, DFT MAX scan compression, Formality® equivalency checking, Star-RCXT™ extraction, and PrimeTime® signoff. The reference flow automates and simplifies the adoption of advanced low power technologies and techniques including concurrent multi-corner multi-mode (MCMM) analysis and optimization, multi-threshold CMOS (MTCMOS) power gating, multi-threshold leakage optimization, power-aware placement and clock tree synthesis, and power-aware test techniques.
"Today's leading companies push the bounds of integration, power and cost in order to develop market advantage. The Common Platform is collaborating with Synopsys, a leader in electronic design automation, in the development of 45-nanometer optimized reference flows to support one of the most advanced process implementations available to designers today," said Kevin Meyer, vice president of Industry Marketing and Platform Alliances at Chartered, on behalf of the Common Platform technology alliance. "Expanding this joint effort to include low power Physical IP from ARM on 45-nanometer Common Platform technology demonstrates how innovative collaboration can benefit mutual customers."
"The 45-nanometer reference flow is the latest achievement resulting from the ongoing collaboration between the Common Platform companies, ARM and Synopsys," said Rich Goldman, vice president of Corporate Marketing and Strategic Market Development at Synopsys. "This collaboration by industry leaders allows chip designers to take full advantage of advances in Common Platform technology, Synopsys design tools, and ARM Physical IP to meet project requirements in a complete, consistent and validated design environment."
To learn more about how IBM, Chartered, Samsung, ARM and Synopsys are innovatively collaborating on a 45-nm low power reference flow design solution, visit the Common Platform partner booth #1341 and Synopsys booth #1349 to register to attend the 45-nm low power reference flow for Common Platform technology go-deep technical suite session at the 45th Design Automation Conference (DAC) in Anaheim, California, June 9 through June 12.
The reference flow is expected to be available in July 2008 at no charge to Synopsys customers and may be obtained by completing the request form at http://www.synopsys.com/cp-refflow-request/. Supporting physical IP and technology files are available from their respective suppliers.
Synopsys, Inc. (NASDAQ: SNPS) is a world leader in electronic design automation (EDA), supplying the global electronics market with the software, intellectual property (IP) and services used in semiconductor design and manufacturing. Synopsys' comprehensive, integrated portfolio of implementation, verification, IP, manufacturing and field-programmable gate array (FPGA) solutions helps address the key challenges designers and manufacturers face today, such as power and yield management, system-to-silicon verification and time-to-results. These technology-leading solutions help provide Synopsys customers a competitive edge in bringing the best products to market quickly while reducing costs and schedule risk. Synopsys is headquartered in Mountain View, California, and has more than 60 offices located throughout North America, Europe, Japan, Asia and India. Visit Synopsys online at http://www.synopsys.com/.