Architecture Classics: IBM Building / Mario Roberto Álvarez & Associates
Located in the Retiro district of the Autonomous City of Buenos Aires, the IBM Building was designed by Mario Roberto Álvarez & Associates. Conceived around 1979 to house IBM's headquarters, the office building consists of a tower supported by two large concrete structural cores rising from a base; the base is set apart from the ground and from the tower shaft to accommodate the ground floor and a level of common areas, preserving the urban scale. The building's language is recognizable from a distance: an envelope of horizontal glass bands and exposed-concrete parapet-sunshades that balance the proportion of solid and void.
Inaugurated around 1983, the project was designed from a series of premises that took account of every restriction in the building codes: use of the maximum permitted floor area; a free plan with a clear, simple reinforced-concrete structure without interior columns; and optimization of the naturally lit perimeter. The design also attended to the pedestrian scale and the hierarchy of entrances at ground level, the modulation of façades and ceilings for a greater number of offices and more flexibility, compliance with the client's safety regulations, and an integral solution to façade maintenance together with egress to the external stair.
Statement by the architects. The work configures a contemporary, functional, efficient, and economical building which, without being an extravagance, stands apart from its neighbors. A building that is the product of interpreting and complying with the client's criteria, objectives, and concepts, compatible with its neighbors but different from the surrounding glass boxes of Catalinas Norte.
Architectural and engineering considerations resulted in the proposed structural skeleton, which proved to be the most practical and economical. The standard plan is supported by two central cores and a series of perimeter columns, with a module of 1.50 meters. The two central cores rest on direct foundations and the perimeter columns do not reach the ground but transfer their load to the cores by means of a special structural system.
This transition structure consists of two plates, one lower and one upper. The lower plate rises from the cores toward the sloping edge and meets the upper plate, which runs horizontally in line with the columns. In this way, the load of the perimeter columns is gently deflected toward the cores by a logical and economical structure. This structure cantilevers beyond the line of the columns, distributing their loads and creating space to anchor the reinforcing bars. A grid of beams provides the necessary stability and support for vertical and asymmetrical loads.
This modulation of the structure gives the floor plans maximum flexibility to lay out offices to IBM's optimum, with minimum office bands of 3 m on either side of a 1.50 m corridor. A perimeter overhang with sunshades provides security and also serves as an escape route to an additional external fire stair.
The typical window is made of anodized aluminum with athermic glass above, while the sill panel is compact vitreous glass, painted and baked on the outer face. Every other window includes an operable transom for ventilation in case of emergency, so that no office of at least 3 x 3 m is left without ventilation. On each façade, one of these modules opens from floor to ceiling to give access to the escape stair.
On the first floor, there is a terrace, garden, and lounge area for various functions related to the offices, which occupy the 3rd through 19th floors, leaving the top floor for the machine room. Two of the three basement levels are devoted to parking, while the remaining one houses service connections and various computer circuits.
Project: MRA+A | Mario Roberto Álvarez & Associates
Alvarez, Kopiloff, Santoro, Satow, Rivanera
Location: Carlos M. de la Paolera 275, Autonomous City of Buenos Aires, Argentina
Land Surface: 2,736 m²
Total Built Surface Area: 32,000 m²
While the “death of the mainframe” may be a long way off, if it comes at all, companies are currently looking for exit strategies from big iron, with logistics multinational FedEx making headlines of late by announcing it will retire all its mainframes by 2024 in a bid to save $400 million annually.
As part of its goal to achieve carbon-neutral operations globally by 2040, FedEx is adopting a zero data center/zero mainframe environment, running half its compute in colocation facilities and half in the cloud — a move that will also help the global logistics provider be more flexible, secure, and cost-effective, says Ken Spangler, executive vice president of global information technology at FedEx.
“Mainframes were not in our long-term plan,” he says. “Over a 10-year period, we have been evolving slowly away from the mainframe base; it’s basically the retire, replace, and re-engineer strategy.”
So far, 90% of FedEx’s big-iron applications have been moved off the company’s mainframes, but 10% are “sticky,” because of integration issues due to layers of interdependencies, Spangler says, adding that FedEx has “some unique operating companies” in its portfolio with their own technologies that have a lot of dependencies.
The undertaking is as massive as it sounds, as migrating compute-intensive systems on mainframes out of data centers and into the cloud is not for the faint of heart.
Still, companies such as IBM are taking steps to help companies’ mainframe applications have an afterlife in the cloud, and many enterprises are embarking on journeys to modernize their existing mainframe strategies for the digital era, including investing further in the latest big iron.
But for companies like FedEx looking to divest from their mainframe estates in favor of the cloud, a methodical approach is essential. Motivations for making the move vary, says Mike Chuba, managing vice president at Gartner. In some cases it’s a “graying of the skillset” and in others, aging equipment and cost, he says.
“The analogy I use is, if you’re a homeowner and haven’t done basic maintenance for 10 to 15 years and things are falling apart, you’ve got a very difficult decision: whether to make that substantial investment to catch up … or look to move someplace else,’’ Chuba says.
For smaller mainframe shops that have “fallen far behind, and the mainframe hasn’t been a strategic asset and competitive differentiator,” the choice may be less cut and dried, he says. “They may be running on hardware that is 10 years old with unsupported software, so attempting to modernize may be too large an undertaking.”
But for entities that can potentially realize a future without having to maintain big iron in-house, here are vital insights from IT leaders who have begun the journey.
For cable manufacturer Southwire, the impetus to move off mainframes was aging equipment. It became a question of “did we want to be in the data center business or are there other people who do processing better,’’ says Dan Stuart, senior vice president of IT at Southwire, which makes wire and cable for transmitting and distributing electricity.
Another factor was “cost avoidance,” Stuart says, as the equipment refresh cycle and software contract renewals were approaching. Instead, the company opted to move its core SAP environment and Tier 1 systems, including the company’s manufacturing resource system, to Google Cloud Platform (GCP).
The migration occurred mid-pandemic in July 2020 and was undertaken by a combination of internal staff, Google services, and a third-party provider, Stuart says, adding that Southwire’s core SAP system still runs on an IBM DB2 database in GCP, whereas its other Tier 1 applications run on Google Cloud VMware.
The migration took about eight to nine months, and Stuart is happy with the results. “We haven’t experienced many problems at all” running SAP in the cloud, he says. “I would say fewer than on-premise.”
But not having a “well laid-out project plan” around data is something that Stuart says did result in issues. “If I were to do this again, I’d look at the size of our databases and clean them up before I cut over and take a lot of historical data and archive it,” he says. “The real ‘gotcha’ for us was we needed about two full days of downtime to do this and for a company that runs 24/7, that’s about all the time we have.”
Up next is moving a couple of other Tier 1 manufacturing systems that Stuart says are ready for the cloud now that IT has implemented SD-WAN.
“We knew we had to increase our bandwidth to reduce any type of challenges with performance,’’ he explains. “We just started rolling out SD-WAN with redundant data lines with network providers to reduce the amount of downtime and increase the amount of bandwidth coming through.”
Based on his experience, Stuart advises IT leaders to clean and purge data before moving mainframe applications to the cloud. “You don’t want to carry [excess data] over because you don’t want to pay for that. So right-sizing that environment would be highly recommended. After that, you know exactly the data you want to bring over,” he says.
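In practice, the right-sizing Stuart describes can be as simple as archiving rows older than a retention boundary before the cutover. The sketch below illustrates the idea only; the table, column, and cutoff date are hypothetical, and SQLite stands in for whatever database engine a real migration would target.

```python
# Illustrative sketch: archive historical rows before a cloud cutover so the
# migrated database is right-sized. Table and column names are hypothetical.
import sqlite3  # stand-in engine; a real migration would target DB2/SAP tables

CUTOFF = "2015-01-01"  # hypothetical retention boundary

conn = sqlite3.connect("erp.db")
with conn:
    # Copy rows older than the retention boundary into a cheap archive table...
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders_archive AS "
        "SELECT * FROM orders WHERE 0"
    )
    conn.execute(
        "INSERT INTO orders_archive SELECT * FROM orders WHERE order_date < ?",
        (CUTOFF,),
    )
    # ...then purge them from the live table so only current data is migrated.
    conn.execute("DELETE FROM orders WHERE order_date < ?", (CUTOFF,))
conn.close()
```

Run against a copy first; shrinking the live dataset up front is what cuts the cutover downtime Stuart warns about.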
By moving to the cloud, Southwire has been able to streamline its disaster recovery process as well. And because the company is “very big on ESG and sustainability,” getting out from under having to run and maintain mainframes gives the company a reduction in its carbon footprint, Stuart says.
By contrast, FedEx’s approach to weaning off on-premises mainframes is multivariant. For example, as part of its “retire, replace, and re-engineer strategy,” FedEx’s freight company environment — one of those 10% “sticky” mainframe applications — will be retired because “it wasn’t worth completely re-engineering and investing a lot of money,’’ Spangler says.
“We want to have efficient enterprise solutions, so in that case, we’re re-platforming off the mainframe because it will go away in two years and we will have [new] enterprise solutions,’’ he says. Spangler added that “we’re being very cautious about not just re-platforming things generically.”
Overall, FedEx’s mainframe divestment work is being done by a combination of internal and external teams. The “heavy part” of its mainframe retirement plan got under way in 2021. The goal is to be done by 2023.
Still, Spangler advises IT leaders to “take an economic view” of what to migrate given that there are still “tremendous technology capabilities” that exist on the mainframe. “It can’t be a theoretical thing,” he says. “We just know for our environment, because we’re more than a 40-year-old company … we have old technologies we were replacing anyway, and when we looked at our enterprise strategy, it just made sense.”
Spangler says IT leaders should also keep the principles of engineering and architecture in mind. “A lot of people are so focused on getting rid of their mainframes they end up with a mess,” he says, adding that strong engineering and architecting upfront will help ensure you end up with something that is modern, world-class, expandable, secure, and modifiable.
Lastly, Spangler recommends that IT leaders “continuously update your plan because it’s a battle. It’s hard. Brutally hard. We literally zero-base our business case on this every quarter and build from the bottom up.’’
Doing so requires FedEx to look at all the costs and saving elements and start with a clean sheet that considers whether the assumptions pan out against the reality. This ensures that if something has changed, officials are aware of it, he says.
“Every week, every quarter, and every year we know more,’’ he says. “Right now we’re very stable. We’re super confident with a high line of sight and we are executing very strongly.”
When deciding whether it’s time to move away from hosting your own big iron, there are a number of variables to consider. Besides the cost of modernizing your mainframe operations and applications, and taking into consideration the internal skills necessary to keep a mainframe and its applications chugging, organizations need to think about the value of availability, security, resiliency, and transactional integrity — which are often hard to quantify, Gartner’s Chuba says.
“People have been trying to move off the mainframe for the last 10 to 15 years, and plenty of CIOs are lying alongside the road. … They came in with a charter to move off the mainframe and have failed,’’ he says. “Part of that is that vendors have overpromised, but the truth is it’s not easy. The low-hanging fruit has moved off [the mainframe] because there are places those apps can be moved more efficiently.” But if a mission-critical application is migrated and then goes down, a company could find itself out of business, Chuba says.
Cloud providers, and especially the hyperscalers, have put a lot of resources and investment into making it somewhat easier for companies to migrate applications off their mainframes in the past 10 years, he says — capabilities that will keep getting better.
That said, for most organizations, and large mainframe shops in particular, “the mantra is, ‘Do no harm to those business-critical applications,’’’ Chuba says. “They need a solid business case and assurances the transition will be seamless and their apps will run with the same level of performance, resiliency, transactional integrity, and security in the cloud as what they’ve had in mainframes.”
As CIOs contemplate what to do about their mainframes, Chuba says it boils down to a few essential factors: “If you’ve got a skills issue, first and foremost, you have to do something — whether move to the cloud or an MSP,’’ Chuba says. “If you don’t have the [mainframe] skills you don’t have many options. You can’t just shut the door and turn off the lights and hope and pray things will run.”
As for those weighing moving their mainframe applications to the cloud versus modernizing them, “the discussion is the degree of risk you’re willing to take,” he says, pointing out that if a mainframe migration project stretches out over three to six to nine to 12 years, IT leaders are incurring a lot of costs along the way.
“FedEx is kind of sitting at the poker table and saying, ‘We’re all in.’ If they can do that and pull it off in a timely manner, I have no doubt …. they’ll be able to claim victory,’’ Chuba says. “But for customers who drag their feet or lose the momentum on these projects [after] starting with low-hanging fruit and then the project gets bogged down and they chase the next shiny object … costs could turn out to be pretty significant.”
FedEx’s Spangler agrees that regardless of the environment you’re retiring, IT — and the company — has to remain committed. “You have to lead it [and] you have to drive it hard, because these kinds of technologies are very integrated. And you have to stay focused. That’s the hard part,” he says.
Two recent "state of cloud" reports offer advice on how to address the everlasting, crippling cloud skills dearth.
The inability to find IT pros with requisite cloud skills is persistently identified as a major challenge for organizations moving to cloud computing, as we have covered in recent articles like "Cloud Strategy Survey Highlights Skills Shortage, Cloud Overspend" and "IBM Cloud Study: 'Initial Excitement' Bends to Skills, Security, Compliance Challenges" and even articles from years ago like this 2018 piece, "Report: 'Cloud and Distributed Computing' Are Skills Companies Need Most."
In fact, only 8 percent of global technologists have significant cloud-related skills and experience, says the recent "2022 State of Cloud" report from Pluralsight, a technology workforce development company known for its training courses.
That report, like many before it, identifies security skills as the most coveted in this age of rampant ransomware and other cybersecurity threats, though it also highlighted a need for database analytics, networking, machine learning and several other highly sought-after skills.
Pluralsight said security was the No. 1 obstacle preventing organizations from achieving cloud maturity. "When 40 percent of leaders and learners agree that security is the top skills gap, we have a serious problem," the company said in a blog post last month. "That is, unless you want to see your organization's name splashed across the headlines. Cloud computing is the future. That much is clear. But to provide consumers with reliable solutions, we have to prioritize security."
The 8 percent figure mentioned above led off Pluralsight's list of major takeaways from its report.
Pluralsight's report compiled survey results from more than 1,000 technologists and leaders in the United States, Europe, Australia and India on the most current trends and challenges in cloud strategy and learning.
A second report, "IBM Transformation Index: State of Cloud," also emphasized how crucial security concerns are. That report is designed to help organizations gauge how they fare against industry and local cloud norms in a variety of cloud areas. IBM said the report is based on its own research with more than 3,000 IT and business decision makers in 12 countries and 23 industries, revealing the areas where teams face the biggest challenges and opportunities. There, of course, security is also front and center, along with the skills gap.
"More than 90 percent of financial services, telecommunications and government organizations who responded have adopted security tools such as confidential computing capabilities, multifactor authentication and others," IBM said in a Sept. 29 blog post. "However, gaps remain that prevent organizations from driving innovation. In fact, 32 percent of respondents cite security as the top barrier for integrated workloads across environments, and more than 25 percent of respondents agree security concerns present a roadblock to achieving their cloud business goals.
"When it comes to managing their cloud applications, 69 percent of respondents say their team lacks the skills needed to be proficient. This, combined with each cloud generating its own operating silo, puts constraints on the efficiency and effectiveness of people's work."
The skills shortage was discussed further in the full report, which said: "Cloud complexity continues to grow. Many IT teams lack the necessary cloud skills to manage this complexity successfully and will need to reskill, hire for new skills, or 'borrow' skills via free agents in a gig model. Almost seven out of 10 respondents say their organization's IT team lacks the skills to architect or manage cloud applications."
Both reports go into much greater detail about the talent dearth, cloud computing obstacles and much more, while also offering advice to organizations to address those issues.
When it comes to offering solutions to mitigate the cloud skills shortage, Pluralsight unsurprisingly emphasized training, or "upskilling."
To shrink skills gaps and reduce lost internal knowledge, the company said: "Employees need to be aligned with company strategy while they begin to upskill to build within your cloud structure. Institutional knowledge is hard to come by and critical to maintain, which means it's essential that you don't allow critical internal knowledge to leave with an employee if they leave your company. Providing upskilling tools and opportunities is a worthwhile investment, but an investment nonetheless, so take steps to assure it properly benefits your long-term goals."
Pluralsight said organizations can mitigate this risk in a few ways.
IBM, meanwhile, said that to develop a cadre of cloud-skilled resources and create a single effective hybrid cloud operating model, organizations should consider a series of steps.
More bullet-list advice came from our May article, "How to Address Crippling Cloud Skills Shortage?", which features guidance from Deloitte, McKinsey & Company, The Linux Foundation's Clyde Seepersad, and others.
The explosion in data is forcing chipmakers to get much more granular about where logic and memory are placed on a die, how data is partitioned and prioritized to utilize those resources, and what the thermal impact will be if they are moved closer together on a die or in a package.
For more than a decade, the industry has faced a basic problem: moving data can be more resource-intensive than actually computing on it, and several key variables need to be considered.
In most cases, shortening the distance between processing and memory can have a significant impact on performance and on heat. Still, none of this comes for free.
“Those two go hand-in-hand,” said Ron Lowman, strategic marketing manager for IoT at Synopsys. “The reason people are looking at in-memory compute is for exactly those reasons. Still, everybody’s realized that they’re going to have to liquid cool these AI accelerators in the data center. From a thermal perspective, there’s a big cost there. But there’s also huge power savings for in-memory compute. The whole idea is pervasive in the industry, because depending on the algorithm you’re using and the processing that you’re using for AI, well over 20% of the power budget can be just accessing memory, and that impacts the power consumption as well as the cost of the total implementation.”
The question design teams need to consider is what they are trying to optimize for a particular application or use case. For example, the needs for an AI system are very different than for a system that contains some AI functionality. And it's much different in a smartphone, where a reaction time of a few milliseconds may be acceptable, than in a safety-critical application such as an autonomous vehicle or missile guidance system, where real-time response is essential.
“If you’re doing any kind of compute in a memory device, it’s going to cause some heating, so you have to get to the right balance,” said Steven Woo, fellow and distinguished inventor at Rambus. “But people are lured in. In a system today, the aggregate amount of silicon area across all the DRAM chips is so much larger than the area of a CPU. So you’re tempted, because you have so much more area to do all this in. But this thing you think you can take advantage of actually becomes harder because airflow is notoriously difficult in or near memory.”
Put simply, shrinking the distance between processor and memory involves a series of tradeoffs, and those tradeoffs can be highly domain-specific. “I look at near-memory compute as how close you can get traditional logic and memory process technologies together,” said Steve Pawlowski, vice president of advanced computing solutions at Micron. “The closest would be you slap them together and hybrid bond. Pins are expensive, so if you can get the near-memory compute where there is a piece of silicon with the memory right on top, you can take advantage of the width of the memory and minimize the amount of data movement between the memory and the logic to get extremely high bandwidth and low energy.”
But the total cost on a chip, or a system, needs to be fully understood. “If I have to normalize the power consumption between memory accesses versus compute, I know for a fact that it is orders of magnitude higher to get the data to the compute elements than the real compute operation itself,” said Ramesh Chettuvetty, senior director at Infineon Technologies. “So that would mean that people have to find ways to reduce the number of data movements back and forth between memory and compute elements. But even with HBM, or other architectures that interface with HBM, they still have hundreds of watts of power consumed for the peak operations, so it mandates cooling techniques.”
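Rough figures from the literature illustrate the order-of-magnitude gap Chettuvetty describes. The numbers below are the widely cited 45nm estimates from Horowitz's ISSCC 2014 keynote, used here purely as ballpark assumptions; actual values vary by process node and design.

```python
# Ballpark energy-per-operation figures (45nm, Horowitz ISSCC 2014) showing
# why moving data to the compute can dwarf the compute itself.
ENERGY_PJ = {
    "32-bit float multiply":   3.7,
    "32-bit SRAM read (8 KB)": 5.0,
    "32-bit DRAM read":        640.0,
}
base = ENERGY_PJ["32-bit float multiply"]
for op, pj in ENERGY_PJ.items():
    print(f"{op:24s} {pj:7.1f} pJ  ({pj / base:6.1f}x a multiply)")
# The DRAM read costs ~170x the multiply itself: orders of magnitude, as quoted.
```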
This adds a whole new element to floor-planning, and in advanced packages it frequently involves an understanding of proximity effects, how much heat or noise those components will generate, and how that will be dealt with in the context of use cases and other elements.
“You don’t want to put data in one corner of the die, and then have it go completely across to the other side of the die to be used. Then you’re burning power, and wires don’t scale,” Pawlowski said. “But there’s also 40 years of software all over the planet that wasn’t written to near-memory compute specs. With AI, by comparison, architecture and software are optimized for each other.”
Even in systems built from the ground up for near-memory compute, which theoretically should be able to deal with all of these issues, there are challenges. “AI chips require massive amounts of memories that are closely integrated,” said Preeti Gupta, director, product management at Ansys. “The only way that these systems can achieve their goals is by integrating multiple dies closer together, whether 2.5D, which uses an interposer substrate, or 3D, with dies stacked on top of each other. With 3D, the thermal problem is exacerbated because the heat gets trapped between the dies and cannot escape as readily.”
For design and thermal engineers, the resulting Jenga-like structures are painful to contemplate, with their coupled and cascading physics effects. “There’s a lot of modeling that is required to understand the temperature profiles,” Gupta said. “To be able to understand airflow around the system, you have to be able to model the effect of temperature, not just on power consumption, but also on the mechanical part at the time. For example, the package may warp. So, there is electrical, there is mechanical, there is computational fluid dynamics. You cannot just look at mechanical in isolation. You have to also incorporate the impact of thermal on mechanical stress, encompassing multi-physics.”
These issues are creating a to-do list for engineers and physicists. And because engineering teams keep trying to place components closer together, which causes thermal problems, do architectures need to be reconsidered?
“The rethinking of architectures is what is getting us to the near-memory or in-memory compute in first place,” said Chettuvetty. “There are several architecture techniques that are already being deployed, like cache-coherent architectures. You want to partition the caches, such that multiple cores can share the caches. And these caches are synchronized, by architecture, to make sure that the data dependencies are already taken care of. Those architectural-level changes are being deployed currently, in multi-core environments. But there are still bottlenecks.”
For example, in AI inferencing, there is no way to store the number of weights required on an SoC, which could be as many as 80 million, in an embedded fashion, so connected memory must be used.
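A rough footprint estimate shows why (assuming the common 4-byte FP32 and 1-byte INT8 weight formats; on-chip SRAM budgets are typically tens of megabytes at most):

\[
80\times 10^{6}\ \text{weights} \times 4\ \text{B (FP32)} = 320\ \text{MB}, \qquad 80\times 10^{6} \times 1\ \text{B (INT8)} = 80\ \text{MB}.
\]

Either way, the weights far exceed what can be embedded on the SoC.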
“Most of the time, we should have a very efficient data flow architecture in systems using a memory controller,” Chettuvetty said. “If you rely on traditional, conventional monolithic architectures, the compute and storage elements are separate. In that case, enormous amounts of memory are needed, which cannot be realized by embedded means at this point, so we will have to rely on external memory. There, the only option is to bring it as close as possible so that the capacitive drag on the interfaces is extremely low. That means I can lower the voltages on the lines. If I bring down the voltages and the swings are limited, then I consume that much less power. I can bring down the capacitance, and if I can bring down the voltage swings on these interfaces, I can reduce the power consumption that much. Those are techniques design teams are exploring on most of the high-speed interfaces.”
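The voltage argument in the quote follows the standard CMOS dynamic-power relation (a textbook approximation, not anything specific to Infineon's designs):

\[
P_{\text{dyn}} = \alpha\, C\, V^{2} f,
\]

where \(\alpha\) is the switching activity, \(C\) the switched capacitance, \(V\) the voltage swing, and \(f\) the frequency. Because \(V\) enters quadratically, reducing interface voltage and swing pays off disproportionately, while shortening wires reduces \(C\) only linearly.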
Across the industry, there is increasing focus at the leading edge on stacking die, whether that is done in 2.5D, 3D, or in fan-outs with pillars. In some cases, there are even 2.5D and 3D-ICs being packaged together. In all of these, the goal is to shorten the distances of critical data paths and to improve throughput.
“Thermal issues are going to become more prevalent as we adopt 2.5D and 3D packaging,” said Synopsys’ Lowman. “We’ve seen a big uptick. We’ve introduced technologies, like high-bandwidth memory. It’s hugely advantageous because we’re running out of pins to increase bandwidth on traditional DDR and GDDR. HBM provides parallelism. So being able to put a stacked memory on top of that already has proven extremely beneficial, and we’re going to continue to see that adoption increase. While it is costly technology to implement, if you need performance, that’s where you’re going to have to go. For AI, you have to adopt technologies like that. We also have die-to-die technologies, because on-chip SRAM is very important for AI or on chip memories. They’ve forgone having DRAM in the system, so what they do is connect chips with lots of on-chip memory, and lots of AI compute elements together. They do that via die-to-die technology to increase performance. While this started with AI, we are seeing it migrate to server chips, as well as on the latest PC architectures. That will continue to expand, but there are thermal issues with 3D packaging. It’s an engineering field that should continue to grow.”
Further, AI/ML power requirements and the consequent architectures may usher in more thinking about how to actively cool DIMM modules. “In the past, we’ve seen a lot of forced air used for cooling, but the heat capacity of water is just so much better than air,” said Rambus’ Woo. “There will likely be broader adoption of liquid cooling, but the immersion ingredients are expensive because they’ve got to be non-corrosive.”
Indeed, thermal rethinking extends not only from cooling to basic architecture, but also to who’s invited to create new approaches.
“The lines are blurring between chips and system design,” said Gupta. “These are not two disparate teams anymore. They must work together, which leads to a need for open, extensible platforms.”
For example, the 7nm IBM Telum microprocessor, which integrates AI capabilities, introduced a redesigned cache architecture. The microprocessor contains eight processor cores, clocked at over 5GHz, each supported by a redesigned 32MB private Level-2 cache. The Level-2 caches interact to form a 256MB virtual Level-3 cache and a 2GB virtual Level-4 cache.
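The cache sizes follow directly from the stated figures; the eight-chip multiplier for the Level-4 cache corresponds to a fully populated multi-chip drawer, which is an assumption here:

\[
8\ \text{cores} \times 32\ \text{MB} = 256\ \text{MB virtual L3 per chip}, \qquad 8\ \text{chips} \times 256\ \text{MB} = 2\ \text{GB virtual L4}.
\]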
“AI is a very compute-intensive activity and therefore a power-intensive activity,” said Christian Jacobi, distinguished engineer and chief architect for microprocessors at IBM. “By integrating it into the processor chip on these systems, we reduce some of the energy cost of doing AI, because we can access the data where it already lives. I don’t need to take that data and move it somewhere else, move it to a different device, or move it across a network or through a PCI interface to an I/O-attached adapter. Instead, I have my localized AI engine, and I can access the data there, so at least we can reduce that overhead of getting the data to the compute and back. Thus, there’s a power efficiency that comes from being able to run massive amounts of workload consolidated on z16 and LinuxONE systems, and from how the integrated AI accelerator helps with the power efficiency of those new workload components in the context of the traditional workload components.”
According to Jacobi, this achievement required working closely with the power supply team and the thermal team to develop advanced power supply and thermal solutions. “We’re investigating and developing new technology to extract heat from the chips. We have a heatsink on the processor, extract the heat with water, and then exchange that water heat with the data center. For the future, we’re optimizing the thermal interface between the chip and heat sinks to get more efficient cooling capabilities.”
Other ideas under consideration include shifting workloads among interconnected data centers, depending on both internal circumstances like processing overloads, and external circumstances like heatwaves. There are also approaches in place, like power system management, which turns off or down those parts of chips that are not actively needed. This strategy is very visible in smart phones, where the display powers off when the user is not looking at it.
But even the most well-balanced systems could be vulnerable to power viruses and the consequent thermal stress, Jacobi noted.
While near-memory compute reduces the distance that data travels, and can reduce the amount of data that needs to be sent longer distances, it’s not the only solution. And in some cases, it may not be the best solution.
The challenge is that so many pieces can potentially interact in a complex design that they need to be considered in the context of the entire system.
“If you look at this holistic system with multiple components included, and then fire each of them with their power and check the thermal conduction and follow the physics, that gives at least a first-order approximation of how the heat is generated, where it goes, and how much temperature there’s going to be on a particular surface,” said Lang Lin, principal product manager for Ansys. “Simulation can at least estimate it in the right way.”
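A minimal sketch of such a first-order approximation, assuming a simple junction-to-ambient thermal resistance per component (all names and values below are hypothetical placeholders for real characterization data):

```python
# First-order thermal estimate of the kind Lin describes: fire each component
# with its power and follow the physics. Values below are hypothetical.
components = {            # name: (power in W, junction-to-ambient C/W)
    "compute_die": (45.0, 0.8),
    "hbm_stack":   (12.0, 1.5),
    "io_chiplet":  ( 6.0, 2.0),
}
T_AMBIENT_C = 35.0        # assumed inlet air temperature

for name, (power_w, theta_ja) in components.items():
    t_junction = T_AMBIENT_C + power_w * theta_ja   # delta-T = P * theta
    print(f"{name:12s} ~{t_junction:5.1f} C junction (first-order)")
# A real flow couples these estimates with CFD and mechanical stress models.
```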
Google is preparing a mainframe modernization service that intends to simplify and lessen the risk of migrating mainframe workloads to the cloud - a complex process that can be fraught with pitfalls…
Called Dual Run for Google Cloud, the preview service was unveiled at the company's Google Cloud Next event and is claimed to eliminate the most common risks and concerns associated with mainframe projects.
The basic premise behind Dual Run is that mainframe applications tend to be the most mission-critical within any enterprise that uses them, and any disruption caused by downtime could be a commercial disaster, just ask British bank TSB.
Dual Run enables such workloads to run simultaneously on the existing mainframe and on Google Cloud, allowing an organization to perform testing and gather data on performance and stability, with no disruption to their operations – at least according to Google.
Once customers are satisfied with the functional equivalence of the two systems, the applications running in the Google Cloud environment can then be made the organization's system-of-record, while the existing mainframe systems can operate as a backup, Google claims.
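In outline, this is the classic "parallel run" pattern. The sketch below shows the shape of the technique under stated assumptions: it is not Google's implementation, and `legacy_fn` and `cloud_fn` are hypothetical stand-ins for the two systems.

```python
# Generic "parallel run" pattern of the kind Dual Run automates (a sketch,
# not Google's implementation): mirror each request to both systems, keep
# serving from the system of record, and log any divergence for analysis.
import logging

def parallel_run(request, legacy_fn, cloud_fn, mismatches):
    primary = legacy_fn(request)          # mainframe stays system of record
    try:
        shadow = cloud_fn(request)        # cloud copy runs the same input
        if shadow != primary:
            mismatches.append((request, primary, shadow))
            logging.warning("divergence on %r", request)
    except Exception:
        logging.exception("cloud shadow failed for %r", request)
    return primary                        # callers never see shadow errors

# Once mismatch rates stay at zero over a soak period, the roles can be
# swapped and the mainframe demoted to backup, as the article describes.
```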
While Google is not a name you might associate with mainframe expertise, potential customers may be reassured by the fact that the underlying technology of Dual Run was developed by a bank, Santander, to facilitate its own transition of its core banking services to the cloud.
Santander said the software, which it calls Gravity, has allowed it to bring data and workloads onto Google Cloud, and it has already migrated 80 percent of its IT infrastructure, including a core banking platform where financial transactions such as money transfers, deposits or loans are processed.
"Dual Run for Google Cloud, which leverages the innovation we've developed in-house at Santander, will be critical for the digital transformation of many companies and is a testament to the outstanding technology built by our teams," said Banco Santander group chief operating and technology officer Dirk Marzluf.
But Dual Run is only one part of Google's Mainframe Modernization Solution (MMS), which the company said includes tried-and-tested processes and tools for a risk-mitigated migration to Google Cloud.
Google said that its cloud consultants and external advisors will advise on and help design the customer's cloud architecture, dependencies, and the steps for mainframe migration and modernization.
These external advisors include Atos, Capgemini, Infosys, and Kyndryl, the former IT infrastructure services division of IBM which recently unveiled a separate agreement with Microsoft to build data pipelines between mainframe systems and the Azure cloud.
The tools that Google has alongside Dual Run include G4 for application discovery and code refactoring to migrate Cobol, PL/1, or Assembler programs into cloud-native services. This is based on tech Google gained when it acquired Cornerstone Technology in 2020.
It also has a Mainframe Connector that enables data to be moved between the mainframe, Google Cloud Storage, and BigQuery, Google's data warehouse service.
The giveaway that this isn't just another on-demand cloud service is that interested parties need to get in touch with a Google Cloud account representative in order to access Dual Run and the company's other mainframe migration tools.
However, modernizing IT infrastructure can be a significant stepping stone into the cloud era for enterprises, according to Google Cloud VP and GM Sachin Gupta.
"By moving mainframe systems to the cloud, organizations have an opportunity to better utilize their data, implement stronger cybersecurity protections, and build a foundation for their digital transformations that will drive their future growth," Gupta said. ®
By The Valuentum Team
International Business Machines Corporation (NYSE:IBM) has become a fundamentally different business in the past few years, one focused on providing hybrid cloud computing offerings. The company is a stellar free cash flow generator which enables IBM to reward investors via generous dividend increases, with shares of IBM yielding ~5.1% as of this writing. Substantial near-term headwinds remain, largely due to the various exogenous shocks seen of late (such as major inflationary pressures, rising interest rates, supply chain hurdles, and raging geopolitical tensions), though IBM is still worth considering as a high-yielding income generation idea.
IBM solves business problems via integrated hardware/software solutions that leverage IT and its knowledge of business processes. Its solutions help reduce a client's costs or enable new capabilities that generate revenue. The company was founded in 1924 and is headquartered in New York.
Back in 2019, IBM bought Red Hat (a top provider of open source cloud software) through a ~$34 billion deal which made IBM a contending hybrid cloud provider. IBM is looking to seize what it describes as a ~$1 trillion hybrid cloud opportunity, and recent growth in this area has been encouraging. IBM's revamped management team is working hard to turn things around after the company made various blunders during the 2010s decade. Its current Chairman and CEO, Arvind Krishna, has done a solid job righting the ship at IBM since taking on the top role in 2020.
In November 2021, IBM spun off its legacy business tax-free to shareholders as a new publicly traded entity, Kyndryl Holdings, Inc. (KD). Initially, IBM retained a 19.9% stake in Kyndryl though the firm intends to exit that position within 12 months of the spinoff.
On July 18, IBM reported earnings for the second quarter of 2022 that beat both consensus top- and bottom-line estimates. Its GAAP revenues rose by 9% year-over-year to $15.5 billion, with strong growth at its Red Hat, consulting services, and hybrid infrastructure offerings being key here. Stripping out foreign currency headwinds arising from the recently strong US dollar, IBM's non-GAAP constant currency revenues were up 16% year-over-year last quarter. IBM's portfolio optimization efforts are having a very powerful impact on its financial performance.
The firm's GAAP gross margin fell by ~185 basis points year-over-year last quarter, to 55.4%. However, economies of scale helped drive its GAAP income from continuing operations up by 81% year-over-year in the second quarter, rising to $1.5 billion. There is some noise here due to the separation of IBM's legacy businesses (via the spinoff of Kyndryl) from its core operations. Keeping that noise in mind, IBM's underlying operations have performed quite well of late.
During its second quarter earnings call, IBM's management team noted the firm now forecasted that its full-year free cash flows would come in near $10.0 billion in 2022, at the low end of its previous forecast. IBM generated $3.6 billion in free cash flow (defined as net operating cash flow less 'payments for property, plant, and equipment' and 'investment in software') while spending $3.0 billion covering its dividend obligations during the first half of 2022. Its modest share repurchases during this period were related to tax withholding purposes as the new IBM is focused on retaining cash to invest in the business. We appreciate that IBM's dividend obligations remain well-covered by its traditional free cash flows.
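Using the article's own first-half figures, that coverage works out to roughly

\[
\text{Coverage}_{\text{H1 2022}} = \frac{\text{FCF}}{\text{Dividends paid}} = \frac{\$3.6\ \text{B}}{\$3.0\ \text{B}} \approx 1.2\times .
\]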
The company exited June 2022 with a net debt load of $42.8 billion (inclusive of short-term debt, exclusive of restricted cash). One of the biggest risks to IBM's dividend is its large net debt load. IBM had $7.6 billion in cash, cash equivalents, and current marketable securities on hand at the end of June 2022 which provides the company with ample liquidity to meet its near-term funding needs.
IBM continues to expect that its constant currency revenues will grow decently this year (in the mid-single digit range), though sustained foreign currency headwinds are expected to offset strong demand for its offerings, to a degree. Over the long haul, we forecast that under its new management team, IBM will return to stable revenue growth which in turn should see the company's free cash flows swell higher. That would allow IBM to boost its dividend in a sustainable manner going forward, though we caution that its net debt load could limit the size of any future payout increases.
The Dividend Cushion Ratio Deconstruction, shown in the image up above, reveals the numerator and denominator of the Dividend Cushion ratio. At the core, the larger the numerator, or the healthier a company's balance sheet and future free cash flow generation, relative to the denominator, or a company's cash dividend obligations, the more durable the dividend.
The Dividend Cushion Ratio Deconstruction image puts sources of free cash in the context of financial obligations next to expected cash dividend payments over the next 5 years on a side-by-side comparison. Because the Dividend Cushion ratio and many of its components are forward-looking, our dividend evaluation may change upon subsequent updates as future forecasts are altered to reflect new information.
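Schematically, and simplifying Valuentum's proprietary construction to what the description above implies, the ratio takes the form

\[
\text{Dividend Cushion} \approx \frac{\text{Net cash today} + \sum_{t=1}^{5} \text{FCF}_{t}}{\sum_{t=1}^{5} \text{Dividends}_{t}},
\]

with values above 1 indicating that expected cash sources cover expected payouts over the five-year window.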
In the context of the Dividend Cushion ratio, IBM's numerator is smaller than its denominator, which suggests weak forward-looking dividend coverage. However, given IBM's strong and stable cash flow profile, we view its forward-looking dividend coverage favorably when taking IBM's ability to tap capital markets into account. Should IBM stumble for any reason, its ability to make good on its payout may be in danger.
The best measure of a firm's ability to create value for shareholders is expressed by comparing its return on invested capital ['ROIC'] with its weighted average cost of capital ['WACC']. The gap or difference between ROIC and WACC is called the firm's economic profit spread. IBM's 3-year historical return on invested capital (without goodwill) is 41.6%, which is above the estimate of its cost of capital of 9.2%.
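With the figures above, the spread is simply

\[
\text{Economic profit spread} = \text{ROIC} - \text{WACC} = 41.6\% - 9.2\% = 32.4\%,
\]

a wide positive gap, consistent with value creation for shareholders.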
In the chart down below, we show the probable path of ROIC in the years ahead based on the estimated volatility of key drivers behind the measure. The solid grey line reflects the most likely outcome, in our opinion, and represents the scenario that results in our fair value estimate. Assuming IBM's recent portfolio optimization efforts go as planned, the firm's ability to generate shareholder value (which historically has been impressive) should continue to improve.
Our discounted cash flow process values each firm on the basis of the present value of all future free cash flows, net of balance sheet considerations. We think IBM is worth $136 per share with a fair value range of $101-$171 per share. Shares of IBM are trading moderately below our fair value estimate as of this writing.
The near-term operating forecasts used in our enterprise cash flow model, including revenue and earnings, do not differ much from consensus estimates or management guidance. Our model reflects a compound annual revenue growth rate of 3.4% during the next five years, a pace that is higher than the firm's 3-year historical compound annual growth rate of -10.3%.
Our model reflects a 5-year projected average operating margin of 17.6%, which is above IBM's trailing 3-year average. Beyond Year 5, we assume free cash flow will grow at an annual rate of 2% for the next 15 years and 3% in perpetuity. For IBM, we use a 9.2% weighted average cost of capital to discount future free cash flows.
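For readers who want to see the mechanics, the sketch below implements only the long-tail portion of those stated assumptions (2% growth for 15 years beyond Year 5, 3% in perpetuity, a 9.2% discount rate). The Year-5 free cash flow base passed in is a hypothetical input; the full model also values the near-term stage and nets out balance-sheet items.

```python
# Minimal DCF sketch using the stated long-run assumptions: FCF grows 2%/yr
# for 15 years after Year 5, then 3% in perpetuity, discounted at a 9.2% WACC.
WACC = 0.092

def pv_beyond_year5(base_fcf_y5):
    pv = 0.0
    fcf = base_fcf_y5
    for year in range(6, 21):               # Years 6-20: grow FCF at 2%/yr
        fcf *= 1.02
        pv += fcf / (1 + WACC) ** year
    terminal = fcf * 1.03 / (WACC - 0.03)   # Gordon growth beyond Year 20
    return pv + terminal / (1 + WACC) ** 20

print(f"PV of FCF beyond Year 5: ${pv_beyond_year5(12.0):.1f}B")  # $12B base is illustrative
```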
Although we estimate IBM's fair value at about $136 per share, every company has a range of probable fair values that's created by the uncertainty of key valuation drivers (like future revenue or earnings, for example). After all, if the future were known with certainty, we wouldn't see much volatility in the markets as stocks would trade precisely at their known fair values.
In the graphic up above, we show this probable range of fair values for IBM. We think the firm is attractive below $101 per share (the green line), but quite expensive above $171 per share (the red line). The prices that fall along the yellow line, which includes our fair value estimate, represent a reasonable valuation for the firm, in our opinion.
The steady decline in IBM's legacy business since 2010 represents a major reason why the firm spun off Kyndryl in November 2021. Going forward, IBM will need to prove that as a leaner and more focused enterprise, it can maintain solid revenue and operating income growth over the long haul. We think that will be the case, though substantial near-term headwinds remain. Investors looking for an income generation idea backed up by a strong cash flow profile should take a closer look at IBM.
In this interview, NewsMedical talks to Dr. Amy Sheng, a technical account manager at Sino Biological, and Dr. Lurong Pan, founder and CEO of next-generation biotech startup Ainnocence, about how artificial intelligence (AI) can be used in combination with high throughput production and screening to enhance the development process for potential universal vaccines.
Topics discussed include how AI can be used in combination with high throughput production and screening of proteins to expedite the development process for candidate universal vaccines – an ideal method for preventing and controlling future pandemics but also one of the major research challenges in biotechnology today.
Dr. Amy Sheng: I am a technical account manager at Sino Biological. My background is in cell and molecular biology, antibody development, and production. Currently, I am in charge of the CRO program at Sino Biological in the United States.
Dr. Lurong Pan: I am the founder and CEO of Ainnocence, an AI drug discovery platform. I have a Ph.D. in computational biology and a background in computer science specializing in artificial intelligence. I have been in this industry doing drug design and leveraging different computational algorithms for about 14 years. I initially collaborated with Sino Biological, using them as one of the cell providers of our internal drug discovery program.
AS: Universal vaccines are vaccines that provide broad efficacy against various strains of a virus. From the case of SARS-CoV-2, we have experienced and understood how rapidly a virus can mutate and escape immunity. Therefore, it is vital to develop vaccines or therapeutic reagents that will continue to protect against any new versions of the virus that may emerge.
A universal vaccine has more potential than traditional ones in protecting vulnerable populations from various strains and even future variants. Before SARS and COVID, influenza was already a target for the development of a universal vaccine. The development of influenza vaccines with broader protection has been a goal for decades, and thanks to recent developments in vaccine targets and more efficient delivery platforms, this goal is believed to be increasingly achievable.
AS: With influenza, traditional vaccines do not produce durable, protective immunity or a cross-reactive immune response that can neutralize diverse influenza strains. Traditional vaccine efficacy varies significantly across age groups and against different viral strains. Strain mismatch is one of the main causes of vaccine failure. Very commonly, I hear friends say the vaccine does not work; even I was knocked down for a week by the flu even though I got the flu shot. In fact, just last year, the major flu strain H3N2 mutated in a way that the designed vaccine could not efficiently provide immunity to the vulnerable population.
However, universal vaccines could potentially solve this issue and provide better protection as quickly as possible. Beyond efficacy, universal vaccines would lower the cost of vaccine R&D, manufacturing, and stockpiling if we are faced with a rapidly mutating virus, which would greatly benefit populations in low-resource countries that are particularly vulnerable to pandemics.
AS: Universal vaccine development mainly focuses on the unchanging part of the virus, the conserved region. However, this region might be shadowed by the ever-mutating domain of the virus, a phenomenon called immunodominance. In other words, a universal vaccine might elicit lower immune responses. Now, researchers are trying to make chimeric protein vaccines to make the conserved region more immunodominant and the immune response broader and more durable.
How the vaccine is packaged can also greatly affect the strength and the quality of immune responses. For example, adenoviruses can be modified and used to deliver DNA sequences encoding the viral antigens we want to present. The benefit of this is that they can keep producing the antigen for weeks, which might help extend the response.
LP: Vaccines have two types of purposes: preventive and therapeutic. Preventive means you give it ahead of time, and it is protective for a while. Those are normally the vaccines that generate B cell immunity, which has a longer memory in our immune system. Given current progress in science, we are already observing certain antibodies or antibody cocktails that can target multiple viral infections. If we could find a universal antigen using those computational and experimental measures, the product would be a universal preventive vaccine that generates a cocktail of long-term protective B cell immunity.
Another part of our immunity is T cell immunity, which normally gives a more direct and harsh response to non-selectively clear invaders of our body. We could control T cell behavior in a therapeutic way that stimulates its function when we get infected and, at the same time, customize it for different types of patients.
Some patients have compromised T cell immunity, and some have overexcited T cell immunity that causes an inflammatory effect. If we could have a modulator to stimulate our T cells to protect us and clear out any invaders or pathogenic molecules from outside, while at the same time not hurting us – that would also be classed as a universal vaccine, used after infection to stimulate your immune system.
LP: Traditionally, we generate antibodies using a single antigen species. For example, with COVID, we initially designed the vaccine using an animal model to generate antibodies to the strain circulating at the time. When the virus mutates, we have to inject models multiple times along a timeline. We cannot keep pace with the speed of mutation using the conventional animal model method to develop universal vaccines.
Computational technology could be a great help in collaborative experiments where we have virtually computed all the strains from the beginning of the pandemic to the current time point, resulting in over 1,000 different strains. Within a few hours, you can generate antibodies that potentially target different strains. Conversely, we can also use the same algorithm to identify what is common among all those mutant strains and generate an antibody against future strains; an antibody designed with this model may also protect against a future strain.
There are patterns in the evolutionary trait of the viral species that we can learn from and be able to predict the future mutation trend of those species. If we can completely digitalize the virus, we could build an algorithm to find its evolutionary pattern and find a common immunogenetic sequence to contribute to the vaccine design process. Computational technology could speed up design and even help discover new phenomena in biology in the future.
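As a toy illustration of that digitization idea, one can align strain sequences and score how conserved each position is; fully conserved positions become candidate anchors for a universal design. The sequences below are made-up stand-ins, and real pipelines work on full viral proteomes with far more sophisticated models.

```python
# Sketch of the idea Pan describes: digitize many strain sequences and score
# how conserved each position is, to surface candidate universal epitopes.
# Sequences are toy, pre-aligned stand-ins for real viral proteins.
from collections import Counter

strains = ["MKTLIAFG", "MKTLVAFG", "MKSLIAFG", "MKTLIAYG"]

def conservation(seqs):
    # Fraction of strains sharing the most common residue at each position.
    return [max(Counter(col).values()) / len(col) for col in zip(*seqs)]

scores = conservation(strains)
conserved = [i for i, s in enumerate(scores) if s == 1.0]
print(scores)     # [1.0, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0]
print(conserved)  # fully conserved positions -> candidate epitope anchors
```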
AS: Ainnocence is a great platform to design proteins and antibodies and provide a guideline of what the ideal candidates look like. With Ainnocence, we can better understand and predict how the products will interact with the protein partners.
On the production side, if the vaccine is in protein form, the production of the predicted sequences can be challenging. The desired protein should have high stability, high yield, and high purity for later manufacturing and other aspects of the R&D. We have to do a lot of troubleshooting to standardize the production of that protein.
AS: Ainnocence was one of our customers, and we developed a lot of recombinant antibody projects together. As we provide various platforms for recombinant production, especially high-throughput protein production and screening, we can produce up to 1,000 constructs with a turnaround time as short as four weeks.
Together with Ainnocence, we are able to provide quick answers on basic properties such as yield, stability, and purity, alongside computational designs. We have extensive experience with viral protein production and the world’s largest viral protein bank. These antigens can be used to analyze vaccine-induced antibody responses.
LP: We started by collaborating on the COVID project, and we found that a lot of customers not only wanted to produce a protein drug but also wanted to improve the properties of those protein drugs. In the past, you had to try different mutations to find a good-quality one, which would take a lot of time and cost a lot. If we could involve AI to screen out all those unnecessary experiments, we could enrich the high-probability positive species using our AI engine and thus only need to conduct a very limited number of experiments, saving time. It would expedite the R&D process and, from the customer side, mean that less money is spent on failed experiments. The technology is a good solution for the industry, and if we combine it with design and production capability simultaneously, we can have a quick turnaround for more customer molecules to be made. It is a good business model for both companies.
LP: A lot of universities, commercial entities, and nonprofit organizations are working on a universal vaccine. From a scientific and technology readiness point of view, I think we are getting close. I cannot make an accurate prediction if it will be in five years or ten years, but I think we will be able to see good animal study results in the coming three years. A large human trial is another story because that would involve a long-term series of safety studies, a larger population, and collaboration globally in different regions. In the next five to ten years, I would hope to see a vaccine validated in human trials for a candidate able to cover most of the common pathogens.
AS: What we have seen recently for SARS-CoV-2 is a unique case of vaccine development. The normal process of universal vaccine development, or any vaccine development, takes a long time, especially in human trials, to ensure that the vaccine has a good safety record before the government finally approves it. We hope to see one coming out faster, but the most important thing is safety and keeping that on track.
LP: We see common patterns in viral evolution: viruses evolve quickly and alongside their hosts. That is something we can study digitally and computationally to analyze, understand, or even predict the trend. Everything is encoded in an RNA or DNA strand that can be shared in digitized form for all species, so these patterns are computable.
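One way to see why such patterns are computable: once a sequence is digitized, it becomes a numeric object that statistical and machine-learning methods can operate on directly. The one-hot encoding below is a standard, generic illustration, not any specific company’s method.

```python
# A DNA sequence encoded as a matrix of indicator vectors: from here,
# any numerical method (clustering, classification, etc.) can be applied.
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA sequence as a list of 4-element indicator vectors."""
    return [[1 if base == b else 0 for b in BASES] for base in seq]

for base, row in zip("ACGTTG", one_hot("ACGTTG")):
    print(base, row)
```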
Amy Sheng, Ph.D., is a technical account manager at Sino Biological. Amy joined Sino Biological in 2021, supporting CRO services and project management in the Eastern US region.
Prior to joining Sino Biological, she worked at Caprico Biotechnologies as a production manager in charge of antibody development and production for flow cytometry. She holds a Ph.D. in Molecular and Cell Biology from the Georgia Institute of Technology and is an ASCP-certified molecular biologist and an ASQ-certified CSSGB.
Dr. Lurong Pan is the founder and CEO of Ainnocence, an AI-powered, next-generation biotech startup. She has extensive drug design and precision medicine research experience using structural biology, computational chemistry, and artificial intelligence technologies.
She was previously a senior investigator at the Global Health Drug Discovery Institute and a research scientist in structural and computational biology at the University of Alabama at Birmingham.
Dr. Pan received her B.S. in Applied Chemistry from Nanjing University, her M.S. in Computer Science from Georgia Tech, and her Ph.D. in Chemistry from the University of Alabama at Birmingham. She is also an IBM-certified big data architect.
Twitter: @Ainnocence_Inc
Sino Biological is an international reagent supplier and service provider specializing in recombinant protein production and antibody development. All of the company’s products are independently developed and produced, including recombinant proteins, antibodies, and cDNA clones, making it a one-stop technical services shop for the advanced technology platforms life science researchers around the world need to make advancements. In addition, Sino Biological offers pharmaceutical companies and biotechnology firms pre-clinical production technology services for hundreds of monoclonal antibody drug candidates. Its product quality control indicators meet rigorous requirements for clinical-use samples, and it takes only a few weeks to produce 1 to 30 grams of purified monoclonal antibody from a gene sequence.
IBM continues to spend millions on buying hybrid cloud companies, making its sixth acquisition of 2022 with the purchase of engineering consulting specialist Dialexa to boost its cloud charge.
Since IBM CEO Arvind Krishna took the reins in April 2020, IBM has acquired more than 25 companies, including many hybrid cloud businesses.
In February alone, IBM acquired cloud consulting standout Sentaca as well as Microsoft Azure consultancy all-star Neudesic, with the two purchases squarely aimed at boosting IBM’s hybrid and multi-cloud services capabilities.
With the purchase of Dialexa, the Armonk, N.Y.-based company will gain 300 skilled product managers, designers, full-stack engineers and data scientists. Dialexa will become part of IBM’s Consulting business unit, which spearheads the company’s digital product engineering services in the Americas.
“Dialexa’s product engineering expertise, combined with IBM’s hybrid cloud and business transformation offerings, will help our clients turn concepts into differentiated product portfolios that accelerate growth,” said John Granger, senior vice president of IBM Consulting, in a statement.
Dialexa marks IBM’s sixth purchase of 2022 aimed at boosting its hybrid cloud and artificial intelligence capabilities.
Along with buying Dialexa, Sentaca and Neudesic, IBM has also acquired Randori, an attack surface management cybersecurity specialist that helps protect hybrid cloud environments.
Earlier this year, IBM’s CEO said hybrid cloud and artificial intelligence are top of mind for his company in terms of investment and the future.
“We are integrating technology and expertise—from IBM, our partners and even our competitors—to meet the urgent needs of our clients, who see hybrid cloud and AI as crucial sources of competitive advantage,” Krishna said in March. “And we are ready to be the catalyst of progress for our clients as they pursue the digital transformation of the world’s mission-critical businesses.”
In 2021, IBM’s hybrid cloud revenue jumped 19 percent compared with 2020, comprising 35 percent of its total revenue.
Based in Dallas and Chicago, Dialexa delivers a suite of digital product engineering services to help customers create transformative products to drive business outcomes.
Dialexa’s 300-strong team of engineers and skilled IT experts advises on and creates custom digital products for customers including Deere & Company, Pizza Hut U.S. and Toyota Motor North America. Financial terms of the Dialexa deal were not disclosed.
IBM said Dialexa provides deep experience delivering end-to-end digital product engineering services consisting of strategy, design, build, launch and optimization services across cloud platforms including Amazon Web Services and Microsoft Azure.
“Digital product engineering represents the tip of the spear for competitive advantage,” said Dialexa CEO Scott Harper in a statement. “IBM and Dialexa’s shared vision for delivering industry-defining digital products could be a game-changer.”
IBM (NYSE: IBM) has acquired Dialexa, a digital product engineering services firm based in Dallas, TX and Chicago, IL.
The amount of the deal was not disclosed. The transaction is expected to close in the fourth quarter of this year and is subject to customary closing conditions and regulatory clearances.
The acquisition is expected to enhance IBM’s product engineering expertise and provide end-to-end digital transformation services for clients. Upon close, Dialexa will join IBM Consulting, strengthening IBM’s digital product engineering services presence in the Americas.
Founded in 2010 and led by CEO Scott Harper, Dialexa delivers a suite of digital product engineering services, enabling organizations to create new products that drive business outcomes. The company has deep experience delivering end-to-end digital product engineering services consisting of strategy, design, build, launch, and optimization services across cloud platforms including AWS and Microsoft Azure. Its team of 300 product managers, designers, full-stack engineers, and data scientists, based in Dallas and Chicago, advises on and creates custom, commercial-grade digital products for clients such as Deere & Company, Pizza Hut US, and Toyota Motor North America.