System architects are often impatient about the future, especially when they can see something good coming down the pike. And thus, we can expect a certain amount of healthy and excited frustration when it comes to the Compute Express Link, or CXL, interconnect created by Intel, which with the absorption of Gen-Z technology from Hewlett Packard Enterprise and now OpenCAPI technology from IBM will become the standard for memory fabrics across compute engines for the foreseeable future.
The CXL 2.0 specification, which brings memory pooling across the PCI-Express 5.0 peripheral interconnect, will soon be available on CPU engines. Which is great. But all eyes are already turning to the just-released CXL 3.0 specification, which rides atop the PCI-Express 6.0 interconnect coming in 2023 with 2X the bandwidth, and people are already contemplating what another 2X of bandwidth might offer with CXL 4.0 atop PCI-Express 7.0 coming in 2025.
In a way, we expect for CXL to follow the path blazed by IBM’s “Bluelink” OpenCAPI interconnect. Big Blue used the Bluelink interconnect in the “Cumulus” and “Nimbus” Power9 processors to provide NUMA interconnects across multiple processors, to run the NVLink protocol from Nvidia to provide memory coherence across the Power9 CPU and the Nvidia “Volta” V100 GPU accelerators, and to provide more generic memory coherent links to other kinds of accelerators through OpenCAPI ports. But the paths that OpenCAPI and CXL take will not be exactly the same, obviously. OpenCAPI is kaput and CXL is the standard for memory coherence in the datacenter.
IBM put faster OpenCAPI ports on the “Cirrus” Power10 processors, and they are used to provide those NUMA links as with the Power9 chips as well as a new OpenCAPI Memory Interface that uses the Bluelink SerDes as a memory controller, which runs a bit slower than a DDR4 or DDR5 controller but which takes up a lot less chip real estate and burns less power – and has the virtue of being exactly like the other I/O in the chip. In theory, IBM could have supported the CXL and NVLink protocols running atop its OpenCAPI interconnect on Power10, but there are some sour grapes there with Nvidia that we don’t understand – it seems foolish not to offer memory coherence with Nvidia’s current “Ampere” A100 and impending “Hopper” H100 GPUs. There may be an impedance mismatch between IBM and Nvidia in regards to signaling rates and lane counts between OpenCAPI and NVLink. IBM has PCI-Express 5.0 controllers on its Power10 chips – these are unique controllers and are not the Bluelink SerDes – and therefore could have supported the CXL coherence protocol, but as far as we know, Big Blue has chosen not to do that, either.
Given that we think CXL is the way a lot of GPU accelerators and their memories will link to CPUs in the future, this strategy by IBM seems odd. We are therefore nudging IBM to do a Power10+ processor with support for CXL 2.0 and NVLink 3.0 coherent links as well as with higher core counts and maybe higher clock speeds, perhaps in a year or a year and a half from now. There is no reason IBM cannot get some of the AI and HPC budget given the substantial advantages of its OpenCAPI memory, which is driving 818 GB/sec of memory bandwidth out of a dual chip module with 24 cores. We also expect that future datacenter GPU compute engines from Nvidia will support CXL in some fashion, but exactly how it will sit side-by-side with or merge with NVLink is unclear.
It is also unclear how the Gen-Z intellectual property donated to the CXL Consortium by HPE back in November 2021 and the OpenCAPI intellectual property donated to the organization steering CXL by IBM last week will be used to forge a CXL 4.0 standard, but these two system vendors are offering up what they have to help the CXL effort along. For which they should be commended. That said, we think both Gen-Z and OpenCAPI were way ahead of CXL and could have easily been tapped as in-node and inter-node memory and accelerator fabrics in their own right. HPE had a very elegant set of memory fabric switches and optical transceivers already designed, and IBM is the only CPU provider that offered CPU-GPU coherence across Nvidia GPUs and the ability to hook memory inside the box or across boxes over its OpenCAPI Memory Interface riding atop the Bluelink SerDes. (AMD is offering CPU-GPU coherence across its custom “Trento” Epyc 7003 series processors and its “Aldebaran” Instinct MI250X GPU accelerators in the “Frontier” exascale supercomputer at Oak Ridge National Laboratory.)
We are convinced that the Gen-Z and OpenCAPI technology will help make CXL better, and accelerate the kinds and varieties of coherence that are offered. CXL initially offered a kind of asymmetrical coherence, where CPUs can read and write to remote memories in accelerators as if they are local but using the PCI-Express bus instead of a proprietary NUMA interconnect – that is a vast oversimplification – rather than having full cache coherence across the CPUs and accelerators, which has a lot of overhead and which would have an impedance mismatch of its own because PCI-Express was, in days gone by, slower than a NUMA interconnect.
But as we have pointed out before, with PCI-Express doubling its speed every two years or so and latencies holding steady as that bandwidth jumps, we think there is a good chance that CXL will emerge as a kind of universal NUMA interconnect and memory controller, much as IBM has done with OpenCAPI, and Intel has suggested this for both CXL memory and CXL NUMA and Marvell certainly thinks that way about CXL memory as well. And that is why with CXL 3.0, the protocol is offering what is called “enhanced coherency,” which is another way of saying that it is precisely the kind of full coherency between devices that, for example, Nvidia offers across clusters of GPUs on an NVSwitch network or IBM offered between Power9 CPUs and Nvidia Volta GPUs. The kind of full coherency that Intel did not want to do in the beginning. What this means is that devices supporting the CXL.memory sub-protocol can access each other’s memory directly, not asymmetrically, across a CXL switch or a direct point-to-point network.
There is no reason why CXL cannot be the foundation of a memory area network as IBM has created with its “memory inception” implementation of OpenCAPI memory on the Power10 chip, either. As Intel and Marvell have shown in their conceptual presentations, the palette of chippery and interconnects is wide open with a standard like CXL, and improving it across many vectors is important. The industry let Intel win this one, and we will be better off in the long run because of it. Intel has largely let go of CXL and now all kinds of outside innovation can be brought to bear.
Ditto for the Universal Chiplet Interconnect Express being promoted by Intel as a standard for linking chiplets inside of compute engine sockets. Basically, we will live in a world where PCI-Express running UCI-Express connects chiplets inside of a socket, PCI-Express running CXL connects sockets and chips within a node (which is becoming increasingly ephemeral), and PCI-Express switch fabrics spanning a few racks or maybe even a row someday use CXL to link CPUs, accelerators, memory, and flash all together into disaggregated and composable virtual hardware servers.
For now, what is on the immediate horizon is CXL 3.0 running atop the PCI-Express 6.0 transport, and here is how CXL 3.0 is stacking up against the prior CXL 1.0/1.1 release and the current CXL 2.0 release on top of PCI-Express 5.0 transports:
When the CXL protocol is running in I/O mode – what is called CXL.io – it is essentially just the same as the PCI-Express peripheral protocol for I/O devices. The CXL.cache and CXL.memory protocols add caching and memory addressing atop the PCI-Express transport, and run at about half the latency of the PCI-Express protocol. To put some numbers on this, as we did back in September 2021 when talking to Intel, the CXL protocol specification requires that a snoop response on a snoop command when a cache line is missed has to be under 50 nanoseconds, pin to pin, and for memory reads, pin to pin, latency has to be under 80 nanoseconds. By contrast, a local DDR4 memory access on a CPU socket is around 80 nanoseconds, and a NUMA access to far memory in an adjacent CPU socket is around 135 nanoseconds in a typical X86 server.
With the CXL 3.0 protocol running atop the PCI-Express 6.0 transport, the bandwidth is being doubled for all three protocols without any increase in latency. That bandwidth increase, to 256 GB/sec across x16 lanes (including both directions), is thanks to the 256 byte flow control unit, or flit, fixed packet size (which is larger than the 64 byte packet used in the PCI-Express 5.0 transport) and the PAM-4 pulsed amplitude modulation encoding that doubles up the bits per signal on the PCI-Express transport. The PCI-Express protocol uses a combination of cyclic redundancy check (CRC) and three-way forward error correction (FEC) algorithms to protect the data being transported across the wire, which is a better method than was employed with prior PCI-Express protocols, and that is why PCI-Express 6.0, and therefore CXL 3.0, will have much better performance for memory devices.
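As a sanity check on those numbers, here is the raw-rate arithmetic for a PCI-Express 6.0 x16 link; this deliberately ignores flit framing, CRC, and FEC overhead, so it is the headline rate rather than delivered throughput:

```python
# Back-of-the-envelope arithmetic for PCI-Express 6.0 / CXL 3.0 bandwidth.
# Assumptions: 64 GT/s per lane (the PCIe 6.0 raw rate enabled by PAM-4),
# an x16 link, one bit per lane per transfer, and no protocol overhead.

GT_PER_SEC = 64          # PCIe 6.0: 64 gigatransfers/sec per lane
LANES = 16               # x16 link
BITS_PER_BYTE = 8

# Raw bandwidth per direction, in GB/sec
per_direction_gb = GT_PER_SEC * LANES / BITS_PER_BYTE
# PCIe links are full duplex, so both directions together double that
both_directions_gb = per_direction_gb * 2

print(f"per direction:   {per_direction_gb:.0f} GB/sec")    # 128 GB/sec
print(f"both directions: {both_directions_gb:.0f} GB/sec")  # 256 GB/sec
```

Which lands on the 256 GB/sec figure the specification quotes for a bidirectional x16 link.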
The CXL 3.0 protocol does have a low latency CRC algorithm that breaks the 256 B flits into 128 B half flits and does its CRC check and transmissions on these subflits, which can reduce latencies in transmissions by somewhere between 2 nanoseconds and 5 nanoseconds.
The neat new thing coming with CXL 3.0 is memory sharing, and this is distinct from the memory pooling that was available with CXL 2.0. Here is what memory pooling looks like:
With memory pooling, you put a glorified PCI-Express switch that speaks CXL between hosts with CPUs and enclosures with accelerators with their own memories or just blocks of raw memory – with or without a fabric manager – and you allocate the accelerators (and their memory) or the memory capacity to the hosts as needed. As the diagram above shows on the right, you can do a point to point interconnect between all hosts and all accelerators or memory devices without a switch, too, if you want to hard code a PCI-Express topology for them to link on.
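To make the pooling idea concrete, here is a toy sketch of a fabric manager handing out pooled memory devices to hosts and reclaiming them. The class, the device names, and the first-fit policy are all invented for illustration; none of this comes from the CXL specification itself:

```python
class FabricManager:
    """Toy model of a CXL 2.0-style fabric manager that carves a pool of
    memory devices up among hosts. Entirely illustrative."""

    def __init__(self, devices):
        self.free = dict(devices)        # device name -> capacity in GB
        self.assigned = {}               # host name   -> {device: capacity}

    def allocate(self, host, needed_gb):
        # First-fit: hand out the first free device big enough to satisfy
        # the request. A real fabric manager would be far more clever.
        for dev in sorted(self.free):
            if self.free[dev] >= needed_gb:
                cap = self.free.pop(dev)
                self.assigned.setdefault(host, {})[dev] = cap
                return dev
        raise RuntimeError("memory pool exhausted")

    def release(self, host, dev):
        # Return a device to the pool when the host no longer needs it.
        self.free[dev] = self.assigned[host].pop(dev)

pool = FabricManager({"mem0": 256, "mem1": 512, "mem2": 512})
print(pool.allocate("hostA", 300))   # mem1 (mem0 is too small)
print(pool.allocate("hostB", 200))   # mem0
pool.release("hostA", "mem1")        # mem1 goes back into the pool
```

The key property pooling gives you is exactly this: capacity moves between hosts over time, but any one device belongs to only one host at a time.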
With CXL 3.0 memory sharing, memory out on a device can be shared by multiple hosts simultaneously. This chart below shows the combination of device shared memory and coherent copies of shared regions enabled by CXL 3.0:
System and cluster designers will be able to mix and match memory pooling and memory sharing techniques with CXL 3.0. CXL 3.0 will allow for multiple layers of switches, too, which was not possible with CXL 2.0, and therefore you can imagine PCI-Express networks with various topologies and layers being able to lash together all kinds of devices and memories into switch fabrics. Spine/leaf networks common among hyperscalers and cloud builders are possible, including devices that just share their cache, devices that just share their memory, and devices that share their cache and memory. (That is Type 1, Type 3, and Type 2 in the CXL device nomenclature.)
The CXL fabric is what will be truly useful and what is enabled in the 3.0 specification. With a fabric, you get a software-defined, dynamic network of CXL-enabled devices instead of a static network set up with a specific topology linking specific CXL devices. Here is a simple example of a non-tree topology implemented in a fabric that was not possible with CXL 2.0:
And here is the neat bit. The CXL 3.0 fabric can stretch to 4,096 CXL devices. Now, ask yourself this: How many of the big iron NUMA systems and HPC or AI supercomputers in the world have more than 4,096 devices? Not as many as you think. And so, as we have been saying for years now, for a certain class of clustered systems, whether the nodes are loosely or tightly coupled at their memories, a PCI-Express fabric running CXL is just about all they are going to need for networking. Ethernet or InfiniBand will just be used to talk to the outside world. We would expect to see flash devices front-ended by DRAM as a fast cache as the hardware under storage clusters, too. (Optane 3D XPoint persistent memory is no longer an option. But there is always hope for some form of PCM memory or another form of ReRAM. Don’t hold your breath, though.)
As we sit here mulling all of this over, we can’t help thinking about how memory sharing might simplify the programming of HPC and AI applications, especially if there is enough compute in the shared memory to do some collective operations on data as it is processed. There are all kinds of interesting possibilities. . . .
Anyway, making CXL fabrics is going to be interesting, and it will be the heart of many system architectures. The trick will be sharing the memory to drive down the effective cost of DRAM – research by Microsoft Azure showed that on its cloud, memory capacity utilization was only an average of about 40 percent, and half of the VMs running never touched more than half of the memory allocated to their hypervisors from the underlying hardware – to pay for the flexibility that comes through CXL switching and composability for devices with memory and devices as memory.
What we want, and what we have always wanted, was a memory-centric systems architecture that allows all kinds of compute engines to share data in memory as it is being manipulated and to move that data as little as possible. This is the road to higher energy efficiency in systems, at least in theory. Within a few years, we will get to test this all out in practice, and it is legitimately exciting. All we need now is PCI-Express 7.0 two years earlier and we can have some real fun.
Old mainframe computers are interesting, especially to those of us who weren’t around to see them in action. We sit with old-timers and listen to their stories of the good ol’ days. They tell us about loading paper tape or giving instructions one at a time with toggle switches and LED output indicators. We hang on every word because it’s interesting to know how we got to this point in the tech-timeline and we appreciate the patience and insanity it must have taken to soldier on through the “good ol’ days”.
[Ken Shirriff] is making those good ol’ days come alive with a series of articles relating to his work with hardware at the Computer History Museum. His latest installment is an article describing the strange implementation of the IBM 1401’s qui-binary arithmetic. Full disclosure: It has not been confirmed that [Ken] is an “old-timer” however his article doesn’t help the argument that he isn’t.
[Ken] describes in thorough detail how the IBM 1401 — which was first introduced in 1959 — takes a decimal number as an input and operates on it one BCD digit at a time. Before performing the instruction, the BCD number is converted to qui-binary. Qui-binary is represented by 7 bits, 5 qui bits and 2 binary bits: 0000000. The qui portion represents the largest even number contained in the BCD value and the binary portion represents a 1 if the BCD value is odd or a 0 for even. For example, if the BCD number is 9 then the Q8 bit and the B1 bit are set, resulting in: 1000010.
The qui-binary representation makes for easy error checking since only one qui bit should be set and only one binary bit should be set. [Ken] goes on to explain more complex arithmetic and circuitry within the IBM 1401 in his post.
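The encoding [Ken] describes is easy to sketch in a few lines of Python. The bit ordering here (Q8 down to Q0, then B1 B0) is taken from the 1000010 example above:

```python
def bcd_to_quibinary(digit):
    """Convert a decimal digit (0-9) to the IBM 1401's 7-bit qui-binary
    form, as a string of bits in the order Q8 Q6 Q4 Q2 Q0 B1 B0."""
    if not 0 <= digit <= 9:
        raise ValueError("BCD digit must be 0-9")
    qui = [0] * 5                 # positions Q8, Q6, Q4, Q2, Q0
    qui[4 - digit // 2] = 1       # mark the largest even number <= digit
    b1, b0 = (1, 0) if digit % 2 else (0, 1)   # odd sets B1, even sets B0
    return "".join(map(str, qui)) + str(b1) + str(b0)

def is_valid_quibinary(bits):
    """Error check: exactly one qui bit and exactly one binary bit set."""
    return bits[:5].count("1") == 1 and bits[5:].count("1") == 1

print(bcd_to_quibinary(9))   # 1000010  (Q8 and B1, matching the example)
print(bcd_to_quibinary(0))   # 0000101  (Q0 and B0)
```

The `is_valid_quibinary` check is the software analog of the machine's error detection: any word with zero or two bits set in either field signals a fault.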
If you aren’t familiar with [Ken], we covered his reverse engineering of the Sinclair Scientific Calculator, his explanation of the TL431, and of course the core memory repair that is part of his Computer History Museum work.
Thanks for the tip [bobomb].
A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google, which began its post-quantum work a decade ago, was also quick to outline an aggressive implementation plan for its cloud services.
It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in one. Google is also among those who contributed to SPHINCS+.
A long process that started in 2016 with 69 original candidates ends with the selection of four algorithms that will become NIST standards, which will play a critical role in protecting encrypted data from the vast power of quantum computers.
NIST's four choices include CRYSTALS-Kyber, a public-private key-encapsulation mechanism (KEM) for general asymmetric encryption, such as when connecting to websites. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.
Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Lyubashevsky was predictably pleased by the algorithms selected, but he had only anticipated NIST would pick two digital signature candidates rather than three.
Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."
After NIST identified the algorithms, IBM moved forward by building them into its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST algorithms.
Because IBM had championed three of the algorithms NIST selected, it had already implemented them in the z16, which was unveiled before the NIST decision. Last week, the company made it official that the z16 supports the algorithms.
Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTALS-Kyber and CRYSTALS-Dilithium, according to Dames.
"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."
A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classic computing systems and quantum computers.
"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."
Dames noted that clients might use Dilithium to generate digital signatures on documents. "Think about code signing servers, things like that, or documents signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.
During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.
During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS had implemented an open source, hybrid post-quantum key exchange in s2n-tls, its implementation of the Transport Layer Security (TLS) protocol used across different AWS services. AWS has contributed it as a draft standard to the Internet Engineering Task Force (IETF).
Salter explained that the hybrid key exchange brings together its traditional key exchanges while enabling post-quantum security. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a public key exchange on top of it. And using both of those, you have traditional security, plus post quantum security."
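The combiner idea Salter describes can be sketched as an HKDF-style extract-and-expand: concatenate the classical and post-quantum shared secrets and derive one session key, so the session stays safe as long as either input secret remains unbroken. This is a conceptual sketch with placeholder byte strings standing in for real ECDH and Kyber outputs, not AWS's actual s2n-tls code:

```python
import hashlib
import hmac

def hybrid_shared_secret(classical_secret: bytes, pq_secret: bytes,
                         context: bytes = b"hybrid-tls-sketch") -> bytes:
    """Combine a classical (e.g. ECDH) and a post-quantum (e.g. Kyber)
    shared secret into one 32-byte session key, HKDF-style."""
    ikm = classical_secret + pq_secret    # both secrets feed the derivation
    # Extract: condense the input keying material into a pseudorandom key
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    # Expand: derive the session key, bound to a context label
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Placeholder secrets; a real TLS stack gets these from the handshake:
key = hybrid_shared_secret(b"ecdh-shared-secret", b"kyber-shared-secret")
print(key.hex())
```

The important property is that an attacker must break both key exchanges to recover the session key, which is what lets the hybrid mode be deployed before the post-quantum algorithms are fully battle-tested.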
Last week, Amazon announced that it deployed s2n-tls, the hybrid post-quantum TLS with CRYSTALS-Kyber, which connects to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon documented its support for AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.
While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.
"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."
Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.
Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.
Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."
The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.
"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like that," IBM's Dames said.
Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our nice progress on this process, as well as participating in that. And we see that as a very good sign."
The PACCAR IT Europe Delivery Center is responsible for application development, maintenance and support on both IBM Mainframe and Microsoft based systems. The applications cover all major business areas of DAF Trucks: Purchasing, Product Development, Truck Sales, Production, Logistics, Parts, After Sales and Finance. Examples of applications are:
• 3D DAF Truck Configurator
• The ITS (International Truck Service) application supporting the 24/7 call center of DAF ITS by handling roadside assistance requests throughout Europe.
• The European Parts System supports all PACCAR Parts dealers with ordering, invoices and return programs and the Parts Distribution Centers with order handling and delivery.
• The custom DAF application that securely transfers software and calibration settings into the embedded controllers in our trucks.
• The back-end systems that enable our Diagnostic Tools to perform diagnosis and software updates on trucks in both the manufacturing and aftermarket environment.
For implementing secure software testing in our development teams we need a software engineer who is able to:
• Design and implement the secure testing process:
o How and when to test
o Make use of CI/CD pipelines, or test manually where no pipelines are available
o How to handle test findings from test tool
o How to handle change documentation (test evidence)
• And has experience with software development (.NET environment)
• Design and implement secure test process
• Bachelor IT level and approximately 5 years of experience.
• C# (pre)
• Azure DevOps (VSTS)
• CI/CD pipelines (YAML)
• We're not looking specifically for a Developer but for someone who is going to design and implement Security Testing in the teams
• Level: medior/senior
• Job is full time
• Working on site at DAF ITD Eindhoven. Working from home is possible for a maximum of 2 days a week
PACCAR IT Europe is responsible for the development, support and management of information systems in Europe for both internal and external users of DAF Trucks, PACCAR Parts, PACCAR Financial and PacLease. Within IT Europe, more than 250 IT professionals are employed.
Systems-on-a-Chip (SoC) are becoming increasingly complex, leading to corresponding increases in the complexity and cost of SoC design and development. We propose to address this problem by introducing comprehensive change management. Change management, which is widely used in the software industry, involves controlling when and where changes can be introduced into components so that changes can be propagated quickly, completely, and correctly.
In this paper we address two main topics: One is typical scenarios in electronic design where change management can be supported and leveraged. The other is the specification of a comprehensive schema to illustrate the varieties of data and relationships that are important for change management in SoC design.
SoC designs are becoming increasingly complex. Pressures on design teams and project managers are rising because of shorter times to market, more complex technology, more complex organizations, and geographically dispersed multi-partner teams with varied “business models” and higher “cost of failure.”
Current methodology and tools for designing SoC need to evolve with market demands in key areas: First, multiple streams of inconsistent hardware (HW) and software (SW) processes are often integrated only in the late stages of a project, leading to unrecognized divergence of requirements, platforms, and IP, resulting in unacceptable risk in cost, schedule, and quality. Second, even within a stream of HW or SW, there is inadequate data integration, configuration management, and change control across life cycle artifacts. Techniques used for these are often ad hoc or manual, and the cost of failure is high. This makes it difficult for a distributed team to be productive and inhibits the early, controlled reuse of design products and IP. Finally, the costs of deploying and managing separate dedicated systems and infrastructures are becoming prohibitive.
We propose to address these shortcomings through comprehensive change management, which is the integrated application of configuration management, version control, and change control across software and hardware design. Change management is widely practiced in the software development industry. There are commercial change-management systems available for use in electronic design, such as MatrixOne DesignSync, ClioSoft SOS, IC Manage Design Management, and Rational ClearCase/ClearQuest, as well as numerous proprietary, “home-grown” systems. But to date change management remains an under-utilized technology in electronic design.
In SoC design, change management can help with many problems. For instance, when IP is modified, change management can help in identifying blocks in which the IP is used, in evaluating other affected design elements, and in determining which tests must be rerun and which rules must be re-verified. Or, when a new release is proposed, change management can help in assessing whether the elements of the release are mutually consistent and in specifying IP or other resources on which the new release depends.
More generally, change management gives the ability to analyze the potential impact of changes by tracing to affected entities and the ability to propagate changes completely, correctly, and efficiently. For design managers, this supports decision-making as to whether, when, and how to make or accept changes. For design engineers, it helps in assessing when a set of design entities is complete and consistent and in deciding when it is safe to make (or adopt) a new release.
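As a sketch of what such impact tracing might look like in practice, the following walks a dependency graph breadth-first to find every entity a change could reach. The entity names and edges are invented for illustration and do not come from any real design database:

```python
from collections import deque

# Hypothetical "affects" edges: an edge from X to Y means a change to X
# may impact Y (a block that instantiates it, a test that covers it, a
# release that includes it). All names are illustrative.
affects = {
    "usb_phy_ip": ["usb_block", "io_subsystem_tests"],
    "usb_block":  ["soc_top", "usb_block_tests"],
    "soc_top":    ["release_1.2"],
}

def impacted_by(entity, graph):
    """Breadth-first walk returning everything a change to `entity`
    may touch, directly or transitively."""
    seen, queue = set(), deque([entity])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(impacted_by("usb_phy_ip", affects)))
# ['io_subsystem_tests', 'release_1.2', 'soc_top', 'usb_block',
#  'usb_block_tests']
```

The result set is exactly the worklist a change-management system would present: the tests to rerun, the blocks to re-verify, and the releases whose consistency must be re-checked.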
In this paper we focus on two elements of this approach for SoC design. One is the specification of representative use cases in which change management plays a critical role. These show places in the SoC development process where information important for managing change can be gathered. They also show places where appropriate information can be used to manage the impact of change. The second element is the specification of a generic schema for modeling design entities and their interrelationships. This supports traceability among design elements, allows designers to analyze the impact of changes, and facilitates the efficient and comprehensive propagation of changes to affected elements.
The following section provides some background on a survey of subject-matter experts that we performed to refine the problem definition.
We surveyed some 30 IBM subject-matter experts (SMEs) in electronic design, change management, and design data modeling. They identified 26 problem areas for change management in electronic design. We categorized these as follows:
Major themes that crosscut these included:
We held a workshop with the SMEs to prioritize these problems, and two emerged as the most significant: First, the need for basic management of the configuration of all the design data and resources of concern within a project or work package (libraries, designs, code, tools, test suites, etc.); second, the need for designer visibility into the status of data and configurations in a work package.
To realize these goals, two basic kinds of information are necessary: 1) An understanding of how change management may occur in SoC design processes; 2) An understanding of the kinds of information and relationships needed to manage change in SoC design. We addressed the former by specifying change-management use cases; we addressed the latter by specifying a change-management schema.
3. USE CASES
This section describes typical use cases in the SoC design process. Change is a pervasive concern in these use cases—they cause changes, respond to changes, or depend on data and other resources that are subject to change. Thus, change management is integral to the effective execution of each of these use cases. We identified nine representative use cases in the SoC design process, which are shown in Figure 1.
Figure 1. Use cases in SoC design
In general there are four ways of initiating a project: New Project, Derive, Merge and Retarget. New Project is the case in which a new project is created from the beginning. The Derive case is initiated when a new business opportunity arises to base a new project on an existing design. The Merge case is initiated when an actor wants to merge configuration items during implementation of a new change management scheme or while working with teams/organizations outside of the current scheme. The Retarget case is initiated when a project is restructured due to resource or other constraints. In all of these use cases it is important to institute proper change controls from the outset. New Project starts with a clean slate; the other scenarios require changes from (or to) existing projects.
Once the project is initiated, the next phase is to update the design. There are two use cases in the Update Design composite state. New Design Elements addresses the original creation of new design elements. These become new entries in the change-management system. The Implement Change use case entails the modification of an existing design element (such as fixing a bug). It is triggered in response to a change request and is supported and governed by change-management data and protocols.
The next phase, Resolve Project, consists of three use cases. Backout is the use case by which changes made in the previous phase can be reversed. Release is the use case by which a project is released for cross-functional use. The Archive use case protects design assets by making a secure copy of the design and its environment.
4. CHANGE-MANAGEMENT SCHEMA
The main goal of the change-management schema is to enable the capture of all information that might contribute to change management. The schema, which is defined in the Unified Modeling Language (UML), consists of several high-level packages (Figure 2).
Figure 2. Packages in the change-management schema
Package Data represents types for design data and metadata. Package Objects and Data defines types for objects and data: objects are containers for information, and data represent the information itself. The main types of object include artifacts (such as files), features, and attributes. The types of objects and data defined here are important for change management because they represent the principal work products of electronic design: IP, VHDL and RTL specifications, floor plans, formal verification rules, timing rules, and so on. It is changes to these work products that most need to be managed.
The package Types defines types to represent the types of objects and data. This enables some types in the schema (such as those for attributes, collections, and relationships) to be defined parametrically in terms of other types, which promotes generality, consistency, and reusability of schema elements.
Package Attributes defines specific types of attribute. The basic attribute is just a name-value pair associated with an object. (More strongly typed subtypes of attribute have fixed names, value types, attributed-object types, or combinations of these.) Attributes are one of the main types of design data, and they are important for change management because they can represent the status or state of design elements (such as version number, verification level, or timing characteristics).
Package Collections defines types of collections, including collections with varying degrees of structure, typing, and constraints. Collections are important for change management in that changes must often be coordinated for collections of design elements as a group (e.g., for a work package, verification suite, or IP release). Collections are also used in defining other elements in the schema (for example, baselines and change sets).
The package Relationships defines types of relationships. The basic relationship type is an ordered collection of a fixed number of elements. Subtypes provide directionality, element typing, and additional semantics. Relationships are important for change management because they can define various types of dependencies among design data and resources. Examples include the use of macros in cores, the dependence of timing reports on floor plans and timing contracts, and the dependence of test results on tested designs, test cases, and test tools. Explicit dependency relationships support the analysis of change impact and the efficient and precise propagation of changes.
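As an illustration of how explicit dependency relationships support impact analysis, the following is a minimal Python sketch. The class and artifact names are hypothetical, chosen only to mirror the examples above; the paper's actual schema is defined in UML, not code.

```python
from dataclasses import dataclass, field

@dataclass
class DesignObject:
    """A container of information, e.g. a file artifact (hypothetical sketch)."""
    name: str
    attributes: dict[str, str] = field(default_factory=dict)  # name-value pairs

@dataclass(frozen=True)
class Dependency:
    """A directed binary relationship: `dependent` is derived from `source`."""
    source: str
    dependent: str

vhdl = DesignObject("core.vhdl", {"version": "1.3", "verification_level": "unit"})
deps = {Dependency("core.vhdl", "floorplan"),
        Dependency("floorplan", "timing_report")}

def impact(changed: str, deps: set[Dependency]) -> set[str]:
    """Every object reachable from `changed` along dependency edges."""
    hit: set[str] = set()
    frontier = [changed]
    while frontier:
        cur = frontier.pop()
        for d in deps:
            if d.source == cur and d.dependent not in hit:
                hit.add(d.dependent)
                frontier.append(d.dependent)
    return hit

print(sorted(impact("core.vhdl", deps)))  # ['floorplan', 'timing_report']
```

The transitive walk is what makes the dependencies useful: a change to the VHDL flags not only the floor plan derived from it but also the timing report derived from the floor plan.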
The package Specifications defines types of data specification and definition. Specifications specify an informational entity; definitions denote a meaning and are used in specifications.
Package Resources represents things (other than design data) that are used in design processes, for example, design tools, IP, design methods, and design engineers. Resources are important for change management in that resources are used in the actions that cause changes and in the actions that respond to changes. Indeed, minimizing the resources needed to handle changes is one of the goals of change management.
Resources are also important in that changes to a resource may require changes to design elements that were created using that resource (for example, changes to a simulator may require reproduction of simulation results).
Package Events defines types and instances of events. Events are important in change management because changes are a kind of event, and signals of change events can trigger processes to handle the change.
The package Actions provides a representation for things that are done, that is, for the behaviors or executions of tools, scripts, tasks, method steps, etc. Actions are important for change management in that actions cause changes. Actions can also be triggered in response to changes and can handle changes (such as by propagating changes to dependent artifacts).
Subpackage Action Definitions defines the type Action Execution, which contains information about a particular execution of a particular action. It refers to the definition of the action and to the specific artifacts and attributes read and written, resources used, and events generated and handled. Thus an action execution indicates particular artifacts and attributes that are changed, and it links those to the particular process or activity by which they were changed, the particular artifacts and attributes on which the changes were based, and the particular resources by which the changes were effected. Through this, particular dependency relationships can be established between the objects, data, and resources. This is the specific information needed to analyze and propagate concrete changes to artifacts, processes, and resources.
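The idea that an action execution both records a run and yields concrete dependencies can be sketched as follows. This is an illustrative Python rendering under assumed field names (action_name, inputs, outputs, resources), not the schema's actual UML definition.

```python
from dataclasses import dataclass, field

@dataclass
class ActionExecution:
    """One execution of a defined action (field names are hypothetical)."""
    action_name: str                 # reference to the action definition
    inputs: list[str]                # artifacts/attributes read
    outputs: list[str]               # artifacts/attributes written, i.e. changed
    resources: list[str]             # tools, engineers, etc. used
    events: list[str] = field(default_factory=list)  # events generated/handled

def derived_dependencies(ex: ActionExecution) -> list[tuple[str, str]]:
    """Each (input, output) pair is a concrete dependency established by this run."""
    return [(i, o) for i in ex.inputs for o in ex.outputs]

run = ActionExecution(
    action_name="compile",
    inputs=["core.vhdl"],
    outputs=["floorplan"],
    resources=["vhdl-compiler-2.1"],
)
print(derived_dependencies(run))  # [('core.vhdl', 'floorplan')]
```

Because the execution record names both what was read and what was written, the dependency falls out of the record itself rather than having to be declared separately.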
Package Baselines defines types for defining mutually consistent sets of design artifacts. Baselines are important for change management in several respects. The elements in a baseline must be protected from arbitrary changes that might disrupt their mutual consistency, and the elements in a baseline must be changed in mutually consistent ways in order to evolve a baseline from one version to another.
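Both properties of a baseline (protection from arbitrary change, evolution as a unit) can be sketched with an immutable collection that is evolved only by producing a successor. The names and fields below are hypothetical illustrations, not the schema's definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Baseline:
    """An immutable, mutually consistent set of (artifact, version) pairs."""
    label: str
    members: frozenset  # frozenset of (artifact name, version number) tuples

    def evolve(self, updates: dict, new_label: str) -> "Baseline":
        """Produce the next baseline by replacing member versions as a unit."""
        members = {(name, updates.get(name, ver)) for name, ver in self.members}
        return Baseline(new_label, frozenset(members))

# Evolving a baseline replaces versions together, never one at a time in place.
b1 = Baseline("rel-1", frozenset({("core.vhdl", 1), ("floorplan", 1)}))
b2 = b1.evolve({"core.vhdl": 2, "floorplan": 2}, "rel-2")
print(("core.vhdl", 2) in b2.members)  # True
```

Freezing the dataclass means a released baseline cannot be mutated after the fact; consistency is preserved because a new version of the set is created atomically.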
The final package in Figure 2 is the Change package. It defines types for representing change explicitly. These include managed objects, which are objects with an associated change log; change logs and change sets, which are types of collection that contain change records; and change records, which record specific changes to specific objects. A change record can include a reference to the action execution that caused the change.
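The relationship among managed objects, change logs, and change records can be sketched as below. The class names follow the package description, but the concrete fields are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChangeRecord:
    """Records one specific change to one object; may reference the causing action."""
    description: str
    action_execution: Optional[str] = None  # id of the causing execution, if known

@dataclass
class ManagedObject:
    """An object paired with its change log (hypothetical minimal form)."""
    name: str
    change_log: list = field(default_factory=list)  # list of ChangeRecord

    def record_change(self, description: str,
                      action_execution: Optional[str] = None) -> None:
        """Append a change record, linking it back to the action that caused it."""
        self.change_log.append(ChangeRecord(description, action_execution))

obj = ManagedObject("core.vhdl")
obj.record_change("fixed reset logic", action_execution="exec-42")
print(len(obj.change_log))  # 1
```

The back-reference from a change record to an action execution is what ties the what of a change (the record) to the how (the execution that produced it).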
The subpackage Change Requests includes types for modeling change requests and responses. A change request has a type, description, state, priority, and owner. It can have an associated action definition, which may be the definition of the action to be taken in processing the change request. A change request also has a change-request history log.
An example of the schema is shown in Figure 3. The clear boxes (upper part of the diagram) show general types from the schema, and the shaded boxes (lower part of the diagram) show types (and a few instances) defined for a specific high-level design process project at IBM.
Figure 3. Example of change-management data
The figure shows a dependency relationship between two types of design artifact, VHDLArtifact and FloorPlannableObjects. The relationship is defined in terms of a compiler that derives instances of FloorPlannableObjects from instances of VHDLArtifact. Execution of the compiler constitutes an action that defines the relationship. The specific schema elements are defined based on the general schema using a variety of object-oriented modeling techniques, including subtyping (e.g., VHDLArtifact), instantiation (e.g., Compile1), and parameterization (e.g., VHDLFloorplannableObjectsDependency).
5. USE CASE IMPLEMENT CHANGE
Here we present an example use case, Implement Change, with details on its activities and how the activities use the schema presented in Section 4. This use case is illustrated in Figure 4.
Figure 4. State diagram for use case Implement Change
The Implement Change use case addresses the modification of an existing design element (such as fixing a bug). It is triggered by a change request. The first steps of this use case are to identify and evaluate the change request to be handled. Then the relevant baseline is located, loaded into the engineer’s workspace, and verified. At this point the change can be implemented. This begins with the identification of the artifacts that are immediately affected. Then dependent artifacts are identified and changes propagated according to dependency relationships. (This may entail several iterations.) Once a stable state is achieved, the modified artifacts are tested and regression-tested. Depending on test results, more changes may be required. Once the change is considered acceptable, any learning and metrics from the process are captured, and the new artifacts and relationships are promoted to the public configuration space.
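The iterative core of this use case (modify, propagate, test, repeat until acceptable, then promote) can be sketched as a simple driver loop. Everything here is a hypothetical illustration of the control flow, not an implementation from the paper.

```python
def implement_change(modify, propagate, tests_pass, promote, max_rounds=10):
    """Drive one change request through modification, propagation, and test.

    modify()      -> set of immediately affected artifacts (edited this round)
    propagate(s)  -> push changes along dependency relationships to dependents
    tests_pass()  -> regression gate: True once the change is acceptable
    promote()     -> publish new artifacts to the public configuration space
    """
    for _ in range(max_rounds):
        changed = modify()        # edit the immediately affected artifacts
        propagate(changed)        # may itself trigger further edits next round
        if tests_pass():
            promote()
            return True
    return False                  # change could not be stabilized in time

# Toy run: a change that passes its tests on the second round.
state = {"round": 0}
def modify():
    state["round"] += 1
    return {"core.vhdl"}

ok = implement_change(modify, lambda c: None,
                      lambda: state["round"] >= 2, lambda: None)
print(ok)  # True
```

The `max_rounds` bound reflects the note above that propagation may take several iterations; a real flow would escalate rather than loop forever if the change never stabilizes.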
This paper explores the role of comprehensive change management in SoC design, development, and delivery. Based on the comments of over thirty experienced electronic design engineers from across IBM, we have captured the essential problems and motivations for change management in SoC projects. We have described design scenarios, highlighting places where change management applies, and presented a preliminary schema to show the range of data and relationships change management may incorporate. Change management can benefit both design managers and engineers. It is increasingly essential for improving productivity and reducing time and cost in SoC projects.
Contributions to this work were also made by Nadav Golbandi and Yoav Rubin of IBM’s Haifa Research Lab. Much information and guidance were provided by Jeff Staten and Bernd-Josef Huettl of IBM’s Systems and Technology Group. We especially thank Richard Bell, John Coiner, Mark Firstenberg, Andrew Mirsky, Gary Nusbaum, and Harry Reindel of IBM’s Systems and Technology Group for sharing design data and experiences. We are also grateful to the many other people across IBM who contributed their time and expertise.
Agreement established to provide comprehensive design platform down to 14nm targeting low-power, high-performance for advanced consumer electronics
SANTA CLARA, Calif.--January 17, 2011--ARM® [(LSE: ARM); (Nasdaq: ARMH)] and IBM (NYSE: IBM) today announced an agreement between the two companies to extend their collaboration on advanced semiconductor technologies to enable the rapid development of next-generation mobile products optimized for performance and power efficiency. The resulting technology will provide a suite of optimized physical and processor IP by ARM tuned to IBM’s advanced manufacturing process down to 14nm, providing streamlined development and earlier introduction of advanced consumer electronics into the marketplace.
As consumers’ requirements for high-end features on mobile devices increase, including extended battery life, uninterrupted internet access, high-end multimedia, and secure transactions, chip design becomes increasingly challenging. Designers must consider nanometer-scale effects in terms of lithography, variability, and so on while simultaneously meeting performance, power and area (PPA) targets across hundreds of millions of transistors. This increased complexity potentially results in additional in-house design time.
Through this agreement, ARM and IBM will collaboratively develop design platforms aligning the manufacturing process, microprocessor, and physical IP design teams. This collaboration will minimize the risk and barriers to migrating to smaller geometries while enabling optimized density, performance, power, and yield in advanced SoC designs, accelerating the introduction of advanced electronics into the marketplace.
“The ARM Cortex processor family has become the leadership platform for the majority of smart phones and many other emerging mobile devices,” said Michael Cadigan, general manager, IBM Microelectronics. “We plan to continue working closely with ARM and our foundry customers to speed the momentum of ARM technology by delivering highly advanced, low-power semiconductor technology for a variety of new communications and computing devices.”
“IBM has a proven track record of delivering the core research and development that is relied upon by major semiconductor vendors worldwide for their advanced semiconductor devices. Their leadership of the ISDA alliance, which features a diverse set of top-tier companies as members, is growing in importance as consolidation trends in the semiconductor manufacturing industry continue,” said Simon Segars, EVP and general manager, ARM physical IP division. “This agreement will ensure we are able to deliver highly tuned ARM Artisan Physical IP solutions on advanced ISDA process technologies to meet the early time-to-market our customers demand.”
Collaboration between IBM and ARM on advanced geometries has been underway since 2008, resulting in the implementation of extensive process and physical IP refinements to improve SoC density, routability, manufacturability, power consumption, and performance. Moreover, through the previous collaboration on the 32nm and 28nm nodes, ARM has already delivered 11 test chips that provide concrete research structures and early silicon validation. In addition, ARM has developed specific optimizations targeting ARM processor cores, including most recently a complete ARM Cortex-A9 processor core implemented on 32nm High-K Metal Gate technology.
Today’s agreement reinforces the aligned co-development of semiconductor process, foundation Physical IP building blocks, and microprocessor core optimization necessary to achieve market-leading system-on-chip (SoC) solutions. Furthermore, it extends access for ARM to continue this systematic test chip roadmap and assure early time-to-market readiness of the necessary platform of physical and processor IP solutions for nodes ranging from 20nm through 14nm.
ARM designs the technology that is at the heart of advanced digital products, from wireless, networking and consumer entertainment solutions to imaging, automotive, security and storage devices. ARM’s comprehensive product offering includes 32-bit RISC microprocessors, graphics processors, video engines, enabling software, cell libraries, embedded memories, high-speed connectivity products, peripherals and development tools. Combined with comprehensive design services, training, support and maintenance, and the company’s broad Partner community, they provide a total system solution that offers a fast, reliable path to market for leading electronics companies.
Players don't actually write code or draw models in the game, but it does take users through many of the same thought processes that one would follow in creating business process models. Unfortunately, there is no combat in the game, nor can you "hijack people's cube and take their PCs for joyrides. Oh, and kick the crap out the guys walking the halls on their cell phone," as one YouTuber commented. :)
Kali Durgampudi, the chief technology officer of healthcare payments company Zelis, believes that the implementation of blockchain tech is vital for protecting patients’ sensitive data from cybercriminals.
Speaking with Health IT News on Wednesday, Durgampudi noted that some of the biggest issues in healthcare are privacy and data security as the industry works to digitize its “archaic paper-based processes.”
“Blockchain technology has the potential to alleviate many of these concerns,” he said, as he highlighted the importance of utilizing a digital ledger that is “impenetrable” to protect sensitive patient and financial data amid the growing rate of cyberattacks across the globe:
“Since the information cannot be modified or copied, blockchain technology vastly reduces security risks, giving hospital and healthcare IT organizations a much stronger line of defense against cybercriminals.”
Durgampudi went on to note that blockchain tech can also play a key role in healthcare payments, as it can help provide greater transparency and efficiency over current payment models in healthcare. He said that many payers and providers were hesitant to share information via email, as emails could go awry and there was no proof of delivery.
“Blockchain provides both payers and providers with complete visibility into the entire lifecycle of a claim, from the patient registering at the front desk to disputing a cost to sending an explanation of benefits,” he added.
One of the major companies that has worked on blockchain-based healthcare solutions is multinational tech giant IBM.
The blockchain arm of the company has rolled out several solutions for healthcare, such as health credential verification, the “Trust Your Supplier” service to find verified suppliers and “Blockchain Transparent Supply,” which provides supply chain tracking on temperature-controlled pharmaceuticals.
In March 2021, Cointelegraph reported that IBM was working on a trial of a COVID-19 vaccination passport dubbed the “Excelsior Pass” in partnership with former New York Governor Andrew Cuomo. The passport was designed to verify an individual's vaccination or test results via IBM’s blockchain.
Another key player in the blockchain-based healthcare space is enterprise blockchain VeChain. In June last year, the project teamed up with Shanghai’s Renji Hospital to launch a blockchain-based in-vitro fertilization (IVF) service application.
VeChain also partnered with San Marino in July 2021 to launch a nonfungible token (NFT)-based vaccination passport that was said to be verifiable worldwide by scanning QR codes tied to the certificate.
David Jia, a blockchain investor with a Ph.D. in neuroscience from Oxford University, echoed similar sentiments to Durgampudi this week.
In a Thursday blog post on Medium, Jia emphasized that blockchain tech could significantly improve drug traceability and verification, along with the data management of clinical trials, patient info and claiming/billing.
“Accuracy in medical records over the long term as well as accessibility is essential, as it is necessary for an individual’s record to be able to be transferred between providers, insurance companies, and certified with relative ease. If medical records are stored on a blockchain, they may be updated safely in almost real-time,” he wrote.