A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google was also quick to point to the aggressive implementation work for its cloud services that it began a decade ago.
It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in two. Google contributed to one of the submitted algorithms, SPHINCS+.
The long process, which started in 2016 with 69 original candidates, ended with the selection of four algorithms that will become NIST standards and will play a critical role in protecting encrypted data from the vast power of quantum computers.
NIST's four choices include CRYSTALS-Kyber, a public-private key-encapsulation mechanism (KEM) for general asymmetric encryption, such as when connecting to websites. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.
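As a KEM, Kyber is used to establish a shared symmetric key rather than to encrypt messages directly: the sender runs an encapsulation against the recipient's public key to get a ciphertext plus a shared secret, and the recipient decapsulates the ciphertext to recover the same secret. Kyber itself isn't in Python's standard library, so the sketch below only illustrates that three-call pattern, with X25519 standing in for the lattice math; the function names are illustrative, not any real Kyber API.

```python
# Sketch of the KEM call pattern (keygen / encapsulate / decapsulate).
# X25519 stands in for Kyber's lattice math; requires the "cryptography" package.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def _kdf(raw: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"demo-kem").derive(raw)

def keygen():
    sk = X25519PrivateKey.generate()
    return sk, sk.public_key()

def encapsulate(recipient_pk: X25519PublicKey):
    # Sender side: produce a ciphertext plus a fresh shared secret.
    eph = X25519PrivateKey.generate()
    secret = _kdf(eph.exchange(recipient_pk))
    ciphertext = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return ciphertext, secret

def decapsulate(sk: X25519PrivateKey, ciphertext: bytes) -> bytes:
    # Recipient side: recover the same secret from the ciphertext.
    return _kdf(sk.exchange(X25519PublicKey.from_public_bytes(ciphertext)))

sk, pk = keygen()
ct, sender_secret = encapsulate(pk)
assert decapsulate(sk, ct) == sender_secret  # both sides now share one key
```

Swapping the stand-in for a real Kyber implementation would leave the calling pattern unchanged, which is one reason hybrid deployments can layer a post-quantum KEM alongside existing key exchanges.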
Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Lyubashevsky was predictably pleased by the algorithms selected, but he had anticipated NIST would pick only two digital-signature candidates rather than three.
Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."
After NIST identified the algorithms, IBM moved forward by supporting them in its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST-selected algorithms.
IBM had championed three of the algorithms that NIST selected and had already built them into the z16, which was unveiled before the NIST decision. Last week the company made it official that the z16 supports the selected algorithms.
Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTALS-Kyber and CRYSTALS-Dilithium, according to Dames.
"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."
A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classical computing systems and quantum computers.
"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."
Dames noted that clients might use Dilithium to generate digital signatures on documents. "Think about code signing servers, things like that, or document signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.
During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.
During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS has implemented an open source, hybrid post-quantum key exchange in s2n-tls, its implementation of the Transport Layer Security (TLS) protocol used across different AWS services. AWS has contributed the hybrid design to the Internet Engineering Task Force (IETF) as a draft standard.
Salter explained that the hybrid key exchange pairs the traditional key exchanges AWS already uses with a post-quantum one. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a public key exchange on top of it. And using both of those, you have traditional security, plus post quantum security."
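Mechanically, a hybrid exchange feeds both negotiated secrets into one key derivation, so an attacker would have to break both the classical and the post-quantum exchange to recover the session key. A minimal sketch of that combination step, with placeholder byte strings standing in for the two secrets (the actual s2n-tls construction differs in its details):

```python
# Hedged sketch: derive one session key from a classical secret plus a
# post-quantum secret, so security holds if either exchange survives.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

classical_secret = b"\x01" * 32   # e.g., from an ECDHE exchange (placeholder)
pq_secret = b"\x02" * 32          # e.g., from a Kyber encapsulation (placeholder)

session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-tls-demo",
).derive(classical_secret + pq_secret)  # concatenate both, then derive
```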
Last week, Amazon announced that hybrid post-quantum TLS with CRYSTALS-Kyber is deployed in s2n-tls for connections to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon extended that support to AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.
While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.
"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."
Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.
Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.
Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."
The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.
"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like things like that," IBM's Dames said.
Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our nice progress on this process, as well as participating in that. And we see that as a very good sign."
Hitoshi Kume, a recipient of the 1989 Deming Prize for use of quality principles, defines problems as "undesirable results of a job." Quality improvement efforts work best when problems are addressed systematically using a consistent and analytic approach; the methodology shouldn't change just because the problem changes. Keeping the steps to problem-solving simple allows workers to learn the process and how to use the tools effectively.
Easy to implement and follow up, the most commonly used and well-known quality process is the plan/do/check/act (PDCA) cycle (Figure 1). Other processes are a takeoff of this method, much in the way that computers today are takeoffs of the original IBM system. The PDCA cycle promotes continuous improvement and should thus be visualized as a spiral instead of a closed circle.
Another popular quality improvement process is the six-step PROFIT model in which the acronym stands for:
P = Problem definition.
R = Root cause identification and analysis.
O = Optimal solution based on root cause(s).
F = Finalize how the corrective action will be implemented.
I = Implement the plan.
T = Track the effectiveness of the implementation and verify that the desired results are met.
If the desired results are not met, the cycle is repeated. Both the PDCA and the PROFIT models can be used for problem solving as well as for continuous quality improvement. In companies that follow total quality principles, whichever model is chosen should be used consistently in every department or function in which quality improvement teams are working.
Figure 1. The most common process for quality improvement is the plan/do/check/act cycle outlined above. The cycle promotes continuous improvement and should be thought of as a spiral, not a circle.
Once the basic problem-solving or quality improvement process is understood, the addition of quality tools can make the process proceed more quickly and systematically. Seven simple tools can be used by any professional to ease the quality improvement process: flowcharts, check sheets, Pareto diagrams, cause and effect diagrams, histograms, scatter diagrams, and control charts. (Some books describe a graph instead of a flowchart as one of the seven tools.)
The concept behind the seven basic tools came from Kaoru Ishikawa, a renowned quality expert from Japan. According to Ishikawa, 95% of quality-related problems can be resolved with these basic tools. The key to successful problem resolution is the ability to identify the problem, use the appropriate tools based on the nature of the problem, and communicate the solution quickly to others. Inexperienced personnel might do best by starting with the Pareto chart and the cause and effect diagram before tackling the use of the other tools. Those two tools are used most widely by quality improvement teams.
Flowcharts describe a process in as much detail as possible by graphically displaying the steps in proper sequence. A good flowchart should show all process steps under analysis by the quality improvement team, identify critical process points for control, suggest areas for further improvement, and help explain and solve a problem.
The flowchart in Figure 2 illustrates a simple production process in which parts are received, inspected, and sent to subassembly operations and painting. After completing this loop, the parts can be shipped as subassemblies after passing a final test or they can complete a second cycle consisting of final assembly, inspection and testing, painting, final testing, and shipping.
Figure 2. A basic production process flowchart displays several paths a part can travel from the time it hits the receiving dock to final shipping.
Flowcharts can be simple, such as the one featured in Figure 2, or they can be made up of numerous boxes, symbols, and if/then directional steps. In more complex versions, flowcharts indicate the process steps in the appropriate sequence, the conditions in those steps, and the related constraints by using elements such as arrows, yes/no choices, or if/then statements.
Check sheets help organize data by category. They show how many times each particular value occurs, and their information is increasingly helpful as more data are collected. More than 50 observations should be available to be charted for this tool to be really useful. Check sheets minimize clerical work since the operator merely adds a mark to the tally on the prepared sheet rather than writing out a figure (Figure 3). By showing the frequency of a particular defect (e.g., in a molded part) and how often it occurs in a specific location, check sheets help operators spot problems. The check sheet example shows a list of molded part defects on a production line covering a week's time. One can easily see where to set priorities based on results shown on this check sheet. Assuming the production flow is the same on each day, the part with the largest number of defects carries the highest priority for correction.
Figure 3. Because it clearly organizes data, a check sheet is the easiest way to track information.
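The tallying that a check sheet captures by hand is straightforward to reproduce in code; a minimal sketch with made-up defect observations:

```python
# Minimal check-sheet tally: count how often each defect category occurs.
from collections import Counter

observations = ["flash", "short shot", "flash", "sink mark",
                "flash", "short shot", "burn mark"]   # made-up data
tally = Counter(observations)
for defect, count in tally.most_common():
    print(f"{defect:12s} {'|' * count}  ({count})")   # simple tally marks
```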
The Pareto diagram is named after Vilfredo Pareto, a 19th-century Italian economist who postulated that a large share of wealth is owned by a small percentage of the population. This basic principle translates well into quality problems—most quality problems result from a small number of causes. Quality experts often refer to the principle as the 80-20 rule; that is, 80% of problems are caused by 20% of the potential sources.
A Pareto diagram puts data in a hierarchical order (Figure 4), which allows the most significant problems to be corrected first. The Pareto analysis technique is used primarily to identify and evaluate nonconformities, although it can summarize all types of data. It is perhaps the diagram most often used in management presentations.
Figure 4. By rearranging random data, a Pareto diagram identifies and ranks nonconformities in the quality process in descending order.
To create a Pareto diagram, the operator collects random data, regroups the categories in order of frequency, and creates a bar graph based on the results.
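As a sketch of that procedure, assuming matplotlib is installed and using made-up defect counts, the categories are sorted by frequency and a cumulative-percentage line is overlaid:

```python
# Pareto diagram sketch: bars sorted by frequency plus a cumulative-percent line.
import matplotlib.pyplot as plt

defects = {"flash": 41, "short shot": 22, "sink mark": 9,
           "burn mark": 5, "warp": 3}                       # made-up counts
items = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
labels, counts = zip(*items)
total = sum(counts)
cumulative = [sum(counts[:i + 1]) / total * 100 for i in range(len(counts))]

fig, ax = plt.subplots()
ax.bar(labels, counts)
ax2 = ax.twinx()                       # second axis for cumulative percentage
ax2.plot(labels, cumulative, marker="o", color="red")
ax2.set_ylim(0, 100)
ax.set_ylabel("Frequency")
ax2.set_ylabel("Cumulative %")
plt.show()
```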
Cause and effect diagrams
The cause and effect diagram is sometimes called an Ishikawa diagram after its inventor. It is also known as a fish bone diagram because of its shape. A cause and effect diagram describes a relationship between variables. The undesirable outcome is shown as the effect, and related causes are shown as leading to, or potentially leading to, that effect. This popular tool has one severe limitation, however, in that users can overlook important, complex interactions between causes. Thus, if a problem is caused by a combination of factors, it is difficult to use this tool to depict and solve it.
A fish bone diagram displays all contributing factors and their relationships to the outcome to identify areas where data should be collected and analyzed. The major areas of potential causes are shown as the main bones, e.g., materials, methods, people, measurement, machines, and design (Figure 5). Later, the subareas are depicted. Thorough analysis of each cause can eliminate causes one by one, and the most probable root cause can be selected for corrective action. Quantitative information can also be used to prioritize means for improvement, whether it be to machine, design, or operator.
Figure 5. Fish bone diagrams display the various possible causes of the final effect. Further analysis can prioritize them.
The histogram plots data in a frequency distribution table. What distinguishes the histogram from a check sheet is that its data are grouped into intervals so that the identity of individual values is lost. Commonly used to present quality improvement data, histograms work best with small amounts of data that vary considerably. When used in process capability studies, histograms can display specification limits to show what portion of the data does not meet the specifications.
After the raw data are collected, they are grouped in value and frequency and plotted in a graphical form (Figure 6). A histogram's shape shows the nature of the distribution of the data, as well as central tendency (average) and variability. Specification limits can be used to display the capability of the process.
Figure 6. A histogram is an easy way to see the distribution of the data, its average, and variability.
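A sketch of such a plot, using simulated measurements and assumed specification limits (matplotlib assumed installed):

```python
# Histogram sketch with specification limits overlaid (simulated measurements).
import random
import matplotlib.pyplot as plt

random.seed(1)
lengths = [random.gauss(25.0, 0.15) for _ in range(100)]  # simulated data
lsl, usl = 24.6, 25.4                                     # assumed spec limits

plt.hist(lengths, bins=12)
plt.axvline(lsl, color="red", linestyle="--", label="LSL")
plt.axvline(usl, color="red", linestyle="--", label="USL")
plt.xlabel("Length (mm)")
plt.ylabel("Frequency")
plt.legend()
plt.show()
```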
A scatter diagram shows how two variables are related and is thus used to test for cause and effect relationships. It cannot prove that one variable causes the change in the other, only that a relationship exists and how strong it is. In a scatter diagram, the horizontal (x) axis represents the measurement values of one variable, and the vertical (y) axis represents the measurements of the second variable. Figure 7 shows part clearance values on the x-axis and the corresponding quantitative measurement values on the y-axis.
Figure 7. The plotted data points in a scatter diagram show the relationship between two variables.
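A sketch with simulated clearance and response pairs, plus the correlation coefficient that quantifies how strong the relationship is; remember that even a strong correlation does not establish cause and effect:

```python
# Scatter diagram sketch: plot paired measurements, then summarize the strength
# of the relationship with the Pearson correlation coefficient.
import random
import matplotlib.pyplot as plt

random.seed(2)
clearance = [round(0.5 + 0.05 * i, 2) for i in range(30)]    # x variable
response = [0.8 * c + random.gauss(0, 0.05) for c in clearance]  # y variable

plt.scatter(clearance, response)
plt.xlabel("Part clearance")
plt.ylabel("Measured response")
plt.show()

n = len(clearance)
mx, my = sum(clearance) / n, sum(response) / n
cov = sum((x - mx) * (y - my) for x, y in zip(clearance, response))
sx = sum((x - mx) ** 2 for x in clearance) ** 0.5
sy = sum((y - my) ** 2 for y in response) ** 0.5
print(f"r = {cov / (sx * sy):.2f}")  # near +1 here: strong positive relation
```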
A control chart displays statistically determined upper and lower limits drawn on either side of a process average. This chart shows if the collected data are within upper and lower limits previously determined through statistical calculations of raw data from earlier trials.
The construction of a control chart is based on statistical principles and statistical distributions, particularly the normal distribution. When used in conjunction with a manufacturing process, such charts can indicate trends and signal when a process is out of control. The center line of a control chart represents an estimate of the process mean; the upper and lower critical limits are also indicated. The process results are monitored over time and should remain within the control limits; if they do not, an investigation is conducted for the causes and corrective action taken. A control chart helps determine variability so it can be reduced as much as is economically justifiable.
In preparing a control chart, the mean, upper control limit (UCL), and lower control limit (LCL) of an approved process and its data are calculated. A blank control chart with the mean, UCL, and LCL but no data points is created; data points are added as they are statistically calculated from the raw data.
Figure 8. Data points that fall outside the upper and lower control limits lead to investigation and correction of the process.
Figure 8 is based on 25 samples or subgroups. For each sample, which in this case consisted of five rods, measurements are taken of a quality characteristic (in this example, length). These data are then grouped in table form (as shown in the figure), and the average and range of each subgroup are calculated, as are the grand average and the average of all ranges. These figures are used to calculate the UCL and LCL: the control limits are the grand average plus or minus A2 times the average range (R-bar), where A2 is a constant determined by the table of constants for variable control charts. The constant is based on the subgroup sample size, which is five in this example.
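The arithmetic is easy to verify in a few lines. A sketch using made-up subgroup data and the standard table value A2 = 0.577 for subgroups of five:

```python
# X-bar control-limit sketch for subgroups of size 5.
# A2 = 0.577 is the standard table constant for n = 5.
subgroups = [
    [25.1, 24.9, 25.0, 25.2, 24.8],   # made-up rod lengths, one subgroup each
    [24.9, 25.0, 25.1, 24.9, 25.0],
    [25.2, 25.1, 24.8, 25.0, 25.1],
]
A2 = 0.577

xbars = [sum(s) / len(s) for s in subgroups]      # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges
grand_avg = sum(xbars) / len(xbars)               # grand average
rbar = sum(ranges) / len(ranges)                  # average range

ucl = grand_avg + A2 * rbar
lcl = grand_avg - A2 * rbar
print(f"center={grand_avg:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
for i, x in enumerate(xbars, 1):
    flag = "" if lcl <= x <= ucl else "  <-- investigate"
    print(f"subgroup {i}: {x:.3f}{flag}")
```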
Many people in the medical device manufacturing industry are undoubtedly familiar with many of these tools and know their application, advantages, and limitations. However, manufacturers must ensure that these tools are in place and being used to their full advantage as part of their quality system procedures. Flowcharts and check sheets are most valuable in identifying problems, whereas cause and effect diagrams, histograms, scatter diagrams, and control charts are used for problem analysis. Pareto diagrams are effective for both areas. By properly using these tools, the problem-solving process can be more efficient and more effective.
Those manufacturers who have mastered the seven basic tools described here may wish to further refine their quality improvement processes. A future article will discuss seven new tools: relations diagrams, affinity diagrams (K-J method), systematic diagrams, matrix diagrams, matrix data diagrams, process decision programs, and arrow diagrams. These seven tools are used less frequently and are more complicated.
Ashweni Sahni is director of quality and regulatory affairs at Minnetronix, Inc. (St. Paul, MN), and a member of MD&DI's editorial advisory board.
IBM has come up with an automatic debating system called Project Debater that researches a topic, presents an argument, listens to a human rebuttal and formulates its own rebuttal. But does it pass the Turing test? Or does the Turing test matter anymore?
The Turing test was first introduced in 1950, often cited as year one for AI research. It asks, “Can machines think?” Today we’re more interested in machines that can intelligently make restaurant recommendations, drive our car along the tedious highway to and from work, or identify the surprising-looking flower we just stumbled upon. These all fit the definition of AI as a machine that can perform a task normally requiring the intelligence of a human. Though as you’ll see below, Turing’s test wasn’t really a test of intelligence, or even of thinking, but of determining a test subject’s sex.
The Turing test as we know it today is to see if a machine can fool someone into thinking that it’s a human. It involves an interrogator and a machine, with the machine hidden from the interrogator. The interrogator asks questions of the machine using only a keyboard and screen. The purpose of the interrogator’s questions is to help him decide if he’s talking to a machine or a human. If he can’t tell, then the machine passes the Turing test.
Often the test is done with a number of interrogators, and the measure of success is the percentage of interrogators who can’t tell. In one example, to give the machine an advantage, the test was to tell if it was a machine or a 13-year-old Ukrainian boy. The young age excused much of the strangeness in its conversation. It fooled 33% of the interrogators.
Naturally Turing didn’t call his test “the Turing test”. Instead he called it the imitation game, since the goal was to imitate a human. In Turing’s paper, he gives two versions of the test. The first involves three people, the interrogator, a man and a woman. The man and woman sit in a separate room from the interrogator and the communication at Turing’s time was ideally via teleprinter. The goal is for the interrogator to guess who is male and who is female. The man’s goal is to fool the interrogator into making the wrong decision and the woman’s is to help him make the right one.
The second test in Turing’s paper replaces the woman with a machine but the machine is now the deceiver and the man tries to help the interrogator make the right decision. The interrogator still tries to guess who is male and who is female.
But don’t let that goal fool you. The real purpose of the game was as a replacement for his question of “Can a machine think?”. If the game was successful then Turing figured that his question would have been answered. Today, we’re both more sophisticated about what constitutes “thinking” and “intelligence”, and we’re also content with the machine displaying intelligent behavior, whether or not it’s “thinking”. To unpack all this, let’s take IBM’s latest Project Debater under the microscope.
IBM’s Project Debater is an example of what we’d call a composite AI as opposed to a narrow AI. An example of narrow AI would be to present an image to a neural network and the neural network would label objects in that image, a narrowly defined task. A composite AI, however, performs a more complex task requiring a number of steps, much more akin to a human brain.
Project Debater is first given the motion to be argued. You can read the paper on IBM’s webpage for the details of what it does next, but basically it spends 15 minutes researching and formulating a 4-minute opening speech supporting one side of the motion. It also converts the speech to natural language and delivers it to an audience. During those initial 15 minutes, it also compiles leads for the opposing argument and formulates responses in preparation for its later rebuttal. It then listens to its opponent’s rebuttal, converting it to text using IBM’s own Watson speech-to-text. It analyzes the text and, in combination with the responses it had previously formulated, comes up with its own 4-minute rebuttal. It converts that to speech and ends with a 2-minute summary speech.
All of those steps, some of them considered narrow AI, add up to a composite AI. The whole is done with neural networks along with conventional data mining, processing, and analysis.
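The distinction is easiest to see in code: a composite system is essentially a pipeline of narrow components. A toy sketch of that structure follows; the stage names and bodies are invented placeholders, not IBM’s actual pipeline.

```python
# Toy composite-AI pipeline: each function is a "narrow" component, and
# chaining them yields the composite behavior. All bodies are placeholders.
def research(motion: str) -> list[str]:
    return [f"evidence supporting '{motion}'"]      # stands in for corpus mining

def draft_speech(evidence: list[str]) -> str:
    return "Opening: " + "; ".join(evidence)        # stands in for text generation

def speech_to_text(audio: bytes) -> str:
    return audio.decode("utf-8")                    # stands in for Watson STT

def rebut(opponent: str, prepared: list[str]) -> str:
    return f"Against '{opponent}', consider: {prepared[0]}"

motion = "We should subsidize preschool"
evidence = research(motion)                          # 15-minute research phase
print(draft_speech(evidence))                        # 4-minute opening speech
opponent_text = speech_to_text(b"Preschool subsidies are too costly")
print(rebut(opponent_text, evidence))                # 4-minute rebuttal
```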
The following video is of a live debate between Project Debater and Harish Natarajan, world record holder for the number of debate competitions won. Judge for yourself how well it works.
Does Project Debater pass the Turing test? It didn’t take the formal test; however, you can judge for yourself by imagining reading a transcript of what Project Debater had to say. Could you tell whether it was produced by a machine or a human? If you could mistake it for a human, then it may pass the Turing test. It also responds to the human debater’s argument, similar to answering questions in the Turing test.
Keep in mind though that Project Debater had 15 minutes to prepare for the opening speech and no numbers are given on how long it took to come up with the other speeches, so if time-to-answer is a factor then it may lose there. But does it matter?
Does it matter if any of today’s AIs can pass the Turing test? That’s most often not the goal. Most AIs end up as marketed products, even the ones that don’t start out that way. After all, eventually someone has to pay for the research. As long as they do the job then it doesn’t matter.
IBM’s goal for Project Debater is to produce persuasive arguments and make well informed decisions free of personal bias, a useful tool to sell to businesses and governments. Tesla’s goal for its AI is to drive vehicles. Chatbots abound for handling specific phone and online requests. All of them do something normally requiring the intelligence of a human with varying degrees of success. The test that matters then is whether or not they do their tasks well enough for people to pay for them.
Maybe asking if a machine can think, or even if it can pass for a human, isn’t really relevant. The ways we’re using them require only that they can complete their tasks. Sometimes this can require “human-like” behavior, but most often not. If we’re not using AI to trick people anyway, is the Turing test still relevant?
The PACCAR IT Europe Delivery Center is responsible for application development, maintenance and support on both IBM Mainframe and Microsoft based systems. The applications cover all major business areas of DAF Trucks: Purchasing, Product Development, Truck Sales, Production, Logistics, Parts, After Sales and Finance. Examples of applications are:
• 3D DAF Truck Configurator
• The ITS (International Truck Service) application supporting the 24/7 call center of DAF ITS by handling roadside assistance requests throughout Europe.
• The European Parts System supports all PACCAR Parts dealers with ordering, invoices and return programs and the Parts Distribution Centers with order handling and delivery.
• The custom DAF application that securely transfers software and calibration settings into the embedded controllers in our trucks.
• The back-end systems that enable our Diagnostic Tools to perform diagnosis and software updates on trucks in both the manufacturing and aftermarket environment.
For implementing secure software testing in our development teams we need a software engineer who is able to:
• Design and implement the secure testing process:
o How and when to test
o Make use of CI/CD pipelines, or test manually where no pipelines are available
o How to handle test findings from the test tool
o How to handle change documentation (test evidence)
• Has experience with software development (.NET environment)
• Design and implement secure test process
• Bachelor level in IT and approximately 5 years' experience.
• C# (preferred)
• Azure DevOps (VSTS)
• CI/CD pipelines (YAML)
• We're not looking specifically for a Developer but for someone who is going to design and implement Security Testing in the teams
• Level: medior/senior
• Job is full time
• Working on site at DAF ITD Eindhoven. Working from home is possible for a maximum of 2 days a week
PACCAR IT Europe is responsible for the development, support and management of information systems in Europe for both internal and external users of DAF Trucks, PACCAR Parts, PACCAR Financial and PacLease. Within IT Europe, more than 250 IT professionals are employed.
Systems-on-a-Chip (SoC) are becoming increasingly complex, leading to corresponding increases in the complexity and cost of SoC design and development. We propose to address this problem by introducing comprehensive change management. Change management, which is widely used in the software industry, involves controlling when and where changes can be introduced into components so that changes can be propagated quickly, completely, and correctly.
In this paper we address two main topics: One is typical scenarios in electronic design where change management can be supported and leveraged. The other is the specification of a comprehensive schema to illustrate the varieties of data and relationships that are important for change management in SoC design.
SoC designs are becoming increasingly complex. Pressures on design teams and project managers are rising because of shorter times to market, more complex technology, more complex organizations, and geographically dispersed multi-partner teams with varied “business models” and higher “cost of failure.”
Current methodology and tools for designing SoC need to evolve with market demands in key areas: First, multiple streams of inconsistent hardware (HW) and software (SW) processes are often integrated only in the late stages of a project, leading to unrecognized divergence of requirements, platforms, and IP, resulting in unacceptable risk in cost, schedule, and quality. Second, even within a stream of HW or SW, there is inadequate data integration, configuration management, and change control across life cycle artifacts. Techniques used for these are often ad hoc or manual, and the cost of failure is high. This makes it difficult for a distributed team to be productive and inhibits the early, controlled reuse of design products and IP. Finally, the costs of deploying and managing separate dedicated systems and infrastructures are becoming prohibitive.
We propose to address these shortcomings through comprehensive change management, which is the integrated application of configuration management, version control, and change control across software and hardware design. Change management is widely practiced in the software development industry. There are commercial change-management systems available for use in electronic design, such as MatrixOne DesignSync, ClioSoft SOS, IC Manage Design Management, and Rational ClearCase/ClearQuest, as well as numerous proprietary, “home-grown” systems. But to date change management remains an under-utilized technology in electronic design.
In SoC design, change management can help with many problems. For instance, when IP is modified, change management can help in identifying blocks in which the IP is used, in evaluating other affected design elements, and in determining which tests must be rerun and which rules must be re-verified. Or, when a new release is proposed, change management can help in assessing whether the elements of the release are mutually consistent and in specifying IP or other resources on which the new release depends.
More generally, change management gives the ability to analyze the potential impact of changes by tracing to affected entities and the ability to propagate changes completely, correctly, and efficiently. For design managers, this supports decision-making as to whether, when, and how to make or accept changes. For design engineers, it helps in assessing when a set of design entities is complete and consistent and in deciding when it is safe to make (or adopt) a new release.
In this paper we focus on two elements of this approach for SoC design. One is the specification of representative use cases in which change management plays a critical role. These show places in the SoC development process where information important for managing change can be gathered. They also show places where appropriate information can be used to manage the impact of change. The second element is the specification of a generic schema for modeling design entities and their interrelationships. This supports traceability among design elements, allows designers to analyze the impact of changes, and facilitates the efficient and comprehensive propagation of changes to affected elements.
The following section provides some background on a survey of subject-matter experts that we performed to refine the problem definition.
We surveyed some 30 IBM subject-matter experts (SMEs) in electronic design, change management, and design data modeling. They identified 26 problem areas for change management in electronic design, which we grouped into categories; several major themes crosscut those categories.
We held a workshop with the SMEs to prioritize these problems, and two emerged as the most significant: First, the need for basic management of the configuration of all the design data and resources of concern within a project or work package (libraries, designs, code, tools, test suites, etc.); second, the need for designer visibility into the status of data and configurations in a work package.
To realize these goals, two basic kinds of information are necessary: 1) An understanding of how change management may occur in SoC design processes; 2) An understanding of the kinds of information and relationships needed to manage change in SoC design. We addressed the former by specifying change-management use cases; we addressed the latter by specifying a change-management schema.
3. USE CASES
This section describes typical use cases in the SoC design process. Change is a pervasive concern in these use cases—they cause changes, respond to changes, or depend on data and other resources that are subject to change. Thus, change management is integral to the effective execution of each of these use cases. We identified nine representative use cases in the SoC design process, which are shown in Figure 1.
Figure 1. Use cases in SoC design
In general there are four ways of initiating a project: New Project, Derive, Merge and Retarget. New Project is the case in which a new project is created from the beginning. The Derive case is initiated when a new business opportunity arises to base a new project on an existing design. The Merge case is initiated when an actor wants to merge configuration items during implementation of a new change management scheme or while working with teams/organizations outside of the current scheme. The Retarget case is initiated when a project is restructured due to resource or other constraints. In all of these use cases it is important to institute proper change controls from the outset. New Project starts with a clean slate; the other scenarios require changes from (or to) existing projects.
Once the project is initiated, the next phase is to update the design. There are two use cases in the Update Design composite state. New Design Elements addresses the original creation of new design elements. These become new entries in the change-management system. The Implement Change use case entails the modification of an existing design element (such as fixing a bug). It is triggered in response to a change request and is supported and governed by change-management data and protocols.
The next phase, Resolve Project, consists of three use cases. Backout is the use case by which changes made in the previous phase can be reversed. Release is the use case by which a project is released for cross-functional use. The Archive use case protects design assets by securely copying the design and its environment.
4. CHANGE-MANAGEMENT SCHEMA
The main goal of the change-management schema is to enable the capture of all information that might contribute to change management.
The schema, which is defined in the Unified Modeling Language (UML), consists of several high-level packages (Figure 2).
Figure 2. Packages in the change-management schema
Package Objects and Data defines types for design objects, data, and metadata. Objects are containers for information; data represent the information itself. The main types of object include artifacts (such as files), features, and attributes. The types of objects and data defined are important for change management because they represent the principal work products of electronic design: IP, VHDL and RTL specifications, floor plans, formal verification rules, timing rules, and so on. It is changes to these things for which management is most needed.
The package Types defines types to represent the types of objects and data. This enables some types in the schema (such as those for attributes, collections, and relationships) to be defined parametrically in terms of other types, which promotes generality, consistency, and reusability of schema elements.
Package Attributes defines specific types of attribute. The basic attribute is just a name-value pair that is associated to an object. (More strongly-typed subtypes of attribute have fixed names, value types, attributed-object types, or combinations of these.) Attributes are one of the main types of design data, and they are important for change management because they can represent the status or state of design elements (such as version number, verification level, or timing characteristics).
Package Collections defines types of collections, including collections with varying degrees of structure, typing, and constraints. Collections are important for change management in that changes must often be coordinated for collections of design elements as a group (e.g., for a work package, verification suite, or IP release). Collections are also used in defining other elements in the schema (for example, baselines and change sets).
The package Relationships defines types of relationships. The basic relationship type is an ordered collection of a fixed number of elements. Subtypes provide directionality, element typing, and additional semantics. Relationships are important for change management because they can define various types of dependencies among design data and resources. Examples include the use of macros in cores, the dependence of timing reports on floor plans and timing contracts, and the dependence of test results on tested designs, test cases, and test tools. Explicit dependency relationships support the analysis of change impact and the efficient and precise propagation of changes.
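To make the dependency idea concrete, here is a small sketch of impact analysis over such relationships: given explicit edges from each artifact to its dependents, a traversal yields everything a change could reach. The artifact names and graph encoding are illustrative, not part of the schema.

```python
# Impact-analysis sketch: walk explicit dependency edges to find every
# artifact potentially affected by a change. Names are illustrative only.
from collections import deque

# artifact -> artifacts that depend on it
dependents = {
    "floor_plan": ["timing_report"],
    "timing_contract": ["timing_report"],
    "core_A": ["chip_top", "test_suite_A"],
    "macro_M": ["core_A"],
}

def impact(changed: str) -> set[str]:
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)   # propagate transitively
    return seen

print(impact("macro_M"))  # {'core_A', 'chip_top', 'test_suite_A'}
```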
The package Specifications defines types of data specification and definition. Specifications specify an informational entity; definitions denote a meaning and are used in specifications.
Package Resources represents things (other than design data) that are used in design processes, for example, design tools, IP, design methods, and design engineers. Resources are important for change management in that resources are used in the actions that cause changes and in the actions that respond to changes. Indeed, minimizing the resources needed to handle changes is one of the goals of change management.
Resources are also important in that changes to a resource may require changes to design elements that were created using that resource (for example, changes to a simulator may require the reproduction of simulation results).
Package Events defines types and instances of events. Events are important in change management because changes are a kind of event, and signals of change events can trigger processes to handle the change.
The package Actions provides a representation for things that are done, that is, for the behaviors or executions of tools, scripts, tasks, method steps, etc. Actions are important for change in that actions cause change. Actions can also be triggered in response to changes and can handle changes (such as by propagating changes to dependent artifacts).
Subpackage Action Definitions defines the type Action Execution, which contains information about a particular execution of a particular action. It refers to the definition of the action and to the specific artifacts and attributes read and written, resources used, and events generated and handled. Thus an action execution indicates particular artifacts and attributes that are changed, and it links those to the particular process or activity by which they were changed, the particular artifacts and attributes on which the changes were based, and the particular resources by which the changes were effected. Through this, particular dependency relationships can be established between the objects, data, and resources. This is the specific information needed to analyze and propagate concrete changes to artifacts, processes, and resources.
Package Baselines defines types for defining mutually consistent sets of design artifacts. Baselines are important for change management in several respects: the elements in a baseline must be protected from arbitrary changes that might disrupt their mutual consistency, and the elements in a baseline must be changed in mutually consistent ways in order to evolve the baseline from one version to another.
The final package in Figure 2 is the Change package. It defines types for representing change explicitly. These include managed objects, which are objects with an associated change log; change logs and change sets, which are types of collection that contain change records; and change records, which record specific changes to specific objects. Change records can include a reference to the action execution that caused the change.
The subpackage Change Requests includes types for modeling change requests and responses. A change request has a type, description, state, priority, and owner. It can have an associated action definition, which may be the definition of the action to be taken in processing the change request. A change request also has a change-request history log.
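As a rough illustration of the change-request shape described here, consider a Python dataclass whose fields follow the prose; the defaults, states, and example values are assumptions, not the paper’s schema.

```python
# Sketch of the change-request record described above; illustrative only.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    type: str                      # e.g., "defect" or "enhancement" (assumed)
    description: str
    state: str = "open"            # assumed lifecycle: open -> in-progress -> resolved
    priority: int = 3
    owner: str = ""
    action_definition: str = ""    # action to take in processing, if prescribed
    history: list[str] = field(default_factory=list)  # change-request log

cr = ChangeRequest("defect", "Timing violation in core_A",
                   priority=1, owner="engineer42")
cr.history.append("opened")
```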
An example of the schema is shown in Figure 3. The clear boxes (upper part of diagram) show general types from the schema and the shaded boxes (lower part of the diagram) show types (and a few instances) defined for a specific high-level design process project at IBM.
Figure 3. Example of change-management data
The figure shows a dependency relationship between two types of design artifact, VHDLArtifact and FloorPlannableObjects. The relationship is defined in terms of a compiler that derives instances of FloorPlannableObjects from instances of VHDLArtifact. Execution of the compiler constitutes an action that defines the relationship. The specific schema elements are defined based on the general schema using a variety of object-oriented modeling techniques, including subtyping (e.g., VHDLArtifact), instantiation (e.g., Compile1), and parameterization (e.g., VHDLFloorplannableObjectsDependency).
5. USE CASE IMPLEMENT CHANGE
Here we present an example use case, Implement Change, with details on its activities and how the activities use the schema presented in Section 4. This use case is illustrated in Figure 4.
Figure 4. State diagram for use case Implement Change
The Implement Change use case addresses the modification of an existing design element (such as fixing a bug). It is triggered by a change request. The first steps of this use case are to identify and evaluate the change request to be handled. Then the relevant baseline is located, loaded into the engineer’s workspace, and verified. At this point the change can be implemented. This begins with the identification of the artifacts that are immediately affected. Then dependent artifacts are identified and changes propagated according to dependency relationships. (This may entail several iterations.) Once a stable state is achieved, the modified artifacts are verified and regression tested. Depending on test results, more changes may be required. Once the change is considered acceptable, any learning and metrics from the process are captured and the new artifacts and relationships are promoted to the public configuration space.
This paper explores the role of comprehensive change management in SoC design, development, and delivery. Based on the comments of over thirty experienced electronic design engineers from across IBM, we have captured the essential problems and motivations for change management in SoC projects. We have described design scenarios, highlighting places where change management applies, and presented a preliminary schema to show the range of data and relationships change management may incorporate. Change management can benefit both design managers and engineers. It is increasingly essential for improving productivity and reducing time and cost in SoC projects.
Contributions to this work were also made by Nadav Golbandi and Yoav Rubin of IBM’s Haifa Research Lab. Much information and guidance were provided by Jeff Staten and Bernd-josef Huettl of IBM’s Systems and Technology Group. We especially thank Richard Bell, John Coiner, Mark Firstenberg, Andrew Mirsky, Gary Nusbaum, and Harry Reindel of IBM’s Systems and Technology Group for sharing design data and experiences. We are also grateful to the many other people across IBM who contributed their time and expertise.
RHEL 9.0, the latest major release of Red Hat Enterprise Linux, delivers tighter security, as well as improved installation, distribution, and management for enterprise server and cloud environments.
The operating system, code named Plow, is a significant upgrade over RHEL 8.0 and makes it easier for application developers to test and deploy containers.
Available in server and desktop versions, RHEL remains one of the top Linux distributions for running enterprise workloads because of its stability, dependability, and robustness.
It is free for software-development purposes, but instances require registration with the Red Hat Subscription Management (RHSM) service. Red Hat, owned by IBM, provides 24X7 subscription-based customer support as well as professional integration services. With the money Red Hat receives from subscriptions, it supports other open source efforts, including those that provide upstream features that eventually end up in RHEL itself.
RHEL 9 can be run on a variety of physical hardware, as a virtual machine on hypervisors, in containers, or as instances in Infrastructure as a Service (IaaS) public cloud services. It supports legacy x86 hardware as well as 64-bit x86_64-v2, aarch64, and ARMv8.0-A hardware architectures. RHEL 9 supports IBM Power 9, Power 10, and Z-series (z14) hardware platforms.
RHEL also supports a variety of data-storage file systems, including the common Ext4 file system, GFS2 and XFS. Legacy support for Ext2, Ext3, and vfat (FAT32) still exists.
RHEL scales to large amounts of persistent and transient storage, and RHEL 9 increases the maximum amount of memory to 48TB for x86_64 architectures.
The first step is downloading the operating system and following some straightforward steps.
When installing RHEL 9, users are prompted for "Software Selection" options, and we chose Server with GUI. There are others, such as Minimal Install, Server, Workstation, Custom Operating System, and Virtualization Host.
At this point, additional software can be chosen based on the environment and install functions like DNS Name Server, File and Storage Server, Debugging Tools, GNOME, and Guest Agents, if running a hypervisor. These allow tailoring the type of install based on the role of the server. Next, users can select add-ons for additional environment software to install automatically.
Server with GUI or any of the desktop variants of RHEL 9 come with the GNOME 40 desktop environment. (The latest GNOME version is 42.) For a graphical interface, RHEL 9 uses the Wayland 1.19 graphics-display server protocol with NVIDIA drivers. Wayland is the C library communications protocol that specifies how data will be sent to the display server and clients. The latest Wayland release is 1.21 with RHEL again opting for stability and general availability.
RHEL is a solid operating system for application developers who plan to move working code into production. RHEL 9 comes with GNU Compiler Collection (GCC) 11.2.1 with LLVM, glibc 2.34, and binutils 2.35. Link Time Optimization (LTO) is now enabled by default to help make executables smaller and more efficient.
RHEL 9 comes with Python 3.9 installed by default and supports modern programming languages like Rust and Go. RHEL 9 also comes with updated programming languages including Node.js, Ruby 3.0.3, Perl 5.32, and PHP 8.0.
Red Hat offers the OpenShift Container Platform as its primary product for running Linux containers in a Kubernetes management environment. OpenShift runs on RHEL, and RHEL 9 has available Universal Base Image (UBI) images to support building containerized applications. RHEL 9 also has automatic container updates and rollbacks, and the Podman tool can help notify DevOps teams if containers are failing and automatically rollback to known-good configurations.
Linux software-package management systems have been evolving in recent years. The yum (Yellow-Dog Updater Modified) software update utility is being deprecated, but the command itself is still supported. The transition to dnf (Dandified Yum) has occurred, and the yum command is now just a symbolic link to dnf.
RHEL 9 comes with Red Hat Package Manager (RPM) 4.16, and the rpm command can still be used to install files with the .rpm file extension. Flatpak (formerly xdg-app) is another method of packaging and distributing software to Linux systems. Flatpak defines the permissions and resource access that apps require.
RHEL 9 also supports the Red Hat Software Collections (RHSCL) for releasing semi-annual stable updates of critical application software. RHSCL provides updates to software-development tools, web services, database software, and other key software for application environments.
Integrity Measurement Architecture (IMA) can detect files that have been maliciously modified and assess the integrity of the Linux kernel. To validate the authenticity and integrity of the OS distribution, RHEL 9 supports IMA along with Extended Verification Module (EVM) to protect file-extended attributes. RHEL 9 Malware Detection, provided with Red Hat Insights, can perform a security assessment by using YARA pattern-matching software to show evidence of malware.
RHEL 9 also provides greater control over root-user password authentication using SSH. It is possible to disable root-user login with basic passwords to help improve server security. Updated classes, permissions, and features of SELinux are part of RHEL 9 to leverage Linux kernel security capabilities.
RHEL 9 also uses OpenSSL 3.0.1, which updates the cryptographic libraries and processes to improve the confidentiality and integrity of web communications.
Red Hat systems are often used in environments that require heightened levels of security and must meet certain security compliance requirements. Governments often require Security Technical Implementation Guide (STIG) configuration standards along with validation using Security Content Automation Protocol (SCAP). RHEL 9 supports OpenSCAP 1.3.6 and can use the SCAP Security Guide (SSG) and the RHEL 9 Open Vulnerability Assessment Language (OVAL) signatures to check for compliance.
Red Hat Insights is a management and operations service that reviews RHEL systems for compliance and vulnerabilities and provides patching, configuration, and optimization advice. Red Hat Insights Image Builder allows the creation of custom RHEL images for simplified deployment to environments including cloud infrastructure.
Red Hat offers Image Builder as-a-Service to customize and standardize a preferred RHEL 9 image and run it in an IaaS cloud service provider. Image Builder can create blueprints to customize the bootable ISO installer image. The new version of Image Builder supports creation of separate logical filesystems. This helps when meeting security-compliance requirements that call for specific directories and file systems to use dedicated partitions for STIGs.
Web-based monitoring and administration tool Cockpit comes with RHEL 9, making management and operations easier for those new to Red Hat system management.
Red Hat emphasizes uptime and supportability while keeping systems patched. RHEL 9 supports kernel live patch management that allows patching a running Linux kernel without rebooting or restarting processes.
Red Hat systems often run in cloud environments. RHEL 9 includes Resource Optimization for cloud deployments to help size the system appropriately for its workload and to balance performance and costs.
The first step toward using RHEL 9 is installing it in a test environment to get to know how it works. The 60-day demo subscription can get you started. It is important to thoroughly test RHEL 9 before lifting and shifting workloads onto new RHEL 9 systems; upgrading in-place is discouraged.
Next, perform an asset inventory of all the RHEL systems in the environment. It’s okay to admit that there are some old RHEL 6 and 7 systems in the environment in desperate need of upgrades. Some organizations may even have a few RHEL 5 and CentOS 4 systems lurking about their data centers. Those older servers are ideal candidates for RHEL 9 upgrades.
Red Hat contributes to many open-source software projects, and CentOS Stream is its upstream source for RHEL. Check out CentOS Stream 9 (released December 3, 2021) to experience what features may be coming to RHEL 9.1.
If you want to check out the latest Linux features for free, the Fedora Project (now Fedora 36) may be something to download and install. Fedora is intended to have the most leading-edge features and provide a vision for the future progression of the RHEL OS. Red Hat is the primary contributor to the Fedora Project, but it also has worldwide community contributors.
Fedora Workstation 36 (released May 10, 2022) comes with the latest GNOME 42 desktop along with many other new features and software. Fedora 37 will be released in December 2022, an aggressive release schedule that promotes innovation and rapid evolution of new features.
Red Hat provides long-term support for customers who run production applications for many years and require the stability. It also publishes its release schedule and the support life cycle of the operating systems. The schedule had been a new major release roughly every five years, but with RHEL 9 Red Hat has moved to a three-year cadence. Dot releases occur every six months, so RHEL 9.1 should be out around November 2022.
Support for major RHEL releases spans 10 years: five years of full support followed by five years of maintenance. For example, RHEL 6 was released in May 2011 and is now in the Extended Life-cycle Support (ELS) phase for customers who purchase that add-on subscription.
RHEL 9 won't enter the ELS phase until May 2032. It's hard to plan that far in advance, but Red Hat has a long tradition of honoring commitments to customers. Here is a diagram of the lifespan of RHEL 9 from the RHEL support matrix.
Based on the transparency of the release schedule and Red Hat’s history of meeting it, we can expect RHEL 10 to be out around May 2025.
Players don't actually write code or draw models in the game, but it does take users through many of the same thought processes that one would follow in creating business process models. Unfortunately, there is no combat in the game, nor can you "hijack people's cube and take their PCs for joyrides. Oh, and kick the crap out the guys walking the halls on their cell phone" as one YouTuber commented. :)
Kali Durgampudi, the chief technology officer of healthcare payments company Zelis, believes that the implementation of blockchain tech is vital for protecting patients’ sensitive data from cybercriminals.
Speaking with Health IT News on Wednesday, Durgampudi noted that some of the biggest issues in healthcare are privacy and data security as the industry works to digitize its “archaic paper-based processes.”
“Blockchain technology has the potential to alleviate many of these concerns,” he said, as he highlighted the importance of utilizing a digital ledger that is “impenetrable” to protect sensitive patient and financial data amid the growing rate of cyberattacks across the globe:
“Since the information cannot be modified or copied, blockchain technology vastly reduces security risks, giving hospital and healthcare IT organizations a much stronger line of defense against cybercriminals.”
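The “cannot be modified” property rests on a simple mechanism: each record commits to the hash of the record before it, so altering any entry breaks every later link. A minimal sketch of that property in isolation (a bare hash chain, not a full blockchain with consensus):

```python
# Hash-chain sketch: each record stores the previous record's hash, so
# tampering with any entry invalidates all subsequent links.
import hashlib, json

def make_record(data: dict, prev_hash: str) -> dict:
    record = {"data": data, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify(chain: list[dict]) -> bool:
    for i, rec in enumerate(chain):
        expected = hashlib.sha256(json.dumps(
            {"data": rec["data"], "prev": rec["prev"]}, sort_keys=True
        ).encode()).hexdigest()
        if rec["hash"] != expected or (i > 0 and rec["prev"] != chain[i-1]["hash"]):
            return False
    return True

chain = [make_record({"claim": 1, "amount": 120}, prev_hash="genesis")]
chain.append(make_record({"claim": 2, "amount": 80}, chain[-1]["hash"]))
assert verify(chain)
chain[0]["data"]["amount"] = 999   # tamper with an early record...
assert not verify(chain)           # ...and verification fails
```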
Durgampudi went on to note that blockchain tech can also play a key role in healthcare payments, as it can help provide greater transparency and efficiency than current payment models in healthcare. He said that many payers and providers were hesitant to share information via email, as emails could go astray and there was no proof of delivery.
“Blockchain provides both payers and providers with complete visibility into the entire lifecycle of a claim, from the patient registering at the front desk to disputing a cost to sending an explanation of benefits,” he added.
One of the major companies that has worked on blockchain-based healthcare solutions is multinational tech giant IBM.
The blockchain arm of the company has rolled out several solutions for healthcare, such as health credential verification, the “Trust Your Supplier” service to find verified suppliers, and “Blockchain Transparent Supply,” which provides supply chain tracking for temperature-controlled pharmaceuticals.
In March 2021, Cointelegraph reported that IBM was working on a trial of a COVID-19 vaccination passport dubbed the “Excelsior Pass” in partnership with then-New York Governor Andrew Cuomo. The passport was designed to verify an individual's vaccination or test results via IBM’s blockchain.
Another key player in the blockchain-based healthcare space is enterprise blockchain VeChain. In June last year, the project teamed up with Shanghai’s Renji Hospital to launch a blockchain-based in-vitro fertilization (IVF) service application.
VeChain also partnered with San Marino in July 2021 to launch a nonfungible token (NFT)-based vaccination passport that was said to be verifiable worldwide by scanning QR codes tied to the certificate.
David Jia, a blockchain investor with a Ph.D. in neuroscience from Oxford University, echoed similar sentiments to Durgampudi this week.
In a Thursday blog post on Medium, Jia emphasized that blockchain tech could significantly improve drug traceability and verification, along with the data management of clinical trials, patient info, and claims and billing.
“Accuracy in medical records over the long term as well as accessibility is essential, as it is necessary for an individual’s record to be able to be transferred between providers, insurance companies, and certified with relative ease. If medical records are stored on a blockchain, they may be updated safely in almost real-time,” he wrote.