There is no better option than our 000-N55 practice test and free PDF

You will get an exact replica of the 000-N55 real exam questions that you will face in the actual test. Killexams.com maintains a database of 000-N55 practice questions, a large question bank highly pertinent to the 000-N55 exam, contributed by test takers who attempted the exam and passed with high scores.

Exam Code: 000-N55 Practice exam 2022 by Killexams.com team
IBM FileNet P8 System Implementation Technical Mastery Test v1
IBM Implementation basics
7 Basic Tools That Can Strengthen Quality

Hitoshi Kume, a recipient of the 1989 Deming Prize for use of quality principles, defines problems as "undesirable results of a job." Quality improvement efforts work best when problems are addressed systematically using a consistent and analytic approach; the methodology shouldn't change just because the problem changes. Keeping the steps to problem-solving simple allows workers to learn the process and how to use the tools effectively.

Easy to implement and follow up, the most commonly used and well-known quality process is the plan/do/check/act (PDCA) cycle (Figure 1). Other processes are a takeoff of this method, much in the way that computers today are takeoffs of the original IBM system. The PDCA cycle promotes continuous improvement and should thus be visualized as a spiral instead of a closed circle.

Another popular quality improvement process is the six-step PROFIT model in which the acronym stands for:

P = Problem definition.

R = Root cause identification and analysis.

O = Optimal solution based on root cause(s).

F = Finalize how the corrective action will be implemented.

I = Implement the plan.

T = Track the effectiveness of the implementation and verify that the desired results are met.

If the desired results are not met, the cycle is repeated. Both the PDCA and the PROFIT models can be used for problem solving as well as for continuous quality improvement. In companies that follow total quality principles, whichever model is chosen should be used consistently in every department or function in which quality improvement teams are working.


Figure 1. The most common process for quality improvement is the plan/do/check/act cycle outlined above. The cycle promotes continuous improvement and should be thought of as a spiral, not a circle.
 

7 Basic Quality Improvement Tools

Once the basic problem-solving or quality improvement process is understood, the addition of quality tools can make the process proceed more quickly and systematically. Seven simple tools can be used by any professional to ease the quality improvement process: flowcharts, check sheets, Pareto diagrams, cause and effect diagrams, histograms, scatter diagrams, and control charts. (Some books describe a graph instead of a flowchart as one of the seven tools.)

The concept behind the seven basic tools came from Kaoru Ishikawa, a renowned quality expert from Japan. According to Ishikawa, 95% of quality-related problems can be resolved with these basic tools. The key to successful problem resolution is the ability to identify the problem, use the appropriate tools based on the nature of the problem, and communicate the solution quickly to others. Inexperienced personnel might do best by starting with the Pareto chart and the cause and effect diagram before tackling the use of the other tools. Those two tools are used most widely by quality improvement teams.

Flowcharts

Flowcharts describe a process in as much detail as possible by graphically displaying the steps in proper sequence. A good flowchart should show all process steps under analysis by the quality improvement team, identify critical process points for control, suggest areas for further improvement, and help explain and solve a problem.

The flowchart in Figure 2 illustrates a simple production process in which parts are received, inspected, and sent to subassembly operations and painting. After completing this loop, the parts can be shipped as subassemblies after passing a final test or they can complete a second cycle consisting of final assembly, inspection and testing, painting, final testing, and shipping.


Figure 2. A basic production process flowchart displays several paths a part can travel from the time it hits the receiving dock to final shipping.
 

Flowcharts can be simple, such as the one featured in Figure 2, or they can be made up of numerous boxes, symbols, and if/then directional steps. In more complex versions, flowcharts indicate the process steps in the appropriate sequence, the conditions in those steps, and the related constraints by using elements such as arrows, yes/no choices, or if/then statements.

Check sheets

Check sheets help organize data by category. They show how many times each particular value occurs, and their information is increasingly helpful as more data are collected. More than 50 observations should be available to be charted for this tool to be really useful. Check sheets minimize clerical work since the operator merely adds a mark to the tally on the prepared sheet rather than writing out a figure (Figure 3). By showing the frequency of a particular defect (e.g., in a molded part) and how often it occurs in a specific location, check sheets help operators spot problems. The check sheet example shows a list of molded part defects on a production line covering a week's time. One can easily see where to set priorities based on results shown on this check sheet. Assuming the production flow is the same on each day, the part with the largest number of defects carries the highest priority for correction.
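As a minimal illustration of the tallying that a check sheet automates, the Python sketch below counts defect observations by category; the defect names and counts are invented for the example, not taken from Figure 3.

```python
from collections import Counter

# Hypothetical defect observations logged over one week of production.
observations = [
    "flash", "short shot", "flash", "sink mark", "flash",
    "short shot", "flash", "warp", "sink mark", "flash",
]

# The check sheet is simply a tally of how often each defect occurs.
check_sheet = Counter(observations)

for defect, count in check_sheet.most_common():
    print(f"{defect:12s} {'|' * count}  ({count})")
```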


Figure 3. Because it clearly organizes data, a check sheet is the easiest way to track information.
 

Pareto diagrams

The Pareto diagram is named after Vilfredo Pareto, a 19th-century Italian economist who postulated that a large share of wealth is owned by a small percentage of the population. This basic principle translates well into quality problems—most quality problems result from a small number of causes. Quality experts often refer to the principle as the 80-20 rule; that is, 80% of problems are caused by 20% of the potential sources.

A Pareto diagram puts data in a hierarchical order (Figure 4), which allows the most significant problems to be corrected first. The Pareto analysis technique is used primarily to identify and evaluate nonconformities, although it can summarize all types of data. It is perhaps the diagram most often used in management presentations.


Figure 4. By rearranging random data, a Pareto diagram identifies and ranks nonconformities in the quality process in descending order.
 

To create a Pareto diagram, the operator collects random data, regroups the categories in order of frequency, and creates a bar graph based on the results.
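Those steps translate directly into a short plotting script. The sketch below, which assumes matplotlib and uses invented defect counts, sorts the categories by frequency and draws the bars in descending order with a cumulative-percentage line.

```python
import matplotlib.pyplot as plt

# Hypothetical defect counts collected from a check sheet.
defects = {"flash": 41, "short shot": 27, "sink mark": 13, "warp": 6, "splay": 3}

# Sort categories by frequency, largest first.
items = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
labels = [k for k, _ in items]
counts = [v for _, v in items]

# Cumulative percentage of total defects.
total = sum(counts)
cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)
ax1.set_ylabel("Defect count")

ax2 = ax1.twinx()
ax2.plot(labels, cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)

plt.title("Pareto diagram of molded-part defects")
plt.show()
```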

Cause and effect diagrams

The cause and effect diagram is sometimes called an Ishikawa diagram after its inventor. It is also known as a fish bone diagram because of its shape. A cause and effect diagram describes a relationship between variables. The undesirable outcome is shown as the effect, and related causes are shown as leading to, or potentially leading to, that effect. This popular tool has one severe limitation, however, in that users can overlook important, complex interactions between causes. Thus, if a problem is caused by a combination of factors, it is difficult to use this tool to depict and solve it.

A fish bone diagram displays all contributing factors and their relationships to the outcome to identify areas where data should be collected and analyzed. The major areas of potential causes are shown as the main bones, e.g., materials, methods, people, measurement, machines, and design (Figure 5). Later, the subareas are depicted. Thorough analysis of each cause can eliminate causes one by one, and the most probable root cause can be selected for corrective action. Quantitative information can also be used to prioritize means for improvement, whether it be to machine, design, or operator.


Figure 5. Fish bone diagrams display the various possible causes of the final effect. Further analysis can prioritize them.
 

Histograms

The histogram plots data in a frequency distribution table. What distinguishes the histogram from a check sheet is that its data are grouped into bins so that the identity of individual values is lost. Commonly used to present quality improvement data, histograms work best with small amounts of data that vary considerably. When used in process capability studies, histograms can display specification limits to show what portion of the data does not meet the specifications.

After the raw data are collected, they are grouped in value and frequency and plotted in a graphical form (Figure 6). A histogram's shape shows the nature of the distribution of the data, as well as central tendency (average) and variability. Specification limits can be used to display the capability of the process.
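Producing such a histogram takes only a few lines. The sketch below, assuming numpy and matplotlib, bins invented measurement data and overlays assumed specification limits purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical measurements of a part dimension (mm).
measurements = rng.normal(loc=10.0, scale=0.05, size=200)

plt.hist(measurements, bins=15, edgecolor="black")

# Assumed specification limits, for illustration only.
lsl, usl = 9.85, 10.15
plt.axvline(lsl, color="red", linestyle="--", label="LSL")
plt.axvline(usl, color="red", linestyle="--", label="USL")

plt.xlabel("Dimension (mm)")
plt.ylabel("Frequency")
plt.title("Histogram with specification limits")
plt.legend()
plt.show()

print("mean =", measurements.mean(), "std =", measurements.std(ddof=1))
```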


Figure 6. A histogram is an easy way to see the distribution of the data, its average, and variability.
 

Scatter diagrams

A scatter diagram shows how two variables are related and is thus used to test for cause and effect relationships. It cannot prove that one variable causes the change in the other, only that a relationship exists and how strong it is. In a scatter diagram, the horizontal (x) axis represents the measurement values of one variable, and the vertical (y) axis represents the measurements of the second variable. Figure 7 shows part clearance values on the x-axis and the corresponding quantitative measurement values on the y-axis.
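To put a number on how strong the relationship is, a correlation coefficient is typically computed alongside the plot. The sketch below uses invented clearance and measurement values and numpy's correlation routine.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Hypothetical part clearance (x) and a related quality measurement (y).
clearance = rng.uniform(0.1, 1.0, size=50)
measurement = 2.0 * clearance + rng.normal(0, 0.2, size=50)

# Pearson correlation coefficient indicates the strength of the relationship.
r = np.corrcoef(clearance, measurement)[0, 1]
print(f"correlation coefficient r = {r:.2f}")

plt.scatter(clearance, measurement)
plt.xlabel("Part clearance")
plt.ylabel("Quality measurement")
plt.title("Scatter diagram")
plt.show()
```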


Figure 7. The plotted data points in a scatter diagram show the relationship between two variables.
 

Control charts

A control chart displays statistically determined upper and lower limits drawn on either side of a process average. This chart shows if the collected data are within upper and lower limits previously determined through statistical calculations of raw data from earlier trials.

The construction of a control chart is based on statistical principles and statistical distributions, particularly the normal distribution. When used in conjunction with a manufacturing process, such charts can indicate trends and signal when a process is out of control. The center line of a control chart represents an estimate of the process mean; the upper and lower critical limits are also indicated. The process results are monitored over time and should remain within the control limits; if they do not, an investigation is conducted for the causes and corrective action taken. A control chart helps determine variability so it can be reduced as much as is economically justifiable.

In preparing a control chart, the mean, upper control limit (UCL), and lower control limit (LCL) of an approved process are calculated from its data. A blank control chart showing the mean, UCL, and LCL but no data points is created; data points are added as they are statistically calculated from the raw data.

Figure 8. Data points that fall outside the upper and lower control limits lead to investigation and correction of the process.
 

Figure 8 is based on 25 samples or subgroups. For each sample, which in this case consisted of five rods, measurements are taken of a quality characteristic (in this example, length). These data are then grouped in table form (as shown in the figure) and the average and range from each subgroup are calculated, as are the grand average and average of all ranges. These figures are used to calculate the UCL and LCL. For the control chart in the example, the control limits are the grand average ± A2 × (average range), where A2 is a constant determined by the table of constants for variable control charts. The constant is based on the subgroup sample size, which is five in this example.
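The control-limit arithmetic just described can be expressed in a few lines of Python. The subgroup data below are invented; A2 = 0.577 is the standard constant for subgroups of five.

```python
import numpy as np

rng = np.random.default_rng(2)
# 25 hypothetical subgroups of 5 rod-length measurements each (mm).
subgroups = rng.normal(loc=100.0, scale=0.3, size=(25, 5))

xbar = subgroups.mean(axis=1)                          # subgroup averages
ranges = subgroups.max(axis=1) - subgroups.min(axis=1) # subgroup ranges

grand_avg = xbar.mean()   # grand average (center line)
avg_range = ranges.mean() # average range

A2 = 0.577  # constant for subgroup size n = 5 from the variables control chart table
ucl = grand_avg + A2 * avg_range
lcl = grand_avg - A2 * avg_range

print(f"center line = {grand_avg:.3f}")
print(f"UCL = {ucl:.3f}, LCL = {lcl:.3f}")

# Flag any subgroup average that falls outside the control limits.
out_of_control = np.where((xbar > ucl) | (xbar < lcl))[0]
print("out-of-control subgroups:", out_of_control)
```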

Conclusion

Many people in the medical device manufacturing industry are undoubtedly familiar with many of these tools and know their application, advantages, and limitations. However, manufacturers must ensure that these tools are in place and being used to their full advantage as part of their quality system procedures. Flowcharts and check sheets are most valuable in identifying problems, whereas cause and effect diagrams, histograms, scatter diagrams, and control charts are used for problem analysis. Pareto diagrams are effective for both areas. By properly using these tools, the problem-solving process can be more efficient and more effective.

Those manufacturers who have mastered the seven basic tools described here may wish to further refine their quality improvement processes. A future article will discuss seven new tools: relations diagrams, affinity diagrams (K-J method), systematic diagrams, matrix diagrams, matrix data diagrams, process decision programs, and arrow diagrams. These seven tools are used less frequently and are more complicated.

Ashweni Sahni is director of quality and regulatory affairs at Minnetronix, Inc. (St. Paul, MN), and a member of MD&DI's editorial advisory board.


Source: https://www.mddionline.com/design-engineering/7-basic-tools-can-improve-quality (Tue, 02 Aug 2022)
IBM unveils a bold new ‘quantum error mitigation’ strategy

IBM today announced a new strategy for the implementation of several “error mitigation” techniques designed to bring about the era of fault-tolerant quantum computers.

Up front: Anyone still clinging to the notion that quantum circuits are too noisy for useful computing is about to be disillusioned.

A decade ago, the idea of a working quantum computing system seemed far-fetched to most of us. Today, researchers around the world connect to IBM’s cloud-based quantum systems with such frequency that, according to IBM’s director of quantum infrastructure, some three billion quantum circuits are completed every day.


IBM and other companies are already using quantum technology to do things that either couldn’t be done by classical binary computers or would take too much time or energy. But there’s still a lot of work to be done.

The dream is to create a useful, fault-tolerant quantum computer capable of demonstrating clear quantum advantage — the point where quantum processors are capable of doing things that classical ones simply cannot.

Background: Here at Neural, we identified quantum computing as the most important technology of 2022 and that’s unlikely to change as we continue the perennial march forward.

The short and long of it is that quantum computing promises to do away with our current computational limits. Rather than replacing the CPU or GPU, it’ll add the QPU (quantum processing unit) to our tool belt.

What this means is up to the individual use case. Most of us don’t need quantum computers because our day-to-day problems aren’t that difficult.

But, for industries such as banking, energy, and security, the existence of new technologies capable of solving problems more complex than today’s technology can represents a paradigm shift the likes of which we may not have seen since the advent of steam power.

If you can imagine a magical machine capable of increasing efficiency across numerous high-impact domains, one that could save time, money, and energy at scales that could ultimately affect every human on Earth, then you can understand why IBM and others are so intent on building QPUs that demonstrate quantum advantage.

The problem: Building pieces of hardware capable of manipulating quantum mechanics as a method by which to perform a computation is, as you can imagine, very hard.

IBM has spent the past decade or so figuring out how to solve the foundational problems plaguing the field, including the basic infrastructure, cooling, and power source requirements necessary just to get started in the labs.

Today, IBM’s quantum roadmap shows just how far the industry has come.

But to get where it’s going, we need to solve one of the few remaining foundational problems related to the development of useful quantum processors: they’re noisy as heck.

The solution: Noisy qubits are the quantum computer engineer’s current bane. Essentially, the more processing power you try to squeeze out of a quantum computer, the noisier its qubits get (qubits are the quantum analog of classical bits).

Until now, the bulk of the work in squelching this noise has involved scaling qubits so that the signal the scientists are trying to read is strong enough to squeeze through.

In the experimental phases, solving noisy qubits was largely a game of Whack-a-mole. As scientists came up with new techniques — many of which were pioneered in IBM laboratories — they pipelined them to researchers for novel application.

But, these days, the field has advanced quite a bit. The art of error mitigation has evolved from targeted one-off solutions to a full suite of techniques.

Per IBM:

Current quantum hardware is subject to different sources of noise, the most well-known being qubit decoherence, individual gate errors, and measurement errors. These errors limit the depth of the quantum circuit that we can implement. However, even for shallow circuits, noise can lead to faulty estimates. Fortunately, quantum error mitigation provides a collection of tools and methods that allow us to evaluate accurate expectation values from noisy, shallow depth quantum circuits, even before the introduction of fault tolerance.

In recent years, we developed and implemented two general-purpose error mitigation methods, called zero noise extrapolation (ZNE) and probabilistic error cancellation (PEC).

Both techniques involve extremely complex applications of quantum mechanics, but they basically boil down to finding ways to eliminate or squelch the noise coming off quantum systems and/or to amplify the signal that scientists are trying to measure for quantum computations and other processes.
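As a rough illustration of the idea behind zero noise extrapolation (not IBM's implementation), the sketch below takes expectation values measured at artificially amplified noise levels and fits a polynomial that is evaluated at zero noise; the scale factors and measured values are invented.

```python
import numpy as np

# Noise scale factors at which the same circuit is (hypothetically) run.
# 1.0 is the hardware's native noise level; larger values mean amplified noise.
scale_factors = np.array([1.0, 1.5, 2.0, 2.5, 3.0])

# Hypothetical noisy expectation values measured at each scale factor.
measured = np.array([0.81, 0.74, 0.68, 0.62, 0.57])

# Fit a low-order polynomial and evaluate it at the zero-noise limit.
coeffs = np.polyfit(scale_factors, measured, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"zero-noise extrapolated expectation value: {zero_noise_estimate:.3f}")
```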

Neural’s take: We spoke to IBM’s director of quantum infrastructure, Jerry Chow, who seemed pretty excited about the new paradigm.

He explained that the techniques being touted in the new press release were already in production. IBM’s already demonstrated massive improvements in their ability to scale solutions, repeat cutting-edge results, and speed up classical processes using quantum hardware.

The bottom line is that quantum computers are here, and they work. Currently, it’s a bit hit or miss whether they can solve a specific problem better than classical systems, but the last remaining hard obstacle is fault-tolerance.

IBM’s new “error mitigation” strategy signals a change from the discovery phase of fault-tolerance solutions to implementation.

We tip our hats to the IBM quantum research team. Learn more here at IBM’s official blog.

Source: https://thenextweb.com/news/ibm-unveils-bold-new-quantum-error-mitigation-strategy (Thu, 28 Jul 2022)
Big Blue

What Is Big Blue?

Big Blue is a nickname used since the 1980s for the International Business Machines Corporation (IBM). The moniker may have arisen from the blue tint of its early computer displays, or from the deep blue color of its corporate logo.

Key Takeaways

  • Big Blue refers to the IBM corporation, an early developer of both business machines and personal computers.
  • The nickname may refer to the color used in its logo, or from its blue-colored computer displays and cases prevalent in the 1960s through 1980s.
  • IBM is also a blue-chip stock, a mature and dominant company that is a component of the Dow Jones Industrial Average index.
  • IBM is responsible for inventions including the UPC barcode, the magnetic stripe card, the personal computer, the floppy disk, the hard disk drive, and the ATM.

Understanding Big Blue

Big Blue arose in the early 1980s in the popular and financial press as a nickname for IBM. The name has unclear specific origins, but is generally assumed to refer to the blue tint of the cases of its computers.

The nickname was embraced by IBM, which has been content with leaving its origins in obscurity and has named many of its projects in homage of the nickname. For example, Deep Blue, IBM’s chess-playing computer, challenged and ultimately defeated grandmaster Garry Kasparov in a controversial 1997 tournament.

The first known print reference to the Big Blue nickname appeared in the June 8, 1981, edition of Businessweek magazine, and is attributed to an anonymous IBM enthusiast.

“No company in the computer business inspires the loyalty that IBM does, and the company has accomplished this with its almost legendary customer service and support … As a result, it is not uncommon for customers to refuse to buy equipment not made by IBM, even though it is often cheaper. ‘I don't want to be saying I should have stuck with the “Big Blue,”’ says one IBM loyalist. ‘The nickname comes from the pervasiveness of IBM's blue computers.’”

Other speculators have also associated the Big Blue nickname with the company’s logo and its one-time dress code, as well as IBM’s historical association with blue-chip stocks.

History of Big Blue


IBM began in 1911 as the Computing-Tabulating-Recording Company (CTR) in Endicott, NY. CTR was a holding company created by Charles R. Flint that amalgamated three companies that together produced scales, punch-card data processors, employee time clocks, and meat slicers. In 1924, CTR was renamed International Business Machines.

In the following century, IBM would go on to become one of the world’s top technological leaders, developing, inventing, and building hundreds of hardware and software information technologies. IBM is responsible for many inventions that quickly became commonplace, including the UPC barcode, the magnetic stripe card, the personal computer, the floppy disk, the hard disk drive, and the ATM.

IBM technologies were crucial to the implementation of U.S. government initiatives such as the launch of the Social Security Act in 1935 and many NASA missions, from the 1963 Mercury flight to the 1969 moon landing and beyond.

IBM holds the most U.S. patents of any business and, to date, IBM employees have been awarded many notable titles, including five Nobel Prizes and six Turing Awards.  

One of the first multinational conglomerates to emerge in U.S. history, IBM maintains a multinational presence, operating in 175 countries worldwide and employing some 350,000 employees globally.

Examples of Big Blue's Financial Performance

IBM has underperformed the broader S&P 500 index and Nasdaq-100 index. Significant divergence began in 1985 when the Nasdaq-100 and S&P 500 moved higher while IBM was mostly flat or lower until 1997. Since then it has continued to lose ground, especially when compared to the Nasdaq-100 index.


The underperformance in the stock price between 1985 and 2019 is underscored by the firm's financial performance. Between 2005 and 2012, net income generally rose, but at less than 12% per year on average. Between 2012 and 2017, net income fell by 65% over the time period, before recovering in 2018 and 2019. In 2019, though, net income was still about 43% lower than it was in 2012.

Source: https://www.investopedia.com/terms/b/big-blue.asp (Tue, 19 Jul 2022)
Comprehensive Change Management for SoC Design By Sunita Chulani1, Stanley M. Sutton Jr.1, Gary Bachelor2, and P. Santhanam1
1 IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 USA
2 IBM Global Business Services, PO BOX 31, Birmingham Road, Warwick CV34 5JL UK

Abstract

Systems-on-a-Chip (SoC) are becoming increasingly complex, leading to corresponding increases in the complexity and cost of SoC design and development.  We propose to address this problem by introducing comprehensive change management.  Change management, which is widely used in the software industry, involves controlling when and where changes can be introduced into components so that changes can be propagated quickly, completely, and correctly.
In this paper we address two main topics:   One is typical scenarios in electronic design where change management can be supported and leveraged. The other is the specification of a comprehensive schema to illustrate the varieties of data and relationships that are important for change management in SoC design.

1.    INTRODUCTION

SoC designs are becoming increasingly complex.  Pressures on design teams and project managers are rising because of shorter times to market, more complex technology, more complex organizations, and geographically dispersed multi-partner teams with varied “business models” and higher “cost of failure.”

Current methodology and tools for designing SoC need to evolve with market demands in key areas:  First, multiple streams of inconsistent hardware (HW) and software (SW) processes are often integrated only in the late stages of a project, leading to unrecognized divergence of requirements, platforms, and IP, resulting in unacceptable risk in cost, schedule, and quality.  Second, even within a stream of HW or SW, there is inadequate data integration, configuration management, and change control across life cycle artifacts.  Techniques used for these are often ad hoc or manual, and the cost of failure is high.  This makes it difficult for a distributed team to be productive and inhibits the early, controlled reuse of design products and IP.  Finally, the costs of deploying and managing separate dedicated systems and infrastructures are becoming prohibitive.

We propose to address these shortcomings through comprehensive change management, which is the integrated application of configuration management, version control, and change control across software and hardware design.  Change management is widely practiced in the software development industry.  There are commercial change-management systems available for use in electronic design, such as MatrixOne DesignSync [4], ClioSoft SOS [2], IC Manage Design Management [3], and Rational ClearCase/ClearQuest [1], as well as numerous proprietary, “home-grown” systems.  But to date change management remains an under-utilized technology in electronic design.

In SoC design, change management can help with many problems.  For instance, when IP is modified, change management can help in identifying blocks in which the IP is used, in evaluating other affected design elements, and in determining which tests must be rerun and which rules must be re-verified. Or, when a new release is proposed, change management can help in assessing whether the elements of the release are mutually consistent and in specifying IP or other resources on which the new release depends.

More generally, change management gives the ability to analyze the potential impact of changes by tracing to affected entities and the ability to propagate changes completely, correctly, and efficiently.  For design managers, this supports decision-making as to whether, when, and how to make or accept changes.  For design engineers, it helps in assessing when a set of design entities is complete and consistent and in deciding when it is safe to make (or adopt) a new release.

In this paper we focus on two elements of this approach for SoC design.  One is the specification of representative use cases in which change management plays a critical role.  These show places in the SoC development process where information important for managing change can be gathered.  They also show places where appropriate information can be used to manage the impact of change.  The second element is the specification of a generic schema for modeling design entities and their interrelationships.  This supports traceability among design elements, allows designers to analyze the impact of changes, and facilitates the efficient and comprehensive propagation of changes to affected elements.

The following section provides some background on a survey of subject-matter experts that we performed to refine the problem definition.     

2.    BACKGROUND

We surveyed some 30 IBM subject-matter experts (SMEs) in electronic design, change management, and design data modeling.  They identified 26 problem areas for change management in electronic design.  We categorized these as follows:

  • visibility into project status
  • day-to-day control of project activities
  • organizational or structural changes
  • design method consistency
  • design data consistency

Major themes that crosscut these included:

  • visibility and status of data
  • comprehensive change management
  • method definition, tracking, and enforcement
  • design physical quality
  • common approach to problem identification and handling

We held a workshop with the SMEs to prioritize these problems, and two emerged     as the most significant:  First, the need for basic management of the configuration of all the design data and resources of concern within a project or work package (libraries, designs, code, tools, test suites, etc.); second, the need for designer visibility into the status of data and configurations in a work package.

To realize these goals, two basic kinds of information are necessary:  1) An understanding of how change management may occur in SoC design processes; 2) An understanding of the kinds of information and relationships needed to manage change in SoC design.  We addressed the former by specifying change-management use cases; we addressed the latter by specifying a change-management schema.

3.    USE CASES

This section describes typical use cases in the SoC design process.  Change is a pervasive concern in these use cases—they cause changes, respond to changes, or depend on data and other resources that are subject to change.  Thus, change management is integral to the effective execution of each of these use cases. We identified nine representative use cases in the SoC design process, which are shown in Figure 1.


Figure 1.  Use cases in SoC design

In general there are four ways of initiating a project: New Project, Derive, Merge and Retarget.  New Project is the case in which a new project is created from the beginning.  The Derive case is initiated when a new business opportunity arises to base a new project on an existing design. The Merge case is initiated when an actor wants to merge configuration items during implementation of a new change management scheme or while working with teams/organizations outside of the current scheme. The Retarget case is initiated when a project is restructured due to resource or other constraints.  In all of these use cases it is important to institute proper change controls from the outset.  New Project starts with a clean slate; the other scenarios require changes from (or to) existing projects.    

Once the project is initiated, the next phase is to update the design. There are two use cases in the Update Design composite state.  New Design Elements addresses the original creation of new design elements.  These become new entries in the change-management system.  The Implement Change use case entails the modification of an existing design element (such as fixing a bug).  It is triggered in response to a change request and is supported and governed by change-management data and protocols.

The next phase is Resolve Project, which consists of three use cases. Backout is the use case by which changes that were made in the previous phase can be reversed.  Release is the use case by which a project is released for cross-functional use. The Archive use case protects design assets by keeping a secure copy of the design and environment.

4.    CHANGE-MANAGEMENT SCHEMA

The main goal of the change-management schema is to enable the capture of all information that might contribute to change management.

4.1     Overview

The schema, which is defined in the Unified Modeling Language (UML) [5], consists of several high-level packages (Figure 2).



Figure 2.  Packages in the change-management schema

Package Data represents types for design data and metadata.  Package Objects and Data defines types for objects and data.  Objects are containers for information, data represent the information.  The main types of object include artifacts (such as files), features, and attributes.  The types of objects and data defined are important for change management because they represent the principle work products of electronic design: IP, VHDL and RTL specifications, floor plans, formal verification rules, timing rules, and so on.  It is changes to these things for which management is most needed.

The package Types defines types to represent the types of objects and data.  This enables some types in the schema (such as those for attributes, collections, and relationships) to be defined parametrically in terms of other types, which promotes generality, consistency, and reusability of schema elements.

Package Attributes defines specific types of attribute.  The basic attribute is just a name-value pair that is associated to an object.  (More strongly-typed subtypes of attribute have fixed names, value types, attributed-object types, or combinations of these.)  Attributes are one of the main types of design data, and they are important for change management because they can represent the status or state of design elements (such as version number, verification level, or timing characteristics).

Package Collections defines types of collections, including collections with varying degrees of structure, typing, and constraints.  Collections are important for change management in that changes must often be coordinated for collections of design elements as a group (e.g., for a work package, verification suite, or IP release).  Collections are also used in defining other elements in the schema (for example, baselines and change sets).

The package Relationships defines types of relationships.  The basic relationship type is an ordered collection of a fixed number of elements.  Subtypes provide directionality, element typing, and additional semantics.  Relationships are important for change management because they can define various types of dependencies among design data and resources.  Examples include the use of macros in cores, the dependence of timing reports on floor plans and timing contracts, and the dependence of test results on tested designs, test cases, and test tools.  Explicit dependency relationships support the analysis of change impact and the efficient and precise propagation of changes.

The package Specifications defines types of data specification and definition.  Specifications specify an informational entity; definitions denote a meaning and are used in specifications.

Package Resources represents things (other than design data) that are used in design processes, for example, design tools, IP, design methods, and design engineers.  Resources are important for change management in that resources are used in the actions that cause changes and in the actions that respond to changes.  Indeed, minimizing the resources needed to handle changes is one of the goals of change management.

Resources are also important in that changes to a resource may require changes to design elements that were created using that resource (for example, when changes to a simulator may require reproduction of simulation results).

Package Events defines types and instances of events.  Events are important in change management because changes are a kind of event, and signals of change events can trigger processes to handle the change.

The package Actions provides a representation for things that are done, that is, for the behaviors or executions of tools, scripts, tasks, method steps, etc.  Actions are important for change in that actions cause change.  Actions can also be triggered in response to changes and can handle changes (such as by propagating changes to dependent artifacts).

Subpackage Action Definitions defines the type Action Execution, which contains information about a particular execution of a particular action.  It refers to the definition of the action and to the specific artifacts and attributes read and written, resources used, and events generated and handled.  Thus an action execution indicates particular artifacts and attributes that are changed, and it links those to the particular process or activity by which they were changed, the particular artifacts and attributes on which the changes were based, and the particular resources by which the changes were effected.  Through this, particular dependency relationships can be established between the objects, data, and resources.  This is the specific information needed to analyze and propagate concrete changes to artifacts, processes, resources.


Package Baselines defines types for defining mutually consistent sets of design artifacts. Baselines are important for change management in several respects.  The elements in a baseline must be protected from arbitrary changes that might disrupt their mutual consistency, and the elements in a baseline must be changed in mutually consistent ways in order to evolve a baseline from one version to another.

The final package in Figure 2 is the Change package.  It defines types for representing change explicitly.  These include managed objects, which are objects with an associated change log; change logs and change sets, which are types of collection that contain change records; and change records, which record specific changes to specific objects.  A change record can include a reference to the action execution that caused the change.

The subpackage Change Requests includes types for modeling change requests and responses.  A change request has a type, description, state, priority, and owner.  It can have an associated action definition, which may be the definition of the action to be taken in processing the change request.  A change request also has a change-request history log.
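To make the schema more concrete, here is a minimal Python sketch of a few of the types described above: managed objects, action executions, change records, and dependency relationships. The class and field names are illustrative simplifications, not the paper's actual UML definitions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ManagedObject:
    """An object (artifact, feature, attribute) with an associated change log."""
    name: str
    change_log: List["ChangeRecord"] = field(default_factory=list)

@dataclass
class ActionExecution:
    """One execution of an action: what it read, wrote, and used."""
    action_name: str
    inputs: List[ManagedObject]
    outputs: List[ManagedObject]
    resources: List[str]

@dataclass
class ChangeRecord:
    """A specific change to a specific object, optionally linked to its cause."""
    obj: ManagedObject
    description: str
    caused_by: Optional[ActionExecution] = None

@dataclass
class Dependency:
    """A directed relationship: 'target' is derived from 'source'."""
    source: ManagedObject
    target: ManagedObject
    via: Optional[ActionExecution] = None
```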

4.2    Example

An example of the schema is shown in Figure 3.  The clear boxes (upper part of diagram) show general types from the schema and the shaded boxes (lower part of the diagram) show types (and a few instances) defined for a specific high-level design process project at IBM.



Figure 3.  Example of change-management data

The figure shows a dependency relationship between two types of design artifact, VHDLArtifact and FloorPlannableObjects.  The relationship is defined in terms of a compiler that derives instances of FloorPlannableObjects from instances of VHDLArtifact.  Execution of the compiler constitutes an action that defines the relationship.  The specific schema elements are defined based on the general schema using a variety of object-oriented modeling techniques, including subtyping (e.g., VHDLArtifact), instantiation (e.g., Compile1) and parameterization (e.g. VHDLFloorplannable ObjectsDependency).

5.    USE CASE IMPLEMENT CHANGE

Here we present an example use case, Implement Change, with details on its activities and how the activities use the schema presented in Section 4.  This use case is illustrated in Figure 4.



Figure 4.  State diagram for use case Implement Change

The Implement Change use case addresses the modification of an existing design element (such as fixing a bug).  It is triggered by a change request.  The first steps of this use case are to identify and evaluate the change request to be handled.  Then the relevant baseline is located, loaded into the engineer’s workspace, and verified.  At this point the change can be implemented.  This begins with the identification of the artifacts that are immediately affected.  Then dependent artifacts are identified and changes propagated according to dependency relationships.  (This may entail several iterations.)  Once a stable state is achieved, the modified artifacts are tested and regression tested.  Depending on test results, more changes may be required.  Once the change is considered acceptable, any learning and metrics from the process are captured and the new artifacts and relationships are promoted to the public configuration space.
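A central step in this use case is following dependency relationships outward from the directly modified artifacts. The sketch below shows one simple way to do that as a breadth-first traversal over an explicit dependency map; the artifact names echo the example in Section 4.2 but are otherwise hypothetical.

```python
from collections import deque

# Hypothetical dependency map: artifact -> artifacts derived from it
# (e.g., floorplannable objects are compiled from VHDL, and a timing
#  report depends on the floor plan).
dependents = {
    "core_a.vhdl": ["core_a.floorplan"],
    "core_a.floorplan": ["core_a.timing_report"],
    "core_a.timing_report": [],
}

def affected_by(changed_artifacts):
    """Return every artifact reachable from the changed ones via dependencies."""
    affected, queue = set(), deque(changed_artifacts)
    while queue:
        artifact = queue.popleft()
        for dependent in dependents.get(artifact, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# A change to the VHDL propagates to the floor plan and the timing report.
print(affected_by(["core_a.vhdl"]))
```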

6.    CONCLUSIONS

This paper explores the role of comprehensive change management in SoC design, development, and delivery.  Based on the comments of over thirty experienced electronic design engineers from across IBM, we have captured the essential problems and motivations for change management in SoC projects. We have described design scenarios, highlighting places where change management applies, and presented a preliminary schema to show the range of data and relationships change management may incorporate.  Change management can benefit both design managers and engineers.  It is increasingly essential for improving productivity and reducing time and cost in SoC projects.

ACKNOWLEDGMENTS

Contributions to this work were also made by Nadav Golbandi and Yoav Rubin of IBM’s Haifa Research Lab.  Much information and guidance were provided by Jeff Staten and Bernd-josef Huettl of IBM’s Systems and Technology Group. We especially thank Richard Bell, John Coiner, Mark Firstenberg, Andrew Mirsky, Gary Nusbaum, and Harry Reindel of IBM’s Systems and Technology Group for sharing design data and experiences.  We are also grateful to the many other people across IBM who contributed their time and expertise.

REFERENCES

1.    http://www306.ibm.com/software/awdtools/changemgmt/enterprise/index.html

2.    http://www.cliosoft.com/products/index.html

3.    http://www.icmanage.com/products/index.html

4.    http://www.ins.clrc.ac.uk/europractice/software/matrixone.html

5.    http://www.uml.org/

Source: https://www.design-reuse.com/articles/15745/comprehensive-change-management-for-soc-design.html (Mon, 18 Jul 2022)
IBM Just Launched Blockchain Beyond Currency

Blockchain has the potential to become an integral part of our future. Essentially, it's a decentralized digital ledger that's secured by cryptography and boasts transparency that's unparalleled in any digital platform. Though initially linked to cryptocurrency, the technology has since seen various applications beyond Bitcoin.
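To ground the idea of a cryptographically secured ledger, here is a toy hash-chained ledger in Python. It is a conceptual sketch only, not how IBM Blockchain or Hyperledger Fabric is actually implemented.

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash commits to its data and to the previous block."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a tiny chain: each block references the hash of the one before it.
genesis = make_block({"event": "genesis"}, previous_hash="0" * 64)
block1 = make_block({"from": "alice", "to": "bob", "amount": 5}, genesis["hash"])
block2 = make_block({"from": "bob", "to": "carol", "amount": 2}, block1["hash"])

# Tampering with an earlier block changes its hash and breaks the links,
# which is what makes the ledger tamper-evident.
print(block2["previous_hash"] == block1["hash"])
```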


Blockchain networks are employed in the financial sector, in universal basic income (UBI) programs, and even for humanitarian purposes. A number of institutions have begun investing in research and development of other blockchain-based applications, exploring its potential use in various transaction-based industries. Indeed, the technology has the potential to be as disruptive as the internet itself.

IBM saw that potential when it introduced IBM Blockchain last year. The goal of that public cloud service was to give customers the means to build secure blockchain networks. On Sunday, IBM launched its own "Blockchain as a Service," and it's the first enterprise-ready implementation of IBM Blockchain.

The blockchain is based on The Linux Foundation's open source Hyperledger Fabric. "Think of it as an operating system for marketplaces, data-sharing networks, micro-currencies, and decentralized digital communities," explains Hyperledger on its website. "It has the potential to vastly reduce the cost and complexity of getting things done in the real world."

Through Hyperledger, IBM is offering a set of cloud-based services to help customers create, deploy, and manage blockchain networks, according to Jerry Cuomo, VP of blockchain technology at IBM. “Some time ago, we and several other members of the industry came to view that there needs to be a group looking after, governing, and shepherding technology around blockchain for serious business,” he told TechCrunch.

Though open source, Hyperledger promises to be secure and safe. "Only an Open Source, collaborative software development approach can ensure the transparency, longevity, interoperability, and support required to bring blockchain technologies forward to mainstream commercial adoption," they explained. "That is what Hyperledger is about – communities of software developers building blockchain frameworks and platforms."

To satisfy enterprise users, IBM adds another layer of security services using the IBM cloud. The computing giant also claims that their blockchain network is built around a highly auditable way of tracking all activity. This gives administrators a trail they can follow in case something goes wrong, like in the unlikely event that the network could be breached.

IBM's vision for blockchain isn't just limited to enterprise use. In 2015, IBM and Samsung presented a proof-of-concept for a blockchain-based, decentralized Internet of Things (IoT) called Autonomous Decentralized Peer-to-Peer Telemetry (ADEPT). It's a testament to just how much potential blockchain has. Ultimately, the technology puts digital security and transparency on a whole new level, one that we'll need as we push further into a future of extreme connectivity.


Source: https://futurism.com/ibm-just-launched-blockchain-beyond-currency (Thu, 23 Mar 2017)
Interfacing the IBM-PC to Medical Equipment
  • This book describes the techniques used for interfacing a PC to a range of medical equipment used internationally in the areas of anesthesia, intensive care, surgery, respiratory medicine and physiology. The first part of the book addresses the serial interface, including the RS-232 Standard, transmission of data, and an introduction to serial-interface programming using Microsoft QuickBASIC. The second looks at electrical safety, and the use of Kermit and data analysis. Finally, the third part of this volume considers the practical aspects of interfacing a PC to a wide range of equipment and includes example programs for collecting data and in some cases for controlling the equipment directly.

    • No comparable volume looks at serial interfacing of medical equipment
    • Provides programming techniques as well as example programmes
    • Describes equipment used internationally

    Reviews & endorsements

    "...a comprehensive text on the methods of serial communication...Students of medicine or computer engineering, researchers, anesthesiologists, respiratory therapists, and anyone involved in critical care monitoring will find this book of immense value...This compilation will prove to be of great assistance to anyone interested in serial connectivity, especially to one of the 10 pieces of equipment specifically covered in the last section. The subject is completely covered from conceptual basics to genuine implementation. The authors have succeeded in condensing the myriad of references, technical manuals, and other information normally necessary to accomplish this often confusing task." John J. Fino, Jr., Doody's Health Sciences Book Review Journal


    Product details

    • Date Published: April 1995
    • format: Hardback
    • isbn: 9780521462808
    • length: 428 pages
    • dimensions: 229 x 152 x 27 mm
    • weight: 0.8kg
    • contains: 10 b/w illus.
    • availability: Available
  • Table of Contents

    Part I. The Serial Interface:
    1. The RS-232 standard
    2. Transmission of data
    3. Flow control
    4. The PC serial interface
    5. Serial interface programming in QuickBASIC
    Part II. Miscellaneous Topics: 6. Kermit
    7. Electrical safety and the PC
    8. Data analysis
    Part III. The Equipment:
    9. The Ohmeda 3700 and 3740 pulse oximeters
    10. The Nellcor N-200E pulse oximeter
    11. The Novametrix 515A pulse oximeter
    12. The Minolta Pulsox-7 pulse oximeter
    13. The Datex CardiocapTMII and Capnomac UltimaTM series of monitors
    14. The Graseby 3400 syringe pump
    15. The Ohmeda 9000 syringe pump
    16. The Vitalograph compact II spirometer
    17. The Ohmeda 7800 ventilator
    18. The Dräger Evita intensive care ventilator
    Appendices:
    1. ASCII control and graphic characters
    2. Serial port connector pin-outs
    3. Key codes
    4. The null modem
    5. Program for a device simulator
    6. QuickBASIC 4.5 OPEN and OPEN COM statements
    7. Plotting data using GNUPLOT
    8. Binary and hexadecimal notation
    9. Glossary of terms and abbreviations
    References
    Index.

  • Authors

    Richard W. D. Nickalls, Nottingham City Hospital NHS Trust

    R. Ramasubramanian, Burton District Hospital

Source: https://www.cambridge.org/us/academic/subjects/medicine/medicine-general-interest/interfacing-ibm-pc-medical-equipment-art-serial-communication (Sun, 20 Jun 2021)
Can IBM Get Back Into HPC With Power10?

    The “Cirrus” Power10 processor from IBM, which we codenamed for Big Blue because it refused to do it publicly and because we understand the value of a synonym here at The Next Platform, shipped last September in the “Denali” Power E1080 big iron NUMA machine. And today, the rest of the Power10-based Power Systems product line is being fleshed out with the launch of entry and midrange machines – many of which are suitable for supporting HPC and AI workloads as well as in-memory databases and other workloads in large enterprises.

    The question is, will IBM care about traditional HPC simulation and modeling ever again with the same vigor that it has in past decades? And can Power10 help reinvigorate the HPC and AI business at IBM. We are not sure about the answer to the first question, and got the distinct impression from Ken King, the general manager of the Power Systems business, that HPC proper was not a high priority when we spoke to him back in February about this. But we continue to believe that the Power10 platform has some attributes that make it appealing for data analytics and other workloads that need to be either scaled out across small machines or scaled up across big ones.

    Today, we are just going to talk about the five entry Power10 machines, which have one or two processor sockets in a standard 2U or 4U form factor, and then we will follow up with an analysis of the Power E1050, which is a four socket machine that fits into a 4U form factor. And the question we wanted to answer was simple: Can a Power10 processor hold its own against X86 server chips from Intel and AMD when it comes to basic CPU-only floating point computing.

    This is an important question because there are plenty of workloads that have not been accelerated by GPUs in the HPC arena, and for these workloads, the Power10 architecture could prove to be very interesting if IBM thought outside of the box a little. This is particularly true when considering the feature called memory inception, which is in effect the ability to build a memory area network across clusters of machines and which we have discussed a little in the past.

We went deep into the architecture of the Power10 chip two years ago when it was presented at the Hot Chips conference, and we are not going to go over that ground again here. Suffice it to say that this chip can hold its own against Intel’s current “Ice Lake” Xeon SPs, launched in April 2021, and AMD’s current “Milan” Epyc 7003s, launched in March 2021. And this makes sense because the original plan was to have a Power10 chip in the field with 24 fat cores and 48 skinny ones, using dual-chip modules, using 10 nanometer processes from IBM’s former foundry partner, Globalfoundries, sometime in 2021, three years after the Power9 chip launched in 2018. Globalfoundries did not get the 10 nanometer processes working, and it botched a jump to 7 nanometers and spiked it, and that left IBM jumping to Samsung to be its first server chip partner for its foundry using its 7 nanometer processes. IBM took the opportunity of the Power10 delay to reimplement the Power ISA in a new Power10 core and then added some matrix math overlays to its vector units to make it a good AI inference engine.

    IBM also created a beefier core and dropped the core count back to 16 on a die in SMT8 mode, which is an implementation of simultaneous multithreading that has up to eight processing threads per core, and also was thinking about an SMT4 design which would double the core count to 32 per chip. But we have not seen that today, and with IBM not chasing Google and other hyperscalers with Power10, we may never see it. But it was in the roadmaps way back when.

What IBM has done in the entry machines is put two Power10 chips inside of a single socket to increase the core count, but it is looking like the yields on the chips are not as high as IBM might have wanted. When IBM first started talking about the Power10 chip, it said it would have 15 or 30 cores, which was a strange number, and that is because it kept one SMT8 core or two SMT4 cores in reserve as a hedge against bad yields. In the products that IBM is rolling out today, mostly for its existing AIX Unix and IBM i (formerly OS/400) enterprise accounts, the core counts on the dies are much lower, with 4, 8, 10, or 12 of the 16 cores active. The Power10 cores have roughly 70 percent more performance than the Power9 cores in these entry machines, and that is a lot of performance for many enterprise customers – enough to get through a few years of growth on their workloads. IBM is charging a bit more for the Power10 machines compared to the Power9 machines, according to Steve Sibley, vice president of Power product management at IBM, but the bang for the buck is definitely improving across the generations. At the very low end with the Power S1014 machine that is aimed at small and midrange businesses running ERP workloads on the IBM i software stack, that improvement is in the range of 40 percent, give or take, and the price increase is somewhere between 20 percent and 25 percent depending on the configuration.

    Pricing is not yet available on any of these entry Power10 machines, which ship on July 22. When we find out more, we will do more analysis of the price/performance.

    There are six new entry Power10 machines, the feeds and speeds of which are shown below:

For the HPC crowd, the Power L1022 and the Power L1024 are probably the most interesting ones because they are designed to only run Linux and, if they are like prior L classified machines in the Power8 and Power9 families, will have lower pricing for CPU, memory, and storage, allowing them to better compete against X86 systems running Linux in cluster environments. This will be particularly important as IBM pushes Red Hat OpenShift as a container platform for not only enterprise workloads but also for HPC and data analytic workloads that are also being containerized these days.

    One thing to note about these machines: IBM is using its OpenCAPI Memory Interface, which as we explained in the past is using the “Bluelink” I/O interconnect for NUMA links and accelerator attachment as a memory controller. IBM is now calling this the Open Memory Interface, and these systems have twice as many memory channels as a typical X86 server chip and therefore have a lot more aggregate bandwidth coming off the sockets. The OMI memory makes use of a Differential DIMM form factor that employs DDR4 memory running at 3.2 GHz, and it will be no big deal for IBM to swap in DDR5 memory chips into its DDIMMs when they are out and the price is not crazy. IBM is offering memory features with 32 GB, 64 GB, and 128 GB capacities today in these machines and will offer 256 GB DDIMMs on November 14, which is how you get the maximum capacities shown in the table above. The important thing for HPC customers is that IBM is delivering 409 GB/sec of memory bandwidth per socket and 2 TB of memory per socket.

    By the way, the only storage in these machines is NVM-Express flash drives. No disk, no plain vanilla flash SSDs. The machines also support a mix of PCI-Express 4.0 and PCI-Express 5.0 slots, and do not yet support the CXL protocol created by Intel and backed by IBM even though it loves its own Bluelink OpenCAPI interconnect for linking memory and accelerators to the Power compute engines.

    Here are the different processor SKUs offered in the Power10 entry machines:

    As far as we are concerned, the 24-core Power10 DCM feature EPGK processor in the Power L1024 is the only interesting one for HPC work, aside from what a theoretical 32-core Power10 DCM might be able to do. And just for fun, we sat down and figured out the peak theoretical 64-bit floating point performance, at all-core base and all-core turbo clock speeds, for these two Power10 chips and their rivals in the Intel and AMD CPU lineups. Take a gander at this:

    We have no idea what the pricing will be for a processor module in these entry Power10 machines, so we took a stab at what the 24-core variant might cost to be competitive with the X86 alternatives based solely on FP64 throughput and then reckoned the performance of what a full-on 32-core Power10 DCM might be.
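    For readers who want to reproduce that kind of estimate, the peak number is just cores times clock speed times FP64 operations per core per cycle. The sketch below shows the method only; the clock and per-core throughput values are placeholders to be filled in with the real figures for each chip, not verified Power10 or X86 specs:

        # Peak theoretical FP64 throughput: a sketch of the method, not vendor-confirmed numbers.
        def peak_fp64_gflops(cores: int, clock_ghz: float, fp64_ops_per_core_per_cycle: int) -> float:
            """Peak FP64 GFLOPS = cores x clock (GHz) x FP64 operations per core per cycle."""
            return cores * clock_ghz * fp64_ops_per_core_per_cycle

        # Hypothetical inputs for illustration only; plug in real base/turbo clocks and
        # per-core FMA throughput for each processor you want to compare.
        print(peak_fp64_gflops(cores=24, clock_ghz=3.4, fp64_ops_per_core_per_cycle=16))  # 1305.6
        print(peak_fp64_gflops(cores=32, clock_ghz=3.4, fp64_ops_per_core_per_cycle=16))  # 1740.8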

    The answer is that IBM can absolutely compete, flops to flops, with the best that Intel and AMD have right now. And Power10 has a very good matrix math engine as well, which those X86 chips do not.

    The problem is, Intel has “Sapphire Rapids” Xeon SPs in the works, which we think will have four 18-core chiplets for a total of 72 cores, but only 56 of them will be exposed because of yield issues that Intel has with its SuperFIN 10 nanometer (Intel 7) process. And AMD has 96-core “Genoa” Epyc 7004s in the works, too. Power11 is several years away, so if IBM wants to play in HPC, Samsung has to get the yields up on the Power10 chips so IBM can sell more cores in a box. Big Blue already has the memory capacity and memory bandwidth advantage. We will see if its L-class Power10 systems can compete on price and performance once we find out more. And we will also explore how memory clustering might make for a very interesting compute platform based on a mix of fat NUMA and memory-less skinny nodes. We have some ideas about how this might play out.

    Timothy Prickett Morgan, The Next Platform, 11 Jul 2022: https://www.nextplatform.com/2022/07/12/can-ibm-get-back-into-hpc-with-power10/

    Explainable AI Is Trending And Here’s Why

    According to the 2022 IBM Institute for Business Value study on AI Ethics in Action, building trustworthy Artificial Intelligence (AI) is perceived as a strategic differentiator and organizations are beginning to implement AI ethics mechanisms.

    Seventy-five percent of respondents believe that ethics is a source of competitive differentiation. More than 67% of respondents who view AI and AI ethics as important indicate that their organizations outperform their peers in sustainability, social responsibility, and diversity and inclusion.

    The survey showed that 79% of CEOs are prepared to embed AI ethics into their AI practices, up from 20% in 2018, but less than a quarter of responding organizations have operationalized AI ethics. Less than 20% of respondents strongly agreed that their organization's practices and actions match (or exceed) their stated principles and values.

    Peter Bernard, CEO of Datagration, says that understanding AI gives companies an advantage, adding that explainable AI allows businesses to optimize their data.

    "Not only are they able to explain and understand the AI/ML behind predictions, but when errors arise, they can understand where to go back and make improvements," said Bernard. "A deeper understanding of AI/ML allows businesses to know whether their AI/ML is making valuable predictions or whether they should be improved."

    Bernard believes this can ensure incorrect data is spotted early on and stopped before decisions are made.

    Avivah Litan, vice president and distinguished analyst at Gartner, says that explainable AI also furthers scientific discovery as scientists and other business users can explore what the AI model does in various circumstances.

    "They can work with the models directly instead of relying only on what predictions are generated given a certain set of inputs," said Litan.

    But John Thomas, Vice President and Distinguished Engineer in IBM Expert Labs, says that at its most basic level, explainable AI is the set of methods and processes that help us understand a model's output. "In other words, it's the effort to build AI that can explain to designers and users why it made the decision it did based on the data that was put into it," said Thomas.
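    As a concrete, if minimal, illustration of that idea (a generic sketch using scikit-learn, not IBM's tooling or any particular vendor's product), model-agnostic feature attribution is one of the simplest ways to ask a model why it predicts what it does:

        # Minimal model-explanation sketch: which inputs actually drive the predictions?
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance

        X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
        model = RandomForestClassifier(random_state=0).fit(X, y)

        # Permutation importance: how much does accuracy drop when each feature is shuffled?
        result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
        for i, score in enumerate(result.importances_mean):
            print(f"feature {i}: importance {score:.3f}")

    Attribution scores like these are only a starting point, but they let designers and users see which inputs are actually driving a decision rather than treating the model as a black box.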

    Thomas says there are many reasons why explainable AI is urgently needed.

    "One reason is model drift. Over time as more and more data is fed into a given model, this new data can influence the model in ways you may not have intended," said Thomas. "If we can understand why an AI is making certain decisions, we can do much more to keep its outputs consistent and trustworthy over its lifecycle."

    Thomas adds that at a practical level, we can use explainable AI to make models more accurate and refined in the first place. "As AI becomes more embedded in our lives in more impactful ways, [..] we're going to need not only governance and regulatory tools to protect consumers from adverse effects, we're going to need technical solutions as well," said Thomas.

    "AI is becoming more pervasive, yet most organizations cannot interpret or explain what their models are doing," said Litan. "And the increasing dependence on AI escalates the impact of mis-performing AI models with severely negative consequences," said Litan.

    Bernard takes it back to a practical level, saying that explainable AI [..] creates proof of what senior engineers and experts "know" intuitively and explaining the reasoning behind it simultaneously. "Explainable AI can also take commonly held beliefs and prove that the data does not back it up," said Bernard.

    "Explainable AI lets us troubleshoot how an AI is making decisions and interpreting data is an extremely important tool in helping us ensure AI is helping everyone, not just a narrow few," said Thomas.

    Hiring is an example of where explainable AI can help everyone.

    Thomas says hiring managers deal with all kinds of hiring and talent shortages and usually get more applications than they can read thoroughly. This means there is a strong demand to be able to evaluate and screen applicants algorithmically.

    "Of course, we know this can introduce bias into hiring decisions, as well as overlook a lot of people who might be compelling candidates with unconventional backgrounds," said Thomas. "Explainable AI is an ideal solution for these sorts of problems because it would allow you to understand why a model rejected a certain applicant and accepted another. It helps you make your make model better.”

    Making AI trustworthy

    IBM's AI Ethics survey showed that 85% of IT professionals agree that consumers are more likely to choose a company that's transparent about how its AI models are built, managed and used.

    Thomas says explainable AI is absolutely a response to concerns about understanding and being able to trust AI's results.

    "There's a broad consensus among people using AI that you need to take steps to explain how you're using it to customers and consumers," said Thomas. "At the same time, the field of AI Ethics as a practice is relatively new, so most companies, even large ones, don't have a Head of AI ethics, and they don't have the skills they need to build an ethics panel in-house."

    Thomas believes it's essential that companies begin thinking about building those governance structures. "But there is also a need for technical solutions that can help companies manage their use of AI responsibly," said Thomas.

    Driven by industry, compliance or everything?

    Bernard points to the oil and gas industry as why explainable AI is necessary.

    "Oil and gas have [..] a level of engineering complexity, and very few industries apply engineering and data at such a deep and constant level like this industry," said Bernard. "From the reservoir to the surface, every aspect is an engineering challenge with millions of data points and different approaches."

    Bernard says that in this industry, operators and companies still utilize spreadsheets and other home-grown systems built decades ago. "Utilizing ML enables them to take siloed knowledge, strengthen it and create something transferable across the organization, allowing consistency in decision making and process."

    "When oil and gas companies can perform more efficiently, it is a win for everyone," said Bernard. "The companies see the impact in their bottom line by producing more from their existing assets, lowering environmental impact, and doing more with less manpower."

    Bernard says this leads to more supply to help ease the burden on demand. "Even modest increases, like a 10% improvement in production, can have a massive impact on supply; the more production we have [..] consumers will see relief at the pump."

    But Litan says the trend toward explainable AI is mainly driven by regulatory compliance.

    A 2021 Gartner survey, AI in Organizations, reported that regulatory compliance is the top reason that privacy, security and risk are barriers to AI implementation.

    "Regulators are demanding AI model transparency and proof that models are not generating biased decisions and unfair 'irresponsible' policies," said Litan. "AI privacy, security and/or risk management starts with AI explainability, which is a required baseline."

    Litan says Gartner sees the biggest uptake of explainable AI in regulated industries like healthcare and financial services. "But we also see it increasingly with technology service providers that use AI models, notably in security or other scenarios," said Litan.

    Litan adds that another reason explainable AI is trending is that organizations are unprepared to manage AI risks and often cut corners around model governance. "Organizations that adopt AI trust, risk and security management – which starts with inventorying AI models and explaining them – get better business results," adds Litan.

    But IBM's Thomas doesn't think you can parse the uptake of explainable AI by industry.

    "What makes a company interested in explainable AI isn't necessarily the industry they're in; they're invested in AI in the first place," said Thomas. "IT professionals at businesses deploying AI are 17% more likely to report that their business values AI explainability. Once you get beyond exploration and into the deployment phase, explaining what your models are doing and why quickly becomes very important to you."

    Thomas says that IBM sees some compelling use cases in specific industries starting with medical research.

    "There is a lot of excitement about the potential for AI to accelerate the pace of discovery by making medical research easier," said Thomas. "But, even if AI can do a lot of heavy lifting, there is still skepticism among doctors and researchers about the results."

    Thomas says explainable AI has been a powerful solution to that particular problem, allowing researchers to embrace AI modeling to help them solve healthcare-related challenges because they can refine their models, control for bias and monitor the results.

    "That trust makes it much easier for them to build models more quickly and feel comfortable using them to inform their care for patients," said Thomas.

    IBM worked with Highmark Health to build a model using claims data to model sepsis and COVID-19 risk. But again, Thomas adds that because it's a tool for refining and monitoring how your AI models perform, explainable AI shouldn't be restricted to any particular industry or use case.

    "We have airlines who use explainable AI to ensure their AI is doing a good job predicting plane departure times. In financial services and insurance, companies are using explainable AI to make sure they are making fair decisions about loan rates and premiums," said Thomas. "This is a technical component that will be critical for anyone getting serious about using AI at scale, regardless of what industry they are in."

    Guard rails for AI ethics

    What does the future look like with AI ethics and explainable AI?

    Thomas says the hope is that explainable AI will spread and see adoption because that will be a sign companies take trustworthy AI, both the governance and the technical components, very seriously.

    He also sees explainable AI as essential guardrails for AI Ethics down the road.

    "When we started putting seatbelts in cars, a lot more people started driving, but we also saw fewer and less severe accidents," said Thomas. "That's the obvious hope - that we can make the benefits of this new technology much more widely available while also taking the needed steps to ensure we are not introducing unanticipated consequences or harms."

    One of the most significant factors working against the adoption of AI and its productivity gains is the genuine need to address concerns about how AI is used, what types of data are being collected about people, and whether AI will put them out of a job.

    But Thomas says that worry is contrary to what’s happening today. "AI is augmenting what humans can accomplish, from helping researchers conduct studies faster to assisting bankers in designing fairer and more efficient loans to helping technicians inspect and fix equipment more quickly," said Thomas. "Explainable AI is one of the most important ways we are helping consumers understand that, so a user can say with a much greater degree of certainty that no, this AI isn't introducing bias, and here's exactly why and what this model is really doing."

    One tangible example IBM uses is AI Factsheets in its IBM Cloud Pak for Data. IBM describes the factsheets as 'nutrition labels' for AI, which allow it to list the types of data and algorithms that make up a particular model in the same way a food item lists its ingredients.
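    IBM does not publish its factsheet schema in this article, so the fields below are purely illustrative, but a 'nutrition label' for a model can be as simple as a structured record attached to each deployed model:

        # Hypothetical AI factsheet; illustrative fields only, not IBM's actual schema.
        model_factsheet = {
            "model_name": "loan_default_classifier",
            "intended_use": "rank retail loan applications for manual review",
            "training_data": ["internal_loan_history_2015_2021"],
            "algorithm": "gradient boosted trees",
            "evaluation_metrics": {"auc": 0.87, "false_positive_rate": 0.06},
            "fairness_checks": ["disparate impact ratio by age band and gender"],
            "last_retrained": "2022-05-01",
            "owner": "model-risk-team@example.com",
        }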

    "To achieve trustworthy AI at scale, it takes more than one company or organization to lead the charge,” said Thomas. “AI should come from a diversity of datasets, diversity in practitioners, and a diverse partner ecosystem so that we have continuous feedback and improvement.”

    Jennifer Kite-Powell, Forbes, 27 Jul 2022: https://www.forbes.com/sites/jenniferhicks/2022/07/28/explainable-ai-is--trending-and-heres-why/
    5G Networks Moving To Cloud With IBM Satellite And AT&T Connection

    One of the many interesting aspects of big technology trends is how seemingly independent efforts end up getting intertwined over time. The most recent example is the multi-level link between cloud computing and 5G, with the common ties of hybrid clouds, private networks, and edge computing all coming together to enable a potentially powerful—but also potentially confusing—combination of capabilities.

    In two separate, but related announcements, IBM recently highlighted how its new Cloud Satellite private/hybrid cloud platform can be used to get telecom service provider customers to modernize more of their existing infrastructure and how it can help a specific telco provider (AT&T in this case) offer more of the edge computing-type services that these modernization efforts enable. The timing of these announcements seemed a bit backwards in that the IBM Cloud/AT&T partnership news came first, before the launch of the IBM Cloud for Telecommunications that could theoretically enable it, but that may have been due to legal approvals of the press releases more than anything.

    Regardless, it makes more logical sense to discuss the second bit of news first, because it helps set the stage for the new applications in the first announcement. To that end, IBM’s Cloud for Telecommunications looks to be an auspicious effort to bring together more than 35 different companies to help modernize 5G network infrastructure. Each of the announced ecosystem partners is offering tools that can help move telcos from the traditional closed-box hardware infrastructure that they’ve used for the last few decades into more modern, cloud computing-like architectures. (See “IBM Brings Open Hybrid Cloud Strategy To 5G And The Edge” for an overview of the company’s initial efforts here.)

    This transition is expected to be a long, complicated, but potentially very profitable process, hence the high degree of participation from many large vendors, including Microsoft, Samsung, Cisco, Dell, HPE, Lenovo, and Nokia in addition to IBM. At the same time, the transition is critical to the long-term success of 5G, because many of its advanced capabilities are only possible with the type of more modern, software-defined open infrastructure that this effort is intended to enable (see “Will 5G Networks Move to Open RAN?” for more).

    For IBM, the industry-specific focus of this telecommunications cloud effort is conceptually similar to what it did with its recent financial industry launches (see “New IBM Offering Highlights Rise Of Specialty Clouds”). It is also an excellent vehicle for the company to debut its new Cloud Satellite offering, which is a private/hybrid cloud platform akin to Amazon’s AWS Outposts or Microsoft’s Azure Stack, where the core IBM Cloud services can be run remotely or on-premise on a wide range of base hardware.

    One of the many challenges in getting telco providers to embrace and leverage cloud-native computing architectures for their core network functions is the highly distributed nature of today’s telco networks. Practically speaking, it would be impossible for telco service providers to rely solely on public cloud infrastructure because of the need to have resources that are geographically close to most of their key network destinations. However, by allowing cloud-based platforms to run locally across these many network edge locations, the transition to cloud-based core network functionality becomes more practical.

    That’s where the IBM Cloud Satellite offering, which is based on RedHat’s OpenShift, comes in. OpenShift can run in public cloud environments when that’s most appropriate, but it can also run at these edge or other on-premise locations, giving it the flexibility to meet the demanding and often highly regulated, utility-like environments used by network carriers. Most importantly, Cloud Satellite provides a single point of management and control for all these possible deployments.

    It also provides the basic foundation for connecting hardware and software companies that IBM organized into an ecosystem around its new Cloud for Telecommunications. These partners will provide the hardware platforms upon which the IBM Cloud Satellite platform can run, a variety of best-of-breed software tools that can extend the platform, or advisory/consulting services to assist in getting all the elements deployed.

    To make the solution even more suited to the telecommunications industry, IBM is also integrating its Edge Application Manager and Telco Network Cloud Manager tools to assist in the process of telco-specific workload automation, management and deployment of network services, and more.

    A practical implementation of some of these concepts can be seen in the first of the two announcements made, specifically the one in conjunction with AT&T. What’s interesting about that news is that it brings some of these principles to life, and it does so within the specific context of AT&T’s existing Multi-Access Edge Computing offering. One of the key opportunities that many telcos see with 5G is the deployment of private 5G networks within certain organizations (see “New Research Shows Pent-Up Demand For Private 5G Networks” for more).

    With the IBM/AT&T partnership, AT&T can essentially act as a sales channel for the IBM Cloud Satellite product, letting customers set up and manage private networks or other edge computing applications that benefit from cellular connectivity, all under the auspices of the IBM software. Because it can run locally in a private cloud mode, or connect with other cloud resources in a hybrid cloud mode, Cloud Satellite lets organizations set up the type of capabilities they want—ranging from 5G-connected machines in manufacturing sites to remote monitoring in healthcare and beyond—and control them via a single tool.

    Ultimately, the key to making 5G more than just a faster data pipe will be to figure out ways for carriers and their customers to create the kind of meaningful applications that can help transform businesses and their processes. IBM’s efforts here—particularly in driving an ecosystem of partners that can help companies achieve these goals—look to be important steps forward.

    Disclosure: TECHnalysis Research is a tech industry market research and consulting firm and, like all companies in that field, works with many technology vendors as clients, some of whom may be listed in this article.

    Bob O'Donnell, Forbes, 13 Nov 2020: https://www.forbes.com/sites/bobodonnell/2020/11/11/5g-networks-moving-to-cloud-with-ibm-satellite-and-att-connection/
    IT Consulting Services Market May See a Big Move | Fujitsu, IBM, Gartner

    AMA has introduced new research on the Global IT Consulting Services market covering a micro-level analysis by competitors and key business segments (2021-2027). The Global IT Consulting Services study is a comprehensive look at segments such as opportunities, size, development, innovation, sales and the overall growth of major players. The research draws on primary and secondary statistical sources and includes both qualitative and quantitative detailing.

    Get a sample Report PDF @ https://www.advancemarketanalytics.com/sample-report/6525-global-it-consulting-services—procurement-market

    Some of the major key players profiled in the study are Fujitsu Limited (Japan), HCL Technologies Limited (India), Hexaware Tech Limited (India), Infosys Limited (India), Ernst & Young (U.K), KPMG (Europe), PricewaterhouseCoopers (U.K), Avante (United States), Cognizant Tech Corp. (United States), Gartner, Inc. (United States), Syntel Inc. (United States), IBM Corp (United States), and McKinsey & Company (United States).

    The IT consulting market is expected to see significantly higher demand due to factors like digitization, analytics, cloud, robotics, and the Internet of Things (IoT). IT consulting services involve professional business computer consultancy and advisory services that provide expertise, experience and industry intelligence to the enterprise. The industry covers professional service firms, staffing firms, contractors and information security consultants. The IT consulting segment includes both advisory and implementation services but excludes transactional IT activities. The IT consulting services market consists of eight main divisions: IT Strategy, IT Architecture, IT Implementation, ERP Services, Systems Integration, Data Analytics, IT Security and Software Management.

    Influencing Market Trend

    • IT consulting services are helping organizations to manage their investment and technology and business strategies.

    Market Drivers

    • Current trend toward generalization of business and operating models
    • Requirement for IT investment monitoring
    • Change from traditional IT solutions to newer computing solutions
    • Transition of IT infrastructure to cloud computing infrastructure.

    Opportunities:

    • The growth prospects of cloud infrastructure are projected to create market opportunities for market players.

    Challenges:

    • Changing and rigorous legislative and accreditation requirements are the major challenge faced by this market.

    For more data or any query, mail [email protected]

    Which market aspects are illuminated in the report?

    Executive Summary: It covers a summary of the most vital studies, the Global IT Consulting Services market growth rate, competitive conditions, market trends, drivers and challenges, as well as macroeconomic indicators.

    Study Analysis: Covers major companies, vital market segments, the scope of the products offered in the Global IT Consulting Services market, the years measured and the study points.

    Company Profile: Each firm profiled in this section is assessed on the basis of its products, value, SWOT analysis, capabilities and other significant features.

    Manufacture by region: This Global IT Consulting Services report offers data on imports and exports, sales, production and key companies in all studied regional markets.

    Highlighted of Global IT Consulting Services Market Segments and Sub-Segment:

    IT Consulting Services Market by Key Players: Fujitsu Limited (Japan), HCL Technologies Limited (India), Hexaware Tech Limited (India), Infosys Limited (India), Ernst &Young (U.K), KPMG (Europe), PricewaterhouseCoopers (U.K), Avante (United States), Cognizant Tech Corp. (United States), Gartner, Inc. (United States), Syntel Inc. (United States), IBM Corp (United States), McKinsey & Company (United States),

    IT Consulting Services Market: by Application (Information protection (Data loss prevention, authentication and encryption), Threat protection (Data center and end point), Web and cloud based protection, Services (Advisory, Design, Implementation, Financial, Healthcare, IT telecom))

    IT Consulting Services Market by Geographical Analysis: Americas, United States, Canada, Mexico, Brazil, APAC, China, Japan, Korea, Southeast Asia, India, Australia, Europe, Germany, France, UK, Italy, Russia, Middle East & Africa, Egypt, South Africa, Israel, Turkey & GCC Countries

    For More Query about the IT Consulting Services Market Report? Get in touch with us at: https://www.advancemarketanalytics.com/enquiry-before-buy/6525-global-it-consulting-services—procurement-market

    The study is a source of reliable data on: market segments and sub-segments, market trends and dynamics, supply and demand, market size, current trends/opportunities/challenges, competitive landscape, technological innovations, and value chain and investor analysis.

    Interpretative Tools in the Market: The report integrates thoroughly examined and evaluated information on the prominent players and their position in the market by means of various descriptive tools. Analytical tools including SWOT analysis, Porter’s five forces analysis, and return on investment analysis were used in breaking down the development of the key players operating in the market.

    Key Developments in the Market: This section of the report covers the essential developments in the market, including agreements, collaborations, R&D, new product launches, joint ventures, and partnerships of leading participants operating in the market.

    Key Points in the Market: The key features of this IT Consulting Services market report include production, production rate, revenue, price, cost, market share, capacity, capacity utilization rate, import/export, supply/demand, and gross margin. Key market dynamics plus market segments and sub-segments are covered.

    Basic Questions Answered
    *Who are the key market players in the IT Consulting Services Market?
    *Which are the major regions for different industries that are expected to witness astonishing growth in the IT Consulting Services Market?
    *What are the regional growth trends and the leading revenue-generating regions for the IT Consulting Services Market?
    *What are the major product types of IT Consulting Services?
    *What are the major applications of IT Consulting Services?
    *Which IT Consulting Services technologies will top the market in the next 5 years?

    Examine the Detailed Index of the full Research Study @ https://www.advancemarketanalytics.com/reports/6525-global-it-consulting-services—procurement-market

    Table of Content

    Chapter One: Industry Overview

    Chapter Two: Major Segmentation (Classification, Application and etc.) Analysis

    Chapter Three: Production Market Analysis

    Chapter Four: Sales Market Analysis

    Chapter Five: Consumption Market Analysis

    Chapter Six: Production, Sales and Consumption Market Comparison Analysis

    Chapter Seven: Major Manufacturers Production and Sales Market Comparison Analysis

    Chapter Eight: Competition Analysis by Players

    Chapter Nine: Marketing Channel Analysis

    Chapter Ten: New Project Investment Feasibility Analysis

    Chapter Eleven: Manufacturing Cost Analysis

    Chapter Twelve: Industrial Chain, Sourcing Strategy and Downstream Buyers

    Buy the Full Research Report of Global IT Consulting Services Market @ https://www.advancemarketanalytics.com/buy-now?format=1&report=6525

    Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions like North America, Europe or Asia.

    Contact US:
    Craig Francis (PR & Marketing Manager)
    AMA Research & Media LLP
    Unit No. 429, Parsonage Road Edison, NJ
    New Jersey USA – 08837
    Phone: +1 (206) 317 1218
    [email protected]

    Connect with us at
    https://www.linkedin.com/company/advance-market-analytics
    https://www.facebook.com/AMA-Research-Media-LLP-344722399585916
    https://twitter.com/amareport

    Newsmantraa, Digital Journal, 28 Jul 2022: https://www.digitaljournal.com/pr/it-consulting-services-market-may-see-a-big-move-fujitsu-ibm-gartner