Make your day with 000-081 cram for your exam success

Killexams.com provides legitimate, up-to-date, and accurate 000-081 Practice Test questions with a 100% pass guarantee. You need to practice questions for at least twenty-four hours to score high on the exam. Your path to passing the 000-081 exam starts with killexams.com test practice questions.

Exam Code: 000-081 Practice test 2022 by Killexams.com team
System x Technical Principles V9
IBM Principles approach
Killexams : 7 Basic Tools That Can Improve Quality

Hitoshi Kume, a recipient of the 1989 Deming Prize for use of quality principles, defines problems as "undesirable results of a job." Quality improvement efforts work best when problems are addressed systematically using a consistent and analytic approach; the methodology shouldn't change just because the problem changes. Keeping the steps to problem-solving simple allows workers to learn the process and how to use the tools effectively.

Easy to implement and follow up, the most commonly used and well-known quality process is the plan/do/check/act (PDCA) cycle (Figure 1). Other processes are a takeoff of this method, much in the way that computers today are takeoffs of the original IBM system. The PDCA cycle promotes continuous improvement and should thus be visualized as a spiral instead of a closed circle.

Another popular quality improvement process is the six-step PROFIT model in which the acronym stands for:

P = Problem definition.

R = Root cause identification and analysis.

O = Optimal solution based on root cause(s).

F = Finalize how the corrective action will be implemented.

I = Implement the plan.

T = Track the effectiveness of the implementation and verify that the desired results are met.

If the desired results are not met, the cycle is repeated. Both the PDCA and the PROFIT models can be used for problem solving as well as for continuous quality improvement. In companies that follow total quality principles, whichever model is chosen should be used consistently in every department or function in which quality improvement teams are working.


Figure 1. The most common process for quality improvement is the plan/do/check/act cycle outlined above. The cycle promotes continuous improvement and should be thought of as a spiral, not a circle.
 

7 Basic Quality Improvement Tools

Once the basic problem-solving or quality improvement process is understood, the addition of quality tools can make the process proceed more quickly and systematically. Seven simple tools can be used by any professional to ease the quality improvement process: flowcharts, check sheets, Pareto diagrams, cause and effect diagrams, histograms, scatter diagrams, and control charts. (Some books describe a graph instead of a flowchart as one of the seven tools.)

The concept behind the seven basic tools came from Kaoru Ishikawa, a renowned quality expert from Japan. According to Ishikawa, 95% of quality-related problems can be resolved with these basic tools. The key to successful problem resolution is the ability to identify the problem, use the appropriate tools based on the nature of the problem, and communicate the solution quickly to others. Inexperienced personnel might do best by starting with the Pareto chart and the cause and effect diagram before tackling the use of the other tools. Those two tools are used most widely by quality improvement teams.

Flowcharts

Flowcharts describe a process in as much detail as possible by graphically displaying the steps in proper sequence. A good flowchart should show all process steps under analysis by the quality improvement team, identify critical process points for control, suggest areas for further improvement, and help explain and solve a problem.

The flowchart in Figure 2 illustrates a simple production process in which parts are received, inspected, and sent to subassembly operations and painting. After completing this loop, the parts can be shipped as subassemblies after passing a final test or they can complete a second cycle consisting of final assembly, inspection and testing, painting, final testing, and shipping.


Figure 2. A basic production process flowchart displays several paths a part can travel from the time it hits the receiving dock to final shipping.
 

Flowcharts can be simple, such as the one featured in Figure 2, or they can be made up of numerous boxes, symbols, and if/then directional steps. In more complex versions, flowcharts indicate the process steps in the appropriate sequence, the conditions in those steps, and the related constraints by using elements such as arrows, yes/no choices, or if/then statements.
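
Where a team wants to analyse a process map in software rather than on paper, a flowchart can be captured as a simple directed graph of steps and successors. The short sketch below is a minimal illustration only, with step names loosely paraphrased from Figure 2 rather than taken from it verbatim; it follows one path from receiving through to shipping.

    # Minimal sketch: a production flowchart as a directed graph (step names are illustrative).
    process = {
        "receive parts":    ["inspect"],
        "inspect":          ["subassembly"],
        "subassembly":      ["paint"],
        "paint":            ["final test"],
        "final test":       ["ship", "final assembly"],  # pass -> ship; otherwise second cycle
        "final assembly":   ["inspect and test"],
        "inspect and test": ["paint"],
        "ship":             [],
    }

    # Follow one path by always taking the first listed branch at each step.
    step, path = "receive parts", []
    while step is not None:
        path.append(step)
        successors = process[step]
        step = successors[0] if successors else None
    print(" -> ".join(path))
    # receive parts -> inspect -> subassembly -> paint -> final test -> ship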

Check sheets

Check sheets help organize data by category. They show how many times each particular value occurs, and their information is increasingly helpful as more data are collected. More than 50 observations should be available to be charted for this tool to be really useful. Check sheets minimize clerical work since the operator merely adds a mark to the tally on the prepared sheet rather than writing out a figure (Figure 3). By showing the frequency of a particular defect (e.g., in a molded part) and how often it occurs in a specific location, check sheets help operators spot problems. The check sheet example shows a list of molded part defects on a production line covering a week's time. One can easily see where to set priorities based on results shown on this check sheet. Assuming the production flow is the same on each day, the part with the largest number of defects carries the highest priority for correction.
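
When the observations arrive as a stream of events, the same tally can be kept in a few lines of code. The sketch below is only an illustration — the defect names and counts are invented, not the data behind Figure 3 — but it mirrors what the operator's tick marks accomplish.

    # Minimal check-sheet sketch: tally invented molded-part defects by type and day.
    from collections import Counter

    observations = [
        ("flash", "Mon"), ("short shot", "Mon"), ("flash", "Tue"),
        ("sink mark", "Tue"), ("flash", "Wed"), ("flash", "Wed"),
    ]

    cell_tally = Counter(observations)               # counts per (defect, day) cell
    defect_totals = Counter(defect for defect, _ in observations)

    for (defect, day), count in sorted(cell_tally.items()):
        print(f"{defect:10s} {day}: {'|' * count}")
    print("Highest-priority defect:", defect_totals.most_common(1)[0])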


Figure 3. Because it clearly organizes data, a check sheet is the easiest way to track information.
 

Pareto diagrams

The Pareto diagram is named after Vilfredo Pareto, a 19th-century Italian economist who postulated that a large share of wealth is owned by a small percentage of the population. This basic principle translates well into quality problems—most quality problems result from a small number of causes. Quality experts often refer to the principle as the 80-20 rule; that is, 80% of problems are caused by 20% of the potential sources.

A Pareto diagram puts data in a hierarchical order (Figure 4), which allows the most significant problems to be corrected first. The Pareto analysis technique is used primarily to identify and evaluate nonconformities, although it can summarize all types of data. It is perhaps the diagram most often used in management presentations.


Figure 4. By rearranging random data, a Pareto diagram identifies and ranks nonconformities in the quality process in descending order.
 

To create a Pareto diagram, the operator collects random data, regroups the categories in order of frequency, and creates a bar graph based on the results.
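
In code, those three steps reduce to a sort and a running total. The sketch below uses invented defect counts purely for illustration; the cumulative-percentage column is what shows where the "80 per cent" of problems concentrates.

    # Minimal Pareto sketch: rank invented defect categories and show cumulative share.
    defect_counts = {"flash": 52, "sink mark": 9, "short shot": 21, "warp": 7, "splay": 11}

    total = sum(defect_counts.values())
    ranked = sorted(defect_counts.items(), key=lambda item: item[1], reverse=True)

    running = 0
    for category, count in ranked:
        running += count
        print(f"{category:10s} {count:3d}   cumulative {100 * running / total:5.1f}%")
    # The top one or two categories typically account for most of the total.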

Cause and effect diagrams

The cause and effect diagram is sometimes called an Ishikawa diagram after its inventor. It is also known as a fish bone diagram because of its shape. A cause and effect diagram describes a relationship between variables. The undesirable outcome is shown as effect, and related causes are shown as leading to, or potentially leading to, the said effect. This popular tool has one severe limitation, however, in that users can overlook important, complex interactions between causes. Thus, if a problem is caused by a combination of factors, it is difficult to use this tool to depict and solve it.

A fish bone diagram displays all contributing factors and their relationships to the outcome to identify areas where data should be collected and analyzed. The major areas of potential causes are shown as the main bones, e.g., materials, methods, people, measurement, machines, and design (Figure 5). Later, the subareas are depicted. Thorough analysis of each cause can eliminate causes one by one, and the most probable root cause can be selected for corrective action. Quantitative information can also be used to prioritize means for improvement, whether it be to machine, design, or operator.
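
Structurally, a fish bone diagram is just causes grouped under the main bones, so the working data behind one can be kept as a simple nested mapping. The categories below follow Figure 5; the effect and sub-causes are invented placeholders for illustration.

    # Minimal cause-and-effect sketch: main bones mapped to candidate sub-causes (invented).
    effect = "excess flash on molded part"
    causes = {
        "materials":   ["resin moisture", "regrind ratio"],
        "methods":     ["injection pressure setting", "cycle time"],
        "people":      ["setup procedure not followed"],
        "measurement": ["gauge out of calibration"],
        "machines":    ["worn mold parting line"],
        "design":      ["vent placement"],
    }

    print(f"Effect: {effect}")
    for bone, sub_causes in causes.items():
        print(f"  {bone}:")
        for cause in sub_causes:
            print(f"    - {cause}")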


Figure 5. Fish bone diagrams display the various possible causes of the final effect. Further analysis can prioritize them.
 

Histograms

The histogram plots data in a frequency distribution table. What distinguishes the histogram from a check sheet is that its data are grouped into ranges so that the identity of individual values is lost. Commonly used to present quality improvement data, histograms work best with small amounts of data that vary considerably. When used in process capability studies, histograms can display specification limits to show what portion of the data does not meet the specifications.

After the raw data are collected, they are grouped in value and frequency and plotted in a graphical form (Figure 6). A histogram's shape shows the nature of the distribution of the data, as well as central tendency (average) and variability. Specification limits can be used to display the capability of the process.
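
The grouping step can be reproduced with a simple binning loop. The readings and specification limits below are invented for illustration; the text histogram and out-of-spec count correspond to what Figure 6 conveys graphically.

    # Minimal histogram sketch with invented rod-length readings and assumed spec limits.
    readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.4, 9.7, 10.0, 10.1, 9.9,
                10.3, 10.0, 9.6, 10.2, 10.1, 9.8, 10.0, 10.5, 9.9, 10.0]
    lower_spec, upper_spec = 9.7, 10.3     # assumed specification limits
    bin_width = 0.2
    start = min(readings)

    bins = {}
    for x in readings:
        edge = round(start + bin_width * int((x - start) / bin_width), 1)
        bins[edge] = bins.get(edge, 0) + 1

    for edge in sorted(bins):
        print(f"{edge:4.1f}-{edge + bin_width:4.1f}: {'#' * bins[edge]}")

    mean = sum(readings) / len(readings)
    out_of_spec = sum(1 for x in readings if not lower_spec <= x <= upper_spec)
    print(f"Mean {mean:.3f}; readings outside spec: {out_of_spec}")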


Figure 6. A histogram is an easy way to see the distribution of the data, its average, and variability.
 

Scatter diagrams

A scatter diagram shows how two variables are related and is thus used to test for cause and effect relationships. It cannot prove that one variable causes the change in the other, only that a relationship exists and how strong it is. In a scatter diagram, the horizontal (x) axis represents the measurement values of one variable, and the vertical (y) axis represents the measurements of the second variable. Figure 7 shows part clearance values on the x-axis and the corresponding quantitative measurement values on the y-axis.
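
The strength of the relationship a scatter diagram displays can be summarised numerically with a correlation coefficient. The two small data series below are invented; a Pearson value near +1 or -1 indicates a strong linear relationship, while a value near 0 indicates a weak one.

    # Minimal scatter-diagram companion: Pearson correlation of two invented variables.
    clearance   = [0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0]   # x-axis values (invented)
    measurement = [3.1, 3.4, 3.3, 3.9, 4.2, 4.1, 4.6]   # y-axis values (invented)

    n = len(clearance)
    mean_x = sum(clearance) / n
    mean_y = sum(measurement) / n

    cov   = sum((x - mean_x) * (y - mean_y) for x, y in zip(clearance, measurement))
    var_x = sum((x - mean_x) ** 2 for x in clearance)
    var_y = sum((y - mean_y) ** 2 for y in measurement)

    r = cov / (var_x ** 0.5 * var_y ** 0.5)
    print(f"Pearson r = {r:.2f}")   # close to +1 here: a strong positive relationship
    # Correlation alone does not prove that clearance causes the change in measurement.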


Figure 7. The plotted data points in a scatter diagram show the relationship between two variables.
 

Control charts

A control chart displays statistically determined upper and lower limits drawn on either side of a process average. This chart shows if the collected data are within upper and lower limits previously determined through statistical calculations of raw data from earlier trials.

The construction of a control chart is based on statistical principles and statistical distributions, particularly the normal distribution. When used in conjunction with a manufacturing process, such charts can indicate trends and signal when a process is out of control. The center line of a control chart represents an estimate of the process mean; the upper and lower critical limits are also indicated. The process results are monitored over time and should remain within the control limits; if they do not, an investigation is conducted for the causes and corrective action taken. A control chart helps determine variability so it can be reduced as much as is economically justifiable.

In preparing a control chart, the mean, upper control limit (UCL), and lower control limit (LCL) of an approved process and its data are calculated. A blank control chart with the mean, UCL, and LCL but no data points is created; data points are added as they are statistically calculated from the raw data.

Figure 8. Data points that fall outside the upper and lower control limits lead to investigation and correction of the process.
 

Figure 8 is based on 25 samples or subgroups. For each sample, which in this case consisted of five rods, measurements are taken of a quality characteristic (in this example, length). These data are then grouped in table form (as shown in the figure) and the average and range from each subgroup are calculated, as are the grand average and average of all ranges. These figures are used to calculate UCL and LCL. For the control chart in the example, the formula is the grand average ± A2 × (average range), where A2 is a constant determined by the table of constants for variable control charts. The constant is based on the subgroup sample size, which is five in this example.
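
As a worked illustration of that calculation, the sketch below uses a few invented subgroups of five measurements rather than the 25 subgroups behind Figure 8. The A2 value of 0.577 is the standard table constant for a subgroup size of five; the rest follows the steps described above.

    # Minimal X-bar control-chart sketch with invented subgroups of five rod lengths.
    subgroups = [
        [10.1, 9.9, 10.0, 10.2, 9.8],
        [10.0, 10.1, 9.9, 10.0, 10.1],
        [9.8, 10.2, 10.0, 9.9, 10.1],
        [10.3, 10.0, 9.9, 10.1, 10.0],
    ]
    A2 = 0.577   # table constant for variable control charts, subgroup size n = 5

    averages = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]

    grand_average = sum(averages) / len(averages)
    average_range = sum(ranges) / len(ranges)

    ucl = grand_average + A2 * average_range
    lcl = grand_average - A2 * average_range
    print(f"Grand average {grand_average:.3f}, average range {average_range:.3f}")
    print(f"UCL {ucl:.3f}, LCL {lcl:.3f}")

    for i, avg in enumerate(averages, start=1):
        note = "" if lcl <= avg <= ucl else "  <-- investigate and take corrective action"
        print(f"Subgroup {i}: average {avg:.3f}{note}")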

Conclusion

Many people in the medical device manufacturing industry are undoubtedly familiar with many of these tools and know their application, advantages, and limitations. However, manufacturers must ensure that these tools are in place and being used to their full advantage as part of their quality system procedures. Flowcharts and check sheets are most valuable in identifying problems, whereas cause and effect diagrams, histograms, scatter diagrams, and control charts are used for problem analysis. Pareto diagrams are effective for both areas. By properly using these tools, the problem-solving process can be more efficient and more effective.

Those manufacturers who have mastered the seven basic tools described here may wish to further refine their quality improvement processes. A future article will discuss seven new tools: relations diagrams, affinity diagrams (K-J method), systematic diagrams, matrix diagrams, matrix data diagrams, process decision programs, and arrow diagrams. These seven tools are used less frequently and are more complicated.

Ashweni Sahni is director of quality and regulatory affairs at Minnetronix, Inc. (St. Paul, MN), and a member of MD&DI's editorial advisory board.


Source: https://www.mddionline.com/design-engineering/7-basic-tools-can-improve-quality

Killexams : Comprehensive Change Management for SoC Design

By Sunita Chulani [1], Stanley M. Sutton Jr. [1], Gary Bachelor [2], and P. Santhanam [1]

[1] IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 USA
[2] IBM Global Business Services, PO BOX 31, Birmingham Road, Warwick CV34 5JL UK

Abstract

Systems-on-a-Chip (SoC) are becoming increasingly complex, leading to corresponding increases in the complexity and cost of SoC design and development.  We propose to address this problem by introducing comprehensive change management.  Change management, which is widely used in the software industry, involves controlling when and where changes can be introduced into components so that changes can be propagated quickly, completely, and correctly.
In this paper we address two main topics:   One is typical scenarios in electronic design where change management can be supported and leveraged. The other is the specification of a comprehensive schema to illustrate the varieties of data and relationships that are important for change management in SoC design.

1.    INTRODUCTION

SoC designs are becoming increasingly complex.  Pressures on design teams and project managers are rising because of shorter times to market, more complex technology, more complex organizations, and geographically dispersed multi-partner teams with varied “business models” and higher “cost of failure.”

Current methodology and tools for designing SoC need to evolve with market demands in key areas:  First, multiple streams of inconsistent hardware (HW) and software (SW) processes are often integrated only in the late stages of a project, leading to unrecognized divergence of requirements, platforms, and IP, resulting in unacceptable risk in cost, schedule, and quality.  Second, even within a stream of HW or SW, there is inadequate data integration, configuration management, and change control across life cycle artifacts.  Techniques used for these are often ad hoc or manual, and the cost of failure is high.  This makes it difficult for a distributed team to be productive and inhibits the early, controlled reuse of design products and IP.  Finally, the costs of deploying and managing separate dedicated systems and infrastructures are becoming prohibitive.

We propose to address these shortcomings through comprehensive change management, which is the integrated application of configuration management, version control, and change control across software and hardware design.  Change management is widely practiced in the software development industry.  There are commercial change-management systems available for use in electronic design, such as MatrixOne DesignSync [4], ClioSoft SOS [2], IC Manage Design Management [3], and Rational ClearCase/ClearQuest [1], as well as numerous proprietary, “home-grown” systems.  But to date change management remains an under-utilized technology in electronic design.

In SoC design, change management can help with many problems.  For instance, when IP is modified, change management can help in identifying blocks in which the IP is used, in evaluating other affected design elements, and in determining which tests must be rerun and which rules must be re-verified. Or, when a new release is proposed, change management can help in assessing whether the elements of the release are mutually consistent and in specifying IP or other resources on which the new release depends.

More generally, change management gives the ability to analyze the potential impact of changes by tracing to affected entities and the ability to propagate changes completely, correctly, and efficiently.  For design managers, this supports decision-making as to whether, when, and how to make or accept changes.  For design engineers, it helps in assessing when a set of design entities is complete and consistent and in deciding when it is safe to make (or adopt) a new release.

In this paper we focus on two elements of this approach for SoC design.  One is the specification of representative use cases in which change management plays a critical role.  These show places in the SoC development process where information important for managing change can be gathered.  They also show places where appropriate information can be used to manage the impact of change.  The second element is the specification of a generic schema for modeling design entities and their interrelationships.  This supports traceability among design elements, allows designers to analyze the impact of changes, and facilitates the efficient and comprehensive propagation of changes to affected elements.

The following section provides some background on a survey of subject-matter experts that we performed to refine the problem definition.     

2.    BACKGROUND

We surveyed some 30 IBM subject-matter experts (SMEs) in electronic design, change management, and design data modeling.  They identified 26 problem areas for change management in electronic design.  We categorized these as follows:

  • visibility into project status
  • day-to-day control of project activities
  • organizational or structural changes
  • design method consistency
  • design data consistency

Major themes that crosscut these included:

  • visibility and status of data
  • comprehensive change management
  • method definition, tracking, and enforcement
  • design physical quality
  • common approach to problem identification and handling

We held a workshop with the SMEs to prioritize these problems, and two emerged as the most significant:  First, the need for basic management of the configuration of all the design data and resources of concern within a project or work package (libraries, designs, code, tools, test suites, etc.); second, the need for designer visibility into the status of data and configurations in a work package.

To realize these goals, two basic kinds of information are necessary:  1) An understanding of how change management may occur in SoC design processes; 2) An understanding of the kinds of information and relationships needed to manage change in SoC design.  We addressed the former by specifying change-management use cases; we addressed the latter by specifying a change-management schema.

3.    USE CASES

This section describes typical use cases in the SoC design process.  Change is a pervasive concern in these use cases—they cause changes, respond to changes, or depend on data and other resources that are subject to change.  Thus, change management is integral to the effective execution of each of these use cases. We identified nine representative use cases in the SoC design process, which are shown in Figure 1.


Figure 1.  Use cases in SoC design

In general there are four ways of initiating a project: New Project, Derive, Merge and Retarget.  New Project is the case in which a new project is created from the beginning.  The Derive case is initiated when a new business opportunity arises to base a new project on an existing design. The Merge case is initiated when an actor wants to merge configuration items during implementation of a new change management scheme or while working with teams/organizations outside of the current scheme. The Retarget case is initiated when a project is restructured due to resource or other constraints.  In all of these use cases it is important to institute proper change controls from the outset.  New Project starts with a clean slate; the other scenarios require changes from (or to) existing projects.    

Once the project is initiated, the next phase is to update the design. There are two use cases in the Update Design composite state.  New Design Elements addresses the original creation of new design elements.  These become new entries in the change-management system.  The Implement Change use case entails the modification of an existing design element (such as fixing a bug).  It is triggered in response to a change request and is supported and governed by change-management data and protocols.

The next phase is the Resolve Project and consists of 3 use cases. Backout is the use case by which changes that were made in the previous phase can be reversed.  Release is the use case by which a project is released for cross functional use. The Archive use case protects design assets by making a secure copy of the design and its environment.

4.    CHANGE-MANAGEMENT SCHEMA

The main goal of the change-management schema is to enable the capture of all information that might contribute to change management.

4.1     Overview

The schema, which is defined in the Unified Modeling Language (UML) [5], consists of several high-level packages (Figure 2).



Figure 2.  Packages in the change-management schema

Package Data represents types for design data and metadata.  Package Objects and Data defines types for objects and data.  Objects are containers for information, data represent the information.  The main types of object include artifacts (such as files), features, and attributes.  The types of objects and data defined are important for change management because they represent the principle work products of electronic design: IP, VHDL and RTL specifications, floor plans, formal verification rules, timing rules, and so on.  It is changes to these things for which management is most needed.

The package Types defines types to represent the types of objects and data.  This enables some types in the schema (such as those for attributes, collections, and relationships) to be defined parametrically in terms of other types, which promotes generality, consistency, and reusability of schema elements.

Package Attributes defines specific types of attribute.  The basic attribute is just a name-value pair that is associated to an object.  (More strongly-typed subtypes of attribute have fixed names, value types, attributed-object types, or combinations of these.)  Attributes are one of the main types of design data, and they are important for change management because they can represent the status or state of design elements (such as version number, verification level, or timing characteristics).

Package Collections defines types of collections, including collections with varying degrees of structure, typing, and constraints.  Collections are important for change management in that changes must often be coordinated for collections of design elements as a group (e.g., for a work package, verification suite, or IP release).  Collections are also used in defining other elements in the schema (for example, baselines and change sets).

The package Relationships defines types of relationships.  The basic relationship type is an ordered collection of a fixed number of elements.  Subtypes provide directionality, element typing, and additional semantics.  Relationships are important for change management because they can define various types of dependencies among design data and resources.  Examples include the use of macros in cores, the dependence of timing reports on floor plans and timing contracts, and the dependence of test results on tested designs, test cases, and test tools.  Explicit dependency relationships support the analysis of change impact and the efficient and precise propagation of changes.

The package Specifications defines types of data specification and definition.  Specifications specify an informational entity; definitions denote a meaning and are used in specifications.

Package Resources represents things (other than design data) that are used in design processes, for example, design tools, IP, design methods, and design engineers.  Resources are important for change management in that resources are used in the actions that cause changes and in the actions that respond to changes.  Indeed, minimizing the resources needed to handle changes is one of the goals of change management.

Resources are also important in that changes to a resource may require changes to design elements that were created using that resource (for example, when changes to a simulator may require reproduction of simulation results).

Package Events defines types and instances of events.  Events are important in change management because changes are a kind of event, and signals of change events can trigger processes to handle the change.

The package Actions provides a representation for things that are done, that is, for the behaviors or executions of tools, scripts, tasks, method steps, etc.  Actions are important for change in that actions cause change.  Actions can also be triggered in response to changes and can handle changes (such as by propagating changes to dependent artifacts).

Subpackage Action Definitions defines the type Action Execution, which contains information about a particular execution of a particular action.  It refers to the definition of the action and to the specific artifacts and attributes read and written, resources used, and events generated and handled.  Thus an action execution indicates particular artifacts and attributes that are changed, and it links those to the particular process or activity by which they were changed, the particular artifacts and attributes on which the changes were based, and the particular resources by which the changes were effected.  Through this, particular dependency relationships can be established between the objects, data, and resources.  This is the specific information needed to analyze and propagate concrete changes to artifacts, processes, resources.


Package Baselines defines types for defining mutually consistent sets of design artifacts. Baselines are important for change management in several respects.  The elements in a baseline must be protected from arbitrary changes that might disrupt their mutual consistency, and the elements in a baseline must be changed in mutually consistent ways in order to evolve a baseline from one version to another.

The final package in Figure 2 is the Change package.  It defines types for representing change explicitly.  These include managed objects, which are objects with an associated change log, change logs and change sets, which are types of collection that contain change records, and change records, which record specific changes to specific objects.  They can include a reference to an action execution that caused the change.

The subpackage Change Requests includes types for modeling change requests and responses.  A change request has a type, description, state, priority, and owner.  It can have an associated action definition, which may be the definition of the action to be taken in processing the change request.  A change request also has a change-request history log.
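
To make these ideas concrete, the sketch below models a small, deliberately simplified slice of the schema — managed objects with change logs, change records that reference the action execution that caused them, and a dependency relationship used for impact analysis. The class and field names are our own illustrative simplifications, not the paper's actual UML type definitions.

    # Simplified, illustrative sketch of a few schema concepts (names are not from the paper).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ActionExecution:                 # one particular run of a tool, script, or method step
        action_name: str
        inputs: List[str]
        outputs: List[str]

    @dataclass
    class ChangeRecord:                    # records a specific change to a specific object
        description: str
        caused_by: ActionExecution

    @dataclass
    class ManagedObject:                   # an object with an associated change log
        name: str
        change_log: List[ChangeRecord] = field(default_factory=list)

    @dataclass
    class Dependency:                      # directed relationship used for impact analysis
        source: ManagedObject
        target: ManagedObject

    # A compile action derives floorplannable objects from a VHDL artifact (cf. Section 4.2).
    vhdl = ManagedObject("core_top.vhdl")
    floorplan = ManagedObject("core_top.floorplannable")
    compile_run = ActionExecution("compile", inputs=[vhdl.name], outputs=[floorplan.name])
    floorplan.change_log.append(ChangeRecord("regenerated from VHDL", compile_run))
    dependencies = [Dependency(vhdl, floorplan)]

    # Impact analysis: which objects are affected if the VHDL artifact changes?
    affected = [d.target.name for d in dependencies if d.source is vhdl]
    print("Changing", vhdl.name, "affects:", affected)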

4.2    Example

An example of the schema is shown in Figure 3.  The clear boxes (upper part of diagram) show general types from the schema and the shaded boxes (lower part of the diagram) show types (and a few instances) defined for a specific high-level design process project at IBM.



Figure 3.  Example of change-management data

The figure shows a dependency relationship between two types of design artifact, VHDLArtifact and FloorPlannableObjects.  The relationship is defined in terms of a compiler that derives instances of FloorPlannableObjects from instances of VHDLArtifact.  Execution of the compiler constitutes an action that defines the relationship.  The specific schema elements are defined based on the general schema using a variety of object-oriented modeling techniques, including subtyping (e.g., VHDLArtifact), instantiation (e.g., Compile1) and parameterization (e.g. VHDLFloorplannable ObjectsDependency).

5.    USE CASE IMPLEMENT CHANGE

Here we present an example use case, Implement Change, with details on its activities and how the activities use the schema presented in Section 4.  This use case is illustrated in Figure 4.



Figure 4.  State diagram for use case Implement Change

The Implement Change use case addresses the modification of an existing design element (such as fixing a bug).  It is triggered by a change request.  The first steps of this use case are to identify and evaluate the change request to be handled.  Then the relevant baseline is located, loaded into the engineer’s workspace, and verified.  At this point the change can be implemented.  This begins with the identification of the artifacts that are immediately affected.  Then dependent artifacts are identified and changes propagated according to dependency relationships.  (This may entail several iterations.)  Once a stable state is achieved, the modified artifacts are checked and regression tested.  Depending on test results, more changes may be required.  Once the change is considered acceptable, any learning and metrics from the process are captured and the new artifacts and relationships are promoted to the public configuration space.
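
The propagation step in this use case — find the immediately affected artifacts, then follow dependency relationships until no new artifacts are reached — is essentially a graph traversal. The sketch below is our own schematic rendering of that loop, with an invented dependency map; it is not code from the paper.

    # Schematic sketch of change propagation over an invented dependency map.
    # Each artifact maps to the artifacts that directly depend on it.
    dependents = {
        "macro_A": ["core_top.vhdl"],
        "core_top.vhdl": ["floorplan", "timing_report"],
        "floorplan": ["timing_report"],
        "timing_report": [],
    }

    def propagate(initially_changed):
        """Return every artifact reachable from the initially changed set."""
        affected = set(initially_changed)
        frontier = list(initially_changed)
        while frontier:                      # may take several iterations, as noted above
            artifact = frontier.pop()
            for dependent in dependents.get(artifact, []):
                if dependent not in affected:
                    affected.add(dependent)
                    frontier.append(dependent)
        return affected

    # A bug fix in macro_A ripples out to everything derived from it; each affected
    # artifact would then be checked and regression tested before promotion.
    print(sorted(propagate({"macro_A"})))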

6.    CONCLUSIONS

This paper explores the role of comprehensive change management in SoC design, development, and delivery.  Based on the comments of over thirty experienced electronic design engineers from across IBM, we have captured the essential problems and motivations for change management in SoC projects. We have described design scenarios, highlighting places where change management applies, and presented a preliminary schema to show the range of data and relationships change management may incorporate.  Change management can benefit both design managers and engineers.  It is increasingly essential for improving productivity and reducing time and cost in SoC projects.

ACKNOWLEDGMENTS

Contributions to this work were also made by Nadav Golbandi and Yoav Rubin of IBM’s Haifa Research Lab.  Much information and guidance were provided by Jeff Staten and Bernd-josef Huettl of IBM’s Systems and Technology Group. We especially thank Richard Bell, John Coiner, Mark Firstenberg, Andrew Mirsky, Gary Nusbaum, and Harry Reindel of IBM’s Systems and Technology Group for sharing design data and experiences.  We are also grateful to the many other people across IBM who contributed their time and expertise.

REFERENCES

1.    http://www306.ibm.com/software/awdtools/changemgmt/enterprise/index.html

2.    http://www.cliosoft.com/products/index.html

3.    http://www.icmanage.com/products/index.html

4.    http://www.ins.clrc.ac.uk/europractice/software/matrixone.html

5.    http://www.uml.org/

Source: https://www.design-reuse.com/articles/15745/comprehensive-change-management-for-soc-design.html

Killexams : Quantum Computer Systems II

This quantum computing course explores the basic design principles of today's quantum computer systems. In this course, students will learn to work with the IBM Qiskit software tools to write ...

Source: https://www.usnews.com/education/skillbuilder/quantum-computer-systems-ii-1_course_v1:UChicagoX+QCS12000+1T2022_verified

Killexams : How Much Is Your Data Actually Worth?

By Jamie Wilson, MD & Founder, Cryptoloc Technology Group

As our world gets smaller, and our systems for sharing information become increasingly interconnected, breaches are becoming an inevitability. It’s no longer a matter of if, but when, your data will come under attack – but do you have any idea how precious your data actually is?

The criminals who steal data – whether for the purpose of blackmail, identity theft, extortion or even espionage – are finding themselves competing in an increasingly crowded marketplace. Over the course of the global coronavirus pandemic, as the lines between our personal and professional lives and devices blurred like never before and ransomware proliferated, hackers became more active and empowered than ever.

According to Privacy Affairs’ latest Dark Web Price Index, the stolen data market grew significantly larger in both volume and variety over the last year, with more credit card data, personal information and documents on offer.

As the supply of stolen data has grown, prices for each individual piece of data have plummeted. Hacked credit card details that would have sold for US$240 in 2021 are going for US$120 in 2022, for instance, and stolen online banking logins are down from US$120 to US$65.

But this hasn’t discouraged cybercriminals. Instead, dark web sites have begun resorting to traditional marketing tactics like two-for-one discounts on stolen data, creating a bulk sales mentality that places an even greater imperative on cybercrime cartels to amass large quantities of data.

This makes it even more likely that your data will be stolen, because even if your organisation isn’t specifically targeted, you could be caught up in an increasingly common smash-and-grab raid – like the attack on Microsoft that exposed around a quarter of a million email systems last year.

And while the value of each piece of data on the dark web is decreasing for cybercriminals, cyber attacks are just getting costlier for the businesses the data is stolen from.

How much is your data worth to your business?

Not sure how much your data is worth? The exact answer is impossible to quantify definitively, as it will change from one business and one piece of data to another, but it’s clear that having your data stolen can have devastating consequences.

According to the Cost of a Data Breach Report 2021 from IBM and Ponemon, which studied the impacts of 537 real breaches across 17 countries and regions, the cost to a business of a data breach sits at US$161 per record on average – a 10.3 per cent increase from 2020 to 2021.

For a personally identifiable piece of customer data, the cost goes up to US$180 per record. Not only is this the costliest type of record, it’s also the most commonly compromised, appearing in 44 per cent of all breaches in the study.

For a personally identifiable piece of employee data, the cost sits at US$176 per record. Intellectual property costs US$169 per record, while anonymised customer data will set you back US$157 per record.

But it’s extremely unlikely that a cybercriminal would go to the effort of hacking your business for one piece of data. In that sense, it’s more instructive to look at the average cost of a data breach in total – which currently sits at a staggering US$4.24M.

For ransomware breaches, in which cybercriminals encrypt files on a device and demand a ransom in exchange for their decryption, the average cost goes up to US$4.62M, while data breaches caused by business email compromise have an average cost of US$5.01M.

Breaches are costliest in the heavily regulated healthcare industry (US$9.23M) – a logical outcome, given the heightened sensitivity of medical records. By comparison, the ‘cheapest’ breaches are in less regulated industries such as hospitality (US$3.03M).

Mega breaches involving at least 50 million records were excluded from the study to avoid blowing up the average, but a separate section of the report noted that these types of attacks cost 100 times more than the average breach.

The report found the average breach takes 287 days to identify and contain, with the cost increasing the longer the breach remains unidentified. So when it comes to cybercrime, time really is money.

IBM and Ponemon broke the average cost of a breach up into four broad categories – detection and escalation (29 per cent), notification (6 per cent), post-breach response (27 per cent) and lost business cost (38 per cent). Lost business costs include business disruption and revenue losses from system downtime; the cost of lost customers; reputation losses; and diminished goodwill.

A 2019 Deloitte report determined that up to 90 per cent of the total costs in a cyberattack occur beneath the surface – that the disruption to a business’ operations, as well as insurance premium increases, credit rating impact, loss of customer relationships and brand devaluation are the real killers in the long run.

It can take time for the true impacts of a breach to reveal themselves. In 2021, the National Australia Bank revealed it had paid $686,878 in compensation to customers as the result of a 2019 data breach, which led to the personal account details of about 13,000 customers being uploaded to the dark web.

The costs included the reissuance of government identification documents, as well as subscriptions to independent, enhanced fraud detection services for the affected customers. But the bank also had to hire a team of cyber-intelligence experts to investigate the breach, the cost of which remains unknown.

The IBM and Ponemon report confirms that the costs of a data breach won’t all be felt straight away. While the bulk of an average data breach’s cost (53 per cent) is incurred in the first year, another 31 per cent is incurred in the second year, and the final 16 per cent is incurred more than two years after the event.

And with the latest rise of double extortion – in which cyber criminals not only take control of a system and demand payment for its return, but also threaten to leak the data they’ve stolen unless they receive a separate payment – we’re likely to see data breaches exact a heavy toll for even longer time periods moving forward.

How can you protect your data?

Data breaches are becoming costlier and more common, so it’s more important than ever to ensure your data is protected.

Many businesses are turning to cyber insurance to protect themselves. Cyber insurance typically covers costs related to the loss of data, as well as fines and penalties imposed by regulators, public relations costs, and compensation to third parties for failure to protect their data.

But as breaches become a virtual inevitability and claims for catastrophic cyberattacks become more common, insurers are getting cold feet. Premiums are skyrocketing, and insurers are limiting their coverage, with some capping their coverage at about half of what they used to offer and others refusing to offer cyber insurance policies altogether.

Regardless, cyber insurance is not a cyber security policy. Even the most favourable cyber insurance policy doesn’t prevent breaches, but merely attempts to mitigate the impact after the horse has already bolted.

The best approach is to educate your employees and other members of your organisation about cyber security, and put the appropriate controls and best practices in place, including using multi-factor authentication, implementing zero trust policies, and backing up and encrypting data.

The IBM and Ponemon report found that the use of strong encryption – at least 256-bit AES, at rest and in transit – was a top mitigating cost factor. Organisations using strong encryption had an average breach cost that was 29.4 per cent lower than those using low standard or no encryption.
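
For teams acting on that finding, authenticated 256-bit AES is available in mainstream libraries. The snippet below is a generic illustration using the third-party Python "cryptography" package — it is not Cryptoloc's technology — and it deliberately leaves out key management, which is the hard part in practice.

    # Generic illustration of AES-256-GCM encryption at rest (pip install cryptography).
    # Key storage, rotation and access control are out of scope for this sketch.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit key; keep it in a secrets manager
    aesgcm = AESGCM(key)

    record = b"customer: Jane Doe, card ending 1234"
    nonce = os.urandom(12)                      # must be unique for every encryption
    ciphertext = aesgcm.encrypt(nonce, record, None)

    # Without the key, the stolen ciphertext is worthless to an attacker.
    assert aesgcm.decrypt(nonce, ciphertext, None) == record
    print("ciphertext length:", len(ciphertext))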

When data is safely and securely encrypted, any files a cybercriminal gains access to will be worthless to them without an encryption key. My business, Cryptoloc, has taken this principle even further with our patented three-key encryption technology, which combines three different encryption algorithms into one unique multilayer process.

Built for a world without perimeters, our ISO-certified technology has been deployed across multiple products, including Cryptoloc Secure2Client, which enables users to send fully encrypted documents directly from Microsoft Outlook.

We’ve recently made Secure2Client available on the Salesforce AppExchange, so that marketing, sales, commerce, service and IT teams using Salesforce around the world can encrypt the reports they send to clients and third parties that are sensitive or confidential in nature.

This protects Salesforce users from the potentially catastrophic ramifications of a data breach, while allowing them to continue using the existing application that their business is built around.

We’ve also rolled out a new Ransomware Recovery capability that empowers users to protect and restore their data in real-time in the event of an attack, ensuring they never have to pay a costly ransom for the return of their data.

With Ransomware Recovery, every version of every file a user stores in the Cloud is automatically saved. If they suspect they’ve been the victim of a ransomware attack, they can simply lock down their Cloud temporarily to stop the spread of malware; view their files’ audit trails to determine when the attack occurred; roll back their data to the point before it was corrupted; and then unlock their Cloud.

This ensures users can recover their data as quickly and effectively as possible, minimising costly disruptions to their business, removing the need for a lengthy and expensive investigation, and ensuring they never have to pay a cent to a cybercriminal to get back the data that’s rightfully theirs.

Yes, cyber attacks are inevitable – but victimhood isn’t. If you take the right precautions, you can prevent costly breaches and maintain control of your precious data.

About the Author

Jamie Wilson is the founder and chairman of Cryptoloc, recognized by Forbes as one of the 20 Best Cybersecurity Startups to watch in 2020. Headquartered in Brisbane, Australia, with offices in Japan, US, South Africa and the UK, Cryptoloc have developed one of the world’s strongest encryption technologies and cybersecurity platforms, ensuring clients have complete control over their data. Jamie can be reached online at www.linkedin.com/in/jamie-wilson-07424a68 and at www.cryptoloc.com


Source: https://www.cyberdefensemagazine.com/how-much/

Killexams : Don’t pop antibiotics every time you have a cold. But resistance crisis has an AI solution

These technologies are already working together to accelerate the discovery of new antimicrobial medicines. One subset of next-gen AI, dubbed generative models, produces hypotheses about the final molecule needed for a specific new drug. These AI models don’t just search for known molecules with relevant properties, such as the ability to bind to and neutralise a virus or a bacterium; they are powerful enough to learn features of the underlying data and can suggest new molecules that have not yet been synthesised. This design, as opposed to searching capability, is particularly transformative because the number of possible suitable molecules is greater than the number of atoms in the universe, prohibitively large for search tasks.

Generative AI can navigate this vast chemical space to discover the right molecule faster than any human using conventional methods. AI modelling already supports research that could help patients with Parkinson’s disease, diabetes and chronic pain. Antimicrobial peptides (AMPs), for example – small protein-like compounds – are one solution that is the subject of intensive study. These molecules hold great promise as next-generation antibiotics because they are inherently less susceptible to resistance and are produced naturally as part of the innate immune system of living organisms.

In recent studies published in Nature Biomedical Engineering in 2021, the AI-assisted search for new, effective, non-toxic peptides produced 20 promising novel candidates in just 48 days, a striking reduction compared to the conventional development times for new compounds.

Among these were two novel candidates used against Klebsiella pneumoniae, a bacterium frequently found in hospitals that causes pneumonia and bloodstream infections and has become increasingly resistant to conventional classes of antibiotics. Obtaining such a result with conventional research methods would take years.

AMPs already in commercial use

Collaborative work between IBM, Unilever, and STFC, which hosts one of IBM Research’s Discovery Accelerators at the Hartree Centre in the UK, has recently helped researchers better understand AMPs. Unilever has already used that new knowledge to create consumer products that boost the effects of these natural-defence peptides.

And, in this Biophysical Journal paper, researchers demonstrated how small-molecule additives (organic compounds with low molecular weights) are able to make AMPs much more potent and efficient. Using advanced simulation methods, IBM researchers, in combination with experimental studies from Unilever, also identified new molecular mechanisms that could be responsible for this enhanced potency. This is a first-of-its-kind proof of principle that scientists will take forward in ongoing collaborations.

Boosting material discovery with AI

Generative models and advanced computer simulations are part of a much larger strategy at IBM Research, dubbed Accelerated Discovery, where we use emerging computing technologies to boost the scientific method and its application to discovery. The aim is to greatly speed up the rate of discovery of new materials and drugs, whether it is in preparation for the next global crisis or to rapidly address the current and the inevitable future ones.

This is just one element of the loop comprising the revised scientific method, a cutting-edge transformation of the traditional linear approach to material discovery. Broadly, AI learns about the desired properties of a new material. Next, another type of AI, IBM’s Deep Search, combs through the existing knowledge on the manufacturing of this specific material, meaning all the previous research tucked away in patents and papers.

Generative models have the potential to create a new molecule

Following this, the generative models create a possible new molecule based on the existing data. Once done, we use a high-performance computer to simulate this new candidate molecule and the reactions it should have with its neighbours to make sure it performs as expected. In the future, a quantum computer could improve these molecular simulations even further.

The final step is AI-driven lab testing to experimentally validate the predictions and develop actual molecules. At IBM, we do this with a tool called RoboRXN, a small, fridge-sized ‘chemistry lab’ that combines AI, cloud computing and robots to help researchers create new molecules anywhere at any time. The combination of these approaches is well suited to tackle general ‘inverse design’ problems. Here, the task is to find or create for the first time a material with a desired property or function, as opposed to computing or measuring the properties of large numbers of candidates.
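
Purely as a schematic, the workflow described above can be pictured as a generate–simulate–test loop. The stub functions below are placeholders we invented to show the shape of that loop; they do not represent IBM's Deep Search, its generative models, RoboRXN, or any real scoring chemistry.

    # Schematic only: a generate -> simulate -> test loop with invented stand-in logic.
    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def generate_candidate(length=12):
        """Stand-in for a generative model proposing a new peptide sequence."""
        return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

    def simulate(peptide):
        """Stand-in for molecular simulation; returns a toy activity score in [0, 1]."""
        return (peptide.count("K") + peptide.count("R")) / len(peptide)

    def lab_test(peptide):
        """Stand-in for automated experimental validation of a promising candidate."""
        return simulate(peptide) > 0.25

    promising = []
    for _ in range(200):                               # the discovery loop
        candidate = generate_candidate()
        if simulate(candidate) > 0.3 and lab_test(candidate):
            promising.append(candidate)

    print(f"{len(promising)} promising candidates out of 200 generated")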

Proof that AI can go beyond the limits of classical computing

The antibiotics crisis is a particularly urgent example of a global inverse design challenge in need of a true paradigm shift towards the way we discover materials. The rapid progress in quantum computing and the development of quantum machine-learning techniques is now creating realistic prospects of extending the reach of artificial intelligence beyond the limitations of classical computing. Early examples show promise for quantum advantages in model training speed, classification tasks and prediction accuracy.

Overall, combining the most powerful emerging AI techniques (possibly with quantum acceleration) to learn features linked to antimicrobial activity with physical modelling at the molecular scale to reveal the modes of action is, arguably, the most promising route to creating these essential compounds faster than ever before.

The article originally appeared in the World Economic Forum.




Source: https://theprint.in/world/dont-pop-antibiotics-every-time-you-have-a-cold-but-resistance-crisis-has-an-ai-solution/1069101/

Killexams : Innov8: BPM/SOA video game simulator in the works at IBM

IBM has been working on Innov8, a 3D video game SOA/BPM simulator. At the moment only a demo and screen shots are available, and the game is set to be available in September.  InfoQ spoke to IBM to find out more. According to IBM, "we're creating the game to deliver an introductory level understanding of BPM enabled by SOA. This includes the basic vocabulary of business process management, the typical steps of a BPM project, and role and value that IBM's BPM software can deliver. And, since the game's narrative is derived from real world experiences of IBM's expert BPM practitioners, it includes many helpful tips and perils to avoid."   A demo was posted on youtube:

The game puts the player into the world of After, Inc., a fictitious company that just acquired a rival firm. According to IBM: "After's Board of Directors has called an emergency meeting to discuss the progress with After's CEO. Your character is tasked with the mission to help the CEO rapidly create an innovative new process that leverages the strengths of both companies."

More screenshots:
   
Players don't actually write code or draw models in the game, but it does take users through many of the same thought processes that one would take in creating business process models.   Unfortunately, there is no combat in the game, nor can you "hijack people's cube and take their PCs for joyrides. Oh, and kick the crap out the guys walking the halls on their cell phone" as one youtuber commented. :)

Source: https://www.infoq.com/news/2007/06/innov8/

Killexams : How Congress, the USPTO and firms can fix 101 woes: in-house

Counsel at IBM, Novartis, BMS and three other companies say legislation is their best option now SCOTUS has declined to hear American Axle


Rani reports on all aspects of IP in the US and the Americas, particularly trademarks and copyright. Based in New York, she covers in-house and private practice lawyers' concerns and insights into the market.

Source: https://www.managingip.com/article/2abz4wh8cm8hhs4277474/how-congress-the-uspto-and-firms-can-fix-101-woes-in-house

Killexams : IBM stock drops despite 'solid' results, as some analysts worry about slowing growth and rising risks

By James Rogers

IBM reported better-than-expected second-quarter results after market close on Monday, which were described as "solid" and "reasonable" by analysts.

International Business Machines Corp. reported better-than-expected second-quarter results after the stock market closed on Monday, which were described as "solid" and "reasonable" by some analysts.

Despite the earnings beat, the stock (IBM) sank 6.4% in midday trading Tuesday, as others on Wall Street questioned the "quality" of the results and pointed to decelerating growth. The stock's selloff bucked a rally in the broader stock market, as the Dow Jones Industrial Average surged more than 500 points.

IBM's sales were $15.54 billion, up from $14.22 billion in the year-ago quarter, after adjusting for discontinued operations, specifically the spinoff managed infrastructure-service business Kyndryl Holdings Inc. (KD). That beat the FactSet consensus of $15.08 billion.

Net income was $1.39 billion, or $1.54 a share, up from $1.47 a share in the year-ago period. Adjusted earnings per share, which exclude stock-based compensation expenses and other items, were $2.31, up from $2.23 in the prior year's quarter, and above the FactSet EPS consensus of $2.26.

The tech giant had a "solid" quarter, as currency movements (FX) were an expected incremental headwind while the company's consulting and mainframe businesses showed strength, according to Stifel analyst David Grossman. Set against this backdrop, Stifel reiterated its buy rating on IBM.

"While risk is elevated, given slower than expected revenue stabilization, we believe the risk/reward remains attractive, given very negative market sentiment and several potential catalysts over the next 12 months, which could drive both estimates and the multiple higher," Grossman wrote. "These catalysts include re-accelerating services growth, better software performance and a weaker [U.S. dollar] (50-55% of revenue denominated in foreign currency)."


IBM is one of the few tech giants that have gained through the selloff in the sector, boosted by strong performance in its software and consulting businesses. During the second quarter, the company's software revenue was $6.4 billion, a 6.2% increase on the same period last year. Consulting revenue was $4.8 billion, an increase of 9.8%, while infrastructure revenue increased 19% to $4.2 billion.

BMO Capital Markets retained its market perform rating for IBM. "We thought IBM results were reasonable though we don't think the results will cause any equivocating investors to declare a new position," wrote BMO analyst Keith Bachman, in a note released on Tuesday. "While FX is material for IBM, our coverage universe will face the same challenges."

Wedbush maintained its neutral rating for IBM, with analyst Moshe Katri describing the company's results as "mixed."

Katri noted that while IBM's second-quarter revenue and earnings exceeded expectations, most of the upside was generated from the company's infrastructure and hardware segment. This, he explained, offset moderation in software and consulting growth, which was in line with expectations.

"Non-GAAP [generally accepted accounting principles] gross margins in SW and consulting segments were also below expectations, likely reflecting FX headwinds as well as rising wage inflation, and offset by strong margins in IBM Financing," he wrote.

IBM's results point to earnings quality issues, Katri added, as well as the possibility of weakening IT spending.

UBS reiterated its sell rating for IBM on Tuesday, with analyst David Vogt citing concern about decelerating growth that could see organic and reported revenue turning negative in the fourth quarter.


Of 20 analysts surveyed by FactSet, seven have a buy rating, 10 have a hold rating and 3 have an underweight or sell rating for IBM.

-James Rogers

 

Source: https://www.morningstar.com/news/marketwatch/20220720148/ibm-stock-drops-despite-solid-results-as-some-analysts-worry-about-slowing-growth-and-rising-risks

Killexams : Easdales consider appeal to ministers over £100m plan for IBM site

MILLIONAIRE brothers James and Sandy Easdale are considering an appeal to Scottish ministers over a £100 million housing development bid following a dispute with a council.

The two businessmen submitted a detailed application more than two years ago to build 450 homes at the site of the former IBM plant in Greenock.

They also planned to develop the site at Spango Valley for leisure, community and retail use and said the project had the potential to create 130 jobs through the construction phase and a further 300 jobs upon completion.

However, ahead of the proposals being heard by members of Inverclyde Council's planning board in January, officials recommended that the green light should only be given to 270 of the properties - 60 per cent of the development.

Part of the justification given by officials for limiting the number of properties was concern over capacity issues at a local Catholic school.

A report to the planning board by Inverclyde Council’s interim director of planning and regeneration said there were no objections to the development on education grounds, though the local Catholic high school had “some capacity issues”.

It said: “Education – no objections. It is advised that the development is within the catchment of St Columba’s High School, which is currently experiencing some capacity pressure.

“However, education services assessment, based on currently available information, is that the school estate will be able to accommodate additional pupils from this development in the future.”

The report added: “After careful consideration, the conclusion reached is therefore again that in order to protect its interests including realisation of the wider Spango Valley Priority Place development, and to take full cognisance of the potential impact on the capacity of the denominational secondary school, the council has to control, via condition, the number of residential units on the application site to the previously mentioned maximum figure of 270.”

Sources close to the Easdales said they were not informed of the housing limits when they submitted the proposals and if they had been they would not have lodged them. They said the lower number of houses did not make the development viable and have previously threatened to sue the council to recover their costs.

The planning board meeting scheduled to make the decision was then postponed until March to allow members to get more information about the application and the controversies surrounding it. In the end councillors narrowly voted to approve the development with the officials' recommendation for 270 dwellings.

But the Easdale brothers, who submitted the application with Advance Construction, want the full complement of 450 homes built in the development.

Asked by the Herald if the businessmen were considering an appeal to the Scottish Government, a source close to them said: "Yes it is high on their agenda."

The Easdales’ adviser Jack Irvine said: “Sandy and James Easdale will explore every avenue on this matter.”

He added: “First Minister Nicola Sturgeon has said Scotland needs 100,000 new homes in the next decade. Local councillors have a very perverse approach to her wishes.” 

Last month industry body Homes for Scotland (HFS) warned that the country is suffering from a chronic housing shortage of about 100,000 new homes.

HFS said the lack of new housing is a result of a “consistent undersupply” over more than a decade.

The group represents 200 member organisations with an aim to deliver more homes for the country.

It has called on the Scottish Government for a target of at least 25,000 new homes per year – the minimum number the group believes is needed to meet current demand.

A spokesman for Inverclyde Council said: “An application for planning permission in principle was approved at a meeting of the planning board on 2 March 2022 for a maximum of 270 houses to be built on the site.

“As is the case with all planning applications, applicants have the right to appeal to the Scottish Government within three months."

Last week the Easdale brothers unveiled plans for a £20 million development of an old Glasgow department store, which they hope to turn into a boutique hotel.

The brothers, the owners of McGills Buses, have submitted plans to Glasgow City Council for the redevelopment of the Watt Brothers store, which they said could be a “stepping stone” towards restoring Glasgow as a “great shopping and leisure centre”.

The former shop, on the corner of Bath Street and Sauchiehall Street has been empty since Watt Brothers went into administration in 2019.

Now, after reaching a deal with administrators at KPMG, the two brothers hope to turn the art-deco style building into a boutique hotel, complete with luxury residences and a shopping complex.

Architect Douglas McConville of Silverfern Consultancy has drawn up plans for the listed building, which dates back to 1914, aiming to restore and enhance historic features.

Sandy Easdale said: “We wanted a classy design that would maximise the use of the huge site but would not compromise the unique character of the original building.”

The majority of planning appeals to Scottish ministers are decided by an independent reporter from the Planning and Environmental Appeals Division of the Scottish Government.

The reporter is required to make a decision on the planning merits of the case and in accordance with the development plan, unless material considerations indicate otherwise. 

The reporter will take account of submissions made by all parties, including from members of the local community.

In the financial year 2020-21, Scottish Government reporters issued 135 appeal decisions, granting permission on almost 50 per cent of cases.

The technology firm IBM built its first factory in Spango Valley, Greenock, in 1951, initially manufacturing typewriters, printers and other office equipment.

Four decades later the plant started to produce personal computers. The company continued to evolve and expand along the valley through the 1960s, 1970s and 1980s, and was a major employer in the region.

However, operations began to decline from the late 1990s as they were relocated to other locations  across the globe, and the factory was subsequently closed. IBM retain a presence in Greenock with a client centre within the Pottery Street Business Park.

Since closure, the IBM facility at Spango Valley has been demolished and the site cleared.

Source: https://www.heraldscotland.com/politics/20105039.easdales-consider-appeal-ministers-100m-plan-ibm-site/

000-081 exam dump and training guide direct download
Training Exams List