A guide to continuous testing tools

Mobile Labs: Mobile Labs remains the leading supplier of in-house mobile device clouds that connect remote, shared devices to Global 2000 mobile web, gaming, and app engineering teams. Its patented GigaFox is offered on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

RELATED CONTENT: Testing all the time

NowSecure: NowSecure is the mobile app security software company trusted by
the world’s most demanding organizations. Only the NowSecure Platform delivers
fully automated mobile app security and privacy testing with the speed, accuracy,
and efficiency necessary for Agile and DevSecOps environments. Through the
industry’s most advanced static, dynamic, behavioral and interactive mobile app
security testing on real Android and iOS devices, NowSecure identifies the broadest array of security threats, compliance gaps and privacy issues in custom-developed, commercial, and business-critical mobile apps. NowSecure customers can choose automated software on-premises or in the cloud, expert professional penetration testing and managed services, or a combination of all as needed. NowSecure offers the fastest path to deeper mobile app security and privacy testing and certification.

Parasoft: Parasoft’s software testing tool suite automates time-consuming testing tasks for developers and testers, and helps managers and team leaders pinpoint priorities. With solutions that are easy to use, adopt, and scale, Parasoft’s software testing tools fit right into your existing toolchain and shrink testing time with next-level efficiency, augmented with AI. Parasoft users are able to succeed in today’s most strategic development initiatives, capture new growth opportunities, and meet the growing expectations of consumers.

Perfecto: Perfecto offers a cloud-based continuous testing platform that takes
mobile and web testing to the next level. It features a continuous quality lab with smart self-healing capabilities; test authoring, management, validation, and debugging of even advanced and hard-to-test business scenarios; test execution simulations; and smart analysis. For mobile testing, users can test against more than 3,000 real devices, and web developers can boost their test portfolio with cross-browser testing in the cloud.

CA Technologies offers next-generation, integrated continuous testing solutions that automate the most difficult testing activities — from requirements engineering through test design automation, service virtualization and intelligent orchestration. Built on end-to-end integrations and open source, CA’s comprehensive solutions help organizations eliminate testing bottlenecks impacting their DevOps and continuous delivery practices to test at the speed of agile, and build better apps, faster.

HPE Software’s automated testing solutions simplify software testing within fast-moving agile teams and for Continuous Integration scenarios. Integrated with DevOps tools and ALM solutions, HPE automated testing solutions keep quality at the center of today’s modern applications and hybrid infrastructures. 

IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery lifecycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more. 

Micro Focus is a leading global enterprise software company with a world-class testing portfolio that helps customers accelerate their application delivery and ensure quality and security at every stage of the application lifecycle — from the first backlog item to the user experience in production. Simplifying functional, mobile, performance and application security within fast-moving Agile teams and for DevOps, Micro Focus testing solutions keep quality at the center of today’s modern applications and hybrid infrastructures with an integrated end-to-end application lifecycle management solution that is built for any methodology, technology and delivery model. 

Microsoft provides a specialized tool set for testers that delivers an integrated experience starting from agile planning to test and release management, on premises or in the cloud. 

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

Progress: Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

QASymphony’s qTest is a Test Case Management solution that integrates with popular development tools. QASymphony offers qTest eXplorer for teams doing exploratory testing. 

Rogue Wave is the largest independent provider of cross-platform software development tools and embedded components in the world. Rogue Wave Software’s Klocwork boosts software security and creates more reliable software. With Klocwork, analyze static code on-the-fly, simplify peer code reviews, and extend the life of complex software. Thousands of customers, including the biggest brands in the automotive, mobile device, consumer electronics, medical technologies, telecom, military and aerospace sectors, make Klocwork part of their software development process. 

Sauce Labs provides the world’s largest cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium, both widely adopted open-source standards for automating browser and mobile application functionality.

SmartBear provides a range of frictionless tools to help testers and developers deliver robust test automation strategies. With powerful test planning, test creation, test data management, test execution, and test environment solutions, SmartBear is paving the way for teams to deliver automated quality at both the UI and API layer. SmartBear automation tools ensure functional, performance, and security correctness within your deployment process, integrating with tools like Jenkins, TeamCity, and more. 

SOASTA’s Digital Performance Management (DPM) Platform enables measurement, testing and improvement of digital performance. It includes five technologies: mPulse real user monitoring (RUM); the CloudTest platform for continuous load testing; TouchTest mobile functional test automation; Digital Operation Center (DOC) for a unified view of contextual intelligence accessible from any device; and Data Science Workbench, simplifying analysis of current and historical web and mobile user performance data. 

Synopsys: Through its Software Integrity platform, Synopsys provides a comprehensive suite of testing solutions for rapidly finding and fixing critical security vulnerabilities, quality defects, and compliance issues throughout the SDLC. 

TechExcel: DevTest is a sophisticated quality-management solution used by development and QA teams of all sizes to manage every aspect of their testing processes. 

Testplant: Eggplant’s Digital Automation Intelligence Suite empowers teams to continuously create amazing, user-centric digital experiences by testing the true UX, not the code. 

Tricentis is recognized by both Forrester and Gartner as a leader in software test automation, functional testing, and continuous testing. Our integrated software testing solution, Tricentis Tosca, provides a unique Model-based Test Automation and Test Case Design approach to functional test automation—encompassing risk-based testing, test data management and provisioning, service virtualization, API testing and more.

Thu, 30 Jun 2022 11:59:00 -0500 en-US text/html https://sdtimes.com/automated-test/a-guide-to-continuous-testing-tools/
IBM, NI Plug Systems Engineering Gap

With the number of lines of code in the average car expected to skyrocket from 10 million in 2010 to 100 million in 2030, there's no getting around the fact that embedded software development and a systems engineering approach have become central not only to automotive design, but to product design in general.

Yet despite the invigorated focus on what is essentially a long-standing design process, organizations still struggle with siloed systems and engineering processes that stand in the way of true systems engineering spanning mechanical, electrical, and software functions. In an attempt to address some of those hurdles, IBM and National Instruments are partnering to break down the silos specifically as they relate to the quality management engineering system workflow, or more colloquially, the marriage between design and test.

"As customers go through iterative development cycles, whether they're building a physical product or a software subsystem, and get to some level of prototype testing, they run into a brick wall around the manual handoff between the development and test side," Mark Lefebvre, director, systems alliances and integrations, for IBM Rational, told us. "Traditionally, these siloed processes never communicate and what happens is they find errors downstream in the software development process when it is more costly to fix."

NI and IBM's answer to this gap? The pair is building a bridge -- specifically an integration between IBM's Rational Quality Manager test and quality management tool, and NI's VeriStand and TestStand real-time testing and test-automation environments. The integration, Lefebvre said, is designed to plug the gap and provide full traceability of what's defined on the test floor back to design and development, enabling more iterative testing throughout the lifecycle and uncovering errors earlier in the process, well before building costly prototypes.

The ability to break down the quality management silos and facilitate earlier collaboration can have a huge impact on cost if you look at the numbers IBM Rational is touting. According to Lefebvre, a bug that costs $1 to fix on a programmer's desktop costs $100 to fix once it makes its way into a complete program and many thousands of dollars once identified after the software has been deployed in the field.

While the integration isn't yet commercialized (Lefebvre said to expect it at the end of the third quarter), there is a proof of concept being tested with five or six big NI/IBM customers. The proof of concept is focused on the development of an embedded control unit (ECU) for a cruise control system that could operate across multiple vehicle platforms. The workflow exhibited marries the software development test processes to the hardware module test processes, from the requirements stage through quality management, so if a test fails or changes are made to the code, the results are shared throughout the development lifecycle.

Prior to such an integration, any kind of data sharing was limited to manual processes around Word documents and spreadsheets, Lefebvre said. "Typically, a software engineer would hand carry all the data in a spreadsheet and import it into the test environment. Now there's a pipe connecting the two."

Related posts:

Wed, 06 Jul 2022 12:00:00 -0500 en text/html https://www.designnews.com/design-hardware-software/ibm-ni-plug-systems-engineering-gap
IBM is Modeling New AI After the Human Brain

Attentive Robots

Currently, artificial intelligence (AI) technologies are able to exhibit seemingly-human traits. Some are intentionally humanoid, and others perform tasks that we normally associate strictly with humanity — songwriting, teaching, and visual art.

But as the field progresses, companies and developers are re-thinking the basis of artificial intelligence by examining our own intelligence and how we might effectively mimic it using machinery and software. IBM is one such company, as they have embarked on the ambitious quest to teach AI to act more like the human brain.


Many existing machine learning systems are built around the need to draw from sets of data. Whether they are problem-solving to win a game of Go or identifying skin cancer from images, this often remains true. This basis is, however, limited — and it differs from the way the human brain works.

We as humans learn incrementally. Simply put, we learn as we go. While we acquire knowledge to pull from as we go along, our brains adapt and absorb information differently from the way that many existing artificial systems are built. Additionally, we are logical. We use reasoning skills and logic to problem solve, something that these systems aren't yet terrific at accomplishing.

IBM is looking to change this. A research team at DeepMind has created a synthetic neural network that reportedly uses rational reasoning to complete tasks.

Rational Machinery

By giving the AI multiple objects and a specific task, "We are explicitly forcing the network to discover the relationships that exist," says Timothy Lillicrap, a computer scientist at DeepMind in an interview with Science Magazine. In a test of the network back in June, it was questioned about an image with multiple objects. The network was asked, for example: "There is an object in front of the blue thing; does it have the same shape as the tiny cyan thing that is to the right of the gray metal ball?"

In this test, the network correctly identified the object a staggering 96 percent of the time, compared to the measly 42 to 77 percent that more traditional machine learning models achieved. The advanced network was also apt at word problems and continues to be developed and improved upon. In addition to reasoning skills, researchers are advancing the network's ability to pay attention and even make and store memories.


The future of AI development could be hastened and greatly expanded by such tactics, according to Irina Rish, an IBM research staff member. As she told Engadget: "Neural network learning is typically engineered and it's a lot of work to actually come up with a specific architecture that works best. It's pretty much a trial and error approach ... It would be good if those networks could build themselves."

It might be scary to think of AI networks building and improving themselves, but if monitored, initiated, and controlled correctly, this could allow the field to expand beyond current limitations. Despite the brimming fears of a robot takeover, the advancement of AI technologies could save lives in the medical field, allow humans to get to Mars, and so much more. 

Wed, 29 Dec 2021 18:58:00 -0600 text/html https://futurism.com/ibm-is-modeling-new-ai-after-the-human-brain
Rational Apex

A comprehensive Ada development environment for Unix systems from IBM. The tools extend to Ada 95/Ada 83 development as well as support for C/C++. Rational Apex evolved from the original, proprietary hardware-based Ada environment on which Rational Machines was founded in the early 1980s. The company, later renamed Rational Software, was acquired by IBM in 2003. See Rational Rose.

Fri, 29 Mar 2019 12:26:00 -0500 en text/html https://www.pcmag.com/encyclopedia/term/rational-apex
Comprehensive Change Management for SoC Design
By Sunita Chulani1, Stanley M. Sutton Jr.1, Gary Bachelor2, and P. Santhanam1
1 IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 USA
2 IBM Global Business Services, PO BOX 31, Birmingham Road, Warwick CV34 5JL UK


Systems-on-a-Chip (SoC) are becoming increasingly complex, leading to corresponding increases in the complexity and cost of SoC design and development.  We propose to address this problem by introducing comprehensive change management.  Change management, which is widely used in the software industry, involves controlling when and where changes can be introduced into components so that changes can be propagated quickly, completely, and correctly.
In this paper we address two main topics:   One is typical scenarios in electronic design where change management can be supported and leveraged. The other is the specification of a comprehensive schema to illustrate the varieties of data and relationships that are important for change management in SoC design.


SoC designs are becoming increasingly complex.  Pressures on design teams and project managers are rising because of shorter times to market, more complex technology, more complex organizations, and geographically dispersed multi-partner teams with varied “business models” and higher “cost of failure.”

Current methodology and tools for designing SoC need to evolve with market demands in key areas:  First, multiple streams of inconsistent hardware (HW) and software (SW) processes are often integrated only in the late stages of a project, leading to unrecognized divergence of requirements, platforms, and IP, resulting in unacceptable risk in cost, schedule, and quality.  Second, even within a stream of HW or SW, there is inadequate data integration, configuration management, and change control across life cycle artifacts.  Techniques used for these are often ad hoc or manual, and the cost of failure is high.  This makes it difficult for a distributed team to be productive and inhibits the early, controlled reuse of design products and IP.  Finally, the costs of deploying and managing separate dedicated systems and infrastructures are becoming prohibitive.

We propose to address these shortcomings through comprehensive change management, which is the integrated application of configuration management, version control, and change control across software and hardware design.  Change management is widely practiced in the software development industry.  There are commercial change-management systems available for use in electronic design, such as MatrixOne DesignSync [4], ClioSoft SOS [2], IC Manage Design Management [3], and Rational ClearCase/ClearQuest [1], as well as numerous proprietary, “home-grown” systems.  But to date change management remains an under-utilized technology in electronic design.

In SoC design, change management can help with many problems.  For instance, when IP is modified, change management can help in identifying blocks in which the IP is used, in evaluating other affected design elements, and in determining which tests must be rerun and which rules must be re-verified. Or, when a new release is proposed, change management can help in assessing whether the elements of the release are mutually consistent and in specifying IP or other resources on which the new release depends.

More generally, change management gives the ability to analyze the potential impact of changes by tracing to affected entities and the ability to propagate changes completely, correctly, and efficiently.  For design managers, this supports decision-making as to whether, when, and how to make or accept changes.  For design engineers, it helps in assessing when a set of design entities is complete and consistent and in deciding when it is safe to make (or adopt) a new release.

In this paper we focus on two elements of this approach for SoC design.  One is the specification of representative use cases in which change management plays a critical role.  These show places in the SoC development process where information important for managing change can be gathered.  They also show places where appropriate information can be used to manage the impact of change.  The second element is the specification of a generic schema for modeling design entities and their interrelationships.  This supports traceability among design elements, allows designers to analyze the impact of changes, and facilitates the efficient and comprehensive propagation of changes to affected elements.
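The traceability and impact-analysis idea described above can be sketched in a few lines of code. This is an illustrative sketch only, not the paper's implementation; all names (DependencyGraph, the artifact labels) are hypothetical.

```python
# Hypothetical sketch: dependency edges between design elements, with
# transitive impact analysis over the resulting graph.
from collections import defaultdict

class DependencyGraph:
    def __init__(self):
        # maps an element to the set of elements that depend on it
        self._dependents = defaultdict(set)

    def add_dependency(self, dependent, dependency):
        """Record that `dependent` is derived from or relies on `dependency`."""
        self._dependents[dependency].add(dependent)

    def impacted_by(self, changed):
        """Return every element transitively affected by a change to `changed`."""
        impacted, frontier = set(), [changed]
        while frontier:
            for dep in self._dependents[frontier.pop()]:
                if dep not in impacted:
                    impacted.add(dep)
                    frontier.append(dep)
        return impacted

g = DependencyGraph()
g.add_dependency("floorplan", "core.vhdl")      # floorplan derived from VHDL
g.add_dependency("timing_report", "floorplan")  # timing report uses floorplan
```

With such a graph, a change to `core.vhdl` is immediately traceable to both the floor plan and the timing report, supporting the manager's decision of whether and when to accept the change.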

The following section provides some background on a survey of subject-matter experts that we performed to refine the problem definition.     


We surveyed some 30 IBM subject-matter experts (SMEs) in electronic design, change management, and design data modeling.  They identified 26 problem areas for change management in electronic design.  We categorized these as follows:

  • visibility into project status
  • day-to-day control of project activities
  • organizational or structural changes
  • design method consistency
  • design data consistency

Major themes that crosscut these included:

  • visibility and status of data
  • comprehensive change management
  • method definition, tracking, and enforcement
  • design physical quality
  • common approach to problem identification and handling

We held a workshop with the SMEs to prioritize these problems, and two emerged as the most significant:  First, the need for basic management of the configuration of all the design data and resources of concern within a project or work package (libraries, designs, code, tools, test suites, etc.); second, the need for designer visibility into the status of data and configurations in a work package.

To realize these goals, two basic kinds of information are necessary:  1) An understanding of how change management may occur in SoC design processes; 2) An understanding of the kinds of information and relationships needed to manage change in SoC design.  We addressed the former by specifying change-management use cases; we addressed the latter by specifying a change-management schema.


This section describes typical use cases in the SoC design process.  Change is a pervasive concern in these use cases—they cause changes, respond to changes, or depend on data and other resources that are subject to change.  Thus, change management is integral to the effective execution of each of these use cases. We identified nine representative use cases in the SoC design process, which are shown in Figure 1.

Figure 1.  Use cases in SoC design

In general there are four ways of initiating a project: New Project, Derive, Merge and Retarget.  New Project is the case in which a new project is created from the beginning.  The Derive case is initiated when a new business opportunity arises to base a new project on an existing design. The Merge case is initiated when an actor wants to merge configuration items during implementation of a new change management scheme or while working with teams/organizations outside of the current scheme. The Retarget case is initiated when a project is restructured due to resource or other constraints.  In all of these use cases it is important to institute proper change controls from the outset.  New Project starts with a clean slate; the other scenarios require changes from (or to) existing projects.    

Once the project is initiated, the next phase is to update the design. There are two use cases in the Update Design composite state.  New Design Elements addresses the original creation of new design elements.  These become new entries in the change-management system.  The Implement Change use case entails the modification of an existing design element (such as fixing a bug).  It is triggered in response to a change request and is supported and governed by change-management data and protocols.

The next phase, Resolve Project, consists of three use cases.  Backout is the use case by which changes made in the previous phase can be reversed.  Release is the use case by which a project is released for cross-functional use.  The Archive use case protects design assets by keeping a secure copy of the design and its environment.


The main goal of the change-management schema is to enable the capture of all information that might contribute to change management.

4.1     Overview

The schema, which is defined in the Unified Modeling Language (UML) [5], consists of several high-level packages (Figure 2).


Figure 2.  Packages in the change-management schema

Package Data represents types for design data and metadata.  Package Objects and Data defines types for objects and data.  Objects are containers for information, data represent the information.  The main types of object include artifacts (such as files), features, and attributes.  The types of objects and data defined are important for change management because they represent the principle work products of electronic design: IP, VHDL and RTL specifications, floor plans, formal verification rules, timing rules, and so on.  It is changes to these things for which management is most needed.

The package Types defines types to represent the types of objects and data.  This enables some types in the schema (such as those for attributes, collections, and relationships) to be defined parametrically in terms of other types, which promotes generality, consistency, and reusability of schema elements.

Package Attributes defines specific types of attribute.  The basic attribute is just a name-value pair that is associated to an object.  (More strongly-typed subtypes of attribute have fixed names, value types, attributed-object types, or combinations of these.)  Attributes are one of the main types of design data, and they are important for change management because they can represent the status or state of design elements (such as version number, verification level, or timing characteristics).
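The basic attribute concept — a name-value pair attached to an object, carrying change-relevant status — might be sketched as follows. This is a minimal illustration under assumed names, not the schema's actual definition.

```python
# Hypothetical sketch of the Attributes package: a name-value pair
# associated with a design object, carrying status used by change management.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    value: object

@dataclass
class DesignObject:
    name: str
    attributes: dict = field(default_factory=dict)

    def set_attribute(self, name, value):
        self.attributes[name] = Attribute(name, value)

core = DesignObject("cruise_ctrl_core")            # hypothetical design element
core.set_attribute("version", "1.2")
core.set_attribute("verification_level", "unit-tested")
```

Strongly-typed subtypes, as the schema describes, would fix the attribute's name or constrain its value type rather than accepting arbitrary pairs.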

Package Collections defines types of collections, including collections with varying degrees of structure, typing, and constraints.  Collections are important for change management in that changes must often be coordinated for collections of design elements as a group (e.g., for a work package, verification suite, or IP release).  Collections are also used in defining other elements in the schema (for example, baselines and change sets).

The package Relationships defines types of relationships.  The basic relationship type is an ordered collection of a fixed number of elements.  Subtypes provide directionality, element typing, and additional semantics.  Relationships are important for change management because they can define various types of dependencies among design data and resources.  Examples include the use of macros in cores, the dependence of timing reports on floor plans and timing contracts, and the dependence of test results on tested designs, test cases, and test tools.  Explicit dependency relationships support the analysis of change impact and the efficient and precise propagation of changes.

The package Specifications defines types of data specification and definition.  Specifications specify an informational entity; definitions denote a meaning and are used in specifications.

Package Resources represents things (other than design data) that are used in design processes, for example, design tools, IP, design methods, and design engineers.  Resources are important for change management in that resources are used in the actions that cause changes and in the actions that respond to changes.  Indeed, minimizing the resources needed to handle changes is one of the goals of change management.

Resources are also important in that changes to a resource may require changes to design elements that were created using that resource (for example, when changes to a simulator may require reproduction of simulation results).

Package Events defines types and instances of events.  Events are important in change management because changes are a kind of event, and signals of change events can trigger processes to handle the change.

The package Actions provides a representation for things that are done, that is, for the behaviors or executions of tools, scripts, tasks, method steps, etc.  Actions are important for change in that actions cause change.  Actions can also be triggered in response to changes and can handle changes (such as by propagating changes to dependent artifacts).

Subpackage Action Definitions defines the type Action Execution, which contains information about a particular execution of a particular action.  It refers to the definition of the action and to the specific artifacts and attributes read and written, resources used, and events generated and handled.  Thus an action execution indicates particular artifacts and attributes that are changed, and it links those to the particular process or activity by which they were changed, the particular artifacts and attributes on which the changes were based, and the particular resources by which the changes were effected.  Through this, particular dependency relationships can be established between the objects, data, and resources.  This is the specific information needed to analyze and propagate concrete changes to artifacts, processes, resources.
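The way an action execution links its outputs to its inputs, and thereby yields dependency relationships, can be illustrated with a small sketch. The names below (ActionExecution, Compile1, the artifact labels) are assumptions for illustration, not the schema's API.

```python
# Hypothetical sketch of an Action Execution record: it captures what an
# action read, wrote, and used, and derives dependency links from that.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ActionExecution:
    action: str           # reference to the action definition
    reads: List[str]      # artifacts consumed by this execution
    writes: List[str]     # artifacts produced by this execution
    resources: List[str]  # tools and other resources used

    def derived_dependencies(self) -> List[Tuple[str, str]]:
        # each written artifact depends on each artifact that was read
        return [(w, r) for w in self.writes for r in self.reads]

compile_run = ActionExecution(
    action="Compile1",
    reads=["core.vhdl"],
    writes=["floorplannable_objects"],
    resources=["vhdl_compiler_v9"],
)
```

Recording the resource as well means that a change to the compiler itself can also be traced to the artifacts it produced.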

Package Baselines defines types for defining mutually consistent set of design artifacts. Baselines are important for change management in several respects.  The elements in a baseline must be protected from arbitrary changes that might disrupt their mutual consistency, and the elements in a baseline must be changed in mutually consistent ways in order to evolve a baseline from one version to another.

The final package in Figure 2 is the Change package.  It defines types for representing change explicitly.  These include managed objects, which are objects with an associated change log; change logs and change sets, which are types of collection that contain change records; and change records, which record specific changes to specific objects.  They can include a reference to an action execution that caused the change.

The subpackage Change Requests includes types for modeling change requests and responses.  A change request has a type, description, state, priority, and owner.  It can have an associated action definition, which may be the definition of the action to be taken in processing the change request.  A change request also has a change-request history log.
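A change request with type, state, priority, owner, and a history log might look like the following sketch. This is an assumed shape for illustration, not the subpackage's actual definition.

```python
# Hypothetical sketch of a change request whose state transitions are
# appended to a history log, as the Change Requests subpackage describes.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    kind: str          # e.g. defect, enhancement
    description: str
    priority: str
    owner: str
    state: str = "submitted"
    history: list = field(default_factory=list)

    def transition(self, new_state, note=""):
        # record (old state, new state, note) before moving on
        self.history.append((self.state, new_state, note))
        self.state = new_state

cr = ChangeRequest("defect", "timing violation in ECU core", "high", "engineer_a")
cr.transition("in_progress", "baseline loaded into workspace")
cr.transition("resolved", "fix verified by regression suite")
```

The associated action definition mentioned in the text would hang off this record as a reference to the action to be taken in processing the request.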

4.2    Example

An example of the schema is shown in Figure 3.  The clear boxes (upper part of diagram) show general types from the schema and the shaded boxes (lower part of the diagram) show types (and a few instances) defined for a specific high-level design process project at IBM.


Figure 3.  Example of change-management data

The figure shows a dependency relationship between two types of design artifact, VHDLArtifact and FloorPlannableObjects.  The relationship is defined in terms of a compiler that derives instances of FloorPlannableObjects from instances of VHDLArtifact.  Execution of the compiler constitutes an action that defines the relationship.  The specific schema elements are defined based on the general schema using a variety of object-oriented modeling techniques, including subtyping (e.g., VHDLArtifact), instantiation (e.g., Compile1), and parameterization (e.g., VHDLFloorplannableObjectsDependency).


Here we present an example use case, Implement Change, with details on its activities and how the activities use the schema presented in Section 4.  This use case is illustrated in Figure 4.


Figure 4.  State diagram for use case Implement Change

The Implement Change use case addresses the modification of an existing design element (such as fixing a bug).  It is triggered by a change request.  The first steps of this use case are to identify and evaluate the change request to be handled.  Then the relevant baseline is located, loaded into the engineer’s workspace, and verified.  At this point the change can be implemented.  This begins with the identification of the artifacts that are immediately affected.  Then dependent artifacts are identified and changes propagated according to dependency relationships.  (This may entail several iterations.)  Once a stable state is achieved, the modified artifacts are tested and regression tested.  Depending on test results, more changes may be required.  Once the change is considered acceptable, any learning and metrics from the process are captured and the new artifacts and relationships are promoted to the public configuration space.
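The flow above can be sketched as a simple state machine. The state names paraphrase the prose rather than reproducing Figure 4 exactly, and the failure loops stand in for the "more changes may be required" iterations:

```python
# Ordered states paraphrasing the Implement Change flow.
STATES = [
    "identify_request", "evaluate_request", "locate_baseline",
    "load_workspace", "verify_baseline", "modify_artifacts",
    "propagate_changes", "test", "capture_learning", "promote",
]

# Default forward transitions: each state advances to the next on success.
TRANSITIONS = {s: STATES[i + 1] for i, s in enumerate(STATES[:-1])}

# Feedback loops: failed tests send the engineer back to modification,
# and change propagation may iterate over dependent artifacts.
FAILURE_LOOPS = {"test": "modify_artifacts", "propagate_changes": "modify_artifacts"}

def next_state(state: str, ok: bool = True) -> str:
    """Advance on success; loop back on failure where a loop exists."""
    if not ok and state in FAILURE_LOOPS:
        return FAILURE_LOOPS[state]
    return TRANSITIONS.get(state, "done")
```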


This paper explores the role of comprehensive change management in SoC design, development, and delivery.  Based on the comments of over thirty experienced electronic design engineers from across IBM, we have captured the essential problems and motivations for change management in SoC projects. We have described design scenarios, highlighting places where change management applies, and presented a preliminary schema to show the range of data and relationships change management may incorporate.  Change management can benefit both design managers and engineers.  It is increasingly essential for improving productivity and reducing time and cost in SoC projects.


Contributions to this work were also made by Nadav Golbandi and Yoav Rubin of IBM’s Haifa Research Lab.  Much information and guidance were provided by Jeff Staten and Bernd-Josef Huettl of IBM’s Systems and Technology Group. We especially thank Richard Bell, John Coiner, Mark Firstenberg, Andrew Mirsky, Gary Nusbaum, and Harry Reindel of IBM’s Systems and Technology Group for sharing design data and experiences.  We are also grateful to the many other people across IBM who contributed their time and expertise.



Source: https://www.design-reuse.com/articles/15745/comprehensive-change-management-for-soc-design.html (18 Jul 2022)
ALM techniques can help keep your apps in play

For developers and enterprise teams, application life-cycle management in today’s development climate is an exercise in organized chaos.

As movements such as agile, DevOps and Continuous Delivery have created more hybrid roles within a faster, more fluid application delivery cycle, there are new definitions of what each letter in the ALM acronym means. Applications have grown into complex entities with far more moving parts—from modular components to microservices—delivered to a wider range of platforms in a mobile and cloud-based world. The life cycle itself has grown more automated, demanding a higher degree of visibility and control in the tool suites used to manage it all.

Kurt Bittner, principal analyst at Forrester for application development and delivery, said the agile, DevOps and Continuous Delivery movements have morphed ALM into a way to manage a greatly accelerated delivery cycle.

“Most of the momentum we’ve seen in the industry has been around faster delivery cycles and less about application life-cycle management in the sense of managing traceability and requirements end-to-end,” said Bittner. “Those things are important and they haven’t gone away, but people want to do it really fast. When work was done manually, ALM ended up being the core of what everyone did. But as much of the work has become automated—builds, workflows, testing—ALM has become in essence a workflow-management tool. It’s this bookend concept that exists on the front end and then at the end of the delivery pipeline.”

Don McElwee, assistant vice president of professional services for Orasi Software, explained how the faster, more agile delivery process correlates directly to an organization’s bottom line.

“The application life cycle has become a more fluid, cost-effective process where time to market for enhancements and new products is decreased to meet market movements as well as customer expectations,” said McElwee. “It is a natural evolution of previous life cycles where the integration of development and quality assurance align to a common goal. By reducing the amount of functionality to be deployed to a production environment, testing and identifying issues earlier in the application life cycle, the overall cost of building and maintaining applications is decreased while increasing team unity and productivity.”

In addition to the business changes taking place in ALM, the advent of agile, DevOps and Continuous Delivery has also driven a cultural change, according to Kartik Raghavan, executive vice president of worldwide engineering at CollabNet. He said ALM is undergoing a fundamental enterprise shift from a life-cycle functionality focus toward a delivery process colored more by the consumer-focused value of an application.

“All these movements, whether it’s agile or DevOps or Continuous Delivery, try to take the focus away from the individual pieces of delivery to more of the ownership at an application level,” said Raghavan. “It’s pushing ALM toward more of a pragmatic value of the application as a whole. That is the big cultural change.”

ALM for a new slate of platforms
Bittner said ALM tooling has also segmented into different markets for different development platforms. He said development tool chains are different for everything from mobile and cloud to Web applications and embedded software, as developers deploy applications to everything from a mobile app store to a cloud platform such as Amazon’s AWS, Microsoft’s Azure or OpenStack.

“[Tool chains] often fragment along the technology platform lines,” said Bittner. “People developing for the cloud’s main goal is to get things to market quickly, so they tend to have a much more diverse ecosystem of tools, while mobile is so unique because the technology stack is changing all the time and evolving rapidly.”

Hadi Hariri, developer advocacy lead at JetBrains, said the growth of cloud-based applications and services in particular has shifted customer expectations when it comes to ALM.

“Before, having on-site ALM solutions was considered the de facto option,” he said. “Nowadays, more and more customers don’t want to have to deal with hosting, maintenance [or] upgrades of their tools. They want to focus on their own product and delegate these aspects to service and tool providers.”

CollabNet’s Raghavan said this shift toward a wider array of platforms has changed how developers and ALM tool providers think about software. On the surface, he said he sees cloud, mobile, Web and embedded as different channels for delivering applications.

He said that when developing and managing an application, there is more focus on changing the way a customer expects to consume it.

“Each of these channels represents another flavor of how they enable customers to consume applications,” said Raghavan. “With the cloud, that means the ability to access the application anywhere. Customers expect to log into an application and quickly understand what it does. Mobile requires you to build an application that leverages the value of the device. You need an ALM suite that recognizes the different tools needed to deliver every application to the cloud, prepare that application for mobile consumption, and even gives you the freedom to think about putting the app on something like a Nest thermostat.”

What’s in an application?
Applications are becoming composites, according to Forrester’s Bittner, and he said ALM must evolve into a means of managing the delivery of these composite applications and the feedback coming from their modular parts integrated with the cloud.

“A mobile application is typically not standalone. It talks to services running in the cloud that talk to other services wrapping legacy systems to provide data,” he said. “So even a mobile application, which sounds like a relatively whole entity, is actually a network of things.”

Matt Brayley-Berger, worldwide product marketing manager of application life cycle and quality for HP, expanded on this concept of application modularity. With a composite application containing sometimes hundreds of interwoven components and services, he said the complexity of building releases has gone up dramatically.

“Organizations are making a positive tradeoff around risk,” he said. “Using all of these smaller pieces, the risk of a single aspect of functionality not working has gone down, but now you’re starting to bring in the risk of the entire system not working. In some ways it’s the ultimate SOA dream realized, but the other side means far more complexity to manage, which is where all these new ALM tools and technologies come in.”

Within that application complexity is also the rise of containers and microservices, which Bittner called the next big growth area in the software development life cycle. He said containers and microservices are turning applications from large pieces of software into a network of orchestrated services with far more moving parts to keep track of.

“Containers and microservices are really applicable to everything,” said Bittner. “They’ll lead to greater modularity for different parts of an application, to provide organizations the ability to develop different parts of an application independently with the option to replace parts at runtime, or [to] evolve at different speeds. This creates a lot of flexibility around developing and deploying an application, which leads to the notion of an application itself changing.”

JetBrains’ Hariri said microservices are, at their core, just a new way to think about existing SOA architecture, combined with containers to create a new deployment model within applications.

“Microservices, while being sometimes touted as the new thing, are actually very similar, if not the same, as a long-time existing architecture: SOA, except nowadays it would be hard to put the SOA label on something and not be frowned upon,” he said.

“Microservices have probably contributed to making us aware that services should be small and autonomous, so in that sense, maybe the word has provided value. Combining them with containers, which contribute to an autonomous deployment model, it definitely does provide rise to new potential scenarios that can provide value, as well as introduce new challenges to overcome in increasing the complexity of ALM if not managed appropriately.”

Within a more componentized application, Orasi’s McElwee said it’s even more critical for developers and testers throughout the ALM process to meticulously test each component.

“ALM must now be able to handle agile concepts, where smaller portions of development such as Web services change often and need to be deployed rapidly to meet customer demand,” said McElwee. “These smaller application component changes must be validated quickly for both individual functional and larger system impacts. There must be an analysis to determine where failures are likely based on history so that higher-risk areas can be validated quickly. The ability to identify tests and associated data components is critical to the success of these smaller components.”

Managing the modern automated pipeline
For enterprise organizations and development teams to keep a handle on an accelerated delivery process with more complex applications to a wider range of platforms, Bittner believes ALM must provide visibility and control across the entire tool chain.

“There’s a tremendous need for a comprehensive delivery pipeline,” he said. “You have Continuous Integration tools handling a large part of the pipeline handing off to deployment automation tools, and once things get in production you have application analytics tools to gather data. The evolution of this ecosystem demands a single dashboard that lets you know where things are in the process, from the idea phase to the point where it’s in the customer’s hands.”

To achieve that visibility and end-to-end control, some ALM solution providers are relying on APIs. TechExcel’s director of product management Jason Hammon said that when it comes to third-party and open-source automation tools for tasks such as bug tracking, test automation or SCM, those services should be tied together through APIs without losing sight of the core goals of ALM.

“At the end of the day, someone is still planning the requirements,” he said. “They’re not automating that process. Someone is still planning the testing and implementing the development. The core pieces of ALM are still there, but we need the ability to extend beyond those manual tasks and pull in automation in each stage.

“That’s the whole point of the APIs and integrations: Teams are using different tools. As the manager I can log in and see how many bugs have been found, even if one team is logging bugs in Bugzilla, another team is logging them in DevTrack, and another team is logging them in JIRA. We can’t say, ‘Here’s this monolithic solution and everyone should use just this.’ People don’t work that way anymore.”
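A minimal sketch of the kind of cross-tool aggregation Hammon describes: each adapter normalizes a tracker-specific payload into a common count so a manager's dashboard can report one number. The payload shapes here are illustrative assumptions, not the real Bugzilla, JIRA, or DevTrack response formats:

```python
# Adapters normalize heterogeneous tracker payloads into a single count.
def bugzilla_count(payload: dict) -> int:
    return len(payload.get("bugs", []))

def jira_count(payload: dict) -> int:
    return payload.get("total", 0)

def devtrack_count(payload: dict) -> int:
    return len(payload.get("issues", []))

ADAPTERS = {"bugzilla": bugzilla_count, "jira": jira_count, "devtrack": devtrack_count}

def total_open_bugs(responses: dict) -> int:
    """Sum open-bug counts over every tracker a team happens to use."""
    return sum(ADAPTERS[name](payload) for name, payload in responses.items())
```

In practice each payload would come from that tracker's REST API; the aggregation logic stays the same regardless of which tools each team chose.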

Keeping track of all these automated processes and services running within a delivery pipeline requires constant information. Modern ALM suites are built on communication between teams and managers, as well as streams of real-time notifications through dashboards.

“Anywhere in the process where you have automation, metrics are critical,” said HP’s Brayley-Berger. “Being able to leverage metrics created through automation has become a valuable way to course-correct. We’re moving more toward an opportunity for organizations to use these pieces of data to predict future performance. It almost sounds like a time-travel analogy, but the only way for organizations to go even faster than they already are is to think ahead: What should teams automate? Where are the projects likely to face challenges?”

An end-to-end ALM solution plugged into all this data can also overwhelm teams working within it with excess information, said Paula Rome, senior product manager at Seapine Software.

“We want to make sure developers are getting exactly what they need for their day-to-day job,” said Rome. “Their data feed needs to be filled with notifications that are actually useful. The ALM tool should in no way be preventing them from going to a higher-level view, but we want to be wary of counterproductive interruptions.”

Where ALM goes from here
Rome said it was not so long ago that ALM’s biggest problem was that nobody knew of it. Now, in an environment where more and more applications exist purely in the cloud rather than in traditional on-premise servers, she said ALM provides a feeling of stability.

“Organizations are still storing data somewhere, there are still multiple components, multiple roles and team members that need to be up to date with information so you’re not losing the business vision,” said Rome. “But with DevOps and the pressure of Continuous Delivery, when the guy who wrote the code is the one fixing the bug in production, an ALM tool gives you a sort of DevOps safety net. You need information readily available to you. You can get a sense of the source code and you can start following this trail of clues to what’s going on to make that quick fix.”

As the concepts of what applications and life cycles are have changed, TechExcel’s Hammon said ALM is still about managing the same process.

“You still need to be able to see your project, see its progress and make sure there’s traceability from those requirements through the testing to make sure you’re on track, and that you’ve delivered both what you and the customer expected you to,” said Hammon. “Even if you’re continuously delivering, it’s a way to track what you need to do and what you’ve done. That never changes, and it may never change.”

What developers need in a tool suite for the modern application life cycle

Hadi Hariri
“A successful tool is one that provides value by removing grunt work and errors via automation. Its job is to allow developers to focus on the important tasks, not fight the tool.”

Don McElwee
“Developers should look for a suite of tools that can provide a holistic solution to maximize collaboration with different technologies and other teams such as Quality Assurance, Data Management and Operations. By integrating technologies that offer support to different departments, developers can maximize the talents of those individuals and prove that their code can work and be comfortable with potential real-world situations. No longer will they wonder how it will work, but can tell exactly what it does and why it will work.”

Jason Hammon
“The focus should really be traceability. You can manage requirements, implementation and testing, but developers need to look for something that’s flexible with an understanding that if they should want to change their process later, that they have flexibility to modify their process without being locked into one methodology. You also need flexibility in the tools themselves, and tools that can scale up with the customers and data you have. You need tools that will grow with you.”

Paula Rome
“Developers should do a quick bullet list. What aren’t they happy about in their current process? What are they really trying to fix with this tool? Are things falling through the cracks? Are you having trouble getting the information you need to answer questions right now, not next week? Do you find yourself repeating manual processes over and over? Play product manager for a moment and ask yourself what those high-level goals are; what ALM problems you’re really trying to solve.”

Kartik Raghavan
“[Developers] need to differentiate practitioner tools that help you do a job at a granular level from the tools that provide you a level of control, governance or visibility into an application. Especially for an enterprise, you have to first optimize tool delivery. Whatever gets you the best output of high-quality software quickly. There are rules and best practices behind that, though. How do you manage your core code? What model have you enabled for it? Do you want a centralized model or a distributed model, and when you roll those things out, you need to set controls. You need to get that right, but with the larger focus of getting rapid delivery automation in place for your Continuous Delivery life cycle.”

Matt Brayley-Berger
“Any tool set needs to be usable. That sounds simple, but oftentimes it’s frustrating when it’s so far from the current process. The tool itself may also have to annotate the existing processes rather than forcing change to connect that data. You need a tool that’s usable for the developer, but with the flexibility to connect to other disciplines and do some of the necessary tracking on the ground level that’s critical in organizations to report things back. Teams shouldn’t have to sacrifice reporting and compliance for something that’s usable.”

A guide to ALM tool suites
Atlassian: Teams use Atlassian tools to work and collaborate throughout the software development life cycle: JIRA for tracking issues and planning work; Confluence for collaborating on requirements; HipChat for chat; Bitbucket for collaborating on code; Stash for code collaboration and Git repository management; and Bamboo for continuous integration and delivery.

Borland, a Micro Focus company: Borland’s Caliber, StarTeam, AccuRev and Silk product offerings make up a comprehensive ALM suite that provides precision, control and validation across the software development life cycle. Borland’s products are unique in their ability to integrate with each other—and with existing third-party tools—at an asset level.

CollabNet: CollabNet TeamForge ALM is an open ALM platform that helps automate and manage the enterprise application life cycle in a governed, secure and efficient fashion. Leading global enterprises and government agencies rely on TeamForge to extract strategic and financial value from accelerated application development, delivery and DevOps.

HP: HP ALM is an open integration hub for ALM that encompasses requirements, test and development management. With HP ALM, users can leverage existing investments; share and reuse requirements and asset libraries across multiple projects; see the big picture with cross-project reporting and preconfigured business views; gain actionable insights into who is working on what, when, where and why; and define, manage and track requirements through every step of the life cycle.

IBM: IBM’s Rational solution for Collaborative Lifecycle Management is designed to deliver effective ALM to agile, hybrid and traditional teams. It brings together change and configuration management, quality management, requirements management, tracking, and project planning in a common unified platform.

Inflectra: SpiraTeam is an integrated ALM suite that provides everything you need to manage your software projects from inception to release and beyond. With more than 5,000 customers in 100 different countries using SpiraTeam, it’s the most powerful yet easy-to-use tool on the market. It includes features for managing your requirements, testing and development activities all hosted either in our secure cloud environment or available for customers to install on-premise.

JetBrains: JetBrains offers tools for both individual developers as well as teams. TeamCity provides Continuous Integration and Deployment, while YouTrack provides agile project and bug management, which has recently been extended with Upsource, a code review and repository-browsing tool. Alongside its individual developer offerings, which consist of its IDEs for the most popular languages on the market as well as .NET tools, JetBrains covers most of the needs of software development houses, moving toward a fully integrated solution.

Kovair: Kovair provides a complete integrated ALM solution on top of a Web-based central repository. The configurability of Kovair ALM allows users to collaborate with the level of functionality and information they need, using features like a task-based automated workflow engine with visual designer, dashboards, analytics, end-to-end traceability, easy collaboration between all stakeholders, and support for both agile and waterfall methodologies.

Microsoft: Visual Studio Online (VSO), Microsoft’s cloud-hosted ALM service, offers Git repositories; agile planning; build automation for Windows, Linux and Mac; cloud load testing; DevOps features like Continuous Deployment to Windows, Linux and Microsoft Azure; application analytics; and integration with third-party ALM tools. VSO is based on Team Foundation Server, and it integrates with Visual Studio and other popular code editors. VSO is free to the first five users on a team or with MSDN.

Orasi: Orasi is a leading provider of software, support, training, and consulting services using market-leading test-management, test automation, performance intelligence, test data-management and coverage, Continuous Delivery/Integration, and mobile testing technologies. Orasi helps customers reduce the cost and risk of software failures by focusing on a complete software quality life cycle.

Polarion: Polarion ALM is a unifying collaboration and management platform for software and multi-system development projects. Providing end-to-end traceability and transparency from requirements to design to production, Polarion’s flexible architecture and licensing model enables companies to deploy just what they need, where they need it, on-premise or in the cloud.

Rommana: Rommana ALM is a fully integrated set of tools and methodologies that provides full traceability among requirements, scenarios, test cases, issue reports, use cases, timelines, change requests, estimates and resources; one common repository for all project artifacts and documentation; full collaboration between all team members around the globe 24×7; and extensive reporting capabilities.

Seapine: Seapine Software’s integrated ALM suite enables product development and IT organizations to ensure the consistent release of high-quality products, while providing traceability, reporting and compliance. Featuring TestTrack for requirements, issue, and test management; Surround SCM for configuration management; and QA Wizard Pro for automated functional testing and load testing, Seapine’s tools provide a single source of truth for project development artifacts, statuses and quality to reduce risks inherent in complex product development.

Serena Software: Serena provides secure, collaborative and process-based ALM solutions. Dimensions RM improves the definition, management and reuse of requirements, increasing visibility and collaboration across stakeholders; Dimensions CM simplifies collaborative parallel development, improving team velocity and assuring release readiness; and Deployment Automation enables deployment pipeline automation, reducing cycle time and supporting rapid delivery.

Sparx Systems: Sparx Systems’ flagship product, Enterprise Architect provides full life-cycle modeling for real-time and embedded development, software and systems engineering, and business and IT systems. Based on UML and related specifications, Enterprise Architect is a comprehensive team-based modeling environment that helps organizations analyze, design and construct reliable, well-understood systems.

TechExcel: TechExcel DevSuite is specifically designed to manage both agile and traditional projects, as well as streamline requirements, development and QA processes. The fully definable user interface allows complete workflow and UI customization based on project complexity and the needs of cross-functional teams. DevSuite also features built-in multi-site support for distributed teams, two-way integration with MS Word, and third-party integrations using RESTful APIs. DevSuite’s dynamic, real-time reporting and analytics also enable faster issue detection and resolution.

Source: https://sdtimes.com/agile/alm-techniques-can-help-keep-your-apps-in-play/ (20 Dec 2017)
Security Risks Widen With Commercial Chiplets

The commercialization of chiplets is expected to increase the number and breadth of attack surfaces in electronic systems, making it harder to keep track of all the hardened IP jammed into a package and to verify its authenticity and robustness against hackers.

Until now this has been largely a non-issue, because the only companies using chiplets today — AMD, Intel, and Marvell — internally source those chiplets. But as the market for third-party chiplets grows and device scaling becomes too expensive for most applications, advanced packaging using pre-verified and tested parts is a proven viable option. In fact, industry insiders predict that complex designs may include 100 or more chiplets, many of those sourced from different vendors. That could include various types of processors and accelerators, memories, I/Os, as well as chiplets developed for controlling and monitoring different functions such as secure boot.

The chiplet concept is being viewed increasingly as a successor to the SoC. In effect, it relies on a platform with well-defined interconnects to quickly integrate components that had to be shrunk to whatever process node the SoC was being created at. In most cases, that was the digital logic, and analog functions were largely digitized. But as the benefits of Moore’s Law diminish, and as different market slices demand more optimized solutions, the ability to pack in features developed at various process nodes, and choose alternatives from a menu, has put a spotlight on chiplets. They can be developed quickly and relatively cheaply by third-parties, characterized for standardized interconnect schemes, and at least in theory keep costs under control.

This is easier said than done, however. Commercially available chiplets will almost certainly increase the complexity of these designs, at least in the initial implementations. And equally important, they will open the door to a variety of security-related issues.

“The supply chain becomes the primary target,” said Adam Laurie, global security associate partner and lead hacker for IBM’s X-Force Red offensive security services. “If hackers can get into the back end of the supply chain, they can ship chiplets that are pre-hacked. The weakest company in the supply chain becomes the weakest link in a system, and you can adjust your attack to the weakest link.”

Sometimes, those weak links aren’t obvious until they are integrated into a larger system. “There was a 4G communications module that had so much additional processing power that people were using it for processing Java,” said Laurie, in an interview with Semiconductor Engineering at the latest hardwear.io conference. “We found they could flip the USB connection to read all the data stored on the device across all the IP. That affected millions of devices, including planes and trains. This was a 4G modem plug-in, and it was sold as a secure module.”

These problems become more difficult to prevent or even identify as the supply chain extends in all directions with off-the-shelf chiplets. “How do you ensure the authenticity of every piece of microelectronics that’s moving from wafer sort up through final test, where assembly and test are performed in a different country, and then attached to a board in yet another country?” asked Scott Best, senior technical director of product management for security IP at Rambus. “And then it’s imported to put into a system in the U.S. What are the reliable ways of actually tracking those pieces to ensure that the system that you’re building has authentic components? We have a lot more recent interest from customers worried about risk to the supply chain, where someone slips in a counterfeit part. Perhaps that’s done with malicious intent, or it could just be a cheap knockoff of an authentic part with the exact same part numbers and die markings to make it look fully compatible. It looks correct from the outside, but it’s not correct at all. The suppliers’ customers are a lot more worried about that now.”
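One common way to make such stage-by-stage tracking verifiable is a keyed hash chain over the manufacturing record, so a later integrator can check that every stage was recorded by a holder of the key and that no stage was altered afterward. The sketch below is a simplified illustration; a production scheme would use per-stage asymmetric signatures and hardware-held keys rather than a single shared key:

```python
import hashlib
import hmac

KEY = b"example-shared-key"  # assumption: stand-in for real, securely provisioned key material

def record_stage(chain: list, stage: str, die_id: str) -> None:
    """Append a stage record whose tag chains over the previous stage's tag."""
    prev = chain[-1][1] if chain else b""
    tag = hmac.new(KEY, prev + stage.encode() + die_id.encode(), hashlib.sha256).digest()
    chain.append((stage, tag))

def verify_chain(chain: list, die_id: str) -> bool:
    """Recompute every tag; any tampered or reordered stage breaks the chain."""
    prev = b""
    for stage, tag in chain:
        expect = hmac.new(KEY, prev + stage.encode() + die_id.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            return False
        prev = tag
    return True
```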

Fig. 1: A six-chiplet design with 96 cores. Source: Leti

The chip industry has been working on solutions for the past decade, starting with the rollout of third-party IP. But at least some of that work was pushed back as the IP market consolidated into a handful of big companies, rendering many of those solutions an unnecessary cost. That is changing with the introduction of a commercial chiplet marketplace and the inclusion of chiplets in mission- and safety-critical applications.

“One solution for future devices involves activation of chiplets,” said Maarten Bron, managing director at Riscure. “On the gray market, you may see 20% more chips ending up being used. But if you have to activate those parts, those chips become unusable.”

A similar approach is to use encrypted tests from the manufacturer. “In automotive, you have this validation process for the software, which produces reports that tell you this is real,” said Mitch Mliner, vice president of engineering at Cycuity (formerly Tortuga Logic). “We need to do the same on the hardware side. ‘Here’s a chip. Here’s what goes with it. Here’s the testing that was done. And here’s the outcome. So you can see this is safe. And here are even more tests. You can run these encrypted tests.’ This is similar to logging in to read encrypted stuff, and it will confirm that it’s still working when you insert a chiplet into your design. This is where the industry needs to go. Without that, it’s going to be hard for people to drop chiplets into their design and say, ‘Hey, I’m comfortable with this.’ They need to have traceability.”
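Mliner's encrypted-test proposal can be illustrated with a bare-bones integrity check: the manufacturer attaches an authentication tag to each chiplet's test report, and the integrator verifies the tag before trusting the part. The sketch below is a hypothetical illustration using Python's `hmac` module with a shared secret; it is not any real manufacturer's protocol, and the report fields are invented.

```python
import hashlib
import hmac

def sign_report(report: bytes, key: bytes) -> str:
    """Manufacturer side: attach a MAC to the test report."""
    return hmac.new(key, report, hashlib.sha256).hexdigest()

def verify_report(report: bytes, tag: str, key: bytes) -> bool:
    """Integrator side: reject the chiplet if the report was altered."""
    expected = hmac.new(key, report, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"shared-manufacturer-secret"   # hypothetical provisioning secret
report = b"chiplet=XYZ123; scan-chain=pass; bist=pass"
tag = sign_report(report, key)

assert verify_report(report, tag, key)             # authentic report accepted
assert not verify_report(report + b"!", tag, key)  # tampered report rejected
```

A real scheme would use asymmetric signatures so the integrator never holds a signing secret, but the verify-before-trust flow is the same.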

It’s not just about the hardware, either. Because chips remain in the market for long periods — up to 25 years in industrial and mil/aero applications, and 10 to 20 years in automobiles — many of these chiplets will need firmware or software updates to patch known security issues and keep pace with current communications protocols.

“Chiplets are put together on a substrate or in a 3D stack that is essentially the same as a small computer network,” said Mike Borza, Synopsys scientist. “So if you can attack any part of it, you have the potential to use that as a launching pad for an attack on the rest of it. We’ve seen this time and again in all kinds of different networks. The idea is to get a toehold in the part. Software authenticity is great. You have secure boot and all those kinds of processes that are used to run cryptographic authentication to prove where the software came from, and that’s really important. But it has to have a basis in the hardware that allows you to really trust that the people who sent you that software are the real thing. It’s not good enough to say, ‘Take my software and install it.’ People have done that in the past and that’s turned into an attack. The software ultimately is what people are trying to defend, and it needs to be tied back to the hardware in a rational way that allows you to at least trust that when you start the system up you’ve got the right software and it’s authorized to be running where you are.”

One such approach is to keep track of all of these components through blockchain ledgers, which is part of the U.S. government’s “zero trust” initiative.

“More standards like UCIe for putting chiplets together will help with adoption,” said Simon Rance, vice president of marketing at ClioSoft. “But now we’re starting to get input from the mil/aero side, where they want blockchain traceability. We’ve been able to layer our HUB tool across that to provide visibility across the chiplets and the blockchain. Now we can look at the design data versus the spec and determine whether it was right or wrong, and even which version of a tool was used. That’s important for automotive and mil/aero.”

Rance noted that a lot of this effort started with the shift from on-premise design to the cloud and the rollout of the U.S. Department of Defense standards for chiplet design. “There was a big push for traceability,” he said. “If you look at the design data and compare that to spec, was it right or wrong? And then, which tool was used, and which version of the tool?”
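The blockchain-style traceability described above reduces, at its core, to an append-only ledger in which each provenance record commits to the hash of the previous one, so any retroactive edit breaks every later link. A minimal hash-chain sketch follows; the record fields are invented for illustration, and a real deployment would add digital signatures and distributed consensus on top.

```python
import hashlib
import json

def add_record(chain: list, record: dict) -> None:
    """Append a provenance record that commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_record(ledger, {"step": "wafer sort", "site": "fab A"})
add_record(ledger, {"step": "final test", "site": "OSAT B"})
assert verify_chain(ledger)

ledger[0]["record"]["site"] = "fab X"   # retroactive tampering
assert not verify_chain(ledger)
```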

Another option is to add programmability into a system using eFPGAs to change bitstreams as needed for security reasons. That makes it much harder to attack a device because the bitstreams are never the same.

“We’ve been working with the DoD on one-circuit obfuscation, where there are not a lot of gates,” said Andy Jaros, vice president of sales and marketing at Flex Logix. “With a chiplet, it will either work or not work. We also can encrypt the bitstream with a PUF. So you can have multiple different bitstreams in a design, and change them if one bitstream is compromised. With the DoD, it’s more about obfuscation and programming an eFPGA in a secure environment. But we also expect different encryption algorithms to be modified over time for security reasons.”
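The PUF-keyed bitstream encryption Jaros mentions can be sketched in miniature: derive a keystream from a device-unique PUF response, so a bitstream encrypted for one device cannot be decrypted with another device's key. This toy uses a SHA-256 counter keystream purely for illustration; real designs use vetted ciphers such as AES-GCM, and the "PUF responses" here are stand-in constants.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n keystream bytes from the key (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Stand-ins for real per-device PUF readouts.
puf_response_device_a = b"device-A-unique-response"
puf_response_device_b = b"device-B-unique-response"

bitstream = b"eFPGA configuration bits"
ct = xor_bytes(bitstream, keystream(puf_response_device_a, len(bitstream)))

# Only the device whose PUF produced the key recovers the bitstream.
assert xor_bytes(ct, keystream(puf_response_device_a, len(ct))) == bitstream
assert xor_bytes(ct, keystream(puf_response_device_b, len(ct))) != bitstream
```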

The impact of standards
The chiplet approach has seen its greatest success so far in the processor world. Standards such as the Universal Chiplet Interconnect Express (UCIe), the Open Domain-Specific Architecture (ODSA), and Compute Express Link (CXL), as well network-on-chip (NoC) approaches are expected to broaden that market for a variety of fabless companies in many different markets.

“The CXL protocol is really going to enable an ecosystem of solutions with accelerators and expanded memory options,” said Mark Papermaster, CTO of AMD. “Imagine that transported into a chiplet ecosystem. That’s the intent of the UCIe consortium. We believe we can leverage the same elements of the CXL protocol, but also align on the kind of physical specifications you need for chiplet interconnects. Those are certainly different than what you need for socket-to-socket connections. The chiplet approach will be the motherboard of the future. These standards will emerge, and we will align. But it won’t happen overnight. And so in the interim, it will be a few big companies putting these together. But my hope is that as we develop these standards, we will lower the barrier for others.”

There are many ways to ensure chiplets are what they are expected to be. Less obvious is how security requirements will change over time, and how a growing number of chiplet-related standards will need to be adjusted as new vulnerabilities emerge.

“There will be a lot of competition, and the person using a chiplet inherits its security propositions,” said Riscure’s Bron. “We’re seeing this with IP blocks that come from different IP vendors. Is it secure? Maybe. But in an SoC with 200 IP blocks, not all of them are secure. And wherever the weak link is, that will be exploited — most likely through a side-channel attack using fault injection.”

On top of that, there is a value proposition for security, and this is particularly evident with IoT devices. “In the IoT world, security has two different aspects,” said Thomas Rosteck, Connected Secure Systems Division president at Infineon. “One is whether you care if a device is hacked or not. Is it going to cost you? Yes. The second one is, does society care? About four or five years ago there was a botnet attack, which was the first time they didn’t use a PC. They used IP cameras and AV receivers. That means these devices also have a lot of computation power, and many of them are built on Android, so they have to be protected, as well. And that’s the critical thing. Without security, IoT is not going to work, because it’s just a matter of time until you have a big problem.”

The challenge with chiplets is that the approach adds more pieces to the puzzle, which makes minimizing the possible attack surface that much more difficult.

Weeding out problems
One clear objective is to get a tighter rein on counterfeiting, which is hardly a new problem in the chip industry. But as chips are used for more critical functions, concerns about counterfeiting are growing.

Industry insiders say there are thousands of chips available today on the gray market that purport to be the same chips causing the ongoing shortages, but they are either counterfeit or remarketed chips from dead or discarded products. In some cases, the counterfeiters have etched legitimate part numbers into the chips or included an authentication code that matches the “golden” code provided by the manufacturer.

“There are some schemes that are highly sophisticated, and it’s not until you go through the authenticity testing that you discover an anomaly that you didn’t see on the surface,” said Art Figueroa, vice president of global operations at distributor Smith & Associates. “But the biggest issues occur on those parts that have no markings, like passive components or capacitors. That’s where you have to have the other elements in your process, whether it’s decapsulation or electrical testing of some sort to authenticate the component.”

Decapsulation is done selectively, using nitric acid or some solvent to remove the outside cover in order to examine the hidden markings and compare them against golden samples. “The golden samples are sourced either direct from the manufacturer, or through an authorized distributor for that manufacturer, where you know the traceability is direct,” Figueroa said. “Having a golden sample database is of utmost value to being able to authenticate a component, especially if you’re sourcing in the open market where you may not have direct manufacturer support. When components are in demand, we grab a few, run them through our process, capturing dimensions, performing tests including X-ray, and formulating a complete test report, which we file away for future use. That information is critical.”
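Figueroa's golden-sample database amounts to comparing the measured attributes of an incoming part against a trusted reference record and flagging any field that falls outside tolerance. A hypothetical sketch, with invented field names, values, and tolerances:

```python
# Golden record for a (fictional) part, captured from a trusted sample.
GOLDEN = {"length_mm": 12.00, "width_mm": 12.00, "marking": "XYZ123-REV2"}
TOLERANCE_MM = 0.05

def authenticate(measured: dict) -> list:
    """Return the list of fields that deviate from the golden record."""
    flags = []
    for field, golden_value in GOLDEN.items():
        value = measured.get(field)
        if isinstance(golden_value, float):
            # Dimensional fields are compared within a tolerance band.
            if value is None or abs(value - golden_value) > TOLERANCE_MM:
                flags.append(field)
        elif value != golden_value:
            # Markings must match exactly.
            flags.append(field)
    return flags

assert authenticate({"length_mm": 12.01, "width_mm": 11.98,
                     "marking": "XYZ123-REV2"}) == []
assert authenticate({"length_mm": 12.30, "width_mm": 12.00,
                     "marking": "XYZ123-REV2"}) == ["length_mm"]
```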

Also critical is the sharing of information when something goes wrong. “If something happens in the future, especially for automotive where traceability is hugely important, you can show what was tested and whether a chiplet was in compliance,” said Cycuity’s Mliner. “That allows you to look for your problem elsewhere. Or maybe you found a flaw no one knew about and which was never tested for, and you’re upfront that no one was trying to hide anything. That’s going to be the trend going forward.”

Chiplets are coming, and a commercial marketplace will be part of that effort. But managing all of these different elements securely will be a continuous process that will require diligence for years to come.

“In a perfect world, we would make a catalog of chiplets, test all of them, and give each a rating for security,” said Riscure CEO Marc Witteman. “And then, once you start building your chip, you compile these chiplets. You take the best one, and you’re good to go. That’s an ideal world. We’re very far from that for a couple of reasons. One is that there’s so much development and redevelopment that a chiplet may be obsolete after a couple of years. It would need to be redesigned and updated, and new vulnerabilities will be introduced. But in addition to that, the security landscape is continuously evolving because new attacks are being discovered. At every conference we hear about 10 new attacks that weren’t known a year before. What is secure today can be very insecure tomorrow. So security is not a state. It’s a process. You need to address it every day, or someday you’re going to have a problem.”

Further Reading:
Securing Heterogeneous Integration At The Chiplet, Interposer, And System-In-Package Levels (FICS-University Of Florida)
New research paper titled “ToSHI – Towards Secure Heterogeneous Integration: Security Risks, Threat Assessment, and Assurance” was published by researchers at the Florida Institute for Cybersecurity (FICS) Research.
Chip Substitutions Raising Security Concerns
Lots of unknowns will persist for decades across multiple market segments.
Building Security Into ICs From The Ground Up
No-click and blockchain attacks point to increasing hacker sophistication, requiring much earlier focus on potential security risks and solutions.
Hiding Security Keys Using ReRAM PUFs
How two different technologies are being combined to create a unique and inexpensive security solution.
Verifying Side-Channel Security Pre-Silicon
Complexity and new applications are pushing security much further to the left in the design flow.
Technical papers on Security

Source: https://semiengineering.com/security-risks-widen-with-commercial-chiplets/ (July 11, 2022)
Lenovo IdeaCentre A300 and Multimedia Keyboard review
Lenovo seems to have developed a clear two-pronged strategy: for business, it leans on the knowhow and tradition it purchased from IBM with the demure Think line, and for the consumer end, it's developed its own, oftentimes flamboyant, Idea range of computers. Prime example of the latter is the IdeaCentre A300, which features an edge-to-edge glass screen, chrome accenting aplenty, and an unhealthily thin profile. As such, it's one of the more unashamed grabs for the hearts and minds of desktop aesthetes, so we had to bring it in for a test drive and see what we could see. Lenovo also sent us one of its diminutive Multimedia Keyboard remotes to have a play around with. Follow the break for our review of both.

Lenovo Multimedia Keyboard

Pros:
  • Keyboard is intelligently laid out
  • Trackball and mouse keys work well
  • Cute and compact

Cons:
  • Doesn't replace a dedicated multimedia remote
  • A backlight would've made it more useful
  • Can be fiddly at times

Lenovo IdeaCentre A300

Pros:
  • 1080p screen and TV tuner
  • Attractive, slimline exterior
  • Competent performance in most tasks

Cons:
  • Bloatware-saddled boot time
  • Not the best build quality in the world
  • Issues with keyboard lag and sound output
Hardware and Construction
First impressions of this Lenovo all-in-one were overwhelmingly positive. Its slick and shiny exterior merited a second look even from jaded souls like us, while our unscientific polling of nearby laypersons ended with the conclusion that the A300 is "gorgeous." The asymmetric stand adds a smidgen of sophistication, and we can happily report that it handles the screen's weight with aplomb, keeping it upright in an extremely stable and reliable fashion. Considering how far off-center the chrome-covered base is, Lenovo's done a fine job to keep functionality in tact while diversifying form. Limited, but we would say sufficient, tilt and swivel are on offer as well.

Going around the A300's body, you'll find a litany of ports around the back and left side, including HDMI inputs and outputs, a quartet of USB jacks, Firewire, a handy multicard reader, and a TV signal input with its own adapter coming in the box. We weren't too thrilled about the positioning of the power jack, as we came close to unplugging the juice on multiple occasions while trying to use nearby ports. This is also down to the fact that the power adapter here is of the sort used in laptops and is easier to disconnect than your typical desktop fare -- which is dandy for battery-powered portable computers, but could prove disastrous if you're working on something important and start fiddling around the back of the machine absent-mindedly. Isolating that connector from the others could've helped remedy this situation, but it's not exactly a deal breaker as it is.

Sound is output through a pair of downward-firing speakers in the IdeaCentre's base, which are covered by some gruesome orange grills. Good thing you won't have to see them, we say. As to what you can expect in terms of aural delivery, you should use your nearest laptop for reference. Even at its highest setting, the A300 wasn't particularly loud, though to its credit that also meant it didn't garble or distort your music when pushed to its humble limits.

Plugging in a set of headphones produced a nasty surprise for us: a loud background hum was present, punctuated by intermittent buzzing, some of which was caused by our actions with the computer. This was clearly the result of the internal wiring causing interference, and Lenovo's failure to properly insulate the audio-out channel from such incursions is a major letdown. Even if we optimistically suppose this was a one-off problem with our review unit, it doesn't speak too highly of the quality control checks carried out with A300.

We had another unexpected and unpleasant discovery with the A300's keyboard: incredible as this sounds, simple text input on the A300, erm, lagged. That's to say we occasionally found our textual musings appearing on screen a good three to four seconds after punching them in. Similar behavior was exhibited when we Ctrl and W'd a few tabs in Firefox -- they hung around after our instruction, leading us to think it wasn't registered and doing it again, with the end result being that we closed more tabs than we intended to. Annoying. Our inclination, given that these were all keyboard inputs, is to suspect that the Bluetooth connection was causing the delays. Still, the underlying reason is less important than the fact we had an issue to fix with the most basic of operation on the A300.

It's a shame, really, since this spoils what's an otherwise thoroughly pleasing and sturdy keyboard. We tried hard (harder than Lenovo would appreciate) to find flex or creaks in it, but this is one well built slab of plastic. Button travel is somewhat shallow for a desktop part, but felt pretty much spot on for us. We enjoyed our time typing this review out on the A300, and were able to consistently reach 90 words per minute on our favorite typing benchmark. That's about a dozen words fewer than our typical rate, but comfortably high enough to mark this out as a highly competent button slate. The bundled mouse similarly acquitted itself well, with good traction in its scroll wheel and fine ambidextrous ergonomics.

We did manage to extract some creaks from the IdeaCentre's body, though. The ultrathin (19.8mm) display panel -- which we have to say looks like a massively enlarged white iPhone -- emits discomforting little noises when it's swiveled laterally, and has a tiny bit of flex around the chrome-addled Lenovo logo on the back. Are these things that'll ruin your experience and turn you off all-in-one computers forever? Certainly not. Most users won't have to fiddle with the stand or display at all, but the difference in build quality relative to something like Lenovo's own ThinkCentre A70z should be noted.

The display itself is actually an above average affair, in our opinion, with a lucid and well saturated picture. Stretching to a full 1,920 x 1,080 pixels, it offers plenty of real estate and we'd say its 21.5-inch size is just about the sweet spot for desktop use. We were fans of the contiguous glass front, and can definitely see the value on offer for students and the like who'd prefer to combine a TV set and computer into the smallest possible package. That does come with the caveat that vertical viewing angles are par for the LCD course (i.e. not very good), and the limited tilt available on the A300 could thwart your attempts at achieving converged technology nirvana. We must also mention that the screen here is of a highly reflective variety; it's no glossier than what you'd get on Apple's latest iMacs, but it'll cause you some grief if you have a light source directly opposite it during use.

Software and performance
We'll reiterate what we said in our A70z review: this is a Windows 7 (Home Premium 64-bit flavor) machine, and if you want the full dish on what the OS will and won't do for you, check out our comprehensive review. It merits mentioning that in spite of Lenovo slapping its Enhanced Experience label on the IdeaCentre A300 -- which is supposed to indicate the company optimized a few things under the hood to make it run faster -- bloatware and other ancillary programs slow the boot time down to a glacial 70 seconds. Hey, if Nic Cage can steal a car in less than a minute, then computers should be able to turn on in the same amount of time as well, we're not asking for too much here.
The processor inside our test unit was a 2.2GHz Intel Core 2 Duo T6600, which was long in the tooth this time last year, and positively ancient today. And yet, our experience with the A300 indicates that its inclusion here is more testament to the Intel chip's longevity than Lenovo skimping on component costs. The laptop CPU is powerful enough to run 1080p video flawlessly, and handles the mundanity of day to day computing with good humor and fitness. The 4GB memory allowance helps, while a half terabyte hard disk (formats down to 440GB) provides plenty of storage. If there's one thing we have to criticize on this spec sheet, it's the 5400RPM spindle speed on the storage unit: it showed its speed deficiency early and often. Oh, and speaking of spinning plates, there's no optical drive to be had -- an irrelevance for some, but a major downer for others who might have been contemplating turning this into their media playback station.

Operation of the A300 is on the whole extremely quiet, though the base -- which contains the majority of components -- does get warm to the touch. The only thing you might hear is the hard drive seeking, but if you want to kill two birds with one stone, slap an SSD in this machine and you'll nullify both the speed and noise disadvantages thrown up by Lenovo's default disk. On the whole, we might not recommend this as your Photoshop or 3D design rig, but regular things like web browsing, media playback, and basic productivity are handled smoothly and competently.

Multimedia Keyboard
Time to set our sights on this funny extra peripheral Lenovo shipped us with its AIO machine. The Multimedia Keyboard is a $59 accessory, working over a 2.4GHz wireless connection, that allows you to control your computer from up to 10 meters away with a keyboard, trackball, and a set of multimedia controls. Frankly, as clichéd as this might sound, we found it an irresistibly cute little peripheral. The trackball does its job, the keyboard is a tiny bit better than your typical QWERTY pad in modern smartphones, and the media buttons are laid out in a decently sensible order. On the surface then, it's just a barely above average keypad, and yet we didn't seem to stop enjoying ourselves while using it. Maybe it's because of the novelty or perhaps it's the fact it looks like a ping pong bat; whatever the appeal, the Multimedia Keyboard appears to be a classic case of a gadget that's more than the sum of its parts. We think the price tag is too steep to make this a particularly rational purchasing decision, but if you're asking if we'd like to receive one as a gift, we'd respond in the affirmative with little hesitation.

In conclusion then, what we've been looking at has been a set of laptop parts exploded into a jumbo iPhone-aping screen with an asymmetric base and attention-grabbing looks. The result is pretty close to what you might expect: happy, shiny, and pretty on the outside, but flawed and mildly deficient on the inside. At $949 for the model we reviewed, we can't say the A300 represents good value. Sure, you get that TV tuner, bi-directional HDMI connectivity, and a 1080p panel, but we'd argue you would be better off purchasing each of those things individually rather than trying to compound them all in this one imperfect device. Additionally, the media repository ambition indicated by all the storage and inputs is somewhat defeated by the omission of an optical drive, which becomes much more important in a media station or HTPC candidate of this kind.

As to the Multimedia Keyboard, you should be mindful that usage scenarios are limited, because it's not good enough at what it does to replace having a dedicated keyboard or multimedia remote. That proviso aside, it's just plain fun to use and would make for a great gift -- you know, because then you won't have to think through the whole question of whether it's good value for money or not.


Source: https://www.engadget.com/2010-06-20-lenovo-ideacentre-a300-and-multimedia-keyboard-review.html
How to be an AI & ML expert: A Webinar with Cloud Architect Subhendu Dey


The webinar outlined what AI and ML mean in today’s world and how students could get involved

Mr Subhendu Dey also laid out a comprehensive roadmap for those looking to start a career in AI and ML

Artificial Intelligence and Machine Learning as disciplines have taken the world by storm, particularly in the 21st century. While many youngsters have drawn inspiration from some of the best science fiction featuring AI and robots, the real world of AI and ML has been growing by leaps and bounds. But what does the world of AI and ML have to offer? How can you transition from campus to career with AI & ML? And how can you be an expert in AI & ML? To answer these and many other questions, The Telegraph Online Edugraph organised a webinar with Subhendu Dey, a Cloud Architect and advisor on Data and AI.

The webinar saw participants from class 8 right up to those in advanced degrees, as well as teachers. Hence, the subject matter of the webinar contained takeaways that would be relevant at all stages. Mr Dey also highlighted that he would be focusing on showing how things that have always existed around us contribute to AI - giving students a more intuitive idea of AI and making it more interesting.

The webinar started by taking a look at a simple action like sending a text. People would find that their mobiles would keep suggesting words to them. Be it as soon as they have typed a few letters or after they have typed a few words, they would get suggestions that are surprisingly accurate. This is called Language Modelling and requires an intuitive understanding of language. A human may be able to do it from his or her extensive knowledge of words and language, but in this case, it is a fine demonstration of the intuitiveness of AI.
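The word suggestions described above can be demonstrated with the simplest possible language model, a bigram counter: record which word follows which in a corpus, then suggest the most frequent follower. A toy sketch:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count, for each word, how often each next word follows it."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def suggest(model, word):
    """Return the most frequent word seen after `word`, or None."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "see you soon . see you tomorrow . see you soon ."
model = train_bigram(corpus)
assert suggest(model, "see") == "you"
assert suggest(model, "you") == "soon"   # "soon" follows "you" twice, "tomorrow" once
```

Real keyboards use neural models conditioned on far more context, but the core idea — predict the next word from what came before — is the same.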

Let’s look at another aspect of AI - when we key in a question into the Google search bar, a decade or so ago, Google would have analysed the keywords and thrown up a list of links that feature the keywords. But fast-forward to this decade and Natural Language search is today capable of not just reading the keywords but also finding out the intent behind the query. This means that Google will, in addition to giving you the links, also provide you the answer, as well as other questions that have the same or related intent. In fact, Google also has a system for taking feedback, which enables Google's AI to learn to be even more intuitive and better at giving suggestions.
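Intent detection of this kind can be caricatured with a keyword-overlap matcher: score each known intent by how many of its keywords appear in the query and pick the best match. Production systems use learned classifiers over far richer features; the intents and keywords below are invented for illustration.

```python
# Toy intent matcher: score each intent by keyword overlap with the query.
INTENTS = {
    "weather": {"weather", "rain", "temperature", "forecast"},
    "time":    {"time", "clock", "hour"},
    "music":   {"play", "song", "music"},
}

def detect_intent(query: str) -> str:
    words = set(query.lower().replace("?", "").split())
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & words))
    # If even the best intent shares no keywords, give up.
    return best if INTENTS[best] & words else "unknown"

assert detect_intent("Will it rain tomorrow?") == "weather"
assert detect_intent("Play some music") == "music"
assert detect_intent("Tell me a joke") == "unknown"
```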

One need only look at the digital assistant - Siri, Google Assistant or Alexa - to understand the advancements in AI. From understanding spoken queries to giving intuitive, and often very witty, answers, these assistants communicate in a surprisingly human-like manner. Of course, there is a cycle of tasks that they must perform behind the scenes, which Mr Dey spoke about in detail.

While these changes that we can observe are new, AI has been around for a long time. One of its earliest feats came in 1997, when IBM’s supercomputer Deep Blue beat world chess champion Garry Kasparov in a six-game match.

Today artificial intelligence is a booming area of development, and the Ministry of Electronics & Information Technology projects the addition of about 20 million jobs in the sector by 2025. In fact, this is also underscored by multiple studies and reports prepared by global consulting firms and industry bodies such as Deloitte, PwC and NASSCOM.

However, one question that has always challenged scientists and engineers working in AI is how to strike a balance, when designing AI agents, between behaviour and reasoning on one axis, and human-like (often irrational) versus rational behaviour on the other. It has been found, though, that more intuitive AI agents with better user-experience interfaces achieve higher penetration in human society.

Next we take a look at Machine Learning. When an AI agent learns on its own from the interactions it has, this is known as Machine Learning. When humans learn something, it registers in some form in the mind. However, machines perceive data in the form of functions and variables. With Machine Learning, AI agents create models which exist as executable software components made up of a sequence of mathematical variables and functions. Hence, becoming an expert in AI and ML usually requires a person to have a sound understanding of mathematics and statistics.
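The point that a trained model is just "a sequence of mathematical variables and functions" can be made concrete with one-variable linear regression: the machine repeatedly nudges two numbers, w and b, until y ≈ w·x + b fits the examples. A minimal gradient-descent sketch, with made-up data drawn from y = 2x + 1:

```python
# Fit y = 2x + 1 from examples by gradient descent on squared error.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0          # the "model" is just these two variables
lr = 0.02                # learning rate

for _ in range(5000):
    dw = db = 0.0
    for x, y in zip(xs, ys):
        err = (w * x + b) - y     # prediction error on one example
        dw += 2 * err * x         # gradient of squared error w.r.t. w
        db += 2 * err             # gradient of squared error w.r.t. b
    w -= lr * dw / len(xs)
    b -= lr * db / len(xs)

# Training recovers the underlying line.
assert abs(w - 2.0) < 0.01 and abs(b - 1.0) < 0.01
```

Everything in modern ML — including deep networks — is this loop at a vastly larger scale, which is why the maths and statistics background stressed in the webinar matters.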

Speaking of building a career in AI and ML, Mr Dey threw light on three avenues into the industry. These are:

  • As a scientist
  • As an engineer
  • As a contributor

Let’s take a look at each of these.

As a Scientist

As mentioned above, to communicate with AI, your query must be represented in a mathematical/logical format. Hence, when choosing your educational degrees or courses, go for ones that cover the following topics, which form the core of AI:

  • Vectors and Matrices
  • Probability
  • Relation and Function
  • Differential Calculus
  • Statistical Analysis

Choosing a major which covers these aspects should arm you with the knowledge and skills you need to become a scientist in AI.

As an Engineer

Being a scientist is not your only option, though. AI also depends heavily on engineers to grow and develop. From the engineering perspective, here is a list of functions that need to be carried out:

  • Visualisation/representation of data
  • Collection of data from multiple sources
  • Building pipelines to prepare data to scale
  • Using Machine Learning services/frameworks available on clouds to scale up
  • Test, audit, and explain Machine Learning output to various stakeholders

As a contributor

If you find you are not interested in being a scientist or an engineer, there are other significant ways you can contribute to AI. That could be in the following areas:

  • User experience design
  • Process modelling
  • Domain knowledge
  • Linguistic details
  • Social aspects

Mr Dey discusses all these avenues at length in the course of the webinar with examples. At the same time, he lays out the basic qualities that one must have - irrespective of which role one chooses to pursue. And these are creative vision, innate curiosity and perseverance.

Here are some courses that you should explore if you want to build a career in the core AI aspects:

  • A Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or Statistics.
  • A specialisation in any of the following areas:
    • Artificial Intelligence
    • Machine Learning
    • Data Science
    • Automation and Robotics
  • B Tech/ BE in other engineering fields, followed by work experience in the field of software or IT.

The webinar ended with a detailed Q&A session, which opened with questions submitted by participants at the time of registration and carried on to questions asked live during the webinar. The Q&A covered a range of interesting topics like:

  • Neural networks/deep learning
  • Importance of Maths and Statistics in AI/ML
  • How valuable are practical projects for developing skills needed to work in AI/ML
  • Which programming language is the best to learn for a career working with AI/ML
  • Which are the best courses to consider as a student - traditional degrees or online certification courses
  • How does AI compare to the human brain
  • Will AI and automation endanger human jobs in the future
  • What are intelligent agents and how are they useful in AI

To learn the answer to these and many more questions, watch our video recording of the live webinar.

A career in AI and ML is an excellent choice now - and this small initiative of The Telegraph Edugraph was aimed at providing the right guidance for you to make the transition from Campus to Career. Best of luck!

Last updated on 26 Jul 2022

Source: https://www.telegraphindia.com/edugraph/career/how-to-be-an-ai-ml-expert-a-webinar-with-cloud-architect-subhendu-dey/cid/1876427