There is no better option than our C2140-823 cheat sheets and Dumps

You will get an exact replica of the C2140-823 real exam questions that you will face in the actual test. Killexams.com maintains a database of C2140-823 Dumps, a large question bank highly pertinent to C2140-823, contributed by test takers who attempted the C2140-823 exam and passed with high scores.

Exam Code: C2140-823 Practice test 2022 by Killexams.com team
Rational Quality Manager V3
IBM Rational test
Killexams : TestPlant integrates eggPlant with IBM Rational Quality Manager

TestPlant has announced the integration of its automated GUI- and screen-testing tool with IBM Rational Quality Manager, to round out platform coverage of IBM’s test solution.

eggPlant is designed for professional software application testers. “We are different because we use image recognition in a non-invasive fashion,” said George Mackintosh, CEO of TestPlant.

eggPlant automates testing through a “search and compare” approach of GUIs and screens. “eggPlant is a robotic tester,” Mackintosh said. “If you build software, you need to test software, so you can train eggPlant through the test process. eggPlant sees screens just like a human eye would see a screen and, therefore, it can be trained to spot start buttons as well as the numbers and icons that allow you to move through a software application. It’s a robotic test engineer.”
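
eggPlant itself is scripted in SenseTalk; purely to illustrate the "search and compare" idea, here is a minimal Python sketch using OpenCV template matching. The image files, threshold, and click step are hypothetical stand-ins, not eggPlant's actual API.

# Minimal sketch of image-based "search and compare" GUI automation,
# in the spirit of eggPlant's approach (illustrative only).
import cv2

def find_on_screen(screenshot_path, template_path, threshold=0.9):
    """Locate a reference image (e.g., a start button) in a screenshot."""
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    button = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(screen, button, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    # Return the match location only if the match is confident enough.
    return max_loc if max_val >= threshold else None

loc = find_on_screen("screen.png", "start_button.png")   # hypothetical files
if loc is not None:
    print(f"Start button found at {loc}; a driver would click here via VNC.")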

In addition to its image-recognition feature, another feature of eggPlant is that it can operate on multiple systems and platforms. “It operates on legacy systems, desktop systems, and now, hugely importantly, it operates on all the common mobile platforms and mobile systems such as smartphones, tablets and other mobile devices,” Mackintosh said. “It can operate from Windows, Linux, Mac, etc., in any browser, any operating system, and on any device.”

eggPlant has been globally adopted across multiple industries, including mobile, healthcare and gaming. “It’s very attractive that they can cover a broad range of platforms in different industries, and I think that will make TestPlant a great partner for us,” said Stephen Lauzon, senior manager of ISV Technical Enablement and Strategy at IBM Rational, in an interview with SD Times.

The qualities that IBM Rational particularly liked about TestPlant, Lauzon said, are its technology—both its use of image recognition so it’s not tied to a particular browser, and, more specifically, that it integrated the application using virtual network computing (VNC), which is becoming quite common. VNC is an emerging standard interface for providing support to enable external applications to integrate into a platform, he said.

Lauzon said that it was a good idea for IBM Rational to partner with TestPlant because it was in line with IBM’s strategy around quality management. This integration enables users to get strong test-automation support through eggPlant, as well as strong overall collaborative life-cycle management through Rational Quality Manager. “This covers everything, from initial planning of tests to ensure you’re covering all of your requirements, through creation of the test scripts, and then to execution of various test phases and analysis of the results,” he said.

TestPlant reached out to IBM Rational earlier this year, Lauzon said, as part of the “Ready for IBM Rational” program, which is IBM Rational’s validation program for ISV partners. “It’s not a sales program, it’s a certification program for integration,” he said. “As of last week, we have 190 current active solutions and approximately 110 partners.”

“Ready for IBM Rational” software validation enables companies to demonstrate and validate the integration between their tools and the IBM Rational software delivery platform. “TestPlant took eggPlant through our validation program, which enabled them to validate that their integration meets Rational’s requirements,” Lauzon said. “These requirements cover everything from ‘installability’ to safety of data as part of a common workflow experience.”

Mon, 13 Jun 2022 11:59:00 -0500 https://sdtimes.com/guis/testplant-integrates-eggplant-with-ibm-rational-quality-manager/
Killexams : IBM, NI Plug Systems Engineering Gap

With the number of lines of code in the average car expected to skyrocket from 10 million in 2010 to 100 million in 2030, there's no getting around the fact that embedded software development and a systems engineering approach has become central not only to automotive design, but to product design in general.

Yet despite the invigorated focus on what is essentially a long-standing design process, organizations still struggle with siloed systems and engineering processes that stand in the way of true systems engineering spanning mechanical, electrical, and software functions. In an attempt to address some of those hurdles, IBM and National Instruments are partnering to break down the silos specifically as they relate to the quality management engineering system workflow, or more colloquially, the marriage between design and test.

"As customers go through iterative development cycles, whether they're building a physical product or a software subsystem, and get to some level of prototype testing, they run into a brick wall around the manual handoff between the development and test side," Mark Lefebvre, director, systems alliances and integrations, for IBM Rational, told us. "Traditionally, these siloed processes never communicate and what happens is they find errors downstream in the software development process when it is more costly to fix."

NI and IBM's answer to this gap? The pair is building a bridge -- specifically an integration between IBM Rational Quality Manager test management and quality management tool, and NI's VeriStand and TestStand real-time testing and test-automation environment. The integration, Lefebvre said, is designed to plug the gap and provide full traceability of what's defined on the test floor back to design and development, enabling more iterative testing throughout the lifecycle and uncovering errors earlier in the process, well before building costly prototypes.

The ability to break down the quality management silos and facilitate earlier collaboration can have a huge impact on cost if you look at the numbers IBM Rational is touting. According to Lefebvre, a bug that costs $1 to fix on a programmer's desktop costs $100 to fix once it makes its way into a complete program and many thousands of dollars once identified after the software has been deployed in the field.

While the integration isn't yet commercialized (Lefebvre said to expect it at the end of the third quarter), there is a proof of concept being tested with five or six big NI/IBM customers. The proof of concept is focused on the development of an embedded control unit (ECU) for a cruise control system that could operate across multiple vehicle platforms. The workflow exhibited marries the software development test processes to the hardware module test processes, from the requirements stage through quality management, so if a test fails or changes are made to the code, the results are shared throughout the development lifecycle.

Prior to such an integration, any kind of data sharing was limited to manual processes around Word documents and spreadsheets, Lefebvre said. "Typically, a software engineer would hand carry all the data in a spreadsheet and import it into the test environment. Now there's a pipe connecting the two."


Wed, 06 Jul 2022 12:00:00 -0500 https://www.designnews.com/design-hardware-software/ibm-ni-plug-systems-engineering-gap
Killexams : IBM is Modeling New AI After the Human Brain

Attentive Robots

Currently, artificial intelligence (AI) technologies are able to exhibit seemingly-human traits. Some are intentionally humanoid, and others perform tasks that we normally associate strictly with humanity — songwriting, teaching, and visual art.

But as the field progresses, companies and developers are re-thinking the basis of artificial intelligence by examining our own intelligence and how we might effectively mimic it using machinery and software. IBM is one such company, as they have embarked on the ambitious quest to teach AI to act more like the human brain.


Many existing machine learning systems are built around the need to draw from sets of data. Whether they are problem-solving to win a game of Go or identifying skin cancer from images, this often remains true. This basis is, however, limited — and it differs from how the human brain works.

We as humans learn incrementally. Simply put, we learn as we go. While we acquire knowledge to pull from as we go along, our brains adapt and absorb information differently from the way that many existing artificial systems are built. Additionally, we are logical. We use reasoning skills and logic to problem solve, something that these systems aren't yet terrific at accomplishing.

IBM is looking to change this. A research team at DeepMind has created a synthetic neural network that reportedly uses rational reasoning to complete tasks.

Rational Machinery

By giving the AI multiple objects and a specific task, "We are explicitly forcing the network to discover the relationships that exist," says Timothy Lillicrap, a computer scientist at DeepMind in an interview with Science Magazine. In a test of the network back in June, it was questioned about an image with multiple objects. The network was asked, for example: "There is an object in front of the blue thing; does it have the same shape as the tiny cyan thing that is to the right of the gray metal ball?"

In this test, the network correctly identified the object a staggering 96 percent of the time, compared to the measly 42 to 77 percent that more traditional machine learning models achieved. The advanced network was also apt at word problems and continues to be developed and improved upon. In addition to reasoning skills, researchers are advancing the network's ability to pay attention and even make and store memories.
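
The network described above considers objects in pairs and learns what relations hold between them against a question. A minimal PyTorch sketch of that pair-wise idea follows; it is our illustration with invented layer sizes, not DeepMind's or IBM's actual model.

# Sketch of a relation network: score every (object_i, object_j, question)
# triple with a shared MLP g, sum the pair codes, answer with a second MLP f.
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim=32, q_dim=16, hidden=64, n_answers=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_answers))

    def forward(self, objs, question):
        # objs: (batch, n_objects, obj_dim); question: (batch, q_dim)
        b, n, d = objs.shape
        oi = objs.unsqueeze(2).expand(b, n, n, d)    # object i of each pair
        oj = objs.unsqueeze(1).expand(b, n, n, d)    # object j of each pair
        q = question.unsqueeze(1).unsqueeze(1).expand(b, n, n, question.shape[-1])
        pairs = torch.cat([oi, oj, q], dim=-1)       # every (i, j, q) triple
        relations = self.g(pairs).sum(dim=(1, 2))    # aggregate over all pairs
        return self.f(relations)

net = RelationNetwork()
logits = net(torch.randn(4, 6, 32), torch.randn(4, 16))  # 6 objects per scene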

Image Credit: ColiN00B / Pixabay

The future of AI development could be hastened and greatly expanded by using such tactics, according to Irina Rish, an IBM research staff member, in an interview with Engadget: "Neural network learning is typically engineered and it's a lot of work to actually come up with a specific architecture that works best. It's pretty much a trial and error approach ... It would be good if those networks could build themselves."

It might be scary to think of AI networks building and improving themselves, but if monitored, initiated, and controlled correctly, this could allow the field to expand beyond current limitations. Despite the brimming fears of a robot takeover, the advancement of AI technologies could save lives in the medical field, allow humans to get to Mars, and so much more. 


Wed, 29 Dec 2021 18:58:00 -0600 https://futurism.com/ibm-is-modeling-new-ai-after-the-human-brain
Killexams : Rational Apex

A comprehensive Ada development environment for Unix systems from IBM. The tools extend to Ada 95/Ada 83 development as well as support for C/C++. Rational Apex evolved from the original, proprietary hardware-based Ada environment that Rational Machines was founded on in the early 1980s. Later renamed Rational Software, IBM acquired the company in 2003. See Rational Rose.

Sat, 27 Mar 2021 09:48:00 -0500 https://www.pcmag.com/index.php/encyclopedia/term/rational-apex
Killexams : Smarter Baggage Handling

Vanderlande Industries and IBM have helped Amsterdam's Schiphol Airport create a smarter baggage system and a more precise ability to manage the growing amount of baggage expected to pass through the airport in the future. The new baggage handling hall is part of the airport's 70 Million Bag program to increase the capacity of the airport by 40 percent, to 70 million bags.

Airport Schiphol is carrying out this project in collaboration with KLM, Vanderlande Industries and IBM. Vanderlande, IBM and Grenzebach Automation designed, built and tested this system, considered to be an advanced baggage handling facility, featuring space efficient applications such as robotized loading of baggage.

Through an interconnected, synchronized system, every single bag can be located at any point in its journey. A 21 km transport conveyor contains innovative technology including AS/RS (Automated Storage and Retrieval System) bag storage with 36 cranes operating a fully redundant storage of over 4,200 bag positions and DCV technology (Destination Coded Vehicles), as well as six robot cells for the automated loading of bags into containers and carts. It is expected that up to 60 percent of all baggage in the South hall will be handled by robots, which will increase productivity as well as improve the ergonomic working conditions for operators.

After check-in, bags go directly into the bag storage, waiting to be loaded. Robots enable this process by pulling bags from the bag storage on demand, and releasing baggage onto the conveyor belt only when needed, to prevent overload of the system. This way, the airline can handle more bags in less time, at lower cost, more energy-efficiently, and in a limited space. It enables the airport to maximize its efficiency, cost effectiveness and service levels, as well as to meet increasing sustainability demands.
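
As a toy illustration of that just-in-time control loop (the real Schiphol logic is proprietary; all numbers here are invented):

# Bags wait in storage and are released to the belt only while it has
# spare capacity, so the belt is never overloaded.
from collections import deque

BELT_CAPACITY = 5                                  # hypothetical limit
storage = deque(f"bag-{i}" for i in range(12))     # AS/RS bag storage
belt = deque()

def tick():
    """One control step: unload a finished bag, then top the belt back up."""
    if belt:
        loaded = belt.popleft()                    # bag reaches the container
        print(f"{loaded} loaded into container")
    while storage and len(belt) < BELT_CAPACITY:
        belt.append(storage.popleft())             # robot pulls bag on demand

for _ in range(15):
    tick()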

According to Greg Sikes, director of systems offering strategy and delivery for IBM Rational, what stands out about this application is the just-in-time process. Not only is the system able to deal with baggage, taking it from the traveler and setting it aside, it's also able to access it and pull it for the flight when it's needed. The result is more efficiency and productivity.

"Certainly the software continues to show it's more and more the key part of being the innovation behind the system," says Sikes. "Whether it's the embedded software in the robotic devices, the scheduling program that has to deal with gate changes, the logistics and communications software, or the individual device embedded software that's dealing with individual pieces of luggage, it all shows what's possible with a very complex system of systems."


IBM Rational is providing software applications for the project that support requirements management, change management and software configuration management. IBM has been working with Vanderlande Industries for a number of years; Vanderlande took an application lifecycle management approach and a requirements-driven approach to this project. Another key is that the requirements management software lets the user understand not only the requirements, but then also look at them from the test side and see how many of the requirements are actually being tested against.

By integrating the baggage control system with passenger check-in information, the Amsterdam airport has streamlined the process of baggage tracking and of reconciling passengers with their bags for the airlines. Linking into real-time flight information allows for quick off-loading of baggage when a passenger misses a flight and for redirection of bags onto alternative flights when connections are missed.

The integrated system also provides accurate, up-to-date information and metrics to monitor baggage handling performance, helping managers resolve issues quickly and identify areas for improvement. Heavy baggage is handled automatically by robots that work around the clock.


Tue, 28 Jun 2022 12:00:00 -0500 https://www.designnews.com/automation-motion-control/smarter-baggage-handling
Killexams : A guide to DevSecOps tools

Aqua Security enables enterprises to secure their container and cloud-native applications from development to production, accelerating application deployment and bridging the gap between DevOps and IT security. The Aqua Container Security Platform protects applications running on-premises or in the cloud, across a broad range of platform technologies, orchestrators and cloud providers. Aqua secures the entire software development lifecycle, including image scanning for known vulnerabilities during the build process, image assurance to enforce policies for production code as it is deployed, and run-time controls for visibility into application activity, allowing organizations to mitigate threats and block attacks in real-time.
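
As a generic illustration of what build-time scanning does conceptually (real scanners such as Aqua's work from live vulnerability feeds; the data and package names here are invented):

# Toy build-time dependency scan: compare a project's pinned packages
# against a known-vulnerability list and fail the build on any hit.
KNOWN_VULNS = {
    ("libexample", "1.2.0"): "CVE-XXXX-0001 (hypothetical)",
}

def scan(manifest):
    findings = []
    for name, version in manifest.items():
        issue = KNOWN_VULNS.get((name, version))
        if issue:
            findings.append(f"{name}=={version}: {issue}")
    return findings

report = scan({"libexample": "1.2.0", "othermod": "2.0.1"})
for line in report:
    print("FAIL:", line)   # a CI gate would break the build here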

CA Technologies creates software that fuels modern transformation for companies across the globe. DevSecOps enables the build, test, security and rollout of software quickly and efficiently, providing software that’s more resistant to hacker attacks. Through automation, CA Technologies extends faster deployment with an agile back end that delivers more reliable releases of code helping teams to work collaboratively earlier in the DevSecOps process to detect security vulnerabilities in every phase, from design to deployment.

CodeAI is a smart automated secure coding application for DevOps that fixes security vulnerabilities in computer source code to prevent hacking. Its unique user-centric interface provides developers with a list of solutions to review instead of a list of problems to resolve. Teams that use CodeAI will experience a 30%-50% increase in overall development velocity.

CodeAI takes a unique approach to finding bugs using a proprietary deep learning technology for code trained on real-world bugs and fixes in large amounts of software. CodeAI fixes bugs using simple program transformation schemas derived from bug fixing commits in open source software.

Synopsys helps development teams build secure, high-quality software, minimizing risks while maximizing speed and productivity. Synopsys, a recognized leader in application security, provides static analysis, software composition analysis, and dynamic analysis solutions that enable teams to quickly find and fix vulnerabilities and defects in proprietary code, open source components, and application behavior. With a combination of industry-leading tools, services, and expertise, only Synopsys helps organizations optimize security and quality in DevSecOps and throughout the software development lifecycle.

RELATED CONTENT: Application security needs to shift left

Checkmarx provides application security at the speed of DevOps, enabling organizations to deliver secure software faster. It easily integrates with developers’ existing work environments, allowing them to stay in their comfort zone while still addressing secure coding practices.

Chef Automate is a continuous delivery platform that allows developers, operations, and security engineers to collaborate effortlessly on delivering application and infrastructure changes at the speed of business. Chef Automate provides actionable insights into the state of your compliance, configurations, with an auditable history of every change that’s been applied to your environments.

CloudPassage, the leader in automated cloud workload and container security, was founded in 2010. The first company to obtain U.S. patents for universal cloud infrastructure security, CloudPassage has been a leading innovator in cloud security automation and compliance monitoring for high-performance application development and deployment environments.

Its on-demand security solution, Halo, is an award-winning workload security automation platform that provides visibility and protection in any combination of data centers, private/public clouds, and containers. Delivered as a service, Halo deploys in minutes and scales effortlessly, and it fully integrates with popular infrastructure automation and orchestration tools along with leading CI/CD tools.

CollabNet VersionOne offers solutions across the DevOps toolchain. Its solutions provide the ability to measure and Strengthen end-to-end continuous delivery, orchestrate delivery pipelines and value streams, standardize and automate deployments and DevOps tasks, and ensure traceability and compliance across workflows, applications, and environments.

Contrast: Assess produces accurate results without dependence on application security experts, using deep security instrumentation to analyze code in real time from within the application. It scales because it instruments application security into each application, delivering vulnerability assessment across an entire application portfolio. Contrast Assess integrates seamlessly into the software lifecycle and into the tool sets that development & operations teams are already using.

Contrast Protect provides actionable and timely application layer threat intelligence across the entire application portfolio. Once instrumented, applications will self-report the following about an attack at a minimum – the attacker, method of attack, which applications, frequency, volume, and level of compromise. Protect provides specific guidance to engineering teams on where applications were attacked and how threats can be remediated. Contrast doesn’t require any changes to applications or the runtime environment, and no network configuration or learning mode is necessary.
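
Conceptually, that self-reporting behavior resembles instrumenting a handler so the application itself flags and reports suspicious input. The following toy Python sketch is our illustration, not Contrast's implementation; the detection rule and report sink are simplified stand-ins.

# RASP-style self-reporting in miniature: a decorator inspects requests,
# reports attack metadata, and blocks the suspicious ones.
import re, functools

SQLI_PATTERN = re.compile(r"('|--|;|\bOR\b\s+1=1)", re.IGNORECASE)

def self_reporting(handler):
    @functools.wraps(handler)
    def wrapped(request):
        if SQLI_PATTERN.search(request.get("query", "")):
            # Real RASP also records attacker, frequency, and level of
            # compromise; here we just log and block.
            print(f"ATTACK: sqli-like input from {request.get('ip')}")
            return {"status": 400}
        return handler(request)
    return wrapped

@self_reporting
def search(request):
    return {"status": 200, "results": []}

print(search({"ip": "203.0.113.9", "query": "name' OR 1=1 --"}))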

CyberArk delivers the most comprehensive solution for protecting against the exploitation of privileged accounts, credentials and secrets anywhere – on the endpoint and across on-premises, hybrid cloud, and DevOps environments. CyberArk Conjur is a secrets management solution that secures and manages secrets used by machine identities (including applications, microservices, applications, CI/CD tools and APIs) and users throughout the DevOps pipeline to mitigate risk without impacting velocity. Conjur is the only platform-independent secrets management solution specifically architected for containerized environments and can be deployed at massive scale. CyberArk Conjur is also available to developers as an Open Source Community Edition.

Datical is a database company that allows organizations to deliver error-free application experiences faster. The company’s solutions make database code deployment as simple as application release automation, while still eliminating risks that cause application downtime and data security vulnerabilities.

Using Datical to automate database releases means organizations are now able to deliver error-free application experiences faster and safer while focusing resources on the high-value tasks that move the business forward.

DBmaestro brings DevOps best practices to the database, delivering a new level of efficiency, speed, security and process integration for databases. DBmaestro’s platform enables organizations to run database deployments securely and efficiently, increase development team productivity and significantly decrease time-to-market. The solution enables organizations to implement CI/CD practices for database activities, with repeatable pipeline release automation and automatic drift prevention mechanisms. The platform combines several key features for the database, including: pipeline release automation, database version control, governance and security modules and a business activity monitor.

IBM is recognized by IDC as a leader in DevSecOps. IBM’s approach is to deliver secure DevOps at scale in the cloud, or behind the firewall. IBM provides a set of industry-leading solutions that work with your existing environment. And of course they work fantastically together: Change is delivered from dev to production with the IBM UrbanCode continuous delivery suite. Changes are tested with Rational Test Workbench, and security tested with IBM AppScan or Application Security on Cloud. IBM helps you build your production safety net with application management, Netcool Operations Insight and IBM QRadar for security intelligence and events.

Imperva offers many different solutions to help you secure your applications. Organizations will be able to protect application in the cloud and on-premises with the same set of security policies and management capabilities. Its multiple deployment methods allow teams to meet the specific security and service level requirements for individual applications.

Imperva WAF protects against the most critical web application security risks: SQL injection, cross-site scripting, illegal resource access, remote file inclusion, and other OWASP Top 10 and Automated Top 20 threats. Imperva security researchers continually monitor the threat landscape and update Imperva WAF with the latest threat data.

JFrog Xray is a continuous security and universal artifact analysis tool, providing multilayer analysis of containers and software artifacts for vulnerabilities, license compliance, and quality assurance. Deep recursive scanning provides insight into your components graph and shows the impact that any issue has on all your software artifacts.

NoSprawl is security for DevOps. As DevOps matures and finds broader adoption in enterprises, the scope of DevOps must be expanded to include all the teams and stakeholders that contribute to application delivery, including security. NoSprawl integrates with software development platforms to check for security vulnerabilities throughout the entire software development lifecycle to deliver verified secure software before it gets into production.

Parasoft: Harden your software with a comprehensive security testing solution, with support for important standards like CERT-C, CWE, and MISRA. To help you understand and prioritize risk, Parasoft’s static analysis violation metadata includes likelihood of exploit, difficulty to exploit/remediate, and inherent risk, so you can focus on what’s most important in your C and C++ code.

In addition to static analysis that detects security vulnerabilities, weak code susceptible to hacking, and helps enforce secure engineering standards in support of Secure-by-Design, Parasoft provides flexible, intelligent dashboards and reports specifically designed for each standard to provide necessary information for reporting and compliance auditing. Configuration, reporting, and remediation are all standards centric – no need to translate vendor IDs to standards IDs.

Qualys is a leading provider of information security and compliance cloud solutions, with over 10,300 customers globally. It provides enterprises with greater agility, better business outcomes, and substantial cost savings for digital transformation efforts. The Qualys Cloud Platform and apps integrated with it help businesses simplify security operations and automates the auditing, compliance, and protection for IT systems and web applications.

Redgate Software’s SQL Data Privacy Suite helps you adopt a DevSecOps approach that protects your business, by providing a scalable and repeatable process for managing personally-identifiable information as it moves through your SQL Server estate. It maps your entire SQL data estate, identifies sensitive data, helps you protect it through automatic data masking and encryption, and allows you to monitor and demonstrate compliance for regulations such as GDPR, HIPAA and SOX during data handling. The all-in-one solution lets you discover, classify, protect, and monitor data, processes and activity throughout your SQL Server estate.

Rogue Wave Software helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. Our Klocwork static code analysis tool helps DevSecOps professionals, from developers to test automation engineers to compliance leaders, create more secure code with on-the-fly security analysis at the desktop and integrated into large-scale continuous integration workflows.

Signal Sciences secures the most important applications, APIs, and microservices of the world’s leading companies. Our next-gen WAF and RASP help you increase security and maintain site reliability without sacrificing velocity, all at the lowest total cost of ownership.

DevSecOps isn’t just about shifting left. Feedback loops on where attacks against applications occur and are successful in production are critical. Signal Sciences gets developers and operations involved by providing relevant data, helping them triage issues faster with less effort. With Signal Sciences, teams can see actionable insights, secure across the broadest attack classes, and scale to any infrastructure and volume elastically.

Sonatype’s Nexus platform helps more than 10 million software developers innovate faster while mitigating security risks inherent in open source. Powered by Nexus IQ, the platform combines unrivaled, in-depth intelligence with real-time remediation guidance to automate and scale open source governance across every stage of the modern DevOps pipeline. Nexus IQ enables Nexus Firewall, which stops risky components from entering the development environment. From there, trusted components are stored in Nexus Repository, and can be easily distributed into the development process. Then, Nexus Lifecycle uses Nexus IQ to automatically and continuously identify and remediate OSS risks in all areas of an environment, including applications in production.

Sumo Logic is the leading secure, cloud-native, multi-tenant machine data analytics platform that delivers real-time, continuous intelligence across the entire application lifecycle and stack. Sumo Logic simplifies DevSecOps implementation at the code level, enabling customers to build infrastructure that scales securely and quickly. This approach is required to maintain speed, agility and innovation while meeting security regulations and staying alert for malicious cyber threats.

WhiteHat Security has been in the business of securing applications for 17 years. In that time, applications evolved and became the driving force of the digital business, but they’ve also remained the primary target of malicious hacks. The WhiteHat Application Security Platform is a cloud service that allows organizations to bridge the gap between security and development to deliver secure applications at the speed of business. Its software security solutions work across departments to provide fast turnaround times for Agile environments, near-zero false positives and precise remediation plans while reducing wasted time verifying vulnerabilities, threats and costs for faster deployment.

RELATED CONTENT: How these companies can help make your applications more secure

Tue, 03 Jul 2018 03:12:00 -0500 https://sdtimes.com/security/a-guide-to-devsecops-tools/
Killexams : Comprehensive Change Management for SoC Design By Sunita Chulani1, Stanley M. Sutton Jr.1, Gary Bachelor2, and P. Santhanam1
1 IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 USA
2 IBM Global Business Services, PO BOX 31, Birmingham Road, Warwick CV34 5JL UK

Abstract

Systems-on-a-Chip (SoC) are becoming increasingly complex, leading to corresponding increases in the complexity and cost of SoC design and development.  We propose to address this problem by introducing comprehensive change management.  Change management, which is widely used in the software industry, involves controlling when and where changes can be introduced into components so that changes can be propagated quickly, completely, and correctly.
In this paper we address two main topics:   One is typical scenarios in electronic design where change management can be supported and leveraged. The other is the specification of a comprehensive schema to illustrate the varieties of data and relationships that are important for change management in SoC design.

1.    INTRODUCTION

SoC designs are becoming increasingly complex.  Pressures on design teams and project managers are rising because of shorter times to market, more complex technology, more complex organizations, and geographically dispersed multi-partner teams with varied “business models” and higher “cost of failure.”

Current methodology and tools for designing SoC need to evolve with market demands in key areas:  First, multiple streams of inconsistent hardware (HW) and software (SW) processes are often integrated only in the late stages of a project, leading to unrecognized divergence of requirements, platforms, and IP, resulting in unacceptable risk in cost, schedule, and quality.  Second, even within a stream of HW or SW, there is inadequate data integration, configuration management, and change control across life cycle artifacts.  Techniques used for these are often ad hoc or manual, and the cost of failure is high.  This makes it difficult for a distributed team to be productive and inhibits the early, controlled reuse of design products and IP.  Finally, the costs of deploying and managing separate dedicated systems and infrastructures are becoming prohibitive.

We propose to address these shortcomings through comprehensive change management, which is the integrated application of configuration management, version control, and change control across software and hardware design.  Change management is widely practiced in the software development industry.  There are commercial change-management systems available for use in electronic design, such as MatrixOne DesignSync [4], ClioSoft SOS [2], IC Manage Design Management [3], and Rational ClearCase/ClearQuest [1], as well as numerous proprietary, “home-grown” systems.  But to date change management remains an under-utilized technology in electronic design.

In SoC design, change management can help with many problems.  For instance, when IP is modified, change management can help in identifying blocks in which the IP is used, in evaluating other affected design elements, and in determining which tests must be rerun and which rules must be re-verified. Or, when a new release is proposed, change management can help in assessing whether the elements of the release are mutually consistent and in specifying IP or other resources on which the new release depends.

More generally, change management gives the ability to analyze the potential impact of changes by tracing to affected entities and the ability to propagate changes completely, correctly, and efficiently.  For design managers, this supports decision-making as to whether, when, and how to make or accept changes.  For design engineers, it helps in assessing when a set of design entities is complete and consistent and in deciding when it is safe to make (or adopt) a new release.

In this paper we focus on two elements of this approach for SoC design.  One is the specification of representative use cases in which change management plays a critical role.  These show places in the SoC development process where information important for managing change can be gathered.  They also show places where appropriate information can be used to manage the impact of change.  The second element is the specification of a generic schema for modeling design entities and their interrelationships.  This supports traceability among design elements, allows designers to analyze the impact of changes, and facilitates the efficient and comprehensive propagation of changes to affected elements.

The following section provides some background on a survey of subject-matter experts that we performed to refine the problem definition.     

2.    BACKGROUND

We surveyed some 30 IBM subject-matter experts (SMEs) in electronic design, change management, and design data modeling.  They identified 26 problem areas for change management in electronic design.  We categorized these as follows:

  • visibility into project status
  • day-to-day control of project activities
  • organizational or structural changes
  • design method consistency
  • design data consistency

Major themes that crosscut these included:

  • visibility and status of data
  • comprehensive change management
  • method definition, tracking, and enforcement
  • design physical quality
  • common approach to problem identification and handling

We held a workshop with the SMEs to prioritize these problems, and two emerged as the most significant:  First, the need for basic management of the configuration of all the design data and resources of concern within a project or work package (libraries, designs, code, tools, test suites, etc.); second, the need for designer visibility into the status of data and configurations in a work package.

To realize these goals, two basic kinds of information are necessary:  1) An understanding of how change management may occur in SoC design processes; 2) An understanding of the kinds of information and relationships needed to manage change in SoC design.  We addressed the former by specifying change-management use cases; we addressed the latter by specifying a change-management schema.

3.    USE CASES

This section describes typical use cases in the SoC design process.  Change is a pervasive concern in these use cases—they cause changes, respond to changes, or depend on data and other resources that are subject to change.  Thus, change management is integral to the effective execution of each of these use cases. We identified nine representative use cases in the SoC design process, which are shown in Figure 1.


Figure 1.  Use cases in SoC design

In general there are four ways of initiating a project: New Project, Derive, Merge and Retarget.  New Project is the case in which a new project is created from the beginning.  The Derive case is initiated when a new business opportunity arises to base a new project on an existing design. The Merge case is initiated when an actor wants to merge configuration items during implementation of a new change management scheme or while working with teams/organizations outside of the current scheme. The Retarget case is initiated when a project is restructured due to resource or other constraints.  In all of these use cases it is important to institute proper change controls from the outset.  New Project starts with a clean slate; the other scenarios require changes from (or to) existing projects.    

Once the project is initiated, the next phase is to update the design. There are two use cases in the Update Design composite state.  New Design Elements addresses the original creation of new design elements.  These become new entries in the change-management system.  The Implement Change use case entails the modification of an existing design element (such as fixing a bug).  It is triggered in response to a change request and is supported and governed by change-management data and protocols.

The next phase is Resolve Project, which consists of three use cases. Backout is the use case by which changes that were made in the previous phase can be reversed.  Release is the use case by which a project is released for cross-functional use. The Archive use case protects design assets by making a secure copy of the design and its environment.

4.    CHANGE-MANAGEMENT SCHEMA

The main goal of the change-management schema is to enable the capture of all information that might contribute to change management.

4.1     Overview

The schema, which is defined in the Unified Modeling Language (UML) [5], consists of several high-level packages (Figure 2).



Figure 2.  Packages in the change-management schema

Package Data represents types for design data and metadata.  Package Objects and Data defines types for objects and data.  Objects are containers for information; data represent the information.  The main types of object include artifacts (such as files), features, and attributes.  The types of objects and data defined are important for change management because they represent the principal work products of electronic design: IP, VHDL and RTL specifications, floor plans, formal verification rules, timing rules, and so on.  It is changes to these things for which management is most needed.

The package Types defines types to represent the types of objects and data.  This enables some types in the schema (such as those for attributes, collections, and relationships) to be defined parametrically in terms of other types, which promotes generality, consistency, and reusability of schema elements.

Package Attributes defines specific types of attribute.  The basic attribute is just a name-value pair that is associated to an object.  (More strongly-typed subtypes of attribute have fixed names, value types, attributed-object types, or combinations of these.)  Attributes are one of the main types of design data, and they are important for change management because they can represent the status or state of design elements (such as version number, verification level, or timing characteristics).

Package Collections defines types of collections, including collections with varying degrees of structure, typing, and constraints.  Collections are important for change management in that changes must often be coordinated for collections of design elements as a group (e.g., for a work package, verification suite, or IP release).  Collections are also used in defining other elements in the schema (for example, baselines and change sets).

The package Relationships defines types of relationships.  The basic relationship type is an ordered collection of a fixed number of elements.  Subtypes provide directionality, element typing, and additional semantics.  Relationships are important for change management because they can define various types of dependencies among design data and resources.  Examples include the use of macros in cores, the dependence of timing reports on floor plans and timing contracts, and the dependence of test results on tested designs, test cases, and test tools.  Explicit dependency relationships support the analysis of change impact and the efficient and precise propagation of changes.

The package Specifications defines types of data specification and definition.  Specifications specify an informational entity; definitions denote a meaning and are used in specifications.

Package Resources represents things (other than design data) that are used in design processes, for example, design tools, IP, design methods, and design engineers.  Resources are important for change management in that resources are used in the actions that cause changes and in the actions that respond to changes.  Indeed, minimizing the resources needed to handle changes is one of the goals of change management.

Resources are also important in that changes to a resource may require changes to design elements that were created using that resource (for example, when changes to a simulator may require reproduction of simulation results).

Package Events defines types and instances of events.  Events are important in change management because changes are a kind of event, and signals of change events can trigger processes to handle the change.

The package Actions provides a representation for things that are done, that is, for the behaviors or executions of tools, scripts, tasks, method steps, etc.  Actions are important for change in that actions cause change.  Actions can also be triggered in response to changes and can handle changes (such as by propagating changes to dependent artifacts).

Subpackage Action Definitions defines the type Action Execution, which contains information about a particular execution of a particular action.  It refers to the definition of the action and to the specific artifacts and attributes read and written, resources used, and events generated and handled.  Thus an action execution indicates particular artifacts and attributes that are changed, and it links those to the particular process or activity by which they were changed, the particular artifacts and attributes on which the changes were based, and the particular resources by which the changes were effected.  Through this, particular dependency relationships can be established between the objects, data, and resources.  This is the specific information needed to analyze and propagate concrete changes to artifacts, processes, resources.


Package Baselines defines types for defining mutually consistent sets of design artifacts. Baselines are important for change management in several respects.  The elements in a baseline must be protected from arbitrary changes that might disrupt their mutual consistency, and the elements in a baseline must be changed in mutually consistent ways in order to evolve a baseline from one version to another.

The final package in Figure 2 is the Change package.  It defines types for representing change explicitly.  These include managed objects, which are objects with an associated change log; change logs and change sets, which are types of collection that contain change records; and change records, which record specific changes to specific objects.  They can include a reference to an action execution that caused the change.

The subpackage Change Requests includes types for modeling change requests and responses.  A change request has a type, description, state, priority, and owner.  It can have an associated action definition, which may be the definition of the action to be taken in processing the change request.  A change request also has a change-request history log.
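
To make these types concrete, the following Python sketch renders the Change package's main types as dataclasses.  The field names follow the prose above; the concrete types and defaults are illustrative assumptions, not the schema's actual definition.

# Rough rendering of the Change package: a managed object carries a change
# log of change records, and each record can point back to the action
# execution that caused it.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ActionExecution:
    action_name: str
    artifacts_read: List[str]       # artifacts/attributes read
    artifacts_written: List[str]    # artifacts/attributes written
    resources_used: List[str]       # tools, IP, engineers
    events_generated: List[str]

@dataclass
class ChangeRecord:
    object_id: str
    description: str
    caused_by: Optional[ActionExecution] = None   # links change to its action

@dataclass
class ManagedObject:
    object_id: str
    change_log: List[ChangeRecord] = field(default_factory=list)

    def record_change(self, description, execution=None):
        self.change_log.append(ChangeRecord(self.object_id, description, execution))

@dataclass
class ChangeRequest:
    request_type: str
    description: str
    state: str = "open"
    priority: str = "medium"
    owner: str = ""
    history: List[str] = field(default_factory=list)  # change-request history log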

4.2    Example

An example of the schema is shown in Figure 3.  The clear boxes (upper part of diagram) show general types from the schema and the shaded boxes (lower part of the diagram) show types (and a few instances) defined for a specific high-level design process project at IBM.



Figure 3.  Example of change-management data

The figure shows a dependency relationship between two types of design artifact, VHDLArtifact and FloorPlannableObjects.  The relationship is defined in terms of a compiler that derives instances of FloorPlannableObjects from instances of VHDLArtifact.  Execution of the compiler constitutes an action that defines the relationship.  The specific schema elements are defined based on the general schema using a variety of object-oriented modeling techniques, including subtyping (e.g., VHDLArtifact), instantiation (e.g., Compile1) and parameterization (e.g., VHDLFloorplannableObjectsDependency).
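
In code, that parameterization pattern might look like the following sketch, in which a dependency relationship is instantiated with a source type, a target type, and the deriving action.  The class shape is our own illustration; the names mirror Figure 3.

# A dependency relationship parameterized by the artifact types it connects
# and the action that derives one from the other.
class DependencyRelationship:
    def __init__(self, source_type, target_type, via_action):
        self.source_type = source_type
        self.target_type = target_type
        self.via_action = via_action   # e.g., a compile action execution

vhdl_floorplan_dep = DependencyRelationship(
    source_type="VHDLArtifact",
    target_type="FloorPlannableObjects",
    via_action="Compile1",             # the compiler run in Figure 3
)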

5.    USE CASE IMPLEMENT CHANGE

Here we present an example use case, Implement Change, with details on its activities and how the activities use the schema presented in Section 4.  This use case is illustrated in Figure 4.



Figure 4.  State diagram for use case Implement Change

The Implement Change use case addresses the modification of an existing design element (such as fixing a bug).  It is triggered by a change request.  The first steps of this use case are to identify and evaluate the change request to be handled.  Then the relevant baseline is located, loaded into the engineer’s workspace, and verified.  At this point the change can be implemented.  This begins with the identification of the artifacts that are immediately affected.  Then dependent artifacts are identified and changes propagated according to dependency relationships.  (This may entail several iterations.)  Once a stable state is achieved, the modified artifacts are verified and regression tested.  Depending on test results, more changes may be required.  Once the change is considered acceptable, any learning and metrics from the process are captured and the new artifacts and relationships are promoted to the public configuration space.
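
The propagation step lends itself to a worked sketch: a minimal impact analysis that walks dependency relationships from the immediately affected artifacts to a fixed point.  The artifact names and dependency graph below are hypothetical.

# Transitive impact analysis over "depends-on" edges.
from collections import deque

# artifact -> artifacts that depend on it
DEPENDENTS = {
    "ip_core.vhdl": ["floorplan.fp", "timing_report.rpt"],
    "floorplan.fp": ["timing_report.rpt"],
    "timing_report.rpt": [],
}

def affected_by(changed):
    """Transitive closure of dependents of the changed artifacts."""
    seen, queue = set(changed), deque(changed)
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen - set(changed)

print(affected_by(["ip_core.vhdl"]))   # artifacts to regenerate and retest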

6.    CONCLUSIONS

This paper explores the role of comprehensive change management in SoC design, development, and delivery.  Based on the comments of over thirty experienced electronic design engineers from across IBM, we have captured the essential problems and motivations for change management in SoC projects. We have described design scenarios, highlighting places where change management applies, and presented a preliminary schema to show the range of data and relationships change management may incorporate.  Change management can benefit both design managers and engineers.  It is increasingly essential for improving productivity and reducing time and cost in SoC projects.

ACKNOWLEDGMENTS

Contributions to this work were also made by Nadav Golbandi and Yoav Rubin of IBM’s Haifa Research Lab.  Much information and guidance were provided by Jeff Staten and Bernd-josef Huettl of IBM’s Systems and Technology Group. We especially thank Richard Bell, John Coiner, Mark Firstenberg, Andrew Mirsky, Gary Nusbaum, and Harry Reindel of IBM’s Systems and Technology Group for sharing design data and experiences.  We are also grateful to the many other people across IBM who contributed their time and expertise.

REFERENCES

1.    http://www306.ibm.com/software/awdtools/changemgmt/enterprise/index.html

2.    http://www.cliosoft.com/products/index.html

3.    http://www.icmanage.com/products/index.html

4.    http://www.ins.clrc.ac.uk/europractice/software/matrixone.html

5.    http://www.uml.org/

Sun, 26 Jun 2022 12:00:00 -0500 https://www.design-reuse.com/articles/15745/comprehensive-change-management-for-soc-design.html
Killexams : Security Risks Widen With Commercial Chiplets

The commercialization of chiplets is expected to increase the number and breadth of attack surfaces in electronic systems, making it harder to keep track of all the hardened IP jammed into a package and to verify its authenticity and robustness against hackers.

Until now this has been largely a non-issue, because the only companies using chiplets today — AMD, Intel, and Marvell — internally source those chiplets. But as the market for third-party chiplets grows and device scaling becomes too expensive for most applications, advanced packaging using pre-verified and tested parts is a proven viable option. In fact, industry insiders predict that complex designs may include 100 or more chiplets, many of those sourced from different vendors. That could include various types of processors and accelerators, memories, I/Os, as well as chiplets developed for controlling and monitoring different functions such as secure boot.

The chiplet concept is being viewed increasingly as a successor to the SoC. In effect, it relies on a platform with well-defined interconnects to quickly integrate components that had to be shrunk to whatever process node the SoC was being created at. In most cases, that was the digital logic, and analog functions were largely digitized. But as the benefits of Moore’s Law diminish, and as different market slices demand more optimized solutions, the ability to pack in features developed at various process nodes, and choose alternatives from a menu, has put a spotlight on chiplets. They can be developed quickly and relatively cheaply by third-parties, characterized for standardized interconnect schemes, and at least in theory keep costs under control.

This is easier said than done, however. Commercially available chiplets will almost certainly increase the complexity of these designs, at least in the initial implementations. And equally important, they will open the door to a variety of security-related issues.

“The supply chain becomes the primary target,” said Adam Laurie, global security associate partner and lead hacker for IBM’s X-Force Red offensive security services. “If hackers can get into the back end of the supply chain, they can ship chiplets that are pre-hacked. The weakest company in the supply chain becomes the weakest link in a system, and you can adjust your attack to the weakest link.”

Sometimes, those weak links aren’t obvious until they are integrated into a larger system. “There was a 4G communications module that had so much additional processing power that people were using it for processing Java,” said Laurie, in an interview with Semiconductor Engineering at the recent hardwear.io conference. “We found they could flip the USB connection to read all the data stored on the device across all the IP. That affected millions of devices, including planes and trains. This was a 4G modem plug-in, and it was sold as a secure module.”

These problems become more difficult to prevent or even identify as the supply chain extends in all directions with off-the-shelf chiplets. “How do you ensure the authenticity of every piece of microelectronics that’s moving from wafer sort up through final test, where assembly and test are performed in a different country, and then attached to a board in yet another country?” asked Scott Best, senior technical director of product management for security IP at Rambus. “And then it’s imported to put into a system in the U.S. What are the reliable ways of actually tracking those pieces to ensure that the system that you’re building has authentic components? We have a lot more recent interest from customers worried about risk to the supply chain, where someone slips in a counterfeit part. Perhaps that’s done with malicious intent, or it could just be a cheap knockoff of an authentic part with the exact same part numbers and die markings to make it look fully compatible. It looks correct from the outside, but it’s not correct at all. The suppliers’ customers are a lot more worried about that now.”

Fig. 1: A six-chiplet design with 96 cores. Source: Leti

Solutions
The chip industry has been working on solutions for the past decade, starting with the rollout of third-party IP. But at least some of that work was pushed back as the IP market consolidated into a handful of big companies, rendering many of those solutions an unnecessary cost. That is changing with the introduction of a commercial chiplet marketplace and the inclusion of chiplets in mission- and safety-critical applications.

“One solution for future devices involves activation of chiplets,” said Maarten Bron, managing director at Riscure. “On the gray market, you may see 20% more chips ending up being used. But if you have to activate those parts, those chips become unusable.”

A similar approach is to use encrypted tests from the manufacturer. “In automotive, you have this validation process for the software, which produces reports that tell you this is real,” said Mitch Mliner, vice president of engineering at Cycuity (formerly Tortuga Logic). “We need to do the same on the hardware side. ‘Here’s a chip. Here’s what goes with it. Here’s the testing that was done. And here’s the outcome. So you can see this is safe. And here are even more tests. You can run these encrypted tests.’ This is similar to logging in to read encrypted stuff, and it will confirm that it’s still working when you insert a chiplet into your design. This is where the industry needs to go. Without that, it’s going to be hard for people to drop chiplets into their design and say, ‘Hey, I’m comfortable with this.’ They need to have traceability.”
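
One way the signed-test-report idea could look in practice is sketched below: the vendor signs a digest of the results, and the integrator checks it before trusting the part.  HMAC stands in for a real signature scheme, and the key handling and report format are assumptions.

# Sign and verify a chiplet test report.
import hashlib, hmac

VENDOR_KEY = b"shared-secret-provisioned-out-of-band"   # hypothetical

def sign_report(report_bytes):
    return hmac.new(VENDOR_KEY, report_bytes, hashlib.sha256).hexdigest()

def verify_report(report_bytes, tag):
    return hmac.compare_digest(sign_report(report_bytes), tag)

report = b"chiplet=XYZ123; wafer=7; tests=1042 passed"
tag = sign_report(report)                 # produced at the vendor's fab
assert verify_report(report, tag)         # checked at integration time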

It’s not just about the hardware, either. As chips remain in the market for longer periods of time — up to 25 years in industrial and mil/aero applications, and 10 to 20 years for automobiles — many of these chiplets will need to be updated through firmware or software in order to stay current with known security issues and current communications protocols.

“Chiplets are put together on a substrate or in a 3D stack that is essentially the same as a small computer network,” said Mike Borza, Synopsys scientist. “So if you can attack any part of it, you have the potential to use that as a launching pad for an attack on the rest of it. We’ve seen this time and again in all kinds of different networks. The idea is to get a toehold in the part. Software authenticity is great. You have secure boot and all those kinds of processes that are used to run cryptographic authentication to prove where the software came from, and that’s really important. But it has to have a basis in the hardware that allows you to really trust that the people who sent you that software are the real thing. It’s not good enough to say, ‘Take my software an install it.’ People have done that in the past and that’s turned into an attack. The software ultimately is what people are trying to defend, and it needs to be tied back to the hardware in a rational way that allows you to at least trust that when you start the system up you’ve got the right software and it’s authorized to be running where you are.”
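
Borza's point, that software authenticity must be rooted in hardware, can be sketched as a boot check against an immutable public key.  This uses the Python 'cryptography' package; the key and image are invented, and real secure boot fuses the public key (or its hash) into silicon.

# Minimal secure-boot signature check.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

vendor_key = Ed25519PrivateKey.generate()
ROM_PUBKEY = vendor_key.public_key()       # immutable in real hardware

firmware = b"chiplet firmware v1.0"
signature = vendor_key.sign(firmware)      # done at the vendor, not on-chip

def secure_boot(image, sig, pubkey):
    try:
        pubkey.verify(sig, image)          # raises if the image was tampered with
        return "boot"
    except InvalidSignature:
        return "halt"

print(secure_boot(firmware, signature, ROM_PUBKEY))           # boot
print(secure_boot(firmware + b"!", signature, ROM_PUBKEY))    # halt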

One such approach is to keep track of all of these components through blockchain ledgers, which is part of the U.S. government’s “zero trust” initiative.

“More standards like UCIe for putting chiplets together will help with adoption,” said Simon Rance, vice president of marketing at ClioSoft. “But now we’re starting to get input from the mil/aero side, where they want blockchain traceability. We’ve been able to layer our HUB tool across that to provide visibility across the chiplets and the blockchain. Now we can look at the design data versus the spec and determine whether it was right or wrong, and even which version of a tool was used. That’s important for automotive and mil/aero.”

Rance noted that a lot of this effort started with the shift from on-premise design to the cloud and the rollout of the U.S. Department of Defense standards for chiplet design. “There was a big push for traceability,” he said. “If you look at the design data and compare that to spec, was it right or wrong? And then, which tool was used, and which version of the tool?”
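
The property those ledgers provide can be demonstrated with a toy hash chain. This is not any specific DoD or commercial blockchain; it only shows the core idea that each custody record commits to the one before it, so history cannot be rewritten without detection.

import hashlib
import json
import time

def add_event(chain: list, event: dict) -> None:
    # Each record embeds the hash of the previous record before being hashed itself.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "ts": time.time(), "event": event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain: list) -> bool:
    # Walk the chain, recomputing every hash and link.
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "ts": block["ts"], "event": block["event"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or digest != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger: list = []
add_event(ledger, {"die": "00a1b2c3", "step": "wafer sort", "site": "fab A"})
add_event(ledger, {"die": "00a1b2c3", "step": "final test", "site": "OSAT B"})
add_event(ledger, {"die": "00a1b2c3", "step": "board attach", "site": "CM C"})
print(verify_chain(ledger))             # True
ledger[1]["event"]["site"] = "unknown"  # tamper with a custody record
print(verify_chain(ledger))             # False: chain breaks at the edit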

Another option is to add programmability into a system using eFPGAs to change bitstreams as needed for security reasons. That makes it much harder to attack a device because the bitstreams are never the same.

“We’ve been working with the DoD on one-circuit obfuscation, where there are not a lot of gates,” said Andy Jaros, vice president of sales and marketing at Flex Logix. “With a chiplet, it will either work or not work. We also can encrypt the bitstream with a PUF. So you can have multiple different bitstreams in a design, and change them if one bitstream is compromised. With the DoD, it’s more about obfuscation and programming an eFPGA in a secure environment. But we also expect different encryption algorithms to be modified over time for security reasons.”
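
A toy version of the bitstream-keying idea is below. The "PUF response" is a fixed stand-in byte string (real PUF readouts are noisy and need error correction), and a production design would use an authenticated cipher such as AES-GCM rather than this SHA-256 counter keystream; the sketch only shows why a bitstream encrypted for one die is useless on another.

import hashlib

def puf_key(puf_response: bytes) -> bytes:
    # Derive a stable key from the (error-corrected) PUF response.
    return hashlib.sha256(b"bitstream-key:" + puf_response).digest()

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHA-256 counter keystream (demo only).
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

bitstream = bytes(range(64))  # placeholder eFPGA configuration data
encrypted = keystream_xor(puf_key(b"die-A-response"), bitstream)

# Decryption succeeds only with the key derived from the same die's PUF.
print(keystream_xor(puf_key(b"die-A-response"), encrypted) == bitstream)  # True
print(keystream_xor(puf_key(b"die-B-response"), encrypted) == bitstream)  # False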

The impact of standards
The chiplet approach has seen its greatest success so far in the processor world. Standards such as the Universal Chiplet Interconnect Express (UCIe), the Open Domain-Specific Architecture (ODSA), and Compute Express Link (CXL), as well as network-on-chip (NoC) approaches, are expected to broaden that market for a variety of fabless companies in many different markets.

“The CXL protocol is really going to enable an ecosystem of solutions with accelerators and expanded memory options,” said Mark Papermaster, CTO of AMD. “Imagine that transported into a chiplet ecosystem. That’s the intent of the UCIe consortium. We believe we can leverage the same elements of the CXL protocol, but also align on the kind of physical specifications you need for chiplet interconnects. Those are certainly different than what you need for socket-to-socket connections. The chiplet approach will be the motherboard of the future. These standards will emerge, and we will align. But it won’t happen overnight. And so in the interim, it will be a few big companies putting these together. But my hope is that as we develop these standards, we will lower the barrier for others.”

There are many ways to ensure chiplets are what they are expected to be. Less obvious is how security requirements will change over time, and how a growing number of chiplet-related standards will need to be adjusted as new vulnerabilities emerge.

“There will be a lot of competition, and the person using a chiplet inherits its security propositions,” said Riscure’s Bron. “We’re seeing this with IP blocks that come from different IP vendors. Is it secure? Maybe. But in an SoC with 200 IP blocks, not all of them are secure. And wherever the weak link is, that will be exploited — most likely through a side-channel attack using fault injection.”

On top of that, there is a value proposition for security, and this is particularly evident with IoT devices. “In the IoT world, security has two different aspects,” said Thomas Rostock, Connected Secure Systems Division president at Infineon. “One is whether you care if a device is hacked or not. Is it going to cost you? Yes. The second one is, does the society care? About four or five years ago there was a botnet attack, which was the first time they didn’t use a PC. They used IP cameras and AV receivers. That means these devices also have a lot of computation power, and many of them are built on Android, so they have to be protected, as well. And that’s the critical thing. Without security, IoT is not going to work, because it’s just a matter of time until you have a big problem.”

The challenge with chiplets is that the approach adds more pieces to the puzzle, which makes minimizing the possible attack surface that much more difficult.

Weeding out problems
One clear objective is to get a tighter rein on counterfeiting, which is hardly a new problem in the chip industry. But as chips are used for more critical functions, concerns about counterfeiting are growing.

Industry insiders say there are thousands of chips available today on the gray market that purport to be the same chips causing the ongoing shortages, but they are either counterfeit or remarketed chips from dead or discarded products. In some cases, the counterfeiters have etched legitimate part numbers into the chips or included an authentication code that matches the “golden” code provided by the manufacturer.

“There are some schemes that are highly sophisticated, and it’s not until you go through the authenticity testing that you discover an anomaly that you didn’t see on the surface,” said Art Figueroa, vice president of global operations at distributor Smith & Associates. “But the biggest issues occur on those parts that have no markings, like passive components or capacitors. That’s where you have to have the other elements in your process, whether it’s decapsulation or electrical testing of some sort to authenticate the component.”

Decapsulation is done selectively, using nitric acid or some solvent to remove the outside cover in order to examine the hidden markings and compare them against golden samples. “The golden samples are sourced either direct from the manufacturer, or through an authorized distributor for that manufacturer, where you know the traceability is direct,” Figueroa said. “Having a golden sample database is of utmost value to being able to authenticate a component, especially if you’re sourcing in the open market where you may not have direct manufacturer support. When components are in demand, we grab a few, run them through our process, capturing dimensions, performing tests including X-ray, and formulating a complete test report, which we file away for future use. That information is critical.”
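
A simple sketch of that screening step might look like the following, with field names and tolerances invented for illustration; the value of the golden database is that every incoming lot can be compared mechanically against known-good measurements.

# Golden-sample records captured when known-authentic parts were processed.
GOLDEN = {"length_mm": 7.00, "width_mm": 7.00,
          "marking": "XYZ123", "supply_current_ma": 12.5}
TOLERANCE = {"length_mm": 0.05, "width_mm": 0.05, "supply_current_ma": 0.8}

def screen(measured: dict) -> list:
    # Flag any measurement outside tolerance, or any mismatched marking.
    anomalies = []
    for field, golden in GOLDEN.items():
        value = measured[field]
        if isinstance(golden, float):
            if abs(value - golden) > TOLERANCE[field]:
                anomalies.append(f"{field}: measured {value}, golden {golden}")
        elif value != golden:
            anomalies.append(f"{field}: measured {value!r}, golden {golden!r}")
    return anomalies

suspect = {"length_mm": 7.02, "width_mm": 6.80,
           "marking": "XYZ123", "supply_current_ma": 14.1}
print(screen(suspect))  # flags the width and supply current as anomalies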

Also critical is the sharing of information when something goes wrong. “If something happens in the future, especially for automotive where traceability is hugely important, you can show what was tested and whether a chiplet was in compliance,” said Cycuity’s Mliner. “That allows you to look for your problem elsewhere. Or maybe you found a flaw no one knew about and which was never tested for, and you’re upfront that no one was trying to hide anything. That’s going to be the trend going forward.”

Conclusion
Chiplets are coming, and a commercial marketplace will be part of that effort. But managing all of these different elements securely will be a continuous process that will require diligence for years to come.

“In a perfect world, we would make a catalog of chiplets, test all of them, and give them a rating for security,” said Riscure CEO Marc Witteman. “And then, once you start building your chip, you compile these chiplets. You take the best one, and you’re good to go. That’s an ideal world. We’re very far from that for a couple of reasons. One is that there’s so much development and redevelopment that a chiplet may be obsolete after a couple of years. It would need to be redesigned and updated, and new vulnerabilities will be introduced. But in addition to that, the security landscape is continuously evolving because new attacks are being discovered. At every conference we hear about 10 new attacks that weren’t known a year before. What is secure today can be very insecure tomorrow. So security is not a state. It’s a process. You need to address it every day, or someday you’re going to have a problem.”

Further Reading:
Chip Substitutions Raising Security Concerns
Lots of unknowns will persist for decades across multiple market segments.
Building Security Into ICs From The Ground Up
No-click and blockchain attacks point to increasing hacker sophistication, requiring much earlier focus on potential security risks and solutions.
Hiding Security Keys Using ReRAM PUFs
How two different technologies are being combined to create a unique and inexpensive security solution.
Verifying Side-Channel Security Pre-Silicon
Complexity and new applications are pushing security much further to the left in the design flow.
Technical papers on Security


Lenovo IdeaCentre A300 and Multimedia Keyboard review
Lenovo seems to have developed a clear two-pronged strategy: for business, it leans on the knowhow and tradition it purchased from IBM with the demure Think line, and for the consumer end, it's developed its own, oftentimes flamboyant, Idea range of computers. Prime example of the latter is the IdeaCentre A300, which features an edge-to-edge glass screen, chrome accenting aplenty, and an unhealthily thin profile. As such, it's one of the more unashamed grabs for the hearts and minds of desktop aesthetes, so we had to bring it in for a test drive and see what we could see. Lenovo also sent us one of its diminutive Multimedia Keyboard remotes to have a play around with. Follow the break for our review of both.

Lenovo Multimedia Keyboard

Pros

  • Keyboard is intelligently laid out
  • Trackball and mouse keys work well
  • Cute and compact

Cons

  • Doesn't replace a dedicated multimedia remote
  • A backlight would've made it more useful
  • Can be fiddly at times

Lenovo IdeaCentre A300

Pros

  • 1080p screen and TV tuner
  • Attractive, slimline exterior
  • Competent performance in most tasks

Cons

  • Bloatware-saddled boot time
  • Not the best build quality in the world
  • Issues with keyboard lag and sound output

Hardware and Construction
First impressions of this Lenovo all-in-one were overwhelmingly positive. Its slick and shiny exterior merited a second look even from jaded souls like us, while our unscientific polling of nearby laypersons ended with the conclusion that the A300 is "gorgeous." The asymmetric stand adds a smidgen of sophistication, and we can happily report that it handles the screen's weight with aplomb, keeping it upright in an extremely stable and reliable fashion. Considering how far off-center the chrome-covered base is, Lenovo's done a fine job to keep functionality intact while diversifying form. Limited, but we would say sufficient, tilt and swivel are on offer as well.

Going around the A300's body, you'll find a litany of ports around the back and left side, including HDMI inputs and outputs, a quartet of USB jacks, Firewire, a handy multicard reader, and a TV signal input with its own adapter coming in the box. We weren't too thrilled about the positioning of the power jack, as we came close to unplugging the juice on multiple occasions while trying to use nearby ports. This is also down to the fact that the power adapter here is of the sort used in laptops and is easier to disconnect than your typical desktop fare -- which is dandy for battery-powered portable computers, but could prove disastrous if you're working on something important and start fiddling around the back of the machine absent-mindedly. Isolating that connector from the others could've helped remedy this situation, but it's not exactly a deal breaker as it is.

Sound is output through a pair of downward-firing speakers in the IdeaCentre's base, which are covered by some gruesome orange grilles. Good thing you won't have to see them, we say. As to what you can expect in terms of aural delivery, you should use your nearest laptop for reference. Even at its highest setting, the A300 wasn't particularly loud, though to its credit that also meant it didn't garble or distort your music when pushed to its humble limits.

Plugging in a set of headphones produced a nasty surprise for us: a loud background hum was present, punctuated by intermittent buzzing, some of which was caused by our actions with the computer. This was clearly the result of the internal wiring causing interference, and Lenovo's failure to properly insulate the audio-out channel from such incursions is a major letdown. Even if we optimistically suppose this was a one-off problem with our review unit, it doesn't speak too highly of the quality control checks carried out with A300.

We had another unexpected and unpleasant discovery with the A300's keyboard: incredible as this sounds, simple text input on the A300, erm, lagged. That's to say we occasionally found our textual musings appearing on screen a good three to four seconds after punching them in. Similar behavior was exhibited when we Ctrl and W'd a few tabs in Firefox -- they hung around after our instruction, leading us to think it wasn't registered and doing it again, with the end result being that we closed more tabs than we intended to. Annoying. Our inclination, given that these were all keyboard inputs, is to suspect that the Bluetooth connection was causing the delays. Still, the underlying reason is less important than the fact we had an issue to fix with the most basic of operation on the A300.

It's a shame, really, since this spoils what's an otherwise thoroughly pleasing and sturdy keyboard. We tried hard (harder than Lenovo would appreciate) to find flex or creaks in it, but this is one well built slab of plastic. Button travel is somewhat shallow for a desktop part, but felt pretty much spot on for us. We enjoyed our time typing this review out on the A300, and were able to consistently reach 90 words per minute on our favorite typing benchmark. That's about a dozen words fewer than our typical rate, but comfortably high enough to mark this out as a highly competent button slate. The bundled mouse similarly acquitted itself well, with good traction in its scroll wheel and fine ambidextrous ergonomics.

We did manage to extract some creaks from the IdeaCentre's body, though. The ultrathin (19.8mm) display panel -- which we have to say looks like a massively enlarged white iPhone -- emits discomforting little noises when it's swiveled laterally, and has a tiny bit of flex around the chrome-addled Lenovo logo on the back. Are these things that'll ruin your experience and turn you off all-in-one computers forever? Certainly not. Most users won't have to fiddle with the stand or display at all, but the difference in build quality relative to something like Lenovo's own ThinkCentre A70z should be noted.

The display itself is actually an above average affair, in our opinion, with a lucid and well saturated picture. Stretching to a full 1,920 x 1,080 pixels, it offers plenty of real estate and we'd say its 21.5-inch size is just about the sweet spot for desktop use. We were fans of the contiguous glass front, and can definitely see the value on offer for students and the like who'd prefer to combine a TV set and computer into the smallest possible package. That does come with the caveat that vertical viewing angles are par for the LCD course (i.e. not very good), and the limited tilt available on the A300 could thwart your attempts at achieving converged technology nirvana. We must also mention that the screen here is of a highly reflective variety; it's no glossier than what you'd get on Apple's latest iMacs, but it'll cause you some grief if you have a light source directly opposite it during use.

Software and performance
We'll reiterate what we said in our A70z review: this is a Windows 7 (Home Premium 64-bit flavor) machine, and if you want the full dish on what the OS will and won't do for you, check out our comprehensive review. It merits mentioning that in spite of Lenovo slapping its Enhanced Experience label on the IdeaCentre A300 -- which is supposed to indicate the company optimized a few things under the hood to make it run faster -- bloatware and other ancillary programs slow the boot time down to a glacial 70 seconds. Hey, if Nic Cage can steal a car in less than a minute, then computers should be able to turn on in the same amount of time as well; we're not asking for too much here.

The processor inside our test unit was a 2.2GHz Intel Core 2 Duo T6600, which was long in the tooth this time last year, and positively ancient today. And yet, our experience with the A300 indicates that its inclusion here is more testament to the Intel chip's longevity than Lenovo skimping on component costs. The laptop CPU is powerful enough to run 1080p video flawlessly, and handles the mundanity of day to day computing with good humor and fitness. The 4GB memory allowance helps, while a half terabyte hard disk (formats down to 440GB) provides plenty of storage. If there's one thing we have to criticize on this spec sheet, it's the 5400RPM spindle speed on the storage unit: it showed its speed deficiency early and often. Oh, and speaking of spinning plates, there's no optical drive to be had -- an irrelevance for some, but a major downer for others who might have been contemplating turning this into their media playback station.

Operation of the A300 is on the whole extremely quiet, though the base -- which contains the majority of components -- does get warm to the touch. The only thing you might hear is the hard drive seeking, but if you want to kill two birds with one stone, slap an SSD in this machine and you'll nullify both the speed and noise disadvantages thrown up by Lenovo's default disk. On the whole, we might not recommend this as your Photoshop or 3D design rig, but regular things like web browsing, media playback, and basic productivity are handled smoothly and competently.

Multimedia Keyboard
Time to set our sights on this funny extra peripheral Lenovo shipped us with its AIO machine. The Multimedia Keyboard is a $59 accessory, working over a 2.4GHz wireless connection, that allows you to control your computer from up to 10 meters away with a keyboard, trackball, and a set of multimedia controls. Frankly, as clichéd as this might sound, we found it an irresistibly cute little peripheral. The trackball does its job, the keyboard is a tiny bit better than your typical QWERTY pad in modern smartphones, and the media buttons are laid out in a decently sensible order. On the surface then, it's just a barely above average keypad, and yet we didn't seem to stop enjoying ourselves while using it. Maybe it's because of the novelty or perhaps it's the fact it looks like a ping pong bat; whatever the appeal, the Multimedia Keyboard appears to be a classic case of a gadget that's more than the sum of its parts. We think the price tag is too steep to make this a particularly rational purchasing decision, but if you're asking if we'd like to receive one as a gift, we'd respond in the affirmative with little hesitation.

Wrap-up
In conclusion then, what we've been looking at has been a set of laptop parts exploded into a jumbo iPhone-aping screen with an asymmetric base and attention-grabbing looks. The result is pretty close to what you might expect: happy, shiny, and pretty on the outside, but flawed and mildly deficient on the inside. At $949 for the model we reviewed, we can't say the A300 represents good value. Sure, you get that TV tuner, bi-directional HDMI connectivity, and a 1080p panel, but we'd argue you would be better off purchasing each of those things individually rather than trying to compound them all in this one imperfect device. Additionally, the media repository ambition indicated by all the storage and inputs is somewhat defeated by the omission of an optical drive, which becomes much more important in a media station or HTPC candidate of this kind.

As to the Multimedia Keyboard, you should be mindful that usage scenarios are limited, because it's not good enough at what it does to replace having a dedicated keyboard or multimedia remote. That proviso aside, it's just plain fun to use and would make for a great gift -- you know, because then you won't have to think through the whole question of whether it's good value for money or not.
