Comprehensive Change Management for SoC Design
By Sunita Chulani1, Stanley M. Sutton Jr.1, Gary Bachelor2, and P. Santhanam1
1 IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 USA
2 IBM Global Business Services, PO BOX 31, Birmingham Road, Warwick CV34 5JL UK


Systems-on-a-Chip (SoC) are becoming increasingly complex, leading to corresponding increases in the complexity and cost of SoC design and development.  We propose to address this problem by introducing comprehensive change management.  Change management, which is widely used in the software industry, involves controlling when and where changes can be introduced into components so that changes can be propagated quickly, completely, and correctly.
In this paper we address two main topics:   One is typical scenarios in electronic design where change management can be supported and leveraged. The other is the specification of a comprehensive schema to illustrate the varieties of data and relationships that are important for change management in SoC design.


SoC designs are becoming increasingly complex.  Pressures on design teams and project managers are rising because of shorter times to market, more complex technology, more complex organizations, and geographically dispersed multi-partner teams with varied “business models” and higher “cost of failure.”

Current methodology and tools for designing SoC need to evolve with market demands in key areas:  First, multiple streams of inconsistent hardware (HW) and software (SW) processes are often integrated only in the late stages of a project, leading to unrecognized divergence of requirements, platforms, and IP, resulting in unacceptable risk in cost, schedule, and quality.  Second, even within a stream of HW or SW, there is inadequate data integration, configuration management, and change control across life cycle artifacts.  Techniques used for these are often ad hoc or manual, and the cost of failure is high.  This makes it difficult for a distributed team to be productive and inhibits the early, controlled reuse of design products and IP.  Finally, the costs of deploying and managing separate dedicated systems and infrastructures are becoming prohibitive.

We propose to address these shortcomings through comprehensive change management, which is the integrated application of configuration management, version control, and change control across software and hardware design.  Change management is widely practiced in the software development industry.  There are commercial change-management systems available for use in electronic design, such as MatrixOne DesignSync [4], ClioSoft SOS [2], IC Manage Design Management [3], and Rational ClearCase/ClearQuest [1], as well as numerous proprietary, “home-grown” systems.  But to date change management remains an under-utilized technology in electronic design.

In SoC design, change management can help with many problems.  For instance, when IP is modified, change management can help in identifying blocks in which the IP is used, in evaluating other affected design elements, and in determining which tests must be rerun and which rules must be re-verified. Or, when a new release is proposed, change management can help in assessing whether the elements of the release are mutually consistent and in specifying IP or other resources on which the new release depends.

More generally, change management gives the ability to analyze the potential impact of changes by tracing to affected entities and the ability to propagate changes completely, correctly, and efficiently.  For design managers, this supports decision-making as to whether, when, and how to make or accept changes.  For design engineers, it helps in assessing when a set of design entities is complete and consistent and in deciding when it is safe to make (or adopt) a new release.

In this paper we focus on two elements of this approach for SoC design.  One is the specification of representative use cases in which change management plays a critical role.  These show places in the SoC development process where information important for managing change can be gathered.  They also show places where appropriate information can be used to manage the impact of change.  The second element is the specification of a generic schema for modeling design entities and their interrelationships.  This supports traceability among design elements, allows designers to analyze the impact of changes, and facilitates the efficient and comprehensive propagation of changes to affected elements.

The following section provides some background on a survey of subject-matter experts that we performed to refine the problem definition.     


We surveyed some 30 IBM subject-matter experts (SMEs) in electronic design, change management, and design data modeling.  They identified 26 problem areas for change management in electronic design.  We categorized these as follows:

  • visibility into project status
  • day-to-day control of project activities
  • organizational or structural changes
  • design method consistency
  • design data consistency

Major themes that crosscut these included:

  • visibility and status of data
  • comprehensive change management
  • method definition, tracking, and enforcement
  • design physical quality
  • common approach to problem identification and handling

We held a workshop with the SMEs to prioritize these problems, and two emerged as the most significant:  First, the need for basic management of the configuration of all the design data and resources of concern within a project or work package (libraries, designs, code, tools, test suites, etc.); second, the need for designer visibility into the status of data and configurations in a work package.

To realize these goals, two basic kinds of information are necessary:  1) An understanding of how change management may occur in SoC design processes; 2) An understanding of the kinds of information and relationships needed to manage change in SoC design.  We addressed the former by specifying change-management use cases; we addressed the latter by specifying a change-management schema.


This section describes typical use cases in the SoC design process.  Change is a pervasive concern in these use cases—they cause changes, respond to changes, or depend on data and other resources that are subject to change.  Thus, change management is integral to the effective execution of each of these use cases. We identified nine representative use cases in the SoC design process, which are shown in Figure 1.

Figure 1.  Use cases in SoC design

In general there are four ways of initiating a project: New Project, Derive, Merge and Retarget.  New Project is the case in which a new project is created from the beginning.  The Derive case is initiated when a new business opportunity arises to base a new project on an existing design. The Merge case is initiated when an actor wants to merge configuration items during implementation of a new change management scheme or while working with teams/organizations outside of the current scheme. The Retarget case is initiated when a project is restructured due to resource or other constraints.  In all of these use cases it is important to institute proper change controls from the outset.  New Project starts with a clean slate; the other scenarios require changes from (or to) existing projects.    

Once the project is initiated, the next phase is to update the design. There are two use cases in the Update Design composite state.  New Design Elements addresses the original creation of new design elements.  These become new entries in the change-management system.  The Implement Change use case entails the modification of an existing design element (such as fixing a bug).  It is triggered in response to a change request and is supported and governed by change-management data and protocols.

The next phase, Resolve Project, consists of three use cases.  Backout is the use case by which changes that were made in the previous phase can be reversed.  Release is the use case by which a project is released for cross-functional use.  The Archive use case protects design assets by making a secure copy of the design and its environment.


The main goal of the change-management schema is to enable the capture of all information that might contribute to change management.

4.1     Overview

The schema, which is defined in the Unified Modeling Language (UML) [5], consists of several high-level packages (Figure 2).


Figure 2.  Packages in the change-management schema

The package Objects and Data defines types for design data and metadata: objects are containers for information, and data represent the information itself.  The main types of object include artifacts (such as files), features, and attributes.  The types of objects and data defined are important for change management because they represent the principal work products of electronic design: IP, VHDL and RTL specifications, floor plans, formal verification rules, timing rules, and so on.  It is changes to these things for which management is most needed.

The package Types defines types to represent the types of objects and data.  This enables some types in the schema (such as those for attributes, collections, and relationships) to be defined parametrically in terms of other types, which promotes generality, consistency, and reusability of schema elements.

Package Attributes defines specific types of attribute.  The basic attribute is just a name-value pair that is associated with an object.  (More strongly-typed subtypes of attribute have fixed names, value types, attributed-object types, or combinations of these.)  Attributes are one of the main types of design data, and they are important for change management because they can represent the status or state of design elements (such as version number, verification level, or timing characteristics).
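To make the attribute notion concrete, here is a minimal Python sketch of a basic name-value attribute and one strongly-typed subtype with a fixed name; the class names and the version-number example are illustrative assumptions, not part of the published schema:

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    """Basic attribute: a name-value pair associated with an object."""
    name: str
    value: object

class VersionNumber(Attribute):
    """Strongly-typed subtype: fixed name, constrained value type."""
    def __init__(self, value):
        if not isinstance(value, int):
            raise TypeError("version number must be an integer")
        super().__init__(name="version", value=value)

# Attributes can represent the status or state of a design element.
status = VersionNumber(3)
```

The subtype fixes the attribute name and rejects ill-typed values, which is the kind of constraint the schema's stronger attribute types are meant to express.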

Package Collections defines types of collections, including collections with varying degrees of structure, typing, and constraints.  Collections are important for change management in that changes must often be coordinated for collections of design elements as a group (e.g., for a work package, verification suite, or IP release).  Collections are also used in defining other elements in the schema (for example, baselines and change sets).

The package Relationships defines types of relationships.  The basic relationship type is an ordered collection of a fixed number of elements.  Subtypes provide directionality, element typing, and additional semantics.  Relationships are important for change management because they can define various types of dependencies among design data and resources.  Examples include the use of macros in cores, the dependence of timing reports on floor plans and timing contracts, and the dependence of test results on tested designs, test cases, and test tools.  Explicit dependency relationships support the analysis of change impact and the efficient and precise propagation of changes.
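As a rough illustration of how explicit dependency relationships enable impact analysis, the following sketch traverses a dependency graph transitively; the artifact names and graph shape are invented for the example:

```python
from collections import deque

# Reverse dependency map: artifact -> artifacts that depend on it.
# (Invented example mirroring the dependencies described in the text.)
dependents = {
    "floor_plan":      ["timing_report"],
    "timing_contract": ["timing_report"],
    "core_a":          ["test_results"],
    "timing_report":   [],
    "test_results":    [],
}

def impact_of(changed, graph):
    """Transitively collect everything affected by a change (BFS)."""
    affected, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# A change to the floor plan invalidates the timing report derived from it.
print(impact_of("floor_plan", dependents))  # {'timing_report'}
```

A real system would of course attach these edges to typed relationship objects rather than a bare dictionary, but the traversal at the heart of impact analysis is the same.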

The package Specifications defines types of data specification and definition.  Specifications specify an informational entity; definitions denote a meaning and are used in specifications.

Package Resources represents things (other than design data) that are used in design processes, for example, design tools, IP, design methods, and design engineers.  Resources are important for change management in that resources are used in the actions that cause changes and in the actions that respond to changes.  Indeed, minimizing the resources needed to handle changes is one of the goals of change management.

Resources are also important in that changes to a resource may require changes to design elements that were created using that resource (for example, when changes to a simulator may require reproduction of simulation results).

Package Events defines types and instances of events.  Events are important in change management because changes are a kind of event, and signals of change events can trigger processes to handle the change.

The package Actions provides a representation for things that are done, that is, for the behaviors or executions of tools, scripts, tasks, method steps, etc.  Actions are important for change in that actions cause change.  Actions can also be triggered in response to changes and can handle changes (such as by propagating changes to dependent artifacts).

Subpackage Action Definitions defines the type Action Execution, which contains information about a particular execution of a particular action.  It refers to the definition of the action and to the specific artifacts and attributes read and written, resources used, and events generated and handled.  Thus an action execution indicates particular artifacts and attributes that are changed, and it links those to the particular process or activity by which they were changed, the particular artifacts and attributes on which the changes were based, and the particular resources by which the changes were effected.  Through this, particular dependency relationships can be established between the objects, data, and resources.  This is the specific information needed to analyze and propagate concrete changes to artifacts, processes, and resources.
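A minimal sketch of how an action execution could record its inputs, outputs, and resources, and how pairwise dependency edges might then be derived from that record (all names here are illustrative, not taken from the schema):

```python
from dataclasses import dataclass, field

@dataclass
class ActionExecution:
    """Record of one execution of an action (e.g., a compile or test run)."""
    action: str
    reads: list = field(default_factory=list)      # artifacts/attributes read
    writes: list = field(default_factory=list)     # artifacts/attributes written
    resources: list = field(default_factory=list)  # tools, IP, engineers used

def derived_dependencies(execution):
    """Each written artifact depends on every input and resource of the action."""
    return [(out, src)
            for out in execution.writes
            for src in execution.reads + execution.resources]

run = ActionExecution(action="compile",
                      reads=["adder.vhdl"],
                      writes=["adder.fp"],
                      resources=["vhdl_compiler_v2"])
print(derived_dependencies(run))
# [('adder.fp', 'adder.vhdl'), ('adder.fp', 'vhdl_compiler_v2')]
```

Note that the resource (the compiler) appears as a dependency source too, capturing the point made below that changes to a resource may invalidate artifacts created with it.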

Package Baselines defines types for defining mutually consistent sets of design artifacts.  Baselines are important for change management in several respects: the elements in a baseline must be protected from arbitrary changes that might disrupt their mutual consistency, and the elements in a baseline must be changed in mutually consistent ways in order to evolve a baseline from one version to another.

The final package in Figure 2 is the Change package.  It defines types for representing change explicitly.  These include managed objects, which are objects with an associated change log; change logs and change sets, which are types of collection that contain change records; and change records, which record specific changes to specific objects.  A change record can include a reference to the action execution that caused the change.

The subpackage Change Requests includes types for modeling change requests and responses.  A change request has a type, description, state, priority, and owner.  It can have an associated action definition, which may be the definition of the action to be taken in processing the change request.  A change request also has a change-request history log.
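A change request with the fields listed above might be sketched as follows (a simplified illustration; the state and priority values are assumptions, and the schema's "type" field is renamed to avoid the Python builtin):

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    kind: str                 # the schema's "type" field, renamed here
    description: str
    state: str = "submitted"
    priority: str = "medium"
    owner: str = ""
    action_definition: str = None  # optional: action to take for the request
    history: list = field(default_factory=list)  # change-request history log

    def transition(self, new_state):
        """Record each state change in the request's history log."""
        self.history.append((self.state, new_state))
        self.state = new_state

cr = ChangeRequest(kind="defect",
                   description="timing violation in core X",
                   owner="engineer_a")
cr.transition("in_progress")
```

The history log gives the audit trail that the schema calls for: every state transition is recorded against the request itself.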

4.2    Example

An example of the schema is shown in Figure 3.  The clear boxes (upper part of diagram) show general types from the schema and the shaded boxes (lower part of the diagram) show types (and a few instances) defined for a specific high-level design process project at IBM.


Figure 3.  Example of change-management data

The figure shows a dependency relationship between two types of design artifact, VHDLArtifact and FloorPlannableObjects.  The relationship is defined in terms of a compiler that derives instances of FloorPlannableObjects from instances of VHDLArtifact.  Execution of the compiler constitutes an action that defines the relationship.  The specific schema elements are defined based on the general schema using a variety of object-oriented modeling techniques, including subtyping (e.g., VHDLArtifact), instantiation (e.g., Compile1), and parameterization (e.g., VHDLFloorplannableObjectsDependency).
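These three techniques can be mimicked in ordinary object-oriented code. The sketch below shows subtyping, parameterization, and instantiation for the dependency in Figure 3; only the type names come from the example, and the class bodies are invented:

```python
from typing import Generic, TypeVar

class Artifact:
    """General schema type: a container for design information."""
    def __init__(self, name):
        self.name = name

# Subtyping: project-specific artifact types specialize the general one.
class VHDLArtifact(Artifact): pass
class FloorPlannableObjects(Artifact): pass

S = TypeVar("S", bound=Artifact)
T = TypeVar("T", bound=Artifact)

class Dependency(Generic[S, T]):
    """Parameterized relationship: a directed edge between typed artifacts."""
    def __init__(self, source: S, target: T):
        self.source, self.target = source, target

# Parameterization: the project-specific dependency fixes the element types.
VHDLFloorplannableObjectsDependency = Dependency[VHDLArtifact,
                                                 FloorPlannableObjects]

# Instantiation: one concrete edge, analogous to Compile1 in the figure.
dep = VHDLFloorplannableObjectsDependency(
    VHDLArtifact("adder.vhdl"), FloorPlannableObjects("adder.fp"))
```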


Here we present an example use case, Implement Change, with details on its activities and how the activities use the schema presented in Section 4.  This use case is illustrated in Figure 4.


Figure 4.  State diagram for use case Implement Change

The Implement Change use case addresses the modification of an existing design element (such as fixing a bug).  It is triggered by a change request.  The first steps of this use case are to identify and evaluate the change request to be handled.  Then the relevant baseline is located, loaded into the engineer’s workspace, and verified.  At this point the change can be implemented.  This begins with the identification of the artifacts that are immediately affected.  Then dependent artifacts are identified and changes propagated according to dependency relationships.  (This may entail several iterations.)  Once a stable state is achieved, the modified artifacts are checked and regression tested.  Depending on test results, more changes may be required.  Once the change is considered acceptable, any learning and metrics from the process are captured and the new artifacts and relationships are promoted to the public configuration space.
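The propagate-and-iterate portion of this use case can be sketched as a worklist loop. This is a simplified illustration only; in the real process the "modify" and "verify" steps involve engineers, tools, and change-management protocols rather than a single function:

```python
def implement_change(immediately_affected, dependents, modify, verify):
    """Iterate change propagation until no newly affected artifacts remain."""
    pending = set(immediately_affected)
    modified = set()
    while pending:  # may take several iterations, as in the use case
        artifact = pending.pop()
        modify(artifact)
        modified.add(artifact)
        # Changes propagate along dependency relationships.
        pending |= set(dependents.get(artifact, [])) - modified
    return verify(modified), modified

# Invented dependency chain: an IP block feeds a core, which feeds the top.
deps = {"ip_block": ["core_a"], "core_a": ["chip_top"]}
ok, touched = implement_change(
    ["ip_block"], deps,
    modify=lambda a: None,     # stand-in for the real edit
    verify=lambda arts: True)  # stand-in for check and regression test
print(sorted(touched))  # ['chip_top', 'core_a', 'ip_block']
```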


This paper explores the role of comprehensive change management in SoC design, development, and delivery.  Based on the comments of over thirty experienced electronic design engineers from across IBM, we have captured the essential problems and motivations for change management in SoC projects. We have described design scenarios, highlighting places where change management applies, and presented a preliminary schema to show the range of data and relationships change management may incorporate.  Change management can benefit both design managers and engineers.  It is increasingly essential for improving productivity and reducing time and cost in SoC projects.


Contributions to this work were also made by Nadav Golbandi and Yoav Rubin of IBM’s Haifa Research Lab.  Much information and guidance were provided by Jeff Staten and Bernd-josef Huettl of IBM’s Systems and Technology Group. We especially thank Richard Bell, John Coiner, Mark Firstenberg, Andrew Mirsky, Gary Nusbaum, and Harry Reindel of IBM’s Systems and Technology Group for sharing design data and experiences.  We are also grateful to the many other people across IBM who contributed their time and expertise.







Sun, 26 Jun 2022

Can IBM Get Back Into HPC With Power10?

The “Cirrus” Power10 processor from IBM, which we codenamed for Big Blue because it refused to do it publicly and because we understand the value of a synonym here at The Next Platform, shipped last September in the “Denali” Power E1080 big iron NUMA machine. And today, the rest of the Power10-based Power Systems product line is being fleshed out with the launch of entry and midrange machines – many of which are suitable for supporting HPC and AI workloads as well as in-memory databases and other workloads in large enterprises.

The question is, will IBM care about traditional HPC simulation and modeling ever again with the same vigor that it has in past decades? And can Power10 help reinvigorate the HPC and AI business at IBM? We are not sure about the answer to the first question, and we got the distinct impression from Ken King, the general manager of the Power Systems business, that HPC proper was not a high priority when we spoke to him back in February about this. But we continue to believe that the Power10 platform has some attributes that make it appealing for data analytics and other workloads that need to be either scaled out across small machines or scaled up across big ones.

Today, we are just going to talk about the five entry Power10 machines, which have one or two processor sockets in a standard 2U or 4U form factor, and then we will follow up with an analysis of the Power E1050, which is a four socket machine that fits into a 4U form factor. And the question we wanted to answer was simple: Can a Power10 processor hold its own against X86 server chips from Intel and AMD when it comes to basic CPU-only floating point computing?

This is an important question because there are plenty of workloads that have not been accelerated by GPUs in the HPC arena, and for these workloads, the Power10 architecture could prove to be very interesting if IBM thought outside of the box a little. This is particularly true when considering the feature called memory inception, which is in effect the ability to build a memory area network across clusters of machines and which we have discussed a little in the past.

We went deep into the architecture of the Power10 chip two years ago when it was presented at the Hot Chip conference, and we are not going to go over that ground again here. Suffice it to say that this chip can hold its own against Intel’s current “Ice Lake” Xeon SPs, launched in April 2021, and AMD’s current “Milan” Epyc 7003s, launched in March 2021. And this makes sense because the original plan was to have a Power10 chip in the field with 24 fat cores and 48 skinny ones, using dual-chip modules, using 10 nanometer processes from IBM’s former foundry partner, Globalfoundries, sometime in 2021, three years after the Power9 chip launched in 2018. Globalfoundries did not get the 10 nanometer processes working, and it botched a jump to 7 nanometers and spiked it, and that left IBM jumping to Samsung to be its first server chip partner for its foundry using its 7 nanometer processes. IBM took the opportunity of the Power10 delay to reimplement the Power ISA in a new Power10 core and then added some matrix math overlays to its vector units to make it a good AI inference engine.

IBM also created a beefier core and dropped the core count back to 16 on a die in SMT8 mode, which is an implementation of simultaneous multithreading that has up to eight processing threads per core, and also was thinking about an SMT4 design which would double the core count to 32 per chip. But we have not seen that today, and with IBM not chasing Google and other hyperscalers with Power10, we may never see it. But it was in the roadmaps way back when.

What IBM has done in the entry machines is put two Power10 chips inside of a single socket to increase the core count, but it is looking like the yields on the chips are not as high as IBM might have wanted. When IBM first started talking about the Power10 chip, it said it would have 15 or 30 cores, which was a strange number, and that is because it kept one SMT8 core or two SMT4 cores in reserve as a hedge against bad yields. In the products that IBM is rolling out today, mostly for its existing AIX Unix and IBM i (formerly OS/400) enterprise accounts, the core counts on the dies are much lower, with 4, 8, 10, or 12 of the 16 cores active. The Power10 cores have roughly 70 percent more performance than the Power9 cores in these entry machines, and that is a lot of performance for many enterprise customers – enough to get through a few years of growth on their workloads. IBM is charging a bit more for the Power10 machines compared to the Power9 machines, according to Steve Sibley, vice president of Power product management at IBM, but the bang for the buck is definitely improving across the generations. At the very low end with the Power S1014 machine that is aimed at small and midrange businesses running ERP workloads on the IBM i software stack, that improvement is in the range of 40 percent, give or take, and the price increase is somewhere between 20 percent and 25 percent depending on the configuration.
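Taking those rough figures at face value, the bang-for-the-buck arithmetic works out as follows (a back-of-envelope sketch using the article's approximations, not IBM list prices):

```python
# Rough price/performance arithmetic from the figures quoted above:
# ~40 percent more performance at a 20 to 25 percent higher price.
perf_gain = 1.40
for price_gain in (1.20, 1.25):
    print(f"price up {price_gain:.2f}x -> perf per dollar up "
          f"{perf_gain / price_gain:.2f}x")
```

So even at the top of the quoted price increase, performance per dollar improves by somewhere in the neighborhood of 12 to 17 percent.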

Pricing is not yet available on any of these entry Power10 machines, which ship on July 22. When we find out more, we will do more analysis of the price/performance.

There are six new entry Power10 machines, the feeds and speeds of which are shown below:

For the HPC crowd, the Power L1022 and the Power L1024 are probably the most interesting ones because they are designed to only run Linux and, if they are like prior L-class machines in the Power8 and Power9 families, will have lower pricing for CPU, memory, and storage, allowing them to better compete against X86 systems running Linux in cluster environments. This will be particularly important as IBM pushes Red Hat OpenShift as a container platform for not only enterprise workloads but also for HPC and data analytics workloads that are also being containerized these days.

One thing to note about these machines: IBM is using its OpenCAPI Memory Interface, which as we explained in the past is using the “Bluelink” I/O interconnect for NUMA links and accelerator attachment as a memory controller. IBM is now calling this the Open Memory Interface, and these systems have twice as many memory channels as a typical X86 server chip and therefore have a lot more aggregate bandwidth coming off the sockets. The OMI memory makes use of a Differential DIMM form factor that employs DDR4 memory running at 3.2 GHz, and it will be no big deal for IBM to swap in DDR5 memory chips into its DDIMMs when they are out and the price is not crazy. IBM is offering memory features with 32 GB, 64 GB, and 128 GB capacities today in these machines and will offer 256 GB DDIMMs on November 14, which is how you get the maximum capacities shown in the table above. The important thing for HPC customers is that IBM is delivering 409 GB/sec of memory bandwidth per socket and 2 TB of memory per socket.
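As a sanity check on that bandwidth figure, the arithmetic below assumes 16 OMI memory channels per socket moving 8 bytes per transfer at the quoted 3.2 GHz DDR4 data rate; the channel count is our inference for the sake of the example, not an IBM specification:

```python
# Sanity check of the quoted 409 GB/sec of memory bandwidth per socket.
# Assumed configuration (our inference, not an IBM spec sheet):
channels = 16             # OMI memory channels per socket
transfer_rate_gts = 3.2   # giga-transfers per second (DDR4-3200)
bytes_per_transfer = 8    # bytes moved per channel per beat

bandwidth_gbs = channels * transfer_rate_gts * bytes_per_transfer
print(round(bandwidth_gbs, 1))  # 409.6, matching the figure in the text
```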

By the way, the only storage in these machines is NVM-Express flash drives. No disk, no plain vanilla flash SSDs. The machines also support a mix of PCI-Express 4.0 and PCI-Express 5.0 slots, and do not yet support the CXL protocol created by Intel and backed by IBM even though it loves its own Bluelink OpenCAPI interconnect for linking memory and accelerators to the Power compute engines.

Here are the different processor SKUs offered in the Power10 entry machines:

As far as we are concerned, the 24-core Power10 DCM feature EPGK processor in the Power L1024 is the only interesting one for HPC work, aside from what a theoretical 32-core Power10 DCM might be able to do. And just for fun, we sat down and figured out the peak theoretical 64-bit floating point performance, at all-core base and all-core turbo clock speeds, for these two Power10 chips and their rivals in the Intel and AMD CPU lineups. Take a gander at this:

We have no idea what the pricing will be for a processor module in these entry Power10 machines, so we took a stab at what the 24-core variant might cost to be competitive with the X86 alternatives based solely on FP64 throughput and then reckoned the performance of what a full-on 32-core Power10 DCM might be.
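For readers who want to redo that arithmetic, peak FP64 throughput is simply cores times clock times FP64 results per core per cycle. The core count, clock, and per-core figure below are illustrative assumptions for a hypothetical module, not vendor specifications:

```python
# Peak FP64 throughput: cores x clock (GHz) x FP64 results per core per
# cycle. Inputs are illustrative assumptions, not published specs.
def peak_fp64_gflops(cores, ghz, fp64_per_cycle):
    return cores * ghz * fp64_per_cycle

# A hypothetical 24-core module at 3.4 GHz retiring 16 FP64 results per
# core per cycle:
print(round(peak_fp64_gflops(24, 3.4, 16), 1))  # 1305.6 GFLOPS
```

Plugging in the published core counts, clocks, and vector widths for any of the chips discussed here reproduces the kind of peak numbers in the comparison table.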

The answer is that IBM can absolutely compete, flops to flops, with the best Intel and AMD have right now. And it has a very good matrix math engine as well, which these chips do not.

The problem is, Intel has “Sapphire Rapids” Xeon SPs in the works, which we think will have four 18-core chiplets for a total of 72 cores, but only 56 of them will be exposed because of yield issues that Intel has with its SuperFIN 10 nanometer (Intel 7) process. And AMD has 96-core “Genoa” Epyc 7004s in the works, too. Power11 is several years away, so if IBM wants to play in HPC, Samsung has to get the yields up on the Power10 chips so IBM can sell more cores in a box. Big Blue already has the memory capacity and memory bandwidth advantage. We will see if its L-class Power10 systems can compete on price and performance once we find out more. And we will also explore how memory clustering might make for a very interesting compute platform based on a mix of fat NUMA and memory-less skinny nodes. We have some ideas about how this might play out.

Mon, 11 Jul 2022, by Timothy Prickett Morgan

IBM Expands Its Power10 Portfolio For Mission Critical Applications

It is sometimes difficult to understand the true value of IBM's Power-based CPUs and associated server platforms, even for IT professionals who deploy and manage servers, and the company has written a lot about that value over the past few years. As an industry, we have become accustomed to using x86 as a baseline for comparison. If an x86 CPU has 64 cores, that becomes what we use to measure relative value in other CPUs.

But this is a flawed way of measuring CPUs and a broken system for measuring server platforms. An x86 core is different than an Arm core, which is different than a Power core. While Arm has achieved parity with x86 for some cloud-native workloads, the Power architecture is different. Multi-threading, encryption, AI enablement – many functions are designed into Power without the performance impact they impose on other architectures.

I write all this as a set-up for IBM's announcement of expanded support for its Power10 architecture. In the following paragraphs, I will provide the details of IBM's announcement and offer some thoughts on what this could mean for enterprise IT.

What was announced

Before discussing what was announced, it is a good idea to do a quick overview of Power10.

IBM introduced the Power10 CPU architecture at the Hot Chips conference in August 2020. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. Power10 is developed on the open-source Power ISA. Power10 comes in two variants – 15x SMT8 cores and 30x SMT4 cores. For those familiar with x86, SMT8 (eight threads per core) seems extreme, as does SMT4. But this is where the Power ISA is fundamentally different from x86. Power is a highly performant ISA, and the Power10 cores are designed for the most demanding workloads.
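One way to see how the two variants relate is to multiply out the hardware threads per chip, which come out the same:

```python
# Hardware threads per chip for the two Power10 variants described above.
variants = {"SMT8": (15, 8), "SMT4": (30, 4)}  # (cores, threads per core)
threads = {name: cores * smt for name, (cores, smt) in variants.items()}
print(threads)  # {'SMT8': 120, 'SMT4': 120} -- same total, different shape
```

The total thread budget is identical; the variants differ in how it is carved up between fewer, fatter cores and more, skinnier ones.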

One last note on Power10. SMT8 is optimized for higher throughput and lower computation. SMT4 attacks the compute-intensive space with lower throughput.

IBM introduced the Power E1080 in September of 2021. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. The E1080 is a system designed for mission and business-critical workloads and has been strongly adopted by IBM's loyal Power customer base.

Because of this success, IBM has expanded the breadth of the Power10 portfolio and how customers consume these resources.

The big reveal in IBM’s recent announcement is the availability of four new servers built on the Power10 architecture. These servers are designed to address customers' full range of workload needs in the enterprise datacenter.

The Power S1014 is the traditional enterprise workhorse that runs the modern business. For x86 IT folks, think of the S1014 as equivalent to the two-socket workhorses that run virtualized infrastructure. One of the things that IBM points out about the S1014 is that this server was designed with lower technical requirements. This statement leads me to believe that the company is perhaps softening the barrier for the S1014 in data centers that are not traditional IBM shops. Or maybe for environments that use Power for higher-end workloads but non-Power for traditional infrastructure needs.

The Power S1022 is IBM's scale-out server. Organizations embracing cloud-native, containerized environments will find the S1022 an ideal match. Again, for the x86 crowd – think of the traditional scale-out servers that are perhaps an AMD single socket or Intel dual-socket – the S1022 would be IBM's equivalent.

Finally, the S1024 targets the data analytics space. With lots of high-performing cores and a big memory footprint – this server plays in the area where IBM has done so well.

In addition to these platforms, IBM also introduced the Power E1050. The E1050 seems designed for big data and workloads with significant memory throughput requirements.

The E1050 is where I believe the difference in the Power architecture becomes obvious. The E1050 is where midrange starts to bump into high performance, and IBM claims 8-socket performance in this four-socket configuration. IBM says it can deliver performance for those running big data environments, larger data warehouses, and high-performance workloads. Maybe more importantly, the company claims to provide considerable cost savings for workloads that generally require a significant financial investment.

One benchmark that IBM showed was the two-tier SAP Standard app benchmark. In this test, the E1050 beat an x86, 8-socket server handily, showing a 2.6x per-core performance advantage. We at Moor Insights & Strategy didn’t run the benchmark or certify it, but the company has been conservative in its disclosures, and I have no reason to dispute it.

But the performance and cost savings are not just associated with these higher-end workloads with narrow applicability. In another comparison, IBM showed the Power S1022 performs 3.6x better than its x86 equivalent for running a containerized environment in Red Hat OpenShift. When all was added up, the S1022 was shown to lower TCO by 53%.
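To make the cost math concrete, here is a back-of-the-envelope sketch of how a per-core performance advantage can translate into fewer cores and a lower total cost. Only the 3.6x multiplier comes from the claim above; the workload sizing and relative per-core costs are made-up illustrative assumptions, not IBM's figures.

```python
# Hypothetical sizing exercise: the 3.6x figure is IBM's claimed OpenShift
# advantage; every other number here is an illustrative assumption.
x86_cores_needed = 96                 # assumed workload sizing
perf_advantage = 3.6                  # claimed per-core advantage
power_cores_needed = x86_cores_needed / perf_advantage

cost_per_x86_core = 1.0               # assumed relative cost units
cost_per_power_core = 2.0             # assume a Power core costs twice as much

x86_cost = x86_cores_needed * cost_per_x86_core
power_cost = power_cores_needed * cost_per_power_core
savings = 1 - power_cost / x86_cost   # fraction saved vs. the x86 build

print(f"Power cores needed: {power_cores_needed:.1f}, savings: {savings:.0%}")
```

Even with a core assumed to cost twice as much, the higher per-core throughput drives the total down; a real TCO comparison would also fold in licensing, power, and support.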

What makes Power-based servers perform so well in SAP and OpenShift?

The value of Power is derived both from the CPU architecture and the value IBM puts into the system and server design. The company is not afraid to design and deploy enhancements it believes will deliver better performance, higher security, and greater reliability for its customers. In the case of Power10, I believe there are a few design factors that have contributed to the performance and price/performance advantages the company claims, including:

  • Use of Differential DIMM technology to increase memory bandwidth, allowing for better performance from memory-intensive workloads such as in-memory database environments.
  • Built-in AI inferencing engines that increase performance by up to 5x.
  • Transparent memory encryption, which performs this function with no performance tax (note: AMD has had this technology for years, and Intel introduced it about a year ago).

These seemingly minor differences can add up to deliver significant performance benefits for workloads running in the datacenter. But some of this comes down to a very powerful (pardon the redundancy) core design. While x86 dominates the datacenter in unit share, IBM has maintained a loyal customer base because the Power CPUs are workhorses, and Power servers are performant, secure, and reliable for mission critical applications.

Consumption-based offerings

Like other server vendors, IBM sees the writing on the wall and has opened up its offerings to be consumed in a way that is most beneficial to its customers. Traditional acquisition model? Check. Pay as you go with hardware in your datacenter? Also, check. Cloud-based offerings? One more check.

While there is nothing revolutionary about what IBM is doing with how customers consume its technology, it is important to note that IBM is the only server vendor that also runs a global cloud service (IBM Cloud). This should enable the company to pass on savings to its customers while providing greater security and manageability.

Closing thoughts

I like what IBM is doing to maintain and potentially grow its market presence. The new Power10 lineup is designed to meet customers' entire range of performance and cost requirements without sacrificing any of the differentiated design and development that the company puts into its mission critical platforms.

Will this announcement move x86 IT organizations to transition to IBM? Unlikely. Nor do I believe this is IBM's goal. However, I can see how businesses concerned with performance, security, and TCO of their mission and business-critical workloads can find a strong argument for Power. And this can be the beginning of a more substantial Power presence in the datacenter.

Note: This analysis contains insights from Moor Insights & Strategy Founder and Chief Analyst, Patrick Moorhead.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The firm has had or currently has paid business relationships with dozens of technology companies across the industry, including IBM. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Matt Kimball, Wed, 13 Jul 2022
Quantum Computing 101: 5 Key Concepts to Understand

IBM's Q quantum computer. (Image source: IBM Research)

Even though artificial intelligence (AI) and machine learning (ML) are taking center stage in the world of emerging technologies, there's another technology that is slowly making its presence known to society – quantum computing. New quantum machines such as Google's Bristlecone chip and IBM's Q initiative are already appearing in headlines. IBM has even provided public access to an online quantum computer for research and experimentation purposes.

The science behind quantum machines dates to the early 1900s, to a German physicist named Max Planck. But experts say quantum computing has the potential to greatly enhance the technologies of today – including AI and ML algorithms – because of its ability to perform computations exponentially faster than today’s transistor-based computers.

But the workings of quantum computers can be quite a bit to untangle.

Here are five key concepts and questions regarding this unique computational machine:

1.) What Is Quantum Mechanics?

Quantum mechanics is defined as the branch of physical science concerned with the behavior of subatomic particles, waves, matter, and the energy of atoms. The term was coined by German physicist Max Born while he was conducting theoretical solid-state physics and quantum theory research in 1924. There are several unique properties of quantum mechanics, such as superposition, entanglement, collapse, and uncertainty, that factor into the application and design of quantum computers. Several related fields, like nanotechnology, structural biology, particle physics, and electronics, are also underpinned by quantum mechanics.

2.) What is Quantum Hardware?

Like traditional digital computers, quantum computers have three main components: inputs/outputs (I/O), memory, and a processor. The quantum computer’s I/O is a physical process of manipulating the states of qubits (more on those in a moment). The qubit manipulation is based on machine states that allow quanta (photonic energy) bits to propagate through the quantum computer. The qubit is the fundamental element of storing a 1, 0, or 0-1 quanta state. Multiple qubits can be grouped to make registers that assist in storing and moving large amounts of quanta data through the quantum system. Like traditional digital computers, the processor is created by using qubit logic gates. The qubit logic gates are constructed to perform complex operations within the quantum computer.

An example of a quantum logic circuit. (Image source: IBM Research)

3.) What Is a Qubit?

The quantum equivalent of a bit is called a qubit. The qubit’s quantum state can be a 1, 0, or 0-1. Qubits can be configured as registers for data storage or as processors using quantum logic gates. The combination of quantum logic gates allows the quantum computer to perform single or multiple operations based on unitary operators. Basic logic gates used in quantum computers are the Hadamard or H-gate, the X-gate, the CNOT gate, and transformation or phase gates (Z, S+, T, and T+).
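The gates named above are just small unitary matrices, and applying a gate to a qubit state is a matrix-vector product. A minimal sketch in NumPy (an idealized simulation, not tied to any real quantum hardware):

```python
import numpy as np

# Basis states |0> and |1> as amplitude vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Hadamard (H) and X gates as 2x2 unitary matrices.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# X acts like a classical NOT; H creates an equal superposition from |0>.
flipped = X @ ket0
psi = H @ ket0
print(flipped)  # the amplitudes of |1>
print(psi)      # both amplitudes equal to 1/sqrt(2)
```

Because these gates are unitary, every operation is reversible: applying H twice returns the qubit to its original state.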

A Josephson junction and its equivalent electrical circuit form the core component of a qubit.

4.) What Is Superposition?

Unlike a traditional digital computer, a quantum computer has a third state in which the qubit can be 0 and 1 simultaneously. This tertiary state is called superposition. Superposition is probabilistic: upon measurement the qubit collapses to a definite value, and the value with the highest probability is the most likely measurement outcome.

The analog equivalent of superposition is waves. A single physical disturbance can produce one wave, but additional waves can be superimposed to make one unique oscillatory pattern. With superposition, configuring qubits as registers allows new methods of computing complex problems using large data sets. AI and ML algorithms, therefore, can be processed faster using quantum superposition.
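The probabilistic nature of measurement described above can be sketched numerically: each outcome's probability is the squared magnitude of its amplitude (the Born rule), and repeated simulated measurements converge on those probabilities. A small illustrative example:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Equal superposition, e.g. the state produced by applying H to |0>.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: outcome probabilities are |amplitude|^2 and sum to 1.
probs = np.abs(psi) ** 2

# Simulate many measurements; about half yield 0 and half yield 1.
samples = rng.choice([0, 1], size=10_000, p=probs)
print(probs, samples.mean())
```

Any single measurement returns only 0 or 1; the superposition is visible only in the statistics of many runs.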

5.) What is Entanglement?

Another unique attribute of the quantum computer is the ability of two qubits to be linked without physical contact with one another. This physical link phenomenon is called entanglement. Because entangled qubits share information, data processing can occur simultaneously. Traditional digital computers must use a pipeline-fetch method to keep multiple execution processes from occurring at the same time. Because of entanglement, race conditions are not a concern with quantum computers. Although two qubits will have unique states, entanglement will allow the initial and final data bits to be simultaneously equal during long-distance transmission events.
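The canonical entangled state (a Bell pair) can be built in simulation by applying a Hadamard to the first qubit of |00> and then a CNOT, after which the two qubits' measurement outcomes are perfectly correlated. A sketch, again as an idealized NumPy simulation:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

# CNOT: flips the second qubit when the first qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket00 = np.array([1, 0, 0, 0], dtype=complex)  # two-qubit state |00>

# H on qubit 1 (tensored with identity on qubit 2), then CNOT.
bell = CNOT @ np.kron(H, I2) @ ket00
print(bell)  # (|00> + |11>) / sqrt(2): only 00 and 11 are ever observed
```

The amplitudes for |01> and |10> are exactly zero, which is why measuring one qubit of the pair fixes the outcome of the other.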

Teleportation, typically reserved for science fiction, is actually being researched as an application of entanglement that would allow long-distance data transmission.

Don Wilcher is a passionate teacher of electronics technology and an electrical engineer with 26 years of industrial experience. He’s worked on industrial robotics systems, automotive electronic modules/systems, and embedded wireless controls for small consumer appliances. He’s also a book author, writing DIY project books on electronics and robotics technologies.


IBM-PC in the Laboratory

The primary object of this manual is to build an understanding of the principles of computer operations and the use of computers in the laboratory. While the development of applications for computers has been rapid since their introduction, the principles of computer operation and their use in sensing and control have remained stable. Those are the primary subjects of this book, throughout which a gradual understanding of what goes on inside a computer is developed. The laboratory provides a vital experience in linking theory with physical reality, and all of the computer work is done in the context of doing experiments. The IBM-PC design is used as the basis for the book. The internal design of this machine is slightly more complicated than earlier personal computers, but it is still simple enough to be quickly learned. The computer can be directly controlled by proper programming, and offers considerably more power than earlier designs. The IBM design also has expansion slots which make the addition of special hardware capabilities relatively simple, and provide a great flexibility in interfacing the machine to other equipment. The book, based on courses given at Cornell University, is designed as a tutorial to be used in conjunction with laboratory work. It will be a valuable guide and reference for students who are familiar with first-year university physics and have some computing experience.

Review: RHEL 9 delivers better security, management

RHEL 9.0, the latest major release of Red Hat Enterprise Linux, delivers tighter security, as well as improved installation, distribution, and management for enterprise server and cloud environments.

The operating system, code named Plow, is a significant upgrade over RHEL 8.0 and makes it easier for application developers to test and deploy containers.

Available in server and desktop versions, RHEL remains one of the top Linux distributions for running enterprise workloads because of its stability, dependability, and robustness.

It is free for software-development purposes, but instances require registration with the Red Hat Subscription Management (RHSM) service. Red Hat, owned by IBM, provides 24X7 subscription-based customer support as well as professional integration services. With the money Red Hat receives from subscriptions, it supports other open source efforts, including those that provide upstream features that eventually end up in RHEL itself.

How can RHEL 9 fit into my environment?

RHEL 9 can be run on a variety of physical hardware, as a virtual machine on hypervisors, in containers, or as instances in Infrastructure as a Service (IaaS) public cloud services. It supports legacy x86 hardware as well as 64-bit x86_64-v2, aarch64, and ARMv8.0-A hardware architectures. RHEL 9 supports IBM Power 9, Power 10, and Z-series (z14) hardware platforms.

RHEL also supports a variety of data-storage file systems, including the common Ext4 file system, GFS2 and XFS. Legacy support for Ext2, Ext3, and vfat (FAT32) still exists.

Copyright © 2022 IDG Communications, Inc.

You Got Something On Your Processor Bus: The Joys Of Hacking ISA And PCI

Although the ability to expand a home computer with more RAM, storage, and other features has existed for as long as home computers themselves, it wasn’t until the IBM PC that the concept of a fully open and modular computer system became mainstream. Instead of being limited to a system configuration provided by the manufacturer and a few add-ons that didn’t really integrate well, the concept of expansion cards opened up whole industries as well as a big hobbyist market.

The first IBM PC had five 8-bit expansion slots that were connected directly to the 8088 CPU. With the IBM PC/AT, these expansion slots became 16-bit courtesy of the 80286 CPU it was built around. These slots could be used for anything from graphics cards to networking, expanded memory, or custom I/O. Though there was no distinct original name for this card edge interface, around the PC/AT era it came to be referred to as the PC bus, as well as the AT bus. The name Industry Standard Architecture (ISA) bus is a retronym created by PC clone makers.

With such openness came the ability to relatively easy and cheaply make your own cards for the ISA bus, and the subsequent and equally open PCI bus. To this day this openness allows for a vibrant ecosystem, whether one wishes to build a custom ISA or PCI soundcard, or add USB support to a 1981 IBM PC system.

But what does it take to get started with ISA or PCI expansion cards today?

The Cost of Simplicity

From top to bottom: 8-bit XT bus, 16-bit AT/ISA, 32-bit EISA.

An important thing to note about ISA and the original PC/AT bus is that it isn’t so much a generic bus as a description of devices hanging off an 8088 or 80286 addressing and data bus. This means, for example, that the bus originally ran at the clock speed of the CPU in question: 4.77 MHz for the original PC bus and 6-8 MHz for the PC/AT. Although 8-bit cards could be used in 16-bit slots most of the time, there was no guarantee that they would work properly.

As PC clone vendors began to introduce faster CPUs in their models, the AT bus ended up being clocked at anywhere from 10 to 16 MHz. Understandably, this led to many existing AT (ISA) bus cards not working properly in those systems. Eventually, the clock for the bus was decoupled from the processor clock by most manufacturers, but despite what the acronym ‘ISA’ suggests, at no point in time was ISA truly standardized.

There was, however, an attempt to standardize a replacement for ISA in the form of Extended ISA (EISA). Created in 1988, this featured a 32-bit bus running at 8.33 MHz. Although it didn’t take off in consumer PCs, EISA saw some uptake in the server market, especially as a cheaper alternative to IBM’s proprietary Micro Channel Architecture (MCA) bus. MCA itself was envisioned by IBM as the replacement for ISA.

Ultimately, ISA survives to this day in mostly industrial equipment and embedded applications (e.g. the LPC bus), while the rest of the industry moved on to PCI and to PCIe much later. Graphics cards saw a few detours in the form of VESA Local Bus (VLB) and Accelerated Graphics Port (AGP), which were specialized interfaces aimed at the needs of GPUs.

Getting started with new old tech

The corollary of this tumultuous history of ISA in particular is that one has to be careful when designing a new ‘ISA expansion card’. For truly wide compatibility, one could design an 8-bit card that can work with a bus speed anywhere from 4.77 to 20 MHz. Going straight to a 16-bit card would be an option if one has no need to support 8088-based PCs. When designing a PC/104 card, there should be no compatibility issues, as it follows pretty much the most standard form of the ISA bus.

The physical interface is not a problem with either ISA or PCI, as both use edge connectors. These were picked mostly because they were cheap yet reliable, which hasn’t changed today. On the PCB end, no physical connector exists, merely the conductive ‘fingers’ that mate with the contacts of the edge connector. One can use a template for this part to get good alignment with the contacts. Also keep in mind the thickness of the PCB, as the card has to make good contact. Here the common 1.6 mm seems to be a good match.

One can easily find resources for ISA and PCI design rules online if one wishes to create the edge connector oneself, such as this excellent overview on the Multi-CB (PCB manufacturer, no affiliation) site. This shows the finger spacing and the 45-degree taper on the edge, along with finger thickness and distance requirements.

Useful for the electrical circuit design part is to know that ISA uses 5 V level signaling, whereas PCI can use 5 V, 3.3 V or both. For the latter, this difference is indicated using the placement of the notch in the PCI slot, as measured from the IO plate: at 56.21 mm for 3.3 V cards and 104.47 mm for 5 V. PCI cards themselves will have either one of these notches, or both if they support both voltages (Universal card).

PCI slots exist in 32-bit and 64-bit versions, of which only the former made a splash in the consumer market. On the flip-side of PCI we find PCI-X: an evolution of PCI, which saw most use in servers in its 64-bit version. PCI-X essentially doubles the maximum frequency of PCI (66 to 133 MHz), while removing 5V signaling support. PCI-X cards will often work in 3.3V PCI slots for this reason, as well as vice-versa. A 64-bit card can fall back to 32-bit mode if it is inserted into a shorter, 32-bit slot, whether PCI or PCI-X.

Driving buses

Every device on a bus adds a load which a signaling device has to overcome. In addition, on a bus with shared lines, it’s important that individual devices can disengage themselves from these shared lines when they are not using them. The standard way to deal with this is to use a tri-state buffer, such as the common 74LS244. Not only does it provide the isolation provided by a standard digital buffer circuit, it can also switch to a Hi-Z (high-impedance) state, in which it is effectively disconnected.

In the case of our ISA card, we need to have something like the 74LS244 or its bi-directional sibling 74LS245 to properly interface with the bus. Each bus signal connection needs to have an appropriate buffer or latch placed on it, which for the ISA bus is covered in detail in this article by Abhishek Dutta. A good example of a modern-day ISA card is the ‘Snark Barker’ SoundBlaster clone.
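The behavior of a tri-state buffer on a shared line can be modeled at the logic level: each buffer either drives its input value onto the bus or presents high impedance, and at most one device may drive the line at a time. A hypothetical sketch (the function names are illustrative, not a real EDA model):

```python
HI_Z = None  # high-impedance: electrically disconnected from the bus

def tristate(value, output_enable):
    """One channel of a 74LS244-style buffer: drive when enabled, else Hi-Z."""
    return value if output_enable else HI_Z

def bus_line(drivers):
    """Resolve a shared bus line; more than one active driver is contention."""
    active = [v for v in drivers if v is not HI_Z]
    if len(active) > 1:
        raise RuntimeError("bus contention: multiple enabled drivers")
    return active[0] if active else HI_Z

# Card A drives a 1; cards B and C have disengaged from the bus.
print(bus_line([tristate(1, True), tristate(0, False), tristate(1, False)]))
```

This is exactly why the Hi-Z state matters: without it, two cards with different output values would fight over the same line.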

PCI could conceivably be done in such a discrete manner as well, but most commonly commercial PCI cards used I/O accelerator ASICs, which provide a simple, ISA-like interface to the card’s circuitry. These ICs are however far from cheap today (barring taking a risk with something like the WCH CH365), so a good alternative is to implement the PCI controller in an FPGA. The MCA version of the aforementioned ‘Snark Barker’ (as previously covered by us) uses a CPLD to interface with the MCA bus. Sites like OpenCores feature existing PCI target projects one could use as a starting point.

Chatting with ISA and PCI

After creating a shiny PCB with gold edge contact fingers and soldering some bus buffer ICs or an FPGA onto it, one still has to actually speak the ISA or PCI protocol. Fortunately, a lot of resources exist for the ISA protocol, such as this one. The PCI protocol is, like the PCIe protocol, a ‘trade secret’, and only officially available via the PCI-SIG website for a price. This hasn’t kept copies of the specification from leaking over the past decades, however.

It’s definitely possible to use existing ISA and PCI projects as a template or reference for one’s own projects. The aforementioned CPLD/FPGA projects are a way to avoid implementing the protocol oneself and just getting to the good bits. Either way, one has to use the interrupt (IRQ) system for the respective bus (dedicated signal lines, as well as message-based in later PCI versions), with the option to use DMA (DRQn & DACKn on ISA). Covering the intricacies of the ISA and PCI bus would however take a whole article by itself. For those of us who have had ISA cards with toggle switches or (worse), ISA PnP (Plug’n’Pray) inflicted on them, a lot of this should already be familiar, however.

As with any shared bus, the essential protocol when writing or reading involves requesting bus access from the bus master, or triggering the bus arbitration protocol with multiple bus masters in PCI. An expansion card can also be addressed directly using its bus address, as Abhishek Dutta covered in his ISA article, which on Linux involves using kernel routines (sys/io.h) to obtain access permissions before one can send data to a specific I/O port on which the card can be addressed. Essentially:

/* Request permission for the card's I/O port ranges (requires root).
   ioperm() returns 0 on success, so a non-zero result is an error. */
if (ioperm(OUTPUT_PORT, LENGTH + 1, 1) != 0 ||
    ioperm(INPUT_PORT, LENGTH + 1, 1) != 0) {
    perror("ioperm");
    exit(EXIT_FAILURE);
}

outb(data, port);   /* write a byte to the card */
data = inb(port);   /* read a byte back from the card */

With ISA, the I/O address is set on the card, and the address decoder on the address signal lines is used to determine a match. Often toggle switches or jumpers were used to select a specific address, IRQ, and DMA line. ISA PnP sought to improve on this process, but effectively caused more trouble. For PCI, PnP is part of the standard: the PCI bus is scanned for devices on boot, and the onboard ROM (BIOS) is queried for the card’s needs, after which the address and other parameters are set up automatically.
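The address-decoding step can be sketched in a few lines: the card's decoder compares the upper address bits against the base address selected by the jumpers, and responds only within its small I/O window. The base address and window size below are illustrative (0x220 is the well-known SoundBlaster default):

```python
BASE = 0x220   # I/O base address selected by jumpers or DIP switches
WINDOW = 16    # card claims ports BASE .. BASE + 15

def card_selected(addr):
    """Chip-select logic: match the high address bits against the base."""
    return (addr & ~(WINDOW - 1)) == BASE

# The card answers inside its window and ignores everything else.
print(card_selected(0x224), card_selected(0x330))  # True False
```

On a real card this comparison is done in hardware (comparators or a PLD) on every bus cycle; the low address bits are then decoded on-card to select individual registers.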

Wrapping up

Obviously, this article has barely even covered the essentials when it comes to developing one’s own custom ISA or PCI expansion cards, but hopefully it has at least given a broad overview of the topic. A lot of what one needs depends on the type of card one wishes to develop, whether it’s a basic 8-bit ISA (PC/XT) card, or a 64-bit PCI-X one.

A lot of the fun with buses such as ISA and PCI, however, is that they are very approachable. Their bus speeds are well within the reach of hobbyist hardware and oscilloscopes for debugging and analysis. The use of a slower parallel data bus means that no differential signaling is used, which simplifies the routing of traces.

Even though these legacy buses are not playing in the same league as PCIe, their feature set and accessibility mean that they can give old systems a new lease on life, even if it is for something as simple as adding Flash-based storage to an original IBM PC.

[Heading image: Snark Barker ISA SoundBlaster clone board. Credit: Tube Time]

Maya Posch, Sat, 09 Jul 2022
For the first time in history, we can modify atomic bonds in a single molecule

An international team of scientists has succeeded in modifying individual molecules by selectively forming and disassociating bonds between their atoms. The breakthrough will enable the creation of new molecules that were previously inconceivable, according to Spanish chemist Diego Peña, one of the leaders of the research team. “This technique is going to revolutionize chemistry,” he said. The study is featured on the cover of the prestigious Science magazine.

A molecule is simply a cluster of atoms. Water – famously known as H₂O – has two hydrogen atoms and one oxygen atom, joined by covalent bonds that share electrons. To modify molecules, scientists currently use a process likened to putting Lego blocks in a washing machine and hoping that the quintillions of molecules somehow end up assembling themselves into the desired product. This is the analogy used by Igor Alabugin and Chaowei Hu in another study published in Science. But Peña’s team instead used a state-of-the-art microscope, capable of focusing on a single molecule, one millionth of a millimeter in size, to selectively modify the molecule’s bonds using voltage pulses.

“We can now assemble atoms in previously inconceivable ways,” exults Peña, a professor with the Center for Research in Biological Chemistry and Molecular Materials (CiQUS), at the University of Santiago de Compostela in northwest Spain. Borrowing a famous line from the movie Blade Runner, Peña said, “I have seen molecules you wouldn’t believe.” The research team succeeded in creating different structures using 18 carbon and eight hydrogen atoms to form rings and other shapes, and then reverted them to the original structure. “If you were to ask chemists if some of these molecules could be synthesized, they would tell you it’s impossible, because the molecules would react with their environment and only last for a few milliseconds,” said Peña.

A molecule modified using a specialized microscope built by IBM. (Image source: Science)

Peña’s team used an advanced version of the tunneling microscope invented by IBM scientists Gerd Binnig and Heinrich Rohrer, winners of the Nobel Prize for Physics in 1986. These instruments operate at cryogenic temperatures in an ultra-high vacuum to ensure molecular stability, and are capable of imaging surfaces at the atomic level. The Peña-led team was previously featured on the cover of Science magazine in 2012 when they were the first to distinguish individual molecular bonds.

Leo Gross, a German physicist with IBM’s research laboratory in Zurich, is one of the lead authors of the research study. “Selective single-molecule reactions may enable the creation of novel, more complex, and more versatile artificial molecular machines,” said Gross, who envisions a future with better drug synthesis and delivery. “These molecular machines could perform tasks such as transporting other molecules or nanoparticles, manufacturing and manipulating nanostructures, and facilitating chemical transformations,” said Gross. But to get there, this nascent technique must first be mastered.

Physicist Leo Gross with a tunneling microscope at the IBM research laboratory in Zurich. (Image source: IBM)

Peña, Gross, and their colleagues used low-voltage electrical pulses to manipulate a molecule (composed of 18 carbon and eight hydrogen atoms - C₁₈H₈) and created three distinct three-dimensional structures. Using the same technique, the configuration of the molecule can be changed over and over again, hundreds of times, to see if the result will react with other molecules. In their Science article, Alabugin and Hu liken this technique to “a Swiss Army knife for surface chemistry.”

IBM’s Zurich Research Laboratory builds the sophisticated microscopes that scientists like Peña use to solve challenging chemistry problems, such as analyzing molecules in meteorites. “Classical techniques needed several million molecules for detection. With this new technique, the detection threshold is now only a minuscule piece of a single molecule,” said Peña.

The cover of the July 14 issue of ‘Science,’ featuring the research study led by Diego Peña. (Image source: Science)

Chemists from the University of Santiago de Compostela and physicists from IBM have also researched the molecular structure of asphaltenes, solid components of petroleum that clog pipelines and are known as “the cholesterol of oil refineries.” When asphaltenes clump together to form a blockage, refinery operations must be halted so it can be removed. “We can analyze the structure of asphaltenes to help develop additives that prevent these molecules from clumping together in pipelines,” said Peña. His research consortium, which includes the University of Regensburg (Germany), received a $9 million grant from the European Research Council two years ago.

Diego Peña was in Madrid recently at the farewell concert of one of his favorite musical groups, Siniestro Total. Known for their irreverent lyrics that often take digs at science, the band from Galicia (northwest Spain) sang one of their anthems: “What is being? / What is essence? / What is nothingness? / What is eternity? / Are we soul? / Are we matter?” Peña mused on the matter that makes up human beings and everything else. “It’s very important for society to understand the value of basic research – it’s knowledge for knowledge’s sake. I want to be able to control how atoms are assembled. Why is that useful? Well, it’s useful for everything, because molecules and atoms make up everything.” The many applications of this science, such as creating new molecules, have yet to be envisioned. We’ll just have to wait and see, says Peña. “Obviously, we’re not going to cure cancer overnight.”

Fri, 15 Jul 2022 05:11:00 -0500
Building A Strong Cloud Security Posture

Timothy Liu is the CTO and co-founder of Hillstone Networks.

The rush to the cloud is on, with industry analyst firm Gartner predicting a nearly 50% growth rate in public cloud spending from 2020 to this year. The vast majority of companies now run at least some of their workloads in the cloud. It’s easy to see why—the public cloud offers the flexibility, scalability, resilience and rapid implementation that allow workloads to be deployed, decommissioned and adapted on the fly to accommodate changing business requirements.

Yet too often, cloud security has lagged behind the rapid growth in cloud adoption. For example, a recent survey by the Cloud Security Alliance (CSA) found that nearly 60% of respondents named network security as a key concern in cloud adoption. These fears are amplified by multiple headline-making cloud data breaches like those involving Kaseya, Accenture, Verizon and others.

Achieving a strong cloud security posture, however, requires a subtle transformation from traditional data security measures to methods that provide comprehensive visibility across cloud assets, with the ability to accurately identify potential threats and orchestrate defenses across multiple security resources.

Why Traditional Security Falls Short

Traditional data security architectures typically establish a perimeter, or network edge, as the leading point of defense. But in cloud architectures, the perimeter is somewhat amorphous—and even more so in the increasingly popular multicloud environments—making a perimeter-based security strategy next to impossible to achieve.

In addition, conventional security measures simply don’t adapt well to cloud environments. They lack the ability to automatically scale up and down along with workloads and may not support containers. Deployment and management are labor-intensive and difficult to achieve in the dynamic cloud environment. And, perhaps most importantly, discrete traditional security devices usually can’t communicate with each other, leaving potential security gaps.

A Roadmap To Cloud Security

Rather than relying on a perimeter, cloud security, by necessity, must adopt a data-centric approach. At the most basic level, identity and access management (IAM) must be applied and strictly enforced. Equally important is guarding against faulty configuration of the cloud infrastructure. Together, compromised credentials and cloud misconfiguration account for over 30% of malicious cloud breaches, according to research by IBM and the Ponemon Institute (pg. 9). These two factors are also prominently represented in the major cloud data breaches mentioned earlier.
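As a rough illustration of the misconfiguration checks discussed above, a configuration audit can be reduced to rule evaluation over a declarative resource description. The field names and rules below are invented for the sketch and do not correspond to any provider's actual schema:

```python
# Minimal sketch of a cloud misconfiguration check. The config fields
# (public_access, encryption_at_rest, access_logging) are hypothetical.

def audit_bucket(config: dict) -> list[str]:
    """Return a list of findings for a storage-bucket configuration."""
    findings = []
    if config.get("public_access", False):
        findings.append("bucket allows public access")
    if not config.get("encryption_at_rest", False):
        findings.append("encryption at rest is disabled")
    if not config.get("access_logging", False):
        findings.append("access logging is disabled")
    return findings

risky = {"public_access": True, "encryption_at_rest": False, "access_logging": True}
print(audit_bucket(risky))
# → ['bucket allows public access', 'encryption at rest is disabled']
```

Real CWPP products apply hundreds of such rules against live cloud inventory, but the core idea is the same: compare declared state against a compliance baseline and report the gaps.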

Public cloud providers typically provide basic tools for IAM and configuration, but a relatively new class of products called cloud workload protection platforms, or CWPPs, can provide next-level cloud cybersecurity protections. CWPPs can scan for infrastructure misconfigurations, for example, as well as assure that compliance baselines are met. Along with threat and risk detection, CWPP services can span public and private clouds as well as VMs, containers, cloud-native applications and other resources.

A complementary strategy is ZTNA, or zero-trust network architecture, to address IAM. The core of ZTNA is a “never trust, always verify” mantra that eliminates unverified implicit trust and thereby enhances security across the entire network—including the cloud, data center, network, remote workers and other assets. Ideally, ZTNA will apply a user-to-application approach (not network-centric) to authenticate based on identity, context and resources requested—which allows much easier scaling in a fluid cloud environment.
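The "never trust, always verify" decision described above amounts to a default-deny policy evaluated per request against identity, context, and the specific resource requested. The policy table and field names below are invented for illustration:

```python
# Illustrative zero-trust access decision. Policies and fields are
# hypothetical, not any real ZTNA product's schema.

POLICIES = [
    {"role": "engineer", "resource": "ci-dashboard", "require_mfa": True},
    {"role": "analyst",  "resource": "reports-app",  "require_mfa": True},
]

def authorize(identity: dict, context: dict, resource: str) -> bool:
    """Grant access only when a policy explicitly matches; deny otherwise."""
    for policy in POLICIES:
        if (policy["role"] == identity.get("role")
                and policy["resource"] == resource
                and (not policy["require_mfa"] or context.get("mfa_passed"))):
            return True
    return False  # default-deny: no implicit trust

print(authorize({"role": "engineer"}, {"mfa_passed": True}, "ci-dashboard"))   # True
print(authorize({"role": "engineer"}, {"mfa_passed": False}, "ci-dashboard"))  # False
```

The key property is the final `return False`: anything not explicitly permitted is refused, which is what eliminates the implicit trust of perimeter-based models.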

Taking Cloud Security To The Next Level

While compromised credentials and cloud misconfiguration account for the majority of malicious breaches, adopting a layered cloud security approach can help defend against other threats and attacks. For example, micro-segmentation of east-west traffic between cloud resources can help prevent malicious lateral movements by malware. Sometimes called micro-isolation, this solution will continuously optimize itself to ensure assets are protected—and botnets and other threats can’t proliferate.
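Conceptually, micro-segmentation is an explicit allow-list over east-west flows: only declared workload-to-workload paths are permitted, so a compromised workload cannot reach arbitrary peers. A toy sketch with invented segment names:

```python
# Micro-segmentation as an east-west allow-list. Segment names are
# hypothetical; real products derive these policies from observed traffic.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier"),
    ("app-tier", "db-tier"),
}

def flow_permitted(src_segment: str, dst_segment: str) -> bool:
    """Permit only explicitly declared segment-to-segment flows."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(flow_permitted("web-tier", "app-tier"))  # True: declared path
print(flow_permitted("web-tier", "db-tier"))   # False: lateral movement blocked
```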

For north-south traffic, virtual next-gen firewalls and web application firewalls provide a strong defense for public-facing assets like cloud applications, web servers and APIs. These technologies can help prevent DoS and DDoS attacks, major security risks like the OWASP Top 10, and other threats like web page defacement.

Finally, extended detection and response (XDR) can span the entire security stack, including cloud, network, data center and endpoints, to provide comprehensive visibility, accurate threat identification and coordinated, automatic response. XDR solutions intake data from other security devices, standardize and correlate it, then use artificial intelligence methods to investigate and detect potential threats. Once a threat is identified, the XDR solution can orchestrate the appropriate security response across other security devices for comprehensive, coordinated protection.
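The intake-normalize-correlate pipeline described above can be sketched in miniature: ingest events from different security layers, map them to a common shape, then flag hosts that trip signals across multiple layers. The field names and the two-source threshold are invented for the example:

```python
# Toy XDR-style correlation: normalize events from different sources,
# then flag hosts with signals from at least two security layers.
from collections import defaultdict

def normalize(event: dict) -> dict:
    # Different sources name the host field differently.
    host = event.get("host") or event.get("hostname") or event.get("src")
    return {"host": host, "source": event["source"], "signal": event["signal"]}

def correlate(events: list[dict], min_sources: int = 2) -> list[str]:
    """Return hosts that raised signals from at least `min_sources` layers."""
    by_host = defaultdict(set)
    for ev in map(normalize, events):
        by_host[ev["host"]].add(ev["source"])
    return sorted(h for h, srcs in by_host.items() if len(srcs) >= min_sources)

events = [
    {"source": "endpoint", "signal": "suspicious-process", "hostname": "vm-7"},
    {"source": "network",  "signal": "beaconing",          "src": "vm-7"},
    {"source": "endpoint", "signal": "suspicious-process", "hostname": "vm-9"},
]
print(correlate(events))  # → ['vm-7']
```

Production XDR adds machine-learned scoring and automated response on top, but cross-layer correlation of normalized telemetry is the foundation.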


To achieve a strong cloud security posture, security teams must transform from a traditional, perimeter-based security mindset to a data-oriented mentality and layered security approach. The most critical need is to protect against compromised credentials and misconfiguration of cloud infrastructure. However, the ultimate goal should be to gain comprehensive visibility, accurate threat identification and an automatic, immediate and coordinated defense against attacks and other threats.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

Mon, 11 Jul 2022 23:05:00 -0500 Tim Liu
Network Automation Market to Observe Exponential Growth From 2022 to 2030 | Cisco, Juniper Networks, IBM

New Jersey, United States – Network automation is the practice of automating the configuration, management, testing, deployment, and operation of physical and virtual devices within a network. With it, software automatically configures, provisions, manages, and tests network devices. Enterprises and service providers use network automation to improve efficiency and reduce human error and operating costs. Network automation tools range from basic network mapping and device discovery to more complex workflows such as network configuration management and the provisioning of virtual network resources. Network automation plays an essential role in software-defined networking, network virtualization, and network orchestration, enabling automated provisioning of virtual network tenants and functions, such as virtual load balancing. Its core benefits include improved efficiency, a reduced likelihood of human error, and lower operational costs.
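One core network automation task, generating per-device configurations from a single template instead of hand-editing each device, can be sketched in a few lines. The template syntax and device inventory below are invented for illustration and are not tied to any real vendor's CLI:

```python
# Template-driven configuration generation. The config grammar and the
# device inventory are hypothetical examples.

TEMPLATE = (
    "hostname {name}\n"
    "interface {uplink}\n"
    " ip address {ip} 255.255.255.0\n"
)

DEVICES = [
    {"name": "edge-sw1", "uplink": "Gi0/1", "ip": "10.0.1.2"},
    {"name": "edge-sw2", "uplink": "Gi0/1", "ip": "10.0.2.2"},
]

def render_configs(devices):
    """Render one configuration text per device from the shared template."""
    return {d["name"]: TEMPLATE.format(**d) for d in devices}

for name, cfg in render_configs(DEVICES).items():
    print(f"--- {name} ---\n{cfg}")
```

Real tools layer validation, version control, and automated push onto this rendering step, which is what removes the per-device manual work and the errors that come with it.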

Several open-source technology developers have built effective open-source network monitoring solutions, some of which are considered better than their more expensive commercial counterparts. The availability of both open-source and economical solutions is restraining the market for commercial network automation solutions, mainly because SMEs have budget constraints and prefer open-source network automation tools. However, some providers are trying to capitalize on open-source solutions by releasing commercial upgrades of these products. These upgrades are paid versions of the tools with enhanced capabilities such as automated download of software patches, support, and maintenance.
There has been increasing adoption of advanced technologies such as Artificial Intelligence (AI) and Machine Learning (ML), Software-Defined Wide Area Networks (SD-WAN), 5G and Wi-Fi 6, and edge computing, which help improve network performance and fix network problems (at times, even before they arise). Network automation is a critical step for organizations implementing a networking solution that grows smarter, is responsive, and continuously adapts to and protects the network.

Receive the sample Report of Network Automation Market Research Insights 2022 to 2030 @

Networking companies are investing heavily in the R&D of networking solutions, with a focus on long-term value creation. Despite fluctuations in short-term business performance and financial results, leading networking companies have not reduced their investment in the innovation and testing of network automation solutions. For instance, networking companies such as Forward Networks have come up with Intent-Based Networking solutions. Intent-Based Networking takes business policy as its input, converts the requirement into a network configuration, and generates network designs.
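The intent-to-configuration translation described above can be illustrated with a toy compiler: a declarative intent goes in, concrete per-device steps come out. The intent vocabulary and the generated actions below are invented for the sketch and do not reflect any vendor's actual product:

```python
# Hedged sketch of intent-based networking: compile a high-level intent
# into device-level steps. All names here are hypothetical.

def compile_intent(intent: dict) -> list[dict]:
    """Turn a declarative intent into per-device configuration steps."""
    steps = []
    if intent.get("isolate"):
        for dev in intent["devices"]:
            steps.append({"device": dev, "action": "apply-acl",
                          "acl": f"deny-{intent['segment']}"})
    if intent.get("min_bandwidth_mbps"):
        for dev in intent["devices"]:
            steps.append({"device": dev, "action": "set-qos",
                          "guarantee_mbps": intent["min_bandwidth_mbps"]})
    return steps

intent = {"segment": "pci", "devices": ["sw1", "sw2"], "isolate": True,
          "min_bandwidth_mbps": 100}
print(len(compile_intent(intent)))  # → 4  (an ACL and a QoS step per device)
```

The value of the approach is that operators state *what* the network should guarantee, and the compilation step, not a human, works out *which* settings on *which* devices achieve it.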

The modern, globally connected digital world needs business applications, data, and services to be continuously available from any location, which means networks must span multiple hosting environments, fixed and mobile devices, and other kinds of IT infrastructure. However, just as networks are a key enabler for the enterprise, they are also a source of increased risk. Hackers and cybercriminals are continuously spawning new network attacks to compromise, steal, or destroy essential data and disrupt businesses for their own benefit. Distributed denial of service (DDoS), phishing, ransomware, worms, and other kinds of malware attacks can hit the network.

The network automation market is segmented based on component, deployment model, enterprise size, industry vertical, and region. By component, the market is bifurcated into solutions and services. The solutions segment is further sub-segmented into network automation tools, SD-WAN & network virtualization, and intent-based networking. The services segment is subdivided into managed services and professional services. By deployment mode, the market is segmented into on-premise and cloud. By enterprise size, the market is classified into large enterprises and small and medium-sized enterprises. By industry vertical, the market is categorized into BFSI, retail & e-commerce, IT & telecommunication, manufacturing, healthcare, and others. By region, the market is analyzed across North America, Europe, Asia-Pacific, and LAMEA (Latin America, Middle East, and Africa).

North America is anticipated to hold the largest share of the global network automation market in 2020. In the region, enterprises and service providers are constantly upgrading their network infrastructure to accommodate advanced technologies. North America is home to many technological innovators. Most of the leading market players, including Cisco, IBM, Juniper Networks, and NetBrain, have their headquarters in this region. These players provide powerful network automation solutions worldwide and possess a large customer base.

The network automation market comprises key solution and service providers, including Cisco, Juniper Networks, IBM, Micro Focus, NetBrain, Forward Networks, SolarWinds, VMware, BMC Software, Anuta Networks, Apstra, BlueCat, Entuity, Veriflow, Riverbed, Itential, Volta Networks, Sedona Systems, Kentik, SaltStack, NetYCE, Versa Networks, AppViewX, BackBox, and 128 Technology.

The following are some of the reasons why you should obtain a Network Automation market report:

  • The paper looks at how the Network Automation industry is likely to develop in the future.
  • Using Porter’s five forces analysis, it investigates several perspectives on the Network Automation market.
  • This Network Automation market study examines the product type that is expected to dominate the market, as well as the regions that are expected to grow the most rapidly throughout the projected period.
  • It identifies recent advancements, Network Automation market shares, and important market participants’ tactics.
  • It examines the competitive landscape, including significant firms’ Network Automation market share and the growth strategies they have adopted over the last five years.
  • The research includes complete company profiles for the leading Network Automation market players, including product offers, important financial information, current developments, SWOT analysis, and strategies.

Download the Full Index of the Network Automation Market Research Report 2022

Contact Us:
Amit Jain
Sales Co-Ordinator
International: +1 518 300 3575
Email: [email protected]

Thu, 09 Jun 2022 01:21:00 -0500 Newsmantraa