Amazon, IBM Move Swiftly on Post-Quantum Cryptographic Algorithms Selected by NIST

A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google also outlined an aggressive implementation plan for its cloud service, building on work it began a decade ago.

It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in two. Google contributed to one of the submitted algorithms, SPHINCS+.

A long process that began in 2016 with 69 original candidates has ended with the selection of four algorithms that will become NIST standards, which will play a critical role in protecting encrypted data from the vast power of quantum computers.

NIST's four choices include CRYSTALS-Kyber, a public-private key-encapsulation mechanism (KEM) for general asymmetric encryption, such as when connecting to websites. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.
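In code, a KEM like Kyber exposes three operations: key generation, encapsulation (the sender derives a shared secret plus a ciphertext from the recipient's public key), and decapsulation (the recipient recovers the same secret using the private key). The sketch below mirrors only that interface shape; the hash-based construction is a deliberately toy illustration, not Kyber, and offers no security.

```python
import hashlib
import secrets

# Toy key-encapsulation mechanism (KEM) that mirrors the three-operation
# interface of schemes like CRYSTALS-Kyber. The construction itself is
# NOT secure and is NOT Kyber -- it only illustrates the API shape.

def keygen():
    # A real KEM derives the public key from hard lattice problems;
    # here we simply hash the private key so both sides can agree.
    sk = secrets.token_bytes(32)
    pk = hashlib.sha256(b"pk" + sk).digest()
    return pk, sk

def encapsulate(pk):
    # Sender: pick randomness, send it as the "ciphertext", and derive
    # a 32-byte shared secret bound to the recipient's public key.
    ct = secrets.token_bytes(32)
    shared = hashlib.sha256(pk + ct).digest()
    return ct, shared

def decapsulate(sk, ct):
    # Recipient: re-derive the public key, then the same shared secret.
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ct).digest()

pk, sk = keygen()
ct, sender_secret = encapsulate(pk)
receiver_secret = decapsulate(sk, ct)
assert sender_secret == receiver_secret  # both sides now share a 32-byte key
```

In TLS-style usage, that shared secret would then feed a key schedule to encrypt the session; the point of the NIST selections is to make the encapsulation step hard to break even for a quantum computer.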

Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and FALCON. Lyubashevsky was predictably pleased by the algorithms selected, but he had only anticipated NIST would pick two digital signature candidates rather than three.

Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."

IBM's New Mainframe Supports NIST-Selected Algorithms

After NIST identified the algorithms, IBM moved forward by specifying them into its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST-selected algorithms.

IBM had championed three of the algorithms NIST selected and had already built them into the z16, which it unveiled before the NIST decision. Last week, the company made the z16's support for the algorithms official.

Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTALS-Kyber and Dilithium, according to Dames.

"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."

A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classic computing systems and quantum computers.

"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."

Dames noted that clients might use Dilithium to generate digital signatures on documents. "Think about code signing servers, things like that, or document signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.
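SPHINCS+, one of the selected signature schemes, is hash-based. The simplest illustration of the hash-based signing idea is the classic Lamport one-time signature, sketched below. It is an ancestor of SPHINCS+, not the scheme itself, and each key pair may only ever sign one message.

```python
import hashlib
import secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def lamport_keygen():
    # Private key: 256 pairs of random 32-byte preimages.
    # Public key: the SHA-256 hash of each preimage.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(sha256(a), sha256(b)) for a, b in sk]
    return sk, pk

def lamport_sign(sk, message: bytes):
    digest = sha256(message)
    # For each bit of the message digest, reveal one preimage of the pair.
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def lamport_verify(pk, message: bytes, signature) -> bool:
    digest = sha256(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(sha256(sig) == pk[i][bit]
               for i, (bit, sig) in enumerate(zip(bits, signature)))

sk, pk = lamport_keygen()
sig = lamport_sign(sk, b"firmware-build-1337")
assert lamport_verify(pk, b"firmware-build-1337", sig)
assert not lamport_verify(pk, b"tampered-build", sig)
```

Because security rests only on the hash function, such schemes are believed to resist quantum attack; SPHINCS+ layers many one-time keys into a tree so a single public key can sign many messages.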

AWS Engineers Algorithms Into Services

During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.

During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS had implemented an open source, hybrid post-quantum key exchange in s2n-tls, AWS' open source implementation of the Transport Layer Security (TLS) protocol used across different AWS services. AWS has contributed the hybrid key exchange as a draft standard to the Internet Engineering Task Force (IETF).

Salter explained that the hybrid key exchange combines AWS' traditional key exchanges with post-quantum security. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a post-quantum key exchange on top of it. And using both of those, you have traditional security, plus post-quantum security."
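The combination step can be sketched concretely. In common hybrid designs (for example, the IETF hybrid key-exchange draft), the classical and post-quantum shared secrets are concatenated and fed through a key-derivation function, so the session key stays safe as long as either component resists attack. The HKDF-style helper below is a simplified sketch, not s2n-tls's actual key schedule.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): condense input keying material into a PRK.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # HKDF-Expand (RFC 5869): stretch the PRK into `length` output bytes.
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Concatenate both shared secrets so the derived session key is safe
    # as long as EITHER the classical or the post-quantum exchange holds.
    prk = hkdf_extract(b"hybrid-salt", classical_secret + pq_secret)
    return hkdf_expand(prk, b"tls hybrid session key", 32)

key = hybrid_session_key(b"\x01" * 32, b"\x02" * 32)
assert len(key) == 32
```

If a future cryptanalytic advance breaks the post-quantum component, the classical elliptic-curve secret still protects the session, and vice versa; that is the whole point of deploying the two in tandem during the transition.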

Last week, Amazon announced that it had deployed hybrid post-quantum TLS with CRYSTALS-Kyber in s2n-tls for connections to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon extended that support to AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.

Google's Decade-Long PQC Migration

While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.

"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."

Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.

Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.

Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."

Other Standards Efforts

The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.

"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like that," IBM's Dames said.

Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our NIST process, as well as participating in that. And we see that as a very good sign."

Thu, 04 Aug 2022
Explainable AI Is Trending And Here's Why

According to the 2022 IBM Institute for Business Value study on AI Ethics in Action, building trustworthy Artificial Intelligence (AI) is perceived as a strategic differentiator and organizations are beginning to implement AI ethics mechanisms.

Seventy-five percent of respondents believe that ethics is a source of competitive differentiation. More than 67% of respondents who view AI and AI ethics as important indicate that their organizations outperform their peers in sustainability, social responsibility, and diversity and inclusion.

The survey showed that 79% of CEOs are prepared to embed AI ethics into their AI practices, up from 20% in 2018, but less than a quarter of responding organizations have operationalized AI ethics. Less than 20% of respondents strongly agreed that their organization's practices and actions match (or exceed) their stated principles and values.

Peter Bernard, CEO of Datagration, says that understanding AI gives companies an advantage, but Bernard adds that explainable AI allows businesses to optimize their data.

"Not only are they able to explain and understand the AI/ML behind predictions, but when errors arise, they can understand where to go back and make improvements," said Bernard. "A deeper understanding of AI/ML allows businesses to know whether their AI/ML is making valuable predictions or whether they should be improved."

Bernard believes this can ensure incorrect data is spotted early on and stopped before decisions are made.

Avivah Litan, vice president and distinguished analyst at Gartner, says that explainable AI also furthers scientific discovery as scientists and other business users can explore what the AI model does in various circumstances.

"They can work with the models directly instead of relying only on what predictions are generated given a certain set of inputs," said Litan.

But John Thomas, vice president and distinguished engineer at IBM Expert Labs, says that, at its most basic level, explainable AI is the set of methods and processes that help us understand a model's output. "In other words, it's the effort to build AI that can explain to designers and users why it made the decision it did based on the data that was put into it," said Thomas.

Thomas says there are many reasons why explainable AI is urgently needed.

"One reason is model drift. Over time as more and more data is fed into a given model, this new data can influence the model in ways you may not have intended," said Thomas. "If we can understand why an AI is making certain decisions, we can do much more to keep its outputs consistent and trustworthy over its lifecycle."
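One common way to catch drift in practice is to compare the distribution of an input feature in recent production data against the training-time baseline, for instance with the Population Stability Index (PSI). The sketch below is a minimal illustration; the 0.1/0.25 thresholds are an industry rule of thumb, not a standard.

```python
import math

def psi(baseline, recent, bins=5):
    """Population Stability Index between a training-time sample and
    recent production values of one feature. Rule of thumb: < 0.1 means
    little shift, > 0.25 signals drift worth investigating."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = int((x - lo) / width)
            counts[min(max(i, 0), bins - 1)] += 1  # clamp out-of-range values
        # Floor at a tiny value so the log term below is always defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    b = bucket_fractions(baseline)
    r = bucket_fractions(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

baseline = [x / 10 for x in range(100)]       # feature values seen in training
drifted = [x / 10 + 5.0 for x in range(100)]  # production data shifted upward
assert psi(baseline, baseline) < 1e-9
assert psi(baseline, drifted) > 0.25          # drift alarm fires
```

A monitoring job might compute the PSI of each feature nightly and flag the model for retraining or review when any feature crosses the drift threshold.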

Thomas adds that at a practical level, we can use explainable AI to make models more accurate and refined in the first place. "As AI becomes more embedded in our lives in more impactful ways, [..] we're going to need not only governance and regulatory tools to protect consumers from adverse effects, we're going to need technical solutions as well," said Thomas.

"AI is becoming more pervasive, yet most organizations cannot interpret or explain what their models are doing," said Litan. "And the increasing dependence on AI escalates the impact of mis-performing AI models with severely negative consequences."

Bernard takes it back to a practical level, saying that explainable AI [..] creates proof of what senior engineers and experts "know" intuitively while simultaneously explaining the reasoning behind it. "Explainable AI can also take commonly held beliefs and prove that the data does not back it up," said Bernard.

"Explainable AI lets us troubleshoot how an AI is making decisions and interpreting data is an extremely important tool in helping us ensure AI is helping everyone, not just a narrow few," said Thomas.

Hiring is an example of where explainable AI can help everyone.

Thomas says hiring managers deal with all kinds of hiring and talent shortages and usually get more applications than they can read thoroughly. This means there is a strong demand to be able to evaluate and screen applicants algorithmically.

"Of course, we know this can introduce bias into hiring decisions, as well as overlook a lot of people who might be compelling candidates with unconventional backgrounds," said Thomas. "Explainable AI is an ideal solution for these sorts of problems because it would allow you to understand why a model rejected a certain applicant and accepted another. It helps you make your model better."
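A concrete, model-agnostic way to get that kind of answer is permutation importance: shuffle one input feature across applicants and measure how much the model's accuracy drops. Features whose shuffling hurts most are the ones driving decisions, including any it shouldn't be relying on. The screening model and feature names below are hypothetical, purely for illustration.

```python
import random

# Hypothetical resume-screening model: a weighted score over three
# made-up features. Purely illustrative, not any real screener.
WEIGHTS = {"years_experience": 0.7, "skills_match": 1.0, "typo_count": -0.2}

def screen(candidate: dict) -> bool:
    return sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS) > 5.0

def permutation_importance(candidates, labels, feature, trials=20, seed=0):
    """Mean accuracy drop when `feature` is shuffled across candidates."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(screen(r) == y for r, y in zip(rows, labels)) / len(rows)

    base = accuracy(candidates)
    total_drop = 0.0
    for _ in range(trials):
        values = [c[feature] for c in candidates]
        rng.shuffle(values)
        shuffled = [dict(c, **{feature: v})
                    for c, v in zip(candidates, values)]
        total_drop += base - accuracy(shuffled)
    return total_drop / trials

rng = random.Random(1)
candidates = [{"years_experience": rng.randint(0, 10),
               "skills_match": rng.randint(0, 5),
               "typo_count": rng.randint(0, 5)} for _ in range(50)]
labels = [screen(c) for c in candidates]
imp_skills = permutation_importance(candidates, labels, "skills_match")
imp_typos = permutation_importance(candidates, labels, "typo_count")
assert imp_skills >= imp_typos  # skills drive decisions far more than typos
```

Run against a real screening model, the same measurement would reveal whether a sensitive or spurious feature is quietly dominating accept/reject decisions.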

Making AI trustworthy

IBM's AI Ethics survey showed that 85% of IT professionals agree that consumers are more likely to choose a company that's transparent about how its AI models are built, managed and used.

Thomas says explainable AI is absolutely a response to concerns about understanding and being able to trust AI's results.

"There's a broad consensus among people using AI that you need to take steps to explain how you're using it to customers and consumers," said Thomas. "At the same time, the field of AI Ethics as a practice is relatively new, so most companies, even large ones, don't have a Head of AI ethics, and they don't have the skills they need to build an ethics panel in-house."

Thomas believes it's essential that companies begin thinking about building those governance structures. "But there's also a need for technical solutions that can help companies manage their use of AI responsibly," said Thomas.

Driven by industry, compliance or everything?

Bernard points to the oil and gas industry as why explainable AI is necessary.

"Oil and gas have [..] a level of engineering complexity, and very few industries apply engineering and data at such a deep and constant level like this industry," said Bernard. "From the reservoir to the surface, every aspect is an engineering challenge with millions of data points and different approaches."

Bernard says in this industry, operators and companies still utilize spreadsheets and other home-grown systems built decades ago. "Utilizing ML enables them to take siloed knowledge, improve it and create something transferrable across the organization, allowing consistency in decision making and process."

"When oil and gas companies can perform more efficiently, it is a win for everyone," said Bernard. "The companies see the impact in their bottom line by producing more from their existing assets, lowering environmental impact, and doing more with less manpower."

Bernard says this leads to more supply to help ease the burden on demand. "Even modest increases like 10% improvement in production can have a massive impact in supply, the more production we have [..] consumers will see relief at the pump."

But Litan says the trend toward explainable AI is mainly driven by regulatory compliance.

A 2021 Gartner survey, AI in Organizations, found that regulatory compliance is the top reason privacy, security and risk are seen as barriers to AI implementation.

"Regulators are demanding AI model transparency and proof that models are not generating biased decisions and unfair 'irresponsible' policies," said Litan. "AI privacy, security and/or risk management starts with AI explainability, which is a required baseline."

Litan says Gartner sees the biggest uptake of explainable AI in regulated industries like healthcare and financial services. "But we also see it increasingly with technology service providers that use AI models, notably in security or other scenarios," said Litan.

Litan adds that another reason explainable AI is trending is that organizations are unprepared to manage AI risks and often cut corners around model governance. "Organizations that adopt AI trust, risk and security management – which starts with inventorying AI models and explaining them – get better business results," adds Litan.

But IBM's Thomas doesn't think you can parse the uptake of explainable AI by industry.

"What makes a company interested in explainable AI isn't necessarily the industry they're in; it's that they're invested in AI in the first place," said Thomas. "IT professionals at businesses deploying AI are 17% more likely to report that their business values AI explainability. Once you get beyond exploration and into the deployment phase, explaining what your models are doing and why quickly becomes very important to you."

Thomas says that IBM sees some compelling use cases in specific industries starting with medical research.

"There is a lot of excitement about the potential for AI to accelerate the pace of discovery by making medical research easier," said Thomas. "But, even if AI can do a lot of heavy lifting, there is still skepticism among doctors and researchers about the results."

Thomas says explainable AI has been a powerful solution to that particular problem, allowing researchers to embrace AI modeling to help them solve healthcare-related challenges because they can refine their models, control for bias and monitor the results.

"That trust makes it much easier for them to build models more quickly and feel comfortable using them to inform their care for patients," said Thomas.

IBM worked with Highmark Health to build a model using claims data to model sepsis and COVID-19 risk. But again, Thomas adds that because it's a tool for refining and monitoring how your AI models perform, explainable AI shouldn't be restricted to any particular industry or use case.

"We have airlines who use explainable AI to ensure their AI is doing a good job predicting plane departure times. In financial services and insurance, companies are using explainable AI to make sure they are making fair decisions about loan rates and premiums," said Thomas. "This is a technical component that will be critical for anyone getting serious about using AI at scale, regardless of what industry they are in."

Guard rails for AI ethics

What does the future look like with AI ethics and explainable AI?

Thomas says the hope is that explainable AI will spread and see adoption because that will be a sign companies take trustworthy AI, both the governance and the technical components, very seriously.

He also sees explainable AI as essential guardrails for AI Ethics down the road.

"When we started putting seatbelts in cars, a lot more people started driving, but we also saw fewer and less severe accidents," said Thomas. "That's the obvious hope - that we can make the benefits of this new technology much more widely available while also taking the needed steps to ensure we are not introducing unanticipated consequences or harms."

One of the most significant factors working against the adoption of AI and its productivity gains is the genuine need to address concerns about how AI is used, what types of data are being collected about people, and whether AI will put them out of a job.

But Thomas says that worry is contrary to what’s happening today. "AI is augmenting what humans can accomplish, from helping researchers conduct studies faster to assisting bankers in designing fairer and more efficient loans to helping technicians inspect and fix equipment more quickly," said Thomas. "Explainable AI is one of the most important ways we are helping consumers understand that, so a user can say with a much greater degree of certainty that no, this AI isn't introducing bias, and here's exactly why and what this model is really doing."

One tangible example IBM uses is AI Factsheets in its IBM Cloud Pak for Data. IBM describes the factsheets as "nutrition labels" for AI, listing the types of data and algorithms that make up a particular model in the same way a food item lists its ingredients.
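Structurally, such a factsheet is just standardized metadata attached to a model. The sketch below shows one plausible shape; the field names and example values are illustrative and are not IBM's actual FactSheets schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelFactsheet:
    # Field names are illustrative, not IBM's actual FactSheets schema.
    name: str
    intended_use: str
    algorithms: list
    training_datasets: list
    fairness_checks: dict = field(default_factory=dict)

    def to_label(self) -> str:
        # Render the "nutrition label" as a human-readable report.
        return "\n".join(f"{k}: {v}" for k, v in asdict(self).items())

sheet = ModelFactsheet(
    name="sepsis-risk-v2",
    intended_use="Flag inpatients at elevated sepsis risk for clinician review",
    algorithms=["gradient-boosted trees"],
    training_datasets=["claims-2019-2021 (de-identified)"],
    fairness_checks={"disparate_impact_ratio": 0.93},
)
print(sheet.to_label())
```

Keeping this record alongside the model, and updating it on every retrain, is what lets a consumer or auditor see at a glance what went into a prediction.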

"To achieve trustworthy AI at scale, it takes more than one company or organization to lead the charge,” said Thomas. “AI should come from a diversity of datasets, diversity in practitioners, and a diverse partner ecosystem so that we have continuous feedback and improvement.”

Wed, 27 Jul 2022, by Jennifer Kite-Powell
7 Quantum Computing Stocks to Buy for the Next 10 Years

[Editor’s note: “7 Quantum Computing Stocks to Buy for the Next 10 Years” was previously published in August 2020. It has since been updated to include the most relevant information available.]

Quantum computing has long been a concept stuck in the theory phase. Using quantum mechanics to create a class of next-generation quantum computers with nearly unlimited computing power remained out of reach.

But quantum computing is starting to hit its stride. Recent breakthroughs in this emerging field — such as IBM's (IBM) 100-qubit quantum chip – are powering quantum computing forward. Over the next several years, this space will go from theory to reality. And this transition will spark huge growth in the global quantum computing market.

The investment implication?

It’s time to buy quantum computing stocks.

Quantum Computing’s Transformational Power

At scale, quantum computing will disrupt every industry in the world, from finance to biotechnology, cybersecurity and everything in between.

It will improve the way medicines are developed by simulating molecular processes. It will reduce energy loss in batteries via optimized routing and design, thereby allowing for hyper-efficient electric car batteries. In finance, it will speed up and augment portfolio optimization, risk modeling and derivatives creation. In cybersecurity, it will disrupt the way we go about encryption. It will create superior weather forecasting models, unlock advancements in autonomous vehicle technology and help humans fight climate change.
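Many of these optimization problems are handed to quantum hardware in one common input format: QUBO (quadratic unconstrained binary optimization), where the goal is a 0/1 vector minimizing x^T Q x. The brute-force solver below is a classical sketch for intuition only; real instances are far too large for enumeration, which is exactly where quantum annealers are hoped to help.

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force the binary vector x minimizing sum of Q[i,j]*x[i]*x[j].
    Q maps (i, j) index pairs to coefficients. Runtime is 2^n -- fine for
    toy instances, hopeless at the scales quantum annealers target."""
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_energy = None, float("inf")
    for x in product((0, 1), repeat=n):
        energy = sum(c * x[i] * x[j] for (i, j), c in Q.items())
        if energy < best_energy:
            best_x, best_energy = x, energy
    return best_x, best_energy

# Toy two-asset portfolio: diagonal terms reward holding each asset,
# the off-diagonal term penalizes holding both (correlated risk).
Q = {(0, 0): -2.0, (1, 1): -2.0, (0, 1): 3.0}
x, energy = solve_qubo(Q)
assert energy == -2.0  # the best choice holds exactly one asset
```

Encoding a real portfolio means mapping expected returns onto the diagonal and pairwise risk onto the off-diagonal terms; the hardware's job is then just to find the low-energy bit vector.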

I’m not kidding when I say quantum computing will change everything.

And as this next-gen computing transforms the world, quantum computing stocks will be big winners over the next decade.

So, with that in mind, here are seven of those stocks to buy for the next 10 years:

Quantum Computing Stocks to Buy: Alphabet

Among the various quantum computing stocks to buy for the next 10 years, the best is probably Alphabet (GOOG, GOOGL) stock.

Its Google AI Quantum is built on the back of a state-of-the-art 54-qubit processor dubbed Sycamore, and many consider this to be the leading quantum computing project in the world. That reputation rests largely on the fact that, in late 2019, Sycamore performed a calculation in 200 seconds that would have taken the world's most powerful supercomputers 10,000 years.

This achievement led Alphabet to claim that Sycamore had reached quantum supremacy. What does this mean? Well, that’s the point when a quantum computer can perform a task in a relatively short amount of time that no other supercomputer could in any reasonable amount of time.

Many have since debated whether or not Alphabet has indeed reached quantum supremacy.

But that’s somewhat of a moot point.

The reality is that Alphabet has built the world’s leading quantum computer. The engineering surrounding it will only get better. And so will Sycamore’s computing power. And through its Google Cloud business, Alphabet can turn Sycamore into a market-leading quantum-computing-as-a-service business with huge revenues at scale.

To that end, GOOG stock is one of the best quantum computing stocks to buy today for the next 10 years.

International Business Machines

The other “big dog” that closely rivals Alphabet in the quantum computing space is IBM.

IBM has been big in the quantum computing space for years. But Big Blue has attacked this space in a fundamentally different way than its peers.

That is, other quantum computing players like Alphabet have chased quantum supremacy. But IBM has shunned that idea in favor of building on something the company calls the “quantum advantage.”

Ostensibly, the quantum advantage really isn’t too different from quantum supremacy. The former deals with a continuum focused on making quantum computers perform certain tasks faster than traditional computers. The latter deals with a moment focused on making quantum computers permanently faster at all things than traditional computers.

But it’s a philosophical difference with huge implications. By focusing on building the quantum advantage, IBM is specializing its efforts into making quantum computing measurably useful and economic in certain industry verticals for certain tasks.

In so doing, IBM is creating a fairly straightforward go-to-market strategy for its quantum computing services in the long run.

IBM’s realizable, simple, tangible approach makes it one of the most sure-fire quantum computing stocks to buy today for the next 10 years.

Quantum Computing Stocks: Microsoft

Another big tech player in the quantum computing space with promising long-term potential is Microsoft (MSFT).

Microsoft already has a huge infrastructure cloud business, Azure. Building on that foundation, Microsoft has launched Azure Quantum. It’s a quantum computing business with potential to turn into a huge QCaaS business at scale.

Azure Quantum is a secure, stable and open ecosystem, serving as a one-stop shop for quantum computing software and hardware.

The bull thesis here is that Microsoft will lean into its already-huge Azure customer base to cross-sell Azure Quantum. Doing so will give Azure Quantum a long runway for widespread early adoption. And that's the first step in turning Azure Quantum into a huge QCaaS business.

And it helps that Microsoft’s core Azure business is absolutely on fire right now.

Putting it all together, quantum computing is simply one facet of the much broader Microsoft enterprise cloud growth narrative. That narrative will remain robust for the next several years. And it will continue to support further gains in MSFT stock.

Quantum Computing

The most interesting, smallest and potentially most explosive quantum computing stock on this list is Quantum Computing (QUBT).

And the bull thesis is fairly simple.

Quantum computing will change everything over the next several years. But the hardware is expensive. It likely won’t be ready to deliver measurable benefits at reasonable costs to average customers for several years. So, Quantum Computing is building a portfolio of affordable quantum computing software and apps that deliver quantum computing power. And they can be run on traditional legacy supercomputers.

In so doing, Quantum Computing is hoping to fill the affordability gaps. It aims to become the widespread, low-cost provider of accessible quantum computing software for companies that can’t afford full-scale hardware.

Quantum Computing has begun to commercialize this software, namely with QAmplify, its suite of QPU-expansion software technologies, offered through three products currently in beta. According to William McGann, the company's chief operating and technology officer:

“The use of our QAmplify algorithm in the 2021 BMW Group Quantum Computing Challenge for vehicle sensor optimization provided proof of performance by expanding the effective capability of the annealer by 20-fold, to 2,888 qubits.”

Quantum Computing’s products will likely start signing up automaker, financial, healthcare and government customers to long-term contracts. Those early signups could be the beginning of thousands for Quantum’s services over the next five to 10 years.

You could really see this company go from zero to several hundred million dollars in revenue in the foreseeable future.

If that happens, QUBT stock — which has a market capitalization of $78 million today — could soar.

Quantum Computing Stocks: Alibaba

Like others in this space, Alibaba’s (BABA) focused on creating a robust QCaaS arm to complement its already-huge infrastructure-as-a-service business.

In short, Alibaba is the leading public cloud provider in China. Indeed, Alibaba Cloud owns about 10% of the global IaaS market. Alibaba intends to leverage this leadership position to cross-sell quantum computing services to its huge existing client base. And eventually, it hopes to become the largest QCaaS player in China, too.

Will it work?


The Great Tech Wall of China will prevent many on this list from participating in or reaching scale in China. Alibaba does have some in-country quantum computing competition. But this isn’t a winner-take-all market. And given Alibaba’s enormous resource advantages, it’s highly likely that it becomes a top player in China’s quantum computing market.

That’s just another reason to buy and hold BABA stock for the long haul.


Baidu

The other big Chinese tech company diving head-first into quantum computing is Baidu (BIDU).

The company launched its own quantum computing research center in 2018. According to its website, the goal of this research center is to integrate quantum computing into Baidu’s core businesses.

If so, that means Baidu's goal for quantum computing diverges from the norm. Others in this space want to build out quantum computing power to sell it as a service to third parties. Baidu wants to build out quantum computing power to, at least initially, improve its own operations.

Doing so will pay off in a big way for the company.

Baidu’s core search and advertising businesses could markedly Excellerate with quantum computing. Advancements in computing power could dramatically Excellerate its search algorithms and ad-targeting techniques and power its profits higher.

And thanks to its early research into quantum computing, BIDU stock does have healthy upside.

Quantum Computing Stocks: Intel

Last — but not least — on this list of quantum computing stocks to buy is Intel (INTC).

Intel may be falling behind competitors — namely Advanced Micro Devices (AMD) — on the traditional CPU front. But the semiconductor giant is on the cutting edge of creating potential quantum CPU candidates.

Intel’s newly announced Horse Ridge cryogenic control chip is widely considered the market’s best quantum CPU candidate out there today. The chip includes four radio frequency channels that can control 128 qubits. That’s more than double Tangle Lake, Intel’s predecessor quantum CPU.

The big idea, of course, is that when quantum computers are built at scale, they will likely be built on Intel’s quantum CPUs.

Therefore, potentially explosive growth in the quantum computing hardware market over the next five to 10 years represents a huge, albeit speculative, growth catalyst for both Intel and INTC stock.

On the date of publication, Luke Lango did not have (either directly or indirectly) any positions in the securities mentioned in this article.

Fri, 08 Jul 2022 04:45:00 -0500
Cloud salespeople can make $185,000 in base salary. We asked recruiters and execs at Amazon, IBM, and Google how to break into these roles.
  • Leaders at Amazon Web Services, Google Cloud, SAP, and IBM shared how to break into cloud sales.
  • Certifications like AWS Certified Cloud Practitioner can help candidates stand out.
  • Candidates should understand a company's cloud products and be team players.

As more businesses move to cut costs by ditching on-site data centers, demand for cloud computing services continues to boom. That's good news for salespeople, who are in demand as customers look to make the switch. 

Demand for cloud services is set to grow another 20% this year alone, with spending due to hit $494 billion worldwide, according to management consultancy Gartner.

With so many customers up for grabs, top cloud providers Amazon Web Services, Microsoft's Azure, and Google Cloud are duking it out for market share and top-tier sales talent.

California-based sales representatives at Amazon Web Services can make $132,000 to $185,000, while California-based sales engineers at Google Cloud can make $151,000 to $186,000, according to US work-visa data.

Insider previously reported how major firms were fighting to attract the very best salespeople in the business, be it by revamping their compensation packages or poaching talent from rival firms.

Data previously compiled by Insider revealed cloud sales reps at the likes of Google and AWS can earn as much as $185,000 per year.

Recruiters and executives at the major cloud players gave their biggest tips for bagging a high-paying job in cloud sales.

Experience doesn't count for everything in cloud sales

Salespeople often help customers make decisions on how to adopt and use the cloud. They'll also need a working understanding of the cloud, and AWS's cloud practitioner certifications can provide that, says Kevin Kelly, director of cloud career training programs at AWS Training and Certification.

Cloud salespeople focus on selling the cloud directly, or products built on the cloud, so they need to explain to customers why these services are valuable and why they should be interested in building applications on the cloud.

A certification can help do that, Kelly says, and the AWS Certified Cloud Practitioner qualification can help them stand out. "I think a curiosity and a willingness to learn and an enthusiasm for the space is a must," Kelly said. 

Michiel Verhoeven, managing director of SAP's UK and Ireland division, said while having a basic understanding of the firm's products is important, sales is "a team sport."

"We have relevant experts available to support and onboard new candidates," he said. "We look for born influencers who can make a real impact on the world by helping major companies transform the way they do business."

Over at Google Cloud, recruiters sift through "hundreds of thousands of resumes every year," according to strategy and operations VP Kelly Ducourty, but there's no reason less experienced candidates shouldn't apply.

"There's no one company or industry we recruit from," Ducourty told Insider. "This ensures we bring in unique perspectives and backgrounds to the team.

"Regardless of your level of experience, we're always looking for smart candidates who exhibit an ability to take ownership, navigate ambiguity, have a passion for excellence, and a focus on driving the best outcome for the customer."

Your interview isn't necessarily the last chance for success

At Google Cloud, candidates will generally go through a four-stage interview process, Ducourty said, covering role-related knowledge, leadership skills, cognitive ability, and how good a fit they are for the company's internal culture.

"Even if someone is not a perfect fit for the role, if they perform well in the interviews, they can re-enter the talent pool for other positions," she said.

"During the interview process, be yourself and stay true to who you are. There's a reason you've already gotten as far as you have," she added. "Authenticity will take you far."

At SAP, the interview process is "very much dependent on the level of the role," said Verhoeven. "It can be a simple two-step process after the initial screening, rising to a slightly longer three- to four-step process for experienced hires."

He continued: "If someone is passionate about a career with purpose, where you have the chance to make a real difference in the world, and they're outcomes-focused, willing to work collaboratively and recognize that their success is reliant on customer success — we encourage that person to apply."

David Hewitt, executive director at IBM, gives prospective candidates three key pieces of advice: "Firstly, think about what shaped you when you were growing up, and be ready to share how that continues to influence you today in the workplace.

"Second, demonstrate how you can work and win as a team; IBM-ers don't tend to reach the peak of their success through individual contribution alone," he said. "Finally, be prepared to work hard and be rewarded both personally and professionally."

Tue, 05 Jul 2022 22:36:00 -0500
CardinalOps Shortlisted for Best Security Innovation Category in 2022 SaaS Awards

TEL-AVIV, Israel and BOSTON, July 29, 2022 /PRNewswire/ -- CardinalOps, the AI-powered detection engineering company, has been shortlisted in the 2022 SaaS Awards program in the Best Security Innovation category alongside other security leaders including Mandiant, Nokia, Aqua Security, and Dashlane. Previous winners in this category have included SIEMplify, Kenna Security, and Vectra.

CardinalOps (PRNewsfoto/CardinalOps)

CardinalOps uses AI-powered automation to address some of the biggest complexity headaches that organizations have in managing SIEMs in their Security Operations Centers (SOCs) – including Splunk, Microsoft Sentinel, and IBM QRadar – without requiring them to walk away from the significant investments they've made in their existing security stacks.

Now in its seventh year of celebrating software innovation, the Software Awards program accepts entries worldwide, including the US, Canada, Australasia, EMEA and UK.

Head of operations for the SaaS Awards, James Williams, said: "Innovative technologies have always driven industry forward, and having disrupted the software business, SaaS continues to mature as a key driver for sustained improvement across manifold verticals. The shortlisted candidates announced have proven to be truly innovative thinkers in the SaaS industry, whether they're freshly-funded disruptors or established names."

Michael Mumcuoglu, CEO and co-founder at CardinalOps, said: "SIEMs are essential elements of the SOC, but they're also complex to configure, continuously update with detections for the latest threats, and ensure that detections are actually working as intended. Compounding these challenges is the scarcity of detection engineering talent. We're proud to be recognized for helping our customers improve the ROI and effectiveness of their existing SOC tools with continuous, analytics-driven automation to address these challenges. We must continue to innovate to stay ahead of our adversaries."

SaaS Awards finalists will be announced on Tuesday 23 August 2022, with the ultimate category winners announced on Tuesday 13 September 2022.

 To view the full shortlist, please visit:

About CardinalOps
CardinalOps maximizes your MITRE ATT&CK coverage for the latest threats and eliminates hidden detection gaps you may not even know you have. Its SOC automation platform delivers new detection content for your existing SIEM – leveraging a proprietary knowledge base of 5K+ best practice detections, automatically customized to your environment – and continuously audits your instance to ensure detections and data sources are functioning as intended, while boosting your detection engineering team's productivity 10x compared to manual processes.
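The coverage-gap idea described above can be illustrated with a toy set comparison. This is a hedged sketch, not CardinalOps' actual platform logic; the technique IDs, detection names, and target list are invented for illustration:

```python
# Toy illustration of detection-coverage gap analysis: map each deployed
# SIEM detection to the MITRE ATT&CK technique it covers, then diff that
# against a target coverage list. All IDs and names here are examples.
target_techniques = {"T1059", "T1078", "T1110", "T1486"}
deployed_detections = {
    "brute_force_alert": "T1110",      # maps to Brute Force
    "powershell_exec_alert": "T1059",  # maps to Command and Scripting Interpreter
}

covered = set(deployed_detections.values())
gaps = sorted(target_techniques - covered)
print(gaps)  # techniques with no detection mapped to them
```

A real platform layers continuous auditing and environment-specific customization on top of this basic diff, but the underlying question is the same: which techniques have no working detection?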

Founded in 2020, CardinalOps is led by serial entrepreneurs whose previous companies were acquired by Palo Alto Networks, HP, Microsoft Security, IBM Security, and others. The company's advisory board includes Dr. Anton Chuvakin, Security Advisor in the Office of the CISO at Google Cloud; Dan Burns, former Optiv CEO and founder of Accuvant; and Randy Watkins, CTO of Critical Start. For more information, please visit

About the SaaS Awards
The SaaS Awards is a sister program to the Cloud Awards, which was founded in 2011. The SaaS Awards focuses on recognizing excellence and innovation in software solutions. Categories range from Best Enterprise-Level SaaS to Best UX or UI Design in a SaaS Product.

About the Cloud Awards
The Cloud Awards is an international program which has been recognizing and honoring industry leaders, innovators and organizational transformation in cloud computing since 2011. The awards are open to large, small, established and start-up organizations from across the entire globe, with an aim to find and celebrate the pioneers who will shape the future of the Cloud as we move into 2023 and beyond. Categories include the Software as a Service award, Most Promising Start-Up, and "Best in Mobile" Cloud Solution.

Winners are selected by a judging panel of international industry experts. For more information about the Cloud Awards and SaaS Awards, please visit

Contact details

For CardinalOps
Nathaniel Hawthorne for CardinalOps
Lumina Communications
(661) 965-0407
[email protected] 

For the SaaS Awards
James Williams – head of operations
[email protected]


SOURCE CardinalOps


Fri, 29 Jul 2022 00:20:00 -0500
IBM Watson Health’s Dramatic Expansion

Cognitive computing powerhouse IBM Watson Health is adding novel offerings and entering new agreements in an array of healthcare arenas.

This week, IBM Watson Health announced a slew of solutions and partnerships aimed at improving healthcare decision making and delivery. The announcement, which was released during a major gathering for the health IT community--the annual Healthcare Information and Management Systems Society (HIMSS) conference--covers offerings focused on value-based care, medical imaging, and population health.

"Healthcare organizations are operating in a complex and fluctuating business environment, one in which the insights they need to succeed can be hidden amidst an avalanche of disparate and siloed data," Deborah DiSanzo, general manager of IBM Watson Health, said in a press release. She added, "Watson Health's extensive industry expertise informs how we deploy data, cloud, and cognitive computing to help clients make more informed decisions today and understand precisely what their organization should address to achieve their quality care goals and outcomes in a value-based care system."

Among the new products are the IBM Watson Health Value-Based Care solutions. Applications set to be released later in 2017 include solutions that help track and forecast value-based care performance indicators, monitor patient engagement, and customize analytics, as well as tools that can help pinpoint areas of high cost.

IBM also unveiled IBM Watson Imaging Clinical Review, choosing to focus first on aortic stenosis. The offering is designed to alert clinicians to patients who may have aortic stenosis but haven't been identified as a candidate for cardiovascular follow up care, according to a press release. The platform is expected to eventually be expanded to nine more cardiovascular diseases, including cardiomyopathy, deep vein thrombosis, heart attacks, among others.

"Out of the gate, this type of cognitive tool may provide big benefits to hospitals and doctors, providing insights we don't currently have and doing so in a way that fits how we work," Ricardo Cury, MD, director of cardiac imaging at Baptist Health of South Florida and chairman and CEO of Radiology Associates of South Florida, said in the release.

Cury's institutions are among the new members of the Watson Health medical imaging collaborative, focused on optimizing the applications of medical imaging. IBM announced in the imaging release that there are now 24 organizations in the collaborative.

Another agreement announced the same day will bring the more than 2000 healthcare providers of the Central New York Care Collaborative (CNYCC) onto a population health platform run on the Watson Health Cloud. The effort aims to cut Medicaid costs and preventable emergency room visits, as well as cut hospital readmissions by 25%, according to a news release.

"Central New York is leading the way for a national movement toward an effective, scalable patient-centric approach to population health management and value-based care," Anil Jain, MD, FACP, vice president and chief health informatics officer, Value-Based Care at IBM Watson Health, said in the release.

IBM Watson Health also signed on to an agreement with a healthcare organization, Atrius Health. The collaboration will center around more information that can be used to facilitate shared decision making and improve delivery of patient care in the eastern Massachusetts region covered by the healthcare organization.

"Atrius Health is committed to increasing the joy in the practice of medicine for our clinicians and staff," Steve Strongwater, MD, president and CEO of Atrius Health, said in a news release. "Working with IBM Watson Health offers a unique opportunity to help our Atrius Health clinicians make greater use of the mountains of digitalized information generated daily through our care of patients."

Of course, there's more to these new collaborations than the exciting potential of the technology. Forbes reported recently that a partnership between IBM Watson and MD Anderson has been paused, highlighting the important role that project management and finances--in addition to the technology--play in the success of such a joint effort.

IBM Watson Health already has an impressive list of collaborations under its belt, including with Quest Diagnostics, Medtronic, Johnson & Johnson, and Memorial Sloan Kettering. More partnerships seem likely, as IBM also debuted its Watson Health Consulting Services unit and new features for its Watson Platform for Health Cloud this week. These offerings echo IBM Watson Health's other priorities, with a continued focus on quicker, better insights, improved patient care, and value-based care.

"The launch of the new Watson Health Consulting Services unit is about helping our clients transform healthcare, in quality, improved access, patient satisfaction and lower cost in the cognitive healthcare era," Matt Porta, vice president and partner for IBM Watson Health Consulting Group, said in the release.


Tue, 26 Jul 2022 12:00:00 -0500
IBM hopes a new error mitigation technique will help it get to quantum advantage

For a long time, the quantum computing industry avoided talking about "quantum advantage" or "quantum supremacy," the point where quantum computers can solve problems that would simply take too long to solve on classical computers. To some degree, that's because the industry wanted to avoid the hype that comes with those terms, but IBM today revived the topic by detailing how it plans to use a novel error mitigation technique to chart a path toward running the increasingly large circuits it'll take to reach this goal -- at least for a certain set of algorithms.

It's no secret that quantum computers hate nothing more than noise. Qubits are fickle things, after all, and the smallest change in temperature or vibration can make them decohere. There's a reason the current era of quantum computing is associated with "noisy intermediate-scale quantum (NISQ) technology."

The engineers at IBM and every other quantum computing company are making slow but steady strides toward reducing that noise on the hardware and software level, with IBM's 65-qubit systems from 2020 now showing twice the coherence time compared to when they first launched, for example. The coherence time of IBM's transmon superconducting qubits is now over 1 ms.

But IBM is also taking another approach, betting on new error mitigation techniques dubbed probabilistic error cancellation and zero-noise extrapolation. At a very basic level, you can almost think of this as the quantum equivalent of the active noise cancellation in your headphones. The system regularly samples the noise in the hardware and then essentially inverts those noisy circuits, enabling it to produce virtually error-free results.
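To make one of these techniques concrete, here is a minimal sketch of zero-noise extrapolation in plain Python. It assumes a toy noise model (an exponential damping of the measured expectation value) rather than real hardware data: the observable is measured at deliberately amplified noise levels, and a polynomial fit is extrapolated back to the zero-noise limit.

```python
import numpy as np

def zero_noise_extrapolate(noise_factors, expectations, degree=2):
    """Fit a polynomial to expectation values measured at amplified
    noise levels, then evaluate the fit at zero noise."""
    coeffs = np.polyfit(noise_factors, expectations, degree)
    return float(np.polyval(coeffs, 0.0))

# Toy stand-in for hardware runs: the ideal expectation value is 1.0,
# and noise damps the measured value exponentially with the noise scale.
ideal = 1.0
factors = np.array([1.0, 1.5, 2.0, 2.5, 3.0])  # noise amplification factors
measured = ideal * np.exp(-0.3 * factors)      # simulated noisy measurements

estimate = zero_noise_extrapolate(factors, measured)
raw = float(measured[0])  # what we'd report with no mitigation at all

# The extrapolated estimate lands much closer to the ideal value than
# the unmitigated measurement does.
assert abs(estimate - ideal) < abs(raw - ideal)
```

The trade-off the article describes shows up even in this sketch: every extra noise level means re-running the circuit, which is where the sampling overhead comes from.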

IBM has now shown that this isn't just a theoretical possibility but actually works in its existing systems. One disadvantage here is that there is quite a bit of overhead when you constantly sample these noisy circuits, and that overhead is exponential in the number of qubits and the circuit depth. But that's a trade-off worth making, argues Jerry Chow, the director of Hardware Development for IBM Quantum.

"Error mitigation is about finding ways to deal with the physical errors in certain ways, by learning about the errors and also just running quantum circuits in such a way that allows us to cancel them," explained Chow. "In some ways, error correction is like the ultimate error mitigation, but the point is that there are techniques that are more near term with a lot of the hardware that we're building that already provide this avenue. The one that we're really excited about is called probabilistic error cancellation. And that one really is a way of trading off runtime -- trading off running more circuits in order to learn about the noise that might be inherent to the system that is impacting your calculations."

The system essentially inserts additional gates into existing circuits to sample the noise inherent in the system. And while the overhead increases exponentially with the size of the system, the IBM team believes it's a weaker exponential than the best classical methods to estimate those same circuits.

As IBM previously announced, it plans to introduce error mitigation and suppression techniques into its Qiskit Runtime by 2024 or 2025 so developers won't even have to think about these when writing their code.

Tue, 19 Jul 2022 01:43:00 -0500
IBM acquires Israeli startup Databand to boost data capabilities

US tech giant IBM said Wednesday that it acquired Israeli startup Databand, the developer of a data observability software platform for data scientists and engineers, to strengthen the multinational’s data, artificial intelligence, and automation offerings.

The terms of the acquisition were not disclosed. According to the agreement, Databand employees will join the IBM Data and AI division to further enhance IBM’s portfolio of data and AI products including its IBM Watson, a question-answering computer system, and IBM Cloud Pak for Data, a data analytics platform.

IBM said the acquisition was finalized in late June and that the purchase will build on IBM’s research and development investments, as well as strategic acquisitions in AI and automation. Databand is IBM’s fifth acquisition this year, the company noted.

Databand was founded in 2018 by Josh Benamram, Victor Shafran, and Evgeny Shulman, and rolled out a software platform that the company says helps enterprises and organizations get on top of their data to ensure “data health” and fix issues like errors and anomalies, pipeline failures, and general quality.
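As a rough illustration of the kind of "data health" checks described above, here is a minimal sketch in plain Python. The field names, thresholds, and check logic are invented for illustration and are not Databand's actual API:

```python
def check_batch(rows, expected_min_rows, required_fields):
    """Flag two common pipeline 'data health' problems: a suspiciously
    small batch (volume anomaly) and nulls in required fields."""
    issues = []
    if len(rows) < expected_min_rows:
        issues.append(
            f"volume anomaly: got {len(rows)} rows, expected >= {expected_min_rows}"
        )
    for field in required_fields:
        nulls = sum(1 for row in rows if row.get(field) is None)
        if nulls:
            issues.append(f"quality issue: {nulls} null value(s) in '{field}'")
    return issues

batch = [{"user_id": 1, "amount": 9.99}, {"user_id": 2, "amount": None}]
problems = check_batch(batch, expected_min_rows=100, required_fields=["user_id", "amount"])
print(problems)  # one volume issue and one null-field issue
```

An observability platform runs checks like these continuously against live pipelines and alerts before bad data propagates downstream, rather than after a report breaks.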

The data observability and data quality market is likely to see further growth, as more organizations look to closely track and protect their data. A Statista report estimated that the sector will grow from about $13 billion in worth in 2020 to almost $20 billion in 2024.

Based in Tel Aviv, Databand has raised about $20 million, according to the Start-Up Nation Finder database, with investors such as VCs Accel, Blumberg Capital, Ubiquity Ventures, Bessemer Venture Partners, Hyperwise, and F2 Ventures.

“By using Databand with IBM Observability by Instana APM [an application performance monitoring solution] and IBM Watson Studio, IBM is well-positioned to address the full spectrum of observability across IT operations,” IBM said in the announcement Wednesday.

“Our clients are data-driven enterprises who rely on high-quality, trustworthy data to power their mission-critical processes. When they don’t have access to the data they need in any given moment, their business can grind to a halt,” said Daniel Hernandez, general manager for IBM Data and AI, in a statement.

“With the addition of Databand, IBM offers the most comprehensive set of observability capabilities for IT across applications, data and machine learning, and is continuing to provide our clients and partners with the technology they need to deliver trustworthy data and AI at scale,” he explained.

Benamram, who serves as Databand CEO, said: “You can’t protect what you can’t see, and when the data platform is ineffective, everyone is impacted – including customers. That’s why global brands such as FanDuel, Agoda and Trax Retail already rely on Databand to remove bad data surprises by detecting and resolving them before they create costly business impacts.”

Joining IBM will help Databand “scale our software and significantly accelerate our ability to meet the evolving needs of enterprise clients,” he added.

Databand is one of a number of leading Israeli data observability companies including Coralogix, which raised a $142 million Series D funding round announced in May, and Monte Carlo, which secured a $135 million Series D round at a valuation of $1.6 billion, also in May.

Separately, IBM has been active in Israel for decades and runs an R&D center in Tel Aviv and a research lab in Haifa.

The Haifa team is the largest lab of IBM Research Division outside of the United States. Founded as a small scientific center in 1972, it grew into a lab that leads the development of innovative technological products and cognitive solutions for the IBM corporation. Its various projects utilize AI, cloud data services, blockchain, healthcare informatics, image and video analytics, and wearable solutions.

Wed, 06 Jul 2022 06:54:00 -0500
Honeycomb and AWS share diversity, equity and inclusion focus

Distributed services observability provider Honeycomb boasts a technically advanced solution that simplifies debugging. Its goal is to make DevOps engineers’ lives less hectic and solve issues affecting end users faster. In the past six months, the company was awarded the DevOps Dozen2 Best Observability Solution award and joined DataDog, Dynatrace, New Relic and IBM Instana in the 2022 Gartner Magic Quadrant leader’s square for application performance monitoring and observability.

While it doesn’t proclaim it in any press releases, Honeycomb stands out for another reason: In an industry dominated by white, Asian and male entrepreneurs and executives, its chief executive officer, chief technical officer, and vice president of engineering are all women. And its overall leadership team tips the gender scales at five women to three men.

But even successful female entrepreneurs and engineers suffer from imposter syndrome. The reason Honeycomb hasn’t traditionally publicized its female leadership stemmed from a fear that it would diminish its legitimacy as an engineering organization, according to Vera Reynolds (pictured, left), engineering manager at Hound Technology Inc. (DBA Honeycomb).

Recently, Honeycomb’s rhetoric has shifted: “We believe that with great power comes great responsibility, and we’re trying to be more intentional as far as using that attribute of our company,” Reynolds told theCUBE industry analyst Lisa Martin during the “AWS Partner Showcase S1 E3: Women in Tech” event, an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. (* Disclosure below.) 

Joining Reynolds for the interview was Danielle Greshock (pictured, right), director of partner SAs and worldwide ISV at Amazon Web Services Inc. They spoke about Honeycomb’s relationship with AWS and how both companies are prioritizing building diverse, equal and inclusive workforces.

Honeycomb uses AWS in unconventional ways

Observability tools are replacing the older category of APM tools, which were designed for troubleshooting monolithic architectures and “just weren’t designed to help us answer questions about the complex distributed systems that we work with today,” Reynolds stated. Honeycomb’s datastore and query engine work with rich, contextual data and can detect patterns across billions of requests in as little as three seconds.

In an example, Reynolds describes a developer team that receives a report of users’ slowdowns. Thanks to Honeycomb, they can quickly establish that the issue is only occurring in a small subset of users that are using a specific language pack that was recently upgraded.

“It’s this level of granularity and being able to zoom in and out under data that allows you to understand what’s happening … even if everything else in your other tools looks fine [and] all of your dashboards are OK,” she said, describing how Honeycomb users can find and fix issues stemming from only customer feedback.
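The debugging workflow Reynolds describes, grouping requests by an attribute to isolate a slow subset, can be sketched in a few lines of plain Python. The event fields and numbers below are invented stand-ins, not Honeycomb's actual schema or query engine:

```python
from statistics import median

# Hypothetical wide telemetry events, one per request.
events = [
    {"duration_ms": 40,  "lang_pack_version": "1.1"},
    {"duration_ms": 45,  "lang_pack_version": "1.1"},
    {"duration_ms": 42,  "lang_pack_version": "1.1"},
    {"duration_ms": 900, "lang_pack_version": "2.0"},  # recently upgraded pack
    {"duration_ms": 950, "lang_pack_version": "2.0"},
]

def median_duration_by(events, key):
    """Group events by an attribute and compute median latency per group."""
    groups = {}
    for event in events:
        groups.setdefault(event[key], []).append(event["duration_ms"])
    return {value: median(durations) for value, durations in groups.items()}

by_version = median_duration_by(events, "lang_pack_version")
print(by_version)  # the slowdown is isolated to the upgraded language pack
```

The same group-by-attribute query over billions of real events, returned in seconds, is what lets an engineer spot a regression confined to one small user segment while every aggregate dashboard still looks healthy.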

Honeycomb is built on AWS infrastructure, and its fast query engine uses AWS Lamba — but not in the traditional way. Honeycomb’s engineering creativity has led to a close relationship between the companies, as Honeycomb demonstrates to AWS how it can “draw outside the lines” of normal usage for its products. Honeycomb asks for early access to features and creatively tests them against Honeycomb’s customer needs. As AWS and Honeycomb share a large common customer pool, these creative use cases can be applied to the wider AWS ecosystem.

“We can be the guinea pigs and try things out in a way that … optimizes our own performance, but also allows other customers of AWS to follow in that path,” Reynolds said.

One example is a tool that Honeycomb built last year based on a new AWS Lambda extension capability. It enabled users to move telemetry data from applications running in Lambda into Honeycomb, and the “demand was win-win,” according to Reynolds.

AWS and Honeycomb seek diverse perspectives

One of AWS’ core leadership principles is to “seek diverse perspectives,” and it is this that drives Greshock’s hiring policy.

“You can’t really do that if everybody kind of looks the same and thinks the same and has the same background,” she said.

With those diverse perspectives in place, Greshock believes her team can build better products for its customers. And AWS encourages partners that have a vision to create a diverse, equal and inclusive workplace.

“Companies like Honeycomb that give customers choice and differentiate and help them to do what they need to do in their unique environments is super important,” she said.

Both Greshock and Reynolds started their careers as software engineers before becoming managers. Greshock worked independently in software development teams before joining AWS in 2014, while Reynolds, who described herself as a “generalist,” worked in web, mobile and platform development before gravitating into the developer tool space and becoming a manager at Honeycomb this past month. She admits a “fear of missing out” attitude led to her wide experience.

“Throughout my career, I was very much interested in ‘all the things,’” she said.

After realizing that it needed to become more intentional about its DEI initiatives, Honeycomb implemented a policy that managers should reach out to underrepresented communities before opening a job position to the wider tech community. This has led to Honeycomb forming partnerships with organizations such as AfroTech, The HUE Tech Summit and Latinas in Tech in order to have access to a more diverse field of candidates, according to Reynolds.

“The idea there is to kind of balance our pipeline of applicants, which the hope is will lead to more diverse hires in the long term,” she said.

Reynolds is currently hiring her first team member and has been working with the organizations she mentioned to “pre-load” her candidate list and give underrepresented groups a fair and equal opportunity for any job opening.

“Representation is very important,” Greshock stated. “I’m very happy to see all of our [ecosystem] engineering teams change as far as their composition, and I’m grateful to be part of it.”


(* Disclosure: Hound Technology Inc. (DBA Honeycomb) sponsored this segment of theCUBE. Neither Honeycomb nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE


Thu, 21 Jul 2022 11:29:00 -0500