The 00M-232 exam cheat sheet is updated on a daily basis.

killexams.com provides valid, up-to-date 2022 00M-232 dumps with practice questions and answers. Practice our 00M-232 questions and answers to improve your understanding of the tips and tricks used by vendors and pass your 00M-232 exam with high marks. We ensure your success in the test center, covering all of the topics of the IBM Solutions for Smart Business Sales Mastery Test v1 exam and building your knowledge. Pass with our 00M-232 questions and answers.

Exam Code: 00M-232 Practice exam 2022 by Killexams.com team
IBM Solutions for Smart Business Sales Mastery Test v1
IBM Solutions testing
Testing as a Service (TaaS) Market Set for More Growth | Infor, HCL Technologies, QualiTest, Capgemini, IBM

Latest study on the industrial growth of the worldwide Testing as a Service (TaaS) market, 2022-2028. The detailed study offers the latest insights into the key features of the worldwide Testing as a Service (TaaS) market. The report contains market predictions for revenue size, production, CAGR, consumption, gross margin, price, and other substantial factors. While emphasizing the key driving and restraining forces in this market, the report also offers a complete study of future market trends and developments. It also examines the role of the leading market players in the industry, including their corporate overviews, financial summaries, and SWOT analyses.

Get a Free Exclusive PDF sample Copy of This Research @ https://www.advancemarketanalytics.com/sample-report/96242-global-testing-as-a-service-taas-market#utm_source=DigitalJournalLal

Some of the key players profiled in the study are: QualiTest (United States), Capgemini (France), IBM Corporation (United States), HCL Technologies Limited (India), Tata Consultancy Services (India), The Hewlett-Packard Company (United States), Wipro Limited (India), Accenture Plc (Ireland), Infosys Limited (India), Atos SE (France), Cognizant Technology Solutions Corp. (United States), SGS (Switzerland), Infor (United States).

Scope of the Report of Testing as a Service (TaaS)
Testing is a crucial component of ensuring that a product is functional and meets customers' quality and performance demands. Testing done by a third party is known as Testing as a Service (TaaS). TaaS is rapidly gaining traction in the market as a cloud-based delivery model. It is mainly used to reduce the need for the in-depth knowledge required to design delivery modes, and it helps organizations reduce testing costs. By one estimate, approximately 10% of the world's testing services are outsourced to India through traditional offshoring methods. This trend toward outsourcing testing services has driven global Testing as a Service (TaaS) market growth.

In addition to the aforementioned factor, the need for enterprises to reduce operational time and cost is expected to propel the growth of the market over the forecast period.

Competitive Landscape
The global Testing as a Service (TaaS) market is highly competitive and consists of several providers who compete on factors such as service quality, technology, and pricing. Intense competition, changing consumer spending patterns, demographic trends, and frequent shifts in consumer preferences pose significant risks to the growth of service providers in the market.

The titled segments and sub-sections of the market are outlined below:

by Type (Functionality Testing {UI/GUI Testing, Regression Testing, Integration and Automated User Acceptance Testing}, Usability Testing, Performance Testing, Compatibility Testing, Security Testing, Other), End User Industry (BFSI, Manufacturing, Retail, Healthcare, Automotive, Government, Other), Organization Size (Small and Medium Size Enterprises (SMEs), Large Enterprises), Service Type (On-Site Service, Off-Site Service)

Market Drivers:
Improve Overall Quality Assurance Process And Delivery Framework
Need To Reduce Operational Time And Cost By Enterprises

Market Trends:
Growing Developments of Innovative Products

Opportunities:
Increase In Adoption Of Outsourced Testing Services
Demand to Boost Scalability and Better Time-to-Market
Increasing Software-as-a-Service (SaaS) And Cloud-Based Applications

Have Any Questions Regarding the Global Testing as a Service (TaaS) Market Report? Ask Our Expert @ https://www.advancemarketanalytics.com/enquiry-before-buy/96242-global-testing-as-a-service-taas-market#utm_source=DigitalJournalLal

Region Included are: North America, Europe, Asia Pacific, Oceania, South America, Middle East & Africa

Country Level Break-Up: United States, Canada, Mexico, Brazil, Argentina, Colombia, Chile, South Africa, Nigeria, Tunisia, Morocco, Germany, United Kingdom (UK), the Netherlands, Spain, Italy, Belgium, Austria, Turkey, Russia, France, Poland, Israel, United Arab Emirates, Qatar, Saudi Arabia, China, Japan, Taiwan, South Korea, Singapore, India, Australia and New Zealand etc.

Strategic Points Covered in Table of Content of Global Testing as a Service (TaaS) Market:

Chapter 1: Introduction, market driving forces, product objective of the study, and research scope of the Testing as a Service (TaaS) market

Chapter 2: Exclusive Summary – the basic information of the Testing as a Service (TaaS) Market.

Chapter 3: Displaying the Market Dynamics – Drivers, Trends and Challenges & Opportunities of the Testing as a Service (TaaS) market

Chapter 4: Presenting the Testing as a Service (TaaS) Market Factor Analysis, Porter's Five Forces, Supply/Value Chain, PESTEL analysis, Market Entropy, Patent/Trademark Analysis

Chapter 5: Displaying the market by Type, End User and Region/Country, 2015-2020

Chapter 6: Evaluating the leading manufacturers of the Testing as a Service (TaaS) market which consists of its Competitive Landscape, Peer Group Analysis, BCG Matrix & Company Profile

Chapter 7: Evaluating the market by segments, by countries, and by manufacturers/companies, with revenue share and sales by key countries in these various regions (2021-2027)

Chapter 8 & 9: Displaying the Appendix, Methodology and Data Source

Finally, the Testing as a Service (TaaS) market report is a valuable source of guidance for individuals and companies.

Read the Detailed Index of the full Research Study @ https://www.advancemarketanalytics.com/reports/96242-global-testing-as-a-service-taas-market#utm_source=DigitalJournalLal

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Middle East, Africa, Europe or LATAM, and Southeast Asia.

Contact Us:

Craig Francis (PR & Marketing Manager)
AMA Research & Media LLP
Unit No. 429, Parsonage Road Edison, NJ
New Jersey USA – 08837
Phone: +1 (206) 317 1218

Three Common Mistakes That May Sabotage Your Security Training
Killexams : Three Common Mistakes That May Sabotage Your Security Training

Phishing incidents are on the rise. A report from IBM shows that phishing was the most popular attack vector in 2021, with one in five employees falling victim to phishing techniques.

The Need for Security Awareness Training

Although technical solutions protect against phishing threats, no solution is 100% effective. Consequently, companies have no choice but to involve their employees in the fight against hackers. This is where security awareness training comes into play.

Security awareness training gives companies the confidence that their employees will execute the right response when they discover a phishing message in their inbox.

As the saying goes, "knowledge is power," but the effectiveness of knowledge depends heavily on how it is delivered. When it comes to phishing attacks, simulations are among the most effective forms of training because the events in training simulations directly mimic how an employee would react in the event of an actual attack. Since employees do not know whether a suspicious email in their inbox is a simulation or a real threat, the training becomes even more valuable.

Phishing Simulations: What does the training include?

It is critical to plan, implement and evaluate a cyber awareness training program to ensure it truly changes employee behavior. However, for this effort to be successful, it should involve much more than just emailing employees. Key practices to consider include:

  • Real-life phishing simulations.
  • Adaptive learning - live response and protection from actual cyberattacks.
  • Personalized training based on factors such as department, tenure, and cyber experience level.
  • Empowering and equipping employees with an always-on cybersecurity mindset.
  • Data-driven campaigns.

Because employees cannot tell the difference between phishing simulations and real cyberattacks, simulations evoke genuine emotions and reactions, so awareness training should be conducted thoughtfully. As organizations need to engage their employees to combat ever-increasing attacks and protect their assets, it is important to keep morale high and create a positive culture of cyber hygiene.

Three common phishing simulation mistakes.

Based on years of experience, cybersecurity firm CybeReady has seen companies fall into these common mistakes.

Mistake #1: Testing instead of educating

The approach of running a phishing simulation as a test to catch and punish "repeat offenders" can do more harm than good.

An educational experience that involves stress is counterproductive and even traumatic. As a result, employees will not go through the training but look for ways to circumvent the system. Overall, the fear-based "audit approach" is not beneficial to the organization in the long run because it cannot provide the necessary training over an extended period.

Solution #1: Be sensitive

Because maintaining positive employee morale is critical to the organization's well-being, provide positive just-in-time training.

Just-in-time training means that once employees have clicked on a link within the simulated attack, they are directed to a short and concise training session. The idea is to quickly educate the employee on their mistake and provide them with essential tips on spotting malicious emails in the future.

This is also an opportunity for positive reinforcement, so be sure to keep the training short, concise, and positive.
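To make that click-to-lesson flow concrete, here is a minimal sketch using Flask. The route, the in-memory click log, and the training-page URL are all hypothetical stand-ins for whatever a real platform uses; this is not CybeReady's implementation.

```python
# Hypothetical just-in-time training redirect (not any vendor's actual
# API): a click on a simulated phishing link is logged, then answered
# immediately with a short lesson.
from datetime import datetime, timezone
from flask import Flask, redirect

app = Flask(__name__)
clicks = []  # stand-in for a real datastore of per-campaign click events

@app.route("/t/<campaign_id>/<user_id>")
def simulated_phish_click(campaign_id: str, user_id: str):
    # Record who clicked which simulation; this feeds later risk metrics.
    clicks.append((campaign_id, user_id, datetime.now(timezone.utc)))
    # Turn the mistake into a short, positive lesson right away.
    return redirect(f"/training/spotting-phish?campaign={campaign_id}")
```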

Solution #2: Inform relevant departments.

Communicate with relevant stakeholders to ensure they are aware of ongoing phishing simulation training. Many organizations forget to inform relevant stakeholders, such as HR or other employees, that the simulations are being conducted. Learning has the best effect when participants have the opportunity to feel supported, make mistakes, and correct them.

Mistake #2: Use the same simulation for all employees

It is important to vary the simulations. Sending the same simulation to all employees, especially at the same time, is not only uninstructive but also yields no valid metrics of organizational risk.

The "warning effect" - the first employee to discover or fall for the simulation warns the others. This prepares your employees to respond to the "threat" by anticipating the simulation, thus bypassing the simulation and the training opportunity.

Another negative impact is social desirability bias, which causes employees to over-report incidents to IT, without actually noticing them, in order to be viewed more favorably. This overloads the system and the IT department.

This form of simulation also leads to inaccurate results, such as unrealistically low click-through rates and inflated reporting rates. The metrics therefore do not show the company's real risks or the problems that need to be addressed.

Solution: Drip mode

Drip mode allows sending multiple simulations to different employees at different times. Certain software solutions can even do this automatically by sending a variety of simulations to different groups of employees. It's also important to implement a continuous cycle to ensure that all new employees are properly onboarded and to reinforce that security is important 24/7 - not just checking a box for minimum compliance.
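A minimal sketch of what drip mode amounts to in practice is shown below. The template names and the send_simulation() sender are hypothetical, and real platforms layer targeting, personalization, and reporting on top of this basic idea.

```python
# Hypothetical drip-mode scheduler: varied templates at randomized send
# times, so colleagues cannot warn each other about one known message.
import random
from datetime import datetime, timedelta, timezone

TEMPLATES = ["invoice_overdue", "password_reset", "shared_document"]

def schedule_drip(employees, window_days=30, seed=None):
    """Return (employee, template, send_time) triples spread over the window."""
    rng = random.Random(seed)
    start = datetime.now(timezone.utc)
    return [
        (person,
         rng.choice(TEMPLATES),
         start + timedelta(minutes=rng.randrange(window_days * 24 * 60)))
        for person in employees
    ]

# for person, template, when in schedule_drip(directory):
#     send_simulation(person, template, at=when)  # hypothetical sender
```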

Mistake #3: Relying on data from a single campaign

With over 3.4 billion phishing emails sent per day, it's safe to assume that at least a million of them differ in complexity, language, approach, or even tactics.

Unfortunately, no single phishing simulation can accurately reflect an organization's risk. Relying on a single phishing simulation result is unlikely to provide reliable results or comprehensive training.

Another important consideration is that different groups of employees respond differently to threats, not only because of their vigilance, training, position, tenure, or even education level but because the response to phishing attacks is also contextual.

Solution: Implement a variety of training programs

Behavior change is an evolutionary process and should therefore be measured over time. Each training session contributes to the progress of the training. Training effectiveness, or in other words, an accurate reflection of actual organizational behavior change, can be determined after multiple training sessions and over time.

The most effective solution is to continuously conduct various training programs (at least once a month) with multiple simulations.

It is highly recommended to train employees according to their risk level. A diverse and comprehensive simulation program also provides reliable measurement data based on systematic behavior over time. To validate their efforts at effective training, organizations should be able to obtain a valid indication of their risk at any given point in time while monitoring progress in risk reduction.

Implement an effective phishing simulation program.

Creating such a program may seem overwhelming and time-consuming. That's why we have created a playbook of the 10 key practices you can use to create a simple and effective phishing simulation. Simply download the CybeReady Playbook or meet with one of our experts for a product demo and learn how CybeReady's fully automated security awareness training platform can help your organization achieve the fastest results with virtually zero IT effort.


IBM Annual Cost of Data Breach Report 2022: Record Costs Usually Passed On to Consumers, “Long Breach” Expenses Make Up Half of Total Damage
Killexams : IBM Annual Cost of Data Breach Report 2022: Record Costs Usually Passed On to Consumers, “Long Breach” Expenses Make Up Half of Total Damage

IBM’s annual Cost of Data Breach Report for 2022 is packed with revelations, and as usual none of them are good news. Headlining the report is the record-setting cost of data breaches, with the global average now at $4.35 million. The report also reveals that much of that expense comes with the data breach version of “long Covid,” expenses that are realized more than a year after the attack.

Most organizations (60%) are passing these added costs on to consumers in the form of higher prices. And while 83% of organizations now report experiencing at least one data breach, only a small minority are adopting zero trust strategies.

Security AI and automation greatly reduces expected damage

The IBM report draws on input from 550 global organizations surveyed about the period between March 2021 and March 2022, in partnership with the Ponemon Institute.

Though the average cost of a data breach is up, it is only by about 2.6%; the average in 2021 was $4.24 million. This represents a total climb of 13% since 2020, however, reflecting the general spike in cyber crime seen during the pandemic years.

Organizations are also increasingly not opting to absorb the cost of data breaches, with the majority (60%) compensating by raising consumer prices separate from any other recent increases due to inflation or supply chain issues. The report indicates that this may be an underreported upward influence on consumer goods prices, as 83% of organizations now say that they have been breached at least once.

Brad Hong, Customer Success Manager for Horizon3.ai, sees a potential consumer backlash on the horizon once public awareness of this practice grows: “It’s already a breach of confidence to lose the confidential data of customers, and sure there’s bound to be an organization across those surveyed who genuinely did put in the effort to protect against and curb attacks, but for those who did nothing, those who, instead of creating a disaster recovery plan, just bought cyber insurance to cover the org’s operational losses, and those who simply didn’t care enough to heed the warnings, it’s the coup de grâce to then pass the cost of breaches to the same customers who are now the victims of a data breach. I’d be curious to know what percent of the 60% of organizations who increased the price of their products and services are using the extra revenue for a war chest or to actually reinforce their security—realistically, it’s most likely just being used to fill a gap in lost revenue for shareholders’ sake post-breach. Without government regulations outlining restrictions on passing cost of breach to consumer, at the least, not without the honest & measurable efforts of a corporation as their custodian, what accountability do we all have against that one executive who didn’t want to change his/her password?”

Breach costs also have an increasingly long tail, as nearly half now come over a year after the date of the attack. The largest of these are generally fines that are levied after an investigation, and decisions or settlements in class action lawsuits. While the popular new “double extortion” approach of ransomware attacks can drive long-term costs in this way, the study finds that companies paying ransom demands to settle the problem quickly aren’t necessarily seeing a large amount of overall savings: their average breach cost drops by just $610,000.

Sanjay Raja, VP of Product with Gurucul, expands on how knock-on data breach damage can continue for years: “The follow-up attack effect, as described, is a significant problem as the playbooks and solutions provided to security operations teams are overly broad and lack the necessary context and response actions for proper remediation. For example, shutting down a user or application or adding a firewall block rule or quarantining a network segment to negate an attack is not a sustainable remediation step to protect an organization on an ongoing basis. It starts with a proper threat detection, investigation and response solution. Current SIEMs and XDR solutions lack the variety of data, telemetry and combined analytics to not only identify an attack campaign and even detect variants on previously successful attacks, but also provide the necessary context, accuracy and validation of the attack to build both a precise and complete response that can be trusted. This is an even greater challenge when current solutions cannot handle complex hybrid multi-cloud architectures leading to significant blind spots and false positives at the very start of the security analyst journey.”

Rising cost of data breach not necessarily prompting dramatic security action

In spite of over four out of five organizations now having experienced some sort of data breach, only slightly over 20% of critical infrastructure companies have moved to zero trust strategies to secure their networks. Cloud security is also lagging, with a little under half (43%) of all respondents saying that their security practices in this area are either “early stage” or do not yet exist.

Those that have onboarded security automation and AI elements are the only group seeing massive savings: their average cost of a data breach is $3.05 million lower. This particular study does not track average ransom demands, but refers to Sophos research that puts the most recent number at $812,000 globally.

The study also notes serious problems with incident response plans, especially troubling in an environment in which the average ransomware attack is now carried out in four days or less and the “time to ransom” has dropped to a matter of hours in some cases. 37% of respondents say that they do not test their incident response plans regularly. 62% say that they are understaffed to meet their cybersecurity needs, and these organizations tend to suffer over half a million more dollars in damages when they are breached.

Of course, cost of data breaches is not distributed evenly by geography or by industry type. Some are taking much bigger hits than others, reflecting trends established in prior reports. The health care industry is now absorbing a little over $10 million in damage per breach, with the average cost of data breach rising by $1 million from 2021. And companies in the United States face greater data breach costs than their counterparts around the world, at over $8 million per incident.

Shawn Surber, VP of Solutions Architecture and Strategy with Tanium, provides some insight into the unique struggles that the health care industry faces in implementing effective cybersecurity: “Healthcare continues to suffer the greatest cost of breaches but has among the lowest spend on cybersecurity of any industry, despite being deemed ‘critical infrastructure.’ The increased vulnerability of healthcare organizations to cyber threats can be traced to outdated IT systems, the lack of robust security controls, and insufficient IT staff, while valuable medical and health data— and the need to pay ransoms quickly to maintain access to that data— make healthcare targets popular and relatively easy to breach. Unlike other industries that can migrate data and sunset old systems, limited IT and security budgets at healthcare orgs make migration difficult and potentially expensive, particularly when an older system provides a small but unique function or houses data necessary for compliance or research, but still doesn’t make the cut to transition to a newer system. Hackers know these weaknesses and exploit them. Additionally, healthcare orgs haven’t sufficiently updated their security strategies and the tools that manufacturers, IT software vendors, and the FDA have made haven’t been robust enough to thwart the more sophisticated techniques of threat actors.”

Familiar incident types also lead the list of the causes of data breaches: compromised credentials (19%), followed by phishing (16%). Breaches initiated by these methods also tended to be a little more costly, at an average of $4.91 million per incident.


Cutting the cost of data breach

Though the numbers are never as neat and clean as averages would indicate, it would appear that the cost of data breaches is cut dramatically for companies that implement solid automated “deep learning” cybersecurity tools, zero trust systems and regularly tested incident response plans. Mature cloud security programs are also a substantial cost saver.

IBM Unveils $1 Billion Platform-as-a-Service Investment

The third portion is composed of IBM's continued investment in and expansion of services running on SoftLayer, including DevOps, which offers users the capability to plan, develop, test ...

Taking The Road To Modernizing Today's Mainframe

Milan Shetti, President and CEO, Rocket Software.

With the rising popularity of cloud-based solutions over the last decade, a growing misconception in the professional world is that mainframe technology is becoming obsolete. This couldn't be further from the truth. In fact, the results of a recent Rocket survey of over 500 U.S. IT professionals found that businesses today still rely heavily on the mainframe over cloud-based or distributed technologies to power their IT infrastructures, including 67 of the Fortune 100.

Despite the allure surrounding digital solutions, a recent IBM study found that 82% of executives agree their business case still supports mainframe-based applications. This is partly due to the increase in disruptive events taking place throughout the world (the Covid-19 pandemic, a weakened global supply chain, cybersecurity breaches, and increased regulations across the board), leading companies to continue leveraging the reliability and security of the mainframe infrastructure.

However, the benefits are clear, and the need is apparent for organizations to consider modernizing their mainframe infrastructure and implementing modern cloud-based solutions into their IT environment to remain competitive in today’s digital world.

Overcoming Mainframe Obstacles

Businesses leveraging mainframe technology that hasn't been modernized may struggle to attract new talent to their organization. With new talent entering the professional market trained primarily on cloud-based software, traditional mainframe software and processes create a skills gap that could deter prospective hires and lead to companies missing out on top-tier talent.

Without modernization, many legacy mainframes lack connectivity with modern cloud-based solutions. Although the mainframe provides a steady, dependable operational environment, it's well known that the efficiency, accuracy, and accessibility created by modern cloud-based solutions have helped simplify and improve many operational practices. Mainframe infrastructures that can't integrate innovative tools like automation to streamline processes, or that can't provide web and mobile access to remote employees (which has become essential following the pandemic), have become impractical for most business operations.

Considering these impending hurdles, organizations are at a crossroads with their mainframe operations. Realistically, there are three roads a business can choose to journey down. The first is to continue “operating as-is,” which is cost-effective but more or less avoids the issue at hand and positions a company to get left in the dust by its competitors. A business can also “re-platform” or completely remove and replace its current mainframe infrastructure in favor of distributed or cloud models. However, this option can be disruptive, pricey and time-consuming and forces businesses to simply toss out most of their expensive technology investments.

The final option is to “modernize in place.” Modernizing in place allows businesses to continue leveraging their technology investments through mainframe modernization. It’s the preferred method of IT professionals—56% compared to 27% continuing to “operate as-is” and 17% opting to “re-platform”—because it’s typically cost-efficient, less disruptive to operations and improves the connectivity and flexibility of the IT infrastructure.

Most importantly, modernizing in place lets organizations integrate cloud solutions directly into their mainframe environment. In this way, teams can seamlessly transition into a more efficient and sustainable hybrid cloud model that helps alleviate the challenges of the traditional mainframe infrastructure.

Modernizing In Place With A Hybrid Cloud Strategy

With nearly three-quarters of executives from some of the largest and most successful businesses in agreement that mainframe-based applications are still central to business strategy, the mainframe isn’t going anywhere. And with many organizations still opting for mainframe-based solutions for data-critical operating systems—such as financial management, customer transaction systems of record, HR systems and supply chain data management systems—mainframe-based applications are actually expected to grow over the next two years. That’s why businesses must look to leverage their years of technology investments alongside the latest tools.

Modernizing in place with a hybrid cloud strategy is one of the best paths for an enterprise to meet the evolving needs of the market and its customers while simultaneously implementing an efficient and sustainable IT infrastructure. It lets companies leverage innovative cloud solutions in their tech stack that help bridge the skills gap to entice new talent while making operations accessible for remote employees.

The integration of automated tools and artificial intelligence capabilities in a hybrid model can help eliminate many manual processes to reduce workloads and improve productivity. The flexibility of a modernized hybrid environment also allows teams to implement cutting-edge practices like DevOps and CI/CD testing, helping ensure a continuously optimized operational environment.

With most IT professionals in agreement that hybrid is the answer moving forward, it’s clear that more and more businesses that work within mainframe environments will begin to migrate cloud solutions into their tech stack. Modernizing in place with a hybrid cloud strategy is one great way for businesses to meet market expectations while positioning themselves for future success.




Amazon, IBM Move Swiftly on Post-Quantum Cryptographic Algorithms Selected by NIST

A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google was also quick to outline an aggressive implementation plan for its cloud service that it started a decade ago.

It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in two. Google contributed to one of the submitted algorithms, SPHINCS+.

A long process that started in 2016 with 69 original candidates ends with the selection of four algorithms that will become NIST standards, which will play a critical role in protecting encrypted data from the vast power of quantum computers.

NIST's four choices include CRYSTALS-Kyber, a public-private key-encapsulation mechanism (KEM) for general asymmetric encryption, such as when connecting websites. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.
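To make the KEM idea concrete, here is a minimal sketch of the key-establishment flow that Kyber standardizes, written against the open-source liboqs-python bindings. The parameter set and exact method names are assumptions that may vary by library version; this is illustrative rather than production code.

```python
# Kyber-style KEM flow sketched with liboqs-python (API assumed; check
# your installed version). One side publishes a public key; the other
# encapsulates a shared secret against it.
import oqs

KEM_ALG = "Kyber768"  # parameter set chosen for illustration

with oqs.KeyEncapsulation(KEM_ALG) as server:
    public_key = server.generate_keypair()  # server publishes this

    with oqs.KeyEncapsulation(KEM_ALG) as client:
        # Client derives a secret plus a ciphertext the server can open.
        ciphertext, client_secret = client.encap_secret(public_key)

    server_secret = server.decap_secret(ciphertext)

assert client_secret == server_secret  # both sides now share a symmetric key
```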

Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Lyubashevsky was predictably pleased by the algorithms selected, but he had only anticipated NIST would pick two digital signature candidates rather than three.

Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."

IBM's New Mainframe Supports NIST-Selected Algorithms

After NIST identified the algorithms, IBM moved forward by implementing them in its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST-selected algorithms.

IBM was championing three of the algorithms that NIST selected, so IBM had already included them in the z16. Since IBM had unveiled the z16 before the NIST decision, the company implemented the algorithms into the new system. IBM last week made it official that the z16 supports the algorithms.

Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTAL-Kyber and Dilithium, according to Dames.

"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."

A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classic computing systems and quantum computers.

"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."

Dames noted that clients might use Dilithium to generate digital signatures on documents (Kyber, as a KEM, handles key establishment rather than signing). "Think about code signing servers, things like that, or document signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.
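For illustration, signing with a quantum-safe scheme such as Dilithium looks roughly like the following when sketched against the open-source liboqs-python bindings; this is not IBM's z16 library, whose exact API the article does not describe.

```python
# Hedged sketch of quantum-safe document signing with Dilithium via
# liboqs-python (method names assumed; verify against your version).
import oqs

SIG_ALG = "Dilithium3"

with oqs.Signature(SIG_ALG) as signer:
    public_key = signer.generate_keypair()
    document = b"contract bytes go here"
    signature = signer.sign(document)

    # Anyone holding the public key can check authenticity.
    with oqs.Signature(SIG_ALG) as verifier:
        assert verifier.verify(document, signature, public_key)
```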

AWS Engineers Algorithms Into Services

During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.

During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS had implemented an open source, hybrid post-quantum key exchange based on a specification called s2n-tls, which implements the Transport Layer Security (TLS) protocol across different AWS services. AWS has contributed it as a draft standard to the Internet Engineering Task Force (IETF).

Salter explained that the hybrid key exchange brings together its traditional key exchanges while enabling post-quantum security. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a public key exchange on top of it. And using both of those, you have traditional security, plus post quantum security."
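The hybrid construction Salter describes can be sketched as follows: run a classical exchange and a post-quantum KEM, then feed both secrets into one key-derivation function, so the session stays safe as long as either half holds. The sketch uses the pyca/cryptography and liboqs-python packages and is a simplified illustration, not the actual s2n-tls code.

```python
# Simplified hybrid key exchange: X25519 (classical) + Kyber (post-
# quantum), both secrets bound together by one HKDF. Illustrative only.
import oqs
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: ordinary elliptic-curve Diffie-Hellman.
client_ec, server_ec = X25519PrivateKey.generate(), X25519PrivateKey.generate()
ec_secret = client_ec.exchange(server_ec.public_key())

# Post-quantum half: Kyber KEM (liboqs-python API assumed).
with oqs.KeyEncapsulation("Kyber768") as server_kem:
    kem_public = server_kem.generate_keypair()
    with oqs.KeyEncapsulation("Kyber768") as client_kem:
        ciphertext, pq_secret = client_kem.encap_secret(kem_public)
    # server side would recover pq_secret via server_kem.decap_secret(ciphertext)

# An attacker must break BOTH halves to recover the session key.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"hybrid-kex sketch",
).derive(ec_secret + pq_secret)
```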

Last week, Amazon announced that it deployed s2n-tls, the hybrid post-quantum TLS with CRYSTALS-Kyber, which connects to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon documented its support for AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.

Google's Decade-Long PQC Migration

While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.

"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."

Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.

Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.

Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."

Other Standards Efforts

The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.

"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like things like that," IBM's Dames said.

Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our NIST process, as well as participating in it. And we see that as a very good sign."

Astadia Publishes Mainframe to Cloud Reference Architecture Series

Astadia is pleased to announce the release of a new series of Mainframe-to-Cloud reference architecture guides. The documents cover how to refactor IBM mainframe applications to Microsoft Azure, ...

Don't pop antibiotics every time you have a cold. But resistance crisis has an AI solution

These technologies are already working together to accelerate the discovery of new antimicrobial medicines. One subset of next-gen AI, dubbed generative models, produces hypotheses about the final molecule needed for a specific new drug. These AI models don't just search for known molecules with relevant properties, such as the ability to bind to and neutralise a virus or a bacterium; they are powerful enough to learn features of the underlying data and can suggest new molecules that have not yet been synthesised. This design capability, as opposed to a search capability, is particularly transformative, because the number of possible suitable molecules is greater than the number of atoms in the universe, prohibitively large for search tasks.

Generative AI can navigate this vast chemical space to discover the right molecule faster than any human using conventional methods. AI modelling already supports research that could help patients with Parkinson's disease, diabetes and chronic pain. Antimicrobial peptides (AMPs), for example, small protein-like compounds, are one solution that is the subject of intensive study. These molecules hold great promise as next-generation antibiotics because they are inherently less susceptible to resistance and are produced naturally as part of the innate immune system of living organisms.

In recent studies published in Nature Biomedical Engineering in 2021, the AI-assisted search for new, effective, non-toxic peptides produced 20 promising novel candidates in just 48 days, a striking reduction compared to conventional development times for new compounds.

Among these were two novel candidates used against Klebsiella pneumoniae, a bacterium frequently found in hospitals that causes pneumonia and bloodstream infections and has become increasingly resistant to conventional classes of antibiotics. Obtaining such a result with conventional research methods would take years.

AMPs already in commercial use

Collaborative work between IBM, Unilever, and STFC, which hosts one of IBM Research’s Discovery Accelerators at the Hartree Centre in the UK, has recently helped researchers better understand AMPs. Unilever has already used that new knowledge to create consumer products that boost the effects of these natural-defence peptides.

And, in this Biophysical Journal paper, researchers demonstrated how small-molecule additives (organic compounds with low molecular weights) are able to make AMPs much more potent and efficient. Using advanced simulation methods, IBM researchers, in combination with experimental studies from Unilever, also identified new molecular mechanisms that could be responsible for this enhanced potency. This is a first-of-its-kind proof of principle that scientists will take forward in ongoing collaborations.

Boosting material discovery with AI

Generative models and advanced computer simulations are part of a much larger strategy at IBM Research, dubbed Accelerated Discovery, where we use emerging computing technologies to boost the scientific method and its application to discovery. The aim is to greatly speed up the rate of discovery of new materials and drugs, whether in preparation for the next global crisis or to rapidly address current and inevitable future ones.

This is just one element of the loop comprising the revised scientific method, a cutting-edge transformation of the traditional linear approach to material discovery. Broadly, AI learns about the desired properties of a new material. Next, another type of AI, IBM’s Deep Search, combs through the existing knowledge on the manufacturing of this specific material, meaning all the previous research tucked away in patents and papers.

Generative models have the potential to create a new molecule

Following this, the generative models create a possible new molecule based on the existing data. Once done, we use a high-performance computer to simulate this new candidate molecule and the reactions it should have with its neighbours to make sure it performs as expected. In the future, a quantum computer could improve these molecular simulations even further.

The final step is AI-driven lab testing to experimentally validate the predictions and develop actual molecules. At IBM, we do this with a tool called RoboRXN, a small, fridge-sized "chemistry lab" that combines AI, cloud computing and robots to help researchers create new molecules anywhere at any time. The combination of these approaches is well suited to tackling general "inverse design" problems. Here, the task is to find or create for the first time a material with a desired property or function, as opposed to computing or measuring the properties of large numbers of candidates.
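Stripped to pseudocode, the loop looks something like the sketch below. Every callable is a placeholder injected by the caller, standing in for a large subsystem (literature mining, the generative model, simulation, a robotic lab such as RoboRXN); none of these are real IBM API calls.

```python
# Hedged pseudocode for the Accelerated Discovery loop; all helpers are
# hypothetical placeholders passed in by the caller.
def accelerated_discovery(target, deep_search, propose, simulate,
                          lab_test, meets, max_rounds=10):
    knowledge = deep_search(target)                   # mine papers and patents
    for _ in range(max_rounds):
        candidate = propose(target, knowledge)        # design, not search
        predicted = simulate(candidate)               # HPC (later: quantum) sim
        if not meets(predicted, target):
            knowledge.append((candidate, predicted))  # learn from the miss
            continue
        measured = lab_test(candidate)                # robotic synthesis + assay
        knowledge.append((candidate, measured))       # close the loop
        if meets(measured, target):
            return candidate
    return None  # no candidate met the target within the budget
```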

Proof that AI can go beyond the limits of classical computing

The antibiotics crisis is a particularly urgent example of a global inverse design challenge in need of a true paradigm shift towards the way we discover materials. The rapid progress in quantum computing and the development of quantum machine-learning techniques is now creating realistic prospects of extending the reach of artificial intelligence beyond the limitations of classical computing. Early examples show promise for quantum advantages in model training speed, classification tasks and prediction accuracy.

Overall, combining the most powerful emerging AI techniques (possibly with quantum acceleration) to learn features linked to antimicrobial activity with physical modelling at the molecular scale to reveal the modes of action is, arguably, the most promising route to creating these essential compounds faster than ever before.

The article originally appeared in the World Economic Forum.




IBM Watson Gets a Factory Job

IBM has launched an Internet of Things system as part of Watson. The tool is called Cognitive Visual Inspection, and the idea is to provide manufacturers with a "cognitive assistant" on the factory floor to minimize defects and increase product quality. According to IBM, in early production-cycle testing, Watson was able to reduce inspection time by 80% while reducing manufacturing defects by 7-10%.

The system uses an ultra-high definition camera and adds cognitive capabilities from Watson to create a tool that captures images of products as they move through production and assembly. Together with human inspectors, Watson recognizes defects in products, including scratches or pinhole-size punctures.

“Watson brings its cognitive capabilities to image recognition,” Bret Greenstein, VP of IoT at IBM, told Design News. “We’re applying this to a wide range of industries, including electronics and automotive.”

The Inspection Eye That Never Tires

The system continuously learns based on human assessment of the defect classifications in the images. The tool was designed to help manufacturers achieve specialization levels that were not possible with previous human or machine inspection. "We created a system and workflow to feed images of good and bad products into Watson and train it with algorithms," said Greenstein. "This is a system that can be trained in advance to see what acceptable products look like."

According to IBM, more than half of product quality checks involve some form of visual confirmation. Visual checking helps ensure that all parts are in the correct location, have the right shape or color or texture, and are free from scratches, holes or foreign particles. Automating these visual checks is difficult due to volume and product variety. Add to that the challenge from defects that can be any size, from a tiny puncture to a cracked windshield on a vehicle.

Some of the inspection training precedes Watson's appearance on the manufacturing line. "There are several components. You define the range of images and feed the images into Watson. When it produces the confidence level you need, you push it to the operator stations," said Greenstein. "Watson concludes whether the product is good or defective. You let the system make the decision."
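The article does not disclose Watson's internals, but the workflow Greenstein describes (train on labeled good/bad images, then gate decisions on a confidence level) can be sketched generically, for example with a small Keras classifier. The architecture and the 0.90 threshold are arbitrary illustrative choices, not IBM's.

```python
# Generic train-then-gate sketch (not IBM's Watson API): a binary image
# classifier whose low-confidence frames are deferred to a human.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # outputs P(defective)
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(images, labels, epochs=5)  # labels: 1 = defective, 0 = good

CONFIDENCE = 0.90  # push to operator stations only when the model is sure

def inspect(frame: np.ndarray) -> str:
    p = float(model(frame[None, ...], training=False)[0, 0])
    if p >= CONFIDENCE:
        return "defective"
    if p <= 1.0 - CONFIDENCE:
        return "good"
    return "refer to human inspector"  # keep a human in the loop
```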

The ultimate goal is to keep Watson on a continuous learning curve. “We can push this system out to different manufacturing lines, and we can train it based on operators in the field and suggest changes to make the system smarter, creating an evolving inspection process,” said Greenstein.

The ABB Partnership

As part of its move into the factory, IBM has formed a strategic collaboration with ABB. The goal is to combine ABB’s domain knowledge and digital solutions with IBM’s artificial intelligence and machine-learning capabilities. The first two joint industry solutions powered by ABB Ability and Watson were designed to bring real-time cognitive insights to the factory floor and smart grids.


The suite of solutions developed by ABB and IBM is intended to help companies improve quality control, reduce downtime, and increase speed and yield. The goal is to improve on current connected systems that simply gather data. Instead, Watson is designed to use data to understand, sense, reason, and take action to help industrial workers reduce inefficient processes and redundant tasks.

According to Greenstein, Watson is just getting its industry sea legs. In time, the thinking machine will take on more industrial tasks. "We found a wide range of uses. We're working with drones to look at traffic flows in retail situations to analyze things that are hard to see from a human point of view," said Greenstein. "We're also applying Watson's capabilities to predictive maintenance."

Rob Spiegel has covered automation and control for 17 years, 15 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cybersecurity. For 10 years, he was owner and publisher of the food magazine Chile Pepper.

Image courtesy of IBM

IBM report: Middle Eastern consumers pay the price as regional data breach costs reach all-time high

Riyadh, Saudi Arabia: IBM, the leading global technology company, has published a study highlighting the importance of cybersecurity in an increasingly digital age. According to IBM Security's annual Cost of a Data Breach Report, the Middle East has incurred losses of SAR 28 million from data breaches in 2022 alone, a figure that already exceeds the total losses accrued in each of the last eight years.

The latest edition of the Cost of a Data Breach Report — now in its 17th year — reveals costlier and higher-impact data breaches than ever before. As outlined by the study, the global average cost of a data breach has reached an all-time high of $4.35 million for surveyed organizations. With breach costs increasing nearly 13% over the last two years of the report, the findings suggest these incidents may also be contributing to rising costs of goods and services. In fact, 60% of studied organizations raised their product or services prices due to the breach, when the cost of goods is already soaring worldwide amid inflation and supply chain issues.

Notably, the report ranks the Middle East among the top five countries and regions with the highest average cost of a data breach. As per the study, the average total cost of a data breach in the Middle East amounted to SAR 28 million in 2022, the region being second only to the United States on the list. The report also spotlights the industries across the Middle East that have suffered the highest per-record costs, with the financial (SAR 1,039), health (SAR 991) and energy (SAR 950) sectors taking first, second and third spot, respectively.

Fahad Alanazi, IBM Saudi General Manager, said: “Today, more so than ever, in an increasingly connected and digital age, cybersecurity is of the utmost importance. It is essential to safeguard businesses and privacy. As the digital economy continues to evolve, enhanced security will be the marker of a modern, world class digital ecosystem.” 

He continued: “At IBM, we take great pride in enabling the people, businesses and communities we serve to fulfil their potential by empowering them with state-of-the-art services and support. Our findings reiterate just how important it is for us, as a technology leader, to continue pioneering solutions that will help the Kingdom distinguish itself as the tech capital of the region.”

The persistence of cyberattacks is also shedding light on the “haunting effect” data breaches are having on businesses, with the IBM report finding that 83% of studied organizations have experienced more than one data breach. The after-effects of breaches also linger long after they occur, as nearly 50% of breach costs are incurred more than a year after the breach.

The 2022 Cost of a Data Breach Report is based on in-depth analysis of real-world data breaches experienced by 550 organizations globally between March 2021 and March 2022. The research, which was sponsored and analyzed by IBM Security, was conducted by the Ponemon Institute.

Some of the key global findings in the 2022 IBM report include:

  • Critical Infrastructure Lags in Zero Trust – Almost 80% of critical infrastructure organizations studied don’t adopt zero trust strategies, seeing average breach costs rise to $5.4 million – a $1.17 million increase compared to those that do. All while 28% of breaches amongst these organizations were ransomware or destructive attacks.
  • It Doesn’t Pay to Pay – Ransomware victims in the study that opted to pay threat actors’ ransom demands saw only $610,000 less in average breach costs compared to those that chose not to pay – not including the cost of the ransom. Factoring in the high cost of ransom payments, the financial toll may rise even higher, suggesting that simply paying the ransom may not be an effective strategy.
  • Security Immaturity in Clouds – Forty-three percent of studied organizations are in the early stages or have not started applying security practices across their cloud environments, observing over $660,000 on average in higher breach costs than studied organizations with mature security across their cloud environments. 
  • Security AI and Automation Leads as Multi-Million Dollar Cost Saver – Participating organizations fully deploying security AI and automation incurred $3.05 million less on average in breach costs compared to studied organizations that have not deployed the technology – the biggest cost saver observed in the study.

“Businesses need to put their security defenses on the offense and beat attackers to the punch. It’s time to stop the adversary from achieving their objectives and start to minimize the impact of attacks. The more businesses try to perfect their perimeter instead of investing in detection and response, the more breaches can fuel cost of living increases.” said Charles Henderson, Global Head of IBM Security X-Force. “This report shows that the right strategies coupled with the right technologies can help make all the difference when businesses are attacked.”

Over-trusting Critical Infrastructure Organizations 

Concerns over critical infrastructure targeting appear to be increasing globally over the past year, with many governments’ cybersecurity agencies urging vigilance against disruptive attacks. In fact, IBM’s report reveals that ransomware and destructive attacks represented 28% of breaches amongst critical infrastructure organizations studied, highlighting how threat actors are seeking to fracture the global supply chains that rely on these organizations. This includes financial services, industrial, transportation and healthcare companies amongst others.

Despite the call for caution, and a year after the Biden Administration issued a cybersecurity executive order that centers on the importance of adopting a zero trust approach to strengthen the nation’s cybersecurity, only 21% of critical infrastructure organizations studied have adopted a zero trust security model, according to the report. Add to that, 17% of breaches at critical infrastructure organizations were caused by a business partner being initially compromised, highlighting the security risks that over-trusting environments pose.

Businesses that Pay the Ransom Aren’t Getting a “Bargain” 

According to the 2022 IBM report, businesses that paid threat actors’ ransom demands saw $610,000 less in average breach costs compared to those that chose not to pay – not including the ransom amount paid. However, when accounting for the average ransom payment, which according to Sophos reached $812,000 in 2021, businesses that opt to pay the ransom could net higher total costs – all while inadvertently funding future ransomware attacks with capital that could otherwise be allocated to remediation and recovery efforts, and while potentially facing federal offenses.

The persistence of ransomware, despite significant global efforts to impede it, is fueled by the industrialization of cybercrime. IBM Security X-Force discovered the duration of studied enterprise ransomware attacks shows a drop of 94% over the past three years – from over two months to just under four days. These exponentially shorter attack lifecycles can prompt higher impact attacks, as cybersecurity incident responders are left with very short windows of opportunity to detect and contain attacks. With “time to ransom” dropping to a matter of hours, it's essential that businesses prioritize rigorous testing of incident response (IR) playbooks ahead of time. But the report states that as many as 37% of organizations studied that have incident response plans don’t test them regularly.

Hybrid Cloud Advantage

The report also showcased hybrid cloud environments as the most prevalent (45%) infrastructure amongst organizations studied. Averaging $3.8 million in breach costs, businesses that adopted a hybrid cloud model observed lower breach costs compared to businesses with a solely public or private cloud model, which experienced $5.02 million and $4.24 million on average respectively. In fact, hybrid cloud adopters studied were able to identify and contain data breaches 15 days faster on average than the global average of 277 days for participants.

The report highlights that 45% of studied breaches occurred in the cloud, emphasizing the importance of cloud security. However, a significant 43% of reporting organizations stated they are just in the early stages or have not started implementing security practices to protect their cloud environments, and these observed higher breach costs. Businesses studied that did not implement security practices across their cloud environments required an average of 108 more days to identify and contain a data breach than those consistently applying security practices across all their domains.

Additional findings in the 2022 IBM report include:

  • Phishing Becomes Costliest Breach Cause – While compromised credentials continued to reign as the most common cause of a breach (19%), phishing was the second most common (16%) and the costliest, leading to $4.91 million in average breach costs for responding organizations.
  • Healthcare Breach Costs Hit Double Digits for First Time Ever – For the 12th year in a row, healthcare participants saw the costliest breaches amongst industries, with average breach costs in healthcare increasing by nearly $1 million to reach a record high of $10.1 million.
  • Insufficient Security Staffing – Sixty-two percent of studied organizations stated they are not sufficiently staffed to meet their security needs, averaging $550,000 more in breach costs than those that state they are sufficiently staffed.

Additional Sources

  • To download a copy of the 2022 Cost of a Data Breach Report, please visit: https://www.ibm.com/security/data-breach.
  • Read more about the report’s top findings in this IBM Security Intelligence blog.
  • Sign up for the 2022 IBM Security Cost of a Data Breach webinar on Wednesday, August 3, 2022, at 11:00 a.m. ET here.
  • Connect with the IBM Security X-Force team for a personalized review of the findings: https://ibm.biz/book-a-consult.

-Ends-

About IBM Security

IBM Security offers one of the most advanced and integrated portfolios of enterprise security products and services. The portfolio, supported by world-renowned IBM Security X-Force® research, enables organizations to effectively manage risk and defend against emerging threats. IBM operates one of the world's broadest security research, development, and delivery organizations, monitors 150 billion+ security events per day in more than 130 countries, and has been granted more than 10,000 security patents worldwide. For more information, please visit www.ibm.com/security, follow @IBMSecurity on Twitter or visit the IBM Security Intelligence blog.
