IBM Research Rolls Out A Comprehensive AI And ML Edge Research Strategy Anchored By Enterprise Partnerships And Use Cases

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
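As a rough illustration, the hub-and-spoke relationship can be modeled as a central control plane pushing the same application out to every connected spoke. The class and method names below are invented for this sketch and are not IBM APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Spoke:
    """An edge location, e.g. a factory floor or a retail branch."""
    name: str
    apps: dict = field(default_factory=dict)  # app name -> deployed version

@dataclass
class Hub:
    """Central control plane that orchestrates deployments to spokes."""
    spokes: list

    def deploy(self, app: str, version: str) -> None:
        # Roll the same app/version out to every connected spoke.
        for spoke in self.spokes:
            spoke.apps[app] = version

hub = Hub(spokes=[Spoke("factory-floor-1"), Spoke("retail-branch-7")])
hub.deploy("defect-detector", "1.2.0")
```

The point of the pattern is that the hub holds the desired state and each spoke converges to it, rather than each location being configured by hand.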

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed through a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor costs and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, including data from industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
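A back-of-envelope calculation shows why moving data tends to dominate the cloud path. Every number below (payload size, uplink speed, round-trip time, inference times) is an illustrative assumption, not a measurement.

```python
# Back-of-envelope per-transaction timing, cloud vs. edge (illustrative numbers).
payload_mb = 5.0        # assumed size of one sensor/camera payload
uplink_mbps = 50.0      # assumed WAN uplink from the edge site to the cloud
cloud_rtt_s = 0.08      # assumed network round-trip time to the cloud region
cloud_infer_s = 0.01    # cloud hardware is faster per inference, but...
edge_infer_s = 0.03     # ...edge inference skips the network entirely

transfer_s = payload_mb * 8 / uplink_mbps   # seconds just to move the data
cloud_total_s = transfer_s + cloud_rtt_s + cloud_infer_s
edge_total_s = edge_infer_s

print(f"cloud: {cloud_total_s:.2f}s  edge: {edge_total_s:.2f}s")
```

Under these assumptions the transfer alone (0.8 s) dwarfs both inference times, which is the intuition behind "executing the transaction at the edge."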

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.
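A minimal sketch of this pattern: raw readings stay on the edge device, and only an aggregate summary crosses the network. The histogram-style summary below is just one assumed example of what might be shared.

```python
from collections import Counter

# Raw sensor readings never leave the edge site.
raw_readings = [71, 72, 75, 90, 91, 72, 74, 95, 73, 72]

# Only an aggregate summary (counts per 10-degree bin) is sent to the cloud.
summary = Counter((r // 10) * 10 for r in raw_readings)
payload_to_cloud = dict(summary)
```

The cloud still learns enough for reporting and model monitoring, but an interception of the payload reveals no individual reading.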

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller's first example centered on the quick-service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice-ordering methods using AI. Drive-thru orders are a significant percentage of total orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot's wireless mobility uses self-contained AI/ML that doesn't require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. Spot monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s; the current, in-progress fourth revolution, Industry 4.0, is defined by digital transformation.

Manufacturing is the fastest growing and the largest of IBM's four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through its product, development, research, and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using hundreds of AI/ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with retraining models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and lower latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and the return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling of redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance.
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, accelerate model creation, reduce production errors, and detect out-of-distribution data to help determine whether a model's inference is accurate. IBM believes this will allow models to be created faster without data scientists.
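The data-summarization objective above can be illustrated with greedy farthest-point sampling, one common diversity-selection technique; the article does not specify IBM's actual method, and the feature vectors here are invented. The idea is to hand annotators a small, varied subset instead of thousands of near-duplicate images.

```python
def farthest_point_sample(points, k):
    """Greedy diversity sampling: pick k points that spread across the
    data, so annotators label varied examples, not near-duplicates."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    selected = [points[0]]  # seed with the first point
    while len(selected) < k:
        # Add the point farthest from everything selected so far.
        nxt = max(points, key=lambda p: min(dist(p, s) for s in selected))
        selected.append(nxt)
    return selected

# Near-duplicate images cluster in feature space; diverse picks skip them.
features = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (9.0, 0.0)]
picked = farthest_point_sample(features, 3)
```

Here the three clustered points near the origin contribute only one pick, while the two outliers are both kept.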

Maximo Application Suite

IBM's Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
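One simple way to monitor for the drift described above is to compare a live batch's mean against the training distribution. This is a generic statistical check, not IBM's specific drift detector, and the readings are invented for the sketch.

```python
import statistics

def drift_score(train_values, live_values):
    """Score drift as how many training standard deviations the live
    batch mean has strayed from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

train = [10.0, 11.0, 9.0, 10.5, 9.5, 10.0]   # values seen during training
stable = [10.2, 9.8, 10.1]                   # live batch, same regime
drifted = [14.0, 15.0, 13.5]                 # live batch after a regime change

s_ok = drift_score(train, stable)
s_bad = drift_score(train, drifted)          # large score -> retraining alert
```

In practice the score would be computed continuously at each spoke, with a threshold triggering the Day-2 retraining workflow.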

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”
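Dr. Fuller's cluster-then-sample idea can be sketched as grouping records and keeping a small sample from each group. The key-based grouping below is a trivial stand-in for a real clustering algorithm, and the record layout is invented.

```python
import random

def cluster_then_sample(records, key, per_cluster):
    """Group edge records into clusters, then keep a small sample of
    each, so retraining sees every regime without moving all the data."""
    clusters = {}
    for rec in records:
        clusters.setdefault(key(rec), []).append(rec)
    rng = random.Random(0)  # fixed seed keeps the sketch reproducible
    sample = []
    for group in clusters.values():
        rng.shuffle(group)
        sample.extend(group[:per_cluster])
    return sample

# 1,000 readings across two operating regimes; keep 5 from each regime.
records = [{"regime": i % 2, "value": i} for i in range(1000)]
subset = cluster_then_sample(records, key=lambda r: r["regime"], per_cluster=5)
```

The retraining set shrinks from 1,000 records to 10 while still covering both regimes, which is the point of sampling per cluster rather than uniformly.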

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
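The core of federated learning can be sketched with federated averaging (FedAvg): each spoke trains locally and shares only its model weights, which the hub combines, weighted by each spoke's data volume. This is a generic illustration of the technique, not IBM Federated Learning's API, and the weight vectors are invented.

```python
def federated_average(local_weights, local_sizes):
    """FedAvg: average the spokes' weight vectors, weighted by how many
    training records each spoke holds. Raw data never leaves a spoke."""
    total = sum(local_sizes)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, local_sizes)) / total
        for i in range(dim)
    ]

# Three spokes report their locally trained weight vectors.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]  # records held at each spoke
global_model = federated_average(weights, sizes)
```

The spoke with twice the data pulls the global model toward its weights, while the hub never sees a single raw record.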

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The status quo method performs Day-2 operations using centralized applications and a centralized data plane; the more efficient managed hub and spoke method uses distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM's hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data lifecycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory, and compliance requirements as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Identification is also made of atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such resource extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million.
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
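The AI pipeline compression in item 3 is commonly done with magnitude pruning, zeroing out the smallest-magnitude weights. This is a minimal sketch of pruning alone, with made-up weights; a production pipeline would also retrain after pruning and use sparse storage to realize the footprint savings.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of weights with the smallest
    magnitudes; a classic step for shrinking a model's edge footprint."""
    k = int(len(weights) * sparsity)
    # Indices of the k smallest-magnitude weights.
    drop = sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k]
    pruned = list(weights)
    for i in drop:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.02, 0.5, 0.01, -0.7, 0.03]
pruned = magnitude_prune(w, 0.5)  # drop the 3 smallest-magnitude weights
```

The same idea, applied layer by layer to a network with hundreds of millions of parameters, is one way a model can be cut down to a few million parameters for edge deployment.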

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is the Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for addressing locations that are still servers but come in a single node, not clustered, deployment type.

For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device-type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
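Matching an application's requirements to a slice can be sketched as a lookup against a catalog of QoS profiles. The slice names below follow the standard 5G service categories, but the numbers and the matching logic are invented for illustration.

```python
# Hypothetical slice catalog: each slice trades latency against bandwidth.
slices = {
    "urllc": {"max_latency_ms": 5, "bandwidth_mbps": 50},    # ultra-reliable low latency
    "embb": {"max_latency_ms": 50, "bandwidth_mbps": 1000},  # enhanced mobile broadband
    "mmtc": {"max_latency_ms": 200, "bandwidth_mbps": 1},    # massive IoT
}

def pick_slice(need_latency_ms, need_bandwidth_mbps):
    """Return the first slice whose QoS profile satisfies the request."""
    for name, qos in slices.items():
        if (qos["max_latency_ms"] <= need_latency_ms
                and qos["bandwidth_mbps"] >= need_bandwidth_mbps):
            return name
    return None  # no slice on this physical network can meet the request

choice = pick_slice(need_latency_ms=10, need_bandwidth_mbps=20)
```

An AI/ML-driven slice manager would go further, continuously re-optimizing placements as measured bandwidth and latency drift, rather than doing a one-time static match.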

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM clients that use Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to optimize connections made via the open interfaces, categorizing a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent, cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM's integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on "edge in" means it can provide the infrastructure through capabilities such as software-defined storage for a federated-namespace data lake that spans other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM's goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value: the edge would simply function as a spoke operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Published Mon, 08 Aug 2022 03:51:00 -0500 by Paul Smith-Goodson. Source: https://www.forbes.com/sites/moorinsights/2022/08/08/ibm-research-rolls-out-a-comprehensive-ai-and-ml-edge-research-strategy-anchored-by-enterprise-partnerships-and-use-cases/
Amazon, IBM Move Swiftly on Post-Quantum Cryptographic Algorithms Selected by NIST

A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google, whose post-quantum effort started a decade ago, was also quick to outline an aggressive implementation plan for its cloud service.

It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in two. Google contributed to one of the submitted algorithms, SPHINCS+.

A long process that started in 2016 with 69 original candidates ends with the selection of four algorithms that will become NIST standards, which will play a critical role in protecting encrypted data from the vast power of quantum computers.

NIST's four choices include CRYSTALS-Kyber, a public-key encapsulation mechanism (KEM) for general encryption, such as establishing secure connections to websites. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.
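The KEM call pattern that Kyber follows (key generation, encapsulation, decapsulation) can be illustrated with a deliberately insecure toy. This is not Kyber and offers no security whatsoever: anyone holding pk could decapsulate here, which a real KEM's hard lattice problem prevents. It only shows the interface shape.

```python
import hashlib
import secrets

# Toy illustration of the KEM call pattern ONLY -- not Kyber, not secure.

def keygen():
    sk = secrets.token_bytes(32)
    pk = hashlib.sha256(b"pk" + sk).digest()  # toy public key derived from sk
    return pk, sk

def encapsulate(pk):
    """Sender derives a shared secret plus a ciphertext using pk alone."""
    nonce = secrets.token_bytes(32)
    shared = hashlib.sha256(pk + nonce).digest()
    return nonce, shared  # the nonce plays the role of the ciphertext

def decapsulate(sk, ciphertext):
    """Receiver recovers the same shared secret using sk."""
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ciphertext).digest()

pk, sk = keygen()
ct, sender_secret = encapsulate(pk)          # runs on the sender's side
receiver_secret = decapsulate(sk, ct)        # runs on the receiver's side
```

Both parties end up with the same shared secret, which in TLS-style use would then key a fast symmetric cipher for the session.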

Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Lyubashevsky was predictably pleased by the algorithms selected, but he had only anticipated NIST would pick two digital signature candidates rather than three.

Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."

IBM's New Mainframe Supports NIST-Selected Algorithms

After NIST identified the algorithms, IBM moved forward by specifying them into its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST APIs.

IBM was championing three of the algorithms NIST selected, and because the z16 was unveiled before NIST's decision, the company had already implemented them in the new system. Last week, IBM made it official that the z16 supports the selected algorithms.

Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTAL-Kyber and Dilithium, according to Dames.

"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."

A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classic computing systems and quantum computers.

"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."

Dames noted that clients might use Dilithium to generate digital signatures on documents. "Think about code signing servers, things like that, or documents signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.
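The code-signing flow described here (hash the artifact, sign the digest with a private key, verify with the public key before trusting the code) can be sketched with textbook RSA on toy parameters. A real deployment would use full-size keys or a NIST-approved scheme such as Dilithium; the tiny numbers below are for illustration only.

```python
import hashlib

# Textbook RSA with the classic toy parameters p=61, q=53 -- illustration
# only; never use numbers this small in practice.
N, E, D = 3233, 17, 2753  # modulus, public exponent, private exponent

def sign(document: bytes) -> int:
    """Sign the document's hash with the private exponent."""
    digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % N
    return pow(digest, D, N)

def verify(document: bytes, signature: int) -> bool:
    """Recompute the hash and compare against the recovered value."""
    digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % N
    return pow(signature, E, N) == digest

artifact = b"release-1.0.0.tar.gz contents"
sig = sign(artifact)
assert verify(artifact, sig)                # authentic artifact verifies
assert not verify(artifact, (sig + 1) % N)  # a corrupted signature does not
```

The verify-before-trust step is the part that matters for code-signing servers; only the signing algorithm changes in a post-quantum migration.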

AWS Engineers Algorithms Into Services

During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.

During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS had implemented an open source, hybrid post-quantum key exchange based on a specification called s2n-tls, which implements the Transport Layer Security (TLS) protocol across different AWS services. AWS has contributed it as a draft standard to the Internet Engineering Task Force (IETF).

Salter explained that the hybrid key exchange brings together its traditional key exchanges while enabling post-quantum security. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a public key exchange on top of it. And using both of those, you have traditional security, plus post quantum security."
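The hybrid construction Salter describes can be sketched in a few lines: two shared secrets, one from a classical exchange and one from a post-quantum KEM, are mixed through an HKDF-style derivation so the session key stays safe as long as either exchange holds. The random byte strings below are stand-ins for real ECDH and Kyber outputs, since no Kyber implementation ships with the standard library.

```python
import hashlib
import hmac
import os

def combine_shared_secrets(classical_ss: bytes, pq_ss: bytes,
                           context: bytes = b"hybrid-tls") -> bytes:
    """Derive one 32-byte session key from both secrets (HKDF-extract style).

    An attacker must break BOTH exchanges to recover the key, which is the
    point of the hybrid mode.
    """
    # Extract: mix both inputs under a context-bound key.
    prk = hmac.new(context, classical_ss + pq_ss, hashlib.sha256).digest()
    # Expand: one block suffices for a 32-byte key.
    return hmac.new(prk, b"\x01", hashlib.sha256).digest()

ecdh_secret = os.urandom(32)   # stand-in for an X25519 shared secret
kyber_secret = os.urandom(32)  # stand-in for a Kyber encapsulated secret
session_key = combine_shared_secrets(ecdh_secret, kyber_secret)
```

Both endpoints run the same derivation over the same two secrets, so they arrive at the same session key; the exact KDF used by s2n-tls differs in detail.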

Last week, Amazon announced that it deployed s2n-tls, the hybrid post-quantum TLS with CRYSTALS-Kyber, which connects to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon documented its support for AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.

Google's Decade-Long PQC Migration

While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.

"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."

Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.

Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.

Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."

Other Standards Efforts

The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.

"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like that," IBM's Dames said.

Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our NIST process, as well as participating in that. And we see that as a very good sign."

Thu, 04 Aug 2022 09:03:00 -0500 https://www.darkreading.com/dr-tech/amazon-ibm-move-swiftly-on-post-quantum-cryptographic-algorithms-selected-by-nist
9 Ways to Improve Your Business Performance with Mind Mapping


Sun, 17 Jul 2022 20:27:00 -0500 https://www.business2community.com/strategy/9-ways-improve-business-performance-mind-mapping-01516027
Business Outline of Multi-factor Authentication Market 2022-2030 | 3M, Microsoft Corporation, CA Technologies, Fujitsu, IBM Corporation

The Global Multi-factor Authentication Market 2022 research report examines the market's current and prospective facets, focusing chiefly on how businesses compete, key trends, and market segmentation. The report covers the global market, from essential market data through the significant criteria by which the market is segmented, and it examines, monitors, and presents the global market size of the major competitors in each region worldwide. It also provides data on the top players in the Multi-factor Authentication Market.

Request a sample copy of this report at: https://www.thebrainyinsights.com/enquiry/sample-request/12856

The research is supported by tables and graphs that highlight significant Multi-factor Authentication Market trends, challenges, and drivers. The report also covers the market's current size and growth rate over recent years, along with historical statistics and company profiles of the key players in the global Multi-factor Authentication industry.

The well-known players in the Multi-factor Authentication Market are: 3M, Microsoft Corporation, CA Technologies, Fujitsu, IBM Corporation, HID Global Corporation/ASSA ABLOY AB, VASCO Data Security International Inc., Crossmatch, Gemalto NV, Safran

Read complete report at: https://www.thebrainyinsights.com/enquiry/sample-request/12856

Multi-factor Authentication Market Segmentation

Type Analysis of Multi-factor Authentication Market:

by Authentication Model:

  • Two-factor Authentication
  • Three-factor Authentication
  • Four-factor Authentication
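The second factor in the models above is most often a time-based one-time code. RFC 6238 TOTP, the algorithm behind common authenticator apps, fits in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then dynamic truncation."""
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, T = 59 seconds
print(totp(b"12345678901234567890", at=59))  # -> 287082
```

A server verifies the code by computing the same value for the current time step (and usually one step on either side, to tolerate clock drift).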

Applications Analysis of Multi-factor Authentication Market:

by Industry Vertical:

  • Automotive
  • Consumer
  • Enterprise
  • BFSI
  • Retail
  • Healthcare
  • Government
  • Military
  • Education
  • Legal
  • Others

The comprehensive, section-by-section information on the Multi-factor Authentication Market lets stakeholders track prospects and make informed decisions for sustainable development. The data in the study focuses on technological progress, available capacities, SWOT and PESTEL analysis, and the shifting structure of the market.

Geographically, the report is subdivided into several key regions, with data on manufacturing and consumption patterns, including revenue (million USD), market share, and growth rate, covering 2015 to 2021 and the CAGR for the forecast period 2022 to 2030.

Highlights of the 2022-2030 Multi-factor Authentication Market Report:

* Market dynamics, production, top manufacturers' overall pricing, and development-trend analysis;

* Industry players and a synopsis of the regional and overall market;

* Deep analysis of the most significant market players covered by the global report;

* Evaluation of manufacturing development, leading issues, and methods to mitigate development risk;

* The market strategies being adopted by leading Multi-factor Authentication businesses;

* Analysis of market dynamics: growth drivers, key challenges, restraints, and opportunities;

* The driving and constraining forces in the Multi-factor Authentication Market and their effect on the global industry;

* Global market trends, with statistics from 2013 onward and CAGR projections through 2025;

* Manufacturing cost-structure analysis, industry overview, technical data, and production analysis by type and application;

* Worldwide revenue, industry structure, sector development, and sizing of regional consumption markets.

Volume (units) and revenue (Mn/Bn USD) are broken out by product type; the study is further organized by application, with historical and projected market share and annual growth rates.

If you have a query, ask our expert at: https://www.thebrainyinsights.com/enquiry/request-customization/12856

About The Brainy Insights:

The Brainy Insights is a market research company aimed at providing actionable insights through data analytics to help companies improve their business acumen. We have a robust forecasting and estimation model to meet clients' objectives of high-quality output within a short span of time. We provide both customized (client-specific) and syndicated reports. Our repository of syndicated reports is diverse across all categories and subcategories and across domains. Our customized solutions are tailored to meet clients' requirements, whether they are looking to expand or planning to launch a new product in the global market.

Contact Us

Avinash D

Head of Business Development

Phone: +1-315-215-1633

Email: sales@thebrainyinsights.com

Web: www.thebrainyinsights.com

Thu, 28 Jul 2022 03:11:00 -0500 CDN Newswire https://www.digitaljournal.com/pr/business-outline-of-multi-factor-authentication-market-2022-2030-3mmicrosoft-corporationca-technologiesfujitsuibm-corporation
Professional Service Agreement

Based in Green Bay, Wisc., Jackie Lohrey has been writing professionally since 2009. In addition to writing web content and training manuals for small business clients and nonprofit organizations, including ERA Realtors and the Bay Area Humane Society, Lohrey also works as a finance data analyst for a global business outsourcing company.

Wed, 18 Jul 2018 00:29:00 -0500 https://smallbusiness.chron.com/professional-service-agreement-74431.html
SugarCRM Review

SugarCRM is our 2019 choice for the Best Startup CRM Software. What started as an open source project in 2004 has grown into a leading customer relationship management solution with a reputation for excellent customer service and easy implementation.

While SugarCRM is an excellent choice for any small business in need of a user-friendly CRM product, it's an especially attractive option for tech-savvy startups due to its developer-centric focus and rich set of features. Despite having the appearance of a run-of-the-mill cloud CRM, SugarCRM is astoundingly customizable and offers users more learning resources than nearly any other SaaS CRM out there.

The SaaS solution is also a breath of fresh air when it comes to pricing and product focus. While other business software companies are attempting to pack every type of product into a single bundle and then further confusing matters with elaborate pricing structures and add-ons, SugarCRM is staying squarely in the customer relationship management lane. The product is easy to understand and relatively quick to implement for users who don't need a lot of customization, but it's still powerful enough to fully customize if your dev team wants to get their hands dirty.

To understand how we selected our best picks, you can view our methodology, as well as a comprehensive list of the best CRM software.

Why SugarCRM?

Great for developers and end users

SugarCRM offers startups the opportunity to craft a fully customized customer relationship management solution, without starting from scratch. While many CRM products have an open API, SugarCRM provides a veritable tome of developer resources in the form of learning guides and technical documentation. Startups and other businesses with highly specific needs and lots of in-house tech talent can make this customer relationship management software their own as customizations are virtually unlimited.

Most customer management software solutions have online support communities, but it only takes a little perusing to see that most of these communities consist of hundreds (if not thousands) of questions, plenty of views, and very few answers. A forum is only as helpful as the community that populates it, and that's where SugarCRM really shines. This SaaS product boasts a highly active online dev community, which is invaluable for startups with small tech teams who need extra guidance and support. In fact, SugarCRM even has a separate open source Community Edition, specifically for developers, and it's supported on Linux, Unix, Mac, IBM, and Windows.

SugarCRM is a product built with developers in mind, but it delivers on the user side as well. While the resources and setup process are certainly geared toward the tech set, the overall interface is approachable, intuitive, and well designed, if a bit dated. SugarCRM's customer service is wildly impressive as well. In fact, no other CRM company we reviewed even came close to touching the quality and consistency of SugarCRM's customer service. This is a serious win for startups wanting their tech teams to focus on high-level issues and not day-to-day helpdesk questions, and it's a win for busy sales and marketing pros who need their questions answered quickly and succinctly.


Company Pricing

SugarCRM is a super transparent company and that extends to how products are priced. Like other companies, SugarCRM offers a tiered pricing structure, but unlike other companies, it's blissfully easy to follow and there's a handy comparison matrix with every feature listed for easy side-by-side comparison.

The only semi-strange thing about SugarCRM's pricing model is that every level of service requires a minimum of 10 users, so if the entry-level cost is $40 per user, per month (billed annually) that really means the entry-level cost is $4,800 a year. On the plus side, this stipulation is clearly written out, so there aren't any hidden fees or surprises.

  • Sugar Professional: The starting level for SugarCRM is Sugar Professional, which as mentioned, starts at $40 per user, per month and is billed annually. While this is a higher starting price than many other CRMs, Sugar Professional is incredibly feature-rich. Nearly every sales, marketing, lead management, and customer support feature that SugarCRM has is included at this level of service, however, cloud document storage is limited to 15GB, there's no sandbox included, and most workflow functionality is not available.
  • Sugar Enterprise: The Sugar Enterprise package starts at $65 per user, per month, for a minimum annual total of $7,800. At this level of service, you'll receive more in-depth analytics, like product-level forecasting, SQL based reporting and full workflow functionality. Users can do things like automate sequential and parallel workflows, create reusable business process rules, and create customizable email templates. Sugar Enterprise also gives users access to the SDK dev kit and includes 60GB for cloud document storage as well as 2 sandboxes.
  • Sugar Ultimate: At $150 per user, per month, Sugar Ultimate is one of the more expensive CRM subscriptions out there, especially when you consider the 10-user minimum, which boosts the starting price to $18,000 a year. Sugar Ultimate was built with high-level dev in mind; the subscription includes 250GB cloud document storage and five sandboxes. However, other than these notable upgrades, Sugar Ultimate is nearly identical to Sugar Enterprise, so only businesses that really need the extra storage and testing space should seriously consider it.

The minimum number of users required and the relatively high starting prices places SugarCRM toward the top of the small business SaaS price scale. Many of the other CRMs we reviewed were less expensive and offered similar user functionality. If you're not planning on taking advantage of the dev resources and outstanding customer service this system may simply be too expensive for your SMB.
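The annual minimums implied by the 10-user floor can be checked directly from the per-user list prices quoted above:

```python
MIN_USERS = 10  # every SugarCRM tier requires at least 10 seats

# Per-user monthly list prices, billed annually (as quoted above)
tiers = {"Professional": 40, "Enterprise": 65, "Ultimate": 150}

for name, per_user_month in tiers.items():
    annual_minimum = per_user_month * MIN_USERS * 12
    print(f"Sugar {name}: ${annual_minimum:,}/year minimum")
# Sugar Professional: $4,800/year minimum
# Sugar Enterprise: $7,800/year minimum
# Sugar Ultimate: $18,000/year minimum
```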

Ease of Use

While it's not the prettiest CRM out there, once implemented, SugarCRM is relatively intuitive to use. The top toolbar users see provides shortcuts to frequently used features like email, calendars, and phone (with click-to-call capabilities). Below that, users can navigate through tabs to view Home, Accounts, Contacts, Opportunities, Knowledgebase, Leads, Reports and more. SugarCRM definitely falls flat in the beauty division, especially compared to some other CRMs out there; the dashboard looks like something straight out of the aughts and the overall design is boring at best, but it gets the job done.

The only area in which SugarCRM is not user-friendly (for all users) is during setup. You won't find a nicely curated marketplace of click-to-install extensions with this CRM, and the customization options can be downright overwhelming for non-technical folks. This isn't a download-and-instantly-get-started sort of system, so if you're looking for straight out-of-the-box functionality, keep shopping.

Customer Service

It's impossible to say enough good things about SugarCRM's customer service. This company sets the standard when it comes to how you should treat potential customers and media inquiries alike, and the consistency is unbeatable. In fact, not a single inquiry we issued went unanswered.

Despite having a relatively large user base (2 million users currently deployed), getting a live customer service representative on the phone was startlingly easy. Even calling the main customer service line connects you with a rep almost immediately.

What's even more impressive is that every rep we spoke with could answer highly technical questions. When customer service couldn't immediately outline the answer to our problems (due to complexity), they would offer a short explanation and then direct us to additional relevant training guides, without multiple transfers and holds. On the flip side, when we posed very basic questions implying little technical knowledge, we were met with equally friendly answers, and the reps seemed to naturally scale their responses based on perceived user knowledge, which is incredibly rare.

The ability to get answers to technical questions and basic user questions with equal ease is good news for developers and users, and it's a big part of why we chose SugarCRM as the Best Startup CRM Software.

Company Features

SugarCRM is a developer's dream customer relationship management solution. The open-source, self-service approach is ideal for innovative teams that want to fully customize their own SaaS product without starting completely from scratch. The portals and documentation we highlight here represent a small percentage of what SugarCRM brings to the table.

Here, we've focused on highlighting a few of the most outstanding developer features and end-user focused features this CRM offers: 

  • Limitless third-party integrations: Thanks to the open API, there are limitless third-party integrations available for SugarCRM. From e-commerce and call center capabilities to integrations with legacy systems, there's not much you can't do with this CRM. Plus, the guides on syncing with outside systems and installing extensions are extensive, so your team isn't flying blind. SugarCRM is also JAVA and PHP-friendly.
  • Offline access at every level: A surprising number of CRMs do not offer offline access, or only offer it at high subscription levels, but SugarCRM offers offline access at every price point. This can be a major selling point for mobile users who may not always have service on the road.
  • Customizable campaign management: By accessing the Campaign Wizard, SugarCRM users can build out highly complex (or simple) campaigns that enable users to seamlessly execute, manage and monitor the progress of campaigns across multiple channels. Campaign approvals and routing can be easily maintained, and benchmarks can be created to effectively optimize future campaigns.
  • SDK development kit: For businesses wanting to create white label CRM apps, SugarCRM's SDK developer kit (and all the supporting documentation) is a major advantage. Just be aware that this kit is not included in the entry-level subscription, if you want it you must upgrade to at least Sugar Enterprise.
  • Self-service portal: Users who have at least 100 concurrent portal users, and are subscribed at the Enterprise level or above, gain access to the self-service portal, which allows outside clients to access information and submit orders and requests without going through a sales representative. Customers can also use the portal to create cases and upload information, and the layout and custom fields can easily be changed through Sugar Studio.
  • 26 languages are supported: SugarCRM is currently deployed in 26 different languages across 120 countries worldwide. Many lower-cost CRM solutions are only available in one or two languages, which can lead to internationally based businesses piecemealing CRMs together to create a single system. SugarCRM makes it relatively easy to deploy a single solution across different languages.
  • SQL-based reporting: The Advanced Reports module in SugarCRM allows users to utilize SQL to write queries and generate reports. Admin-level users can also do things like aggregate queries on a single report and add multiple data formats to a single report, both of which are excellent for the visualization/sharing side of things. For startups with in-house data analysts or data scientists, this type of functionality is essential, and as with other SugarCRM modules, the step-by-step documentation on using said modules is comprehensive.
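As a rough illustration of the API-first integration style described in these bullets, here is a sketch of the OAuth2 password-grant body that SugarCRM's REST API documents for authentication. Treat the exact REST version segment and endpoint path as assumptions, since they vary by release.

```python
import json

def build_sugar_auth_payload(username, password, platform="base"):
    """Build the OAuth2 password-grant body for SugarCRM's REST API.

    Field names follow SugarCRM's documented /oauth2/token endpoint; the
    REST version segment in the URL (e.g. /rest/v11) depends on the release.
    """
    return {
        "grant_type": "password",
        "client_id": "sugar",   # stock client id shipped with Sugar
        "client_secret": "",
        "username": username,
        "password": password,
        "platform": platform,   # identifies the consuming application
    }

# POST json.dumps(payload) to {site_url}/rest/v11/oauth2/token, then send the
# returned access_token as the OAuth-Token header on subsequent requests.
payload = build_sugar_auth_payload("admin", "hypothetical-password")
body = json.dumps(payload)
```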

Other Benefits

SugarCRM offers an add-on service called Hint, which costs $15 per user, per month (billed annually with the standard 10-user minimum). It allows users to import a customer's social information just by entering their name and email. Hint also offers additional business intelligence features through advanced analytics and automated data gathering.


Limitations

SugarCRM isn't the best fit for every business. This solution does not offer point-and-click integrations through a marketplace; instead, users must access the API to integrate third-party solutions.

While this isn't necessarily a negative for all businesses, like tech-driven startups, it may prove challenging for businesses that don't have an in-house tech team. Plus, not all businesses require advanced customization options; companies desiring an inexpensive CRM that's ready to use out of the box would be better off with a lightweight solution that's easy to implement and cheap to maintain, because SugarCRM can get complicated and pricey.


FAQs about CRM

What does a CRM system do?

Businesses of all sizes use customer relationship management (CRM) systems to manage customer information, improve communication and accountability across the sales staff, and use real-time analytics to make informed decisions. CRMs also make it easy for businesses to keep track of past communication with current and potential clients, send automated follow-ups, and record calls.

How much does a CRM cost?

Customer relationship management (CRM) software is accessible for every budget. Some CRM software, like Zoho, offers free limited versions for up to 10 users, but even paid versions of Salesforce, the industry standard for enterprises and SMBs alike, start at $25 per user, per month. There are also highly customized CRM solutions that cost thousands of dollars just to implement, but such systems are typically only purchased by enterprise-level businesses.

What is a SaaS CRM?

Software as a Service (SaaS) refers to any software product that's sold on a subscription basis and accessed by users who are not hosting the software on their own servers. Today, most business products are offered as SaaS packages, because SaaS solutions are inexpensive to deploy and easy for SMB clients to manage. Some popular SaaS CRMs include HubSpot, SugarCRM and Salesforce.

Wed, 22 Jun 2022 12:00:00 -0500 https://www.businessnewsdaily.com/10071-best-crm-tools-startup.html
Buckle up for Black Hat 2022: Sessions your security team should not miss


Black Hat is set to return next week with two years of pent up cybersecurity research and discoveries. Here are the talks you don’t want to miss. 

Although cybersecurity's biggest conferences halted their productions these past two years, cybersecurity itself did not take a backseat. Continued advancements in the industry, plus non-stop cybercriminal activity, have left the community with much to discuss as we reflect on the events that have unfolded since the start of the pandemic (think SolarWinds, Colonial Pipeline, and Log4j, just to name a few).

After two years of cancellations and a halting return, Black Hat USA 2022 is set to return to Las Vegas next week in something close to its former glory. And with two years of pent up cybersecurity research and discoveries, there’s lots to look forward to. 

To help you plan your itinerary, we’ve compiled the Black Hat sessions we’re eager to attend, broken down by category.  


Chris Krebs: Black Hat at 25: Where Do We Go From Here?

Thursday at 9:00am

Since being unceremoniously sacked by then-President Trump for confirming that the 2020 presidential election was free of hacking incidents or tampering, Chris Krebs has been on the front lines helping private sector firms address their cyber risks, as a Founding Partner of Krebs Stamos Group (with former Facebook CISO Alex Stamos).

Krebs’ unique perspective as the Federal Government’s former top expert on cybersecurity and a highly valued private sector consultant makes his Black Hat keynote this year a “must see” event. In this talk, Krebs will reflect on where the InfoSec community stands today after convening in the desert for 25 years. His thoughts on where we stand? Not good. Krebs will outline how the industry needs to both shift its mindset and actions in order to take on the next 25 years of InfoSec. 

Kim Zetter: Pre-Stuxnet, Post-Stuxnet: Everything Has Changed, Nothing Has Changed

Thursday at 9:00am

In the "deep perspective" category, Thursday's keynote by award-winning investigative cybersecurity journalist Kim Zetter is another "must see" event at Black Hat. Zetter has covered cybersecurity and national security since 1999, writing for WIRED, Politico, PC World and other publications. She is the author of Countdown to Zero Day, the definitive account of the creation of the Stuxnet malware, which was deployed against Iran.

Zetter’s talk will focus on cyberattacks on critical infrastructure (CI) dating back to Stuxnet in 2010. Despite all of the changes in cybersecurity since Stuxnet was discovered, Zetter argues that nothing has really changed: continuous attacks on CI come as a surprise when the community should have seen these attacks coming. In this talk, Zetter will argue that attacks like Colonial Pipeline were foreseeable, and that the future’s attacks will be no different. 


Cyberwar and the conflict in Ukraine

With a kinetic war ravaging cities and towns in Ukraine, the specter of cyberwar has taken a back seat. But behind the scenes, offensive cyber operations have played a pivotal role in Russia’s war on Ukraine since long before Russian troops rolled across the border this past February. This year’s Black Hat has a number of interesting talks delving into the cyber aspects of the Ukraine conflict. They include: 

Industroyer2: Sandworm’s Cyberwarfare Targets Ukraine’s Power Grid Again

Wednesday at 10:20am

ESET’s Robert Lipovsky and Anton Cherepanov will take us on a tour of the multiple forms of cyberwarfare deployed throughout Russia’s military operations against Ukraine, dating back to the launch of the original Industroyer malware in 2016. Recently, a new version of the malware, known as Industroyer2, was discovered with the same goal: triggering electricity blackouts. In this talk, the ESET researchers will give a technical overview of the new malware, as well as the several other wiper malware families they discovered targeting Ukraine this past year.

Real ‘Cyber War’: Espionage, DDoS, Leaks, and Wipers in the Russian Invasion of Ukraine

Wednesday at 3:20pm

Experts have long agreed that cyber is a new theater of operation in military conflicts, but have disagreed on what form an actual cyberwar might take. Russia’s war on Ukraine is putting much of that debate to rest. In this talk, SentinelOne’s Juan Andres Guerrero-Saade and Tom Hegel will give an overview of what cyberwarfare really is, versus what society’s collective assumptions are about the role of cyber in modern warfare.

They will specifically discuss the strains of wiper malware that have hit Ukraine in 2022, noting that nation-state wiper malware was rare before Russia’s war on Ukraine. This survey of wiper strains will help show what we can realistically expect from cyberwarfare in the modern era. 

Securing open source and the software supply chain

The security of software supply chains and development organizations is another dominant theme at this year’s Black Hat Briefings, with a slew of talks addressing various aspects of supply chain risk and attacks (check out our analysis of the supply chain thread at Black Hat here). If you’re interested in learning more about how malicious actors may target your organization by exploiting weaknesses in your software supply chain, here are some talks to consider: 

Don’t get owned by your dependencies: how Firefox uses in-process sandboxing to protect itself from exploitable libraries (and you can too!)

Thursday at 2:30pm

PhD student Shravan Narayan and research scientist Tal Garfinkel of UC San Diego will focus their Black Hat talk on the threat of memory safety vulnerabilities in third-party C libraries, a major source of zero-day attacks in today’s applications. Their research team used Firefox as a testbed for sandboxing capabilities that could mitigate this threat, work that led them to create RLBox, an open source, language-level sandboxing framework. Their presentation will discuss how they built the tool and how it can be applied to other applications.  
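The core idea behind this style of in-process sandboxing is that any value crossing back from an untrusted library is treated as tainted until an explicit verifier approves it. The Python sketch below is a hypothetical analogue of that pattern, not RLBox’s actual C++ API; the `untrusted_strlen` stand-in and the 4096 bound are invented for illustration.

```python
class Tainted:
    """Wraps a value returned by an untrusted library.

    The raw value stays inaccessible until a verifier callback
    approves it, mirroring the copy-and-verify pattern.
    """
    def __init__(self, value):
        self._value = value  # never exposed directly

    def copy_and_verify(self, verifier):
        # The verifier must return a sanitized copy (or a safe default).
        return verifier(self._value)

def untrusted_strlen(s: str) -> "Tainted":
    # Stand-in for a call into a sandboxed C library.
    return Tainted(len(s))

result = untrusted_strlen("hello")
# Refuse implausible lengths before letting the value flow onward.
length = result.copy_and_verify(lambda n: n if 0 <= n < 4096 else 0)
print(length)  # prints 5
```

The point of the design is that skipping the verification step becomes a visible API violation rather than a silent bug.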

Scaling the security researcher to eliminate OSS vulnerabilities once and for all

Thursday at 3:20pm

Moderne Inc.’s Patrick Way, along with HUMAN Security’s Jonathan Leitschuh and Shyam Mehta, will present a talk on how to remediate open source software (OSS) vulnerabilities in a way that best leverages researchers’ time, knowledge, and resources. The solution they propose is bulk pull request generation, which they will demonstrate on several real-world OSS projects during their presentation. Their goal is to fix vulnerabilities at scale, once and for all rather than one repository at a time. 
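In miniature, the bulk-remediation idea is: encode one mechanical fix, sweep it across many checked-out repositories, and open a pull request wherever it changed something. The sketch below covers only the local rewrite step; the pinned-dependency fix and file layout are invented for illustration, and real tooling (such as Moderne’s OpenRewrite) rewrites syntax trees rather than running regexes.

```python
import re
from pathlib import Path

# Illustrative fix: upgrade a known-vulnerable pinned dependency.
PATTERN = re.compile(r"requests==2\.19\.1")
REPLACEMENT = "requests==2.31.0"

def patch_repo(repo: Path) -> bool:
    """Apply the fix to every requirements file; return True if anything changed."""
    changed = False
    for req in repo.rglob("requirements*.txt"):
        text = req.read_text()
        new_text = PATTERN.sub(REPLACEMENT, text)
        if new_text != text:
            req.write_text(new_text)
            changed = True
    return changed

def bulk_fix(checkouts):
    # In real tooling, each changed repo would become a pull request.
    return [repo.name for repo in checkouts if patch_repo(repo)]
```

A driver script would then branch, commit, and open a PR for each name `bulk_fix` returns.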

Controlling the source: abusing source code management systems

Thursday at 3:20pm

Brett Hawkins, a Red Team Operator on IBM X-Force Red’s Adversary Simulation team, will discuss an overlooked, widely used class of systems that threat actors can exploit to carry out software supply chain attacks: source code management (SCM) systems. His presentation will demonstrate how popular SCM systems can be exploited by attackers. Hawkins will also share an open source tool and defensive guidance that can be used to mitigate this threat. 

Threat hunting

It wouldn’t be Black Hat without discussions of vulnerabilities, threats, attacks and cyber defense. And this year’s show doesn’t disappoint. One clear theme in the schedule of talks is the growing prominence of “right of boom” tools and approaches in the cybersecurity community. A number of talks delve into new approaches to Strengthen the quality of incident response and threat hunting. They include:  

The Open Threat Hunting Framework: Enabling Organizations to Build, Operationalize, and Scale Threat Hunting

Wednesday at 2:30pm

The definition of threat hunting, and its practical application, varies across industries and technologies, making it difficult to start from scratch on a threat hunting program that works best for your organization. Too often, threat hunting also sits above the security “poverty line,” out of reach for organizations without sizable information security budgets and teams.

In this presentation, John Dwyer, Neil Wyler, and Sameer Koranne of IBM Security X-Force will share a new, free threat hunting framework. The team’s hope is that the framework will help organizations of any size build, operationalize, and scale a reliable threat hunting program. 

No One Is Entitled to Their Own Facts, Except in Cybersecurity? Presenting an Investigation Handbook To Develop a Shared Narrative of Major Cyber Incidents

Wednesday at 3:20pm

Do the stories we tell ourselves (and others) about cyber incidents affect our ability to respond to them? Of course they do! In fact, developing a shared understanding of cyber incidents is critical to making sure they don’t happen again. Fortunately, we can look to other industries for the best way to do this.

In this talk, Victoria Ontiveros, a researcher at the Harvard Kennedy School, will discuss the findings of a report by Harvard’s Belfer Center examining how the aviation industry draws lessons from aviation incidents, and how those lessons can be applied to cybersecurity incidents. That work allowed her team, together with Tarah Wheeler, CEO of Red Queen Dynamics, Inc., to create the Major Cyber Incident Investigations Playbook. Ontiveros and Wheeler will present the playbook, which is meant to make cyber incident investigations more actionable across the industry. 

A New Trend for the Blue Team — Using a Practical Symbolic Engine to Detect Evasive Forms of Malware/Ransomware

Wednesday at 4:20pm

Blue Teams have it rough. Constrained by time, staffing and budget, they need to choose carefully when deciding which threats to investigate and how best to direct their reverse engineering talent against suspected malware or ransomware binaries, while also navigating efforts by malicious actors to misdirect or even attack them.

In this talk, TXOne Networks Inc.’s Sheng-Hao Ma, Mars Cheng, and Hank Chen will highlight the work of real-world Blue Teams and share a new tool known as the Practical Symbolic Engine, which they argue enables effective threat hunting in fully static settings, where suspect binaries are analyzed without ever being executed. 
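Static symbolic analysis matters here because evasive malware often gates its payload behind environment checks (is this a VM? is a debugger attached?) that a dynamic sandbox will fail. Exploring both outcomes of every branch exposes the hidden path. The toy interpreter below is not TXOne’s engine, just an illustration of branch exploration over an invented instruction set; a real symbolic engine also carries path constraints and uses an SMT solver to prune infeasible paths.

```python
# Toy instruction set, one tuple per instruction:
#   ("jz", var, target)  - jump to target if var == 0
#   ("payload",)         - the behavior we want to know is reachable
#   ("halt",)            - stop this path
def payload_reachable(program):
    """Depth-first exploration of all branch outcomes from pc=0."""
    worklist = [0]
    seen = set()
    while worklist:
        pc = worklist.pop()
        if pc in seen or pc >= len(program):
            continue
        seen.add(pc)
        op = program[pc]
        if op[0] == "payload":
            return True
        if op[0] == "halt":
            continue
        if op[0] == "jz":
            # The tested variable is symbolic, so both the taken
            # and fall-through paths are feasible: explore each.
            worklist.append(op[2])
        worklist.append(pc + 1)
    return False

# An evasive sample: only drops its payload when a sandbox
# check (symbolic variable "is_vm") is zero.
sample = [
    ("jz", "is_vm", 3),   # 0: if not in a VM, jump to the payload
    ("halt",),            # 1: benign path seen under dynamic analysis
    ("halt",),            # 2
    ("payload",),         # 3
]
print(payload_reachable(sample))  # prints True
```

A dynamic sandbox running this sample would only ever see the benign path at instruction 1; exploring both branch outcomes is what reveals the payload.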

Come say hello to ReversingLabs at the show

The ReversingLabs team will be at Black Hat 2022. Stop by booth 2460 to chat with us. Our team will be giving demos and presentations, plus handing out limited-edition swag. See you there!


*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Carolynn van Arsdale. Read the original post at: https://blog.reversinglabs.com/blog/buckle-up-for-black-hat-2022-here-are-the-sessions-your-security-team-should-not-miss

Published August 4, 2022.

Red Hat announces Matt Hicks as its new CEO

Red Hat Inc. sprang a surprise today as it announced that Matt Hicks has been promoted to be its new president and chief executive officer, effective immediately.

Hicks (pictured) takes the reins from current president and CEO Paul Cormier, who has moved upstairs to become the open-source software provider’s new chairman.

Red Hat is a subsidiary of IBM Corp. and is seen as one of the pioneers of open-source software. Its flagship product is the Red Hat Enterprise Linux operating system, which is widely used to power thousands of enterprise data centers around the world. The company also sells tools for virtualization, software containers, middleware and various other applications. It was acquired by IBM in 2019 for $34 billion.

IBM CEO Arvind Krishna said Red Hat serves as the foundation of thousands of enterprises’ technology strategies thanks to its open-source, hybrid cloud capabilities. “Matt’s deep experience and technical knowledge of Red Hat’s entire portfolio makes him the ideal leader as Red Hat continues to grow and develop innovative, industry-leading software,” he said.

During IBM’s most recent quarterly earnings report, the company called out Red Hat as a key and growing revenue driver. Red Hat’s sales jumped 21% in the quarter from one year prior. IBM said it’s well-positioned to enjoy further growth too, having recently forged a partnership with Nvidia Corp. that will see it create new, artificial intelligence-powered applications that span data centers, clouds and the network edge.

Incoming chief Hicks joined the company back in 2006 as a software developer on one of its information technology teams. He became a key member of the engineering team that created Red Hat OpenShift, the company’s Kubernetes platform, helping expand that offering to cover multiple clouds and other environments.

Hicks has also helped the company to deliver new managed cloud offerings, AI capabilities and cloud-native applications. His importance to the company was on show during the most recent Red Hat Summit in May, where he delivered part of its keynote address.

During that event, Hicks stopped by theCUBE, SiliconANGLE’s mobile livestreaming studio, where he discussed how the company’s commitment to open-source software helps drive innovation.

“There has never been a more exciting time to be in our industry and the opportunity in front of Red Hat is vast,” Hicks said in a statement today. ”I’m ready to roll up my sleeves and prove that open source technology truly can unlock the world’s potential.”

Reaction to the news from industry analysts was positive. Charles King of Pund-IT Inc. told SiliconANGLE that Hicks is an interesting if somewhat predictable choice as Red Hat’s next CEO, given that he has both experience and credibility in terms of engineering leadership and commercial product development.

“The fact he’s a longtime Red Hat management figure is an important point, since it suggests that the company is looking to continue its stable progress and evolution,” King added. “That’s further emphasized by Cormier’s elevation to the chairman role. Overall, Red Hat appears to be in a good place individually and also in its collaboration with IBM.”

Holger Mueller of Constellation Research Inc. said IBM needed to get its choice of CEO right, because it’s betting the farm on the future success of Red Hat’s software offerings.

“Luckily, Red Hat appears to be in good hands with Hicks, who has been steering both product and platform with a good track record,” Mueller said. “This could signal the beginning of a change at Red Hat, where it evolves to become more of a research and development subsidiary than a full-fledged auxiliary with its own go-to-market strategy.”

Cormier had sat in the CEO hot seat since 2020, having taken over from predecessor Jim Whitehurst, who became president of IBM after Red Hat was acquired. In his new role, Cormier will work on scaling up the company, expanding customer adoption, and mergers and acquisitions, IBM said. In a statement, Cormier said Hicks is “absolutely the right person” to serve as Red Hat’s next CEO, noting how his experience across different parts of the business has given him the depth and breadth of knowledge needed to keep Red Hat a hybrid cloud leader.

“He understands our product strategy and the direction the industry is moving in a way that’s second to none,” Cormier said. “As chairman, I’m excited to get to work with our customers, partners and Matt in new ways.”

Photo: SiliconANGLE

