Ensure that you go through these 000-138 test questions before test day.

The 000-138 questions provided by killexams.com are high quality. Simply go to killexams.com and download the free PDF sample questions before you register for the complete Rational RequisitePro question bank. You will be convinced. You can also request a manual update check at any time to verify that your 000-138 PDF questions are current.

Exam Code: 000-138 Practice test 2022 by Killexams.com team
Rational RequisitePro
IBM information source
IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
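
To make the hub-and-spoke pattern concrete, here is a minimal sketch of the reconciliation idea, assuming a hub that holds the desired application state for each spoke; the spoke names, application names and versions are hypothetical, and this is not IBM's actual implementation:

```python
# Minimal hub-and-spoke reconciliation sketch (conceptual, not IBM's implementation).
# The hub holds the desired state; each spoke reports what it is running,
# and the hub issues deploy/update actions until the two match.

desired_state = {"vision-inspect": "v1.4", "order-nlp": "v2.0"}  # hypothetical apps/versions

spokes = {
    "factory-floor-01": {"vision-inspect": "v1.3"},
    "retail-branch-07": {"order-nlp": "v2.0"},
}

def reconcile(spoke_name, running, desired):
    """Return the actions the hub should push to one spoke."""
    actions = []
    for app, version in desired.items():
        if running.get(app) != version:
            actions.append(f"deploy {app}:{version} to {spoke_name}")
    return actions

for name, running_apps in spokes.items():
    for action in reconcile(name, running_apps, desired_state):
        print(action)
```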

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, with everything managed through a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, including data from industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communication between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.
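
A minimal sketch of that privacy pattern, assuming raw sensor readings stay on the edge device and only aggregate statistics travel to the cloud (the threshold and values are illustrative):

```python
# Sketch: keep raw sensor data local; ship only a summary to the hub.
import statistics

raw_readings = [72.1, 71.8, 95.4, 72.3, 72.0]  # stays on the edge device

def summarize(readings):
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "anomalies": sum(1 for r in readings if r > 90.0),  # threshold is illustrative
    }

report = summarize(raw_readings)  # only this dict is sent upstream
print(report)
```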

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered on the quick-service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
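
A toy version of that inspection pipeline might look like the following; the temperature threshold, asset identifier and work-order fields are illustrative assumptions, and handing the record to an asset-management system such as Maximo is only indicated by a comment:

```python
# Toy thermal inspection sketch: flag hot connectors and raise a work order.
import numpy as np

thermal_frame = np.random.default_rng(0).normal(45.0, 5.0, size=(8, 8))  # fake temperature readings in C
thermal_frame[2, 5] = 92.0  # injected hot spot for the example

HOT_THRESHOLD_C = 80.0  # illustrative threshold, not a real specification

def find_hot_spots(frame, threshold):
    rows, cols = np.where(frame > threshold)
    return [{"row": int(r), "col": int(c), "temp_c": float(frame[r, c])}
            for r, c in zip(rows, cols)]

def to_work_order(hot_spot, asset_id="XFMR-07"):
    # In a real deployment this payload would be submitted to an asset-management
    # system (for example via its REST API); here we only build the record.
    return {
        "asset": asset_id,
        "priority": "high",
        "description": f"Hot spot {hot_spot['temp_c']:.1f} C at "
                       f"({hot_spot['row']}, {hot_spot['col']})",
    }

for spot in find_hot_spots(thermal_frame, HOT_THRESHOLD_C):
    print(to_work_order(spot))
```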

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
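
One way to make that monitoring concrete is to compare incoming edge data against the training distribution and trigger retraining when they diverge. The sketch below uses a simple mean-shift score and an arbitrary threshold; production systems use richer drift statistics:

```python
# Sketch: flag model drift by comparing live edge data to the training distribution.
import numpy as np

rng = np.random.default_rng(1)
training_data = rng.normal(loc=0.0, scale=1.0, size=10_000)   # distribution seen at training time
live_data = rng.normal(loc=0.6, scale=1.0, size=2_000)        # recent data arriving at the edge

def drift_score(reference, current):
    """Shift in means, expressed in units of the reference standard deviation."""
    return abs(current.mean() - reference.mean()) / reference.std()

DRIFT_THRESHOLD = 0.5  # arbitrary value for illustration

score = drift_score(training_data, live_data)
if score > DRIFT_THRESHOLD:
    print(f"drift score {score:.2f} exceeds threshold; schedule retraining")
else:
    print(f"drift score {score:.2f} within tolerance")
```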

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s. The current, in-progress fourth revolution, Industry 4.0, promotes digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and the return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling of redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance (see the clustering sketch after this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
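
The data-summarization idea in the third bullet can be sketched with standard clustering: group unlabeled samples by their feature vectors and send only one representative per cluster to the annotators. The embeddings and cluster count below are placeholders, not IBM's pipeline:

```python
# Sketch: pick representative samples for labeling via clustering (data summarization).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
features = rng.normal(size=(5_000, 64))   # stand-in for image embeddings

N_CLUSTERS = 50  # label one image per cluster instead of all 5,000
kmeans = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit(features)

# Choose the sample closest to each centroid as the annotation candidate.
to_label = []
for k, center in enumerate(kmeans.cluster_centers_):
    members = np.where(kmeans.labels_ == k)[0]
    closest = members[np.argmin(np.linalg.norm(features[members] - center, axis=1))]
    to_label.append(int(closest))

print(f"{len(to_label)} samples selected for annotation out of {len(features)}")
```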

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
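
A bare-bones sketch of the federated idea, using simple federated averaging: each spoke trains locally on its private data, and only the resulting weights are aggregated at the hub. Real systems weight the average by sample counts and add privacy protections; the weight shapes here are arbitrary:

```python
# Sketch: federated averaging. Each spoke trains locally; only weights are shared.
import numpy as np

rng = np.random.default_rng(3)

def local_update(global_weights, spoke_id):
    # Stand-in for local training on the spoke's private data.
    return global_weights + rng.normal(scale=0.01, size=global_weights.shape)

def federated_round(global_weights, spoke_ids):
    updates = [local_update(global_weights, s) for s in spoke_ids]
    return np.mean(updates, axis=0)  # aggregate on the hub; raw data never moves

weights = np.zeros(10)
for round_num in range(5):
    weights = federated_round(weights, spoke_ids=["plant-a", "plant-b", "plant-c"])
print("weights after 5 rounds:", np.round(weights, 4))
```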

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

Compare the status quo method of performing Day-2 operations, with centralized applications and a centralized data plane, to the more efficient managed hub-and-spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate - optimized to meet data lifecycle, regulatory & compliance as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further Strengthen the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters, for example from several hundred million to a few million (a pruning sketch follows this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
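
For the compression point in item 3, the simplest illustration is magnitude pruning: zero out the smallest weights so the edge copy of the model carries far fewer effective parameters. The sparsity level below is arbitrary and the array stands in for a real model layer:

```python
# Sketch: shrink a model's edge footprint by magnitude pruning its weights.
import numpy as np

rng = np.random.default_rng(4)
weights = rng.normal(size=1_000_000)  # stand-in for a large layer's parameters

SPARSITY = 0.9  # keep only the largest 10% of weights (illustrative)

cutoff = np.quantile(np.abs(weights), SPARSITY)
pruned = np.where(np.abs(weights) >= cutoff, weights, 0.0)

kept = int(np.count_nonzero(pruned))
print(f"parameters kept: {kept} of {weights.size} "
      f"({100 * kept / weights.size:.1f}%)")
```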

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still warrant a server but only as a single-node deployment rather than a cluster.

For smaller footprints such as industrial PCs or computer vision boards (for example NVidia Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
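
In practice, that kind of slice management starts with checking each slice's observed metrics against its service targets. The toy check below uses made-up slice names and targets; a real controller would feed such violations into an AI/ML-driven reallocation step:

```python
# Toy sketch: check 5G network slices against their QoS targets.
slice_targets = {
    "low-latency-slice": {"latency_ms": 10, "bandwidth_mbps": 50},
    "high-bandwidth-slice": {"latency_ms": 50, "bandwidth_mbps": 500},
}

observed = {
    "low-latency-slice": {"latency_ms": 14, "bandwidth_mbps": 60},
    "high-bandwidth-slice": {"latency_ms": 35, "bandwidth_mbps": 480},
}

def violations(targets, metrics):
    found = []
    for name, target in targets.items():
        m = metrics[name]
        if m["latency_ms"] > target["latency_ms"]:
            found.append((name, "latency", m["latency_ms"], target["latency_ms"]))
        if m["bandwidth_mbps"] < target["bandwidth_mbps"]:
            found.append((name, "bandwidth", m["bandwidth_mbps"], target["bandwidth_mbps"]))
    return found

for slice_name, metric, value, target in violations(slice_targets, observed):
    print(f"{slice_name}: {metric} {value} misses target {target}; candidate for reallocation")
```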

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Units) and CU (Centralized Unit) from a Baseband Unit in 4G and connects them with open interfaces.

An O-RAN system is more flexible. It uses AI to establish connections over open interfaces and to optimize how a device is categorized by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. In either case, this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub-and-spoke model.

IBM's focus on “edge in” means it can provide infrastructure such as software-defined storage for a federated-namespace data lake that spans other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That has little value and would simply function as a hub to spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

IBM reveals ways to use native source-code management functionality in attacks

IBM’s pen testing group X-Force Red released a new source-code management (SCM) attack simulation toolkit Tuesday, with new research revealing ways to use native SCM functionality in attacks. 

Brett Hawkins of X-Force Red will present the research at Black Hat later in the week. 

Source-code management tools like GitHub are more than just a home for intellectual property. They are a way to install code en masse on every system that code reaches. Two of the most devastating attacks in history, NotPetya and SolarWinds, came out of malicious code inserted into updates, then uploaded to clients. Sloppy SCM users sometimes leave API keys and passwords exposed in code, giving SCM dorks access to other systems; from there, SCM may be connected to other DevOps servers and become a pivot point.
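
The exposed-credential problem is straightforward to illustrate. A crude scanner like the sketch below, whose regular expressions are simplistic and would miss many real token formats, is roughly what both attackers and defenders run against repositories:

```python
# Crude sketch of scanning source files for exposed credentials.
import re
from pathlib import Path

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_file(path):
    hits = []
    try:
        text = Path(path).read_text(errors="ignore")
    except OSError:
        return hits
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((str(path), name, match.group(0)[:20] + "..."))
    return hits

for file_path in Path(".").rglob("*.py"):  # scope and file types are illustrative
    for hit in scan_file(file_path):
        print(hit)
```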

“There's not really any research out there on attacking and defending these systems,” Hawkins told SC Media. 

At present, most attacks on SCM are by bad actors searching for interesting exposed files, repositories and content. But Hawkins developed more sophisticated attacks leading to privilege escalation, stealth and persistence to use in pen tests. 

That might mean using administrator access to create or duplicate tokens used to access the SCM. Alternatively, on GitHub, that might mean clicking a single button to impersonate users. 

Hawkins jammed his research and reconnaissance tools into SCMKit, the toolkit released Tuesday.  

“There's nothing out there that exists like SCM-Kit right now. It allows you to do a bunch of different attack scenarios including reconnaissance, privilege escalation, and persistence against GitHub Enterprise, GitLab enterprise and Bitbucket,” said Hawkins. “I’m hoping to get some good feedback from the infosec community.”

Tech, Cyber Companies Launch Security Standard to Monitor Hacking Attempts

A group of 18 tech and cyber companies said Wednesday they are building a common data standard for sharing cybersecurity information. They aim to fix a problem for corporate security chiefs who say that cyber products often don’t integrate, making it hard to fully assess hacking threats.

Amazon.com Inc.’s AWS cloud business, cybersecurity company Splunk Inc. and International Business Machines Corp.’s security unit, among others, launched the Open Cybersecurity Schema Framework, or OCSF, Wednesday at the Black Hat USA cybersecurity conference in Las Vegas.

Products and services that support the OCSF specifications would be able to collate and standardize alerts from different cyber monitoring tools, network loggers and other software, to simplify and speed up the interpretation of that data, said Patrick Coughlin, Splunk’s group vice president of the security market. “Folks expect us to figure this out. They’re saying, ‘We’re tired of complaining about the same challenges.’”

Other companies involved in the initiative are CrowdStrike Holdings Inc., Rapid7 Inc., Palo Alto Networks Inc., Cloudflare Inc., DTEX Systems Inc., IronNet Inc., JupiterOne Inc., Okta Inc., Salesforce Inc., Securonix Inc., Sumo Logic Inc., Tanium Inc., Zscaler Inc. and Trend Micro Inc.

Chief information security officers have grumbled about proprietary cyber products that force security teams to integrate data manually. More than three-quarters of 280 security professionals surveyed want to see vendors build open standards into their products to enable interoperability, according to research from the Information Systems Security Association and TechTarget Inc.’s analyst unit published in July.

Often, cyber teams build several dashboards to monitor items such as attempted logins and unusual network activity. To get a full picture of events, they frequently have to write custom code to reformat data for one dashboard or analysis tool or another, said Mark Ryland, director of the office of the CISO at AWS. “There’s a lot of custom software out there in the security world,” he said.

Products that support OCSF would be able to share information in one dashboard without that manual labor, Mr. Ryland said. “We’ll benefit from this,” he said of AWS’s internal security teams.
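
The intent of such a schema is easiest to see as a mapping layer. In the sketch below, two vendor-specific login events are normalized into one common shape; the field names are placeholders, not the actual OCSF specification:

```python
# Sketch: normalize vendor-specific login events into one common shape.
# Field names here are illustrative placeholders, not the real OCSF schema.

def normalize_vendor_a(event):
    return {
        "activity": "authentication",
        "user": event["uname"],
        "src_ip": event["ip"],
        "outcome": "failure" if event["status"] == "DENIED" else "success",
        "time": event["ts"],
    }

def normalize_vendor_b(event):
    return {
        "activity": "authentication",
        "user": event["account"]["name"],
        "src_ip": event["network"]["client_address"],
        "outcome": event["result"].lower(),
        "time": event["@timestamp"],
    }

events = [
    normalize_vendor_a({"uname": "jdoe", "ip": "10.0.0.5", "status": "DENIED",
                        "ts": "2022-08-10T12:00:00Z"}),
    normalize_vendor_b({"account": {"name": "jdoe"},
                        "network": {"client_address": "10.0.0.5"},
                        "result": "Failure", "@timestamp": "2022-08-10T12:00:03Z"}),
]
print(events)  # both records now share one shape and can feed a single dashboard
```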

Tech providers writing the initial version of OCSF expect to incorporate it into their products in the coming months, said Chris Niggel, regional chief security officer for the Americas at identity management company Okta.

Internally, Okta uses cloud services from Alphabet Inc.’s Google, human resources company Workday Inc., communications tool Slack Inc. and others, Mr. Niggel said. “Our incident response team has to normalize all that information so they can see what’s happening,” he said.

With data about potential hacking activity in one format, internal teams will be able to recognize attacks earlier, he said. Plus, companies will be able to share incident data with each other faster, he added.

The OCSF standard and documentation will be on the GitHub open-source repository. Early work on the project began years ago at Symantec, now part of infrastructure technology company Broadcom Inc.

NetSPI rolls out 2 new open-source pen-testing tools at Black Hat

Preventing and mitigating cyberattacks is a day-to-day, sometimes hour-to-hour, endeavor for enterprises. New, more advanced techniques are revealed constantly, especially with the rise in ransomware-as-a-service, crime syndicates and cybercrime commoditization. Likewise, statistics are seemingly endless, with a regular churn of new and updated reports and research studies revealing worsening conditions. 

According to Fortune Business Insights, the worldwide information security market will reach just around $376 billion in 2029. And, IBM research revealed that the average cost of a data breach is $4.35 million.

The harsh truth is that many organizations are exposed due to common software, hardware or organizational process vulnerabilities, and 93% of all networks are open to breaches, according to another recent report. 

Cybersecurity must therefore be a team effort, said Scott Sutherland, senior director at NetSPI, which specializes in enterprise penetration testing and attack-surface management. 

The company today announced the release of two new open-source tools for the information security community: PowerHuntShares and PowerHunt. Sutherland is demoing both at Black Hat USA this week. 

These new tools are aimed at helping defense, identity and access management (IAM) and security operations center (SOC) teams discover vulnerable network shares and improve detections, said Sutherland. 

They have been developed — and released in an open-source capacity — to “help ensure our penetration testers and the IT community can more effectively identify and remediate excessive share permissions that are being abused by bad actors like ransomware groups,” said Sutherland. 

He added, “They can be used as part of a regular quarterly cadence, but the hope is they’ll be a starting point for companies that lacked awareness around these issues before the tools were released.” 

Vulnerabilities revealed (by the good guys)

The new PowerHuntShares capability inventories, analyzes and reports excessive privilege assigned to server message block (SMB) shares on Microsoft’s Active Directory (AD) domain-joined computers. 

SMB allows applications on a computer to read and write to files and to request services from server programs in a computer network.

NetSPI’s new tool helps address risks of excessive share permissions in AD environments that can lead to data exposure, privilege escalation and ransomware attacks within enterprise environments, explained Sutherland. 

“PowerHuntShares is focused on identifying shares configured with excessive permissions and providing data insight to understand how they are related to each other, when they were introduced into the environment, who owns them and how exploitable they are,” said Sutherland. 
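
Conceptually, that analysis boils down to flagging shares whose access-control entries grant broad groups write or full control. A simplified pass over an exported permissions CSV might look like the sketch below; the column names and file path are hypothetical, and PowerHuntShares' real output format may differ:

```python
# Simplified sketch: flag SMB shares with excessive permissions from an exported CSV.
# Column names and the file path are hypothetical placeholders.
import csv

BROAD_PRINCIPALS = {"Everyone", "Authenticated Users", "Domain Users"}
RISKY_RIGHTS = {"Write", "FullControl"}

def find_excessive(csv_path):
    flagged = []
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["Principal"] in BROAD_PRINCIPALS and row["Rights"] in RISKY_RIGHTS:
                flagged.append((row["ComputerName"], row["ShareName"],
                                row["Principal"], row["Rights"]))
    return flagged

if __name__ == "__main__":
    for entry in find_excessive("share_permissions.csv"):
        print("excessive:", entry)
```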

For instance, according to a recent study from cybersecurity company ExtraHop, SMB was the most prevalent protocol exposed in many industries: 34 out of 10,000 devices in financial services; seven out of 10,000 devices in healthcare; and five out of 10,000 devices in state, local and education (SLED).

Enhanced threat hunting

Meanwhile, PowerHunt is a modular threat-hunting framework that identifies signs of compromise based on artifacts from common MITRE ATT&CK techniques. It also detects anomalies and outliers specific to the target environment.

The new tool can be used to quickly collect artifacts commonly associated with malicious behavior, explained Sutherland. It automates the collection of artifacts at scale using Microsoft PowerShell and by performing initial analysis. It can also output .csv files that are easy to consume. This allows for additional triage and analysis through other tools and processes.

“While [the PowerHunt tool] calls out suspicious artifacts and statistical anomalies, its greatest value is simply producing data that can be used by other tools during threat-hunting exercises,” said Sutherland.

NetSPI offers penetration testing-as-a-service (PTaaS) through its ResolveTM penetration testing and vulnerability management platform. With this, its experts perform deep-dive manual penetration testing across application, network and cloud attack surfaces, said Sutherland. Historically, they test more than one million assets to find 4 million unique vulnerabilities.

The company’s global penetration testing team has also developed several open-source tools, including PowerUpSQL and MicroBurst. 

Sutherland underscored the importance of open-source tool development and said that NetSPI actively encourages innovation through collaboration.

Open source offers “the ability to use tools for free to better understand a concept or issue,” he said. And, while most open-source tools may not end up being an enterprise solution, they can bring awareness to specific issues and “encourage exploration of long-term solutions.” 

The ability to customize code is another advantage — anyone can download an open-source project and customize it to their needs. 

Ultimately, open source offers an “incredibly powerful” ability, said Sutherland. “It’s great to be able to learn from someone else’s code, build off that idea, collaborate with a complete stranger and produce something new that you can share with thousands of people instantly around the world.”

Specifically relating to PowerHuntShares and PowerHunt, he urged the security community to check them out and contribute to them. 

“This will allow the community to better understand our SMB share attack surfaces and improve strategies for remediation — together,” he said.

How 'living architecture' could help the world avoid a soul-deadening digital future

My first Apple laptop felt like a piece of magic made just for me—almost a part of myself. The rounded corners, the lively shading, the delightful animations. I had been using Windows my whole life, starting on my family's IBM 386, and I never thought using a computer could be so fun.

Indeed, Apple co-founder Steve Jobs said that computers were like bicycles for the mind, extending your possibilities and helping you do things not only more efficiently but also more beautifully. Some technologies seem to unlock your humanity and make you feel inspired and alive.

But not all technologies are like this. Sometimes devices do not work reliably or as expected. Often you have to change to conform to the limitations of a system, as when you need to speak differently so a digital voice assistant can understand you. And some platforms bring out the worst in people. Think of anonymous flame wars.

As a researcher who studies technology, design and ethics, I believe that a hopeful way forward comes from the world of architecture. It all started decades ago with an architect's observation that newer buildings tended to be lifeless and depressing, even if they were made using ever fancier tools and techniques.

Tech's wear on humanity

The problems with technology are myriad and diffuse, and widely studied and reported: from short attention spans and tech neck to clickbait and AI bias to trolling and shaming to conspiracy theories and misinformation.

As people increasingly live online, these issues may only get worse. Some recent visions of the metaverse, for example, suggest that humans will come to live primarily in virtual spaces. Already, people worldwide spend on average seven hours per day on digital screens—nearly half of waking hours.

While public awareness of these issues is on the rise, it's not clear whether or how tech companies will be able to address them. Is there a way to ensure that future technologies are more like my first Apple laptop and less like a Twitter pile-on?

Over the past 60 years, the architectural theorist Christopher Alexander pursued questions similar to these in his own field. Alexander, who died in March 2022 at age 85, developed a theory of design that has made inroads in architecture. Translated to the technology field, this theory can provide the principles and process for creating technologies that unlock people's humanity rather than suppress it.

Christopher Alexander discussing place, repetition and adaptation.

How good design is defined

Technology design is beginning to mature. Tech companies and product managers have realized that a well-designed user interface is essential for a product's success, not just nice to have.

As professions mature, they tend to organize their knowledge into concepts. Design patterns are a great example of this. A design pattern is a reusable solution to a problem that designers need to solve frequently.

In user experience design, for instance, such problems include helping users enter their shipping information or get back to the home page. Instead of reinventing the wheel every time, designers can apply a design pattern: clicking the logo at the upper left always takes you home. With design patterns, life is easier for designers, and the end products are better for users.

Design patterns facilitate good design in one sense: They are efficient and productive. Yet they do not necessarily lead to designs that are good for people. They can be sterile and generic. How, exactly, to avoid that is a major challenge.

A seed of hope lies in the very place where design patterns originated: the work of Christopher Alexander. Alexander dedicated his life to understanding what makes an environment good for humans—good in a deep, moral sense—and how designers might create structures that are likewise good.

His work on design patterns, dating back to the 1960s, was his initial effort at an answer. The patterns he developed with his colleagues included details like how many stories a good building should have and how many light sources a good room should have.

But Alexander found design patterns ultimately unsatisfying. He took that work further, eventually publishing his theory in his four-volume magnum opus, "The Nature of Order."

While Alexander's work on design patterns is very well known—his 1977 book "A Pattern Language" remains a bestseller—his later work, which he deemed much more important, has been largely overlooked. No surprise, then, that his deepest insights have not yet entered technology design. But if they do, good design could come to mean something much richer.

On creating structures that foster life

Architecture was getting worse, not better. That was Christopher Alexander's conclusion in the mid-20th century.

Much modern architecture is inert and makes people feel dead inside. It may be sleek and intellectual—it may even win awards—but it does not help generate a feeling of life within its occupants. What went wrong, and how might architecture correct its course?

Motivated by this question, Alexander conducted numerous experiments throughout his career, going deeper and deeper. Beginning with his design patterns, he discovered that the designs that stirred up the most feeling in people, what he called living structure, shared certain qualities. This wasn't just a hunch, but a testable empirical theory, one that he validated and refined from the late 1970s until the turn of the century. He identified 15 qualities, each with a technical definition and many examples.

The qualities are:

  • Levels of scale
  • Strong centers
  • Boundaries
  • Alternating repetition
  • Positive space
  • Good shape
  • Local symmetries
  • Deep interlock and ambiguity
  • Contrast
  • Gradients
  • Roughness
  • Echoes
  • The void
  • Simplicity and inner calm
  • Not-separateness

As Alexander writes, living structure is not just pleasant and energizing, though it is also those. Living structure reaches into humans at a transcendent level—connecting people with themselves and with one another—with all humans across centuries and cultures and climates.

Yet modern architecture, as Alexander showed, has very few of the qualities that make living structure. In other words, over the 20th century architects taught one another to do it all wrong. Worse, these errors were crystallized in building codes, zoning laws, awards criteria and education. He decided it was time to turn things around.

Alexander's ideas have been hugely influential in architectural theory and criticism. But the world has not yet seen the paradigm shift he was hoping for.

By the mid-1990s, Alexander recognized that for his aims to be achieved, there would need to be many more people on board—and not just architects, but all sorts of planners, infrastructure developers and everyday people. And perhaps other fields besides architecture. The digital revolution was coming to a head.

Alexander's invitation to technology designers

As Alexander doggedly pursued his research, he started to notice the potential for digital technology to be a force for good. More and more, digital technology was becoming part of the human environment—becoming, that is, architectural.

Meanwhile, Alexander's ideas about design patterns had entered the world of technology design as a way to organize and communicate design knowledge. To be sure, this older work of Alexander's proved very valuable, particularly to software engineering.

Because of his fame for design patterns, in 1996 Alexander was invited to deliver a keynote address at a major software engineering conference sponsored by the Association for Computing Machinery.

In his talk, Alexander remarked that the tech industry was making great strides in efficiency and power but perhaps had not paused to ask: "What are we supposed to be doing with all these programs? How are they supposed to help the Earth?"

"For now, you're like guns for hire," Alexander said. He invited the audience to make technologies for good, not just for pay.

Loosening the design process

In "The Nature of Order," Alexander defined not only his theory of living structure, but also a process for creating such structure.

In short, this process involves democratic participation and springs from the bottom up in an evolving progression incorporating the 15 qualities of living structure. The end result isn't known ahead of time—it's adapted along the way. The term "organic" comes to mind, and this is appropriate, because nature almost invariably creates living structure.

But typical architecture—and design in many fields—is, in contrast, top-down and strictly defined from the outset. In this machinelike process, rigid precision is prioritized over local adaptability, project roles are siloed apart and the emphasis is on commercial value and investment over anything else. This is a recipe for lifeless structure.

Alexander's work suggests that if living structure is the goal, the design process is the place to focus. And the technology field is starting to show inklings of change.

In project management, for example, the traditional waterfall approach followed a rigid, step-by-step schedule defined upfront. The turn of the century saw the emergence of a more dynamic approach, dubbed agile, which allows for more adaptability through frequent check-ins and prioritization, progressing in "sprints" of one to two weeks rather than longer phases.

And in design, the human-centered design paradigm is likewise gaining steam. Human-centered design emphasizes, among other elements, continually testing and refining small changes with respect to design goals.

A design process that promotes life

However, Alexander would say that both these trajectories are missing some of his deeper insights about living structure. They may spark more purchases and increase stock prices, but these approaches will not necessarily create technologies that are good for each person and good for the world.

Yet there are some emerging efforts toward this deeper end. For example, design pioneer Don Norman, who coined the term "user experience," has been developing his ideas on what he calls humanity-centered design. This goes beyond human-centered design to focus on ecosystems, take a long-term view, incorporate human values and involve stakeholder communities along the way.

The vision of humanity-centered design calls for sweeping changes in the technology field. This is precisely the kind of reorientation that Alexander was calling for in his 1996 keynote speech. Just as design patterns suggested in the first place, the technology field doesn't need to reinvent the wheel. Technologists and people of all stripes can build up from the tremendous, careful work that Alexander has left.



This article is republished from The Conversation under a Creative Commons license.

IBM aims for immediate quantum advantage with error mitigation technique

You don’t have to be a physicist to know that noise and quantum computing don’t mix. Any noise, movement or temperature swing causes qubits – the quantum computing equivalent to a binary bit in classical computing – to fail.

That’s one of the main reasons quantum advantage (the point at which quantum surpasses classical computing) and quantum supremacy (when quantum computers solve a problem not feasible for classical computing) feel like longer-term goals for an emerging technology.  It’s worth the wait, though, as quantum computers promise exponential speedups over classical computing, which tops out at supercomputing.  However, due to the intricacies of quantum physics (e.g., entanglement), quantum computers are also more prone to errors from environmental factors than supercomputers or high-performance computers.

Quantum errors arise from what’s known as decoherence, a process that occurs when noise or nonoptimal temperatures interfere with qubits, changing their quantum states and causing information stored by the quantum computer to be lost.

The road(s) to quantum

Many enterprises view quantum computing technology as a zero-sum scenario: if you want value from a quantum computer, you need fault-tolerant quantum processors and a multitude of qubits. While we wait, we’re stuck in the NISQ era — noisy intermediate-scale quantum — where quantum hasn’t surpassed classical computers.

That’s an impression IBM hopes to change.

In a blog published today by IBM, its quantum team (Kristan Temme, Ewout van den Berg, Abhinav Kandala and Jay Gambetta) writes that the history of classical computing is one of incremental advances. 

“Although quantum computers have seen tremendous improvements in their scale, quality and speed in recent years, such a gradual evolution seems to be missing from the narrative,” the team wrote.  “However, recent advances in techniques we refer to broadly as quantum error mitigation allow us to lay out a smoother path towards this goal. Along this path, advances in qubit coherence, gate fidelities and speed immediately translate to measurable advantage in computation, akin to the steady progress historically observed with classical computers.”

Finding value in noisy qubits

In a move to get a quantum advantage sooner – and in incremental steps – IBM claims to have created a technique that’s designed to tap more value from noisy qubits and move away from NISQ.

Instead of focusing solely on fault-tolerant computers, IBM's goal is continuous and incremental improvement, Jerry Chow, the director of hardware development for IBM Quantum, told VentureBeat.

To mitigate errors, Chow points to IBM's new probabilistic error cancellation, a technique designed to invert the effect of noise in quantum circuits and recover error-free results, even though the circuits themselves are noisy. It does bring a runtime tradeoff, he said, because you end up running many more circuits in order to gain insight into the noise causing the errors.

The goal of the new technique is to provide a step, rather than a leap, towards quantum supremacy. It's "a near-term solution," Chow said, and part of a suite of techniques that will help IBM learn about error correction through error mitigation. "As you increase the runtime, you learn more as you run more qubits," he explained.
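
To make the mechanics concrete, here is a toy numerical sketch of probabilistic error cancellation for a single qubit under a simple depolarizing noise model. It is not IBM's implementation and does not use the Qiskit Runtime API; the noise strength, the quasi-probability decomposition and the Monte Carlo loop are illustrative assumptions. The point it demonstrates is that sampling corrective Pauli operations with quasi-probabilities (some of them negative) removes the bias noise introduces in an expectation value, at the price of a sampling overhead, which is the runtime tradeoff Chow describes.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.05                      # single-qubit depolarizing probability (toy noise model)
f = 1.0 - 4.0 * p / 3.0       # factor by which <Z> shrinks under this channel

# Quasi-probability decomposition of the inverse channel as a Pauli channel:
# eta is the (negative) weight placed on each of the X, Y and Z corrections.
eta = (1.0 - 1.0 / f) / 4.0
quasi = np.array([1.0 - 3.0 * eta, eta, eta, eta])   # weights for I, X, Y, Z
gamma = np.abs(quasi).sum()                          # sampling overhead
probs = np.abs(quasi) / gamma
signs = np.sign(quasi)

def run_shot(correct: bool) -> float:
    """One shot of 'prepare |0>, noisy identity, optional sampled correction, measure Z'."""
    z = f                                    # Bloch z-component after depolarizing noise
    weight = 1.0
    if correct:
        k = rng.choice(4, p=probs)           # sample an I, X, Y or Z correction
        if k in (1, 2):                      # X or Y flips the Z expectation
            z = -z
        weight = signs[k] * gamma            # quasi-probability reweighting
    outcome = 1.0 if rng.random() < (1.0 + z) / 2.0 else -1.0
    return weight * outcome

shots = 100_000
noisy = np.mean([run_shot(False) for _ in range(shots)])
mitigated = np.mean([run_shot(True) for _ in range(shots)])
print(f"ideal <Z> = 1.0, noisy ~ {noisy:.3f}, PEC-mitigated ~ {mitigated:.3f}, gamma = {gamma:.3f}")
```

Running the sketch shows the unmitigated estimate settling near 0.93 while the mitigated one converges to the ideal value of 1.0; the cost is that the estimator's variance grows with the square of gamma, so more shots are needed for the same precision.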

Chow said that while IBM continues to scale its quantum platform, error mitigation offers an incremental step along the way. Last year, IBM unveiled its 127-qubit Eagle processor, which is capable of running quantum circuits that can't be replicated classically. Based on the quantum roadmap it laid out in May, IBM is on track to reach quantum devices with more than 4,000 qubits in 2025.

Not an either-or scenario: Quantum starts now

Probabilistic error cancellation represents a shift for IBM and the quantum field overall. Rather than relying solely on experiments to achieve full error correction under certain circumstances, IBM has focused on a continuous push to address quantum errors today while still moving toward fault-tolerant machines, Chow said. “You need high-quality hardware to run billions of circuits. Speed is needed. The goal is not to do error mitigation  long-term. It’s not all or nothing.”

IBM's quantum computing bloggers add that the quantum error mitigation technique "is the continuous path that will take us from today's quantum hardware to tomorrow's fault-tolerant quantum computers. This path will let us run larger circuits needed for quantum advantage, one hardware improvement at a time."


Source: Dan Muse, VentureBeat, July 19, 2022: https://venturebeat.com/quantum-computing/ibm-aims-for-immediate-quantum-advantage-with-error-mitigation-technique/
Killexams : Amazon, IBM Move Swiftly on Post-Quantum Cryptographic Algorithms Selected by NIST

A month after the National Institute of Standards and Technology (NIST) revealed the first quantum-safe algorithms, Amazon Web Services (AWS) and IBM have swiftly moved forward. Google was also quick to outline an aggressive implementation plan for its cloud service that it started a decade ago.

It helps that IBM researchers contributed to three of the four algorithms, while AWS had a hand in one. Google is also among those who contributed to SPHINCS+.

A long process that started in 2016 with 69 original candidates ends with the selection of four algorithms that will become NIST standards, which will play a critical role in protecting encrypted data from the vast power of quantum computers.

NIST's four choices include CRYSTALS-Kyber, a public-private key-encapsulation mechanism (KEM) for general asymmetric encryption, such as when connecting to websites. For digital signatures, NIST selected CRYSTALS-Dilithium, FALCON, and SPHINCS+. NIST will add a few more algorithms to the mix in two years.

Vadim Lyubashevsky, a cryptographer who works in IBM's Zurich Research Laboratories, contributed to the development of CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Lyubashevsky was predictably pleased by the algorithms selected, but he had only anticipated NIST would pick two digital signature candidates rather than three.

Ideally, NIST would have chosen a second key establishment algorithm, according to Lyubashevsky. "They could have chosen one more right away just to be safe," he told Dark Reading. "I think some people expected McEliece to be chosen, but maybe NIST decided to hold off for two years to see what the backup should be to Kyber."

IBM's New Mainframe Supports NIST-Selected Algorithms

After NIST identified the algorithms, IBM moved forward by specifying them for its recently launched z16 mainframe. IBM introduced the z16 in April, calling it the "first quantum-safe system," enabled by its new Crypto Express 8S card and APIs that provide access to the NIST-selected algorithms.

IBM had championed three of the algorithms NIST selected and had already built them into the z16, which it unveiled before the NIST decision. Last week, the company made it official that the z16 supports the selected algorithms.

Anne Dames, an IBM distinguished engineer who works on the company's z Systems team, explained that the Crypto Express 8S card could implement various cryptographic algorithms. Nevertheless, IBM was betting on CRYSTALS-Kyber and Dilithium, according to Dames.

"We are very fortunate in that it went in the direction we hoped it would go," she told Dark Reading. "And because we chose to implement CRYSTALS-Kyber and CRYSTALS-Dilithium in the hardware security module, which allows clients to get access to it, the firmware in that hardware security module can be updated. So, if other algorithms were selected, then we would add them to our roadmap for inclusion of those algorithms for the future."

A software library on the system allows application and infrastructure developers to incorporate APIs so that clients can generate quantum-safe digital signatures for both classic computing systems and quantum computers.

"We also have a CRYSTALS-Kyber interface in place so that we can generate a key and provide it wrapped by a Kyber key so that could be used in a potential key exchange scheme," Dames said. "And we've also incorporated some APIs that allow clients to have a key exchange scheme between two parties."

Dames noted that clients might use Dilithium to generate digital signatures on documents. "Think about code signing servers, things like that, or documents signing services, where people would like to actually use the digital signature capability to ensure the authenticity of the document or of the code that's being used," she said.
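
A minimal sketch of that signing flow, again using the open-source liboqs Python bindings rather than IBM's actual interfaces, might look like the following; the "Dilithium3" parameter set and the artifact name are illustrative assumptions.

```python
# pip install liboqs-python   (assumed; 'oqs' and "Dilithium3" come from the
# Open Quantum Safe project, not from IBM's code-signing or document-signing services)
import oqs

artifact = b"build-artifact-v1.2.3"   # e.g., a firmware image or a document digest

# Signing side: generate a Dilithium key pair and sign the artifact.
with oqs.Signature("Dilithium3") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(artifact)

# Verifying side: anyone holding the public key can check authenticity.
with oqs.Signature("Dilithium3") as verifier:
    assert verifier.verify(artifact, signature, public_key)

print("Dilithium signature verified; artifact is authentic")
```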

AWS Engineers Algorithms Into Services

During Amazon's AWS re:Inforce security conference last week in Boston, the cloud provider emphasized its post-quantum cryptography (PQC) efforts. According to Margaret Salter, director of applied cryptography at AWS, Amazon is already engineering the NIST standards into its services.

During a breakout session on AWS' cryptography efforts at the conference, Salter said AWS had implemented an open-source hybrid post-quantum key exchange in s2n-tls, its implementation of the Transport Layer Security (TLS) protocol used across different AWS services. AWS has contributed the hybrid key-exchange design to the Internet Engineering Task Force (IETF) as a draft standard.

Salter explained that the hybrid key exchange brings together its traditional key exchanges while enabling post-quantum security. "We have regular key exchanges that we've been using for years and years to protect data," she said. "We don't want to get rid of those; we're just going to enhance them by adding a public key exchange on top of it. And using both of those, you have traditional security, plus post quantum security."

Last week, Amazon announced that it had deployed hybrid post-quantum TLS with CRYSTALS-Kyber in s2n-tls for connections to the AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In an update this week, Amazon documented added support for AWS Secrets Manager, a service for managing, rotating, and retrieving database credentials and API keys.
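
A simplified sketch of the hybrid idea, pairing a classical X25519 exchange with a Kyber encapsulation and feeding both secrets into one key-derivation step, is shown below. The libraries (`cryptography` for X25519 and HKDF, liboqs-python for Kyber), the "Kyber768" parameter set and the labels are assumptions for illustration; this is not AWS's s2n-tls code.

```python
# pip install liboqs-python cryptography   (assumed packages)
import oqs
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical share: X25519 Diffie-Hellman between client and server.
client_ecdh, server_ecdh = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = client_ecdh.exchange(server_ecdh.public_key())

# Post-quantum share: Kyber key encapsulation against the server's KEM public key.
server_kem = oqs.KeyEncapsulation("Kyber768")
kem_public = server_kem.generate_keypair()
client_kem = oqs.KeyEncapsulation("Kyber768")
ciphertext, pq_secret_client = client_kem.encap_secret(kem_public)
pq_secret_server = server_kem.decap_secret(ciphertext)
assert pq_secret_client == pq_secret_server

# Both shares feed one KDF: an attacker must break both exchanges to recover the key.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-pq-demo"
).derive(classical_secret + pq_secret_client)
print("derived 256-bit hybrid session key:", session_key.hex()[:16], "...")
```

The design intent is that the derived session key remains secure as long as either the elliptic-curve exchange or Kyber holds up.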

Google's Decade-Long PQC Migration

While Google didn't make implementation announcements like AWS in the immediate aftermath of NIST's selection, VP and CISO Phil Venables said Google has been focused on PQC algorithms "beyond theoretical implementations" for over a decade. Venables was among several prominent researchers who co-authored a technical paper outlining the urgency of adopting PQC strategies. The peer-reviewed paper was published in May by Nature, a respected journal for the science and technology communities.

"At Google, we're well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information," Venables wrote in a blog post published following the NIST announcement. "We have one goal: ensure that Google is PQC ready."

Venables recalled an experiment in 2016 with Chrome where a minimal number of connections from the Web browser to Google servers used a post-quantum key-exchange algorithm alongside the existing elliptic-curve key-exchange algorithm. "By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security," Venables noted.

Google and Cloudflare announced a "wide-scale post-quantum experiment" in 2019 implementing two post-quantum key exchanges, "integrated into Cloudflare's TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients." The experiment helped Google understand the implications of deploying two post-quantum key agreements with TLS.

Venables noted that last year Google tested post-quantum confidentiality in TLS and found that various network products were not compatible with post-quantum TLS. "We were able to work with the vendor so that the issue was fixed in future firmware updates," he said. "By experimenting early, we resolved this issue for future deployments."

Other Standards Efforts

The four algorithms NIST announced are an important milestone in advancing PQC, but there's other work to be done besides quantum-safe encryption. The AWS TLS submission to the IETF is one example; others include such efforts as Hybrid PQ VPN.

"What you will see happening is those organizations that work on TLS protocols, or SSH, or VPN type protocols, will now come together and put together proposals which they will evaluate in their communities to determine what's best and which protocols should be updated, how the certificates should be defined, and things like things like that," IBM's Dames said.

Dustin Moody, a mathematician at NIST who leads its PQC project, shared a similar view during a panel discussion at the RSA Conference in June. "There's been a lot of global cooperation with our NIST process, rather than fracturing of the effort and coming up with a lot of different algorithms," Moody said. "We've seen most countries and standards organizations waiting to see what comes out of our nice progress on this process, as well as participating in that. And we see that as a very good sign."

Source: Dark Reading, August 4, 2022: https://www.darkreading.com/dr-tech/amazon-ibm-move-swiftly-on-post-quantum-cryptographic-algorithms-selected-by-nist
Killexams : IBM earnings show solid growth but stock slides anyway

IBM Corp. beat second-quarter earnings estimates today, but shareholders were unimpressed, sending the computing giant’s shares down more than 4% in early after-hours trading.

Revenue rose 16% in constant-currency terms, to $15.54 billion, and was up 9% from the $14.22 billion IBM reported in the same quarter a year ago after adjusting for the spinoff of the managed infrastructure-services business Kyndryl Holdings Inc. Net income jumped 45% year-over-year, to $2.5 billion, and diluted earnings per share of $2.31 were up 43% from a year ago.

Analysts had expected adjusted earnings of $2.26 a share on revenue of $15.08 billion.

The strong numbers weren't a surprise, given that IBM had guided expectations toward high-single-digit growth. The stock decline was attributed to a reduced free cash flow forecast of about $10 billion for 2022, down from the $10 billion-to-$10.5 billion range it had initially forecast. However, free cash flow was up significantly for the first six months of the year.

It’s also possible that a report saying Apple was looking at slowing down hiring, which caused the overall market to fall slightly today, might have spilled over to other tech stocks such as IBM in the extended trading session.

Delivered on promises

On the whole, the company delivered what it said it would. Its hybrid platform and solutions category grew 9% on the back of 17% growth in its Red Hat Business. Hybrid cloud revenue rose 19%, to $21.7 billion. Transaction processing sales rose 19% and the software segment of hybrid cloud revenue grew 18%.

“This quarter says that [Chief Executive Arvind Krishna] and his team continue to get the big calls right both from a platform strategy and also from the investments and acquisitions IBM has made over the last 18 months,” said Bola Rotibi, research director for software development at CCS Insight Ltd. Despite broad fears of a downturn in the economy, “the company is bucking the expected trend and more than meeting expectations,” she said.

Software revenue grew 11.6% in constant currency terms, to $6.2 billion, helped by a 7% jump in sales to Kyndryl. Consulting revenue rose almost 18% in constant currency, to $4.8 billion, while infrastructure revenue grew more than 25%, to $4.2 billion, driven largely by the announcement of a new series of IBM z Systems mainframes, which delivered 69% revenue growth.

With investors on edge about the risk of recession and its potential impact on technology spending, Chief Executive Arvind Krishna delivered an upbeat message. "There's every reason to believe technology spending in the [business-to-business] market will continue to surpass GDP growth," he said. "Demand for solutions remains strong. We continue to have double-digit growth in IBM consulting, broad growth in software and, with the z16 launch, strong growth in infrastructure."

Healthy pipeline

Krishna called IBM’s current sales pipeline “pretty healthy. The second half at this point looks consistent with the first half by product line and geography,” he said. He suggested that technology spending is benefiting from its leverage in reducing costs, making the sector less vulnerable to recession. ”We see the technology as deflationary,” he said. “It acts as a counterbalance to all of the inflation and labor demographics people are facing all over the globe.”

While IBM has been criticized for spending $34 billion to buy Red Hat Inc. instead of investing in infrastructure, the deal appears to be paying off as expected, Rotibi said. Although second-quarter growth in the Red Hat business was lower than the 21% recorded in the first quarter, “all the indices show that they are getting very good value from the portfolio,” she said. Red Hat has boosted IBM’s consulting business but products like Red Hat Enterprise Linux and OpenShift have also benefited from the Big Blue sales force.

With IBM being the first major information technology provider to report results, Pund-IT Inc. Chief Analyst Charles King said the numbers bode well for reports soon to come from other firms. “The strength of IBM’s quarter could portend good news for other vendors focused on enterprises,” he said. “While those businesses aren’t immune to systemic problems, they have enough heft and buoyancy to ride out storms.”

One area that IBM has talked less and less about over the past few quarters is its public cloud business. The company no longer breaks out cloud revenues and prefers to talk instead about its hybrid business and partnerships with major public cloud providers.

Hybrid focus

“IBM’s primary focus has long been on developing and enabling hybrid cloud offerings and services; that’s what its enterprise customers want, and that’s what its solutions and consultants aim to deliver,” King said.

IBM’s recently expanded partnership with Amazon Web Services Inc. is an example of how the company has pivoted away from competing with the largest hyperscalers and now sees them as a sales channel, Rotibi said. “It is a pragmatic recognition of the footprint of the hyperscalers but also playing to IBM’s strength in the services it can build on top of the other cloud platforms, its consulting arm and infrastructure,” she said.

Krishna asserted that, now that the Kyndryl spinoff is complete, IBM is in a strong position to continue on its plan to deliver high-single-digit revenue growth for the foreseeable future. Its consulting business is now focused principally on business transformation projects rather than technology implementation, and the people-intensive business delivered a pretax profit margin of 9%, up 1% from last year. "Consulting is a critical part of our hybrid platform thesis," said Chief Financial Officer James Kavanaugh.

Pund-IT’s King said IBM Consulting “is firing on all cylinders. That includes double-digit growth in its three main categories of business transformation, technology consulting and application operations as well as a notable 32% growth in hybrid cloud consulting.”

Dollar worries

With the U.S. dollar at a 20-year high against the euro and a 25-year high against the yen, analysts on the company’s earnings call directed several questions to the impact of currency fluctuations on IBM’s results.

Kavanaugh said these are unknown waters but the company is prepared. "The velocity of the [dollar's] strengthening is the sharpest we've seen in over a decade; over half of currencies are down double digits against the U.S. dollar," he said. "This is unprecedented in rate, breadth and magnitude."

Kavanaugh said IBM is more insulated against currency fluctuations than most companies because it has long hedged against volatility. “Hedging mitigates volatility in the near term,” he said. “It does not eliminate currency as a factor but it allows you time to address your business model for price, for source, for labor pools and for cost structures.”

The company’s people-intensive consulting business also has some built-in protections against a downturn, Kavanaugh said. “In a business where you hire tens of thousands of people, you also churn tens of thousands each year,” he said. “It gives you an automatic way to hit a pause in some of the profit controls because if you don’t see demand you can slow down your supply-side. You can get a 10% to 20% impact that you pretty quickly control.”



Source: SiliconANGLE, July 18, 2022: https://siliconangle.com/2022/07/18/ibm-earnings-show-solid-growth-stock-slides-anyway/
Killexams : IBM Acquires Databand.ai to Boost Data Observability Capabilities

IBM is acquiring Databand.ai, a leading provider of data observability software that helps organizations fix issues with their data, including errors, pipeline failures, and poor quality. The acquisition further strengthens IBM's software portfolio across data, AI, and automation to address the full spectrum of observability.

Databand.ai is IBM's fifth acquisition in 2022 as the company continues to bolster its hybrid cloud and AI skills and capabilities.

Databand.ai's open and extendable approach allows data engineering teams to easily integrate and gain observability into their data infrastructure.
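
To ground what observability into data infrastructure looks like in practice, here is a generic sketch of the kinds of per-batch checks such tools automate: volume, completeness and freshness. The function, thresholds and column names are hypothetical illustrations and do not reflect Databand.ai's actual API.

```python
# Generic pipeline-health checks (illustrative only; not Databand.ai or IBM code).
from datetime import datetime, timedelta, timezone

def check_batch(records: list[dict], expected_min_rows: int, max_null_rate: float,
                freshness_window: timedelta) -> list[str]:
    """Return human-readable data-quality alerts for one pipeline run."""
    alerts = []

    # Volume check: a sudden drop in row count often signals an upstream failure.
    if len(records) < expected_min_rows:
        alerts.append(f"low volume: {len(records)} rows < expected {expected_min_rows}")

    # Completeness check: track the share of missing values in a key column.
    nulls = sum(1 for r in records if r.get("customer_id") is None)
    null_rate = nulls / max(len(records), 1)
    if null_rate > max_null_rate:
        alerts.append(f"null rate {null_rate:.1%} in customer_id exceeds {max_null_rate:.1%}")

    # Freshness check: the newest event should fall inside the expected window.
    newest = max((r["event_time"] for r in records), default=None)
    if newest is None or datetime.now(timezone.utc) - newest > freshness_window:
        alerts.append("stale data: newest event is outside the freshness window")

    return alerts

# Example run with a tiny synthetic batch.
now = datetime.now(timezone.utc)
batch = [{"customer_id": 1, "event_time": now - timedelta(minutes=5)},
         {"customer_id": None, "event_time": now - timedelta(minutes=3)}]
print(check_batch(batch, expected_min_rows=100, max_null_rate=0.01,
                  freshness_window=timedelta(hours=1)))
```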

This acquisition will unlock more resources for Databand.ai to expand its observability capabilities for broader integrations across more of the open source and commercial solutions that power the modern data stack.

Enterprises will also have full flexibility in how they run Databand.ai, whether as software-as-a-service (SaaS) or as a self-hosted software subscription.

The acquisition of Databand.ai builds on IBM's research and development investments as well as strategic acquisitions in AI and automation. By using Databand.ai with IBM Observability by Instana APM and IBM Watson Studio, IBM is well-positioned to address the full spectrum of observability across IT operations.

"Our clients are data-driven enterprises who rely on high-quality, trustworthy data to power their mission-critical processes. When they don't have access to the data they need in any given moment, their business can grind to a halt," said Daniel Hernandez, general manager for data and AI, IBM. "With the addition of Databand.ai, IBM offers the most comprehensive set of observability capabilities for IT across applications, data and machine learning, and is continuing to provide our clients and partners with the technology they need to deliver trustworthy data and AI at scale."

The acquisition of Databand.ai further extends IBM's existing data fabric solution by helping ensure that the most accurate and trustworthy data is being put into the right hands at the right time—no matter where it resides.

Headquartered in Tel Aviv, Israel, Databand.ai employees will join IBM Data and AI, further building on IBM's growing portfolio of Data and AI products, including its IBM Watson capabilities and IBM Cloud Pak for Data. Financial details of the deal were not disclosed. The acquisition closed on June 27, 2022.

For more information about this news, visit www.ibm.com.


Source: Database Trends and Applications, July 11, 2022: https://www.dbta.com/Editorial/News-Flashes/IBM-Acquires-Databandai-to-Boost-Data-Observability-Capabilities-153842.aspx
Killexams : IBM Q2 Preview: Can Shares Uphold Their Strength? (truncated preview) Source: Nasdaq/Zacks Investment Research, July 14, 2022: https://www.nasdaq.com/articles/ibm-q2-preview%3A-can-shares-uphold-their-strength