Download 700-765 free pdf and practice with real questions

killexams.com has served thousands of candidates who passed their 700-765 exams and earned their certifications, and we have received a large number of positive reviews. Our 700-765 real questions are reliable, affordable, up to date and of a high standard, designed to overcome the difficulties of the 700-765 exam. killexams.com 700-765 test prep materials and mock exams are updated on a regular basis.

Exam Code: 700-765 Practice test 2022 by Killexams.com team
Cisco Security Architecture for System Engineers
Cisco Architecture information source
Killexams : Best practices for modern enterprise data architecture

Modernisation of data architecture is key to maximising value across the business.

Dietmar Rietsch, CEO of Pimcore, identifies best practices for organisations to consider when managing modern enterprise data architecture

Time and again, data has been touted as the lifeline that businesses need to grow and, more importantly, differentiate and lead. Data powers decisions about their business operations and helps solve problems, understand customers, evaluate performance, improve processes, measure improvement, and much more. However, having data is just a good start. Businesses need to manage this data effectively to put it into the right context and figure out the “what, when, who, where, why and how” of a given situation to achieve a specific set of goals. Evidently, a global, on-demand enterprise survives and thrives on an efficient enterprise data architecture that serves as a source of product and service information to address specific business needs.

A highly functional product and master data architecture is vital to accelerate the time-to-market, improve customer satisfaction, reduce costs, and acquire greater market share. It goes without saying that data architecture modernisation is the true endgame to meet today’s need for speed, flexibility, and innovation. Now living in a data swamp, enterprises must determine whether their legacy data architecture can handle the vast amount of data accumulated and address the current data processing needs. Upgrading their data architecture to improve agility, enhance customer experience, and scale fast is the best way forward. In doing so, they must follow best practices that are critical to maximising the benefits of data architecture modernisation.

Below are the seven best practices that must be followed for enterprise data architecture modernisation.

1. Build flexible, extensible data schemas

Enterprises gain a potent competitive edge by enhancing their ability to explore data and leverage advanced analytics. To achieve this, they are shifting toward denormalised, mutable data schemas with fewer physical tables for data organisation to maximise performance. Using flexible and extensible data models instead of rigid ones allows for more rapid exploration of structured and unstructured data. It also reduces complexity, as data managers do not need to insert abstraction layers, such as additional joins between highly normalised tables, to query relational data.

Data models can become extensible with the help of the data vault 2.0 technique, a prescriptive, industry-standard method of transforming raw data into intelligent, actionable insights. Also, NoSQL graph databases tap into unstructured data and enable applications requiring massive scalability, real-time capabilities, and access to data layers in AI systems. Besides, analytics can access stored data while standard interfaces are running. Enterprises can store data using JavaScript Object Notation (JSON), permitting database structural change without affecting the business information model.
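As a purely illustrative sketch (not from the article), the snippet below shows how storing records as JSON documents lets new product attributes appear without a schema migration; the table layout and attribute names are hypothetical.

import json
import sqlite3

# Hypothetical example: one JSON column absorbs new product attributes
# without an ALTER TABLE or extra join tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, doc TEXT)")

records = [
    {"sku": "A-100", "name": "Jacket", "price": 129.0},
    # A later record carries attributes the original model never anticipated.
    {"sku": "B-200", "name": "Boots", "price": 89.0,
     "sustainability": {"recycled_content_pct": 40}},
]
for rec in records:
    conn.execute("INSERT INTO products VALUES (?, ?)", (rec["sku"], json.dumps(rec)))

# Readers tolerate missing fields instead of breaking on schema change.
for sku, doc in conn.execute("SELECT sku, doc FROM products"):
    print(sku, json.loads(doc).get("sustainability", "n/a"))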

2. Focus on domain-based architecture aligned with business needs

Data architects are moving away from clusters of centralised enterprise data lakes to domain-based architectures. Herein, data virtualisation techniques are used throughout enterprises to organise and integrate distributed data assets. The domain-driven approach has been instrumental in meeting specific business requirements to speed up the time to market for new data products and services. For each domain, the product owner and product team can maintain a searchable data catalog, along with providing consumers with documentation (definition, API endpoints, schema, and more) and other metadata. As a bounded context, the domain also empowers users with a data roadmap that covers data, integration, storage, and architectural changes.

This approach significantly reduces the time spent on building new data models in the lake, usually from months to days. Instead of creating a centralised data platform, organisations can deploy logical platforms that are managed within various departments across the organisation. For domain-centric architecture, a data infrastructure as a platform approach leverages standardised tools for the maintenance of data assets to speed up implementation.
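To make the idea of a searchable, per-domain catalog entry concrete, here is a minimal sketch; every field, endpoint and team name is an assumption for illustration, not a reference to any particular catalog product.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """Catalog metadata a domain product team publishes for its consumers."""
    domain: str
    name: str
    definition: str
    api_endpoint: str                     # where consumers query the product
    schema: dict = field(default_factory=dict)
    owner: str = ""

catalog = [
    DataProduct(
        domain="orders",
        name="order_events",
        definition="Cleansed order lifecycle events, refreshed hourly",
        api_endpoint="https://data.example.internal/orders/events",   # hypothetical
        schema={"order_id": "string", "status": "string", "ts": "timestamp"},
        owner="orders-product-team",
    ),
]

# Consumers discover data products by searching the catalog, not the lake.
print([p.api_endpoint for p in catalog if "order" in p.name])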

3. Eliminate data silos across the organisation

Implications of data silos for the data-driven enterprise are diverse. Due to data silos, business operations and data analytics initiatives are hindered since it is not possible to interpret unstructured, disorganised data. Organisational silos make it difficult for businesses to manage processes and make decisions with accurate information. Removing silos allows businesses to make more informed decisions and use data more effectively. Evidently, a solid enterprise architecture must eliminate silos by conducting an audit of internal systems, culture, and goals.

A crucial part of modernising data architecture involves making internal data accessible to the people who need it, when they need it. When disparate repositories hold the same data, the duplicates created make it nearly impossible to determine which data is relevant. In a modern data architecture, silos are broken down, and information is cleansed and validated to ensure that it is accurate and complete. In essence, enterprises must adopt a complete and centralised MDM and PIM to automate the management of all information across diverse channels in a single place and enable the long-term dismantling of data silos.
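As a small, hedged illustration of dismantling duplicates held in separate silos, the sketch below merges two copies of the same customer into a single golden record; the matching key and the fill-the-gaps precedence rule are assumptions made for the example.

# Hypothetical records for the same customer held in two separate silos.
crm_record = {"customer_id": "C42", "email": "ana@example.com", "phone": None}
ecommerce_record = {"customer_id": "C42", "email": "ana@example.com",
                    "phone": "+44 20 7946 0000", "last_order": "2022-06-30"}

def merge_records(*sources):
    """Build a golden record: later sources fill gaps left by earlier ones."""
    golden = {}
    for src in sources:
        for key, value in src.items():
            if golden.get(key) in (None, "") and value not in (None, ""):
                golden[key] = value
    return golden

print(merge_records(crm_record, ecommerce_record))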

4. Execute real-time data processing

With the advent of real-time product recommendations, personalised offers, and multiple customer communication channels, the business world is moving away from legacy systems. For real-time data processing, modernising data architecture is a necessary component of the much-needed digital transformation. With a real-time architecture, enterprises can process and analyse data with zero or near-zero latency. As such, they can perform product analytics to track behaviour in digital products and obtain insights into feature use, UX changes, usage, and abandonment.

The deployment of such an architecture starts with the shift from a traditional model to one that is data-driven. To build a resilient and nimble data architecture model that is both future-proof and agile, data architects must integrate newer and better data technologies. Besides, streaming models, or a combination of batch and stream processing, can be deployed to meet multiple business requirements with high availability and low latency.
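The sketch below shows the general shape of combining a low-latency stream path with a slower batch aggregation over the same events; the event source is simulated here, and in practice it would be a message bus such as Kafka or Kinesis.

import time
from collections import defaultdict, deque

def event_source():
    """Stand-in for a message-bus subscription (e.g., a Kafka consumer)."""
    for i in range(10):
        yield {"user": f"u{i % 3}", "action": "view", "ts": time.time()}

live_scores = defaultdict(int)   # stream path: updated per event, near-zero latency
window = deque(maxlen=1000)      # events retained for periodic batch aggregation

for event in event_source():
    live_scores[event["user"]] += 1   # e.g., feeds a real-time recommendation
    window.append(event)

# Batch path: the same events re-aggregated later for heavier analytics.
views_per_user = {}
for e in window:
    views_per_user[e["user"]] = views_per_user.get(e["user"], 0) + 1
print(dict(live_scores), views_per_user)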

5. Decouple data access points

Data today is no longer limited to structured data that can be analysed with traditional tools. As a result of big data and cloud computing, the sheer amount of structured and unstructured data holding vital information for businesses is often difficult to access for various reasons. This implies that the data architecture should be able to handle data from both structured and unstructured sources, in both structured and unstructured formats. Unless enterprises do so, they miss out on essential information needed to make informed business decisions.

Data can be exposed through APIs so that direct access to view and modify data can be limited and protected, while enabling faster and more current access to standard data sets. Data can be reused among teams easily, accelerating access to and enabling seamless collaboration among analytics teams. By doing this, AI use cases can be developed more efficiently.
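A minimal sketch of exposing a curated data set through a read-only API so that direct database access can stay restricted; it uses Python's standard-library HTTP server, and the endpoint path and payload are invented for illustration.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Curated, read-only data set shared with analytics teams (illustrative content).
PRODUCT_SUMMARY = [{"sku": "A-100", "views_7d": 1200, "conversion": 0.031}]

class DataAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/product-summary":        # hypothetical endpoint
            body = json.dumps(PRODUCT_SUMMARY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Consumers read the standard data set over HTTP; the database stays private.
    HTTPServer(("127.0.0.1", 8080), DataAPI).serve_forever()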

6. Consider cloud-based data platforms

Cloud computing is probably the most significant driving force behind a revolutionary new data architecture approach for scaling AI capabilities and tools quickly. The declining costs of cloud computing and the rise of in-memory data tools are allowing enterprises to leverage the most sophisticated advanced analytics. Cloud providers are revolutionising how companies of all sizes source, deploy and run data infrastructure, platforms, and applications at scale. With a cloud-based PIM or MDM, enterprises can take advantage of ready-to-use, preconfigured solutions, wherein they can seamlessly upload their product data, automate catalog creation, and enrich it for diverse marketing campaigns.

With a cloud PIM or MDM, enterprises can eliminate the need for hardware maintenance, application hosting, version updates, and security patches. From a cost perspective, the low subscription cost of cloud platforms is beneficial for small businesses that can scale their customer base cost-effectively. Besides, cloud-based data platforms also bring a higher level of control over product data and security.

7. Integrate modular, best-of-breed platforms

Businesses often have to move beyond legacy data ecosystems offered by prominent solution vendors to scale applications. Many organisations are moving toward modular data architectures that use the best-of-breed and, frequently, open source components that can be swapped for new technologies as needed without affecting the other parts of the architecture. An enterprise using this method can rapidly deliver new, data-heavy digital services to millions of customers and connect to cloud-based applications at scale. Organisations can also set up an independent data layer that includes commercial databases and open source components.

Data is synchronised with the back-end systems through an enterprise service bus, and business logic is handled by microservices that reside in containers. Aside from simplifying integration between disparate tools and platforms, API-based interfaces decrease the risk of introducing new problems into existing applications and speed time to market. They also make the replacement of individual components easier.

Data architecture modernisation = increased business value

Modernising data architecture allows businesses to realise the full value of their unique data assets, create insights faster through AI-based data engineering, and even unlock the value of legacy data. A modern data architecture permits an organisation’s data to become scalable, accessible, manageable, and analysable with the help of cloud-based services. Furthermore, it ensures compliance with data security and privacy guidelines while enabling data access across the enterprise. Using a modern data approach, organisations can deliver better customer experiences, drive top-line growth, reduce costs, and gain a competitive advantage.

Written by Dietmar Rietsch, CEO of Pimcore

Related:

How to get ahead of the National Data Strategy to drive business value — Toby Balfre, vice-president, field engineering EMEA at Databricks, discusses how organisations can get ahead of the National Data Strategy to drive business value.

A guide to IT governance, risk and compliance — Information Age presents your complete business guide to IT governance, risk and compliance.

Source: https://www.information-age.com/best-practices-for-modern-enterprise-data-architecture-123499796/ (Thu, 28 Jul 2022)
Killexams : Datacentre Network Architecture Market Share, Size, Financial Summaries Analysis from 2022-2030 | By Cisco, Juniper Networks, Arista Networks


Jul 08, 2022 (Heraldkeepers) -- New Jersey, United States - The Datacentre Network Architecture market study contains extensive analysis and validated data, and is intended to be a valuable resource for managers, analysts, industry experts, and other key individuals who need a ready-to-access, self-contained study to better understand market trends, growth drivers, opportunities, and upcoming challenges, as well as competitors.

Receive the sample Report of Datacentre Network Architecture Market 2022 to 2030:

Key points of the market research report:
• An in-depth examination of the market at global and regional level.
• Significant changes in market dynamics and competition.
• Segmentation by type, application, geography, and other criteria.
• Market research, both historical and forward-looking, concerning size, share growth, volume, and sales.
• Significant changes in market dynamics and developments, along with assessments.
• Key segments and regions that are on the rise.
• Major market participants’ core business strategies, as well as their key approaches.

The worldwide Datacentre Network Architecture market is expected to grow at a strong CAGR over 2022-2030, rising from USD billion in 2021 to USD billion in 2030. The report also profiles the main players in the Datacentre Network Architecture market, including their business overviews, financial summaries, and SWOT assessments.

Datacentre Network Architecture Market Segmentation & Coverage:

Datacentre Network Architecture Market segment by Type: 
Hardware, Software

Datacentre Network Architecture Market segment by Application: 
Pharmaceuticals, Life Sciences, Automobile, IT & Telecom, Public, BFSI, Others

The years examined in this study are the following to estimate the Datacentre Network Architecture market size:

History Year: 2015-2019
Base Year: 2021
Estimated Year: 2022
Forecast Year: 2022 to 2030

Cumulative Impact of COVID-19 on Market:

COVID-19 has three main potential effects on the worldwide economy: directly affecting production and demand, disrupting supply chains and marketplaces, and financially impacting enterprises and financial markets. The COVID-19 pandemic has had a helpful effect on the growth of this market, as adoption has increased in order to better understand and manage the economic impact of COVID-19.

Get a sample Copy of the Datacentre Network Architecture Market Report: https://www.infinitybusinessinsights.com/request_sample.php?id=837340

Regional Analysis:

The Asia-Pacific region has recently dominated the global Datacentre Network Architecture market, owing to widespread adoption across industries. The APAC region is expected to maintain its market dominance during the review period.

The Key companies profiled in the Datacentre Network Architecture Market:

The study examines the Datacentre Network Architecture market’s competitive landscape and includes data on important suppliers, including Cisco, Juniper Networks, Arista Networks, Hewlett-Packard, Dell, Brocade Communications, IBM, Avaya Networks, and others.

Table of Contents:

List of Data Sources:
Chapter 2. Executive Summary
Chapter 3. Industry Outlook
3.1. Datacentre Network Architecture Global Market segmentation
3.2. Datacentre Network Architecture Global Market size and growth prospects, 2015 – 2026
3.3. Datacentre Network Architecture Global Market Value Chain Analysis
3.3.1. Vendor landscape
3.4. Regulatory Framework
3.5. Market Dynamics
3.5.1. Market Driver Analysis
3.5.2. Market Restraint Analysis
3.6. Porter’s Analysis
3.6.1. Threat of New Entrants
3.6.2. Bargaining Power of Buyers
3.6.3. Bargaining Power of Buyers
3.6.4. Threat of Substitutes
3.6.5. Internal Rivalry
3.7. PESTEL Analysis
Chapter 4. Datacentre Network Architecture Global Market Product Outlook
Chapter 5. Datacentre Network Architecture Global Market Application Outlook
Chapter 6. Datacentre Network Architecture Global Market Geography Outlook
6.1. Datacentre Network Architecture Industry Share, by Geography, 2022 & 2030
6.2. North America
6.2.1. Market 2022 -2030 estimates and forecast, by product
6.2.2. Market 2022 -2030, estimates and forecast, by application
6.2.3. The U.S.
6.2.3.1. Market 2022 -2030 estimates and forecast, by product
6.2.3.2. Market 2022 -2030, estimates and forecast, by application
6.2.4. Canada
6.2.4.1. Market 2022 -2030 estimates and forecast, by product
6.2.4.2. Market 2022 -2030, estimates and forecast, by application
6.3. Europe
6.3.1. Market 2022 -2030 estimates and forecast, by product
6.3.2. Market 2022 -2030, estimates and forecast, by application
6.3.3. Germany
6.3.3.1. Market 2022 -2030 estimates and forecast, by product
6.3.3.2. Market 2022 -2030, estimates and forecast, by application
6.3.4. the UK
6.3.4.1. Market 2022 -2030 estimates and forecast, by product
6.3.4.2. Market 2022 -2030, estimates and forecast, by application
6.3.5. France
6.3.5.1. Market 2022 -2030 estimates and forecast, by product
6.3.5.2. Market 2022 -2030, estimates and forecast, by application
Chapter 7. Competitive Landscape
Chapter 8. Appendix

Download here the full INDEX of Datacentre Network Architecture Market Research Report @

FAQs
What are the various segments of the global market?
Who are the Datacentre Network Architecture market’s key players?
Which regions are covered by the Datacentre Network Architecture market?
What stages does the global Datacentre Network Architecture market go through?

Contact Us:
Amit Jain
Sales Co-Ordinator
International: +1 518 300 3575
Email: inquiry@infinitybusinessinsights.com
Website: https://www.infinitybusinessinsights.com


Source: https://www.marketwatch.com/press-release/datacentre-network-architecture-market-share-size-financial-summaries-analysis-from-2022-2030-by--cisco-juniper-networks-arista-networks-2022-07-08 (Thu, 07 Jul 2022)
Killexams : Secure Access Service Edge (SASE) Market Recovery and Impact Analysis Report Cisco Systems, VMware, Fortinet

New Jersey, N.J., Aug 03, 2022 - The Secure Access Service Edge (SASE) Market Research Report is a professional asset that provides dynamic and statistical insights into regional and global markets. It includes a comprehensive study of the current scenario to capture the market's trends and prospects. The report also tracks future technologies and developments, and provides thorough information on new products and on regional and market investments.

Secure Access Service Edge (SASE) is a network architecture that combines VPN and SD-WAN features with cloud-native security features such as secure internet gateways, cloud access security brokers, firewalls, and zero trust network access.

Advances in cloud computing technologies are helping to increase business productivity and strengthen security network management. Realizing the benefits of cloud computing, companies are aggressively deploying cloud-based IT infrastructure. Therefore, the growing popularity of cloud-based IT systems and solutions also bodes well for the growth of the market.

Get the PDF sample Copy (including full TOC, graphs, and tables) of this report @:

https://www.a2zmarketresearch.com/sample-request/580912

“The Secure Access Service Edge (SASE) market is growing at a good CAGR over the forecast period. Increasing individual interest in the services industry is a major reason for the expansion of this market.”

Top Companies in this report are:

Cisco Systems, VMware, Fortinet, Inc., Palo Alto Networks, Akamai Technologies, Zscaler, Cloudflare, Cato Networks (Israel), Versa Networks, Forcepoint, Broadcom, Check Point Software Technologies Ltd. (Israel), McAfee, LLC, Citrix Systems, Netskope, Perimeter 81 Ltd. (Israel), Open Systems (Switzerland), Aryaka Networks, Proofpoint, Secucloud Network GmbH (Germany), Aruba Networks, Juniper Networks, Verizon Communications, SonicWall, Barracuda Networks, and Twingate.

Report Overview:

* The report analyses regional growth trends and future opportunities.

* Detailed analysis of each segment provides relevant information.

* The data collected in the report is investigated and verified by analysts.

* This report provides realistic information on supply, demand and future forecasts.

Secure Access Service Edge (SASE) Market Overview:

This systematic research study provides an in-depth assessment of the Secure Access Service Edge (SASE) market while offering significant insights, historical context, and industry-validated, statistically supported market forecasts. Furthermore, a controlled and formal collection of assumptions and methodologies was used to construct this in-depth examination.

Segmentation

The report offers an in-depth assessment of the Secure Access Service Edge (SASE) market strategies, and geographic and business segments of the key players in the market.

Market Segmentation: By Type

Network as a service
Security as a service

Market Segmentation: By Application

Government
BFSI
Retail and eCommerce
IT and ITeS
Other Verticals

During the development of this Secure Access Service Edge (SASE) research report, the driving factors of the market are investigated. It also provides information on market constraints to help clients build successful businesses. The report also addresses key opportunities.

For Any Query or Customization: https://a2zmarketresearch.com/ask-for-customization/580912

This report provides an in-depth and broad understanding of Secure Access Service Edge (SASE). With accurate data covering all the key features of the current market, the report offers extensive data from key players. The state of the market is audited, and historical data for each segment is provided alongside forecasts. Driving forces, restraints, and opportunities are included to help provide an improved picture of market investment during the forecast period 2022-2029.

Some essential purposes of the Secure Access Service Edge (SASE) market research report:

o Vital Developments: Custom analysis of the critical developments in the Secure Access Service Edge (SASE) market, including R&D, new product launches, collaborations, growth rates, partnerships, joint ventures, and the regional expansion of rivals operating in the market on a global and regional scale.

o Market Characteristics: The report covers Secure Access Service Edge (SASE) market characteristics, revenue, capacity, capacity utilisation rate, price, gross, production rate, generation, consumption, import, export, supply, demand, cost, overall market share, CAGR and gross margin. Likewise, the market report offers an exhaustive investigation of the market dynamics and their most recent trends, along with market segments and sub-segments.

o Investigative Tools: This market report incorporates carefully considered and evaluated information on the major established players and their expansion into the Secure Access Service Edge (SASE) market. Analytical tools and methodologies, for example Porter’s Five Forces analysis, feasibility studies, and numerous other statistical techniques, have been used to analyse the development of the key players operating in the Secure Access Service Edge (SASE) market.

o In short, the Secure Access Service Edge (SASE) report will provide you with a clear perspective on the market without the need to refer to any other research report or source of information. This report will provide all of you with the facts about the past, present, and future of the market.

Buy Exclusive Report @: https://www.a2zmarketresearch.com/checkout

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014

[email protected]

+1 775 237 4157

Source: https://www.digitaljournal.com/pr/secure-access-service-edge-sase-market-recovery-and-impact-analysis-report-cisco-systems-vmware-fortinet (Wed, 03 Aug 2022, A2Z Market Research)
Killexams : Cisco leverages Snort 3 and Talos to manage trust in an evolving cloud-based world

Hybrid and multicloud computing environments have redefined the trust boundary.

In the computer world, a trust boundary serves as an interface for the marking on a data packet that is allowed to flow through a network. Remote work by remote users and the consumption of cloud-based tools to perform business functions have dramatically changed the business environment and the trust boundary along with it.

“The traditional trust boundary has evaporated, or at least transformed dramatically,” said Eric Kostlan (pictured), technical marketing engineer at Cisco Systems Inc. “Although the concept of a trust boundary still exists, the nature of the hybrid, multicloud environment makes it very difficult to define. It’s not that the concept of trusted versus untrusted has gone away; it’s just become fundamentally more complex. The complexity itself is a vulnerability.”

Kostlan spoke with theCUBE industry analysts John Furrier and Dave Vellante at AWS re:Inforce, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed Cisco’s portfolio of security solutions and the need for seamless cloud integration. (* Disclosure below.)

Protecting virtual environments

The changing nature of the trust boundary is one of many factors in enterprise computing that Kostlan and his colleagues at Cisco are managing. One of the company’s solutions involves Snort 3, an open-source network security tool for intrusion detection. As more companies have turned to the cloud, tools such as Snort 3 have become key elements that can be integrated in virtual environments.

“There’s a large number of components to the solution, and this spans workload protection, as well as infrastructure protection,” Kostlan said. “These are integrated into cloud components, and this is what allows comprehensive protection across the hybrid cloud environment. Some of the most important technologies that we use, such as Snort 3 — which is a best-of-breed intrusion protection system that we have adopted — are applicable, as well, to the virtual environment so that we push into the cloud in a way that’s seamless.”

Cisco also applies its cloud security solutions by leveraging threat information through its Talos Intelligence Group. Talos is comprised of an experienced group of security experts whose mission is to protect Cisco customer products and services.

“Talos updates our products approximately once every hour with new information about emerging attacks,” Kostlan said. “That architecture is very easily extensible into the cloud, because you can inform a virtual device just as easily as you can inform a physical device of an emergent threat. We have expanded our capacity to visualize what’s happening.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the AWS re:Inforce event:

(* Disclosure: Cisco Systems Inc. sponsored this segment of theCUBE. Neither Cisco nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)



Source: https://siliconangle.com/2022/07/27/cisco-leverages-snort-3-and-talos-to-manage-trust-in-an-evolving-cloud-based-world-reinforce/ (Thu, 28 Jul 2022)
Killexams : Stage Is Set For Consolidation In ZTNA Market

You may have heard about the hype around zero trust cybersecurity, a new architecture that assumes that everybody and everything is the bad guy. Enterprises are increasingly looking for more sophisticated ways to guard networks and data, as attack vectors multiply with cloud and Internet use.

ZTNA has received heightened interest in latest years, with significant venture capital (VC) investment as well as investment by larger companies building out their portfolios. Expect it to be a continued area of focus for networking and cybersecurity companies going forward, which will drive mergers and acquisitions, especially since the correction in technology markets has brought prices of private companies down.

What is Zero Trust?

If you haven’t heard of it, zero trust is a hot buzzword (s) in cybersecurity circles. It is more a philosophy than a specific technology, but it has important implications for emerging cybersecurity technologies, especially in the networking area, where the applications of the approach are often referred to as zero trust network access (ZTNA).

Because networks and applications are becoming more complex, it’s more important to verify and authenticate users and applications across multiple dimensions. We now have cloud networks, WiFi, Internet of Things (IoT) devices, remote users, and hybrid work. ZTNA technology can be used to track and automate the authentication of devices and people as they use all these networks and applications, or in many cases travel across clouds or the Internet.

A good base definition of ZTNA comes from the U.S. National Institute of Standards and Technology (NIST), which describes zero trust as: “A collection of concepts and ideas designed to minimize uncertainty in enforcing accurate, least privilege per-request access decisions in information systems and services in the face of a network viewed as compromised.“

NIST believes that a zero-trust strategy is “primarily focused on data and service protection but can and should be expanded to include all enterprise assets (devices, infrastructure, components, applications, virtual and cloud components) and subjects (end users, applications and other non-human entities that request information from resources.”

Many ZTNA systems, as well as other cybersecurity tools, function in a similar way: collect as much data as possible from different sources, then process or analyze that data in a policy engine that can determine whether user access is legitimate or a threat (a simplified sketch of this pattern follows the list below). These sources can include:

• User credentials

• Network devices (routers, switches)

• Devices and endpoints

• Log files

• Applications workloads: For example, virtual machines (VMs) or containers

• Cloud or applications data

• API sources such as single sign on (SSO), security information and event management (SIEM), identity managers, threat intelligence databases

So What’s Next for ZTNA?

My firm Futuriom recently did a deep dive on the ZTNA technology market, identifying the key trends and market leaders. This included an examination of all of the public companies and private companies involved. As mentioned, ZTNA is a hot area of VC investment these days, with more than 30 active startups. Some startups in this area have recently received big rounds - for example, Perimeter 81 raised a $100 million C round in June.

ZTNA is likely to be a fertile area of acquisition, with 20 major public cybersecurity companies adding ZTNA products and solutions. There have already been some deals in this area, notably Juniper Networks’ acquisition of WiteSand earlier this year.

Some of the conclusions our analyst team has drawn after examining this market:

  • The current addressable market for ZTNA products and services is over $10 billion. This number will be driven as ZTNA products are used to replace outdated approaches in the multibillion-dollar Virtual Private Network (VPN) market, but the upside is much higher.
  • While recent VC funding for ZTNA remains robust, it’s likely to slow down. Valuations were too high in the startup market and the VC market is retrenching. With interest rates having risen substantially and tech markets down, 2021 valuations at greater than 20X sales are no longer sustainable.
  • Consolidation is coming in the ZTNA vendor market. With a VC slowdown and more than 20 strong ZTNA startups in the market, larger companies will make acquisitions to fill out ZTNA portfolios. The drop in startup valuations will be opportunistic for public companies that have the cash and equity to make plays.

Many public companies have positioned themselves as ZTNA leaders – though they may need more technology to fill in parts of their portfolio. Some of the public companies to watch as this market develops include Akamai, Appgate, Cisco, Cloudflare, Fortinet, Jamf, Juniper Networks, Okta, Palo Alto Networks, VMware, and Zscaler.

Citrix, which recently went private, is also one to watch, along with another private equity company, Barracuda.

On the startup side, look for companies to either raise more funding to take it to the next level or look for deals to exit. Some of the key startups to watch include Axis Security, Banyan Security, Cato Networks, Cyolo, Elisity, Infiot, Illumio, NetFoundry, Netskope, Perimeter 81, Teleport, Versa Networks, Wandera, Waverly Labs, Zentera Systems.

Some of these companies are even approaching maturity for potential Initial Public Offering (IPO). The companies I’d identify at the stage of development to start considering an IPO include Cato Networks, Netskope, and Versa Networks. All of this makes ZTNA an exciting niche cybersecurity market to watch over the next year.

Source: https://www.forbes.com/sites/rscottraynovich/2022/07/29/stage-is-set-for-consolidation-in-ztna-market/ (Fri, 29 Jul 2022, R. Scott Raynovich)
Killexams : IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Use Cases And Partnerships

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principle is scalability, repeatability, and full stack solution management that allows everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than cloud. While Cloud data accounts for 60% of the world’s data today, vast amounts of new data is being created at the edge, including industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at / near its collection point at the edge. In the case of cloud, data must be transferred from a local device and into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
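To show the shape of that automation, detection feeding straight into a work order, here is a hedged sketch; the temperature threshold is arbitrary and the work-order call is a placeholder, not the actual Maximo API.

def read_thermal_inference(connector_id):
    """Stand-in for an edge model's inference over a thermal image."""
    return {"connector_id": connector_id, "max_temp_c": 92.4}

def create_work_order(asset_id, description):
    """Placeholder for an asset-management call (e.g., into IBM Maximo)."""
    print(f"WORK ORDER for {asset_id}: {description}")

TEMP_LIMIT_C = 80.0   # assumed threshold, for illustration only

reading = read_thermal_inference("XFMR-12/connector-3")
if reading["max_temp_c"] > TEMP_LIMIT_C:
    # Automating this hand-off is where the claimed speedup comes from.
    create_work_order(
        asset_id=reading["connector_id"],
        description=f"Hot spot: {reading['max_temp_c']} C exceeds {TEMP_LIMIT_C} C",
    )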

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s up to our current in-progress fourth revolution, Industry 4.0, that promotes a digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speed-up in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using 100s of AI / ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production; the quicker the time-to-value and the return-on-investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation means manually examining thousands of images; this is a time-consuming process that can result in labeling of redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance.
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own Manufacturing.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
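One common way to flag drift is to compare recent input statistics against a baseline captured at training time, sketched below; the threshold, feature and values are illustrative and not IBM's method.

import statistics

# Baseline feature statistics captured when the model was trained (illustrative).
baseline = {"mean": 0.42, "stdev": 0.11}

def drift_detected(recent_values, baseline, z_threshold=3.0):
    """Flag drift when the recent mean moves too many baseline deviations away."""
    recent_mean = statistics.fmean(recent_values)
    z = abs(recent_mean - baseline["mean"]) / baseline["stdev"]
    return z > z_threshold, z

recent = [0.81, 0.79, 0.85, 0.78, 0.80]   # feature values arriving at the edge
flagged, z = drift_detected(recent, baseline)
if flagged:
    print(f"Drift suspected (z = {z:.1f}); schedule review or retraining.")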

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
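For intuition, here is a minimal federated-averaging sketch: each spoke fits a tiny model on its own data and only the model weights travel to the hub; the one-parameter linear model and update rule are purely illustrative.

# Each spoke holds private (x, y) pairs; only weights are shared with the hub.
spoke_data = {
    "plant_a": [(1.0, 2.1), (2.0, 3.9)],
    "plant_b": [(3.0, 6.2), (4.0, 7.8)],
}

def local_train(data, w, lr=0.01, epochs=50):
    """One spoke fits y ~ w * x on its local data only."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x      # gradient step for squared error
    return w

global_w = 0.0
for _ in range(5):                          # federation rounds
    local_weights = [local_train(d, global_w) for d in spoke_data.values()]
    global_w = sum(local_weights) / len(local_weights)   # federated averaging
print(round(global_w, 2))                   # close to 2.0, without pooling raw data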

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiples hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data life cycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate - optimized to meet data lifecycle, regulatory & compliance as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such resource extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million (see the sketch just after this list for a simple illustration).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
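Referenced in item 3 above: a minimal sketch of one compression technique, magnitude pruning, which zeroes out the smallest weights; the flat weight list stands in for real model tensors, the sparsity level is arbitrary, and production pipelines typically combine pruning with quantization or distillation.

def prune_by_magnitude(weights, sparsity=0.7):
    """Zero out the smallest-magnitude weights, keeping only the largest ones."""
    ranked = sorted(weights, key=abs)
    cutoff = abs(ranked[int(len(ranked) * sparsity)])
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

# Illustrative "model": a flat list of weights standing in for real tensors.
weights = [0.003, -0.8, 0.04, 1.2, -0.002, 0.5, 0.0007, -0.06, 0.9, 0.01]
pruned = prune_by_magnitude(weights)
kept = sum(1 for w in pruned if w != 0.0)
print(f"kept {kept}/{len(weights)} weights")   # a much smaller edge footprint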

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is the Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suitable for addressing locations that are still servers but come in a single node, not clustered, deployment type.

For smaller footprints such as industrial PCs or computer vision boards (for example NVidia Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in terms of how locations and application lifecycle is managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AL/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
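As a hedged illustration of what multiple logical networks with different characteristics means in practice, the snippet below models two slice profiles and a trivial admission check; the slice names, limits and the requesting device are invented for the example.

# Two logical slices defined over the same physical network (illustrative values).
slices = {
    "urllc-factory": {"max_latency_ms": 5, "min_bandwidth_mbps": 50, "users": set()},
    "embb-video": {"max_latency_ms": 50, "min_bandwidth_mbps": 500, "users": set()},
}

def admit(device, needs):
    """Place a device on the first slice whose characteristics satisfy its needs."""
    for name, profile in slices.items():
        if (profile["max_latency_ms"] <= needs["latency_ms"]
                and profile["min_bandwidth_mbps"] >= needs["bandwidth_mbps"]):
            profile["users"].add(device)
            return name
    return None

print(admit("agv-robot-7", {"latency_ms": 10, "bandwidth_mbps": 20}))   # urllc-factory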

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the DU (Distributed Unit) and CU (Centralized Unit) from the Baseband Unit used in 4G and connects them with open interfaces.

An O-RAN system is more flexible. It uses AI to establish connections made via open interfaces and to optimize how a device is categorized by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection, and root cause analysis using ML (a minimal sketch follows this list)
  • Opportunity of value-added functions for O-RAN
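
As a rough illustration of the anomaly-detection primitive called out in the list above, the sketch below flags outliers in a stream of RF or network metrics using a rolling z-score. The window size, threshold, and synthetic data are assumptions made for the example, not details of the IBM-telco work.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag points more than `threshold` standard deviations away from the
    rolling mean of the previous `window` samples; returns (index, value) pairs."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

if __name__ == "__main__":
    # Synthetic per-interval throughput (Mbps) with one obvious dip.
    readings = [100 + (i % 5) for i in range(40)]
    readings[30] = 20   # simulated outage or interference event
    print(detect_anomalies(readings))   # the dip at index 30 gets flagged
```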

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software Edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub-and-spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through capabilities like the example shown above: software-defined storage for a federated-namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing stay close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That has little value and would simply function as a hub to spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

This Week In Security: Zimbra RCE, Routers Under Attack, And Old Tricks In WebAssembly

There's a problem in the unrar utility, and as a result, the Zimbra mail server was vulnerable to Remote Code Execution by simply sending an email. So first, unrar is a source-available command-line application made by RarLab, the same folks behind WinRAR. CVE-2022-30333 is the vulnerability there, and it's a classic path traversal on archive extraction. One of the ways this attack is normally pulled off is by extracting a symlink to the intended destination, which then points to a location that should be restricted. unrar has code hardening against this attack, but is sabotaged by its cross-platform support. On a Unix machine, the archive is checked for any symbolic links containing the ../ pattern. After this check is completed, a function runs to convert any Windows paths to Unix notation. As such, the simple bypass is to include symlinks using ..\ traversal, which don't get caught by the check, and are then converted into working directory traversals.
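
To see why the ordering matters, here is a deliberately simplified Python sketch of the flawed pattern described above. It is not unrar's actual code, just an illustration of what happens when the traversal check runs before the Windows-to-Unix path conversion.

```python
def is_link_target_safe(target: str) -> bool:
    """Mimic the flawed hardening: reject obvious ../ traversal
    *before* the path has been normalized."""
    return "../" not in target

def windows_to_unix(path: str) -> str:
    """The later conversion step: turn Windows separators into Unix ones."""
    return path.replace("\\", "/")

def extract_symlink(target: str) -> str:
    if not is_link_target_safe(target):
        raise ValueError("rejected: path traversal")
    # The check already passed, so the converted path is trusted from here on.
    return windows_to_unix(target)

print(extract_symlink(r"..\..\opt\zimbra\payload"))  # slips through as ../../opt/zimbra/payload
try:
    extract_symlink("../../opt/zimbra/payload")      # the same target written with / is caught
except ValueError as err:
    print(err)
```

The fix is to normalize the path first (or check both separator styles) and only then decide whether the link target is safe.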

That was bad enough, but Zimbra made it worse by automatically extracting .rar attachments on incoming emails, in order to run a virus and spam check. That extraction isn’t sandboxed, so an attacker’s files are written anywhere on the filesystem the zimbra user can write. It’s not hard to imagine how this turns into a full RCE very quickly. If you have an unrar binary based on RarLab code, check for version 6.1.7 or 6.12 of their binary release. While Zimbra was the application specifically called out, there are likely to be other cases where this could be used for exploitation.

Router Malware

A widespread malware campaign has been discovered in a bit of an odd place: running on the firmware of Small Office/Home Office (SOHO) network devices. The surprising element is how many different devices are supported by the campaign, including Cisco, Netgear, ASUS, etc. The key is that the malware is a little binary compiled for the MIPS architecture, which is used by many routers and access points.

Once in place, the malware then launches Man-in-the-Middle attacks against DNS and HTTP connections, all with the goal of compromising the Windows machines that run their connection through the compromised device. There have been mass exploit campaigns in the past, where the DNS resolver was modified on vulnerable routers, but this seems to be quite a bit more sophisticated, leading the researchers to suspect that this may be a state-sponsored campaign. There's an odd note in the source report: the initial exploit script makes a call to /cgi-bin/luci, which is the web interface for OpenWRT routers. We've reached out for more information from Lumen, so stay tuned for the details. It may very well be that this malware campaign is specifically targeting the horde of very old, vulnerable OpenWRT-based routers out there. There may be a downside to multiple companies using old versions of the open source project as their SDK.

WebAssembly and Old Tricks

One of the most interesting concepts to happen recently in the browser space is WebAssembly. You have a library written in C, and want to use it with JavaScript in a browser? Compile it to WebAssembly, and you have a solution that’s faster than JavaScript, and easier to use than a traditionally compiled binary. It’s a very clever solution, and allows for some crazy feats, like Google Earth in the browser. Could there be any down side to running C in the browser? The good folks at Grav have an example of the sort of thing that could go wrong: good old buffer overflows.

Now it's a bit different from how a standard overflow exploit works. For one, Wasm doesn't have address space layout randomization or Data Execution Prevention. On the other hand, WebAssembly functions don't reside at a memory address, but simply at a function index. The RET instruction equivalent can't jump to arbitrary locations, but just to function indexes. However, it's still a stack, and overflowing a buffer can result in overwriting important data, like the return pointer.

Intune Remote Management

In our new, brave future, remote work seems to be the new standard, and this brings some new security considerations. Example: Microsoft's Intune remote management suite. It's supposed to be an easy way to deploy, manage, and monitor laptops and desktops remotely. In theory, a robust remote admin suite combined with Bitlocker should make for an effective protection against tampering. Why Bitlocker? While it prevents an attacker from reading data off the disk, it also prevents tampering. For instance, there's a really old trick where you copy the cmd.exe binary over the top of the Sticky Keys or another accessibility binary. These can be launched from the login page, and that results in a super-easy root shell. Bitlocker prevents this.

It sounds great, but there's a problem. Intune can be deployed in two ways. The “user-driven” flow results in a system with more administrative capabilities entrusted to the end user, including access to the BitLocker recovery key. The only way around this is to do the setup, and then remove the Primary User and rotate the Bitlocker keys. Then there's the troubleshooting mode: holding Shift+F10 during initial setup grants SYSTEM access to the end user. Yikes. And finally, the last gotcha to note is that a remote wipe removes user data and deletes extra binaries from some important places, but doesn't do any sort of file verification, so our simple sticky-keys hack would survive. Oof.

Bits and Bytes

[Jack Dates] participated in 2021 Pwn2Own, and put together an Apple Safari exploit that uses the Intel graphics kernel extensions for the escape. It’s a *very* deep dive into OSX exploitation. The foothold is an off-by-one error in a length check, which in total allows writing four bytes of arbitrary data. The key to turn this into something useful was to strew some corpses around memory — forked, dead processes. Corrupt the size of the corpse, and you can use it to free other memory that’s still in use. Use after free for the win!

The OpenSSL bug we talked about last week is still being looked into, with [Guido Vranken] leading the charge. Back in May, he found a separate bug that specifically isn't a security problem, and it's the fix for that bug that introduced the AVX512 problem we're interested in. There still looks to be a potential for RCE here, but at least it's proving to be non-trivial to put such an attack together.

There’s a new malware campaign, ytstealer, that is explicitly targeting YouTube account credentials. The malware is distributed as fake installers for popular tools, like OBS Studio, Auto-Tune, and even cheats and cracks for other software. When run, YTStealer looks for an authentication cookie, logs into YouTube Studio, and grabs all the data available there from the attached account. This information and cookie are encrypted and sent on to a C&C server. It’s unclear why YouTube accounts are so interesting to an attacker, but maybe we can all look forward to spammy videos getting uploaded to our favorite channels.

And finally, because there's more to security than just computers, a delightful puzzle solve from LockPickingLawyer. Loki is a puzzle lock made from a real padlock, and it caught the attention of our favorite lock-picker, who makes an attempt to open it. We won't spoil any of the results, but if puzzles or locks are your jam, it's worth a watch.

Fastly Appoints Todd Nightingale as CEO

SAN FRANCISCO--(BUSINESS WIRE)--Aug 3, 2022--

Fastly, Inc. (NYSE: FSLY), the world’s fastest global edge cloud platform, today announced that the Board of Directors has appointed Todd Nightingale as the company’s next Chief Executive Officer, effective September 1, 2022. Nightingale will also join the Fastly Board of Directors upon assuming the role. He will succeed Joshua Bixby, who, as previously announced, will step down as CEO and from Fastly’s Board of Directors. Bixby will remain with Fastly as an advisor.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20220803005944/en/

Fastly Appoints Todd Nightingale as CEO (Photo: Business Wire)

Nightingale’s appointment culminates a broad search process to identify the company’s next leader. He joins Fastly from Cisco, where he currently leads business strategy and development efforts for Cisco's multi-billion dollar networking portfolio as Executive Vice President and General Manager of Enterprise Networking and Cloud.

“Todd is a proven and passionate technology leader and we are thrilled to have him join our team,” said David Hornik, Lead Independent Director on the Fastly Board of Directors. “We are confident that Todd’s deep background helping customers transform their infrastructures and digitize their businesses will be instrumental to strengthening Fastly’s technology and go-to-market strategy and lead the company into its next stage of growth.”

"Fastly is extraordinary at the things that make us unique, including our incredibly powerful programmable edge cloud, innovative performance-focused product and engineering, and our unmatched support of customers as they build the next generation of globally performant, secure and reliable applications," said Artur Bergman, Fastly’s Founder, Chief Architect and Executive Chairperson. "I'm confident in Todd's ability to lead the company with the rigor and energy needed to elevate Fastly to its next level of extraordinary technology and product growth, including a strong go-to-market motion and operational strengths."

“Fastly is delivering unparalleled application experiences for users around the world with exceptional flexibility, security and performance,” said Nightingale. “I'm honored and grateful for the opportunity to be a part of the Fastly team.”

During his time at Cisco, Todd Nightingale led the Enterprise Networking and Cloud business as Executive Vice President and General Manager. He managed business strategy and development efforts for Cisco's multi-billion-dollar networking portfolio. Nightingale is known for his passionate technology leadership and his vision of powerful, simple solutions for businesses, schools, and governments. Previously, Nightingale was the Senior Vice President and General Manager of Cisco's Meraki business. His focus on delivering a simple, secure, digital workplace led to the expansion and growth of the Meraki portfolio, making it the largest cloud-managed networking platform in the world. Nightingale joined Cisco with the Meraki acquisition in 2012. He previously held engineering and senior management positions at AirDefense, where he was responsible for product development and guided the company through a successful acquisition by Motorola.

About Fastly

Fastly’s powerful and programmable edge cloud platform helps the world’s top brands deliver the fastest online experiences possible, while improving site performance, enhancing security, and empowering innovation at global scale. With world-class support that consistently achieves 95%+ customer satisfaction ratings*, Fastly's beloved suite of edge compute, delivery, and security offerings has been recognized as a leader by industry analysts such as IDC, Forrester and Gartner. Compared to legacy providers, Fastly’s powerful and modern network architecture is the fastest on the planet, empowering developers to deliver secure websites and apps at global scale with rapid time-to-market and industry-leading cost savings. Thousands of the world’s most prominent organizations trust Fastly to help them upgrade the internet experience, including Reddit, Pinterest, Stripe, Neiman Marcus, The New York Times, Epic Games, and GitHub. Learn more about Fastly at https://www.fastly.com/, and follow us @fastly.

*As of June 1, 2022

This press release contains “forward-looking” statements that are based on Fastly’s beliefs and assumptions and on information currently available to Fastly on the date of this press release. Forward-looking statements may involve known and unknown risks, uncertainties, and other factors that may cause its real results, performance, or achievements to be materially different from those expressed or implied by the forward-looking statements. These statements include, but are not limited to, those regarding Mr. Nightingale’s anticipated appointment as Chief Executive Officer and a member of Fastly’s Board of Directors, Fastly’s ability to strengthen its technology and go-to-market strategy, enter its next stage of growth, and deliver a robust portfolio for customers to continue developing the next generation of globally performant, secure and reliable applications. Except as required by law, Fastly assumes no obligation to update these forward-looking statements publicly, or to update the reasons real results could differ materially from those anticipated in the forward-looking statements, even if new information becomes available in the future. Important factors that could cause Fastly’s real results to differ materially are detailed from time to time in the reports Fastly files with the Securities and Exchange Commission (SEC), in its Annual Report on Form 10-K for the fiscal year ended December 31, 2021. Additional information will also be set forth in Fastly’s Quarterly Report on Form 10-Q for the fiscal quarter ended June 30, 2022. Copies of reports filed with the SEC are posted on Fastly’s website and are available from Fastly without charge.

Source: Fastly, Inc.

View source version on businesswire.com:https://www.businesswire.com/news/home/20220803005944/en/

CONTACT: Investor Contact:

Vernon Essi, Jr.

ir@fastly.com

Media Contact:

press@fastly.com

Copyright Business Wire 2022.
A Slack Bug Exposed Some Users’ Hashed Passwords for 5 Years

The office communication platform Slack is known for being easy and intuitive to use. But the company said on Friday that one of its low-friction features contained a vulnerability, now fixed, that exposed cryptographically scrambled versions of some users' passwords. 

When users created or revoked a link—known as a “shared invite link”—that others could use to sign up for a given Slack workspace, the command also inadvertently transmitted the link creator's hashed password to other members of that workspace. The flaw impacted the password of anyone who made or scrubbed a shared invite link over a five-year period, between April 17, 2017, and July 17, 2022.

Slack, which is now owned by Salesforce, says a security researcher disclosed the bug to the company on July 17, 2022. The errant passwords weren't visible anywhere in Slack, the company notes, and could only have been intercepted by someone actively monitoring relevant encrypted network traffic from Slack's servers. Though the company says it's unlikely that the actual content of any passwords was compromised as a result of the flaw, it notified impacted users on Thursday and forced password resets for all of them.

Slack said the situation impacted about 0.5 percent of its users. In 2019 the company said it had more than 10 million daily active users, which would mean roughly 50,000 notifications. By now, the company may have nearly doubled that number of users. Some users who had passwords exposed throughout the five years may not still be Slack users today.

“We immediately took steps to implement a fix and released an update the same day the bug was discovered, on July 17th, 2022,” the company said in a statement. “Slack has informed all impacted customers and the passwords for impacted users have been reset.”

The company did not respond to questions from WIRED by press time about which hashing algorithm it used on the passwords or whether the incident has prompted broader assessments of Slack's password-management architecture.
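
The algorithm matters because it determines how useful an intercepted hash is to an attacker: a fast, unsalted digest can be brute-forced offline at enormous speed, while a salted, deliberately expensive KDF slows every guess down. The snippet below contrasts the two using only Python's standard library; it is a general illustration and says nothing about what Slack actually uses.

```python
import hashlib
import os

password = b"correct horse battery staple"

# Fast, unsalted digest: identical passwords always hash to the same value,
# and an attacker can test an enormous number of guesses per second offline.
weak = hashlib.sha256(password).hexdigest()

# Salted, memory-hard KDF: a random per-user salt defeats precomputed tables,
# and the scrypt cost parameters make each individual guess expensive.
salt = os.urandom(16)
strong = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

print("sha256:", weak[:16], "...")
print("scrypt:", strong.hex()[:16], "...", "(salt:", salt.hex()[:8], "...)")
```

This is why current best practice is a dedicated password hash such as argon2, bcrypt, or scrypt with a per-user salt, so that a leak of hashes stays expensive to exploit.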

“It's unfortunate that in 2022 we're still seeing bugs that are clearly the result of failed threat modeling,” says Jake Williams, director of cyber-threat intelligence at the security firm Scythe. “While applications like Slack definitely perform security testing, bugs like this that only come up in edge case functionality still get missed. And obviously, the stakes are very high when it comes to sensitive data like passwords.”

The situation underscores the challenge of designing flexible and usable web applications that also silo and limit access to high-value data like passwords. If you received a notification from Slack, change your password, and make sure you have two-factor authentication turned on. You can also view the access logs for your account.
