Dubai, United Arab Emirates — Cisco, the leader in enterprise networking and security, is dramatically enhancing its Extended Detection and Response (XDR) solution. By adding recovery to the response process, Cisco XDR is redefining what customers should expect from security products. Today’s announcement brings near real-time recovery for business operations after a ransomware attack.
Cisco continues to drive momentum towards its vision of the Cisco Security Cloud—a unified, AI-driven, cross-domain security platform. With the launch of Cisco XDR at the RSA Conference this year, Cisco delivered deep telemetry and unmatched visibility across the network and endpoints. Now, by reducing to near-zero the crucial time between the beginning of a ransomware outbreak and the capture of a snapshot of business-critical information, Cisco XDR will further support that vision while enabling new levels of business continuity.
“Cybercrime remains a present risk that cannot be ignored for individuals and organizations across our region. In the last quarter, we have seen ransomware continuing to be one of the most-observed threats. To fight back against these cyber-attacks, a platform approach has become crucial. That is why we are consistently striving to build a resilient and open cybersecurity platform that can withstand ransomware attacks,” said Fady Younes, Cybersecurity Director, EMEA Service Providers and MEA. “Our innovations with automated ransomware recovery are a significant step towards achieving truly unified detection and response data, turning security insights into action.”
During the second quarter of 2023, the Cisco Talos Incident Response (IR) team responded to the highest number of ransomware engagements in more than a year. With the new capabilities in Cisco XDR, Security Operations Center (SOC) teams will be able to automatically detect, snapshot, and restore the business-critical data at the very first signs of a ransomware attack, often before it moves laterally through the network to reach high-value assets.
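The workflow described here reduces, conceptually, to a detect-snapshot-restore loop. As a rough illustration only, the Python sketch below models that pattern with hypothetical names; it is not Cisco XDR or Cohesity code.

```python
# A minimal sketch of the detect -> snapshot -> restore pattern described
# above. Every name here (BackupClient, on_ransomware_signal, etc.) is a
# hypothetical illustration, not a Cisco XDR or Cohesity API.
import time
from dataclasses import dataclass, field


@dataclass
class BackupClient:
    """Stand-in for a backup/recovery vendor integration."""
    snapshots: dict = field(default_factory=dict)

    def snapshot(self, workload: str) -> str:
        snap_id = f"{workload}-{int(time.time())}"
        self.snapshots[snap_id] = workload
        return snap_id

    def restore(self, snap_id: str) -> None:
        print(f"restoring {self.snapshots[snap_id]} from snapshot {snap_id}")


def on_ransomware_signal(workload: str, confidence: float,
                         backup: BackupClient, threshold: float = 0.8) -> None:
    """Snapshot business-critical data at the first credible sign of attack,
    before the threat can move laterally to high-value assets."""
    if confidence >= threshold:
        snap_id = backup.snapshot(workload)
        print(f"snapshot {snap_id} captured; restore point preserved")


backup = BackupClient()
on_ransomware_signal("finance-db", confidence=0.92, backup=backup)
```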
"Cisco is quickly disrupting the security landscape across their entire portfolio and their XDR solution could become the de facto reference architecture organizations turn to,” said Chris Konrad, Area Vice President, Global Cyber, World Wide Technology. “Not only does it provide broad visibility by integrating data across endpoints, network, cloud, and other sources - this extensive attack surface insight allows for superior threat detection using advanced analytics. Organizations should strongly consider the implementation of Cisco XDR to bolster their security posture and safeguard assets effectively. Cisco undoubtedly is contributing to the overall resilience of any organization.”
Cisco is expanding its initially released, extensive set of third-party XDR integrations to include leading infrastructure and enterprise data backup and recovery vendors. Today, Cisco is excited to announce the first integration of this kind with Cohesity’s DataProtect and DataHawk solutions.
“Cybersecurity is a board-level concern, and every CIO and CISO is under pressure to reduce risks posed by threat actors. To this end, Cisco and Cohesity have partnered to help enterprises around the world strengthen their cyber resilience,” said Sanjay Poonen, CEO and President, Cohesity. “Our first-of-its-kind proactive response is a key piece of our data security and management vision, and we’re excited to bring these capabilities to market first with Cisco.”
Cohesity has a proven track record of innovation in data backup and recovery capabilities. Cohesity’s products provide configurable recovery points and mass recovery for systems assigned to a protection plan. The new features take this core functionality to the next level by preserving potentially infected virtual machines for future forensic investigation, while simultaneously protecting data and workloads in the rest of the environment. Cohesity’s engineers worked alongside Cisco technical teams to dynamically adapt data protection policies to offer organizations a stronger security posture. This complements Cisco XDR’s robust detection, correlation, and integrated response capabilities and will enable customers to benefit from accelerated response for data protection and automated recovery.
Cisco XDR is now available globally to simplify security operations in today’s hybrid, multi-vendor, multi-threat landscape. To learn more, visit cisco.com/go/xdr
-Ends-
About Cisco
Cisco (NASDAQ: CSCO) is the worldwide technology leader that securely connects everything to make anything possible. Our purpose is to power an inclusive future for all by helping our customers reimagine their applications, power hybrid work, secure their enterprise, transform their infrastructure, and meet their sustainability goals. Discover more on The Newsroom and follow us on Twitter at @Cisco.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company.
Zoom has been in the crossfire this week.
In a whirlwind week of developments for Zoom, speculation about privacy issues connected to the company’s terms of service (TOS) has sparked concerns—along with some panic—about how it uses customer data to train AI models. This echoes broader concerns about privacy and data security across the digital communication landscape. Plus it’s another instance in which questions about the handling of AI are arising as quickly as AI technology is advancing.
The breaking news here at the end of the week is that the backlash has led Zoom to change its TOS to avoid the issue of data collection for AI altogether. Let’s unpack what happened.
The level of vitriol in the Zoom example has not been trivial. Some industry leaders publicly called out Zoom for mishandling this situation, which is understandable. Zoom has been on the wrong side of data privacy guardrails before. The company, which grew at an astronomical rate during the pandemic, was found to have misrepresented the use of certain encryption protocols, which led to a settlement with the FTC in 2021. That’s the part specific to Zoom. But the company is also being condemned as one more example in the litany of bad actors in big tech, where lawsuits about and investigations into data practices are countless. It’s no surprise that the public assumes the worst, especially given its added unease about the future of AI.
Fair enough. No one put Zoom in that crossfire. Nonetheless, it’s still true that software makers must strike a delicate balance between user data protection and technological advancement. Without user data protection, any company’s reputation will be shot, and customers will leave in droves; yet without technological advancement, no company will attract new customers or keep meeting the needs of the ones it already has. So we need to examine these concerns—about Zoom and more broadly—to shed light on the nuanced provisions and safeguards that shape a platform's data usage and its AI initiatives.
An analyst’s take on Zoom
By pure coincidence, around 20 other industry analysts and I spent three days with Zoom’s senior leadership in Silicon Valley last week. During this closed-door event, which Zoom hosts every year to get unvarnished feedback from analysts, we got an in-depth look into Zoom's operations, from finance to product and marketing, acquisitions, AI and beyond. Much of what we learned was under NDA, but I came away with not only a positive outlook on Zoom's future, but also a deeper respect for its leadership team and an admiration for its culture and ethos.
It’s worth noting that we had full access to the execs the whole time, without any PR people on their side trying to control the narrative. I can tell you from experience that this kind of unfettered access is rare.
You should also know that analysts are a tough crowd. When we have this kind of private access to top executives and non-public company information, we ask the toughest questions—the awkward questions—and we poke holes in the answers. I compared notes with Patrick Moorhead, CEO and principal analyst of Moor Insights & Strategy, who’s covered Zoom for years and attended many gatherings like this one. He and I couldn’t think of one analyst knowledgeable about Zoom’s leadership and operations whose opinion has soured on the company because of the furor about the TOS.
Still, we were intent on finding out more, so Moorhead and I requested a meeting with key members of Zoom's C-suite to get a better understanding of what was going on with the TOS. We had that meeting mid-week, yet before we could even finish this analysis, our insights were supplemented by a startlingly vulnerable LinkedIn post by Zoom CEO Eric Yuan. In that post, he said Zoom would never train AI models with customers' content without explicit consent. He pledged that Zoom would not train its AI models using customer "audio, video, chat, screen sharing, attachments and other communications like poll results, whiteboard and reactions."
What happened with Zoom's terms of service change?
In March 2023, Zoom updated its TOS “to be more transparent about how we use and who owns the various forms of content across our platform.” Given that Zoom is under FTC mandates for security disclosures, this kind of candor makes sense. Where the company went wrong was in making the change quietly, without clearly delineating how it would use data to train AI.
In our discussions with Zoom this week, the company took full ownership of that lack of communication. I don’t believe that the company was trying to hide anything or get anything past users. In fact, many of the provisions in the TOS don’t currently affect the vast majority of Zoom's customers. In being so proactive, the company inadvertently got too far ahead of itself, which caused unnecessary alarm among many customers who weren’t ever affected by the issue of AI training data.
Once the (understandable) panic began, Zoom released an updated version of its TOS, along with a blog post explaining the changes from the company's chief product officer, Smita Hashim. Hashim clarified that Zoom is authorized to use customer content to develop value-added services, but that customers always retain ownership and control over their content. She also emphasized the wording added to the TOS: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”
The day after Zoom released its blog post explaining the TOS changes, Yuan addressed the communication failure and the company’s plans for training AI more directly and soberly. The CEO took responsibility in his LinkedIn mea culpa, saying the company had an internal process failure. The post on his personal page addressed users’ concerns, similar to Zoom’s official blog post, but Yuan emphasized the promise not to train AI with customer data with a bold statement. He wrote, “It is my fundamental belief that any company that leverages customer content to train its AI without customer consent will be out of business overnight.”
By the end of the week, Zoom cemented Yuan’s commitment not to use customer data to train AI and issued a revised TOS, effective August 11, 2023. Hashim’s blog post was also updated with an editor’s note reiterating Zoom’s AI policy. What’s more, the company made immediate changes to the product.
Will this satisfy everyone who believes that Zoom steals their information and can’t be trusted? Maybe not. Yet with all of this in mind, let’s take a clear-eyed look at the different aspects of how Zoom uses data.
How Zoom uses customer data
First, let's distinguish between the two categories of data addressed in Zoom's TOS: "service-generated data," which includes telemetry, diagnostic and similar data, and "customer content," such as audio recordings or user-generated chat transcripts.
Zoom owns the service-generated data, but the company says it is used only to improve the service. Meanwhile, the video content, audio, chat and any files shared within the virtual four walls of any Zoom meeting—that is, the customer content—are entirely owned by the user. Zoom has limited rights to use that data to provide the service in the first place (as in the example that follows) or for legal, safety or security purposes.
The usage rights for meetings outlined in the TOS exist to safeguard the platform from potential copyright claims. These rights protect Zoom’s platform infrastructure and operation, allowing the company to manage and store files on its servers without infringing on content ownership.
Here's an example: a budding music band uses the platform to play some music for friends. Zoom, just by the nature of how the service works, must upload and buffer that audio onto company servers (among other processes) to deliver that song—which is considered intellectual property—to participants on the platform. If Zoom does not have the rights to do so, that band, its future management, its record label or anyone who ever owns that IP technically could sue Zoom for possessing that IP on its servers.
This may sound like a fringe use case, and it would be unlikely to hold up in court, but it is not unheard of and would expose the company or any future company owner to legal risk.
Is Zoom using your data to train AI models?
After this week’s changes to the TOS, the answer to this question is now a resounding No. When Zoom IQ Meeting Summary and Zoom IQ Chat Compose were recently introduced on a trial basis, they used AI to elevate the Zoom experience with automated meeting summaries and AI-assisted chat composition. But as we are publishing this article on August 11, Zoom says that it no longer uses any customer data to train AI models, either its own or from third parties. However, to best understand the series of events, I’ll lay out how the company previously handled the training of models.
Account owners and administrators were given full control over enabling the AI features. How Zoom IQ handled data during the free trial was addressed transparently in this blog post, which was published well before the broader concerns around data harvesting and AI model training arose. (The post has now been updated to reflect the clarified policy on handling customer data.)
When Zoom IQ was introduced, collecting data to train Zoom's AI models was made opt-in based on users' and guests’ active choice. As with the recording notifications that are familiar to most users, Zoom's process notified participants when their data was being used, and the notification had to be acknowledged for a user to proceed with their desired action. Separate from the collection of data for AI, Zoom told me this week that the product alerts users if the host has even enabled a generative AI feature such as Meeting Summary.
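In practical terms, that opt-in worked as a gate: no notification and acknowledgment, no data use. Here is a minimal Python sketch of such a gate, offered as an illustration of the pattern rather than Zoom's actual implementation.

```python
# A minimal sketch of the opt-in gate described above: data may be used for
# AI only if the participant was notified and actively acknowledged. This
# illustrates the pattern, not Zoom's actual implementation.
def may_use_for_ai(participant: dict) -> bool:
    """Consent requires both notification and acknowledgment."""
    return participant.get("notified", False) and participant.get("acknowledged", False)


participants = [
    {"name": "host", "notified": True, "acknowledged": True},
    {"name": "guest", "notified": True, "acknowledged": False},
]
print([p["name"] for p in participants if may_use_for_ai(p)])  # ['host']
```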
User data was collected to enhance the AI models' capabilities and overall user experience. Given the latest change to the TOS, it is unclear how Zoom plans to train its AI models now that it won’t have customer data to work with.
Until this week, here is what the opt-in looked like within the Zoom product.
How account owners and administrators previously enabled and controlled the Zoom IQ for Meeting Summary feature
And here is what it looks like as of August 11, 2023.
How account owners and administrators now enable and control the Zoom IQ for Meeting Summary feature
Zoom's federated AI approach integrates various AI models, including its own, alongside ones from companies such as Anthropic and OpenAI, as well as select customer models. This adaptability lets Zoom tailor AI solutions to individual business demands and user preferences—including how models are trained.
Responsible AI regulation will be a long time in the making. Legislators have admitted to being behind the curve on the rapid adoption of AI as industry pioneers such as OpenAI have called for Congress to regulate the technology. In the current period of self-regulation, the company’s AI model prioritizes safety, interpretability and steerability. It operates within established safety constraints and ethical guidelines, enabling training with well-defined parameters for decision making.
The bottom line: Zoom is using your data, but not in scary ways
Amid widespread privacy and data security concerns, I believe Zoom's approach is rooted in user control and transparency—something reinforced by this week’s changes to the TOS. There are nuanced provisions within Zoom's TOS that allow it to take steps that are necessary to operate the platform. This week’s events have highlighted the need for Zoom to communicate actively and publicly what I believe it is already prioritizing internally.
As technology—and AI in particular—evolves, fostering an open dialogue about data usage and privacy will be critical in preserving (or in some cases, rebuilding) trust among Zoom's users. This week has shown that people are still very skittish about AI, and rightfully so. There are still many unknowns about AI, but Moor Insights & Strategy’s assessment is that Zoom is well positioned to securely deliver a broad set of AI solutions customized for its users. Zoom has established that it intends to do so without using customer content to train its AI models. As the company navigates data privacy concerns, I hope it can strike a balance to meet users’ concerns while advancing technology to meet their business needs.
The company admittedly had an operational misstep. Let’s not confuse that with reprehensible intent. Zoom as an organization and its CEO personally have acknowledged its customers’ concerns and made necessary adjustments to the TOS that accurately reflect Zoom's sensible data privacy and security governance. I now look forward to seeing Zoom get back to focusing on connecting millions of people worldwide, bringing solutions to meetings, contact centers and more to make people and gatherings more meaningful and productive.
Note: This analysis contains content from Moor Insights & Strategy CEO and Chief Analyst Patrick Moorhead.
Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and video and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Ampere Computing, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Cadence Systems, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cohesity, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, HYCU, IBM, Infinidat, Infoblox, Infosys, Inseego, IonQ, IonVR, Infiot, Intel, Interdigital, Jabil Circuit, Juniper Networks, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, LoRa Alliance, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, Multefire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA, Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), NXP, onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign, TE Connectivity, TensTorrent, Tobii Technology, Teradata, T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in Fivestone Partners, Frore Systems, Groq, MemryX, Movandi, and Ventana Micro Systems.
Microsegmentation is table stakes for CISOs looking to gain the speed, scale and time-to-market advantages that multicloud tech stacks provide to digital-first business initiatives.
Gartner predicts that through 2023, at least 99% of cloud security failures will be the user’s fault. Getting microsegmentation right in multicloud configurations can make or break any zero-trust initiative. Ninety percent of enterprises migrating to the cloud are adopting zero trust, but just 22% are confident their organization will capitalize on its many benefits and transform their business. Zscaler’s The State of Zero Trust Transformation 2023 Report says secure cloud transformation is impossible with legacy network security infrastructure such as firewalls and VPNs.
Microsegmentation divides network environments into smaller segments and enforces granular security policies to minimize lateral blast radius in case of a breach. Network microsegmentation aims to segregate and isolate defined segments in an enterprise network, reducing the number of attack surfaces to limit lateral movement.
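One way to picture this is as a default-deny allow-list between workloads, as in the hedged Python sketch below; the workload names and ports are hypothetical, and real products enforce this at the network, hypervisor or host-agent layer.

```python
# A minimal sketch of microsegmentation as a default-deny allow-list between
# workloads. Workload names and ports are hypothetical, not any vendor's model.
ALLOWED_FLOWS = {
    ("web-frontend", "api-service", 443),
    ("api-service", "orders-db", 5432),
}


def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny: traffic passes only if a granular policy permits it."""
    return (src, dst, port) in ALLOWED_FLOWS


print(is_allowed("web-frontend", "api-service", 443))  # True
print(is_allowed("web-frontend", "orders-db", 5432))   # False: lateral move blocked
```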
It’s considered one of the main components of zero trust and is defined by NIST’s zero-trust framework. CISOs tell VentureBeat that microsegmentation is a challenge in large-scale, complex multicloud and hybrid cloud infrastructure configurations, and they see the potential for AI and machine learning (ML) to significantly improve its deployment and use.
Gartner defines microsegmentation as “the ability to insert a security policy into the access layer between any two workloads in the same extended data center. Microsegmentation technologies enable the definition of fine-grained network zones down to individual assets and applications.”
CISOs tell VentureBeat that the more hybrid and multicloud the environment, the more urgent — and complex — microsegmentation becomes. Many CISOs schedule microsegmentation in the latter stages of their zero-trust initiatives after they’ve achieved a few quick zero trust wins.
“You won’t really be able to credibly tell people that you did a zero trust journey if you don’t do the micro-segmentation,” said David Holmes, senior analyst at Forrester, during the webinar “The time for microsegmentation is now,” hosted by PJ Kirner, Illumio cofounder and advisor.
Holmes continued: “I recently was talking to somebody [and]…they said, ‘The global 2000 will always have a physical network forever.’ And I was like, ‘You know what? They’re probably right.’ At some point, you’re going to need to microsegment that. Otherwise, you’re not zero trust.”
CIOs and CISOs who have successfully deployed microsegmentation advise their peers to develop their network security architectures with zero trust first, concentrating on securing identities often under siege, along with applications and data, instead of the network perimeter. Gartner predicts that by 2026, 60% of enterprises working toward zero trust architecture will use more than one deployment form of microsegmentation, up from less than 5% in 2023.
Every leading microsegmentation provider has active R&D, DevOps and potential acquisition strategies underway to strengthen their AI and ML expertise further. Leading providers include Akamai, Airgap Networks, AlgoSec, Amazon Web Services, Cisco, ColorTokens, Elisity, Fortinet, Google, Illumio, Microsoft Azure, Onclave Networks, Palo Alto Networks, Tempered Networks, TrueFort, Tufin, VMware, Zero Networks and Zscaler.
Microsegmentation vendors offer a wide spectrum of products spanning network-based, hypervisor-based, and host-agent-based categories of solutions.
Bringing greater accuracy, speed and scale to microsegmentation is an ideal use case for AI, ML and the evolving area of new generative AI apps based on private large language models (LLMs). Microsegmentation is often scheduled in the latter stages of a zero trust framework’s roadmap because large-scale implementation can take longer than expected.
AI and ML can help increase the odds of success earlier in a zero-trust initiative by automating the most manual aspects of implementation. ML algorithms that learn how an implementation can be optimized further strengthen results by enforcing least-privileged access for every resource and securing every identity.
Forrester found that the majority of microsegmentation projects fail because on-premises private networks are among the most challenging domains to secure. Most organizations’ private networks are also flat, defying the granular policy definitions that microsegmentation needs to secure infrastructure fully. The flatter the private network, the more challenging it becomes to control the blast radius of malware, ransomware, open-source vulnerabilities such as Log4j, privileged access credential abuse and all other forms of cyberattack.
Startups see an opportunity in the many challenges that microsegmentation presents. Airgap Networks, AppGate SDP, Avocado Systems and Byos are startups with differentiated approaches to solving enterprises’ microsegmentation challenges. Airgap Networks is one of the top 20 zero-trust startups to watch in 2023. Its agentless approach to microsegmentation shrinks the attack surface of every connected endpoint on a network, and it can segment every endpoint across an enterprise while integrating into a running network without device changes, downtime or hardware upgrades.
Airgap Networks also introduced its Zero Trust Firewall (ZTFW) with ThreatGPT, which uses graph databases and GPT-3 models to help SecOps teams gain new threat insights. The GPT-3 models analyze natural language queries and identify security threats, while graph databases provide contextual intelligence on endpoint traffic relationships.
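The combination described, graph context plus a language model, can be sketched roughly as follows. A plain dictionary stands in for the graph database, and the `ask_llm` function is a placeholder rather than a real ThreatGPT or GPT-3 API.

```python
# A hedged sketch of the pattern described above: endpoint traffic
# relationships supply context, and a language model summarizes them for a
# natural-language query. A plain dict stands in for the graph database, and
# ask_llm is a placeholder, not a real ThreatGPT or GPT-3 call.
from collections import defaultdict

traffic_graph = defaultdict(set)  # observed endpoint-to-endpoint traffic
for src, dst in [("laptop-7", "crm"), ("iot-cam-2", "payroll-db")]:
    traffic_graph[src].add(dst)


def ask_llm(prompt: str) -> str:
    """Placeholder for a GPT-style model call; returns a canned answer here."""
    return f"[model assessment of: {prompt!r}]"


context = sorted(traffic_graph["iot-cam-2"])  # contextual intelligence
print(ask_llm(f"Is it suspicious that iot-cam-2 reaches {context}?"))
```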
AI and ML can deliver greater accuracy, speed and scale to microsegmentation in the following areas:
One of the most difficult aspects of microsegmentation is manually defining and managing access policies between workloads. AI and ML algorithms can automatically model application dependencies, communication flows and security policies. By applying AI and ML to these challenges, IT and SecOps teams can spend less time on policy management. Another ideal use case for AI in microsegmentation is its ability to simulate proposed policy changes and identify potential disruptions before enforcing them.
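A minimal sketch of both ideas, policy mining from observed flows and pre-enforcement simulation, might look like the following; the flow records and names are illustrative assumptions, not any product's data model.

```python
# A minimal sketch of both ideas above, under simplifying assumptions:
# (1) mine candidate allow-list rules from observed workload flows, and
# (2) simulate a tighter proposed policy against history to spot disruptions
# before enforcement. The flow records are illustrative only.
observed_flows = [
    ("web", "api", 443), ("api", "db", 5432), ("web", "api", 443),
    ("batch", "db", 5432),
]

# (1) Policy mining: every distinct observed flow becomes a candidate rule.
candidate_policy = set(observed_flows)
print(f"{len(candidate_policy)} candidate rules learned from traffic")

# (2) Simulation: count historical flows a tighter proposal would have blocked.
proposed_policy = {("web", "api", 443), ("api", "db", 5432)}
would_block = [f for f in observed_flows if f not in proposed_policy]
print(f"proposal would have blocked {len(would_block)} flow(s): {would_block}")
```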
Another challenge in implementing microsegmentation is capitalizing on the numerous sources of real-time telemetry and transforming them into a unified approach to reporting that provides deep visibility into network environments. Approaches to real-time analytics based on AI and ML provide a comprehensive view of communication and process flows between workloads. Advanced behavioral analytics provided by ML-based algorithms have proven effective in detecting anomalies and threats across east-west traffic flows. These analytics improve security while simplifying management.
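As a concrete example of such behavioral analytics, the sketch below trains an isolation forest on baseline east-west traffic profiles and flags an outlier. It assumes scikit-learn is available and uses invented feature values.

```python
# A sketch of ML-based behavioral analytics on east-west traffic, assuming
# scikit-learn is installed. Each row is an illustrative workload profile:
# (bytes transferred, distinct destination ports contacted).
from sklearn.ensemble import IsolationForest

baseline = [[5_000, 2], [6_200, 3], [5_800, 2], [5_500, 3], [6_000, 2]]
model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A workload suddenly moving far more data to many ports scores as anomalous (-1).
print(model.predict([[5_900, 2], [250_000, 48]]))  # e.g. [ 1 -1]
```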
AI can autonomously discover assets, map communication links, spot irregularities and distribute segmentation policies without manual intervention. This autonomous capability reduces the time and effort needed to implement microsegmentation and keeps policies current as assets change. It also reduces the potential for human error in policy development.
AI algorithms can analyze vast amounts of network traffic data to identify abnormal patterns, enabling security measures that scale without sacrificing speed. By harnessing AI for anomaly detection, microsegmentation can expand across extensive hybrid environments without introducing substantial overhead or latency, preserving security effectiveness as the environment grows.
AI can improve microsegmentation’s integration across on-premises, public cloud and hybrid environments by identifying roadblocks to optimized scaling and policy enforcement. AI-enabled integration provides a consistent security posture across heterogeneous environments, eliminating vulnerabilities attackers could exploit. It reduces operational complexity as well.
AI allows for automated responses to security incidents, reducing response times. Microsegmentation solutions can use trained ML models to detect anomalies and malicious behavior patterns in network traffic and workflow in real-time. These models can be trained on large datasets of normal traffic patterns and known attack signatures to detect emerging threats. When a model detects a potential incident, predefined playbooks can initiate automated response actions such as quarantining affected workloads, limiting lateral movement and alerting security teams.
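The playbook pattern can be sketched simply: a detection event type maps to predefined response actions. The names below are illustrative, not a specific vendor's API.

```python
# A minimal sketch of the playbook pattern described above: a detected event
# type maps to predefined response actions. Action names are illustrative,
# not a specific vendor's API.
PLAYBOOKS = {
    "ransomware": ["quarantine_workload", "block_lateral_movement", "page_oncall"],
    "port_scan": ["rate_limit_source", "alert_security_team"],
}


def respond(event_type: str, workload: str) -> None:
    """Dispatch the predefined actions for a detected incident."""
    for action in PLAYBOOKS.get(event_type, ["alert_security_team"]):
        print(f"{action}({workload!r})")  # a real system would call an API here


respond("ransomware", "payments-vm-3")
```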
AI streamlines team collaboration and automates workflows, decreasing the time required for planning, analysis and implementation. By enhancing collaboration and automation, AI can optimize the entire microsegmentation lifecycle, allowing for quicker time-to-value and ongoing agility and enhancing the productivity of security teams.
Microsegmentation is essential to zero trust architecture, but scaling it is difficult. AI and ML show potential for streamlining and strengthening microsegmentation in several key areas, including automating policy management, providing real-time insights, enabling autonomous discovery and segmentation and more.
When microsegmentation projects are delayed, AI and ML can help identify where the roadblocks are and how an organization can more quickly reach the results it’s after. AI and ML’s accuracy, speed and scale help organizations overcome implementation challenges and improve microsegmentation. Enterprises can reduce blast radius, stop lateral movement and grow securely across complex multicloud environments.
LogicMonitor Inc., a veteran player in the application observability market, is revamping its platform today with new features, expanded integrations with key software platforms, and a fresh lick of paint to spruce up the user experience.
The company’s LM Envision platform provides companies with an array of tools they can use to monitor the health of their software applications and the infrastructure they run on.
Observability refers to the practice of collecting data such as application logs and other metrics that can indicate if an app has any problems and show where a fix might be needed. This data is aggregated, collated and then presented to application teams in an easy-to-consume dashboard where it can be explored further.
The headline update is a new event management tool called LM Dexda, which uses advanced machine learning algorithms to help filter the noise created by thousands of daily alerts. It works by prioritizing alerts so that teams can move toward automating credible, prioritized response actions, LogicMonitor said.
Key attributes of LM Dexda include “adaptive correlation,” with alerts being automatically reclustered when a more optimal option is detected, and user-defined correlation, which enables administrators to fine-tune the underlying machine learning models to their particular needs. LM Dexda is also “ServiceNow-ready,” meaning that alerts can be enriched with ServiceNow data to provide additional context to alerts.
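In the spirit of adaptive correlation, a toy version of alert clustering might group alerts that share a resource within a time window and recluster by rerunning the grouping as new alerts arrive. The sketch below illustrates the idea only; it is not LM Dexda's algorithm.

```python
# A hedged sketch in the spirit of adaptive correlation: group alerts that
# share a resource within a time window; rerunning the grouping as alerts
# arrive amounts to reclustering. This illustrates the idea only and is not
# LM Dexda's actual algorithm.
from collections import defaultdict

alerts = [
    {"id": 1, "resource": "db-1", "minute": 0},
    {"id": 2, "resource": "db-1", "minute": 2},
    {"id": 3, "resource": "web-4", "minute": 30},
]


def cluster(alerts, window=5):
    """Group alert IDs by (resource, time bucket of `window` minutes)."""
    groups = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["minute"]):
        groups[(alert["resource"], alert["minute"] // window)].append(alert["id"])
    return list(groups.values())


print(cluster(alerts))  # [[1, 2], [3]]
```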
LogicMonitor also announced a series of new and enhanced integrations, including one for Red Hat Inc.’s Ansible platform. Jointly developed with Red Hat, the integration is said to assist with auto-remediation and auto-troubleshooting, and allows users to trigger remediation workflows directly within Ansible, acting in accordance with predefined rules.
With its improved VMware Inc. vSphere support, LM Envision now enables the discovery and monitoring of new ESXi hosts and mission-critical virtual machines, while the Cisco Meraki and Catalyst SD-WAN integrations are brand new, making it simpler for customers to monitor Cisco Systems Inc.-based environments. LogicMonitor is also delivering improved monitoring for Kubernetes deployments, with greater coverage and deeper visibility into the cloud environments that host them.
The new Datapoint Analysis features, meanwhile, rely on machine learning algorithms to surface related metrics and patterns across different infrastructure resources, helping expedite issue diagnosis and reduce the mean-time-to-resolution.
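A simplified way to surface related metrics is pairwise correlation across time series, as in the sketch below; it assumes Python 3.10+ for `statistics.correlation` and uses made-up data.

```python
# A simplified illustration of surfacing related metrics: compute pairwise
# Pearson correlation across metric time series and flag strongly correlated
# pairs. Assumes Python 3.10+ for statistics.correlation; data is made up.
from statistics import correlation

metrics = {
    "cpu_util":   [10, 20, 35, 50, 70],
    "request_ms": [100, 130, 180, 240, 330],
    "disk_free":  [90, 88, 91, 89, 90],
}

names = list(metrics)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = correlation(metrics[a], metrics[b])
        if abs(r) > 0.9:
            print(f"{a} and {b} move together (r={r:.2f})")  # cpu_util/request_ms
```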
Finally, LogicMonitor is changing the look and feel of its platform to create a more unified and consistent experience for users, saying this can aid in reducing the complexity of observability operations. Its user interface has been optimized to present information from complex, hybrid environments in a more intuitive way, the company said, with consistency across devices, services and networks.
It’s also adding new components such as “bulk actions,” enabling teams to perform multiple tasks at once, plus better search and filtering capabilities. In addition, it’s debuting 20 new out-of-the-box dashboards for Amazon Web Services and Microsoft Azure, with service-specific views that give users more insight into health, performance and availability, the company said.
Announcing today’s updates, LogicMonitor Chief Executive Christina Kosmowski said they address the reality that businesses are coming under tremendous pressure to deliver exceptional digital experiences to their users. “To efficiently do that, our customers look to us to contextualize the overwhelming amount of data within their complex IT environments,” she said.