The MITRE Engenuity ATT&CK evaluations are a transparent, yearly assessment of leading enterprise endpoint-protection solutions tested against known threats. The level of detail in the results not only demonstrates the efficacy of endpoint solutions but gives any defending team deep knowledge of how to protect its own organization according to the MITRE ATT&CK framework.
The underlying model for most evaluations of security products is the antivirus review. But generally, an AV review will tell you only whether a product stopped a threat, or perhaps whether the threat was blocked instead of neutralized.
Such reviews may be useful for consumer antivirus products that defend home PCs against internet-based threats, but enterprise endpoint-protection products require more detailed evaluations.
Antivirus reviews "may potentially help evaluate a protection product, like a traditional AV from a traditional AV vendor," said Shyue Hong Chuang, product manager for Cisco Secure Endpoint. "But when it comes to the stuff that got past, what did your product tell me? It's the MITRE evaluation, it's the AV-Comparatives EPR [Endpoint Prevention and Response] test that gives a bit more visibility (across the attack kill chain)."
The MITRE evaluation Chuang refers to is the MITRE Engenuity ATT&CK evaluations, or Evals for short, which MITRE has run almost every year since 2018. The Evals document every step in the kill chain of a well-known, real-life, sophisticated attack against a Microsoft Azure instance protected by one of the endpoint security products being tested.
For example, in the latest round of Evals, conducted in late 2021 with results released in March 2022, 30 different security vendors submitted their products for testing, including Cisco, CrowdStrike, McAfee, Microsoft and Symantec.
Each product faced two well-known adversaries: first, the Wizard Spider criminal group that has used the BazarLoader, Conti, Emotet, Ryuk and Trickbot malware against enterprise targets; and second, the Russian state-sponsored Sandworm group, notorious for attacks upon the Ukrainian energy sector as well as the NotPetya wiper malware attack in 2017.
Because the MITRE ATT&CK framework is well understood among security practitioners, the level of detail provided by the Engenuity evaluation results is a treasure trove of information about how each tested endpoint product fares at each step of the kill chain. MITRE posts the results publicly and freely, and while the documentation can be a bit hard to decipher, there's no better way for organizations considering an endpoint solution to assess how well a product may be suited for them.
"Defenders use Evals to make better informed decisions on leveraging the products that secure their networks," states the MITRE Engenuity ATT&CK evaluations website. "Each vendor evaluation is independently assessed on their unique approach to threat detection. Evaluation rounds are not a competitive analysis; they do not showcase scores, rankings, or ratings and are transparent and openly published."
Dr. Joel Fulton, co-founder and CEO of Lucidum, an asset discovery company, pointed out that the MITRE ATT&CK framework also helps CISOs better communicate their needs to executives.
"Most CISOs will ask for investments and increases in budget to respond to either current events or longstanding security concerns, but they don't have sufficient data points to support the ask," Fulton told CyberRisk Alliance. "By using the MITRE ATT&CK framework as a guide for these conversations, CISOs will be able to effectively explain the severity of threats and the actions to mitigate them while allowing CIOs to be active participants."
But it's not only those enterprises looking for new endpoint-protection software that can benefit from the MITRE Engenuity ATT&CK results. Because the evaluation results are so granular, skilled defense teams can use them to pinpoint weaknesses in their own security posture and adjust their strategies accordingly.
"Here is a true-to-form attack in sequence with the kill chain, the way that Sandworm or Wizard Spider actually facilitated these opportunities," said Adam Tomeo, senior product marketing manager for Cisco Secure Endpoint. "At this point, regardless of where you can potentially stop it on the kill chain, you can leverage each one of these sub-steps to help strengthen your security posture in your organization."
Both of these threat actors are still very active, as are the adversaries emulated in previous rounds of the MITRE Engenuity ATT&CK evaluations, which include the Carbanak and FIN7 criminal gangs and the Russian state-sponsored group Cozy Bear, or APT29, the latter believed to be behind the devastating SolarWinds supply-chain compromise of 2020.
"By viewing the MITRE ATT&CK framework as a 'board game' or checklist, security teams can thoroughly understand where their vulnerabilities lie and take the appropriate action to prevent attacks," said Fulton.
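To make the "checklist" idea concrete, here is a minimal, illustrative sketch, not any vendor's actual tooling: the technique IDs are real ATT&CK identifiers, but the coverage data is invented for the example. A defense team can map the techniques exercised in an Evals round against its own detection results and surface the gaps.

```python
# Illustrative only: map ATT&CK technique IDs observed in an Evals round
# to this organization's (hypothetical) detection outcomes.
coverage = {
    "T1059": "detected",  # Command and Scripting Interpreter
    "T1003": "missed",    # OS Credential Dumping
    "T1486": "blocked",   # Data Encrypted for Impact (e.g., Ryuk)
}

# Techniques that slipped through become the priority list for tuning.
gaps = [tid for tid, status in coverage.items() if status == "missed"]
print(gaps)  # → ['T1003']
```

The same approach scales to the full set of sub-steps MITRE publishes per adversary emulation, which is exactly the granularity the Evals results provide.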
To Eric Howard, lead technical engineer for Cisco Secure Endpoint, the MITRE Engenuity ATT&CK evaluations provide "the ability to have a common language between both those that know how to test an environment and those that are tasked with defending against the things that are thrown at an environment."
"Red and blue teams can speak the same language," Howard added, "reversing the power of the Babel effect so that we can get to the same goal."
Cisco is convinced a long-term, fundamental shift in the way people work is underway and irreversible, and it believes it is uniquely positioned to help businesses adapt.
Jeetu Patel, Cisco's executive vice president and general manager of security and collaboration, declared that “hybrid work doesn’t work yet,” because too many employees are stuck with uncomfortable working conditions and balky connections that harm their productivity and drain their mental energy.
Cisco, however, has a portfolio of collaboration hardware and software that’s both comprehensive and growing, ranging from home office routers, cameras and desktop hubs for videoconferencing to its popular Webex collaboration platform. Cisco is adding a number of artificial intelligence-driven upgrades to Webex, including several designed to improve the experience for remote attendees of hybrid meetings: making them look and feel (to the remote attendees) like everyone’s remote, for example, and adding a virtual whiteboard that everyone can use.
“Just like the Kindle is a purpose-built device for reading a book, we’ve built purpose-built devices for making sure you can engage with your work experience from home,” Patel said.
Cisco made clear that it will amplify its focus on optimizing end-user experiences, including both customer and employee experiences.
In addition to its hybrid work solutions, it introduced new applications that businesses can use to manage and secure their own apps. The offerings reflect Cisco’s acknowledgement that the world has entered the “application economy,” said Liz Centoni, Cisco’s chief strategy officer and general manager of applications.
“We care more about application program interfaces today because they deliver us the ability to leverage services from anywhere and everywhere,” Centoni said. “You can add new services or drop services without taking your application down. It’s a beautiful new world.”
Calisti is described as a “service mesh manager,” allowing businesses to see the health and performance of their entire application environment at a glance, then dig deeper to test how an app will likely perform under additional stressors, such as a traffic influx.
Panoptica is an application security tool that shows the vulnerabilities of all applications, based on the well-known MITRE ATT&CK framework, and makes it easy to take action to remediate them.
Engineers at General Motors are the first to use backhaul technology developed at Cisco to conduct pre-production performance testing on vehicles. The wireless, Wi-Fi-based technology lets auto engineers track several hundred data channels simultaneously in real time during test runs, monitoring vehicle operational parameters and modifying the test as it’s being conducted, if necessary.
Pre-production vehicles are built specifically for validation testing to ensure cars or trucks perform as intended. During this development phase enormous volumes of performance data from a wide range of tests are collected. These results are then used to refine the pre-production vehicles before they are built and sold to consumers.
Without backhaul, data was collected on “black boxes” and could not be analyzed until after the full test was run; there was no way to analyze data during a test or change the test on the fly. So if there were issues with a vehicle system or test parameters that would render a test unusable, nothing could be done until the data was checked at an offsite lab after the test. The 30- to 60-minute test might then have to be repeated, wasting time and resources. Backhaul lets engineers identify any issues with the vehicle or test parameters during the test process and resolve them in real time.
Using earlier wireless networks to collect this pre-production data didn’t work reliably either, because of the speeds the test vehicles were travelling, which often exceeded 100 mph.
To combat these problems, backhaul combines the reliability and speed of fiber connectivity with the flexibility of wireless communications. It delivers up to 500 Mbps and features ultra-low latency, high bandwidth, seamless handoffs with zero packet loss, and private mobile connectivity for mission-critical applications. It lets carmakers and other companies extend their networks wirelessly.
While backhaul is based on Wi-Fi, it is not an access technology. It can connect moving assets such as cars, AGVs, cranes, tele-remote vehicles and trains, or extend networks where running fiber isn’t feasible or is too costly. It is also more reliable than conventional wireless networks in areas prone to interference, such as ports and warehouses, where stacks of containers or pallets can create dead zones.
At GM, backhaul is giving test engineers instant access to information, letting them be more actively involved and make real-time decisions during test runs to eliminate the need to re-run a test. This streamlines the testing process and saves time.
“Since deploying Cisco wireless backhaul at the performance tracks of our Milford Proving Ground, GM now has stable and secure wireless network connections in that environment, where vehicle speeds can exceed 100 mph,” says Stephen Jenkins, director, Global Labs, Proving Grounds Operations & Materials Engineering. “This connectivity lets us perform real-time analysis and stream information directly into our Enterprise Data Center without buffering or human intervention.”
“GM needed a mature solution to gain real-time visibility into vehicle test data, and they tested many technologies which all fell short,” said Michael Shannon, VP of engineering, Cisco IoT. “Since deploying backhaul, however, GM has shortened the engineering cycles and ultimately helped improve time-to-market for technical innovations.”
It is sometimes difficult to understand the true value of IBM's Power-based CPUs and associated server platforms, even for IT professionals who deploy and manage servers, and even though the company has written a lot about them over the past few years. As an industry, we have become accustomed to using x86 as a baseline for comparison: if an x86 CPU has 64 cores, that becomes the yardstick for relative value in other CPUs.
But this is a flawed way of measuring CPUs and a broken system for measuring server platforms. An x86 core is different from an Arm core, which is different from a Power core. While Arm has achieved parity with x86 for some cloud-native workloads, the Power architecture is different again: multi-threading, encryption, AI enablement and many other functions are designed into Power without the performance cost they impose on other architectures.
I write all this as a set-up for IBM's announced expanded support for its Power10 architecture. In the following paragraphs, I will provide the details of IBM's announcement and deliver some thoughts on what this could mean for enterprise IT.
What was announced
Before discussing what was announced, it is a good idea to do a quick overview of Power10.
IBM introduced the Power10 CPU architecture at the Hot Chips conference in August 2020. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. Power10 is developed on the open-source Power ISA and comes in two variants: 15 SMT8 cores or 30 SMT4 cores. For those familiar with x86, SMT8 (eight threads per core) seems extreme, as does SMT4. But this is where the Power ISA is fundamentally different from x86: Power is a highly performant ISA, and the Power10 cores are designed for the most demanding workloads.
One last note on Power10: SMT8 is optimized for higher throughput at lower per-thread compute, while SMT4 targets the compute-intensive space at lower throughput.
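The arithmetic behind the two variants is worth spelling out: both configurations land at the same total thread count per chip, so the choice is about throughput versus per-thread compute rather than raw thread capacity. A quick sketch:

```python
# Power10 variants as (cores, hardware threads per core).
variants = {"SMT8": (15, 8), "SMT4": (30, 4)}

for name, (cores, threads_per_core) in variants.items():
    total = cores * threads_per_core
    # Both variants expose 120 hardware threads per chip.
    print(f"{name}: {cores} cores x {threads_per_core} threads = {total}")
```

Either way you slice it, 15 × 8 and 30 × 4 both come to 120 hardware threads; the SMT8 part simply concentrates them into fewer, wider cores.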
IBM introduced the Power E1080 in September of 2021. Moor Insights & Strategy chief analyst Patrick Moorhead wrote about it here. The E1080 is a system designed for mission and business-critical workloads and has been strongly adopted by IBM's loyal Power customer base.
Because of this success, IBM has expanded the breadth of the Power10 portfolio and how customers consume these resources.
The big reveal in IBM’s recent announcement is the availability of four new servers built on the Power10 architecture. These servers are designed to address customers' full range of workload needs in the enterprise datacenter.
The Power S1014 is the traditional enterprise workhorse that runs the modern business. For x86 IT folks, think of the S1014 as the equivalent of the two-socket workhorses that run virtualized infrastructure. One of the things IBM points out about the S1014 is that it was designed with lower technical requirements. This leads me to believe the company is perhaps lowering the barrier to entry for the S1014 in data centers that are not traditional IBM shops, or for environments that use Power for higher-end workloads but non-Power for traditional infrastructure needs.
The Power S1022 is IBM's scale-out server. Organizations embracing cloud-native, containerized environments will find the S1022 an ideal match. Again, for the x86 crowd – think of the traditional scale-out servers that are perhaps an AMD single socket or Intel dual-socket – the S1022 would be IBM's equivalent.
Finally, the S1024 targets the data analytics space. With lots of high-performing cores and a big memory footprint – this server plays in the area where IBM has done so well.
In addition to these platforms, IBM also introduced the Power E1050. The E1050 seems designed for big data and workloads with significant memory throughput requirements.
The E1050 is where I believe the difference in the Power architecture becomes obvious. It is where midrange starts to bump into high performance, and IBM claims 8-socket performance in this four-socket configuration. IBM says it can deliver performance for those running big data environments, larger data warehouses, and high-performance workloads. Perhaps more importantly, the company claims to provide considerable cost savings for workloads that generally require a significant financial investment.
One benchmark IBM showed was the two-tier SAP standard application benchmark. In this test, the E1050 handily beat an 8-socket x86 server, showing a 2.6x per-core performance advantage. We at Moor Insights & Strategy didn’t run or certify the benchmark, but the company has been conservative in its disclosures, and I have no reason to dispute it.
But the performance and cost savings are not just associated with these higher-end workloads with narrow applicability. In another comparison, IBM showed the Power S1022 performs 3.6x better than its x86 equivalent for running a containerized environment in Red Hat OpenShift. When all was added up, the S1022 was shown to lower TCO by 53%.
What makes Power-based servers perform so well in SAP and OpenShift?
The value of Power is derived both from the CPU architecture and from the engineering IBM puts into its system and server design. The company is not afraid to design and deploy enhancements it believes will deliver better performance, higher security, and greater reliability for its customers. In the case of Power10, I believe a few design factors have contributed to the performance and price/performance advantages the company claims, including:
These seemingly minor differences can add up to deliver significant performance benefits for workloads running in the datacenter. But some of this comes down to a very powerful (pardon the redundancy) core design. While x86 dominates the datacenter in unit share, IBM has maintained a loyal customer base because the Power CPUs are workhorses, and Power servers are performant, secure, and reliable for mission critical applications.
Like other server vendors, IBM sees the writing on the wall and has opened up its offerings to be consumed in a way that is most beneficial to its customers. Traditional acquisition model? Check. Pay as you go with hardware in your datacenter? Also, check. Cloud-based offerings? One more check.
While there is nothing revolutionary about what IBM is doing with how customers consume its technology, it is important to note that IBM is the only server vendor that also runs a global cloud service (IBM Cloud). This should enable the company to pass on savings to its customers while providing greater security and manageability.
I like what IBM is doing to maintain and potentially grow its market presence. The new Power10 lineup is designed to meet customers' entire range of performance and cost requirements without sacrificing any of the differentiated design and development that the company puts into its mission critical platforms.
Will this announcement move x86 IT organizations to transition to IBM? Unlikely. Nor do I believe this is IBM's goal. However, I can see how businesses concerned with performance, security, and TCO of their mission and business-critical workloads can find a strong argument for Power. And this can be the beginning of a more substantial Power presence in the datacenter.
Note: This analysis contains insights from Moor Insights & Strategy Founder and Chief Analyst, Patrick Moorhead.
Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign, TE Connectivity, TensTorrent, Tobii Technology, Teradata, T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.
More integrations between different parts of Cisco’s extensive portfolio in networking and security, combined with as-a-service consumption models meant to make customers’ lives a lot easier.
For years, Cisco events were mainly about adding new features, products, services and often entire acquired companies to the portfolio. This has resulted in an extensive and very powerful platform that organizations can use in any way they want. One consequence is that the portfolio can appear rather complex and confusing. The hybrid world organizations are moving towards adds even more complexity: organizations have moved to a distributed and hybrid infrastructure, not to mention the impact that hybrid working has on their IT environments.
So some of the complexity has been created by Cisco itself, and some is the result of general market developments. The good news is that because of the important role Cisco plays in these developments, the company can also help solve the complexity. Chuck Robbins, Cisco’s CEO, promises to do just that during the keynote: “We want to simplify the things we do with customers. We’re working hard on that, and we’ve made some good strides in doing so.” He is also refreshingly honest in pointing out that while Cisco has made some progress, it really needs to improve. Among other things, he mentions simplifying licensing and merging the various platforms Cisco has on offer.
To counter the negative side effects of more than a decade of growth, Todd Nightingale, who leads Cisco’s enterprise networking and cloud business, made a big announcement during Cisco Live. In fact, it was the biggest announcement of the event. Not the most surprising, because it was inevitably coming, but a very important one: Cisco is going to merge Meraki and Catalyst. That is, it will ensure that Catalyst products can be managed and monitored from Meraki’s cloud platform. In doing so, Cisco says it is merging the number-one cloud-managed networking platform with the number-one campus networking platform.
Mind you, it doesn’t mean that the current ways to manage and connect Catalyst hardware will be retired. There will still be support for Catalyst hardware from (on-premises) DNA Center. Customers and partners can also continue to use CLI to do the management. Lastly, all Catalyst hardware that has come to market since the introduction of the Catalyst 9000 series can become part of the new management environment.
Merging Meraki and Catalyst may be a good and timely move on paper. However, what does it mean in practice? Also, how will the market react to this merger? It’s not hard to imagine that not everyone will be happy with this. Partners and organizations have been using Catalyst for a long time now. Changing how they monitor and manage that platform is quite fundamental. We wouldn’t be surprised if many of them want to continue to use Catalyst hardware in the old way.
There are several reasons for this. The first is that, until at least a few years ago, Meraki had an SMB/SME focus at Cisco as far as we know. The Enterprise Networking division and the Meraki division are, as such, two different parts of the company, which must affect the capabilities Meraki offers versus what Catalyst offered and is offering. The second reason is that many partners have built a business model around the ‘complex’ nature of Catalyst management. They don’t want to give that up.
In time, these two parts will undoubtedly move (even further) towards each other. This will not be the case immediately at launch. In any event, Cisco will be more in the driver’s seat moving forward, according to Nightingale. That is, the new offering won’t be about having as many features as possible anymore. “We don’t necessarily want to try to be feature-complete, but use-case complete”, he states. This probably means that the number of features will decrease, and Nightingale more or less confirms that. He gives a hypothetical example in which the new environment reduces the number of ways to do a specific configuration from twelve to two.
Mind you, reducing features, specs and options is not necessarily a bad idea. In fact, it’s where the market as a whole is headed. We are moving more and more towards a self-driving, autonomous network, in which AI will play an increasingly important role. That no longer includes extremely complex manual configurations. Whether all current Cisco customers and partners who use Catalyst hardware already think this way, however, we wonder. That may take some time. Cisco has quite a lot of legacy there (in the positive sense of the word). As far as we are concerned, however, Cisco has taken the right step by merging the two environments. It is now up to the company to convince customers and partners of its added value.
Merging Meraki and Catalyst is about reducing complexity in Cisco’s own portfolio. In addition to this, the company also announced something today that should address more general complexity. More specifically, complexity caused by the move towards a hybrid infrastructure. To address this, Cisco announces the Nexus Cloud SaaS offering. This will allow customers to manage their Nexus devices in their data centers from the cloud. Nexus Cloud is part of (or powered by) the Intersight Platform. Intersight is a collection of services that allows organizations to deploy and optimize their distributed infrastructure, among other things. Intersight sees all the endpoints in the infrastructure and analyzes the telemetry data they generate. Additionally, there are services within Intersight that deal with optimizing Kubernetes environments and HashiCorp Terraform environments.
Adding the management of Nexus devices into organizations’ private cloud should obviously simplify and speed up things like deployment and management (think upgrades) of infrastructure a lot. It also integrates with the other services within Intersight. That means you can now manage UCS servers, HyperFlex HCI, Nexus-based private clouds, cloud-native Kubernetes environments and third-party hardware from a single location. This should bring the promised simplification another step closer for customers.
The introduction of Nexus Cloud, by the way, is not a standalone event, according to Robbins. “Everything that we can deliver as a service, we want to start delivering as a service,” he stated during his keynote at Cisco Live. As was the case with the integration of Meraki and Catalyst, the “old” way will continue to be available as well. That is, if you don’t want to use Nexus Cloud, you don’t have to.
When we talk about making it easier to set up and deploy infrastructure, you can’t ignore security. Especially in a hybrid and distributed architecture, it can quickly become a confusing topic. Jeetu Patel, the EVP Security and Collaboration at Cisco, sees this as a golden opportunity for Cisco. This is because Cisco focuses on a platform approach to security. That approach entails deep integrations between different components within the portfolio. “The complexity of hybrid architectures and the increasingly sophisticated threats lead to a preference for an integrated approach to security,” he states.
Providing integration between different components alone is not enough to get the desired simplicity. From Cisco’s point of view, there is also a fair amount of work to be done to make the experience as a whole as good as possible. That starts with something as simple as merging all the different clients into a single client or application. Today Cisco offers a VPN client, a client for Duo and about twenty more. “We really need to get away from that,” Patel indicates. So we will see a consolidation of all those security clients into one.
Solving the complexity of the past is part of Cisco’s ambition. However, this also has consequences during the development of new products and services. These must be as simple as possible from the outset, without losing any of their capabilities.
An example of such a new product is Cisco+ Secure Connect. This is (part of) the company’s SASE solution. “It’s completely turnkey and therefore easy to use,” says Patel. Cisco’s SASE offering also scales very well, with Points of Presence around the world. In addition to pre-login security, Cisco also doesn’t forget about post-login security. That’s why it developed Wi-Fi Fingerprint. This new feature makes it possible to offer continuous trusted access. The interesting thing about this feature is that it does not reveal where you are geographically, because that is undesirable. It scans the SSIDs in the area and thus determines whether an employee is in an environment where he or she can access the company network and resources with full privileges or not.
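Cisco hasn’t published implementation details for Wi-Fi Fingerprint, but the SSID-scanning idea can be sketched. The following is a hypothetical illustration, not Cisco’s actual code: hashing the set of visible SSIDs lets a device recognize a trusted environment without storing or transmitting geographic coordinates.

```python
import hashlib

def fingerprint(ssids):
    """Order-independent digest of the visible SSID set.

    Hypothetical sketch: only the hash is kept, so the raw SSIDs
    (and thus the physical location) are never revealed.
    """
    canon = "|".join(sorted(set(ssids)))
    return hashlib.sha256(canon.encode()).hexdigest()

# Enrolled once, e.g. while the employee is known to be at a trusted site.
TRUSTED = {fingerprint(["CorpNet", "CorpGuest", "LobbyCafe"])}

def is_trusted(visible_ssids):
    """True if the current Wi-Fi environment matches an enrolled one."""
    return fingerprint(visible_ssids) in TRUSTED
```

Because the digest is computed over the sorted, de-duplicated SSID set, scan order doesn’t matter; a production system would of course need to tolerate partial matches as access points come and go.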
The cloud also plays an important role for Cisco in the area of security. During his keynote, Patel talked about, among other things, a firewall management center as a SaaS solution. This allows you to manage on-prem and cloud firewalls from a single location in the cloud.
All in all, Cisco wants to provide an end-to-end platform solution for prevention, detection, response and threat intelligence. The management of that platform must happen at a central location too. For that, Cisco seems to select the cloud.
Finally, there is the issue of lock-in. A vendor with a portfolio as extensive as Cisco’s immediately conjures up images of vendor lock-in. That may have been the case in the past, but it has changed in recent years. The fact that Cisco uses OpenAPI standards is an example: you don’t have to buy everything from Cisco and can still have deep integrations with third-party tooling.
A final example of integration within Cisco, and thus of simplifying offerings and reducing complexity, is ThousandEyes WAN Insights. With the acquisition of ThousandEyes a few years ago, Cisco gained new WAN capabilities. One of those is ThousandEyes WAN Insights, which Cisco announced during Cisco Live. This is an integration between ThousandEyes’ offering and Cisco’s SD-WAN offering. In other words, it links the configuration of WAN connections to cloud services and other sites with the insights ThousandEyes has into the quality of the global backbone.
The idea behind ThousandEyes WAN Insights is that it is becoming increasingly important but also increasingly complex to optimally configure connections across the WAN. With this new offering, ThousandEyes continuously analyzes so-called path metrics from Cisco vAnalytics. Based on that, ThousandEyes WAN Insights provides recommendations back to Cisco SD-WAN to optimally route outbound traffic.
So ThousandEyes WAN Insights is about analyzing each individual path, not merely the connection between sites as a whole. In practice, there are often several such paths per connection. These often run via different networks, such as a provider’s fiber optic network and MPLS. According to the ThousandEyes WAN Insights announcement, it needs an average of at least 24 hours to gather the data it needs to make recommendations.
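As a rough illustration of the recommendation step (not ThousandEyes’ actual algorithm, and with invented path metrics), scoring each available path from its loss and latency and recommending the best one might look like this:

```python
# Hypothetical per-path metrics for one site-to-cloud connection.
paths = [
    {"name": "MPLS",  "loss_pct": 0.1, "latency_ms": 45},
    {"name": "Fiber", "loss_pct": 0.0, "latency_ms": 20},
]

def score(path):
    # Lower is better; packet loss is weighted heavily because it
    # typically hurts application experience more than latency does.
    return path["loss_pct"] * 100 + path["latency_ms"]

# Recommend the path with the best (lowest) score for outbound traffic.
best = min(paths, key=score)
print(best["name"])  # → Fiber
```

In the real product the inputs are the path metrics gathered from Cisco vAnalytics over at least roughly 24 hours, and the output is a recommendation fed back into Cisco SD-WAN rather than a direct routing change.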
At the end of the day, the outcome of ThousandEyes WAN Insights should be that applications and environments such as Webex, Salesforce, Office 365 and Google Cloud perform better, and that connections between branch offices and a data center offer higher availability and performance.
More generally, according to Cisco, ThousandEyes WAN Insights allows you to move from a reactive to a proactive posture. You no longer solve problems after they occur and affect your organization, but before they do. As far as we understand, the steps needed to fix an upcoming problem are not yet automated; ITOps teams need to act on the input from ThousandEyes. In the future, it should become possible to automate that step as well.
The announcements and strategy discussed in this article show that Cisco takes its role as a major player in the IT and security industry seriously. It has to, by the way; otherwise it will lose that role. It is good to see that Cisco looks critically at its own offering too. Addressing the complexity of the current landscape starts with reducing complexity in its own offering. Not everyone will be happy with the changes, but that is always the case when you fundamentally change things.
Mind you, merging Meraki and Catalyst is also simply necessary in order to keep up with the other players. Enterprise networking is inexorably on the way to being cloud-managed. That’s the way it is nowadays. We are curious to see whether Meraki can carry and propagate the Catalyst legacy (in the positive sense of the word) from the cloud. In any case, Cisco will have to work hard to demonstrate the added value of this. It will certainly do so, if we are to believe Nightingale: “We are going to make it so convincing that customers will want to make the switch themselves.” That’s a nice promise, one we’ll keep in mind. We’ll be sure to come back to it in subsequent conversations with people at Cisco, too.
When you think of a typical startup, what words come to mind? “Progressive and trendy?” “Lean and innovative?” “Agile and engaged?”
Now, think of a classic Fortune 500 enterprise. Odds are, a large corporation that’s been around for decades evokes a different set of notions – perhaps “bureaucratic,” “too political,” “hierarchical,” “legacy” and a bit “outmoded” ring true.
Despite the intrinsic differences between startups and enterprises, one thing is certain: The pace of change in today’s market is so fast and volatile that companies of all sizes risk their very survival unless they become more disruptive and more innovative. In fact, both startups and enterprises are at risk, as 90 percent of startups fail, and 40 percent of Fortune 500 companies may cease to exist in 10 years.
While there’s no silver bullet for success, large organizations can more readily adapt to the new business climate by developing a culture of open collaboration and, most importantly, innovation – that is, hyper co-innovation.
For startups, innovation is treated as a team sport, where a diverse set of players from all departments and roles inside and outside the company are as important as the ideas they generate. This is a stark contrast to a traditional enterprise’s approach, where innovation is often treated as a more rigid, defined process. But by changing their mindset and approach, even large enterprises can unleash the innovative nature that is within their employees and become more disruptive. Here’s how to think like a startup, but scale like an enterprise – and balance the best of both worlds.
Innovation can come from anywhere, anytime. Employees in all job functions and at all levels should be encouraged to come up with innovative ideas and given the support needed to implement them.
At Cisco, we’ve personally experienced the success that comes from this mindset through our Innovate Everywhere Challenge, a companywide, cross-functional innovation competition that mirrors real-life startup practices. Employees from all job roles and levels are encouraged to form teams and pitch their innovative ideas for everything from business process improvements to new digital solutions. Teams with the best ideas are given funding, mentorship and time off from their regular job functions to make their ideas a reality.
One of the most successful projects to arise from the challenge was LifeChanger, which helps people with disabilities work remotely by leveraging voice, video and collaboration technologies. To date, more than 100 people with disabilities have gained access to meaningful employment through LifeChanger, and several other organizations are looking to implement the solution as well.
To begin thinking more like a startup, leaders should first emphasize cross-functional teams, think outside of functions and break down business-unit silos. This will ensure that you are tapping into the best and brightest ideas. We know from experience and research that the most valuable digital solutions come from teams with different backgrounds and perspectives.
Second, enable transparent digital communication amongst teams and stakeholders. This could involve setting up an online forum, establishing a mentor network, or disseminating employee surveys and sharing the feedback.
Leaders must also be flexible. Encourage rapid prototyping for solutions – validate concepts with potential customers early on, pivot fast and take risks. And when something does not work, empower teams to learn from failures and move on to the next idea.
Lastly, understand that innovation is in everything: innovation should be integral to the way a company conducts day-to-day business, not just an approach to developing new products. Therefore, focus on people – not just technology – when incubating new ideas.
On the other side of the coin, large enterprises also embody a core set of strengths, resources and partnerships that can accelerate innovation, such as their ability to quickly scale and get products to market.
Most successful enterprises actively build their ecosystems, leveraging vertical, horizontal and local partners to ensure the scalability, mass customization and reach of their solutions. They also know how to set their sights on clearly defined, broader goals, understanding that innovation is about more than delivering a cool new app or futuristic device.
Enterprises focus on the business outcomes and value at stake, rather than taking a scattered approach driven by passions that can be fleeting or change over time. Plus, they have established customers, partners and marketing channels to broaden exposure and credibility of innovative ideas.
Use these attributes and resources to your advantage as you begin to weave the startup mindset into your culture.
As you take the best characteristics from both successful startups and enterprises, the next step is to ignite a startup culture by engaging and challenging employees to innovate. Here’s how:
Innovation programs must be extended to all employees, across departments, levels and roles. From there, encourage inclusion and diversity of perspectives, and empower employees to make decisions and tap into their inner entrepreneur.
As employees innovate, support their ideas with mentor networks, angel investors and other resources that an actual startup would use. This will help lead your internal innovators through the four phases of a lean startup: ideation, validation, funding and development.
Use not only gamification to make innovation fun, but also introduce rewards (monetary, time off, etc.) as incentives. Additionally, look for opportunities to secure publicity for the innovative ideas or solutions your employees create. Highlighting their accomplishments is extremely rewarding for them, spurs further engagement and innovation from other employees, and can bolster your company’s brand reputation.
Designate an internal champion to lead your innovation initiatives and attain buy-in from the higher-ups, all the way to the C-Suite. Most importantly, your champion can help you discover existing, untapped talent by engaging as many employees as possible and sharing what your programs have to offer.
One size doesn’t fit all when it comes to innovation, especially for global companies. Every stakeholder in the mix (including employees) has a different set of personalities, priorities and passions. And there will always be polarities of tension between established and startup practices in a big company. Therefore, customize and balance your program’s approach to stimulating employee innovation, blending both the startup and large enterprise mindsets. As innovation expert and author Michael Docherty advises: “Embrace the power of AND,” referring to the required blend of more than one approach to innovation.
By combining the best attributes of a startup culture with the scalability of an enterprise, competing in an age of disruption becomes far less daunting. The goal is to transform your organization and its culture by empowering and inspiring employees – regardless of role, rank or region – to innovate. Then, by providing a wealth of resources, training and incentives, nurture their innovation to bring big ideas to market. With employees innovating anytime, anywhere, you’ll better your entire organization – and change the world while you’re at it.
Editor’s note: For the first installment in this two-part series, click here to read, “Cisco on Routed Optical Networking: The efficiency of it all (Part 1)”.
5G is maturing, but new service revenues remain elusive; operators have something of a tough row to hoe. As they try to figure out the new revenue piece–something that will certainly flow from delivering solutions to enterprises rather than selling more of [insert thing they could sell more of] to consumers–a secondary (perhaps primary) goal is removing structural operational costs and automating whatever can be automated.
To the removal of structural costs, see the link to Part 1 of this story in the subheading. As for the automation of it all, we turn to an interview conducted on the sidelines of Cisco Live with Kevin Wollenweber, vice president of product management for Cisco’s Service Provider Network Systems business. One of the recurring themes of our conversation was how operators–by and large risk-averse, bloated organizations seemingly unaware that their own corporate inertia is a primary culprit for stagnant ARPUs (my words, not his)–figure out what amount of automation is the correct amount of automation.
“The word automation means something different to everyone,” he said. “When we started down the path of automation, it was more about simple automation of tasks: I used to type these seven commands, now I write a script that automates the writing of those seven commands. What we’re really seeing now is the era of more automating intent, and delivering full use cases through automation. In terms of automation, I’m definitely seeing a shift of the [service] providers away from just task-based automation. They want to simplify how they use multiple tools and multiple technologies and deliver on these end use cases like what we’re doing with Routed Optical, or as they roll out Kubernetes infrastructure.”
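The distinction Wollenweber draws can be sketched in a few lines of code. This is a hypothetical illustration, not anything from Cisco’s tooling: the command strings are placeholders rather than real router CLI, and `render_intent` is an invented name. The point is the shift from replaying fixed commands to deriving commands from a declared intent.

```python
# Task-based automation: replay the same commands an engineer used to type.
# The command strings below are placeholders, not real router CLI syntax.
TASK_COMMANDS = [
    "configure terminal",
    "interface GigabitEthernet0/0",
    "description uplink",
    "no shutdown",
]

def run_task_script(send):
    """Replay a fixed command list; `send` delivers one line to the device."""
    for cmd in TASK_COMMANDS:
        send(cmd)

# Intent-based automation: the operator states *what* they want, and the
# tooling derives the per-device commands from that declarative intent.
def render_intent(intent):
    """Translate a declarative intent dict into device commands."""
    cmds = ["configure terminal", f"interface {intent['interface']}"]
    if intent.get("description"):
        cmds.append(f"description {intent['description']}")
    if intent.get("enabled", True):
        cmds.append("no shutdown")
    return cmds
```

The task script breaks the moment the device or the use case changes; the intent function can grow to cover new device types without the operator’s request changing at all, which is what makes it a building block for full use cases like Routed Optical.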
When he’s talking through it with potential or existing customers, Wollenweber said he likes to boil things down to three primary tenets:
Wollenweber, pulling out another recurring theme from the larger Cisco Live program, pointed out that the technology isn’t really the hard part here. The bigger lift is assembling an organization in a manner that lets the technology do what it’s supposed to do, which is drive operational savings. This is particularly important if you look at the vision of 5G network slicing, wherein, in its most functional form, a customer could enter network performance requirements into a portal and the operator’s network would provision and deliver those features automatically and end-to-end.
The friction is around the sheer complexity of the networks operators have built. There’s also something of a disconnect around the idea that in order to get to an end state (if there ever is an end state) of automated, intelligent, almost elegant, networks, things will probably have to get more complex before they get less complex.
“I would say a lot of the components and underlying building blocks, we’re delivering that today,” he said. “I think we see there’s going to be a lot of revenue that comes in from enterprises and private networks. The analogy I like to use a lot with what we’re doing with Routed Optical and a lot of these tools, think about Tesla and self-driving cars and what’s happening in that space. The goal isn’t to transform a car; it’s to transform the transportation industry. That’s what we’re trying to do with networks. I live in [network] transport. For me, you can’t build a house without a foundation. You can’t build a next-gen communications infrastructure without transport. What we’re really trying to do is transform the service provider infrastructure, drive that cost and efficiency moving forward.”
In connected car news are BMW, NASCAR, Ansys, HARMAN and SkyWater Tech.
BMW has started selling subscriptions for features that are built into the car but blocked by software until a fee is paid. The two features causing heated debate are heated steering wheels at $12 a month and front heated seats at $18 a month. The services have been launched in the UK, Australia, South Korea, New Zealand and South Africa.
Other pay as you go features are:
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced that BMW Group is joining the Yocto Project as a member.
BMW Group’s membership reaffirms its commitment to work with, and in, sustainable ecosystems and software, and to support open source and the key tools it uses to build its products. The Yocto Project welcomes this support and looks forward to benefiting from BMW Group’s input and experience. It joins other members including Intel, Comcast, Arm, Cisco, Facebook (Meta), Xilinx, Microsoft, Wind River and AWS.
With the rise of devices and sensors being used across every industry, developers today require a common set of tools that help them manage software stacks, configurations, and best practices tailored for Linux images for embedded and IoT devices. Over the last decade Yocto Project has been tuned for this purpose and today is the de facto set of tools for building and supporting a new generation of devices. In short, it helps developers create custom Linux-based systems regardless of the hardware architecture.
The Yocto Project has grown significantly since it was created, rising to the constantly evolving challenge of building custom operating systems for products in a maintainable and scalable way. The project leads in build system technology with bitwise identical build output every time, advanced software manifests, license handling capabilities, and strong binary artifact reuse among many other developments. Yocto Project 4.0 (aka Kirkstone) was released in April. Based on Linux kernel 5.15, glibc 2.35, and roughly 300 other recipe upgrades, Yocto 4.0 supports SPDX SBOM generation and is the latest Long Term Support (LTS) release.
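For readers tracking the SBOM feature specifically: in the Kirkstone release, SPDX SBOM generation is typically switched on from the build’s `local.conf` by inheriting the `create-spdx` class. The fragment below is a minimal example; consult the Yocto Project 4.0 documentation for the authoritative variable names and options.

```
# conf/local.conf -- enable SPDX SBOM generation (Yocto 4.0 / Kirkstone)
INHERIT += "create-spdx"
```

With this in place, the build emits SPDX documents describing the recipes and packages that went into the image.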
NASCAR used Ansys’ (NASDAQ: ANSS) simulation solutions to ensure the safety of its Next Gen race car in time for the 2022 season with virtual crash tests that accelerated validation time and reduced material costs for physical testing by $1 million. The crash simulations allowed NASCAR to overcome pandemic-induced physical testing challenges and meet its goal to debut the car in February at the Daytona 500 motor race, the 500-mile season-opener considered the most prestigious and important race in NASCAR.
By integrating Ansys® LS-DYNA® into crash testing, NASCAR was able to analyze, test, and validate multidirectional influences, including nonlinear and linear contact to the entire vehicle, spanning frontal impacts, roof crashes, lateral side impacts, rear impacts, and oblique impacts. This high-fidelity data, compiled by virtual crash simulations, slashed typical validation timing and material costs by reducing physical crash tests — estimated at $500,000 each — to only two full scale vehicle physical crash tests.
Further, Ansys’ predictive accuracy and results gave NASCAR the engineering confidence and ability to build parts without physical crash test data during early development stages in 2020 when on-site crash facilities were shut down due to COVID-19. When physical crash tests were later performed, Ansys’ robust and comprehensive simulation models were verified. Similarly, the software’s cloud computing capabilities allowed NASCAR to run and manage a large volume of simulations remotely using Ansys® Cloud™.
HARMAN International, a wholly-owned subsidiary of Samsung Electronics Co., Ltd. focused on connected technologies and solutions for automotive, consumer and enterprise markets, announced today that the ARD Audiothek app will be available in the HARMAN Ignite Store, a leading connected vehicle platform that enables automakers to develop, manage, and operate their own in-vehicle app store. Starting in Germany, the collaboration between HARMAN and ARD will enable automotive manufacturers to offer the ARD Audiothek app easily and securely in their vehicles moving forward, offering millions of drivers the opportunity to experience ARD audio content in their cars.
Fans and discoverers of audio offerings can look forward to the extensive content ARD Audiothek offers, including podcasts, audio books, documentaries, reports, and the live radio streams of the public broadcasters. Background information on current topics from politics, science and society, as well as live and exclusive ARD content, is available. Users of the app will be able to easily access their favorite podcasts through their personal playlists when entering the car.
The HARMAN Ignite Store brings users their favorite content in-vehicle, making the vehicle a seamless extension of consumers’ digital lifestyle. It optimizes the driving experience by providing access to a rich ecosystem of cloud-based applications and services. This means that automotive manufacturers, dealers and service providers can easily import and manage new cloud applications and services into a vehicle’s infotainment system and thus serve the comfort, information and entertainment needs of customers all over the world. HARMAN Ignite Store includes an ever-growing range of media content, point-of-interest solutions, and messaging applications.
SkyWater Technology (NASDAQ: SKYT), the trusted technology realization partner, today announced the Department of Defense (DOD) is funding a $27 million Other Transaction (OT) Agreement Option to further develop intellectual property (IP) for its 90 nm Strategic Rad-Hard by Process (RH90) FDSOI technology platform. This is the latest agreement between SkyWater and the DOD to ensure a reliable and trusted source of U.S.-made chips for use in strategic defense and space applications.
This is another step in SkyWater’s RH90 technology roadmap and is part of the previously announced up to $170M investment in SkyWater by the DOD to broaden onshore production capabilities for strategic rad-hard electronics. The DOD recently determined that SkyWater has successfully completed the base prototype project.
SkyWater’s RH90 platform is based on MIT Lincoln Laboratory’s 90 nm fully depleted silicon-on-insulator (FDSOI) complementary metal-oxide-semiconductor (CMOS) process, which was engineered to produce radiation-hardened (rad-hard) electronics that can withstand harsh radiation environments. Radiation effects can rapidly degrade microelectronics and, left unmitigated, can cause compromised performance, malfunctions or complete failure. Lincoln Laboratory developed the FDSOI process for making integrated circuits resistant to degradation and malfunction caused by extreme radiation levels.
New Jersey, United States – IoT in Manufacturing Market 2022–2028, Size, Share, and Trends Analysis Research Report, segmented by type, component, application, growth rate, region and forecast | key companies profiled: Cisco (US), IBM (US), PTC (US), and others.
The IoT in Manufacturing market has seen huge growth in the manufacturing sector. Owing to high market competition and end-user demand, manufacturers are increasingly concerned with producing high-volume, high-quality products. This, in turn, has driven them to focus on core areas such as the production process, asset monitoring, and maintenance and support of assets in the plant. Automation enables manufacturers to reduce direct labor costs, increase productivity, strengthen the consistency of processes and products, and deliver quality products. IoT-enabled manufacturing processes use control systems, such as computers or robots, to monitor and manage processes and machines. IoT plays a significant role in improving industrial automation, enabling communication and interaction between manufacturing-floor inputs and outputs (actuators, robotics and analyzers) to improve flexibility and deliver better manufacturing.
As an ever-increasing number of organizations enter the IoT business, standardization across data formats, wireless protocols and technologies becomes essential to reduce complexity and cost. This can also be attributed to the rising number of newly developed connected devices running on diverse platforms and technologies. IoT touches almost every aspect of human life, and the challenge lies in unifying these standards so that machine-to-machine (M2M) communication becomes more user-friendly and flexible.
According to our latest report, the IoT in Manufacturing market, which was valued at US$ million in 2022, is expected to grow at a CAGR of approximate percent over the forecast period.
IoT plays a significant role in improving manufacturing processes through smart sensors and actuators. Organizations aim to develop solutions that can handle large volumes of unstructured data in order to reap the benefits of IoT. Data centers can handle such volumes: they collect the data sent by IoT-enabled devices, analyze it, and compile meaningful information to facilitate improved decision-making in manufacturing operations.
The services segment of the IoT in Manufacturing market is predicted to grow faster over the forecast period. Services are critical because they allow manufacturers to create digitized, connected production processes with mass customization and a self-configuring, automated manufacturing floor. The segment is significant because it focuses on enhancing business operations and lowering unnecessary expenses and overheads for manufacturing companies. Among deployment types, cloud is predicted to develop at a faster rate than others. Cloud-based IoT in manufacturing software allows SMEs and major corporations to concentrate on their core capabilities rather than IT operations.
Organizations can save money on software, storage and technical staff by using cloud-based IoT in manufacturing systems. These solutions provide a centralized approach to linking the system and its components with web and mobile applications, assisting organizations in asset management, asset maintenance and asset productivity.
By region, APAC is expected to grow at the highest CAGR during the forecast period. The IoT in Manufacturing market in APAC is expected to experience strong growth in the coming years due to constant economic growth, an increasing young workforce, and the use of tablets and smartphones for business purposes, which will drive the adoption of enterprise mobility solutions to meet the growing demand for securing and protecting critical data. The major reasons for this high growth are increasing digitalization, the rising infusion of automation in industry, and government initiatives to promote technology adoption across the region.
The report covers the competitive landscape and profiles major market players, such as Cisco (US), IBM (US), PTC (US), Microsoft (US), Siemens AG (Germany), GE (US), SAP (Germany), Huawei (China), ATOS (France), HCL (India), Intel (US), Oracle (US), Schneider Electric (France), Zebra Technologies (US), Software AG (Germany), Wind River (US), Samsara (US), Telit (UK), ScienceSoft (US), Impinj (US), Bosch.IO (Germany), Litmus Automation (US), Uptake (US), Mocana (US), HQ Software (Estonia), FogHorn (US) and ClearBlade (US). These players have adopted several organic and inorganic growth strategies, including new product launches, partnerships and collaborations, and acquisitions, to expand their offerings and market shares in the global IoT in Manufacturing market.
The Cisco Meraki Z3 teleworker gateway is an ideal solution for organizations looking to manage remote worker security with confidence and ease. Higher education institutions are increasingly challenged to deliver secure IT services to faculty, staff and students who may need to be off campus due to weather conditions, health problems or work assignments that require travel.
That’s where the Z3, an enterprise-class firewall and VPN gateway, can become a real ace in the hole for accommodating and empowering teleworkers without compromising organizational security.
Network administrators can control the SSIDs that are broadcast by the device and can configure those SSIDs to integrate with institutional authentication servers such as LDAP, Active Directory or RADIUS. For IT shops that use single sign-on, web SSO is supported via SAML, and SSIDs can be configured to require two-factor authentication. A customizable splash page can be created to ask users to acknowledge an acceptable use policy prior to connecting.
When the Z3 is coupled with the Meraki Cloud, organizations gain impressive capabilities for automated deployment at large scale. The Meraki Cloud allows network administrators to register devices by serial number prior to deployment. Larger deployments allow network administrators to specify an order number, which will add all of the devices automatically on that order. Once the device is registered to the Meraki Cloud, it downloads the configuration and policies specified by the network administrator. This ensures consistent configuration and security policy while greatly reducing the support burden for teleworkers to connect.
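As a rough sketch of how that registration step can be scripted, the snippet below builds (but deliberately does not send) a claim request in the shape of the Meraki Dashboard API v1 organization-claim call. The organization ID, API key and serial number are placeholders, and the endpoint path and header should be verified against Cisco’s current Dashboard API documentation before use.

```python
import json
import urllib.request

API_BASE = "https://api.meraki.com/api/v1"

def build_claim_request(org_id, api_key, serials=None, orders=None):
    """Build (but don't send) a POST to claim devices into an organization.

    Devices can be claimed by individual serial number or by order number;
    claiming an order adds every device on that order at once.
    """
    body = {}
    if serials:
        body["serials"] = list(serials)
    if orders:
        body["orders"] = list(orders)
    return urllib.request.Request(
        url=f"{API_BASE}/organizations/{org_id}/claim",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` (or any HTTP client) would then claim the listed hardware, after which each device pulls down its assigned configuration on first boot, exactly as described above.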
The Z3 and Meraki Cloud also ensure the comprehensive application of security policy on network traffic. The AutoVPN feature greatly simplifies configuration of site-to-site VPN tunnels. Layer 7 packet inspection and traffic shaping allow network administrators to apply quality of service policies to prioritize Voice over IP or remote desktop traffic over other traffic. Network administrators can require client VPN connections to the Z3 — in effect, creating a small branch office.
Network administrators can specify wireless LAN settings such as channel selection, radio power and channel width, but can also leave these settings to auto-tune based on the teleworker’s environment. Engineers can also remotely view channel utilization and contention.
Setting up the Z3 is simple even for nontechnical teleworkers, though organizations may want to supplement the instructions provided in the product packaging. The device can be configured with a static IP via connection to a LAN port, or with DHCP via the WAN uplink port. In the latter configuration, the teleworker simply connects the WAN port to the router or gateway with the included cable, and then powers on the device.
When the device begins to broadcast a Meraki Setup SSID, the teleworker connects a client device to this SSID and completes the setup via the Meraki Dashboard. This allows individual device configuration when the device is deployed in an unmanaged environment.
Supporting remote teleworkers presents a unique set of challenges, and the Z3 and Meraki Cloud provide some specific tools to make it easier. For example, remote packet capture allows a network administrator to capture traffic from the device, and network administrators can send NetFlow data from the device to a NetFlow collector or network management suite. The Z3 can send alerts via email or be integrated with a log aggregator or security information and event management solution via webhooks. Most important, a suite of troubleshooting tools — including remote ping, traceroute (MTR), throughput test, and Domain Name System and Address Resolution Protocol table inspection — is available to help network engineers troubleshoot remotely.
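Since webhook alerts are just JSON POSTs, wiring them into a log aggregator can start very small. The helper below condenses an alert body into a one-line log entry; the field names (`occurredAt`, `alertType`, `deviceSerial`) are illustrative assumptions rather than the documented webhook schema, so check the vendor’s payload reference before relying on them.

```python
import json

def summarize_alert(payload):
    """Turn a webhook alert's JSON body into a one-line log entry.

    Field names here are illustrative; consult the vendor's webhook
    documentation for the real payload layout.
    """
    alert = json.loads(payload)
    return "{when} {what} device={serial}".format(
        when=alert.get("occurredAt", "?"),
        what=alert.get("alertType", "unknown"),
        serial=alert.get("deviceSerial", "?"),
    )
```

A small HTTP handler calling this function is enough to forward Z3 alerts into whatever SIEM or log pipeline the institution already runs.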
The Z3 is a small form factor device (6.83 x 4.41 x 1.04 inches) that supports up to five client devices. It includes a 100-megabit-per-second stateful firewall and is rated for 50Mbps VPN throughput. Wireless capabilities include a full dual-band 802.11ac Wave 2 array with MU-MIMO and a maximum wireless data rate of 1.3Gbps. The device includes four internal dipole antennas and can support up to four SSIDs.
Wired connectivity is provided by four 1-gigabit-per-second LAN ports, one of which provides 802.3af Power over Ethernet. Wired uplink is provided via a 1Gbps WAN port, and a USB 2.0 port provides the interface for a backup cellular modem.
The Meraki Z3 offers organizations an effective platform for providing secure and scalable access to an increasingly remote workforce in higher education. The feature set and deployment processes are well thought out and address the challenges of deploying, maintaining and supporting the institutional network to meet most teleworkers’ needs. When coupled with the Meraki Cloud, the Z3 is a great choice to ‘send home’ with your remote workers.