Pass4sure 500-551 Cisco Networking: On-Premise and Cloud Solutions exam real questions

Our 500-551 test prep dumps contain practice tests as well as genuine 500-551 questions. The Cisco 500-551 practice test we provide offers 500-551 exam questions with verified answers that reproduce the actual test. We at killexams.com guarantee to have the most recent content to help you pass your 500-551 exam with high scores.

Exam Code: 500-551 Practice exam 2022 by Killexams.com team
Cisco Networking: On-Premise and Cloud Solutions
Cisco Networking: learning
How digital twins are transforming network infrastructure: Future state (part 2)

This is the second of a two-part series. Read part 1 about the current state of networking and how digital twins are being used to help automate the process, and the shortcomings involved.

As noted in part 1, digital twins are starting to play a crucial role in automating the process of bringing digital transformation to networking infrastructure. Today, we explore the future state of digital twins – comparing how they’re being used now with how they can be used once the technology matures.

The market for digital twins is expected to grow at a whopping 35% CAGR (compound annual growth rate) between 2022 and 2027, from a valuation of $10.3 billion to $61.5 billion. Internet of things (IoT) devices are driving a large percentage of that growth, and campus networks represent a critical aspect of infrastructure required to support the widespread rollout of the growing number of IoT devices.

Current limitations of digital twins

One of the issues plaguing the use of digital twins today is that network digital twins typically only help model and automate pockets of a network isolated by function, vendors or types of users. However, enterprise requirements for a more flexible and agile networking infrastructure are driving efforts to integrate these pockets.

Several network vendors, such as Forward Networks, Gluware, Intentionet and Keysight's recent Scalable Networks acquisition, are starting to support digital twins that work across vendors to improve configuration management, security, compliance and performance.

Companies like Asperitas and Villa Tech are creating “digital twins-as-a-service” to help enterprise operations.

In addition to the challenge of building a digital twin for multivendor networks, there are other limitations that digital twin technology needs to overcome before it’s fully adopted, including:

  • The types of models used in digital twins need to match the actual use case. 
  • Building the model, supporting multiple models and evolving the model over time all require significant investment, according to Balaji Venkatraman, VP of product management, DNA, at Cisco.
  • Keeping the data lake current with the state of the network. If the digital twin operates on older data, it will return out-of-date answers. 

Future solutions

Minas Tiwari, client partner for cross-industry comms solutions at Capgemini Engineering, believes that digital twins will help roll out disaggregated networks composed of different equipment, topologies and service providers in the same way enterprises now provision services across multiple cloud services. 

Tiwari said digital twins will make it easier to model different network designs up front and then fine-tune them to ensure they work as intended. This will be critical for widespread rollouts in healthcare, factories, warehouses and new IoT businesses. 

Vendors like Gluware, Forward Networks and others are creating real-time digital twins to simulate network, security and automation environments to forecast where problems may arise before these are rolled out. These tools are also starting to plug into continuous integration and continuous deployment (CI/CD) tools to support incremental updates and rollback using existing devops processes.

Cisco has developed tools for what-if analysis, change impact analysis, network dimensioning and capacity planning. These areas are critical for proactive and predictive analysis to prevent network or service downtime and avoid adverse impacts on user experience.

Overcoming the struggle with new protocols

Early modeling and simulation tools, such as the GNS3 virtual labs, help network engineers understand what is going on in the network in terms of traffic path, connectivity and isolation of network elements. Still, they often struggle with new protocols, domains or scaling to more extensive networks. They also need to simulate the ideal flow of traffic, along with all the ways it could break or that paths could be isolated from the rest of the network. 

Christopher Grammer, vice president of solution technology at IT solutions provider Calian, told VentureBeat that one of the biggest challenges is that real network traffic is random. The network traffic produced by a coffee shop full of casual internet users is a far cry from the needs of petroleum engineers working with real-time drilling operations. Therefore, simulating network performance is subject to the users’ needs, which can change at any time, making it more difficult to actively predict.

Not only that, but modeling tools are costly to scale up. 

“The cost difference between simulating a relatively simple residential network model and an AT&T internet backbone is astronomical,” Grammer said. 

Thanks to algorithms and hardware improvements, vendors like Forward Enterprise are starting to scale these computations to support networks of hundreds of thousands of devices.

Testing new configurations

The crowning use case for networking digital twins is evaluating different configuration settings before updating or installing new equipment. Digital twins can help assess the likely impact of changes to ensure equipment works as intended. 
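
To make this concrete, here is a minimal, hypothetical sketch of the kind of pre-change check a digital twin or CI pipeline might run against a candidate configuration before it is pushed to production; the policy rules and configuration text are invented for illustration and are not tied to any particular vendor or product.

```python
# Hypothetical pre-change gate: lint a candidate device config against
# simple policy rules before it is pushed to the live network.
CANDIDATE_CONFIG = """
hostname branch-sw-01
snmp-server community public RO
line vty 0 4
 transport input telnet ssh
"""

POLICY_RULES = [
    ("no default SNMP community", lambda cfg: "community public" not in cfg),
    ("telnet disabled on vty lines", lambda cfg: "telnet" not in cfg),
    ("hostname is set", lambda cfg: "hostname" in cfg),
]

def validate(config):
    """Return the names of any policy rules the candidate config violates."""
    return [name for name, check in POLICY_RULES if not check(config)]

if __name__ == "__main__":
    violations = validate(CANDIDATE_CONFIG)
    if violations:
        print("Change blocked, policy violations:", violations)
    else:
        print("Candidate config passed all policy checks.")
```

In a real pipeline, a gate like this would sit alongside simulation of the change in the twin, so that both policy violations and behavioral regressions are caught before anything reaches production.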

In theory, these could eventually make it easier to assess the performance impact of changes. However, Mike Toussaint, senior director analyst at Gartner, said it may take some time to develop new modeling and simulation tools that account for the performance of newer chips.

One of the more exciting aspects is that these modeling and simulation capabilities are now being integrated with IT automation. Ernest Lefner, chief product officer at Gluware, which supports intelligent network process automation, said this allows engineers to connect inline testing and simulation with tools for building, configuring, developing and deploying networks. 

“You can now learn about failures, bugs, and broken capabilities before pushing the button and causing an outage. Merging these key functions with automation builds confidence that the change you make will be right the first time,” he said.

Wireless analysis

Equipment vendors such as Juniper Networks are using artificial intelligence (AI) to incorporate various kinds of telemetry and analytics to automatically capture information about wireless infrastructure to identify the best layout for wireless networks. Ericsson has started using Nvidia Omniverse to simulate 5G reception in a city. Nearmap recently partnered with Digital Twin Sims to feed dynamically updated 5G coverage maps into 5G planning and operating systems. 

Security and compliance

Grammer said digital twins could help improve the network heuristics and behavioral analysis aspects of network security management. This could help identify potentially unwanted or malicious traffic, such as botnets or ransomware. Security companies often model known good and bad network traffic to teach machine learning algorithms to identify suspicious network traffic. 
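
As a rough illustration of that last point, the sketch below trains a classifier on labeled flow records. The three flow features and the synthetic data are made up for the example; a real system would be trained on curated flow telemetry rather than random numbers.

```python
# Toy example: train a classifier to separate "benign" from "suspicious"
# network flows using a few simple flow-level features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Features per flow: [bytes_per_sec, packets_per_sec, distinct_dest_ports]
benign = rng.normal(loc=[2_000, 30, 3], scale=[500, 10, 1], size=(500, 3))
suspicious = rng.normal(loc=[200, 400, 60], scale=[80, 100, 15], size=(500, 3))

X = np.vstack([benign, suspicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = suspicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
```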

According to Lefner, digital twins could model real-time data flows for complex audit and security compliance tasks. 

“It’s exciting to think about taking complex yearly audit tasks for things like PCI compliance and boiling that down to an automated task that can be reviewed daily,” he said. 

Coupling these digital twins with automation could allow a step change in challenging tasks like identifying up-to-date software and remediating newly identified vulnerabilities. For example, Gluware combines modeling, simulation and robotic process automation (RPA) to allow software robots to take actions based on specific network conditions. 

Peyman Kazemian, cofounder of Forward Networks, said they are starting to use digital twins to model network infrastructure. When a new vulnerability is discovered in a particular type of equipment or software version, the digital twins can find all the hosts that are reachable from less trustworthy entry points to prioritize the remediation efforts. 

Cross-domain collaboration

Network digital twins today tend to focus on one particular use case, owing to the complexities of modeling and transforming data across domains. Teresa Tung, cloud first chief technologist at Accenture, said that new knowledge graph techniques are helping to connect the dots. For example, a digital twin of the network can combine models from different domains such as engineering R&D, planning, supply chain, finance and operations. 

They can also bridge workflows between design and simulations. For example, Accenture has enhanced a traditional network planner tool with new 3D data and an RF simulation model to plan 5G rollouts. 

Connect2Fiber is using digital twins to help model its fiber networks to improve operations, maintenance and sales processes. Nearmap’s drone management software automatically inventories wireless infrastructure to improve network planning and collaboration processes with asset digital twins. 

These efforts could all benefit from the kind of innovation driven by building information models (BIM) in the construction industry. Jacob Koshy, an information technology and communications associate at Arup, an IT services firm, predicts that comparable network information models (NIM) could have a similarly transformative role in building complex networks. 

For example, the RF propagation analysis and modeling for coverage and capacity planning could be reused during the installation and commissioning of the system. Additionally, integrating the components into a 3D modeling environment could improve collaboration and workflows across facilities and network management teams.

Emerging digital twin APIs from companies like Mapped, Zyter and PassiveLogic might help bridge the gap between dynamic networks and the built environment. This could make it easier to create comprehensive digital twins that include the networking aspects involved in more autonomous business processes. 

The future is autonomous networks

Grammer believes that improved integration between digital twins and automation could help fine-tune network settings based on changing conditions. For example, business traffic may predominate in the daytime and shift to more entertainment traffic in the evening. 

“With these new modeling tools, networks will automatically be able to adapt to application changes switching from a business video conferencing profile to a streaming or gaming profile with ease,” Grammer said. 

How digital twins will optimize network infrastructure

The most common use case for digital twins in network infrastructure is testing and optimizing network equipment configurations. Down the road, they will play a more prominent role in testing and optimizing performance, vetting security and compliance, provisioning wireless networks and rolling out large-scale IoT networks for factories, hospitals and warehouses.

Experts also expect to see more direct integration into business systems such as enterprise resource planning (ERP) and customer relationship management (CRM) to automate the rollout and management of networks to support new business services.


Source: George Lawton, VentureBeat, Aug. 8, 2022 (https://venturebeat.com/2022/08/08/how-digital-twins-are-driving-network-transformation-future-state-part-2/)

New programs aid North Dakota job seekers

Job seekers in North Dakota are getting more help through two recently announced programs.

Cisco Networking Academy's Skills for All program will be available to all state residents at no cost, according to Gov. Doug Burgum. The program provides self-paced, online learning aligned to industry jobs, providing people with a pathway to a career in technology.

There are numerous courses, badging and industry certifications available, with an emphasis on cybersecurity, along with other technology-focused courses.

“In the 21st century, nearly every job in every industry is a computer job," North Dakota Chief Information Officer Shawn Riley said. "Providing free technology courses to citizens of North Dakota will allow for exciting opportunities for adults to expand their credentials in high-demand jobs from any industry.”

The program doesn't cost the state anything, according to Burgum spokesman Mike Nowatzki.

"It is funded through Cisco's Corporate Social Responsibility program, and the Networking Academy program is Cisco's largest social investment," he said.

Separately, Job Service North Dakota is partnering with North Dakota-based virtual reality studio Be More Colorful to help people explore career paths.

CareerViewXR is available to Job Service clients during a 12-month pilot project. It lets job seekers use immersive media and virtual reality to essentially "test-drive a job,” Job Service Executive Director Pat Bertagnolli said.

Bismarck-area businesses who would like to be considered as a filming location for a virtual reality experience can contact Job Service at 701-328-5000.

The project costs $8,700 and is being funded by a federal grant, according to Job Service spokeswoman Sarah Arntson.

Source: Bismarck Tribune, July 31, 2022 (https://bismarcktribune.com/news/state-and-regional/govt-and-politics/new-programs-aid-north-dakota-job-seekers/article_b2f2cf66-0d14-11ed-9123-47e6dfad9da9.html)

How digital twins are transforming network infrastructure, part 1

Designing, testing and provisioning updates to digital networks depend on numerous manual and error-prone processes. Digital twins are starting to play a crucial role in automating more of this process to help bring digital transformation to network infrastructure. These efforts are already driving automation for campus networks, wide area networks (WANs) and commercial wireless networks. 

The digital transformation of the network infrastructure will take place over an extended period of time. In this two-part series, we’ll be exploring how digital twins are driving network transformation. Today, we’ll look at the current state of networking and how digital twins are helping to automate the process, as well as the shortcomings that are currently being seen with the technology. 

In part 2, we’ll look at the future state of digital twins and how the technology can be used when fully developed and implemented.

About digital twins

At its heart, a digital twin is a model of any entity kept current by constant telemetry updates. In practice, multiple overlapping digital twins are often used across various aspects of the design, construction and operation of networks, their components, and the business services that run on them. 

Peyman Kazemian, cofounder of Forward Networks, argues that the original Traceroute program written by Van Jacobson in 1987 is the oldest and most used tool to understand the network. Although it neither models nor simulates the networks, it does help to understand the behavior of the network by sending a representative packet through the network and observing the path it takes. 
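
For readers who want to see that idea in code, below is a minimal traceroute-style probe written with the Scapy packet library. It is a simplified sketch of the technique Kazemian describes, not the original Traceroute implementation, and it needs raw-socket (root/administrator) privileges to run.

```python
# Minimal traceroute-style probe: send packets with increasing TTL and
# record which router answers with an ICMP "time exceeded" at each hop.
from scapy.all import IP, ICMP, sr1  # requires elevated privileges

def trace(destination, max_hops=20):
    for ttl in range(1, max_hops + 1):
        reply = sr1(IP(dst=destination, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}  *")            # no answer within the timeout
        elif reply.type == 0:                # ICMP echo-reply: destination reached
            print(f"{ttl:2d}  {reply.src}  (destination reached)")
            break
        else:                                # ICMP time-exceeded from an intermediate hop
            print(f"{ttl:2d}  {reply.src}")

if __name__ == "__main__":
    trace("8.8.8.8")
```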

Later, other network simulation tools were developed, such as OPNET (1986), NetSim (2005), and GNS3 (2008), that can simulate a network by running the same code as the actual network devices. 

“These kinds of solutions are useful in operating networks because they give you a lab environment to try out new ideas and changes to your network,” Kazemian said. 

Teresa Tung, cloud first chief technologist at Accenture, said that the open systems interconnection (OSI) conceptual model provides the foundation for describing networking capabilities along with separation of concerns. 

This approach can help to focus on different layers of simulation and modeling. For example, a use case may focus on RF models at the physical layer, through to the packet and event-level within the network layer, the quality of service (QoS) and mean opinion score (MoS) measures in the presentation and application layers.

Modeling: The interoperability issue

Today, network digital twins typically only help model and automate pockets of a network isolated by function, vendors or types of users. 

The most common use case for digital twins is testing and optimizing network equipment configurations. However, because there are differences in how equipment vendors implement networking standards, this can lead to subtle variances in routing behavior, said Ernest Lefner, chief product officer at Gluware.

Lefner said the challenge for everyone attempting to build a digital twin is that they must have detailed knowledge of every vendor, feature, configuration and customization in their network. This can vary by device, hardware type, or software release version. 

Some network equipment providers, like Extreme Networks, let network engineers build a network that automatically synchronizes the configuration and state of that provider’s specific equipment. 

Today, Extreme’s product supports only the capability to streamline staging, validation and deployment of Extreme switches and access points. The digital twin feature doesn’t currently support the SD-WAN customer on-premises equipment or routers. In the future, Extreme plans to add support for testing configurations, OS upgrades and troubleshooting problems.

Other network vendor offerings like Cisco DNA, Juniper Networks Mist and HPE Aruba Netconductor make it easier to capture network configurations and evaluate the impact of changes, but only for their own equipment. 

“They are allowing you to stand up or test your configuration, but without specifically replicating the entire environment,” said Mike Toussaint, senior director analyst at Gartner.

You can test a specific configuration, and artificial intelligence (AI) and machine learning (ML) will allow you to understand if a configuration is optimal, suboptimal or broken. But they have not automated the creation and calibration of a digital twin environment to the same degree as Extreme. 

Virtual labs and digital twins vs. physical testing

Until digital twins are widely adopted, most network engineers use virtual labs like GNS3 to model physical equipment and assess the functionality of configuration settings. This tool is widely used to train network engineers and to model network configurations. 

Many larger enterprises physically test new equipment at the World Wide Technology Advanced Test Center. The firm has a partnership with most major equipment vendors to provide virtual access for assessing the performance of actual physical hardware at their facility in St. Louis, Missouri. 

Network equipment vendors are adding digital twin-like capabilities to their equipment. Juniper Networks’ recent Mist acquisition automatically captures and models different properties of the network that inform AI and machine learning optimizations. Similarly, Cisco’s network controller serves as an intermediary between business and network infrastructure. 

Balaji Venkatraman, VP of product management, DNA, Cisco, said what distinguishes a digital twin from early modeling and simulation tools is that it provides a digital replica of the network and is updated by live telemetry data from the network.

“With the introduction of network controllers, we have a centralized view of at least the telemetry data to make digital twins a reality,” Venkatraman said. 

However, network engineering teams will need to evolve their practices and cultures to take advantage of digital twins as part of their workflows. Gartner’s Toussaint told VentureBeat that most network engineering teams still create static network architecture diagrams in Visio. 

And when it comes to rolling out new equipment, they either test it in a live environment with physical equipment or “do the cowboy thing and test it in production and hope it does not fail,” he said. 

Even though network digital twins are starting to virtualize some of this testing workload, Toussaint said physically testing the performance of cutting-edge networking hardware that includes specialized ASIC, FPGA and TPU chips will remain critical for some time. 

Culture shift required

Eventually, Toussaint expects networking teams to adopt the same devops practices that helped accelerate software development, testing and deployment processes. Digital twins will let teams create and manage development and test network sandboxes as code that mimics the behavior of the live deployment environment. 

But the cultural shift won’t be easy for most organizations.

“Network teams tend to want to go in and make changes, and they have never really adopted the devops methodologies,” Toussaint said.

They tend to keep track of configuration settings on text files or maps drawn in Visio, which only provide a static representation of the live network. 

“There have not really been the tools to do this in real time,” he said.

Getting a network map has been a very time-intensive manual process that network engineers hate, so they want to avoid doing it more than once. As a result, these maps seldom get updated. 

Toussaint sees digital twins as an intermediate step as the industry uses more AI and ML to automate more aspects of network provisioning and management. Business managers are likely to be more enthused by more flexible and adaptable networks that keep pace with new business ideas than a dynamically updated map. 

But in the interim, network digital twins will help teams visualize and build trust in their recommendations as these technologies improve.

“In another five or 10 years, when networks become fully automated, then digital twins become another tool, but not necessarily something that is a must-have,” Toussaint said.

Toussaint said these early network digital twins are suitable for vetting configurations, but have been limited in their ability to grapple with more complex issues. He said he considers it analogous to how we might use Google Maps as a kind of digital twin of our trip to work, which is good at predicting different routes under current traffic conditions. But it will not tell you about the effect of a trip on your tires or the impact of wind on the aerodynamics of your car. 

This is the first of a two-part series. In part 2, we’ll outline the future of digital twins and how organizations are finding solutions to the issues outlined here.


Source: George Lawton, VentureBeat, Aug. 5, 2022 (https://venturebeat.com/2022/08/05/how-digital-twins-are-transforming-network-infrastructure-part-1/)

This is our chance to secure the metaverse
Jaeson Schultz is a technical leader for Cisco’s Talos Security Intelligence & Research Group, one of the largest commercial threat intelligence teams in the world. Schultz, along with other expert researchers, analysts and engineers, spend their days working to make the internet safer for everyone. Lately, he’s been thinking a lot about the metaverse, and what it will take to make that safer, too. Here, he shares his insights on a topic bound to affect us all in the future.

The internet you know today is gradually going the way of the original web. Those of us old enough will remember web 1.0–that clunky world of screeching modems where companies essentially created online brochures and Amazon made its debut as “the world’s largest bookstore.” Then came web 2.0, with everything-as-a-service delivered by centralized applications and social apps hosted by cloud giants. 

At some point in the future, we’ll regard web 2.0 in the same way we think of those ancient dial-up days. That’s because the internet is already changing into an online world of decentralized applications and file storage, known as web 3.0 or simply web3. 

The aspect of web3 that’s most exciting–and most concerning to cybersecurity wonks like me–is the metaverse, an immersive 3D experience where people can explore, shop, play games, spend time with distant friends, attend a concert, or hold a business meeting. The metaverse is what bold virtual reality pioneers envisioned way back in the ‘90s when most people lacked the compute power, storage, or network bandwidth to make it real.

Think of the metaverse as the next iteration of social media. It’s a place where internet users will increasingly spend hours and money engaging with friends, content, goods, and services. 

To enable this, metaverse users and platforms are relying on cryptocurrency and its underlying blockchain technology. Cryptocurrency in particular is playing a huge role in both making metaverse experiences possible–it’s largely how people pay for goods and services in virtual worlds–and in presenting uniquely vexing cybersecurity challenges. 

For one thing, cryptocurrency itself can be staggeringly risky, as millions of crypto investors recently learned the hard way. Since late 2021, $2 trillion in cryptocurrency wealth has vanished. After investors witnessed both currencies and established crypto exchanges crash and burn, the FOMO that prompted millions to buy Bitcoin, Ethereum and the rest when crypto values were on the rise appears to have evaporated to some degree. A recent survey found that 60% of cryptocurrency investors expect Bitcoin’s value to continue to decline.  

As it turns out, a lot of people seem to have decided they aren’t ready to act as their own banks, which essentially is what cryptocurrency requires today. And while they wait for the crypto winter to thaw, those of us with an eye on the security implications of crypto-funded metaverse experiences see this as a golden opportunity.

We can use this break to build a more secure metaverse.

The metaverse today is already experiencing security growing pains. Much of this has to do with the use of cryptocurrency blockchains, which function as a distributed public ledger of all historical transactions. Armed with the hash of a transaction, or the address of a cryptocurrency wallet, anyone can examine any of the transactions that have previously occurred.
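
To illustrate just how open that ledger is, the short sketch below uses the web3.py library to read public data about an arbitrary wallet and transaction on Ethereum. The RPC endpoint, wallet address and transaction hash are placeholders to be substituted with real values, and the snippet assumes a current (v6-style) web3.py install.

```python
# Anyone with a public RPC endpoint can inspect any wallet or transaction.
from web3 import Web3

# Placeholder endpoint and identifiers -- substitute real values to run this.
w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.com"))

wallet = "0x0000000000000000000000000000000000000000"
tx_hash = "0x" + "00" * 32

balance_wei = w3.eth.get_balance(wallet)        # current balance, in wei
print("balance:", w3.from_wei(balance_wei, "ether"), "ETH")

tx = w3.eth.get_transaction(tx_hash)            # details of any past transaction
print("from:", tx["from"], "to:", tx["to"],
      "value:", w3.from_wei(tx["value"], "ether"))
```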

This is great for transparency, which is one of the big selling points of cryptocurrency. But it also means everyone has access to all the information available on that blockchain. And not everyone can be trusted. Here are five areas where the metaverse presents security risks. 

  • Cryptocurrency wallets as metaverse identities. Identity in the metaverse is directly tied to your cryptocurrency wallet–a virtual or physical cache of currency, collectibles, in-world progress, and more. While connecting to metaverse experiences via crypto wallets doesn’t intrinsically result in security issues, it can invite them. Bad actors, for instance, can in some cases track wallet addresses to unearth a wallet holder’s real-world identity. But that’s just the beginning.
  • Smart contracts, both buggy and malevolent. In addition to wallet addresses, you might find cryptocurrency addresses belonging to “smart contracts.” A smart contract is a computer program deployed on a blockchain; most are deployed on the Ethereum blockchain. Smart contracts enable users to interact with the blockchain ecosystem, including making purchases with cryptocurrency to unlock metaverse experiences like gaming, or to purchase non-fungible tokens (NFTs), which we’ll cover below. These digital contracts are trustless, autonomous, decentralized, and transparent; they’re also usually irreversible and unmodifiable once deployed. This can be a problem if they’re written by nefarious parties who have no intention of interacting honestly with wallet holders. It also can be a problem when bugs in even legitimate smart contracts are exploited by hackers.
  • ENS squatting. Now comes the Ethereum Naming Service (ENS), a kind of blockchain version of the internet’s domain name system. Except that instead of a friendly name like cisco.com which points to an Internet IP address, ENS names are friendly names that point to cryptocurrency wallet addresses. Anyone can register any name, and owing to the blockchain, that name cannot be taken away once registered. As a consequence, some names, such as cisco.eth, may not actually be owned by the legal owner of that trademark. Who would squat on an ENS name? Bad actors might. And if those bad actors do their work well, wallet holders could conduct transactions with a metaverse experience built solely to scam them. (A defensive name-resolution check is sketched in the example just after this list.)
  • Non-fungible tokens (NFTs). NFTs are unique digital tokens that represent ownership of various items that users take with them into the metaverse. These items may take the form of monkey or cat drawings created by NFT artists, or even wearables for your avatar, or images and other content from brands like Disney and Pixar. NFTs can even be dangerous when the smart contract governing them is malicious. They invite additional security problems because they’re often in such high demand among a certain set of collectors—and when people really want something, they’ll sometimes take risks to get it. Which leads me to…
  • Seed phrase scams. Seed phrases are a kind of last-resort, backdoor password for crypto wallet holders to gain access to their wallets if they lose their primary passwords. Users are advised never to share their seed phrase with anyone. Numerous different social engineering scams are designed to trick users into giving up their seed phrase, including posing as technical support reps or other legitimate personnel from some project. Some metaverse scams post notices on otherwise legitimate forums like Discord announcing the free availability of a limited number of new NFTs expected to be worth hundreds or even thousands of dollars; all users have to do to receive one is to is sign up using their seed phrase. Once that information is shared with attackers, the wallet is effectively theirs.
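
Circling back to the ENS-squatting item above, here is a short sketch of how a client could resolve a friendly .eth name and compare the result with an address published through a trusted channel; any mismatch is a signal to stop before transacting. The name, expected address and RPC endpoint are placeholders, and the snippet assumes web3.py's built-in ENS support (v6-style API).

```python
# Resolve an ENS name and verify it maps to the address we expect.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.com"))  # placeholder endpoint

ENS_NAME = "example-brand.eth"                            # placeholder name
EXPECTED = "0x0000000000000000000000000000000000000000"   # address published out-of-band

resolved = w3.ens.address(ENS_NAME)   # None if the name is unregistered
if resolved is None:
    print(f"{ENS_NAME} is not registered")
elif resolved.lower() != EXPECTED.lower():
    print(f"WARNING: {ENS_NAME} resolves to {resolved}, not the expected address")
else:
    print(f"{ENS_NAME} resolves to the expected address")
```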

There are other risks, but these should give you an idea of how this new world is breeding new security concerns.

Now is the time we should be thinking about, and acting on, new measures to secure the metaverse. 

To begin with, metaverse platform and service providers must step up. They and their constituents have a lot to lose which, let’s be honest, is the primary incentive for bolstering cybersecurity protections no matter where you go. They need to examine how they interact with users, and where the security gaps are. They must understand their vulnerabilities and take a risk-based approach to addressing them. They must invest in security resilience, because cybercriminals are evolving as rapidly as the techniques defenders use to combat them.

Some platform providers are already taking action. Crypto marketplace OpenSea recently announced it will hide fraudulent transactions from users to protect them from scammers. This is a good start, and in a way it serves as a kind of model for other platforms. At Cisco Talos, we know from experience how algorithms driven by machine learning are enormously helpful in identifying potential and active threats. That same kind of technology can be deployed to help gaming, shopping, trading, and other platforms find and eliminate threats for their users.

There’s still time for further protections, such as systems that create abstraction layers between users’ wallet identities and their metaverse presence. As the metaverse evolves, we must take a feature-by-feature approach to locking down the web3 experience. After all, that’s how internet security evolved in the first place. 

And because the metaverse is likely to become a fully integrated and open environment, one in which a virtual good purchased on one platform could be worn or used on another, we must take the same approach to security. Proprietary solutions will have no place here. The very ethos of web3 demands it. At Cisco, we’re already creating that open, integrated environment for the multi-cloud future every business is adopting. It’s a perfect fit for the metaverse. 

Eventually, the crypto winter will end, so we can’t waste this opportunity to build a safer metaverse before the insanity returns. Security industry leaders should take this moment to map out a secure future for this next generation of the internet. 

Source: TechCrunch (sponsored by Cisco), Aug. 5, 2022 (https://techcrunch.com/sponsor/cisco/this-is-our-chance-to-secure-the-metaverse/)

Cisco Networking Academy to offer skills training to all North Dakotans

North Dakota Gov. Doug Burgum has announced that the state is partnering with Cisco to offer Cisco Networking Academy's Skills for All training to all residents at no cost.


North Dakota is the first state in the nation to provide these courses statewide at no cost to all residents.

The Cisco Networking Academy Skills for All program provides free, quality, mobile, self-paced, online learning aligned to industry jobs, providing a pathway to a career in technology.

There are numerous courses, badging and industry certifications available, with an emphasis on cybersecurity, along with coding, networking essentials, Internet of Things (IoT) and other technology-focused courses.

“This statewide program will greatly benefit the people of North Dakota by providing opportunities to acquire best-in-class skills in highly sought after, in-demand and growing professions,” Burgum said.

“Skills for All provides North Dakota residents from all backgrounds and experiences the opportunity to obtain 21st century skills and help our state build a strong and competitive workforce.”

The Skills for All program expands the number of courses offered from seven to almost 25, and new course modules are continually being added.

Source: July 22, 2022 (https://www.poandpo.com/companies/cisco-networking-academy-to-offer-skills-training-to-all-north-dakotans/)

EdCreate Foundation Empowers Students in India To Pursue New Opportunities

Published 08-03-22

Submitted by Cisco Systems, Inc.


With shifts to hybrid work, IT talent continues to be in high demand across the globe. A recent report by Manpower Group suggests IT and data skills represent the largest hard-skills talent shortage worldwide.

India alone faces a shortage of 1.5M to 1.9M tech professionals by 2026. India’s National Association of Software and Services Companies (NASSCOM) Future of Work report identifies access to new and diverse talent, continuous learning, upskilling of the workforce and frontline enablement as particular areas of concern.

But in line with the shift to hybrid and online work, similar trends can be found in technology education. In India, organizations such as Cisco Networking Academy partner and recent Be the Bridge award winner EdCreate Foundation are leading an outstanding effort to help bridge those skills gaps, and to provide opportunities for learners who may not have had them in the past.

Manas Deep, co-founder of EdCreate Foundation, says its vision is to ensure equitable education for all, by leveraging technology and technical education to impact millions of students across India.

Its aim is to build a skilled workforce that can accelerate the digital economy of India. In partnerships alongside governments and regulators it advocates global learning programs as a part of the academic curriculum to empower young people, particularly underserved students, with special learning programs.

I’m thrilled to award EdCreate Foundation a Be the Bridge award for its 2022 Skill-A-Thon. This has been a successful way to encourage competition between students, their instructors, and educational institutions, and helps students focus on a specific skills theme, such as cybersecurity, programming, or networking.

In the second edition of Skill-A-Thon in 2022, EdCreate Foundation collaborated with state governments and academies and saw more than 20,000 students participate in Cisco Certification Training, and Career and Explore courses.

Finding a different path

Kishan Kumar is a student at B. P. Mandal College of Engineering Madhepura in Bihar. Bihar is a state in India where, according to India Brand Equity Foundation, 80 percent of people are employed in agricultural production. Kishan used the opportunity with EdCreate to find a different path.

“I always wanted to hone the skills which could make me employable. When I got to know about the Cisco Networking Academy program, I jumped at the opportunity and asked my NetAcad instructor to enroll me in the NetAcad course during the Skill-A-Thon,” said Kishan.

“I enjoyed collaborating with like-minded friends in the classes,” he said. “I learned problem solving skills, got the opportunity to do hands-on challenges, and completed lots of quizzes and assignments on the NetAcad portal and this holistic experience has really helped.”

“Thanks to my NetAcad credentials, I got placed with Tata Consultancy Services, which is one of the most reputable IT companies in India. This would not have been possible if I had not decided to upskill and I will always be thankful to Cisco Networking Academy, to my faculties for their guidance and to EdCreate Foundation.”

Kishan is just one of many students across India – many from underserved backgrounds – who benefit from the EdCreate Foundation. EdCreate Foundation is a worthy Be the Bridge Award winner, and I look forward to more outstanding programs like this in the future.

For more on the partnership between Cisco Networking Academy and EdCreate Foundation and our work with other partners in India, please visit:

EdCreate Foundation on LinkedIn


Source: CSRwire, Aug. 3, 2022 (https://www.csrwire.com/press_releases/751551-edcreate-foundation-empowers-students-india-pursue-new-opportunities)

High-demand TSTC programs offer more flexibility this fall

This fall, Texas State Technical College’s Computer Networking and Systems Administration, Cybersecurity, and Drafting and Design programs will offer students the choice to complete their training either fully online or in a format that combines online learning with in-person lab time — opportunities that have not been available in these programs since before the pandemic.

Computer Networking and Systems Administration students who enroll in the in-person/online learning format can look forward to getting their hands on network cables, servers, and Cisco routers and switches in the lab, TSTC instructor Renee Blackshear said.

“It will be industry-level engagement,” she said. “We also get to work with students on their soft skills and help pull them out of their shells.”

Students have the power to select flexible lab times based on their schedules, with the added benefit of connecting with instructors who offer real-world advice and experience.

“I’m hoping that when students come to labs, it will allow them to open up to us; building connections with them was the one thing that I missed during the pandemic,” TSTC instructor Adrian Medrano said. “I think being able to physically connect things together is going to help in students’ learning processes.”

Blackshear and Medrano urged prospective students to take advantage of the opportunity to schedule time to tour the facilities and see where they will be studying — and the equipment they will work with.

TSTC offers an Associate of Applied Science degree, certificates of completion and an occupational skills award in Computer Networking and Systems Administration, as well as an advanced technology certificate in Cloud Computing.

In Texas, computer network support specialists can earn an average annual salary of $62,280, according to onetonline.org, which forecasts that the number of these positions will grow 17% in the state through 2028.

TSTC Cybersecurity instructor Norma Colunga-Hernandez has also missed day-to-day and face-to-face interactions with students. She hopes that the in-person/online format of training will help to reestablish a campus cybersecurity club.

Colunga-Hernandez highly encourages students who are new to the Cybersecurity program to choose the in-person/online format, especially in their initial semesters.

“For new students, this is going to be their first time seeing this equipment and information,” she said. “It’s really important that they get the help they need so they can build a really strong foundation.”

Ideally, having Cybersecurity instructors in labs will help students advance, Colunga-Hernandez added.

“We can watch over their shoulder and see if they’re struggling,” she said. “They can get their answers faster and progress better.”

TSTC offers an Associate of Applied Science degree, certificates of completion and occupational skills awards in Cybersecurity — plus an advanced technology certificate in Digital Forensics Specialist.

Information security engineers can earn an average annual salary of $84,220 in Texas, according to onetonline.org. The number of these positions is forecast to grow by 20% throughout the state by 2028, the website shows.

Whether TSTC Drafting and Design students are pursuing degrees in Architectural/Civil Drafting Technology or Engineering Graphics and Design Technology — or a blend of both in Architectural Design and Engineering Graphics Technology, they will be able to get access to industry-level tools and equipment when they take advantage of the in-person/online format.

That includes CAD stations, gaming-level laptops, plotters, laser printers and traditional drafting tables.

“It’s going to benefit students in several ways,” TSTC instructor Bryan Clark said. “They’re going to have a place they can come to. They’re not going to have to buy a computer and source a space in their house and hope everyone is quiet so they can get their work done. There will be a seat available for them and a qualified instructor to answer any questions.”

TSTC instructor Corby Myers agreed, citing the industry experience that instructors can share with their students.

“To me, the biggest benefit to students is when they walk into the lab, there will be two instructors who have spent years in the industry doing the job that we are preparing them to do,” he said. “If they have questions not only about the coursework but about what’s it like out there, what can they expect on the first day of the job — or the 366th day on the job — we can tell them because we’ve been there.”

In its Drafting and Design program, TSTC offers a variety of specialized associate degrees, certificates of completion and occupational skills awards.

According to onetonline.org, architectural and civil drafters can earn an average annual salary of $59,110 in Texas, while mechanical drafters can earn an average of $60,300 a year.

Fall enrollment for TSTC is underway. Learn more at tstc.edu.

Source: Aug. 6, 2022 (https://freestonecountytimesonline.com/high-demand-tstc-programs-offer-more-flexibility-this-fall/)

Network automation, SASE, 5G rank among enterprise priorities

From incorporating cloud services to keeping the hybrid workforce humming, network execs and architects face myriad challenges every day.

The main goals of large organizations are to prioritize those challenges, adjust the network architecture to handle widely distributed applications, services and users, and keep corporate resources secure, according to Neil Anderson, area vice president with World Wide Technology, a $14.5 billion global technology services provider.

The pandemic exposed weaknesses in the ability of traditional network architectures to support distributed employees at scale, and while organizations managed through the crisis with quick-fix solutions like remote access VPN, it's become clear that fundamental changes to the architecture are needed for long-term success, Anderson stated.

With that in mind, WWT recently issued a report that details what it says should be businesses' core networking priorities.

Network automation initiatives mature

The first of those priorities is automation.

“What’s happening with automation is that we're moving into a new phase of SDN. The first phase was kind of proprietary, in that Cisco works with Cisco, Aruba works with Aruba, for example,” Anderson said. “And I think customers experimented with that. They certainly took advantage of the benefits SDN offers, including programmability.”


Source: Network World, July 25, 2022 (https://www.networkworld.com/article/3667991/network-automation-sase-5g-rank-among-enterprise-priorities.html)

Federated Learning Uses The Data Right on Our Devices

An approach called federated learning trains machine learning models on devices like smartphones and laptops, rather than requiring the transfer of private data to central servers.

The biggest benchmarking data set to date for a machine learning technique designed with data privacy in mind is now available open source.

“By training in-situ on data where it is generated, we can train on larger real-world data,” explains Fan Lai, a doctoral student in computer science and engineering at the University of Michigan, who presents the FedScale training environment at the International Conference on Machine Learning this week. A paper on the work is available on ArXiv.

“This also allows us to mitigate privacy risks and high communication and storage costs associated with collecting the raw data from end-user devices into the cloud,” Lai says.

Still a new technology, federated learning relies on an algorithm that serves as a centralized coordinator. It delivers the model to the devices, trains it locally on the relevant user data, and then brings each partially trained model back and uses them to generate a final global model.
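
The aggregation step at the heart of this workflow is often federated averaging: each device computes an update on its own data, and the coordinator averages the returned weights. The sketch below is a deliberately simplified, framework-free illustration of that loop (a linear model, synthetic per-device data, plain NumPy), not the production algorithm any particular framework ships.

```python
# Simplified federated averaging: devices train locally on private data;
# only model weights (never raw data) are sent back to the coordinator.
import numpy as np

rng = np.random.default_rng(42)
TRUE_W = np.array([2.0, -1.0, 0.5])          # ground truth used to synthesize device data

def make_device_data(n):
    X = rng.normal(size=(n, 3))
    y = X @ TRUE_W + rng.normal(scale=0.1, size=n)
    return X, y

devices = [make_device_data(rng.integers(50, 200)) for _ in range(10)]

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device: a few gradient steps on its own data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

global_w = np.zeros(3)
for round_idx in range(20):                  # each round: broadcast, train locally, average
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    sizes = np.array([len(y) for _, y in devices])
    global_w = np.average(local_ws, axis=0, weights=sizes)  # weight by device data size

print("learned weights:", np.round(global_w, 3), "target:", TRUE_W)
```

Systems like FedScale extend this basic loop with realistic device, data and connectivity heterogeneity at much larger scale.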

For a number of applications, this workflow provides an added data privacy and security safeguard. Messaging apps, health care data, personal documents, and other sensitive but useful training materials can Excellerate models without fear of data center vulnerabilities.

In addition to protecting privacy, federated learning could make model training more resource-efficient by cutting down and sometimes eliminating big data transfers, but it faces several challenges before it can be widely used. Training across multiple devices means that there are no guarantees about the computing resources available, and uncertainties like user connection speeds and device specs lead to a pool of data options with varying quality.

“Federated learning is growing rapidly as a research area,” says Mosharaf Chowdhury, associate professor of computer science and engineering. “But most of the work makes use of a handful of data sets, which are very small and do not represent many aspects of federated learning.”

And this is where FedScale comes in. The platform can simulate the behavior of millions of user devices on a few GPUs and CPUs, enabling developers of machine learning models to explore how their federated learning program will perform without the need for large-scale deployment. It serves a variety of popular learning tasks, including image classification, object detection, language modeling, speech recognition, and machine translation.

“Anything that uses machine learning on end-user data could be federated,” Chowdhury says. “Applications should be able to learn and improve how they provide their services without actually recording everything their users do.”

The authors specify several conditions that must be accounted for to realistically mimic the federated learning experience: heterogeneity of data, heterogeneity of devices, heterogeneous connectivity and availability conditions, all with an ability to operate at multiple scales on a broad variety of machine learning tasks. FedScale’s data sets are the largest released to date that cater specifically to these challenges in federated learning, according to Chowdhury.

“Over the course of the last couple years, we have collected dozens of data sets. The raw data are mostly publicly available, but hard to use because they are in various sources and formats,” Lai says. “We are continuously working on supporting large-scale on-device deployment, as well.”

The FedScale team has also launched a leaderboard to promote the most successful federated learning solutions trained on the university’s system.

The National Science Foundation and Cisco supported the work.

Source: Nextgov, July 27, 2022 (https://www.nextgov.com/ideas/2022/07/federated-learning-uses-data-right-our-devices/374926/)

IBM Research Rolls Out A Comprehensive AI And Platform-Based Edge Research Strategy Anchored By Enterprise Partnerships & Use Cases

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principle is scalability, repeatability, and full stack solution management that allows everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts rather than PhDs can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than cloud. While Cloud data accounts for 60% of the world’s data today, vast amounts of new data is being created at the edge, including industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at / near its collection point at the edge. In the case of cloud, data must be transferred from a local device and into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered around Quick Service Restaurant’s (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. Spot monitored connectors on both flat and 3D surfaces, and IBM was able to show that it could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
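A rough sketch of the detection-to-work-order loop is shown below. The temperature threshold, payload fields, and asset names are hypothetical assumptions made for this example; the actual solution relies on Spot's cameras, IBM's vision models, and Maximo's own interfaces rather than this toy logic.

```python
import numpy as np

HOT_SPOT_THRESHOLD_C = 90.0   # assumed alarm threshold; real limits are asset-specific

def find_hot_spots(thermal_frame: np.ndarray, threshold: float = HOT_SPOT_THRESHOLD_C):
    """Return (row, col, temperature) for pixels exceeding the threshold."""
    rows, cols = np.where(thermal_frame > threshold)
    return [(int(r), int(c), float(thermal_frame[r, c])) for r, c in zip(rows, cols)]

def build_work_order(asset_id: str, hot_spots) -> dict:
    """Assemble a work-order payload; field names here are illustrative only,
    not the actual Maximo schema."""
    peak = max(t for _, _, t in hot_spots)
    return {
        "asset": asset_id,
        "description": f"Thermal anomaly detected: peak {peak:.1f} C",
        "priority": 1 if peak > 120 else 2,
        "hot_spot_count": len(hot_spots),
    }

if __name__ == "__main__":
    frame = 25.0 + 5.0 * np.random.rand(120, 160)   # simulated ambient readings
    frame[40:43, 80:83] = 135.0                     # injected fault on a connector
    spots = find_hot_spots(frame)
    if spots:
        print(build_work_order("transformer-07", spots))
```

The point of the automation is the last step: the moment a hot spot is found, a structured work order exists, rather than waiting for a human to review footage.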

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s. The current, in-progress fourth revolution, Industry 4.0, centers on digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can Excellerate quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speed up in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using hundreds of AI/ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with re-training models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production, the quicker the time-to-value and the return on investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and the iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation by manually examining thousands of images is time-consuming and results in labeling of redundant data. Using ML-based automation for data summarization accelerates the process and produces better model performance (see the selection sketch after this list).
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
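As a rough illustration of the data-summarization idea referenced above, the sketch below clusters image embeddings and keeps one representative per cluster so annotators avoid labeling near-duplicates. The embedding source, cluster count, and selection rule are assumptions chosen for this example, not IBM's actual method.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_for_labeling(embeddings: np.ndarray, budget: int) -> np.ndarray:
    """Pick `budget` diverse samples by clustering and taking the sample closest
    to each cluster centre, so annotators skip near-duplicate images."""
    kmeans = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(embeddings)
    chosen = []
    for k, centre in enumerate(kmeans.cluster_centers_):
        members = np.where(kmeans.labels_ == k)[0]
        dists = np.linalg.norm(embeddings[members] - centre, axis=1)
        chosen.append(members[np.argmin(dists)])
    return np.array(chosen)

if __name__ == "__main__":
    # Stand-in for image embeddings produced by a pretrained vision model.
    feats = np.random.rand(5000, 128)
    picks = select_for_labeling(feats, budget=50)
    print(picks.shape)   # (50,)
```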

Maximo Application Suite

IBM's Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and it even uses Maximo within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
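One simple way to watch for the drift described above is to compare production feature distributions against a training-time reference. The sketch below uses a two-sample Kolmogorov-Smirnov test per feature; the test choice and threshold are illustrative assumptions, not the monitoring built into Maximo or IBM's tooling.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(reference: np.ndarray, production: np.ndarray, alpha: float = 0.01):
    """Flag features whose production distribution has drifted from the
    training-time reference, using a two-sample Kolmogorov-Smirnov test."""
    drifted = []
    for i in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, i], production[:, i])
        if p_value < alpha:
            drifted.append((i, float(stat)))
    return drifted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(0.0, 1.0, size=(10_000, 5))
    prod = ref.copy()
    prod[:, 2] += 0.5                     # simulate a shifted sensor on one feature
    print(feature_drift(ref, prod))       # feature 2 should be reported
```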

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. "However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
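The sketch below shows the core of federated averaging (FedAvg) on a toy linear model: each spoke trains locally on data that never leaves it, and the hub combines only the resulting weights. The model, learning rate, and round counts are assumptions chosen to keep the example small; production federated learning adds secure aggregation, heterogeneity handling, and much more.

```python
import numpy as np

def local_update(weights: np.ndarray, data: np.ndarray, labels: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One spoke trains a linear model locally; its raw data never leaves the spoke."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(spoke_weights, spoke_sizes) -> np.ndarray:
    """Hub aggregates the updates, weighted by each spoke's sample count (FedAvg)."""
    total = sum(spoke_sizes)
    return sum(w * (n / total) for w, n in zip(spoke_weights, spoke_sizes))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_w = np.array([2.0, -1.0, 0.5])
    spokes = []
    for _ in range(4):                      # four spokes, each with private data
        x = rng.normal(size=(200, 3))
        y = x @ true_w + rng.normal(scale=0.1, size=200)
        spokes.append((x, y))
    global_w = np.zeros(3)
    for _ in range(20):                     # federated rounds
        updates = [local_update(global_w, x, y) for x, y in spokes]
        global_w = federated_average(updates, [len(y) for _, y in spokes])
    print(np.round(global_w, 2))            # approaches [ 2.  -1.   0.5]
```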

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.
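To illustrate the resource-budget problem in a hub-and-spoke deployment, the sketch below greedily places model footprints onto spokes without exceeding each spoke's memory budget. The spoke names, budgets, and placement rule are invented for this example; a real scheduler, IBM's or otherwise, would weigh far more constraints.

```python
from dataclasses import dataclass, field

@dataclass
class Spoke:
    name: str
    memory_mb: int                       # remaining accelerator / RAM budget
    deployed: list = field(default_factory=list)

def place_models(models: dict, spokes: list) -> dict:
    """Greedy placement of model footprints onto spokes without exceeding
    each spoke's memory budget; a stand-in for a real scheduler."""
    placements = {}
    for name, size in sorted(models.items(), key=lambda kv: -kv[1]):
        target = max((s for s in spokes if s.memory_mb >= size),
                     key=lambda s: s.memory_mb, default=None)
        if target is None:
            placements[name] = None       # no spoke can host this model
            continue
        target.memory_mb -= size
        target.deployed.append(name)
        placements[name] = target.name
    return placements

if __name__ == "__main__":
    spokes = [Spoke("line-1", 4096), Spoke("line-2", 2048), Spoke("warehouse", 1024)]
    models = {"defect-detector": 1800, "ocr": 600, "ppe-check": 900, "forecaster": 3000}
    print(place_models(models, spokes))
```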

Compared with the status quo of performing Day-2 operations through centralized applications and a centralized data plane, IBM's managed hub-and-spoke method distributes both the applications and the data plane, while the hub allows everything to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric, though it should not be thought of as a traditional hub and spoke. IBM's hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model's effectiveness has significantly degraded, and whether corrective action is needed, is critical.
  2. Secondly, in a hub and spoke model, data is being generated and collected in many locations, creating a need for data lifecycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data-lifecycle, regulatory and compliance, and local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further Excellerate the model. Atypical data judged worthy of human attention is also identified.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can't afford such resource extravagance. To reduce the edge compute footprint, model compression can cut the number of parameters from, for example, several hundred million to a few million (a pruning sketch follows this list).
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
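The pruning sketch referenced in the third item above zeroes out the smallest-magnitude weights of a layer to shrink its edge footprint. It is a deliberately simplified stand-in: real compression pipelines combine pruning with quantization, distillation, and fine-tuning, and the sparsity level here is an arbitrary assumption.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude parameters so the model fits an edge
    footprint; real pipelines would also quantize and fine-tune afterwards."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights), k - 1, axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

if __name__ == "__main__":
    layer = np.random.randn(512, 512)
    pruned = magnitude_prune(layer, sparsity=0.9)   # keep roughly 10% of parameters
    kept = np.count_nonzero(pruned)
    print(f"{kept / layer.size:.1%} of weights remain")
```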

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Managing the software delivery lifecycle or addressing security vulnerabilities across a vast estate are cases in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still run on servers but call for a single-node rather than clustered deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example, the NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, which provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from plain containers to management of full-blown Kubernetes applications, spanning MicroShift, OpenShift, and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM) to scale the number of edge locations the product can manage by two to three orders of magnitude. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in RHACM.

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
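As a lightweight illustration of how slices with different characteristics can be represented and matched to applications, the sketch below defines a few slice profiles and picks the least resource-heavy one that satisfies an application's needs. The profile names, targets, and selection rule are assumptions for this example, not 3GPP or IBM definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceProfile:
    name: str
    max_latency_ms: float       # end-to-end latency target of the slice
    min_bandwidth_mbps: float   # guaranteed throughput of the slice
    isolation: str              # e.g. dedicated vs shared RAN/core resources

# Illustrative profiles only; real slice templates carry many more attributes.
SLICES = [
    SliceProfile("urllc-robotics", max_latency_ms=5.0, min_bandwidth_mbps=50.0, isolation="dedicated"),
    SliceProfile("embb-video", max_latency_ms=50.0, min_bandwidth_mbps=500.0, isolation="shared"),
    SliceProfile("mmtc-sensors", max_latency_ms=200.0, min_bandwidth_mbps=1.0, isolation="shared"),
]

def pick_slice(latency_need_ms: float, bandwidth_need_mbps: float):
    """Choose the least resource-heavy slice that still satisfies the application."""
    candidates = [s for s in SLICES
                  if s.max_latency_ms <= latency_need_ms
                  and s.min_bandwidth_mbps >= bandwidth_need_mbps]
    return min(candidates, key=lambda s: s.min_bandwidth_mbps, default=None)

if __name__ == "__main__":
    print(pick_slice(latency_need_ms=10.0, bandwidth_need_mbps=20.0))  # urllc-robotics
```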

An important aspect of enabling AI at the edge is providing CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • A Network Data Analytics Function (NWDAF) that collects data for slice monitoring from 5G core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the Distributed Unit (DU) and Centralized Unit (CU), which are combined in a 4G baseband unit, and connects them with open interfaces.

An O-RAN system is more flexible. It uses AI over these open interfaces to optimize how each device's connection is handled, informed by information about the device's prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML (illustrated in the sketch after this list)
  • Opportunity of value-added functions for O-RAN
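As a minimal example of the anomaly-detection primitive mentioned in the list above, the sketch below flags KPI samples that deviate sharply from a rolling baseline. The window size, threshold, and simulated throughput trace are assumptions; production RAN analytics would use far richer, multi-modal models.

```python
import numpy as np

def rolling_zscore_anomalies(series: np.ndarray, window: int = 60, z_thresh: float = 4.0):
    """Flag KPI samples (e.g., per-cell throughput or latency) that deviate
    sharply from their recent rolling baseline."""
    anomalies = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma == 0:
            continue
        z = (series[t] - mu) / sigma
        if abs(z) > z_thresh:
            anomalies.append((t, float(series[t]), float(z)))
    return anomalies

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    throughput = rng.normal(900.0, 20.0, size=500)   # simulated Mbps on one cell
    throughput[350] = 400.0                          # sudden degradation
    print(rolling_zscore_anomalies(throughput))      # reports index 350
```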

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM's integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes, which helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide the supporting infrastructure, such as software-defined storage for a federated-namespace data lake that spans other hyperscalers' clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing sit close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM's goal is not to move the entirety of its cloud infrastructure to the edge. That would add little value: the edge would simply function as a spoke operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

