I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.
Dr. Fuller is responsible for AI- and platform-based innovation for enterprise digital transformation, spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.
Edge In, not Cloud Out
In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.
A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.
“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.
IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed through a single unified control plane.
IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).
IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.
It is important to mention that IBM is designing its edge platforms with labor costs and the technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire. IBM is therefore designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.
Why edge is important
Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world's data today, vast amounts of new data are being created at the edge, by industrial applications, traffic cameras, order management systems, and more, all of which can be processed at the edge in a fast and timely manner.
Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device to the cloud for analytics and then back to the edge again. Moving data through the network consumes capacity and adds latency. It's easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
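The latency argument can be made concrete with a back-of-envelope calculation. The figures below (uplink speed, round-trip time, compute times) are illustrative assumptions, not IBM measurements:

```python
# Back-of-envelope comparison of edge vs. cloud processing latency.
# All figures are illustrative assumptions, not vendor benchmarks.

def cloud_latency_ms(payload_mb, uplink_mbps=50, rtt_ms=40, compute_ms=10):
    """Round trip: upload the payload, compute in the cloud, return the result."""
    transfer_ms = payload_mb * 8 / uplink_mbps * 1000  # MB -> Mb, Mb/Mbps -> s -> ms
    return transfer_ms + rtt_ms + compute_ms

def edge_latency_ms(payload_mb, compute_ms=25):
    """Data is processed where it is generated; no network transfer."""
    return compute_ms

payload = 5  # MB of sensor imagery per inference
print(f"cloud: {cloud_latency_ms(payload):.0f} ms")  # transfer dominates
print(f"edge:  {edge_latency_ms(payload):.0f} ms")
```

Even with generous network assumptions, the transfer time dominates the cloud path, which is why latency-sensitive transactions favor the edge.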
Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.
IBM at the Edge
In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.
Example #1 – McDonald’s drive-thru
Dr. Fuller’s first example centered on the quick-service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired automated order-taking technology from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total orders for McDonald's and other QSR chains.
McDonald's and other QSR chains would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with natural language processing (NLP) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
Example #2 – Boston Dynamics and Spot the agile mobile robot
According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.
To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, and waste treatment plants. The value proposition that Boston Dynamics brought to the partnership was Spot, the agile mobile robot: a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct visual inspections of human workers to determine whether required safety equipment is being worn.
IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.
IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.
IBM market opportunities
Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.
Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.
Challenges with scaling
“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”
Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.
Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.
IBM AI entry points at the edge
IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.
IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.
There have been three prior industrial revolutions since the 1700s; the current, in-progress fourth revolution, Industry 4.0, centers on digital transformation.
Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.
For its Industry 4.0 use case development, IBM, through its product, development, research, and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:
Maximo Application Suite
IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles, and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.
IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.
Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
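Drift monitoring can be as simple as comparing summary statistics of a feature observed in production against its training-time baseline. The sketch below (with made-up feature values) flags a shift in the mean measured in baseline standard deviations; a real system would track many features and use proper statistical tests such as Kolmogorov–Smirnov:

```python
# Minimal drift monitor: compare a production feature distribution
# against its training baseline. Illustrative sketch only.
from statistics import mean, stdev

def drift_score(baseline, live):
    """How many baseline standard deviations the live mean has moved."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

baseline = [0.9, 1.1, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9]  # feature values at training time
live = [1.6, 1.8, 1.7, 1.9, 1.5, 1.7, 1.8, 1.6]      # values observed at the edge

if drift_score(baseline, live) > 2.0:
    print("drift detected: flag model for review and possible retraining")
```

A score above a chosen threshold would trigger the Day-2 workflow described below: closer monitoring, retraining, and redeployment.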
Day-2 AI Operations (retraining and scaling)
Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.
IBM recognizes the advantages of performing Day-2 AI operations, which include scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.
A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).
“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. “However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”
Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
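The core of most federated learning schemes is federated averaging (FedAvg): each spoke trains locally and sends only model weights to the hub, which combines them weighted by local sample counts, so raw data never leaves the edge. A minimal sketch (illustrative only; real deployments add secure aggregation, client selection, and communication compression):

```python
# Minimal federated averaging (FedAvg) sketch: the hub combines
# per-spoke model weights without ever seeing the spokes' raw data.

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client weight vectors (plain lists)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three spokes with different amounts of local data.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]

global_weights = federated_average(weights, sizes)
print(global_weights)  # pulled toward the spoke with the most data
```

The hub then redistributes the averaged model to the spokes for the next round of local training.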
Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.
The graphic above compares the status quo method of performing Day-2 operations, using centralized applications and a centralized data plane, with the more efficient managed hub-and-spoke method, which uses distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.
Data Fabric Extensions to Hub and Spokes
IBM uses hub and spoke as a model to extend its data fabric, though it should not be thought of as a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and to create multiple hubs that can be aggregated at a higher level. This architecture has four important data management capabilities.
In addition to AI deployments, the hub-and-spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Managing the software delivery lifecycle or addressing security vulnerabilities across a vast estate are cases in point.
Multicloud and Edge platform
In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still run on servers but as a single-node rather than clustered deployment.
For smaller footprints such as industrial PCs or computer vision boards (for example, NVIDIA Jetson Xavier), Red Hat is working on a project that builds an even smaller version of OpenShift, called MicroShift, which provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge-device deployments.
Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from single containers to full Kubernetes application management, spanning MicroShift, OpenShift, and IBM Edge Application Manager.
Much is still in the research stage. IBM's objective is to achieve greater consistency in how locations and application lifecycles are managed.
First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), scaling the number of edge locations the product can manage by two to three orders of magnitude. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, and by adding Integrity Shield to protect policies in RHACM.
Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.
Telco network intelligence and slice management with AI/ML
Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:
The end-to-end 5G network comprises the radio access network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks, with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software-defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
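Conceptually, a slice is a set of QoS targets carved out of the shared physical network, and applications are matched to the slice that meets their requirements. The sketch below illustrates the idea; the slice names and figures are illustrative, not drawn from the 3GPP specifications:

```python
# Toy model of network slices as QoS targets, with a selector that
# matches an application to a suitable slice. Illustrative only.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    max_latency_ms: float
    min_bandwidth_mbps: float

slices = [
    Slice("urllc", max_latency_ms=5, min_bandwidth_mbps=10),    # low-latency slice
    Slice("embb", max_latency_ms=50, min_bandwidth_mbps=1000),  # high-bandwidth slice
]

def pick_slice(required_latency_ms, required_bandwidth_mbps):
    """Return the first slice satisfying both requirements, or None."""
    for s in slices:
        if (s.max_latency_ms <= required_latency_ms
                and s.min_bandwidth_mbps >= required_bandwidth_mbps):
            return s
    return None

print(pick_slice(10, 5).name)     # latency-sensitive control app
print(pick_slice(50, 500).name)   # bandwidth-hungry video app
```

In practice the AI/ML layer IBM describes would sit above such a model, continuously monitoring slice metrics and reassigning or rescaling slices as conditions change.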
An important aspect of enabling AI at the edge is providing CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.
5G network slicing and slice management
Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.
5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.
Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.
Dr. Fuller sees a variety of opportunities in this area. "When it comes to applying AI and ML on the network, you can detect things like intrusion detection and malicious actors," he said. "You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”
In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:
Future leverage of these capabilities by existing IBM clients that use Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.
5G radio access
Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the distributed unit (DU) and centralized unit (CU) from the 4G baseband unit and connects them with open interfaces.
The O-RAN system is more flexible. It uses AI over these open interfaces to optimize how a device is categorized and connected by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.
The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:
IBM Cloud and Infrastructure
The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform where clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub-and-spoke model.
IBM's focus on “edge in” means it can provide the infrastructure itself, as in the example shown above of software-defined storage for a federated-namespace data lake that spans other hyperscalers' clouds. Additionally, IBM is exploring integrated full-stack edge storage appliances based on hyperconverged infrastructure (HCI), such as Spectrum Fusion HCI, for enterprise edge deployments.
As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).
Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.
IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That would have little value; the edge would simply act as a spoke operating on actions and configurations dictated by the hub.
IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.
Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.
Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.
However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.
It is reassuring that IBM has a plan and that its plan is sound.
Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.
Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.
Quality social-emotional learning (SEL) and effective special education (SpEd) programming look remarkably similar. Each relies on a positive, safe learning environment and touts activities geared toward student strengths and weaknesses. Both types of programming facilitate a group experience where individual outcomes are designed to be disparate, recorded, and used to track growth. Because these two types of programming are similar in philosophy, it should come as no surprise that both SEL and SpEd can be enhanced and expanded by innovative edtech solutions—most notably, student-created 360° virtual reality (VR) videos.
VR is proving to be an effective engagement tool in diverse ways: visiting museums around the world, blasting off into space, etc. But VR does not have to be limited to geography and science classrooms. By using student-created, perspective-taking videos, VR can be a powerful experiential tool that aligns with and augments both SEL and SpEd outcomes.
When students put on a headset to view these types of videos, they are stepping into another life, another story. They will find connection in the familiar and discover meaning in what they perceive to be different. Students then begin to develop perspective-taking skills, resulting in newfound levels of relationship skills (communication), self-management (emotional control in response to a story), and social awareness (empathizing with the storyteller). As a bonus, viewing VR films is an incredibly immersive experience, making student engagement—often a legitimate challenge—easier to achieve.
GUEST OPINION: In the early days of the COVID-19 pandemic, many organisations made significant changes to their IT infrastructures in a very short space of time. However, that was just the beginning of an infinite process of change.
One of the first steps many organisations took was to equip their staff to work effectively from home. Reliable networking links had to be established so that staff could be as productive working remotely as they had been in the office.
In many cases additional Virtual Private Network (VPN) links were established. In others, workloads were shifted onto cloud platforms to ensure performance could be maintained.
Because this work needed to be done so swiftly, corners often had to be cut. For many organisations, the result was a networking infrastructure that connected users and resources, but perhaps was not as efficient or secure as it should be. Moreover, as more companies went remote, the edge of the network extended further and further, creating what we call the era of the Infinite Enterprise.
Ongoing network evolution
Now, in a post-COVID environment, this situation needs to change. Focus must now be on finding ways to offer seamless, high-speed, secure connectivity to data, applications, and users regardless of their location, connection type, or device being used.
Networking is now very much at the core of business activity. Where previously it might have been viewed as an annoying necessity, it now underpins everything from communication to data generation, analysis, and storage.
Network boundaries have also changed. Where in the past the boundary would have been considered to be a firewall or VPN access point, this has now shifted out to the users themselves.
An organisation’s users could be working from home, on an aircraft, or in a hotel. This is now where the boundary is located and IT networking and security teams need to understand the implications this has for infrastructure design and management.
To cope with this change in the concept of network boundaries, network managers need to look well beyond simply providing connectivity. They need to achieve clear visibility into the performance of the network and the workloads being undertaken by users.
The role of automation
Within a complex network environment, achieving such visibility can be a challenging task. For this reason, increasing numbers of IT teams are making use of the rapidly evolving range of artificial intelligence (AI)-powered tools currently on the market.
These tools help to overcome the challenges posed by the increasing amount of data about network performance and events. In complex environments, it becomes impossible for human teams to make sense of all the data and make any required changes to configurations to improve performance.
This is where AI-powered tools can help. They can review large volumes of event data and flag any incidents that should receive closer attention. In this way, it is not about replacing humans with AI tools but augmenting their performance by removing the tedious task of assessing large volumes of event data.
Acting as a co-pilot, the tools free up IT professionals to focus on tasks that add additional value to the organisation. Additionally, by reducing the number of manual IT tasks like investigating network alarms and creating IT tickets, organisations can continue to grow and expand their networks without having to radically increase IT headcount. Productivity can be improved and business outcomes achieved more quickly.
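The triage idea can be sketched in a few lines: flag event-count spikes that a human team would otherwise have to hunt for manually. A simple z-score over a sliding baseline stands in here for the far more sophisticated models that commercial AIOps tools use:

```python
# Toy event-triage sketch: flag time steps where the event count
# deviates sharply from the recent baseline. Illustrative only.
from statistics import mean, stdev

def flag_anomalies(counts, window=10, threshold=3.0):
    """Return indices where the count is a z-score outlier vs. the prior window."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady alarm volume with one burst at index 15.
events = [20, 22, 19, 21, 20, 23, 18, 20, 21, 22, 19, 20, 21, 22, 20, 90, 21, 20]
print(flag_anomalies(events))
```

Only the flagged incidents reach a human, which is the augmentation (rather than replacement) the article describes.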
Designing networks in the post-pandemic world
With most organisations continuing to work in a hybrid mode, networks must still fully support staff regardless of their location. For this reason, organisations require networks that are infinitely distributed, built to scale, and user-centric.
To achieve this, three things are required. They are:
One network: To cover all users and resources, organisations will continue to require a mix of networking types and technologies. Regardless of the elements that are selected, they must be deployed and managed as one single network. This removes complexity around configurations and ensures users enjoy the same experience regardless of where and how they are connecting.
One cloud: Again, although multiple cloud platforms may be in use, they should be managed as a cohesive whole. This will improve the user experience and ensure security of all assets is maintained.
A strong technology partner: For these to be achieved, an organisation has to select an appropriate networking technology partner. This needs to be someone who can work with the organisation to understand its requirements and match the best technology to them.
It’s clear the world of work has changed significantly since the pandemic and is unlikely to revert to the way it operated before the virus appeared. For this reason, networking teams need to understand that the task of evolving and updating their infrastructures is far from over. However, by implementing aids such as AI-powered tools and understanding how the network edge has shifted, teams can be well-equipped to provide the support and guidance that users need both now and in the months ahead.
As AI tools develop further, organisations can take advantage of new features like networking digital twins, which can help ensure smooth rollouts and eliminate the weeks or months of delay associated with hardware deployments. Networks may become more complex, but with intelligent tools that simplify management and automate tedious tasks, IT teams can focus on finding new ways to maximise the value of their networks and deliver better outcomes for their organisations.
Global Pipette Tips Market
Dublin, July 05, 2022 (GLOBE NEWSWIRE) -- The "Pipette Tips: Global Markets" report has been added to ResearchAndMarkets.com's offering.
The global market for pipette tips was estimated at $3.6 billion in 2021. The market is forecast to grow at a compound annual growth rate (CAGR) of 9.1%, reaching $5.6 billion by the end of 2027.
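The headline numbers are internally consistent: $3.6 billion compounding at 9.1% a year over the roughly five-year forecast window works out to about $5.6 billion. A quick check (the five-year window is an assumption based on the report's stated 2022-2027 forecast period):

```python
base = 3.6    # 2021 market estimate, $ billions
cagr = 0.091  # compound annual growth rate
years = 5     # approximate forecast window through 2027

forecast = base * (1 + cagr) ** years
print(f"${forecast:.1f}B")  # $5.6B
```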
The global market is segmented based on the type of operations, type of applications, type of cleanliness, and region. The goals of this study were to determine the current market scenario for pipette tips and evaluate the market's growth potential over the five years from 2022 through the end of 2027.
The study explores dynamics such as drivers, restraints, opportunities, and trends that will impact the market's growth. The study's main objective is to present a comprehensive analysis of the current market for pipette tips and the market's future directions.
Pipette tips are a growing market globally, with potential stemming from the growth in laboratory testing and demand for research and development (R&D). The market's growth potential in the forecast period is very promising; an increase in early and accurate disease diagnoses will also boost this market.
This report focuses on types of operation, such as automated and manual, and on different levels of cleanliness, such as standard/non-sterile tips, sterile/pre-sterile tips, RNase/DNase-free, and pyrogen/endotoxin-free. The report portrays the trends and dynamics affecting the market and also covers market projections through 2027 and company profiles.
An updated overview of the global market for pipette tips within the industry
Analyses of the global market trends, with historic revenue (sales) data from 2019-2021, estimates for 2022, and projections of compound annual growth rates (CAGRs) through 2027
Highlights of the current market potential and market growth opportunities for pipette tips over the next five years (2022 to 2027)
Evaluation and forecast of the global pipette tips market size, and corresponding market share analysis by type of operation, application, level of cleanliness, and geographic region
Assessment of major driving trends, challenges, and opportunities in this innovation-driven market, along with current trends, future perspectives, recent developments, and regulatory implications within the marketplace
In-depth information on increasing investments in R&D activities, key technology issues, industry-specific challenges, and COVID-19 implications on the progress of this market
Review of the competitive landscape of the key companies operating in the global pipette tips market, and their value share analysis based on the segmental revenues
Insight into the latest information on key mergers and acquisitions, partnerships, agreements, collaborations, and product launch strategies within the marketplace
Key Topics Covered:
2. Summary And Highlights
3. Market Overview And Market Dynamics
4. Market Breakdown By Type Of Operation
Global Market For Pipette Tips By Type Of Operation
Manual Pipette Tips
Ergonomics And Hazards Of Manual Pipetting
Market Size And Forecast
Automated Pipette Tips
Automated Liquid-Handling Systems
Market Size And Forecast
5. Market Breakdown By Level Of Cleanliness
Global Market For Pipette Tips By Level Of Cleanliness
Market Size And Forecast
Sterilized Pipette Tips
Market Size And Forecast
DNase- And RNase-Free Pipette Tips
Market Size And Forecast
Pyrogen-Free Pipette Tips
Market Size And Forecast
Human DNA-Free Pipette Tips
Market Size And Forecast
6. Market Breakdown By Application
7. Market Breakdown By Region
Asia: A New Hub For R&D
Rest Of The World
8. Competitive Landscape
Global Company Share Analysis
Trends In Laboratory Supplies Purchasing Decisions
Supplier And Customer Engagement
Mergers And Acquisitions
Agreements, Collaborations, And Partnerships
9. Company Profiles
For more information about this report visit https://www.researchandmarkets.com/r/2g1gx4
CONTACT: ResearchAndMarkets.com Laura Wood, Senior Press Manager firstname.lastname@example.org For EST Office Hours Call 1-917-300-0470 For U.S./CAN Toll Free Call 1-800-526-8630 For GMT Office Hours Call +353-1-416-8900
VANCOUVER, British Columbia, Aug. 03, 2022 (GLOBE NEWSWIRE) -- Algernon Pharmaceuticals Inc. (the “Company” or “Algernon”) (CSE: AGN) (FRANKFURT: AGW0) (OTCQB: AGNPF), a clinical-stage Canadian pharmaceutical development company, is pleased to announce it has been invited to present the results from its Phase 2a Study of NP-120 (“Ifenprodil”) for idiopathic pulmonary fibrosis (“IPF”) and chronic cough at the 9th American Cough Conference in June 2023.
The American Cough Conference is the world’s leading educational meeting for health care professionals involved in the research and management of patients with cough and is held every two years.
“I am pleased that Algernon has accepted our invitation to present at the American Cough Conference,” said Dr. Peter Dicpinigaitis, Professor of Medicine at Albert Einstein College of Medicine, Editor-in-Chief of LUNG, and conference chair. “The NMDA receptor is a fascinating target, and Ifenprodil, if successful, would be a first-in-class treatment. I am excited about the drug’s potential not only for cough in IPF, but also for the wider refractory chronic cough population.”
Ifenprodil is an N-methyl-D-aspartate (NMDA) receptor antagonist specifically targeting the NMDA-type subunit 2B (GluN2B), which prevents glutamate signalling. Ifenprodil represents a novel, first-in-class treatment for both IPF and chronic cough.
About Algernon Pharmaceuticals Inc.
Algernon is a drug re-purposing company that investigates safe, already approved drugs for new disease applications, moving them efficiently and safely into new human trials, developing new formulations and seeking new regulatory approvals in global markets. Algernon specifically investigates compounds that have never been approved in the U.S. or Europe to avoid off label prescription writing.
Christopher J. Moreau
Algernon Pharmaceuticals Inc.
604.398.4175 ext 701
Neither the Canadian Securities Exchange nor its Market Regulator (as that term is defined in the policies of the Canadian Securities Exchange) accepts responsibility for the adequacy or accuracy of this release.
CAUTIONARY DISCLAIMER STATEMENT: No Securities Exchange has reviewed nor accepts responsibility for the adequacy or accuracy of the content of this news release. This news release contains forward-looking statements relating to product development, licensing, commercialization and regulatory compliance issues and other statements that are not historical facts. Forward-looking statements are often identified by terms such as “will”, “may”, “should”, “anticipate”, “expects” and similar expressions. All statements other than statements of historical fact, included in this release are forward-looking statements that involve risks and uncertainties. There can be no assurance that such statements will prove to be accurate and actual results and future events could differ materially from those anticipated in such statements. Important factors that could cause actual results to differ materially from the Company’s expectations include the failure to satisfy the conditions of the relevant securities exchange(s) and other risks detailed from time to time in the filings made by the Company with securities regulators. The reader is cautioned that assumptions used in the preparation of any forward-looking information may prove to be incorrect. Events or circumstances may cause actual results to differ materially from those predicted, as a result of numerous known and unknown risks, uncertainties, and other factors, many of which are beyond the control of the Company. The reader is cautioned not to place undue reliance on any forward-looking information. Such information, although considered reasonable by management at the time of preparation, may prove to be incorrect and actual results may differ materially from those anticipated. Forward-looking statements contained in this news release are expressly qualified by this cautionary statement. 
The forward-looking statements contained in this news release are made as of the date of this news release and the Company will update or revise publicly any of the included forward-looking statements as expressly required by applicable law.
On Aug. 2, Holland voters will face the easiest choice they’ve ever made: Vote for increased property taxes and municipal debt or vote "no" on the broadband tax.
The proposal asks residents of Holland to allow the city’s Board of Public Works to borrow $30 million to build a government-owned broadband network. If this idea sounds familiar, it’s because it is: in 2010, the city tried to draw in Google Fiber but failed, as Holland was already served by multiple broadband providers. Undeterred, in 2011 Holland commissioned a study that estimated a $58 million price tag to deploy the necessary infrastructure.
Then came a 2016 plan from a high-priced consulting firm that suggested the city take on $63 million in debt to finance a new network. Facing failure again, city leaders chose to cherry-pick the most profitable part of town and launched a pilot program in Holland’s downtown area.
The designers of the current proposal assume the service will have a 51 percent take-rate, meaning the share of eligible residents who actually sign up. The municipal consultants who sell these proposals have historically overestimated take-rates significantly. The take-rate matters because it is vital to the longevity of the network and its ability to fund itself. Or, to put it more simply: the lower the take-rate, the more taxpayers will have to foot the bill. Look no further than a couple of counties to our southeast, to the city of Marshall, to see what happens when the take-rate falls short of assumptions. Three years after building a municipal fiber network, the city was forced to raise rates because it had not paid off a single cent of its debt.
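The take-rate arithmetic is straightforward: the fewer subscribers there are, the more of the fixed $30 million debt each one effectively has to carry. A rough illustration (the 12,000-premises figure is hypothetical, chosen only to show the shape of the math):

```python
def debt_per_subscriber(debt, eligible_premises, take_rate):
    """Debt each subscriber effectively carries at a given take-rate."""
    return debt / (eligible_premises * take_rate)

DEBT = 30_000_000   # proposed borrowing from the ballot proposal
PREMISES = 12_000   # hypothetical count of eligible homes and businesses

for rate in (0.51, 0.35, 0.20):
    print(f"take-rate {rate:.0%}: ${debt_per_subscriber(DEBT, PREMISES, rate):,.0f} each")
```

If the assumed 51 percent take-rate does not materialize, the shortfall lands on taxpayers rather than subscribers.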
But there is more to this than just a program that spends more money than it earns. The proposed $30 million to finance the pilot’s expansion would be paid for over decades through property tax increases in Holland. At a time of stratospheric home prices, the city is asking you to vote yes to pay more to own a home, all to underwrite a failing internet network.
And that’s not all. These government-owned networks, or GONs, have a bad habit of becoming money pits. They don’t just cost taxpayers in the short term. Because take-rates end up low and the costs of running the networks end up high, cities with GONs end up borrowing more money or raising taxes to keep the networks going.
Case in point: A network to our north, in Traverse City, just took a $15.7 million federal loan, but still needs another $3.2 million from the city to operate. Unsurprisingly, the network hasn’t come close to hitting its revenue goals. And here’s the truly astonishing part: all that money is being spent to serve 640 customers in total.
We can’t let Holland become another case study in the failures of government-owned broadband networks. And what’s more, we don’t need to. Every home and business in Holland already has access to a gigabit connection. Moreover, low-income Hollanders can sign up for free high-speed internet through the FCC’s Affordable Connectivity Program. At a time when housing costs in West Michigan are skyrocketing, why would we increase the cost of living with a tax hike when our low-income neighbors can get connected for free today?
I love Holland; it’s a special place that I cherish and want to see flourish. That’s why this vote is an easy choice: Holland residents know better than to sign up for years of wasteful spending on projects that haven’t proven their merit. Say no to this ballot initiative, and reject debt the city doesn’t need to solve a problem it doesn’t have.
— Orlando Estrada is a lifelong resident of Holland and an entrepreneur.
This article originally appeared on The Holland Sentinel: My Take: Say no to higher taxes, say no to more debt, say no to Holland ballot proposal
Cisco's goal for Webex is to be the leading collaboration brand. It wants "Webex Suite" to be the "most comprehensive communications platform, most cost-effective and most reliable and secure collaboration tool." Aggressive? Yes. And today, Cisco made numerous bold announcements about the platform that I found quite interesting and that I think will add customer value and bring even more competition to the collaboration market.
The company officially announced the Webex rebrand, launched a new campaign, added new capabilities to what's now called the "Webex Suite," and repriced many of its latest products. The rebrand changes the nomenclature from Webex to Webex by Cisco, but I believe this business decision goes a lot deeper than simply updating a name and logo and committing to a new marketing strategy, as I see some companies do. The decision to rebrand the company's collaboration platform is driven primarily by the future of hybrid collaboration, as we continue to work with both in-office and remote employees simultaneously.
I believe Cisco is taking this rebranding very seriously. Nothing echoes that sentiment more than Cisco's goal of being the most loved brand in collaboration. The collaboration market is an excellent market for Cisco to focus on, as it lies within its core competencies. I’ll admit, at times I have questioned this, but it makes sense to me now, particularly now that I see the company “going for it.” The rebrand means very little to the customer without massive product innovations that provide a much-improved user experience.
Cisco decided to invest in the rebrand of Webex because it believes that hybrid work is here for the long haul. There were a few key metrics Cisco pointed to in order to drive this home. According to a 2020 Global Workforce study by Dimensional Research, meetings that included at least one remote participant grew from 8% previously to 98% going forward. Also, 87% of executives plan to make changes to their real estate strategy in the next 12 months. Updating a company's real estate strategy could mean moving away from individual workspaces to more collaborative, open environments.
With the new launch, Cisco seems hyper-focused on crafting a superb user experience for hybrid work, which will be essential because it wants to be the most loved brand in collaboration. One of the most significant upgrades is the new Webex Suite. Think of this as a singular platform for completing various tasks ranging from calling and messaging to polling and events. With this version of Webex, more than 800 innovations have been added since September. These include gesture recognition, noise removal, speech enhancement, custom layouts, open APIs, immersive share, and much more. Just scanning the list, I can tell that Webex is much improved compared to the last version. I am excited to use the application for myself soon and provide a more in-depth analysis of its functionality.
Cisco also claimed that pricing for the new Webex subscription is 40% lower than à-la-carte and competitive offerings. On the hardware side, Cisco seems to be incentivizing customers to use its collaboration devices with Webex. Customers can save up to 50% on devices like the Webex Desk Camera, Webex Desk Pro, and Room Kit when they have a qualifying Webex contract that includes the Webex Suite. I like this move from Cisco because it gets enterprises into the Webex ecosystem for less and allows them to experience a hardware and software combination that Cisco fully manages. If customers pair Webex with third-party hardware, it's more complicated to ensure a great experience. This aggressive pricing structure is a great way to bring users in quickly.
I have confidence that a company as competent as Cisco can pull off a world-class collaboration application but leading the industry in innovation and being the preferred application for enterprise customers won't be easy. The amount of effort Cisco has put into new Webex features out of the gate gives me confidence that the company is ready to fight. Several other big fish in this pond don’t want to lose any market share, so this will be a battle between giants.
Cisco knows it has many challenges when it comes to brand perception. The company knows it needs to increase awareness and, at the same time, change the perception of the brand. I believe part of the brand challenge is that many self-hosting Webex customers don’t pay enough attention to quality of service, and the experience suffers. I already talked about the innovations and modified cloud pricing that I believe are part and parcel of becoming the top collaboration brand.
The new Webex logo looks premium and modern. The blue and green rotating helix in the logo represents "the harmonious flow of ideas that happens when people come together as equals." I see it as an equal footing between IT and end users. The new branding embodies a deep commitment from Cisco to the future of hybrid work collaboration. The company wants its branding to complement all of the work and acquisitions that went into the new application.
The Webex rebrand looks like a natural extension of Cisco's declared purpose of "powering an inclusive future" and a continuation of the corporate Bridge to Possible campaign. I wrote about that campaign here in 2018.
We've seen the company lean in to stories about bringing critical medical care to remote, rural communities, and about connecting classrooms around the world to help teachers and students navigate a virtual learning experience. It's nice to see this continuity carry through the business, from Chuck Robbins and the leadership team, to the brand story, to the way the company has re-imagined its products. I do love to see the brand consistency, and I'm surprised it still "works" three years later. But it does.
Existing users will see the new logo with updates starting on June 15th. For new users downloading the app starting on June 8th, all Webex apps will boast the updated branding. The updated Webex branding is about changing the focus and perception of Webex while backing it up with a deep product portfolio of collaboration hardware and software.
Webex brand campaign
With this Webex rebrand, Cisco says it’s going big with a marketing push. The brand campaign for this new launch wouldn't be complete without customer testimonials and a focus on premium partners. Customer testimonials need to show other customers, not just tell them, how Webex is changing and improving the way they collaborate.
The McLaren F1 Racing team is the focus of this new brand campaign. The company is competing at the top level of Formula 1 racing and using Webex to design its new race cars and collaborate; that's cool, in my opinion. You can see how the McLaren F1 Racing team is using Webex to collaborate here.
Adding a premium partner like McLaren does a lot for changing the perception of the Webex brand at a critical time for the company. I love to see the big initial marketing push from Cisco, and I believe partnering with McLaren for its first campaign is a good move.
I’ll admit it: I’m a huge F1 fan and can’t wait to see how McLaren uses Webex when the race comes to my hometown of Austin, at COTA.
I think this is an excellent move from Cisco, and it is coming at the right time. I ran all my businesses with a “go big or stay home” mentality, and this is great to see. The collaboration market is bigger than ever, and with the shift to hybrid work continuing to expand, this is the right time for the company to go all in. Cisco has deep roots in the collaboration market, and with its new brand and fantastic product, the future seems very promising.
I don't want to sway you to view the launch through rose-colored glasses, in any case. I’ve run many large businesses and led numerous product marketing teams. There is a lot of execution, and a lot of competition, standing between the company and becoming the most beloved brand in collaboration. I believe Cisco has the hardware, software, services, and many bright people up to the task. I will continue to watch this rollout closely and look forward to sharing my first impressions of the new Webex Suite.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.
Moor Insights & Strategy, like all research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, including 8x8, Advanced Micro Devices, Amazon, Applied Micro, ARM, Aruba Networks, AT&T, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, Calix, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Digital Optics, Dreamchain, Echelon, Ericsson, Extreme Networks, Flex, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Google (Nest-Revolve), Google Cloud, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Ion VR, Inseego, Infosys, Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, MapBox, Marvell, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Mesophere, Microsoft, Mojo Networks, National Instruments, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nuvia, ON Semiconductor, ONUG, OpenStack Foundation, Oracle, Poly, Panasas, Peraso, Pexip, Pixelworks, Plume Design, Poly, Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Residio, Samsung Electronics, SAP, SAS, Scale Computing, Schneider Electric, Silver Peak, SONY, Springpath, Spirent, Splunk, Sprint, Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, TE Connectivity, TensTorrent, Tobii Technology, T-Mobile, Twitter, Unity Technologies, UiPath, Verizon Communications, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zebra, Zededa, and Zoho which may be cited in blogs and research.
It's no secret that Jennifer Aniston is big into fitness. Thankfully, over the years she's been pretty open about how she stays so strong – see: barre classes, boxing and intermittent fasting – which means we've been able to take her fitness tips and tricks and add them into our own workouts whenever we fancy getting a Jen An inspired sweat on.
As for what she's been doing in the gym lately, the Friends star told PopSugar that there are two workouts she 'loves' right now. 'I'm really focusing on Pilates,' the 53-year-old revealed, adding: 'I love a three-minute plank.' Yep, you read that right - three minutes. Jen = a machine.
She pointed out that she also does strength training whenever possible. 'I always have five to eight pound weights in my trailer [or] in my hotel rooms if I'm away,' the actor said. 'And if I'm watching television or winding down, memorizing emails, [I] just use my weights.'
And it's not just her physical health that she works on, as she dedicates plenty of time to her mental health as well. 'I really, really give a lot of credit to morning meditation,' Jennifer added. 'Don't just mindlessly wake up. Wake up, take a moment. Don't look at your phone.'
She went on: 'I'm having mindful mornings. [It's a time to] observe my thoughts, so I actually know what I'm doing and I understand my focus for the day.'
Taking care of her mental health is something she made a priority during the pandemic. 'We were dodging Omicron like Donkey Kong,' she explained, recalling how COVID spikes last year threw a three-month filming stint into disarray. 'I don't think I really understood the amount of stress we were enduring during those three months.'
'I wasn't working out like I normally do,' she said of that period, which resulted in a back injury when she tried to jump back into her usual fitness routine, although she admitted that she's 'slowly getting back into it.'
Get it Jen!
TORONTO & SEATTLE, August 02, 2022--(BUSINESS WIRE)--POSaBIT Systems Corporation (CSE: PBIT, OTC: POSAF), the leading provider of point of sale software and payments infrastructure in the cannabis industry, will host a conference call and live webcast on August 25, 2022 at 4:30 p.m. eastern time to discuss the results of the second quarter ended June 30, 2022.
Conference Call Information
Date: August 25, 2022
Time: 4:30pm Eastern Time
Toll Free: 888-506-0062
Participant Access Code: 742426
Live Webcast: https://www.webcaster4.com/Webcast/Page/2708/46302
Conference Call Replay Information:
The replay will be available approximately 1 hour after the completion of the live event.
Toll Free: 877-481-4010
Replay Passcode: 46302
Replay Webcast: https://www.webcaster4.com/Webcast/Page/2708/46302
POSaBIT (CSE: PBIT) is a financial technology company that delivers unique and innovative payment processing and point-of-sale systems for cash-only businesses. POSaBIT specializes in resolving pain points for complex, high-risk, emerging industries like cannabis with an all-in-one solution that is compliant, user-friendly, and utilizes top-of-the-line hardware. POSaBIT’s unique solution provides a safe and transparent environment for merchants while creating a better overall experience for the consumer. For additional information, visit www.posabit.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220802005181/en/
Co-founder and CEO of POSaBIT