Exactly the same 1T6-215 questions and answers that I actually saw in the real test!

killexams.com furnishes the most recent, 2022 up-to-date cheat sheets with practice test questions and answers for the new topics of the Network-General Sniffer Portable Switch Expert Analysis and Troubleshooting examination. Practice our questions and answers to improve your knowledge and pass your test with high marks. We assure your success in the test center, covering every one of the references of the test and building your knowledge of the 1T6-215 examination. Pass with our braindumps.

Exam Code: 1T6-215 Practice exam 2022 by Killexams.com team
Sniffer Portable Switch Expert Analysis and Troubleshooting
Network-General Troubleshooting resources
Killexams : Network-General Troubleshooting resources - BingNews https://killexams.com/pass4sure/exam-detail/1T6-215
Verizon Not Registered On Network (Verizon Network Issues)

Verizon might give you a "Not registered on network" error for several reasons. It could be because your device entered airplane mode and failed to connect to the Verizon cellular tower. It could also be because your Verizon SIM card is not inserted correctly.

This article will guide you through fixing this error and returning to the Verizon network in no time. By the end, the Verizon "Not registered on network" error should be gone.

Why are You Getting "Not Registered on Network" on Verizon?

If you notice a "Not registered on network" error on Verizon, it may be because your phone is running an outdated software version.

Also, when your phone is locked to another carrier, it may display such an error. Unlocking your phone might help.

How To Fix The "Not Registered On Network" Error on Verizon

Basic Troubleshooting

  • Ensure you have an active, valid mobile data plan with Verizon and that your reception is strong.
  • Ensure that Airplane mode is OFF. Sometimes, we accidentally enable this feature. Open Settings > Connections > Airplane mode and toggle the switch off.
  • Reinsert your Verizon SIM card, check for damage, and ensure it is inserted correctly. If you have another phone around, place your SIM card in it and try to make a phone call.
  • Restart your phone. 

Quick Check (Verizon)

Perhaps you accidentally enabled the Airplane mode on your phone, disabled Mobile data, etc. Before we go further, perform these quick troubleshooting steps:

  1.  Make sure the Airplane mode is disabled. You can also toggle the Airplane mode on and off
  2.  Toggle Mobile Data

Solution 1 - Your Verizon SIM Card

First, try to reinsert your Verizon SIM card. Open the SIM tray, take out the SIM card, and inspect it. Make sure it is not damaged. If it is, contact your carrier for a replacement.

Solution 2 - Enter Service Mode

This solution requires you to open the dialer and proceed with the steps below.

  • Enter the code *#*#4636#*#* in the dialer
  • Enter Service mode
  • Click on the top option – Device information or Phone information.
  • Next, tap on the Run Ping test.
  • The radio option will be visible at the bottom of this screen.
  • Check whether it is off or on. If it is off, press the button next to it to turn on the radio.
  • You will be prompted to reboot the device.
  • Click reboot and your phone will start rebooting. Once completed, check if the problem is gone.

Solution 3 - Update Your Software Version

Ensure you are connected to a reliable Wi-Fi network.

Software Update on newer devices

From your home screen, select:

  • Settings
  • Navigate to System updates
  • Check for system updates 

Software Update on older devices

  • Navigate to Settings
  • Scroll down to the very bottom
  • Select Software Update
  • Please wait for it to reboot and complete the update
  • Finished!

If your device finds a new update, tap Download Now. When it is finished downloading, a new screen will appear, alerting you that the software version is ready to be installed.

If the method above didn't work for you, I recommend reading Restore Galaxy Null IMEI # and Fix Not Registered on Network.

 

Solution 4  - Rebooting Method (Technobezz Origin)

If this solution does not work on the first attempt, try doing it again. Technobezz originally crafted this method. Follow these steps: 

  • Turn off your Verizon phone by holding the Power button and the Home (or Volume Down) button together.
  • While the phone is off, wait for 2 minutes.
  • After 2 minutes, remove the battery (Only if your phone battery can be removed) and the Verizon SIM card from the phone.
  • Press the Power button and the home  (Or Volume Down) button together ten times.
  • Afterwards, hold the Power and Home (Or Volume Down) keys for 1-3 minutes.
  • Next, insert your Verizon SIM card and the battery (Only if your phone battery can be removed)
  • Turn on your phone.
  • While your phone is on, remove your Verizon SIM card and then reinsert it. Repeat this five times. (On some Android phones, you need to remove the battery before removing the SIM card. If this is the case, please skip this step.)
  • A message will appear saying that you need to "Restart your Phone"- click it.
  • Finally, your Verizon phone should boot up with no errors.

Solution 6 - Select Verizon as your Network Operator 

Go to Settings on your phone.

  • Go to Wireless & Networks Or Connections
  • Select Mobile Networks 
  • Select Network Operators 
  • Tap on Search Now
  • Then, Select Verizon

Solution 7 - The Corrupt ESN

  • Turn your Verizon device on and enter the code (*#06#) in the dialer, which displays the device's IMEI number. If it shows 'Null,' the IMEI number is corrupt.
  • Dial (*#197328640#) or (*#*#197328640#*#*) from the phone dialer, then select the option 'Common.'
  • Next, select option #1, Field Test Mode (FTM). It should be 'OFF.' This process will restore the IMEI number.
  • Return to the key input and select option 2, which will turn off FTM.
  • Remove the SIM card from the device and wait 2 minutes to re-insert your Verizon SIM card.
  • Turn on the device and type (*#197328640#) again from the phone dial.
  • Next, go to and select Debug screen > Phone control > NAS control > RRC > RRC revision.
  • Select Option 5
  • Restart your phone. 

Solution 8 - Reset Network Settings

Sometimes just a simple network reset can fix the issue. From your phone's home screen, select Settings:

  • Tap General Management. 
  • Select Reset 
  • Tap Reset Settings.
  • Select Reset network settings

Solution 9 - Update your Verizon APN Settings


  • Navigate to Settings
  • Tap Connections.
  • Tap Mobile Networks
  • Select Access Point Names
  • Tap More (3 dots)
  • Tap Reset to Default.
  • Then enter the new APN Settings.

Below are the Verizon APN settings for iPhone and Android Devices.

Verizon APN settings for iPhone and Android Devices (LTE)

  • Name: Verizon
  • APN: vzwinternet
  • Proxy: <Not set>
  • Port: <Not set>
  • Username: <Not set>
  • Password: <Not set>
  • Server: <Not set>
  • MMSC: http://mms.vtext.com/servlets/mms
  • MMS proxy: <Not set>
  • MMS port: 80
  • MMS protocol: <Not set>
  • MCC: 310
  • MNC: 12
  • Authentication Type: <Not set>
  • APN Type: default,supl,mms OR Internet+MMS
  • APN Protocol: <Not set> Or IPv4
  • APN roaming protocol: <Not set>
  • Bearer: Unspecified
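For context, Android stores carrier APN defaults in an XML file (apns-conf.xml), and a manually entered APN corresponds to a single <apn> element. The fragment below is a hypothetical sketch of how the LTE values above might appear in that format; attribute spellings vary across Android versions, unset fields are simply omitted, and the zero-padded MNC ("012") is a convention of the file format rather than a value from the list above.

```xml
<!-- Hypothetical apns-conf.xml entry built from the Verizon LTE values
     listed above. Fields marked <Not set> in the list are omitted. -->
<apn carrier="Verizon"
     mcc="310"
     mnc="012"
     apn="vzwinternet"
     mmsc="http://mms.vtext.com/servlets/mms"
     mmsport="80"
     type="default,supl,mms" />
```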

Notes:

vzwims: Used for connections to IMS services. Required for TXT messaging.

vzwadmin: Used for administrative functions.

vzwinternet: Required for general Internet connections.

vzwapp: Required for PDA data service.

View the Updated APN Settings For AT&T, Verizon, T-Mobile, Sprint ( +4 More)

 Other workarounds worth trying 

  • Toggle Wi-Fi and Airplane Mode -> Turn off Wi-Fi and Airplane mode for 40 seconds, then turn them back on.
  • Try a different SIM Card apart from Verizon.
  • Change to a different Network Mode -> Navigate to Settings > Connections > Mobile Networks > Select Network Mode > Choose your preferred network mode (toggle between 3G, 3G/2G, or 4G/3G/2G)
  • Contact Verizon -> Make them aware of the issue. In most cases, they will send you new APN settings or act on their end (remotely).
  • Perform a Factory Reset.


Wed, 27 Jul 2022 05:00:00 -0500 en text/html https://www.technobezz.com/verizon-not-registered-on-network-verizon-network-issues/
A ‘small’ fix for renewable energy’s ‘big’ problem

Electricity used to be so simple.

Flip a switch and the lights came on. Most of us didn’t know where our power came from and didn’t really care.

If we thought about it at all, we knew there were some huge power plants—hydropower dams, thermal plants fueled by coal or natural gas—connected to huge, high-voltage transmission lines that carried huge amounts of electricity to huge transformers, connected to smaller transformers, connected to wires that were connected to our homes.

Mostly out of sight. Mostly out of mind.

Today, most western states have adopted clean-energy policies, and the wholesale power market increasingly is dominated by emission-free, cheap renewable power.

Aging and economically inefficient thermal power plants (coal, mainly) are being retired.

As power consumers we can’t help but notice that if we don’t take actions—individual as well as societal—to reduce carbon emissions from power plants, vehicles and other sources, we face the very real possibility of a catastrophically warm future.

We are becoming—we have to become—more involved in our personal energy use and society’s energy future.

We no longer can afford to be passive consumers. For our own welfare—economic, societal—we’re becoming better informed energy consumers.

Supply-side resources are evolving rapidly, trending away from thermal power plants to carbon-free renewables in response to concern about global warming and to comply with state clean-energy policies.

Demand-side resources are evolving, too, and rapidly.

Energy efficiency — aka “conservation” — has been the principal demand-side resource in the Pacific Northwest for the last 40 years, helping our region reduce its demand for power by using it more efficiently.

But as demand for renewable energy increases and the technology that creates it improves, a new problem has arisen—where to locate new and needed massive power-generating facilities?

Concerns are being raised about industrial-scale solar and wind farms blocking wildlife corridors, harming birds, ruining scenic views, infringing on tribal lands and being installed in rural communities whether the locals like it or not.

How will we balance the need for renewable energy on a mass scale with the problems increasingly associated with industrial-scale facilities?

New term to learn

The most exciting developments that can be seen as at least a partial solution are with what are called “distributed generation resources.”

The most common “distributed generation resources” are the rooftop solar panels that all of us have seen on private residencies or commercial buildings, and some of us own.

These small-scale, location-specific devices can supplement or even replace a connection to a local electricity distribution service, such as a public utility.

The idea is that as consumers take ownership of their own power generation, the cord is cut — or at least weakened — between users and large-scale power suppliers located hundreds of miles away.

“Distributed generation resources” theoretically lessen the need to transport power across the countryside via power lines and towers.

Create enough small distributed generation sites and you’ll need fewer utility-scale installations disrupting the landscape. Or so goes the thinking.

Interest surging

Perhaps the most exciting and practical of the evolving, carbon-free, demand-side technologies is energy storage, i.e., batteries.

In its 2021 Integrated Resource Plan, PacifiCorp, which includes Pacific Power based in Portland and Rocky Mountain Power based in Salt Lake City, noted that customer interest in self-generation is growing — solar alone and solar plus battery backup.

Pacific Power is seeing a surge of interest in solar. As of June 7, the utility had 12,065 net-metering customers (10,897 are residential), with 90% of those self-generating solar power.


Installations are increasing in 2022. Through late May, Pacific Power says it had put in 345 new net meters, compared with 180 by the same time in 2021, 110 in 2020 and 70 in 2019.

Batteries also are becoming more common.

“In 2018, we only had 14 batteries installed,” says Tim Gauntt, a Pacific Power spokesman. “This bumped to 23 in 2019, 52 in 2020, 63 in 2021 and we have had 39 in 2022 so far with 94 installations pending approval.”

Portland’s other electric utility, Portland General Electric, is also tracking increases in distributed generation resources.

Sarah Hamaker, a PGE spokesperson, says approximately 16,600 customers have rooftop solar installations, and more than 90% of these are residential systems. The number is rising—growth in 2021 was 20% greater than in 2020.

PGE also contracts for other demand-side generation, including small-scale wind and hydropower, fuel cells, biogas generators and some combined heat and power generators. This technology produces electricity with a thermal fuel such as natural gas, but also provides heat from this combustion.

Costs on both sides inhibit growth

All this is good news for consumers, but it might be not-so-good news for utilities.

As solar installations increase, some utilities are growing concerned about the cost of paying customers for the power they generate and return to the grid, and whether those payments amount to a subsidy that disadvantages other customers.

Idaho Power Company recently announced it was considering a “transition” to a revised net-metering rate.

The current “export credit rate” is between 8 and 10 cents per kilowatt-hour. But in a study submitted to the Idaho Public Utilities Commission, Idaho Power proposed alternative methods for calculating the export rate that would reduce it to less than 4 cents per kilowatt-hour.

The proposal drew criticism from environmental and consumer groups. The Sierra Club called for an independent review of the study; the Idaho Conservation League said cutting the export rate in half would effectively make rooftop and other small-scale solar installations “financially unavailable” for many homeowners and businesses.

Portland General Electric compensates solar exports “at the applicable retail rate, which is currently just over 12 cents per kilowatt-hour for residential customers,” according to Hamaker.

That rate may be adjusted in the future by the Oregon Public Utilities Commission, she says.
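The stakes of these per-kWh rates are easiest to see as monthly bill credits. A minimal sketch, assuming a household that exports 300 kWh per month and using rounded versions of the rates quoted above:

```python
# Net-metering credit sketch: exported solar energy is credited per kWh at
# the utility's export rate. The 300 kWh/month export figure is an assumed
# illustration, not a number from the article.

def monthly_export_credit(exported_kwh, rate_per_kwh):
    """Dollar credit for surplus energy sent back to the grid in a month."""
    return exported_kwh * rate_per_kwh

rates = {
    "Idaho Power current (~9 cents)": 0.09,
    "Idaho Power proposed (<4 cents)": 0.04,
    "PGE retail (~12 cents)": 0.12,
}
for label, rate in rates.items():
    print(f"{label}: ${monthly_export_credit(300, rate):.2f}/month")
```

At these assumed exports, halving the export rate roughly halves the credit, which is the substance of the conservation groups' objection.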

Cost also is an issue for consumers. While the cost of home solar installation has come down over time, and probably will continue to decline, it still isn’t cheap.

According to Energysage — an “unbiased solar matchmaker, connecting homeowners with our network of over 500 pre-screened solar installers” — the cost of a home solar installation in Oregon in 2022 is between $11,475 and $15,525.

Energysage says the payback period for a 5,000-watt system in Oregon is 9.9 to 13.3 years, and that the net energy savings over 20 years would be between $14,565 and $19,705.
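The payback arithmetic behind figures like these can be sketched directly. The numbers below are assumed mid-range illustrations; Energysage's published payback periods also factor in incentives and electricity-rate escalation, which simple division ignores.

```python
# Simple solar payback sketch. The install cost and annual savings are
# hypothetical mid-range values (the article quotes $11,475-$15,525 for an
# Oregon system); real estimates also model incentives and rate escalation.

def simple_payback_years(install_cost, annual_savings):
    """Years until cumulative bill savings equal the up-front cost."""
    return install_cost / annual_savings

def net_savings_20yr(install_cost, annual_savings):
    """Total savings over a 20-year panel life, net of installation cost."""
    return 20 * annual_savings - install_cost

cost, savings = 13_500, 1_500  # hypothetical system
print(f"payback: {simple_payback_years(cost, savings):.1f} years")   # 9.0 years
print(f"20-year net savings: ${net_savings_20yr(cost, savings):,}")  # $16,500
```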

While costs are substantial for many homeowners, solar installations are nonetheless booming.

According to the Oregon Department of Energy, the total amount of solar generation in Oregon — residential, commercial, utility-scale — grew more than five-fold between 2015 and 2019, with generation increasing from 116,000 megawatt-hours to 776,000 megawatt-hours.

However, most of the growth came from utility-scale solar. In 2018, utility-scale solar accounted for 79% of solar generation in Oregon, with commercial solar accounting for 13% and residential solar accounting for 8%.

As of 2019, there were about 18,000 residential and commercial solar facilities in Oregon, and 77 utility-scale solar farms.

The problem with low energy costs

Solar’s increase is evident throughout the Pacific Northwest.

According to the Northwest Power and Conservation Council, rooftop solar installations are expected to increase in the four-state region by about 10% per year through 2050.

As of 2020, behind-the-meter solar (it’s not always installed on roofs) totaled 538 megawatts of installed capacity in the Northwest.

Ten percent per year isn’t a rapid growth rate. There’s a reason for that, says Massoud Jourabchi, the Council’s manager of economic analysis.

Because of low electricity prices in the Pacific Northwest compared with the rest of the nation—Southern California, for example, where the average cost of electricity is twice what it is in the Northwest—installing solar panels with battery backup isn’t economical for many consumers.

Jourabchi suspects people installing solar panels or panels plus battery backup devices probably aren’t motivated by cost, but by personal interest—doing something good for society by using less electricity generated with carbon fuels, or an interest in greater personal energy independence.

“My opinion on the demand side is that more is needed desperately, but we don’t quite have the market construct and regulatory processes to support a robust demand side yet,” says Nicole Hughes, executive director of Renewable Northwest.

One way around the cost issue is a “community” renewable energy project. These are becoming popular in Oregon.

Community systems have the advantage of economy of scale — a small group of consumers can participate in a community distributed generation project for less individually than installing their own equipment.

A typical minimum consumer investment might be as low as $1,000.

Rebates remain important incentive

In light of state-mandated lower emissions goals, Oregon utilities are encouraging customers to look into home energy generation.

They’re helping customers interested in a home solar installation navigate the maze of incentives, rebates and tax breaks that can greatly reduce the cost of a system.

Rebates are important both for solar installations and for other demand-side products, like electric vehicle chargers.

In June, Pacific Power announced a new set of rebates for installation of rapid (Level Two) electric-vehicle chargers in homes and workplaces.

The program offers a rebate of up to $500, capped at 75% of total project costs, which can include the charging equipment, permits and electrical installation work. For income-qualifying homeowners, the rebate can be as high as $1,000.
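As described, the rebate amount is the lesser of the flat maximum and 75% of total project cost. A small sketch of that formula (the function and example costs are illustrative, not Pacific Power's published calculator):

```python
# Sketch of the EV-charger rebate rule described above: up to $500 ($1,000
# for income-qualifying homeowners), capped at 75% of total project cost.
# Illustrative only; not Pacific Power's official calculation.

def ev_charger_rebate(project_cost, income_qualifying=False):
    cap = 1_000 if income_qualifying else 500
    return min(cap, 0.75 * project_cost)

print(ev_charger_rebate(400))          # 300.0 (the 75% cap binds)
print(ev_charger_rebate(2_000))        # 500 (standard maximum)
print(ev_charger_rebate(2_000, True))  # 1000 (income-qualifying maximum)
```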

People power

Energy experts are generally optimistic that the market will evolve, costs will come down, technologies will improve and a zero-carbon electric energy future is realistic.

Electricity isn’t so simple anymore. With distributed generation resources we can participate in our power system.

We might never be completely rid of fossil fuels, but now that we recognize greenhouse gas emissions as the root of climate change, we have the ability to do more than just flip a switch and walk away.

We have the ability to do something about it.

Mon, 08 Aug 2022 01:00:00 -0500 en-US text/html https://www.columbian.com/news/2022/aug/08/a-small-fix-for-renewable-energys-big-problem/
The Drug Crisis: Problems and Solutions for Local Policymakers

From Urban to Rural: The Spread of the Drug Crisis

Between 2000 and 2021, the annual drug overdose (OD) death rate in the U.S. quadrupled, to roughly 107,000 deaths by the end of last year.[1] Drug overdose deaths now routinely exceed those from homicide, suicide, car crashes, and many medical causes.[2] By best estimates, the drug OD death rate is now six times higher than its highest point in the 20th century[3] and well above any point since the initiation of the modern drug-control regime.[4]

Public perception of this crisis lagged behind its growth. About 20 years ago, drug OD deaths were rising disproportionately among white, middle-aged Americans in predominantly rural or small metropolitan areas, particularly in Appalachia and the upper Midwest, areas particularly affected by the decline in manufacturing employment. As depicted in books like J. D. Vance’s Hillbilly Elegy or Beth Macy’s Dopesick, the white, rural drug crisis is often thought to combine longstanding socioeconomic disenfranchisement with the acute effects of predatory pharmaceutical firms, which flooded communities with highly potent prescription opioids and created iatrogenic (physician-caused) addiction.

While this picture remains accurate in some regards, it is also incomplete. Figure 1 breaks out OD death rates by level of urbanization for three years: 2000, 2010, and 2020.[5] The percentages over the 2010 and 2020 bars represent the percent increase in the OD death rate compared to the same urbanization in the prior decade.

Figure 1

Drug Overdose Death Rates in 2000, 2010, and 2020, by County Urbanization

As of the year 2000—near the beginning of the crisis—OD deaths were more common in more urban counties, that is, those that contained large or medium-sized cities or large cities’ suburbs (“large fringe metro”),[6] and less common in small cities, towns, and rural (“NonCore [Nonmetro]”) counties. Over the next 10 years, OD death rates rose across urbanizations, but they rose more—as captured by the percent change—in less urban areas. The total effect was that by 2010, OD death rates were higher in more rural areas than they were in more urban ones, particularly in the most rural counties—creating the impression of the crisis as a predominantly rural problem.

But between 2010 and 2020, the pattern again reversed. Although all urbanization types saw a large increase in OD death rates, the increases this time were largest for cities. Although the drug crisis continued to worsen in rural areas, the increases in OD deaths in cities were so extreme that cities overtook rural areas and again had the highest death rates.[7]

Figure 2 shows OD death rates by year, urbanization, and type of drug involved. This chart shows in clearer detail the phenomenon captured in Figure 1. In particular, it shows how precipitously from 2000–10 OD deaths rose in small-town and rural America, driven primarily by prescription opioids (included in “other opioids” in the CDC’s system). Then, between 2010 and 2020, illegally manufactured fentanyl (IMF) and other novel synthetic opioids arrived on the scene, eclipsing death rates from other types of drugs. This is particularly true in large and medium cities, where illegal opioids (formerly heroin, now also IMF) are a far larger problem than methamphetamine, which is a bigger issue relative to IMF for more rural Americans.[8]

Figure 2

Drug Overdose Death Rates 1999–2020, by Type and Urbanization

These data illustrate what drug policy scholar Daniel Ciccarone has called the “triple wave” epidemic.[9] A first iatrogenic wave was caused by the widespread distribution, diversion, and consumption of prescription opioids, particularly Purdue Pharma’s OxyContin.[10] Policymakers responded to this wave through pressure on pharmaceutical firms and tighter controls on prescription opioids. Those now-dependent individuals responded by switching from prescription opioids to heroin, igniting the second “wave” driven by heroin deaths.[11]

The third wave was ushered in by drug suppliers introducing synthetic opioids into the drug supply. This shift was the product of a variety of factors: the acquisition of relevant chemistry knowledge by Mexican drug trafficking organizations, China’s production of fentanyl, and then, later, grey-market precursor chemicals.[12] Synthetic opioids are, from a producer’s perspective, a vastly superior product: easier and much cheaper to produce and transport, as well as more potent and therefore more compact.

Whereas the second wave stemmed from people with an opioid-use disorder joining an (existing) illegal market, the third wave was instigated by the producers. As one RAND analysis notes, “dealers are not transparent when it comes to the distribution of synthetic opioids, using them to adulterate heroin or pressing them into tablets made to look like prescription medications.”[13] Unlike the iatrogenic wave, there is no reason to believe that fentanyl has drawn in many more users. Rather, the wave of deaths comes from increasing the death rate of a preexisting stock of users.[14] In fact, users are often alarmed by the introduction of far more potent fentanyl into their supply of illegal opioids (that had previously been mostly heroin).[15]

Many other drugs are now adulterated with IMF. Such multidrug combinations are increasingly common and particularly dangerous. Drug deaths regularly involve multiple substances, with the simultaneous presence of opioids and cocaine, methamphetamine, benzodiazepines, or alcohol leading to death.[16] Figure 3 shows drug OD death rates, distinguishing between deaths that did and did not involve synthetic opioids.

Figure 3

Drug Overdose Death Rates 1999–2020, by Urbanization and Synthetic Involvement

This figure recapitulates Figure 2, insofar as IMF is a bigger share of the problem in large and medium-sized cities than it is in small cities and rural areas. Absent IMF-involved deaths, for example, the OD death rate in more urban areas would be below its 2016 peak; in less urban areas, it would be at or above it. But Figure 3 underscores that IMF in isolation is not the problem. Rather, it is IMF leaching into the urban drug supply more generally, which can cause people with histories of (comparatively) less harmful use to experience deadly overdoses.[17]

How Local Leaders Can Respond

Drug policy interfaces uncomfortably with America’s federalized system. Drug selling, consumption, addiction, and death play out at the local level, but supply is driven by forces at the national and international scale—high-level, wholesale distribution organizations that span state borders, the international drug market and border enforcement, and the socioeconomic factors that marginally affect the tendency to produce and consume drugs, among others. It may be easy for local policymakers to feel like the drug crisis is simply too big a problem for them to cope with, and that only national or international action can stem the tide.

While drug policy must be conducted at all levels of government, there is a significant role for local leaders to play in combatting the crisis. In this section, I discuss policy approaches uniquely suited for the local level.

Naloxone Access/Distribution

Naloxone is an FDA-approved medication that can rapidly reverse the effects of opioid overdose by binding to opioid receptors without causing the respiratory depression and other harmful effects of opioids that lead to overdose.[18] Because of these potent properties, naloxone is a powerful lifesaving tool. Many jurisdictions already equip first responders with naloxone, while some have experimented with wider distribution, including to people at risk of OD or to the general public.

But does such distribution have a significant impact on overdose death rates? In allocating scarce dollars, is naloxone a good use of money? And to whom should naloxone be distributed?

One view is that widespread naloxone availability will uniformly reduce overdose deaths by increasing the probability that the drug will be available at the time of overdose. Some argue, however, that this effect could be counterbalanced by “moral hazard”—by reducing the risk associated with each individual use, greater naloxone access may lead to more use, cancelling out beneficial effects or actually leading to more overdose deaths. It is also the case that having a naloxone supply is not necessarily enough: just because naloxone is available does not mean that people will use it.

One way to understand the effect of naloxone availability on OD deaths is to look at research on “naloxone access laws” (NALs). Such laws make naloxone more available, for example, by allowing the purchase of naloxone with or without prescription, or preventing prescribers or bystanders from being held liable for administering naloxone to someone other than the person to whom it was prescribed or in the process of a criminal act (i.e., consuming drugs).[19] What is the impact? A comprehensive review of studies on NALs finds that the passage of NALs is generally associated with either no effect on or a decrease in opioid overdose mortality.[20]

Not all NALs are the same, however, and different laws may have different effects. Of the studies covered by the aforementioned review, one found that NALs immunizing prescribers from liability reduced OD deaths by 23%, while other NALs had no impact.[21] Another found that removing criminal liability for possession of naloxone was associated with a 16% reduction in overdose mortality.[22] And a third found that the only NAL with an impact contained provisions that allowed pharmacists to dispense naloxone directly, reducing OD deaths by 34%.[23] Lastly, one paper found no effects of NALs except for laws permitting civilians to administer naloxone, which was associated with a significant increase in OD deaths.[24] Why so much ambiguity? One possible answer is that naloxone-access laws tend to go into effect at the same time, often across many locales, which means there is little variation to exploit for purposes of identifying their effect. There may be effects that we can’t easily measure.

What should policymakers take away from this (confusing) evidence? One safe conclusion is that there is little evidence that NALs, with the possible exception of lay administration laws, lead to an increase in OD deaths, and at least some evidence that they reduce them. Another reasonable inference is based on the insight that NALs expand access to people other than first responders. If there is some evidence that giving naloxone to the man on the street can reduce OD deaths, it stands to reason that trained responders equipped with naloxone are very likely to reduce deaths.

Another way to think about naloxone is in terms of costs and benefits. Naloxone is relatively cheap—usually somewhere around $20 to $50 per dose for the generic form and up to $150 per dose for name-brand Narcan.[25] That cost is almost always worth it, insofar as naloxone is rarely going to be used in a situation where the harm it prevents is worth less—for example, in terms of quality-adjusted life-years—than the cost of using it. There may be individuals on the margins for whom naloxone use is harmful, but it’s likely that in most individual instances, it is beneficial.

Local policymakers, then, should certainly support giving their first responders—EMTs, firefighters, and police officers—naloxone. In addition, distributing naloxone to the average person on the street, and even the average person who uses drugs, is as at least unlikely to do net harm, and quite possibly, it may have a net benefit. Policymakers worried about the risk of “moral hazard” should think about how to minimize downside without losing the upside of naloxone distribution. Distribution could be tied to offers of treatment, for example, and repeated emergency department visits as a sign that someone may need more intensive attention. Fitting naloxone into the matrix of other drug treatment and control methods, in other words, may help to mitigate any risks associated with it while preserving its benefits.
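The cost-benefit logic above can be made concrete as a break-even calculation: a distributed dose is worthwhile in expectation if the probability it averts a fatal overdose, multiplied by the value placed on the life saved, exceeds its price. All numbers below are illustrative assumptions, not figures from this paper:

```python
# Break-even sketch for naloxone distribution. A dose "pays for itself" when
# P(averts a death) x (value of a saved life) exceeds the dose cost. The
# $1,000,000 value-of-life figure is a deliberately conservative placeholder.

def breakeven_probability(dose_cost, value_of_life_saved):
    """Minimum chance a dose averts a fatal OD for distribution to break even."""
    return dose_cost / value_of_life_saved

for price in (20, 50, 150):  # generic low/high and name-brand Narcan, per above
    p = breakeven_probability(price, 1_000_000)
    print(f"${price} dose breaks even if {p:.4%} of doses avert a death")
```

Even at the name-brand price, the break-even probability is tiny, which is why the paper concludes the cost is "almost always worth it."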

Invest in Treatment Capacity

Drug treatment is a dramatically underutilized resource. The federal Substance Abuse and Mental Health Services Administration (SAMHSA), a division of the Department of Health and Human Services, estimated that as of 2020, some 40.3 million Americans needed treatment for a substance-use disorder (SUD), including about 18.4 million suffering from an illicit drug–use disorder, and 4.2 million suffering from a substance-use disorder related to an illicit drug other than marijuana. By comparison, only about 4 million Americans actually received treatment, roughly two-thirds of whom were in treatment for a drug-use disorder (many in addition to an alcohol-use disorder).[26]

One explanation for this “treatment gap”—sometimes the only explanation offered—is inadequate funding of treatment. But some of the gap also reflects disinterest on the part of users. Some disagree with the premise that they need treatment—SAMHSA’s definition of a SUD is quite broad, and denial is a hallmark of addiction. Yet even among people who said they needed treatment, only one in five actually made an effort to get it.[27] In other words, even among those who believe they would benefit from drug treatment, many are not actually getting into treatment.

Of course, there are many barriers to getting someone into treatment: health insurance, stigma, lack of childcare, etc. And not all drug treatment is created equal—some approaches are far more efficacious than others (a full review is beyond the scope of this paper, but the issue is treated briefly below). That acknowledged, it is worthwhile for municipal policymakers to ask holistically whether they have adequate capacity to meet the demand for drug treatment, and whether they are successfully connecting those who want help with the help that they need.

Local government has a role to play in treatment access. As of 2020, local, county, and community governments operated about 4% of treatment facilities—a small share but double that administered by state, federal, or tribal governments.[28] And government at all levels funds drug treatment that is provided by non- and for-profit providers.[29] In addition, local government often oversees drug treatment in jails, a place where some people get access to treatment while others lose it, thereby increasing their risk of an overdose. Government can also play a role in advertising for drug treatment and connecting people with substance-use disorders to treatment through the social services system, criminal justice system, and street outreach.

Treatment for substance-abuse disorders is not bulletproof. The National Institute on Drug Abuse estimates a 40 to 60% relapse rate, which NIDA attributes to a failure to understand treatment as an ongoing, rather than one-time, intervention.[30] On the other hand, much treatment depends on therapeutic approaches, from cognitive behavioral therapy to contingency management. In the case of opioid-use disorders, however, several medications—buprenorphine, methadone, and naltrexone—exist to effectively substitute for harmful opioids, mitigating cravings without leading to respiratory depression and death. Such “medication assisted treatment” has been shown to substantially reduce overdose mortality.[31] Although local government cannot necessarily set regulations about the use of “medication assisted treatment,” it can in many cases encourage or even mandate its use in treatment facilities or jails it oversees, funds, or operates.

Cost estimates for treatment will necessarily vary based on the kind of treatment. NIDA estimates, for example, that methadone maintenance costs roughly $4,700 per year per patient; somewhat older estimates (vintage 2001–02) peg the cost of outpatient treatment at between $1,500 and $8,200 per patient per year, though these may have changed.[32] Those costs can be balanced against their benefits: analysis usually indicates that the social benefits yielded by drug treatment are worth the cost, with one commonly cited estimate being seven dollars saved to one dollar spent.[33] This comes from reduced health harms as well as reduced social harms, as expanding substance abuse treatment also reduces crime in the surrounding area, for example.[34] But they can also be balanced against alternatives: as mentioned below, the cost of methadone maintenance therapy is similar to the cost of supervised consumption.

Limited as it is, treatment is often the only way to mitigate the long-run risks and harms of disordered drug use. Filling the “treatment gap” is not innovative, but it is lifesaving, and local leaders should ask if they have done so before trying other, more daring, approaches.

Drug Courts

Many people with substance-abuse disorders interact with the criminal justice system, either because they are apprehended for drug possession/distribution, because of behavior driven by their SUD (for instance, theft or public disorder), or simply incidentally, that is, they both offend and have a SUD. In many jurisdictions today, some of these people are diverted from the regular criminal justice process of trial, conviction, and incarceration to specialized courts meant to target the “root cause” of their behavior, namely their drug abuse. Such courts usually offer enrollees (“clients”) an opportunity to have their sentences reduced or waived altogether, assuming they complete a course of drug treatment actively overseen by the court, including through regular check-ins and the dispensing of rewards and punishments. The model depends, in essence, on a carrot-and-stick approach: if a client gets clean, he stays out of jail; if he fails, he gets locked up.[35]

The first drug court opened in 1989 in Miami-Dade, Florida, in response to the crack cocaine epidemic.[36] As of 2014, there were over 3,000 drug courts nationwide, serving an estimated 127,000 participants. That number of drug courts represents a 24% increase over 2009, meaning almost certainly that even more exist today. As of 2014, drug courts cost an average of $6,008 per participant, though the range of reported costs extended from $1,200 to $17,000, and much data was missing.[37]

Drug courts appear to have a significant effect on clients’ tendency to reoffend—a proxy for both their drug use and the harmful effects of their drug use on their communities. Multiple meta-analyses find that drug courts reduce enrollees’ recidivism rates by between 9 and 24%, albeit with substantial heterogeneity in the effectiveness of different kinds of drug courts.[38] These effects persist for at least three years after leaving the program, with one study finding effects up to 14 years later.[39] A 23-site study confirmed that drug court involvement not only reduced criminal behavior but also actual drug use.[40] These findings suggest drug courts are a winning option for local executives who want to reduce crime while being compassionate.

Depending on the laws within their jurisdiction, local administrators have the power to establish drug court programs. This is in part because many judges are locally elected, and successful drug court programs require a committed judge to spearhead the effort. But it is also because a successful drug court program requires coordinating many resources—access to treatment, housing, jail capacity, etc.—across multiple local government agencies. Such coordination requires local administrative competency to get it done, and so local executives should see doing it as partially their responsibility.

Wastewater Tracking

In spite of the scale of the drug overdose crisis, the U.S. continues to struggle with tracking drug overdose and use in real time. The elimination of previous surveillance systems, namely the Drug Abuse Warning Network (DAWN) and Arrestee Drug Abuse Monitoring (ADAM) programs, has effectively blinded policymakers at a time when up-to-date information is essential.[41] The CDC has recently begun reporting provisional and partial estimates of monthly OD deaths on a one-month lag. Local coroners can track deaths, and local emergency departments can track nonfatal overdoses, but a death or overdose is the outcome we want to avoid. Policymakers at all levels of government need tools for tracking drug use in real time, so as to enable a proactive response before drugs kill.

One solution is to track the level of drug metabolites—the excreted byproduct of metabolizing a given substance—in municipal wastewater. Wastewater testing has been shown to be an effective indicator for nearly two decades, and it has been used to compare drug consumption rates in major European cities.[42] Australia has conducted a nationwide drug wastewater surveillance program since 2016, and as of 2021 was monitoring the consumption of 12 substances across 58 waste-treatment sites covering 57% of the population.[43] It is in many ways preferable to surveys, as it can be tracked day-to-day and avoids social desirability bias—people can lie, but pee can’t.

The information derived from wastewater surveillance can be used for a variety of purposes. It can, according to one review of the approach, “monitor temporal and spatial trends in drug use at different scales, provide updated estimates of drug use, and identify changing habits and the use of new substances.”[44] Because it can tell authorities what drugs are being used, and where, in real time, it allows the targeting of relevant resources—law enforcement, social services, treatment, etc.—both to the areas where they are most needed and with the tools they most need (naloxone, e.g., is more useful in the case of a surging opioid OD than a methamphetamine OD). The great boon of wastewater surveillance is that it allows real-time response to trends, precisely what is missing from many locales’ toolboxes.

Wastewater surveillance is particularly attractive because it is universally identified as a cost-effective method of gathering information compared with more conventional approaches.[45] The European Monitoring Centre for Drugs and Drug Addiction estimates the cost of analysis at between €100 and €200 (roughly $105 to $210) per sample, where a single sample represents many, many people—the number of people contributing to a waste-treatment site could be tens of thousands or more.[46] Comparisons from other wastewater-based epidemiology approaches are instructive: wastewater monitoring of alcohol, nicotine, and caffeine consumption reduced costs from roughly $127 per person (the cost of a conventional questionnaire) to $0.58 per person, a roughly 200-fold reduction.[47] A World Bank analysis pegs the cost of wastewater surveillance for Covid at between $0.20 and $3 per person per year.[48]

The U.S. has lagged far behind in this area, but some cities are now using wastewater testing to track the Covid-19 pandemic, working with the CDC under the auspices of the National Wastewater Surveillance System.[49] Wastewater testing allows cities like Boston and Minneapolis to project a rise in Covid cases before they impact hospitals;[50] wastewater testing for drugs can produce similar ahead-of-the-curve information. In fact, the infrastructure already in place for Covid-19 testing should lower the start-up cost of adding wastewater testing to the fight against opioids.

Supervised Consumption Sites

A number of major cities across the U.S.—including New York, Seattle, and Philadelphia—have opened or attempted to open “supervised consumption sites” (SCSs) within their jurisdictions.[51] Such venues, sometimes also called “safe consumption sites” or “overdose prevention centers,” offer a place for people to use drugs under supervision—usually by medical professionals—who can administer oxygen, naloxone, or other lifesaving support as necessary. Although officially sanctioned SCSs are new to the U.S., they have a long history in other countries, with over 120 sites operating in Australia, Canada, and across Europe. Unsanctioned SCSs have operated in the U.S. since at least 2014.[52]

Proponents of SCSs argue that they reduce drug-related harms for existing users, both because they make it easier for people using drugs to access overdose-reversing medication and because they can connect those same people to sterile use equipment (needles, pipes, etc.) and treatment. Critics, meanwhile, argue that they normalize drug use, may only delay overdose deaths (since most SCS clients continue to also use outside of SCSs) without necessarily leading to cessation of drug use, all while contributing to crime, disorder, and drug consumption in the broader community. They also note that SCSs supervise only a small share of use sessions—about 5% of use sessions in Vancouver in the early 2000s, for example.[53] That small share of use sessions is likely concentrated among the most risk-avoidant (and therefore least overdose-prone) users, mitigating the claimed benefits.

The total evidence, in either direction, is best characterized as both lacking and somewhat mixed. Research focuses disproportionately on just two SCSs (one in Vancouver and one in Sydney), which may not be representative. One RAND Corporation review of the literature notes that most of the studies on SCSs—especially those which underpin estimates of their cost-effectiveness—“merely report associations that do not permit causal inference.” The studies offer, for example, evidence that safe drug use—sanitary practices, say—is positively correlated with frequency of visits to Vancouver’s SCSs. This association could mean that SCS attendance drives safer use, but it could also mean that those who are prone to safer use anyway are also more likely to go to SCSs.[54] Such problems limit what we can conclude about SCSs’ effects in general.

Several studies do offer quasi-experimental measures of the effects of SCSs. Two studies based on the SCS established in Sydney, Australia, find that it reduced opioid-related calls for ambulance service relative to the rest of New South Wales, but not actual overdose deaths.[55] Another, based on Vancouver, identified a reduction in overdose mortality in the half kilometer around the SCS relative to the rest of the city.[56] The causal literature also finds no adverse effects on crime and few adverse effects except possibly on public disorder.[57]

There are issues, however, about which the literature does not speak. It is particularly hard to isolate the general effect of SCSs on the intensity and extent of drug use. Use at SCSs may delay, rather than accelerate, transition to treatment, increasing the cumulative lifetime risk of overdose death outside of the facility.[58] And tacit social sanction of drug use, even in the name of harm reduction, may shift potential users’ willingness to start using at the margin. There has also not been much attention in the literature to a SCS’s possible effects on neighbors, including effects on their sense of safety or property values.

These concerns are speculative, of course. But negative social effects from SCSs have been documented. A 2020 review by the government of Alberta found that the seven SCSs the province had opened since 2018 were associated with an increase in drug OD deaths, drug-involved aggressive behavior, crime, and debris in their immediate vicinity. The committee responsible for the review noted that, “Except for Edmonton, stakeholder feedback predominantly suggested that the SCSs have had a negative social and economic impact on the community. In Edmonton, however, there were reports that stakeholders felt intimidated and were prevented from expressing their true sentiments and opinions about these sites out of fear of retribution from site supporters.”[59]

SCSs are also probably illegal under the “Crack House Statute” of the federal Anti-Drug Abuse Act (ADAA) of 1986, which makes it unlawful to “knowingly open, lease, rent, use, or maintain any place, whether permanently or temporarily, for the purpose of manufacturing, distributing, or using any controlled substance.”[60] Advocates argue that that prohibition does not apply when a “third-party visitor” acts with criminal intent, and that the original intent of the ADAA did not include the regulation of “public health facilities.”[61] But federal courts sided against Philadelphia’s SCS, which made that argument in its legal dispute with the Department of Justice.[62] The Biden administration has signaled that it may not interfere with more recent city experiments with SCSs, although such a stance is likely to change from administration to administration.[63]

Should these issues scare policymakers off from setting up SCSs? Bracketing the question of legality, not necessarily. The scale and urgency of the drug crisis may mean that radical policy solutions are the appropriate way forward. At the same time, there is much we do not know about the true impact of SCSs—they may have positive impacts, no impact, or negative impacts. And they may or may not be the best use of scarce dollars.

As ever, it is worth weighing costs against benefits. Most cost-benefit analyses focus on Vancouver’s InSite SCS, which cost between $2 and $3 million (Canadian) per year in 2007 dollars, including about $1.5 million for the supervised injection component of its services (i.e., not counting counseling, primary health care, or other services).[64] That amounts to an annual operating cost of between $2 and $3.1 million (U.S.) in today’s dollars, including about $1.5 million in the cost of providing supervised consumption services.[65] Is this cost worth it? It depends on how one thinks about it. As the RAND report notes, “supervising all injections for someone who uses twice a day could cost approximately Can$5500–7300 per year. That might be in the same range as the cost of providing methadone for a year to a patient in the United States.” Whether a SCS or treatment is a better use of the marginal dollar is a matter of municipal priority.

If a municipality does opt to set up a SCS, it should first and foremost prioritize rigorous evaluation of its impact. That should include both quantitative metrics—overdose-related deaths and ER admissions, drug-associated and not-associated crime, even measures of disposed needles—and qualitative ones, including regular surveys of all relevant stakeholders, including neighbors. If individual people cannot be pseudo-randomly assigned to access the SCS, then policy evaluators should at least predesignate “treatment” and “control” areas of the city that are substantively similar on observable qualities and measure how the introduction of the SCS affects the aforementioned metrics.

In other words, policymakers should understand any foray into SCSs not as a guarantee of success but as an experiment. Experimentation may be warranted, but it should be regarded with the sober caution that experimental policymaking demands.

Drug Market Interventions

Much of drug control policy works either by reducing demand—discouraging people from initiating drug use or helping them desist—or by reducing the harms of drug use. Supply-side approaches are the province of law enforcement: the interdiction of drugs and the cash it generates, and the arrest and incarceration of those who benefit, from street-level dealers up to kingpins.

A longstanding question in drug policy is how, or if, these policies can be effective. On the one hand, it is likely that policing can reduce the criminal behavior incidental to and often overlapping with drug use or sales. But it’s less clear how effective law enforcement is at controlling supply. Arresting low-level dealers or sweeping up even tons of drugs is just a drop in the bucket of the massive international drug market. The rise of synthetic drugs likely exacerbates this problem because the dramatic cost-savings they have brought make it harder for supply reduction to drive up the price and thus reduce use.[66] This is a particular challenge for local administrators, who lack the reach or capacity to bring down international drug smuggling rings, a task usually reserved to the DEA. Local cops can often feel like arresting drug dealers and users is little more than an exercise in catch-and-release.[67]

In recent years, however, there has been promising evidence in favor of another supply-side approach, the “Drug Market Intervention” or “Initiative” (DMI). The approach applies policing methods of “focused deterrence” or “pulling levers,” which entail targeting the small number of offenders who drive the majority of crime in a given area.[68] DMIs target areas with high concentrations of active drug sales, the “drug markets” that crop up in many American cities and are a significant contributor to OD deaths, at least as measured by spatial concentration.[69] During a DMI, a police department will identify drug dealers in a target area, build cases against them, then execute a “call in,” in which offenders are rounded up and notified that they can either get out of the drug game—with the support of the city’s arrayed social services—or go to prison.[70] Doing so creates a clear and powerful disincentive for offenders to continue to deal, a deterrent far more certain than the often random arrests under the conventional approach.

The earliest DMI was set up in High Point, North Carolina, in 2004. It initially targeted a large drug market in the city’s West End neighborhood, then expanded to three other drug markets. During implementation, “overt drug activity … was almost entirely eliminated,” with no displacement of drug activity to other areas. Violent crime also fell, driven by the areas where the drug markets were.[71]

Since then, a number of other jurisdictions have created their own DMIs, with varying levels of success. Successful implementations have been identified in Rockford, Illinois, and Nashville, Tennessee.[72] But a DMI in Peoria, Illinois, did not have a significant impact.[73] And in a seven-site study of implementation, just four held any actual call-in meetings, and only one successfully reduced crime overall.[74]

This last finding, the authors note, reflects the challenges of following the original High Point model successfully: “The DMI program was challenging for sites to implement and resulted in significant reductions in crime in the site with the implementation fidelity that was highest and most similar to the original site.”[75] Indeed, implementing a DMI is more complicated than doing standard drug busts. They entail coordination across government, including the police department, state/district/county attorneys, social services providers, mayors’ or other executives’ offices, and, ideally, civil society actors like nonprofits and churches. They can also be costly: a study of DMIs implemented in two cities suggested they cost police departments on the order of $100,000 to $150,000 per intervention.[76]

That said, the cost and risk of failure should be balanced against current approaches to local-level interdiction, which are often of limited efficacy. Particularly given the scale of the crisis, any jurisdiction willing to consider radical harm-reduction interventions (e.g., SCSs) should also be willing to consider a DMI.

Conclusion

Thanks in large part to miraculous medical innovation, Covid-19 is now far less a threat to life today than it was two years ago. But as one epidemic recedes, another continues. Unlike Covid, there is no sign of the drug crisis abating: it is likely that, at current rates, drugs will eventually kill more people cumulatively than Covid did in its first three years. There does not appear to be any vaccine or any virus-destroying medication coming.

But while the current scale of death is unprecedented, there are steps that government, particularly local government, can take to stem the tide. Naloxone distribution, treatment capacity, and drug courts are all effective tools for reducing death and minimizing the harms of drug use. Wastewater surveillance, long an underappreciated tool in the U.S., is primed for expansion thanks to its use during the pandemic. More experimental approaches like SCSs or DMIs are also worth investigating, assuming local leaders do so with an eye to their harms, benefits, and cost.

The drug crisis is not the rural problem it was 10 years ago. It is now, more so than ever, everyone’s battle. Local leaders are on the front lines of this conflict—it is incumbent on them, therefore, to act.

About the Author

Charles Fain Lehman is a fellow at the Manhattan Institute for Policy Research, working primarily on the Policing and Public Safety Initiative, and a contributing editor of City Journal. He has addressed public safety policy before the House of Representatives, at universities including Cornell and Carnegie Mellon, and in the Wall Street Journal, Dallas Morning News, New York Post, National Review, and elsewhere. He was previously a staff writer with the Washington Free Beacon, where he covered domestic policy from a data-driven perspective. Lehman graduated from Yale in 2016 with a BA in history.

Acknowledgments

The Manhattan Institute thanks the Klinsky Leadership Series for its support in the publication of this paper.

Endnotes

Please see Endnotes in PDF

Wed, 03 Aug 2022 23:16:00 -0500 https://www.manhattan-institute.org/drug-crisis-problems-and-solutions-for-local-policymakers
Killexams : Former Trump aide in trouble for not turning over government emails on private account

Best listening experience is on Chrome, Firefox or Safari. Subscribe to Federal Drive’s daily audio interviews on Apple Podcasts or PodcastOne.

  • The Justice Department is suing Peter Navarro, the former White House trade policy director, claiming he did government business on a private email account and never turned records over to the National Archives and Records Administration. If true, that would violate the Presidential Records Act. DOJ claimed both NARA and its own attorneys have tried to negotiate with Navarro. The government said he’s refused to do so unless he’s guaranteed immunity from prosecution over anything those records reveal.
  • The Office of Personnel Management is getting a second-in-command. President Joe Biden nominated Rob Shriver to be the OPM deputy director. Shriver is a political appointee already, having been the associate director for employee services for the past 18 months. If confirmed by the Senate, he would be the first OPM deputy director since January 2021, when Michael Rigas left the role. (Federal News Network)
  • Three agencies have received new money to accelerate customer experience improvements. The Technology Modernization Fund Board and Office of Management and Budget delivered their first installment of money as part of their commitment to spend $100 million to improve citizen services. The TMF Board and OMB announced yesterday they were awarding $26.8 million to three agencies to address legacy systems that serve the public. AmeriCorps received a loan of $14 million. The Labor Department won $7.2 million and USAID will receive $5.6 million to accelerate different modernization efforts. The TMF Board has now made 29 awards worth more than $500 million. (Federal News Network)
  • For the fourth year in a row, the General Services Administration IT won the 5-star EPEAT Purchaser Award. The award is for excellence in sustainable procurement of IT products and services. The Global Electronics Council gives the award annually to recognize organizations that purchase and sell environmentally friendly electronic products. GSA recycles its equipment by donating it to schools and other institutions or by repurposing it.
  • The nominee to serve as the State Department’s top cyber ambassador is laying out his priorities. Nathaniel Fick is the Biden administration’s pick to be the first ever ambassador-at-large for Cyberspace and Digital Policy. Fick told the Senate Foreign Relations Committee that his first priority is building more expertise in cyber and digital technologies at the State Department. “I can imagine a future where any candidate to be a chief of mission is expected to have an understanding of these issues, because they’re a substrate that cuts across every aspect of our foreign policy,” Fick said. Fick is a retired Marine Corps officer and technology executive. If confirmed, he would head up the State Department’s new Bureau of Cyberspace and Digital Diplomacy, just established in April.
  • The Air Force is trying to amp up its artificial intelligence and data network with a new conference. The first-ever Data and AI Forum will bring together top Defense technology officials with industry and academia. The summit will be held in Massachusetts at the end of August. The Defense Department and military services are putting increased resources into data and AI in order to improve weapons systems.
  • TRICARE beneficiaries got a reprieve from telehealth copays during the pandemic, but that’s about to change. The Defense Health Agency said it will no longer subsidize telehealth appointment copays now that threats from COVID-19 are easing. The rule will not go into effect until the agency informs providers of the change. Currently there is no timeline for that process. DHA said appointments made by telephone will still be copay-free. DHA decided not to reinstitute those charges after public comments from the American Medical Association and other health institutions. The Defense Department will recoup nearly $5 million a month by reinstating telehealth copays. (Federal News Network)
  • The Navy has a new top intelligence officer. Rear Adm. Mike Studeman assumed command of the Office of Naval Intelligence and directorship of the National Maritime Intelligence-Integration Office on Aug. 1. He took over for Rear Adm. Curt Copley, who had led the command since June 2021. Studeman was previously director for Intelligence at the U.S. Indo-Pacific Command.
  • The Labor Department has a new leader for diversity, equity, inclusion and accessibility initiatives. The agency named Alaysia Black Hackett as its chief diversity and equity officer. Hackett is the first person to hold the position for DOL. The agency created the role after the White House tasked all agencies with naming a chief diversity officer, as part of an executive order to advance DEIA in the federal workforce.
  • Federal first responders who are disabled are one step closer to securing equal retirement benefits. In a unanimous vote, the Senate Homeland Security and Governmental Affairs Committee passed the First Responder Fair RETIRE Act. The legislation would let disabled federal first responders, like firefighters and law enforcement officers, continue receiving the same retirement benefits as all first responders in the government. The House unanimously passed partner legislation last month, and the bill now moves to the full Senate for consideration.
  • The Department of Veterans Affairs’ programs for identifying employees on its networks face some serious problems, according to a new report by the department’s inspector general. The IG said the governance process for identity, credentialing and access management, or ICAM, is spread across different offices that don’t agree on how to implement ICAM. The review found the lack of cooperation means VA is likely restricting some data from employees who truly need it, and leaving other sensitive data open to workers who don’t.
  • The American Federation of Government Employees and the Veterans Healthcare Policy Institute are calling on leaders in Congress and the Department of Veterans Affairs to fully fund, staff and expand VA resources. The demand comes after AFGE published a survey of 2,300 VA employees and veterans. Some 50% of respondents reported that budget shortages closed VA beds, units and programs. Some 88% of respondents also indicated that their facility needs more clinical frontline staff. According to the Partnership for Public Service, the VA had the highest attrition rate at 7.1%. That’s a 6.4% increase from last year, and 1% higher than the governmentwide average. Despite the agency’s challenges, VA was ranked fifth among the “Best Places to Work in the Federal Government” in an annual report produced by the Partnership for Public Service and Boston Consulting Group. (Federal News Network)
Thu, 04 Aug 2022 00:48:00 -0500 en-US text/html https://federalnewsnetwork.com/federal-newscast/2022/08/former-trump-aide-in-trouble-for-not-turning-over-government-emails-on-private-account/
How the new general manager plans to tackle a damaged Metro

Electrical fires, outages, and the new regulatory report all coming in first week on the job.

WASHINGTON — He only began the job on July 25, but in his second week, the new general manager of the Washington Metropolitan Area Transit Authority (WMATA) has already experienced the full D.C. Metro chaos locals have come to expect.

Moving from Austin, Texas, where he was president and CEO of Capital Metro, Randy Clarke was taking on not only the normal headaches of a relocation but also Sunday night's Metro fire, which shut down a section of the Red Line in Northwest Washington for nearly three days; IT issues that caused delays on multiple lines; and the ongoing 7000-series train problems stemming from last year's derailment.

"Obviously, a very kind of challenging first couple of weeks obviously personally moving to a new city getting accustomed to a new job," he told WUSA in an exclusive interview. "And then obviously we've had a couple of issues here at Metro over the last couple couple of days." 

Those issues, he says, gave him a window into not just the problems but also the strengths of the agency.

"In some ways, I kind of try to always be optimistic and look at the opportunity side for me somewhat new new set of eyes and the organization to really see where the really strengths in the organization are, but where maybe we have areas for improvement," he explained. "Sometimes when you have a crisis like that you can really quickly see those juxtapositions inside of an organization. So overall, it's been actually a way for me to start analyzing and working towards how we're going to take the organization to the next level."

After that Red Line electrical fire, he went down into the tunnels, even tweeting photos of the work being done, hoping to increase transparency within an agency plagued with problems.

On Thursday, the Washington Metrorail Safety Commission, an independent regulatory agency created by Congress to oversee the system's safety, released a report outlining major safety issues within Metro control rooms, which house the equipment that keeps trains on schedule and prevents collisions.

According to the report, the control room inspected at Friendship Heights had "water leaks, some of which were being caught by buckets placed by Metrorail personnel..." located near "vital automatic train controls (ATC) equipment."

They also found "a layer of dust and debris, which could interfere with the equipment's safe operation."

The regulatory agency had first flagged the issues back in March. Although Metrorail stated it would take action, follow-ups in July and August found it had failed to make progress.

"We appreciate this report. There's things that the team is already has in motion, some contracts related to how we're going to do some like if you will structural repairs to some of these rooms that are really just old and dated," he said in reply to questions about the order. "And then there's a significant way we got to look at our overall maintenance performance standards and resource those standards. So we never, ever have a report that says this is dirty, that is wet, you name it, those types of things. So this is all about building the maturity curve of a maintenance program. So the heart of safety is maintenance."

"That's not going to change overnight, but that is what we are focused on," he  added.

WMATA's board of directors announced that the organization was getting Clarke as the new general manager and CEO in early May. The search and selection of Clarke came after Paul Wiedefeld announced he was retiring from his position as general manager and CEO of WMATA in January.

Clarke also previously worked for Boston’s public transit agency, as well as a transit trade group in D.C.  

Part of his charge is getting the 7000-series trains back in service after the derailment that took them offline last year, and completing the Silver Line, all while continuing to tackle the safety issues that have compounded.

"We have underlying issues. There's no question about standing here publicly saying there's nothing to fix and I have the easiest job in the world. But we're going to be open and we're going to be transparent," he said. "So there's issues but that that report identified issues, we're gonna go and systematically attack those issues."

However, he stressed the overall system is safe, and it's one he continues to ride multiple times a day, even taking photos of elevator outages during his commutes and chatting up passengers, working to rebuild the public's trust in a damaged system.

"I live and breathe transit and therefore therefore I live and breathe Metro. And I'm going to be focused, whether it's a customer or an employee, and other stakeholder you name it to hear their input and try to make the placements," he said.

Fri, 05 Aug 2022 16:23:00 -0500 en-US text/html https://www.wusa9.com/article/traffic/mission-metro/how-new-general-manager-plans-tackle-damaged-dc-metro/65-beb61190-58e5-4296-a30d-823fb3f8d359
New ERDC commander at the Army Corps of Engineers


The U.S. Army Corps of Engineers’ Engineer Research and Development Center, or ERDC, has a new commander. Following ERDC’s first-ever female commander, the first-ever African-American commander is now at the helm. Federal News Network’s Eric White spoke with the new man in charge, Lt. Col. Christian Patterson, on the Federal Drive with Tom Temin.

Christian Patterson  The U.S. Army Engineer Research and Development Center is headquartered in Vicksburg, Mississippi. We’ve got seven total laboratories. Four of them are on our Vicksburg campus and then we have three other laboratories that are in Hanover, New Hampshire; Alexandria, Virginia and Champaign, Illinois. In addition to that, we’ve got a field research facility in Duck, North Carolina, as well as a permafrost research tunnel facility in Fox, Alaska. In terms of our mission, what we do is we work to solve some of our nation’s most challenging problems in civil and military engineering; geospatial sciences; water resources [and] environmental sciences for the Army, for the Department of Defense, civilian agencies and for our nation and, of course, for the warfighter.

Eric White  That is a wide footprint. Can you get any more specific on the kinds of projects that ERDC handles for the Army and the Army Corps of Engineers?

Christian Patterson  You know, it’s an excellent question. It’s so vast. In my previous experience with ERDC, I served as the director of communications, and it was a challenge to keep up with all of the cutting edge science and technology and research and development that’s going on with ERDC, because our researchers often have multiple projects. The way that I kind of describe it is exciting. What I mean by that is, it’s kind of like a Rubik’s Cube. Instead of having nine sides that you have to line up in terms of keeping up with all the great research, it’s like 100. Every single one of our projects is amazing [in] so many special ways, but just to kind of give you an example: we have an improved ribbon bridge project that I just saw the other day with our coastal and hydraulics laboratory. What they’re doing is working on a bridge that will allow for the next generation of tanks to be able to cross from one side to another safely, and so looking at the model that they had in order to be able to test that was amazing. That’s something that’s very important for our warfighters in terms of safety, contributing to mission accomplishment down the road in the future. Other things that are going on that are pretty cool: we’ve got a lot of blast weapons effects testing that is ongoing, improved pavement operations that are going on as far as pavement technologies for different types of aircraft to be able to land on, and then also installation and operational environments is pretty cool. Making sure that the installations that we have in the future are energy friendly, that we use our water resources, everything as efficiently as possible, you know, for our soldiers and our servicemembers. And then at our permafrost research tunnel facility up in Alaska, they’re continuously studying the effects of climate change and microbes, for example. So are there microbes in the soil that could potentially harm us whenever permafrost melts? Some interesting research that’s going on there as well. And then I would like to invite you to take a look out on the web at our Power of ERDC podcast, and on there, we have a great episode about the supercomputing that we’re doing in order to help with the next generation engine for the B-52. Take a look at that, you’ll definitely hear some great in-depth information about the research and development that we have going on with ERDC.

Eric White  Yeah, with such eclectic research, vast seems to be an understatement. How do you keep up with all those different kinds of research? And I guess we can use this to segue into what you have planned now that you are the commander of ERDC.

Christian Patterson  Yes, keeping up is a challenge because, like I said, there’s a lot of it. But our leadership is really good about keeping us informed in terms of what projects are out there, who we’re supporting in terms of those projects and also who we’re working with. One of the great things about ERDC is that we have a ton of relationships throughout the DoD, with interagency [partners] and with academia. That constant collaboration [is] always there in terms of our projects and everything. And you mentioned what my plans are as commander. Well, the commander of ERDC is part of a very big leadership team. Dr. David Pittman is actually the director of ERDC and he is the chief scientist for the U.S. Army Corps of Engineers, so he is the one that provides the vision for the research and development for our ERDC team. Where I come in is that I work shoulder to shoulder with him as the lead over what’s called the installation operations command. So all of those laboratories that I mentioned earlier, they have to have enablers, they have to have a force that helps them to be positioned to do the world class research and development that we do. [We] work with our Resource Management Office, contracting, safety, Department of Public Works, so those types of offices that support the research and development so that we can make everything happen as far as benefiting the nation and the warfighter.

Eric White  Is there a push to diversify the workforce within the STEM fields? You are the first African American commander of ERDC, and your predecessor was the first female commander. In what ways, whether through your job personally or through other ERDC initiatives, do you think you can bring more women and minorities into the engineering field?

Christian Patterson  You know, that’s something that we’re placing a very big focus on is to continue to grow as far as diversity, equity and inclusion. And you know, one of the things about Mississippi, and this is a good example of what we’re doing in order to get ahead in that area, is that we have a lot of historically black colleges and universities (HBCU). And so recently, we’ve done a lot of partnerships and visits in collaboration with Jackson State University, Alcorn State University, we’re in touch with Mississippi Valley State University up in Itta Bena, Mississippi. So we’re continuously building those relationships, not only with HBCUs, but we also have a strong relationship with University of Puerto Rico, Mayagüez and so we’re working in order to diversify our workforce for the future, because the way that we look at it is that those are diverse perspectives that are going to allow us to continue to make the research and development that we conduct even better down the road.

Eric White  And let’s bring the focus back to you getting to this point in your career. Can you tell us a little bit about your background and how you made it this far?

Christian Patterson  You know, it’s interesting, I am a communication focused person. And when I say communications is my focus, I’m going all the way back to speech and debate with, you know, the Byram High School speech and debate team back from 1990 to 94. And so I tell the story all the time about how my father actually made it happen to where I joined the Mississippi National Guard, and it wasn’t my choice. One day, he said, “Hey, Chris, you’re gonna join the National Guard next week.” And I was a senior in high school. And I was like, “Gee, Dad, thanks so much for making this important life decision for me.”

Eric White  Okay.

Christian Patterson  And so the next week came around, and you know, I tried to stay with my grandparents who lived in the same subdivision, so I would not be at home. But there was one day that I left something in my room and I said, “Okay, I’m gonna drive over there, literally be in 30 seconds, grab it and be gone.” Well, once I went in the house, literally 30 seconds and started to walk out, the recruiter was walking up the driveway. So I was trapped. And so we were in the living room, and he said, “Hey, I heard you’re interested in getting in the military.” And I said, “No, there’s nothing in the guard that I want to do.” And he’s like, “Well, what do you want to do?” I said, “I want to be a broadcast journalist. I want to be the next Rob J,” who, at the time, was a local sportscaster with our NBC affiliate in Jackson. He was kind of like the local version of Stuart Scott. And so he said, “You know what, Chris, we’ve got a unit that does that downtown across the street from the stadium. Why don’t you go and visit them just to see if you like it.” So I did and almost 29 years later, here I am. And so the amazing thing is that, from that point, [I] deployed to Bosnia, deployed to Kosovo, deployed to Afghanistan. I got the chance to work with General Milley, Mark Milley, who’s the four star now, and then General James McConville, who’s chief of staff [of the] Army, and outside of that it’s opened up some great opportunities working with Louisiana State University [LSU] football, and then you have ERDC. [I] was asked to come over and be the director of communications three years ago. And a couple of months ago, at the end of my tour, I thought I was going back to the Mississippi National Guard, but then the command opportunity for ERDC opened up. It was not something that was on my radar, and every single day I’m thankful to God for that opportunity, because it was bigger than anything that I could have imagined for my career. Dr. Pittman likes to say that ERDC is Disneyland for scientists, and it really is. We have some things that are going on there that are just truly amazing. Our employees are world class. We work together as a true team. And outside of that, they wanted me to be able to tell the story. We have an awesome story at ERDC and Eric, we invite you to come down and take a look at it for yourself. Once you do you’ll be truly amazed. But we’d invite everybody to follow us on Facebook, visit our website and take a look at what we have going on, because it doesn’t just benefit warfighters; we have things that are benefiting our entire nation and making things faster, cheaper, better and more efficient for our entire country down the road.

Tom Temin  Lieutenant Colonel Christian Patterson is the new commander of the U.S. Army Corps of Engineers Engineer Research and Development Center, speaking with Federal News Network’s Eric White.

Mon, 01 Aug 2022 03:09:00 -0500 en-US text/html https://federalnewsnetwork.com/defense-main/2022/08/new-erdc-commander-at-the-army-corp-of-engineers/
Kentucky teacher shortage: A look at what can be done to keep educators on the job.


Sun, 07 Aug 2022 21:01:00 -0500 en-US text/html https://www.cincinnati.com/story/news/education/2022/08/08/teacher-labor-shortage-jcps-kentucky-schools-fixing-problems/7791341001/
Tencent Cloud EdgeOne provides integrated security protection & network performance services for global businesses

A unified, fast, reliable and secure upgraded one-stop platform that integrates Tencent's years of experience in network performance and security

Along with the rapid development of enterprise digitalisation, new edge computing scenarios and applications have now begun emerging in various industries. Drawing from more than 20 years of experience in technology solutions, Tencent Cloud today announced the launch of Tencent Cloud EdgeOne – an upgraded one-stop platform that integrates Tencent’s experience in network performance and security with high efficiency and stability for global enterprises.

In 2021, given the unprecedented surge of short-video and live-streaming businesses globally, Tencent Cloud launched the RT-ONE™ network to build the foundation of the most comprehensive audio and video communication network in order to meet market needs. To further enhance corporate customers' cyber-security, Tencent Cloud has applied its security technology to the RT-ONE™ network by introducing an upgraded, highly integrated one-stop platform. The new platform offers cutting-edge security capabilities, creating integrated services to fulfil businesses’ requirements on network speed and security features.

Leveraging Tencent Cloud's over 2,800 global acceleration nodes across more than 70 countries and regions, Tencent Cloud EdgeOne allows users to enjoy high-quality network performance without compromising their security. It also features the following advantages:

  • Moves the services to the edge nodes closer to end-users and provides layer-3 (Network), layer-4 (Transport), layer-7 (Application) protection as well as acceleration services to the global market. It also highlights a unified dashboard that greatly reduces the configuration workload and saves time for customers. 
  • Provides a set of dedicated interconnections to accelerate traffic between Tencent Cloud EdgeOne and Origin Server. Additionally, it integrates the domain name system (DNS), which ensures stable and high-performance DNS resolution, and greatly reduces the latency for static and dynamic data.
  • Integrates security features and technologies based on Tencent’s experience in security for over two decades, including but not limited to DDoS protection (Anti-DDoS), web protection (Web Application Firewall), bot management and behaviour analysis (Tencent Cloud bot program management), and adaptive rate-limiting, among others. It deploys security functions on edge nodes closer to users, detecting and mitigating malicious requests before they hit application services.
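As a rough illustration of the adaptive rate-limiting idea mentioned in the list above (this is a generic sketch, not Tencent's actual implementation), an edge node can throttle per-client request rates with a token bucket before any traffic reaches the origin server:

```python
import time

class TokenBucket:
    """Simple token-bucket limiter of the kind an edge node might apply
    per client before a request ever reaches the origin server."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second (steady-state rate)
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
# The first ~10 calls succeed on the initial burst; the rest are throttled.
print(sum(results))
```

Deploying a check like this on the edge node itself, rather than at the data center, is what lets malicious request floods be dropped close to their source.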

Poshu Yeung, Senior Vice President, Tencent Cloud International, said, “With the emergence of technologies such as cloud computing, big data, AI, blockchain, Web3 and the Internet of Things, the digital transformation of enterprises is now considered to be essential. Relying on Tencent’s own security business operation experience of serving 1 billion users, we are excited to see how Tencent Cloud EdgeOne provides users an unprecedented, high-quality and highly reliable network experience as they safeguard their security at the same time.” 

Tommy Li, Vice President of Tencent Cloud, said, “Tencent Cloud EdgeOne can be summarised by an acronym using one word: ACROSS. It stands for Advanced technology; exclusive Connectivity; Real-time service; Optimisation of data; Smart application; and Security protection. Enterprises that are adapting to the growing digitalisation trend are experiencing common problems such as network latency, congestion, and security threats. By applying our sophisticated security technology to our RT-ONE™ network, the new platform Tencent Cloud EdgeOne provides lower latency and built-in security features with outstanding performance, giving businesses and organisations reliable services to address their concerns.”

Applications of Tencent Cloud EdgeOne in various industries

Global businesses, ranging from commerce, retail, financial services, content and news to the gaming, media, audio and video sectors, can now enjoy the benefits of using Tencent Cloud EdgeOne. One of the integrated solution’s recent major clients is a popular online video-on-demand platform in China, which used Tencent Cloud EdgeOne to address issues including degraded user experience, malicious SEO, bot cheating/scamming and content piracy. The platform successfully reduced the video’s loading time by 40 percent, and resource response delay by 50 percent.

Cyberattacks in the e-commerce industry cause business disruption and affect user experience. To avoid the risk of damage from cyberattacks, an online e-commerce platform, which needs to carry out hundreds of billions of daily operations including user account management, activities, billing analysis and other modules, used Tencent Cloud EdgeOne to successfully defend against Challenge Collapsar (CC) attacks with peak traffic of over 9 million QPS (queries per second).

During a new game release by one of the world’s biggest game publishers, security and network capabilities were put to the test when it attracted more than one million users. Tencent Cloud EdgeOne adopted a distributed strategy of preloading and provided DDoS attack protection, web protection, rate limiting, and robot behaviour interception capabilities, ultimately helping the new game to achieve 100% success in downloads with zero business interruption.

Eric Cheng, General Manager of Tencent Security, said, “At Tencent Cloud, we aim to make user access more efficient, and make business more secure as they go full throttle with their digitalisation journey. With the launch of Tencent Cloud EdgeOne, we now have a unified, fast, reliable, and secure platform that can detect and defend against attacks earlier, mitigating malicious traffic right on edges before they reach our data centre.” 

Baal Feng, General Manager of Global DevOps at Tencent Games, said, “The global gaming market enjoys enormous potential, and is expanding rapidly. As more opportunities arise, so do the challenges, such as addressing the extremely high demands for download speed and latency. Network quality determines the download speed and user experience, and Tencent Cloud EdgeOne is now here to provide an integrated service that ensures these demands are met, without compromising security.”

Chang Foo, Chief Operating Officer of Tencent (Thailand), stated, “As a digital enabler, Tencent Cloud Thailand is proud to help empower individuals and businesses of all sizes with convenient, secure access to technology. While digital technology continues to play significant roles in our modern-day times, cybersecurity has become an integral part of work and life both from business and end-user points of view. The launch of Tencent Cloud EdgeOne reaffirms our commitment to being the trusted partner for clients. As well as safeguarding businesses against cyber threats backed by our extensive industry experiences, plus Thailand team experts to provide support and advice for smooth and efficient business operation in the Digital Transformation era."

With the rapid development of the global digital economy, digital innovation and transformation have expanded from the internet to traditional industries and have fully blossomed in the development of, and investment in, cross-border e-commerce, games and other independent applications. Meanwhile, high requirements for business performance and security protection are significant factors in the digital transformation of enterprises. Tencent Cloud aims to continue exporting high-level security capabilities, providing global partners with safe, stable, extremely fast and professional edge-integrated security services, and reliably assisting enterprises in their digital journey.

For more information about Tencent Cloud, click here

Thu, 04 Aug 2022 04:42:00 -0500 text/html https://www.bangkokpost.com/thailand/pr/2361277/tencent-cloud-edgeone-provides-integrated-security-protection-network-performance-services-for-global-businesses
Tennessee’s unemployment website outage is causing problems for applicants

Many Tennesseans have had trouble submitting unemployment benefits or documents on Jobs4You.gov for the last week. People haven’t been able to log into the website, which means more than 12,000 people ...

Wed, 29 Jun 2022 06:31:21 -0500 en-us text/html https://www.msn.com/en-us/news/politics/tennessees-unemployment-website-outage-is-causing-problems-for-applicants/ar-AAZ4qkj

I recently met with Dr. Nick Fuller, Vice President, Distributed Cloud, at IBM Research for a discussion about IBM’s long-range plans and strategy for artificial intelligence and machine learning at the edge.

Dr. Fuller is responsible for providing AI and platform–based innovation for enterprise digital transformation spanning edge computing and distributed cloud management. He is an IBM Master Inventor with over 75 patents and co-author of 75 technical publications. Dr. Fuller obtained his Bachelor of Science in Physics and Math from Morehouse College and his PhD in Applied Physics from Columbia University.

Edge In, not Cloud Out

In general, Dr. Fuller told me that IBM is focused on developing an "edge in" position versus a "cloud out" position with data, AI, and Kubernetes-based platform technologies to scale hub and spoke deployments of edge applications.

A hub plays the role of a central control plane used for orchestrating the deployment and management of edge applications in a number of connected spoke locations such as a factory floor or a retail branch, where data is generated or locally aggregated for processing.

“Cloud out” refers to the paradigm where cloud service providers are extending their cloud architecture out to edge locations. In contrast, “edge in” refers to a provider-agnostic architecture that is cloud-independent and treats the data-plane as a first-class citizen.

IBM's overall architectural principles are scalability, repeatability, and full-stack solution management, allowing everything to be managed using a single unified control plane.

IBM’s Red Hat platform and infrastructure strategy anchors the application stack with a unified, scalable, and managed OpenShift-based control plane equipped with a high-performance storage appliance and self-healing system capabilities (inclusive of semi-autonomous operations).

IBM’s strategy also includes several in-progress platform-level technologies for scalable data, AI/ML runtimes, accelerator libraries for Day-2 AI operations, and scalability for the enterprise.

It is important to mention that IBM is designing its edge platforms with labor cost and technical workforce in mind. Data scientists with PhDs are in high demand, making them difficult to find and expensive to hire once you find them. IBM is designing its edge system capabilities and processes so that domain experts, rather than PhDs, can deploy new AI models and manage Day-2 operations.

Why edge is important

Advances in computing and storage have made it possible for AI to process mountains of accumulated data to provide solutions. By bringing AI closer to the source of data, edge computing is faster and more efficient than the cloud. While cloud data accounts for 60% of the world’s data today, vast amounts of new data are being created at the edge, including from industrial applications, traffic cameras, and order management systems, all of which can be processed at the edge in a fast and timely manner.

Public cloud and edge computing differ in capacity, technology, and management. An advantage of edge is that data is processed and analyzed at or near its collection point. In the case of cloud, data must be transferred from a local device into the cloud for analytics and then transferred back to the edge again. Moving data through the network consumes capacity and adds latency to the process. It’s easy to see why executing a transaction at the edge reduces latency and eliminates unnecessary load on the network.
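The round-trip argument above reduces to simple arithmetic. The millisecond figures below are assumptions chosen purely for illustration, not measured values:

```python
def cloud_latency_ms(network_one_way_ms, compute_ms):
    # Data travels to the cloud, is processed, and the result travels back.
    return 2 * network_one_way_ms + compute_ms

def edge_latency_ms(compute_ms):
    # Processing happens beside the data source; there is no round trip.
    return compute_ms

# Assumed figures: 40 ms one-way network latency, 15 ms of compute.
print(cloud_latency_ms(40, 15))  # 95 ms
print(edge_latency_ms(15))       # 15 ms
```

Even with identical compute time, the cloud path pays the network cost twice per transaction, which is exactly the load and latency the edge model avoids.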

Increased privacy is another benefit of processing data at the edge. Analyzing data where it originates limits the risk of a security breach. Most of the communications between the edge and the cloud is then confined to such things as reporting, data summaries, and AI models, without ever exposing the raw data.
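A minimal sketch of that pattern, using hypothetical sensor values: the raw readings never leave the edge device, and only a small summary is reported upstream.

```python
def summarize_readings(readings):
    """Aggregate raw sensor readings locally so that only summary
    statistics (never the raw data) cross the network to the cloud."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((x - mean) ** 2 for x in readings) / n
    return {"count": n, "mean": round(mean, 3),
            "variance": round(variance, 3), "max": max(readings)}

# Raw data stays on the edge device; only this small dict is sent upstream.
raw = [71.2, 70.8, 73.5, 90.1, 71.0]
report = summarize_readings(raw)
print(report)
```

The same idea scales up to shipping trained model weights or anomaly flags instead of raw video or telemetry, which is what keeps the edge-to-cloud channel both small and privacy-preserving.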

IBM at the Edge

In our discussion, Dr. Fuller provided a few examples to illustrate how IBM plans to provide new and seamless edge solutions for existing enterprise problems.

Example #1 – McDonald’s drive-thru

Dr. Fuller’s first example centered on the quick service restaurant (QSR) problem of drive-thru order taking. Last year, IBM acquired an automated order-taking system from McDonald's. As part of the acquisition, IBM and McDonald's established a partnership to perfect voice ordering methods using AI. Drive-thru orders are a significant percentage of total QSR orders for McDonald's and other QSR chains.

McDonald's and other QSR restaurants would like every order to be processed as quickly and accurately as possible. For that reason, McDonald's conducted trials at ten Chicago restaurants using an edge-based AI ordering system with NLP (Natural Language Processing) to convert spoken orders into a digital format. It was found that AI had the potential to reduce ordering errors and processing time significantly. Since McDonald's sells almost 7 million hamburgers daily, shaving a minute or two off each order represents a significant opportunity to address labor shortages and increase customer satisfaction.
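As a toy illustration of turning a spoken-order transcript into a structured digital order: the keyword matcher below is a hypothetical stand-in (the menu, prices, and parsing logic are all invented for the example), not IBM's or McDonald's actual NLP system.

```python
MENU = {"hamburger": 1.99, "fries": 1.49, "cola": 1.29}  # hypothetical items/prices
NUMBER_WORDS = {"a": 1, "one": 1, "two": 2, "three": 3}

def parse_order(utterance):
    """Very rough stand-in for speech-to-order conversion: scan the
    transcript for quantity words followed by known menu items."""
    words = utterance.lower().replace(",", "").split()
    order = {}
    for i, word in enumerate(words):
        # Crude plural handling: "hamburgers" -> "hamburger".
        item = word if word in MENU else word.rstrip("s")
        if item in MENU:
            qty = NUMBER_WORDS.get(words[i - 1], 1) if i > 0 else 1
            order[item] = order.get(item, 0) + qty
    return order

order = parse_order("I'd like two hamburgers and one fries, please")
print(order)
```

A production system replaces the keyword scan with a trained speech and language model, but the output contract is the same: an unambiguous item-and-quantity structure the point-of-sale system can price and confirm.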

Example #2 – Boston Dynamics and Spot the agile mobile robot

According to an earlier IBM survey, many manufacturers have already implemented AI-driven robotics with autonomous decision-making capability. The study also indicated that over 80 percent of companies believe AI can help improve future business operations. However, some companies expressed concern about the limited mobility of edge devices and sensors.

To develop a mobile edge solution, IBM teamed up with Boston Dynamics. The partnership created an agile mobile robot using IBM Research and IBM Sustainability Software AI technology. The device can analyze visual sensor readings in hazardous and challenging industrial environments such as manufacturing plants, warehouses, electrical grids, waste treatment plants and other hazardous environments. The value proposition that Boston Dynamics brought to the partnership was Spot the agile mobile robot, a walking, sensing, and actuation platform. Like all edge applications, the robot’s wireless mobility uses self-contained AI/ML that doesn’t require access to cloud data. It uses cameras to read analog devices, visually monitor fire extinguishers, and conduct a visual inspection of human workers to determine if required safety equipment is being worn.

IBM was able to show up to a 10X speedup by automating some manual tasks, such as converting the detection of a problem into an immediate work order in IBM Maximo to correct it. A fast automated response was not only more efficient, but it also improved the safety posture and risk management for these facilities. Similarly, some factories need to thermally monitor equipment to identify any unexpected hot spots that may show up over time, indicative of a potential failure.

IBM is working with National Grid, an energy company, to develop a mobile solution using Spot, the agile mobile robot, for image analysis of transformers and thermal connectors. As shown in the above graphic, Spot also monitored connectors on both flat surfaces and 3D surfaces. IBM was able to show that Spot could detect excessive heat build-up in small connectors, potentially avoiding unsafe conditions or costly outages. This AI/ML edge application can produce faster response times when an issue is detected, which is why IBM believes significant gains are possible by automating the entire process.

IBM market opportunities

Drive-thru orders and mobile robots are just a few examples of the millions of potential AI applications that exist at the edge and are driven by several billion connected devices.

Edge computing is an essential part of enterprise digital transformation. Enterprises seek ways to demonstrate the feasibility of solving business problems using AI/ML and analytics at the edge. However, once a proof of concept has been successfully demonstrated, it is a common problem for a company to struggle with scalability, data governance, and full-stack solution management.

Challenges with scaling

“Determining entry points for AI at the edge is not the difficult part,” Dr. Fuller said. “Scale is the real issue.”

Scaling edge models is complicated because there are so many edge locations with large amounts of diverse content and a high device density. Because large amounts of data are required for training, data gravity is a potential problem. Further, in many scenarios, vast amounts of data are generated quickly, leading to potential data storage and orchestration challenges. AI models are also rarely "finished." Monitoring and retraining of models are necessary to keep up with changes in the environment.

Through IBM Research, IBM is addressing the many challenges of building an all-encompassing edge architecture and horizontally scalable data and AI technologies. IBM has a wealth of edge capabilities and an architecture to create the appropriate platform for each application.

IBM AI entry points at the edge

IBM sees Edge Computing as a $200 billion market by 2025. Dr. Fuller and his organization have identified four key market entry points for developing and expanding IBM’s edge compute strategy. In order of size, IBM believes its priority edge markets to be intelligent factories (Industry 4.0), telcos, retail automation, and connected vehicles.

IBM and its Red Hat portfolio already have an established presence in each market segment, particularly in intelligent operations and telco. Red Hat is also active in the connected vehicles space.

Industry 4.0

There have been three prior industrial revolutions, beginning in the 1700s; the fourth, Industry 4.0, is now in progress and centers on digital transformation.

Manufacturing is the fastest growing and the largest of IBM’s four entry markets. In this segment, AI at the edge can improve quality control, production optimization, asset management, and supply chain logistics. IBM believes there are opportunities to achieve a 4x speedup in implementing edge-based AI solutions for manufacturing operations.

For its Industry 4.0 use case development, IBM, through product, development, research and consulting teams, is working with a major automotive OEM. The partnership has established the following joint objectives:

  • Increase automation and scalability across dozens of plants using hundreds of AI/ML models. This client has already seen value in applying AI/ML models for manufacturing applications. IBM Research is helping with retraining models and implementing new ones in an edge environment to help scale even more efficiently. Edge offers faster inference and low latency, allowing AI to be deployed in a wider variety of manufacturing operations requiring instant solutions.
  • Dramatically reduce the time required to onboard new models. This will allow training and inference to be done faster and allow large models to be deployed much more quickly. The quicker an AI model can be deployed in production; the quicker the time-to-value and the return-on-investment (ROI).
  • Accelerate deployment of new inspections by reducing the labeling effort and iterations needed to produce a production-ready model via data summarization. Selecting small data sets for annotation otherwise means manually examining thousands of images, a time-consuming process that results in labeling redundant data. Using ML-based automation for data summarization will accelerate the process and produce better model performance.
  • Enable Day-2 AI operations to help with data lifecycle automation and governance, model creation, reduce production errors, and provide detection of out-of-distribution data to help determine if a model’s inference is accurate. IBM believes this will allow models to be created faster without data scientists.
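
The data summarization idea in the third bullet can be approximated with clustering: group near-duplicate images by their feature embeddings and send only one representative per group to the annotators. The sketch below is a hedged illustration rather than IBM's actual method, using a minimal k-means over toy 2-D "embeddings":

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means; returns (centroids, cluster assignment per point)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: dist2(p, centroids[c])) for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, assign

def pick_for_labeling(features, k):
    """Choose k representative samples (closest to each centroid) to annotate."""
    centroids, assign = kmeans(features, k)
    chosen = []
    for c, centroid in enumerate(centroids):
        members = [i for i, a in enumerate(assign) if a == c]
        if members:
            chosen.append(min(members, key=lambda i: dist2(features[i], centroid)))
    return sorted(chosen)

# Toy "image embeddings": two tight clusters of near-duplicate frames.
features = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
print(pick_for_labeling(features, k=2))  # one index from each cluster, e.g. [0, 3]
```

Annotators then label only the chosen representatives instead of every near-duplicate frame, which is the labeling-effort reduction described above.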

Maximo Application Suite

IBM’s Maximo Application Suite plays an important part in implementing large manufacturers' current and future IBM edge solutions. Maximo is an integrated public or private cloud platform that uses AI, IoT, and analytics to optimize performance, extend asset lifecycles and reduce operational downtime and costs. IBM is working with several large manufacturing clients currently using Maximo to develop edge use cases, and even uses it within its own manufacturing operations.

IBM has research underway to develop a more efficient method of handling life cycle management of large models that require immense amounts of data. Day 2 AI operations tasks can sometimes be more complex than initial model training, deployment, and scaling. Retraining at the edge is difficult because resources are typically limited.

Once a model is trained and deployed, it is important to monitor it for drift caused by changes in data distributions or anything that might cause a model to deviate from original requirements. Inaccuracies can adversely affect model ROI.
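
One widely used drift signal (an illustration here, not necessarily what IBM uses internally) is the Population Stability Index, which compares the distribution of a feature or model score between the training sample and live edge data:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training sample and live data.

    Rule of thumb (an assumption, not an IBM-published threshold):
    PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift.
    Values below the training minimum are ignored in this sketch.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch live values above the training range

    def frac(data, a, b):
        n = sum(1 for x in data if a <= x < b)
        return max(n / len(data), 1e-6)  # avoid log(0) for empty bins

    total = 0.0
    for a, b in zip(edges, edges[1:]):
        e, c = frac(expected, a, b), frac(actual, a, b)
        total += (c - e) * math.log(c / e)
    return total

train_scores = [i / 100 for i in range(100)]        # uniform on [0, 1)
live_same = [i / 100 for i in range(100)]
live_shifted = [0.5 + i / 200 for i in range(100)]  # mass pushed to the right

print(psi(train_scores, live_same))     # ~0.0 -> no drift
print(psi(train_scores, live_shifted))  # large -> alert and consider retraining
```

When the index crosses a chosen threshold, the Day-2 pipeline can trigger an alert or a retraining job, which is exactly the kind of corrective action described above.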

Day-2 AI Operations (retraining and scaling)

Day-2 AI operations consist of continual updates to AI models and applications to keep up with changes in data distributions, changes in the environment, a drop in model performance, availability of new data, and/or new regulations.

IBM recognizes the advantages of performing Day-2 AI Operations, which includes scaling and retraining at the edge. It appears that IBM is the only company with an architecture equipped to effectively handle Day-2 AI operations. That is a significant competitive advantage for IBM.

A company using an architecture that requires data to be moved from the edge back into the cloud for Day-2 related work will be unable to support many factory AI/ML applications because of the sheer number of AI/ML models to support (100s to 1000s).

“There is a huge proliferation of data at the edge that exists in multiple spokes,” Dr. Fuller said. “However, all that data isn’t needed to retrain a model. It is possible to cluster data into groups and then use sampling techniques to retrain the model. There is much value in federated learning from our point of view.”
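
Dr. Fuller's "cluster, then sample" point can be made concrete with reservoir sampling, which keeps a bounded, uniformly chosen sample per data group even when the stream is far larger than edge storage. This is an illustrative technique choice, not a description of IBM's implementation:

```python
import random

class GroupedReservoir:
    """Keep a bounded, uniform sample of streaming edge data per group.

    Each group (e.g. a cluster ID or machine type) retains at most `k`
    examples, chosen uniformly over the whole stream (classic Algorithm R),
    so retraining never needs the full data volume.
    """

    def __init__(self, k: int, seed: int = 0):
        self.k = k
        self.rng = random.Random(seed)
        self.samples = {}   # group -> list of kept examples
        self.counts = {}    # group -> number of examples seen so far

    def add(self, group, example):
        seen = self.counts.get(group, 0) + 1
        self.counts[group] = seen
        bucket = self.samples.setdefault(group, [])
        if len(bucket) < self.k:
            bucket.append(example)
        else:
            # Replace a kept example with probability k/seen.
            j = self.rng.randrange(seen)
            if j < self.k:
                bucket[j] = example

reservoir = GroupedReservoir(k=100)
for i in range(10_000):
    reservoir.add("sensor_A" if i % 2 else "sensor_B", i)

print(reservoir.counts["sensor_A"], len(reservoir.samples["sensor_A"]))  # 5000 100
```

The retraining job then sees 100 representative examples per sensor instead of 5,000, which is the data-volume reduction the quote describes.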

Federated learning is a promising training solution being researched by IBM and others. It preserves privacy by using a collaboration of edge devices to train models without sharing the data with other entities. It is a good framework to use when resources are limited.
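
A minimal sketch of that idea, modeled loosely on the FedAvg algorithm, trains a trivial one-parameter model at each spoke and lets the hub average only the returned weights, never the raw data. The model and data here are toy assumptions:

```python
def local_update(weights, data, lr=0.1, epochs=5):
    """One spoke's local training: a 1-D linear model y = w*x fitted by SGD.
    Raw data never leaves the spoke; only the updated weight does."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def federated_average(global_w, spokes, rounds=10):
    """FedAvg: each round, spokes train locally; the hub averages the
    returned weights, weighted by each spoke's data size."""
    for _ in range(rounds):
        updates = [(local_update(global_w, data), len(data)) for data in spokes]
        total = sum(n for _, n in updates)
        global_w = sum(w * n for w, n in updates) / total
    return global_w

# Two spokes observe the same underlying relation y = 3x on different data.
spoke_a = [(1.0, 3.0), (2.0, 6.0)]
spoke_b = [(0.5, 1.5), (1.5, 4.5), (3.0, 9.0)]
w = federated_average(0.0, [spoke_a, spoke_b])
print(round(w, 3))  # converges to ~3.0
```

The privacy property falls out of the message shape: the hub only ever receives weights and sample counts, so compliance-restricted data can stay in its spoke.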

Dealing with limited resources at the edge is a challenge. IBM’s edge architecture accommodates the need to ensure resource budgets for AI applications are met, especially when deploying multiple applications and multiple models across edge locations. For that reason, IBM developed a method to deploy data and AI applications to scale Day-2 AI operations utilizing hub and spokes.

The graphic above shows the current status quo methods of performing Day-2 operations using centralized applications and a centralized data plane compared to the more efficient managed hub and spoke method with distributed applications and a distributed data plane. The hub allows it all to be managed from a single pane of glass.

Data Fabric Extensions to Hub and Spokes

IBM uses hub and spoke as a model to extend its data fabric. The model should not be thought of in the context of a traditional hub and spoke. IBM’s hub provides centralized capabilities to manage clusters and create multiple hubs that can be aggregated to a higher level. This architecture has four important data management capabilities.

  1. First, models running in unattended environments must be monitored. From an operational standpoint, detecting when a model’s effectiveness has significantly degraded and if corrective action is needed is critical.
  2. Secondly, in a hub and spoke model, data is generated and collected in many locations, creating a need for data lifecycle management. Working with large enterprise clients, IBM is building unique capabilities to manage the data plane across the hub and spoke estate, optimized to meet data lifecycle, regulatory, and compliance as well as local resource requirements. Automation determines which input data should be selected and labeled for retraining purposes and used to further improve the model. Identification is also made for atypical data that is judged worthy of human attention.
  3. The third issue relates to AI pipeline compression and adaptation. As mentioned earlier, edge resources are limited and highly heterogeneous. While a cloud-based model might have a few hundred million parameters or more, edge models can’t afford such resource extravagance because of resource limitations. To reduce the edge compute footprint, model compression can reduce the number of parameters. As an example, it could be reduced from several hundred million to a few million.
  4. Lastly, suppose a scenario exists where data is produced at multiple spokes but cannot leave those spokes for compliance reasons. In that case, IBM Federated Learning allows learning across heterogeneous data in multiple spokes. Users can discover, curate, categorize and share data assets, data sets, analytical models, and their relationships with other organization members.
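
The compression step in point 3 can be illustrated with magnitude pruning, one common technique (among others such as quantization and distillation) for cutting parameter counts so a model fits an edge footprint:

```python
def prune_by_magnitude(weights, keep_fraction):
    """Magnitude pruning: zero out the smallest weights, keeping only the
    largest `keep_fraction` by absolute value. A toy stand-in for the kind
    of compression that shrinks a cloud model for edge deployment.
    (Ties at the threshold may keep a few extra weights.)"""
    k = max(1, int(len(weights) * keep_fraction))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

dense = [0.01, -0.8, 0.02, 0.5, -0.03, 0.9, 0.004, -0.6]
sparse = prune_by_magnitude(dense, keep_fraction=0.5)
print(sparse)                       # [0.0, -0.8, 0.0, 0.5, 0.0, 0.9, 0.0, -0.6]
print(sum(1 for w in sparse if w))  # 4 nonzero weights remain
```

In a real network the zeroed parameters are stored and computed in sparse form, which is where the memory and compute savings come from.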

In addition to AI deployments, the hub and spoke architecture and the previously mentioned capabilities can be employed more generally to tackle challenges faced by many enterprises in consistently managing an abundance of devices within and across their enterprise locations. Management of the software delivery lifecycle or addressing security vulnerabilities across a vast estate are a case in point.

Multicloud and Edge platform

In the context of its strategy, IBM sees edge and distributed cloud as an extension of its hybrid cloud platform built around Red Hat OpenShift. One of the newer and more useful options created by the Red Hat development team is Single Node OpenShift (SNO), a compact version of OpenShift that fits on a single server. It is suited to locations that still run server-class hardware but call for a single-node rather than a clustered deployment.

For smaller footprints such as industrial PCs or computer vision boards (for example NVidia Jetson Xavier), Red Hat is working on a project which builds an even smaller version of OpenShift, called MicroShift, that provides full application deployment and Kubernetes management capabilities. It is packaged so that it can be used for edge device type deployments.

Overall, IBM and Red Hat have developed a full complement of options to address a large spectrum of deployments across different edge locations and footprints, ranging from containers to management of full-blown Kubernetes applications from MicroShift to OpenShift and IBM Edge Application Manager.

Much is still in the research stage. IBM's objective is to achieve greater consistency in terms of how locations and application lifecycle is managed.

First, Red Hat plans to introduce hierarchical layers of management with Red Hat Advanced Cluster Management (RHACM), to scale by two to three orders of magnitude the number of edge locations managed by this product. Additionally, securing edge locations is a major focus. Red Hat is continuously expanding platform security features, for example by recently including Integrity Measurement Architecture in Red Hat Enterprise Linux, or by adding Integrity Shield to protect policies in Red Hat Advanced Cluster Management (RHACM).

Red Hat is partnering with IBM Research to advance technologies that will permit it to protect platform integrity and the integrity of client workloads through the entire software supply chains. In addition, IBM Research is working with Red Hat on analytic capabilities to identify and remediate vulnerabilities and other security risks in code and configurations.

Telco network intelligence and slice management with AI/ML

Communication service providers (CSPs) such as telcos are key enablers of 5G at the edge. 5G benefits for these providers include:

  • Reduced operating costs
  • Improved efficiency
  • Increased distribution and density
  • Lower latency

The end-to-end 5G network comprises the Radio Access Network (RAN), transport, and core domains. Network slicing in 5G is an architecture that enables multiple virtual and independent end-to-end logical networks with different characteristics such as low latency or high bandwidth, to be supported on the same physical network. This is implemented using cloud-native technology enablers such as software defined networking (SDN), virtualization, and multi-access edge computing. Slicing offers necessary flexibility by allowing the creation of specific applications, unique services, and defined user groups or networks.
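
As a concrete, if simplified, picture of what slicing buys an application, the sketch below matches an application's latency and bandwidth requirements against a catalog of slice profiles. The slice names echo common 5G service categories, but every number is an invented placeholder, not a 3GPP value:

```python
# Hypothetical slice catalog; names and numbers are illustrative only.
SLICES = {
    "urllc": {"max_latency_ms": 5,   "min_bandwidth_mbps": 10},
    "embb":  {"max_latency_ms": 50,  "min_bandwidth_mbps": 500},
    "mmtc":  {"max_latency_ms": 200, "min_bandwidth_mbps": 1},
}

def pick_slice(latency_ms_needed, bandwidth_mbps_needed):
    """Return slices whose QoS profile satisfies an application's needs:
    the slice's guaranteed latency must be at or below what the app needs,
    and its guaranteed bandwidth at or above."""
    return sorted(
        name
        for name, q in SLICES.items()
        if q["max_latency_ms"] <= latency_ms_needed
        and q["min_bandwidth_mbps"] >= bandwidth_mbps_needed
    )

print(pick_slice(latency_ms_needed=10, bandwidth_mbps_needed=5))     # ['urllc']
print(pick_slice(latency_ms_needed=100, bandwidth_mbps_needed=100))  # ['embb']
```

A real slice manager optimizes placement and admission dynamically across shared physical resources; this sketch only shows the matching of application requirements to virtual network characteristics.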

An important aspect of enabling AI at the edge requires IBM to provide CSPs with the capability to deploy and manage applications across various enterprise locations, possibly spanning multiple end-to-end network slices, using a single pane of glass.

5G network slicing and slice management

Network slices are an essential part of IBM's edge infrastructure that must be automated, orchestrated and optimized according to 5G standards. IBM’s strategy is to leverage AI/ML to efficiently manage, scale, and optimize the slice quality of service, measured in terms of bandwidth, latency, or other metrics.

5G and AI/ML at the edge also represent a significant opportunity for CSPs to move beyond traditional cellular services and capture new sources of revenue with new services.

Communications service providers need management and control of 5G network slicing enabled with AI-powered automation.

Dr. Fuller sees a variety of opportunities in this area. “When it comes to applying AI and ML on the network, you can detect things like intrusions and malicious actors,” he said. “You can also determine the best way to route traffic to an end user. Automating 5G functions that run on the network using IBM network automation software also serves as an entry point.”

In IBM’s current telecom trial, IBM Research is spearheading the development of a range of capabilities targeted for the IBM Cloud Pak for Network Automation product using AI and automation to orchestrate, operate and optimize multivendor network functions and services that include:

  • End-to-end 5G network slice management with planning & design, automation & orchestration, and operations & assurance
  • Network Data and AI Function (NWDAF) that collects data for slice monitoring from 5G Core network functions, performs network analytics, and provides insights to authorized data consumers.
  • Improved operational efficiency and reduced cost

Future leverage of these capabilities by existing IBM Clients that use the Cloud Pak for Network Automation (e.g., DISH) can offer further differentiation for CSPs.

5G radio access

Open radio access networks (O-RANs) are expected to significantly impact telco 5G wireless edge applications by allowing a greater variety of units to access the system. The O-RAN concept separates the Distributed Unit (DU) and Centralized Unit (CU), which in 4G were combined in a single Baseband Unit, and connects them with open interfaces.

The O-RAN system is more flexible. It uses AI to establish connections made via open interfaces that optimize the category of a device by analyzing information about its prior use. Like other edge models, the O-RAN architecture provides an opportunity for continuous monitoring, verification, analysis, and optimization of AI models.

The IBM-telco collaboration is expected to advance O-RAN interfaces and workflows. Areas currently under development are:

  • Multi-modal (RF level + network-level) analytics (AI/ML) for wireless communication with high-speed ingest of 5G data
  • Capability to learn patterns of metric and log data across CUs and DUs in RF analytics
  • Utilization of the antenna control plane to optimize throughput
  • Primitives for forecasting, anomaly detection and root cause analysis using ML
  • Opportunity of value-added functions for O-RAN
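
The anomaly-detection primitive in the list above can be sketched with a rolling z-score over a metric stream such as RF signal quality or cell throughput. The window size and threshold here are arbitrary illustrative choices, not O-RAN parameters:

```python
from collections import deque

class RollingAnomalyDetector:
    """Flag anomalous metric readings using a rolling z-score.

    A toy stand-in for the forecasting/anomaly-detection primitives
    described above, suitable for streaming CU/DU metrics at the edge.
    """

    def __init__(self, window=50, z_threshold=4.0):
        self.values = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, x: float) -> bool:
        """Return True if x is anomalous relative to the recent window."""
        anomalous = False
        if len(self.values) >= 10:  # need some history before judging
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = var ** 0.5 or 1e-9
            anomalous = abs(x - mean) / std > self.z_threshold
        self.values.append(x)
        return anomalous

detector = RollingAnomalyDetector()
readings = [100.0 + (i % 5) for i in range(40)] + [250.0]  # sudden spike
flags = [detector.observe(r) for r in readings]
print(flags[-1], sum(flags[:-1]))  # True 0
```

A flagged reading would then feed the root-cause-analysis step, for example by correlating the anomaly with log patterns across CUs and DUs.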

IBM Cloud and Infrastructure

The cornerstone for the delivery of IBM's edge solutions as a service is IBM Cloud Satellite. It presents a consistent cloud-ready, cloud-native operational view with OpenShift and IBM Cloud PaaS services at the edge. In addition, IBM integrated hardware and software edge systems will provide RHACM-based management of the platform when clients or third parties have existing managed-as-a-service models. It is essential to note that in either case this is done within a single control plane for hubs and spokes that helps optimize execution and management from any cloud to the edge in the hub and spoke model.

IBM's focus on “edge in” means it can provide the infrastructure through things like the example shown above for software defined storage for federated namespace data lake that surrounds other hyperscaler clouds. Additionally, IBM is exploring integrated full stack edge storage appliances based on hyperconverged infrastructure (HCI), such as the Spectrum Fusion HCI, for enterprise edge deployments.

As mentioned earlier, data gravity is one of the main driving factors of edge deployments. IBM has designed its infrastructure to meet those data gravity requirements, not just for the existing hub and spoke topology but also for a future spoke-to-spoke topology where peer-to-peer data sharing becomes imperative (as illustrated with the wealth of examples provided in this article).

Wrap up

Edge is a distributed computing model. One of its main advantages is that computing, data storage, and processing are close to where data is created. Without the need to move data to the cloud for processing, real-time application of analytics and AI capabilities provides immediate solutions and drives business value.

IBM’s goal is not to move the entirety of its cloud infrastructure to the edge. That has little value and would simply function as a hub to spoke model operating on actions and configurations dictated by the hub.

IBM’s architecture will provide the edge with autonomy to determine where data should reside and from where the control plane should be exercised.

Equally important, IBM foresees this architecture evolving into a decentralized model capable of edge-to-edge interactions. IBM has no firm designs for this as yet. However, the plan is to make the edge infrastructure and platform a first-class citizen instead of relying on the cloud to drive what happens at the edge.

Developing a complete and comprehensive AI/ML edge architecture - and in fact, an entire ecosystem - is a massive undertaking. IBM faces many known and unknown challenges that must be solved before it can achieve success.

However, IBM is one of the few companies with the necessary partners and the technical and financial resources to undertake and successfully implement a project of this magnitude and complexity.

It is reassuring that IBM has a plan and that its plan is sound.

Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, MulteFire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung 
Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Published Mon, 08 Aug 2022 by Paul Smith-Goodson: https://www.forbes.com/sites/moorinsights/2022/08/08/ibm-research-rolls-out-a-comprehensive-ai-and-ml-edge-research-strategy-anchored-by-enterprise-partnerships-and-use-cases/