You should simply download 1T6-220 braindumps questions and answers

killexams.com helps a large number of applicants pass their exams and earn their certifications. We have a large number of successful reviews. Our 1T6-220 exam questions are dependable, up to date, and of the very best quality to beat the challenges of any IT certification. Killexams 1T6-220 practice exams are collected from real 1T6-220 exams, which is why there is no doubt about passing the 1T6-220 exam with high marks.

Exam Code: 1T6-220 Practice exam 2022 by Killexams.com team
Switched Ethernet Network Analysis and Troubleshooting
Network-General Troubleshooting learning
Killexams : The Computer Scientist Challenging AI to Learn Better

Kanan has been toying with machine intelligence nearly all his life. As a kid in rural Oklahoma who just wanted to have fun with machines, he taught bots to play early multi-player computer games. That got him wondering about the possibility of artificial general intelligence — a machine with the ability to think like a human in every way. This made him interested in how minds work, and he majored in philosophy and computer science at Oklahoma State University before his graduate studies took him to the University of California, San Diego.

Now Kanan finds inspiration not just in video games, but also in watching his nearly 2-year-old daughter learn about the world, with each new learning experience building on the last. Because of his and others’ work, catastrophic forgetting is no longer quite as catastrophic.

Quanta spoke with Kanan about machine memories, breaking the rules of training neural networks, and whether AI will ever achieve human-level learning. The interview has been condensed and edited for clarity.

How does your training in philosophy impact the way you think about your work?

It has served me very well as an academic. Philosophy teaches you, “How do you make reasoned arguments,” and “How do you analyze the arguments of others?” That’s a lot of what you do in science. I still have essays from way back then on the failings of the Turing test, and things like that. And so those things I still think about a lot.

My lab has been inspired by asking the question: Well, if we can’t do X, how are we going to be able to do Y? We learn over time, but neural networks, in general, don’t. You train them once. It’s a fixed entity after that. And that’s a fundamental thing that you’d have to solve if you want to make artificial general intelligence one day. If it can’t learn without scrambling its brain and restarting from scratch, you’re not really going to get there, right? That’s a prerequisite capability to me.

How have researchers dealt with catastrophic forgetting so far?

The most successful method, called replay, stores past experiences and then replays them during training with new examples, so they are not lost. It’s inspired by memory consolidation in our brain, where during sleep the high-level encodings of the day’s activities are “replayed” as the neurons reactivate.

In other words, for the algorithms, new learning can’t completely eradicate past learning since we are mixing in stored past experiences.

There are three styles for doing this. The most common style is “veridical replay,” where researchers store a subset of the raw inputs — for example, the original images for an object recognition task — and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third, far less common, method is “generative replay.” Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods.
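To make the replay idea concrete, here is a minimal sketch of veridical replay in PyTorch. It is not Kanan's or anyone else's actual code; the class, buffer size, and sampling scheme are illustrative assumptions. A small buffer of raw past examples is kept, and a handful of them are mixed into every gradient step on new data so that old tasks keep being rehearsed; compressed or generative replay would store encoded representations or samples from a generator instead of raw inputs.

```python
import random

import torch
from torch import nn


class ReplayTrainer:
    """Sketch of 'veridical replay': keep a buffer of raw past examples and
    mix a few of them into each update on new data, so learning the new task
    also rehearses the old ones."""

    def __init__(self, model: nn.Module, buffer_size: int = 1000, lr: float = 1e-3):
        self.model = model
        self.buffer = []                      # stored (input, label) pairs
        self.buffer_size = buffer_size
        self.opt = torch.optim.Adam(model.parameters(), lr=lr)
        self.loss_fn = nn.CrossEntropyLoss()

    def remember(self, x, y):
        """Reservoir-style storage of a raw example for later replay."""
        if len(self.buffer) < self.buffer_size:
            self.buffer.append((x, y))
        else:
            self.buffer[random.randrange(self.buffer_size)] = (x, y)

    def train_step(self, new_x, new_y, replay_k: int = 8):
        """One update on a new example mixed with up to replay_k stored ones."""
        xs, ys = [new_x], [new_y]
        for old_x, old_y in random.sample(self.buffer, min(replay_k, len(self.buffer))):
            xs.append(old_x)
            ys.append(old_y)
        batch_x, batch_y = torch.stack(xs), torch.stack(ys)

        self.opt.zero_grad()
        loss = self.loss_fn(self.model(batch_x), batch_y)
        loss.backward()
        self.opt.step()

        self.remember(new_x, new_y)           # the new example becomes "past experience"
        return loss.item()
```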

Unfortunately, though, replay isn’t a very satisfying solution.

Wed, 03 Aug 2022 08:11:00 -0500 en text/html https://www.quantamagazine.org/the-computer-scientist-trying-to-teach-ai-to-learn-like-we-do-20220802/
Killexams : Large language models can’t plan, even if they write fancy essays

This article is part of our coverage of the latest in AI research.

Large language models like GPT-3 have advanced to the point that it has become difficult to measure the limits of their capabilities. When you have a very large neural network that can generate articles, write software code, and engage in conversations about sentience and life, you should expect it to be able to reason about tasks and plan as a human does, right?

Wrong. A study by researchers at Arizona State University, Tempe, shows that when it comes to planning and thinking methodically, LLMs perform very poorly, and suffer from many of the same failures observed in current deep learning systems.

Interestingly, the study finds that, while very large LLMs like GPT-3 and PaLM pass many of the tests that were meant to evaluate the reasoning capabilities of artificial intelligence systems, they do so because these benchmarks are either too simplistic or too flawed and can be “cheated” through statistical tricks, something that deep learning systems are very good at.

With LLMs breaking new ground every day, the authors suggest a new benchmark to test the planning and reasoning capabilities of AI systems. The researchers hope that their findings can help steer AI research toward developing artificial intelligence systems that can handle what has become popularly known as “system 2 thinking” tasks.

The illusion of planning and reasoning

“Back last year, we were evaluating GPT-3’s ability to extract plans from text descriptions—a task that was attempted with special purpose methods earlier—and found that off-the-shelf GPT-3 does quite well compared to the special purpose methods,” Subbarao Kambhampati, professor at Arizona State University and co-author of the study, told TechTalks. “That naturally made us wonder what ‘emergent capabilities’—if any–GPT3 has for doing the simplest planning problems (e.g., generating plans in toy domains). We found right away that GPT3 is pretty spectacularly bad on anecdotal tests.”

However, one interesting fact is that GPT-3 and other large language models perform very well on benchmarks designed for common-sense reasoning, logical reasoning, and ethical reasoning, skills that were previously thought to be off-limits for deep learning systems. A previous study by Kambhampati’s group at Arizona State University shows the effectiveness of large language models in generating plans from text descriptions. Other recent studies include one that shows LLMs can do zero-shot reasoning if provided with a special trigger phrase.

However, “reasoning” is often used broadly in these benchmarks and studies, Kambhampati believes. What LLMs are doing, in fact, is creating a semblance of planning and reasoning through pattern recognition.

“Most benchmarks depend on shallow (one or two steps) type of reasoning, as well as tasks for which there is sometimes no genuine ground truth (e.g., getting LLMs to reason about ethical dilemmas),” he said. “It is possible for a purely pattern completion engine with no reasoning capabilities to still do fine on some of such benchmarks. After all, while System 2 reasoning abilities can get compiled to System 1 sometimes, it is also the case that System 1’s ‘reasoning abilities’ may just be reflexive responses from patterns the system has seen in its training data, without actually doing anything resembling reasoning.”

System 1 and System 2 thinking

System 1 and System 2 thinking were popularized by psychologist Daniel Kahneman in his book Thinking, Fast and Slow. The former is the fast, reflexive, and automated type of thinking and acting that we do most of the time, such as walking, brushing our teeth, tying our shoes, or driving in a familiar area. Even a large part of speech is performed by System 1.

System 2, on the other hand, is the slower thinking mode that we use for tasks that require methodical planning and analysis. We use System 2 to solve calculus equations, play chess, design software, plan a trip, solve a puzzle, etc.

But the line between System 1 and System 2 is not clear-cut. Take driving, for example. When you are learning to drive, you must fully concentrate on how you coordinate your muscles to control the gear, steering wheel, and pedals while also keeping an eye on the road and the side and rear mirrors. This is clearly System 2 at work. It consumes a lot of energy, requires your full attention, and is slow. But as you gradually repeat the procedures, you learn to do them without thinking. The task of driving shifts to your System 1, enabling you to perform it without taxing your mind. One of the criteria of a task that has been integrated into System 1 is the ability to do it subconsciously while focusing on another task (e.g., you can tie your shoe and talk at the same time, brush your teeth and read, drive and talk, etc.).

Even many of the very complicated tasks that remain in the domain of System 2 eventually become partly integrated into System 1. For example, professional chess players rely a lot on pattern recognition to speed up their decision-making process. You can see similar examples in math and programming, where after doing things over and over again, some of the tasks that previously required careful thinking come to you automatically.

A similar phenomenon might be happening in deep learning systems that have been exposed to very large datasets. They might have learned to do the simple pattern-recognition phase of complex reasoning tasks.

“Plan generation requires chaining reasoning steps to come up with a plan, and a firm ground truth about correctness can be established,” Kambhampati said.

A new benchmark for testing planning in LLMs

“Given the excitement around hidden/emergent properties of LLMs however, we thought it would be more constructive to develop a benchmark that provides a variety of planning/reasoning tasks that can serve as a benchmark as people improve LLMs via finetuning and other approaches to customize/improve their performance to/on reasoning tasks. This is what we wound up doing,” Kambhampati said.

The team developed their benchmark based on the domains used in the International Planning Competition (IPC). The framework consists of multiple tasks that evaluate different aspects of reasoning. For example, some tasks evaluate an LLM’s capacity to create valid plans to achieve a certain goal, while others test whether the generated plan is optimal. Other tests include reasoning about the results of a plan, recognizing whether different text descriptions refer to the same goal, reusing parts of one plan in another, shuffling plans, and more.

To carry out the tests, the team used Blocks world, a problem framework that revolves around placing a set of different blocks in a particular order. Each problem has an initial condition, an end goal, and a set of allowed actions.
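To illustrate why such tasks have a firm ground truth, here is a minimal, hypothetical Blocks World sketch in Python (not the paper's encoding or code): a state maps each block to whatever it rests on, a move is legal only if the blocks involved are clear, and a candidate plan is valid exactly when executing it step by step reaches the goal.

```python
# Hypothetical Blocks World validity checker; block names, the action format
# (move block onto destination), and the example are invented for illustration.

def is_clear(state: dict, block: str) -> bool:
    """A block is clear if nothing is stacked on top of it."""
    return all(below != block for below in state.values())


def apply_move(state: dict, block: str, dest: str) -> dict:
    """Move `block` onto `dest` ('table' or another block) if legal."""
    if not is_clear(state, block):
        raise ValueError(f"{block} is not clear")
    if dest != "table" and not is_clear(state, dest):
        raise ValueError(f"{dest} is not clear")
    new_state = dict(state)
    new_state[block] = dest
    return new_state


def plan_is_valid(initial: dict, goal: dict, plan: list) -> bool:
    """Execute the plan step by step and check whether the goal is reached."""
    state = dict(initial)
    try:
        for block, dest in plan:
            state = apply_move(state, block, dest)
    except ValueError:
        return False
    return all(state.get(b) == on for b, on in goal.items())


# A is on B, B and C are on the table; the goal is to get B onto C.
initial = {"A": "B", "B": "table", "C": "table"}
goal = {"B": "C"}
print(plan_is_valid(initial, goal, [("A", "table"), ("B", "C")]))  # True
print(plan_is_valid(initial, goal, [("B", "C")]))                  # False: B is not clear
```

Because validity can be checked mechanically like this, a model's output is either a correct plan or it is not; there is no partial credit to be gained from surface-level pattern matching.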

“The benchmark itself is extensible and is meant to have tests from several of the IPC domains,” Kambhampati said. “We used the Blocks world examples for illustrating the different tasks. Each of those tasks (e.g., Plan generation, goal shuffling, etc.) can also be posed in other IPC domains.”

The benchmark Kambhampati and his colleagues developed uses few-shot learning, where the prompt given to the machine learning model includes a solved example plus the main problem that must be solved.
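The snippet below illustrates that few-shot format with invented wording; it is not the benchmark's actual prompt text, just a sketch of the structure (one fully solved example followed by the problem the model must continue).

```python
# Hypothetical illustration of a few-shot planning prompt. The [STATEMENT]/[PLAN]
# markers and the block names are made up for this example.

SOLVED_EXAMPLE = """\
[STATEMENT]
Initial state: block A is on block B, block B is on the table, block C is on the table.
Goal: block B is on block C.
[PLAN]
1. Unstack A from B and put A on the table.
2. Pick up B and stack it on C.
[PLAN END]
"""

NEW_PROBLEM = """\
[STATEMENT]
Initial state: block D is on block E, block E is on the table, block F is on the table.
Goal: block E is on block F.
[PLAN]
"""

def build_prompt(solved_example: str, new_problem: str) -> str:
    """Concatenate one worked example with the unsolved problem; the model is
    expected to continue the text with a plan after the final [PLAN] marker."""
    return solved_example + "\n" + new_problem

print(build_prompt(SOLVED_EXAMPLE, NEW_PROBLEM))
```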

Unlike other benchmarks, the problem descriptions of this new benchmark are very long and detailed. Solving them requires concentration and methodical planning and can’t be cheated through pattern recognition. Even a human who would want to solve them would have to carefully think about each problem, take notes, possibly make visualizations, and plan the solution step by step.

“Reasoning is a system-2 task in general. The collective delusion of the community has been to look at those types of reasoning benchmarks that could probably be handled via compilation to system 1 (e.g., ‘the answer to this ethical dilemma, by pattern completion, is this’) as against actually doing reasoning that is needed for the task at hand,” Kambhampati said.

Large language models are bad at planning

The researchers tested their framework on Davinci, the largest version of GPT-3. Their experiments show that GPT-3 has mediocre performance on some types of planning tasks but performs very poorly in areas such as plan reuse, plan generalization, optimal planning, and replanning.

“The initial studies we have seen basically show that LLMs are particularly bad on anything that would be considered planning tasks–including plan generation, optimal plan generation, plan reuse or replanning,” Kambhampati said. “They do better on the planning-related tasks that don’t require chains of reasoning–such as goal shuffling.”

In the future, the researchers will add test cases based on other IPC domains and provide performance baselines with human subjects on the same benchmarks.

“We are also ourselves curious as to whether other variants of LLMs do any better on these benchmarks,” Kambhampati said.

Kambhampati stresses that the goal of the project is to put the benchmark out and give an idea of where the current baseline is. The researchers hope that their work opens new windows for developing planning and reasoning capability for current AI systems. For example, one direction they propose is evaluating the effectiveness of finetuning LLMs for reasoning and planning in specific domains. The team already has preliminary results on an instruction-following variant of GPT-3 that seems to do marginally better on the easy tasks, although it too remains around the 5-percent level for genuine plan generation tasks, Kambhampati said.

Kambhampati also believes that learning and acquiring world models would be an essential step for any AI system that can reason and plan. Other scientists, including deep learning pioneer Yann LeCun, have made similar suggestions.

“If we agree that reasoning is part of intelligence, and want to claim LLMs do it, we certainly need plan generation benchmarks there,” Kambhampati said. “Rather than take a magisterial negative stand, we are providing a benchmark, so that people who believe that reasoning can be emergent from LLMs even without any special mechanisms such as world models and reasoning about dynamics, can use the benchmark to support their point of view.”

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.

Sun, 31 Jul 2022 08:50:00 -0500 en text/html https://thenextweb.com/news/large-language-models-cant-plan
Killexams : Verizon Not Registered On Network (Verizon Network Issues)

Verizon might give you a "Not registered on network" error for several reasons. It could be because your device entered airplane mode and failed to connect to the Verizon cellular tower. It could also be because your Verizon SIM card is not inserted correctly.

This article will guide you through fixing the "Not registered on network" error so you can get back on the Verizon network in no time.

Why are You Getting "Not Registered on Network" on Verizon?

If you notice a "Not registered on network" error on Verizon, it may be because your phone is running an outdated software version.

Also, when your phone is locked to another carrier, it may display such an error. Unlocking your phone might help.

How To Fix The "Not Registered On Network" Error on Verizon

Basic Troubleshooting

  • Ensure you have an active and valid mobile data plan with Verizon and that your reception is strong.
  • Ensure that the Airplane mode is OFF. Sometimes, we accidentally enable this feature. Open Settings > Connections > Airplane mode. Toggle the switch button.
  • Reinsert your Verizon SIM card, check for damage, and ensure it is inserted correctly. If you have another phone around, place your SIM card in it and try to make a phone call.
  • Restart your phone. 

Quick Check (Verizon)

Perhaps you accidentally enabled the Airplane mode on your phone, disabled Mobile data, etc. Before we go further, perform these quick troubleshooting steps:

  1.  Make sure the Airplane mode is disabled. You can also toggle the Airplane mode on and off
  2.  Toggle Mobile Data

Solution 1 - Your Verizon SIM Card

First, try to reinsert your Verizon SIM card. Open the SIM tray, take out the SIM card, and check it. Make sure it is not damaged. If it is, contact your carrier for a replacement.

Solution 2 - Enter Service Mode

This solution requires you to open the dialer and proceed with the steps below.

  • Enter the code *#*#4636#*#* in the dialer
  • Enter Service mode
  • Click on the top option – Device information or Phone information.
  • Next, tap on the Run Ping test.
  • The radio option will be visible at the bottom of this screen.
  • Check if it is off or on. Please press the button next to it to turn on the radio.
  • You will be prompted to reboot the device.
  • Click reboot and your phone will start rebooting. Once completed, check if the problem is gone.

Solution 3 - Update Your Software Version

Ensure you are connected to a reliable Wi-Fi network.

Software Update on newer devices

From your home screen, select :

  • Settings
  • Navigate to System updates
  • Check for system updates 

Software Update on older devices

  • Navigate to Settings
  • Scroll down to the very bottom
  • Select Software Update
  • Please wait for it to reboot and complete the update
  • Finished!

If your device finds a new update, tap Download now. When it is finished downloading, a new screen will appear, alerting you that the software version is ready to be installed.

If the method above didn't work for you, I recommend reading Restore Galaxy Null IMEI # and Fix Not Registered on Network.

 

Solution 4 - Rebooting Method (Technobezz Origin)

If this solution does not work on the first attempt, try doing it again. Technobezz originally crafted this method. Follow these steps: 

  • Turn off your Verizon phone by holding the Power button and the Home (or Volume Down) button together.
  • While the phone is off, wait for 2 minutes.
  • After 2 minutes, remove the battery (only if your phone battery is removable) and the Verizon SIM card from the phone.
  • Press the Power button and the Home (or Volume Down) button together ten times.
  • Afterwards, hold the Power and Home (or Volume Down) keys for 1-3 minutes.
  • Next, insert your Verizon SIM card and the battery (Only if your phone battery can be removed)
  • Turn on your phone.
  • While your phone is on, remove your Verizon SIM card and then reinsert it. Repeat this five times. (On some Android phones, you need to remove the battery before removing the sim card. If this is the case, please skip this step)
  • A message will appear saying that you need to "Restart your Phone"- click it.
  • Finally, your Verizon phone should boot up with no errors.

Solution 5 - Select Verizon as your Network Operator

Go to Settings on your phone.

  • Go to Wireless & Networks Or Connections
  • Select Mobile Networks 
  • Select Network Operators 
  • Tap on Search Now
  • Then, Select Verizon

Solution 6 - The Corrupt ESN

  • Turn your Verizon device on and go to the dialer to enter the code (*#06#), which displays the IMEI number of the device. If it shows 'Null,' the IMEI number is corrupt.
  • Dial (*#197328640#) or (*#*#197328640#*#*) from the phone dialer. Users are required to select the option 'Common.'
  • Next, select option #1, Field Test Mode (FTM). It should be set to 'OFF.' This process will restore the IMEI number.
  • Return to the key input and select option 2, which will turn off FTM.
  • Remove the SIM card from the device and wait 2 minutes to re-insert your Verizon SIM card.
  • Turn on the device and type (*#197328640#) again from the phone dial.
  • Next, go to Debug screen > Phone control > Nas control > RRC > RRC revision.
  • Select Option 5
  • Restart your phone. 

Solution 7 - Reset Network Settings

Sometimes just a simple network reset can fix the issue. From your phone's home screen, open Settings:

  • Tap General Management. 
  • Select Reset 
  • Tap Reset Settings.
  • Select Reset network settings

Solution 8 - Update your Verizon APN Settings

  • Navigate to Settings
  • Tap Connections.
  • Tap Mobile Networks
  • Select Access Point Names
  • Tap More (3 dots)
  • Tap Reset to Default.
  • Then enter the new APN Settings.

Below are the Verizon APN settings for iPhone and Android Devices.

Verizon APN settings for iPhone and Android Devices (LTE)

  • Name: Verizon
  • APN: vzwinternet
  • Proxy: <Not set>
  • Port: <Not set>
  • Username: <Not set>
  • Password: <Not set>
  • Server: <Not set>
  • MMSC: http://mms.vtext.com/servlets/mms
  • MMS proxy: <Not set>
  • MMS port: 80
  • MMS protocol: <Not set>
  • MCC: 310
  • MNC: 12
  • Authentication Type: <Not set>
  • APN Type: default,supl,mms OR Internet+MMS
  • APN Protocol: <Not set> Or IPv4
  • APN roaming protocol: <Not set>
  • Bearer: Unspecified

Notes:

vzwims: Used for connections to IMS services. Required for TXT messaging.

vzwadmin: Used for administrative functions.

vzwinternet: Required for general Internet connections.

vzwapp: Required for PDA data service.

View the Updated APN Settings For AT&T, Verizon, T-Mobile, Sprint ( +4 More)

Other workarounds worth trying

  • Toggle Wi-Fi and Airplane Mode -> Turn off Wi-Fi and Airplane Mode for 40 seconds, then turn them back on.
  • Try a different SIM card (other than your Verizon one) to see whether the phone itself is at fault.
  • Change to a different Network Mode -> Navigate to Settings > Connections > Mobile Networks > Select Network Modes > Choose your preferred Network Mode (toggle between 3G, 3G/2G, or 4G/3G/2G)
  • Contact Verizon -> Make them aware of the issue. In most cases, they will send you new APN settings or act on their end (remotely).
  • Perform a Factory Reset.


Wed, 27 Jul 2022 05:00:00 -0500 en text/html https://www.technobezz.com/verizon-not-registered-on-network-verizon-network-issues/
Killexams : Why tween girls especially are struggling so much

Mental health issues are on the rise, but tween girls about ages 10 to 14 appear to be struggling more than in the past, a psychologist says.

Mon, 08 Aug 2022 05:31:00 -0500 en-us text/html https://www.msn.com/en-us/news/us/why-tween-girls-especially-are-struggling-so-much/ar-AA10r5Uy
Killexams : Language models are complex. Now imagine adapting it for children with speech and hearing disabilities

Artificial intelligence has changed many industries and made them more efficient, safer, and cheaper. But there are still areas that AI has not yet penetrated, such as speech therapy. According to the National Institute on Deafness and Other Communication Disorders, in the U.S., about seven percent of children aged 3-17 (or 1 in 12) suffer from problems related to voice, language and speech. Today, only about 60 percent of children receive treatment, and speech therapists feel the burden as they treat about 80-100 children at the same time. At best, they can allocate 5-10 minutes of treatment per week to each child. One of the most in-depth studies in the field, which included approximately 7,000 participants and lasted almost 30 years, found that those with communication disabilities also suffer in their adult lives from a lower socioeconomic status, low self-esteem and a higher risk of mental health problems. Additionally, according to research recently conducted in the UK, untreated communication disorders are a significant risk factor for child development, and there is a correlation between untreated communication disorders and crime. In standard educational settings, communication deficiencies are not always detected by teaching staff, and sometimes the issue is attributed to the child's laziness, low IQ, and lack of discipline.

And this is exactly where AI comes in since it leads to smart systems that can be very advantageous. For example, it can help speech therapists in certain stages of the treatment thereby relieving them of their burden. Treatments usually include two main phases: learning new material and concepts and practice. Practice, which usually takes quite a bit of time, is an essential part of the entire learning process, especially in speech therapy. An AI-based system can work with a student to help them practice, check performance and report back to their clinician on progress. Such an automatic system could help an unlimited number of students at all hours of the day, yet is also significantly cheaper, when compared to investing in personnel for the same purpose.

Existing solutions are not good enough

The solution for making software work for those with communication disabilities seems simple: to understand what the child is saying, use a Speech-to-Text engine (S2T for short) like Google's, which can convert speech into text. The problem is that commercial S2T engines are often trained on data taken from adult speakers with no impairments, such as LibriSpeech, which has about 1,000 hours of audiobooks. Children with speech and language problems do not speak like book narrators, so commercial S2T engines often fail at the task.

From tests we conducted, for example, with the commercial S2T engine, we discovered that it correctly recognized only about 30-40 percent of the words spoken by children with communication disabilities. The solution was clear: to develop an S2T system that could understand them.

So how do you build an AI system for speech therapy?

Until the era of deep networks, the construction of S2T was mainly done by huge companies and required a huge investment in collecting, cleaning, and tagging data. Sometimes hundreds or even thousands of hours of tagged speech were required to train classical models, such as HMM. But that reality has changed with the development of deep networks.

To develop S2T for children with communication disabilities, we used transfer learning. This method allows you to take a network that has been trained for a similar purpose and refine it to improve performance on specific data. As the base model, we chose to use wav2vec 2.0. The wav2vec acoustic model for speech recognition was developed several years ago by Facebook. It is a Transformer-based deep network. The advantage of wav2vec is the network's ability to learn from unlabeled data. The learning process of the network is carried out in two stages: self-learning on untagged data and fine-tuning on tagged data (speech signals with the corresponding text).

In the process of self-learning, the network is required to reproduce part of the original signal – a hidden part of it. This is how the system learns to recognize the sounds of the language and the structure of the phonemes. In the second stage, the system learns to associate the learned phonemes with the characters of the text. One of the amazing things we discovered is that the amount of tagged data required for the second stage can be relatively small compared to classical systems: the network manages to reach an error of 8.2 percent per test set with only 10 minutes of tagged data. One hour of tagged data equates to 5.8 percent and 100 hours, only four percent. A variety of wav2vec networks are available to the general public and can be downloaded free of charge. We chose a network that underwent full training on LibriSpeech and fine-tuning with 960 hours.
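As a rough illustration of this fine-tuning stage, the sketch below loads the publicly released wav2vec 2.0 checkpoint through the Hugging Face transformers library and takes one gradient step on a labelled recording. The checkpoint name, learning rate, and training loop are illustrative assumptions, not AmplioSpeech's actual pipeline.

```python
# Minimal fine-tuning sketch; hyperparameters and setup are assumptions for illustration.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.freeze_feature_encoder()            # keep the low-level acoustic front end fixed
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def fine_tune_step(waveform, transcript):
    """One gradient step on a single labelled recording.

    waveform:   1-D float array of 16 kHz audio samples
    transcript: the speech therapist's transcription of that recording
    """
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    # The base checkpoint's vocabulary is upper-case characters.
    labels = processor.tokenizer(transcript.upper(), return_tensors="pt").input_ids

    outputs = model(inputs.input_values, labels=labels)   # CTC loss computed internally
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```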

To train the network, we collected thousands of recordings of children with communication problems. Collection of the data was carried out during treatments using a computer; some are labelled, and some are not. As we saw earlier, wav2vec allows us flexibility in using tagged data as well as untagged ones. Labelled data improves the accuracy of S2T, so it is always better to label the data. As the number of tagged data increases, the accuracy of the system will also improve.

After the data was collected, we recruited a team of speech therapists to label it. During the labelling, the experts were required to provide the text of the recording as well as additional indications related to the nature of the recording itself. In quite a few cases there are disturbances during the lesson: background noises, voices of other children who are in the same room, and more. Using noisy recordings can complicate the learning process.

After some of the data was tagged, we ran a fine-tuning of the wav2vec system on a few hours of data and saw a dramatic increase in accuracy in recognizing children's speech. The WER (Word Error Rate) was almost halved. True, it still does not reach the performance level of commercial systems for adult speakers, but it is much better for speech recognition in children. The data tagging project is still ongoing, but there is already cautious optimism about expected results.
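For reference, WER is the word-level edit distance between the reference transcript and the recognizer's output, divided by the number of reference words. The generic implementation below is not the team's evaluation code, just the standard definition.

```python
# Standard word error rate via dynamic-programming edit distance.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words ≈ 0.33
```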

Written by Edward Roddick, Director of Core Tech at AmplioSpeech

Sat, 06 Aug 2022 18:45:00 -0500 en text/html https://www.geektime.com/anguage-model-for-special-needs-children/
Killexams : Physicist: The Entire Universe Might Be a Neural Network

It's not every day that we come across a paper that attempts to redefine reality.

But in a provocative preprint uploaded to arXiv this summer, a physics professor at the University of Minnesota Duluth named Vitaly Vanchurin attempts to reframe reality in a particularly eye-opening way — suggesting that we're living inside a massive neural network that governs everything around us. In other words, he wrote in the paper, it's a "possibility that the entire universe on its most fundamental level is a neural network."

For years, physicists have attempted to reconcile quantum mechanics and general relativity. The former posits that time is universal and absolute, while the latter argues that time is relative, linked to the fabric of space-time.

In his paper, Vanchurin argues that artificial neural networks can "exhibit approximate behaviors" of both universal theories. Since quantum mechanics "is a remarkably successful paradigm for modeling physical phenomena on a wide range of scales," he writes, "it is widely believed that on the most fundamental level the entire universe is governed by the rules of quantum mechanics and even gravity should somehow emerge from it."

"We are not just saying that the artificial neural networks can be useful for analyzing physical systems or for discovering physical laws, we are saying that this is how the world around us actually works," reads the paper's discussion. "With this respect it could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong."

The concept is so bold that most physicists and machine learning experts we reached out to declined to comment on the record, citing skepticism about the paper's conclusions. But in a Q&A with Futurism, Vanchurin leaned into the controversy — and told us more about his idea.

Futurism: Your paper argues that the universe might fundamentally be a neural network. How would you explain your reasoning to someone who didn't know very much about neural networks or physics?

Vitaly Vanchurin: There are two ways to answer your question.

The first way is to start with a precise model of neural networks and then to study the behavior of the network in the limit of a large number of neurons. What I have shown is that equations of quantum mechanics describe pretty well the behavior of the system near equilibrium, and equations of classical mechanics describe pretty well how the system behaves further away from equilibrium. Coincidence? Maybe, but as far as we know quantum and classical mechanics is exactly how the physical world works.

The second way is to start from physics. We know that quantum mechanics works pretty well on small scales and general relativity works pretty well on large scales, but so far we were not able to reconcile the two theories in a unified framework. This is known as the problem of quantum gravity. Clearly, we are missing something big, but to make matters worse we do not even know how to handle observers. This is known as the measurement problem in context of quantum mechanics and the measure problem in context of cosmology.

Then one might argue that there are not two, but three phenomena that need to be unified: quantum mechanics, general relativity and observers. 99% of physicists would tell you that quantum mechanics is the main one and everything else should somehow emerge from it, but nobody knows exactly how that can be done. In this paper I consider another possibility that a microscopic neural network is the fundamental structure and everything else, i.e. quantum mechanics, general relativity and macroscopic observers, emerges from it. So far things look rather promising.

What first gave you this idea?

First I just wanted to better understand how deep learning works and so I wrote a paper entitled “Towards a theory of machine learning”. The initial idea was to apply the methods of statistical mechanics to study the behavior of neural networks, but it turned out that in certain limits the learning (or training) dynamics of neural networks is very similar to the quantum dynamics we see in physics. At that time I was (and still am) on a sabbatical leave and decided to explore the idea that the physical world is actually a neural network. The idea is definitely crazy, but is it crazy enough to be true? That remains to be seen.

In the paper you wrote that to prove the theory was wrong, "all that is needed is to find a physical phenomenon which cannot be described by neural networks." What do you mean by that? Why is such a thing "easier said than done?"

Well, there are many "theories of everything" and most of them must be wrong. In my theory, everything you see around you is a neural network and so to prove it wrong all that is needed is to find a phenomenon which cannot be modeled with a neural network. But if you think about it, it is a very difficult task, mainly because we know so little about how neural networks behave and how machine learning actually works. That was why I tried to develop a theory of machine learning in the first place.

The idea is definitely crazy, but is it crazy enough to be true? That remains to be seen.

How does your research relate to quantum mechanics, and does it address the observer effect?

There are two main lines of thought: Everett’s (or many-worlds) interpretation of quantum mechanics and Bohm’s (or hidden variables) interpretation. I have nothing new to say about the many-worlds interpretation, but I think I can contribute something to the hidden variables theories. In the emergent quantum mechanics which I considered, the hidden variables are the states of the individual neurons and the trainable variables (such as bias vector and weight matrix) are quantum variables. Note that the hidden variables can be very non-local and so Bell’s inequalities are violated. An approximate space-time locality is expected to emerge, but strictly speaking every neuron can be connected to every other neuron and so the system need not be local.

Do you mind expanding on the way this theory relates to natural selection? How does natural selection factor into the evolution of complex structures/biological cells?

What I am saying is very simple. There are structures (or subnetworks) of the microscopic neural network which are more stable and there are other structures which are less stable. The more stable structures would survive the evolution, and the less stable structure would be exterminated. On the smallest scales I expect that the natural selection should produce some very low complexity structures such as chains of neurons, but on larger scales the structures would be more complicated. I see no reason why this process should be confined to a particular length scale and so the claim is that everything that we see around us (e.g. particles, atoms, cells, observers, etc.) is the outcome of natural selection.

I was intrigued by your first email when you said you might not understand everything yourself. What did you mean by that? Were you referring to the complexity of the neural network itself, or to something more philosophical?

Yes, I only refer to the complexity of neural networks. I did not even have time to think about what could be philosophical implications of the results.

I need to ask: would this theory mean we're living in a simulation?

No, we live in a neural network, but we might never know the difference.


Mon, 11 Jul 2022 10:33:00 -0500 text/html https://futurism.com/physicist-entire-universe-neural-network
Killexams : One Network Enterprises Strengthens Automated Intelligence and Machine Learning to Boost End-to-End Supply Chain Planning

One Network Enterprises (ONE), the leading global provider of intelligent control towers and the AI-driven Digital Supply Chain Network™, is pleased to announce significant advancements to the NEO Platform’s supply chain planning capabilities. These capabilities span the entire supply chain network ecosystem and functions, from revenue planning, demand planning, IBP/S&OP, supply chain planning, logistics, and network optimization.

With the release of NEO 3.5, One Network continues to make major strides in the area of AI and machine learning (ML), advancing the underlying proprietary framework of its intelligent agent, known as NEO, to increase the effectiveness of its learning capabilities. Two new capabilities are now available: Network BOM Constrained Supply Planning and Field Service Optimization. These new capabilities combine optimization with machine learning, enabling best-in-class prediction accuracy compared to traditional approaches.

NEO benefits organizations on the Digital Supply Chain Network™ with customer and provider insights, full network collaboration, and increased network density. NEO captures network-wide data, making that data available for big data analytics through a supply chain data warehouse. Predictive analytics are generated from this current and historical data by applying advanced analytics such as ML, neural networks, and combinatorial optimization, along with traditional analytical techniques.

Prescriptions are generated from predictive analytics based on solving root causes combined with targeted business and process KPIs. The prescriptions are “smart” in the sense that they incorporate both local and global objectives, and the relationship between demand, supply, and logistics. They offer a series of dynamic prescriptions that are sensitive to current conditions and constraints, to optimize execution and completely resolve problems.

NEO’s learning capabilities now include the ability to learn successful prescription sequences that generate optimal outcomes, so that it can offer those patterns in similar contexts in the future. The combination of the digitized supply chain network and smart prescriptions enables continuous and incremental planning on a near real-time basis. Thus, NEO 3.5 attains a significant milestone, enabling autonomous decision-making (based on user-defined KPI “guardrails”) across the supply network.

NEO 3.5 also introduces the concept of “bring your own intelligence” (BYOI), enabling companies to leverage insights from elsewhere as part of their decision-making on the Digital Supply Chain Network™. NEO enables BYOI with ML “plug points,” so customers can extend the solution based on their own analytics. Any such SDK-created extensions are guaranteed to be supported in future NEO releases.

These new capabilities are made possible by the distinctive architecture of One Network’s NEO Platform. On the platform, planning and execution run concurrently, using the same data objects and the same data model, enabling true planning married to execution. Forecasts, orders, and deliveries move through the network in a seamless flow across all time horizons, without the need for a bridge between planning and execution. Continuous and incremental planning provides near real-time demand-supply matching across the network. Opportunities and problems are handled through interactive workbenches, where they can be autonomously or collaboratively engaged. Due to the fact that NEO’s distributed transaction management spans demand, supply, and logistics, it enables more powerful, dynamic workflow problem resolution across all functions, and significantly increases the chances of completely resolving issues.

NEO 3.5 also introduces a new NEO capability called “Optimized Execution,” which is enabled by unifying planning and execution on one platform. Optimized Execution is equipped with sophisticated forecasting tools, real-time insights, decision support, and decision execution. It can run autonomously, based on KPI “guardrails” and user-defined business rules, or present exceptions, problems, and potential issues to users via the workbench. Users can review and execute NEO’s smart prescriptions, or collaborate with relevant trading partners to determine the best path forward.

Optimized Execution brings the traditionally distinct functions of planning and execution together in a unique way. Its smart prescriptions are designed to solve issues caused by demand and supply variation, to meet both local and network-wide objectives. Smart prescriptions bridge the gap between decision support and execution, by enabling plans to be executed even as demand, supply, and logistics conditions vary; and to ensure that execution continues to align with objectives.

The NEO 3.5 release demonstrates One Network’s commitment to making AI/ML transparent, practical, and a source of value for all trading partners in the Digital Supply Chain Network™.

Wed, 03 Aug 2022 23:36:00 -0500 text/html https://enterprisetalk.com/news/one-network-enterprises-strengthens-automated-intelligence-and-machine-learning-to-boosts-end-to-end-supply-chain-planning/
Killexams : How analog AI hardware may one day reduce costs and carbon emissions

Could analog artificial intelligence (AI) hardware – rather than digital – tap fast, low-energy processing to solve machine learning’s rising costs and carbon footprint? 

Researchers say yes: Logan Wright and Tatsuhiro Onodera, research scientists at NTT Research and Cornell University, envision a future where machine learning (ML) will be performed with novel physical hardware, such as those based on photonics or nanomechanics. These unconventional devices, they say, could be applied in both edge and server settings. 

Deep neural networks, which are at the heart of today’s AI efforts, hinge on the heavy use of digital processors like GPUs. But for years, there have been concerns about the monetary and environmental cost of machine learning, which increasingly limits the scalability of deep learning models. 

A 2019 paper out of the University of Massachusetts, Amherst, for example, performed a life cycle assessment for training several common large AI models. It found that the process can emit more than 626,000 pounds of carbon dioxide equivalent — nearly five times the lifetime emissions of the average American car, including the manufacturing of the car itself. 

At a session with NTT Research at VentureBeat Transform’s Executive Summit on July 19, CEO Kazu Gomi said machine learning doesn’t have to rely on digital circuits, but instead can run on a physical neural network. This is a type of artificial neural network in which physical analog hardware is used to emulate neurons as opposed to software-based approaches.

“One of the obvious benefits of using analog systems rather than digital is AI’s energy consumption,” he said. “The consumption issue is real, so the question is what are new ways to make machine learning faster and more energy-efficient?” 

Analog AI: More like the brain? 

In the early history of AI, people weren’t trying to think about how to make digital computers, Wright pointed out.

“They were trying to think about how we could emulate the brain, which of course is not digital,” he explained. “What I have in my head is an analog system, and it’s actually much more efficient at performing the types of calculations that go on in deep neural networks than today’s digital logic circuits.” 

The brain is one example of analog hardware for doing AI, but others include systems that use optics. 

“My favorite example is waves, because a lot of things like optics are based on waves,” he said. “In a bathtub, for instance, you could formulate the problem to encode a set of numbers. At the front of the bathtub, you can set up a wave and the height of the wave gives you this vector X. You let the system evolve for some time and the wave propagates to the other end of the bathtub. After some time you can then measure the height of that, and that gives you another set of numbers.” 

Essentially, nature itself can perform computations. “And you don’t need to plug it into anything,” he said. 

Analog AI hardware approaches

Researchers across the industry are using a variety of approaches to developing analog hardware. IBM Research, for example, has invested in analog electronics, in particular memristor technology, to perform machine learning calculations.

“It’s quite promising,” said Onodera. “These memristor circuits have the property of having information be naturally computed by nature as the electrons ‘flow’ through the circuit, allowing them to have potentially much lower energy consumption than digital electronics.” 
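As a toy numerical illustration of that point (not a device-accurate model): in a memristor crossbar, input voltages applied to the rows and programmable conductances at the crosspoints mean that, by Ohm's and Kirchhoff's laws, the current collected on each column is already a weighted sum of the inputs, i.e., the matrix-vector product at the heart of every neural network layer.

```python
# Toy sketch: the "computation" an analog crossbar performs for free.
# I_j = sum_i V_i * G_ij  -- column current = weighted sum of row voltages.
import numpy as np

rng = np.random.default_rng(0)

V = rng.uniform(0.0, 1.0, size=8)           # input voltages (the activation vector)
G = rng.uniform(1e-6, 1e-4, size=(8, 4))    # programmed conductances (the weight matrix)

I = V @ G   # what the physics of the array yields at the column outputs

# A digital accelerator computes exactly the same weighted sums, but with
# explicit multiply-accumulate operations instead of letting electrons do it.
print(I)
```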

NTT Research, however, is focused on a more general framework that isn’t limited to memristor technology. “Our work is focused on also enabling other physical systems, for instance those based on light and mechanics (sound), to perform machine learning,” he said. “By doing so, we can make smart sensors in the native physical domain where the information is generated, such as in the case of a smart microphone or a smart camera.” 

Startups including Mythic also focus on analog AI using electronics – which Wright says is a “great step, and it is probably the lowest risk way to get into analog neural networks.” But it’s also incremental and has a limited ceiling, he added: “There is only so much improvement in performance that is possible if the hardware is still based on electronics.” 

Long-term potential of analog AI

Several startups, such as Lightmatter, Lightelligence and Luminous Computing, use light, rather than electronics, to do the computing – known as photonics. This is riskier, less-mature technology, said Wright. 

“But the long-term potential is much more exciting,” he said. “Light-based neural networks could be much more energy-efficient.” 

However, light and electrons aren’t the only thing you can make a computer out of, especially for AI, he added. “You could make it out of biological materials, electrochemistry (like our own brains), or out of fluids, acoustic waves (sound), or mechanical objects, modernizing the earliest mechanical computers.” 

MIT researchers, for example, announced last week that they had built new protonic programmable resistors, a network of analog artificial neurons and synapses that can do calculations similarly to a digital neural network by repeating arrays of programmable resistors in intricate layers. They used “a practical inorganic material in the fabrication process,” they said, that enables their devices “to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.”

NTT Research says it’s taking a step further back from all these approaches and asking much bigger, much longer-term questions: What can we make a computer out of? And if we want to achieve the highest speed and energy efficiency AI systems, what should we physically make them out of?

“Our paper provides the first answer to these questions by telling us how we can make a neural network computer using any physical substrate,” said Logan. “And so far, our calculations suggest that making these weird computers will one day soon actually make a lot of sense, since they can be much more efficient than digital electronics, and even analog electronics. Light-based neural network computers seem like the best approach so far, but even that question isn’t completely answered.” 

Analog AI not the only nondigital hardware bet

According to Sara Hooker, a former Google Brain researcher who currently runs the nonprofit research lab Cohere for AI, the AI industry is “in this really interesting hardware stage.” 

Ten years ago, she explains, AI’s massive breakthrough was really a hardware breakthrough. “Deep neural networks did not work until GPUs, which were used for video games [and] were just repurposed for deep neural networks,” she said. 

The change, she added, was almost instantaneous. “Overnight, what took 13,000 CPUs took two GPUs,” she said. “That was how dramatic it was.”

It’s very likely that there’s other ways of representing the world that could be equally powerful as digital, she said. “If even one of these data directions starts to show progress, it can unlock a lot of both efficiency as well as different ways of learning representations,” she explained. “That’s what makes it worthwhile for labs to back them.” 

Hooker, whose 2020 essay “The Hardware Lottery” explored the reasons why various hardware tools have succeeded and failed, says the success of GPUs for deep neural networks was “actually a bizarre, lucky coincidence – it was winning the lottery.”

GPUs, she explained, were never designed for machine learning — they were developed for video games. So much of the adoption of GPUs for AI use “depended upon the right moment of alignment between progress on the hardware side and progress on the modeling side,” she said. “Making more hardware options available is the most important ingredient because it allows for more unexpected moments where you see those breakthroughs.” 

Analog AI, however, isn’t the only option researchers are looking at when it comes to reducing the costs and carbon emissions of AI. Researchers are placing bets on other areas like field-programmable gate arrays (FPGAs) as application-specific accelerators in data centers, which can reduce energy consumption and increase operating speed. There are also efforts to improve software, she explained.

Analog, she said, “is one of the riskier bets.”

Expiration date on current approach

Still, risks have to be taken, Hooker said. When asked whether she thought the big tech companies are supporting analog and other types of alternative nondigital AI future, she said, “One hundred percent. There is a clear motivation,” adding that what is lacking is sustained government investment in a long-term hardware landscape. 

“It’s always been tricky when investment rests solely on companies, because it’s so risky,” she said. “It often has to be part of a nationalist strategy for it to be a compelling long-term bet.” 

Hooker said she wouldn’t place her own bet on widespread analog AI hardware adoption, but insists the research efforts are good for the ecosystem as a whole.

“It’s kind of like the initial NASA flight to the moon,” she said. “There’s so many scientific breakthroughs that happen just by having an objective.”

And there is an expiration date on the industry’s current approach, she cautioned: “There’s an understanding among people in the field that there has to be some bet on more riskier projects.”

The future of analog AI

The NTT researchers made clear that the earliest, narrowest applications of their analog AI work will take at least 5-10 years to come to fruition – and even then will likely be used first for specific applications such as at the edge. 

“I think the most near-term applications will happen on the edge, where there are less resources, where you might not have as much power,” said Onodera. “I think that’s really where there’s the most potential.” 

One of the things the team is thinking about is which types of physical systems will be the most scalable and offer the biggest advantage in terms of energy efficiency and speed. But in terms of entering the deep learning infrastructure, it will likely happen incrementally, Wright said. 

“I think it would just slowly come into the market, with a multilayered network with maybe the front end happening on the analog domain,” he said. “I think that’s a much more sustainable approach.” 

Wed, 03 Aug 2022 14:49:00 -0500 Sharon Goldman en-US text/html https://venturebeat.com/applied-ai/how-analog-ai-hardware-may-one-day-reduce-costs-and-carbon-emissions/
Killexams : Overcoming The “Cold Start” Problem in Healthcare

Platform companies - those whose products enjoy network effects - have not only changed the way we live, but have upended industries, opened new markets, and completely revolutionized how business gets done. The majority of the top valued companies worldwide are platforms.

And now, the “Platform Revolution” has finally started to drive change across the healthcare industry, where there is tremendous opportunity to leverage platform concepts to bring an antiquated industry into the digital age and solve some of healthcare’s biggest challenges.

The potential for platforms to unleash their power and improve a fragmented, broken healthcare system is undeniably great. But bringing their benefits to fruition relies on achieving virtuous network effects and overcoming the “cold start problem.”


Network Effects: Why Platforms Are Becoming The Preeminent Business Model In Healthcare

Digital health platforms have been catching fire for years now and funding has exploded in kind. Digital health platform investment hit nearly $12B in 2021, outpacing all digital health investments (platforms and non-platforms) just two years prior in 2019 ($8B).

What’s driving this incredible uptick and market interest is the fact that platforms are uniquely well-suited to fix critical issues that have plagued the healthcare industry for years– from more effectively matching supply and demand for healthcare solutions and services, to lowering transaction costs and reducing information asymmetry. Platforms have also proven to be a valuation darling, growing and scaling faster than their non-platform counterparts and ultimately delivering more value and profitability to investors.

The valuation premium that platforms achieve is explained by network effects. Network effects are used to describe a situation in which the value of a platform ultimately depends on the number of individuals or parties using it; the greater the number of users on either side of a network, the greater the network effect and value it delivers. The ultimate goal for any platform is to achieve the “flywheel effect,” where virtuous network effects alone are enough to sustain a network’s growth at a steady or accelerating rate.

Getting to this point, however, is no easy task. Network effects may explain why platform companies are able to get so powerful, valuable and profitable once they achieve the “flywheel effect”, but it’s also what leads them to fail at 2x the rate of their non-platform counterparts early on: the absence of a critical mass of other users can deter adoption. Achieving virtuous network effects requires that platforms overcome this “cold start problem,” and find creative ways to do so.


Overcoming The “Cold Start Problem” In Healthcare

In his book, The Cold Start Problem: How to Start and Scale Network Effects, Andrew Chen – general partner at Andreessen Horowitz and former Uber executive during the company’s high-growth, pre-IPO years – explores how some of today’s biggest platforms overcame the “cold start problem” by using network effects to launch and ultimately scale to billions of users.

The majority of Chen’s recommendations come from platforms serving B2C constituents, or from those that can tap into product-led growth - in which individuals or departments start using a product on their own - as a wedge into business revenue.

Unfortunately, healthcare is a different beast, which means while many of Chen’s recommendations hold water as-is, others must be reconsidered. For instance, as healthcare is not a normal “good”, consumers generally do not make decisions in a vacuum; ideally, they make decisions with their providers. In addition, the regulatory environment and real-time clinical consequences mean there is less opportunity for individual procurement decisions in an organizational context, so product-led growth may be more difficult. And because of regulatory conditions, the interconnectedness of players, and incumbent inertia, we don’t see many examples of “virality”, and markets do not tip as quickly.

In short, overcoming the cold start problem and achieving network effects can be incredibly difficult.

That said, there are network-effects-building tactics from Chen’s book and beyond that digital health platform leaders should keep in mind in their quest to overcome the cold start, achieve the flywheel, and deliver outsized returns:

1. Start with transactions that solve problems for both parties: For a platform to solve an integration or common transaction problem, it needs to solve a problem for both sides of a commercial transaction. Ideon (formerly Vericred), for example, connects health insurance and benefits carriers with insurtech companies via its reusable APIs for more effective data exchange at scale with multiple partners. Ideon’s single integration point allows each side (health insurance carrier and insurtech) to focus more on their core businesses, and less on integration efforts (a back-of-the-envelope sketch of why a single integration point pays off appears after this list).

2. Embrace the “Mechanical Turk”, i.e., “Flintstoning”: The idea behind Chen’s term “Flintstoning” suggests a little smoke and mirrors. In other words, if the product doesn’t yet automate all the features users might need, have employees provide some of those features behind the scenes. Take CoverMyMeds, for example. Before the company was able to digitally connect doctors with PBMs and fully automate the electronic processing of prior authorizations (PAs) for medication, it used a combination of back-end faxing to PBMs and a large, staffed support center, where employees manually processed the faxed forms. This was all invisible to the clinician users, who simply interacted with CoverMyMeds’ website.

This was of course not a sustainable approach and is no longer the case today, but the user experience CoverMyMeds was able to deliver in the beginning won out. The website and illusion of a seamless digital transaction kept current users on the platform and brought others to it, which helped the company achieve network effects, prove out its concept, and simultaneously build out the platform as intended.

3. Create (or leverage) a sense of exclusivity: Another approach that Andrew Chen mentions in his book is using an invite-only approach to create a sense of user exclusivity and thus scarcity and platform demand. While that’s generally not going to be a successful approach in healthcare, platform companies that have same-side network effects can benefit from it. Doximity, a doc-to-doc professional networking site, and Patients Like Me, a platform community of patients with rare diseases, are both use-case focused and cater to specific groups, which helped to cultivate the user bases for each.

Importantly, any platform able to cultivate same-side network effects can enhance the value of its product or service to users, is likely to see faster growth of its user base, and gains strong network defensibility, given that same-side network effects are so hard to replicate.

4. Start small and focus on “atomic networks”: Chen also writes about starting small and building atomic networks first. If platforms start small and with fewer sides, they can better identify stakeholder interests, develop features that map to those interests, and figure out the workflow orchestration to ensure both (or all) sides receive real value. Starting out smaller and narrower reduces the number of potential failure points. A good example (and perhaps the only example) in interoperability is e-prescribing.

E-prescribing was Surescripts’ first use case on its health information network. It worked well early on because it involved a specific set of transactions with clearly defined partners, concrete workflow orchestration between those parties, and a network operator providing technical support to each side. Today, the network processes the majority of e-prescriptions in the country, in addition to a number of other healthcare transactions among providers, pharmacies and PBMs.

5. Leverage “operational virality”: Viral user-led growth typically doesn’t happen in healthcare, given consumers don’t behave the same way and decisions aren’t made in the same way or as quickly. However, there are ways to tap into viral audience-development concepts for healthcare at an operational level – by leveraging day-to-day relationships with other stakeholders to help steer interest.

CoverMyMeds accomplished this when it saw stalled physician adoption on its PA platform. The company realized that it could grow its user base by going to pharmacies first, which is where most medication prior authorization headaches start. By offering pharmacies a free platform integration, with the promise of getting PAs resolved more quickly, CoverMyMeds got pharmacies to point doctors to the site. And it worked – CoverMyMeds’ network grew in both the number and types of users. What’s good for the goose was good for the gander.

6. Focus on aligning stakeholder incentives: Misaligned incentives plague the healthcare industry, one of the most highly-governed industries in the country. The platform companies that are likely to make the biggest difference are the ones that can find the points where interests align. Sempre Health, a platform that enables behavior-based, dynamic medication pricing, has done a great job at aligning incentives to fix a longstanding industry issue.

Sempre Health’s two sides typically have conflicting interests: pharmaceutical manufacturers on one side (with an incentive to get patients on certain therapies) and health plans on the other (just trying to manage costs).

By developing a marketplace that connects typically “combative” parties, and agreeing on the rules of engagement, Sempre has been able to align competing interests and address a market deficiency in a way that benefits multiple stakeholders. Pharma clients win because the platform helps to get patients on therapies; health plans win because they can choose what they want to offer members; and patients win because they have increased access to therapies.

7. Piggybacking off of existing networks: Perhaps the best-known example of one network piggybacking off of another is PayPal piggybacking off of eBay, whereby eBay transactions came to be made primarily through PayPal’s integration into eBay’s platform. By tapping into eBay’s existing network of buyers and sellers, PayPal solved a payment problem for both sides, which ended up being one of the reasons eBay acquired PayPal.

In healthcare, a great example of piggybacking is Health Gorilla, a data aggregator platform that normalizes and standardizes healthcare data. Health Gorilla improves data access and is helping to solve for health data interoperability by serving as the first connector between CommonWell and Carequality.

8. “Come for the tool, stay for the network”: Chen’s example here is one of the best-known consumer platforms: Instagram, which provided a valuable picture-editing and posting tool, and grew its network of users – and the value of that network to certain users (influencers) – from there.

One company taking this approach to network building and value generation is Zus Health, which has created an infrastructure for digital health companies to help them improve speed to market. The more clients Zus Health gets, the more attractive its data sources become. Zus is also building out valuable tools for its clients, and those tools will only become more powerful as they become part of standards development.

9. Build the killer app (or get it federally subsidized): The example Chen uses in his book for the “killer app” is Zoom, but the learning can be applied in healthcare. If a company can build a killer app, it can always be transitioned into a platform structure, so long as other parties find value in interacting with the platform’s users (or the data they generate) and there is a way to balance the interests of the original set of users with those of the new party.

Electronic health records (EHR) vendors represent healthcare’s biggest unmet opportunity for platformization in this way. Although they certainly aren’t the “killer app”, EHRs that benefited from federal subsidies and now have a significant market footprint are well positioned to shift to a platform strategy; if not, they are likely to fade away into further irrelevance.

10. Create FOMO with a “Big Bang” of PR and communications activities: Digital health platforms should not overlook or underestimate the importance of doing PR and comms right at the outset to create FOMO (fear of missing out) and the interest that comes with it. Having a well thought-out and orchestrated PR and comms launch strategy can help drum up early interest and network-building momentum, including activities like collaborating with marquee clients on announcements, speaking opportunities or other thought leadership initiatives. There is, however, a downside to this approach: if there aren’t enough existing network users on the platform, then new adopters may be dismayed by the lack of value, and drop off the platform shortly after joining.
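Returning to the Ideon example in point 1, the payoff of a single integration point can be sketched with a few lines of Python. The market sizes below are hypothetical, used only to show how point-to-point integration effort grows multiplicatively while a shared hub or API grows additively.

    # Minimal sketch with hypothetical counts: compares the number of
    # integrations needed for point-to-point connections versus a shared hub/API.
    def point_to_point(carriers: int, insurtechs: int) -> int:
        return carriers * insurtechs      # every carrier-insurtech pair builds its own integration

    def via_hub(carriers: int, insurtechs: int) -> int:
        return carriers + insurtechs      # each party builds one integration, with the hub

    carriers, insurtechs = 30, 50         # hypothetical market sizes
    print("point-to-point integrations:", point_to_point(carriers, insurtechs))   # 1500
    print("via a shared hub/API:", via_hub(carriers, insurtechs))                 # 80

That shift from multiplicative to additive integration effort is what lets each side focus on its core business rather than on plumbing, and it is a common structural reason transaction-focused platforms can clear the cold start: the hub only has to be built once.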

Beyond The Cold Start

Healthcare platforms are only as successful as the network effects they are able to achieve. Overcoming the cold start takes a level of scrappiness, creativity, and business acumen that not all platform companies will be able to deliver, especially in as complex and fragmented an industry as healthcare. But for the ones that push the envelope and strive to get it right, the network effects and industry impact can be incredibly virtuous.

Stay up to date on the latest platform concepts and evolving industry imperatives by registering for The Platform Revolution Comes to Healthcare: A Deep Dive at the 2022 MIT Platform Strategy Summit, and in particular the “Value Creation from Data” panel at 3:20pm ET on July 13, 2022, with panelists:

  • Micky Tripathi – National Coordinator, Office of the National Coordinator for Health IT
  • Michael Byczkowski – Global Head of Healthcare, SAP
  • Christian Howell – Vice President & General Manager, Medical Device and Diagnostic Group, Aetion
  • Suchi Saria – Founder and CEO, Bayesian Health & Director of the Machine Learning, AI and Healthcare Lab, Johns Hopkins University
Source: Seth Joseph, Forbes, July 2022 (https://www.forbes.com/sites/sethjoseph/2022/07/11/overcoming-the-cold-start-problem-in-healthcare/)
Killexams : Bridging the region’s digital connectivity divide

The COVID-19 pandemic exposed multiple societal inequities, many of which were previously underaddressed by both policy and charitable efforts. Existing underlying needs were laid bare, and the haves and have-nots suddenly came into view in ways previously unseen. The startling consequences of public shutdown were swift, segregative and broad in scope.

Across the nation, an imbalance in digital capability became grossly apparent. As terms like “work from home,” “Zoom meeting” and “remote learning” worked their way further into the common vernacular, many rural and low-income residents were left in the lurch. Those who were not deemed “essential” within the pandemic landscape often lacked the proper means for virtual transportation to their previous jobs or studies. Children attempted to do schoolwork on cellular phones. Teachers in remote communities struggled with inadequate connectivity for reaching their students simultaneously or in groups.

While funding was rapidly distributed at the federal and state levels through the CARES Act and various other emergency relief efforts, questions remain about the outcome of these hurried attempts to bridge the digital divide. How have local communities been affected? What continuing changes are being made to bring digital equity to Cincinnati and its surrounding communities with their vastly different needs?

At the onset, CARES Act funding rushed tech devices to the doorsteps of underserved children, and local internet companies offered free connectivity to low-income residents. This, however, was difficult to procure or implement, as phone lines were immediately and continuously jammed with requests for the sought-after benefit.

Eventually, providing hotspots and free broadband to neighborhoods, businesses or even specific apartment complexes was seen as a viable blanket method, but only as a short-term solution to a much larger problem.

Digital inequity is the modern-day version of transportation inequity. In today’s society, being unable to easily reach out along these new, virtual paths is similar to having a broken-down, out-of-gas or nonexistent vehicle. The parallel holds for owning outdated or broken devices, having no or slow internet access, or both.

Renee Mahaffey Harris has been president and CEO of The Health Gap for the past three years. Although her organization’s mission is more about empowering Black communities to be proactive about their health, the pandemic increased the need to help residents overcome tech barriers as part of that work.

“I think all these issues are systemic issues which cannot be solved overnight. And so, what the pandemic did was just further widen the understanding of a gap to equity in care. Not that anybody designed the great broadband system to leave a population out – I’m not saying that, but I think COVID just exposed the gap,” observes Mahaffey Harris.

Mahaffey Harris’s work revolves around underserved minority populations in communities around the city of Cincinnati. However, she worries about inequity and systemic issues in remote areas as well. She is in contact with many transplanted Appalachian communities pocketed around the vicinity and strives to understand their needs and provide them with resources.

Funding mostly targeted at areas with the worst COVID-19 outbreaks left outlying communities healthier overall, but even more isolated in a sense. While the systemic issues surrounding online healthcare and new formats like Telehealth are somewhat analogous in rural communities and in low-income communities nearer the city, the solutions are different because of geography.

Renee Mahaffey Harris, President/CEO of The Center for Closing the Health Gap

“There is an intentionality around looking at it – an understanding that there's not a one size fits all and no population is monolithic,” offers Mahaffey Harris. “Rural communities have very big challenges, different than urban settings, because the geography requires more mobile strategies. People live distant from that source of medical care, and the solution looks different.”

So, what progress has been made locally towards long-term solutions for bridging the disparate digital gaps exposed by the pandemic?

“The effort to eliminate racial and ethnic health disparities, advocacy, education and community outreach is our mission,” says Mahaffey Harris. “How we do that is through our grassroots mobilization model, which is focused on activating the individual agency of the people that we are serving so that they get the tools and knowledge to be a part of their own health solutions and advocate for themselves.”

“COVID resulted in us getting a grant for iPads with programming, and we designed an instruction guide. Our team then went into senior buildings and other community-based sites to train people on how to better utilize technology,” continues Mahaffey. “Implementation of tech skills was part of it for older residents and members of the community that didn't have access.”

Additionally, The Health Gap conducted a town hall format webinar to assist residents with effectively navigating the Telehealth landscape. This was important because, as Mahaffey Harris stresses, the provided resources were worthless without the knowledge of how to access them using specific (and often unfamiliar) tools.

“One of those tools is broadband or having the right device. But if (tech training) isn’t something that educational loop has been closed on, you can’t be given the access to it,” says Mahaffey Harris.

“We were also contacted by three different neighborhoods – Avondale, Evanston and Walnut Hills – because parents, even though their children were getting (tools) to work remotely, weren't able to assist their children with navigating whatever that technology was that they got,” adds Mahaffey.

Mahaffey Harris firmly believes that addressing various longstanding systemic issues at their core is the only way to enact genuine change in neighborhoods with serious economic problems, but she faults the incongruence of various outreach methodologies for hindering the achievement of this shared, overall purpose.

She says she is also frustrated by what she sees as a shocking lack of local awareness of the vast community resources available. She believes that the allocation of these assets could be improved with more organization – specifically via informed data use guiding precisely tailored delivery systems that “meet different groups of people where they are.”

The Health Gap’s website now includes a link directing users to an online survey about their at-home broadband capability, which collects useful data for a partner organization about how well those contacting The Health Gap are being served in relation to digital connectivity.

“The strategies to correct or address the challenges have to be very targeted,” reiterates Mahaffey Harris. “Our data tells us where we are. The factors are very difficult to change because they didn't get here overnight. And there is no quick fix. Those are the realities. It's not easy. If it were easy it’d be done.”

“High-speed connectivity is essential in order for individuals to access education opportunities, employment opportunities, and healthcare opportunities as we live in a world where remote education, remote work, and telemedicine are increasingly prevalent,” explains Jason Praeter, president and general manager of altafiber’s network division.

But according to a research study by the Benton Institute for Broadband & Society, over half of the funding from the CARES Act focused on digital learning for K-12 students, while only a third was spent on broadband infrastructure – now viewed as a vital resource for success in developing and sustaining wealth in local communities.

While in retrospect this may have been shortsighted, paving the way for future employment opportunities and healthcare took a backseat to the immediate need for educational provisions because schooling was the more obvious pandemic problem, at least at first glance.

Altafiber, formerly Cincinnati Bell, has been an active local and regional participant in several recent initiatives aimed at bridging the connectivity gaps initially exposed by the pandemic. The priority has been to stretch broadband fiber into rural communities and get low-income residents no-cost access in their homes, local businesses, and community buildings.

Altafiber’s goal of extending its fiber network to underserved communities has been strengthened by the corporation’s recent acquisition by Macquarie. Furthering the mission of connectivity for all are programs such as Connect Our Students, the NKY Digital Equity Initiative, UniCity (altafiber’s Smart City organization), and a partnership with the Butler Rural Electric Cooperative.

The BREC partnership brought fiber-based internet to approximately 2,000 Butler County member locations in 2021, and included obtaining the cooperative’s substations and switching equipment; UniCity, meanwhile, delivered fiber-enabled, high-speed public Wi-Fi to Clovernook Apartments, Compton Lake, Burney View, and Lake of the Woods. In total, this represents 828 apartment units in Mt. Healthy that now have access to public Wi-Fi.

“We were thrilled to assist the City of Mt. Healthy in connecting more Hamilton County residents, schools, and businesses to the internet over Wi-Fi,” said Stephanie Summerow Dumas, president of the Hamilton County Board of County Commissioners, regarding the UniCity initiative in Mt. Healthy. “We’ve been able to make a transformative investment in our community’s ability to increase digital equity.”

James Wolf, Mt. Healthy Mayor, felt similarly, adding an emphasis on public/private partnerships: “By working together, Mt. Healthy City School District, the City of Mt. Healthy, the State of Ohio, and altafiber are increasing information accessibility for our residents and bridging the digital divide.”

Dr. Valerie Hawkins, Mt. Healthy Schools Superintendent also weighed in, saying, “The students at Mt. Healthy City Schools deserve every opportunity to help them achieve their potential. Removing the obstacle of reliable Wi-Fi is a step closer to equity for our students.”

The current goal of altafiber’s UniCity effort in the City of Wyoming is somewhat different, owing to differing community needs. A focus on Wi-Fi coverage in the central business district, the village green and Crescent Park is intended to enhance community engagement for visitors and residents through seamless event navigation and calendar updates. This is part of a broader effort to ignite economic growth in Wyoming.

Springfield Township, Montgomery, Cheviot and Lockland have also benefitted from UniCity partnerships. Many local leaders look toward the possibility of expansion for UniCity and similar initiatives, hoping for assistance with their own communities’ various requirements for connectivity.

“We could definitely put assets like that to good public use. I think the ability to offer free high speed internet access to low-income communities could benefit those local economies,” says Stefan Densmore, Mayor of the Village of Golf Manor.

As a result of pandemic-wrought struggles – faced nationally, as a region, as a city, and as individual communities – a developing landscape has been further revealed. New paths to navigate have been created, and new ways of addressing longstanding barriers to equity have emerged. Over time, with planning, partnerships and communication, gaps can be bridged – bringing one unique household at a time into this modern age of interconnectivity and all the resources it can bestow.

The Health Gap’s Mahaffey Harris sees this as a great opportunity.

“COVID-19 gave me hope. I mean, I know it was difficult time for all of us, but it gave me hope because we had no choice but to work together. And when we work together, we tackle hard issues,” she says. “We can continue to move down this continuum. We need to recognize that when we had to do it, we did it, right? Let's figure out how to keep doing it. We need to understand that when we are all better, then we are better as a community.”

The First Suburbs—Beyond Borders series is made possible with support from a coalition of stakeholders, including Mercy Health, a Catholic health care ministry serving Ohio and Kentucky; the Murray & Agnes Seasongood Good Government Foundation, which is devoted to the cause of good local government; LISC Greater Cincinnati, which supports resident-led, community-based development organizations in transforming communities and neighborhoods; the Hamilton County Planning Partnership; and the First Suburbs Consortium of Southwest Ohio, an association of elected and appointed officials representing older suburban communities in Hamilton County, Ohio.
 

Source: Soapbox Media, August 8, 2022 (https://www.soapboxmedia.com/features/bridging-connectivity-digital-divide.aspx)