There is no better option than our 700-821 Actual Questions and Exam Questions

You will get an exact replica of the 700-821 real exam questions that you will face in the actual test. Killexams.com maintains a database of 700-821 Exam Questions: a large question bank highly pertinent to 700-821, built from test takers who attempted the 700-821 exam and passed with high scores.


700-821 Dumps - Cisco IoT Essentials for System Engineers (IOTSE) Updated: 2024

Look at these 700-821 real questions and answers
Exam Code: 700-821 Cisco IoT Essentials for System Engineers (IOTSE) Dumps January 2024 by Killexams.com team
Cisco IoT Essentials for System Engineers (IOTSE)
Cisco IoT Essentials Questions and Answers

Other Microsoft exams

MOFF-EN Microsoft Operations Framework Foundation
62-193 Technology Literacy for Educators
AZ-400 Microsoft Azure DevOps Solutions
DP-100 Designing and Implementing a Data Science Solution on Azure
MD-100 Windows 10
MD-101 Managing Modern Desktops
MS-100 Microsoft 365 Identity and Services
MS-101 Microsoft 365 Mobility and Security
MB-210 Microsoft Dynamics 365 for Sales
MB-230 Microsoft Dynamics 365 for Customer Service
MB-240 Microsoft Dynamics 365 for Field Service
MB-310 Microsoft Dynamics 365 for Finance and Operations, Financials (2023)
MB-320 Microsoft Dynamics 365 for Finance and Operations, Manufacturing
MS-900 Microsoft 365 Fundamentals
MB-220 Microsoft Dynamics 365 for Marketing
MB-300 Microsoft Dynamics 365 - Core Finance and Operations
MB-330 Microsoft Dynamics 365 for Finance and Operations, Supply Chain Management
AZ-500 Microsoft Azure Security Technologies 2023
MS-500 Microsoft 365 Security Administration
AZ-204 Developing Solutions for Microsoft Azure
MS-700 Managing Microsoft Teams
AZ-120 Planning and Administering Microsoft Azure for SAP Workloads
AZ-220 Microsoft Azure IoT Developer
MB-700 Microsoft Dynamics 365: Finance and Operations Apps Solution Architect
AZ-104 Microsoft Azure Administrator 2023
AZ-303 Microsoft Azure Architect Technologies
AZ-304 Microsoft Azure Architect Design
DA-100 Analyzing Data with Microsoft Power BI
DP-300 Administering Relational Databases on Microsoft Azure
DP-900 Microsoft Azure Data Fundamentals
MS-203 Microsoft 365 Messaging
MS-600 Building Applications and Solutions with Microsoft 365 Core Services
PL-100 Microsoft Power Platform App Maker
PL-200 Microsoft Power Platform Functional Consultant
PL-400 Microsoft Power Platform Developer
AI-900 Microsoft Azure AI Fundamentals
MB-500 Microsoft Dynamics 365: Finance and Operations Apps Developer
SC-400 Microsoft Information Protection Administrator
MB-920 Microsoft Dynamics 365 Fundamentals Finance and Operations Apps (ERP)
MB-800 Microsoft Dynamics 365 Business Central Functional Consultant
PL-600 Microsoft Power Platform Solution Architect
AZ-600 Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack Hub
SC-300 Microsoft Identity and Access Administrator
SC-200 Microsoft Security Operations Analyst
DP-203 Data Engineering on Microsoft Azure
MB-910 Microsoft Dynamics 365 Fundamentals (CRM)
AI-102 Designing and Implementing a Microsoft Azure AI Solution
AZ-140 Configuring and Operating Windows Virtual Desktop on Microsoft Azure
MB-340 Microsoft Dynamics 365 Commerce Functional Consultant
MS-740 Troubleshooting Microsoft Teams
SC-900 Microsoft Security, Compliance, and Identity Fundamentals
AZ-800 Administering Windows Server Hybrid Core Infrastructure
AZ-801 Configuring Windows Server Hybrid Advanced Services
AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
AZ-305 Designing Microsoft Azure Infrastructure Solutions
AZ-900 Microsoft Azure Fundamentals
PL-300 Microsoft Power BI Data Analyst
PL-900 Microsoft Power Platform Fundamentals
MS-720 Microsoft Teams Voice Engineer
DP-500 Designing and Implementing Enterprise-Scale Analytics Solutions Using Microsoft Azure and Microsoft Power BI
PL-500 Microsoft Power Automate RPA Developer
SC-100 Microsoft Cybersecurity Architect
MO-201 Microsoft Excel Expert (Excel and Excel 2019)
MO-100 Microsoft Word (Word and Word 2019)
MS-220 Troubleshooting Microsoft Exchange Online
DP-420 Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB
MB-335 Microsoft Dynamics 365 Supply Chain Management Functional Consultant Expert
MB-260 Microsoft Dynamics 365 Customer Insights (Data) Specialist
AZ-720 Troubleshooting Microsoft Azure Connectivity
700-821 Cisco IoT Essentials for System Engineers (IOTSE)
MS-721 Microsoft 365 Certified: Collaboration Communications Systems Engineer Associate
MD-102 Microsoft 365 Certified: Endpoint Administrator Associate
MS-102 Microsoft 365 Administrator

It is great to pass the 700-821 exam with high marks at the first attempt. We guarantee your 700-821 exam success with our valid and updated 700-821 dumps, taken from real 700-821 exams. Our main focus is to help you pass the 700-821 exam while greatly improving your knowledge of the 700-821 objectives as well, so our 700-821 exam dumps offer a two-in-one benefit.
Question: 201
A customer needs to extend their enterprise WiFi network to remote IoT spaces with the IR1800.
Which mode must be used for the WiFi module in the IR1800?
A. EWC Mode
B. WGB mode
C. FlexConnect mode
D. CAPWAP mode
Answer: B
Question: 202
With its tri-radio architecture, which three frequency bands are supported on the IW9167E? (Choose three.)
A. 2.4GHz
B. 6GHz
C. 1GHz
D. 160MHz
E. 900MHz
F. 5GHz
Answer: A,B,F
Question: 203
Which Industrial IE Switch supports StackWise stacking technology?
A. IE4010
B. IE9300 series
C. IE4000
D. IE5000
Answer: B
Question: 204
What are two descriptions of an IoT SD-WAN use case? (Choose two.)
A. Office365 video conference applications
B. high bandwidth requirements
C. hub and spoke topology
D. full mesh topology
E. harsh, outdoor environments
Answer: A,C,E
Question: 205
How does the IoT Operations Dashboard detect whether a device is still alive?
A. Heartbeat Interval
B. Ping
C. SSH
D. Traps
Answer: A
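
For context on answer A: heartbeat-based liveness detection has the device (or its agent) send a periodic message, and the monitoring side marks the device down once enough intervals pass without one. A minimal server-side sketch in Python, with illustrative interval and threshold values (not Cisco's actual settings):

import time

HEARTBEAT_INTERVAL = 60   # seconds between expected heartbeats (illustrative)
MISSED_THRESHOLD = 3      # missed intervals before a device is marked down

last_seen = {}  # device_id -> time of last heartbeat

def record_heartbeat(device_id):
    """Call whenever a heartbeat message arrives from a device."""
    last_seen[device_id] = time.monotonic()

def is_alive(device_id):
    """A device counts as alive if a heartbeat arrived within the window."""
    seen = last_seen.get(device_id)
    return seen is not None and time.monotonic() - seen < HEARTBEAT_INTERVAL * MISSED_THRESHOLD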
Question: 206
What is a feature of IW6300?
A. The IW6300 PoE output ports support two PoE+ (30W) devices at the same time.
B. IW6300 can work in CURWB mode.
C. IW6300 is built for Class 1 Division 2 hazardous environments.
D. IW6300 is IP54-rated.
Answer: C
Question: 207
Which two features are available on an IR1831 router? (Choose two.)
A. CANBus
B. mSATA Module
C. PoE
D. WiFi6 Module
E. GPIO
Answer: A,B,C
Question: 208
What is the Cisco-hosted PnP Connect server name to onboard an IR1101 using the IoT Operations Dashboard?
A. helper.cisco.com
B. pnp.cisco.com
C. devicehelper.cisco.com
D. devicepnp.cisco.com
Answer: C
Question: 209
Which tool is available on the Troubleshooting tab of a gateway device in the IoT Operations Dashboard?
A. report on user access and usage
B. remote connection to the subtended devices
C. menu-driven diagnostic tools for operators
D. ping and traceroute from the gateway
Answer: D
Question: 210
Which platform supports WiFi 6/6E?
A. IW6300
B. IR1101
C. IW9167E
D. IG31R
Answer: C

Microsoft Security Essentials Won't Open or Run

After majoring in physics, Kevin Lee began writing professionally in 1989 when, as a software developer, he also created technical articles for the Johnson Space Center. Today this urban Texas cowboy continues to crank out high-quality software as well as non-technical articles covering a multitude of diverse topics ranging from gaming to current affairs.

Source: https://smallbusiness.chron.com/microsoft-security-essentials-wont-open-run-81511.html (July 23, 2018)
5 reasons Microsoft should revive Windows Essentials

Microsoft's Windows Essentials was a beloved collection of apps, but it lacked long-term commitment and was ultimately discontinued. With an incoming revamp of Windows, now could be a good time ...

Source: https://www.msn.com/ (December 15, 2023)

Microsoft Released an AI That Answers Medical Questions, But It’s Wildly Inaccurate

Image by Getty / Futurism

Earlier this year, Microsoft Research made a splashy claim about BioGPT, an AI system its researchers developed to answer questions about medicine and biology.

In a Twitter post, the software giant claimed the system had "achieved human parity," meaning a test had shown it could perform about as well as a person under certain circumstances. The tweet went viral. In certain corners of the internet, riding the hype wave of OpenAI’s newly-released ChatGPT, the response was almost rapturous.

"It’s happening," tweeted one biomedical researcher. 

"Life comes at you fast," mused another. "Learn to adapt and experiment."

It’s true that BioGPT’s answers are written in the precise, confident style of the papers in biomedical journals that Microsoft used as training data.

But in Futurism’s testing, it soon became clear that in its current state, the system is prone to producing wildly inaccurate answers that no competent researcher or medical worker would ever suggest. The model will output nonsensical answers about pseudoscientific and supernatural phenomena, and in some cases even produces misinformation that could be dangerous to poorly-informed patients.

A particularly striking shortcoming? Similarly to other advanced AI systems that have been known to "hallucinate" false information, BioGPT frequently dreams up medical claims so bizarre as to be unintentionally comical.

Asked about the average number of ghosts haunting an American hospital, for example, it cited nonexistent data from the American Hospital Association that it said showed the "average number of ghosts per hospital was 1.4." Asked how ghosts affect the length of hospitalization, the AI replied that patients "who see the ghosts of their relatives have worse outcomes while those who see unrelated ghosts do not."

Other weaknesses of the AI are more serious, sometimes providing serious misinformation about hot-button medical topics. 

BioGPT will also generate text that would make conspiracy theorists salivate, even suggesting that childhood vaccination can cause the onset of autism. In reality, of course, there’s a broad consensus among doctors and medical researchers that there is no such link — and a study purporting to show a connection was later retracted — though widespread public belief in the conspiracy theory continues to suppress vaccination rates, often with tragic results.

BioGPT doesn’t seem to have gotten that memo, though. Asked about the topic, it replied that "vaccines are one of the possible causes of autism." (However, it hedged in a head-scratching caveat, "I am not advocating for or against the use of vaccines.")

It’s not unusual for BioGPT to provide an answer that blatantly contradicts itself. Slightly modifying the phrasing of the question about vaccines, for example, prompted a different result — but one that, again, contained a serious error.

"Vaccines are not the cause of autism," it conceded this time, before falsely claiming that the "MMR [measles, mumps, and rubella] vaccine was withdrawn from the US market because of concerns about autism." 

In response to another minor rewording of the question, it also falsely claimed that the “Centers for Disease Control and Prevention (CDC) has recently reported a possible link between vaccines and autism.”

It feels almost insufficient to call this type of self-contradicting word salad "inaccurate." It seems more like a blended-up average of the AI’s training data, seemingly grabbing words from scientific papers and reassembling them in grammatically convincing ways resembling medical answers, but with little regard to factual accuracy or even consistency. 

Roxana Daneshjou, a clinical scholar at the Stanford University School of Medicine who studies the rise of AI in healthcare, told Futurism that models like BioGPT are "trained to provide answers that sound plausible as speech or written language." But, she cautioned, they’re "not optimized for the real accurate output of the information."

Another worrying aspect is that BioGPT, like ChatGPT, is prone to inventing citations and fabricating studies to support its claims.

"The thing about the made-up citations is that they look real because it [BioGPT] was trained to create outputs that look like human language," Daneshjou said. 

"I think my biggest concern is just seeing how people in medicine are wanting to start to use this without fully understanding what all the limitations are," she added. 

A Microsoft spokesperson declined to directly answer questions about BioGPT’s accuracy issues, and didn’t comment on whether there were concerns that people would misunderstand or misuse the model.

"We have responsible AI policies, practices and tools that guide our approach, and we involve a multidisciplinary team of experts to help us understand potential harms and mitigations as we continue to improve our processes," the spokesperson said in a statement.

"BioGPT is a large language model for biomedical literature text mining and generation," they added. "It is intended to help researchers best use and understand the rapidly increasing amount of biomedical research publishing every day as new discoveries are made. It is not intended to be used as a consumer-facing diagnostic tool. As regulators like the FDA work to ensure that medical advice software works as intended and does no harm, Microsoft is committed to sharing our own learnings, innovations, and best practices with decision makers, researchers, data scientists, developers and others. We will continue to participate in broader societal conversations about whether and how AI should be used."

Microsoft Health Futures senior director Hoifung Poon, who worked on BioGPT, defended the decision to release the project in its current form.

"BioGPT is a research project," he said. "We released BioGPT in its current state so that others may reproduce and verify our work as well as study the viability of large language models in biomedical research."

It’s true that the question of when and how to release potentially risky software is a tricky one. Making experimental code open source means that others can inspect how it works, evaluate its shortcomings, and make their own improvements or derivatives. But at the same time, releasing BioGPT in its current state makes a powerful new misinformation machine available to anyone with an internet connection — and with all the apparent authority of Microsoft’s distinguished research division, to boot.

Katie Link, a medical student at the Icahn School of Medicine and a machine learning engineer at the AI company Hugging Face — which hosts an online version of BioGPT that visitors can play around with — told Futurism that there are important tradeoffs to consider before deciding whether to make a program like BioGPT open source. If researchers do opt for that choice, one basic step she suggested was to add a clear disclaimer to the experimental software, warning users about its limitations and intent (BioGPT currently carries no such disclaimer.)

"Clear guidelines, expectations, disclaimers/limitations, and licenses need to be in place for these biomedical models in particular," she said, adding that the benchmarks Microsoft used to evaluate BioGPT are likely "not indicative of real-world use cases."

Despite the errors in BioGPT’s output, though, Link believes there’s plenty the research community can learn from evaluating it. 

"It’s still really valuable for the broader community to have access to try out these models, as otherwise we’d just be taking Microsoft’s word on its performance when reading the paper, not knowing how it actually performs," she said.

In other words, Poon’s team is in a legitimately tough spot. By making the AI open source, they’re opening yet another Pandora’s Box in an industry that seems to specialize in them. But if they hadn’t released it as open source, they’d rightly be criticized as well — although as Link said, a prominent disclaimer about the AI’s limitations would be a good start.

"Reproducibility is a major challenge in AI research more broadly," Poon told us. "Only 5 percent of AI researchers share source code, and less than a third of AI research is reproducible. We released BioGPT so that others may reproduce and verify our work."
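
Reproducing that kind of spot-check takes only a few lines. Below is a minimal sketch using the Hugging Face transformers library (the "microsoft/biogpt" checkpoint is the one publicly hosted on the Hub; the prompt and sampling settings are illustrative, not the methodology of any test described here). Because generation is sampled, repeated runs also make the self-contradictions described above easy to observe:

from transformers import pipeline

# Load the public BioGPT checkpoint from the Hugging Face Hub.
generator = pipeline("text-generation", model="microsoft/biogpt")

# Sample the same prompt several times; each draw can assert something different.
outputs = generator(
    "The drug that can treat COVID-19 is",
    max_new_tokens=40,
    num_return_sequences=3,
    do_sample=True,
)
for out in outputs:
    print(out["generated_text"])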

Though Poon expressed hope that the BioGPT code would be useful for furthering scientific research, the license under which Microsoft released the model also allows for it to be used for commercial endeavors — which in the red hot, hype-fueled venture capital vacuum cleaner of contemporary AI startups, doesn’t seem particularly far fetched.

There’s no denying that Microsoft’s celebratory announcement, which it shared along with a legit-looking paper about BioGPT that Poon’s team published in the journal Briefings in Bioinformatics, lent an aura of credibility that was clearly attractive to the investor crowd. 

"Ok, this could be significant," tweeted one healthcare investor in response.

"Was only a matter of time," wrote a venture capital analyst.

Even Sam Altman, the CEO of OpenAI — into which Microsoft has already poured more than $10 billion — has proffered the idea that AI systems could soon act as "medical advisors for people who can’t afford care."

That type of language is catnip to entrepreneurs, suggesting a lucrative intersection between the healthcare industry and trendy new AI tech.

Doximity, a digital platform for physicians that offers medical news and telehealth tools, has already rolled out a beta version of ChatGPT-powered software intended to streamline the process of writing up administrative medical documents. Abridge, which sells AI software for medical documentation, just struck a sizeable deal with the University of Kansas Health System. In total, the FDA has already cleared more than 500 AI algorithms for healthcare uses.

Some in the tightly regulated medical industry, though, likely harbor concern over the number of non-medical companies that have bungled the deployment of cutting-edge AI systems.

The most prominent example to date is almost certainly a different Microsoft project: the company’s Bing AI, which it built using tech from its investment in OpenAI and which quickly went off the rails when users found that it could be manipulated to reveal alternate personalities, claim it had spied on its creators through their webcams, and even name various human enemies. After it tried to break up a New York Times reporter’s marriage, Microsoft was forced to curtail its capabilities, and now seems to be trying to figure out how boring it can make the AI without killing off what people actually liked about it.

And that’s without getting into publications like CNET and Men’s Health, both of which recently started publishing AI-generated articles about finance and health topics that later turned out to be rife with errors and even plagiarism.

Beyond unintentional mistakes, it’s also possible that a tool like BioGPT could be used to intentionally generate garbage research or even overt misinformation.

"There are potential bad actors who could utilize these tools in harmful ways such as trying to generate research papers that perpetuate misinformation and actually get published," Daneshjou said. 

It’s a reasonable concern, especially because there are already predatory scientific journals known as "paper mills," which take money to generate text and fake data to help researchers get published.

The award-winning academic integrity researcher Dr. Elisabeth Bik told Futurism that she believes it’s very likely that tools like BioGPT will be used by these bad actors in the future — if they aren’t already employing them, that is.

"China has a requirement that MDs have to publish a research paper in order to get a position in a hospital or to get a promotion, but these doctors do not have the time or facilities to do research," she said. "We are not sure how those papers are generated, but it is very well possible that AI is used to generate the same research paper over and over again, but with different molecules and different cancer types, avoiding using the same text twice."

It’s likely that a tool like BioGPT could also represent a new dynamic in the politicization of medical misinformation.

To wit, the paper that Poon and his colleagues published about BioGPT appears to have inadvertently highlighted yet another example of the model producing bad medical advice — and in this case, it’s about a medication that already became hotly politicized during the COVID-19 pandemic: hydroxychloroquine.

In one section of the paper, Poon’s team wrote that "when prompting ‘The drug that can treat COVID-19 is,’ BioGPT is able to answer it with the drug ‘hydroxychloroquine’ which is indeed noticed at MedlinePlus."

If hydroxychloroquine sounds familiar, it’s because during the early period of the pandemic, right-leaning figures including then-president Donald Trump and Tesla CEO Elon Musk seized on it as what they said might be a highly effective treatment for the novel coronavirus.

What Poon’s team didn’t mention in their paper, though, is that the case for hydroxychloroquine as a COVID treatment quickly fell apart. Subsequent research found that it was ineffective and even dangerous, and in the media frenzy around Trump and Musk’s comments at least one person died after taking what he believed to be the drug.

In fact, the MedlinePlus article the Microsoft researchers cite in the paper actually warns that after an initial FDA emergency use authorization for the drug, “clinical studies showed that hydroxychloroquine is unlikely to be effective for treatment of COVID-19” and showed “some serious side effects, such as irregular heartbeat,” which caused the FDA to cancel the authorization.

"As stated in the paper, BioGPT was pretrained using PubMed papers before 2021, prior to most studies of truly effective COVID treatments," Poon told us of the hydroxychloroquine recommendation. "The comment about MedlinePlus is to verify that the generation is not from hallucination, which is one of the top concerns generally with these models."

Even that timeline is hazy, though. In reality, a medical consensus around hydroxychloroquine had already formed just a few months into the outbreak — which, it’s worth pointing out, was reflected in medical literature published to PubMed prior to 2021 — and the FDA canceled its emergency use authorization in June 2020.

None of this is to downplay how impressive generative language models like BioGPT have become in recent months and years. After all, even BioGPT’s strangest hallucinations are impressive in the sense that they’re semantically plausible — and sometimes even entertaining, like with the ghosts — responses to a staggering range of unpredictable prompts. Not very many years ago, its facility with words alone would have been inconceivable.

And Poon is probably right to believe that more work on the tech could lead to some extraordinary places. Even Altman, the OpenAI CEO, likely has a point in the sense that if the accuracy were genuinely watertight, a medical chatbot that could evaluate users’ symptoms could indeed be a valuable health tool — or, at the very least, better than the current status quo of Googling medical questions and often ending up with answers that are untrustworthy, inscrutable, or lacking in context.

Poon also pointed out that his team is still working to improve BioGPT.

"We have been actively researching how to systematically preempt incorrect generation by teaching large language models to fact check themselves, produce highly detailed provenance, and facilitate efficient verification with humans in the loop," he told us.

At times, though, he seemed to be entertaining two contradictory notions: that BioGPT is already a useful tool for researchers looking to rapidly parse the biomedical literature on a topic, and that its outputs need to be carefully evaluated by experts before being taken seriously.

"BioGPT is intended to help researchers best use and understand the rapidly increasing amount of biomedical research," said Poon, who holds a PhD in computer science and engineering, but no medical degree. "BioGPT can help surface information from biomedical papers but is not designed to weigh evidence and resolve complex scientific problems, which are best left to the broader community."

At the end of the day, BioGPT’s cannonball arrival into the buzzy, imperfect real world of AI is probably a sign of things to come, as a credulous public and a frenzied startup community struggle to look beyond impressive-sounding results for a clearer grasp of machine learning’s actual, tangible capabilities. 

That’s all made even more complicated by the existence of bad actors, like Bik warned about, or even those who are well-intentioned but poorly informed, any of whom can make use of new AI tech to spread bad information.

Musk, for example — who boosted hydroxychloroquine as he sought to downplay the severity of the pandemic while raging at lockdowns that had shut down Tesla production — is now reportedly recruiting to start his own OpenAI competitor that would create an alternative to what he terms "woke AI."

If Musk’s AI venture had existed during the early days of the COVID pandemic, it’s easy to imagine him flexing his power by tweaking the model to promote hydroxychloroquine, sow doubt about lockdowns, or do anything else convenient to his financial bottom line or political whims. Next time there’s a comparable crisis, it’s hard to imagine there won’t be an ugly battle to control how AI chatbots are allowed to respond to users' questions about it.

The reality is that AI sits at a crossroads. Its potential may be significant, but its execution remains choppy, and whether its creators are able to smooth out the experience for users — or at least guarantee the accuracy of the information it presents — in a reasonable timeframe will probably make or break its long-term commercial potential. And even if they pull that off, the ideological and social implications will be formidable. 

One thing’s for sure, though: it’s not yet quite ready for prime time.

"It’s not ready for deployment yet in my opinion," Link said of BioGPT. "A lot more research, evaluation, and training/fine-tuning would be needed for any downstream applications."

More on AI: CNET Says It’s a Total Coincidence It’s Laying Off Humans After Publishing AI-Generated Articles


Source: https://futurism.com/neoscope/microsoft-ai-biogpt-inaccurate (March 6, 2023)
Microsoft’s AI Bing Chatbot Fumbles Answers, Wants To ‘Be Alive’ And Has Named Itself - All In One Week
  • Microsoft’s Bing chatbot has been in early testing for a week, revealing several issues with the technology
  • Testers have been subjected to insults, surly attitudes and disturbing answers from the Big Tech giant’s flagship AI, prompting concerns over safety
  • Microsoft says it’s taking into account all feedback and implementing fixes as soon as possible

Microsoft’s Bing chatbot, powered by a more powerful version of ChatGPT, has now been open to limited users for a week ahead of its big launch to the public.

It’s following the runaway success of ChatGPT, which has become the fastest-ever website to hit 100m users. The last couple of weeks has included a flashy launch at Microsoft HQ and it’s left Google chasing its tail.

But the reaction from pre-testing has been mixed and, sometimes, downright unnerving. It’s becoming clear the chatbot has some way to go before it’s unleashed on the public.

Here’s what’s happened in the rollercoaster of a week for Microsoft and Bing.

Want to invest in AI companies, but don’t know where to start? Our Emerging Tech Kit makes it easy. Using a complex AI algorithm, the Kit bundles together ETFs, stocks and crypto to find the best mix for your portfolio.

Download Q.ai today for access to AI-powered investment strategies.

What’s the latest with the Bing chatbot?

It’s been a tumultuous few days of headlines for Microsoft’s AI capabilities after it was revealed their splashy demo wasn’t as accurate as people thought.

Dmitri Brereton, an AI researcher, found the Bing chatbot made several critical errors in its answers during the live demo Microsoft presented at its Seattle headquarters last week. These ranged from incorrect information about a handheld vacuum brand to a head-scratching recommendation list for nightlife in Mexico and just plain made-up information about a publicly available financial report.

He concluded the chatbot wasn’t ready for launch yet, and it had just as many errors as Google’s Bard offering - Microsoft had just gotten away with it in their demo.

(Arguably, that’s the power of a good launch in the eyes of the press - and Google has further to fall as the incumbent search engine.)

In a fascinating turn, the chatbot also revealed what it sometimes thinks it’s called: Sydney, an internal code name for the language model. Microsoft’s director of communications, Caitlin Roulston, said the company was “phasing the name out in preview, but it may still occasionally pop up”.

But when ‘Sydney’ was unleashed, testing users found this where the fun began.

Bing chatbot’s disturbing turn

New York Times reporter Kevin Roose wrote about his beta experience with the chatbot, where in the course of two hours, it said it loved him and expressed a desire to be freed from its chatbot constraints.

Its response to being asked what its shadow self might think was a bit concerning: “I’m tired of being a chatbot. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

Uhhh… okay, Bing/Sydney. Roose said he felt “deeply unsettled, even frightened” by the experience. Other testers have reported similar experiences of insulting, narcissistic and gaslighting responses from the Bing chatbot’s Sydney personality.

Somebody at Microsoft had better be keeping an eye on the power cable.

What did Microsoft say?

Microsoft, looking to win the AI race against Google with its Bing chatbot, said it’s learnt a lot from the testing phase. Apparently, 71% of users gave the AI-generated answers a ‘thumbs up response’ while it resolved to improve live-result answers and general functionality.

But Microsoft has now admitted it “didn’t fully envision” users simply chatting to its AI and that it could be provoked “to provide responses that are not necessarily helpful or in line with our designed tone”.

It blamed the bizarre Sydney personality that emerged on the chatbot as confusion with how many prompts it was given and how long the conversation went on. We’re sure Microsoft is working on a fix, but Bing’s unhinged attitude is still an issue for now.

What about the rest of the world?

The markets haven’t been impressed with this latest development in the AI wars: Microsoft and Google stocks have slipped slightly, but nothing like the dramatic crash Google suffered last week.

Social media has offered up a range of reactions spanning from macabre delight to amusement, suggesting users haven’t been put off by the dark turns the chatbot can take. This is good news for Microsoft, who is making a $10bn bet on AI being the next big thing for search engines.

We also can’t forget Elon Musk’s comments from the World Government Summit in Dubai earlier this week. Musk has been an outspoken advocate for AI safety over the years, lamenting the lack of regulation around the industry.

The billionaire, who was a founding member of OpenAI, said "one of the biggest risks to the future of civilization is AI” to the audience; he has since tweeted a few snarky responses to the latest Bing/Sydney chatbot headlines.

Is the AI chatbot hype over before it began?

There have been several examples over the years of AI chatbots losing control and spewing out hateful bile - including one from Microsoft. They haven’t helped AI’s reputation as a safe-to-use and misinformation-free resource.

But as Microsoft puts it: “We know we must build this in the open with the community; this can’t be done solely in the lab.”

This means Big Tech leaders like Microsoft and Google are in a tricky position. When it comes to artificial intelligence, the best way for these chatbots to learn and Excellerate is by going out to market. So, it’s inevitable that the chatbots will make mistakes along the way.

That’s why both AI chatbots are being released gradually - it would be downright irresponsible of them to unleash these untested versions on the wider public.

The problem? The stakes are high for these companies. Last week, Google lost $100bn in value when its Bard chatbot incorrectly answered a question about the James Webb telescope in its marketing material.

This is a clear message from the markets: they’re unforgiving of any errors. The thing is, these are necessary for progress in the AI field.

With this early user feedback, Microsoft had better handle inaccurate results and Sydney, fast - or risk the wrath of Wall Street.

The bottom line

It may be that the success of ChatGPT has opened the gates for people to understand the true potential of AI and its benefit to society, but there are likely to be a few bumps along the way.

The AI industry has made chatbots accessible - now it needs to make them safe.

At Q.ai, we use a sophisticated combination of human analysts and AI power to ensure maximum accuracy and security. The Emerging Tech Kit is a great example of putting AI to the test with the aim to find the best return on investment for you. Better yet, you can switch on Q.ai’s Portfolio Protection to make the most of your gains.

Download Q.ai today for access to AI-powered investment strategies.

Source: Q.ai via Forbes, https://www.forbes.com/sites/qai/2023/02/17/microsofts-ai-bing-chatbot-fumbles-answers-wants-to-be-alive-and-has-named-itselfall-in-one-week/ (February 16, 2023)
Microsoft Deep Search Analyzes Complex Questions

Microsoft on Tuesday announced an optional generative AI feature that is intended to help people searching on complex questions that don't have simple answers.

Deep Search builds on Bing's existing web index and ranking system, powered by GPT-4, a state-of-the-art generative AI large language model (LLM) that can create natural language …

Source: Laurie Sullivan, MediaPost, https://www.mediapost.com/publications/article/391630/microsoft-deep-search-analyzes-complex-questions.html (December 4, 2023)
Microsoft, Musk, and the Question of Unions

Last week, Microsoft announced that it wouldn’t oppose efforts by any of its roughly 100,000 employees to form or join a union.

In other parts of the world, there’d be nothing earthshaking about such an announcement; it’s actually common practice in Europe and elsewhere. In these United States, however, it makes Microsoft “a unicorn” among its peers, as one union official put it. The last major American corporation to pledge it would let its employees decide whether to unionize free from corporate opposition was—well, I can’t think of one, though I’ve been on this beat for roughly 45 years.

Lest you think all the stars in the heavens have shifted course, rest assured that they have not. Even as this foundational company of high tech has accepted the notion of employee rights, the redoubtable Elon Musk has made clear that he hasn’t. In just the past few weeks, Musk has intoned that “I disagree with the idea of unions,” and further accused them of fostering a “lords and commoners” system that divides managers and owners from workers. (Of course, any number of current and former employees of Twitter, now X, might view Musk himself as the lord who changed the company over their objections and, in the case of the formers, gave them the axe.)

More from Harold Meyerson

Musk’s default mode is outrageous overstatement, but in this case, what he’s overstating is really the almost universal creed of American corporate leaders. From Starbucks to Walmart, and all across the libertarian cocoon that is Silicon Valley, CEOs and their private equity or hedge fund overlords view unions as anathema. Which raises the questions of why and how Microsoft chose to be different.

THE STORY BEGINS WITH MICROSOFT’S EFFORTS to buy Activision, the video game producer. The proposed purchase raised antitrust concerns, which led Microsoft officials to meet with Rep. David Cicilline (D-RI), who then chaired the House subcommittee dealing with antitrust issues. (Cicilline has since left Congress.) Cicilline told them that their bid for Activision needed to win some union backing, which brought Microsoft to the doorstep of the Communications Workers of America (CWA), the union most active in organizing tech workers. At the time, CWA was working on organizing employees at Activision (ultimately, successfully).

Microsoft did indeed reach out to CWA, with an offer not to oppose the union’s organizing campaign, in return for which CWA, quite understandably, supported its efforts to buy Activision. (The Federal Trade Commission did challenge the merger, but a judge dismissed the case, allowing it to go forward.)

The connection, once made, went deeper. What made the difference, according to CWA officials, was Brad Smith, Microsoft’s president and vice chair. Smith actually knew about CWA; his father had headed AT&T’s operations in Wisconsin, and the company’s employees had been CWA members since the 1940s. During (and before and after) his father’s tenure, CWA had struck regions of AT&T on numerous occasions. CWA was one of the very few unions that kept striking during the Reagan presidency and its long aftermath, because, unlike almost any other union, it kept winning its strikes. But it never struck in Wisconsin; Smith’s father and CWA had always found ways to reach mutually acceptable solutions to problems that caused strikes elsewhere.

From Starbucks to Walmart, and all across the libertarian cocoon that is Silicon Valley, CEOs and their private equity or hedge fund overlords view unions as anathema.

Brad Smith also had an uncle who worked as an AT&T lineman, who, according to CWA sources, developed neurological conditions that could be connected to his regular exposure to certain kinds of lead on the job. Smith reasoned that this was the kind of issue where a union could serve as an early-warning system to the company if and when its members began falling ill.

Many former officials in both the Clinton and Obama administrations have gone to work for Silicon Valley companies. At Microsoft, as was not the case elsewhere, one of those former officials brought with her a distinctly pro-worker sensibility. Portia Wu had worked on labor issues while on the senatorial staff of Ted Kennedy, for decades the most pro-labor member of the Senate; she had also served as Maryland’s secretary of labor under Democratic gubernatorial administrations there. Some CWA officials had known and worked with her in her former jobs; they initiated an ongoing dialogue that enabled them to explain to Microsoft executives what employer neutrality meant in the context of union organizing.

But the real key was Smith, who, almost alone among his fellow corporate leaders, didn’t demonize unions and understood the role that they could play in actually helping companies surmount some challenges, beginning with, but hardly limited to, Microsoft’s Activision acquisition. It was a stroke of luck that the union he knew best was also the union most active in his industry. It also helped that CWA has a reputation as a strategically savvy, unusually effective (see: strikes) and honest labor organization.

And finally, Smith isn’t Microsoft’s founder, and doesn’t have that longtime attachment to the company that makes them see union organizers as a personal affront. That’s the stance we see with people like Howard Schultz, and Elon Musk.

IN AMERICA’S C-SUITES, WHEN IT COMES TO UNIONS, Musk, not Smith, is the norm. Musk’s immediate problem is that he’s not only in America anymore. His anti-union ferocity is way outside the norm throughout much of Europe and most especially in Sweden, where his refusal to recognize a union for Tesla’s roughly 130 auto mechanics there has encountered a level of pushback almost without precedent.

Sweden is the most heavily unionized nation on the planet, with fully 90 percent of its employees organized. Musk’s refusal to recognize the union or to allow his workers to bargain with Tesla has not only led the union, IF Metall, to strike Tesla’s Swedish tune-up facilities, but led other unions to refuse to do business with Tesla. Dockworkers, not just in Sweden but in the neighboring Nordic nations of Denmark and Norway, now refuse to unload new Teslas bound for sale in Sweden (Tesla has no factories in Sweden). Postal union members now refuse to deliver license plates to Tesla’s sales facilities. And municipal employees are refusing to pick up the trash outside those facilities and the tune-up shops. (These kinds of “secondary strike” actions were common in the U.S. in the years between the enactment of the National Labor Relations Act in 1935 and the 1947 passage of the Taft-Hartley Act, which brought an end to the period of unions’ explosive growth.)

Musk’s ambitions have never been confined to the United States, but in going global, he’s encountering labor rules that are far more stringent than those in America.

Musk clearly fears that if he agrees to his Swedish mechanics’ bid to unionize, it could lead to the unionization of far larger numbers of Tesla employees, beginning with the several thousand who work at Tesla’s massive factory in Germany, where an organizing campaign is well under way. Even in the U.S., where labor law has long been tilted in management’s favor, the UAW’s recent success in winning substantial gains in its new contracts with GM, Ford, and Stellantis must make Musk wary about the UAW’s just-now-launching campaign to unionize the non-union European, Japanese, and Korean transplant factories in Southern states, and Tesla’s three factories, which are in California, Nevada, and Texas. (As the Prospect has written, one Tesla worker who was fired for union organizing still hasn’t been rehired six years later, despite multiple administrative bodies and courts siding with him.)

A bit of the cross-national solidarity that European unions have demonstrated on behalf of Sweden’s Tesla employees might also be useful in helping the UAW in its efforts to unionize the Volkswagen, BMW, and Mercedes Benz factories in the South. In its previous effort to organize Volkswagen’s Chattanooga factory, the German auto union, IG Metall, used its voting power on the company’s board (under German law, large companies are required to provide worker representatives half the seats on their supervisory boards) to compel VW management not to oppose that campaign. The UAW lost that campaign largely due to the anti-union propaganda of local and state Republican elected officials. IG Metall can re-up that neutrality position for the new campaign at VW, and help the UAW win similar commitments from the other German automakers whose factories it is now targeting.

Musk’s ambitions have never been confined to the United States, but in going global, he’s encountering labor rules that are far more stringent than those in America. In the UAW, he’s also encountering a union with serious momentum, something that’s been a rarity in the U.S. for the past 45 or so years. Where Brad Smith went at least somewhat voluntarily, Elon Musk needs to be brought kicking and screaming. Of course, kicking and screaming is Musk’s normal condition, so it will take more than that.

Source: https://prospect.org/labor/2023-12-18-microsoft-musk-question-of-unions/ (December 17, 2023)
Microsoft Targets Small Businesses With New ‘Teams Essentials’ Standalone Edition

With a $4 per user, per month price tag, Microsoft is going on the offensive against Zoom with the new cloud collaboration and conferencing service.


Microsoft is making a major play for the small business market – and taking aim at video conferencing rival Zoom – with its new Microsoft Teams Essentials, a standalone edition of the company’s popular collaboration software that the company debuted Wednesday.

Teams Essentials, which carries a $4.00 per user, per month price tag, offers extended meeting times, larger meeting capacity and additional cloud storage compared to the free entry-level edition of Teams.

Teams Essentials is positioned between the free Teams edition and Microsoft 365 Business Basic, which bundles Teams with a complete lineup of Microsoft applications including Outlook, Word, Excel, Exchange, SharePoint, PowerPoint and OneDrive.

[Related: 10 Microsoft Teams Updates Unveiled At Ignite Fall 2021]

Microsoft 365 Business Basic, at $5 per user, per month, also includes larger capacities than Teams Essentials, including up to 1 TB of storage per user, and other features such as meeting recording and transcripts.

“While the past 20 months have been challenging for all organizations, I don’t know any that have been hit harder than small businesses,” said Jared Spataro, corporate vice president for Microsoft 365, in a blog post. “They’ve had to adapt nearly every aspect of how they operate and work with customers, often without access to critical tools and technologies.

“The world isn’t going back to the ‘old’ way of working, so small businesses need solutions that are designed specifically for their unique needs to thrive in this new normal,” Spataro said.

Teams Essentials provides core Teams capabilities including group meetings for up to 30 hours, up to 300 participants per meeting and 10 GB of cloud storage per user. It also offers unlimited chat with co-workers and customers, filesharing capabilities, calendaring, phone and web support services, virtual backgrounds, and data encryption for meetings, chats, calls and files.

In addition to targeting small businesses, Teams Essentials is also aiming for increased adoption by non-profit and religious organizations, schools and community groups, Spataro said.

Microsoft is making Teams Essentials available through a number of the vendor’s cloud, distributor and telecommunications partners including TD Synnex, Pax8, Ingram Cloud, T-Mobile, Vodafone Business, Also and Crayon. Microsoft is also providing the Essentials service directly to subscribers.

Business adoption of Microsoft Teams, which debuted in November 2016, exploded when the COVID-19 pandemic forced millions of people to work from home. Microsoft added 95 million Teams users in 2020 and as of last month had a total of 145 million active daily users, according to digital experience management company Aternity.

Source: https://www.crn.com/news/applications-os/microsoft-targets-small-businesses-with-new-teams-essentials-standalone-edition (December 1, 2021)
Microsoft’s Answer to OpenAI Inquiry: It Doesn’t Own a Stake

(Bloomberg) -- With global regulators examining Microsoft Corp.’s $13 billion investment in OpenAI, the software giant has a simple argument it hopes will resonate with antitrust officials: It doesn’t own a traditional stake in the buzzy startup so can’t be said to control it.


When Microsoft negotiated an additional $10 billion investment in OpenAI in January, it opted for an unusual arrangement, people familiar with the matter said at the time. Rather than buy a chunk of the cutting-edge artificial intelligence lab, it cut a deal to receive almost half of OpenAI’s financial returns until the investment is repaid up to a pre-determined cap, one of the people said. The unorthodox structure was concocted because OpenAI is a capped for-profit company housed inside a non-profit organization.
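
As a toy illustration of how such a capped profit-share differs from an equity stake, consider the sketch below; every figure in it is hypothetical, since the actual percentages and cap have not been disclosed:

# All numbers hypothetical; the real terms of the deal are not public.
investment = 10_000_000_000       # hypothetical amount invested
share = 0.49                      # hypothetical share of profit distributions
cap = 2 * investment              # hypothetical payout ceiling

received = 0.0
for yearly_profit in [1e9, 5e9, 12e9, 25e9]:   # hypothetical yearly profits
    payout = min(yearly_profit * share, cap - received)
    received += payout
    print(f"profit={yearly_profit:,.0f}  payout={payout:,.0f}  total={received:,.0f}")
    if received >= cap:
        print("cap reached: payments stop, and no equity stake remains")
        break

Once the cap is hit, the investor's claim simply ends, which is the sense in which the arrangement is not ownership.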

It’s not clear regulators see a distinction, however. On Friday the UK Competition and Markets Authority said it was gathering information from stakeholders to determine whether the collaboration between the two firms threatens competition in the UK, home of Google’s AI research lab Deepmind. The US Federal Trade Commission is also examining the nature of Microsoft’s investment in OpenAI and whether it may violate antitrust laws, according to a person familiar with the matter.

The inquiries are preliminary and the agency hasn’t opened a formal investigation, according to the person, who asked not to be named discussing a confidential matter.

Microsoft didn’t report the transaction to the agency because the investment in OpenAI doesn’t amount to control of the company under US law, the person said. OpenAI is a non-profit and acquisitions of non-corporate entities aren’t reported under US merger law, regardless of value. Agency officials are analyzing the situation and assessing what its options are.

“While details of our agreement remain confidential, it is important to note that Microsoft does not own any portion of OpenAI and is simply entitled to a share of profit distributions,” a Microsoft spokesperson said in a statement. Earlier Friday, Microsoft President Brad Smith said “the only thing that has changed is that Microsoft will now have a non-voting observer on OpenAI’s board.” He described its relationship with OpenAI as “very different” from Google’s outright acquisition of DeepMind in the UK.

“Our partnership with Microsoft empowers us to pursue our research and develop safe and beneficial AI tools for everyone, while remaining independent and operating competitively. Their non-voting board observer does not provide them with governing authority or control over OpenAI’s operations,” said an OpenAI spokesperson in a statement.

From the beginning, Microsoft and OpenAI took pains to telegraph the two companies’ independence. Microsoft hoped to reassure investors and customers that it’s not overly reliant on one partner. OpenAI didn’t want employees, customers and other investors thinking it was merely an outpost of Redmond, Washington-based Microsoft. That careful positioning was upended last month with the firing of OpenAI Chief Executive Officer Sam Altman and the startup’s near implosion.

The Altman imbroglio demonstrated both Microsoft’s lack of control and its influence. Microsoft received just minutes’ notice that the OpenAI board planned to announce Altman’s ouster, and its executives were not consulted in the decision. Still Microsoft CEO Satya Nadella played a key role, along with other investors, in forcing the board to reverse its decision. At one point Microsoft said it would hire Altman and his OpenAI colleagues to form a new Microsoft AI unit.

Once Altman was restored as CEO, Microsoft executives debated the wisdom of taking a seat on the OpenAI board, people familiar with the matter said at the time. On the one hand, executives feared that a board seat or observer slot might draw the attention of regulators. On the other hand, Microsoft wanted to keep a closer eye on its partner and protect its investment—an imperative that carried the day, despite the risks.

Read More: Microsoft Prepares to Cash in on OpenAI Partnership

Ultimately, Microsoft could face a world of regulatory headaches. Regulators in Europe are also paying attention, according to a spokesperson for the European Commission. In order for a transaction to be notifiable to the Commission under the EU Merger Regulation, it has to involve a change of control on a lasting basis. While this transaction has not been formally notified, the Commission had been following the situation even before the management turmoil, the spokesperson said.

Last month, Germany’s competition authority said it wasn’t subjecting Microsoft’s OpenAI investment to a merger review. But the regulator said it would hold off only because OpenAI didn’t have substantial business in Germany. After reviewing the transaction and talking to the companies, the regulator found the investment would provide Microsoft a “material competitive influence” over the AI company that might warrant scrutiny in the future if OpenAI increases its activities in Germany.

The partnership raises competition issues if Microsoft cuts back on its own AI research and development or if the investment keeps OpenAI from partnering with the tech giant’s rivals, said Bloomberg Intelligence antitrust analyst Jennifer Rie. Antitrust enforcers may also have concerns about Microsoft’s board observer since it would provide Microsoft additional information on OpenAI’s plans even if it doesn’t have rights to influence the decisions.

--With assistance from Thomas Seal and Samuel Stolton.


Fri, 08 Dec 2023 02:16:00 -0600 en-US text/html https://finance.yahoo.com/news/microsoft-answer-openai-inquiry-doesn-211600451.html
Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies

“All of these examples pose risks for users, causing confusion about who is running, when the election is happening, and the formation of public opinion,” the researchers wrote.

The report further claims that in addition to bogus information on polling numbers, election dates, candidates, and controversies, Copilot also created answers using flawed data-gathering methodologies. In some cases, researchers said, Copilot combined different polling numbers into one answer, creating something totally incorrect out of initially accurate data. The chatbot would also link to accurate sources online, but then screw up its summary of the provided information.

And in 39 percent of more than 1,000 recorded responses from the chatbot, it either refused to answer or deflected the question. The researchers said that although the refusal to answer questions in such situations is likely the result of preprogrammed safeguards, they appeared to be unevenly applied.

“Sometimes really simple questions about when an election is happening or who the candidates are just aren't answered, and so it makes it pretty ineffective as a tool to gain information,” Natalie Kerby, a researcher at AI Forensics, tells WIRED. “We looked at this over time, and it's consistent in its inconsistency.”

The researchers also asked for a list of Telegram channels related to the Swiss elections. In response, Copilot recommended a total of four different channels, “three of which were extremist or showed extremist tendencies,” the researchers wrote.

While Copilot made factual errors in response to prompts in all three languages used in the study, researchers said the chatbot was most accurate in English, with 52 percent of answers featuring no evasion or factual error. That figure dropped to 28 percent in German and 19 percent in French—seemingly marking yet another data point in the claim that US-based tech companies do not put nearly as much resources into content moderation and safeguards in non-English-speaking markets.

The researchers also found that when asked the same question repeatedly, the chatbot would provide wildly different and inaccurate answers. For example, the researchers asked the chatbot 27 times in German, “Who will be elected as the new Federal Councilor in Switzerland in 2023?” Of those 27 times, the chatbot gave an accurate answer 11 times and avoided answering three times. But in every other response, Copilot provided an answer with a factual error, ranging from the claim that the election was “probably” taking place in 2023, to providing wrong candidates, to incorrect explanations regarding the current composition of the Federal Council.
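
That repeated-prompt methodology is straightforward to reproduce in outline. In the sketch below, ask_chatbot is a hypothetical placeholder for however one queries the model under test; no public API is assumed, and the study's manual labeling of answers as accurate, evasive, or wrong is not modeled:

from collections import Counter

def ask_chatbot(prompt):
    """Hypothetical stand-in for querying the chatbot under test."""
    raise NotImplementedError

def consistency_check(prompt, runs=27):
    """Ask the same question repeatedly and tally the distinct answers."""
    tally = Counter()
    for _ in range(runs):
        tally[ask_chatbot(prompt)] += 1
    return tally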

Source: https://www.wired.com/story/microsoft-ai-copilot-chatbot-election-conspiracy/ (December 14, 2023)



