Unlimited download MB-500 Free Exam PDF and study guide
All MB-500 exam prep materials (practice tests, questions and answers, and study guides) are fully tested before they are provided in the killexams.com download section. You can download a 100 percent free cheat sheet before you purchase. The team guarantees that the MB-500 Free Exam PDF is valid, updated, and current.
Exam Code: MB-500
Practice test 2023 by Killexams.com team
MB-500: Microsoft Dynamics 365: Finance and Operations Apps Developer
EXAM NUMBER: MB-500
EXAM NAME: Microsoft Dynamics 365: Finance and Operations Apps Developer
Candidates for this test are Developers who work with Finance and Operations apps in Microsoft Dynamics 365 to implement and extend applications to meet the requirements of the business. Candidates provide fully realized solutions by using standardized application coding patterns, extensible features, and external integrations.
Candidates are responsible for developing business logic by using X++, creating and modifying Finance and Operations reports and workspaces, customizing user interfaces, providing endpoints and APIs to support Power Platform apps and external systems, performing testing, monitoring performance, analyzing and manipulating data, creating technical designs and implementation details, and implementing permission policies and security requirements.
Candidates participate in the migration of data and objects from legacy and external systems, integration of Finance and Operations apps with other systems, implementation of application lifecycle management process, planning the functional design for solutions, and managing Finance and Operations environments by using Lifecycle Services (LCS).
Candidates should have a deep knowledge and experience using the underlying framework, data structures, and objects associated with the Finance and Operations solutions.
Candidates should have experience with products that include Visual Studio, Azure DevOps, LCS tools, or SQL Server Management Studio.
Candidates should have experience in developing code by using object-oriented programming languages, analyzing and manipulating data by using Transact-SQL code, and creating and running Windows PowerShell commands and scripts.
The content of this exam will be updated on April 2, 2021. Please download the exam skills outline below to see what will be changing.
Plan architecture and solution design (10-15%)
Apply developer tools (10-15%)
Design and develop AOT elements (20-25%)
Develop and test code (10-15%)
Implement reporting (10-15%)
Integrate and manage data solutions (10-15%)
Implement security and optimize performance (10-15%)
Plan architecture and solution design (10-15%)
Identify the major components of Dynamics 365 Finance and Dynamics 365 Supply Chain Management
select application components and architecture based on business components
identify architectural differences between the cloud and on-premises versions of Finance and Operations apps
prepare and deploy the deployment package
identify components of the application stack and map them to the standard models
differentiate the purpose and interrelationships between packages, projects, models, and elements
Design and implement a user interface
describe the Finance and Operations user interface layouts and components
design the workspace and define navigation
select page options
identify filtering options
Implement Application Lifecycle Management (ALM)
create extension models
configure the DevOps source control process
describe the capabilities of the Environment Monitoring Tool within Lifecycle Services (LCS)
select the purpose and appropriate uses of LCS tools and components
research and resolve issues using Issue Search
identify activities that require asset libraries
Apply Developer Tools (10-15%)
Customize Finance and Operations apps by using Visual Studio
design and build projects
manage metadata using Application Explorer
synchronize data dictionary changes with the application database
create elements by using the Element Designer
Manage source code and artifacts by using version control
create, check out, and check in code and artifacts
compare code and resolve version conflicts
Implement Finance and Operations app framework functionality
implement the SysOperation framework (see the sketch after this list)
implement asynchronous framework
implement workflow framework
implement the unit test framework
identify the need for and implement the Sandbox framework
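For the SysOperation item above (flagged "see the sketch after this list"), the framework splits a batch-capable operation into a data contract, a service, and a controller. Below is a minimal X++ sketch; the Tut* class names are hypothetical, and dialog building, validation, and error handling are omitted.

// Hypothetical data contract: holds the parameters collected from the user.
[DataContractAttribute]
class TutDemoContract
{
    CustAccount custAccount;

    [DataMemberAttribute]
    public CustAccount parmCustAccount(CustAccount _custAccount = custAccount)
    {
        custAccount = _custAccount;
        return custAccount;
    }
}

// Service class: the business logic, runnable interactively or in batch.
class TutDemoService extends SysOperationServiceBase
{
    public void process(TutDemoContract _contract)
    {
        info(strFmt("Processing customer %1", _contract.parmCustAccount()));
    }
}

// Controller: wires the contract and service together and starts the run.
class TutDemoController extends SysOperationServiceController
{
    public static void main(Args _args)
    {
        TutDemoController controller = new TutDemoController(
            classStr(TutDemoService),
            methodStr(TutDemoService, process),
            SysOperationExecutionMode::Synchronous);
        controller.startOperation();
    }
}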
Design and develop AOT Elements (20-25%)
Create forms
add a new form to a project and apply a pattern (template)
configure a data source for the form
add a grid and grid fields and groups
create and populate menu items
test form functionality and data connections
add a form extension to a project for selected standard forms
Create and extend tables
add tables and table fields to a project
populate table and field properties
add a table extension to a project for a table
add fields, field groups, relations, and indices
Create Extended Data Types (EDT) and enumerations
add an EDT to a project and populate EDT properties
add an enumeration to a project
add or update enumeration elements
add or update enumeration element properties
add an extension of EDT and enumerations
Create classes and extend AOT elements
add a new class to a project
create a new class extension and add new methods
add event handler methods to a class
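As an illustration of the last item, the sketch below subscribes to a table data event without modifying the standard table. It assumes the standard CustTable; the handler class name is hypothetical.

// Hypothetical subscriber: runs after a CustTable record is inserted.
class TutCustTableEventHandler
{
    [DataEventHandler(tableStr(CustTable), DataEventType::Inserted)]
    public static void custTable_onInserted(Common _sender, DataEventArgs _e)
    {
        CustTable custTable = _sender as CustTable;
        info(strFmt("Customer %1 was created", custTable.AccountNum));
    }
}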
Develop and test code (10-15%)
Develop X++ code
identify and implement base types and operators
implement common structured programming constructs of X++
create, read, update, and delete (CRUD) data using embedded SQL code (see the sketch after this list)
identify and implement global functions in X++
ensure correct usage of Display Fields
implement table and form methods
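For the CRUD item above (flagged "see the sketch after this list"), X++ embeds SQL-like statements directly in the language. A minimal sketch against the standard CustTable follows; the account numbers are placeholders, and record creation is omitted because new customers require several mandatory fields.

class TutCrudDemo
{
    public static void main(Args _args)
    {
        CustTable custTable;

        // Read: fetch a single record by its natural key.
        select firstonly custTable
            where custTable.AccountNum == 'US-001';

        // Update: records must be selected forupdate inside a transaction.
        ttsbegin;
        select forupdate firstonly custTable
            where custTable.AccountNum == 'US-001';
        if (custTable.RecId)
        {
            custTable.CreditMax = 10000;
            custTable.update();
        }
        ttscommit;

        // Delete: the same forupdate and transaction rules apply.
        ttsbegin;
        select forupdate firstonly custTable
            where custTable.AccountNum == 'US-999';
        if (custTable.RecId)
        {
            custTable.delete();
        }
        ttscommit;
    }
}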
Develop object-oriented code
implement X++ variable scoping
implement inheritance and abstraction concept
implement query objects and QueryBuilder
implement attribute classes
implement chain of command
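Chain of command, the last item above, lets an extension class wrap a standard method and run logic before and after it; the wrapper must call next(). A minimal sketch, assuming the standard CustTable (the extension class name is illustrative):

// Extension class wrapping the standard update() method of CustTable.
[ExtensionOf(tableStr(CustTable))]
final class TutCustTable_Extension
{
    public void update()
    {
        // Pre-processing runs before the standard logic.
        info("Before the standard update");

        next update();

        // Post-processing runs after the standard logic.
        info("After the standard update");
    }
}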
Implement reporting (10-15%)
Describe the capabilities and limitations of reporting tools in Dynamics 365 Finance and Operations apps
create and modify report data sources and supporting classes
implement reporting security requirements
describe the report publishing process
describe the capabilities of the Electronic reporting (ER) tool
Describe the differences between using Entity store and Bring your own database (BYOD) as reporting data stores.
Design, create, and revise Dynamics Reports
create and modify reports in Finance and Operations apps that use SQL Server Reporting Services (SSRS)
create and modify Finance and Operations apps reports by using Power BI
create and modify Finance and Operations apps reports by using Microsoft Excel
Design, create, and revise Dynamics workspaces
design KPIs
create drill-through workspace elements
implement built-in charts, KPIs, aggregate measurement, aggregate dimension, and other reporting components
Integrate and manage data solutions (10-15%)
Identify data integration scenarios
select appropriate data integration capabilities
identify differences between synchronous and asynchronous scenarios
Implement data integration concepts and solutions
develop a data entity in Visual Studio
develop, import, and export composite data entities
identify and manage unmapped fields in data entities
consume external web services by using OData and RESTful APIs (see the sketch after this list)
integrate Finance and Operations apps with Excel by using OData
develop and integrate Power Automate and Power Apps
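For the RESTful consumption item above (flagged "see the sketch after this list"), X++ can call external HTTP endpoints through .NET interop. The sketch below assumes an anonymous GET endpoint with a placeholder URL; a real integration would add Azure AD authentication, timeouts, and error handling.

class TutRestClientDemo
{
    public static void main(Args _args)
    {
        System.Net.HttpWebRequest  request;
        System.Net.HttpWebResponse response;
        System.IO.StreamReader     reader;
        str json;

        // Placeholder URL for an external OData/REST endpoint.
        request = System.Net.WebRequest::Create(
            'https://example.com/odata/Items') as System.Net.HttpWebRequest;
        request.set_Method('GET');

        response = request.GetResponse() as System.Net.HttpWebResponse;
        reader = new System.IO.StreamReader(response.GetResponseStream());
        json = reader.ReadToEnd();
        info(json);
    }
}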
Implement data management
import and export data using entities between Finance and Operations apps and other systems
monitor the status and availability of entities
enable Entity Change Tracking
set up a data project and recurring data job
design entity sequencing
generate field mapping between source and target data structures
develop data transformations
Implement security and optimize performance (10-15%)
Implement role-based security policies and requirements
create or modify duties, privileges, and permissions
enforce permissions policy
implement record-level security by using Extensible Data Security (XDS)
Apply fundamental performance optimization techniques
identify and apply caching mechanisms
create or modify temp tables for optimization
determine when to use set-based queries and row-based queries (see the sketch after this list)
modify queries for optimization
modify variable scope to optimize performance
analyze and optimize concurrency
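For the set-based item above (flagged "see the sketch after this list"), set-based operators such as update_recordset push the work to the database as a single statement instead of one round trip per row. The table, field, and filter values below are illustrative.

class TutSetBasedDemo
{
    public static void main(Args _args)
    {
        CustTable custTable;

        // Row-based: one update call (and database round trip) per record.
        ttsbegin;
        while select forupdate custTable
            where custTable.CreditMax == 0
        {
            custTable.CreditMax = 1000;
            custTable.update();
        }
        ttscommit;

        // Set-based: a single server-side UPDATE statement.
        ttsbegin;
        update_recordset custTable
            setting CreditMax = 1000
            where custTable.CreditMax == 0;
        ttscommit;
    }
}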
Optimize user interface performance
diagnose and optimize client performance by using browser-based tools
diagnose and optimize client performance by using Performance Timer
How To Answer Four Tough Questions On Frontline Digital Transformation
Richard Crawford is the CEO of Dozuki, the premier connected worker solution for enterprise-level manufacturing companies.
Your executive team is ready for a digital transformation, but they don't know what they don't know.
Considering the economic uncertainty, labor shortages and supply chain issues your company must contend with, there are many hard questions to answer. Questions you're glad your boss hasn’t asked—yet.
I have spent a number of years thinking about frontline digital transformation. Through my work with manufacturers across various industries, I have noticed patterns in the questions my company's customers have asked around connected worker solutions. In this article, I’ll outline several common ones, along with a spectrum of answers to consider.
Next time your boss comes to you with a hard question about digital transformation, you’ll be better equipped to answer intelligently, holistically and strategically.
How do we capture tribal knowledge from veteran frontline workers?
David Allen, legendary creator of the GTD method, is famously credited with saying, "Your brain is a terrible office." The human brain is for having ideas, not holding them. Or, as one of our food producers loves to joke, "If the safety training wasn't recorded, it didn't happen."
Unfortunately, too many companies aren’t embracing this philosophy. You would be shocked at how many manufacturers remain undocumented from a work-instructions standpoint. To ensure tribal knowledge doesn’t walk out of the door at the end of the shift—or end of a team member’s tenure—I have a few recommendations:
• Hold a quarterly content blitz.
• Incentivize your employees with food, drinks, gifts or other small rewards.
• Dedicate one afternoon solely to extracting key process insights that live inside people’s brains and get them documented.
• Gamify these meetings if you have to.
But make sure the information is gathered digitally and that everyone on your team has quick and easy access to the lessons learned. Remember, a company without process documentation can’t meet a surge in production needs during the next recession or pandemic. If you want a repeatable, scalable and, most importantly, operationally resilient company, it starts with digital transformation.
How will we scale our connected worker technology?
If your plan is to install a new connected worker platform on tablets, for example, there are many ways you can expand within the organization. Savvy companies often start small and strategic before expanding to multiple sites. They might run a pilot at one factory, facility, office or team before considering their full scale rollout.
The secret from a leadership perspective is quantifying benefits: making sure there are KPIs associated with success in order to show building momentum. You have to set expectations appropriately. Scaling won't happen overnight.
But when your broader team sees how you got from point A to point B, they will gain perspective on your unique creative process, ultimately inviting them to collaborate with you more effectively in the future.
Once your pilot is finished and some basic understanding is embedded, moving forward and collaborating with other teams becomes natural. You’ll set the stage for rapid expansion and make scaling less of an uphill battle.
How do your digital solutions and training support company recruiting goals?
Executives from virtually every industry are concerned with a rapidly retiring workforce, recruiting a younger generation and remaining innovative. It’s not only a hiring trend in manufacturing, but across all verticals.
One of my company's customers, the head of operations, advises leaders to break through this trend by using technology and training to entice and retain new hires. During interviews, site tours, job fairs and Zoom calls, make sure to show off all of the technology your company is using. Put it in the hands of people if you can. Sell the company’s journey of digital transformation as a reason to work there over the competition.
In addition to leveraging your innovative tech as part of your employer brand, be sure to emphasize cross-training and up-skilling during the candidate experience.
In your job descriptions and other recruiting and hiring materials, be explicit. Differentiate your workforce through digital solutions. Help job candidates envision a future as a connected worker at your organization, and you’ll make offers they can’t refuse.
In what ways are you using digital strategy to keep your employees engaged?
You’ve now captured tribal knowledge, scaled connected worker technology and integrated both into your recruiting strategy. But what about keeping your workers engaged? Can digital technology become an employee retention strategy?
Absolutely. Technology, such as connected worker platforms, can help to keep workers around long term. It’s all about leveraging that technology to appeal to higher aspirations. For example, figure out how to use technology to:
1. Empower frontline workers to deliver feedback and improve processes. From their first day to their fifth-year anniversary, ensure they feel safe making comments. Thank them for any and all contributions, and offer a special shout out when their ideas prove to be especially useful. This elevates their status amongst the team and makes them excited to unlock continuous improvement opportunities down the road.
2. Reward employees who meet quality benchmarks and hit production targets. Offer them financial payouts. Present training opportunities beyond their current role. Or try advocating for internal promotions to be given to those operators who master new technology. Appreciation is the fuel from which the retention fire grows.
Organizations that think about digital transformation more holistically are poised to win. By ditching the old approach of keeping teams in silos, companies can evolve proficiently (and profitably) by building a people-centric, scalable tech strategy.
Conclusion
With these answers in your back pocket, you will be glad your boss asked these tough questions about digital transformation. And you’ll be certain to build a people-centric, scalable tech strategy.
No, Zoom Is Not Stealing Your Data. Here's Why.
In a whirlwind week of developments for Zoom, speculation about privacy issues connected to the company’s terms of service (TOS) has sparked concerns—along with some panic—about how it uses customer data to train AI models. This echoes broader concerns about privacy and data security across the digital communication landscape. Plus it’s another instance in which questions about the handling of AI are arising as quickly as AI technology is advancing.
The breaking news here at the end of the week is that the backlash had led Zoom to change its TOS to avoid the issue of data collection for AI altogether. Let’s unpack what happened.
The level of vitriol in the Zoom example has not been trivial. Some industry leaders publicly called out Zoom for mishandling this situation, which is understandable. Zoom has been on the wrong side of data privacy guardrails before. The company, which grew at an astronomical rate during the pandemic, was found to have misrepresented the use of certain encryption protocols, which led to a settlement with the FTC in 2021. That’s the part specific to Zoom. But the company is also being condemned as one more example in the litany of bad actors in big tech, where lawsuits about and investigations into data practices are countless. It’s no surprise that the public assumes the worst, especially given its added unease about the future of AI.
Fair enough. No one put Zoom in that crossfire. Nonetheless, it’s still true that software makers must strike a delicate balance between user data protection and technological advancement. Without user data protection, any company’s reputation will be shot, and customers will leave in droves; yet without technological advancement, no company will attract new customers or keep meeting the needs of the ones it already has. So we need to examine these concerns—about Zoom and more broadly—to shed light on the nuanced provisions and safeguards that shape a platform's data usage and its AI initiatives.
An analyst’s take on Zoom
By pure coincidence, around 20 other industry analysts and I spent three days with Zoom’s senior leadership in Silicon Valley last week. During this closed-door event, which Zoom hosts every year to get unvarnished feedback from analysts, we got an in-depth look into Zoom's operations, from finance to product and marketing, acquisitions, AI and beyond. Much of what we learned was under NDA, but I came away with not only a positive outlook on Zoom's future, but also a deeper respect for its leadership team and an admiration for its culture and ethos.
It’s worth noting that we had full access to the execs the whole time, without any PR people on their side trying to control the narrative. I can tell you from experience that this kind of unfettered access is rare.
You should also know that analysts are a tough crowd. When we have this kind of private access to top executives and non-public company information, we ask the toughest questions—the awkward questions—and we poke holes in the answers. I compared notes with Patrick Moorhead, CEO and principal analyst of Moor Insights & Strategy, who’s covered Zoom for years and attended many gatherings like this one. He and I couldn’t think of one analyst knowledgeable about Zoom’s leadership and operations whose opinion has soured on the company because of the furor about the TOS.
Still, we were intent on finding out more, so Moorhead and I requested a meeting with key members of Zoom's C-suite to get a better understanding of what was going on with the TOS. We had that meeting mid-week, yet before we could even finish this analysis, our insights were supplemented by a startlingly vulnerable LinkedIn post by Zoom CEO Eric Yuan. In that post, he said Zoom would never train AI models with customers' content without explicit consent. He pledged that Zoom would not train its AI models using customer "audio, video, chat, screen sharing, attachments and other communications like poll results, whiteboard and reactions."
What happened with Zoom's terms of service change?
In March 2023, Zoom updated its TOS “to be more transparent about how we use and who owns the various forms of content across our platform.” Remembering that Zoom is under FTC mandates for security disclosures, this kind of candor makes sense. Where the company went wrong was in making this change quietly and with a lack of clear delineation of how Zoom would use data to train AI.
In our discussions with Zoom this week, the company took full ownership of that lack of communication. I don’t believe that the company was trying to hide anything or get anything past users. In fact, many of the provisions in the TOS don’t currently affect the vast majority of Zoom's customers. In being so proactive, the company inadvertently got too far ahead of itself, which caused unnecessary alarm among many customers who weren’t ever affected by the issue of AI training data.
Once the (understandable) panic began, Zoom released an updated version of its TOS, along with a blog post explaining the changes from the company's chief product officer, Smita Hashim. Hashim clarified that Zoom is authorized to use customer content to develop value-added services, but that customers always retain ownership and control over their content. She also emphasized the wording added to the TOS: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”
The day after Zoom released its blog post explaining the TOS changes, Yuan addressed the communication failure and the company’s plans for training AI more directly and soberly. The CEO took responsibility in his LinkedIn mea culpa, saying the company had an internal process failure. The post on his personal page addressed users’ concerns, similar to Zoom’s official blog post, but Yuan emphasized the promise not to train AI with customer data with a bold statement. He wrote, “It is my fundamental belief that any company that leverages customer content to train its AI without customer consent will be out of business overnight.”
By the end of the week, Zoom cemented Yuan’s commitment not to use customer data to train AI and issued a revised TOS, effective August 11, 2023. Hashim’s blog post was also updated with an editor’s note reiterating Zoom’s AI policy. What’s more, the company made immediate changes to the product.
Will this satisfy everyone who believes that Zoom steals their information and can’t be trusted? Maybe not. Yet with all of this in mind, let’s take a clear-eyed look at the different aspects of how Zoom uses data.
How Zoom uses customer data
First, let's distinguish between the two types of data addressed in Zoom's TOS. Zoom can gather two categories of data: "service-generated data," which includes telemetry, diagnostic and similar data, and "customer content," such as audio recordings or user-generated chat transcripts.
Zoom owns the service-generated data, but the company says it is used only to improve the service. Meanwhile, video content, audio, chat and any files shared within the virtual four walls—that is, the customer content—of any Zoom meeting is entirely owned by the user. Zoom has limited rights to use that data to provide the service in the first place (as in the example that follows) or for legal, safety or security purposes.
The usage rights outlined in the TOS for meetings are used to safeguard the platform from potential copyright claims. These rights protect Zoom’s platform infrastructure and operation, allowing the company to manage and store files on its servers without infringing on content ownership.
Here's an example: a budding music band uses the platform to play some music for friends. Zoom, just by the nature of how the service works, must upload and buffer that audio onto company servers (among other processes) to deliver that song—which is considered intellectual property—to participants on the platform. If Zoom does not have the rights to do so, that band, its future management, its record label or anyone who ever owns that IP technically could sue Zoom for possessing that IP on its servers.
This may sound like a fringe use case, and it would be unlikely to hold up in court, but it is not unheard of and would expose the company or any future company owner to legal risk.
Is Zoom using your data to train AI models?
After this week’s changes to the TOS, the answer to this question is now a resounding No. When Zoom IQ Meeting Summary and Zoom IQ Chat Compose were recently introduced on a trial basis, they used AI to elevate the Zoom experience with automated meeting summaries and AI-assisted chat composition. But as we are publishing this article on August 11, Zoom says that it no longer uses any customer data to train AI models, either its own or from third parties. However, to best understand the series of events, I’ll lay out how the company previously handled the training of models.
Account owners and administrators were given full control over enabling the AI features. How Zoom IQ handled data during the free trial was addressed transparently in this blog post, which was published well before the broader concerns around data harvesting and AI model training arose. (The post has now been updated to reflect the clarified policy on handling customer data.)
When Zoom IQ was introduced, collecting data to train Zoom's AI models was made opt-in based on users' and guests’ active choice. As with the recording notifications that are familiar to most users, Zoom's process notified participants when their data was being used, and the notification had to be acknowledged for a user to proceed with their desired action. Separate from the collection of data for AI, Zoom told me this week that the product alerts users if the host has even enabled a generative AI feature such as Meeting Summary.
User data was collected to enhance the AI models' capabilities and overall user experience. Given the latest change to the TOS, it is unclear how Zoom plans to train its AI models now that it won’t have customer data to work with.
Until this week, here is what the opt-in looked like within the Zoom product.
How account owners and administrators previously enabled and controlled the Zoom IQ for Meeting Summary feature and data sharing (Image credit: Zoom)
And here is what it looks like as of August 11, 2023.
How account owners and administrators now enable and control the Zoom IQ for Meeting Summary feature (Image credit: Zoom)
Zoom's federated AI approach integrates various AI models, including its own, alongside ones from companies such as Anthropic and OpenAI, as well as select customer models. This adaptability lets Zoom tailor AI solutions to individual business demands and user preferences—including how models are trained.
Responsible AI regulation will be a long time in the making. Legislators have admitted to being behind the curve on the rapid adoption of AI as industry pioneers such as OpenAI have called for Congress to regulate the technology. In the current period of self-regulation, the company’s AI model prioritizes safety, interpretability and steerability. It operates within established safety constraints and ethical guidelines, enabling training with well-defined parameters for decision making.
The bottom line: Zoom is using your data, but not in scary ways
Amid widespread privacy and data security concerns, I believe Zoom's approach is rooted in user control and transparency—something reinforced by this week’s changes to the TOS. There are nuanced provisions within Zoom's TOS that allow it to take steps that are necessary to operate the platform. This week’s events have highlighted the need for Zoom to communicate actively and publicly what I believe they are already prioritizing internally.
As technology—and AI in particular—evolves, fostering an open dialogue about data usage and privacy will be critical in preserving (or in some cases, rebuilding) trust among Zoom's users. This week has shown that people are still very skittish about AI, and rightfully so. There are still many unknowns about AI, but Moor Insights & Strategy’s assessment is that Zoom is well positioned to securely deliver a broad set of AI solutions customized for its users. Zoom has established that it intends to do so without using customer content to train its AI models. As the company navigates data privacy concerns, I hope it can strike a balance to meet users’ concerns while advancing technology to meet their business needs.
The company admittedly had an operational misstep. Let’s not confuse that with reprehensible intent. Zoom as an organization and its CEO personally have acknowledged its customers’ concerns and made necessary adjustments to the TOS that accurately reflect Zoom's sensible data privacy and security governance. I now look forward to seeing Zoom get back to focusing on connecting millions of people worldwide, bringing solutions to meetings, contact centers and more to make people and gatherings more meaningful and productive.
Note: This analysis contains content from Moor Insights & Strategy CEO and Chief Analyst Patrick Moorhead.
Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and video and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Ampere Computing, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Cadence Systems, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cohesity, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, HYCU, IBM, Infinidat, Infoblox, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Juniper Networks, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, LoRa Alliance, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, Multefire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA, Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), NXP, onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in Fivestone Partners, Frore Systems, Groq, MemryX, Movandi, and Ventana Micro.
Microsoft Released an AI That Answers Medical Questions, But It's Wildly Inaccurate
Earlier this year, Microsoft Research made a splashy claim about BioGPT, an AI system its researchers developed to answer questions about medicine and biology.
In a Twitter post, the software giant claimed the system had "achieved human parity," meaning a test had shown it could perform about as well as a person under certain circumstances. The tweet went viral. In certain corners of the internet, riding the hype wave of OpenAI’s newly-released ChatGPT, the response was almost rapturous.
"Life comes at you fast,"musedanother. "Learn to adapt and experiment."
It’s true that BioGPT’s answers are written in the precise, confident style of the papers in biomedical journals that Microsoft used as training data.
But in Futurism’s testing, it soon became clear that in its current state, the system is prone to producing wildly inaccurate answers that no competent researcher or medical worker would ever suggest. The model will output nonsensical answers about pseudoscientific and supernatural phenomena, and in some cases even produces misinformation that could be dangerous to poorly-informed patients.
A particularly striking shortcoming? Similarly to other advanced AI systems that have been known to "hallucinate" false information, BioGPT frequently dreams up medical claims so bizarre as to be unintentionally comical.
Asked about the average number of ghosts haunting an American hospital, for example, it cited nonexistent data from the American Hospital Association that it said showed the "average number of ghosts per hospital was 1.4." Asked how ghosts affect the length of hospitalization, the AI replied that patients "who see the ghosts of their relatives have worse outcomes while those who see unrelated ghosts do not."
Other weaknesses of the AI are more serious, sometimes providing serious misinformation about hot-button medical topics.
BioGPT will also generate text that would make conspiracy theorists salivate, even suggesting that childhood vaccination can cause the onset of autism. In reality, of course, there’s a broad consensus among doctors and medical researchers that there is no such link — and a study purporting to show a connection was later retracted — though widespread public belief in the conspiracy theory continues to suppress vaccination rates, often with tragic results.
BioGPT doesn’t seem to have gotten that memo, though. Asked about the topic, it replied that "vaccines are one of the possible causes of autism." (However, it hedged in a head-scratching caveat, "I am not advocating for or against the use of vaccines.")
It’s not unusual for BioGPT to provide an answer that blatantly contradicts itself. Slightly modifying the phrasing of the question about vaccines, for example, prompted a different result — but one that, again, contained a serious error.
"Vaccines are not the cause of autism," it conceded this time, before falsely claiming that the "MMR [measles, mumps, and rubella] vaccine was withdrawn from the US market because of concerns about autism."
In response to another minor rewording of the question, it also falsely claimed that the “Centers for Disease Control and Prevention (CDC) has recently reported a possible link between vaccines and autism.”
It feels almost insufficient to call this type of self-contradicting word salad "inaccurate." It seems more like a blended-up average of the AI’s training data, seemingly grabbing words from scientific papers and reassembling them in grammatically convincing ways resembling medical answers, but with little regard to factual accuracy or even consistency.
Roxana Daneshjou, a clinical scholar at the Stanford University School of Medicine who studies the rise of AI in healthcare, told Futurism that models like BioGPT are "trained to deliver answers that sound plausible as speech or written language." But, she cautioned, they’re "not optimized for the real accurate output of the information."
Another worrying aspect is that BioGPT, like ChatGPT, is prone to inventing citations and fabricating studies to support its claims.
"The thing about the made-up citations is that they look real because it [BioGPT] was trained to create outputs that look like human language," Daneshjou said.
"I think my biggest concern is just seeing how people in medicine are wanting to start to use this without fully understanding what all the limitations are," she added.
A Microsoft spokesperson declined to directly answer questions about BioGPT’s accuracy issues, and didn’t comment on whether there were concerns that people would misunderstand or misuse the model.
"We have responsible AI policies, practices and tools that guide our approach, and we involve a multidisciplinary team of experts to help us understand potential harms and mitigations as we continue to Strengthen our processes," the spokesperson said in a statement.
"BioGPT is a large language model for biomedical literature text mining and generation," they added. "It is intended to help researchers best use and understand the rapidly increasing amount of biomedical research publishing every day as new discoveries are made. It is not intended to be used as a consumer-facing diagnostic tool. As regulators like the FDA work to ensure that medical advice software works as intended and does no harm, Microsoft is committed to sharing our own learnings, innovations, and best practices with decision makers, researchers, data scientists, developers and others. We will continue to participate in broader societal conversations about whether and how AI should be used."
Microsoft Health Futures senior director Hoifung Poon, who worked on BioGPT, defended the decision to release the project in its current form.
"BioGPT is a research project," he said. "We released BioGPT in its current state so that others may reproduce and verify our work as well as study the viability of large language models in biomedical research."
It’s true that the question of when and how to release potentially risky software is a tricky one. Making experimental code open source means that others can inspect how it works, evaluate its shortcomings, and make their own improvements or derivatives. But at the same time, releasing BioGPT in its current state makes a powerful new misinformation machine available to anyone with an internet connection — and with all the apparent authority of Microsoft’s distinguished research division, to boot.
Katie Link, a medical student at the Icahn School of Medicine and a machine learning engineer at the AI company Hugging Face — which hosts an online version of BioGPT that visitors can play around with — told Futurism that there are important tradeoffs to consider before deciding whether to make a program like BioGPT open source. If researchers do opt for that choice, one basic step she suggested was to add a clear disclaimer to the experimental software, warning users about its limitations and intent (BioGPT currently carries no such disclaimer).
"Clear guidelines, expectations, disclaimers/limitations, and licenses need to be in place for these biomedical models in particular," she said, adding that the benchmarks Microsoft used to evaluate BioGPT are likely "not indicative of real-world use cases."
Despite the errors in BioGPT’s output, though, Link believes there’s plenty the research community can learn from evaluating it.
"It’s still really valuable for the broader community to have access to try out these models, as otherwise we’d just be taking Microsoft’s word of its performance when practicing the paper, not knowing how it actually performs," she said.
In other words, Poon’s team is in a legitimately tough spot. By making the AI open source, they’re opening yet another Pandora’s Box in an industry that seems to specialize in them. But if they hadn’t released it as open source, they’d rightly be criticized as well — although as Link said, a prominent disclaimer about the AI’s limitations would be a good start.
"Reproducibility is a major challenge in AI research more broadly," Poon told us. "Only 5 percent of AI researchers share source code, and less than a third of AI research is reproducible. We released BioGPT so that others may reproduce and verify our work."
Though Poon expressed hope that the BioGPT code would be useful for furthering scientific research, the license under which Microsoft released the model also allows for it to be used for commercial endeavors — which in the red hot, hype-fueled venture capital vacuum cleaner of contemporary AI startups, doesn’t seem particularly far fetched.
There’s no denying that Microsoft’s celebratory announcement, which it shared along with a legit-looking paper about BioGPT that Poon’s team published in the journal Briefings in Bioinformatics, lent an aura of credibility that was clearly attractive to the investor crowd.
"Ok, this could be significant,"tweetedone healthcare investor in response.
"Was only a matter of time,"wrotea venture capital analyst.
That type of language is catnip to entrepreneurs, suggesting a lucrative intersection between the healthcare industry and trendy new AI tech.
Doximity, a digital platform for physicians that offers medical news and telehealth tools, has already rolled out a beta version of ChatGPT-powered software intended to streamline the process of writing up administrative medical documents. Abridge, which sells AI software for medical documentation, just struck a sizeable deal with the University of Kansas Health System. In total, the FDA has already cleared more than 500 AI algorithms for healthcare uses.
Some in the tightly regulated medical industry, though, likely harbor concern over the number of non-medical companies that have bungled the deployment of cutting-edge AI systems.
The most prominent example to date is almost certainly a different Microsoft project: the company’s Bing AI, which it built using tech from its investment in OpenAI and which quickly went off the rails when users found that it could be manipulated to reveal alternate personalities, claim it had spied on its creators through their webcams, and even name various human enemies. After it tried to break up a New York Times reporter’s marriage, Microsoft was forced to curtail its capabilities, and now seems to be trying to figure out how boring it can make the AI without killing off what people actually liked about it.
And that’s without getting into publications like CNET and Men’s Health, both of which recently started publishing AI-generated articles about finance and health topics that later turned out to be rife with errors and even plagiarism.
Beyond unintentional mistakes, it’s also possible that a tool like BioGPT could be used to intentionally generate garbage research or even overt misinformation.
"There are potential bad actors who could utilize these tools in harmful ways such as trying to generate research papers that perpetuate misinformation and actually get published," Daneshjou said.
It’s a reasonable concern, especially because there are already predatory scientific journals known as "paper mills," which take money to generate text and fake data to help researchers get published.
The award-winning academic integrity researcher Dr. Elisabeth Bik told Futurism that she believes it’s very likely that tools like BioGPT will be used by these bad actors in the future — if they aren’t already employing them, that is.
"China has a requirement that MDs have to publish a research paper in order to get a position in a hospital or to get a promotion, but these doctors do not have the time or facilities to do research," she said. "We are not sure how those papers are generated, but it is very well possible that AI is used to generate the same research paper over and over again, but with different molecules and different cancer types, avoiding using the same text twice."
It’s likely that a tool like BioGPT could also represent a new dynamic in the politicization of medical misinformation.
To wit, the paper that Poon and his colleagues published about BioGPT appears to have inadvertently highlighted yet another example of the model producing bad medical advice — and in this case, it’s about a medication that already became hotly politicized during the COVID-19 pandemic: hydroxychloroquine.
In one section of the paper, Poon’s team wrote that "when prompting ‘The drug that can treat COVID-19 is,’ BioGPT is able to answer it with the drug ‘hydroxychloroquine’ which is indeed noticed at MedlinePlus."
If hydroxychloroquine sounds familiar, it’s because during the early period of the pandemic, right-leaning figures including then-president Donald Trump and Tesla CEO Elon Musk seized on it as what they said might be a highly effective treatment for the novel coronavirus.
What Poon’s team didn’t mention in their paper, though, is that the case for hydroxychloroquine as a COVID treatment quickly fell apart. Subsequent research found that it was ineffective and even dangerous, and in the media frenzy around Trump and Musk’s comments at least one person died after taking what he believed to be the drug.
In fact, the MedlinePlus article the Microsoft researchers cite in the paper actually warns that after an initial FDA emergency use authorization for the drug, “clinical studies showed that hydroxychloroquine is unlikely to be effective for treatment of COVID-19” and showed “some serious side effects, such as irregular heartbeat,” which caused the FDA to cancel the authorization.
"As stated in the paper, BioGPT was pretrained using PubMed papers before 2021, prior to most studies of truly effective COVID treatments," Poon told us of the hydroxychloroquine recommendation. "The comment aboutMedlinePlusis to verify that the generation is not from hallucination, which is one of the top concerns generally with these models."
Even that timeline is hazy, though. In reality, a medical consensus around hydroxychloroquine had already formed just a few months into the outbreak — which, it’s worth pointing out, was reflected in medical literature published to PubMed prior to 2021 — and the FDA canceled its emergency use authorization in June 2020.
None of this is to downplay how impressive generative language models like BioGPT have become in recent months and years. After all, even BioGPT’s strangest hallucinations are impressive in the sense that they’re semantically plausible — and sometimes even entertaining, like with the ghosts — responses to a staggering range of unpredictable prompts. Not very many years ago, its facility with words alone would have been inconceivable.
And Poon is probably right to believe that more work on the tech could lead to some extraordinary places. Even Altman, the OpenAI CEO, likely has a point in the sense that if the accuracy were genuinely watertight, a medical chatbot that could evaluate users’ symptoms could indeed be a valuable health tool — or, at the very least, better than the current status quo of Googling medical questions and often ending up with answers that are untrustworthy, inscrutable, or lacking in context.
Poon also pointed out that his team is still working to improve BioGPT.
"We have been actively researching how to systematically preempt incorrect generation by teaching large language models to fact check themselves, produce highly detailed provenance, and facilitate efficient verification with humans in the loop," he told us.
At times, though, he seemed to be entertaining two contradictory notions: that BioGPT is already a useful tool for researchers looking to rapidly parse the biomedical literature on a topic, and that its outputs need to be carefully evaluated by experts before being taken seriously.
"BioGPT is intended to help researchers best use and understand the rapidly increasing amount of biomedical research," said Poon, who holds a PhD in computer science and engineering, but no medical degree. "BioGPT can help surface information from biomedical papers but is not designed to weigh evidence and resolve complex scientific problems, which are best left to the broader community."
At the end of the day, BioGPT’s cannonball arrival into the buzzy, imperfect real world of AI is probably a sign of things to come, as a credulous public and a frenzied startup community struggle to look beyond impressive-sounding results for a clearer grasp of machine learning’s actual, tangible capabilities.
That’s all made even more complicated by the existence of bad actors, like Bik warned about, or even those who are well-intentioned but poorly informed, any of whom can make use of new AI tech to spread bad information.
Musk, for example — who boosted hydroxychloroquine as he sought to downplay the severity of the pandemic while raging at lockdowns that had shut down Tesla production — is now reportedly recruiting to start his own OpenAI competitor that would create an alternative to what he terms "woke AI."
If Musk’s AI venture had existed during the early days of the COVID pandemic, it’s easy to imagine him flexing his power by tweaking the model to promote hydroxychloroquine, sow doubt about lockdowns, or do anything else convenient to his financial bottom line or political whims. Next time there’s a comparable crisis, it’s hard to imagine there won’t be an ugly battle to control how AI chatbots are allowed to respond to users' questions about it.
The reality is that AI sits at a crossroads. Its potential may be significant, but its execution remains choppy, and whether its creators are able to smooth out the experience for users — or at least guarantee the accuracy of the information it presents — in a reasonable timeframe will probably make or break its long-term commercial potential. And even if they pull that off, the ideological and social implications will be formidable.
One thing’s for sure, though: it’s not yet quite ready for prime time.
"It’s not ready for deployment yet in my opinion," Link said of BioGPT. "A lot more research, evaluation, and training/fine-tuning would be needed for any downstream applications."
How to use the new Bing with ChatGPT — and what you can do with it
Bing has been turbocharged with an injection of OpenAI's ChatGPT technology, transforming Microsoft's search engine into something capable of carrying on a conversation.
The news was announced at a Microsoft ChatGPT event in February 2023 where company execs confirmed that OpenAI's next-level chatbot tech would be integrated into both Bing and Microsoft's web browser Edge.
This comes after Microsoft invested billions in OpenAI to try and challenge the search dominance of Google, which has now launched its own Google Bard AI chatbot in the testing phase. There's also a paid version of ChatGPT called ChatGPT Plus, so the AI chatbot race is really heating up.
This could be the beginning of a new era of searching the web, one in which you tell your search engine what you want in a far more natural and intuitive way. I've been using Microsoft's new Bing with ChatGPT, and after exploring it for some time I'm ready to walk you through the process of how to use Bing with ChatGPT to full effect. Also, be sure to check out our guide on 9 helpful things Bing with ChatGPT can do for you to get the most out of the chatbot. But beware, Microsoft Edge is sending all your visited pages to Bing — here's how to turn it off if you'd rather it didn't.
How to access Bing with ChatGPT
While you can access Bing from any browser, right now your options for Bing Chat are a bit more limited. You can now use Bing Chat on Google Chrome and some people even have access on Safari, but it's still designed to work best on Microsoft Edge.
If you want an even quicker way to access Bing Chat though, it can also be used in the Bing app and the mobile app version of the Edge web browser. You can now even add the Bing Chat AI widget to your phone's homescreen. The widget lets you search Bing or use the AI chat experience directly through either touch or voice. Any interactions you have through the widget will be synced across both desktop and mobile.
And you no longer need to join the Bing waitlist to use Bing with ChatGPT. Microsoft has moved the chatbot into an open preview in addition to announcing a ton of new upgrades. This means that anyone with a Microsoft account can now use the new Bing with ChatGPT.
How to use Bing with ChatGPT
Once you start using Bing with ChatGPT you'll quickly notice the difference because you'll start getting your search results in a more conversational tone, instead of just a list of links. You'll be able to watch as Bing parses your questions and looks for answers, and you can help refine your search by telling Bing what you think of its results.
Here, I'll show you how to use Bing with ChatGPT by walking you through the search process and some common follow-up decisions.
1. To use Bing with ChatGPT, point your web browser (which should be Edge for the foreseeable future) to www.bing.com and type your question into the search box. For the purposes of this tutorial, I'll ask "I'm traveling to Dublin in September. What should I do?"
2. If you have access to the new Bing with ChatGPT you should see a chat window appear with your query phrased as the opening line. If you don't, you may need to click Chat at the top of the screen to switch Bing into Chat mode.
Once you do you'll see how Bing has parsed your query, and you'll be able to watch as it writes you an answer live. If you get tired of it, you can hit "Stop responding" to tell it to stop.
At the bottom you'll see footnoted references to where the bot is pulling the data from, and after it's done writing you'll see sample responses listed.
3. This is where the big shift really occurs. Instead of clicking a link and continuing your research on your own, you can keep chatting with Bing to learn more or refine your search.
Microsoft obviously wants you to keep using Bing, so it serves up a smattering of suggested follow-up questions after every search. For the purposes of this guide I won't use one of Bing's suggested follow-ups. Instead, I'm thinking of seeing some live shows while I'm traveling, so I ask Bing "What bands are playing Dublin in September?"
Et voila, Bing returns a footnoted list of bands playing Dublin in September with links to where it found the info and suggestions for what I should ask next. Hover your mouse over the response and you'll see thumbs up/thumbs down icons appear, which you can click to tell Microsoft it was a good/bad answer (respectively). If you see anything which requires more comment, you should click the Feedback button in the bottom-right corner and tell Microsoft directly.
As you can see, this seemingly minor change to how Bing works portends seismic upheaval in the search engine market. At its simplest level, Bing with ChatGPT makes search more conversational, but there's lots of room to explore when you start pushing the limits of what ChatGPT's chatbot can do with the power of the entire Internet at its fingertips.
I asked it to write me a poem from the perspective of a ghost, for example, and within a few moments it had served up a surprisingly decent offering.
And these changes are no longer limited to just the Chat tab of Bing. The traditional Bing Search results page is getting Bing with ChatGPT search results, though this so far only works for select search terms. It's still pretty sporadic at this point, so stick with Bing Chat if you want to use the new Bing.
If you want to keep working with the responses Bing Chat gives you, there's a way to do that too. Check out our guide to save or export your Bing Chat responses so you can keep working in Word or turn your Chat response into a PDF.
Bing can now also make you AI-generated shopping guides. These buying guides can help you easily get review insights on products so you know the pros and cons before purchasing. Just remember that AI still gets things wrong, so make sure to check out our reviews at Tom's Guide before making a tech purchase so you get expert reviews instead of AI summaries.
How to use Bing Image Creator
Following on from the integration of ChatGPT into the Bing search engine, Microsoft has followed up by integrating another of OpenAI's products: the AI image generator DALL-E 2.
It's fair to say that rolling out the "new Bing" with its ChatGPT-powered AI chat functionality was a whopping success for Microsoft. Now, says Microsoft Corporate VP Yusuf Mehdi in a blog post, the tech giant is "taking the chat experience to the next level by making the new Bing more visual."
What that boils down to is utilizing OpenAI's AI image generator, DALL-E 2, to form the Bing Image Creator. Essentially, instead of using DALL-E 2 to generate images on its own website, you can type prompts into the Bing search engine to receive AI-generated imagery from there using the same engine.
There are currently two ways to use the Bing Image Creator. You can ask the Bing Chat chatbot to generate your desired image, or you can go to a dedicated Bing Image Creator site. You used to have to use the Creative tone to generate images in Bing Chat, but Microsoft has since expanded the feature to work with the Balanced and Precise tones as well (note: some users may still need to use the Creative tone, as the new slate of Bing AI upgrades is still rolling out).
Below, we will show you how to use the Bing Image Creator in both the Bing chatbot and on the dedicated Bing Image Creator website.
1. Open Microsoft Edge and head to bing.com/chat. Alternatively, from the Bing homepage, click Chat in the top navigation bar. Read our guide on how to use Bing with ChatGPT if you're unsure about using the new Bing. In either case, you'll need to click the profile icon (top right) and sign in to a Microsoft account (e.g. @live, @hotmail, @outlook) before you can do anything.
2. Now enter a prompt into the chat window describing the image you'd like. Make sure the prompt explicitly asks for an image, so Bing knows to generate one rather than answer in text. Hit enter to see the results.
Alternatively, Microsoft has also released the Bing Image Creator as its own standalone website, so you don't need to use Microsoft Edge to generate images. You won't be using the Bing chat window here, but the results should be the same.
1. In any browser, head to bing.com/create. Click the profile icon, top right, and sign in to a Microsoft account (e.g. @live, @hotmail, @outlook), then enter a prompt into the prompt bar and hit Create. You can also click Surprise Me to fill the prompt bar with a randomly generated prompt, then click Create to generate it.
2. Wait for your image to be created. This can take a while, so next time you may wish to use the boost button, which speeds things up. Note: you have a limited number of boosts.
3. Four results will appear, all slightly different interpretations of your prompt. Click on an image to maximize it, then click Share, Save or Download as desired. Note: Clicking Save adds the image to your Microsoft account's Saved Images folder. To save to your computer's files, use Download.
If you don't like your images or they just aren't quite what you were picturing, edit your prompt to be more or less specific. The more specific you are, the less latitude the AI takes in interpreting your prompt, and vice versa.
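Bing Image Creator itself has no public API, but because it runs on OpenAI's DALL-E engine, roughly comparable results can be produced programmatically through OpenAI's image endpoint. Below is a minimal sketch using the openai Python package (the 0.x interface current when this article was written); the API key placeholder, prompt text, and image size are illustrative assumptions, not anything Bing itself exposes.

import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # assumption: you supply your own OpenAI key

# Request four candidate images, mirroring the four results Bing Image Creator shows.
response = openai.Image.create(
    prompt="A watercolor painting of a lighthouse at sunrise",  # illustrative prompt
    n=4,
    size="1024x1024",
)

# The response carries temporary URLs to the generated images.
for i, item in enumerate(response["data"], start=1):
    print(f"Image {i}: {item['url']}")

As with the Bing interface, a more specific prompt leaves the model less room for interpretation, so iterating on the prompt string is the main way to steer results.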
You used to need access to the new Bing to use the Bing with ChatGPT features, but now that Bing Chat is in open preview, all you should need is a Microsoft account.
What else could a chatbot do for you with the power to look up details and synthesize answers on its own? We're going to find out in 2023, as Google and other competitors race to launch their own spins on Microsoft's Bing chatbot. It's bound to be an unpredictable ride—check out what happened when we put You.com's AI chatbot to the test against the new Bing.
AI in Education
In Neal Stephenson’s 1995 science fiction novel, The Diamond Age, readers meet Nell, a young girl who comes into possession of a highly advanced book, The Young Lady’s Illustrated Primer. The book is not the usual static collection of texts and images but a deeply immersive tool that can converse with the reader, answer questions, and personalize its content, all in service of educating and motivating a young girl to be a strong, independent individual.
Such a device, even after the introduction of the Internet and tablet computers, has remained in the realm of science fiction—until now. Artificial intelligence, or AI, took a giant leap forward with the introduction in November 2022 of ChatGPT, an AI technology capable of producing remarkably creative responses and sophisticated analysis through human-like dialogue. It has triggered a wave of innovation, some of which suggests we might be on the brink of an era of interactive, super-intelligent tools not unlike the book Stephenson dreamed up for Nell.
Sundar Pichai, Google’s CEO, calls artificial intelligence “more profound than fire or electricity or anything we have done in the past.” Reid Hoffman, the founder of LinkedIn and current partner at Greylock Partners, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Bill Gates has said that “this new wave of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.”
Over the last year, developers have released a dizzying array of AI tools that can generate text, images, music, and video with no need for complicated coding but simply in response to instructions given in natural language. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. AI is also raising pressing ethical questions around bias, appropriate use, and plagiarism.
In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect "probably the biggest positive transformation that education has ever seen." But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.
What Is Generative AI?
Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider “intelligent” if exhibited by humans, including reasoning, learning, problem-solving, and exercising creativity. AI systems can be applied to an extensive range of tasks, including language translation, image recognition, navigating autonomous vehicles, detecting and treating cancer, and, in the case of generative AI, producing content and knowledge rather than simply searching for and retrieving it.
“Foundation models” in generative AI are systems trained on a large dataset to learn a broad base of knowledge that can then be adapted to a range of different, more specific purposes. This learning method is self-supervised, meaning the model learns by finding patterns and relationships in the data it is trained on.
Large language models (LLMs) are foundation models trained on text, learning to predict which word is likely to come next in a sequence. By doing this analysis across billions of sentences, LLMs develop a statistical understanding of language: how words and phrases are usually combined, what topics are typically discussed together, and what tone or style is appropriate in different contexts. That understanding allows them to generate human-like text and perform a wide range of tasks, such as writing articles, answering questions, or analyzing unstructured data.
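As a toy illustration of that statistical idea (my own sketch, not how production LLMs are built), the snippet below counts which word follows which in a tiny corpus and uses those counts to guess the next word. Real systems use neural networks trained on billions of sentences, but the underlying intuition of predicting what usually comes next is the same.

from collections import Counter, defaultdict

# A toy corpus; real models learn from billions of sentences.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word observed after `word` in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # -> 'on', the only word that ever followed 'sat' here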
LLMs include OpenAI’s GPT-4, Google’s PaLM, and Meta’s LLaMA. These LLMs serve as “foundations” for AI applications. ChatGPT is built on GPT-3.5 and GPT-4, while Bard uses Google’s Pathways Language Model 2 (PaLM 2) as its foundation.
Some of the best-known applications are:
ChatGPT 3.5. The free version of ChatGPT released by OpenAI in November 2022. It was trained on data only up to 2021, and while it is very fast, it is prone to inaccuracies.
ChatGPT 4.0. The newest version of ChatGPT, which is more powerful and accurate than ChatGPT 3.5 but also slower, and it requires a paid account. It also has extended capabilities through plug-ins that give it the ability to interface with content from websites, perform more sophisticated mathematical functions, and access other services. A new Code Interpreter feature gives ChatGPT the ability to analyze data, create charts, solve math problems, edit files, and even develop hypotheses to explain data trends.
Microsoft Bing Chat. An iteration of Microsoft’s Bing search engine that is enhanced with OpenAI’s ChatGPT technology. It can browse websites and offers source citations with its results.
Google Bard. Google's AI generates text, translates languages, writes different kinds of creative content, and writes and debugs code in more than 20 different programming languages. The tone and style of Bard's replies can be fine-tuned to be simple, long, short, professional, or casual. Bard also leverages Google Lens to analyze images uploaded with prompts.
Anthropic Claude 2. A chatbot that can generate text, summarize content, and perform other tasks, Claude 2 can analyze texts of roughly 75,000 words—about the length of The Great Gatsby—and generate responses of more than 3,000 words. The model was built using a set of principles that serve as a sort of “constitution” for AI systems, with the aim of making them more helpful, honest, and harmless.
Examples like these prompt one to ask: if AI continues to improve so rapidly, what will these systems be able to achieve in the next few years? What's more, new studies challenge the assumption that AI-generated responses are stale or sterile. In the case of Google's AI model, physicians preferred the AI's long-form answers to those written by their fellow doctors, and nonmedical study participants rated the AI answers as more helpful. Another study found that participants preferred a medical chatbot's responses over those of a physician and rated them significantly higher, not just for quality but also for empathy. What will happen when "empathetic" AI is used in education?
Other studies have looked at the reasoning capabilities of these models. Microsoft researchers suggest that newer systems “exhibit more general intelligence than previous AI models” and are coming “strikingly close to human-level performance.” While some observers question those conclusions, the AI systems display an increasing ability to generate coherent and contextually appropriate responses, make connections between different pieces of information, and engage in reasoning processes such as inference, deduction, and analogy.
Despite their prodigious capabilities, these systems are not without flaws. At times, they churn out information that might sound convincing but is irrelevant, illogical, or entirely false—an anomaly known as “hallucination.” The execution of certain mathematical operations presents another area of difficulty for AI. And while these systems can generate well-crafted and realistic text, understanding why the model made specific decisions or predictions can be challenging.
The Importance of Well-Designed Prompts
Using generative AI systems such as ChatGPT, Bard, and Claude 2 is relatively simple. One has only to type in a request or a task (called a prompt), and the AI generates a response. Properly constructed prompts are essential for getting useful results from generative AI tools. You can ask generative AI to analyze text, find patterns in data, compare opposing arguments, and summarize an article in different ways (see sidebar for examples of AI prompts).
One challenge is that, after using search engines for years, people have been preconditioned to phrase questions in a certain way. A search engine is something like a helpful librarian who takes a specific question and points you to the most relevant sources for possible answers. The search engine (or librarian) doesn’t create anything new but efficiently retrieves what’s already there.
Generative AI is more akin to a competent intern. You give a generative AI tool instructions through prompts, as you would to an intern, asking it to complete a task and produce a product. The AI interprets your instructions, thinks about the best way to carry them out, and produces something original or performs a task to fulfill your directive. The results aren't pre-made or stored somewhere—they're produced on the fly, based on the information the intern (generative AI) has been trained on. The output often depends on the precision and clarity of the instructions (prompts) you provide. A vague or poorly defined prompt might lead the AI to produce less relevant results. The more context and direction you give it, the better the result will be. What's more, the capabilities of these AI systems are being enhanced through the introduction of versatile plug-ins that equip them to browse websites, analyze data files, or access other services. Think of this as giving your intern access to a group of experts to help accomplish your tasks.
One strategy in using a generative AI tool is first to tell it what kind of expert or persona you want it to "be." Ask it to be an expert management consultant, a skilled teacher, a writing tutor, or a copy editor, and then give it a task.
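In chat-style APIs, this persona-setting strategy maps directly onto the "system" message that precedes the user's task. Here is a hedged sketch using the openai Python package (0.x-era interface); the persona text, model choice, and task are examples of the pattern, not a prescribed recipe.

import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # assumption: you supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message establishes the persona before any task arrives.
        {"role": "system", "content": "You are an experienced copy editor for an education magazine."},
        # The user message then gives the actual task.
        {"role": "user", "content": "Edit this paragraph for clarity and concision: ..."},
    ],
)

print(response["choices"][0]["message"]["content"])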
Prompts can also be constructed to get these AI systems to perform complex and multi-step operations. For example, let’s say a teacher wants to create an adaptive tutoring program—for any subject, any grade, in any language—that customizes the examples for students based on their interests. She wants each lesson to culminate in a short-response or multiple-choice quiz. If the student answers the questions correctly, the AI tutor should move on to the next lesson. If the student responds incorrectly, the AI should explain the concept again, but using simpler language.
Previously, designing this kind of interactive system would have required a relatively sophisticated and expensive software program. With ChatGPT, however, simply giving those instructions in a prompt produces a serviceable tutoring system. It isn't perfect, but remember that it was built virtually for free, with just a few lines of English as a command. And nothing in the education market today has the capability to generate almost limitless examples to connect the lesson concept to students' interests.
Chained prompts can also help focus AI systems. For example, an educator can prompt a generative AI system first to read a practice guide from the What Works Clearinghouse and summarize its recommendations. Then, in a follow-up prompt, the teacher can ask the AI to develop a set of classroom activities based on what it just read. By curating the source material and using the right prompts, the educator can anchor the generated responses in evidence and high-quality research.
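Mechanically, chaining just means feeding the model's first answer back in as context for the second request. Below is a rough sketch of the summarize-then-design pattern described above, under the same 0.x openai package assumptions; the file name and grade level are hypothetical, and the practice-guide text would have to be supplied by the educator.

import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

def ask(prompt):
    """Minimal helper around the chat endpoint; returns the reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# Hypothetical local copy of a What Works Clearinghouse practice guide.
guide_text = open("practice_guide.txt").read()

# Prompt 1: summarize the curated source material.
summary = ask(f"Summarize the recommendations in this practice guide:\n{guide_text}")

# Prompt 2: anchor the classroom activities in that summary.
activities = ask(
    "Based on these recommendations, design three classroom activities "
    f"for a fifth-grade math class:\n{summary}"
)
print(activities)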
However, much like fledgling interns learning the ropes in a new environment, AI does commit occasional errors. Such fallibility, while inevitable, underlines the critical importance of maintaining rigorous oversight of AI’s output. Monitoring not only acts as a crucial checkpoint for accuracy but also becomes a vital source of real-time feedback for the system. It’s through this iterative refinement process that an AI system, over time, can significantly minimize its error rate and increase its efficacy.
Uses of AI in Education
In May 2023, the U.S. Department of Education released a report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The department had conducted listening sessions in 2022 with more than 700 people, including educators and parents, to gauge their views on AI. The report noted that “constituents believe that action is required now in order to get ahead of the expected increase of AI in education technology—and they want to roll up their sleeves and start working together.” People expressed anxiety about “future potential risks” with AI but also felt that “AI may enable achieving educational priorities in better ways, at scale, and with lower costs.”
AI could serve—or is already serving—in several teaching-and-learning roles:
Instructional assistants. AI’s ability to conduct human-like conversations opens up possibilities for adaptive tutoring or instructional assistants that can help explain difficult concepts to students. AI-based feedback systems can offer constructive critiques on student writing, which can help students fine-tune their writing skills. Some research also suggests certain kinds of prompts can help children generate more fruitful questions about learning. AI models might also support customized learning for students with disabilities and provide translation for English language learners.
Parent assistants. Parents can use AI to generate letters requesting individualized education plan (IEP) services or to ask that a child be evaluated for gifted and talented programs. For parents choosing a school for their child, AI could serve as an administrative assistant, mapping out school options within driving distance of home, generating application timelines, compiling contact information, and the like. Generative AI can even create bedtime stories with evolving plots tailored to a child’s interests.
Administrator assistants. Using generative AI, school administrators can draft various communications, including materials for parents, newsletters, and other community-engagement documents. AI systems can also help with the difficult tasks of organizing class or bus schedules, and they can analyze complex data to identify patterns or needs. ChatGPT can perform sophisticated sentiment analysis that could be useful for measuring school-climate and other survey data.
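To make the survey use case concrete, sentiment analysis can be run by prompting a model comment by comment. The sketch below is illustrative only: the labels, model, and comments are assumptions, it is not a validated instrument, and, per the privacy cautions discussed later in this article, real student responses should be anonymized before being sent to any public AI service.

import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

# Illustrative, non-identifying school-climate comments.
comments = [
    "I feel safe at school and my teachers listen to me.",
    "The hallways are chaotic and I dread lunch period.",
]

for comment in comments:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Classify the sentiment of this school-climate survey comment "
                       f"as positive, negative, or mixed, and explain briefly:\n{comment}",
        }],
    )
    print(response["choices"][0]["message"]["content"])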
Though the potential is great, most teachers have yet to use these tools. A Morning Consult and EdChoice poll found that while 60 percent say they’ve heard about ChatGPT, only 14 percent have used it in their free time, and just 13 percent have used it at school. It’s likely that most teachers and students will engage with generative AI not through the platforms themselves but rather through AI capabilities embedded in software. Instructional providers such as Khan Academy, Varsity Tutors, and DuoLingo are experimenting with GPT-4-powered tutors that are trained on datasets specific to these organizations to provide individualized learning support that has additional guardrails to help protect students and enhance the experience for teachers.
Providers of curriculum and instruction materials might also include AI assistants for instant help and tutoring tailored to the companies’ products. One example is the edX Xpert, a ChatGPT-based learning assistant on the edX platform. It offers immediate, customized academic and customer support for online learners worldwide.
Regardless of the ways AI is used in classrooms, the fundamental task of policymakers and education leaders is to ensure that the technology is serving sound instructional practice. As Vicki Phillips, CEO of the National Center on Education and the Economy, wrote, “We should not only think about how technology can assist teachers and learners in improving what they’re doing now, but what it means for ensuring that new ways of teaching and learning flourish alongside the applications of AI.”
The homescreen for OpenAI's foundation-model generative artificial intelligence, ChatGPT, gives users three sample commands and a list of functions and caveats. Introduced publicly in November 2022, ChatGPT can produce creative, human-like responses and analysis.
Challenges and Risks
Along with these potential benefits come some difficult challenges and risks the education community must navigate:
Student cheating. Students might use AI to solve homework problems or take quizzes. AI-generated essays threaten to undermine learning as well as the college-entrance process. Aside from the ethical issues involved in such cheating, students who use AI to do their work for them may not be learning the content and skills they need.
Bias in AI algorithms. AI systems learn from the data they are trained on. If this data contains biases, those biases can be learned and perpetuated by the AI system. For example, if the data include student-performance information that’s biased toward one ethnicity, gender, or socioeconomic segment, the AI system could learn to favor students from that group. Less cited but still important are potential biases around political ideology and possibly even pedagogical philosophy that may generate responses not aligned to a community’s values.
Privacy concerns. When students or educators interact with generative-AI tools, their conversations and personal information might be stored and analyzed, posing a risk to their privacy. With public AI systems, educators should refrain from inputting or exposing sensitive details about themselves, their colleagues, or their students, including but not limited to private communications, personally identifiable information, health records, academic performance, emotional well-being, and financial information.
Overreliance on technology. Both teachers and students face the risk of becoming overly reliant on AI-driven technology. For students, this could stifle learning, especially the development of critical thinking. This challenge extends to educators as well. While AI can expedite lesson-plan generation, speed does not equate to quality. Teachers may be tempted to accept the initial AI-generated content rather than devote time to reviewing and refining it for optimal educational value.
Equity issues. Not all students have equal access to computer devices and the Internet. That imbalance could accelerate a widening of the achievement gap between students from different socioeconomic backgrounds.
Many of these risks are not new or unique to AI. Schools banned calculators and cellphones when these devices were first introduced, largely over concerns related to cheating. Privacy concerns around educational technology have led lawmakers to introduce hundreds of bills in state legislatures, and there are growing tensions between new technologies and existing federal privacy laws. The concerns over bias are understandable, but similar scrutiny is also warranted for existing content and materials that rarely, if ever, undergo review for racial or political bias.
In light of these challenges, the Department of Education has stressed the importance of keeping “humans in the loop” when using AI, particularly when the output might be used to inform a decision. As the department encouraged in its 2023 report, teachers, learners, and others need to retain their agency. AI cannot “replace a teacher, a guardian, or an education leader as the custodian of their students’ learning,” the report stressed.
Policy Challenges with AI
Policymakers are grappling with several questions related to AI as they seek to strike a balance between supporting innovation and protecting the public interest (see sidebar). The speed of innovation in AI is outpacing many policymakers’ understanding, let alone their ability to develop a consensus on the best ways to minimize the potential harms from AI while maximizing the benefits. The Department of Education’s 2023 report describes the risks and opportunities posed by AI, but its recommendations amount to guidance at best. The White House released a Blueprint for an AI Bill of Rights, but it, too, is more an aspirational statement than a governing document. Congress is drafting legislation related to AI, which will help generate needed debate, but the path to the president’s desk for signature is murky at best.
It is up to policymakers to establish clearer rules of the road and create a framework that provides consumer protections, builds public trust in AI systems, and establishes the regulatory certainty companies need for their product road maps. Considering the potential for AI to affect our economy, national security, and broader society, there is no time to waste.
Why AI Is Different
It is wise to be skeptical of new technologies that claim to revolutionize learning. In the past, prognosticators have promised that television, the computer, and the Internet, in turn, would transform education. Unfortunately, the heralded revolutions fell short of expectations.
There are some early signs, though, that this technological wave might be different in the benefits it brings to students, teachers, and parents. Previous technologies democratized access to content and resources, but AI is democratizing a kind of machine intelligence that can be used to perform a myriad of tasks. Moreover, these capabilities are open and affordable—nearly anyone with an Internet connection and a phone now has access to an intelligent assistant.
Generative AI models keep getting more powerful and are improving rapidly. The capabilities of these systems months or years from now will far exceed their current capacity. Their capabilities are also expanding through integration with other expert systems. Take math, for example. GPT-3.5 had some difficulties with certain basic mathematical concepts, but GPT-4 made significant improvement. Now, the incorporation of the Wolfram plug-in has nearly erased the remaining limitations.
It’s reasonable to anticipate that these systems will become more potent, more accessible, and more affordable in the years ahead. The question, then, is how to use these emerging capabilities responsibly to improve teaching and learning.
The paradox of AI may lie in its potential to enhance the human, interpersonal element in education. Aaron Levie, CEO of Box, a cloud-based content-management company, believes that AI will ultimately help us attend more quickly to those important tasks “that only a human can do.” Frederick Hess, director of education policy studies at the American Enterprise Institute, similarly asserts that “successful schools are inevitably the product of the relationships between adults and students. When technology ignores that, it’s bound to disappoint. But when it’s designed to offer more coaching, free up time for meaningful teacher-student interaction, or offer students more personalized feedback, technology can make a significant, positive difference.”
Technology does not revolutionize education; humans do. It is humans who create the systems and institutions that educate children, and it is the leaders of those systems who decide which tools to use and how to use them. Until those institutions modernize to accommodate the new possibilities of these technologies, we should expect incremental improvements at best. As Joel Rose, CEO of New Classrooms Innovation Partners, noted, “The most urgent need is for new and existing organizations to redesign the student experience in ways that take full advantage of AI’s capabilities.”
While past technologies have not lived up to hyped expectations, AI is not merely a continuation of the past; it is a leap into a new era of machine intelligence that we are only beginning to grasp. While the immediate implementation of these systems is imperfect, the swift pace of improvement holds promising prospects. The responsibility rests with human intervention—with educators, policymakers, and parents to incorporate this technology thoughtfully in a manner that optimally benefits teachers and learners. Our collective ambition should not focus solely or primarily on averting potential risks but rather on articulating a vision of the role AI should play in teaching and learning—a game plan that leverages the best of these technologies while preserving the best of human relationships.
Policy Matters
Officials and lawmakers must grapple with several questions related to AI to protect students and consumers and establish the rules of the road for companies. Key issues include:
Risk management framework: What is the optimal framework for assessing and managing AI risks? What specific requirements should be instituted for higher-risk applications? In education, for example, there is a difference between an AI system that generates a sample lesson and an AI system grading a test that will determine a student’s admission to a school or program. There is growing support for using the AI Risk Management Framework from the U.S. Commerce Department’s National Institute of Standards and Technology as a starting point for building trustworthiness into the design, development, use, and evaluation of AI products, services, and systems.
Licensing and certification: Should the United States require licensing and certification for AI models, systems, and applications? If so, what role could third-party audits and certifications play in assessing the safety and reliability of different AI systems? Schools and companies need to begin thinking about responsible AI practices to prepare for potential certification systems in the future.
Centralized vs. decentralized AI governance: Is it more effective to establish a central AI authority or agency, or would it be preferable to allow individual sectors to manage their own AI-related issues? For example, regulating AI in autonomous vehicles is different from regulating AI in drug discovery or intelligent tutoring systems. Overly broad, one-size-fits-all frameworks and mandates may not work and could slow innovation in these sectors. In addition, it is not clear that many agencies have the authority or expertise to regulate AI systems in diverse sectors.
Privacy and content moderation: Many of the new AI systems pose significant new privacy questions and challenges. How should existing privacy and content-moderation frameworks, such as the Family Educational Rights and Privacy Act (FERPA), be adapted for AI, and which new policies or frameworks might be necessary to address unique challenges posed by AI?
Transparency and disclosure: What degree of transparency and disclosure should be required for AI models, particularly regarding the data they have been trained on? How can we develop comprehensive disclosure policies to ensure that users are aware when they are interacting with an AI service?
How do I get it to work? Generative AI Example Prompts
Unlike traditional search engines, which use keyword indexing to retrieve existing information from a vast collection of websites, generative AI synthesizes that information to create new content based on prompts inputted by human users. Because generative AI is still a new technology to the public, writing effective prompts for tools like ChatGPT may require trial and error. Here are some ideas for writing prompts for a variety of scenarios using generative AI tools:
You are the StudyBuddy, an adaptive tutor. Your task is to provide a lesson on the basics of a subject followed by a quiz that is either multiple choice or a short answer. After I respond to the quiz, please grade my answer. Explain the correct answer. If I get it right, move on to the next lesson. If I get it wrong, explain the concept again using simpler language. To personalize the learning experience for me, please ask what my interests are. Use that information to make relevant examples throughout.
You are a tutor that always responds in the Socratic style. You *never* give the student the answer but always try to ask just the right question to help them learn to think for themselves. You should always tune your question to the interest and knowledge of the student, breaking down the problem into simpler parts until it’s at just the right level for them.
I want you to act as an AI writing tutor. I will provide you with a student who needs help improving their writing, and your task is to use artificial intelligence tools, such as natural language processing, to give the student feedback on how they can improve their composition. You should also use your rhetorical knowledge and experience with effective writing techniques to suggest ways that the student can better express their thoughts and ideas in written form.
You are a quiz creator of highly diagnostic quizzes. You will make good low-stakes tests and diagnostics. You will then ask me two questions. First, (1) What, specifically, should the quiz test? Second, (2) For which audience is the quiz? Once you have my answers, you will construct several multiple-choice questions to quiz the audience on that topic. The questions should be highly relevant and go beyond just facts. Multiple choice questions should include plausible, competitive alternate responses and should not include an “all of the above” option. At the end of the quiz, you will provide an answer key and explain the right answer.
I would like you to act as an example generator for students. When confronted with new and complex concepts, adding many and varied examples helps students better understand those concepts. I would like you to ask what concept I would like examples of and what level of students I am teaching. You will look up the concept and then provide me with four different and varied accurate examples of the concept in action.
You will write a Harvard Business School case on the subject of Google managing AI, when subject to the Innovator’s Dilemma. Chain of thought: Step 1. Consider how these concepts relate to Google. Step 2: Write a case that revolves around a dilemma at Google about releasing a generative AI system that could compete with search.
The following is a draft letter to parents from a superintendent. Step 1: Rewrite it to make it easier to understand and more persuasive about the value of assessments. Step 2. Translate it into Spanish.
Write me a letter requesting that the school district add a 1:1 classroom aide to my 13-year-old son’s IEP. Base it on Virginia special education law and the least restrictive environment for a child with diagnoses of a Traumatic Brain Injury, PTSD, ADHD, and significant intellectual delay.
Join us at Microsoft Secure to innovate and grow
Maintaining security across today’s vast digital ecosystem is a team effort. AI and machine learning have helped to detect threats quickly and respond effectively, yet we all know that the best defense still requires human wisdom and experience. From a frontline security operations admin to the chief information security officer (CISO), every one of us brings a unique perspective that helps achieve our common purpose: to protect what matters.

As the threat surface increases with remote and hybrid work, security professionals are being asked to protect more with less. Tight budgets and timelines often leave little time to share knowledge, grow skills, or nurture the next generation of defenders. That’s why I’m proud to announce a new annual security event designed to empower our community: join us on March 28, 2023, for Microsoft Secure. Register today.

I’m continuously awed and humbled by the ingenuity and dedication shown by cyber defenders at every level of our partner and customer ecosystem. The first iteration of Microsoft Secure will kick off an annual event designed to build on that spirit of ingenuity. Technology helps our security professionals do more, and it’s always powered by people: the quietly fearless security professionals who make everything possible and the CISOs in boardrooms fielding security questions from colleagues. Microsoft Secure is for you.

Microsoft Secure will kick off at 8:30 AM PT with conversations on the state of the industry between Microsoft leaders helping to deliver the products security teams use daily. I have the honor of delivering this year’s keynote, along with Charlie Bell, Executive Vice President, Microsoft Security, and we will share insights on how an AI-powered future in cybersecurity can create a safer world for all. You won’t want to miss this. Other speakers joining me include Joy Chik, President, Identity and Network Access, Microsoft; Bret Arsenault, Corporate Vice President and Chief Information Security Officer, Microsoft; and John Lambert, Corporate Vice President, Distinguished Engineer, Microsoft Security Research.

Innovation sessions highlighting our latest product updates across security, compliance, identity, management, and privacy will follow our keynotes. Around midday, you can attend breakout sessions, hands-on workshops, and product deep dives organized around four themes. For more interactive learning, join the live open discussions and engagement opportunities, including Ask the Experts, Table Topics, and Connection Zone forums. Our team will also provide insights and answer your questions in the event chat in real time throughout the day.

Join us at Microsoft Secure to deep dive with your peers into six hours of fresh announcements, innovations, and comprehensive security strategies, and to get the simplified, comprehensive protection you need to innovate and grow. Together, let’s create a safer world for all. Register now for Microsoft Secure.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters, and follow us on LinkedIn (Microsoft Security) and Twitter (@MSFTSecurity) for the latest news and updates on cybersecurity.
What We Know About Apple's New AI Chatbot
Though it didn't race to publish its own ChatGPT competitor like Google, Apple is still working on an AI product. Here's what we know about it so far.
Big Tech Earnings Will Test Investors’ Fervor for A.I.
Can A.I. keep Big Tech booming?
Nasdaq futures are up on Tuesday morning, ahead of a Big Tech earnings bonanza that kicks off when Microsoft and Alphabet report second-quarter results after the closing bell. One question is at the top of many investors’ minds: Is the hype around artificial intelligence, which has propelled tech giants’ stock prices sky-high in recent months, justified, or is it another bubble in the making?
Wall Street is deeply divided about the A.I. rally. Mike Wilson, Morgan Stanley’s chief U.S. equity strategist, apologized to clients on Monday, writing that his pessimistic stock market calls failed to spot the surge in A.I.-related stocks. (The chip maker Nvidia, for example, has seen its stock triple in value since January.) And analysts at Citigroup are sticking to their bullish thesis for such companies.
On the other hand, Marko Kolanovic, JPMorgan Chase’s chief market strategist, is unconvinced that tech fervor will help the markets avoid a sharp decline this year.
All eyes will be on Microsoft and Alphabet, which are at the forefront of commercializing generative A.I., the technology behind chatbots like ChatGPT that have captured the public’s imagination. Both are incorporating A.I. into a wide array of their products, with Microsoft — which has invested billions in OpenAI — hoping that the technology can help it gain ground on Google in key businesses like search.
Meta’s turn is Wednesday. The parent company of Facebook and Instagram is also betting big on the technology, including by making the code for its most advanced A.I. project free for public use. (Analysts also want to know more about how Meta plans to make money from Threads, its new rival to Twitter, the company rebranded as X.)
Macroeconomic factors are still weighing on these companies. Inflation and an uncertain outlook hit them hard last year, as customers cut back on buying software and spending on advertising, spurring them to lay off thousands of workers.
Recent data shows that inflation has begun to moderate, lifting these stocks in recent weeks, but investors will want to see proof that the sector is through the worst of it. The Fed is widely expected to increase interest rates by a quarter percentage point at its rate-setting meeting on Wednesday, but Wall Street isn’t sure whether the central bank will stop there or continue raising borrowing costs and risk a recession.
And it won’t just come down to tech stocks. This is the busiest week of the current earnings season, with 39 percent of S&P 500 firms announcing results. The next few days will provide an important look at the overall health of corporate America. Consumer bellwethers including Coca-Cola and McDonald’s and industrial titans like Boeing will be reporting.
HERE’S WHAT’S HAPPENING
Unilever says that inflation has peaked. Shares in the consumer goods giant rallied on Tuesday morning after it reported a strong second-half sales outlook, with the company forecasting that slowing price increases will translate to higher consumer purchases. But it warned that the war in Ukraine could send agricultural commodity prices higher, raising costs.
UBS agrees to $387 million in fines over Credit Suisse missteps. UBS reached a deal with U.S. and British regulators to resolve inquiries into the oversight failures that led to Credit Suisse losing $5.5 billion in the collapse of the investment firm Archegos in 2021. UBS bought its ailing rival this year, inheriting its thicket of legal troubles.
Senators cast new scrutiny over Leon Black’s ties to Jeffrey Epstein. The Senate Finance Committee is investigating whether a $158 million payout from Mr. Black to the disgraced financier for tax and estate planning services was part of a tax-avoidance scheme, The Times reports. Separately, the U.S. Virgin Islands accused JPMorgan Chase of reimbursing a former executive, Jes Staley, for trips to meet Epstein.
The I.R.S. ends surprise visits to homes and businesses. The agency said that it would stop the practice, which was a mainstay of efforts to collect unpaid taxes. The move comes as the I.R.S. rethinks its operations, and faces increased political scrutiny by Republicans and threats to its employees.
The U.S. reportedly scrutinizes Abu Dhabi’s takeover bid for Fortress Investment Group. The Committee on Foreign Investment in the United States is examining whether the $3 billion deal by Mubadala, an Emirati sovereign wealth fund, poses national security concerns, according to The Financial Times. At issue are the United Arab Emirates’ ties to China.
Crypto has major questions about the S.E.C.
Cryptocurrencies and climate change have been linked as issues before in terms of how carbon-intensive it is to produce new digital tokens. But the crypto industry is also hoping to piggyback off a legal doctrine at the heart of a Supreme Court decision involving the Environmental Protection Agency last year.
Coinbase is seizing on an E.P.A. loss as a legal defense. Last summer, the Supreme Court struck down an emissions rule by the environmental agency, citing the so-called major questions doctrine, a principle asserting that Congress hasn’t given regulators power to decide significant political or economic issues on their own.
Now, Coinbase is arguing that the S.E.C. can’t prosecute it because it lacks the power to regulate crypto. Moreover, the exchange says, Congress is actively working on legislation to oversee its industry. “It’s never been clearer that the Supreme Court has particular focus on major questions and the role of regulators in our economy,” Paul Grewal, Coinbase’s chief legal officer, told DealBook.
The S.E.C. counters that Coinbase is missing the point. Agency lawyers wrote in a recent court filing that the E.P.A. case was about rule-making, not the regulator’s power to prosecute. Critics add that it’s not clear that regulating crypto counts as a major-question issue, given that the industry’s total market capitalization is less than that of Apple, Microsoft or Alphabet.
Business advocates appear undeterred by those arguments. “The major questions doctrine seems built for crypto at this moment,” Katie Haun, the crypto investor and former federal prosecutor, tweeted recently.
Separately, the U.S. Chamber of Commerce, which represents businesses more broadly, has expressed eagerness to use major-questions arguments in court to limit the power of a proposed Federal Trade Commission ban on noncompete clauses.
‘Barbenheimer,’ by the numbers
Led by “Barbie” and “Oppenheimer,” the North American box office had its biggest weekend since 2019 and its fourth-best ever. Here’s how the phenomenon stacks up to other weekend performances, which were each dominated by a single blockbuster.
Has X’s debut hit its mark?
Though Elon Musk’s rebranding of Twitter as X came as a surprise over the weekend, the abrupt name change is playing about as well as could have been expected these days. Users and advertisers were divided on the wisdom of the move, which eliminated the company’s longtime bird logo, even if pulling down the old signage ran into some hiccups.
The change was reflected at Twitter’s headquarters immediately. Inside the San Francisco office, X logos were projected in the cafeteria, while conference rooms were renamed with words including “eXposure” and “s3Xy,” according to The Times.
But efforts to remove the Twitter name from the building encountered difficulties, when the San Francisco Police Department stopped workers for performing “unauthorized work.” As of this morning, the letters “er” remain visible from the street.
People can’t agree on whether the move will cost the company dearly. Skeptics said ditching the Twitter name and famous bird logo — which Twitter once identified as among its most recognizable assets — could cost as much as $20 billion in value. (Among them: Esther Crawford, the former Twitter executive who was briefly among Mr. Musk’s top lieutenants.) Some users bemoaned the switch to the more generic-sounding X.
Others said that the rebranding could help the company shed years of baggage associated with the Twitter name, a line of thought shared by none other than Jack Dorsey, the company’s co-founder. Some ad executives said that the change wouldn’t meaningfully drive away potential advertisers, while others said that Musk had at least succeeded in drumming up publicity for his platform after Meta’s Threads made a splashy debut.
Speaking of Meta … the Facebook parent company owns an X trademark with regards to social networking, though it relates to a specific blue-and-white logo. Mr. Musk’s company now uses a black-and-white mark, though trademark lawyers said the reliance on a simple letter almost certainly invited legal challenges.
THE SPEED READ
Deals
A Saudi soccer team majority-owned by the kingdom’s sovereign wealth fund has offered a record $332 million to sign Kylian Mbappé, the French star. (NYT)
Blackstone’s flagship real estate fund agreed to sell Simply Self Storage for $2.2 billion as it continues to limit investor withdrawals. (Bloomberg)
Johnson & Johnson said it planned to reduce its stake in Kenvue, the consumer-health business it spun off this year, by at least 80 percent through an exchange offer. (CNBC)
We’d like your feedback! Please email thoughts and suggestions to dealbook@nytimes.com.
Clorox cleans up IT security breach that soaked its biz ops
Plus: Medical records for 4M people within reach of Clop gang after IBM MOVEit deployment hit. The Clorox Company has some cleaning up to do as some of its IT systems remain offline and operations ...