An exact copy of the SC-900 real questions is available here to download

We receive reports on a daily basis from applicants who sit for the Microsoft Security, Compliance, and Identity Fundamentals real exam and pass it with a good score. Some of them are so excited that they sign up for several further exams from killexams.com. We are proud to help people improve their knowledge and pass their exams happily. Our job is done.

Exam Code: SC-900 Practice exam 2023 by Killexams.com team
SC-900 Microsoft Security, Compliance, and Identity Fundamentals

Exam Number: SC-900

Exam Name: Microsoft Security, Compliance, and Identity Fundamentals



Exam Topics



The content of this exam was updated on July 26, 2021. Please download the exam skills outline below to see what has changed.

Describe the concepts of security, compliance, and identity (10-15%)

Describe the capabilities of Microsoft identity and access management solutions (30-35%)

Describe the capabilities of Microsoft security solutions (35-40%)

Describe the capabilities of Microsoft compliance solutions (25-30%)



Describe the Concepts of Security, Compliance, and Identity (10-15%)

Describe security and compliance concepts & methodologies

 describe the Zero-Trust methodology

 describe the shared responsibility model

 define defense in depth

 describe common threats

 describe encryption

 describe cloud adoption framework

Define identity concepts

 define identity as the primary security perimeter

 define authentication

 define authorization

 describe what identity providers are

 describe what Active Directory is

 describe the concept of Federated services

 define common Identity Attacks

Describe the Capabilities of Microsoft Identity and Access Management Solutions (30-35%)

Describe the basic identity services and identity types of Azure AD

 describe what Azure Active Directory is

 describe Azure AD identities (users, devices, groups, service principals/applications)

 describe what hybrid identity is

 describe the different external identity types (Guest Users)

Describe the authentication capabilities of Azure AD

 describe the different authentication methods

 describe self-service password reset

 describe password protection and management capabilities

 describe Multi-factor Authentication

 describe Windows Hello for Business

Describe access management capabilities of Azure AD

 describe what conditional access is

 describe uses and benefits of conditional access

 describe the benefits of Azure AD roles

Describe the identity protection & governance capabilities of Azure AD

 describe what identity governance is

 describe what entitlement management and access reviews are

 describe the capabilities of PIM

 describe Azure AD Identity Protection

Describe the capabilities of Microsoft Security Solutions (35-40%)

Describe basic security capabilities in Azure

 describe Azure Network Security groups

 describe Azure DDoS protection

 describe what Azure Firewall is

 describe what Azure Bastion is

 describe what Web Application Firewall is

 describe ways Azure encrypts data

Describe security management capabilities of Azure

 describe the Azure Security Center

 describe Azure Secure Score

 describe the benefit and use cases of Azure Defender - previously the cloud workload protection platform (CWPP)

 describe Cloud security posture management (CSPM)

 describe security baselines for Azure

Describe security capabilities of Azure Sentinel

 define the concepts of SIEM, SOAR, XDR

 describe the role and value of Azure Sentinel to provide integrated threat protection

Describe threat protection with Microsoft 365 Defender

 describe Microsoft 365 Defender services

 describe Microsoft Defender for Identity (formerly Azure ATP)

 describe Microsoft Defender for Office 365 (formerly Office 365 ATP)

 describe Microsoft Defender for Endpoint (formerly Microsoft Defender ATP)

 describe Microsoft Cloud App Security

Describe security management capabilities of Microsoft 365

 describe the Microsoft 365 Defender portal

 describe how to use Microsoft Secure Score

 describe security reports and dashboards

 describe incidents and incident management capabilities

Describe endpoint security with Microsoft Intune

 describe what Intune is

 describe endpoint security with Intune

 describe the endpoint security with the Microsoft Endpoint Manager admin center

Describe the Capabilities of Microsoft Compliance Solutions (25-30%)

Describe the compliance management capabilities in Microsoft

 describe the offerings of the Service Trust portal

 describe Microsoft’s privacy principles

 describe the compliance center

 describe compliance manager

 describe use and benefits of compliance score

Describe information protection and governance capabilities of Microsoft 365

 describe data classification capabilities

 describe the value of content and activity explorer

 describe sensitivity labels

 describe Retention Policies and Retention Labels

 describe Records Management

 describe Data Loss Prevention

Describe insider risk capabilities in Microsoft 365

 describe Insider risk management solution

 describe communication compliance

 describe information barriers

 describe privileged access management

 describe customer lockbox

Describe the eDiscovery and audit capabilities of Microsoft 365

 describe the purpose of eDiscovery

 describe the capabilities of the content search tool

 describe the core eDiscovery workflow

 describe the advanced eDiscovery workflow

 describe the core audit capabilities of M365

 describe purpose and value of Advanced Auditing

Describe resource governance capabilities in Azure

 describe the use of Azure Resource locks

 describe what Azure Blueprints is

 define Azure Policy and describe its use cases

Microsoft Security, Compliance, and Identity Fundamentals
Fundamentals of Soil Cation Exchange Capacity (CEC)


AY-238

Soils (Fertility)

Purdue University
Cooperative Extension Service
West Lafayette, IN 47907






David B. Mengel, Department of Agronomy, Purdue University


Soils can be thought of as storehouses for plant nutrients. Many nutrients, such as calcium and magnesium, may be supplied to plants solely from reserves held in the soil. Others like potassium are added regularly to soils as fertilizer for the purpose of being withdrawn as needed by crops. The relative ability of soils to store one particular group of nutrients, the cations, is referred to as cation exchange capacity or CEC.

Soils are composed of a mixture of sand, silt, clay and organic matter. Both the clay and organic matter particles have a net negative charge. Thus, these negatively-charged soil particles will attract and hold positively-charged particles, much like the opposite poles of a magnet attract each other. By the same token, they will repel other negatively-charged particles, as like poles of a magnet repel each other.

Forms of Nutrient Elements in Soils

Elements having an electrical charge are called ions. Positively-charged ions are cations; negatively-charged ones are anions.

The most common soil cations (including their chemical symbol and charge) are: calcium (Ca++), magnesium (Mg++), potassium (K+), ammonium (NH4+), hydrogen (H+) and sodium (Na+). Notice that some cations have more than one positive charge.

Common soil anions (with their symbol and charge) include: chlorine (Cl-), nitrate (NO3-), sulfate (SO4=) and phosphate (PO43-). Note also that anions can have more than one negative charge and may be combinations of elements with oxygen.

Defining Cation Exchange Capacity

Cations held on the clay and organic matter particles in soils can be replaced by other cations; thus, they are exchangeable. For instance, potassium can be replaced by cations such as calcium or hydrogen, and vice versa.

The total number of cations a soil can hold--or its total negative charge--is the soil's cation exchange capacity. The higher the CEC, the higher the negative charge and the more cations that can be held.

CEC is measured in milliequivalents per 100 grams of soil (meq/100g). A meq is the number of ions which total a specific quantity of electrical charges. In the case of potassium (K+), for example, a meq of K ions is approximately 6 x 10^20 positive charges. With calcium, on the other hand, a meq of Ca++ is also 6 x 10^20 positive charges, but only 3 x 10^20 ions because each Ca ion has two positive charges.

Following are the common soil nutrient cations and the amounts in pounds per acre that equal 1 meq/100g:


  Calcium (Ca++)    -  400 lb./acre
  Magnesium (Mg++)  -  240 lb./acre
  Potassium (K+)    -  780 lb./acre
  Ammonium (NH4+)   -  360 lb./acre
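The arithmetic behind these figures can be checked directly. As a hedged illustration (the article itself does not spell this out), the values assume the conventional estimate of roughly 2,000,000 pounds of soil in an acre furrow slice, so 1 ppm corresponds to about 2 lb/acre. The short Python sketch below reproduces the rounded figures listed above; the atomic weights and the furrow-slice assumption are standard reference numbers, not data taken from this publication.

  # Illustrative Python sketch (not part of the original publication).
  # Converts 1 meq/100 g of a cation to lb/acre, assuming ~2,000,000 lb
  # of soil in an acre furrow slice (a standard, but assumed, figure).

  ATOMIC_WEIGHT = {"Ca": 40.08, "Mg": 24.31, "K": 39.10, "NH4": 18.04}
  CHARGE = {"Ca": 2, "Mg": 2, "K": 1, "NH4": 1}

  def lb_per_acre_for_one_meq(cation):
      # Equivalent weight (mg per meq) = atomic or formula weight / charge.
      equivalent_weight = ATOMIC_WEIGHT[cation] / CHARGE[cation]
      ppm = equivalent_weight * 10   # 1 meq/100 g = equiv. weight in mg/100 g = 10x that in ppm
      return ppm * 2                 # 1 ppm is about 2 lb/acre for a 2,000,000 lb furrow slice

  for ion in ("Ca", "Mg", "K", "NH4"):
      print(ion, round(lb_per_acre_for_one_meq(ion)))
  # Prints roughly 401, 243, 782, and 361 lb/acre, matching the rounded
  # 400, 240, 780, and 360 lb/acre figures listed above.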

Measuring Cation Exchange Capacity

Since a soil's CEC comes from the clay and organic matter present, it can be estimated from soil texture and color. Table 1 lists some soil groups based on color and texture, representative soil series in each group, and common CEC values measured on these soils.

Table 1. Normal Range of CEC Values for Common Color/Texture Soil Groups.


  Soil group                Examples        CEC in meq/100g
-----------------------------------------------
Light colored sands       Plainfield         3-5
                          Bloomfield

Dark colored sands        Maumee           10-20
                          Gilford

Light colored loams and   Clermont-Miami   10-20
 silt loams               Miami

Dark colored loams and    Sidell           15-25
 silt loams               Gennesee

Dark colored silty clay   Pewamo           30-40
 loams and silty clays    Hoytville

Organic soils             Carlisle muck   50-100
-------------------------------------------------

Cation exchange capacity is usually measured in soil testing labs by one of two methods. The direct method is to replace the normal mixture of cations on the exchange sites with a single cation such as ammonium (NH4+), to replace that exchangeable NH4+ with another cation, and then to measure the amount of NH4+ exchanged (which was how much the soil had held).

More commonly, soil testing labs estimate CEC by summing the calcium, magnesium and potassium measured in the soil testing procedure with an estimate of exchangeable hydrogen obtained from the buffer pH. Generally, CEC values arrived at by this summation method will be slightly lower than those obtained by direct measures.
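For readers who want to see the summation estimate as a calculation, the minimal sketch below adds the soil-test bases to an exchangeable-hydrogen estimate. The input numbers and the simple buffer-pH factor are hypothetical placeholders; actual labs use their own calibrations, so treat this only as an illustration of the method just described.

  # Minimal sketch of the summation estimate of CEC described above.
  # All inputs are hypothetical soil-test values in meq/100 g; the buffer-pH
  # factor below is a placeholder assumption, not a published lab calibration.

  def estimate_cec(ca_meq, mg_meq, k_meq, buffer_ph):
      exchangeable_h = max(0.0, 8.0 * (7.0 - buffer_ph))  # assumed calibration
      return ca_meq + mg_meq + k_meq + exchangeable_h

  # Example: 12.0 Ca + 3.0 Mg + 0.5 K meq/100 g with a buffer pH of 6.8
  # gives an estimated CEC of about 17.1 meq/100 g.
  print(estimate_cec(12.0, 3.0, 0.5, 6.8))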

Buffer Capacity and Percent Base Saturation

Cations on the soil's exchange sites serve as a source of resupply for those in soil water which were removed by plant roots or lost through leaching. The higher the CEC, the more cations which can be supplied. This is called the soil's buffer capacity.

Cations can be classified as either acidic (acid- forming) or basic. The common acidic cations are hydrogen and aluminum; common basic ones are calcium, magnesium, potassium and sodium. The proportion of acids and bases on the CEC is called the percent base saturation and can be calculated as follows:


                   Total meq of bases on exchange sites
  Pct. base        (i.e., meq Ca++ + meq Mg++ + meq K+)
  saturation  =    ------------------------------------  x 100
                        Cation exchange capacity

The concept of base saturation is important, because the relative proportion of acids and bases on the exchange sites determines a soil's pH. As the number of Ca++ and Mg++ ions decreases and the number of H+ and Al+++ ions increases, the pH drops. Adding limestone replaces acidic hydrogen and aluminum cations with basic calcium and magnesium cations, which increases the base saturation and raises the pH.
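As a worked illustration of the formula above (using made-up soil-test values, not data from this publication), percent base saturation can be computed directly from the base cations and the CEC:

  # Illustrative calculation of percent base saturation; inputs are hypothetical.
  def percent_base_saturation(ca_meq, mg_meq, k_meq, cec_meq):
      return (ca_meq + mg_meq + k_meq) / cec_meq * 100

  # A soil with a CEC of 20 meq/100 g holding 10 Ca, 3 Mg, and 1 K meq/100 g
  # is 70% base saturated; the remaining 30% of the exchange sites hold
  # acidic cations (H+ and Al+++).
  print(percent_base_saturation(10.0, 3.0, 1.0, 20.0))  # 70.0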

In the case of Midwestern soils, the actual mix of cations found on the exchange sites can vary markedly. On most, however, Ca++ and Mg++ are the dominant basic cations and are in greater concentrations than K+. Normally, very little sodium is found in Midwestern soils.

Relationship Between CEC and Fertilization Practices

Recommended liming and fertilization practices will vary for soils with widely differing cation exchange capacities. For instance, soils having a high CEC and high buffer capacity change pH much more slowly under normal management than low-CEC soils. Therefore, high-CEC soils generally do not need to be limed as frequently as low-CEC soils; but when they do become acid and require liming, higher lime rates are needed to reach optimum pH.

CEC can also influence when and how often nitrogen and potassium fertilizers can be applied. On low-CEC soils (less than 5 meq/100g), for example, some leaching of cations can occur. Fall applications of ammonium N and potassium on these soils could result in some leaching below the root zone, particularly in the case of sandy soils with low-CEC subsoils. Thus, spring fertilizer application may mean improved production efficiency. Also, multi-year potash applications are not recommended on low-CEC soils.

Higher-CEC soils (greater than 10 meq/100g), on the other hand, experience little cation leaching, thus making fall application of N and K a realistic alternative. Applying potassium for two crops can also be done effectively on these soils. Thus, other factors such as drainage will have a greater effect on the fertility management practices used on high-CEC soils.

Summary

The cation exchange capacity of a soil determines the number of positively-charged ions (cations) that the soil can hold. This, in turn, can have a significant effect on the fertility management of the soil.


RR3/93

Cooperative Extension work in Agriculture and Home Economics, State of Indiana, Purdue University and U.S. Department of Agriculture cooperating: H.A. Wadsworth, Director, West Lafayette, IN. Issued in furtherance of the acts of May 8 and June 30, 1914. The Cooperative Extension Service of Purdue University is an equal opportunity/equal access institution.

Source: https://www.extension.purdue.edu/extmedia/AY/AY-238.html
Best Free AI Training Courses for 2023

With businesses finding new, inventive ways to use AI almost every day, it's no surprise that AI training courses are becoming increasingly sought after.

Workers in all sorts of industries are looking to upskill themselves in line with the rapid technological changes occurring. Luckily, companies like Microsoft and Google offer free AI training courses, as do some higher education institutions.

In this guide, we cover the best AI training courses currently available, as well as the benefits of learning about AI in the current job market. We've largely focused on free courses that offer immediate, foundational learning opportunities that you can start applying to your job role or career straight away, rather than paid degree courses that cost hundreds or thousands of dollars.

1. Google’s Generative AI Learning Path (10 Courses)

One of the more generous courses available in terms of actual hours of learning, Google's Generative AI Learning Path has 10 courses on it. All courses take one day to complete.

Seven of the courses are classified as introductory, including “Introduction to Generative AI”, “Introduction to Large Language Models” and “Generative AI Fundamentals”. There are three courses within the learning path described as intermediate, including “Encoder-Decoder Architecture” and “Attention Mechanism: Overview”.


The first two introductory courses cover a lot of immediately applicable content, such as how to use prompt tuning to get the best out of large language models. There's also a course on the responsible usage of AI.

Although there's no official qualification, you will be awarded a completion badge that you can attach to your digital resume.

2. Microsoft's “Transform Your Business With AI” course

This Microsoft learning path is designed, as the tech giant says, to help businesspeople acquire “the knowledge and resources to adopt AI in their organizations”, and explores “planning, strategizing, and scaling AI projects in a responsible way.”

Microsoft says the objectives for this course are to become familiar with existing AI tools, understand basic AI terminology and practices, and use prebuilt AI to build intelligent applications.

To enroll in this course, which is 2 hours and 40 minutes long, Microsoft says you’ll need a “basic understanding” of IT and business concepts. Modules included in the pathway are:

  • Leverage AI tools and resources for your business (55 mins)
  • Create business value from AI (21 min)
  • Embrace responsible AI principles and practices (48 mins)
  • Scale AI in your organization (36 mins)

3. LinkedIn’s “Career Essentials in Generative AI” training course

LinkedIn’s AI Career Essentials course is made up of five different videos, with a total run time of around four hours. Each video is hosted by a different AI expert, covering a range of core concepts and ethical considerations relating to AI models.

One of the videos provides a detailed explanation of how to streamline your work with Microsoft Bing Chat, while another discusses the key differences between search engines and reasoning engines. Although there’s no accreditation or certification, completing this course will earn you a badge of completion from LinkedIn, which can be displayed on your profile.

The fifth video in the series, entitled “An Introduction to Artificial Intelligence”, is an hour and a half long and provides a simplified overview of the best AI tools for businesses, which is handy for those who haven’t yet taken the plunge and implemented a tool at work.

4. IBM’s “AI Foundations for Everyone” training course

IBM offers a course entitled “AI Foundations for Everyone” through Coursera, in which over 19,000 people have already enrolled. You can audit the course for free, which will give you access to all of the materials and some of the assignments, but you won't be graded or get a certificate at the end.

It’s geared toward beginners and you don’t need prior experience to enroll, and the schedule is flexible so you can learn at your own pace. Along with AI fundamentals, the course will also ensure you’re familiar with IBM’s own AI services, which help businesses integrate artificial intelligence into their existing infrastructure.

IBM says that, by the end of the course, participants will have had “hands-on interactions with several AI environments and applications”.

The course has three modules: “Introduction to Artificial Intelligence”, “Getting Started with AI using IBM Watson”, and “Building AI-Powered Chatbots Without Programming”. Each module takes between nine and eleven hours to complete.

5. Digital Partner's “The Fundamentals of ChatGPT” training course

Digital Partner’s course entitled “The fundamentals of ChatGPT” is a great option for anyone who wants to take a free, accredited course that covers the basics of Generative AI.

During the course, you’ll spend time learning about OpenAI’s role in global AI development, and be able to learn about how ChatGPT works, its advantages and limitations. There’s also a variety of examples included within the course that will show you how to leverage ChatGPT for different tasks, and you'll learn more about the difference between ChatGPT and ChatGPT Plus.

Modules include “Working With ChatGPT”, “ChatGPT and Its Shortcomings” and “Training a GPT Model”.

This free course is available on Alison.com, and is published by a digital marketing firm called Digital Partner. The course is CPD accredited and a certificate will be awarded upon completion of a small assessment at the end of the 1.5-3 hour program.

6. Phil Ebner’s “ChatGPT, Midjourney, Firefly, Bard, DALL-E” AI crash course

While there are some good courses on Udemy that will guide you through the ins and outs of MidJourney and other AI generation tools, this instructor covers the most ground, and almost 12,000 students have already enrolled in the course, which has a 4.6/5 rating on Udemy.

The course is almost two hours long and also includes content that will help you better use tools like ChatGPT to generate text responses as well as images.

The “AI for Visual Creativity” section, however, will show you how to use both MidJourney and Dall-E to create “photorealistic images, illustrations, and digital art in a variety of styles.”

On Udemy, you don’t receive certificates of completion for free courses, but if you're just looking to upskill yourself free of charge, this course is definitely worth a look.

Best Free AI Training courses for Programmers, Developers & Tech Experts

Up next, we have more advanced courses geared towards programming and development.

  1. Harvard University’s “Introduction to Artificial Intelligence with Python”
  2. DeepLearning.AI’s “ChatGPT Prompt Engineering For Developers” (Coursera)
  3. Intro to TensorFlow for Machine Learning (Udacity)
  4. Georgia Tech’s Reinforcement Learning (Udacity)
  5. Become an AI-Powered Engineer: ChatGPT, GitHub Copilot (Udemy)
  6. Great Learning’s “ChatGPT for Beginners” training course

1. Harvard University’s “Introduction to Artificial Intelligence with Python”

Harvard University offers a self-paced, 7-week course on the “concepts and algorithms at the foundation of modern artificial intelligence”.

The time commitment is between 10 and 30 hours a week – but it’s completely free to enroll, and you’ll be supported as you complete projects and attend lectures. However, you need to have taken Harvard’s “Introduction to Computer Science” course before you can enroll.

2. DeepLearning.AI’s “ChatGPT Prompt Engineering For Developers” (Coursera)

This course will help you utilize OpenAI’s API to write more effective prompts, learn how large language models can be used to carry out tasks like text transformation and summarizing, and teach you how to program and build a custom AI chatbot.

The course is run by AI expert and DeepLearning.AI co-founder Andrew Ng and OpenAI’s Isa Fulford, and it’s only an hour long. DeepLearning.AI says the course is “free for a limited time”. A basic understanding of Python is needed, but aside from that, it’s beginner friendly.

3. Intro to TensorFlow for Machine Learning (Udacity)

This course teaches participants how to build deep-learning applications with TensorFlow, one of the most popular open-source Python software libraries.

The estimated completion time for the course is approximately two months, and you should have some experience with Python syntax, including variables, functions, and classes, as well as a grasp of basic algebra.

If you take the course, providers Udacity say, you’ll get “hands-on experience building your own state-of-the-art image classifiers” as well as other types of deep learning models.

4. Georgia Tech’s Reinforcement Learning (Udacity)

This Georgia Tech course is free on Udacity and focuses on exploring “automated decision-making from a computer-science perspective”.

At the end of the course – which takes approximately four months to complete, but is also described as self-paced – participants will recreate a result from a published paper on reinforcement learning.

However, it is recommended you have a graduate-level machine-learning qualification and some prior experience with reinforcement learning from previous studies. Experience with Java is also required.

Although there’s no official certificate awarded for completing the course, you can earn a Nanodegree program certificate by completing Udacity's four-month “Deep Reinforcement Learning” program, although this costs $1,116.

5. Become an AI-Powered Engineer: ChatGPT, Github Copilot (Udemy)

In this course, students will learn how to create high-quality pieces of code using ChatGPT and integrate it with other text editors. It also covers how to use GitHub Copilot.

This might be a free tutorial, but the course has much better reviews than some of the other AI courses available on Udemy, with 56% of watchers who left a review giving the course five stars, and a further 24% giving it four stars at the time of writing.

The course will be best suited to developers who want to leverage AI tools for coding responsibilities in general, and also, to become more efficient in their coding practices.

6. GreatLearning’s “ChatGPT for Beginners” training course

This is a completely free, two-hour long beginners-focused ChatGPT course. It’s one of the only beginner's courses on the internet that includes a section on coding prompts, although it also covers quite a bit of other ground, including email prompting.

There are no prerequisites needed for this course, and it has an average rating of 4.61/5, with 75% of reviewers giving the course 5 stars.

Free College AI Courses and Training

A number of universities and colleges offer AI-focused courses.

  1. Stanford University’s “Machine Learning” Course (Coursera)
  2. Vanderbilt University’s “Prompt Engineering for ChatGPT”
  3. Georgia Tech’s “Machine Learning” Course (Udacity)
  4. The Open University’s “AI Matters” Course (OpenLearn)
  5. University of Pennsylvania’s “AI For Business” (Coursera)
  6. University of Helsinki's “Elements of AI” and “Ethics of AI” Course

1. Stanford University’s “Introduction to Artificial Intelligence” course (Udacity)

This foundational online program, which takes around 10 months to complete at a rate of 10 hours a week, focuses on fundamental AI concepts and practical machine learning skills but is classified as an intermediate course.

The course, which is split into two umbrella sections (“Fundamentals of AI” and “Applications of AI) is completely free if you sign up for Udacity (which also doesn't cost anything).  It consists of 22 different lessons and a string of interactive quizzes.

2. Vanderbilt University’s “Prompt Engineering for ChatGPT”

Jules White, Vanderbilt University’s associate dean for strategic learning programs and associate professor of computer science, has launched a free online course available through Coursera focusing on prompt engineering.

It goes through the most effective approaches for prompt engineering, covering summarization, simulation, programming, and other useful ways you can harness the power of ChatGPT with your inputs.

The course takes around 18 hours to complete and is made up of an introduction to prompts and three separate sessions on prompt patterns, as well as a 2-hour module on examples.

3. Georgia Tech’s “Machine Learning” course (Udacity)

In collaboration with Georgia Tech, Udacity has made an intermediate machine-learning course available for free, which takes around 4 months to complete, although the course listing says you can do it at your own pace.

The course is offered as part of an online master’s degree at Georgia Tech, but taking this course won’t earn you credit toward that degree.

The course covers Supervised and Unsupervised Learning, which are two different types of machine learning, and covers how they're used in AI systems.

However, having a “strong familiarity with Probability Theory, Linear Algebra, and Statistics” and prior experience with statistics is helpful. Students should also have some experience with programming.

4. The Open University’s “AI Matters” course (OpenLearn)

The Open University is a UK-based institution that offers a free course through its learning portal OpenLearn entitled “AI Matters”.

In the course, you'll learn about the “historical, social, political and economic issues in AI”, explore the benefits and limitations of the technology, and discuss ethical risks relating to AI.

The course is six hours long, and you'll be eligible for a certificate of participation.

5. University of Pennsylvania’s “AI For Business” (Coursera)

The University of Pennsylvania's “AI for Business” specialization is made up of four different, free courses:

  • AI Applications in People Management
  • AI Fundamentals for Non-Data Scientists
  • AI Applications in Marketing and Finance
  • AI Strategy and Finance

According to the University of Pennsylvania's website, although the course itself costs $39 to complete, you can enroll in the four individual modules that make up the course for free. Each module will take around two hours a week to complete.

6. University of Helsinki's “Elements of AI” and “Ethics of AI”

The University of Helsinki has two, free online courses available. The course entitled “Ethics of AI” is geared towards “anyone who is interested in the ethical aspects of AI”, the university says.

The course will familiarize you with common questions that arise in AI ethics and the various ways to approach them.

“Elements of AI” is a broader course with six chapters, focusing on topics such as “neural networks”, “machine learning” and “AI problem solving”. All you need to do to access the course materials is sign up.

The Benefits of Learning About AI

Of course, completing an AI training course can have a number of benefits. From a personal learning perspective, it's one of the best ways you can spend your time – AI is here to stay, and getting a better grasp of how it works might just help you out in the near future.

Plus, the things you learn about AI will be applicable to a wide variety of job roles in almost every sector of the economy, so it's arguably a safer bet than completing a course on a niche or industry-specific topic.

What's more, right now, businesses are looking for people who understand how generative AI tools like ChatGPT work, and how to leverage them effectively. Employees that are conscious of the limitations of AI tools and able to generate useful responses using prompts are going to become more sought after than employees without these skills.

Completing an AI training course is going to look good on your CV, which will help if you're applying for a new job. Evidence that you've taken the initiative to explore an emerging technology is definitely something an employer will find desirable.

Of course, if you don't have much of a budget – or you're not entirely sure what AI training course would be the best use of your time – then trying out some free options is a great place to start.

Does Google offer free AI training courses?

Yes – Google has a free AI learning path made up of ten courses. Each course takes around one day to complete. The modules provide an introduction to generative AI, and although there's no official qualification, you will get a completion badge which you can add to your resume.

Can you learn AI on your own?

Yes – you can learn how to harness the power of AI on your own through online guides, tutorials, and courses. There are quite a lot of resources out there now that will show you how to get the most out of generative AI tools like ChatGPT. However, if you want to learn more complex skills, such as programming a large language model, you may need to seek out a paid course.

Does Microsoft offer free AI training?

Yes – Microsoft has a free training path available that consists of four short modules, which cover how to leverage AI effectively in business settings and how to scale AI projects in a responsible, impactful way.
Source: https://tech.co/news/best-free-ai-training-courses

A Human Rights Impact Assessment of Microsoft's Enterprise Cloud and AI Technologies Licensed to U.S. Law Enforcement Agencies

June 2023

About the Assessors

Members of the Global Business & Human Rights (“GBHR”) Practice of the law firm Foley Hoag LLP conducted this Human Rights Impact Assessment (“HRIA”). Launched in 2000, the GBHR Practice completed the world’s first HRIA, assisted Professor John Ruggie in drafting the U.N. Guiding Principles on Business and Human Rights, and continues to provide counsel regarding human rights challenges and leadership for multiple industries across six continents. The Practice conducts human rights monitoring and risk assessments for the banking, extractive, information and communication technology, manufacturing, private equity, and retail sectors to integrate respect for internationally recognized rights into management practices and supply chains. Practice group members are actively engaged in multi-stakeholder initiatives and have served as Assessors for the Global Network Initiative, as the Secretariat of the Voluntary Principles on Security and Human Rights, and as Legal Counsel to the Nuclear Power Plant & Exporters Principles of Conduct. Team members for this Assessment included Practice Chair, Gare A. Smith; Privacy and Data Security Co-Chair, Christopher Hart; Senior Advisor, Isa Mirza; Associate, Rumbidzai Maweni; and consultant Akshay Walia.

Table of Contents

I. Executive Summary
  A. Overview
  B. Key Findings
  C. Priority Recommendations
II. Introduction
III. Background
IV. Methodology
  A. Scope
  B. Process
  C. Issues Excluded from the HRIA
    1. Business Relationships with the Military
    2. Specific Contracts
V. Applicable International Human Rights Frameworks
  A. The United Nations Guiding Principles on Business and Human Rights
  B. The Universal Declaration of Human Rights
  C. The International Convention on the Elimination of All Forms of Racial Discrimination
  D. Applying International Human Rights Frameworks in this HRIA
VI. Microsoft’s Human Rights Approach & Policy Commitments
  A. Policy Commitments Relating to Human Rights
    1. Global Human Rights Statement
    2. Trust Code
    3. Responsible AI Standard
    4. Policies & Initiatives for Addressing Racial Discrimination
    5. Human Rights in Microsoft’s Terms of Service
  B. Human Rights Risk Management & Oversight
  C. Stakeholder Reactions to Microsoft’s Human Rights Policies and Practices
VII. Salient Adverse Human Rights Impacts
VIII. Assessment of Microsoft’s Relationship to Adverse Human Rights Impacts
  A. Azure
    1. Overview
    2. Products marketed for government use
    3. Specific Use Cases
    4. Causation Analysis of Azure Products
  B. Third Party Technologies
    1. The connection between third party technologies and Microsoft products
    2. Examples
    3. Causation Analysis of Third Party Apps
  C. Law Enforcement Digital Systems
    1. New York Domain Awareness System
    2. Other Law Enforcement Digital Systems
    3. Causation Analysis of DAS and Other Law Enforcement Digital Systems
IX. Recommendations

I. Executive Summary

A. Overview

Microsoft is one of the world’s leading technology companies. Its products and technologies are in offices, classrooms, and homes. Additionally, governments use them to help conduct vital public services.
Microsoft is also a human rights leader among technology companies. It maintains model human rights policies and processes, engages in human rights due diligence, seeks to mitigate potential human rights harms, and offers a strong voice in pushing human rights-oriented policymaking and technological development.

Microsoft commissioned this Human Rights Impact Assessment (“HRIA” or “Assessment”), which is consistent with its human rights policies and processes, because a group of shareholders expressed concern that some of the products and technologies Microsoft provided to certain government agencies were used to commit human rights abuses, particularly against individuals who identify as Black, Indigenous, and People of Color (“BIPOC”). Those shareholders pointed to Microsoft’s provision of technologies to such government agencies as inconsistent with Microsoft’s policies. Microsoft agreed that diligence, in the form of this HRIA, was in order, and retained human rights experts (the “Assessors”) at Foley Hoag LLP (“Foley Hoag”) to conduct an independent assessment.

Foley Hoag focused this HRIA on Microsoft’s licensing of cloud services and artificial intelligence (“AI”) technologies to U.S. federal and state law enforcement agencies, and U.S. immigration authorities. The Assessors sought to determine (1) to what extent, if any, Microsoft is responsible for adverse human rights impacts stemming from the misuse of its products by these agencies, particularly with respect to BIPOC communities, and (2) what, if anything, Microsoft should do to mitigate or remediate those impacts.

The Assessors relied on the United Nations Guiding Principles on Business and Human Rights (“UNGPs”) to conduct their analysis. They reviewed Microsoft’s policies and internal documents, and publicly available information. They also interviewed members of the socially responsible investment and human rights communities, individuals within government agencies, and Microsoft personnel to understand Microsoft’s role and develop reasonable and effective recommendations. This report reflects their assessment.

B. Key Findings

Foley Hoag made the following key findings.

1. In no instances did the Assessors find evidence that Microsoft causes human rights harms.

2. In most cases, Microsoft is an upstream provider of platforms that exist independently from the software and applications that developers create and use on those platforms. That independent relationship attenuates any responsibility Microsoft might have for downstream adverse human rights impacts, such as discriminatory policing, surveillance, and incarceration. The precise degree of Microsoft’s responsibility, however, is unclear under the UNGPs.

3. In a few cases, Microsoft is actively involved in developing products, such as for the New York Police Department, through the company’s consulting services. In those cases, the relationship between Microsoft and any downstream adverse impact is more concrete, and Microsoft has at least the responsibility to mitigate impacts.

4. Although Microsoft’s human rights policies and practices are robust, civil society groups, particularly representatives of the human rights community, perceive a lack of transparency with respect to product design and deployment. This hinders Microsoft’s ability to speak with authority on human rights issues.

C. Priority Recommendations

Foley Hoag offers a number of recommendations to Microsoft. Among the most important are the following.
1. Notwithstanding interpretations of law or the UNGPs, as a best practice Microsoft should contribute to efforts to address downstream adverse human rights impacts even when it is solely providing platforms, and seek to mitigate potential adverse impacts in such circumstances. The Assessors note that Microsoft has already proactively been taking such actions through its internal human rights policies and practices and its due diligence efforts.

2. As a best practice, Microsoft should consider assuming some responsibility to remediate actual harms when it has provided consulting services to help develop cloud and AI products for domestic law and immigration enforcement agencies.

3. Microsoft should increase transparency, in particular by proactively seeking input from civil society with respect to its product design, deployment, and impact.

4. Microsoft should find ways to strengthen the work and reach of its internal human rights team throughout the company.

5. Microsoft should emphasize its expectation through its bespoke contracts, terms of service, and other related documents that its counterparties, customers, and partners will respect human rights when using Microsoft products.

6. Microsoft should continue to expand and explore additional ways to use its technology, consulting services, public advocacy, and lobbying to mitigate the potential for downstream abusive conduct by law and immigration enforcement personnel.

II. Introduction

Members of the Global Business and Human Rights Practice at the law firm Foley Hoag conducted this HRIA between March 2022 and April 2023. The Assessment considers how the enterprise cloud services and AI technologies that Microsoft licenses to local and federal U.S. law and immigration enforcement agencies impact the human rights of vulnerable communities in the United States, both beneficially and detrimentally, and how Microsoft might mitigate adverse impacts stemming from the use of those technologies. Microsoft commissioned the HRIA to (1) better understand how rights-holders are impacted by the use of its products in the context of its public-facing human rights commitments, and (2) secure guidance on ways to mitigate any harmful impacts.

The Assessment was commissioned in response to a shareholder resolution that was to be considered at Microsoft’s November 2021 Annual Shareholders’ Meeting. The resolution requested that Microsoft retain an independent expert to assess how Microsoft’s products might be responsible for salient adverse human rights impacts to individuals who identify as BIPOC. The shareholders asked that the independent assessment include mitigating steps Microsoft could take to address such harms.1 The shareholders withdrew the resolution after Microsoft committed to this Assessment.2

Although Microsoft funded the Assessment, Foley Hoag retained independence with respect to research, consultations with stakeholders, and the Assessment’s content. The Assessment provides recommendations to assist Microsoft in mitigating adverse human rights impacts that may be connected to its enterprise cloud services and AI technologies.

III. Background

Microsoft commissioned this HRIA in a political and cultural environment in which individuals and institutions are increasingly aware of the ubiquity of racial discrimination and other serious violations of rights by U.S. law enforcement.
After receiving the proposed shareholder resolution, Microsoft sought to understand what responsibility it might have for any adverse human rights impacts relating to domestic law enforcement and immigration activities, and how it might best mitigate or remediate such harms. As Microsoft announced in October 2021:

  In advance of the Microsoft Annual Shareholder Meeting on November 30, we received a request to explore how Microsoft products licensed to public sector entities are experienced by third parties, especially Black, Indigenous and People of Color (BIPOC) and other vulnerable communities. We agree this is a question that warrants greater attention and are contracting an independent third-party to help us identify, understand, assess, and address actual or potential human rights impacts of our products and services. In conducting investigations like this, we are guided by the UN Guiding Principles on Business and Human Rights (UNGPs). In particular, UNGP Principle 18 notes the value of drawing on external human rights expertise and the importance of meaningful consultation with affected groups and other relevant stakeholders. That will be our approach to this work: We will task the independent third party to engage an expansive audience, with particular focus on […] BIPOC and other vulnerable communities.3

The Assessors recognize the vital role that technology plays in all aspects of daily life—and thus the vital role that technology providers such as Microsoft have in delivering those technologies and innovating further. Specifically, Microsoft technologies allow for documents such as this to be written, edited, stored, and shared. They allow businesses to maintain complex data management and communications systems. They are used by institutions as diverse as critical infrastructure providers, hospitals, sports teams, small businesses, government agencies, schools, and manufacturing plants. It is easy to take for granted how much the day-to-day work of contemporary life is made possible, and sometimes even seems seamless, because of the role played by complex digital technology.

Central to this HRIA, it is also essential to recognize the important and legitimate role that domestic law enforcement and immigration authorities play in achieving U.S. objectives for public safety and national security. Microsoft’s licensing of digital technologies to relevant agencies can be instrumental in ensuring that these goals are pursued in a manner that is responsible, equitable, and mindful of human rights challenges facing marginalized communities.

1 “Microsoft Human Rights Policy Implementation Proposal,” Investor Advocates for Social Justice (Lead Filer), June 2022.
2 Ibid.
3 “Taking on Human Rights Due Diligence,” Microsoft on the Issues Blog, October 2020.
Many of the most essential law enforcement activities, however, have led to serious violations of the human rights of BIPOC individuals. Such violations include disproportionate policing, stopping, and detainment of predominately BIPOC individuals (often referred to as “racial profiling”), and subsequent violence to which BIPOC individuals are more subject as a corollary of these activities.4 Unarmed Black and Latino men are the group most likely to fall victim to serious police brutality in the United States, including lethal shootings with dubious legal justifications. There is also evidence that police officers may collude to falsify reports and tamper with evidence to avoid heightened public scrutiny when a Black or Latino suspect is injured or killed by police.5

Similarly, serious harms are exacted on BIPOC immigrants through immigration enforcement agencies. These harms are systemic: they exist in society writ large and are present across a number of public and private institutions. BIPOC groups – primarily those entering the United States from the Middle East, North Africa, Central and South America, and the Caribbean – are the most likely to be the target of discriminatory surveillance, arrest, and incarceration by immigration authorities.6 This includes enforcement actions that lead to the detention and/or deportation of BIPOC immigrants without due process, separation and detention of immigrant children from their families, aggressive home raids on suspected illegal immigrants, and disproportionate use of force or otherwise abusive enforcement practices in securing the U.S.-Mexico border.

Advanced technologies, such as AI, present the danger of not only enabling abuses, but of exacerbating them. As Microsoft’s Vice Chair and President Brad Smith observed,

  There are many governmental uses of facial-recognition technology that protect public safety and promote better services for the public without raising [multiple concerns]. But when combined with ubiquitous cameras and massive computing power and storage in the cloud, facial-recognition technology could be used by a government to enable continuous surveillance of specific individuals. It could do this at any time or even all the time. The use of such technology in this way could unleash mass surveillance on an unprecedented scale.7

In response to these well-known dangers, Microsoft has made numerous concerted, good faith efforts to prevent its technologies from being misused in a way that harms rights-holders. Those efforts include banning the licensing of facial recognition technologies to U.S. police and instituting a robust internal “Responsible AI” program. Microsoft has also used its formidable market presence and reputation to influence policymaking and discussions through its public statements on discriminatory policing and surveillance, and the company’s promotion of responsible, inclusive technologies.

This HRIA finds that more can be done. It is not intended, however, to solve the problems of systemic racism in policing and law enforcement. Systemic harms by definition are not caused by any single actor, nor can they be resolved through the actions of a single actor. Rather, the HRIA takes a focused look at specific technologies created and provided by Microsoft that could be, and likely have already been, abused by law enforcement to carry out and exacerbate the adverse human rights impacts identified here. By focusing on these specific technologies, the Assessors intend to identify the most salient ways in which Microsoft’s products might be related to such harms, and recommend ways that Microsoft can mitigate those harms.

4 “Police Misconduct, Such as Falsifying Evidence, is a Leading Cause of Wrongful Convictions, Study Finds,” USA Today, September 15, 2020; “Government Misconduct and Convicting the Innocent,” National Registry of Exonerations, September 2020; “Baltimore Police Officer Indicted for Tampering with Evidence,” CNN, January 25, 2018.
5 Ibid.
6 “Department of Homeland Security Must Stop Targeting Communities of Color,” Brennan Center for Justice, April 2022.
7 “Tools and Weapons: The Promise and Peril of the Digital Age,” Brad Smith and Carol Ann Browne, page 259.
IV. Methodology

A. Scope

The HRIA assesses how, if at all, Microsoft’s enterprise cloud services and AI technologies may adversely impact rights-holders in the United States when those services and technologies are used by U.S. law enforcement and immigration authorities. With respect to affected rights-holders, the Assessment gives particular consideration to the historical and persistent vulnerabilities that disproportionately deprive BIPOC communities in the United States of their rights, and the role of Microsoft’s products in benefiting or harming such groups. The HRIA also acknowledges that some BIPOC communities may be at even greater risk of harm if their identities intersect with other historically marginalized characteristics, including being female, children, LGBTQI, people with disabilities or mental illness, and/or recent immigrants.

B. Process

To collect data and first-hand stakeholder perspectives, the Assessment followed a two-pronged research process. First, the Assessors undertook a literature review of Microsoft’s key policies, applicable international human rights frameworks to which the policies should adhere, and external reporting and analysis. The Assessors then convened interviews with Microsoft’s executive leadership and a range of external stakeholders: academics, legal and public policy experts, human rights organizations, and socially responsible investment firms. The consultation process also included a three-day visit to the company’s headquarters in Redmond, Washington to gain an in-depth perspective from its senior executives.

The Assessors spoke with human rights organizations, advocacy organizations specializing in privacy and other digital rights, racial justice and immigrant rights groups, public policy and legal experts, academics, and former officials at key U.S. federal agencies who have first-hand expertise regarding government agencies’ use of digital technologies in the provision of public services. In addition, the Assessors interviewed executives at Microsoft responsible for human rights policies and their implementation in the development and sale of technology products to government agencies. In total, the Assessors held interviews that gathered the perspectives of more than fifty individuals. Interviews were conducted with external stakeholders representing thirty-five organizations from the aforementioned backgrounds, as well as eleven senior executives in Microsoft’s Corporate, External, and Legal Affairs (“CELA”) Team, including the Office of Responsible AI, U.S. Government Affairs Team, and Racial Equity Initiative.

To protect identities and encourage candor, the Assessors committed to anonymizing and aggregating the feedback cited in the HRIA to the greatest extent possible. At the same time, representatives of civil society and Microsoft’s senior executives both emphasized the need for more information sharing related to the impacts of the company’s products on rights-holders, disclosure of due diligence results, and conversations regarding human rights challenges in the provision of these products to governments.
Accordingly, the Assessors asked civil society interviewees for permission to list the names of their respective organizations in the HRIA. The intention behind this was to help Microsoft enhance and tailor its stakeholder engagement strategy, including through in-depth discussions regarding specific human rights challenges, information-sharing, and other ways of collaborating with the organizations that participated in the HRIA. Most of the interviewees agreed to provide their organizations’ names for the HRIA, with the understanding that doing so did not constitute their endorsement of the Assessment. These organizations are listed in Annex A.

C. Issues Excluded from the HRIA

As Microsoft announced,

  We also want to be clear that this is not a review of all the specific contracts we have in place today, nor is it a broader statement that goes beyond…where, when and with whom we do business. It’s also not a blanket prohibition on providing technology across the public sector, as we sell numerous solutions to many public sector customers around the world, and will continue to do so.8

With this in mind, the Assessors explain below why certain issues were deemed outside the HRIA’s scope.

1. Business Relationships with the Military

The shareholder resolution that prompted this HRIA references several examples of Microsoft’s contracts and relationships with the U.S. Department of Defense and other entities that fall within the ambit of the U.S. military.9 Accordingly, the shareholders expressed an interest in having the HRIA encompass sales to all government agencies, both civilian and military.

Microsoft expressed concern, however, that an HRIA encompassing both military and civilian dimensions could be too broad in scope. Microsoft’s senior executives stated that they agreed with the shareholder resolution’s overarching objective of focusing on the ways BIPOC communities may have been or could be harmed by the company’s products, but added that it would be difficult to give this subject the nuanced treatment it deserves if the Assessment also covered Microsoft’s commercial contracts with the U.S. military. In particular, Microsoft executives underscored that the company’s relationship with military agencies creates additional layers of complexity, relating to both distinct regulatory requirements and specialized technical features of products designed for military applications.

The Assessors discussed the exclusion of military contracts with approximately twenty representatives of civil society organizations involved in the development of the shareholder resolution. Many of those civil society representatives voiced disappointment over the decision to exclude military uses, emphasizing that a review of these contracts was expressly called for in the shareholder resolution. Several representatives provided similar views during subsequent one-on-one interviews, opining that the military’s use of Microsoft products is intertwined with uses by civilian authorities. Others noted that Microsoft appears to have significant contracts with the U.S. military, and that the products it provides in military settings stand at substantial risk of furthering serious human rights violations through their potential use by foreign governments to commit atrocities, genocide, and other crimes against humanity.

8 “Taking on Human Rights Due Diligence,” Microsoft on the Issues Blog, October 2020.
9 See “Microsoft Human Rights Policy Implementation Proposal,” Investor Advocates for Social Justice (Lead Filer), June 2022.
military, and that the products it provides in military settings stand at substantial risk of furthering serious human rights violations through their potential use by foreign governments to commit atrocities, genocide, and other crimes against humanity. 2. Specific Contracts The Assessors did not review specific contracts between the company and U.S. Government agencies. Microsoft’s commercial contracts contain highly sensitive content, including information about Microsoft that the company deems proprietary, and information that the contracting client may view as confidential. Microsoft has a large number of contracts with government entities that the Assessors understand typically contain distinct licensing terms and conditions. Even if a contract review were to be methodologically practicable, an adequate assessment of their various features and particular impacts on rights would be infeasible under this HRIA. Instead, this Assessment considers opportunities for Microsoft to share more details with civil society and other stakeholders about product licensing to government agencies. Additionally, the Assessors urge Microsoft to strengthen human rights provisions in the contractual terms governing product use. During the course of interviews, certain members of civil society expressed disappointment with this exclusion. Rights advocates in general regard these limitations as symptomatic of the power they believe technology companies hold in society. Stakeholders made clear that civil society groups want greater transparency and knowledge sharing with respect to the technologies Microsoft licenses to domestic law and immigration enforcement agencies. -9- FH11370276.1 V. Applicable International Human Rights Frameworks A. The United Nations Guiding Principles on Business and Human Rights The UNGPs are a set of thirty-one principles endorsed in 2011 by the United Nations Human Rights Council.10 These principles provide the methodological basis for the analysis in this HRIA. The Assessors drew additional direction from the assurance and implementation guidances that form the supplementary UNGPs Reporting Framework.11 Microsoft’s Human Rights Statement invokes the UNGPs, noting: Starting with our initial product design and development, to supply chain manufacturing and management, and finally deployment - we work to identify and understand positive and adverse human rights impacts. To help us manage these efforts Microsoft commits to respecting the . . . [UNGPs]. We work every day to implement the UNGPs throughout Microsoft, both at headquarters and offices in approximately 200 countries and territories, and throughout our global supply chains. The UNGPs call upon businesses to respect human rights by conducting due diligence of how their activities might adversely affect human rights, to minimize adverse impacts, and to remediate harms. We communicate our commitment to stakeholders through this Global Human Rights Statement webpage where this statement is available in 18 languages and dialects.12 Microsoft’s Human Rights Statement further provides that: Understanding potential human rights impacts associated with digital technologies presents unique challenges. Our global and on-going processes begin with a focus on identifying and assessing any actual, or potential, adverse human rights impacts that we may cause, contribute or be directly linked with, either through our own activities or as a result of our business relationships. 
Our processes follows [sic] the UNGPs and the OECD Guidelines for Multinational Enterprises. One of the ways we do this is by conducting [HRIAs], to identify and prioritize salient risks. We have conducted HRIAs at both the corporate and product levels, and for various countries and locations. Our HRIA work includes regular engagement and consultation with stakeholders in an effort to understand and address perspectives of vulnerable groups or populations.13

10 "Guiding Principles on Business and Human Rights: Implementing the United Nations 'Protect, Respect, and Remedy' Framework," United Nations Human Rights Office of the High Commissioner, January 2011; see also "Launch of John Ruggie's 'Just Business: Multinational Corporations and Human Rights,'" (Video), NYU School of Law's Center for Human Rights and Global Justice.
11 See "UNGPs Reporting Framework," Shift and Mazars LLP, February 2015.
12 "Global Human Rights Statement," Microsoft.
13 Ibid.

Although the UNGPs are important to Microsoft's human rights due diligence, and inform the analysis in this HRIA, the UNGPs are not a source of law. Accordingly, they should not "be read as creating new international law obligations."14 They do serve, however, as a framework for businesses to "respect human rights."15 As Principle 11 underscores, businesses therefore "should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved."16

Inasmuch as the principles focus on the "corporate responsibility" to "respect human rights," the UNGPs speak in broad terms. "Because business enterprises can have an impact on virtually the entire spectrum of internationally recognized human rights, their responsibility to respect applies to all such rights."17 However, "some human rights may be at greater risk than others in particular industries or contexts, and therefore will be the focus of heightened attention."18 From the Assessors' perspective, this means that any particular due diligence exercise may have a different focus depending on the risks involved.

The UNGPs provide that the "responsibility to respect human rights" places a "require[ment]" on businesses.
They must "[a]void causing or contributing to adverse human rights impacts through their own activities," "address such impacts when they occur," and "[s]eek to prevent or mitigate adverse human rights impacts that are directly linked to their" business activities.19 Thus, businesses "may be involved with adverse human rights impacts either through their own activities or as a result of their business relationships with other parties." The "means through which a business enterprise meets its responsibility" will be "proportional to, among other factors, its size."20

When a business "causes or may cause an adverse human rights impact, it should take the necessary steps to cease or prevent the impact."21 When it instead "contributes or may contribute to an adverse human rights impact," the business "should take the necessary steps to cease or prevent its contribution and use its leverage to mitigate any remaining impact to the greatest extent possible." When an adverse human rights impact is "directly linked" to a business's operations without cause or contribution, "the situation is more complex," and a broad spectrum of context-dependent options may be available to "mitigate the impact."

In addition, if a company "causes" or "contributes" to actual adverse human rights impacts, it "should provide for or cooperate in their remediation through legitimate processes."22 If it does not cause or contribute, but is directly linked to harm—even actual harm—"the responsibility to respect human rights does not require that the enterprise itself provide for remediation."23 In those situations, businesses should "[s]eek to prevent or mitigate" such impacts.24

14 "General Principles," UNGPs.
15 "Principle 11," UNGPs.
16 Ibid.
17 "Principle 12: Commentary," UNGPs.
18 Ibid.
19 "Principle 13," UNGPs.
20 "Principle 14: Commentary," UNGPs.
21 "Principle 19: Commentary," UNGPs.
22 "Principle 22," UNGPs.

The purpose of carrying out human rights due diligence, such as through this HRIA, is to "identify, prevent, mitigate, and account for" how businesses "address their adverse human rights impacts."25 "The process should include assessing actual and potential human rights impacts." While "actual impacts . . . should be a subject for remediation," "[p]otential impacts should be addressed through prevention or mitigation."26 To carry this out, the diligence process should "identify and assess the nature of the actual and potential adverse human rights impacts with which a business enterprise may be involved."27

B. The Universal Declaration of Human Rights

The UNGPs refer to a number of "core internationally recognized human rights" as the "benchmarks against which other social actors assess the human rights impacts of business enterprises."28 Among these are the Universal Declaration of Human Rights29 ("UDHR"), the principal instrument of the U.N. International Bill of Rights. The UDHR articulates the rights to which all individuals are inalienably entitled, regardless of their background and status in society or any other characteristics that define their identity. The two other instruments in the International Bill of Rights, the International Covenant on Civil and Political Rights30 ("ICCPR") and the International Covenant on Economic, Social, and Cultural Rights31 ("ICESCR"), expand on certain rights in the UDHR.
For the purposes of this HRIA, the Assessors treat the UDHR as the primary document outlining the fundamental rights that companies are expected to respect in the course of their activities. Few, if any, of the human rights prescribed in the UDHR are independent of one another; instead, they intersect and affect each other in multiple ways. The following UDHR Articles are most pertinent to the scope of this HRIA:

• Articles 1 & 2 (Right to Equality & Freedom from Discrimination) – All individuals are born free and equal in dignity and rights. As such, they are afforded the same set of human rights, regardless of any other identifying characteristic or social status.
• Article 3 (Right to Life and Security) – All individuals have the right to life, and to live in freedom and safety.
• Article 5 (Freedom from Inhumane Treatment or Punishment) – No individuals are to be subjected to torture or to cruel, inhuman, or degrading treatment or punishment.
• Article 7 (Right to Equal Legal Protection) – All individuals shall be treated equally in the application of the law and shall be protected by the law without discrimination.
• Article 9 (Freedom from Arbitrary Detention) – No individual shall be subjected to arbitrary arrest, detention, or exile.
• Article 12 (Privacy and Personal Reputation) – No individuals shall be subject to interference with their privacy or have their reputations impugned.
• Article 14 (Right to Asylum) – All individuals have a right to enter a country to seek asylum from the persecution they are experiencing in their home country.
• Articles 18, 19 & 20 (Freedom of Expression and Belief) – All individuals have the right to think and believe as they want, including through religious belief and practice. All individuals also have the right to their own opinions, and the right to express them freely.
• Articles 29 & 30 (Protection of Human Rights) – The law should ensure human rights and should allow everyone to enjoy the same mutual respect. Further, no government or non-State entity should act in a way that takes away the rights expressed in the UDHR.

23 "Principle 22: Commentary," UNGPs.
24 "Principle 13," UNGPs.
25 "Principle 17," UNGPs.
26 "Principle 17: Commentary," UNGPs.
27 "Principle 18: Commentary," UNGPs.
28 "Principle 12: Commentary," UNGPs.
29 See "The International Bill of Human Rights," U.N. General Assembly Resolution 217 A(III), December 10, 1948.
30 "International Covenant on Civil and Political Rights," adopted and opened for signature, ratification and accession by U.N. General Assembly Resolution 2200 A(XXI), December 16, 1966, and entered into force on March 23, 1976; see pp. 17-34 of the International Bill of Human Rights.
31 "The International Covenant on Economic, Social and Cultural Rights," adopted and opened for signature, ratification and accession by U.N. General Assembly Resolution 2200 A(XXI), December 16, 1966, and entered into force on January 3, 1976; see pp. 7-16 of the International Bill of Human Rights.

C. The International Convention on the Elimination of All Forms of Racial Discrimination

The International Convention on the Elimination of All Forms of Racial Discrimination ("ICERD") is also particularly relevant to this HRIA.
ICERD is the oldest of the nine core international human rights treaties, and is the principal human rights instrument aimed at eliminating racial discrimination globally.32 Signatories, including the United States, are bound by international law to protect individuals from discrimination through such efforts as condemning racial discrimination, prohibiting segregation and apartheid, and agreeing to pursue national measures aimed at eradicating racism and promoting racial understanding.33 In addition to executive policies issued by the President and legislation passed by Congress, the United States has a duty to uphold and implement ICERD through public services provisioned by federal and local government agencies.

32 See "International Convention on the Elimination of All Forms of Racial Discrimination," adopted and opened for signature, ratification and accession by U.N. General Assembly resolution 2106 (XX), December 21, 1965.
33 See Ibid.

ICERD elaborates on the non-discrimination articles in the International Bill of Rights, providing a framework expressly dedicated to the elimination of racial discrimination and the promotion of racial inclusion. Companies can draw from ICERD's overarching purpose by bolstering their commitments to the prevention of discrimination against BIPOC communities that may stem from their activities. In addition, they can foster racial inclusion by advancing Diversity, Equity, and Inclusion ("DEI") and racial justice initiatives. Microsoft's efforts toward these goals are addressed below.

D. Applying International Human Rights Frameworks in this HRIA

Following the UNGPs' expectations for due diligence, and accounting for related international human rights frameworks, the Assessors (1) identify the actual or potential adverse human rights impact(s) with which Microsoft might be involved through its enterprise cloud services and AI technologies, (2) assess whether Microsoft is causing, contributing to, or directly linked to those adverse human rights impacts, and (3) recommend appropriate mitigation strategies in the event of potential harm, and remediation strategies in the event of actual harm.

How to make a determination regarding cause, contribution, or direct linkage is, from the Assessors' perspective, often unclear based solely on the text of the UNGPs, inasmuch as the UNGPs do not provide a clear definition regarding corporate relationships to harms or a specific set of evaluative criteria. Further, because the UNGPs are not a source of law, but are instead a set of principles intended to guide businesses in meeting their human rights responsibilities, there is considerable latitude in addressing both the question of causation and the remediation or mitigation strategies that might be available.

Microsoft recognizes the importance of drawing these distinctions. As noted in its most recent Human Rights Annual Report, in the context of its 2018 HRIA on AI technology:

One reason the question of contribution is important is that it can help determine opportunities to use leverage to mitigate potential adverse human rights impacts. Stakeholders pointed to three key factors that could increase the opportunity for leverage: the level of customization, substitutability, and a continuing relationship.
These opportunities cannot ensure that adverse impacts won't occur, but they do suggest potential opportunities for companies to exert influence.34

34 "2020 Microsoft Human Rights Annual Report," Microsoft.

The UNGPs call on companies to take positions – even drastic ones that are commercially disadvantageous – to prevent harms by downstream partners for which they could have a level of responsibility. The evaluation of responsibility and attendant action represents a critical due diligence step under the UNGPs. In particular, the Guiding Principles expect that Microsoft will exert its influence to seek the end of serious abuses by the agencies with which the company has a commercial relationship. In instances where this leverage is not significantly effective, the UNGPs call on Microsoft to consider taking positions of greater consequence, namely terminating a particular agency's license or even ending the entire commercial relationship.

In the past, Microsoft has been pressed to end business with certain U.S. government agencies following the revelation of credible evidence indicating systemic human rights abuses by those agencies. Ultimately, Microsoft continued working with the agencies. It is the company's conviction that, in a functioning democratic system with effective rule of law, Microsoft will have greater ability to influence an agency's human rights practices in a positive manner if the two remain in an active commercial relationship. More broadly, by staying in the market, Microsoft would also be able to continue licensing products that adhere to Microsoft's human rights standards—particularly when the use of those products in public services significantly and equitably benefits rights-holders.

As articulated by John Ruggie, the chief drafter of the UNGPs, the connection between harms and a company's responsibilities should be evaluated as a "continuum." Ruggie noted:

[a] variety of factors can determine where on that continuum a particular instance may sit. They include the extent to which a business enabled, encouraged, or motivated human rights harm by another; the extent to which it could or should have known about such harm; and the quality of any mitigating steps it has taken to address it.35

35 "Comments on Thun Group of Banks Discussion Paper on the Implications of U.N. Guiding Principles 13 & 17 in a Corporate and Investment Banking Context," John Ruggie, 21 Feb. 2017.

In Ruggie's view, the question of responsibility does not rise or fall on a single factor, but on multiple considerations that are interrelated. Due diligence based on this view should account for a company's enablement, encouragement, and knowledge, in addition to its mitigation efforts. Similarly, a recent report on corporate responsibility under the UNGPs states:

The closer the connection between the company's core business operations, specific products, or specific purchasing activities and the resulting harm — balanced with other factors — the greater the likelihood that the company contributed to the harm, and vice versa.36

36 Jonathan Drimmer and Peter Nestor, "Seven Questions to Help Determine When a Company should Remedy Human Rights Harms under the UNGPs," BSR, January 2021, available at https://www.bsr.org/en/reports/seven-questions-to-help-determine-when-a-company-should-remedy-human-rights.

Again, the report notes that the analysis of responsibility rests on multiple factors, including the relationship between the "core" of what a company does and the "specificity" of the activity in relation to the ultimate harm. Professor Vivek Krishnamurthy, reviewing this and
other literature in the context of cloud service provider responsibilities, finds the concept of "specificity" to be analytically significant.37

These interpretations help draw a sharper distinction between the cause, contribution, and direct linkage categories through which Microsoft's relationships and responsibilities to harms are determined under the UNGPs. They offer less insight, however, regarding the fine line between direct linkage and relationships in which the contributions of a Microsoft product within a value chain are distant enough from the harm to be immaterial. In the Assessors' view, when the line between direct linkage and relationships that fall below such linkage is blurred, it is advisable for Microsoft to assume it is directly linked and develop a mitigation strategy that diminishes the risk that its products will facilitate harms.

The purpose of the UNGPs is not, like tort law, to find whether there is proximate cause for a specific injury. Determining responsibility for its own sake is not the aim. The purpose of the UNGPs is to "enhanc[e] standards and practices with regard to business and human rights so as to achieve tangible results for affected individuals and communities, and thereby also contribut[e] to a socially sustainable globalization."38 Accordingly, if responsibility is unclear – as it might often be in a complex digital technology ecosystem – adopting a mitigation strategy will help achieve the UNGPs' objectives in a manner that is consistent with Microsoft's Global Human Rights Statement.

Additionally, although the UNGPs speak in terms of "adverse human rights impacts," the Assessors are mindful that, for a company as complex as Microsoft, actions aimed at mitigating adverse human rights impacts might lead to unintended harms for other rights-holders. For example, Microsoft's enterprise cloud services could be abused by certain actors to serve nefarious ends. Yet it would be disastrous if, in response to this risk, Microsoft stopped selling the cloud services relied on by hospitals, universities, and innovative businesses. In the Assessors' view, and consistent with what the Assessors believe to be the spirit of the UNGPs, such binary solutions are neither productive nor necessary. Accordingly, the Assessors consider the full spectrum of human rights implicated in the use of Microsoft's technologies when recommending mitigation strategies, including in the context of law and immigration enforcement.

VI. Microsoft's Human Rights Approach & Policy Commitments

A. Policy Commitments Relating to Human Rights

To gain a broad understanding of Microsoft's human rights commitments and gauge their consistency with internationally recognized frameworks, the Assessors reviewed the policies, guidelines, statements, protocols, and standards that govern the business practices most applicable to this HRIA (collectively, the "Policy Framework").

37 "With Great (Computing) Power Comes Great (Human Rights) Responsibility: Cloud Computing and Human Rights," Vivek Krishnamurthy, Business and Human Rights Journal, Edition 7, 2022, page 242.
38 "General Principles: Commentary," UNGPs.
-16- FH11370276.1 Overall, the Policy Framework is robust, intricate, and covers the range of human rights issues related to this Assessment’s scope. The documents within the Framework provide significant detail regarding Microsoft’s human rights values. The Policy Framework is premised on the principle that “Technology should be used for the good of humanity, to empower and protect everyone and to leave no one behind.”39 1. Global Human Rights Statement In the Assessors’ view, Microsoft’s Global Human Rights Statement is a model to which the human rights programs of other companies in the technology sector can aspire. The Statement articulates the international standards, company initiatives, best practices, and tools that Microsoft applies to evaluate its business activities and fulfill its responsibility to protect human rights.40 Critically, the Statement acknowledges that the identity of marginalized groups can intersect with other personal characteristics to increase the already higher risk of harms facing vulnerable rights-holders: Our commitment to vulnerable groups: Although human rights are universal, they are not yet enjoyed universally. For example, various forms of discrimination require that we pay special attention to vulnerable groups. Vulnerable groups include persons who are disproportionately susceptible to heightened adverse impacts, or those who have less practical access to remedy. We are committed to conducting business without discrimination based on race, color, ethnicity, sex, language, religion, political or other opinion, national or social origin, property, birth or other status such as disability, age, marital and family status, gender, sexual orientation, gender identity or expression, health status, place of residence, economic and social situation, or other characteristics, or the multiple intersecting forms of discrimination that influence the realization of human rights. We commit to take actions to empower vulnerable groups to better exercise their rights.41 Microsoft’s Statement refers to numerous instruments to which it adheres that address specific types of discrimination, including the International Convention on the Elimination of All Forms of Discrimination Against Women; the Women’s Empowerment Principles; the Convention on the Rights of the Child; the Child Rights and Business Principles, the Convention on the Rights of Persons with Disabilities; and the Standards of Conduct for Business on Tackling Discrimination against LGBTI People.42 39 See “Global Human Rights Statement,” Microsoft. 40 Ibid. 41 Ibid. 42 Ibid. -17- FH11370276.1 The Statement also highlights international initiatives of which Microsoft is a member or supporter, or to which it is a signatory. These initiatives include the Global Network Initiative, the U.N. Sustainable Development Goals, and the U.N. Global Compact.43 Pursuant to the Statement, the UNGPs and the OECD Guidelines for Multinational Enterprises serve as the primary frameworks that Microsoft references when designing its human rights due diligence—that is, the primary vehicle through which it assesses, remediates, and mitigates adverse human rights impacts. Microsoft’s Statement stresses that it uses “ongoing human rights due diligence” to “understand[] potential human rights impacts associated with digital technologies,” which present “unique challenges.”44 Additionally, the Statement addresses the grievance mechanisms the company provides to stakeholders and rights-holders. 
These mechanisms are provided through several channels, most notably through anonymous submissions to the company’s Integrity Website, complaints emailed to the Business Conduct Email Address, and calls to the Integrity Hotline.45 Microsoft has also established product-specific channels for voicing grievances, such as the Disability Answer Desk, the Xbox Live Policy & Enforcement, and the Privacy Support Form that allows rights-holders to request the right to access and delete personal data.46 The Statement also devotes attention to rule of law and good governance. Specifically, it notes that Microsoft advocates for public policies and laws that promote technological innovation and protect human rights. Finally, the Statement notes that Microsoft’s employees, third-party suppliers and other business partners, and the governments with which Microsoft enjoys a commercial relationship share key responsibilities for implementing the policy. Internally, Microsoft’s Regulatory and Public Policy Committee, within its Board of Directors, serves as the primary body for overseeing risks related to the Statement’s commitments, and Microsoft’s Vice Chair and President is responsible for ensuring that the 1,500 business, legal, and corporate affairs employees sitting within CELA oversee the Statement’s implementation.47 2. Trust Code Microsoft’s Standards of Business Conduct (which the company refers to as its “Trust Code”) seek to maintain and build on the relationships between the company and the stakeholders affected by its operations and activities.48 The Trust Code states, “Microsoft’s Standards of Business Conduct . . . will show you how we will use our culture and values to build and preserve trust with our customers, governments, investors, partners, representatives, and each other, so we can achieve more together.”49 The document categorizes several types of 43 Ibid. 44 Ibid. 45 Ibid. 46 Ibid. 47 Ibid., pages 9-10. 48 “Trust Code: Standards of Business Conduct,” Microsoft. 49 Ibid. -18- FH11370276.1 stakeholder relationships, including customers, governments, communities, employees, investors, society, and business partners.50 The Trust Code seeks to build stakeholder trust by requiring Microsoft’s employees and business partners to predicate their decision-making on sound ethical principles that instill confidence in stakeholders affected by Microsoft’s activities. Accordingly, the Trust Code serves as a guidance tool employees can reference when faced with a decision that could compromise the company’s values and/or introduce risks. The Trust Code describes several steps employees can take when they have concerns, including an option to ask questions and seek further guidance from CELA and Microsoft’s Finance and Human Resources Teams. The Trust Code also distinguishes between the responsibilities of non-managerial employees and the greater responsibilities of managers and senior executives. Microsoft provides a number of channels to report ethical issues: Microsoft’s Integrity Portal; emailing or sending a letter by post to CELA’s Office of Legal Compliance; calling a Microsoft hotline; or directly raising concerns with the employee’s manager, another Microsoft manager, or the human resources, finance, and CELA Teams. The document also guarantees employees protection from retaliation if they submit allegations of ethical impropriety or human rights violations. 3. Responsible AI Standard a. 
Overview

AI is a broad technological concept that encompasses, overlaps with, and sometimes is confused with a variety of other concepts and technologies. AI is notoriously difficult to define—Brad Smith has written that there is "universal vagueness swirling around AI".51 At the same time, the recent draft of the European Union's proposed AI regulations defines AI as "systems that display intelligent behavior by analyzing their environment and taking actions—with some degree of autonomy—to achieve specific goals."52

However defined, AI technologies have become increasingly important for the technology industry generally, and Microsoft specifically. Recognizing both the importance of AI and its uncertain effects on human rights, Microsoft helped found the Partnership on AI, which seeks to address ethical issues in the development of AI technologies.53 It also conducted a human rights impact assessment on AI in 2018. As Microsoft stated in its most recent Human Rights Annual Report, it is difficult to determine responsibility for adverse human rights impacts in the AI context:

Several factors complicate this question in the context of AI. They include the unpredictability of the nature and use of AI as a rapidly evolving technology and the complexity of algorithms, which may make it difficult to determine whether the adverse impact stems from the algorithm itself, the data used to train or operate the AI, or the way in which the AI was used. This challenge is more complex for companies that also provide data or cloud computing infrastructure and services enabling customers to build AI products on platforms due to limited visibility into customers' activities.54

50 Ibid.
51 Smith, Tools and Weapons, 222. See also page 224: "There is no universally agreed-upon definition of AI across the tech sector."
52 "Artificial Intelligence for Europe," European Commission, April 2018.
53 See "About Us: Advancing Positive Outcomes for People and Society," PAI.

In light of both AI's potential power and nascency, Microsoft first introduced its Six Principles for AI in 2018.55 Those principles informed the drafting of Microsoft's Responsible AI Standard (the "Standard"), which it revised in a second version published in June 2022. Microsoft's latest iteration of the Standard significantly augments and formalizes the principles for responsible design and use of AI, and provides extensive guidance and protocols for company personnel engaged in decision-making regarding potentially harmful aspects of AI technologies under development.

The Standard lays out a series of due diligence steps beginning with, and informed by, an impact assessment. Those steps underpin sixteen goals across key areas that Microsoft seeks to achieve when designing and deploying AI technologies. Those goals fall under six categories: accountability, transparency, fairness, reliability/safety, privacy/security, and inclusiveness. The Standard is supplemented by documentation that provides extensive direction to personnel whose work necessitates due diligence on AI technologies, including the following:56

• The Responsible AI Impact Assessment Template. The Template allows relevant teams to evaluate a product's likely impacts on rights-holders;
• The Responsible AI Impact Assessment Guide. The Guide is a forty-two page resource for teams completing an impact assessment; and
• Transparency Notes.
Transparency Notes are to communicate to the public the intended uses, capabilities, and limitations of deployed Microsoft AI-dependent products. The Standard is supported by monitoring tools that assist employees in assessing AI impacts following the deployment of a product, including the HAX Workbook to support early planning and collaboration between engineering disciplines and help drive alignment on product requirements across teams; the AI Fairness Checklist to prioritize fairness when developing AI; and the Fairlearn tool, which seeks to empower AI developers to assess their systems' fairness and mitigate any discriminatory impacts on vulnerable groups.57 54 Human Rights Annual Report: Fiscal Year 2021, Microsoft. Microsoft’s fiscal year 2021 covers the period from July 1, 2020 to June 30, 2021. 55 See “Responsible AI Standard, V2,” Microsoft. 56 See “Responsible AI Resources,” Microsoft. 57 Ibid. -20- FH11370276.1 In addition, in 2017, Microsoft established Aether, a body that advises Microsoft’s senior leadership on the challenges and opportunities presented by AI technologies. Microsoft has stated that its executives engage Aether to make recommendations on responsible AI issues, technologies, processes, and best practices. The working groups in Aether undertake research and development, and provide advice on rising questions, challenges, and opportunities related to cutting-edge AI.58 b. Specific Requirements under the AI Standard Certain requirements under the Standard establish a formal human rights due diligence process that can help mitigate harms to BIPOC communities emanating from the use of Microsoft’s AI products, and design products to prevent future harms from occurring. At the design stage, for example, the Standard requires Microsoft’s AI product teams to carry out due diligence identifying the rights-holders likely to be impacted, stakeholders overseeing the product’s use, and those who use the product to make decisions of significant impact on rights-holders. The following requirements provide the parameters for this due diligence: • “Review defined Restricted Uses to determine whether the system meets the definition of any Restricted Use.” (Accountability Goal 1); • “Identify the stakeholders who are responsible for troubleshooting, managing, operating, overseeing, and controlling the system during and after deployment.” (Accountability Goal 5.1); • “Identify: 1) stakeholders who will use the outputs of the system to make decisions, and 2) stakeholders who are subject to decisions informed by the system.” (Transparency Goal 1.1); • “Identify: 1) stakeholders who make decisions about whether to employ a system for particular tasks, and 2) stakeholders who develop or deploy systems that integrate with this system.” (Transparency Goal 2.1); • “Identify stakeholders who will use or be exposed to the system, in accordance with the Impact Assessment requirements.” (Transparency Goal 3.1); and • “Identify and prioritize demographic groups, including marginalized groups, that may be at risk of experiencing worse quality of service based on intended uses and geographic areas where the system will be deployed. Include: 1) groups defined by a single factor, and 2) groups defined by a combination of factors.” (Fairness Goal 1.1). For each of these requirements, the relevant employees must use the Responsible AI Standard’s Impact Assessment Template to document and assess the information collected. 
To identify and prioritize affected rights-holders, Microsoft recommends communicating with 58 Responsible AI Webpage, Microsoft. -21- FH11370276.1 “researchers, subject matter experts, and members of demographic groups,” including marginalized and otherwise vulnerable communities. 4. Policies & Initiatives for Addressing Racial Discrimination Microsoft’s products, and the relationships the company cultivates with the agencies that use its products, can themselves mitigate discrimination against BIPOC communities. According to Microsoft’s senior managers, the company is committed to implementing DEI principles—including addressing racial injustice as set forth under ICERD—at an enterprise level. This commitment affects the company’s operations, activities, and supply chains. Microsoft recently initiated a number of policy changes that deepen the company’s commitments to DEI and racial justice, including committing the company to monitor its progress on the implementation of these goals. As part of this, the company has set goals in the near-term to increase investments in Black-owned businesses, double the number of Black-owned approved suppliers, and spend an incremental $500 million with those entities and existing Black-owned suppliers.59 Additionally, Microsoft launched the Black Partner Growth Initiative, which focuses on increasing the number of Black-owned business partners in the United States.60 Microsoft has initiated numerous efforts related to increased equity and accessibility in the distribution and use of digital technology. For example, it launched the Airband Initiative, to “advance access to high-speed internet and meaningful connectivity” as a “fundamental right.”61 Its TechSpark initiative “[f]oster[s] economic opportunity and job creation in partnership with communities across the U.S.”62 Additionally, its global skills initiative is “aim[ed] at bringing more digital skills to 250 million people worldwide by the end of the [2025].”63 Microsoft also created an internal Justice Reform Initiative, which “works to empower communities and drive progress toward a more equitable justice system.”64 As part of this Initiative, the company partners with organizations such as the NYU Policing Project and the National Network for Safe Communities to explore how technology can better advance racial equity in the criminal justice system.65 Senior executives overseeing Microsoft’s racial equity and justice reform efforts emphasized that they work closely with community leaders and civil society organizations to reduce incarceration and advance racial equity in the justice system. These executives indicated that they are empowered by Microsoft both to lobby lawmakers at the state and federal level and to engage directly with civil society organizations through multi-stakeholder coalitions to advance shared racial justice advocacy interests. 59 “Racial Equity: Engaging Our Ecosystem: 2021 Progress Report,” Microsoft. 60 Ibid. 61 “Microsoft Airband Initiative,” Microsoft. 62 “The Microsoft TechSpark Program,” Microsoft. 63 “Expanding our commitments in Africa: Connectivity and skills” - Microsoft On the Issues Microsoft. 64 “Creating a More Equitable Justice System,” Microsoft. 65 Ibid. -22- FH11370276.1 Consistent with this HRIA, senior managers responsible for Microsoft’s DEI and racial equity policies noted that the company is in the process of internally assessing its civil rights impacts. 
This process involves evaluating the company’s workforce policies and practices and is in progress. 5. Human Rights in Microsoft’s Terms of Service Microsoft’s relationships with its customers are governed by their contracts. The applicable documentation varies by product. The Microsoft Azure Product Terms website, for example, provides links to General Service Terms and terms for specific Azure products.66 These documents provide contractual and policy restrictions related to Microsoft’s products relevant to the impacts addressed in this Assessment. Placing human rights at the center of contractual terms, and ensuring their implementation and enforcement, significantly strengthens human rights due diligence and leads to more effective harm mitigation strategies. In the Azure General Service Terms, for example, Microsoft restricts use of facial recognition by U.S. law enforcement: Customer may not use Azure Facial Recognition Services if Customer is, or is allowing use of such services by or for, a police department in the United States. Violation of any of the restrictions in this section may result in immediate suspension of Customer’s use of the service. 67 As another example, under its Cognitive Services and Applied AI Services, Microsoft references its Code of Conduct for Text-to-Speech integrations. Under that Code of Conduct, the Text-to-Speech implementation by the customer “must not be used to intentionally deceive people” or “disguise policy positions or political ideologies.”68 Microsoft includes both an “Acceptable Use Policy” and restrictions on “High Risk Use.”69 Under its Acceptable Use Policy, customers may not use products: • in a way prohibited by law, regulation, governmental order or decree; • to violate the rights of others; or • in any application or situation where use of the Services Deliverables could lead to the death or serious bodily injury of any person, or to severe physical or environmental damage, except in accordance with the High Risk Use section below. In turn, Microsoft’s “High Risk Use” terms state: WARNING: Modern technologies may be used in new and 66 “Microsoft Azure,” Microsoft. 67 Ibid. 68 “Code of Conduct for Text-to-Speech Integrations,” Microsoft, July 2022. 69 “Professional Services,” Microsoft. -23- FH11370276.1 innovative ways, and Customer must consider whether its specific use of these technologies is safe. The Services Deliverables are not designed or intended to support any use in which a service interruption, defect, error, or other failure of a Services Deliverable could result in the death or serious bodily injury of any person or in physical or environmental damage (collectively, “High Risk Use”). Accordingly, Customer must design and implement the Services Deliverables such that, in the event of any interruption, defect, error, or other failure of the Services Deliverables, the safety of people, property, and the environment are not reduced below a level that is reasonable, appropriate, and legal, whether in general or for a specific industry. Customer’s High Risk Use of the Services Deliverables is at its own risk. The Assessors did not, however, find any clauses that reference the UNGPs, other international human rights principles and frameworks, or Microsoft human rights policies, nor did they identify any other provisions that would condition the client’s use of a Microsoft product specifically on respect for human rights. 
As noted by one senior executive, Microsoft prefers terms of service that are as all-encompassing as possible, "so they can be uniformly applicable to all sorts of large customers and individual end-users around the world." To this effect, the executive added that contractual clauses usually only indicate that the customer "shall not violate the rights of others."

B. Human Rights Risk Management & Oversight

Microsoft implements its human rights policies in a manner that encourages individual product teams to escalate human rights issues to senior executives and managers in the company's CELA Team. CELA is responsible for driving and overseeing the implementation of Microsoft's human rights policies in its business activities and across its various teams. Consistent with the company's culture and belief that technology, legal, public policy, and human rights issues should not be separately siloed, CELA brings together professionals from across these fields into a single department capable of addressing the full range of such issues and challenges. Overall, Microsoft's approach to human rights risk management seeks to establish a flexible due diligence process that can be easily and consistently coordinated across relevant teams.

All of Microsoft's business groups and product teams are supported by a dedicated CELA Team that provides front-line support on the full range of legal and human rights issues encountered in the development and delivery of products and services. These front-line CELA Teams, in turn, are supported by CELA subject matter experts. A core responsibility of the front-line CELA Teams is to identify salient legal issues and human rights-related risks – and to escalate these issues to the personnel at CELA who lead Microsoft's human rights efforts, as well as subject matter experts within CELA. All CELA personnel receive training on the identification of risks and the procedures by which to escalate issues to CELA subject matter experts.

The close relationship between the company's business groups and the dedicated CELA front-line teams that provide them with legal and human rights support is key to the effectiveness of these processes. Microsoft believes that providing its business groups with dedicated legal, human rights, and public policy professionals residing within CELA assists in the timely and proactive identification of harmful risks as potential products, markets, and commercial relationships are considered. The front-line CELA professionals who support a particular business group are responsible for identifying and mitigating the full range of legal and human rights risks the business encounters and are trained on specific issues relevant to the effective implementation of Microsoft's human rights policies. Front-line CELA personnel have been dealing with human rights issues for years, particularly within Microsoft's cloud businesses aimed at governments and other large customers, and have formed close and productive working relationships with the senior executives in CELA responsible for Microsoft's human rights risk management.

Microsoft describes its human rights management as "hub-and-spoke," intended to empower and support front-line personnel to identify and address legal and policy issues, including human rights issues. To that end, Microsoft's teams do not typically follow a formal set of processes and protocols to identify and assess potential human rights risks.
As summarized by one senior executive, "Microsoft has a history of being as skinny as it can be, and then relying on qualified personnel in CELA and other teams to identify the human rights challenges that need to be prioritized." This model is intended to create a human-rights-respecting culture, as opposed to one reliant solely on human rights experts. Consequently, Microsoft has few internal policies and procedures that could be used by personnel as guidance to establish a standardized process for the identification, assessment, and elevation of human rights risks. This makes it particularly important that CELA carry out regular oversight across product teams—especially those working on technologies with features and uses that are at high risk of facilitating serious harms to vulnerable communities.

One executive noted that the small number of CELA team members responsible for Microsoft's human rights implementation and risk management requires them to rely on front-line CELA personnel and subject matter experts to make the Team aware of priority human rights issues. The executive indicated that these managers are typically provided with the latitude to act as the "owners and drivers" of their own work portfolios, with CELA standing by to help them manage risks and resolve human rights quandaries when requested. Microsoft finds this approach effective because it places responsibility on front-line personnel to be attuned to human rights issues and to escalate them as needed.

Microsoft has a significant record of engaging in human rights due diligence. Since 2016, Microsoft has published an Annual Human Rights Report as a means of providing the public with details on its human rights programs and plans, including the types of due diligence the company conducts, and how it addresses corporate social responsibility ("CSR") expectations and challenges.70 Additionally, Microsoft has worked with human rights experts to assess extant and emerging human rights challenges related to its products. In its most recent Annual Human Rights Report, Microsoft highlighted its 2018 Human Rights Impact Assessment of AI technologies.71 The Annual Report identified five salient human rights issues as current priorities for human rights implementation, public reporting, and further due diligence monitoring: accessibility, data security and privacy, digital safety, freedom of expression and privacy, and responsible sourcing. As highlighted in this HRIA, these priorities will also need to factor in specific challenges related to racial discrimination in the use of Microsoft's products by high-risk government agencies.

C. Stakeholder Reactions to Microsoft's Human Rights Policies and Practices

Stakeholders primarily commented on Microsoft's Human Rights Statement, DEI work, Responsible AI Standard, and human rights risk management practices. Stakeholders expressed no complaints or concerns about any specific policy or practice. They did, however, express two holistic concerns: first, that Microsoft's policies are disconnected from its practices; and second, that the company is not transparent regarding its activities. The perceived lack of transparency, in particular, appears to create a negative feedback loop, in which stakeholders view with suspicion whether and to what extent Microsoft has successfully operationalized its commitments.
VII. Salient Adverse Human Rights Impacts

Before considering whether Microsoft bears any responsibility under the UNGPs for human rights harms, it is necessary to first "identify and assess the nature of the actual and potential adverse human rights impacts with which a business enterprise may be involved."72 The purpose of this first step is to "understand the specific impacts on specific people, given a specific context of operations."73 The "specific people" at issue in this HRIA are vulnerable rights-holders, namely BIPOC individuals. The UNGP framework focuses on the most severe harms – based on the scope of harm, scale of harm, and remediability of harm – to which BIPOC are vulnerable.74

70 As a prominent example, see "Human Rights Annual Report: Fiscal Year 2021," Microsoft.
71 Ibid.
72 See "Principle 18: Commentary," UNGPs.
73 Ibid.
74 See "Principle 14: Commentary," UNGPs.

In the immediate context, Microsoft's products may be connected to the following adverse human rights impacts:

• Discriminatory surveillance. This harm involves law enforcement targeting BIPOC communities through surveillance activities and determinations regarding further law enforcement action. Surveillance activities include collecting personal information (including biometric information), creating databases, and creating predictive policing tools, such as assessments of the risk BIPOC individuals may pose to public safety.
• Privacy infringement. Related to discriminatory surveillance, this harm involves disproportionately invading BIPOC communities' expectations of privacy through the collection, storage, and processing of personal data, especially in manners that are not transparent to targeted individuals.
• Discriminatory arrest and incarceration. This harm entails collecting, processing, and analyzing information (including that which may be obtained by infringing on the privacy rights of BIPOC individuals) to support high-risk policing activities disproportionately aimed at BIPOC communities. Such harm also increases the likelihood that other fundamental rights will be violated, including the right to life and security, and freedom from inhumane treatment and arbitrary detention.

Each of these harms can be significant in scope and scale. They also connect to, and exacerbate, inequities in the criminal justice system. They are pervasive, and poorly regulated through state and federal law. To the extent they lead to a loss of life or liberty, they can be impossible for a company to remediate. Notably, these harms may be intertwined with technology products designed to enhance public safety, particularly law enforcement actions that are determined by the collection, storage, processing, and algorithmic assessment of personal data points.

VIII. Assessment of Microsoft's Relationship to Adverse Human Rights Impacts

A. Azure

1. Overview

Azure is a cloud computing platform operated by Microsoft. Both Azure's hardware and software-based operating system form the foundational elements of a digital ecosystem that provides a platform upon which other technology applications can run.
Microsoft licenses hundreds of Azure products, under nearly two dozen categories, including AI and machine learning; analytics; databases; management and governance; and networking.75 Accordingly, among its options, a customer could have access to Azure Cloud Services, which allows the customer to “build the web and cloud applications you need on your terms while using the many languages we support.”76 Alternatively, the customer could use Azure Automanage, which “offers a unified solution to simplify IT management,”77 or the 75 See “Azure Products,” Microsoft. 76 See “Azure Cloud Services,” Microsoft. 77 See “Azure Automanage,” Microsoft. -27- FH11370276.1 customer could use Azure Traffic Manager, which “operates at the DNS layer to quickly and efficiently direct incoming DNS requests based on the routing method of your choice.”78 The customer could also combine any number of these Azure products into a package of computing solutions. As an illustration, if a customer were interested in Microsoft’s “Cloud scale analytics” solution, it could create a solution architecture using a variety of models that might look like this:79 As the above diagram illustrates, this particular architecture solution involves Azure Synapse Analytics, Azure Data Lake Store, Azure Analysis Services, and the Azure Cosmos database. More concretely, under the above architecture, the relevant inputs are received through software applications provided by the customer; are fed to and stored by Azure products that then will use the data to train a machine-learning algorithm; and then will receive specific outputs for various customer purposes. Regardless of the specific architecture or specific solution, the basic structure of the Azure technology is the same, whether alone or in combination with other Microsoft Azure products and solutions. Namely, Microsoft provides various kinds of tools that can be used by a customer to develop cloud-based solutions specific to that customer. This typically includes running applications developed by the customer on data hosted in the Azure platform. As Microsoft puts it, “[t]he Microsoft Cloud,” of which Azure is one part, “provides a unified 78 See “Azure Traffic Manager,” Microsoft. 79 See “Azure Cloud-Scale Analytics,” Microsoft. -28- FH11370276.1 collection of services for creating applications.”80 Azure is a set of services “aimed at professional software developers who create and maintain new applications.”81 2. Products marketed for government use Microsoft specifically markets certain products for government use under its product page “Azure for government.”82 Microsoft’s marketing focuses on its products’ beneficial uses: Deliver services to citizens, anywhere at any time. Modernize your legacy infrastructure and easily scale up and down as needed. Meet government cloud security and compliance standards while managing costs. Learn how governments serve their citizens more effectively with Azure.83 Most relevant to this HRIA, Microsoft identifies and promotes certain Azure tools that help governments “enable investigations and analysis”: • Azure AI. 
Azure Applied AI services are a number of services, which are themselves a subset of a suite of AI services that "offer you turnkey AI services for common business processes."84 Although AI Cognitive Services are "general purpose AI services," Applied AI services have "additional task-specific AI and business logic to optimize for specific use cases." In both cases, they are "designed to help developers create intelligent apps."85
• Azure Media Services. These services allow licensees to "Manage, transform, and deliver media content with cloud-based workflows."86
• Azure Translator. Translator allows for the translation of text across 100 different languages.87
• Azure Synapse Analytics. This service is "a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics."88 There are multiple tools within Synapse Analytics, and for each of them, developers are provided a workspace upon which they can build customized software solutions. Microsoft provides as examples twenty-four different use cases under four categories: data engineering, data warehousing, data lake, and converged analytics.

80 See "Build Applications on the Microsoft Cloud," Microsoft.
81 Ibid.
82 See "Azure for Government," Microsoft.
83 Ibid.
84 See "Azure Applied AI Services," Microsoft.
85 Ibid.
86 See "Azure Media Services," Microsoft.
87 See "Azure Cognitive Services," Microsoft.
88 See "Azure Synapse Analytics," Microsoft.

These are not the only products that domestic law enforcement licensees may use. The entire suite of Azure products and AI technology is available to government agency customers. Microsoft does, however, tailor its Azure Government services to government licensees, if they choose to use them. The primary difference between what Microsoft calls "Global Azure" and "Azure Government" lies in the latter's security features, such as the use of physically isolated datacenters and networks located solely in the U.S.89 Otherwise, the Azure products under the "Azure Government" label function similarly to other Azure products: they are platforms on which developers may independently create applications.

Under the category of government solutions, Microsoft specifically markets to "public safety and justice organizations," stating that its products allow those organizations to "make more informed decisions and increase safety for the people and communities you serve."90 Microsoft regularly blogs91 and publishes reports detailing various use cases.92 Some of these publications highlight ways that Azure platforms can help dismantle structural racism and improve public safety.

3. Specific Use Cases

a. Fusion Centers

Fusion centers "are state-owned and operated centers that serve as focal points in states and major urban areas for the receipt, analysis, gathering and sharing of threat-related information between state, local, tribal and territorial, federal, and private sector partners."93 These centers were established to advance national security goals that Congress identified after the September 11th attacks, and were designed to streamline information sharing between federal agencies and state and municipal law enforcement agencies. Key federal agencies involved in fusion centers are the Federal Bureau of Investigation ("FBI"), the Department of Homeland Security ("DHS"), and the Federal Bureau of Prisons ("BOP").
Microsoft provides agencies participating in fusion centers with support for Azure Government and technologies that enable communications interoperability, making it possible for federal and state agencies to exchange intelligence and data between each other and with local police departments.

89 See “What is Azure Government,” Microsoft.
90 Ibid.
91 See “Microsoft Industry Blogs – Government,” Microsoft.
92 For a number of specific cases, see “The Future of Public Safety and Justice,” Microsoft, 2020.
93 “Fusion Centers,” U.S. Department of Homeland Security, October 2022.

b. Immigration Enforcement

Two federal agencies bear primary responsibility for immigration enforcement: Immigration and Customs Enforcement (“ICE”) and Customs and Border Protection (“CBP”). Both are housed under DHS. These agencies license Azure Government and other Azure tools.94

Microsoft has also bid to license its products and services in support of the Repository for Analytics in a Virtualized Environment (“RAVEn”), a new analytical data platform that ICE has been developing since 2018.95 RAVEn is being designed to analyze large datasets so ICE can more easily identify enforcement targets, as well as patterns and connections between targets and events. RAVEn’s databases would purportedly be composed of personal information on public websites, as well as confidential data provided by private sector partners. Such data could include biometric data from diverse sources, including fingerprints and DNA, official government data, information from social media, surveillance photos and videos, global positioning systems, and financial data obtained from private companies. Notably, RAVEn is identified in the HRIA solely to address potential adverse human rights impacts; RAVEn is not a Microsoft product.

4. Causation Analysis of Azure Products

a. Microsoft’s Perspective

Microsoft has two complementary views regarding the use of its technologies by domestic law enforcement and immigration agencies. The first is that its provision of such technology is done lawfully and with the purpose of providing innovative tools that improve the provision of public safety and national security services to the benefit of all rights-holders, including BIPOC individuals and other vulnerable groups. Although Microsoft recognizes that some of these technologies can be used abusively, from its perspective potential or actual abuse in violation of laws and policies – even if, as a statistical matter, known or reasonably anticipated – does not alone imbue Microsoft with responsibility for adverse human rights impacts. Nor, from Microsoft’s perspective, should those adverse impacts outweigh the benefits to public safety and national security when government agencies use such technologies to provision essential public services.

Microsoft’s second view is that Azure is a set of cloud computing and AI tools that customers can use (by their in-house IT staff or other third-party IT firms hired by the customer) to develop, deploy, control, and run the customer's own applications and process the customer's own data to serve the customer's purposes. The customer controls its own applications. Customers have the right to protect the confidentiality and privacy of their applications and data from their platform tools providers such as Microsoft.
Microsoft agrees that government entities such as law enforcement agencies should be transparent and accountable to the public regarding the applications they develop, deploy, control, and operate and the corresponding impact on the public (including the BIPOC community), and it supports legislative and regulatory reforms toward such public transparency and accountability. From Microsoft’s perspective, consistent with the above descriptions of the products at issue, these services are merely “inert” platforms upon which developers can create and layer software and build out the digital environment they need to perform their functions. The platforms are analytically no different from bridge building materials provided to a city that can be used for both beneficial and abusive means based solely on decisions left to the lawful discretion of the government agency.

In addition, senior executives leading Microsoft’s Justice Reform Initiative pointed out that Azure-enabled technology systems can also be harnessed by both police departments and racial justice organizations to provide the data analytics needed to identify racial disparities stemming from public safety practices. That data can then be used to drive more equitable law enforcement practices. For example, Microsoft partners with Seattle’s Law Enforcement Assisted Diversion program, which is designed to provide law enforcement with service delivery in a manner that is aligned with modern restorative justice and decarceration principles. Additionally, the executives noted the company’s partnerships with the Vera Institute, the Urban Institute, and the University of Southern California’s Sol Price Center for Social Innovation to create virtual tools and multiple data sources that can drive reforms to law enforcement practices and strengthen police engagement with BIPOC communities.96

From a more expansive vantage point, Azure Government, and other Azure products, are also being used by local and federal agencies to improve the delivery of social services that are vital to all rights-holders. In the process, agencies have the potential to use Azure to customize their apps in a way that better identifies and corrects situations in which BIPOC and other vulnerable groups may have less access to important services. Azure products, for instance, could be deployed to enable complex data collection and processing that improves the delivery of community and public healthcare programs; police,97 fire, and paramedic services; humanitarian assistance during natural disasters; and public entitlement programs, such as Medicare, Medicaid, and supplemental food assistance for low-income families.

94 “Acquisition Planning Forecast System,” U.S. Department of Homeland Security.
95 “Amazon, Google, Microsoft, and other Tech Companies Are in a 'Frenzy' to Help ICE Build its Own Data-Mining Tool for Targeting Unauthorized Workers,” Business Insider, September 1, 2021.

b. External Stakeholders’ Perspectives

Many external stakeholders view Microsoft as the most important technology corporation in the world. In that role, these stakeholders believe that Microsoft has outsized influence over the development and use of a wide array of technologies employed by U.S. agencies. From their perspective, Microsoft’s size, importance, and commercial relationships with law and immigration enforcement agencies place special responsibilities on the company.
Some of these external stakeholders expressed concern regarding: (1) Microsoft’s design of its products; (2) Microsoft’s relationships with government agencies, particularly those known to engage in public safety and national security; and (3) Microsoft’s transparency regarding both product design and its commercial relationships with such agencies.

A number of external stakeholders, including those who brought the shareholder resolution, expressed concern that Microsoft has not adequately taken into account how its technologies may enable and reinforce both discrimination and other abuses caused by high-risk government activities – namely, broad data collection, analysis, and surveillance of BIPOC individuals. Accordingly, these stakeholders believe the company should fully assess the potential for its products to amplify these patterns of abuse. More acutely, from the perspective of most civil society representatives interviewed, “but for” Microsoft’s development and licensing of these technologies, some of these abuses would not be as effectively facilitated. These stakeholders expressed particular concern regarding:

• The possibility of flaws in the design of Microsoft products that could reinforce or exacerbate discriminatory targeting of BIPOC and other vulnerable communities by police departments and federal immigration agencies. The stakeholders did not state what these flaws were, but expressed concern that, absent transparency and specific vetting for biases in AI analytical tools, for example, such flaws could exist and enable abusive and discriminatory policing practices;

• The level of support Microsoft may be providing to assist a law enforcement agency’s use of a product; and

• The company’s failure to incorporate specific human rights expectations into the terms of use and other contractual documents to which government licensees are bound.

Further, many of these stakeholders believe that the use of Microsoft products in policing establishes, at a minimum, a direct linkage between the company and attendant adverse human rights impacts stemming from discriminatory law and immigration enforcement. In some cases, they suggested that Microsoft is not just directly linked, but in fact contributes to harms by law enforcement. From their perspective, contribution occurs when government agencies contract with Microsoft to secure advice regarding potential uses of the agency’s Azure platforms, particularly when it is known that the specific law enforcement or immigration agency pursues national security and public safety objectives in a fashion that systematically and disproportionately violates the rights of BIPOC people. ICE stands out as an example of such an agency, although stakeholders also expressed concerns related to an array of local police departments and federal law enforcement agencies.

Finally, stakeholders expressed concern about Microsoft’s perceived lack of transparency. The vast majority of civil society representatives, researchers, and socially responsible investment firms conveyed that they do not have a clear window into the suite of Azure products used by local and federal agencies to enhance public safety and immigration enforcement activities.

96 “Empowering Communities Toward a More Equitable Criminal Justice System,” Microsoft on the Issues Blog, March 2020.
97 “Study: Body-Worn Camera Research Shows Drop in Police Use of Force,” NPR.
This, they contended, is in large part due to Microsoft not sharing detailed information that would provide a more fulsome picture of its role in strengthening the provision of public services.

c. Government Perspective

The Assessors spoke with three individuals with significant first-hand professional experience in technology and its use by government agencies. These stakeholders had previously served as senior officials in local and federal agencies, with their roles involving the advancement of U.S. national security and public safety objectives, and their impacts on civil liberties. This group was composed of a former member of a congressionally established federal board, a former attorney and legal advisor at the U.S. Department of Justice, and a former state county prosecutor.

Although their comments on public safety were general, they argued that civil society’s criticisms of technology companies’ relationships with agencies should be focused on the laws and public policy set by governments, not on the technology providers lawfully developing and providing the technologies. In their view, governments are responsible to citizens, and are legally obligated to protect rights-holders from harms. The interviewees noted, however, their appreciation of the role technology companies can and should play in strengthening BIPOC communities’ enjoyment of rights, arguing that Microsoft’s technology can be used to mitigate the harms that end users might cause. They highlighted that AI-based tools could be developed by agencies and built into a cloud computing system to promote socially inclusive national security and public safety objectives. They explained that, although governments should be pressed to establish stronger privacy and non-discrimination protections governing AI’s use, technology companies should nevertheless continue to tackle the significant challenges related to racial bias in the design of AI products.

d. Assessors’ Analysis

If Microsoft were to be responsible for adverse human rights impacts that emanate from its Azure services as described above, it would, at most, be directly linked to those harms. Under the scenario in which a domestic law enforcement customer is merely licensing products from Microsoft, without more involvement from Microsoft in the development of the products or services, the Assessors do not believe that Microsoft would or could be either causing or contributing to any adverse human rights impacts, as those terms are understood under the UNGPs. If the Assessors were to conclude that Microsoft was directly linked to adverse human rights impacts that emanate from its Azure services, then Microsoft would bear a responsibility to mitigate attendant adverse human rights impacts.

The Assessors believe there are good arguments on both sides regarding direct linkage. On the side in favor of a direct linkage determination, the computing technologies that Microsoft offers—especially when used in combination with each other—are exceptionally powerful, placing solutions that can be used in myriad ways in the hands of organizations that might not otherwise have the resources, skills, or competence to develop them. Without such technology, advanced surveillance and harms to privacy, in addition to adverse and discriminatory decision-making that comes with AI and analytics technology, would not be possible.
On the side against direct linkage, the Assessors see no evidence that these technologies are anything other than platforms on which end user licensees can develop an array of products that can be architected into myriad solutions. In this way, cloud platforms and AI technologies are analytically similar to building materials used for any end purpose. The manner in which those materials are provided will inform the level of connection or attenuation between the provider of the materials and the customer, such that the relationship will exist along a continuum. A construction company could provide materials to a customer, which could subsequently build a warehouse or a detention camp. Alternatively, the construction company could build the warehouse itself; whether the warehouse were then used to store medical equipment or illicit drugs would not be in the purview of the construction company. This relationship between platform provider, on the one hand, and developer, on the other, is more or less independent. Wherever the relationship might fall on the spectrum, a “but for” analysis is difficult to justify.

That said, although the “but for” analysis lacks persuasiveness in terms of allocating responsibility, it does suggest that Microsoft should err on the side of mitigation. Government agencies simply do not have the resources to develop products with the kind of sophistication, complexity, and modularity that Microsoft has and can develop. Microsoft’s products in fact do enable complex and sophisticated government activity and make surveillance and privacy abuses easier. Moreover, on its website and through social media, Microsoft touts partnerships it has forged with police agencies to end discriminatory disparities in law enforcement and deepen community trust. Although it is promoting the beneficial role local law enforcement plays in protecting the rights of the communities they serve, Microsoft is simultaneously aware that abuses in policing are systematic, severe, and could be facilitated by technology. Keeping this knowledge at the forefront of Microsoft’s human rights due diligence as it develops, licenses, and markets products to law enforcement agencies should be paramount.

This is a challenging fact scenario. Application of the UNGPs does not lead to a clear conclusion that there is direct linkage. Analysis could lead to a conclusion that there is not direct linkage and that, therefore, Microsoft does not bear a responsibility to mitigate any attendant harms. Under the circumstances, and consistent with the spirit of respecting human rights, the Assessors encourage Microsoft to adopt the highest human rights protecting standard and, accordingly, mitigate the actual and potential adverse human rights harms that are connected to the use of its products regardless of the precise degree of its responsibility under the UNGPs. Doing so would not only further Microsoft’s own human rights commitments, but also would constitute a best practice. Notably, as detailed above, Microsoft is already taking extensive steps—and setting industry standards—to mitigate harms through its policies and due diligence process.

Finally, it is important to acknowledge that there is a significant substitutability problem arising out of the question of what Microsoft’s appropriate behavior should be in light of its role in the market. The Assessors did not find in discussions with government stakeholders that they would only license solutions from Microsoft.
In the absence of Microsoft’s product offerings, other parties would fill the vacuum—and those parties may not be mindful of human rights impacts. By staying in the market rather than allowing a competitor to be substituted into a commercial relationship with law enforcement, Microsoft has an opportunity to promote best practices. This is especially true if Microsoft assumes responsibility to mitigate adverse human rights impacts.

B. Third Party Technologies

1. The connection between third party technologies and Microsoft products

Given the dynamic and modular nature of the suite of Microsoft’s products, in particular its Azure cloud products, government licensees have a great deal of freedom to independently develop and/or purchase apps and other technologies that carry out an agency’s unique national security and public safety goals. Microsoft provides access to some of these third party apps through Azure Marketplace.98 Microsoft describes the Marketplace as “the premier destination for all of your software needs – certified and optimized to run on Azure.”99 Some third party apps are identified as “preferred solutions,” which are “selected by a team of Microsoft experts and are published by Microsoft partners with deep, proven expertise and capabilities to address specific customer needs in a category, industry, or industry vertical.”100

It is important to note, however, that third party apps are developed independently from Microsoft. Even if the third party is a “preferred partner,” the third party acts independently from Microsoft in designing, developing, and deploying the app to end users. Although Microsoft calls certain applications, such as those by Genetec and Veritone below, “preferred solutions,” Microsoft executives note that this references technical compatibility, and is not a statement of approval regarding specific end uses.

2. Examples

a. Coptivity

Coptivity is “an AI-enabled conversation mobile app” that “delivers immediate dispatch assistance to deputies on patrol.”101 It is “essentially an intelligent voice assist for law enforcement out on the field.”102 As Microsoft describes one use case for the app in a blog post,

Instead of calling dispatch to run a license plate or get background information on a driver – and sometimes waiting from 5 to 30 minutes for the results – officers could query Coptivity to instantly identify a vehicle’s registration status and the owner’s criminal and mental health background.

Microsoft provides additional use case examples in a YouTube video.103 The app was created by developers and IT professionals from the San Diego County Sheriff’s Department through its participation in Microsoft’s HackFest in February 2018. The app uses Azure Government, AI technology, and the Azure Cognitive Services Bot Framework.104

98 See “Azure Marketplace,” Microsoft.
99 Ibid.
100 See “Microsoft Preferred Solutions,” Microsoft.
101 See “Transforming Law Enforcement with the Cloud and AI,” Microsoft, July 2019.
102 Ibid.

b. aiWARE/IDentify

aiWARE is an AI operating system created by the developer Veritone that can be deployed on various cloud platforms, including (but not limited to) Azure Government.
Using the aiWARE platform, Veritone created an app in 2018 called IDentify that the company described as a tool for “intelligent, rapid suspect identification.”105 According to the company’s description,

Built upon Veritone’s proven AI platform, aiWARE, IDentify empowers law enforcement agencies to substantially increase operational effectiveness by streamlining investigative workflows and identifying suspects faster than ever before. Each day, thousands of law enforcement personnel rely upon the enterprise-scale AI capabilities of aiWARE-based applications to accelerate investigations, protect personally identifiable information, and keep our communities safe.106

Veritone primarily markets IDentify to law enforcement agencies. It is offered through Azure Marketplace as a “preferred solution.”107

c. Genetec

Genetec provides an open-platform software, hardware, and cloud-based service designed to help law enforcement agencies streamline and strengthen their public safety response. This service – the “Genetec Security Center” platform – is offered on Azure Marketplace as a “preferred solution.” The Security Center is a “unified platform that blends IP video surveillance, access control, automatic license plate recognition, intrusion detection, and communications” in a single solution.108 Video surveillance includes intelligent cloud-based, closed-circuit television (“CCTV”), and other forms of video-based activity monitoring. Genetec’s surveillance software has been licensed to law enforcement departments in U.S. cities, such as Atlanta.109

103 See “Improving Situational Awareness in Law Enforcement with Microsoft AI,” Microsoft.
104 See “Transforming Law Enforcement with the Cloud and AI,” Microsoft, July 2019.
105 See “Applications: Identify,” Veritone.
106 Ibid.
107 https://azuremarketplace.microsoft.com/en-us/marketplace/apps/veritoneinc.veritone_identify?tab=Overview
108 See https://azuremarketplace.microsoft.com/en-us/marketplace/apps/genetec.securitycenter?tab=overview
109 “How Securing Atlanta for the Big Game Became an Access Management Initiative,” Case Study, Assa Abloy.

d. Offender 360

Offender 360 was developed in 2014 by Tribridge, a technology services firm specializing in business applications and cloud solutions.110 In designing Offender 360, Tribridge used the cloud-based Microsoft Dynamics Customer Relationship Management (“CRM”) software that is hosted on Azure Government.111 Offender 360 brings together previously siloed information databases to create a holistic view of an incarcerated individual. Individuals are then compared against others in the database and categorized on this basis. Offender 360 is intended to create precise data profiles that identify individuals most likely to benefit from court diversion programs, education, skills, and job preparedness programs offered in prisons, and coordination of a person’s reentry into society.112 In addition, it is designed to monitor and manage prison populations, and reduce violence, recidivism, and other risks to the safety of both incarcerated individuals and corrections officers. The State of Illinois purchased licenses for the use of the CRM platform and made use of Microsoft’s technical support agreements to train their correctional staff on how to customize applications.113 It has since been adopted and further developed by other correctional facilities across the country.114

3. Causation Analysis of Third Party Apps
a. Microsoft’s Perspective

From Microsoft’s perspective, third party apps are developed by government customers and third-party companies that serve the public sector independently of Microsoft. Microsoft does not believe it has a direct linkage to such app developers that triggers responsibility for downstream harms within the digital ecosystem that uses its products. Microsoft has noted that it is an upstream platform developer and that its products can be used for countless applications created by the licensees. Moreover, Microsoft believes that the above cases show how third parties are developing public safety tools to uphold the freedoms and human rights, including the right to safety, of all rights-holders. Microsoft notes that government agencies use Azure products to develop and integrate apps that improve an array of social services an

Killexams : How to Convert Word to PDF With Embedded Links

Daniel Hatter began writing professionally in 2008. His writing focuses on topics in computers, Web design, software development and technology. He earned his Bachelor of Arts in media and game development and information technology at the University of Wisconsin-Whitewater.

Killexams : Fundamentals of Atmospheric Modeling

This title is supported by one or more locked resources. Access to locked resources is granted exclusively by Cambridge University Press to instructors whose faculty status has been verified. To gain access to locked resources, instructors should sign in to or register for a Cambridge user account.

Please use locked resources responsibly and exercise your professional discretion when choosing how you share these materials with your students. Other instructors may wish to use locked resources for assessment purposes and their usefulness is undermined when the source files (for example, solution manuals or test banks) are shared online or via social networks.

Supplementary resources are subject to copyright. Instructors are permitted to view, print or download these resources for use in their teaching, but may not change them or use them for commercial gain.

If you are having problems accessing these resources, please contact lecturers@cambridge.org.

Killexams : How to show or hide Microsoft Print to PDF printer in Windows 11/10

The Microsoft Print to PDF option lets you save a webpage as a PDF from any browser, such as Edge or Chrome, and it is a built-in tool in Windows 11/10. If you want to show or hide the Microsoft Print to PDF printer in Windows 11/10, you can follow this tutorial. There are multiple methods you can use, and each of them gets the same job done.

How to show or hide Microsoft Print to PDF printer in Windows 10

To show or hide the Microsoft Print to PDF printer, use one of these methods:

  1. Using Command Prompt
  2. Using Windows PowerShell
  3. Using Devices and Printers
  4. Using Windows Settings
  5. Using Windows Features

To learn more about these methods, continue reading.

1] Using Command Prompt


You can use the Command Prompt to remove or hide the Microsoft Print to PDF option when saving a webpage to PDF or printing a document in Windows 10. Not only Microsoft Print to PDF, but almost any visible or non-connected printer can be removed using this Command Prompt method.

To get started, open Command Prompt on your computer. You can search for cmd in the Taskbar search box and click on the relevant result. After that, type the following command and press Enter:

printui.exe /dl /n "Microsoft Print to PDF"

Once done, you will no longer find Microsoft Print to PDF while printing a document or anything else. If you want to hide another printer, replace Microsoft Print to PDF with the exact name of the desired printer.
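If you are not sure of the exact display name to pass to printui.exe, you can list the printers Windows currently knows about first. This is a minimal sketch, run from the same Command Prompt window; note that wmic is deprecated and may be missing on newer Windows 11 builds, in which case the PowerShell Get-Printer cmdlet shown in the next section provides the same list:

wmic printer get name

Copy the name exactly as it appears in the output, including any spaces, and use it in place of Microsoft Print to PDF in the printui.exe command above.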

2] Using Windows PowerShell


Like Command Prompt, Windows PowerShell can hide or remove the Microsoft Print to PDF option from the list of printers. The same command (with a minor change to the printer name) can remove any other printer that you no longer use on your Windows 10 computer.

To get started with the Windows PowerShell method, search for powershell in the Taskbar search box and click on the relevant search result. Then, type the following command and press Enter:

Remove-Printer -Name "Microsoft Print to PDF"
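Before and after running Remove-Printer, you can confirm whether the printer is still registered by using the Get-Printer cmdlet from the same built-in PrintManagement module. A minimal sketch, assuming an elevated PowerShell window, since removing a printer may require administrator rights:

# List every installed printer along with its driver
Get-Printer | Select-Object Name, DriverName

# Check specifically for Microsoft Print to PDF; no output means it has been removed
Get-Printer -Name "Microsoft Print to PDF" -ErrorAction SilentlyContinue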

Note: If you use the Command Prompt or Windows PowerShell method to hide or remove the Microsoft Print to PDF printer, you have to use the Windows Features panel to remove and re-add the feature when you want to get the printer back. In other words, you can follow the fifth method listed above.

Read: Print to PDF is missing

3] Using Devices and Printers

Devices and Printers, in Control Panel, is the panel where you can find all connected devices, such as monitors, mice, speakers, fax machines, and printers. You can manage those devices from this panel without having to navigate to different paths for different devices.

As mentioned, you can add a new device or remove one when needed with the help of Devices and Printers. In other words, it is possible to remove the Microsoft Print to PDF printer from Windows 10 using this section of Control Panel.

To get started, search for control panel in the Taskbar search box and open Control Panel by clicking the relevant result. Make sure that View by is set to Large icons, then click on the Devices and Printers option.

Next, right-click on the Microsoft Print to PDF printer and select the Remove device option.


Then, click on the Yes button in the confirmation window.

That’s all! Once the process is complete, the Microsoft Print to PDF printer is no longer listed.

4] Using Windows Settings

This is probably the easiest method, as you can complete it within moments. Like the other methods, it lets you remove any installed printer from the list of printers with the help of Windows Settings.

To get started, open Windows Settings on your computer. You can open the Start Menu and click the Settings gear icon, or press the Win+I keyboard shortcut.

Next, go to the Devices section and switch to the Printers & scanners tab. Here you can find all connected and non-connected printers on the right-hand side. Select the Microsoft Print to PDF option and click the Remove device button.


Then, click the Yes option in the confirmation popup.

Read: List Printers using the same printer driver, separately.

5] Using Windows Features

All the aforementioned methods help you hide the Microsoft Print to PDF printer temporarily. However, if you want to remove the entire package or feature, remove it from the Windows Features panel.

For that, you can search for windows features in the Taskbar search box and click on the Turn Windows features on or off option in the search result.

Remove the tick from the Microsoft Print to PDF checkbox and click the OK button.


It will take some time to complete the process. Once done, you have to restart your computer to finish it.

After restarting, you won’t find the Microsoft Print to PDF printer in Windows 11/10.
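If you prefer to script this step instead of using the Windows Features dialog, the same optional feature can be toggled from an elevated PowerShell window. This is a hedged sketch: Printing-PrintToPDFServices-Features is the feature name commonly used for Microsoft Print to PDF, but run the first command to confirm the exact name on your own machine before disabling or re-enabling anything.

# Confirm the feature name for Microsoft Print to PDF on this PC
Get-WindowsOptionalFeature -Online | Where-Object FeatureName -like "*PrintToPDF*"

# Turn the feature off (the Microsoft Print to PDF printer disappears after a restart)
Disable-WindowsOptionalFeature -Online -FeatureName "Printing-PrintToPDFServices-Features" -NoRestart

# Turn it back on later when you want the printer again
Enable-WindowsOptionalFeature -Online -FeatureName "Printing-PrintToPDFServices-Features" -NoRestart

Both commands need an elevated (administrator) PowerShell session, and a restart is still required for the change to take effect.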

That’s all! Hope these methods helped you.

Read: Ways to Enable or Disable Optional Windows Features.

Killexams : Microsoft Edge 102.0.1245.41 has security fixes and solves PDF printing bug


Microsoft released two small updates for its Edge browser over the weekend. The company sent out a security update on Friday and another one today. While the Friday update fixed a security issue that affected the Edge browser, today’s update addressed security issues that affected all Chromium-based web browsers. Additionally, the update seems to have fixed the problem that prevented PDF files from being printed when accessed through the Edge browser.

Edge 102.0.1245.41 for the Stable release channel is labeled as a maintenance update that fixes several vulnerabilities. Microsoft has yet to update the Release Notes. However, the company has previously disclosed the following vulnerabilities affecting the Chromium and Edge browsers:

  • Chromium: CVE-2022-2007 Use after free in WebGPU
  • Chromium: CVE-2022-2008 Out of bounds memory access in WebGL
  • Chromium: CVE-2022-2010 Out of bounds read in compositing
  • Chromium: CVE-2022-2011 Use after free in ANGLE

The aforementioned security vulnerabilities affect all browsers that are based on the Chromium engine, which includes Google Chrome. Google confirmed it had patched these vulnerabilities in the Chromium browser on June 9, 2022. Some of the security issues are rated ‘High’. Neither Microsoft nor Google has offered any detailed information about the vulnerabilities yet.

In addition to security fixes, the latest update also addresses a bug with PDF printing. The problem had existed since Edge was updated to version 102.0.1245.30. Although it wasn't widespread, a few network administrators reported that all PCs within a network couldn't print PDF files from the updated browser.

Source: Günter Born

Killexams : Microsoft Roars to New High: ETFs to Tap the Strength

Microsoft saw a solid earnings estimate revision of 34 cents for the fiscal year (ending June 2023) and 23 cents for the next fiscal year (ending June 2024) over the past 60 days.

Killexams : Preparing for the AZ-900 Microsoft Azure Fundamentals Exam

In this course, you will prepare for the AZ-900 Microsoft Azure Fundamentals exam. You will refresh your knowledge of cloud concepts, Microsoft Azure services, Microsoft Azure workloads ...