Full AZ-140 Question bank from killexams.com

It is our specialty to offer updated, valid, and current AZ-140 real exam questions that are verified to work in the actual AZ-140 exam. Tested Configuring and Operating Windows Virtual Desktop on Microsoft Azure questions and answers are available in the download section of the website for users to get in one simple click. The AZ-140 practice test is also updated accordingly.

Exam Code: AZ-140 Practice exam 2022 by Killexams.com team
AZ-140 Configuring and Operating Windows Virtual Desktop on Microsoft Azure

Exam Number: AZ-140
Exam Name: Configuring and Operating Windows Virtual Desktop on Microsoft Azure

Exam TOPICS

Exam AZ-140: Configuring and Operating Microsoft Azure Virtual Desktop
Candidates for this exam are administrators with subject matter expertise in planning, delivering, and managing virtual desktop experiences and remote apps, for any device, on Azure.

Responsibilities for this role include deploying virtual desktop experiences and apps to Azure. Professionals in this role deliver applications on Azure Virtual Desktop and optimize them to run in multi-session virtual environments. To deliver these experiences, they work closely with the Azure administrators and architects, along with Microsoft 365 Administrators.

Candidates for this exam should have experience in Azure technologies, including virtualization, networking, identity, storage, backups, resilience, and disaster recovery. They should understand on-premises virtual desktop infrastructure technologies as they relate to migrating to Azure Virtual Desktop. These professionals use the Azure portal and Azure Resource Manager templates to accomplish many tasks. This role may use PowerShell and Azure Command-Line Interface (CLI) for more efficient automation.

NOTE: The bullets that appear below each of the skills measured are intended to illustrate how we are assessing that skill. This list is NOT definitive or exhaustive.
Plan an Azure Virtual Desktop Architecture (10-15%)
Implement an Azure Virtual Desktop Infrastructure (25-30%)
Manage Access and Security (10-15%)
Manage User Environments and Apps (20-25%)
Monitor and Maintain an Azure Virtual Desktop Infrastructure (20-25%)

Plan an Azure Virtual Desktop Architecture (10-15%)
Design the Azure Virtual Desktop architecture
 assess existing physical and virtual desktop environments
 assess network capacity and speed requirements for Azure Virtual Desktop
 recommend an operating system for an Azure Virtual Desktop implementation
 plan and configure name resolution for Active Directory (AD) and Azure Active Directory Domain Services (Azure AD DS)
 plan a host pools architecture
 recommend resource groups, subscriptions, and management groups
 configure a location for the Azure Virtual Desktop metadata
 calculate and recommend a configuration for performance requirements
 calculate and recommend a configuration for Azure Virtual Machine capacity requirements
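The capacity-planning bullets above lend themselves to a simple worked example. The sketch below estimates how many multi-session hosts a given number of concurrent users needs; the users-per-vCPU ratio and host size are illustrative assumptions, not official Microsoft sizing guidance.

```python
# Hypothetical sizing sketch for an Azure Virtual Desktop host pool.
# The users-per-vCPU ratio and vCPU count below are illustrative
# assumptions, not official Microsoft sizing guidance.

import math

def estimate_session_hosts(concurrent_users: int,
                           users_per_vcpu: float = 6.0,
                           vcpus_per_host: int = 8) -> int:
    """Return the number of session hosts needed for a multi-session pool."""
    if concurrent_users <= 0:
        return 0
    users_per_host = users_per_vcpu * vcpus_per_host
    return math.ceil(concurrent_users / users_per_host)

# Example: 500 concurrent users on 8-vCPU hosts at 6 users per vCPU.
print(estimate_session_hosts(500))  # -> 11
```

A real recommendation would also weigh workload type (light vs. power users), profile storage throughput, and headroom for host maintenance.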
Design for user identities and profiles
 select an appropriate licensing model for Azure Virtual Desktop based on requirements
 recommend an appropriate storage solution (including Azure NetApp Files versus Azure Files)
 plan for Azure Virtual Desktop client deployment
 plan for user profiles
 recommend a solution for network connectivity
 plan for Azure AD Connect for user identities
Implement an Azure Virtual Desktop Infrastructure (25-30%)
Implement and manage networking for Azure Virtual Desktop
 implement Azure virtual network connectivity
 manage connectivity to the internet and on-premises networks
 implement and manage network security
 manage Azure Virtual Desktop session hosts by using Azure Bastion
 monitor and troubleshoot network connectivity
Implement and manage storage for Azure Virtual Desktop
 configure storage for FSLogix components
 configure storage accounts
 configure disks
 create file shares
Create and configure host pools and session hosts
 create a host pool by using the Azure portal
 automate creation of Azure Virtual Desktop host and host pools by using PowerShell, Command-Line Interface (CLI), and Azure Resource Manager templates
 create a host pool based on Windows client or Windows Server session hosts
 configure host pool settings
 manage licensing for session hosts that run Windows client or Windows Server
 assign users to host pools
 apply OS and application updates to a running Azure Virtual Desktop host
 apply security and compliance settings to session hosts
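As a rough illustration of the automation bullet above, the sketch below assembles an Azure CLI call for a pooled host pool as a dry run. The `az desktopvirtualization hostpool create` command group and flag names are assumptions about the CLI extension and should be verified against your installed version.

```python
# Hypothetical sketch: build an Azure CLI command to create a pooled host
# pool. The command group and flags are assumptions; verify them against
# your CLI version before running anything for real.

import shlex

def build_hostpool_command(resource_group: str, name: str, location: str,
                           max_sessions: int = 10) -> str:
    args = [
        "az", "desktopvirtualization", "hostpool", "create",
        "--resource-group", resource_group,
        "--name", name,
        "--location", location,
        "--host-pool-type", "Pooled",
        "--load-balancer-type", "BreadthFirst",
        "--preferred-app-group-type", "Desktop",
        "--max-session-limit", str(max_sessions),
    ]
    return " ".join(shlex.quote(a) for a in args)

# Print the command instead of executing it (dry run).
print(build_hostpool_command("rg-avd", "hp-pooled-01", "eastus"))
```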
Create and manage session host images
 create a gold image
 modify a session host image
 install language packs in Azure Virtual Desktop
 deploy a session host by using a custom image
 plan for image update and management
 create and use a Shared Image Gallery
 troubleshoot OS issues related to Azure Virtual Desktop
Manage Access and Security (10-15%)
Manage access
 plan and implement Azure roles and role-based access control (RBAC) for Azure Virtual Desktop
 manage local roles, groups and rights assignment on Azure Virtual Desktop session hosts
 configure user restrictions by using Azure AD group policies and AD policies
Manage security
 plan and implement Conditional Access policies for connections to Azure Virtual Desktop
 plan and implement multifactor authentication in Azure Virtual Desktop
 manage security by using Azure Security Center
 configure Microsoft Defender Antivirus for session hosts
Manage User Environments and Apps (20-25%)
Implement and manage FSLogix
 plan for FSLogix
 install and configure FSLogix
 configure Profile Containers
 configure Cloud Cache
 migrate user profiles to FSLogix
Configure user experience settings
 configure Universal Print
 configure user settings through group policies and Endpoint Manager policies
 configure persistent and non-persistent desktop environments
 configure Remote Desktop Protocol (RDP) properties on a host pool
 configure session timeout properties
 troubleshoot user profile issues
 troubleshoot Azure Virtual Desktop clients
Install and configure apps on a session host
 configure dynamic application delivery by using MSIX App Attach
 implement application masking
 deploy an application as a RemoteApp
 implement and manage OneDrive for Business for a multi-session environment
 implement and manage Microsoft Teams AV Redirect
 implement and manage browsers and internet access for Azure Virtual Desktop sessions
 create and configure an application group
 troubleshoot application issues related to Azure Virtual Desktop
Monitor and Maintain an Azure Virtual Desktop Infrastructure (20-25%)
Plan and implement business continuity and disaster recovery
 plan and implement a disaster recovery plan for Azure Virtual Desktop
 design a backup strategy for Azure Virtual Desktop
 configure backup and restore for FSLogix user profiles, personal virtual desktop infrastructures (VDIs), and golden images
Automate Azure Virtual Desktop management tasks
 configure automation for Azure Virtual Desktop
 automate management of host pools, session hosts, and user sessions by using PowerShell and Azure Command-Line Interface (CLI)
 implement autoscaling in host pools
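A minimal sketch of the scaling decision such automation might make, assuming a simple target of active sessions per powered-on host (the threshold values are invented for illustration):

```python
# Hypothetical autoscaling decision for a host pool: compare current load
# against a target sessions-per-host ratio. Thresholds are illustrative only.

import math

def hosts_needed(active_sessions: int, target_sessions_per_host: int = 8,
                 min_hosts: int = 1) -> int:
    """Number of session hosts that should be powered on for the current load."""
    needed = math.ceil(active_sessions / target_sessions_per_host)
    return max(needed, min_hosts)

def scale_action(running_hosts: int, active_sessions: int) -> str:
    """Return a scale-out, scale-in, or no-op decision."""
    needed = hosts_needed(active_sessions)
    if needed > running_hosts:
        return f"scale-out:+{needed - running_hosts}"
    if needed < running_hosts:
        return f"scale-in:-{running_hosts - needed}"
    return "no-op"

print(scale_action(running_hosts=3, active_sessions=40))  # -> scale-out:+2
```

A production scaling plan would also respect ramp-up/ramp-down schedules and drain sessions before deallocating hosts.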
Monitor and manage performance and health
 monitor Azure Virtual Desktop by using Azure Monitor
 monitor Azure Virtual Desktop by using Azure Advisor
 customize Azure Monitor workbooks for Azure Virtual Desktop monitoring
 optimize session host capacity and performance
 manage active sessions and application groups
 monitor and optimize autoscaling results

Getting Up and Running with Windows Autopatch

The Windows Autopatch service, which allows enterprises to automatically roll out updates for Windows 10, Windows 11, Microsoft Edge, and Microsoft 365 software, is now live, Microsoft said this week. Autopatch is intended to streamline updating operations and reduce the time it takes for systems to be patched. Originally announced in April, the feature has been in public preview since May.

"Essentially Microsoft engineers use the Windows Update for Business client policies and deployment service tools on your behalf," wrote Lior Bela, senior product marketing manager at Microsoft, on the Microsoft IT Pro blog. "The service creates testing rings and monitors rollouts—pausing and even rolling back changes where possible."

This Tech Tip summarizes the prerequisites for using Autopatch and instructions on enabling the new feature.

Very Specific Prerequisites

Customers must have Windows 10/11 Enterprise E3 or E5 licenses. The organization must also have Azure Active Directory Premium and Microsoft Intune. A proxy or firewall that uses TLS 1.2 is also required.

"Azure Active Directory must either be the source of authority for all user accounts, or user accounts must be synchronized from on-premises Active Directory using the latest supported version of Azure Active Directory Connect to enable Hybrid Azure Active Directory join," Microsoft said in the deployment guide.

The endpoints that will be enrolled into Windows Autopatch must be managed by either Microsoft Intune or Configuration Manager Co-Management. Intune must be set as the mobile device management (MDM) authority or co-management must be turned on and enabled on the endpoints. The endpoints being enrolled must also have connected with Microsoft Intune within the last 28 days in order to be registered with Autopatch.
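The 28-day check-in rule above can be expressed as a small filter. The device records below are invented for illustration; they are not the real Intune or Microsoft Graph schema.

```python
# Hypothetical filter for the Autopatch registration rule: an endpoint must
# have checked in with Intune within the last 28 days. The device dicts are
# illustrative, not the real Intune/Graph schema.

from datetime import datetime, timedelta

def eligible_for_autopatch(devices, now=None, window_days=28):
    """Return names of devices whose last check-in falls within the window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    return [d["name"] for d in devices if d["last_checkin"] >= cutoff]

now = datetime(2022, 7, 12)
devices = [
    {"name": "PC-01", "last_checkin": datetime(2022, 7, 1)},  # 11 days ago
    {"name": "PC-02", "last_checkin": datetime(2022, 5, 1)},  # too stale
]
print(eligible_for_autopatch(devices, now=now))  # -> ['PC-01']
```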

The endpoints, which must be corporate-owned (bring-your-own-device is not currently supported), must run 64-bit editions of Windows 10/11 Pro, Windows 10/11 Enterprise, or Windows 10/11 Pro for Workstations. In addition, Windows Autopatch will support updating of Windows 365 cloud PCs starting in mid-July.

Configuring the Environment

Since Autopatch is cloud-based, there are specific Microsoft services that must be available at all times. The four URLs that must be on the allowed list of the proxy or firewall are mmdcustomer.microsoft.com, mmdls.microsoft.com, logcollection.mmd.microsoft.com, and support.mmd.microsoft.com.
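A sketch of how an allowlist check for these endpoints might look, matching a hostname exactly or as a subdomain of an allowed entry (the helper is illustrative and not part of any Microsoft tooling):

```python
# Hypothetical allowlist check for the four Autopatch service endpoints.
# A hostname matches if it equals an allowed entry or is a subdomain of one.

AUTOPATCH_ALLOWLIST = [
    "mmdcustomer.microsoft.com",
    "mmdls.microsoft.com",
    "logcollection.mmd.microsoft.com",
    "support.mmd.microsoft.com",
]

def is_allowed(host: str, allowlist=AUTOPATCH_ALLOWLIST) -> bool:
    host = host.lower().rstrip(".")
    return any(host == entry or host.endswith("." + entry)
               for entry in allowlist)

print(is_allowed("mmdls.microsoft.com"))       # -> True
print(is_allowed("evil-mmdls.microsoft.com"))  # -> False
```

Note the leading dot in the subdomain test; a plain suffix check would wrongly admit look-alike hostnames such as the second example.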

The deployment guide lists other firewall configurations, IP ranges, and port requirements for Azure Active Directory, Microsoft Intune, Windows Update for Business, and individual Microsoft applications.

Azure Active Directory must have security defaults enabled and must not have any user names that conflict with the ones Autopatch needs to use: MsAdmin, MsAdminInt, and MsTest. Azure AD must also be configured so that conditional access policies and multifactor authentication are not assigned to all users; in practice, the Autopatch service accounts cannot be subject to multifactor authentication.

"Your conditional access policies must not prevent our service accounts from accessing the service and must not require multi-factor authentication," Microsoft said.

How Do I Get Started?

Customers with Windows Enterprise E3 and E5 licenses will find Tenant Administration in the Microsoft Endpoint Manager admin center. Selecting the Tenant enrollment option in the Windows Autopatch section begins the process of setting up and configuring Autopatch.

But first, Microsoft will run the online Readiness assessment tool to check that the settings in Microsoft Intune and Azure Active Directory are properly configured to work with Windows Autopatch. If issues are found, the administrator must fix them before continuing.

Once everything is ready, the tool will show an Enroll button to kick off the enrollment. During the enrollment process, administrators will be guided to create the policies, groups, and accounts necessary to run Autopatch.

"Once you've enrolled devices into Autopatch, the service does most of the work. But through the Autopatch blade in Microsoft Endpoint Manager, you can fine-tune ring membership, access the service health dashboard, generate reports, and file support requests," Microsoft said.

What Sysadmins Can’t Do

  • It is not possible to schedule updates to roll out on certain days or times. The decision of when to move to the next ring is also not configurable.
  • Once a device is registered with Windows Autopatch, updates are rolled out to it according to its ring assignment. Currently, there is no support for individual device-level control.
  • Windows Autopatch doesn't support managing update ring membership using your Azure AD groups.
  • There is currently no programmatic access to Autopatch via PowerShell.
Tue, 12 Jul 2022 12:33:00 -0500 https://www.darkreading.com/dr-tech/getting-up-and-running-with-windows-autopatch

How to block Microsoft Bing Search installation in Office 365

Microsoft is rolling out an update for Office 365 ProPlus that will install an extension in Chrome. That extension changes the default search engine to Bing. While it does look aggressive, Microsoft is offering an option to turn off the installation through Microsoft Endpoint Configuration Manager or Microsoft Intune. In this post, we will show how you can block the Microsoft Bing Search installation in Office 365. In case you have already installed it, we have also shown how you can remove it post-installation.


What is Microsoft Search in Bing

If you are not aware of Microsoft Search in Bing, it is Microsoft's enterprise search solution. Customers using Office can leverage it for contextual, work-related information using data sources in Office 365. Bing will also search the internet, but if a query is about work, it can pull data from SharePoint, Microsoft OneDrive for Business, and Exchange.

It uses Microsoft Graph to make search useful for everyone in the organization. You can find a person, find the location of your desk, locate a document, and so on. More on this is available on the Office Blog.

Before we start with the solution, here are some details

  • As of now, it applies to new and existing Office 365 ProPlus installations in Australia, Canada, France, Germany, India, the United Kingdom, and the United States.
  • If you already use Bing as the default search engine, it will not get installed.
  • The extension will be released to the Monthly Channel in late February 2020. Release for the Semi-Annual Channel (Targeted), and Semi-Annual Channel is coming soon.

While it mostly makes sense for customers who use Office 365 in their organization, if you feel it is being forced on you, follow these methods. Microsoft is rolling this out in a phased manner, so you may not see it right away, but a future installation or update will install it.

Depending on how you use it in the Enterprise, you can choose to remove it altogether or decide only to change the default search engine.

  1. Temporarily switch to Bing
  2. Exclude from Installation
  3. Change Default Search Engine
  4. Remove the Extension from the computer

The last two items on this list cover post-installation scenarios.

1] Temporarily switch to Bing

Since you know when it is coming, the IT admin can take measures to switch to Bing temporarily. Once the Office 365 ProPlus update is complete, switch back to your favorite search engine.

2] Exclude Microsoft Bing Search before Office 365 installation or update

(Screenshot: the Bing toggle in Configuration Manager.)

If you use Microsoft Endpoint Configuration Manager or Microsoft Intune, you can exclude the extension from being installed by using the Office Deployment Tool or Group Policy. While you can download the XML or policy from here, this is how it looks in the Configuration Manager: there is a clear option to toggle off the choice that changes the default search engine to Microsoft Search in Bing.

3] Change Default Search Engine from Bing to Google

(Screenshot: removing the Microsoft Search in Bing extension.)

The chances are that you use Microsoft Search in Bing, but not as your default search engine. In that case, you can click the search icon at the top left of Chrome and toggle off Bing as your default search engine. Restart Chrome to apply the change. This gives users the freedom to search the web using Chrome, and when they want to use Bing, they can always do so.

4] Remove the Extension from the computer

Microsoft installs this extension through a software update. So if you are not planning to use Microsoft Search in Bing at all, it is best to remove the extension. Once uninstalled, it will never be installed again by a future update of Office 365 ProPlus. The default search engine will revert to the previous choice.

There are two ways of doing this. Make sure the user account has local administrator rights.

Use Control Panel

  1. Go to Control Panel > Programs > Programs and Features
  2. Locate the DefaultPackPC program, and choose to uninstall it.
  3. You can do the same by going to Windows 10 Settings > Apps > Apps & features.

Using Command Prompt

Open Command Prompt with admin privileges

Type the following command and press Enter to execute it.

"C:\Program Files (x86)\Microsoft\DefaultPackMSI\MainBootStrap.exe" uninstallAll

If you want to remove it from multiple computers, then you have two options. Either distribute a BAT file or deploy that command to various devices in your organization. You can use a script, Configuration Manager, or some other enterprise software deployment tool.
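The distribute-a-command idea can be sketched as a dry run that pairs each target machine with the documented uninstall command. The remote-execution step is intentionally left out; in practice you would hand this to Configuration Manager, Intune, or another deployment tool.

```python
# Hypothetical sketch: generate the documented uninstall command for a batch
# of machines as a dry run. Actual remote execution would go through
# Configuration Manager, Intune, or another deployment tool.

UNINSTALL_CMD = (
    r'"C:\Program Files (x86)\Microsoft\DefaultPackMSI\MainBootStrap.exe"'
    " uninstallAll"
)

def plan_uninstall(machines):
    """Return one (machine, command) pair per target machine."""
    return [(m, UNINSTALL_CMD) for m in machines]

for machine, cmd in plan_uninstall(["PC-01", "PC-02"]):
    print(f"{machine}: {cmd}")
```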

Microsoft is likely to face a lot of heat for this. The only problem I see here is that the option to install Microsoft Search in Bing is turned on by default. The official documentation clearly explains the measures IT admins can take to skip the installation. Microsoft would have been fine had it stayed away from changing the default search engine; if an enterprise needs Microsoft Search in Bing for Chrome, it will eventually deploy it.

Wed, 06 Jul 2022 18:18:00 -0500 https://www.thewindowsclub.com/block-microsoft-bing-search-installation-in-office-365

Microsoft recognized as a Leader in UEM Software 2022 IDC MarketScape reports

Competition for talent has increased pressure to lead in the digital space, and business decisions now weigh user experience for employees heavily among costs and benefits. Workers insist on experiences that mirror their personal experiences, often on their own devices. As enterprise computing has expanded beyond the cubicle, the need to manage the ensuing explosion of complexity, especially when it comes to device security, has raised the bar for technology and information business decision-makers.

Microsoft has heard consistently that meeting these expanding needs with limited resources is job one. As new solutions seem to emerge as rapidly as the problem itself expands, providing a consistent, proven, centralized portal for endpoint management is how Microsoft aims to be the partner of choice in this space.

The scale of Microsoft—and our investments in endpoint management and endpoint security—affords our customers peace of mind that our solutions will continue to evolve alongside the demands and threats they face. We deliver advanced end-to-end cross-cloud, cross-platform security solutions, which integrate more than 50 different categories across security, compliance, identity, device management, and privacy, informed by more than 24 trillion threat signals we see each day. Proof of customers’ trust and peace of mind: the Microsoft Security business grew more than 45 percent year-over-year, totaling USD15 billion of annual revenue.1


This scale also allows Microsoft to bring a unified endpoint management solution that is tailored for customers’ challenges, especially the transformation to cloud management. Microsoft is recognized as a Leader in the Unified Endpoint Management Software 2022 Vendor Assessment IDC MarketScape report, including Ruggedized/Internet of Things Device Deployments and Small and Midsize Businesses. Microsoft Endpoint Manager is an integrated solution that simplifies management across multiple operating systems, cloud, on-premises, mobile, desktop, and virtualized endpoints.

This quote from the report may be of special interest to customers trying to do more with less:

Integration is a key aspect of the Microsoft Endpoint Manager offering, and the product ties into a wide range of other tools from the vendor, including Office 365 apps, Teams, and OneDrive as well as Microsoft security products including Microsoft Defender for Endpoint (endpoint security) and Microsoft Sentinel (security information and event management).” 

Managing more platforms

In short, IT administrators get more done in one place, with simplified management of multiple operating systems, cloud, on-premises, mobile, desktop, and virtualized endpoints. The IDC MarketScape report calls out the topic of enterprise management for macOS endpoints:

[Microsoft Endpoint Manager] includes the ability to apply granular policies to Mac software distribution and deployments, broader support for macOS device configuration profiles, and user-based policy enforcement customization.

In 2022, we will further expand across platforms by releasing enhanced support for devices running Android Open Source Project (AOSP), such as Oculus virtual reality (VR) headsets, as well as enable conditional access policies and device settings for Linux desktops. This way, IT can protect data on any devices by securing user apps; configuring, securing, monitoring, and updating apps remotely; and reducing risks with the combination of identity-based management and Microsoft Security.

User-focused experiences

We hear from customers that powerful software is great, but a frictionless experience is better. We try to bring this learning into our product development and continue working to improve not just the control IT admins have over endpoints, but also how they interact with them. How can data be turned into insights? How does portfolio visibility contribute to security?

We understand, too, that users are vested stakeholders in the process, and their satisfaction often determines whether IT can notch up a win. Access policies that are too strict can frustrate users or lead to insecure workarounds, so security and usability must be balanced. We know that the line between home and work is blurred, as is the line between business and personal devices, and we try to improve the ways we can help users do their work where and how they want.

The IDC MarketScape report also recognizes our focus on endpoint analytics designed to make suggestions instead of merely presenting data, and to flag anomalies in the continuous stream of health, compliance, and security signals. When an IT admin can take proactive steps instead of making reactive fixes, we notch up a win for everyone.

The improved experience is well described by Grupo Bancolombia, who adopted Endpoint Manager for more flexibility to support employees in the cloud so they can work from anywhere in a secure way. Read the case study to learn more.

This quote from Santiago Santacruz Pareja, Grupo Bancolombia IT Infrastructure Engineer, encapsulates the improved experience for users and IT pros:

We quickly rolled out BitLocker to 23,000 machines, but the best part was that it was invisible to employees—they didn’t notice any changes to their device or daily work, and we succeeded in protecting their data.

Learn more

You’re invited to read the full report or view a snapshot of the IDC MarketScape report below. Keep up with ongoing developments on Unified Endpoint Management (UEM) by visiting the Microsoft Endpoint Manager Tech Community blog and exploring Microsoft Endpoint Manager.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

We thank our customers and partners for being on this journey with us.

Positioning of the IDC MarketScape of worldwide software vendors across Capabilities and Strategies for Unified Endpoint Management. Categories include participants, contenders, major players and leaders with Microsoft showing as a leader ahead of Vmware in Strategies.

IDC MarketScape: Worldwide Unified Endpoint Management Software 2022 Vendor Assessment, Doc #US48325122, May 2022.


1Microsoft Fiscal Year 2022 Second Quarter Earnings Conference Call, Microsoft. January 25, 2022.

Wed, 13 Jul 2022 05:00:00 -0500 https://www.microsoft.com/security/blog/2022/07/13/microsoft-recognized-as-a-leader-in-uem-software-2022-idc-marketscape-reports/

Global Configuration Management Market to be Worth USD 5.06 Billion at A CAGR Of 15.30% by the Year 2027

Global Configuration Management Market is Expected to Reach USD 5.06 Billion By 2027 at A CAGR Of 15.30 percent.

Maximize Market Research has published a report on the Global Configuration Management Market, which is expected to grow at a CAGR of 15.3% from 2022 to 2027, from USD 2.15 billion in 2021 to USD 5.06 billion in 2027.

The market is expected to have significant growth over the coming years due to consistently expanding demand. Technologies for Global Configuration Management also assist in lowering operational costs, giving a clearer picture of the total cost of ownership of the current IT infrastructure. Additionally, during the software development and deployment phases, Global Configuration Management solutions are used to assist developers in keeping track of software source code, revisions, and documentation.

Request Free Sample:@https://www.maximizemarketresearch.com/request-sample/62980

Global Configuration Management Market Dynamics:

The Global Configuration Management market is growing quickly on a global scale. Solutions for Global Configuration Management can enhance business processes while preserving the software’s agility to adapt to the continuously shifting business and market demands.

Therefore, market growth is mostly driven by industry demand for IT automation and Global Configuration Management software to improve applications and infrastructure supporting digital and customer-facing applications in dynamic environments.

Additionally, market growth is favored by automation and digital transformation. Due to the increasing need for businesses to evaluate, measure, optimize, increase productivity, and reduce operational costs, there is a considerable increase in the demand for automated business processes. Additionally, the rising adoption of Global Configuration Management solutions accelerates market growth, enhances process effectiveness, and decreases human labor and downtime.

Global Configuration Management Market Regional Insights:

Asia-Pacific, North America, Europe, the Middle East & Africa, and South America make up the geographical segments of the global Global Configuration Management market.

In terms of market share, North America dominates the Global Configuration Management sector. The region’s businesses’ ongoing investments in their IT infrastructure, as well as the presence of major tech players like Microsoft Corporation, BMC Software, and Oracle, are some of the drivers driving the market’s growth.

Europe holds the second-largest share of the market. The European segment includes the United Kingdom, Italy, Germany, France, Spain, and the rest of Europe. The UK is anticipated to gain the most market share, followed by Germany and France, according to the MMR study.

Competitive Landscape:

Details about each company are included in the Global Configuration Management market competitive landscape. Included information includes an overview of the business, financials, revenue, market potential, investments in R&D, new market initiatives, regional presence, corporate strengths and weaknesses, product launches, product breadth and depth, and application domination. The data points mentioned above only pertain to the companies’ market focus on Global Configuration Management.

Microsoft, Amazon Web Services, Inc., Oracle, BMC Software, Inc., IBM, Alibaba Cloud, CA Technologies, Red Hat, Inc., CloudBees, Inc., Micro Focus, Northern.tech AS, Canonical Ltd., SaltStack, Inc., Octopus Deploy, JetBrains s.r.o., Hewlett Packard Enterprise Development LP, and ServiceNow are among the leading domestic and international companies profiled in the Global Configuration Management market report. Global, North America, Europe, Asia-Pacific (APAC), Middle East and Africa (MEA), and South America market share data are all independently available. MMR analysts are aware of competitive advantages and offer competitive analysis for each rival individually.

Global Configuration Management Market Segmentation:

By Component:

  • Solution
  • Services
  • Managed Services
  • Professional Services
  • Support and Maintenance
  • Integration
  • Training and Consulting

By Module:

  • Configuration Management Database (CMDB)
  • Service Catalog
  • Service Definition
  • Others

By System:

  • Software and Application
  • Storage
  • Server

By Vertical:

  • Telecom and IT-enabled Services
  • BFSI
  • Government and Public Sector
  • Healthcare and Life Sciences
  • Manufacturing
  • Retail and Consumer Packaged Goods
  • Others

By Region:

  • North America
  • Europe
  • Asia Pacific
  • Middle East and Africa
  • South America

Global Configuration Management Market Key Competitors:

  • Canonical Ltd.
  • Saltstack
  • Octopus Deploy
  • Jetbrains
  • Codenvy
  • BMC Software
  • CA Technologies
  • Servicenow
  • Atlassian
  • Oracle
  • IBM
  • Alibaba Cloud
  • AWS
  • Puppet
  • Ansible
  • Chef
  • Micro Focus
  • Cloudbees
  • Northern.tech AS
  • Microsoft
  • Red Hat, Inc.
  • Hewlett Packard Enterprise Development LP

To Get A Detailed Report Summary Of the Global Configuration Management Market, Click Here:@https://www.maximizemarketresearch.com/market-report/global-configuration-management-market/62980/

About Maximize Market Research:

Maximize Market Research undertakes business-to-business and business-to-consumer market research on technological innovations and opportunities in the chemical, health services, pharmaceuticals, electronics and communications, internet of things, food and beverage, aerospace and defense, and other industrial sectors. Because companies all over the world are struggling to deal with the changing market, financial, and technological situations, ‘Maximize Market Research’ is well-positioned to anticipate the future market size and competitive assessment of industries. At the same time, our industry experts are well placed to identify and forecast product life cycles, technological advances, and industry trends in manufacturing environments.

Contact Maximize Market Research:

3rd Floor, Navale IT Park, Phase 2

Pune Bangalore Highway, Narhe,

Pune, Maharashtra 411041, India

[email protected]

Tue, 05 Jul 2022 12:00:00 -0500 Newsmantraa en-US text/html https://www.digitaljournal.com/pr/global-configuration-management-market-to-be-worth-usd-5-06-billion-at-a-cagr-of-15-30-by-the-year-2027
Intel Releases Open Source AI Reference Kits

Open source designs simplify AI development for solutions across healthcare, manufacturing, retail and other industries.

SANTA CLARA, Calif.--(BUSINESS WIRE)--What’s New: Intel has released the first set of open source AI reference kits specifically designed to make AI more accessible to organizations in on-prem, cloud and edge environments. First introduced at Intel Vision, the reference kits include AI model code, end-to-end machine learning pipeline instructions, libraries and Intel oneAPI components for cross-architecture performance. These kits enable data scientists and developers to learn how to deploy AI faster and more easily across healthcare, manufacturing, retail and other industries with higher accuracy, better performance and lower total cost of implementation.


“Innovation thrives in an open, democratized environment. The Intel accelerated open AI software ecosystem including optimized popular frameworks and Intel’s AI tools are built on the foundation of an open, standards-based, unified oneAPI programming model. These reference kits, built with components of Intel’s end-to-end AI software portfolio, will enable millions of developers and data scientists to introduce AI quickly and easily into their applications or boost their existing intelligent solutions.”
–Wei Li, Ph.D., Intel vice president and general manager of AI and Analytics

About AI Reference Kits: AI workloads continue to grow and diversify with use cases in vision, speech, recommender systems and more. Intel’s AI reference kits, built in collaboration with Accenture, are designed to accelerate the adoption of AI across industries. They are open source, pre-built AI with meaningful enterprise contexts for both greenfield AI introduction and strategic changes to existing AI solutions.

Four kits are available for download today:

  • Utility asset health: As energy consumption continues to grow worldwide, power distribution assets in the field are expected to grow. This predictive analytics model was trained to help utilities deliver higher service reliability. It uses Intel-optimized XGBoost through the Intel® oneAPI Data Analytics Library to model the health of utility poles with 34 attributes and more than 10 million data points1. Data includes asset age, mechanical properties, geospatial data, inspections, manufacturer, prior repair and maintenance history, and outage records. The predictive asset maintenance model continuously learns as new data, like new pole manufacturer, outages and other changes in condition, are provided.
  • Visual quality control: Quality control (QC) is essential in any manufacturing operation. The challenge with computer vision techniques is that they often require heavy graphics compute power during training and frequent retraining as new products are introduced. The AI Visual QC model was trained using Intel® AI Analytics Toolkit, including Intel® Optimization for PyTorch and Intel® Distribution of OpenVINO™ toolkit, both powered by oneAPI to optimize training and inferencing to be 20% and 55% faster, respectively, compared to stock implementation of Accenture visual quality control kit without Intel optimizations2 for computer vision workloads across CPU, GPU and other accelerator-based architectures. Using computer vision and SqueezeNet classification, the AI Visual QC model used hyperparameter tuning and optimization to detect pharmaceutical pill defects with 95% accuracy.
  • Customer chatbot: Conversational chatbots have become a critical service to support initiatives across the enterprise. AI models that support conversational chatbot interactions are massive and highly complex. This reference kit includes deep learning natural language processing models for intent classification and named-entity recognition using BERT and PyTorch. Intel® Extension for PyTorch and Intel Distribution of OpenVINO toolkit optimize the model for better performance – 45% faster inferencing compared to stock implementation of Accenture customer chatbot kit without Intel optimizations3 – across heterogeneous architectures, and allow developers to reuse model development code with minimal code changes for training and inferencing.
  • Intelligent document indexing: Enterprises process and analyze millions of documents every year, and many of the semi-structured and unstructured documents are routed manually. AI can automate the processing and categorizing of these documents for faster routing and lower manual labor costs. Using a support vector classification (SVC) model, this kit was optimized with Intel® Distribution of Modin and Intel® Extension for Scikit-learn powered by oneAPI. These tools improve data pre-processing, training and inferencing times to be 46%, 96% and 60% faster, respectively, compared to stock implementation of Accenture Intelligent document indexing kit without Intel optimizations4 for reviewing and sorting the documents at 65% accuracy.
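To illustrate the optimization pattern the document-indexing kit describes, the sketch below trains a support vector classifier with Intel Extension for Scikit-learn patched in when available. The synthetic dataset and all parameters are our own illustrative choices, not taken from the kit itself.

```python
# Illustrative sketch, not the kit's actual code. Intel Extension for
# Scikit-learn patches stock scikit-learn estimators in place; if it is
# not installed, the identical code runs on stock scikit-learn.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
except ImportError:
    pass  # fall back to the stock implementation

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical stand-in for document feature vectors: 2,000 samples, 20 features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The same drop-in pattern applies to Intel Distribution of Modin, which substitutes for pandas without code changes to the data-preparation step.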

Download the kits for free from the Intel.com AI Reference Kits website. The kits are also available on GitHub.

Why It Matters: Developers are looking to infuse AI into their solutions and the reference kits contribute to that goal. These kits build on and complement Intel’s AI software portfolio of end-to-end tools and framework optimizations. Built on the foundation of the oneAPI open, standards-based, heterogeneous programming model, which delivers performance across multiple types of architectures, these tools help data scientists train models faster and at lower cost by overcoming the limitations of proprietary environments.

What's Next: Over the next year, Intel will release a series of additional open source AI reference kits with trained machine learning and deep learning models to help organizations of all sizes in their digital transformation journey.

More Context: oneAPI Dev Summit for AI | Intel oneAPI | Intel AI Tools

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel’s innovations, go to newsroom.intel.com and intel.com.

Notices & Disclaimers

1Predictive Utility Analytics Reference Kit, measured on June 29, 2022. HW Configuration: Microsoft Azure Standard D4_v5, OS: Ubuntu 20.04.4 LTS (Focal Fossa), 8 X Intel® Xeon® Platinum 8370C CPU @ 2.80GHz, 2 threads/core, 4 cores/socket, 1 socket. SW Configuration: Config 1 (Python v3.9, Scikit-learn v 1.0.2, XGBoost v0.81), Config 2 (Intel® Distribution for Python 3.9.12 2022.0.0, Scikit-learn 0.24.2, Intel® Extension for Scikit-learn 2021.5.1, XGBoost 1.4.3, daal4py 2021.6.0). Additional details at https://github.com/oneapi-src/predictive-health-analytics. Results may vary.

2Visual Quality Inspection Reference Kit, measured on June 29, 2022. HW Configuration: Microsoft Azure Standard D4_v5, OS: Ubuntu 20.04.4 LTS (Focal Fossa), 4 X Intel® Xeon® Platinum 8370C CPU @ 2.80GHz, 2 threads/core, 2 cores/socket, 1 socket. SW Configuration: Config 1 (PyTorch v1.8.0), Config 2 (Intel® Extension for PyTorch v1.8.0, Intel® Neural Compressor v1.12, Intel® Distribution of OpenVINO Toolkit 2021.4.2). Additional details at https://github.com/oneapi-src/visual-quality-inspection. Results may vary.

3Customer Chatbot Reference Kit, measured on June 22, 2022. HW Configuration: Microsoft Azure Standard D4_v5, OS: Red Hat Enterprise Linux Server 7.9, 4 X Intel® Xeon® Platinum 8370C CPU @ 2.80GHz, 2 threads/core, 2 cores/socket, 1 socket. SW Configuration: Config 1 (PyTorch v1.11), Config 2 (PyTorch v1.11.0, Intel® Extension for PyTorch v1.11.200, Intel® Neural Compressor v1.12). Additional details at https://github.com/oneapi-src/customer-chatbot. Results may vary.

4Intelligent Indexing Reference Kit, measured on June 22, 2022. HW Configuration: Amazon AWS m6i.xlarge, OS: Red Hat Enterprise Linux Server 7.9, 4 X Intel® Xeon® Platinum 8370C CPU @ 2.80GHz, 2 threads/core, 2 cores/socket, 1 socket. SW Configuration: Config 1 (Pandas, Scikit-learn), Config 2 (Intel® AI Analytics Toolkit v 2021.4.1, Intel® Extension for Scikit-learn, Intel® Distribution of Modin). Additional details at https://github.com/oneapi-src/intelligent-indexing. Results may vary.

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Results may vary. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates.

No product or component can be absolutely secure.

Your costs and results may vary.

Intel technologies may require enabled hardware, software or service activation.

Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.


Contacts

Rebecca Want
1-650-919-4595
becky.want@ketchum.com

Tue, 12 Jul 2022 08:07:00 -0500 en text/html https://www.pharmiweb.com/press-release/2022-07-12/intel-releases-open-source-ai-reference-kits
Microsoft Surface Laptop Go 2 review: Stylish and lightweight, but battery life could be better

Pros

  • Slimline, lightweight design
  • Bright, colourful 12.4-inch display
  • Business-class security features
  • 'Advance exchange' warranty

Cons

  • Disappointing battery life
  • 720p webcam
  • Moderate 1536 x 1024 display resolution
  • Limited upgrade options

You could be forgiven for struggling to keep up with all the different models in Microsoft's Surface range, but the updated Surface Laptop Go 2 stands out by virtue of being the smallest, lightest and most affordable of the clamshell laptops now on offer. 

The 2022 edition isn't a major upgrade, primarily focusing on its new 11th generation Core i5 processor and enhanced security features for business users. However, the Surface Laptop Go 2's lightweight and stylish design should appeal to those seeking an ultraportable laptop for general, daily use. It covers a lot of bases too, providing both consumer and business variants aimed at users ranging from budget-limited students to business travellers.

Surface Laptop Go 2: 11th-generation Core i5, 4GB or 8GB of RAM, 128GB or 256GB of SSD storage, available in consumer (Windows 11 Home) or business (Windows 11 Pro, Secured Core PC) versions.

Image: Cliff Joseph

Design & features

Like its 2020 predecessor, the Surface Laptop Go 2 opts for a 12.4-inch touch-enabled display that helps to keep the device's weight down to just 1.13kg, while the slimline profile measures just 15.7mm thick. It's easy to pick up with one hand and slip into a backpack or briefcase when you're ready to hit the road. The build quality is impeccable, despite the competitive price. Available in a variety of colours (Sage, Ice Blue, Sandstone and Platinum), the aluminum casing of the Laptop Go 2 feels sturdy enough to cope with a few bumps when you're travelling. 

The keyboard panel feels surprisingly firm and comfortable. Microsoft claims that it "provides 30% more key travel than the MacBook Air", and the keyboard certainly feels satisfyingly responsive when typing. That said, it's odd that there's no backlight built into the keyboard for use in darkened rooms or airplane cabins. 

The keyboard is comfortable to type on, but there's no backlight.

Image: Cliff Joseph

The small screen has its drawbacks, providing a resolution of just 1536 by 1024 pixels (148dpi). Opting for a 3:2 aspect ratio is a good decision, as it means the display is tall enough to provide good visibility when studying or scrolling through documents or web pages. However, the screen does quickly start to feel a bit cluttered as soon as you have a few overlapping windows or apps open at once. The screen is bright and colourful -- in fact, to my naked eye sometimes too much so -- with the colours seeming slightly oversaturated at times, especially on the red end of the spectrum.  

Microsoft's spec sheet makes no mention of the screen's brightness or colour gamut support, but the company tells us it is 330 nits and 100% of sRGB. Clearly the Surface Laptop Go 2 is not intended for professional-level graphics work, but its display works perfectly well for viewing documents, web pages or watching video. 

The 15.7mm-thick Surface Laptop Go 2 offers USB-C and USB-A ports, a 3.5mm headphone jack and Microsoft's Surface Connect port.

Images: Cliff Joseph

Connections

The Surface Laptop Go 2's compact design means that connectivity is somewhat limited. On the wireless side there's Wi-Fi 6 and Bluetooth 5.1, but physical connectors are limited to just one USB-C and one USB-A port, plus a 3.5mm headphone jack and Microsoft's Surface Connect port for charging and docking. Thankfully, the USB-C port did allow me to connect the Laptop Go 2 to an external 4K display -- Microsoft's website seems to imply that this requires the additional Surface Dock 2 ($259.99). However, business users who want to use the Laptop Go 2 as part of their office setup may still need an additional dock or hub to provide features such as Ethernet for an office network, or additional USB ports. 

And the one obvious sign of cost-cutting with the Surface Laptop Go 2 is its 720p webcam, although Microsoft states that there's a new camera module providing improved brightness, contrast and colour balance. The image is certainly bright enough, even on a gloomy British summer day, but there's still a slightly grainy quality to it, so business users who need a high-quality webcam for video calls and meetings might prefer to buy a 1080p webcam along with their desktop hub. 

Pricing & options

As mentioned, the Surface Laptop Go 2 is available in both consumer and business editions, with the latter including Windows 11 Pro, and providing Secured-Core PC security features, including a physical TPM 2.0 chip (rather than implementing TPM in firmware as the consumer model does).  

Business users also benefit from Microsoft's 'advance exchange' warranty, which allows them to request a replacement unit even before their faulty Surface device has been collected for repair. We'll focus on the business pricing here, but consumer pricing is generally just $100 cheaper for each individual configuration. 

SEE: Best Windows laptop 2022: Top notebooks compared

Business prices for the Surface Laptop Go 2 start at $699.99 for a model with a quad-core Core i5-1135G7 processor running at 2.4GHz (up to 4.2GHz with Turbo Boost), along with a rather miserly 4GB of RAM and 128GB of solid-state storage. The base model doesn't include a fingerprint sensor, although the sensor is built into the power button on all other configurations.  

Our review unit had 8GB of RAM and 256GB of storage, bringing the price to $899.99; there's also a $1099.99 model with 16GB of RAM, which seems rather expensive for a simple memory upgrade. There are no additional processor or other upgrade options, and all configurations rely on the Core i5's integrated Iris Xe Graphics. 

Performance

You can't accuse Microsoft of over-hyping the performance of the Surface Laptop Go 2, as its website simply states that it provides "performance to run everyday apps".  

That's an accurate description of the performance provided by the 11th-generation Core i5 processor, which achieves scores of 1345 (single core) and 4140 (multi core) in the Geekbench 5 CPU benchmark. In contrast, Apple's current M1-based MacBook Air ($999.99) scores 1730 and 7590 respectively.  

Image: Cliff Joseph

That modest CPU performance is also reflected in the PCMark 10 application tests, with a score of 4466 that only just lifts it out of the lowest third of the results table. To be fair, the Surface Laptop Go 2 will handle standard productivity software such as Microsoft Office without any trouble, along with web browsing, email and video calls. 

The integrated Iris Xe Graphics is perhaps the stand-out performer here, providing a PCMark 10 score of 7508 for photo-editing work, so the Laptop Go 2 should be able to manage some basic photo editing for presentations work if required. It won't be able to handle much in the way of video editing or gaming, though, with a PCMark 10 video score of 4341, while its gaming score of 10,820 in the 3DMark Night Raid test puts it firmly in the 'less than 20fps' category.  

The one real disappointment with the Surface Laptop Go 2 is its battery life. Microsoft quotes up to 13.5 hours of 'typical device usage', but our review unit only managed half that during our video streaming test, lasting for a modest six hours and forty minutes even with the screen brightness turned down to 50%. You might be able to stretch that to a full eight-hour working day if you're not using the Wi-Fi all day long, but that's still far behind the outstanding 17 hours provided by the M1 MacBook Air.

Conclusions

Microsoft's Surface Laptop Go 2 is something of a mixed bag. Judged on its performance and battery life alone, this is an entry-level laptop with a distinctly mid-range price -- and some rather expensive memory upgrades.  

The Surface Laptop Go 2's saving grace is its sleek, ultraportable design, which is both stylish and sturdy, and well suited to hybrid and mobile working. If you value portability above everything else, then the lightweight Laptop Go 2 is certainly tempting. 

However, business users who need to do more than just run Microsoft Office may prefer one of the more powerful models in the Surface range, or one of its many ultraportable rivals. 


Microsoft Surface Laptop Go 2 specifications

Dimensions 278.2mm x 206.2mm x 15.7mm (10.95in. x 8.12in. x 0.62in.)
Weight 1.127kg (2.48lbs)
Build aluminium (top), aluminum and polycarbonate composite resin system with glass fiber and 30% post-consumer recycled content (base)
Colours Sage, Ice Blue, Sandstone, Platinum
OS Windows 11 Home (consumer) • Windows 11 Pro or 10 Pro (business)
Display 12.4-inch PixelSense, 1536 x 1024  (3:2, 148ppi), 330 nits, 10-point multi-touch
Processor Intel Core i5-1135G7
RAM 4GB, 8GB (LPDDR4x)
Storage 128GB, 256GB (removable SSD)
Graphics Intel Iris Xe Graphics
Wi-Fi Wi-Fi 6 (802.11ax)
Bluetooth 5.1
Connections USB-C, USB-A, 3.5mm audio out, Surface Connect
Camera 720p HD f/2.0
Audio dual far-field mics, omnisonic speakers with Dolby Audio Premium
Battery capacity 41Wh
Battery life up to 13.5h (typical device usage)
Security TPM 2.0 (firmware on consumer model, hardware on business model) • Windows Hello sign-in • fingerprint power button (excluding 4GB/128GB configuration)
Warranty 1 year
In the box Surface Laptop Go 2, 39W power supply, quick-start guide, safety & warranty documents
Price from $599.99 (consumer) • from $699.99 (business)

Alternatives to consider

The Surface Laptop Go 2 is one of the slimmest, lightest Windows laptops around, but it has plenty of competition in the ultraportable category, at a variety of price points.

RECENT AND RELATED CONTENT

Microsoft's $599 Surface Laptop Go 2 is available for pre-order right now 

Microsoft Surface in 2022: What do we want, and what do we expect? 

Surface Laptop Go: Microsoft's smaller, cheaper budget PC is easy to recommend 

The best Surface PC: Which Windows 11-ready Surface device is right for you? 

The 13 best laptop docking stations: Transform your workspace 

Read more reviews

Wed, 06 Jul 2022 01:03:00 -0500 en text/html https://www.zdnet.com/article/microsoft-surface-laptop-go-2-review/
HashiCorp Recognized as U.S. Winner and Global Finalist for the 2022 Microsoft Open Source Software on Azure Partner of the Year Award

Press release content from Globe Newswire. The AP news staff was not involved in its creation.

SAN FRANCISCO, July 13, 2022 (GLOBE NEWSWIRE) -- HashiCorp, Inc. (NASDAQ: HCP), a leading provider of multi-cloud infrastructure automation software, today announced it has been chosen as the U.S. winner and a global finalist for the 2022 Microsoft Open Source Software (OSS) on Azure Partner of the Year Award. The company was honored among a global field of top Microsoft partners for demonstrating excellence in innovation and implementation of customer solutions based on Microsoft technology.

“HashiCorp is honored for the recognition by Microsoft as part of its OSS on Azure Partner of the Year Award,” said Burzin Patel, VP, Global Alliances, HashiCorp. “We are delighted to continue helping accelerate Azure adoption amongst our shared customers and community with our solutions for cloud infrastructure, security, and networking. Our work with Microsoft and the leading enterprises who rely on both Azure and HashiCorp will deliver value to their customers through the cloud.”

HashiCorp and Microsoft have partnered for several years on co-developing and supporting solutions for customers and community members on Microsoft Azure. Together, these efforts deliver a broad range of Azure-related solutions, including:

  • Infrastructure: Automating provisioning and security based on infrastructure and policy as code within a workflow that is consistent with HashiCorp Terraform on Azure.
  • Security: Building a zero trust security model by integrating Vault for identity and access management, secrets management, and server configuration with Azure Key Vault.
  • Networking: Connecting modern and legacy applications across hybrid environments using Consul and HCP Consul, a fully managed service on Microsoft Azure.

The Microsoft Partner of the Year Awards recognize Microsoft partners that have developed and delivered outstanding Microsoft-based applications, services and devices during the past year. Awards were classified in various categories, with honorees chosen from a set of more than 3,900 submitted nominations from more than 100 countries worldwide. The MSUS Partner Award program complements the global Microsoft Partner of the Year Award program and highlights U.S.-specific partner impact. HashiCorp was recognized for providing outstanding solutions and services in the OSS categories.

“I am honored to announce the winners and finalists of the 2022 Microsoft Partner of the Year Awards,” said Nick Parker, corporate vice president of Global Partner Solutions at Microsoft. “These partners were outstanding among the exceptional pool of nominees and I’m continuously impressed by their innovative use of Microsoft Cloud technologies and the impact for their customers.”

Microsoft Partner of the Year Awards are announced annually prior to the company’s global partner conference, Microsoft Inspire, which will take place on July 19-20 this year.

Additional details on the 2022 awards are available on the Microsoft Partner Network blog: https://blogs.partner.microsoft.com/mpn/congratulations-to-the-2022-microsoft-partner-of-the-year-awards-winners-and-finalists/. The complete list of categories, winners and finalists can be found at: https://partner.microsoft.com/en-us/inspire/awards.

About HashiCorp
HashiCorp is a leader in multi-cloud infrastructure automation software. The HashiCorp software suite enables organizations to adopt consistent workflows and create a system of record for automating the cloud: infrastructure provisioning, security, networking, and application deployment. HashiCorp’s portfolio of products includes Vagrant™, Packer™, Terraform®, Vault™, Consul®, Nomad™, Boundary, and Waypoint™. HashiCorp offers products as open source, enterprise, and as managed cloud services. The company is headquartered in San Francisco, though most of HashiCorp employees work remotely, strategically distributed around the globe. For more information, visit hashicorp.com or follow HashiCorp on Twitter @HashiCorp.

All product and company names are trademarks or registered trademarks of their respective holders.

HashiCorp Media & Analyst Contact
Kate Lehman
media@hashicorp.com

Wed, 13 Jul 2022 08:07:00 -0500 en text/html https://apnews.com/press-release/globe-newswire/technology-software-open-source-b986900b02d89dbfd40fca4df30c5bd3
Colossal-AI Seamlessly Accelerates Large Models at Low Costs with Hugging Face

Forbes recently declared large AI models one of six AI trends to watch in 2022. As large-scale AI models continue to deliver superior performance across different domains, new trends emerge, leading to distinctive and efficient AI applications that the industry has never seen before.

For example, Microsoft-owned GitHub and OpenAI partnered to launch Copilot recently. Copilot plays the role of an AI pair programmer, offering suggestions for code and entire functions in real-time. Such developments continue to make coding easier than before.

Another example released by OpenAI, DALL-E 2, is a powerful tool that creates original and realistic images as well as art from only simple text. One month later, Google announced its own robust text-to-image diffusion model called Imagen. Imagen delivers exceptional results and accelerates the race of large AI models to a climax.

Image Generated by Imagen (left 2 col.) vs DALLE-2 (right 2 col.)
“Greek statue of a man tripping over a cat”

In recent years, the outstanding performance of model scaling has led to an escalation in the size of pre-trained models. Unfortunately, training and even simply fine-tuning large AI models are usually unaffordable, requiring tens or hundreds of GPUs. Existing deep learning frameworks like PyTorch and TensorFlow may not offer a satisfactory solution for very large AI models. Furthermore, advanced knowledge of AI systems is typically required for sophisticated configurations and optimization of specific models. Therefore, many AI users, such as engineers from small and medium-sized enterprises, can’t help but feel overwhelmed by the emergence of large AI models.

In fact, the core reasons for the increased cost of large AI models are GPU memory restrictions and the inability to accommodate sizeable models. In response to all of this, Colossal-AI developed the Gemini module, which efficiently manages and utilizes the heterogeneous memory of GPU and CPU and is expected to help solve the mentioned bottlenecks. Best of all, it is completely open-source and requires only minimal modifications to allow existing deep learning projects to be trained with much larger models on a single consumer-grade graphics card. In particular, it makes downstream tasks and application deployments such as large AI model fine-tuning and inference much easier. It even grants the convenience of training AI models at home!

Hugging Face is a popular AI community that strives to advance and democratize AI through open source and open science. Hugging Face has had success collating large-scale models into their own model hub with over 50,000 models, including trendy large AI models like GPT and OPT.

HPC-AI Tech’s flagship open-source and large-scale AI system, Colossal-AI, now allows Hugging Face users to seamlessly develop their ML models in a distributed and easy manner. In the following paragraphs, we will take one of the most popular AI models in Hugging Face Hub, OPT from Meta, to demonstrate how to train and fine-tune your large AI models at a low cost with minimal modifications to your code.
Open source code: https://github.com/hpcaitech/ColossalAI

Accelerate Large Model OPT with Low Cost

About Open Pretrained Transformer (OPT)

Meta recently released Open Pretrained Transformer (OPT), a 175-billion parameter AI language model. To encourage AI democratization in the community, Meta has released both the code and trained model weights, which stimulates AI programmers to perform various downstream tasks and application deployments. We will now demonstrate fine-tuning Causal Language Modelling with the pre-trained weights of the OPT model provided by Hugging Face Hub.

Configure with Colossal-AI

It is very simple to use the powerful features of Colossal-AI. Users only need a simple configuration file, and are not required to alter their training logic to equip models with their desired features (e.g. mixed-precision training, gradient accumulation, multi-dimensional parallel training, and memory redundancy elimination).
Suppose we intend to train OPT on a single GPU. We can accomplish this by leveraging heterogeneous training from Colossal-AI, which only requires users to add relevant items to the configuration files. Among the items added, tensor_placement_policy, which can be configured as cuda, cpu, or auto, determines our heterogeneous training strategy. Each training strategy has its distinct advantages:

  • cuda: puts all model parameters on GPU, suitable for scenarios where training persists without weights offloading;
  • cpu: puts all model parameters on CPU, suitable for giant model training, only keeps weights on GPU memory that participate in current computation steps;
  • auto: determines the number of parameters to keep on GPU by closely monitoring the current memory status. It optimizes the usage of GPU memory and minimizes the expensive data transmission between GPU and CPU.

For typical users, they can just select the auto strategy, which maximizes training efficiency by dynamically adapting its heterogeneous strategy with respect to its current memory state.

from colossalai.zero.shard_utils import TensorShardStrategy

zero = dict(model_config=dict(shard_strategy=TensorShardStrategy(),
                              tensor_placement_policy="auto"),
            optimizer_config=dict(gpu_margin_mem_ratio=0.8))

Launch with Colossal-AI

With the configuration file ready, only a few lines of code are needed to enable the newly declared features.
First, launch Colossal-AI with a single line of code that points to the configuration file. Colossal-AI will automatically initialize the distributed environment, read in the configuration settings, and integrate them into its components (i.e. models and optimizers).

 colossalai.launch_from_torch(config='./configs/colossalai_zero.py')

After that, users may define their own datasets, models, optimizers, and loss functions per usual, or by using raw PyTorch code. Only their models need to be initialized under ZeroInitContext. In the given example, we adopt the OPTForCausalLM model along with its pretrained weights by Hugging Face and make adjustments to the Wikitext dataset.

 with ZeroInitContext(target_device=torch.cuda.current_device(),
                      shard_strategy=shard_strategy,
                      shard_param=True):
     model = OPTForCausalLM.from_pretrained(
                 'facebook/opt-1.3b',
                 config=config)

Next, use colossalai.initialize to integrate the heterogeneous memory functions defined in the configuration file into the training engine, enabling the feature.

engine, train_dataloader, eval_dataloader, lr_scheduler = colossalai.initialize(
    model=model,
    optimizer=optimizer,
    criterion=criterion,
    train_dataloader=train_dataloader,
    test_dataloader=eval_dataloader,
    lr_scheduler=lr_scheduler)

Remarkable Performance from Colossal-AI

On a single GPU, Colossal-AI’s automatic strategy provides remarkable performance gains over the ZeRO Offload strategy from Microsoft DeepSpeed: users can experience up to a 40% speedup across a variety of model scales. By contrast, with a traditional deep learning framework like PyTorch, a single GPU can no longer support the training of models at such a scale.

Adopting the distributed training strategy with 8 GPUs is as simple as adding a -nprocs 8 to the training command of Colossal-AI!

Behind the Scenes

Such remarkable improvements come from Colossal-AI’s efficient heterogeneous memory management system, Gemini. To put it simply, Gemini uses a few warmup steps during model training to collect memory usage information from the PyTorch computational graph. After warmup, and before each operator runs, Gemini pre-allocates memory for the operator equal to its recorded peak usage, moving some model tensors from GPU memory to CPU memory as needed.

Gemini’s built-in memory manager attaches a state to each tensor, such as HOLD, COMPUTE, or FREE. Based on the queried memory usage, the manager continually transitions tensor states and adjusts tensor placement. Compared to the static memory partitioning of DeepSpeed’s ZeRO Offload, Colossal-AI’s Gemini makes more efficient use of GPU and CPU memory, maximizing model capacity and balancing training speed, all with modest amounts of hardware.
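The idea behind this state machine can be illustrated with a toy sketch. Everything below (class names, method names, the eviction policy) is a simplified stand-in invented for illustration; the real logic lives inside Colossal-AI’s Gemini module:

```python
# Toy illustration of a Gemini-style tensor state manager. All names and
# the eviction policy here are hypothetical simplifications.
from enum import Enum, auto

class TensorState(Enum):
    HOLD = auto()     # resident, not needed by the current operator
    COMPUTE = auto()  # required on GPU for the operator about to run
    FREE = auto()     # no longer needed; memory can be reclaimed

class ToyMemoryManager:
    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity  # bytes available on the GPU
        self.gpu, self.cpu = {}, {}       # tensor name -> size in bytes

    def gpu_used(self):
        return sum(self.gpu.values())

    def prepare(self, name, size, peak_usage):
        """Before an operator runs, reserve its recorded peak memory,
        evicting HOLD tensors to CPU until the reservation fits."""
        self.cpu.pop(name, None)   # bring the operand back if offloaded
        self.gpu[name] = size
        for victim in list(self.gpu):
            if self.gpu_used() + peak_usage <= self.gpu_capacity:
                break
            if victim != name:     # only HOLD tensors are evictable
                self.cpu[victim] = self.gpu.pop(victim)
        return TensorState.COMPUTE

manager = ToyMemoryManager(gpu_capacity=100)
manager.gpu["weights_a"] = 60                    # resident, state HOLD
manager.prepare("weights_b", 30, peak_usage=40)  # evicts weights_a to CPU
print(sorted(manager.gpu), sorted(manager.cpu))
```

The point of the sketch is the shape of the mechanism, not its policy: recorded peak usage drives a reservation, and HOLD tensors are the movable pieces that make room for it.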

For GPT, a representative large model, Colossal-AI is capable of training up to 1.5 billion parameters on a gaming laptop with an RTX 2060 6GB. On a PC with an RTX 3090 24GB, Colossal-AI can train GPT with 18 billion parameters. Colossal-AI also brings significant improvements to high-performance graphics cards such as the Tesla V100.

Furthermore: convenient and efficient parallelizations

Parallel and distributed technologies are vital for further accelerating model training. To train the world’s largest and most advanced AI models within the shortest time, efficient distributed parallelization remains a necessity. Issues with existing solutions include limited parallel dimensions, low efficiency, poor versatility, difficult deployment, and lack of maintenance. With this in mind, Colossal-AI uses technologies such as efficient multi-dimensional parallelism and heterogeneous parallelism to let users deploy large AI models efficiently and rapidly with minimal modifications to their code.

To combine data, pipeline, and 2.5D parallelism simultaneously, a single line of declaration suffices in Colossal-AI. The typical system/framework approach of hacking into the underlying code logic is no longer necessary.

 parallel = dict(
     pipeline=2,
     tensor=dict(mode='2.5d', depth=1, size=4))

For a super-large AI model such as GPT-3, Colossal-AI needs only half the computing resources of the NVIDIA solution to start training. With the same computing resources, speed can be further increased by 11%, which could reduce the training cost of GPT-3 by over a million dollars.

In theory, this sounds fantastic, but what about in practice? Colossal-AI has proven its capabilities in application to real-world issues across a variety of industries, including autonomous driving, cloud computing, retail, medicine, and chip production.

For AlphaFold, which is used for protein structure prediction, our team has introduced FastFold, based on the Colossal-AI acceleration scheme. FastFold has successfully surpassed other schemes including those proposed by Google and Columbia University. It successfully reduces the training time of AlphaFold from 11 days to 67 hours, simultaneously lowering the overall cost. Moreover, the process of long sequence inference is accelerated by about 9.3 to 11.6 times.

Colossal-AI values open-source community building. We offer detailed tutorials and support the latest cutting-edge applications such as PaLM and AlphaFold, and we will regularly release new and innovative features. We always welcome suggestions and discussions and would be more than willing to help if you encounter any issues: you can raise an issue here or create a discussion topic in our forum. Your suggestions are highly appreciated. Recently, Colossal-AI reached No. 1 among trending projects on GitHub and Papers With Code, alongside projects with as many as 10K stars.

The open-source code is available on the project’s GitHub.

Reference
https://medium.com/@yangyou_berkeley/colossal-ai-seamlessly-accelerates-large-models-at-low-costs-with-hugging-face-4d1a887e500d



Wed, 13 Jul 2022 04:29:00 -0500 Synced en-US text/html https://syncedreview.com/2022/07/13/colossal-ai-seamlessly-accelerates-large-models-at-low-costs-with-hugging-face/
Change and Configuration Management Market Size - Global Industry Analysis By Development, Share and Demand Forecast 2022-2031

The MarketWatch News Department was not involved in the creation of this content.

Japan, Fri, 08 Jul 2022 10:49:28 / Comserve Inc. / -- Change and Configuration Management Market size, share, growth, trends, segmentation, top key players, strategies, demand, statistics, sales, current scenario, competitive landscape and forecast.

Change and configuration management is being implemented by organizations to manage the long-term health of their assets and products. An organization's ability to respond quickly to product opportunities depends, at least in part, on its ability to manage an inventory of assets and rapidly configure and produce products from that inventory.

The change and configuration management market is gaining further traction across various end-user industries. The increased adoption of configuration management is mainly attributed to benefits such as significant cost reduction, since it decreases redundant duplication and provides faster problem resolution, thereby offering a better quality of service. It provides powerful development capabilities for large enterprises, cloud service providers, and midmarket clients.

- Every organization needs to define its own conventions for unique identification. One general convention most often is not enough; typically, a number of conventions are necessary for various classes of configuration items, such as certificates, documents, and code. Owing to this, several market incumbents are collaborating to cater to the specified market demand.

Click Here to Download Sample Report >> https://www.sdki.us/sample-request-86980

Key Market Trends

Increasing Demand for Auto-Scaling and Endpoint Security in the BFSI Sector is Fueling Growth

- Several banks are leveraging cloud concepts like auto-scaling groups, which can automatically respond to load by scaling the number of machines up or down. Unlike a snowflake server setup, provisioning these new machines is automated using scripts. Configuration management can run faster than a sysadmin manually entering commands, reducing the latency of spinning up a new machine.
- In May 2020, Red Hat, Inc., the provider of open source solutions, announced that Asiakastieto Group, a Helsinki-based fintech company, is using Red Hat solutions to build its new account information service, Account Insight, on an open banking platform. The firm uses Red Hat OpenShift for faster time to market and Red Hat Integration to centrally manage data sharing with business customers to deliver the most relevant, timely offerings to end consumers.
- Red Hat 3scale API Management provided Asiakastieto with a centralized, more secure interface to connect banks and credit grantors and to enable test and launch of changes via configuration, removing the need to build and redeploy code, which was disruptive to business.
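The scale-out/scale-in decision that such an auto-scaling group automates can be sketched as a small policy function. The thresholds, bounds, and function name below are illustrative assumptions, not any specific cloud provider's API:

```python
# Illustrative auto-scaling decision; thresholds and names are invented
# for this sketch and do not correspond to any real cloud API.
def desired_instance_count(current, avg_cpu_load, lo=0.30, hi=0.70,
                           min_instances=2, max_instances=16):
    """Scale out when average CPU load exceeds `hi`, scale in when it
    drops below `lo`, and always stay within the configured bounds."""
    if avg_cpu_load > hi:
        current += 1
    elif avg_cpu_load < lo:
        current -= 1
    return max(min_instances, min(max_instances, current))

print(desired_instance_count(4, 0.85))   # heavy load: scale out to 5
print(desired_instance_count(4, 0.10))   # light load: scale in to 3
print(desired_instance_count(2, 0.10))   # already at the floor: stays 2
```

In a real deployment, a policy like this runs inside the provider's control loop, and configuration-management scripts handle provisioning each new machine without manual sysadmin commands.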

Asia-Pacific to Witness the Highest Growth Over the Forecast Period

- India is considered a very demanding country for change management, as companies seek to enable easy access to accurate records, safeguard companies against cybersecurity threats, and perform essential functions like reducing outages and breaches and reducing costs.
- The increasing amount of IT infrastructure across China has enabled the focus of various industries towards configuration management. Optimization of IT assets maintenance to prevent inconsistencies has been driving the demand for change management software solutions in the region.
- Japan is the ideal nation in the APAC region for configuration management solutions, as 99.7% of functional industries fall under the small and medium enterprise (SME) segment, as reported by the Small and Medium Enterprise Agency (SMEA) of Japan. SMEs, due to their budgetary constraints and low IT infrastructures have preferred low to medium cost solutions, like change management software over ERP solutions.

Click Here to Download Sample Report >> https://www.sdki.us/sample-request-86980

Competitive Landscape

The major players in the change and configuration management market are IBM Corporation, Microsoft Corporation, Hewlett-Packard Company, Amazon Web Services, and BMC Software, among others. The market is consolidated, as it is dominated by these major players. Hence, market concentration is expected to be high.

- May 2020 - Aruba, a Hewlett Packard Enterprise company, announced the integration of Aruba ClearPass Policy Manager with Microsoft endpoint protection platforms to deliver significant advances in enterprise cyberattack protection. Aruba has completed the integration, testing, and verification of ClearPass Policy Manager with Microsoft Endpoint Manager, a unified management platform that includes Configuration Manager and Microsoft Intune.
- March 2020 - BMC Software Inc. announced the new v20.02 version of BMC Helix, which includes the features like BMC Helix Remediate Integrations that provide blind spot detection and closed-loop change and enhanced configuration management.

1 INTRODUCTION
1.1 Study Assumptions and Market Definition
1.2 Scope of the Study

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET INSIGHTS
4.1 Market Overview
4.2 Industry Attractiveness - Porter's Five Forces Analysis
4.2.1 Bargaining Power of Suppliers
4.2.2 Bargaining Power of Consumers
4.2.3 Threat of New Entrants
4.2.4 Intensity of Competitive Rivalry
4.2.5 Threat of Substitute Products
4.3 COVID-19 Impact on the Market

5 Market Dynamics

Request For Full Report >> Change and Configuration Management Market

The dynamic nature of the business environment in the current global economy is raising the need among business professionals to stay updated on current market conditions. To cater to such needs, Shibuya Data Count provides market research reports to business professionals across different industry verticals, such as healthcare & pharmaceuticals, IT & telecom, chemicals and advanced materials, consumer goods & food, energy & power, manufacturing & construction, industrial automation & equipment, and agriculture & allied activities, among others.

For more information, please contact:

Hina Miyazu

Shibuya Data Count
Email: sales@sdki.jp
Tel: + 81 3 45720790

The post Change and Configuration Management Market Size- Global Industry Analysis By Development, Share and Demand Forecast 2022-2031 appeared first on Comserveonline.




Fri, 08 Jul 2022 06:14:00 -0500 en-US text/html https://www.marketwatch.com/press-release/change-and-configuration-management-market-size--global-industry-analysis-by-development-share-and-demand-forecast-2022-2031-2022-07-08
Intel Releases Open Source AI Reference Kits

SANTA CLARA, Calif.--(BUSINESS WIRE)--Jul 12, 2022--

What’s New: Intel has released the first set of open source AI reference kits specifically designed to make AI more accessible to organizations in on-prem, cloud and edge environments. First introduced at Intel Vision, the reference kits include AI model code, end-to-end machine learning pipeline instructions, libraries and Intel oneAPI components for cross-architecture performance. These kits enable data scientists and developers to learn how to deploy AI faster and more easily across healthcare, manufacturing, retail and other industries with higher accuracy, better performance and lower total cost of implementation.

“Innovation thrives in an open, democratized environment. The Intel accelerated open AI software ecosystem including optimized popular frameworks and Intel’s AI tools are built on the foundation of an open, standards-based, unified oneAPI programming model. These reference kits, built with components of Intel’s end-to-end AI software portfolio, will enable millions of developers and data scientists to introduce AI quickly and easily into their applications or boost their existing intelligent solutions.”

–Wei Li, Ph.D., Intel vice president and general manager of AI and Analytics

About AI Reference Kits: AI workloads continue to grow and diversify with use cases in vision, speech, recommender systems and more. Intel’s AI reference kits, built in collaboration with Accenture, are designed to accelerate the adoption of AI across industries. They are open source, pre-built AI with meaningful enterprise contexts for both greenfield AI introduction and strategic changes to existing AI solutions.

Four kits are available for download today:

  • Utility asset health: As energy consumption continues to grow worldwide, power distribution assets in the field are expected to grow. This predictive analytics model was trained to help utilities deliver higher service reliability. It uses Intel-optimized XGBoost through the Intel® oneAPI Data Analytics Library to model the health of utility poles with 34 attributes and more than 10 million data points 1. Data includes asset age, mechanical properties, geospatial data, inspections, manufacturer, prior repair and maintenance history, and outage records. The predictive asset maintenance model continuously learns as new data, like new pole manufacturer, outages and other changes in condition, are provided.
  • Visual quality control: Quality control (QC) is essential in any manufacturing operation. The challenge with computer vision techniques is that they often require heavy graphics compute power during training and frequent retraining as new products are introduced. The AI Visual QC model was trained using Intel® AI Analytics Toolkit, including Intel® Optimization for PyTorch and Intel® Distribution of OpenVINO™ toolkit, both powered by oneAPI to optimize training and inferencing to be 20% and 55% faster, respectively, compared to stock implementation of Accenture visual quality control kit without Intel optimizations 2 for computer vision workloads across CPU, GPU and other accelerator-based architectures. Using computer vision and SqueezeNet classification, the AI Visual QC model used hyperparameter tuning and optimization to detect pharmaceutical pill defects with 95% accuracy.
  • Customer chatbot: Conversational chatbots have become a critical service to support initiatives across the enterprise. AI models that support conversational chatbot interactions are massive and highly complex. This reference kit includes deep learning natural language processing models for intent classification and named-entity recognition using BERT and PyTorch. Intel® Extension for PyTorch and Intel Distribution of OpenVINO toolkit optimize the model for better performance – 45% faster inferencing compared to stock implementation of Accenture customer chatbot kit without Intel optimizations 3 – across heterogeneous architectures, and allow developers to reuse model development code with minimal code changes for training and inferencing.
  • Intelligent document indexing: Enterprises process and analyze millions of documents every year, and many of the semi-structured and unstructured documents are routed manually. AI can automate the processing and categorizing of these documents for faster routing and lower manual labor costs. Using a support vector classification (SVC) model, this kit was optimized with Intel® Distribution of Modin and Intel® Extension for Scikit-learn powered by oneAPI. These tools improve data pre-processing, training and inferencing times to be 46%, 96% and 60% faster, respectively, compared to stock implementation of Accenture Intelligent document indexing kit without Intel optimizations 4 for reviewing and sorting the documents at 65% accuracy.
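The document-classification workflow behind the last kit can be sketched with a tiny bag-of-words classifier. This is a pure-Python stand-in using a nearest-centroid rule rather than the kit's actual scikit-learn SVC, and the labels and documents are invented for illustration:

```python
# Tiny nearest-centroid text classifier: an illustrative stand-in for
# the kit's support vector classifier. Documents and labels are invented.
from collections import Counter
import math

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def train(labelled_docs):
    """Build one aggregate term-frequency centroid per label."""
    centroids = {}
    for label, text in labelled_docs:
        centroids.setdefault(label, Counter()).update(vectorize(text))
    return centroids

def classify(centroids, text):
    vec = vectorize(text)
    return max(centroids, key=lambda lbl: cosine(vec, centroids[lbl]))

docs = [("invoice", "invoice payment due amount total"),
        ("invoice", "payment invoice billing total"),
        ("resume", "experience skills education employment"),
        ("resume", "skills resume education work history")]
model = train(docs)
print(classify(model, "total payment due on this invoice"))  # invoice
print(classify(model, "education and skills section"))       # resume
```

A production pipeline like the reference kit replaces each piece with an optimized counterpart (vectorization with pandas/Modin pre-processing, the centroid rule with an SVC), but the routing logic, mapping a document's term profile to the closest known category, is the same.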

Download the kits free from the Intel.com AI Reference Kits website. The kits are also available on GitHub.

Why It Matters: Developers are looking to infuse AI into their solutions and the reference kits contribute to that goal. These kits build on and complement Intel’s AI software portfolio of end-to-end tools and framework optimizations. Built on the foundation of the oneAPI open, standards-based, heterogeneous programming model, which delivers performance across multiple types of architectures, these tools help data scientists train models faster and at lower cost by overcoming the limitations of proprietary environments.

What's Next: Over the next year, Intel will release a series of additional open source AI reference kits with trained machine learning and deep learning models to help organizations of all sizes in their digital transformation journey.

More Context: oneAPI Dev Summit for AI | Intel oneAPI | Intel AI Tools

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel’s innovations, go to newsroom.intel.com and intel.com.

Notices & Disclaimers

1 Predictive Utility Analytics Reference Kit, measured on June 29, 2022. HW Configuration: Microsoft Azure Standard D4_v5, OS: Ubuntu 20.04.4 LTS (Focal Fossa), 8 X Intel® Xeon® Platinum 8370C CPU @ 2.80GHz, 2 threads/core, 4 cores/socket, 1 socket. SW Configuration: Config 1 (Python v3.9, Scikit-learn v 1.0.2, XGBoost v0.81), Config 2 (Intel® Distribution for Python 3.9.12 2022.0.0, Scikit-learn 0.24.2, Intel® Extension for Scikit-learn 2021.5.1, XGBoost 1.4.3, daal4py 2021.6.0). Additional details at https://github.com/oneapi-src/predictive-health-analytics. Results may vary.

2 Visual Quality Inspection Reference Kit, measured on June 29, 2022. HW Configuration: Microsoft Azure Standard D4_v5, OS: Ubuntu 20.04.4 LTS (Focal Fossa), 4 X Intel® Xeon® Platinum 8370C CPU @ 2.80GHz, 2 threads/core, 2 cores/socket, 1 socket. SW Configuration: Config 1 (PyTorch v1.8.0), Config 2 (Intel® Extension for PyTorch v1.8.0, Intel® Neural Compressor v1.12, Intel® Distribution of OpenVINO Toolkit 2021.4.2). Additional details at https://github.com/oneapi-src/visual-quality-inspection. Results may vary.

3 Customer Chatbot Reference Kit, measured on June 22, 2022. HW Configuration: Microsoft Azure Standard D4_v5, OS: Red Hat Enterprise Linux Server 7.9, 4 X Intel® Xeon® Platinum 8370C CPU @ 2.80GHz, 2 threads/core, 2 cores/socket, 1 socket. SW Configuration: Config 1 (PyTorch v1.11), Config 2 (PyTorch v1.11.0, Intel® Extension for PyTorch v1.11.200, Intel® Neural Compressor v1.12). Additional details at https://github.com/oneapi-src/customer-chatbot. Results may vary.

4 Intelligent Indexing Reference Kit, measured on June 22, 2022. HW Configuration: Amazon AWS m6i.xlarge, OS: Red Hat Enterprise Linux Server 7.9, 4 X Intel® Xeon® Platinum 8370C CPU @ 2.80GHz, 2 threads/core, 2 cores/socket, 1 socket. SW Configuration: Config 1 (Pandas, Scikit-learn), Config 2 (Intel® AI Analytics Toolkit v 2021.4.1, Intel® Extension for Scikit-learn, Intel® Distribution of Modin). Additional details at https://github.com/oneapi-src/intelligent-indexing. Results may vary.

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Results may vary. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates.

No product or component can be absolutely secure.

Your costs and results may vary.

Intel technologies may require enabled hardware, software or service activation.

Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

View source version on businesswire.com:https://www.businesswire.com/news/home/20220712005386/en/

CONTACT: Rebecca Want

1-650-919-4595

becky.want@ketchum.com

KEYWORD: UNITED STATES NORTH AMERICA CALIFORNIA

INDUSTRY KEYWORD: APPS/APPLICATIONS TECHNOLOGY MOBILE/WIRELESS SECURITY OTHER TECHNOLOGY SOFTWARE INTERNET DATA MANAGEMENT ARTIFICIAL INTELLIGENCE

SOURCE: Intel Corporation

Copyright Business Wire 2022.

PUB: 07/12/2022 12:30 PM/DISC: 07/12/2022 12:32 PM

http://www.businesswire.com/news/home/20220712005386/en

Tue, 12 Jul 2022 04:38:00 -0500 en text/html https://www.joplinglobe.com/region/national_business/intel-releases-open-source-ai-reference-kits/article_cee4e7b8-e1aa-5d28-9ecf-a5099dd10d96.html