Every topic of the 250-428 exam is covered in the PDF download

If you are concerned about how to pass your Symantec 250-428 exam on the first attempt, we recommend that, with the aid of the killexams.com Symantec Administration of Symantec Endpoint Protection 14 sample tests and actual questions, you learn how to enhance your knowledge. Our 250-428 questions and answers are complete and valid. The Symantec 250-428 PDF documents are an exact copy of the real exam questions and answers that you will see on the exam screen.

Exam Code: 250-428 Practice exam 2023 by Killexams.com team
250-428 Administration of Symantec Endpoint Protection 14

Exam ID : 250-428

Exam Title : Administration of Symantec Endpoint Protection 14

Questions: 65 - 75

Exam Duration: 90 minutes

Passing Score: 70%

Languages: English

The Symantec Endpoint Protection 14: Plan and Implement course is designed for the network, IT security, and systems administration professional in a Security Operations position tasked with planning and implementing a Symantec Endpoint Protection environment. This course covers how to architect and size a Symantec Endpoint Protection environment, install or upgrade the Symantec Endpoint Protection Manager (SEPM), benefit from a SEPM disaster recovery plan, and manage replication and failover. The class also covers how to deploy new endpoints and upgrade existing Windows, Mac, and Linux endpoints.

Course Objectives

By the completion of this course, you will be able to:

• Architect a Symantec Endpoint Protection Environment

• Prepare and deliver a successful Symantec Endpoint Installation

• Build a Disaster Recovery plan to ensure successful SEPM backups and restores

• Manage failover and replication

• Deploy endpoint clients


• Course environment

• Lab environment

Preparing and Delivering a Successful Symantec Endpoint Protection Implementation

• Architecting and Sizing the Symantec Endpoint Protection Environment

• Installing the SEPM

• Benefiting from a SEPM Disaster Recovery Plan

• Managing Replication and Failover

Discovering Endpoint Client Implementation and Strategies

• Implementing the Best Method to Deploy Windows, Mac, and Linux Endpoints

• Migrating a SEP 12.1.6 client to SEP 14

Symantec Endpoint Protection 14.x: Configure and Protect

The Symantec Endpoint Protection 14.x: Configure and Protect course is designed for the network, IT security, and systems administration professionals in a Security Operations position who are tasked with configuring optimum security settings for endpoints protected by Symantec Endpoint Protection 14. This class brings context and examples of attacks and tools used by cybercriminals.


• Course environment

• Lab environment

Securing Endpoints against Network-Based Attacks

Introducing Network Threats

 Describing how Symantec Endpoint Protection protects each layer of the network stack

 Discovering the tools and methods used by attackers

 Describing the stages of an attack

Protecting against Network Attacks and Enforcing Corporate Policies using the Firewall Policy

 Preventing network attacks

 Examining Firewall Policy elements

 Evaluating built-in rules

 Creating custom firewall rules

 Enforcing corporate security policy with firewall rules

 Blocking network attacks using protection and stealth settings

 Configuring advanced firewall features

Blocking Threats with Intrusion Prevention

 Introducing Intrusion Prevention technologies

 Configuring the Intrusion Prevention policy

 Managing custom signatures

 Monitoring Intrusion Prevention events

Introducing File-Based Threats

 Describing threat types

 Discovering how attackers disguise their malicious applications

 Describing threat vectors

 Describing Advanced Persistent Threats and a typical attack scenario

 Following security best practices to reduce risks

Preventing Attacks with SEP Layered Security

 Virus and Spyware protection needs and solutions

 Describing how Symantec Endpoint Protection protects each layer of the network stack

 Examining file reputation scoring

 Describing how SEP protects against zero-day threats and threats downloaded through files and email

 Describing how endpoints are protected with the Intelligent Threat Cloud Service

 Describing how the emulator executes a file in a sandbox and the machine learning engine's role and function

Securing Windows Clients

 Platform and Virus and Spyware Protection policy overview

 Tailoring scans to meet an environment's needs

 Ensuring real-time protection for clients

 Detecting and remediating risks in downloaded files

 Identifying zero-day and unknown threats

 Preventing email from downloading malware

 Configuring advanced options

 Monitoring virus and spyware activity

Securing Mac Clients

 Touring the SEP for Mac client

 Securing Mac clients

 Monitoring Mac clients

Securing Linux Clients

 Navigating the Linux client

 Tailoring Virus and Spyware settings for Linux clients

 Monitoring Linux clients

Controlling endpoint integrity and compliance

Providing Granular Control with Host Integrity

 Ensuring client compliance with Host Integrity

 Configuring Host Integrity

 Troubleshooting Host Integrity

 Monitoring Host Integrity

Controlling Application and File Access

 Describing Application Control and concepts

 Creating application rulesets to restrict how applications run

 Monitoring Application Control events

Restricting Device Access for Windows and Mac Clients

 Describing Device Control features and concepts for Windows and Mac clients

 Enforcing access to hardware using Device Control

 Discovering hardware access policy violations with reports, logs, and notifications

Hardening Clients with System Lockdown

 What is System Lockdown?

 Determining whether to use System Lockdown in Whitelist or Blacklist mode

 Creating whitelists and blacklists

 Protecting clients by testing and implementing System Lockdown

Enforcing Adaptive Security Posture

Customizing Policies based on Location

 Creating locations to ensure the appropriate level of security when logging on remotely

 Determining the criteria and order of assessment before assigning policies

 Assigning policies to locations

 Monitoring locations on the SEPM and SEP client

Managing Security Exceptions

 Creating file and folder exceptions for different scan types

 Describing the automatic exclusion created during installation

 Managing Windows and Mac exclusions

 Monitoring security exceptions

Symantec Endpoint Protection 14.x: Manage and Administer

The Symantec Endpoint Protection 14.x: Manage and Administer course is designed for the network, IT security, and systems administration professional in a Security Operations position tasked with the day-to-day operation of the SEPM management console. The class covers configuring server-client communication; domains, groups, and locations; and Active Directory integration. You also learn how Symantec Endpoint Protection uses LiveUpdate servers and Group Update Providers to deliver content to clients. In addition, you learn how to respond to incidents using monitoring and reporting.

Course Objectives

By the completion of this course, you will be able to:

• Describe how the Symantec Endpoint Protection Manager (SEPM) communicates with clients and make appropriate changes as necessary.

• Design and create Symantec Endpoint Protection group structures to meet the needs of your organization.

• Respond to threats using SEPM monitoring and reporting.

• Analyze the content delivery system (LiveUpdate).

• Reduce bandwidth consumption using the best method to deliver content updates to clients.

• Configure Group Update Providers.

• Create location aware content updates.

• Use Rapid Release definitions to remediate a virus outbreak.

Monitoring and Managing Endpoints

Managing Console Access and Delegating


• Creating administrator accounts

• Managing administrators and delegating responsibility

Managing Client-to-SEPM Communication

• Analyzing client-to-SEPM communication

• Restoring communication between clients and SEPM

• Verifying clients are online with the SEPM

Managing the Client Architecture and Active Directory Integration

• Describing the interaction between sites, domains, and groups

• Managing groups, locations, and policy inheritance

• Assigning policies to multiple locations

• Importing Active Directory Organizational Units

• Controlling access to client user interface settings

Managing Clients and Responding to Threats

• Identifying and verifying the protection status for all computers

• Monitoring for health status and anomalies

• Responding to incidents

Monitoring the Environment and Responding to Threats

• Monitoring critical log data

• Identifying new incidents

• Responding to incidents

• Proactively responding to incidents

Creating Incident and Health Reports

• Reporting on your environment's security status

• Reporting on the health of your environment

Enforcing Content Updates on Endpoints using the Best Method

Introducing Content Updates using LiveUpdate

 Describing the LiveUpdate ecosystem

 Configuring LiveUpdate sources

 Troubleshooting LiveUpdate

 Examining the need for an internal LiveUpdate Administration server

 Describing the high-level steps to configure an internal LiveUpdate server

Analyzing the SEPM Content Delivery System

 Describing content updates

 Configuring LiveUpdate on the SEPM and clients

 Monitoring a LiveUpdate session

 Managing content on the SEPM

 Monitoring content distribution for clients

Managing Group Update Providers

 Identifying the advantages of using group update providers

 Adding group update providers

 Adding multiple and explicit group update providers

 Identifying and monitoring group update providers

 Examining group update provider health and status

Configuring Location Aware Content Updates

 Examining location awareness

 Configuring location aware content updates

 Monitoring location aware content updates

Managing Certified and Rapid Release Definitions

 Managing Certified SEPM definitions from Symantec Security Response

 Managing Certified Windows client definitions from Symantec Security Response

 Managing Rapid Release definitions from Symantec Security Response

 Managing Certified and Rapid Release definitions from Symantec Security Response for Mac and Linux clients

 Using static definitions in scripts to download content

Symantec's Altiris Client Management Suite Remotely Manages Classroom Technology

Schools find that lifecycle management software lets them do more with less.

The I.T. staff at the Jurupa Unified School District in Riverside, Calif., knew they had to find a better way to manage the district’s 4,500 computers.

With a small staff of five technicians, it was simply impossible for the IT staff to make personal visits to the district’s 16 elementary schools, three middle schools and three high schools and keep up with all the patches and software updates.


“We’re spread out over 44 square miles, and with the state’s budget situation the way it is, hiring additional IT workers was not possible,” says Thomas Tan, Jurupa’s director of information and education technology. “We needed a smarter way to manage the network,” he explains.

Jurupa’s answer was the Altiris Client Management Suite from Symantec, lifecycle management software that helps the district remotely manage PC and network assets, software distribution, and configuration and patch management.

Unauthorized changes account for roughly 60 percent of system downtime.

Source: Enterprise Management Associates

In the past, Tan says, it would take the IT staff 30 minutes to two hours to deploy a new PC image, which is a new version of an operating system with the appropriate configurations, drivers and applications. Now the staff can deploy new images in minutes.

“We could not re-image every computer manually on location,” adds Bob Ford, the district’s network manager. “It takes us 20 minutes to get to the most distant high school, so by managing the images remotely from a central location we save time and fuel costs,” he explains.

Tan says the ROI case for Altiris is very strong: Including two weeks of training and consulting, Altiris cost the district slightly less than $100,000 to deploy.

“A full-time tech would cost us $60,000 a year, so Altiris pays for itself in less than two years,” says Tan.
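Tan's payback estimate is simple arithmetic. As a quick sanity check, here is a minimal sketch using the figures quoted in the article, and assuming (as the district does) that the suite offsets exactly one technician's salary:

```python
# Break-even sketch for the Altiris deployment described above.
# Figures are from the article; the assumption that the suite offsets
# exactly one full-time technician's salary is the district's own.
deployment_cost = 100_000        # suite plus two weeks of training and consulting
labor_saved_per_year = 60_000    # salary of one full-time technician

break_even_years = deployment_cost / labor_saved_per_year
print(f"Pays for itself in about {break_even_years:.1f} years")
```

At roughly 1.7 years, the math is consistent with Tan's "less than two years" claim.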

A Full View

Many school districts and organizations find that lifecycle management tools that let IT managers take a full view of the network are preferable to point solutions. Products such as Avocent’s LANDesk, Kace Networks’ KBOX, Novell’s ZENworks and Symantec’s Altiris are among the leading players. Prices vary based on the scope of a project, but most of the products cost in the five- to six-figure range for a 1,000-node deployment.

“We’ve gone from a silo-based view to a more holistic view of the network,” says Andi Mann, vice president, Enterprise Management Associates (EMA).

“As we develop new technologies, they tend to be integrated into the lifecycle management tools,” he says. “For example, we’re starting to see virtual applications management offered in many of the latest products.”

School districts struggling with tight budgets say deploying lifecycle management tools is the only way they can survive these challenging economic times. Here are some best practices they offer:

Deploy the tool as soon as possible.
Mike Roberts, technology director at Quinlan Independent School District in Quinlan, Texas, recommends not worrying about mastering all the features right away. Roberts, who deployed a KBOX appliance for his small six-school district, says IT managers are going to learn more about their networks than they ever imagined, so one approach is to just roll it out and use the training sessions to fill in the gaps.

“Very quickly, you’ll find out about license issues you didn’t know about and what’s actually running on each computer on the network,” Roberts says.

EMA’s Mann agrees that IT managers will derive immediate benefits from lifecycle management software. “For some, patch management may be the biggest problem to tackle first, while others may just start right off with inventory and asset control,” he says.

Conduct a thorough inventory of your software and licenses.
For those who are more comfortable with a formal plan, one of the best ways to start is to get control of your software licenses. It’s important to keep in mind that if an end user introduces pirated software onto the network, you could be held liable.

“Set up a baseline and run a variance report,” says EMA’s Mann. “Do it every week until you start seeing patterns as to what’s installed on your network.”
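The baseline-and-variance workflow Mann describes can be sketched with simple set arithmetic; the inventory data below is purely illustrative, not output from any of the products named in this article:

```python
# Hypothetical weekly variance report: diff the current software
# inventory against the approved baseline to spot unexpected installs.
baseline = {"Office 2007", "Acrobat Reader", "ZENworks Agent"}
this_week = {"Office 2007", "Acrobat Reader", "ZENworks Agent", "BitTorrent"}

unauthorized = this_week - baseline   # installed but never approved
missing = baseline - this_week        # approved but no longer found

print("Unauthorized installs:", sorted(unauthorized))
print("Missing software:", sorted(missing))
```

Run weekly, as Mann suggests, a report like this makes drift patterns visible over time.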

Karen Diggs, director of technology at North West Hendricks Schools in Lizton, Ind., says Novell’s ZENWorks 10 makes it very easy to get information on all the existing software licenses and the district’s hardware inventory.

“There are so many teachers using different software programs that it’s good to have ZENWorks keep track of all the software so we know what’s out there,” Diggs says.

Get your network backbone in order.
Especially for school districts that plan to use the software distribution feature, it’s important to have a robust network with the available bandwidth to handle remote trouble-shooting and software installs. Jurupa’s Ford says the district’s Altiris deployment was roughly in tandem with a gigabit network rollout.

“We now have 600 times more capacity than our existing T1 lines,” Ford says. “In a previous life, we would bring the Altiris servers to the site. Now we can do all the management from a centralized location and have LAN-quality speeds throughout the network,” he explains.

Minimize variations and lock down user desktops.
EMA’s Mann advises IT managers to stay away from multiple images. The fewer the images, the less complex the network is and the fewer procedures the IT staff will have to run and manage, he says.

Along with minimal images, it’s also important to lock down user machines. This means preventing users from installing new programs, using external hard drives and accessing the control panel.

“By preventing user activities, you minimize changes, and there’s less of a chance the network will experience an unauthorized change,” he explains.

Set up a test lab.
Quinlan’s Roberts warns that remote software distribution can become a nightmare if it’s not managed properly. He recommends setting up a test lab to run remote software installs.

“The last thing you want to do is blast an install through your management system of software that doesn’t work,” Roberts says.

Diggs of North West Hendricks Schools is adamant that the best approach is to set up a test environment, configure based on the school’s specifications, take a snapshot and then run the test.

“Once you roll out the software, you don’t want to uninstall and re-image the machines,” she says. “Even with the automated tools, it’s still time consuming.”

Lifecycle Checklist

Ask these questions to determine if lifecycle management software is right for your school district:

  1. Do you know what you are dealing with in terms of software licenses and devices?
  2. Are you paying too much for software licenses?
  3. Can you upgrade applications with minimal disruption?
  4. Are all your systems up to date with the latest security patches?
  5. Are your IT technicians still making desktop or classroom visits?
  6. Can you quickly bring a new piece of hardware online?
Sun, 04 Sep 2022 16:08:00 -0500 | Steve Zurier | https://edtechmagazine.com/k12/article/2009/04/symantecs-altiris-client-management-suite-remotely-manages-classroom-technology
Top-Down Approach in Business

A top-down approach in business describes a traditional organizational style that emphasizes the imperatives and vision of upper management. Company directives and goals flow down from the top to subordinates below. Most small businesses automatically use the top-down approach because they’re apt to have only two layers: owner and employees.

Wed, 18 Jul 2018 12:41:00 -0500 | https://smallbusiness.chron.com/topdown-approach-business-66018.html
Bitwarden releases free and open-source E2EE Secrets Manager


Bitwarden, the maker of the popular open-source password manager tool, has released ‘Secrets Manager,’ an end-to-end encrypted secrets manager for IT professionals, software development teams, and the DevOps industry.

The tool aims to act as a secure alternative to hard-coding secrets or sharing ‘.env’ files over email, offering flexibility and scalability while keeping secrets safe in the event of a data breach.

Those secrets typically include API keys, user authentication certificates, database passwords, SSL and TLS certificates, private encryption keys, SSH keys, etc.

These secrets are inadvertently exposed online following cyberattacks or publicly leaked due to poor security practices in the development lifecycle.

Last year, Symantec reported that over 1,800 apps for the iOS platform contained hard-coded AWS credentials, exposing their developers and users to varying risk levels.

The problem is so widespread that GitHub launched a system that would alert repository owners of misconfigurations leading to the exposure of secrets, and independent security researchers wrote open-source tools dedicated to scanning for secrets in publicly exposed AWS S3 storage buckets.
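The root problem is easy to illustrate. In the hypothetical sketch below, the hard-coded key ships with every copy of the source and every build artifact, while the environment-based variant keeps the secret out of the codebase so a secrets manager or CI pipeline can inject it at runtime (the key and variable names are illustrative):

```python
import os

# Risky: the key is baked into source control and every build artifact.
HARD_CODED_KEY = "AKIAEXAMPLEKEY12345"  # illustrative placeholder, not a real key

# Safer: inject the secret at runtime and keep it out of the codebase.
runtime_key = os.environ.get("AWS_SECRET_ACCESS_KEY", "")

print("secret configured at runtime:", bool(runtime_key))
```

Secrets managers automate the "safer" half of this pattern: storage, rotation, and delivery of the runtime value.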

Bitwarden Secrets Manager is poised to solve this problem by giving users an easy and secure way to retrieve, share, and deploy them across development teams while also supporting granular access permissions for individuals or groups.

Secrets Manager follows the same open-source approach as the password manager, so its codebase, CLI, SDK, and integration code are subject to scrutiny and also allow the flexibility of custom implementations.

The tool is offered in three tiers, depending on the needs of development teams, but there’s a free version supporting unlimited secrets, two users, three projects, and three service accounts.

The ‘Teams’ and ‘Enterprise’ tiers, which cost $6 and $12 per month respectively, raise those limits and offer additional business functionality such as support for FIDO2 authentication, automated provisioning, SSO integration, and more advanced administrative capabilities.

For now, Bitwarden Secrets Manager supports integration with GitHub Actions, but support for Kubernetes, Terraform, and Ansible integrations is expected to land in future versions.

Also, more languages are to be added to the tool’s SDK, and access management will be enhanced with additional options for individual secret assignments to specific accounts.

Wed, 23 Aug 2023 07:03:00 -0500 | Bill Toulas | https://www.bleepingcomputer.com/news/security/bitwarden-releases-free-and-open-source-e2ee-secrets-manager/
Review: Symantec Altiris Asset Management Suite 7.1

Managing vendor contracts, controlling hardware and software costs and optimizing IT assets to meet business requirements constitute critical chores for IT professionals. Symantec’s Altiris Asset Management Suite 7.1 aims to remove the hassle from IT asset management by giving enterprises the detailed information they need to make smart, informed decisions. Such tools are a necessity in today’s cost-conscious workplace.

Altiris Asset Management Suite (AMS) culls data from Symantec’s Client Management Suite (CMS) and Server Management Suite (SMS). AMS also integrates with similar Microsoft discovery tools so IT departments can tap installed investments.


IT departments often find it tough to tease out relationships between hardware, software, associated contracts, end users and user groups. Altiris AMS takes away the pain of guessing who has what system, who has what installed on their system and when their licenses are due for renewal.

The downloadable suite provides a wizard that assesses whether a system meets the minimum product requirements and, if prompted, will add any missing applications, a handy feature that saves the administrator time during installation. After I verified that my hardware met the minimum product requirements, AMS installed without a hitch, and the initial setup was painless.

Why It Works For IT

AMS’s user interface logically divides hardware and software. An application metering capability provides insight into which applications have been installed, which have been paid for and which are being used. Such information makes this a real cost-cutting tool for IT.

IT managers can also see the full cradle-to-grave lifecycle of an asset, including contracts of all types associated with hardware and software, purchase orders, service-level agreements, warranties and even retirement and disposal documentation.

Administrators can calculate total cost of ownership by factoring in discoverable data such as purchase costs, monthly maintenance fees or chargeback costs. It’s possible to customize AMS to include fields specific to a school district and also add non-discoverable information to an asset, such as an additional cost center.
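As a rough illustration of the TCO calculation described above (the function name, fields, and figures are hypothetical, not the actual AMS data model):

```python
# Illustrative total-cost-of-ownership calculation: purchase price plus
# recurring fees accumulated over the asset's service life.
def total_cost_of_ownership(purchase_cost, monthly_maintenance,
                            monthly_chargeback, months_in_service):
    recurring = (monthly_maintenance + monthly_chargeback) * months_in_service
    return purchase_cost + recurring

# A $1,200 laptop kept for 36 months with $25/month maintenance
# and a $10/month chargeback:
print(total_cost_of_ownership(1_200, 25, 10, 36))  # 2460
```

A tool like AMS performs the same roll-up automatically from discovered and manually entered cost fields.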

It’s also possible to designate who may view asset information by groups, which proves useful for security. For example, IT staff can limit asset visibility of a satellite campus to authorized people at that facility.


AMS is optimized for and depends heavily on its associated Symantec discovery tools, CMS and SMS. Figuring out these dependencies may take a bit of time and some experimentation. According to Symantec, most deployments consist of AMS coupled with CMS.

Product Requirements

Altiris Asset Management Suite requires the Symantec Management Platform, which includes the Symantec Management Console, Database, Notification Server and Asset Management Suite components. The Management Server must be installed with .NET Framework 3.5 SP1 or above, Internet Explorer 7.0 or above, SQL Server 2005 or SQL Server 2008 and Windows Server 2008 R2 x64. The Workflow Server needs either Windows Server 2003 or 2008, SQL Server 2005, Windows Server 2008 R2, Windows IIS and Microsoft .NET Framework 3.5.

Wed, 03 Nov 2021 17:56:00 -0500 | Alyson Behr | https://edtechmagazine.com/k12/article/2012/04/review-symantec-altiris-asset-management-suite-71
AFRY — An Integrated Single Source Of Truth Across IT, OT And ET

The convergence of information technology (IT) with operational technology (OT) and engineering technology (ET) is a crucial enabler for digital transformation in companies, particularly asset-intensive industries such as mining and manufacturing. We can see this in the partnership between AFRY, a leader in engineering design and advisory services, and Infosys, a leader in next-generation digital services and consulting.

This article focuses on AFRY’s process industry business and how the two companies partnered to deliver an IT-OT-ET integrated "single source of truth," assuring data integrity from the time of initial engineering and construction and across all the plant lifecycle stages, speeding the ability to ramp up to design capacity, eliminate delays due to engineering rework and costly design fixes, reduce unplanned downtime and Improve overall plant performance and productivity.

AFRY is a trailblazer in a domain that has traditionally been slow in fully embracing the latest technological advances. As Kai Vikman, COO at AFRY, noted, "Successful IT-OT-ET integration is a clear prerequisite to reap the benefits of digital manufacturing at scale." He also believes that this will be an obligation with the new European Data Act calling for more harmonized rules on fair access to and use of data.

Getting started: The handover from construction to operations

The life span of a process plant in industries such as industrial chemical manufacturing is typically more than 50 years. Building such a plant is a complex multistep process, and its success will rely heavily on effective collaboration among all stakeholders covering multiple disciplines from process engineering to mechanical engineering to architecture to electrical and instrumentation to piping and construction.

After the plant is complete, there is a handover of information from the builder to the plant operator. The handover may involve millions of documents from multiple engineering, procurement and construction (EPC) contractors. Transferring relevant data in a format usable by the plant's operations and maintenance teams is a challenge and a potential inhibitor that could add months or years to the schedule for making the plant fully operational.

The data involved in this process spans multiple disciplines. It might include the standard technical specifications, process and instrumentation and process flow diagrams, architectural designs and schematics, electrical circuit diagrams, instrumentation details or a 3-D model of the plan. Each of these elements adds to the complexity.

Leveraging global standards for data sharing and integration

IT-OT-ET integration plays a central role as a critical facilitator for many other systems and information integration. The key to success is information standardization, ensuring minimum effort to hand over information between parties. Infosys worked with AFRY to establish the standard guiding principles and class libraries from multiple industry standards and best practices, as no single standard could address the data integration challenges across the lifecycle. The approach uses ISO 15926 (“Integration of lifecycle data for process plants, including oil and gas production facilities”), a globally recognized standard for data sharing and integrating complex plant and project information.

ISO 15926’s Resource Description Framework (RDF) acts as a universal reference across disparate information systems, providing a neutral information layer with which any software application with an ISO 15926 adaptor can exchange data. It preserves the precise meaning of the data as it is being exchanged by referencing a data dictionary containing definitions of all objects and associated attributes within the plant. This ability for systems to exchange information with shared meaning by using universal standards is called semantic interoperability.

In a semantic implementation, data arrives pre-packaged with self-described context, and the receiving system can derive meaning from that data through a universal vocabulary. In this case, Infosys added data about the data (i.e., metadata) and linked each element to a controlled, shared vocabulary defined by ISO 15926.
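As a toy illustration of that idea (the identifiers below are invented stand-ins, not actual ISO 15926 classes), each data element travels as subject-predicate-object triples whose terms resolve against a shared dictionary:

```python
# Stand-in for the shared data dictionary that gives terms their meaning.
VOCAB = {
    "vocab:CentrifugalPump": "Pump that moves fluid with a rotating impeller",
    "vocab:designPressure": "Maximum pressure the item is designed for, in bar",
}

# Plant data packaged as triples that reference the shared vocabulary.
triples = [
    ("plant:P-101", "rdf:type", "vocab:CentrifugalPump"),
    ("plant:P-101", "vocab:designPressure", 16.0),
]

# A receiving system resolves each term against the dictionary, so the
# data arrives with its meaning attached rather than implied.
for subject, predicate, obj in triples:
    meaning = VOCAB.get(predicate) or VOCAB.get(str(obj), "")
    print(subject, predicate, obj, "->", meaning)
```

In a production exchange, these roles are played by RDF triples and the ISO 15926 reference data library rather than Python dictionaries.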

Other standards leveraged were the Capital Facilities Information Handover Specification (CFIHOS) and the DEXPI Initiative, promoting general data exchange standards for the process industry, with a current focus on Piping and Instrumentation diagrams. Infosys also used the OPC Unified Architecture (OPC UA) standard for operational technology integration for machine-to-machine communications for industrial automation.

Together with Infosys, AFRY has set up a sandbox environment integrating Virtual Site (a plant engineering system), SAP (the enterprise business planning system) and the Simatic platform (a plant automation system) to demonstrate new use cases. The structured data is implemented in an application server that binds the semantics to data based on the chosen standards, so that information can be retrieved efficiently by subsequent applications. The environment is currently set up on the Microsoft Azure platform but can be implemented on any on-premises or public cloud platform. The unique contribution of the AFRY-Infosys partnership is the standardization and harmonization of data using the interoperability layer aligned to global standards.

Overall benefits of a single integrated source of truth

By integrating plant lifecycle data across the IT, OT and ET domains, Infosys and AFRY were able to build a single source of truth across the plant lifecycle—a digital twin of the entire plant. The digital twin is an exact digital representation of the physical plant and accurately reflects the state of the plant, including all of the information about work processes for operations and maintenance and engineering information.

Sharing integrated plant engineering data in the correct format between EPC companies and the plant operator reduced delays, rework, conflicts and change orders during the construction phase. Multidisciplinary engineering data simplified conformance to regulatory, environmental, safety and compliance standards.

For operations, a single source of information available at the right time, place and format led to significant improvements in long-term lifecycle performance and optimization, maximizing plant yield and efficiency. Safety information management with standardized processes, augmented by safe working training, led to fewer safety accidents and less lost time due to injury.

Effective maintenance management reduced unplanned downtime and delivered a significant reduction in maintenance costs, thanks to well-organized maintenance data and procedures, easy-to-find technical data sheets and ready access to spare parts. Deploying engineering data management as a shared data source to support digital solutions such as predictive maintenance resulted in improved productivity per technician and reductions in mean time-to-repair.

Wrapping up

The challenges that AFRY is tackling are in a domain that has been hesitant and slow to embrace the latest technological advances fully. The result has been fragmentation, inadequate collaboration with suppliers and insufficient knowledge transfer from project to project. For the longest time, plant engineering data has resided in silos.

When a problem occurs in the plant, it is hard for engineers, operations and maintenance people to access information and identify the cause. When changes occur, it takes way too long to update the other systems that need to know about the change. The result is that the systems people rely on don't have accurate or sufficient data. The industry needs a radical approach. If digitalization is the primary goal, interoperability is the means to achieve it, and interoperability requires standardization.

Transactional and business process information (from IT), the monitoring and analysis of industrial assets (OT) and the use of engineering design data (ET) are all essential for the proper day-to-day function of a process plant. The incremental value of the AFRY-Infosys partnership comes from creating interoperability among these domains when the IT-OT-ET data is brought together in a single source of truth as the foundation for a digital enterprise.


Sun, 20 Aug 2023 08:53:00 -0500 | Patrick Moorhead | https://www.forbes.com/sites/patrickmoorhead/2023/08/20/afry---an-integrated-single-source-of-truth-across-it-ot-and-et/
Vanguard Speaks Out Against New Approach to Financial Risk

What You Need to Know

  • FSOC wants a way to step in quickly when nonbanks seem likely to crash the economy.
  • Better Markets says the sudden collapse of Silicon Valley Bank shows why FSOC needs the ability.
  • A bipartisan group of SEC and CFTC commissioners suggests that an entities-based approach could increase bailout pressure.

The Vanguard Group is warning the Financial Stability Oversight Council that applying the same one-size-fits-all risk management rules to all types of financial services companies could backfire by increasing the odds that different types of companies will crash at the same time.

Vanguard talks about the dangers of promoting regulatory “groupthink” in a letter it sent to FSOC last week, in response to FSOC efforts to regain the ability to take over specific, potentially risky nonbank financial services companies quickly.

Vanguard says FSOC seems to be moving toward replacing the different sets of solvency rules that regulators have developed for insurers, housing finance providers and other nonbank financial firms with one set of rules based on the Federal Reserve's regulations for banks.

That could “lead to increased correlation of risk management practices,” Vanguard says. “Relying on a single, even if Federal Reserve-approved, risk management approach may increase the likelihood of herding behavior. This is a suboptimal way to mitigate macroprudential risk.”
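Vanguard's herding argument can be illustrated with a toy simulation. This is purely illustrative (it is not Vanguard's analysis, and the parameters are made up): if every firm relies on the same risk model, a single model failure hits all of them at once, whereas diverse models make simultaneous failure vanishingly rare.

```python
import random

random.seed(0)
FIRMS, TRIALS = 10, 100_000
P_MODEL_FAILS = 0.05  # chance any one risk model misfires in a given year

def simultaneous_failures(shared_model):
    """Estimate the probability that every firm fails in the same year."""
    count = 0
    for _ in range(TRIALS):
        if shared_model:
            # One model, one coin flip: all firms fail together or not at all.
            fails = [random.random() < P_MODEL_FAILS] * FIRMS
        else:
            # Diverse models: an independent coin flip per firm.
            fails = [random.random() < P_MODEL_FAILS for _ in range(FIRMS)]
        count += all(fails)
    return count / TRIALS

print(simultaneous_failures(shared_model=True))   # roughly 0.05
print(simultaneous_failures(shared_model=False))  # near zero (about 0.05**10)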

What It Means

Vanguard and its competitors have helped you convince your clients that diversification is a good approach to retirement planning.

Now, they’re trying to sell FSOC on the idea that diversification might also help with financial system risk management.

The Background

Congress put the statutory language creating FSOC in the Dodd-Frank Act, in an effort to keep the kind of complicated, previously obscure financial system problems that nearly crashed the world financial system in 2008 from cropping up in the future.

U.S. Treasury Secretary Janet Yellen is the chair of FSOC.

FSOC also includes heads of agencies such as the Federal Reserve Board and the U.S. Securities and Exchange Commission, a voting member with insurance expertise, the head of the Treasury Department’s Federal Insurance Office, and a representative from the National Association of Insurance Commissioners.

Federal bank regulators already had the power to swoop in and manage risk when a bank seemed likely to fail. One of FSOC’s goals was to find a way to identify nonbanks as “systemically important financial institutions” (SIFIs) and to give the Federal Reserve Board the ability to apply some of the same discipline it applied to banks to nonbank SIFIs.

FSOC began by taking an aggressive approach to identifying SIFIs. The designated companies quickly escaped their SIFI designations by taking steps such as restructuring their operations or going to court.

In 2019, FSOC agreed to back away from aggressive SIFI designation efforts; defer to nonbanks’ primary regulators, when possible; and to emphasize the regulation of potentially risky activities rather than oversight over specific companies.

Originally, FSOC was going to set a June 27 deadline for comments. It then responded to commenters’ requests for more time by pushing the deadline back to July 28.

Mon, 31 Jul 2023 17:55:00 -0500 | https://www.thinkadvisor.com/2023/07/31/vanguard-speaks-out-against-new-approach-to-financial-risk/
Gallstones -- Approach to Medical Management

Oral Bile-Acid Treatment

Successful dissolution of gallstones by the oral administration of bile-acid mixtures was reported almost 70 years ago.[22] It was, however, only in the 1970s that this form of therapy was tested on a larger scale.[23,24,25] Initially, CDCA was used,[26] but due to a dose-dependent increase in aminotransferases, an increase in serum low-density lipoprotein cholesterol, and the development of bile salt-induced diarrhea, the treatment raised concerns. Because the more hydrophilic UDCA appeared to be as effective in gallstone dissolution but was practically devoid of side effects, it rapidly replaced the use of CDCA.[27,28]

The idea behind oral administration of CDCA and UDCA was to enrich the bile with these bile acids and thereby decrease cholesterol supersaturation and dissolve the stones. In fact, total bile salt concentration in bile did not change appreciably, and the decrease in cholesterol saturation was achieved primarily by a decrease in biliary cholesterol concentration. While both bile acids do decrease biliary cholesterol secretion,[29] they do so by different mechanisms. CDCA decreases cholesterol synthesis by inhibiting hepatic HMG-CoA reductase activity, whereas UDCA does not affect cholesterol synthesis but reduces intestinal cholesterol absorption.[30] CDCA also decreases hepatic bile-acid synthesis, but UDCA does not, and may even slightly increase it. There is also a difference between the 2 agents in terms of the physical chemical mechanism of gallstone dissolution: CDCA removes cholesterol from the stones by micellar solubilization, whereas UDCA does so primarily by formation of a liquid crystalline phase.[31,32]

In 1981, the National Cooperative Gallstone Study established the efficacy and safety of CDCA therapy.[26] Although gallstones could be dissolved by oral administration of CDCA, its efficacy was low. Less than half (40.8%) of patients responded to the highest dose tested (750 mg/day), and only 13.5% had complete dissolution of their stones within 2 years. Moreover, the response was slow. In over half, more than 9 months of treatment were needed for complete dissolution. Subsequently, the introduction of UDCA with a better safety profile and equal or better efficacy made bile salt litholysis more attractive.

The efficacy of CDCA is dose-dependent, but so are its side effects. Hence, a full dose of 15 mg/kg/day will induce diarrhea in up to 60% of patients, increase cholesterol levels in most patients, and cause hepatotoxicity in over 3%. In comparison, the recommended dose of UDCA (10-12 mg/kg/day) has essentially no side effects except occasional diarrhea. Therefore, monotherapy with CDCA cannot be recommended and has been completely replaced by UDCA therapy. Combination therapy with a reduced dose of both bile acids (5-8 mg/kg/day of each) has also been suggested, and may be as safe and efficient as full-dose UDCA monotherapy, as well as less costly.[33,34] UDCA monotherapy does, however, cause less diarrhea, and therefore it remains the treatment of choice today.

UDCA is usually given at a dose ranging between 8 and 15 mg/kg/day. Bedtime administration is preferable because it maintains hepatic bile-acid secretion rate overnight, thus reducing secretion of supersaturated bile and increasing the dissolution rate.[35,36] Dissolution is assessed by ultrasonography every 6 months. The expected dissolution rate is approximately a 1-mm decrease in stone diameter per month of treatment.[37] Treatment is usually continued for another 3 months after successful dissolution.

In up to 10% of patients, cholesterol gallstones acquire a surface calcification during treatment, rendering them nondissolvable and unsuitable for further therapy with bile acids.[38]

Not all patients are suitable candidates for oral dissolution therapy. Selection criteria are based on 3 main aspects: (1) patient, (2) gallbladder, and (3) stone characteristics. Patients with complications or with frequent and severe attacks of biliary colic are not suitable candidates. Patients with mildly symptomatic gallstones are the best candidates.[28,39] Patients with increased surgical risks or those who do not want to undergo surgery due to personal preferences should be considered for medical dissolution therapy. Asymptomatic patients are currently not treated. For medical therapy to be effective, the gallbladder needs to fill and function. Finally, only cholesterol stones can be dissolved by bile acids, and any significant calcification of the stones will render them nondissolvable.

Gallbladder function -- as well as cholesterol content of stones -- can be assessed by oral cholecystography.[40] After oral intake of an iopanoic acid derivative, a plain abdominal x-ray will show radiolucent cholesterol stones floating within a radiopaque contrast-filled gallbladder.[41] Gallbladder function can be further evaluated by measuring the emptying or ejection fraction following a fatty meal. Ultrasonography is the easiest and most precise method for detecting the presence of stones. Ultrasonography as well as cholescintigraphy may also be used to assess cystic duct patency and gallbladder function by measuring the ejection fraction after a fatty meal or cholecystokinin injection.[42,43,44] Some clinicians have even suggested that ultrasonography may predict stone composition prior to bile-acid or shock-wave lithotripsy treatment.[45,46] Several investigators have shown that the degree of stone calcification and suitability for bile-acid dissolution therapy can be accurately assessed by computed tomography (CT).[47,48,49] Hence, a combination of CT for stone composition and ultrasonography for gallbladder filling and function is also a good alternative for appropriate patient selection.

The success of oral dissolution treatment is defined as complete disappearance of gallstones as documented by oral cholecystography or, preferably, ultrasonography. This is achieved in 10% to over 80% of patients. The wide range of success reflects differences in patient selection, treatment duration, dosage, and ways of assessing success.[50] In a meta-analysis comprising almost 2000 patients treated until 1992, complete dissolution was achieved in 18.2% with CDCA, in 37.3% with UDCA, and in 62.8% with combination therapy.[51] In patients with small stones (< 10 mm), a dissolution rate of 48.5% was seen with UDCA therapy.

By employing more strict selection criteria, the efficacy of this treatment can be increased, but at the expense of the number of suitable candidates.[52] Thus, an optimal lean patient with small (< 5 mm) radiolucent stones (approximately 3% of all symptomatic patients) will have a 90% likelihood of complete dissolution within 6 months.[53] In contrast, patients with 5- to 10-mm radiolucent stones (approximately 12%) will have only a 50% chance of successful dissolution within 9 months.

Initially, extracorporeal shock-wave lithotripsy (ESWL) was introduced as an adjunct to bile-acid therapy.[54,55] The rationale was to use ESWL to fragment larger stones to increase dissolvable surface area, shorten treatment time, and increase the pool of patients suitable for bile-acid dissolution. With increasing experience it became clear that ESWL was actually an independent treatment modality.[56] After pulverizing gallstones to tiny sand-like fragments, there seems to be little if any benefit of or need for additional bile-acid therapy.[56,57,58]

A significant drawback of gallstone dissolution therapy is the possibility of gallstone recurrence. Stones will recur because the gallbladder is left in place and the underlying cause of gallstone formation has not been corrected. The recurrence rate is about 10% annually for up to 5 years,[59] and is often preceded by sludge formation.[60] Thereafter, recurrence is uncommon. Most stones recur without symptoms[50] and will respond to re-treatment with bile acids.[61,62] Maintenance therapy with low-dose UDCA has been reported to decrease the recurrence rate but it is costly.[63] Patients with multiple primary stones have an increased recurrence rate.[63] Additional factors that have been reported to predict recurrence after successful lithotripsy are obesity,[64] poor gallbladder emptying,[65] an increased deoxycholic acid pool,[66] and an apoE4 genotype.[67] Whether these factors are important after medical dissolution is unclear.

Because successful dissolution therapy is not inevitably followed by gallstone recurrence, there is a group of patients in whom the initial lithogenic process is transient. Pregnancy, rapid weight loss, and convalescence from abdominal surgery are recognized transient risk factors.[68,69] Trying to identify and characterize patients with transient lithogenicity for dissolution therapy is an important challenge for future studies.

Mon, 21 Aug 2023 12:00:00 -0500 | https://www.medscape.com/viewarticle/460309_4
A Non-Siloed Approach to Business Process Management

As companies move to “blended” BPR, which seeks both long- and short-term benefits, they realize the need to involve different people and think differently about these projects than before.

“People are always saying that IT needs to get closer to the business,” says Jerry Luftman, executive director and distinguished professor at the Stevens Institute of Technology. “But business needs to get closer to IT too. For years, MBA programs and executive training programs have focused on the wrong things, such as the technical elements that turn people off. Businesspeople don’t need to know how to write software. They need to understand governance, the strategic operational point of view, how to demonstrate value, and what their role is in a major IT initiative.”

Lisa Anderson, head of LMA Consulting Group, Inc., a firm that works on supply chain and inventory projects, says the most successful reengineering projects involve progressive IT leaders who partner with business units. “You need to find people in the IT departments who have strong business acumen,” she says. “You need people who will sit down and explain, in non-technical terms, how they can leverage new technologies like business intelligence to improve inventory levels, supply chains and other processes.”

This type of business-first partnering has become more commonplace during the recession. The evolving nature of BPR has also increased the need for speed. “The time frame for most new projects now is yesterday,” jokes Ron Wince, CEO of Guidon Performance Solutions, a business process consulting firm, who adds that there’s a heightened focus on change management.

“Change management has always been an afterthought,” Wince says. “Even when companies did think of it, they didn’t really ingrain change management into the decision-making process as they do now.”

Quick Change

Companies are incorporating change management into the business case for BPR projects. In fact, Wince recently worked on a project where the company had an executive coach as part of the decision-planning process, with the role of helping executives change their behavior to be in line with the new processes.

While having a coach participate is quite unusual, more stakeholders, such as HR departments, are coming to the table early on in BPR projects, since the pace of projects demands that the different units provide input in a parallel rather than sequential manner.

The danger is when change management is written into the business case, but a company still maintains old habits from past projects when the pace was more leisurely. For example, one hospital, which had been rolling out electronic medical records (EMR) for a long time, had always built into its project plans that departments would have two months to work out the kinks with new technology. However, the new pace of BPR didn’t allow for that luxury, and the ROI goals required the new processes be running efficiently within two weeks. Immediately, business units pushed back about how quickly they were supposed to change the way they worked.

Moving forward, the hospital implemented an “internal readiness team,” which worked with employees and hired an external HR consultant to do surveys of their concerns and opinions. Because of this forethought, the hospital was able to identify and address a lot of issues with adoption well ahead of time, reducing the time it took to implement the new project by 15 percent.

“Companies don’t have the luxury of only involving the C-level and financial people in process reengineering,” Wince says. “Everyone is feeling the pressure to be ready for the growth that is coming, and that means they have to look at their processes, and how they manage their processes, in a different way.”

Joe Mullich has received more than two dozen awards for writing about business, technology and other topics.


Mon, 19 Dec 2011 17:27:00 -0600 | https://www.wsj.com/ad/article/enterprisetech-management