Get 000-779 Exam practice questions containing 100% valid test questions.

Stressed about how to get through your IBM 000-779 exam? With the help of the proven killexams.com IBM 000-779 brain dumps questions and test simulator, you will learn how to put your knowledge to work. Most professionals begin preparing only when they realize they must sit an IT certification exam. Our practice questions are complete and to the point, and the IBM 000-779 dumps sharpen your knowledge and help you pass the certification test.

Exam Code: 000-779 Practice test 2022 by Killexams.com team
IBM Tivoli Workload Scheduler V8.2 Implementation
IBM Implementation test Questions
Oracle Certification Guide: Overview and Career Paths

Oracle offers a multitude of hardware and software solutions designed to simplify and empower IT. Perhaps best known for its premier database software, the company also offers cloud solutions, servers, engineered systems, storage and more. Oracle has more than 430,000 customers in 175 countries, about 138,000 employees and exceeds $37.7 billion in revenue.

Over the years, Oracle has developed an extensive certification program. Today, it includes six certification levels that span nine different categories with more than 200 individual credentials. Considering the depth and breadth of this program, and the number of Oracle customers, it’s no surprise that Oracle certifications are highly sought after.


Oracle certification program overview

Oracle’s certification program is divided into these nine primary categories:

  • Oracle Applications
  • Oracle Cloud
  • Oracle Database
  • Oracle Enterprise Management
  • Oracle Industries
  • Oracle Java and Middleware
  • Oracle Operating Systems
  • Oracle Systems
  • Oracle Virtualization

Additionally, Oracle’s credentials are offered at six certification levels:

  • Junior Associate
  • Associate
  • Professional
  • Master
  • Expert
  • Specialist

Most Oracle certification exams are proctored, cost $245, and contain a mix of scored and unscored multiple-choice questions. Candidates may take proctored exams at Pearson VUE, although some exams are offered at Oracle Testing Centers in certain locations. Some exams, such as Oracle Database 12c: SQL Fundamentals (1Z0-061) and Oracle Database 11g: SQL Fundamentals (1Z0-051), are also available non-proctored and may be taken online. Non-proctored exams cost $125. Check the Oracle University Certification website for details on specific exams.

Oracle Applications and Cloud certifications

The Oracle Applications certification category offers more than 60 individual credentials across 13 products or product groups, such as Siebel, E-Business Suite, Hyperion, JD Edwards EnterpriseOne and PeopleSoft. The majority of these certifications confer the Certified Implementation Specialist designation for a specific application, with various Certified Expert credentials also available. The Applications certifications are aimed at individuals with expertise in selling and implementing specific Oracle solutions.

Oracle’s newest certification category is Oracle Cloud, which covers Java Cloud as well as a number of Oracle Cloud certifications, including Oracle Database Cloud. Cloud certs fall into six sub-categories:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS), including Data Management, Application Development, Management Cloud and Mobile Cloud Service
  • Software as a Service (SaaS) – Oracle Customer Experience Cloud, including Service, Sales, Marketing and CPQ Cloud
  • Software as a Service (SaaS) – Oracle Enterprise Resource Planning Cloud, including Financials, Project Portfolio Management, Procurement and Risk Management Cloud
  • Software as a Service (SaaS) – Oracle Human Capital Management Cloud, including Workforce Rewards, Payroll, Talent Management and Global Human Resources Cloud
  • Software as a Service (SaaS) – Oracle Supply Chain Management Cloud, including Order Management, Product Master Data Management, Product Lifecycle Management, Manufacturing, Inventory Management, Supply Chain Planning and Logistics Cloud

These credentials recognize individuals who deploy applications, perform administration or deliver customer solutions in the cloud. Most are Associate and Certified Implementation Specialist credentials, with one Mobile Developer credential offered plus a professional-level Oracle Database Cloud Administrator.

Oracle Database certifications

Certifications in Oracle’s Database category are geared toward individuals who develop or work with Oracle databases. There are three main categories: Database Application Development, MySQL and Oracle Database.

Note: Oracle Database 12c was redesigned for cloud computing (and is included in both the Cloud and Database certification categories). The current version is Oracle Database 12c R2, which contains additional enhancements for in-memory databases and multitenant architectures. MySQL 5.6 has been optimized for performance and storage, so it can handle bigger data sets.

Whenever a significant version of either database is released, Oracle updates its certification exams over time. If an exam isn’t available for the latest release, candidates can take a previous version of the exam and then an updated exam when it becomes available. Though MySQL 5.6 certifications and exams are still available for candidates supporting that version, the new MySQL 5.7 certification track may be more appropriate for those just starting on their MySQL certification journeys.

Oracle currently offers the Oracle Database Foundations Certified Junior Associate, Oracle Certified Associate (OCA), Oracle Certified Professional (OCP), Oracle Certified Master (OCM), Oracle Certified Expert (OCE) and Specialist paths for Oracle Database 12c. In addition, Oracle offers the OCA credential for Oracle Database 12c R2 and an upgrade path for the OCP credential. Because many of these certifications are also popular within the Oracle Certification Program, we provide additional exam details and links in the following sections.

Oracle Enterprise Management Certifications

The Oracle Enterprise Manager Certification path offers candidates the opportunity to demonstrate their skills in application, middleware, database and storage management. The Oracle Enterprise Manager 12c Certified Implementation Specialist exam (1Z0-457) certifies a candidate’s expertise in physical, virtual and cloud environments, as well as design, installation, implementation, reporting, and support of Oracle Enterprise Manager.

Oracle Database Foundations Certified Junior Associate

The Oracle Database Foundations Certified Junior Associate credential targets those who’ve participated in the Oracle Academy through a college or university program, computer science and database teachers, and individuals studying databases and computer science. As a novice-level credential, the Certified Junior Associate is intended for individuals with limited hands-on experience working on Oracle Database products. To earn this credential, candidates must pass the Oracle Database Foundations exam (1Z0-006), a novice-level exam.

Oracle Certified Associate (OCA) – Oracle Database 12c Administrator

The OCA certification measures the day-to-day operational management database skills of DBAs. Candidates must pass a SQL exam and another on Oracle Database administration. Candidates can choose one of the following SQL exams:

  • Oracle Database 12c SQL (1Z0-071)
  • Oracle Database 12c: SQL Fundamentals (1Z0-061) NOTE: This exam will be retired on November 30, 2019.

Candidates must also pass the Oracle Database 12c: Installation and Administration (1Z0-062) exam.

Oracle Certified Associate – Oracle Database 12cR2 Administrator

To earn the Oracle Database 12cR2 OCA credential, candidates must first earn either the Oracle Database SQL Certified Associate, Oracle Database 11g Administrator Certified Associate, or the Oracle Database 12c Administrator Certified Associate. In addition, candidates are required to pass the Oracle Database 12cR2 Administration exam (1Z0-072).

Oracle Certified Professional (OCP) – Oracle Database 12c Administrator

The OCP certification covers more advanced database skills. You must have the OCA Database 12c Administrator certification, complete the required training, submit a course submission form and pass the Oracle Database 12c: Advanced Administration (1Z0-063) exam.

Professionals who possess either the Oracle Database 11g Administrator Certified Professional or Oracle Database 12c Administrator Certified Professional credential may upgrade to the Oracle Database 12cR2 Administration Certified Professional credential by passing the Oracle DBA upgrade exam (1Z0-074).

Oracle Certified Master (OCM) – Oracle Database 12c Administrator

To achieve OCM Database 12c Administrator certification, you must have the OCP Database 12c Administrator certification, complete two advanced courses, pass the Oracle Database 12c Certified Master exam (12cOCM), complete the course submission form, and submit the Fulfillment Kit request.

Oracle also offers the Oracle Database 12c Maximum Availability Certified Master certification, which requires three separate credentials: the Oracle Database 12c Administrator Certified Master; Oracle Certified Expert, Oracle Database 12c – RAC and Grid Infrastructure Administration; and Oracle Certified Expert, Oracle Database 12c – Data Guard Administration.

Oracle Certified Expert (OCE) – Oracle Database 12c

The OCE Database 12c certifications include Maximum Availability, Data Guard Administrator, RAC and Grid Infrastructure Administrator, and Performance Management and Tuning credentials. All these certifications involve prerequisite certifications. Performance Management and Tuning takes the OCP Database 12c as a prerequisite, and the Data Guard Administrator certification requires the OCP Database 12c credential. The RAC and Grid Infrastructure Administrator certification provides the most flexibility, allowing candidates to choose from the OCP Database 11g, OCP Database 12c, or Oracle Certified Expert – Real Application Clusters 11g and Grid Infrastructure Administration.

Once the prerequisite credentials are earned, candidates can then achieve Data Guard Administrator, RAC and Grid Infrastructure Administrator or Performance Management and Tuning by passing one exam. Achieving OCP 12c plus the RAC and Grid Infrastructure Administration and Data Guard Administration certifications earns the Maximum Availability credential.

Oracle Database Certified Implementation Specialist

Oracle also offers three Certified Implementation Specialist credentials: the Oracle Real Application Clusters 12c, Oracle Database Performance and Tuning 2015, and Oracle Database 12c. Specialist credentials target individuals with a background in selling and implementing Oracle solutions. Each of these credentials requires candidates to pass a single exam to earn the designation.

Oracle Industries certifications

Oracle Industries is another sizable category, with more than 25 individual certifications focused on Oracle software for the construction and engineering, communications, health sciences, insurance, tax and utilities industries. All these certifications recognize Certified Implementation Specialists for the various Oracle industry products, which means they identify individuals proficient in implementing and selling industry-specific Oracle software.

Oracle Java and Middleware Certifications

The Java and Middleware certifications span several subcategories, such as Business Intelligence, Application Server, Cloud Application, Data Integration, Identity Management, Mobile, Java, Oracle Fusion Middleware Development Tools and more. Java and Middleware credentials represent all levels of the Oracle Certification Program – Associate, Professional and so on – and include Java Developer, Java Programmer, System Administrator, Architect and Implementation Specialist.

The highly popular Java category has certifications for Java SE (Standard Edition) and for Java EE (Enterprise Edition) and Web Services. Several Java certifications that require a prior certification accept either the corresponding Sun or Oracle credential.

Oracle Operating Systems certifications

The Oracle Operating Systems certifications include Linux and Solaris. These certifications are geared toward administrators and implementation specialists.

The Linux 6 certifications include OCA and OCP Linux 6 System Administrator certifications, as well as an Oracle Linux Certified Implementation Specialist certification. The Linux 6 Specialist is geared to partners but is open to all candidates. Both the Linux OCA and Specialist credentials require a single exam. To achieve the OCP, candidates must first earn either the OCA Linux 5 or 6 System Administrator or OCA Linux Administrator (now retired) credential, plus pass an exam.

The Solaris 11 certifications include the OCA and OCP System Administrator certifications plus an Oracle Solaris 11 Installation and Configuration Certified Implementation Specialist certification. The OCA and OCP Solaris 11 System Administrator certifications identify Oracle Solaris 11 administrators who have a fundamental knowledge of and base-level skills with the UNIX operating system, commands, and utilities. As indicated by its name, the Implementation Specialist cert identifies intermediate-level implementation team members who install and configure Oracle Solaris 11.

Oracle Systems certifications

Oracle Systems certifications include Engineered Systems (Big Data Appliance, Exadata, Exalogic Elastic Cloud, Exalytics, and Private Cloud Appliance), Servers (Fujitsu and SPARC) and Storage (Oracle ZFS, Pillar Axiom, Tape Storage, Flash Storage System). Most of these certifications aim at individuals who sell and implement one of the specific solutions. The Exadata certification subcategory also includes Oracle Exadata X3, X4 and X5 Expert Administrator certifications for individuals who administer, configure, patch, and monitor the Oracle Exadata Database Machine platform.

Oracle Virtualization certifications

The Virtualization certifications cover Oracle Virtual Machine (VM) Server for X86. This credential is based on Oracle VM 3.0 for X86, and recognizes individuals who sell and implement Oracle VM solutions.

The Oracle VM 3.0 for x86 Certified Implementation Specialist certification aims at intermediate-level team members proficient in installing OVM 3.0 Server and OVM 3.0 Manager components, discovering OVM Servers, configuring network and storage repositories and more.

The sheer breadth and depth of Oracle’s certification program creates ample opportunities for professionals who want to work with Oracle technologies, or who already do and want their skills recognized and validated. Although there are many specific Oracle products in which to specialize in varying capacities, the main job roles include administrators, architects, programmers/developers and implementation specialists.

Every company that runs Oracle Database, Oracle Cloud, or Oracle Linux or Solaris needs qualified administrators to deploy, maintain, monitor and troubleshoot these solutions. These same companies also need architects to plan and design solutions that meet business needs and are appropriate for the specific environments in which they’re deployed, indicating that the opportunities for career advancement in Oracle technologies are abundant.

Job listings and hiring data indicate that programmers and developers continue to be highly sought-after in the IT world. Programming and development skills are some of the most sought-after by hiring managers in 2019, and database administration isn’t far behind. A quick search on Indeed results in almost 12,000 hits for “Oracle developer,” which is a great indication of both need and opportunity. Not only do developers create and modify Oracle software, they often must know how to design software from the ground up, package products, import data, write scripts and develop reports.

And, of course, Oracle and its partners will always need implementation specialists to sell and deploy the company’s solutions. This role is typically responsible for tasks that must be successfully accomplished to get a solution up and running in a client’s environment, from creating a project plan and schedule, to configuring and customizing a system to match client specifications.

Oracle training and resources

It’s not surprising that Oracle has an extensive library of exam preparation materials. Check the Oracle University website (education.oracle.com) for hands-on instructor-led training, virtual courses, training on demand, exam preparation seminars, practice exams and other training resources.

A candidate’s best bet, however, is to first choose a certification path and then follow the links on the Oracle website to the required exam(s). If training is recommended or additional resources are available for a particular exam, Oracle lists them on the exam page.

Another great resource is the Oracle Learning Paths webpage, which provides a lengthy list of Oracle product-related job roles and their recommended courses.

Ed Tittel
Ed is a 30-year-plus veteran of the computing industry. He has worked as a programmer, technical manager, classroom instructor, network consultant and a technical evangelist for companies that include Burroughs, Schlumberger, Novell, IBM/Tivoli and NetQoS. He has written for numerous publications, including Tom’s IT Pro, and is the author of more than 140 computing books on information security, web markup languages and development tools, and Windows operating systems.

Earl Follis
Earl is also a 30-year veteran of the computer industry, who worked in IT training, marketing, technical evangelism, and market analysis in the areas of networking and systems technology and management. Ed and Earl met in the late 1980s when Ed hired Earl as a trainer at an Austin-area networking company that’s now part of HP. The two of them have written numerous books together on NetWare, Windows Server and other topics. Earl is also a regular writer for the computer trade press with many e-books, white papers and articles to his credit.

Managing Requirements Tracking, Implementation and Sign-off for Embedded Systems

INTRODUCTION

This document describes the issues faced when building hardware and software systems where the success of the project is dependent on requirements being fully supported and tested. Where the cost of failure is high, there is a greater need for a robust requirements sign-off capability. This particularly applies to systems where the financial cost of recalling a failing product is prohibitive and/or there is a high safety factor, which is typical of embedded systems.

The following represents an approach to achieving the above through a combination of a software solution, asureSign™ and associated best practice as defined by Test and Verification Solutions Ltd (TVS).

CURRENT INDUSTRY PRACTICE

Current best practice in requirements tracing stops at test definition. From that point the industry provides only a partially automated approach, and most software developers settle for a manual one. There is no tool that will automatically track the results of tests as they apply to requirements. Quite often companies export tests from their requirements management tool into a spreadsheet (or similar) and then record test results in the spreadsheet. They do not hold the test results for more than a few days, and as changes are made the test results are soon out-of-date. What is really required is the ability to automatically record test results against the requirements they satisfy, to generate management reports from those results, and to keep those results for future reference.

The hardware industry has developed numerous techniques to help verify design correctness, such as pseudorandom testing, functional coverage, assertions, formal verification and so on. But the usage of all these approaches also brings other problems. Quite often a particular requirement will be tested by a collection of approaches, and with hundreds (or thousands) of tests, functional cover points, properties and the like, complicated by the fact that some tests or cover points could target more than one requirement, it quickly becomes non-trivial to determine how well a particular requirement is progressing. Different tools also require different approaches to analyse their status, often with independent people in charge of them running independent flows. This makes it harder to see the full picture and to understand how an individual’s activity fits in relation to the project. Most companies address this by taking a number of days at the end of a project (when the pressure is greatest) to manually map all these approaches to project requirements. Inevitably this is time consuming and only provides valuable information at the end of a project. It also usually results in the identification of verification holes, leading to more work and another round of manual sign-off. Finding late bugs often has the same effect.

The software world has developed an array of tools for analysing source code and testing executable code associated with newly developed programmes and applications. Source control software provides a mechanism for defining versions of software and an associated history. Requirements management software enables the definition and tracing of user requirements. Bug tracking provides a simple process of listing and describing bugs and their status. However, these do not address the issue of ensuring that requirements have tests defined against them and that these tests are successfully completed.

THE asureSign APPROACH

The asureSign approach was identified during a UK-wide survey into verification carried out by TVS on behalf of the National Microelectronics Institute. The approach is equally relevant to both hardware and software development in solving the above challenges. It is derived from the needs expressed by Infineon Technologies AG and XMOS Ltd; both companies are designers and manufacturers of advanced semiconductor products and their associated software. XMOS undertook early development of a solution to the problems stated above and used it to sign off both their hardware and software products. That solution was subsequently taken over by TVS (under agreement with XMOS), and Infineon has contracted with TVS for development of a software product and has been a primary contributor in determining its functionality.

asureSign addresses the gap between the capture and tracing of functional requirements available in industry-leading products (such as DOORS from IBM and Reqtify from Geensoft) and the testing capability provided by a whole host of solutions. The industry has to date not addressed the need to ensure that for every functional requirement there is a test, to identify which tests have been specified but not written, and which tests have been written but not run. There has also been no simple and cohesive method for tracking over time how a project is developing with respect to every feature and requirement, and how these features relate to the tests that are used to measure their progress.

asureSign has been developed to address these industry shortcomings. It provides a solution that both supports management in delivering higher quality products and developers/testers in achieving more complete and robust development and testing procedures in support of their system development.

asureSign uses the flexibility of a relational database to track how a project is developing over time for every feature and requirement, and how these features relate to the tests that are used to measure their progress. The product tracks not only tests, but an array of information that can be used to measure progress: coverage, memory leaks, performance, etc. The asureSign database enables management to ask more complex questions, from the very high to the very low level:

  • reports on a range of key criteria associated with the sign-off of your functional requirements, including who ran the various tests and on what servers
  • reporting of incomplete test specifications, and of the tests specified, written and run on a project-wide basis
  • visibility into how requirements are progressing over time, with historical results and trends related back to the requirements
  • improved decision making due to the high-quality, real-time information on project status

The database also provides the ability to access real-time information on the progress of a test run, and the advantage of aggregated statistics about the tests: How long do they take to run? How much computing power do I need to run or verify a particular requirement?
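
To make the idea concrete, questions like these reduce to queries over a small relational schema. The sketch below builds a minimal requirements-and-results store in SQLite and answers one of the management questions above: which requirements have tests specified but never run. The schema, table names and data are purely hypothetical illustrations, not asureSign’s actual design.

    # Minimal sketch of a requirements/test-results repository (SQLite).
    # Schema and data are illustrative, not asureSign's actual design.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE requirement (id TEXT PRIMARY KEY, description TEXT);
    CREATE TABLE test (id TEXT PRIMARY KEY,
                       requirement_id TEXT REFERENCES requirement(id));
    CREATE TABLE test_run (test_id TEXT REFERENCES test(id),
                           run_at TEXT, passed INTEGER, server TEXT);
    """)
    db.executemany("INSERT INTO requirement VALUES (?, ?)", [
        ("REQ-1", "UART supports 115200 baud"),
        ("REQ-2", "Watchdog resets the core on hang"),
    ])
    db.executemany("INSERT INTO test VALUES (?, ?)", [
        ("T-1", "REQ-1"), ("T-2", "REQ-2"),
    ])
    db.execute("INSERT INTO test_run VALUES ('T-1', '2012-05-01', 1, 'rack-3')")

    # Management query: requirements whose tests have never been run.
    rows = db.execute("""
        SELECT r.id, r.description FROM requirement r
        JOIN test t ON t.requirement_id = r.id
        LEFT JOIN test_run tr ON tr.test_id = t.id
        GROUP BY r.id HAVING COUNT(tr.test_id) = 0
    """).fetchall()
    print(rows)   # [('REQ-2', 'Watchdog resets the core on hang')]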

For developers and testing teams, asureSign provides a structured and logical means to improve control:

  • visibility on early stages of test and verification, and the automatic tracking of test results and coverage (structural and functional) for a wide variety of verification and testing functions
  • bug tracking: when a test fails, your bug fixing system will be updated
  • plotting of historic results, e.g., Was this passing before? If so, what version of the source code was it using, and what changed between those two runs of the tests?
  • automatic relation of test results to the code that generated them


Figure 1: This shows the change over time for the correlation between requirements that have tests specified, written, run and passed

asureSign has a number of opportunities to provide automated links to products addressing key areas of support for other parts of the system development lifecycle.

  • Requirements Management: By linking to established products users can ensure that requirements are fully tested.
  • Configuration Management: By linking to a source control system, asureSign can provide a history of which versions were tested at what time.
  • Bug Fixing: By linking to bug fixing systems users can ensure that their bug fixing system will be updated with the latest test results.
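
Taken together, these links can be pictured as a small post-test hook that records each result against the source revision that produced it and notifies the bug tracker on failure. The sketch below is illustrative only; the use of git and the tracker endpoint are assumptions, not a description of asureSign’s actual connectors.

    # Hypothetical post-test hook: tie a result to the exact source
    # revision and push failures to a bug tracker. Illustrative only.
    import datetime
    import json
    import subprocess
    import urllib.request

    def current_revision() -> str:
        """Ask the source control system (git assumed) for the commit hash."""
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()

    def record_result(requirement_id: str, test_id: str, passed: bool) -> None:
        result = {
            "requirement": requirement_id,
            "test": test_id,
            "passed": passed,
            "revision": current_revision(),
            "run_at": datetime.datetime.utcnow().isoformat(),
        }
        # Append to the results store (a flat file here; a database in practice).
        with open("results.jsonl", "a") as fh:
            fh.write(json.dumps(result) + "\n")
        if not passed:
            # Hypothetical bug-tracker endpoint -- adapt to your own system.
            req = urllib.request.Request(
                "https://bugtracker.example.com/api/issues",
                data=json.dumps(result).encode(),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

    record_result("REQ-1", "T-1", passed=True)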

SUMMARY

asureSign provides embedded system developers and management with a controlled environment for managing the implementation, tracking and sign-off of requirements. Development teams can be certain that requirements have been implemented through the development and testing of associated programs.

Successful AI Examples in Higher Education That Can Inspire Our Future

Over the past few years, news of the success of data-fed virtual teaching assistants and smart enrollment counselor chatbots has had the higher education world abuzz with the possibilities inherent in using artificial intelligence on campus. 

Colleges and universities hope AI will help them offload time-intensive administrative and academic tasks, make IT processes more efficient, boost enrollment in a climate of decline and deliver a better learning experience for students. On some campuses, these improvements are already taking place. 

Scaling up AI deployment at universities will take time because of the costs involved, and some faculty members may be resistant to AI on campus because they worry it will put them out of their jobs.

The best way to convince potential stakeholders of the need for AI is to “opt for a problem-first approach,” suggests the Education Advisory Board, an education enrollment services and research company. 

“Market machine learning as a solution to strategic imperatives rather than just another flashy technology gimmick,” the company adds.

Another way to convince stakeholders is to highlight stories of successful AI deployments that also demonstrate tangible benefits. Let’s look at a few of those. 


How Georgia Tech Used AI to Unburden Harried Teaching Assistants

At the Georgia Institute of Technology, many of the students in a master’s-level AI class were unaware that one of their teaching assistants, Jill Watson, wasn’t human. (This was despite the clue in her name, which refers to IBM’s Watson.) 

The class’s approximately 300 students posted about 10,000 messages a semester to an online message board, a volume nearly impossible for a regular assistant to handle, according to The Wall Street Journal. The class’s professor, Ashok Goel, tells Slate that while “the number of questions increases if you have more students … the number of different questions doesn’t really go up.” So, he and his team created a system that could respond to those queries that were consistently repeated, and released Jill onto the message board. They populated Jill’s memory with tens of thousands of questions (and their answers) from past semesters.

Not only did most students not realize Jill was virtual, she was also among the most effective teaching assistants the class had seen, answering questions with a 97 percent success rate, according to Slate. Jill had “learned to parse the context of queries and reply to them accurately.” 
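
The underlying idea can be illustrated in a few lines of code. Jill was built on IBM’s Watson platform and her actual implementation is far more sophisticated, but a minimal retrieval-style assistant can be sketched as follows: match an incoming question against past question-and-answer pairs and reply only when the match clears a confidence threshold, deferring to a human otherwise. All data and the threshold below are hypothetical.

    # Minimal FAQ-retrieval assistant: answer only when confident,
    # otherwise defer to a human TA. All data here is illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    past_questions = [
        "When is the project proposal due?",
        "Can we use Python for the assignments?",
        "How is the final grade calculated?",
    ]
    past_answers = [
        "The proposal is due at the end of week 3.",
        "Yes, any language is fine as long as you document it.",
        "Grades are 60% projects, 40% exams.",
    ]

    vectorizer = TfidfVectorizer().fit(past_questions)
    question_matrix = vectorizer.transform(past_questions)

    def answer(query: str, threshold: float = 0.5):
        """Return a stored answer if the query closely matches a past
        question; return None so a human TA can take over otherwise."""
        scores = cosine_similarity(
            vectorizer.transform([query]), question_matrix)[0]
        best = scores.argmax()
        return past_answers[best] if scores[best] >= threshold else None

    print(answer("what language can we use for assignments?"))
    print(answer("will there be a guest lecture?"))  # None -> route to a human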

Jill was one of nine teaching assistants for the course, and her success didn’t mean all the assistants would lose their jobs. She couldn’t answer all of the questions, and more important, she couldn’t motivate students or help them with coursework. What Jill did was free up the human teaching assistants to do more meaningful work.

“Where humans cannot go, Jill will go. And what humans do not want to do, Jill can automate,” Goel tells DOGO News.


AI Freezes Summer Melt at Georgia State

In 2016, Georgia State University introduced an AI chatbot, Pounce, that reduced “summer melt” by 22 percent, which meant 324 additional students showed up for the first day of fall classes. “Summer melt” occurs when students who enroll in the spring drop out by the time school begins in the fall. Georgia State’s freshman gains came specifically from those students who had access to the chatbot in a randomized control trial, said the university in a statement.

How did this happen? Through smart text messaging.

The university already knew the advantages of communicating with students via text messages. It also was aware that its existing staff couldn’t possibly be burdened with texting answers to thousands of student queries, according to Campus Technology. It decided to partner with Boston-based AdmitHub, an education technology company that works on conversational AI technology powered by human expertise.

Almost 60 percent of Georgia State’s students are from low-income backgrounds, and many of them are the first in their families to attend college, so they need individual attention as well as financial aid, according to The Chronicle of Higher Education. Feeling confused by various required forms and not knowing which campus offices to go to for specific queries are among the reasons they don’t make it to the first day of classes in the fall. 

AdmitHub worked with Georgia State’s admissions and IT teams to identify these and other common obstacles to enrollment that students face, including financial aid applications, placement exams and class registration. Information and answers related to all of these subjects were fed into Pounce, and students could ask Pounce questions 24/7 via text messages on their smart devices.

In 2016, during the first summer of implementation in the randomized control trial, Pounce delivered more than 200,000 answers to questions asked by incoming freshmen. 

“Every interaction was tailored to the specific student’s enrollment task,” says Scott Burke, assistant vice president of undergraduate admissions at Georgia State, on the university’s website. “We would have had to hire 10 full-time staff members to handle that volume of messaging without Pounce.”

The university has enhanced and continued to use Pounce. It has also expanded the chatbot’s role to other student success initiatives. 


Tailored Instruction Meets Students’ Needs with AI

Universities aren’t just seeing declines in enrollment, they are also dealing with high dropout rates. Today’s college students need learning to be more engaging and personalized. Technology, especially AI, can help with both those issues. AI, fed with and trained by Big Data, can deliver a personalized learning experience, writes AI expert Lasse Rouhiainen in the Harvard Business Review. Professors can gain unique insights into the ways different students learn and provide suggestions on how to customize their teaching methods to their individual needs, notes Rouhiainen.

Further down the road, an AI-powered machine might even be able to read the expressions on students' faces to tell whether they are having trouble understanding lessons, according to Forbes.

Meanwhile, AI is already making learning more engaging on several campuses. 

IBM Research and Rensselaer Polytechnic Institute have partnered on a new approach to help students learn Mandarin. They pair an AI-powered assistant with an immersive classroom environment that makes students feel as though they are in a restaurant in China — or in a garden or a tai chi class — where they can practice speaking Mandarin with an AI chat agent.

IBM and Rensselaer call the classroom the Cognitive Immersive Room, and it was developed at the Cognitive and Immersive Systems Lab, a research collaboration between the two entities.

Challenges in Scaling Up the Use of AI and Training IT Staff

While such discrete university AI projects have been successful, they also demonstrate that such initiatives are still very much driven by human intelligence. 

For instance, Georgia Tech’s Jill, in her earliest version, was fed more than 40,000 posts from online discussion forums to enable her to answer queries and converse with students. Similarly, Georgia State’s success with Pounce didn’t come easily, notes the Education Advisory Board. 

“In addition to the cost of the chatbot’s development, Georgia State’s ten-person team of admissions counselors spent months teaching Pounce how to respond accurately to students’ questions, another task added to a demanding workload,” according to EAB.

Aside from the amount of work this takes, it also means these AI tools will only be as good as the data they’re fed. While for small projects this may be feasible to do, the challenge becomes enormous when rolling out campuswide AI tools that need to use massive amounts of student and institutional data. An AI tool to answer any and all student questions needs a “heavy lift” database, notes Timothy M. Renick, vice president for enrollment management and student success at Georgia State, in The Chronicle of Higher Education.

Campuses where AI is being used in nonacademic areas face other challenges. Their experiences suggest that managing and using volumes of data requires staff beyond IT teams to be trained to use data and AI tools.

For instance, the University of Iowa has connected many campus buildings to computer systems that use AI to monitor them for energy efficiency and any problems. That means staff at these facilities need more than mechanical skills; at the very least, they will need to become effective at incorporating computers and data into their workflows in ways they aren’t today. This means they either need to acquire IT skills or the university IT department needs to offer more support for these teams.

“There’s going to be a skills gap that we’re all thinking about,” Don Guckert, vice president for facilities management at the University of Iowa, tells The Chronicle of Higher Education.

DESIGN: Requirements Management in Medical Device Development


Originally published March 1996

Alan Davis and Dean Leffingwell

To managers, developers, quality assurance personnel, and others on the front line of the rapidly evolving medical device industry, the complexity of device technology seems to be growing almost exponentially. Today, medical instruments may make use of specialty analyzers to evaluate the action of chemical reagents, biological reagents, or both; they may incorporate complex electromechanical and robotic apparatus; or they may include electronic subsystems as sophisticated as any modern-day computer. Running such systems requires thousands of lines of software so complex that they would have been unthinkable just five years ago.

At the heart of this complexity, however, lies a fundamental question: what are the real requirements for such a system? After all, if the intended behavior of a system is not fully understood, how can its labeling claims of efficacy and safety be assured? How can the system be designed? How can the system's purpose be determined, or its quality measured? And how can a company predict what level of effort will be required to develop and produce the system?

Part management process and part engineering discipline, requirements management is a technique that can be used effectively to manage this increasing complexity.

REQUIREMENTS MANAGEMENT

Design and development are exacting endeavors. Mechanical tolerances are specified to within a few thousandths of an inch, electrical timing is specified in nanoseconds, and software is specified to every bit in applications containing thousands of lines of code. Despite this precision, many projects suffer a nagging uncertainty: what exactly is this thing supposed to do?

Requirements management is defined as a systematic approach to eliciting, organizing, documenting, and managing both the initial and the changing requirements of a system. A primary result of this effort is the development of one or more requirements specifications that define and document the complete external behavior of the system to be built.

A well-implemented program of requirements management creates a central information repository, which includes requirements, attributes of requirements, requirements-related status information, and other management information related to a company's environment. This repository can be not only a key link in a project's success, but a valuable corporate asset as well. Requirements and their associated attributes can evolve, be adapted, and be reused for subsequent development projects, lowering their cost. This repository not only defines the specifics of what the system will do, it assists management in determining relative priorities of system development activities and assessing the status of the program.

Some project engineers consider requirements management to be an unnecessary exercise. They would prefer to proceed directly to implementation. However, our experience suggests that effective requirements management can offer a variety of benefits that would otherwise be missing from the product development process. These include the following:

* Better definition and management of labeling claims. An up-front investment in requirements management can provide a stronger basis for the claims of efficacy and safety that the manufacturer will ultimately make for the device.

* Better control of the development project. Requirements creep and inadequate knowledge of the intended behavior of the system are commonplace in out-of-control projects. Requirements management can provide the development team with a clearer understanding of what is to be delivered, as well as when and why.

* Improved quality and end-user satisfaction. The fundamental measure of quality is, Does the device do what it is supposed to do? Higher quality can result only when end-users, developers, and test personnel have a common understanding of what must be built and tested.

* Reduced project costs and delays. Research shows that requirements errors are the most pervasive and most expensive errors to fix. Reducing the number of these errors early in the development cycle lowers the total number of errors and cuts project costs and the time to market.

* Improved team communications. Requirements management encourages the involvement of customers, so that the device meets their needs. A central repository provides a common knowledge base of project needs and commitments for the user community, management, analysts, developers, and test personnel.

* Easier compliance with regulations. FDA has a keen understanding of the requirements management problem. The most recent revision of the good manufacturing practices (GMP) regulation, with its proposed incorporation of design controls, provides the agency with a solid basis for auditing the requirements management process.

THE HIGH COST OF REQUIREMENTS ERRORS

There is strong evidence that effective requirements management leads to overall project cost savings. Requirements errors that remain undetected until test or deployment, for example, typically cost well over 10 times more to repair than other errors. Requirements errors also typically constitute over 40% of total errors in a software project. Small reductions in the number of such errors can save money by avoiding rework and scheduling delays.

There are few published data that quantify the cost of requirements errors in medical equipment, but the software industry has been accumulating evidence of such cost for some time. These data should be directly applicable to medical device software development, which is one of the most labor-intensive portions of device development.

Studies performed at GTE, TRW, and IBM measured the costs of errors occurring at various phases of the project life cycle.1 Although these studies were conducted independently of one another, they all reached roughly the same conclusion: Detecting and repairing an error at the requirements stage costs 5 to 10 times less than doing so at the coding stage, and 20 times less than doing so at the maintenance stage (see Table I). The later in the life cycle an error is discovered, the more likely it is that repair expenses will include both the cost to correct the offending error and the cost to correct additional investments that have been made in the error in later phases of product development. Typical investments include the cost to redesign and replace the code, rewrite documentation, and rework or replace software in the field.

In a study performed at Raytheon, it was reported that, on average, approximately 40% of the total project budget was spent on rework.2 Other studies confirm that for the majority of companies, rework amounts to 30­40% of total project costs. Because of the frequency, and the multiplier effects, of requirements errors, finding and fixing them consumes 70­85% of the total rework costs on an average project. Organizations like Raytheon have demonstrated that refining their development processes can lower their rework costs by up to 60%.3 While not all companies will experience such a dramatic improvement, even a small reduction in the number of requirements errors can generate significant savings.
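
Putting these figures together shows the leverage involved. A back-of-the-envelope calculation for a hypothetical $1 million project, using only the percentages cited above:

    # Back-of-the-envelope cost model using the figures cited above.
    # The $1M budget is hypothetical; the percentages come from the text.
    budget = 1_000_000
    rework = 0.40 * budget            # ~40% of total budget spent on rework
    req_rework_low = 0.70 * rework    # 70-85% of rework costs stem from
    req_rework_high = 0.85 * rework   # requirements errors

    print(f"Rework cost:              ${rework:,.0f}")
    print(f"Requirements-error share: ${req_rework_low:,.0f}-"
          f"${req_rework_high:,.0f}")
    # A 60% reduction in rework, the improvement Raytheon demonstrated:
    print(f"Savings at 60% reduction: ${0.60 * rework:,.0f}")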

But as high as they are, these estimates of the hard costs of requirements errors tell only part of the story. Intangible costs include features that could have been delivered had the project's resources not been devoted to rework, the increased potential for adverse incidents, and lost market share, along with the associated loss of revenues and profits. The sum of these costs demonstrates that companies cannot afford to ignore the benefits of improved requirements management.

THE VIEW FROM FDA

Under existing regulations, FDA's pre-production quality assurance guidelines provide the clearest insights into its views on requirements management.4 The document states, "Prior to the design activity, the design concept should be clearly expressed in terms of its characteristics....Once these characteristics are agreed upon ...they should be translated into written design specifications." Based on these guidelines, inspectors and reviewers today expect requirements documents not only to exist, but also to be accurate, complete, and unambiguous. When it is published later this year, FDA's revised GMP regulation will have a significant impact on medical equipment manufacturers because it will bring design-related activities within the scope of the regulation. These activities, referred to collectively as design control, are processes and procedures for documenting the design activities used to implement the labeling claims of the device. Once the revised regulation is in place, it seems likely that FDA will take a very active role in reviewing and evaluating a company's requirements management activities.

ELICITING REQUIREMENTS

The process of gathering and eliciting requirements for a new device, system, or application is one of the most exciting and important phases of device development. The goal of this phase is to gather all the requirements of the device. To be successful, these requirements should reflect a consensus not only within the company, but with prospective customers and users as well.

There are a variety of potential sources that can be used effectively in this process, including a company's existing documentation for a similar or predecessor device, 510(k) submissions from other companies or competitors marketing similar devices, experts from inside or outside the company who understand the application area, a company's marketing and sales department, and prospective users. Once these sources have been identified, the goal is to develop a systematic process for collecting data from them, synthesizing and collating the data, and then rationalizing and reducing the data. This last step is needed to remove conflicting or redundant requirements, and eliminate or annotate those requirements that are impractical based on current technology, cost constraints, or market factors.

To elicit requirements, device manufacturers can employ several techniques, either alone or in combination:

* Well-structured interviews can be highly effective in collecting requirements from experts, the sales and marketing department, and prospective users.

* Brainstorming is a structured yet creative technique for eliciting ideas, capturing them, and then subjecting them to objective criteria for evaluation.

* The domain where the device will be used can be a rich source of system requirements. Placing engineers and designers in this environment even for a day is a quick way to educate them about potential problems and issues.

* Structured workshops, managed by trained facilitators, are also an effective way to elicit input.

DEFINING REQUIREMENTS

Typically, requirements start as an abstraction, along the lines of, "We feel there is a patient need for a low-cost ambulatory infusion pump." As the process continues, the requirements become more specific, diverge, recombine in often new ways, and eventually emerge as a set of detailed requirements such as, "The pump will weigh less than 12 ounces," or "The pump will be able to operate for up to 24 hours on a single 9-V battery." As they become more detailed, requirements also become less ambiguous.

When the final requirements have been arrived at, the document that contains them is called a requirements specification. This specification is the basis for design, explaining to designers and testers what the system should do. It also must be communicated to and agreed upon by all relevant parties.

Good requirements specifications have the following attributes in common:5

* Clarity. If a requirement has multiple interpretations, the resulting product will probably not satisfy needs of users.

* Completeness. It may be impossible to know all of a system's future requirements, but all of the known ones should be specified.

* Consistency. A system that satisfies all requirements cannot be built if two requirements conflict.

* Traceability. The source of each requirement should be identified. The requirement may represent a refinement of another, more abstract requirement, or it may result from input from a meeting with a target user.

As long as requirements address external behaviors--as viewed by users or by other interfacing systems--then they are still requirements, regardless of their level of detail. A requirement becomes design information, however, once it attempts to specify the existence of particular subcomponents or their algorithms.

Most requirements specifications include auxiliary information that is not, by definition, a requirement. Such information may include introductory text, summary statements, tables, and glossaries. real requirements should therefore be highlighted by a unique font or some other identifier.

The Role of Hazard Analysis. Hazard analysis plays a unique role in ensuring the safety and efficacy of a medical device. As a technique, it takes into account design- related elements that can mitigate or eliminate hazards to the patient, as well as design elements that create hazards that must be mitigated elsewhere. When reduced to product documentation, the understanding that hazard analysis conveys is qualitatively different from that of a pure requirements document. Because requirements documents focus on what the product should do, rather than on how it should be done, they should not contain design-related information. Hazard analysis, on the other hand, is intended to apply knowledge of good design practices to the mitigation of potential hazards. In doing so, it necessarily creates new system requirements, and the documents that contain these requirements should therefore be considered to be at the highest level of the document hierarchy (see Figure 1). In this way, new requirements developed through hazard analysis can be traced into implementation and testing in the same way as other product requirements.

ORGANIZING DOCUMENTS

After feedback on requirements has been elicited, the next step is to organize the requirements into a document hierarchy, which will support the development and maintenance phases that follow. No hard-and-fast rules define the number and types of documents that will be necessary to manage the evolution of the product; it will depend in part on the level of sophistication of the project team, the number of subsystems, how critical the device is, the total number of requirements, and other factors.6

In many cases, a document hierarchy with no more than three levels will suffice. In the hierarchy shown in Figure 1, the document tree has been divided into three major branches according to the architectural decomposition of the product: software, hardware, and reagents and consumables. This tripartite division provides working groups in each area with specific requirements documents around which they can focus their development efforts. In this hierarchy, the three branches stem from the product requirements document, which is the source of all system requirements. This is typically the highest-level document that defines the claims for the device, and the document upon which the 510(k) or premarket approval is based. At the lowest level of the hierarchy, the documents are divided further into implementation documents, and individual-test documents and specific test protocols to be applied at the subsystem level.

THE SOFTWARE REQUIREMENTS SPECIFICATION

One of the more important requirements documents is the software requirements specification, which is a primary concern of the software developer. This document defines not only the complete external behaviors of the software system to be built, but also its nonbehavioral requirements. A number of standards and guidelines are available to help manufacturers create a requirements specification.7 Standards are by no means a cure-all, but they do provide checklists of things to address and can help to shorten the learning curve for new requirements writers and other project team members. The standard chosen should ensure accuracy, encourage consistency, and promote a short learning curve. Then, based on usage, the standard can be modified as necessary and made company-specific.

Documentation writers should attach labels to those portions of text, graphics, or sound objects that must be tested after implementation. Ideally, the real requirements will be left in situ rather than stored in multiple places. That way, they can be edited and maintained in the project documents even after they have been selected as individual requirements. This makes it easier to update project documentation as requirements change.
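
One simple way to picture this is a tagging convention in which each requirement carries an identifier in situ, from which a requirements index can be extracted mechanically. The convention and identifiers below are hypothetical, offered only as a sketch:

    # Hypothetical convention: requirements live in the spec text,
    # tagged in situ with an identifier such as "[SRS-042]".
    import re

    spec_lines = [
        "The pump enclosure shall be splash-proof. [SRS-040]",
        "This section describes the infusion subsystem in general terms.",
        "Overinfusion shall be prevented by redundant safety systems. [SRS-042]",
    ]

    pattern = re.compile(r"^(.*)\s*\[([A-Z]+-\d+)\]$")
    index = {}
    for line in spec_lines:
        m = pattern.match(line)
        if m:  # tagged line -> a testable requirement; untagged text is prose
            index[m.group(2)] = m.group(1).strip()

    print(index["SRS-042"])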

ASSIGNING ATTRIBUTES

All system requirements possess attributes. These provide a rich source of management information to help plan, communicate, and record a project's activities through its life cycle. Because the needs of each project are unique, its attributes should be selected accordingly. Specific requirements, for example, will have different implications for efficacy and patient safety. For an infusion pump, the requirement that "overinfusion shall be prevented through redundant safety systems" is more critical than one that states, "The case shall be painted gray."

Companies must therefore set priorities and make trade-offs, particularly in response to scheduling and budgetary constraints. By talking with customers and management, developers can improve their decision making by ranking requirements in order of importance. Status fields should be created to record key decisions and progress. When defining the project baseline, the terms proposed, approved, and incorporated are useful. As a project moves into development, in progress, implemented, and validated can be used to describe important milestones.

Another important attribute is ownership; that is, who is responsible for ensuring that a particular requirement is satisfied? It is also important to create a field for rationale; that is, why does a particular requirement exist? This field can either explain the rationale directly or refer to an explanation somewhere else. The reference, for example, could be to a page and line number of a product requirements specification or to a videotape of an important customer interview. And the dates on which the requirement was created or changed should also be recorded.

Many relationships can exist among requirements. Attribute fields, for example, can record:

* A more abstract requirement from which the present requirement emanated.

* A more detailed requirement that emanated from the present requirement.

* A requirement of which the present requirement is a subset.

* Requirements that are subsets of the present requirement.

In addition to the ones listed above, other common attributes include difficulty, stability, risk, security, version, and functional area. Regardless of the method used to track them, attributes should be easily customizable to adapt to the unique needs of each development team and each application. Manufacturers should pay particular attention to maintaining links between requirements and all products that result from them. These links help to determine the effect of any changes, as well as their development status, which can be recorded as an attribute of those downstream entities.
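
As an illustration of how these attributes might be recorded, a requirement entry could be modeled along the following lines. The field names and values are hypothetical examples drawn from the attributes discussed in this section, not a prescribed format:

    # Illustrative requirement record carrying the attributes discussed
    # above; field names are examples, not a mandated schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Requirement:
        req_id: str      # unique identifier, e.g. "PRD-007"
        text: str        # the requirement statement itself
        priority: str    # e.g. "critical" vs. "cosmetic"
        status: str      # proposed/approved/incorporated, then
                         # in progress/implemented/validated
        owner: str       # who ensures the requirement is satisfied
        rationale: str   # why it exists, or a reference to a source
        created: str     # date created
        changed: str     # date last changed
        parents: List[str] = field(default_factory=list)   # traces upward
        children: List[str] = field(default_factory=list)  # traces downward

    overinfusion = Requirement(
        req_id="PRD-007",
        text="Overinfusion shall be prevented through redundant safety systems.",
        priority="critical",
        status="approved",
        owner="systems engineering",
        rationale="Hazard analysis item HZ-3",
        created="1996-01-10", changed="1996-02-02",
        children=["SRS-042", "HRS-011"],
    )
    print(overinfusion.status)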

REQUIREMENTS TRACEABILITY

Requirements begin their existence when first elicited from customers or users. Once they have been captured, links must be maintained backward to their more abstract predecessors and forward to their more detailed successor requirements. Maintaining the ability to trace the path of these changes is a prerequisite to quality assurance and sound requirements management. Use of requirements traceability enables product development teams to attain a level of project control, quality assurance, and product safety that is difficult to achieve by any other means.

Traceability should be a key tool in project management, if only because ensuring full requirements test coverage is virtually impossible without some form of requirements traceability. FDA's "Reviewer Guidance for Computer Controlled Medical Devices Undergoing 510(k) Review" states that "testing requirements should be traceable to the system/software requirements and design."8 The proposed revised GMP regulation goes further, stating that "manufacturers must establish a documented program to ensure that design requirements are properly established, verified, and translated into design specifications, and that the design released to production meets the approved design specifications."9

In its simplest terms, requirements tracing demonstrates that the system does what it was supposed to do. The key benefits of this process include:

* Verification that all user needs have been implemented and adequately tested.

* Verification that there are no system behaviors that cannot be traced to a user requirement.

* Improved understanding of the impact of changing requirements.

A document hierarchy in the form of a diagram can be a useful tool for managing requirements traceability. Such a diagram graphically displays the interrelationships that exist among a project's documents, and therefore also among the requirements expressed in those documents. The chart shown in Figure 1, for example, indicates that there is a direct relationship between the product requirements document (the source of all product requirements) and the software requirements specification. Hence, all software-related product requirements should be satisfied by one or more software requirements, and, conversely, any software requirement may help to satisfy one or more product requirements. In this example, any product requirement that has no corresponding software, hardware, or reagents and consumables requirements will not be satisfied. By the same token, a software or other requirement that has no related product requirement is extraneous and should be eliminated. The document hierarchy enables project team members to visualize and check the completeness of requirements contained in all levels of a product's documentation.

In addition to using a document hierarchy, manufacturers should employ some system that will enable them to continuously identify relationships among items within the hierarchy. This can be done by embedding unique identifiers and electronic links within the document, or by using a separate spreadsheet or database that manages the links outside the document. The latest generation of requirements management tools automatically maintains traceability links.
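
As a rough sketch of the "separate spreadsheet or database" approach, the fragment below keeps trace links outside the documents and applies the completeness checks described for the document hierarchy: a product requirement with no downstream requirements is unsatisfied, and a downstream requirement with no parent is extraneous. All identifiers are hypothetical.

    # Trace links kept outside the documents: each product requirement maps
    # to the software/hardware requirements that help satisfy it.
    downstream = {
        "PRD-001": ["SRS-010", "SRS-011", "HRS-003"],
        "PRD-002": [],                    # no downstream items yet
    }
    software_requirements = {"SRS-010", "SRS-011", "SRS-099"}

    # A product requirement with no corresponding downstream requirements
    # will not be satisfied.
    unsatisfied = [rid for rid, kids in downstream.items() if not kids]

    # A software requirement with no related product requirement is
    # extraneous and should be eliminated.
    traced = {kid for kids in downstream.values() for kid in kids}
    extraneous = software_requirements - traced

    print(unsatisfied)   # ['PRD-002']
    print(extraneous)    # {'SRS-099'}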

CHANGE MANAGEMENT

Requirements traceability provides a methodical and controlled process for managing changes that inevitably occur as a device is developed and deployed to the field. Without traceability, every change would require project team members to review all documents on an ad hoc basis in order to determine what other elements of the project, if any, require updating. Because such a process would make it difficult to establish whether all affected components have been identified over time, changes to the system would tend to decrease its reliability and safety.

With traceability, change can be managed in an orderly fashion by following the relationships of product requirements and specifications through the document hierarchy. When a user requires changes, developers can quickly identify elements that may need to be altered, testers can pinpoint test protocols that may need to be revised, and managers can better determine the potential costs and difficulty of implementing the changes.
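
Viewed as data, this impact analysis is a simple graph traversal: starting from a changed requirement, follow the trace links downstream to collect every specification, design element, and test protocol that may need review. The sketch below assumes links are stored as parent-to-children mappings; commercial requirements tools maintain such links automatically.

    from collections import deque

    # Illustrative trace links: requirement -> downstream design/test items.
    links = {
        "PRD-001": ["SRS-010"],
        "SRS-010": ["DES-020", "TEST-101"],
        "DES-020": ["TEST-102"],
    }

    def impacted_by(changed):
        """Collect everything reachable downstream from a changed item."""
        seen = set()
        queue = deque([changed])
        while queue:
            for child in links.get(queue.popleft(), []):
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
        return seen

    # A change to PRD-001 flags one spec, one design item, two test protocols.
    print(impacted_by("PRD-001"))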

REQUIREMENTS REPORTING

A requirements repository gives managers a powerful tool for tracking and reporting project status. Critical milestones can be more easily identified, scheduling problems better quantified, and priorities and ownership kept visible. Querying the repository can help managers quickly retrieve key information such as how many requirements have been established for their device, how many of these might affect patient safety, whether all hazards have been identified and mitigated, the current status and schedule for incorporating requirements into the prototype, and the estimated cost of proposed changes.

High-level reports drawn from the requirements repository can also aid management reviews of product features. Requirements can be prioritized by customer need, by user safety considerations, or by how difficult or costly they will be. Such specialized reports can help managers allocate resources by focusing their attention on key project issues.
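
As a hedged sketch of such a query, if the repository were a simple table of records, a status report might be produced as follows (the field names are invented for illustration):

    repository = [
        {"id": "PRD-001", "safety_related": True,  "status": "validated"},
        {"id": "PRD-002", "safety_related": False, "status": "proposed"},
        {"id": "PRD-003", "safety_related": True,  "status": "in progress"},
    ]

    safety = [r for r in repository if r["safety_related"]]
    open_safety = [r for r in safety if r["status"] != "validated"]

    print(f"{len(repository)} requirements total; "
          f"{len(safety)} affect patient safety; "
          f"{len(open_safety)} safety-related item(s) not yet validated")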

CONCLUSION

Many device manufacturers carry the scars from projects that failed to meet expectations. It is common for projects to over-shoot their schedules by half, deliver less than originally promised, or be canceled before release. To keep pace with rising complexity and increased user demands, companies must become more sophisticated in the ways they develop, test, and manage their projects. Improved requirements management offers a first step in this direction.

A requirements management repository that includes a product's requirements, specifications, and attributes--and the links among them--can help manufacturers determine precisely what the product is supposed to do. It can be used to manage and control the device development process, improve team communications, improve quality, and, ultimately, ensure that the final product fully satisfies the needs of the customer.

REFERENCES

1. Davis A, Software Requirements, Objects, Functions, and States, Englewood Cliffs, NJ, Prentice Hall, 1993.

2. Dion R, "Process Improvement and the Corporate Balance Sheet," IEEE Software, July, pp 28–35, 1993.

3. Paulk M, Curtis B, et al., "Capability Maturity Model for Software, Version 1.1," SEI-93-TR-025, Pittsburgh, Software Engineering Institute, 1993.

4. "Preproduction Quality Assurance Planning: Recommendations for Medical Device Manufacturers," HHS Publication FDA 90-4236, Rockville, MD, FDA, Center for Devices and Radiological Health (CDRH), 1989.

5. Davis A, et al., "Identifying and Measuring Quality in Software Requirements Specifications," presented to the IEEE International Software Metrics Symposium, Baltimore, May 1993.

6. Wood B, and Pelnik T, "Managing Development Documentation," Med Dev Diag Indust, 17(5):107–114, 1995.

7. "IEEE Recommended Practice for Software Requirements Specifications," IEEE Std 830-1993, Piscataway, NJ, Institute of Electrical and Electronics Engineers, 1993.

8. "Reviewer Guidance for Computer Controlled Medical Devices Undergoing 510(k) Review," Rockville, MD, FDA, CDRH, 1989.

9. "Working Draft of the Current Good Manufacturing Practice (GMP) Final Rule," Rockville. MD, FDA, CDRH, Office of Compliance, July 1995.

Alan Davis is a professor of computer science and El Pomar chair of software engineering at the University of Colorado, Colorado Springs. Dean Leffingwell is president and CEO of Requisite, Inc. (Boulder, CO), a company specializing in requirements management software for medical devices.

SAP Application Services Market Is Dazzling Worldwide with Major Giants Atos, Deloitte, PwC, Cognizant

An extensive elaboration of the Global SAP Application Services Market, covering micro-level analysis by competitors and key business segments (2022-2030). The Global SAP Application Services study explores various segments such as opportunities, size, development, innovation, sales and the overall growth of major players. The study is a mix of qualitative and quantitative market data collected and validated mainly through primary data and secondary sources. Some of the major key players profiled in the study are SAP, NTT Data, Infosys, Atos, Deloitte, Accenture, Capgemini, Wipro, Tata Consultancy Services (TCS), IBM, Fujitsu, PwC, Cognizant, CGI, DXC Technology & EPAM.

Get free access to a sample report @: https://www.htfmarketreport.com/sample-report/4109006-2022-2030-report-on-global-sap-application-services-market

If you are involved in the industry, or intend to be, this study will give you a complete perspective. It is crucial that you keep your market knowledge up to date, segmented by Applications [BFSI, Manufacturing, Retail & CPG, Telecom & IT, Life Sciences & Healthcare], Product Types [Management Services, Implementation and Upgrades, Post-Implementation Services, SAP Hosting] and the major players in the business.
For more data or any query mail at [email protected]

Which market aspects are illuminated in the report?

Executive Summary: It covers a summary of the most vital studies, the Global SAP Application Services market growth rate, competitive circumstances, market trends, drivers and problems, as well as macroscopic indicators.

Study Analysis: Covers major companies, vital market segments, the scope of the products offered in the Global SAP Application Services market, the years measured and the study points.

Company Profile: Each firm profiled in this segment is screened based on its products, value, SWOT analysis, capabilities and other significant features.

Manufacturing by region: This Global SAP Application Services report offers data on imports and exports, sales, production and key companies in all studied regional markets.

Highlighted of Global SAP Application Services Market Segments and Sub-Segment:

SAP Application Services Market by Key Players: SAP, NTT Data, Infosys, Atos, Deloitte, Accenture, Capgemini, Wipro, Tata Consultancy Services (TCS), IBM, Fujitsu, PwC, Cognizant, CGI, DXC Technology & EPAM

SAP Application Services Market by Types: Management Services, Implementation and Upgrades, Post-Implementation Services & SAP Hosting

SAP Application Services Market by End-User/Application: BFSI, Manufacturing, Retail & CPG, Telecom & IT & Life Sciences & Healthcare

SAP Application Services Market by Geographical Analysis: North America, Europe, Asia-Pacific etc

For more queries about the SAP Application Services Market report, get in touch with us at: https://www.htfmarketreport.com/enquiry-before-buy/4109006-2022-2030-report-on-global-sap-application-services-market

The study is a source of reliable data on: market segments and sub-segments; market trends and dynamics; supply and demand; market size; current trends, opportunities and challenges; the competitive landscape; technological innovations; and value chain and investor analysis.

Interpretative Tools in the Market: The report integrates thoroughly examined and evaluated information on the prominent players and their market position using various descriptive tools. Analytical tools such as SWOT analysis, Porter's five forces analysis and return-on-investment analysis were used to examine the development of the key players performing in the market.

Key Developments in the Market: This section of the report incorporates the essential developments in the market, covering agreements, collaborations, R&D, new product launches, joint ventures and partnerships of leading participants operating in the market.

Key Points in the Market: The key features of this SAP Application Services market report include production, production rate, revenue, price, cost, market share, capacity, capacity utilization rate, import/export, supply/demand and gross margin. Key market dynamics plus market segments and sub-segments are covered.

Basic Questions Answered

*Who are the key market players in the SAP Application Services Market?
*Which major regions are expected to witness astonishing growth for different industries in the SAP Application Services Market?
*What are the regional growth trends and the leading revenue-generating regions for the SAP Application Services Market?
*What are the major product types of SAP Application Services?
*What are the major applications of SAP Application Services?
*Which SAP Application Services technologies will top the market in the next 5 years?

Examine the detailed index of the full research study at: https://www.htfmarketreport.com/reports/4109006-2022-2030-report-on-global-sap-application-services-market

Table of Content
Chapter One: Industry Overview
Chapter Two: Major Segmentation (Classification, Application and etc.) Analysis
Chapter Three: Production Market Analysis
Chapter Four: Sales Market Analysis
Chapter Five: Consumption Market Analysis
Chapter Six: Production, Sales and Consumption Market Comparison Analysis
Chapter Seven: Major Manufacturers Production and Sales Market Comparison Analysis
Chapter Eight: Competition Analysis by Players
Chapter Nine: Marketing Channel Analysis
Chapter Ten: New Project Investment Feasibility Analysis
Chapter Eleven: Manufacturing Cost Analysis
Chapter Twelve: Industrial Chain, Sourcing Strategy and Downstream Buyers

Buy the full research report of the Global SAP Application Services Market at: https://www.htfmarketreport.com/buy-now?format=1&report=4109006

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions like North America, Europe or Asia.

About Author:
HTF Market Report is a wholly owned brand of HTF Market Intelligence Consulting Private Limited. HTF Market Report, a global research and market intelligence consulting organization, is uniquely positioned to not only identify growth opportunities but also to empower and inspire you to create visionary growth strategies for the future, enabled by our extraordinary depth and breadth of thought leadership, research, tools, events and experience that assist you in making your goals a reality. Our understanding of the interplay between industry convergence, Mega Trends, technologies and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying the "Accurate Forecast" in every industry we cover, so our clients can reap the benefits of being early market entrants and can accomplish their "Goals & Objectives".

Contact US :
Craig Francis (PR & Marketing Manager)
HTF Market Intelligence Consulting Private Limited
Unit No. 429, Parsonage Road Edison, NJ
New Jersey USA – 08837
Phone: +1 (206) 317 1218
[email protected]
Connect with us at LinkedIn | Facebook | Twitter

IoT and industrial AI: Mining intelligence from industrial things
Here’s how to understand what industrial AI can do, how IoT feeds it, and how to start a pilot project of your own
By Renee Bassett

There is nothing "artificial" about the intelligence that can be gleaned from the detailed monitoring of machines, processes, and the people who interact with them. Ever since the time and motion studies of the efficiency experts of the early 1900s, industrial engineers have been turning real-time data into information and decisions that could improve productivity, efficiency, and profits. With the fourth industrial revolution upon us now, artificial intelligence (AI) technology is ready to go to work in ways that are not always obvious.

According to a Gartner Group forecast, The Business Value of Artificial Intelligence Worldwide, 2017-2025, AI and Internet of Things (IoT) "already work together in our daily lives without us even noticing. Think Google Maps, Netflix, Siri, and Alexa, for example. Organizations across industries are waking up to the potential. By 2022, more than 80 percent of enterprise IoT projects will have an AI component-up from less than 10 percent today" (2018).

The takeaway is clear, says data analytics software provider SAS: "If you're deploying IoT, deploy AI with it. If you're developing AI, think about the gains you can make by combining it with IoT. Either one has value alone, but they offer their greatest power when combined. IoT provides the massive amount of data that AI needs for learning. AI transforms that data into meaningful, real-time insight on which IoT devices can act."

AI and machine learning

Artificial intelligence uses a variety of statistical and computational techniques and encompasses a number of terms. Machine learning (ML), a subset of AI, identifies patterns and anomalies in data from smart sensors and devices without being explicitly programmed where to look. Over time, ML algorithms "learn" how to deliver more accurate results.
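
As a vendor-neutral sketch of this kind of pattern and anomaly detection, the fragment below flags sensor readings that deviate sharply from a recent rolling average; the data, window size, and threshold are invented for illustration:

    import statistics

    def anomalies(readings, window=20, threshold=3.0):
        """Yield (index, value) for readings more than `threshold` standard
        deviations away from the mean of the preceding `window` samples."""
        for i in range(window, len(readings)):
            recent = readings[i - window:i]
            mu = statistics.mean(recent)
            sigma = statistics.stdev(recent)
            if sigma and abs(readings[i] - mu) > threshold * sigma:
                yield i, readings[i]

    # A synthetic vibration trace: steady around 1.0, then a sudden spike.
    vibration = [1.0, 1.1, 0.9, 1.0] * 10 + [4.8]
    print(list(anomalies(vibration)))   # [(40, 4.8)]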

Because of this learning, "ML outperforms traditional business intelligence tools and makes operational predictions many times faster and more accurately than systems based on rules, thresholds, or schedules," according to SAS. "AI separates signal from noise, giving rise to advanced IoT devices that can learn from their interactions with users, service providers, and other devices in the ecosystem."

"The challenge is that people have not developed the level of trust in artificial intelligence and machine learning that they have in other technologies that automate tasks," says Oliver Schabenberger, COO and CTO of SAS. "People sometimes confuse automation with autonomy," he adds. But have no fear: "AI does not eliminate the need for humans, it just enables them to do their work more effectively," he says.


AI, around since the 1950s, is becoming a mainstream application as a result of the explosion in IoT data volume, high-speed connectivity, and high-performance computing. (Source: SAS)


Defining AI applications

Industrial AI can range from low-intelligence applications like automation to higher-end intelligence capable of decision making. It can also be controlled centrally or distributed across multiple machines. According to Gartner vice president and analyst Jorge Lopez, AI applications can be broken down into five levels of sophistication:

  • Reactors follow simple rules but can respond to changing circumstances within limits (such as basic drones).
  • Categorizers recognize types of things and can take simple actions to deal with them within a controlled environment (warehouse robots).
  • Responders serve the needs of others by figuring out questions and situations (driverless cars, personal assistants).
  • Learners gather information from multiple sources to solve complex problems (IBM Watson, wholly automated military drones).
  • Creators initiate a paradigm shift, such as inventing a new business model. They are not merely tools that people use; they have the potential to engineer actions harmful to humans. They will change humans' relationship to technology as well as people's roles within society and the economy, says Gartner. Therefore, "AI creator applications require profound thought before development."

These five artificial intelligence models have three types of organization, says Gartner: standalone, federation, or swarm. A standalone AI system is an individual entity that acts by itself to solve problems. The enterprise exercises centralized control over it by overseeing the entity as it performs.

In a federation structure, says Gartner, multiple versions of an entity work in the same way but on different problems (e.g., robo-advisors, personal assistants). The enterprise can exercise central control or provide more autonomy to the entities. In a swarm structure, multiple entities work together on the same problem (e.g., Intel light show drones, Perdix drones). Control over execution is left to the machines entirely or requires only light human management.


Early AI adopters like retail and banking firms have reaped the benefits of AI, but it is not too late for fast followers, according to Petuum. AI has caught the attention of industrial innovators and naysayers alike. (Source: McKinsey & Company)


More than automation

The most common place to start with AI is with automation, but experts say it is a mistake to stop there. The more powerful use of AI is to aid human decision making and interactions. Because AI can classify information and make predictions faster and at higher volumes than humans can accomplish on their own, those terabytes of data being produced by industrial IoT devices are being transformed into powerful tools today.

In a recent blog post for industrial AI startup Petuum, author Atif Aziz says, "Some industry leaders are zooming past the basics: digitization, cloud infrastructure, monitoring and dashboards. They are putting newly acquired data to good use through AI-driven advanced analytics (e.g., uncovering patterns through system of systems) and automating complex processes. Some early adopters are implementing as many as 100 digital transformation initiatives simultaneously or using AI to automate their core production processes across 30 or more plants," Aziz says.

On the other end of the spectrum, "some folks still need to understand how AI can provide real value and balance the ROI with their limited resources," says Aziz. "The breakneck speed of advancement in the Industrial AI/ML space over the last three years affords a unique advantage for these newcomers. They can skip many of the expensive intermediate steps (e.g., significant investments in data aggregation infrastructure, dashboards, and monitoring centers) and gain the same AI benefits as the savvier early adopters."

Aziz says most industrial AI initiatives fall into three categories. AI for assets includes equipment automation, equipment stabilization, and equipment health. AI for processes includes yield maximization through efficiency gains, automation and stabilization across multiple assets or spanning multiple flows, and quality improvement. AI for operational excellence and/or business agility includes energy cost optimization, predictive maintenance, logistics and scheduling, research and development, and more.

AI for assets

IBM Watson IoT helps organizations make smarter decisions about asset management by combining IoT data with cognitive insights driven by AI. IBM's Maximo enterprise asset management (EAM) system uses Watson IoT technology to make better decisions about critical physical assets in industrial plants, whether they are discrete machines, complex functional asset systems, or human assets.

One Maximo user, Ivan de Lorenzo, is outage planning manager for Cheniere Energy, a Houston-based liquefied natural gas producer. He says that, with the software, "we have better information on assets and maintenance activity, and more sophisticated tools and mechanisms for managing it all. The result is greater operational control and accountability, especially when it comes to planning and scheduling."

AI-based asset life-cycle and maintenance management solutions like Maximo use real-time data collection, diagnostic, and analysis tools to extend an asset's usable life cycle. Use of the software also improves overall maintenance best practices; meets increasingly complex health, safety, and environmental requirements; and controls operational risk by embedding risk management into everyday business processes.

IBM says EAM also helps "control the brain drain among employees facing retirement by [putting] into place proven workflows and enforced best practices that capture the knowledge and critical skills of long-time employees." Such a system also helps a reduced workforce to work more efficiently and cost effectively "by using the captured intellectual experience of skilled workers in a format easily dispersed in a wide range of languages."

AI for processes

AI systems are being used to improve whole processes as well as industrial assets. In an MIT Technology Review Insights publication produced in conjunction with IBM, Raytheon senior principal systems engineer Chris Finlay describes the benefits of replacing document-based information exchange with an AI-compatible digital platform to support engineering and design. "Once you start to capture things digitally, you can start to exploit machine learning or AI algorithms," he says. "You can start to reduce development costs because you can automate tasks that you were doing by hand."

Joe Schmid, director of worldwide sales for IBM Watson Internet of Things, says, "In the engineering process, you define what you want to do, design it, build it, test it, and prove that you've done it. The key is integrating those steps. But integrating is hard."

Customers that Schmid has worked with are often good at one part of the process, such as design, but they do not integrate design into the life cycle. "When they need to change goals or specs, it's all in people's heads," he says. "That doesn't work anymore with the complex systems we have today. One engineer can't have an entire system in their head. That's when errors pop up."

The goal of AI for engineering processes is to create an integrated "system of systems," a closed loop that runs from the requirements phase of product development to real-time monitoring of how consumers are using the product, and then deploy AI systems to analyze the data and leverage that knowledge to improve the product, says Dibbe Edwards, vice president of IBM Watson IoT connected products offerings.

In another example, global building materials company Cemex is on an industry 4.0 journey toward enhanced standardized operations using AI. The ultimate goals are increased efficiencies, reduced fuel and energy consumption, better quality, reduced costs, and improved decision making. The company announced in March that it had installed "AI-based autopilots" for its rotary kiln and clinker cooler systems that will "autosteer" its cement plants and enable autonomous, operator-supervised plant operations.

Cemex used OSIsoft PI systems to power Petuum Industrial AI Autopilot products. The two work with plant control systems to provide precise real-time forecasts for significant process variables, prescriptions for critical control variables, and a supervised autosteer function aligned with business objectives while staying within applicable static and dynamic constraints. The PI systems fuel real-time predictive and prescriptive recommendations.

Rodrigo Quintero, operations digital technologies manager for Cemex, says, "Petuum Industrial AI Autopilot helped us achieve something we didn't think was possible at this time: yield improvements and energy savings up to 7 percent, which is game changing for our industry. Additionally, this is a giant step in digital transformation toward safe, highly standardized operations, that will help us strengthen our high-quality products portfolio while also ensuring we meet our operational and sustainability goals, and minimize costs."

The Autopilot products can ingest data from a variety of sources, including unstructured data, images, structured data, time series, customer relationship management (CRM) data, enterprise resource planning (ERP) data, and others. The Petuum platform provides sophisticated data processing, data cleansing, and machine/deep learning pipelines to implement advanced AI that is sensitive to linear, temporal, long range, and nonlinear data patterns in a range of industrial use cases.

AI for operational excellence

Staying ahead of maintenance and production challenges to keep precision metals rolling out of its plants on time is a high priority for Ulbrich Stainless Steel & Specialty Metals. That is why the global company chose SAS Analytics for IoT to gain access to the latest suite of AI, machine learning, and streaming analytics available to analyze the data from plant sensors.

Jay Cei, COO at Ulbrich, says, "Collecting machine and sensor data from our factories and integrating that with ERP system data will help us understand the intricate relationships between equipment, people, suppliers, and customers."

Learning what their IoT data means is critical for understanding how the company can become more productive and efficient in the future, Cei says. DJ Penix, president of SAS implementation partner Pinnacle Solutions, says, "Streaming analytics will not only help Ulbrich understand what is happening now with their machines. It will also enable them to predict future events, such as when a machine needs maintenance before it breaks down."

The software provides a simplified way for any user to prepare stationary and streaming IoT data for analysis without specialized skills, says Penix. Whether a data scientist, business manager, or someone in between, they can use SAS Analytics for IoT to quickly select, launch, transform, and operationalize IoT data, he says.

Jason Mann, vice president of IoT at SAS, says companies can no longer afford to ignore the hidden signals in their IoT data. "To thrive, organizations need a solution that addresses data complexity and automates timely and accurate decision making," he adds.
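
As a rough, vendor-neutral illustration of this kind of streaming prediction (this is not the SAS API), an exponentially weighted moving average can track each machine's drift and raise a maintenance alert before a hard limit is reached; all names and thresholds are invented:

    def ewma_alerts(stream, alpha=0.5, warn_level=80.0):
        """Process (machine_id, temperature) tuples and yield a warning as
        soon as a machine's smoothed temperature trends above warn_level."""
        state = {}
        for machine, temp in stream:
            prev = state.get(machine, temp)
            smoothed = alpha * temp + (1 - alpha) * prev
            state[machine] = smoothed
            if smoothed > warn_level:
                yield f"{machine}: schedule maintenance (EWMA {smoothed:.1f})"

    # Synthetic readings from one press trending hotter over time.
    readings = [("press-1", 75), ("press-1", 82), ("press-1", 90),
                ("press-1", 95), ("press-1", 97)]
    for alert in ewma_alerts(readings):
        print(alert)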

Tips for AI pilot projects

According to a recent Gartner survey, 37 percent of organizations are still looking to define their AI strategies, while 35 percent are struggling to identify suitable use cases. Once you have developed a solid understanding of AI and its potential applications, it is time to make a case for a pilot. Here are some tips from Gartner for making the pilot project a success.

  1. Be realistic about a timeline. Once you have approval from executives, it can be tempting to think a pilot project will follow quickly. In fact, according to results from Gartner's 2017 Annual Enterprise Survey, 58 percent of respondents in companies currently piloting AI projects say it took two or more years to reach the piloting phase, and only 28 percent of respondents reported getting past the planning stage in the first year.
  2. Aim for fairly soft outcomes, such as improvements to processes, customer satisfaction, products, or financial benchmarking. Gartner Research Circle respondents urged others not to fall into the trap of seeking only immediate monetary gains. Aim initially for less-quantifiable benefits from which financial gains would eventually arise.
  3. Focus on worker augmentation, not worker replacement. AI's potential to reduce staff head count attracts the attention of senior business executives as a potential cost-saving initiative. A more informed expectation, however, is for applications that help and improve human endeavors, as AI promises benefits far beyond automation. Organizations that embrace this perspective are more likely to find workers eager to embrace AI.
  4. Plan for the transfer of knowledge from external service providers and vendors to enterprise information technology and business workers. External service providers can play a key role in planning and delivering AI-powered software, and knowledge transfer is crucial. AI requires new skills and a new way of thinking about problems. These include technical knowledge in specific AI technologies, data science, maintaining quality data, problem domain expertise, and skills to monitor, maintain, and govern the environment.
  5. Choose AI solutions that offer tracking and revealing AI decisions, ideally using action audit trails and features that visualize or explain results. To that end, Gartner predicts that by 2022, enterprise AI projects with built-in transparency will be twice as likely to receive funding from CIOs.
  6. Start small; do not worry about immediate return on investment. Digital transformation should begin with small experiments that are purely for learning, says Gartner. Use the time to pilot projects that employ a variety of technologies to assess which make the most sense for the business.


Cut Through The Fog: Ten Tests For A Family Business’ Succession Plan

“Bert & I” are stories of Down East Maine by Marshall Dodge, best known for their dry sense of humor. One story tells of two lobstermen, the Captain and his mate, out pulling pots when a dense fog rolled in. This being the days before radar, they could rely only on printed charts. The Captain told the mate to get the old chart book, but as the Captain opened the book to the page they needed, a puff of wind came and blew that loose chart from the old book and into the water.

           “Well now what do we do?” asked the Mate.

           “We get moving and get onto this here next chart, is what we do,” replied the Captain.

           So, at its most basic, strategy is a way of thinking that shapes what you are going to do in the future; it is how you get to safety when the fog rolls in. Estate planning is an integral part of that thinking, but too often the strategic implications of the estate plan for the family controlled business are overlooked. 

The Question

How many of you know that your estate planning is the strategy to achieve your goals for growth, control, protection and succession? 

An estate planning strategy built on complexity for the sake of tax savings, and on starving the family of income to avoid debt, will not achieve your goals in the future, since it has nothing to do with preserving the company as a going concern. Tactics used by professionals, such as the use of Family Limited Partnerships, are all about tax and debt avoidance. The estate plan will render your strategies ineffective if your goal is to transfer control of the company intact in the future. Being able to test whether the estate plan works in your overall strategy both before and during implementation allows you to avoid much of the cost and delays of change after implementation. 

The question "Will my estate planning strategy achieve my goals in the future?" is so broad that it is not very useful. Here are seven further clarifying questions to ask your estate planner, to generate answers that open your mind to new ways of thinking and get greater value from your professional services in achieving your goals.

Question 1: Does your Strategy tap a true source of advantage?

Question 2: Is your strategy sufficiently granular about where to seek an advantage?

Question 3: Does your strategy put you ahead of the trends?

Question 4: Does your strategy rest on privileged insights?

Question 5: Does your strategy embrace uncertainty?

Question 6: Does your strategy balance commitment with flexibility?

Question 7: How contaminated is your strategy with biases?

Question 8: Based on the Answers to these Questions, does your Strategy Achieve your Goals in the Future?

The next two are really observations –

9: Strategies do not work if there is no conviction to act on your strategy, and

10: Strategies need to be translated into an action plan to be effective.

Question 5: Does your Strategy Embrace Uncertainty?

Of these questions, the most critical is "Does your strategy embrace uncertainty?" so I will discuss it first. For example, in 2010 no one could have predicted that the unified credit would be raised to $5 million, and no one can predict exactly how long the unified credit will remain at $5 million. Further, no one can tell what the economic future may hold, and if the dollar declines in value, the inventory of a business may be worth more than the business itself. Some analyst, consultant or other pundit is always making predictions about the future, but unless you are comfortable relying on their crystal ball, only by embracing some uncertainty can your strategy work.

There are three different ways professionals handle uncertainty: Traditionalists, New Realists, or Futurists. 

Traditionalists rely on mathematical projections of what has happened in the recent past, with any errors in the projections attributed to a lack of greater knowledge, information or expertise. Basically, the same solutions that worked in the past will work in the future, so long as you "get a better hammer" of increased resources to drive them home. I find this most often in the basic financial planning models.

New Realists rely on close monitoring for signs of change and rapid response to risks or opportunities when changes occur. Since there "is no strategy, only tactics," this approach tells you nothing about where you are going, what you need to get there or when you arrive. I see this often in succession planning models for family wealth, both inside and outside of a family controlled business. It is not until the death of a family principal that tactics are decided upon. 

The Futurists, or Scenario Analysis, strategic model is the best way of embracing the uncertainty of the future over the long term (i.e., more than ten years). Scenario analysis is a challenge for you, your family and your business, but the result is that you 1) learn from others' mistakes, 2) marshal the resources you need to meet the risks you can anticipate and 3) have the mental flexibility to cope with those risks and opportunities you cannot anticipate.

Question 1: Does your Strategy Tap a True Source of Advantage?

When considering business strategies, the advantage of a special position or capability is one of the first things to be defined. The same is true, though often unrecognized, of families. Your company has a competitive advantage, and you control this scarce resource. Your competitive advantage may not be recognized at first, but the positional advantage you and your family have, because of your relationships within the closed markets of suppliers and customers, your conduct inside and outside of the company, and your focus on the family's involvement and performance in the "business," gives you a relative advantage over the non-family businesses in the marketplace.

These advantages can be fleeting, so both the advantages of the business and the advantages of the family must be tested and analyzed to see what would happen if they were no longer a true source of advantage.

Question 2: Is Your Strategy Granular Enough about Where to Compete?

In Mehrdad Baghai’s book Granularity of Growth he shows that 80% of the differential of growth in companies is based on picking the right place to compete, and only 20% is based on how a company competes. The same is true in collecting and other “alternative” investments. Too often, however, the niche that a company or a family seeks to compete in is drawn too broadly; the result is that false data and conclusions are created. 

Chris Bradley, Martin Hirt, and Sven Smit note an example of this: a national retail bank made a regional effort to grow its retail banking business through better customer satisfaction, and at the end of the initiative, the regional data showed that retail banking did, indeed, grow, seemingly validating the strategy. When, however, the bank looked at the data on a city-by-city and product-by-product basis, it found that 90% of the regional growth was due to new business in one rapidly developing urban center, and only in one fast-growing product area. The granular data proved that the customer satisfaction model was not the reason for growth, but rather the placement of the bank in a rapidly growing urban center.

Question 3: Does Your Strategy Put You ahead of the Trends?

Many forecast the future by projecting recent past performance into the near future. Strategies need a deliberate analysis of the trends within and outside of the family, company or other organization. The risk of internally based predictions is illustrated in Daniel Kahneman's recent book Thinking, Fast and Slow. Kahneman recalls a project to revise a textbook and curriculum where, based on internal projections of the work already done, a group of experts predicted that the project would be a success and would be done in two years. He then queried a participant with experience in such projects about his specific experience outside the internal trends of this group. This expert had supported the group's conclusion of success within two years, but on considering his outside experience, he realized that comparable projects had a 40% chance of complete failure, and that those that succeeded took seven to ten years to complete on average. Kahneman goes on to describe how the project in fact took eight years to complete.

Question 4: Does Your Strategy Rest on Privileged Insight?

Gaining insights is always hard to do. I usually begin with a short list of questions that have major implications. These can range from the personal (what if someone dies suddenly?) to the technological (what if there is a process breakthrough?) to the macro (what if we enter a deflationary cycle?). In each case, the question must draw on data collected from both inside and outside of the family, business or organization, and has to focus on simple but often profound conclusions that offer insight into how the family, the market and the client will behave in specific situations.

Question 6: Does your strategy balance commitment and flexibility?

Families and businesses sustain their advantages by being able to commit to a strategy for the very long term. This may mean backing a strategy that is high risk but carries very high potential returns, or committing to a very low-risk strategy even though the yield on the investment is low. Strategies also require flexibility, since the right moment to commit is often not a fixed date. To create real options, a strategy needs to commit to opening opportunities for the family or business while preserving the flexibility to take advantage of opportunities when they come your way.

Question 7: How contaminated is your strategy by biases?

We are all products of our past and experiences, so we all have inherent biases towards some things and away from others. Even though it is not possible to avoid biases altogether, any strategy needs to recognize how contaminated it is by biases, and how they warp the strategy. Some of the most common biases Chris Bradley, Martin Hirt, and Sven Smit note in their article include:

Over Optimism – Looking only at inside data and forecasting from there; this is the most common bias of the "number cruncher" programs and experts in estate planning.

Anchoring – Determining value based on some arbitrary outside point. An example is when IBM gave up the PC operating system software because it saw no value in the PC for businesses at that time.

Loss Aversion – Avoiding risks, even those that might lead to opportunities. A classic example is how French loss aversion in 1939 prevented France from taking on Germany while it still held the advantage.

Confirmation Bias – Seeing only what confirms your existing opinion. A recent example is the way that the various warnings on the subprime mortgage investments were ignored by both rating agencies and investors until it was too late.

Herding – Finding comfort in the crowd; this is the root cause of nearly every financial bubble. It is also a grave issue in estate planning, often under the guise of "best practices": the result is needlessly complex and convoluted plans, documents and operations that relate not to the goals of the client but to the industry consensus of the moment that "everyone should have/do this." 

Champion Bias – Accepting or rejecting an idea based on who is proposing it rather than on its own merits. For example, ratings on wine often have more to do with who is making the call than with the real quality of the wine itself.

The Halo Effect – Copying the actions of others on the assumption that what has worked for them in the past will work for you in the future. Lawyers, who are trained to rely on case law and statutes for their planning without reference to the goals of the client (or the unique qualities of the specific case), are obviously very susceptible to the halo effect. 

Survivor Bias – Taking lessons only from those that succeed, and ignoring the lessons that can be learned from those who have failed. This overlooks the impact that luck and outside events have on strategies.

Conclusion

Planning for future succession at a family business is like navigating in a fog. Success is only certain after your succession, but testing your strategy helps you cut through the fog.

DHS puts the kibosh on saying ‘pilot’ as it deals with new congressional reporting requirements

There is a new unwritten rule at the Department of Homeland Security these days: Don’t use the word pilot or demonstration program in public or in official documents.

Seems a little odd?

Calling something a pilot in government is like shaking someone’s hand when you first meet them. It’s a well-worn and appreciated custom.

But at DHS these days, the words are verboten thanks to a little noticed provision in the Department of Homeland Security’s section of the fiscal 2022 omnibus spending bill.

Yes, Congress included new language that requires DHS to submit a report on any pilot or demonstration program that “uses more than 5 full-time equivalents or costs in excess of $1 million.”

That requirement has caused a lot of consternation across DHS during fiscal 2022, according to multiple sources.

“This caught a lot of folks by surprise. It wasn’t seen until mostly after the fact that this was going to be problematic for the department after studying it,” said Chris Cummiskey, the former acting undersecretary for management at DHS and currently CEO of Cummiskey Strategic Solutions. “This is going to potentially stifle the innovation that you often get with pilots to test out different approaches. It will apply limitations on advancing the pilots without approval from appropriators, and that will make it difficult to operate these programs.”

To be clear, lawmakers aren’t forbidding any pilots or demonstration programs, but they do want a lot more data from DHS than they had been getting.

“Congress doesn’t know if there are a lot of programs. It had become apparent to some members of Congress over time DHS was doing things that were pilot in nature and they would ask questions like what are the metrics or goals or time frames, how many personnel are involved and at what point will it go from a pilot to regular operations,” said a source familiar with the provision, who requested anonymity to speak about the House Appropriations Committee’s thinking. “Very consistently, Congress would not get the responses and that there didn’t seem to be a lot of forethought or a lot of documented language about the pilots.”

So House appropriators added a host of new requirements for DHS to address in their reports that are due 30 days before the pilot or demonstration program begins, including:

  • Objectives that are well-defined and measurable;
  • An assessment methodology that details — the type and source of assessment data; the methods for and frequency of collecting such data; and how such data will be analyzed;
  • An implementation plan, including milestones, a cost estimate, and schedule, including an end date; and
  • A signed interagency agreement or memorandum of agreement for any pilot or demonstration program involving the participation of more than one Department of Homeland Security component or that of an entity not part of such department.

The source said DHS shouldn’t have been surprised by the provision. Lawmakers included similar language in the 2021 appropriations bill, but it ended up being only in the statement language versus being statutory in 2022.

“The department ignored it in 2021. Now it could’ve been a new administration coming in late and not having access to transition stuff when they should’ve and it stopped them from hitting the ground running. But lawmakers also wanted to make a point that this was something they wanted DHS to do,” the source said. “There were a lot of conversations in 2021 about the statement and lawmakers didn’t get a lot of feedback from DHS about the 2022 language. They seemed to say they could execute on the request.”

Multiple requests to DHS for comments about the provision and its impact were not returned.

A Senate Appropriations Committee spokesman said the provision originated in the House.

“Its purpose is to provide oversight of ‘pop-up’ pilot programs at DHS, which typically did not track performance and impacts but largely acted as a justification for expanding the pilot itself,” the spokesman said.

Threshold for pilots is low

Cummiskey and other former DHS executives say the data call and putting together the reports shouldn’t be a huge lift for agency leaders.

Rafael Borras, the former DHS undersecretary for management and now president and CEO of the Homeland Security and Defense Business Council, said Congress created a low threshold for reporting and it will cover quite a large number of programs. But, at the same time, he said it shouldn’t too difficult to pull that information together.

“If you own the pilot or demonstration program, you should have that information available. The bigger question is why does Congress want the information and how will they use it,” Borras said. “Congress may not look at 100 reports, but they will look at the one or two and that may create some challenges for DHS.”

Cummiskey estimated there could be as many as 40 different pilot or demonstration programs across the entire agency.

Troy Edgar, the former CFO for DHS and now a partner for federal finance and supply chain transformation with IBM Consulting, said another concern is how these requirements will slow down pilot work, which, in turn, can slow down departmental transformation and modernization.

He said the five full-time equivalents and $1 million thresholds seem low for an agency with a budget of over $82 billion.

Provision not about stopping innovation

Borras added that his big concern is adding this to the dozens, or even hundreds, of other reporting requirements DHS already has to deal with.

“The department must uncover what is root of this and then address the root problems Congress is thinking about,” he said. “If it is because they are not transparent and open enough, the DHS must deal with that. A simple report from the undersecretary for management doesn’t get at the root issue.”

The source said lawmakers want DHS to be innovative and to transform, but have the discipline and rigor associated with spending millions of dollars.

“It’s the kind of discipline that the department needs to make sure it has when it does a pilot. It has to make sure these pilots are effective in way DHS can learn whether or not the pilot achieved the goals intended,” the source said. “It’s beside the point if lawmakers look at all of them, but if it’s hundreds I think we all would be surprised. But lawmakers will look at some of them and ensure the requirements are institutionalized in a way that will result in better pilots going forward.”

The fact that the language isn’t “punitive” or a reaction to something DHS did, as some experts surmised, is a positive thing.

The question Borras, Cummiskey and others asked is whether requiring reports will have the intended effect Congress wants, which is better oversight, accountability and general management of pilot programs. It’s unclear whether new reporting requirements, by themselves, in any federal management realm have really changed agency behavior.

Return to play after thigh muscle injury in elite football players: implementation and validation of the Munich muscle injury classification

Introduction

Muscle injuries represent one-third of all injuries in football and cause one-quarter of total injury absence.1 Over 50% of muscle injuries affect the thigh muscles, and hamstring muscle injuries are the most common injury subtype representing 12% of all injuries.1 A professional male football team with 25 players suffers about five hamstring injuries and three quadriceps injuries each season, resulting in 130 lost football days.1

The aim is to return the player to training and matches as soon as possible. Prognostic information is vital for the medical staff to address questions from players, coaches, managers, media and agents regarding return to play.

The fact that muscle injuries present a heterogeneous group of injury types, locations, severities and sizes, makes prognoses about healing times and rehabilitation difficult.1–5

A radiological classification system of muscle injuries introduced by Peetrons6 is frequently used for imaging; recently, Ekstrand et al2 showed that MRI can be helpful in verifying the diagnosis of hamstring injuries and that radiological grading is associated with lay-off times after injury.

However, a clinical classification system correlating clinical grading with absence is presently not available.

Recently, the ‘Munich muscle injury classification system was introduced as a new terminology and classification system of muscle injuries’.7 This clinical system classifies muscle injuries into functional and structural–mechanical injuries, where functional disorders are fatigue-induced or neurogenic injuries causing muscle dysfunction, while structural–mechanical injuries are tears of muscle fibres.7

The aim of the present study was to implement the Munich classification system in male elite-level football teams in Europe (teams from Union of European Football Associations (UEFA) Champions League and English Premier League) and to evaluate if the classification system is applied and received well by the teams’ medical staff. A further aim was to prospectively evaluate the classification system as a predictor of return to play. A third aim was to provide normative data for the frequency of muscle injuries in the different classification groups as well as to analyse if the classification system could be useful both for anterior and posterior thigh muscle injuries.

We hypothesised that the classification system is well received and readily applicable by football medical teams and that the distribution of lay-off days is different across categories of the classification.

Material and methods

Study population

A prospective cohort study of men's professional football in Europe has been carried out since 2001, the UEFA Champions League (UCL) study.8 For the purpose of this substudy, 31 European professional teams (1032 players) were followed over the 2011/2012 season between July 2011 and May 2012. All contracted players in the first teams were invited to participate in the study.

Study design and definitions

The full methodology and the validation of the UCL injury study design are reported elsewhere.9 The study design followed the consensus on definitions and data collection procedures in studies of football injuries.9 ,10 An overview of the general definitions is seen in table 1. Specifically for this study, a thigh muscle injury was defined as ‘a traumatic distraction or overuse injury to the anterior or posterior thigh muscle groups leading to a player being unable to fully participate in training or match play’. Contusions, haematomas, tendon ruptures and chronic tendinopathies were excluded.

Table 1

Operational definitions

Data collection

Player baseline data were collected at the start of the season. Individual player exposure in training and matches was registered by the clubs on a standard exposure form and sent to the study group on a monthly basis. Team medical staff recorded thigh muscle injuries on a standard injury form that was sent to the study group each month. The thigh injury form is an A4 page consisting of ticking boxes for type, location, mechanisms of injuries as well as diagnostic procedures (clinical examination, imaging by MRI or ultrasonography) and treatments. All injuries resulting in a player being unable to fully participate in training or match play (ie, time-loss injuries) were recorded, and the player was considered injured until the team medical staff allowed full participation in training and availability for match selection. All injuries were followed until the final day of rehabilitation. To ensure high reliability of data registration, all teams were provided with a study manual containing definitions and describing the procedures used to record data, including fictive examples. To avoid language problems, the manual and the study forms were translated from English into five other languages: French, Italian, Spanish, German and Russian. In addition, all reports were checked each month by the study group, and feedback was sent to the teams in order to correct any missing or unclear data. While each team received detailed instructions on how to standardise the process of data collection, potential limitations included the risk for observer bias from the lack of independent injury classification and the evaluation of return to play performed by the same team medical staff.

Magnetic resonance imaging

For the purpose of this study, the clubs were instructed to perform the initial MRI examination within 24–48 h of the injury event. The MRI machine should not be older than 5 years and should have a field strength of at least 1.5 T. The minimum MR sequences should include axial and coronal planes using T1, T2 with fat saturation and/or STIR sequences. An MRI Thigh Injury Report Form was created with information about date of imaging, the name of the radiologist evaluating the images, MR sequences used, muscles involved and severity of injury.

For severity classification, a modification of Peetrons radiological classification6 was utilised with the following grading system: grade 0—negative MRI without any visible pathology; grade 1—oedema, but no architectural distortion; grade 2—architectural disruption indicating partial tear; and grade 3—total muscle or tendon rupture. All radiologists used the same standard evaluation protocol.

Injury evaluation

Of the 393 injuries recorded during the study period, all (100%) underwent physical examination, 215 (55%) were examined by MRI and 75 (35%) of these also had concomitant initial ultrasound. One hundred and seven injuries (27%) were examined exclusively by initial ultrasound without MRI, and 70 (18%) were examined clinically without the use of any imaging. Information about the examination method was missing for one injury.

Implementation and validation of the Munich muscle injury classification

During the 2011/2012 season, tick boxes for injury classification according to the Munich system were added to the thigh injury card. The team medical staffs were asked to tick one of the following alternatives: fatigue-induced muscle disorder, delayed-onset muscle soreness, neuromuscular muscle disorder—spine related, neuromuscular disorder—muscle related, partial muscle injury—minor, partial muscle injury—moderate, or subtotal/complete muscle injury/tendinous avulsion. The definitions of functional and structural muscle disorders and their subgroups (as they appeared in the study manual) are shown in table 1. Validity refers to the extent to which a concept, conclusion or measurement is well-founded and corresponds accurately to reality. The validation process of the classification was therefore designed to evaluate whether the concept and grading of the classification corresponded to clinically relevant parameters such as the lay-off times of the injured players.

Statistical analyses

Lay-off days are presented as median (Md) and IQR. The χ2 test was used to analyse associations between categorical data. The Kolmogorov-Smirnov test (D) was used to test for normality of lay-off days, and Levene's test (F) was used to test for homogeneity of variance in subgroups. Non-parametric methods, the Mann-Whitney U-test (U) and the Kruskal-Wallis test (H), were used to analyse differences in lay-off days between independent subgroups. All tests were two-sided and the significance level was set at p<0.05. All statistical analyses were performed in IBM SPSS Statistics V.19.0 (IBM Corp, Armonk, New York, USA). The study design underwent an ethical review and was approved by the UEFA Football Development Division and the Medical Committee.
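
The study used SPSS, but as a minimal sketch of the same nonparametric workflow, the equivalent tests are available in Python's scipy.stats. The lay-off values below are simulated for illustration only; none of them come from the study data.

```python
import numpy as np
from scipy import stats

# Simulated lay-off days for structural injuries (n=263) and functional
# disorders (n=130); skewed distributions mimic time-loss data.
rng = np.random.default_rng(1)
structural = rng.gamma(shape=2.0, scale=9.0, size=263)
functional = rng.gamma(shape=2.0, scale=3.5, size=130)

# Kolmogorov-Smirnov test for normality, against a fitted normal.
d, p_ks = stats.kstest(structural, "norm",
                       args=(structural.mean(), structural.std(ddof=1)))

# Levene's test for homogeneity of variance between subgroups.
f_lev, p_lev = stats.levene(structural, functional)

# Mann-Whitney U test for two independent subgroups.
u, p_u = stats.mannwhitneyu(structural, functional, alternative="two-sided")

# Kruskal-Wallis test for three or more subgroups (an arbitrary split of
# the structural sample here, purely to show the call).
h, p_h = stats.kruskal(structural[:100], structural[100:200], structural[200:])

print(f"KS D={d:.2f} (p={p_ks:.3g}), Levene F={f_lev:.2f} (p={p_lev:.3g})")
print(f"Mann-Whitney U={u:.1f} (p={p_u:.3g}), Kruskal-Wallis H={h:.2f} (p={p_h:.3g})")
```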

Results

Of the 393 thigh muscle injuries reported during the study period, all (100%) injury forms included injury classification according to the Munich system.

Overall, 263 (67%) of the thigh injuries were classified as structural and 130 (33%) as functional. Two hundred and ninety-eight (76%) injuries affected the posterior thigh; 193 (65%) were classified as structural injuries and 105 (35%) as functional disorders. Ninety-five (24%) injuries affected the anterior thigh; 70 (74%) were classified as structural injuries and 25 (26%) as functional disorders. There was no significant association between classification (functional/structural) and location (anterior/posterior), χ2(1)=2.59, p=0.108.
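
The reported χ2 statistic can be reproduced from the counts above (the uncorrected test, as reported); a quick check with scipy:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table from the counts above.
# Rows: anterior / posterior thigh; columns: structural / functional.
table = np.array([[70, 25],
                  [193, 105]])

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof})={chi2:.2f}, p={p:.3f}")  # chi2(1)=2.59, p=0.108
```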

The distribution of lay-off days, in both structural injuries and functional disorders, was significantly non-normal, D(263)=0.21, p<0.001 and D(130)=0.24, p<0.001, respectively. Levene's test also indicated a significant difference in variance in the subgroups, F(1, 391)=33.80, p<0.001.

The number of lay-off days was significantly higher in structural injuries (Md 16, IQR 16 days) compared to functional disorders (Md 6, IQR 6 days), U=6184.5, z=−10.31, r=−0.52, p<0.001. The difference in lay-off days between structural injuries and functional disorders, within both anterior (Md 14, IQR 16 days vs Md 7, IQR 9 days) and posterior (Md 16, IQR 15 days vs Md 6, IQR 5 days) thigh injuries, was also significant, U=446.5, z=−3.62, r=−0.37, p<0.001 and U=3229.5, z=−9.72, r=−0.56, p<0.001, respectively. However, there was no significant difference in lay-off days between anterior (Md 12, IQR 15 days) and posterior (Md 12, IQR 14 days) thigh injuries overall, U=14 004.0, z=−0.16, r=−0.01, p=0.88.
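
The effect sizes quoted above follow the standard conversion r = z/√N for rank-based tests, where N is the total number of observations in the comparison. A quick check using the published figures for the overall structural-versus-functional comparison:

```python
import math

# r = z / sqrt(N) for the Mann-Whitney comparison of structural (n=263)
# vs functional (n=130) lay-off days.
z = -10.31
N = 263 + 130
r = z / math.sqrt(N)
print(round(r, 2))  # -0.52, matching the value reported above
```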

Detailed classification-specific normative data are presented in table 2 and figure 1.

Table 2 Lay-off days by thigh muscle location and Munich muscle classification system

Figure 1 Days of absence after different groups of muscle injuries.

There was a significant difference in lay-off days between the subgroups of structural injuries, H(2)=93.91, p<0.001 (Md 13, IQR 10 days for minor partial muscle tears (1), Md 32, IQR 24 days for moderate partial muscle tears (2) and Md 60, IQR 5 days for subtotal/complete muscle injuries/tendinous avulsions (3)). Pairwise comparisons were conducted to follow up the significant difference among the subgroups, controlling for type I error across tests by using the Bonferroni approach. The results of these tests indicated that the number of lay-off days was significantly higher in both subgroups (2) and (3) compared with subgroup (1). However, lay-off days were not significantly affected by the subgroups of functional disorders, H(3)=4.49, p=0.21. Median lay-off for these subgroups was between 4.5 and 8 days.
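
A sketch of such a Bonferroni-corrected pairwise follow-up is shown below; the study itself used SPSS, and the lay-off values here are invented solely to illustrate the procedure.

```python
from scipy import stats

# Invented lay-off days (not study data) for the three structural subgroups.
groups = {
    "minor": [10, 12, 13, 15, 18],
    "moderate": [25, 30, 32, 40, 45],
    "complete": [55, 58, 60, 62, 65],
}
pairs = [("minor", "moderate"), ("minor", "complete"), ("moderate", "complete")]
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance level

for a, b in pairs:
    u, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: U={u:.1f}, p={p:.4f}, significant={p < alpha}")
```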

Information about the performed examinations was available for all but one injury. MRI was performed in 36/130 (28%) of functional disorders and in 179/262 (68%) of structural injuries. MRI forms for 52 of the 215 MRI examinations (24%) were received from 14 of the 31 clubs. All 12 injuries clinically classified as functional disorders were reported to be either radiological grade 0 (no MRI pathology) (17%) or grade 1 (oedema without visible tears) (83%), without signs of muscle rupture on MRI.

Thirteen injuries were clinically classified as moderate partial muscle tears; 10 (77%) were reported as MRI grade 2; and 3 (23%) were reported as MRI grade 1.

The 27 injuries clinically classified as minor partial muscle tears showed mixed MRI gradings. The majority (81%) were classified as either grade 0 (n=1) or grade 1 (n=21) with muscle tears reported in only 5 (19%).

The radiological size of the tears was only reported in 9 (60%) cases: 4 (80%) in minor and 5 (50%) in moderate partial muscle tears. The mean extent in millimetres of the minor partial muscle tears in z, x and y direction was 26±11 (range 10–33), 14±4 (range 11–19) and 9±3 (range 5–12), respectively.

Primary injuries versus re-injuries

Forty-nine injuries (12%) were classified as re-injuries (injury of the same type and at the same site as an index injury occurring no more than 2 months after a player's return to full participation from the index injury). No significant association between injury classification and re-injury rate could be found, χ2(1)=0.005, p=0.95. The re-injury rate was 33/263 (13%) within structural injuries (13% in minor and 12% in moderate partial muscle tears, and 20% in subtotal/complete muscle injury/tendinous avulsion) and 16/130 (12%) within functional disorders (10% in fatigue-induced muscle disorders and 18% in both muscle-related and spine-related neuromuscular disorders). Only seven (5%) of the initial functional disorders developed into secondary structural injuries within 2 months of the primary injury.

Discussion

Muscle injuries are among the most frequent and most relevant injuries in professional football, accounting for a majority of time lost from competition.1 Owing to the complex and heterogeneous presentation of these injuries, the development of a comprehensive muscle injury classification has traditionally been challenging. A critical aspect of a useful muscle injury classification is that it not only provides valid and practically relevant information to the treating medical practitioner but is also easily applicable and accepted by medical staff. A main finding of the current study is that the implementation of the Munich muscle injury classification was highly successful, with full medical staff acceptance and excellent injury data collection.

Functional muscle disorders are clinically underestimated

The present study showed a discrepancy between clinical and radiological classification. Among injuries classified both clinically and radiologically, 77% were clinically classified as structural tears, but radiological grading on MRI showed evidence of muscle tears in only 29% of injuries. This finding is in accordance with a recent study by Ekstrand et al,2 who showed that 70% of hamstring injuries seen in professional football show no signs of muscle fibre disruption on MRI. Still, these injuries are responsible for more than half of the muscle injury-related lay-off.2 The understanding of these most frequent muscle injuries/disorders, which have the highest impact on lay-off time, is still limited and warrants further scientific evaluation. The differentiation of functional and structural muscle injuries introduced by the Munich classification is an important first step towards a more differentiated evaluation of this relatively undefined area of athletic muscle injury. The current study shows that functional muscle disorders are common but associated with relatively short lay-off times, thereby providing useful information to medical staffs and athletes. Furthermore, our data demonstrate a low risk for the development of a subsequent, more severe re-injury after functional muscle disorders. Specific, adequately powered prospective investigation of functional muscle disorders is needed. Further systematic study is also required to develop reliable clinical and radiographic tools for the differential diagnosis of functional muscle disorders and minor structural injury. However, this study suggests that, for the purpose of predicting return to sport, differentiation among functional muscle disorders may be less clinically relevant. Our finding that clinical classification tends to overestimate structural tears and underestimate functional disorders could be explained by limited awareness of the high incidence of functional disorders in elite-level football. Since the Munich classification relies on a careful clinical examination, the history of the injury, the skill of the clinician and a detailed understanding of the different disorders, there may be a distinct learning curve, and education and experience may become important factors.

Return to play is longer after structural injuries

The ability to predict lay-off is very important for the injured player as well as the coaching staff. The Munich classification clearly shows a difference in return to play between structural and functional muscle injuries. This seems logical since, by definition, structural injuries show macroscopic evidence of muscle fibre damage, whereas functional disorders show no such damage. Our study indicates that the severity of the muscle injury directly affects the duration of the lay-off. Similarly, increased muscle injury severity on MRI has been associated with longer times to return to play in professional football and American football.2,4,5,11–13

Clinical classification relates to lay-off

Another main finding of this study is that subgrouping of structural injuries into minor or moderate partial tears as well as total ruptures is clearly associated with lay-off time from football. By definition, validity is the extent to which a concept, conclusion or measurement is well-founded and corresponds accurately to the real world. Our finding that the concept and grading of the classification corresponded to the lay-off times of the injured players therefore validates the concept provided in the classification. Thus, our study validates the ability of the Munich muscle injury classification to differentiate between functionally relevant degrees of muscle injury and its usefulness for the prognosis of healing time. Similarly, the extent of muscle injury on MRI has been shown to have prognostic relevance, as injuries involving >50% of the muscle diameter were associated with longer lay-off times.11,14 In our study, MRI was unable to detect the differences between moderate and minor structural injuries, suggesting that the Munich classification is more sensitive than MRI in detecting low-grade structural injury. Müller-Wohlfahrt et al7 postulated that secondary muscle bundles, with a diameter of 2–5 mm, can be palpated by the experienced examiner, and suggested further studies to determine the size threshold between a minor and a moderate partial muscle tear. In the present study, MRI was unable to detect such small injuries (<5 mm) in any of the 52 injuries in either the x, y or z direction. Previous studies15,16 also noted that some clinically detected athletic muscle injuries are negative on 1.5 Tesla MRI and that these MRI-negative injuries resulted in faster return to competition. This suggests that MRI at the current resolution has limited sensitivity for the detection of minor muscle injury. Similarly, detection limits exist on clinical examination. Our study does not allow any definite conclusions about the tactile limit for the detection of minor muscle tears. Further systematic studies should try to better define the threshold for clinical detection of minor muscle injury, possibly by correlation with high-resolution MRI (3 T or higher).

The diagnosis and definition of the MRI-negative injuries are challenging

Our study demonstrates that a negative MRI does not rule out clinically relevant muscle injury and that clinical diagnosis and management should be based on a combination of clinical history, physical examination and, possibly, radiographic evaluation. MRI grade 0–1 injuries constitute the majority of muscle injuries in professional football athletes and include a spectrum of pathology, such as minor structural injuries as well as functional muscle disorders. Differential diagnosis of these muscle disorders can be challenging and requires a thorough understanding of the Munich classification and strong clinical diagnostic skills. While grading of structural injuries in the Munich classification has prognostic relevance for return to play, subgrouping of functional disorders seems less relevant, since lay-off times were similar between the different functional disorders. However, the differentiation of functional disorders may be important as it can impact the therapeutic approach. Interestingly, the average number of lay-off days is similar between minor structural injuries and muscle-related neuromuscular functional disorders. Since the treatment approach is similar, this raises the question of whether the underlying pathologies could overlap. Could there be a neurological response to a minor tear, such as reciprocal inhibition, or could a neurological inhibition facilitate the development of minor tears? This particular aspect of the Munich muscle injury classification requires further specific and adequately powered substudy and validation, with detailed documentation of history, clinical examination, MRI, ultrasound and functional outcome parameters. An improved understanding of MRI grade 0–1 muscle injuries will help to further optimise the management of these injuries and to develop evidence-based strategies for expedited and safe return to competition after athletic muscle injury.14,15,17

What are the new findings?

In summary, the current study demonstrates the successful implementation of the Munich muscle injury classification in elite football players. In addition, it validates the following aspects:

  • Structural injuries and functional disorders differ significantly in their lay-off times.

  • Subgrouping of structural muscle injuries based on injury severity has positive prognostic relevance.

  • Subgrouping of functional muscle disorders has less prognostic value.

How might it impact on clinical practice?

Killexams : Urban Poor Community Settings' Knowledge and Screening Practices for Cervical Cancer in Ibadan, Nigeria

Cancer of the uterine cervix has become a growing public health challenge, with increasing mortality and morbidity among women in lower human development index countries.1 It is reported as the fourth most common cancer and the fourth most common cause of cancer deaths among women.2,3 Bray et al2 reported an estimated 569,847 (3.2%) new cases globally and 311,365 (3.3%) deaths in 2018. Low- and middle-income countries bear one of the highest burdens of cervical cancer, with an estimated 90% of global deaths occurring in this region. Cancer of the uterine cervix is the second leading female cancer among Nigerian women, after breast cancer,4 and accounts for more than 10,000 annual deaths.2,5

CONTEXT

Key Objective

What are men's and women's knowledge and screening practices of cervical cancer in urban slum community settings?

Knowledge Generated

The mean knowledge score of cervical cancer detection was 5.0 ± 2.6 on a 0- to 39-point scale. Cervical cancer prevention practices (screening and human papillomavirus vaccination) were very low.

Relevance

Low knowledge potentially translates to low practice, as shown in this study. These may result in late detection and presentation at health facilities with poor treatment outcomes. Prevention strategies, at primary and secondary levels, including educational interventions, should be encouraged in clinical and other settings to prevent an overburdening of the health system.

The major cause of cervical cancer is the human papillomavirus (HPV), which is sexually transmitted. Other risk factors include high parity, smoking, sexual initiation at an early age, multiple sexual partners, and prolonged use of oral contraceptives. Cervical cancer is highly preventable if detected and treated early.6 The WHO recommends regular screening every 3-5 years among women age 30-49 years, in addition to timely treatment of precancerous lesions.2 More recently, the World Health Assembly endorsed the WHO Global Strategy for the elimination of cervical cancer. The global targets for 2030 emphasize primary prevention (90% coverage of HPV vaccination of girls by age 15 years) and secondary prevention (70% of women screened by age 35 years and again by age 45 years).7

Vaccination is reported as an important public health primary prevention approach to reduce the risk of HPV, whereas cervical cytology or the Papanicolaou test (Pap smear) is documented as a secondary form of prevention.8 In Nigeria, primary prevention through HPV vaccination is not yet part of the national routine immunization program; it is, however, accessible, at a high cost, through a limited number of private and public health care settings. Conventionally, secondary prevention through screening is carried out in Nigeria using the Pap smear test.9 However, the Pap smear test is not suitable for primary screening in low-resource settings, although it has played a substantial role in reducing cervical cancer in developed countries over the past 70 years. Awareness and knowledge of cervical cancer are, however, necessary for improved involvement of women in prevention and screening practices. Several studies have been carried out among Nigerian women, including HIV-infected women, to assess knowledge and screening practices for cervical cancer (including the HPV vaccine and Pap smear test). Most studies have shown appreciably high knowledge of cervical cancer in urban workplace settings, among women attending health facilities, and among health workers. Conversely, many studies have documented low knowledge and practice among women,1,10,11 especially at population levels. Sustained awareness creation about cervical cancer has the potential to increase knowledge and utilization of cervical cancer screening practices.12 More research documenting both men's and women's knowledge of cervical cancer, and screening practices among women, in urban slum community settings is needed. Studies focusing on men in addition to women are very scarce in Nigeria. The inclusion of men, and not only women, is important because of the decision-making role of men in Nigerian families and in improving family health. This study therefore investigated the knowledge and screening practices for cervical cancer among male and female adults in urban poor settings in Ibadan, Nigeria. The findings from this study would provide baseline data for planning appropriate health promotion and education, prevention interventions, and policy formulation to prevent and control cervical cancer in poor community settings in Nigeria.

Study Design and Setting

This study used a cross-sectional design in two urban community-based settings in Ibadan, Oyo State, Nigeria. Data collection lasted for 3 weeks in both communities. Ibadan, situated in the western region of Nigeria, is one of the largest cities in West Africa. It has a population of about 3 million and is a combination of both urban and semiurban community settings. Two underserved communities located in the urban slum areas of Ibadan North Local Government Area (LGA) were identified. The two communities selected for this study are at the heart of Ibadan city in an urban LGA but have a mixture of higher- and lower-educated people and, subsequently, a combination of high-, middle-, and low-income residents. The LGA has an estimated population of 308,119, and sanitary conditions in the slum areas are poor, as the majority of houses do not have access to potable water and water-flushed toilets.

Study Population, Sampling, and Sample Size

All consenting male and female adults in both communities age 18-65 years were eligible to participate in this study. Exclusion criteria included persons who did not provide consent to participate in the study and physically or mentally ill men and women who were unable to provide adequate information. A previous community-based study by Nnodu et al13 informed the sample size calculation, using the Leslie Kish formula. The prevalence of knowledge about HPV was 33%, with calculations based on a 95% confidence level, a margin of error of 5%, and a design effect of 1.5. The final sample size calculated was 334, but to cater for attrition and to cover a larger study area, the sample size was increased to 500. Community members age 18-65 years were randomly selected from the total community population. A simple random sampling technique was used to select participating households, whereas one respondent was selected in each household by ballot to avoid selection bias. A total of 147 males and 353 females completed the electronic data collection.
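
For illustration, the Leslie Kish single-proportion formula with the inputs stated above can be sketched as follows. Note that this straightforward application is a sketch only; the authors' intermediate rounding or exact inputs may differ slightly from the figures reported.

```python
import math

# Leslie Kish single-proportion formula: n0 = z^2 * p * (1 - p) / d^2
z = 1.96    # standard normal deviate for a 95% confidence level
p = 0.33    # prevalence of HPV knowledge from Nnodu et al
d = 0.05    # margin of error
deff = 1.5  # design effect

n0 = (z ** 2) * p * (1 - p) / (d ** 2)
n = n0 * deff
print(math.ceil(n0), math.ceil(n))  # roughly 340 before and 510 after the design effect
```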

Data Collection Methods and Instruments

The research team had earlier visited and interacted with the communities' stakeholders, including the heads and executives of the landlord associations and religious leaders. The two communities had committed to supporting the study and paved the way for easy access to the community members. In the two communities, 552 people were approached and only 52 declined to participate, mostly because they had to attend to other commitments at the time of the interview. Most of those who consented to participate were self-employed and so could make time for the interview.

The data collection instrument was developed and converted into an electronic data capture tool (ODK Collect). Data collection was interviewer administered; for content validity, the instrument was translated into the local language, Yoruba, and back-translated to English. The tool contained both open- and closed-ended questions (Data Supplement). Interviewers were trained before the data collection process to ensure the quality of the data set, and all followed a uniform procedure. The instrument was pretested in similar community settings before actual data collection was undertaken. The ODK tool included questions on sociodemographic characteristics of respondents and cervical cancer questions on awareness and knowledge of cervical cancer risk factors, symptoms, and detection. Questions were also asked about screening practices with the Pap smear and visual inspection with acetic acid, reasons for nonscreening, and HPV vaccination. Respondents were asked to answer knowledge questions with yes or no as appropriate. Open-ended question responses were recorded on the ODK tool by interviewers. Knowledge questions were scored (on a scale of 0-39) for respondents' knowledge of cervical cancer detection, symptoms, and risk factors. Item scores were summed, and a mean knowledge score for respondents was calculated.

Quality Assurance

Throughout the data collection process, research assistants were monitored and gave daily feedback on the research process. The electronic data collected were uploaded daily and checked for completeness and errors. In those cases where there were errors, research assistants were asked to collect additional data and the data previously collected were discarded. Quality assurance meetings were held weekly to review the data collected, weekly targets, and any challenges that research assistants encountered. This enabled immediate response to facilitate ease of data collection procedure.

Data Analysis

Electronic data collected using the ODK tool were checked before they were transferred into the Statistical Package for the Social Sciences (IBM SPSS) version 21. Both descriptive and inferential statistical analyses were used to address the study objectives. Categorical variables were presented using frequencies and percentages, whereas continuous variables were presented as means and standard deviations. Chi-square statistics were used to estimate the degree of association between the variables in the study. Multivariate regression analysis was not carried out because the chi-square statistics showed no statistically significant associations.
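
The study used SPSS, but the same descriptive-plus-chi-square flow can be sketched in Python. The respondent-level data below are invented purely to illustrate the analysis; the variable names and proportions are assumptions, not study values.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Invented respondent-level data for illustration only.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "sex": rng.choice(["male", "female"], size=500, p=[0.3, 0.7]),
    "knowledge": rng.choice(["poor", "fair", "good"], size=500, p=[0.77, 0.14, 0.09]),
})

# Descriptive statistics: frequencies and percentages.
print(df["knowledge"].value_counts(normalize=True).round(3) * 100)

# Inferential statistics: chi-square test of association.
table = pd.crosstab(df["knowledge"], df["sex"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```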

Ethical Consideration

The study protocol was approved, and informed, voluntary consent (documented by signing an informed consent form) was sought and obtained from community leaders, stakeholders, and all study respondents before the commencement of data collection (Data Supplement). There were no physical risks to the respondents; data were collected in privacy, and respondents were assured that they would not be penalized in any way if they chose to stop the data collection at any stage. Respondents were assured of confidentiality of responses, and only identification codes were assigned to ODK files. Study findings were disseminated to participating communities after the completion of the study.

The study protocol was approved by the University of Ibadan/University College Hospital Nigeria Ethical Review Committee, Nigeria, with the reference number UI/EC/17/0410.

Awareness of Cervical Cancer

A majority of respondents were not aware of cervical cancer screening (91.2%) or the Pap smear test (93.6%). Few (10%) had ever heard of the HPV vaccine for the prevention of cervical cancer.

Knowledge of Cervical Cancer (detection, symptoms, and risk factors)

Knowledge of the risk factors for cervical cancer showed that the majority reported old age (92.4%; 0.92 ± 0.27), low socioeconomic status (88%; 0.88 ± 0.33), unhealthy diet (75.8%; 0.76 ± 0.43), and a high rate of abortion (73%; 0.73 ± 0.44) as risk factors for cervical cancer (Table 2).

TABLE 2 Knowledge of Cervical Cancer (Risk Factors, Symptoms, and Detection)

A majority of respondents reported that absence of menstruation or irregular menstruation (91.60%; 0.92 ± 0.28), vaginal itching (91.00%; 0.91 ± 0.29), and painful menstruation (95.40%; 0.95 ± 0.21) constitute symptoms of cervical cancer (Table 2).

The results on knowledge of cervical cancer detection showed that less than half (41.4%; 0.41 ± 0.49) of the respondents reported that cervical cancer can be terminal. However, a majority of respondents (88.6%; 0.89 ± 0.32) reported that performing a cervical cancer test only once is sufficient to eliminate its risk, and 83.4% (0.83 ± 0.37) reported that cervical cancer is a genetic disease, whereas 92.6% (0.93 ± 0.26) reported that postmenopausal women still have the risk of getting cervical cancer.

The mean knowledge score for cervical cancer detection was 5.0 ± 2.6, with a minimum score of two and a maximum of 13; the mean knowledge score for cervical cancer symptoms was 3.3 ± 0.8, with a minimum score of one and a maximum of eight; and the mean knowledge score for risk factors was 6.7 ± 1.6, with a minimum score of three and a maximum of 13. The overall knowledge of participants was pooled and assessed on a 0- to 39-point scale. This was further categorized into ranges, with 0-18 points as poor knowledge, 19-23 points as fair knowledge, and 24-39 points as good knowledge of cervical cancer. Respondents' overall mean knowledge score was 15.0 ± 4.1. The majority (77.2%) had a poor knowledge score for cervical cancer.
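
A minimal sketch of the scoring and categorization scheme described above, assuming 39 binary (yes/no) items worth one point each and the stated cut points; the simulated answers are illustrative, not the study data.

```python
import numpy as np
import pandas as pd

# Simulated responses: 500 respondents, 39 yes/no knowledge items,
# one point per correct answer.
rng = np.random.default_rng(0)
answers = rng.integers(0, 2, size=(500, 39))
scores = answers.sum(axis=1)  # total knowledge score, 0-39

# Cut points from the text: 0-18 poor, 19-23 fair, 24-39 good.
categories = pd.Series(pd.cut(
    scores,
    bins=[-1, 18, 23, 39],
    labels=["poor", "fair", "good"],
))
print(round(scores.mean(), 1))
print(categories.value_counts())
```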

There was a statistically significant association between knowledge of cervical cancer and employment status of respondents (χ2 = 10.35; P < .05). There was no statistically significant association between knowledge and sex (Table 3).

TABLE 3 Knowledge of Cervical Cancer, Employment Status, and Sex

Cervical Screening Practices

Only women (n = 353) reported on cervical cancer screening practices. Very few women had been screened for cervical cancer with the Papanicolaou test (4%), and one woman had been screened with visual inspection with acetic acid (0.3%). Four women (1.1%) had previously received the HPV vaccine (Table 4).

TABLE 4 Cervical Cancer Screening Practices

The overall results of this study indicated low knowledge of cervical cancer and low screening practice. The study findings revealed that a considerable proportion of the respondents had either secondary or tertiary education. This average to high level of education did not seem to translate into awareness or good knowledge of cervical cancer among the study respondents. This is in contrast to findings reported by Ezenwa et al10 among women in similar urban community settings in Nigeria. Importantly, the findings highlighted the low socioeconomic status of respondents: the majority lived on 20,000 naira or less per month (the equivalent of 51 dollars per month), and most were self-employed in petty trading. This translates into a reduced ability to afford the cost of screening or vaccination for the prevention of cervical cancer, coupled with a lack of accessibility. This result was similar to the findings of Olanlesi-Aliu et al,14 who reported that the quality of cervical cancer services was affected by inadequate resources. Employment status was consequently a variable that could exert influence on respondents' knowledge of cervical cancer.

A poor level of awareness was reported for cervical screening, the Pap smear, and the HPV vaccine in this study. Previous research findings13,15 corroborate these outcomes. These findings consequently demonstrate a need for increased awareness of cervical cancer, HPV screening, and vaccination, as well as the need for health promotion and education strategies targeting cervical cancer screening and the benefits of vaccination among adults in poor urban community settings in Nigeria. Almost all the knowledge scores for questions on risk factors, symptoms, and detection of cervical cancer were below average. This is suggestive of perceived low susceptibility to the disease; strikingly, however, the majority perceived postmenopausal women to still be at risk of cervical cancer, and the mean score was high for the belief that testing is needed only once to eliminate the risk of cervical cancer. The knowledge gaps among study respondents highlight a crucial need for health education to increase knowledge about cervical cancer. Health education should include recommendations for screening, according to the ACOG. The ACOG highlighted that women age between 25 and 29 years are recommended for cervical cytology or Pap test only at 3-year intervals, whereas those who are 30-65 years could have a combination or cotesting of Pap test and HPV test every 5 years. For women over 65 years, screening can be halted on the basis of acceptable previous negative screening within the past 5 years.16

This study highlights major gaps in prevention practices for cervical cancer and identifies an urgent need to upscale cervical cancer prevention and intervention strategies in urban poor community settings of Nigeria. The findings are similar to a recent study carried out among women,1 in which only two women had undergone cervical cancer screening and none had received HPV vaccination. These findings reflect the inadequate health programs and services regarding cervical cancer prevention in Nigeria.14

In conclusion, the findings of this study underscore the need for increased awareness creation through health promotion interventions and strategies to address the low knowledge of cervical cancer, prevention, and screening practices in poor community settings in Nigeria. The provision of prevention services, which must be accessible and affordable to the populace irrespective of geographical location, is also needed.

© 2021 by American Society of Clinical Oncology
PRIOR PRESENTATION

Presented at the 7th Annual Symposium on Global Cancer Research: Translating and Implementation for Impact in Global Cancer Research, Chicago, IL, March 7, 2019. This study abstract has been published in J Global Oncol 2019:3. © 2019 by American Society of Clinical Oncology following international conference/symposium presentation and can be found here: DOI: 10.1200/JGO.19.10000.

SUPPORT

Supported by a planning grant awarded by the US National Institutes of Health, Fogarty International Center, Addressing NCDs In Nigeria Through Enhanced International Partnership and Interdisciplinary Research Training, award number 1D71TW010876-01.

We acknowledge all the respondents and research assistants for their contributions to the conduct of the study. We are grateful to the ethical committee who provided approval for this study.

1. Olubodun T, Odukoya OO, Balogun MR: Knowledge, attitude and practice of cervical cancer prevention, among women residing in an urban slum in Lagos, South West, Nigeria. Pan Afr Med J 32:130, 2019
2. Bray F, Ferlay J, Soerjomataram I, et al: Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 68:394-424, 2018
3. World Health Organization: Global Strategy to Accelerate the Elimination of Cervical Cancer as a Public Health Problem. Geneva, Switzerland, World Health Organization, 2020. Licence: CC BY-NC-SA 3.0 IGO
4. Ononogbu U, Al-Mujtaba M, Modibbo F, et al: Cervical cancer risk factors among HIV-infected Nigerian women. BMC Public Health 13:582, 2013
5. Bruni L, Albero G, Serrano B, et al: Human Papillomavirus and Related Diseases in Nigeria. Summary Report. ICO/IARC Information Centre on HPV and Cancer (HPV Information Centre), Barcelona, Spain, 2019
6. Morounke Saibu G, Ayorinde James B, Adu OB, et al: Epidemiology and incidence of common cancers in Nigeria. J Cancer Biol Res 5:1105, 2017
7. World Health Organization: World health statistics 2019: Monitoring health for the SDGs, sustainable development goals. World Health Organization, 2019. https://apps.who.int/iris/handle/10665/324835. License: CC BY-NC-SA 3.0 IGO
8. Finocchario-Kessler S, Wexler C, Maloba M, et al: Cervical cancer prevention and treatment research in Africa: A systematic review from a public health perspective. BMC Womens Health 16:29, 2016
9. Sowemimo OO, Ojo OO, Fasubaa OB: Cervical cancer screening and practice in low resource countries: Nigeria as a case study. Trop J Obstet Gynaecol 34:170-176, 2017
10. Ezenwa BN, Balogun MR, Okafor IP: Mothers' human papilloma virus knowledge and willingness to vaccinate their adolescent daughters in Lagos, Nigeria. Int J Womens Health 5:371, 2013
11. Liu T, Li S, Ratcliffe J, et al: Assessing knowledge and attitudes towards cervical cancer screening among rural women in Eastern China. Int J Environ Res Public Health 14:967, 2017
12. Mabelele M, Materu J, Ng'ida FD, et al: Knowledge towards cervical cancer prevention and screening practices among women who attended reproductive and child health clinic at Magu district hospital, Lake Zone Tanzania: A cross-sectional study. BMC Cancer 18:565, 2018
13. Nnodu O, Erinosho L, Jamada M, et al: Knowledge and attitudes towards cervical cancer and human papillomavirus. Afr J Reprod Health 14:95, 2010
14. Olanlesi-Aliu AD, Martin PD, Daniels FM: Towards the development of a community-based model for promoting cervical cancer prevention among Yoruba women in Ibadan Nigeria: Application of PEN-3 model. Southern Afr J Gynaecol Oncol 11:20-24, 2019
15. Massey PM, Boansi RK, Gipson JD, et al: Human papillomavirus (HPV) awareness and vaccine receptivity among Senegalese adolescents. Trop Med Int Health 22:113-121, 2017
16. ACOG Practice Advisory: Cervical Cancer Screening (Update). Washington, DC, American College of Obstetricians and Gynecologists (ACOG), 2018