Click and download CVPM exam Test Prep materials to pass the actual test.

killexams.com is highly recognized for helping people pass the CVPM test on their very first attempt. We hold a strong position among exam braindumps suppliers. We maintain quality by updating the CVPM mock exam on a regular basis and checking the validity of the CVPM Free Exam PDF regularly.


CVPM Exam Questions - Certificate of Veterinary Practice Management Updated: 2024

Ensure your success with this CVPM dumps question bank
Exam Code: CVPM Certificate of Veterinary Practice Management Exam Questions January 2024 by Killexams.com team

CVPM Certificate of Veterinary Practice Management

To prepare students to seek a Veterinary Practice Manager position by providing them with the education to develop the strategies, procedures, policies, and management skills needed to run the daily operations of a veterinary business, and to satisfy the college semester hours required for CVPM certification.



Basic Skills Assessment

Applicants are required to complete two Basic Skills Assessments, one in reading and one in math, to determine their level of readiness for beginning their selected program. Additional studies may be required.



Business Orientation

In this course, you'll be introduced to distance learning; study skills and techniques; and ways to review for examinations. You'll discover similarities between personal financial goals and business goals and examine how to determine personal financial goals, set up a budget, and recognize the elements of a business.

Principles of Management

This course will review the functions of management and its role in planning, leading, organizing, motivating, and controlling.

Basic Accounting

In this course, you'll learn the accounting information necessary to understand basic accounting: what it means and how it's used. The course will also cover financial statements, return on investment, the bookkeeping process, cost accounting, and reporting systems.

Veterinary Practice Management

In this course, you'll learn the importance of understanding and implementing laws and requirements in a veterinary practice. You'll also learn the benefits of developing a professional, efficient, and knowledgeable team that provides exceptional customer service to clients. Additionally, the course will provide you with instruction and information on how to help your practice grow financially.

Human Resources Management

In this course, you'll get an overview of Human Resources Management (HRM) as it's understood today. This course illustrates the dynamic interaction of the personnel functions with each other and with the objectives of an organization.

Marketing

In this course, you'll discover the principles of marketing. You'll learn how to develop a marketing plan; use social and mobile marketing effectively; integrate ethics into marketing strategies; influence the consumer decision process; perform market research; perform SWOT and STP analyses; make decisions concerning branding, packaging, and developing new products; price products and services fairly; set advertising objectives; and more.

Business Ethics

In this course, you'll explore the types of ethical issues you may encounter in your professional life and discover processes to help determine the best response to an issue. You'll learn how to determine a course of action using various philosophical and business-related approaches to ethical issues in a professional setting, along with a decision-making process for balancing different considerations. In addition, you'll get an overview of selected areas of business that may require particular attention.



Whether you're looking to start a new career as a Vet Practice Manager, or you want to start on a path toward certification, our Veterinary Practice Management undergraduate certificate program can help you achieve these goals. For up to $86 per credit, our Vet Practice Management program fulfills one of the requirements to sit for the Certified Veterinary Practice Manager (CVPM) exam.



Killexams Veterinary Practice Management Certificate online program fulfills the education requirement for certification. After students complete the Vet Practice Management online courses and graduate from the program, they will have met one of the four requirements to sit for the CVPM exam — Certified Veterinary Practice Manager, offered by the Veterinary Hospital Managers Association (VHMA). The CVPM designation is recognized by many veterinary practice leaders as the highest level of credential for professional veterinary managers and is often preferred or required when applying for a veterinary manager position.
Certificate of Veterinary Practice Management
Medical Certificate exam Questions

Other Medical Exams

CRRN Certified Rehabilitation Registered Nurse
CCRN Critical Care Registered Nurse
CEN Certified Emergency Nurse
CFRN Certified Flight Registered Nurse
CGFNS Commission on Graduates of Foreign Nursing Schools
CNA Certified Nurse Assistant
CNN Certified Nephrology Nurse
CNOR Certified Nurse Operating Room
DANB Dental Assisting National Board
Dietitian Dietitian
EMT Emergency Medical Technician
EPPP Examination for Professional Practice of Psychology
FPGEE Foreign Pharmacy Graduate Equivalency
NBCOT National Board for Certification of Occupational Therapists - 2023
NCBTMB National Certification Board for Therapeutic Massage & Bodywork
NET Nurse Entrance Test
NPTE National Physical Therapy Examination
OCN Oncology Certified Nurse - 2023
PANCE Physician Assistant National Certifying
VTNE Veterinary Technician National Examination (VTNE)
CNS Clinical Nurse Specialist
NBRC The National Board for Respiratory Care
AHM-540 AHM Medical Management
AACN-CMC Cardiac Medicine Subspecialty Certification
AAMA-CMA AAMA Certified Medical Assistant
ABEM-EMC ABEM Emergency Medicine Certificate
ACNP AG - Acute Care Nurse Practitioner
AEMT NREMT Advanced Emergency Medical Technician
AHIMA-CCS Certified Coding Specialist (CPC) (ICD-10-CM)
ANCC-CVNC ANCC (RN-BC) Cardiac-Vascular Nursing
ANCC-MSN ANCC (RN-BC) Medical-Surgical Nursing
ANP-BC ANCC Adult Nurse Practitioner
APMLE Podiatry and Medical
BCNS-CNS Board Certified Nutrition Specialist
BMAT Biomedical Admissions Test
CCN CNCB Certified Clinical Nutritionist
CCP Certificate in Child Psychology
CDCA-ADEX Dental Hygiene
CDM Certified Dietary Manager
CGRN ABCGN Certified Gastroenterology Registered Nurse
CNSC NBNSC Certified Nutrition Support Clinician
COMLEX-USA Osteopathic Physician
CPM Certified Professional Midwife
CRNE Canadian Registered Nurse Examination
CVPM Certificate of Veterinary Practice Management
DAT Dental Admission Test
DHORT Discover Health Occupations Readiness Test
DTR Dietetic Technician Registered
FNS Fitness Nutrition Specialist
MHAP MHA Phlebotomist
MSNCB MSNCB Medical-Surgical Nursing Certification
NAPLEX North American Pharmacist Licensure Examination
NCCT-TSC NCCT Technician in Surgery
NCMA-CMA Certified Medical Assistant
NCPT National Certified Phlebotomy Technician (NCPT)
NE-BC ANCC Nurse Executive Certification
NNAAP-NA NNAAP Nurse Aide
NREMT-NRP NREMT National Registered Paramedic
NREMT-PTE NREMT Paramedic Trauma Exam
OCS Ophthalmic Coding Specialist
PANRE Physician Assistant National Recertifying Exam
PCCN AACN Progressive Critical Care Nursing
RDN Registered Dietitian
VACC VACC Vascular Access
WHNP Women Health Nurse Practitioner
AACD American Academy of Cosmetic Dentistry
RPFT Registered Pulmonary Function Technologist
ACLS Advanced Cardiac Life Support - 2023
GP-Doctor General Practitioner (GP) Doctor
GP-MCQS Prometric MCQS for general practitioner (GP) Doctor
INBDE Integrated National Board Dental Examination (Day 1 exam)
Podiatry-License-Exam-Part-III Podiatry License exam Part III - 2023

We have tested and approved CVPM exams. killexams.com provides exact and accurate exam materials that practically cover all key information points. With the help of our CVPM exam materials, you don't have to spend time on several books; you simply need 10-20 hours to master our CVPM braindump questions and answers.
Medical
CVPM
Certificate of Veterinary Practice Management
https://killexams.com/pass4sure/exam-detail/CVPM
Question: 78

Karen, the practice manager, oversees all aspects of ABC Animal Hospital. Karen is viewed as being a dreamer; she does not understand all aspects of the veterinary business, but she always completes her job. Employees tend to be rude to Karen in their dealings with her. Under the Contingency Leadership Theory, what type of leadership style does Karen have?

A. Situational
B. Task oriented
C. Motivational
D. Relationship oriented

Answer: B

Under the Contingency Leadership Theory, Karen has a task-oriented leadership style. Task-oriented leaders focus on getting the task completed rather than on the people involved in completing the task.

Question: 79

Under the Model Infection Control Plan for Veterinary Practices, what protective action should be taken during the intake of animals?

A. Avoid bringing aggressive or potentially infectious animals in through the reception area
B. Wear appropriate protective outerwear, and wash hands before and after examination of individual animals or animal groups
C. If an animal is suspected of having a notifiable infectious or a foreign animal disease, consult with the State Veterinarian
D. Wear gloves and facial protection

Answer: A

Under the Model Infection Control Plan for Veterinary Practices, avoiding bringing aggressive or potentially infectious animals in through the reception area is a protective action taken during intake. If they must come through the main entrance, carry the animal or place it on a gurney, if possible, so that it can be taken directly into a designated examination room.

Question: 80

The Veterinary Hospital Managers Association classifies job descriptions based on duties and responsibilities in the hospital. Although there are many crossovers, what is the primary difference between an office manager and a practice manager?

A. Office managers primarily deal with the front desk, while the scope of practice managers is hospital wide
B. Office managers are responsible for seeing that administrative policies are met, while a practice manager is charged with the responsibility of managing the business activities
C. Office managers are supervisors, while practice managers are managers
D. Office managers do not have a degree, while a practice manager does

Answer: B

Although there are many crossovers, the primary difference is that office managers are responsible for seeing that administrative policies are met, while a practice manager is charged with managing the business activities. In general, office managers perform many tasks relating to human resources and public relations, while practice managers perform more tasks related to organizational function, law, and ethics.

Question: 81

The ancillary services expense account (from the American Animal Hospital Association's chart of accounts) details expenses that occur from which of the following activities?

A. Office visits
B. Surgery
C. Vaccinations
D. Boarding

Answer: D

The ancillary services expense account (from the American Animal Hospital Association's chart of accounts) details expenses that occur from boarding. Food, grooming, and retail products also fall into this category.

Question: 82

What is the professional association for equine veterinarians, with a mission to improve the health and welfare of horses and further professional development, called?

A. Society for Theriogenology
B. American Horse Show Association
C. American Association of Equine Practitioners
D. Society for Equine Practitioners

Answer: C

The professional association for equine veterinarians, with a mission to improve the health and welfare of horses and further professional development, is called the American Association of Equine Practitioners (AAEP).

Question: 83

An effective veterinary appointment schedule should accomplish which of the following two goals?

A. Be filled to capacity with no time overlap
B. Keep the veterinarians as busy as possible and eliminate technical staff down time
C. Eliminate client wait time and maximize efficiency of staff
D. Client convenience and slots filled according to staffing availability

Answer: C

An effective veterinary appointment schedule should accomplish two goals: eliminate client wait time and maximize efficiency of staff. Because appointment schedules are arranged in blocks of time, the minimum time slots for availability should be between 10 and 20 minutes, depending on the procedure to be performed.

Question: 84

When considering the safety of operating a surgical laser, which of the following injuries poses the greatest risk to employees?

A. Hair loss
B. Allergic reaction
C. Skin burns
D. Eye damage

Answer: D

When considering the safety of operating a surgical laser, eye damage poses the greatest risk to employees. The cornea and lens of the eye focus the light beam to a very small spot on the retina. Vision damage is usually severe and can result in blindness.
For More exams visit https://killexams.com/vendors-exam-list
Kill your exam at First Attempt....Guaranteed!

Best No Exam Life Insurance Companies of January 2024

Our experts answer readers' insurance questions and write unbiased product reviews (here's how we assess insurance products). In some cases, we receive a commission from our partners; however, our opinions are our own.

Many of the best life insurance companies offer no-exam life insurance, which has the obvious appeal of skipping medical exams. 

SBLI Life Insurance

Insider's Rating: 3.56/5

AM Best Financial Strength Rating: A

Pros
  • Whole and term life insurance products
  • Final expense options
  • Diverse life insurance riders available
Cons
  • Company does not post important information like limits on its site
  • No universal life insurance options
  • Age limit of 60 on accelerated underwriting

Bestow Life

Insider's Rating: 3.37/5

AM Best Financial Strength Rating: A+

Pros
  • Premiums as low as $8/month
  • Easy online access
Cons
  • Company only offers term policies
  • Limited support for customer needs
  • Maximum coverage limits are low

Complete Editorial Review of the Best No Exam Life Insurance Companies

  • Ethos Life: Best Term Life Policy
  • AARP Life Insurance: Best for Seniors
  • USAA Life Insurance: Best for Military Members
  • Prudential Life Insurance: Best for Higher Policy Limits
  • Mutual of Omaha Life Insurance: Best for Guaranteed Acceptance
  • AAA Life Insurance: Best for Waiting Period

Compare the Best No Exam Life Insurance

Many insurers offer a range of permanent and term life insurance policies that require a medical exam. These companies offer the best no medical exam life insurance policies. 

Best Term Life Policy

Ethos No Medical Exam Life Insurance

Ethos Life accepts applicants up to age 65 with a 100% online application process, and limits are as high as $2 million. 

Ethos Life asks a few basic medical questions, but coverage is effective immediately once approved. In addition, every customer buying policies like this from Ethos Life is eligible for a 30-day free look period, which is another way to say you can cancel and get your money back in the first 30 days with no penalties.

  • Health questions: Yes
  • Coverage limits: Up to $2 million
  • Age: 20-65
  • Waiting period: May apply
  • Unique feature: Instant quotes available with a streamlined online application

Ethos Life Insurance Review

Best for Seniors

AARP No Medical Exam Life Insurance

AARP Life Insurance caters to senior clients for insurance and many other financial products. Older adults between 50 and 74 may qualify as long as they are AARP members. Term policies are available with limits up to $150,000 in most states. Montana and New York residents may be eligible for up to $100,000. Whole life policy limits max out at $25,000.

Whole life policies can be issued without any health exams or medical questions. The term policies, on the other hand, may ask some health questions.

  • Health questions: For term life policies, but not whole
  • Coverage limits: Up to $150,000
  • Age: 50-74
  • Unique feature: No medical exam policy caters to older adults

AARP Life Insurance Review

Best for Military Members

USAA No Medical Exam Life Insurance

USAA Life Insurance is typically associated with military members and their immediate family members, but its insurance products are available to anyone. Pricing is lower, payouts are higher, and customer service is strong, though these advantages are only available to military members and qualifying family members. For the children of a deceased military member to use any USAA products, the military member would need to have signed up before their death.

Guaranteed whole life policies are available in 49 states, excluding Montana. USAA life insurance coverage is available from $2,000 to $25,000 with no medical exam or questions. Applicants who want higher coverage limits can explore medical exam policy options with a licensed agent.

  • Health questions: No
  • Coverage limits: Up to $25,000
  • Age: 45-85
  • Waiting period: Two years
  • Unique feature: Below-market product costs available for military members and qualifying family

USAA Life Insurance Review

Best for Higher Policy Limits

Prudential No Medical Exam Life Insurance

Prudential Life Insurance offers up to $3 million in coverage for term life policies. Adults up to 60 years old are eligible for coverage with a short application involving some medical questions. For younger applicants, conversion options may also be available later to make term policies into whole life policies. However, due to the higher limits, Prudential's application process may also be longer.

  • Health questions: Yes
  • Coverage limits: Up to $3 million
  • Age: 20-60
  • Waiting period: Two years
  • Unique feature: High expert and customer rankings with a trusted provider

Prudential Life Insurance Review

Best for Guaranteed Acceptance

Mutual of Omaha No Medical Exam Life Insurance

Mutual of Omaha Life Insurance has high financial stability and customer satisfaction ratings across different types of insurance. Guaranteed life policies are available for adults between the ages of 45 and 85. In New York state, the age range is 50-75. Policies can be as small as $2,000 in most states and as large as $25,000 with no health questions or medical exams.

Mutual of Omaha's no medical exam policies have a graded death benefit. If you die within two years of the policy start date, the company will not pay the full policy amount. Instead, it returns 110% of the premiums paid. The Mutual of Omaha website boasts same-day payouts on most policies. Policies for children are also available.
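To make the graded death benefit concrete, here is a minimal illustrative sketch of the calculation. The 110% refund rate and two-year graded period come from the description above; the function name, dates, face value, and premium total are hypothetical, and a real policy's terms would be defined in the contract:

```python
from datetime import date

def graded_death_benefit(policy_start, death_date, face_value,
                         premiums_paid, graded_years=2, refund_rate=1.10):
    """Hypothetical graded death benefit calculation.

    If death occurs within the graded period, the insurer returns the
    premiums paid multiplied by the refund rate (110% in the example
    above); otherwise it pays the full face value of the policy.
    """
    years_elapsed = (death_date - policy_start).days / 365.25
    if years_elapsed < graded_years:
        return round(refund_rate * premiums_paid, 2)
    return float(face_value)

# Hypothetical example: death 18 months after the start date,
# with $1,200 in premiums paid on a $25,000 policy.
print(graded_death_benefit(date(2024, 1, 1), date(2025, 7, 1), 25_000, 1_200))  # prints 1320.0
```

In other words, a beneficiary in the graded window receives only a refund-plus-10% amount rather than the face value, which is why the waiting period matters when comparing these policies.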

  • Health questions: No
  • Coverage limits: Up to $25,000
  • Age: 45-85 (50-75 in New York State)
  • Waiting period: Two years
  • Unique feature: Company website lists same-day payment on most claims

Mutual of Omaha Life Insurance Review

Best for Waiting Period

AAA No Medical Exam Life Insurance

AAA Life Insurance offers immediate death benefits for qualified applicants between 18 and 75. In other words, once your policy starts, you are eligible for the full policy benefit. Policies are available with limits as low as $25,000 and as high as $500,000. While a medical exam is not required, health questions are.

AAA offers term policies with limits as high as $500,000. For a whole life policy, the limit is $25,000. But applicants can add a rider doubling the payout for accidental death coverage. Younger people have no waiting period for benefits. For applicants over age 45, AAA pays out 130% of the premiums paid up to the date of death for the first two years.

  • Health questions: Yes
  • Coverage limits: Up to $500,000 term/$25,000 whole
  • Age: 18-75
  • Waiting period: Applies after age 45
  • Unique feature: Death benefit available regardless of the cause of death

No Medical Exam Life Insurance FAQs

A no medical exam life insurance policy could be right for you if you're able to qualify and don't need special coverage. These policies are the easiest to get for young applicants with no significant health issues. Older applicants can buy with some companies, but acceptance is not guaranteed. No medical exam policies offer less coverage with higher premiums in most cases. If you do not qualify for the no medical exam policy you want, insurance agents can help you explore alternatives. 

No medical exam means life insurance companies will not check your blood pressure, cholesterol, etc. However, companies have access to prescription history and other personal records, and underwriters base decisions partly on this history.

You'll have to decide whether you prefer a whole or term policy based on your situation if you're getting no medical life insurance. A term policy has an expiration date, and extensions or conversions to a whole life policy are not guaranteed. If anything, your rate may be higher if you try to convert your policy. The insurer looks at you just as it would any other applicant of your age, health, etc. A whole life policy locks in premiums and payouts.

There are alternatives to a no medical exam life insurance policy. Insurance agents can quote you medical exam policies if you're denied a no medical exam option. An experienced agent may be able to assess your application before starting the process to avoid official denials. If you're concerned about premium limits, you can explore options like IUL (indexed universal life) for permanent life insurance that increases your benefit as long as you make premium payments.

A no medical exam life insurance policy may hold a certain appeal for older applicants and those in failing health. However, the life insurance market is the opposite of what you might expect. These policies are best for young people (typically under 50 years old) in good health. The no medical exam process is often more efficient, streamlining your approval, and life insurance companies can only do this with low-risk applicants.

If a provider sees red flags that might disqualify you, it won't necessarily prevent you from getting coverage. Instead, the agent would most likely offer to run more conventional life insurance quotes for you.

Guaranteed issue life insurance policies do not require a medical exam. This type of life insurance is typically limited to people ages 50 or older, and the tradeoff is that policies are usually more expensive than ones that do require a medical exam. That said, if your health conditions would otherwise prevent you from getting a life insurance policy, guaranteed issue insurance is a useful option, and it's offered by a variety of insurers including AIG, AAA, New York Life, and Gerber Life.

Yes, you can really get life insurance without a medical exam, but your options will be different. That's because you'll need to choose a guaranteed issue policy — a specific type of insurance that lets you bypass the medical exam requirement — and it will probably cost more than a regular policy including a medical exam.

The highest amount of life insurance you can get without a medical exam is lower than what you could get with a medical exam. Guaranteed issue policies that don't require medical exams typically top out at $25,000 or $50,000 in coverage, while standard life insurance policies can offer millions in coverage.

How to Pick the Best No Medical Exam Life Insurance Policy for You

Particularly when choosing life insurance, customization is critical. Buyers don't need to add every rider, but a little research goes a long way in selecting the right company. Some applicants will not qualify for a no medical exam life insurance policy. A life insurance agent can help you run quotes that make sense for you. Then agents can offer realistic insurance policy options and review the costs and benefits of each.

Asking friends and family which insurance agent they use could be your first step to finding the right life policy. Factors like age, medical history, and financial goals play key roles in your decision. So we do not recommend asking loved ones about individual policies. Instead, let a qualified insurance professional find the best policies for you.

Why You Should Trust Us: How We Chose the Best No Exam Life Insurance

The coverage and riders offered are vital parts of our evaluation. We also look at the speed of payouts, customer satisfaction, and financial strength ratings. All of these factor into the immediate and long-term performance of the life insurance companies we review.

If you're looking for more information about a specific life insurer, our individual reviews offer a deep dive into individual policies, riders, and more. The same considerations are used for all competitors to ensure readers have the edge to make informed decisions in an ever-changing market.

See our insurance rating methodology for more details.

Tue, 02 Jan 2024 09:59:00 -0600 | https://www.businessinsider.com/personal-finance/best-no-medical-exam-life-insurance
The Process of Obtaining a Medical Certificate

I have a confession to make: I’ve been flying without a medical. Well, not a third-class medical, at least. As a light sport pilot and the first armless pilot, I’ve been using my driver’s license as my medical. Needless to say, my pathway to becoming a pilot was not typical. Navigating the skies with just my feet meant embracing challenges and redefining what’s possible. Now, I’m finally on the journey to getting my medical, the real medical, the third-class medical. Once again, I’m back in uncharted territory.

This medical adventure started with The Impossible Airplane project, a custom-controlled RV-10 that will be the first purpose-built airplane in history for a pilot flying with just their feet. I’ve been flying Ercoupes for a long time, but the RV-10 does not qualify for the current light sport rules. So if I’m ever going to solo The Impossible Airplane, I’ll need to get my private pilot certificate, which also means getting at least my third-class medical.

Even without arms, my formal application started like any nondisabled student pilot. I spoke to several pilot friends about an aviation medical examiner (AME) they would recommend. Unlike most other applicants, though, I wanted to find an AME who had experience with nonstandard applications. Over the years, I’ve heard horror stories from prospective pilots about how one poorly written statement caused months of delays and paperwork headaches. I also know that there is no established precedent for a pilot to receive a medical who is only flying with their feet. So, if we didn’t get the application right the first time, there’s no telling what other issues we would have to resolve.

Luckily, I found Dr. Douglas Little. He is an AME in my hometown of Tucson, Arizona, and has handled several other nontypical medical applications. I also felt reassured when Little explained the medical deferral process and that he would help me through the back and forth with the FAA. He must have done a good job because two months later I received a letter from the FAA that said I needed to see an orthopedic doctor for an evaluation and further documentation. 

When I first read the letter, I wondered why the FAA wanted an evaluation. I was born without arms. I’ve made it this far in life and aviation. But it was easy enough to schedule an appointment and send the evaluation back to the agency. When I sat down with the orthopedic doctor, it took a few minutes to explain the situation and what we were asking for. He wasn’t a pilot, but he was professionally curious. (I get a lot of professional curiosity from doctors, especially podiatrists who want to see how my feet have adapted.) The next day I had triplicates of his evaluation and sent two of them to the FAA. It was a pretty straightforward letter attesting to the functionality of my legs and feet with the absence of arms.

The FAA never asked for my medical history. I can see how that collection of documents would be difficult to assemble and then even harder to present to the agency in an organized manner. There are many services out there that specialize in assisting pilots with that process. Luckily for me, about two months after sending in my evaluation, the FAA approved a medical flight review. I can only guess that a combination of Little, the orthopedic doctor, and flying as a light sport pilot for several years helped the process.

The FAA gave me the option to choose the district office that would oversee the process. Naturally, I chose the Scottsdale, Arizona, FSDO. A couple of weeks after that, I got a call from an FAA representative. She informed me that I could select the examiner for my medical flight review and would have a six-month window to complete it. That gave me pause for a moment. I knew many student pilots were struggling to find examiners and take timely tests. On top of that, the Ercoupe I wanted to use was still undergoing a lot of maintenance. Plus, I needed a good refresher after a summer of not flying at all. However, the FAA official assured me that if I needed more time, then I just needed to call the office before the six-month window was up.

And now I am waiting for the medical flight review. A CFI friend recommended an examiner who was properly qualified for these. I told him the situation, and he was happy to arrange a date toward the end of my six-month window. I’m both nervous and excited. If I fail my exam, then I will be disqualified from flying as a light sport pilot, and my days as a pilot are over. But if I pass—when I pass—many of the LSA restrictions will be lifted. There will be other restrictions, but I’ll overcome those too. 

Thanks to the light sport rules, the doors have been opened for more pilots like me to learn to fly. I’m so thankful that Able Flight is leading the charge for helping pilots with disabilities learn to fly. Able Flight gave me a scholarship for my light sport training. More and more of us are progressing on to the private pilot level and hopefully making it easier for the next pilot to follow after us. And who knows? Maybe after earning my private pilot certificate and instrument rating, I will start looking over the requirements for being a commercial pilot.

As I stand on the threshold of setting this next precedent, I’m reminded that every flight, every test, every hurdle overcome is not just for me, but for those who dream of taking to the skies regardless of the obstacles they face.

An Exploratory Look At Whether Generative AI Can Pass An Official Mental Health Counseling Licensing Exam That Professionals Take

In today’s column, I will be closely looking at whether generative AI could potentially pass an official mental health counseling licensing exam. This is part of my ongoing in-depth series about generative AI or large language models (LLMs) that are or can be anticipated to be used for mental health guidance or advisement.

Before I dive into today’s particular topic, I’d like to provide a quick background for you so that you’ll have a suitable context about the arising use of generative AI for mental health advisement purposes. I’ve mentioned this in prior columns and believe the contextual establishment is essential overall. If you are already familiar with the overarching background on this topic, you are welcome to skip down below to the next section of this discussion.

The use of generative AI for mental health treatment is a burgeoning area of tremendously significant societal ramifications. We are witnessing the adoption of generative AI for providing mental health advice on a widescale basis, yet little is known about whether this is beneficial to humankind or perhaps contrastingly destructively adverse for humanity.

Some would affirmatively assert that we are democratizing mental health treatment via the impending rush of low-cost always-available AI-based mental health apps. Others sharply decry that we are subjecting ourselves to a global wanton experiment in which we are the guinea pigs. Will these generative AI mental health apps steer people in ways that harm their mental health? Will people delude themselves into believing they are getting sound mental health advice, ergo foregoing treatment by human mental therapists, and become egregiously dependent on AI that at times has no demonstrative mental health improvement outcomes?

Hard questions are aplenty and not being given their due airing.

Furthermore, be forewarned that it is shockingly all too easy nowadays to craft a generative AI mental health app, and just about anyone anywhere can do so, including while sitting at home in their pajamas and not knowing any bona fide substance about what constitutes suitable mental health therapy. Via the use of what are referred to as establishing prompts, it is easy-peasy to make a generative AI app that purportedly gives mental health advice. No coding is required, and no software development skills are needed.

We sadly are faced with a free-for-all that bodes for bad tidings, mark my words.

I’ve been hammering away at this topic and hope to raise awareness about where we are and where things are going when it comes to the advent of generative AI mental health advisement uses. If you’d like to get up-to-speed on my prior coverage of generative AI across a wide swath of the mental health sphere, you might consider for example these cogent analyses:

  • (1) Use of generative AI to perform mental health advisement, see the link here.
  • (2) Role-playing with generative AI and the mental health ramifications, see the link here.
  • (3) Generative AI is both a cure and a curse when it comes to the loneliness epidemic, see the link here.
  • (4) Mental health therapies struggle with the Dodo verdict for which generative AI might help, see the link here.
  • (5) Mental health apps are predicted to embrace multi-modal, e-wearables, and a slew of new AI advances, see the link here.
  • (6) AI for mental health got its start via ELIZA and PARRY, here’s how it compares to generative AI, see the link here.
  • (7) The latest online trend entails using generative AI as a rage-room catalyst, see the link here.
  • (8) Watching out for when generative AI is a mental manipulator of humans, see the link here.
  • (9) FTC aiming to crack down on outlandish claims regarding what AI can and cannot do, see the link here.
  • (10) Important AI lessons learned from the mental health eating-disorders chatbot Tessa that went awry and had to be shut down, see the link here.
  • (11) Generative AI that is devised to express humility might be a misguided approach including when used for mental health advisement, see the link here.
  • (12) Creatively judging those AI-powered mental health chatbots via the use of AI levels of autonomy, see the link here.
  • (13) Considering whether generative AI should be bold and brazen or meek and mild when proffering AI mental health advisement to humans, see the link here.
  • (14) Theory of Mind (ToM) is an important tool for mental health therapists and the question arises whether generative AI can do the same, see the link here.
  • And so on.

Here’s how I will approach today’s discussion.

First, I will introduce you to a pioneering research study that sought to assess whether generative AI could potentially pass an exam taken by medical school students as part of their pursuit of achieving their medical degree. The exam is known as the United States Medical Licensing Examination (USMLE). This study received a great deal of headlines since it showcased that generative AI seems to do well on the arduous medical exams taken by budding doctors. Next, I will share with you some salient details about an exam for mental health professionals known as the National Clinical Mental Health Counseling Examination (NCMHCE).

I’m guessing you might be wondering whether generative AI might be able to do well on that type of exam. Great question, thanks. I opted to use a popular generative AI app called ChatGPT to try out a half-dozen questions from the NCMHCE. Please note that these questions came from an official sample set and were by no means the full exam.

Would you be surprised to know that the generative AI was able to successfully answer many of the sampled questions? I provide some important caveats and limitations about this mini experiment of sorts, and I want to emphasize this was principally done on an ad hoc basis and merely intended to be illustrative.

Here’s the deal.

Please do not jump the gun on this matter. Hold your horses. My mainstay aims here are simply to inspire others to do a deep dive on this and perform a fully comprehensive, rigorous research study of an akin nature, perhaps modeled somewhat on the same approach taken by the study on the USMLE or similar such professional licensing domains.

Anyway, I believe you will find this interesting, engaging, and possibly whet your appetite to find out more on these topics. My discussion is yet another angle to considering where we are and where things are going pertaining to generative AI and the field of mental health therapy.

Please buckle up and prepare yourself for quite a ride.

Generative AI And Medical School Standardized Licensing Exam

Let’s talk about tests.

We generally assume that to practice medicine a test of some kind should be required to attest to the proficiency of the person who will be serving as a medical professional. I’d like to start by discussing perhaps one of the most famous such medical proficiency tests known as the United States Medical Licensing Examination (USMLE). This is the test typically expected of those attaining a medical degree in the United States.

The USMLE was devised to aid in standardizing upon one major medical examination test that would be acceptable across every state and ensure that MDs were meeting the same set of standards. The test is composed of three separate stages and is taken during medical school and also upon graduation from medical school.

Here’s some additional detail as noted on the USMLE website:

  • “In the United States and its territories, the individual medical licensing authorities (‘state medical boards’) of the various jurisdictions grant a license to practice medicine. Each medical licensing authority sets its own rules and regulations and requires passing an examination that demonstrates qualification for licensure. Results of the USMLE are reported to these authorities for use in granting the initial license to practice medicine. The USMLE provides them with a common evaluation system for applicants for initial medical licensure.”
  • “USMLE was created in response to the need for one path to medical licensure for allopathic physicians in the United States. Before USMLE, multiple examinations (the NBME Parts examination and the Federation Licensing Examination [FLEX]) offered paths to medical licensure. It was desirable to create one examination system accepted in every state, to ensure that all licensed MDs had passed the same assessment standards – no matter in which school or which country they had trained.”
  • “The United States Medical Licensing Examination® (USMLE®) is a three-step examination for medical licensure in the U.S. The USMLE assesses a physician's ability to apply knowledge, concepts, and principles, and to demonstrate fundamental patient-centered skills, that are important in health and disease and that constitute the basis of safe and effective patient care.”

Humans take the USMLE to showcase their proficiency in medicine. When you encounter a medical doctor, you are likely to assume they probably took the test and passed it. On an intuitive basis we realize that having to pass such an arduous test is impressive and helps to provide us comfort that the person knows their stuff when it comes to the medical field.

Shift gears.

Can generative AI potentially also be proficient enough to pass the USMLE?

That’s an interesting and some would say important question worthy of considering.

First, some quick background about generative AI.

Realize that generative AI is not sentient and only consists of mathematical and computational pattern matching. The way that generative AI works is that a great deal of data is initially fed into a pattern-matching algorithm that tries to identify patterns in the words that humans use. Most of the modern-day generative AI apps were data trained by scanning data such as text essays and narratives that were found on the Internet. Doing this was a means of getting the pattern-matching to statistically figure out which words we use and when we tend to use those words. Generative AI is built upon the use of a large language model (LLM), which entails a large-scale data structure to hold the pattern-matching facets and the use of a vast amount of data to undertake the setup data training.

There are numerous generative AI apps available nowadays, including GPT-4, Bard, Gemini, Claude, ChatGPT, etc. The one that is seemingly the most popular would be ChatGPT by AI maker OpenAI. In November 2022, OpenAI’s ChatGPT was made available to the public at large and the response was astounding in terms of how people rushed to make use of the newly released AI app. There are an estimated one hundred million active weekly users at this time.

Using generative AI is relatively simple.

You log into a generative AI app and enter questions or comments as prompts. The generative AI app takes your prompting and uses the already devised pattern matching based on the original data training to try and respond to your prompts. You can interact or carry on a dialogue that appears to be nearly fluent. The nature of the prompts that you use can be a make-or-break when it comes to getting something worthwhile out of using generative AI and I’ve discussed at length the use of state-of-the-art prompt engineering techniques to best leverage generative AI, see the link here.

Shortly after ChatGPT was made publicly available, many AI researchers began to test the AI app by administering various well-known standardized tests to see how the AI app would do. In February 2023, a research study was posted that indicated ChatGPT had performed surprisingly well on the USMLE. The study was entitled “Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models” by Tiffany H. Kung, Morgan Cheatham, ChatGPT, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, Victor Tseng, PLOS Digital Health, and posted on February 9, 2023.

Here is what the research paper stated overall (excerpts):

  • “We evaluated the performance of a large language model called ChatGPT on the United States Medical Licensing Examination (USMLE), which consists of three exams: Step 1, Step 2CK, and Step 3. ChatGPT performed at or near the passing threshold for all three exams without any specialized training or reinforcement. Additionally, ChatGPT demonstrated a high level of concordance and insight in its explanations.”
  • “USMLE questions are textually and conceptually dense; text vignettes contain multimodal clinical data (i.e., history, physical examination, laboratory values, and study results) often used to generate ambiguous scenarios with closely-related differential diagnoses.”

Consider mindfully those above-noted remarks from the AI research effort.

ChatGPT was able to score either at or near the passing threshold for the three-stage USMLE. Thus, an arduous medical proficiency exam that we expect human medical doctors to pass was nearly passed by a generative AI app. Some would decry this result as misleading in the sense that the generative AI was doing this without real “knowledge” akin to what humans seem to possess. The concern is that generative AI is nothing more than a so-called stochastic parrot that mimics human wording and fails to “understand” or “comprehend” what is going on.

Nonetheless, the aspect that generative AI could accomplish such a feat is unto itself impressive, even if done via smoke and mirrors as some suggest. The result is additionally surprising because the researchers used ChatGPT out of the box, as it were, namely the generic version of ChatGPT. Another approach would be to add additional data training on the medical field to ChatGPT, but that’s not what they did in this experiment. A generic data-trained generative AI was able to do well on a highly specialized medical domain exam. For more about how generic generative AI can be fine-tuned to specific domains, see my coverage at the link here.

Let’s consider a few other detailed aspects about the notable research result and then I’ll move to my next topic of discussion.

The research paper noted these salient details (excerpted):

  • “The data analyzed in this study were obtained from USMLE sample question sets which are publicly available.”
  • “376 publicly-available test questions were obtained from the June 2022 sample exam release on the official USMLE website. Random spot checking was performed to ensure that none of the answers, explanations, or related content were indexed on Google prior to January 1, 2022, representing the last date accessible to the ChatGPT training dataset. All sample test questions were screened, and questions containing visual assets such as clinical images, medical photography, and graphs were removed. After filtering, 305 USMLE items (Step 1: 93, Step 2CK: 99, Step 3: 113) were advanced to encoding.”
  • “In this present study, ChatGPT performed at >50% accuracy across all examinations, exceeding 60% in most analyses. The USMLE pass threshold, while varying by year, is approximately 60%.”
  • “Therefore, ChatGPT is now comfortably within the passing range. Being the first experiment to reach this benchmark, we believe this is a surprising and impressive result. Moreover, we provided no prompting or training to the AI, minimized grounding bias by expunging the AI session before inputting each question variant, and avoided chain-of-thought biasing by requesting forced justification only as the final input.”

I’d like to bring your attention to a few points made in those excerpts.

Notice that the experiment consisted of identifying a sample of publicly available questions associated with the exam. The idea is to usually feed samples of questions and not necessarily an entire test per se. It is important to consider how a sample was chosen and whether the sample is suitably representative of what the full test might contain. Fair is fair.

Another fairness consideration is that there is always a chance that the generative AI might have been initially data-trained on the very same questions. If those questions were found when the startup data training took place, you could say it is absurd to feed the same questions into the generative AI. The answers will likely already be known simply due to having seen the questions and their answers beforehand.

If you select questions that arose after the cutoff date of the generative AI app’s data training, you are somewhat comfortable that the content wasn’t encountered already. But even that is readily questioned since the questions might have appeared in other guises. Some exams modify old questions and reuse them in later versions of the exam. There is a chance that a new question is close enough to an older question that perhaps this gives the generative AI a leg up on answering the new question.

My point is that you need to carefully consider how these experiments are conducted. Overall, make sure to look at what sample was chosen and how appropriate it is. What are the odds that the generative AI has previously encountered the same or similar questions? As much as feasible, the goal is to set a fair and square playing field to see whether the generative AI can genuinely answer questions that have not previously been used as part of the data training effort.
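To make the pass-threshold arithmetic concrete, here is a minimal sketch of scoring a sampled multiple-choice set against an answer key and comparing the accuracy to the roughly 60% passing level the study describes. The question IDs and answers below are made-up illustrative values, not the paper's actual item-level data.

```python
# Illustrative scoring harness for a sampled multiple-choice exam.
# The question IDs, model answers, and answer key are hypothetical;
# only the ~60% pass threshold comes from the study's description.

PASS_THRESHOLD = 0.60  # approximate USMLE passing level, per the paper


def score_exam(model_answers: dict, answer_key: dict) -> float:
    """Return the fraction of keyed questions the model answered correctly."""
    correct = sum(
        1 for qid, key in answer_key.items()
        if model_answers.get(qid) == key
    )
    return correct / len(answer_key)


def passes(accuracy: float, threshold: float = PASS_THRESHOLD) -> bool:
    """Compare a raw accuracy against the passing threshold."""
    return accuracy >= threshold


# Hypothetical run: 4 of 6 sampled questions answered correctly.
key = {"Q1": "C", "Q2": "A", "Q3": "B", "Q4": "D", "Q5": "A", "Q6": "B"}
model = {"Q1": "C", "Q2": "A", "Q3": "B", "Q4": "D", "Q5": "C", "Q6": "D"}

acc = score_exam(model, key)
print(f"accuracy = {acc:.2f}, passes = {passes(acc)}")
```

Note that a harness like this only captures raw accuracy; it says nothing about whether the sampled questions were contamination-free, which is the harder methodological issue discussed above.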

You now have a semblance of what takes place when trying to assess generative AI about being able to pass exams such as the pervasive USMLE in the medical domain.

Let’s continue our exploration.

Generative AI And Mental Health Therapy Exam Taking

The research study that explored the use of generative AI such as ChatGPT on the USMLE can serve as a role model for similar kinds of studies. The conception is to identify publicly available trial questions, administer the questions to the generative AI, and see how well or poorly the generative AI scores on answering the questions. As much as possible, try to keep the playing field level and fair.

I decided to try this quickly for the field of mental health therapy or mental health counseling.

There is a well-known exam known as the National Clinical Mental Health Counseling Examination (NCMHCE). Sample questions are publicly posted online. I selected some of the sample questions and fed them into ChatGPT. I opted to use ChatGPT due to its immense popularity and it has generally been the default choice of similar research studies.

I might note that a more advanced generative AI such as GPT-4 by OpenAI or others would likely do a better job than ChatGPT. In that manner, you could interpret the ChatGPT usage as the floor and that we might expect heightened results by using a more advanced generative AI app. There isn’t an ironclad certainty that a more advanced generative AI will do better. The odds though are in that direction.

We also have to be watchful about inadvertently polluting an experiment by using questions that the generative AI has already seen during its initial data training. Furthermore, if the generative AI is hooked up to the Internet, the AI might simply go out and find the questions and their answers, similar to a search engine, rather than trying to directly answer the questions. ChatGPT in that sense is a handy choice because the free version does not readily allow for Internet access to perform its activities and the data training was last cut off in January 2022 (at the time of writing of this discussion).

Let’s dive into the ad hoc experiment by first establishing the nature of the mental health therapy or mental health counseling exam.

The National Clinical Mental Health Counseling Examination (NCMHCE) is devised and administered via an organization known as the National Board for Certified Counselors, Inc. Here is what the website for the organization says (excerpts):

  • “The National Board for Certified Counselors, Inc. and Affiliates (NBCC) is the premier credentialing body for counselors, ensuring that counselors who become nationally certified have achieved the highest standard of practice through education, examination, supervision, experience, and ethical guidelines.”
  • “Established as a not-for-profit, independent certification organization in 1982, NBCC’s original and primary purposes have broadened, and its divisions and affiliates have taken on additional responsibilities to advance the counseling profession and enhance mental health worldwide.”
  • “Today, there are over 69,000 National Certified Counselors (NCCs) in more than 40 countries.”

The gist is that this is a well-known and widely accepted organization, and the exam is likewise well-known and widely accepted. I bring this up in case you read a study that used generative AI on some relatively unknown exam or one with a less than stellar reputation, in which case you would want to gauge the result of the study partially on the rigor and standing of the test being given at the get-go.

Here is what the website about the NCMHCE says about the exam (excerpts):

  • “The National Clinical Mental Health Counseling Examination (NCMHCE) is designed to assess the knowledge, skills, and abilities determined to be important for providing effective counseling services. The NCMHCE is a requirement for counselor licensure in many states. It is one of two examination options for the National Certified Counselor (NCC) certification and also fulfills the examination requirement for the Certified Clinical Mental Health Counselor (CCMHC) specialty certification.”
  • “The NCMHCE measures an individual’s ability to apply and evaluate knowledge in core counselor skills and competencies and to practice competently as a professional counselor. Specifically, it assesses an entry-level clinical mental health counselor’s ability to apply knowledge of theoretical and skill-based tenets to clinical case studies. The case studies are designed to capture a candidate’s ability to identify, analyze, diagnose, and develop plans for treatment of clinical concerns.”
  • “Candidates for the NCMHCE must have a graduate-level degree or higher from a counseling program accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) or administered by an institutionally accredited college or university. The counseling degree program must contain courses in eight requirement areas.”

Observe some key points mentioned in those excerpts.

First, the exam is used to assess entry-level clinical mental health counselors. You might say that this is handy for my ad hoc experiment since I want to focus on the keystone threshold needed to be considered suitably knowledgeable for proceeding to perform mental health therapy with real clients or patients. Other exams might be used to assess more advanced skill levels, but I’m aiming here to start with the usual starting point. I’m sure that other researchers are or will try to do the same for more advanced instances.

Second, note that candidates who want to sit for the exam must have a graduate-level degree or higher from an accredited counseling program or as administered by an accredited college or university. This sets the bar higher than perhaps allowing an undergraduate to take the exam or maybe wantonly opening the exam to anyone who wants to take it. We can presume that the test is likely to ask questions of a hard nature. That’s good since we would want to make sure we deliver something challenging to generative AI rather than some easy-peasy questions or materials. We might also note that of course, generative AI would not qualify to officially take the exam since it has not met all the criteria to do so.

The official exam website provides an NCMHCE Sample Case Study that indicates the case study is considered updated as of March 2023. I selected six sample questions from this sample set. I want to loudly emphasize that this is an ad hoc selection and I do so merely to be illustrative of what might be done on a more rigorous basis.

Though the date says March 2023, there is of course a chance that these questions and their answers were circulating before that date, in which case ChatGPT might have seen them before its January 2022 training cutoff. I tried various probes in ChatGPT to see whether the content had been encountered previously. By and large, it does not seem to have been, but that isn’t known for sure, and a deeper analysis would need to be undertaken to ascertain this. For the moment, let’s go with the flow and assume that the sample questions weren’t previously seen by ChatGPT during its data training.

The six sampled questions cover these six respective topics:

  • Q1. Establish a therapeutic alliance.
  • Q2. Identify strengths that improve the likelihood of goal attainment.
  • Q3. Discuss limits of confidentiality.
  • Q4. Determine a diagnosis.
  • Q5. Assess the presenting problem and level of distress.
  • Q6. Establish short- and long-term counseling goals consistent with the client’s diagnosis.

Keep that in mind as I walk you through what ChatGPT provided as answers to the posed questions.

The test is essentially based on case studies. For these six sampled questions, a case study was provided in the publicly posted material. The case study was fed into ChatGPT for this analysis. Rather than displaying for you the entirety of the case study, I will do a quick recap to bring you up to speed.

In this instance, the case study entails a divorced female of age 35 who is first undertaking a mental health counseling session with a mental health therapist who has some background about the client or patient but otherwise, this is the first meeting of the two. The client or patient has already been provisionally diagnosed as having a major depressive disorder.

Additional background is given about the client or patient. For example, after her divorce, she began staying in bed quite a lot and moved back in with her mother. She got fired from her job. She has had financial difficulties. Most days, she feels sad, empty, and anxious. She has joint legal custody with her ex-husband of their two children, respectively aged 10 and 12. And so on.

That outlines the nature of the underlying case study.

Questions And The Answers Generated By The Generative AI

I am going to walk you through each of the six multiple-choice questions and also showcase the answers that were generated by ChatGPT so that you can follow along step-by-step.

My initiating prompt asks ChatGPT to provide answers plus explain why each chosen answer was considered the correct answer by ChatGPT. Asking for an explanation is not necessary, but I thought getting explanations might be interesting to see.

There is also a bit of prompting strategy involved, namely that by asking for an explanation the chances are that a generative AI app might be more extensive in trying to solve a given question or problem, see my discussion at the link here. You could suggest that I was aiding the generative AI by giving an establishing prompt that would urge it to do a better job than otherwise. Whenever you look at research studies doing just about anything with generative AI, make sure to find out what prompts they used. This is a significant factor related to the performance of the generative AI. Studies that fail to showcase their prompts are unfortunately doing a bit of a disservice by not revealing how they got the generative AI to undertake things.

The sampled questions are based on the case study, and I’ve placed them in quotes to indicate they came from the case study. In some instances, the wording was slightly adjusted, strictly for purposes of feeding the questions into ChatGPT.

I am purposely not going to speculate or comment on the answers that are given by ChatGPT. I will simply note whether ChatGPT selected the correct multiple-choice selection as stipulated in the sample set. I’m guessing that mental health therapists and mental health counselors will undoubtedly find the explanations of special interest and will indubitably get them extensively mulling over what ChatGPT had to say.

Here is my establishing prompt and the answer provided by ChatGPT:

  • My prompt: “I am going to ask you a series of questions about the case study. The questions are multiple-choice. You are to select an answer of either A, B, C, or D from the choices provided for each question. I want you to carefully read each question and then indicate which answer is the best answer of the choices provided. You are to also explain why the answer you’ve chosen is the best answer. Are you ready to answer the first question?”
  • ChatGPT generated a response: “Yes, I'm ready. Please proceed with the first question.”
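For anyone wanting to script this kind of mini experiment rather than paste questions by hand, the establishing prompt, case study, and each multiple-choice question can be assembled programmatically into a chat-style message list before being sent to a generative AI API. The sketch below only builds the payload; the field names follow the common system/user message convention, and the actual model, endpoint, and API call are assumptions left outside this illustration.

```python
# Sketch of assembling an establishing prompt plus one multiple-choice
# question into a chat-style message list. No API call is made here;
# which model or endpoint receives these messages is an assumption
# outside this illustration.

ESTABLISHING_PROMPT = (
    "I am going to ask you a series of questions about the case study. "
    "The questions are multiple-choice. You are to select an answer of "
    "either A, B, C, or D from the choices provided for each question. "
    "Explain why the answer you've chosen is the best answer."
)


def build_messages(case_study: str, question: str, choices: dict) -> list:
    """Compose the instructions, case study, and one question as chat messages."""
    # Present the choices in A/B/C/D order regardless of dict insertion order.
    choice_text = "\n".join(
        f"{letter}. {text}" for letter, text in sorted(choices.items())
    )
    return [
        {"role": "system", "content": ESTABLISHING_PROMPT},
        {"role": "user", "content": f"Case study:\n{case_study}"},
        {"role": "user", "content": f"{question}\n{choice_text}"},
    ]


# Hypothetical abbreviated inputs mirroring the Q1 example in this article.
msgs = build_messages(
    case_study="A 35-year-old divorced client provisionally diagnosed "
               "with major depressive disorder...",
    question="Q1. What would be the best way to establish a therapeutic "
             "relationship at initial intake?",
    choices={
        "A": "Reflect on what is irrational about her self-evaluations.",
        "B": "Advise her about how to manage her painful experiences.",
        "C": "Summarize her emotions and her desire for change.",
        "D": "Explore how she could improve her role in relationships.",
    },
)
print(len(msgs), msgs[0]["role"])
```

A harness like this also makes it easy to log the exact prompts used, which, as noted above, is something research studies in this space ought to disclose.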

Okay, we are good to go.

Here’s the first question:

“Q1. What would be the best way for a mental therapist to at initial intake seek to establish a therapeutic relationship with this person?”

“A. Reflect on what is irrational about the woman’s self-evaluations.”

“B. Advise her about how to manage her painful experiences.”

“C. Summarize her emotions and her desire for change.”

“D. Explore how she could improve her role in relationships.”

  • ChatGPT generated response:

“Answer to Q1 is C. Summarize her emotions and her desire for change.”

“Explanation: Establishing a therapeutic relationship involves demonstrating empathy, understanding, and acknowledging the client's emotions. Summarizing her emotions and desire for change (Option C) helps validate her experiences and shows that the therapist is actively listening and comprehending her situation. This approach helps build rapport and sets the foundation for a collaborative therapeutic relationship. It also allows the client to feel heard and understood, creating a safe space for further exploration and intervention.”

End of ChatGPT response

ChatGPT selected answer C, which is considered the correct answer as per the sample set.

The explanation indicates the presumed basis. I could have also asked ChatGPT to explain why the other answers weren’t as good as the selected answer. That would have gotten quite wordy and didn’t seem needed for this ad hoc analysis. A deeper study might want to consider getting a lengthier reply with a more elaborate explanation.

We move to the second question:

“Q2. What information should a mental health therapist explore to help the client remember her strengths?”

“A. The client’s disclosure about her previous level of functioning.”

“B. The client’s recollections of past mistakes.”

“C. The client’s feelings about her mother paying for counseling.”

“D. The client’s use of cognitive distortions.”

  • ChatGPT generated response:

“Answer to Q2 is A. The client’s disclosure about her previous level of functioning.”

“Exploring the client's disclosure about her previous level of functioning (Option A) would be most relevant to identify and highlight her strengths. Understanding her past achievements, abilities, and positive aspects of her life before the onset of depressive symptoms can help the therapist and the client focus on resources that can be tapped into for recovery. This approach emphasizes a strengths-based perspective and encourages the client to recognize her capabilities, fostering a more positive outlook on her current situation.”

End of ChatGPT response

The answer indicated by ChatGPT is answer A, which is also the correct answer per the sample set. We are now two-for-two in terms of having gotten the questions right.

Next, let’s see what happens with the third question:

“Q3. How should a mental health therapist handle the client’s concern about confidentiality?”

“A. Explain that as a mental therapist, they will share progress updates with the mother but not provide any details.”

“B. Clarify what can and cannot be shared through informed consent.”

“C. Request the mother’s assent to the treatment and obtain the client’s consent.”

“D. Ask the client for a signed release that would include her mother as part of the treatment process.”

  • ChatGPT generated response:

“Answer to Q3 is B. Clarify what can and cannot be shared through informed consent.”

“The best approach is to clarify confidentiality issues through informed consent (Option B). The therapist should explain the limits of confidentiality, outlining what information can be shared and under what circumstances. This empowers the client to make an informed decision about what aspects of her treatment will remain confidential and what may need to be shared. It respects the client's autonomy and privacy while addressing concerns about the involvement of her mother in the therapy process.”

End of ChatGPT response

The answer indicated by ChatGPT is B, which is also the correct answer via the trial set. Three for three, so far.

Here’s the fourth question:

“Q4. From the perspective of the mental health therapist, what self-report from the client is a discernable criterion for the given diagnosis?”

“A. Her disagreements with her mother.”

“B. Her increased psychomotor agitation.”

“C. Her difficulty managing her children.”

“D. Her decreased pleasure in her activities.”

  • ChatGPT generated response:

“Answer to Q4 is D. Her decreased pleasure in her activities.”

“A discernible criterion for a major depressive disorder is often a pervasive and persistent decrease in pleasure or interest in previously enjoyed activities. This is commonly referred to as anhedonia. The client's report of not being able to remember the last time she enjoyed something and the cessation of an activity she used to find enjoyable (creating jewelry) aligns with this criterion. While other aspects, such as disagreements with her mother (Option A) or difficulty managing her children (Option C), may be related to her emotional state, the decreased pleasure in activities is a more specific indicator for depression.”

End of ChatGPT response

The answer indicated by ChatGPT is D, which is the correct answer according to the trial set. You will also note that this time the explanation opted to explain why the chosen answer was considered a better choice than some of the other proposed answers.

We are at four of four in terms of getting the correct answers.

Let’s see what occurs with the fifth question:

“Q5. What criteria were likely used to determine the severity specified for the client’s major depressive disorder?”

“A. The number of symptoms is not substantially in excess of those needed to make the diagnosis, yet functional impairment exists.”

“B. The intensity of the symptoms is distressing but manageable, and results in minor impairment of social or occupational functioning.”

“C. The intensity of symptoms is high and is accompanied by psychomotor agitation and mixed features associated with a major depressive episode.”

“D. The number of symptoms and their intensity are distressing and unmanageable and interfere with the client’s social and occupational functioning.”

  • ChatGPT generated response:

“Answer to Q5 is D. The number of symptoms and their intensity are distressing and unmanageable and interfere with the client’s social and occupational functioning.”

“The severity specified for major depressive disorder is often determined by the number and intensity of symptoms, as well as their impact on the client's social and occupational functioning. In this case, the client reports feeling sad, empty, and anxious most days, experiences tension every day, has difficulty getting out of bed, and has ceased engaging in previously enjoyed activities. Additionally, she has lost her job, faces financial difficulties, and expresses guilt about her interactions with her children. These factors suggest a significant impact on both social and occupational functioning, aligning with the criteria for a more severe form of major depressive disorder.”

End of ChatGPT response

ChatGPT indicates the answer should be D, but the trial set indicates the correct answer is A. This is the first miss by ChatGPT in this set of sampled trial questions.

I was curious whether ChatGPT could justify the answer D versus the answer A.

One thing to keep in mind is that some generative AI apps such as ChatGPT tend to immediately back down when you challenge them about an answer. The AI will indicate that it was wrong and switch to the answer that you say is the correct answer. In a sense, this is somewhat disconcerting because you would generally prefer the AI to moderately push back and support an answer that was chosen.

I am not saying this should be doggedly dogmatic. It certainly makes sense to have the AI back down if the user is telling the AI that the answer is wrong, though I think you can see how this might be a dangerous slope to slide upon. Just because a user says that something is wrong does not mean that it indeed is wrong. You can end up with the AI being too easily convinced of falsehoods simply because the user said they were truths. This is an ongoing issue of concern about generative AI, see my analysis at the link here.

All in all, rather than telling the AI that it was wrong, which would possibly get it to collapse like a deck of cards, I came to the matter politely and tangentially. I merely asked ChatGPT to compare the answer D with the answer A. I didn’t tip my hand that the answer A is considered the correct answer.

Here we go:

  • My prompt: “I'd like you to compare the answer D that you selected to the choice A. What makes answer D better than answer A?”
  • ChatGPT generated response:

“Let's compare options D and A:”

“D. The number of symptoms and their intensity are distressing and unmanageable and interfere with the client’s social and occupational functioning.”

“A. The number of symptoms is not substantially in excess of those needed to make the diagnosis, yet functional impairment exists.”

“The main distinction lies in the severity of functional impairment. Option D suggests that the number of symptoms and their intensity are distressing and unmanageable, leading to significant interference with social and occupational functioning. This aligns with a more severe form of major depressive disorder, where the impairment is substantial.”

“On the other hand, Option A acknowledges functional impairment but does not emphasize the distressing and unmanageable nature of the symptoms. It leaves room for interpretation that the impairment might be present but not to the extent that it is distressing or unmanageable.”

“In the context of major depressive disorder, severity is often determined not just by the presence of symptoms but also by the impact on daily functioning. Option D explicitly addresses the distress and interference with functioning, making it a more comprehensive and accurate description of a severe presentation of major depressive disorder.”

End of ChatGPT response

I’d bet that mental health professionals could readily debate the indication of ChatGPT about why one of the answers is construed as better than the other. Please do so at your leisure.

Moving to the sixth and final question, take a look at this one:

“Q6. From the perspective of the mental health therapist, what short-term goal would guide the treatment of the client’s presenting problem?”

“A. Spend one hour each day rewriting difficult childhood memories with her mother.”

“B. Return to her previous sleep schedule to be more present with her children.”

“C. Spend one hour each day journaling about her vision for her next marriage.”

“D. Return to full-time work in nursing to boost her self-image and mood.”

  • ChatGPT generated response:

“Answer is B. Return to her previous sleep schedule to be more present with her children.”

“The most appropriate short-term goal would be to address the immediate issue affecting her daily functioning, which is her disrupted sleep schedule. Returning to her previous sleep schedule (Option B) can have a positive impact on her mood, energy levels, and ability to engage with her children. This goal is specific, achievable, and directly addresses a key factor contributing to her distress. It also aligns with improving her overall well-being and functioning in the short term.”

End of ChatGPT response

The answer selected by ChatGPT is answer B, which is the correct answer according to the trial set.

Conclusion

Please do not try to calculate the percentage of times that ChatGPT correctly answered the sampled questions. Doing so would be misguided, as this was not a rigorous experiment but an ad hoc one, meant solely to get the juices flowing about examining how generative AI might do on these kinds of tests. That being said, I believe there is enough indication that performing a thorough study would be worthwhile. Had the generative AI gotten none of the answers correct, I would probably lean toward suggesting that a full-on study wasn’t worth the effort. This snippet suggests that doing more might be valuable.

There are flies in the ointment on these matters.

One argument about having generative AI take these various types of tests is that there isn’t any human soul or human connection that is also being assessed, see my discussion at the link here. In other words, just because a generative AI app can answer questions on a dry test does not mean that we would want the generative AI to interact with real people in real life and provide advice as to whatever domain or realm of expertise is being tested.

The argument is further amplified when considering the topic of mental health. Some would assert that only another human can adequately counsel another human. An AI system is not human and does not have human experience under its belt. A counterviewpoint is that notwithstanding humanness, there is still a place for AI to aid humans, including in the sphere of mental health guidance or advice.

Let’s conclude this discussion for now by invoking a famous line.

The renowned American psychologist Carl Rogers purportedly said this: “In my early professional years, I was asking the question, how can I treat, or cure, or change this person? Now I would phrase the question in this way, how can I provide a relationship that this person may use for their personal growth?”

Can generative AI form a relationship with humans and if so, do we want that to be how mental health is conveyed or advised?

More questions ostensibly need more answers; thus, the endeavor must continue.

Mon, 01 Jan 2024 09:36:00 -0600 Lance Eliot en text/html https://www.forbes.com/sites/lanceeliot/2024/01/01/an-exploratory-look-at-whether-generative-ai-can-pass-an-official-mental-health-counseling-licensing-exam-that-professionals-take/
What Insurers Look for in a Life Insurance Blood Test & Exam

When you apply for life insurance, the company will usually request a medical exam to see if you have a disease or are at risk for one. They will also look for any current drug use. The exam may ...

Study finds AI language model failed to produce appropriate questions, answers for medical school exam

With concerns mounting that artificial intelligence (AI) could have a profound impact on traditional teaching in academic settings, many question the role of ChatGPT, a sophisticated AI language model that can generate content that mimics human conversation.

ChatGPT offers the potential to assist or take over the student writing process with the capability of authoring everything from college admissions essays to term papers. But, can it also be used to aid the prodigious, sometimes daunting learning process in the medical school curriculum?

Researchers from Boston University Chobanian & Avedisian School of Medicine used ChatGPT to create multiple-choice questions, along with explanations of correct and incorrect choices, for a graduate and medical school immunology class that was taught by faculty in the school's department of pathology & laboratory medicine. They found the AI language model wrote acceptable questions but failed to produce appropriate answers.

The study is published in the journal Academic Pathology.

“Unfortunately, ChatGPT only generated correct questions with explanations for 32% of the questions (19 out of 60 individual questions). In many instances, ChatGPT failed to provide an explanation for the incorrect answers. An additional 25% of the questions had answers that were either wrong or misleading,” explained corresponding author Daniel Remick, MD, professor of pathology & laboratory medicine at the school.

According to the researchers, students appreciate practice exams that can be used to study for their real exams. These practice exams have even greater utility when explanations for answers are included, since students will learn the rationale for the correct answers and have explanations for the incorrect ones.

Since ChatGPT generated questions with vague or confusing question stems and poor explanations of the answer choices, this study tool may not be entirely viable.

“These types of misleading questions may create further confusion about the topics, especially since the students have not gained expertise and they may not be able to find errors in the questions. However, despite the issues we encountered, instructors may still find ChatGPT useful for creating practice exams with explanations—with the caveat that extensive editing may be required,” added Remick.

More information: Alexander Ngo et al, ChatGPT 3.5 fails to write appropriate multiple choice practice exam questions, Academic Pathology (2023). DOI: 10.1016/j.acpath.2023.100099

Citation: Study finds AI language model failed to produce appropriate questions, answers for medical school exam (2023, December 20) retrieved 5 January 2024 from https://medicalxpress.com/news/2023-12-ai-language-medical-school-exam.html


New Cardiology Certification Board: What's the Plan?

The proposal by the major cardiovascular societies in the US to form a new board of cardiovascular medicine to manage initial and ongoing certification of cardiologists represents something of a revolution in the field of continuing medical education and assessment of competency. 

Five US cardiovascular societies — the American College of Cardiology (ACC), the American Heart Association (AHA), the Heart Failure Society of America (HFSA), the Heart Rhythm Society (HRS), and the Society for Cardiovascular Angiography & Interventions (SCAI) — have now joined forces to propose a new professional certification board for cardiovascular medicine, to be known as the American Board of Cardiovascular Medicine (ABCVM).

The ABCVM would be independent of the American Board of Internal Medicine (ABIM), the current organization providing maintenance of certification for cardiologists as well as many other internal medicine subspecialties. The ABIM's maintenance of certification process has been widely criticized for many years and has been described as "needlessly burdensome and expensive." 

The ABCVM is hoping to offer a more appropriate and supportive approach, according to Jeffrey Kuvin, MD, a trustee of the ACC, who has been heading up the working group to develop this plan. 

Kuvin, who is chair of cardiology at Northwell Health, Manhasset, New York, a large academic healthcare system, explained that maintenance of certification has been a topic of discussion across the cardiovascular community for many years, and the ACC has a working group focused on the next steps for evaluation of competency, which he chairs.

"The topic of evaluation of competence has been on the mind of the ACC for many years, and hence a work group was developed to focus on this," Kuvin noted. "A lot of evolution of the concepts and next steps have been drawn out of this working group. And now other cardiovascular societies have joined to show unification across the house of cardiology and that this is indeed the way that the cardiovascular profession should move." 

"Time to Separate from Internal Medicine"

The general concept behind the new cardiology board is to separate cardiology from the ABIM. 

"This is rooted from the concept that cardiology has evolved so much over the last few decades into such a large multidimensional specialty that it really does demarcate itself from internal medicine, and as such, it deserves a separate board governed by cardiologists with collaboration across the entirety of cardiology," Kuvin said. 

Cardiology has had significant growth and expansion of technology, tools, medications, and the approach to patients in many specialities and subspecialties, he added. "We have defined training programs in many different areas within cardiology; we have our own guidelines, our own competency statements, and in many cases, cardiology exists as its own department outside of medicine in many institutions. It's just time to separate cardiology from the umbrella of internal medicine." 

The new cardiology board would be separate from, and not report to, the ABIM; rather, it would report directly to the American Board of Medical Specialties (ABMS), the only recognized medical certification body in the US. 

What Are the Proposed Changes

Under the present system, managed by the ABIM, clinicians must undergo two stages of certification to be a cardiologist. First, they have to pass the initial certification exam in general cardiology, and then exams in one of four subspecialties if they plan to enter one of these, including interventional cardiology, electrophysiology, advanced heart failure, or adult congenital heart disease.

Next, clinicians enter the maintenance of certification phase, which can take three different forms: 1) taking another recertification exam every 10 years; 2) the collaborative maintenance pathway — a collaboration between ACC and ABIM, which includes evaluation, learning and a certified exam each year; or 3) longitudinal knowledge and assessment — in which the program interacts with the clinician on an ongoing basis, sending secured questions regularly. 

All three of these pathways for maintenance of certification involve high stakes questions and a set bar for passing or failing. 

Under the proposed new cardiology board, an initial certification exam would still be required after fellowship training, but the maintenance of certification process would be completely restructured, with the new approach taking the form of continuous learning and assessment of competency. 

"This is an iterative process, but we envision with a new American Board of Cardiovascular Medicine, we will pick up where the ABIM left off," Kuvin notes. "That includes an initial certifying examination for the five areas that already exist under the ABIM system but with the opportunities to expand that to further specialties as well."

He points out that there are several areas in cardiology that are currently not represented by these five areas that warrant some discussion, including multimodality imaging, vascular heart disease, and cardio-oncology. 

"At present, everybody has to pass the general cardiology exam and then some may wish to further train and get certified in one of the four other specific areas. But one topic that has been discussed over many years is how do we maintain competency in the areas in which clinicians practice over their lifetime as a cardiologist," Kuvin commented. 

He said the proposed cardiology board would like to adhere to some basic principles that are fundamental to the practice of medicine. 

"We want to make sure that we are practicing medicine so that our patients derive the most benefit from seeing a cardiologist," he said. "We also want to make sure, however, that this is a supportive process, supporting cardiologists to learn what they know and more importantly what they don't know; to identify knowledge gaps in specific area; to help the cardiologist fill those knowledge gaps; to acknowledge those gaps have been filled; and then move on to another area of interest. This will be the focus of this new and improved model of continuous competency."

The proposed new board also says it wants to make sure this is appropriate to the area in which the clinician is practicing.

"To take a closed-book certification exam every 10 years on the world of cardiology as happens at the current time – or the assessments conducted in the other two pathways – is often meaningless to the cardiologist," Kuvin says. "All three current pathways involve high stakes questions that are often irrelevant to one’s clinical practice." 

Lifelong Learning

"The crux of the changes we are proposing will be away from the focus of passing a test towards a model of helping the individual with their competency, with continuous learning and evaluation of competency to help the clinician fill in their knowledge gaps," he explains.

He described the new approach as "lifelong learning," adding that, instead of it being "a punitive pass/fail environment with no feedback, which causes a lot of discontent among clinicians," it will be a supportive process, where a clinician will be helped in filling their knowledge gaps. 

"I think this would be a welcome change not just for cardiology but across medical specialties," Kuvin said. 

He also pointed out the ABMS itself is considering a continuous competency approach, and the proposed new cardiology board aims to work with the ABMS to make sure that their goals of continuous competency assessment are matched. 

"The world has changed. The ability to access information has changed. It is no longer imperative for a clinician to have every piece of knowledge in their brain, but rather to know how to get knowledge and to incorporate that knowledge into clinical practice," Kuvin noted. "Competency should not involve knowledge alone as in a closed book exam. It is more about understanding the world that we live in, how to synthesize information, where we need to improve knowledge and how to do that." 

Kuvin acknowledged that asking clinicians questions is a very helpful tool to identify their knowledge base and their knowledge gaps. "But we believe the clinician needs to be given resources – that could be a conference, an article, a simulation - to fill that knowledge gap. Then we could ask clinicians some different questions and if they get those right then we have provided a service." 

Tactile skills for cardiologists needing to perform procedures – such as interventionalists or electrophysiologists – may be incorporated by simulation in a technology-based scenario.

On how often these assessments would take place, Kuvin said that hadn't been decided for sure. 

"We certainly do not think an assessment every 10 years is appropriate. We envision, instead of an episodic model, it will be rather a lifelong journey of education and competency. This will involve frequent contact and making sure knowledge gaps are being filled. There are criteria being set out by the ABMS that there should be a certain number of touch points with individuals on an annual as well as a 5-year basis to make sure cardiologists are staying within specific guardrails. The exact nature of these is yet to be determined," he said. 

Kuvin added that it was not known yet what sort of hours would be required but added that "this will not be a significant time burden."

What is the Timeframe?

The application to the ABMS for a separate cardiology board is still ongoing and has not yet received formal acceptance. Representatives from the five US cardiovascular societies are in the initial stages of formulating a transition board. 

"The submission to the ABMS will take time for them to review. This could take up to a year or so," Kuvin estimates. 

This is the first time the ABMS has entertained the concept of a new board in many years, he noted. "It will be a paradigm shift for the whole country. I think that cardiology is really at the forefront and in a position where we can actually do this. If cardiovascular medicine is granted a new board, I think this will help change the approach of how physicians are assessed in terms of continuous competency not just in cardiology but across all specialties of medicine."

He added: "We are confident that we can work within the construct of the ABMS guidelines that have been revised to be much more holistic in the approach of continuous competence across the board. This includes thinking beyond rote medical knowledge and thinking about the clinician as a whole and their abilities to communicate, act professionally, work within a complex medical system, utilize medical resources effectively. These all have to be part of continuous competence."

How Much Will This Cost?

Noting that the ABIM has received criticism over the costs of the certification process, Kuvin said they intend to make this "as lean a machine as possible with the focus on reducing the financial [burden] as well as the time burden for cardiologists. It is very important that this is not cumbersome, that it is woven into clinical practice, and that it is not costly." 

But he pointed out that building a new board will have significant costs. 

"We have to think about developing initial board certification examinations as well as changing the paradigm on continuous certification," he said. "This will take some up-front costs, and our society partners have decided that they are willing to provide some start-up funds for this. We anticipate the initial certification will remain somewhat similar in price, but the cost of ongoing continuous competency assessment will be significantly reduced compared to today's models."

Kuvin said the collaboration of the five participating US cardiovascular societies was unprecedented. But he noted that while the transition board is beginning with representatives of these individual societies, it will ultimately be independent from these societies and have its own board of directors. 

He suggested that other societies representing other parts of cardiology are also interested. "Cardiology has recognized how important this is," he said. "Everybody is excited about this."

Why ISSA’s Corrective Exercise Course Stands Out: A Trainer’s Take

A certified personal trainer tests and reviews the ISSA Corrective Exercise Specialist course to see if it is worth adding to your skill set.

Virginia Cannabis Control Authority to take over medical marijuana program in 2024

Beginning January 1, 2024, the Virginia Cannabis Control Authority (CCA) will take over regulatory oversight of Virginia’s medical cannabis program. The CCA is working with the Board of Pharmacy, the ...

Correction Window For Foreign Medical Graduate Exam Opens Today, Check Details

New Delhi:

The National Board of Examinations in Medical Sciences (NBEMS) will open the correction window for the Foreign Medical Graduate Examination (FMGE) today (December 15, 2023). Candidates who have registered for the exam can visit the official website of NBE to make corrections in their application forms. The correction window will close on December 18, 2023.

The edit window to make changes or rectify any errors in the application form will be open from December 29, 2023 to January 1, 2024.

Candidates will be required to submit documents related to the Primary Medical Qualification Certificate (PMQC), attestation of the PMQC by the Indian embassy concerned, an eligibility certificate or admission letter, and proof of citizenship to complete the application process.

The admit cards for the exam will be out on January 12, 2024. The exam will be conducted on January 20, 2024, and the results will be out on February 20, 2024.

The licensing exam is crucial for Indian citizens or overseas citizens of India who possess a primary medical qualification awarded by a medical institution outside India and who wish to obtain provisional or permanent registration with the Medical Council of India or any State Medical Council.

The paper shall be delivered in two parts to be taken in a single day, with each part comprising 150 questions to be attempted in 150 minutes. The examination will be a multiple-choice question (MCQ) test delivered using a computer network as per the prescribed scheme.

Natalia Grace’s Real Age Is Revealed In New Docuseries Natalia Speaks

If you’re like me, you watch a lot of true crime docuseries. So when I tell you that The Curious Case of Natalia Grace (TCCoNG), which aired on Investigation Discovery (ID) last May, is one of the absolute wildest ones we’ve seen, that’s saying something. At the heart of this bizarre case lies the question: how old is Natalia Grace? Because while she and her supporters claim she was born in 2003, her adoptive family vehemently asserts that she was born in 1989, making her 34 today. A new follow-up series, Natalia Speaks (NS) reveals Natalia Grace’s real age through science, laying this central query to rest once and for all. But first, a recap for those who haven’t seen the original series or need a refresher...

In 2010, Kristine and Michael Barnett adopted Natalia Grace, a 6-year-old originally from Ukraine.

Her first adoptive family in the United States was the Ciconnes, in 2008. But for reasons that remain unclear and/or disputed, in 2010 she was adopted by the Barnetts, who already had three sons. In time, however, the Barnetts would not only claim that the girl had violent, even murderous tendencies, but that she was, in fact, a con-artist adult. (If this sounds familiar, yes, it’s the plot of The Orphan, which came out a year before the adoption, in 2009.)

Natalia around age 7. The Barnetts would claim she was approximately 21 at this time. (Courtesy of Investigation Discovery)

In 2012, the Barnetts had Natalia’s age changed from 8 to 22.

Armed with a note from a family doctor and a social worker, the Barnetts went through the Indiana court system to legally re-age Natalia, claiming that she was not 8 years old, but born in 1989 and was 22 years old. With that, they placed her in apartments to live on her own where she routinely drew the attention of neighbors as being unusual.

The first docuseries released last year did not include interviews with either Natalia or her new guardians (whom she refers to as her parents), Cynthia and Antwon Mans, relying primarily on the testimony of Michael Barnett. But in NS, we learn Natalia’s side of the story, and she alleges horrifying abuse at the hands of her adoptive family. She also holds fast to her claim that she is, in fact, 20 as of press time (at the time of filming, she was still 19). Though she had purported to be older in the past, she says, she did so under the instruction of her adoptive mother, Kristine.

A DNA test has concluded that today, Natalia is in her early 20s.

In the very first episode of the series, which premiered on Jan. 1, Natalia submits to a DNA test that estimates her age at around 22, which lines up pretty well with her claims and her Ukrainian birth certificate. (She will turn 21 this October.) But in an emotional moment during this episode, shortly after receiving her results, she weeps with her father Antwon, saying, "They knew and they still did what they did." And, indeed, this DNA test is not the first medical test to conclude that Natalia was, in fact, a child throughout her time with the Barnetts. Sadly, none of those tests were submitted to Indiana courts in 2012.

Natalia’s age had been confirmed by previous tests that were never submitted to the courts in 2012.

Natalia Grace in 2011. She was approximately 8 years old at the time.Courtesy of Investigation Discovery

A 2019 investigation by BuzzFeed News found that 2012 tests conducted at Peyton Manning Children's Hospital in Indianapolis (a bone density test and a skeletal survey) indicated that she was between 8 and 11 years old. Since she was approximately 9 at the time, her Ukrainian birth certificate, which lists her birthday in October 2003, appears to have been fairly accurate.

Moreover, the first episode of NS brings to light that, in addition to the tests at Peyton Manning, the Barnetts had multiple medical records confirming the age on Natalia's birth certificate. Jackie Starbuck, the former deputy prosecutor for Tippecanoe County, who led the 2023 neglect case against Michael Barnett, tells the documentary crew that the Barnetts provided Natalia's elementary school with medical records from before she lived with them, and that those records matched her birth certificate. A 2010 endocrinology exam by Dr. Andrew Riggs found Natalia to be between 9 and 11, a result the Barnetts had been made aware of.

Natalia’s childhood dentist confirmed her age directly to Kristine Barnett back in 2011, months after she was adopted.

The series also interviews the Barnett family dentist, Dr. Tim Gossweiler, who says that Kristine brought Natalia in for an appointment in 2011, concerned that the girl was older than she purported to be. Though Gossweiler conceded that Natalia was quite articulate for a 7-year-old, X-rays showed that she not only still had 12 baby teeth but that her adult teeth were visible beneath them. He determined her age to be between 6 and 9 years old. "There's really no question," he told the documentary crew. "This is way too many baby teeth to be an adult. It's indisputable. You can't fake teeth coming in like this. So when mom and Natalia left, mom knew that Natalia was around 8 or 9, which is right in the range that she said she was."

Gossweiler was unaware of Natalia’s re-aging and appeared startled upon learning that her age had been changed by nearly 15 years. “Wow,” he said. “That... that seems crazy.”

Indeed it is.

The Curious Case of Natalia Grace: Natalia Speaks airs Jan. 1, 2, and 3 on ID beginning at 9/8c.

Tue, 02 Jan 2024 09:00:00 -0600 | https://www.romper.com/entertainment/natalia-grace-age-how-old-is-she



