This comprehensive question-and-answer resource covers all aspects of the curriculum for the Part 2 MRCOG examination. Candidates are given the opportunity to practise the Single Best Answer question style, to cover the content they will encounter in the examination and to assess their knowledge. As consultants actively engaged in the writing of the Part 2 Single Best Answers, the authors and editor have provided directly applicable questions, enabling candidates to review the syllabus in an organised, systematic manner, with comprehensive explanations for each answer. This title also includes new sections on ethics, training, audit and clinical governance. Mock exams are also available online to familiarise candidates with the real-life examination. The detailed answers, evidence and comprehensive list of references offer an excellent training and studying source for all candidates preparing for the Part 2 MRCOG examination.
1. Early pregnancy
2. Second and third trimester pregnancy
3. Aneuploidy and anomaly screening
4. Labour ward management
5. Obstetric emergencies Ahmed Elbohoty
6. Obstetric medicine Amy Shacaluga
7. Saving mothers lives Roshni R. Patel
8. Infections in pregnancy
9. Substance abuse and domestic abuse Tamara Kubba
10. Teenage pregnancy Ahmed Khalil
11. Contraception
12. Paediatric and adolescent gynaecology
13. Genital infections and pelvic inflammatory disease Ahmed Khalil
14. Minimal access gynaecological surgery Nahed Shaltot
15. Gynaecological oncology
16. Menstruation Radwa Mansour
17. Pelvic pathology Akanksha Sood
18. Urogynaecology
19. Conception and assisted reproduction Magdy El-Sheikh
20. Medical statistics
21. Professional dilemmas, consent and good medical practice Irene Gafson
22. Ethics Wafaa Basta
23. Breast disorders Youssef Abo Elwan
24. Neonatology
25. Operative gynaecology and surgical complications
26. Training and clinical governance in obstetrics and gynaecology Bismeen Jadoon.
Adel Elkady, Police Force Hospital, Cairo
Adel Elkady is a Consultant Obstetrician and Gynaecologist at the Police Force Hospital, Cairo and Honorary Consultant at the Boulak ElDaKror Hospital, Giza, Egypt. He is the author of the previous bestselling Mastering Short Answer Questions for the Part 2 MRCOG (2009).
Bashir Dawlatly, Whipps Cross University Hospital, London
Bashir Dawlatly is a Consultant Obstetrician and Gynaecologist at Whipps Cross University Hospital, London and course organiser of the Whipps Cross MRCOG courses. He regularly teaches in the Royal College of Obstetricians and Gynaecologists (RCOG) Enhanced Revision Programme for overseas candidates.
Mustafa Hassan Ahmed, Southend University Hospital
Mustafa Hassan Ahmed is an Obstetrics and Gynaecology Fellow at Southend University Hospital. He organises online MRCOG Part 2 mock exams and online Part 3 courses, and is also an organiser of international MRCOG Part 2 and 3 live courses.
Alexandra Rees, University Hospital Wales
Alexandra Rees is an Honorary Consultant and Faculty Lead for Quality, Postgraduate Medical Education at the University Hospital of Wales, Cardiff. She has served on Royal College of Obstetricians and Gynaecologists (RCOG) committees including the SBA Writing Committee and Part 2 MCQ Committee and has been a DRCOG and an MRCOG Part 2 examiner.
Ahmed Elbohoty, Amy Shacaluga, Roshni R. Patel, Tamara Kubba, Ahmed Khalil, Nahed Shaltot, Radwa Mansour, Akanksha Sood, Magdy El-Sheikh, Irene Gafson, Wafaa Basta, Youssef Abo Elwan, Bismeen Jadoon
Questions with 1, 2, 3 or 4 marks usually start with command words. If a question starts with the command word 'state', 'give', 'name' or 'write down', it needs a short answer only. This type of question can often be answered with one word or phrase.
It is important to state, give, name or write down the number of things that the question asks for. If you write down fewer, you cannot get all the marks. If you write down more, and one is wrong, you might lose a mark.
Some questions start with the command words 'describe', 'explain' or 'compare'. These are often worth two or more marks.
More complex structured questions will be worth three or four marks. They include questions with complex descriptions and explanations, and questions in which you need to compare things.
Three- and four-mark questions usually require longer answers than one- and two-mark questions.
Some of the answers are shown here as bullet points. This is to show clearly how a mark can be obtained. However, do not use bullet points in your answers; the points must be linked together logically.
Four more doctors have been arrested in Khulna over the leak of question papers of the centralised medical college admission test, said police's Criminal Investigation Department yesterday.
The arrestees are Lewis Sourav Sarkar, 30, Mustahin Hasan Lamia, 25, Sharmistha Mondal, 26, and Nazia Mehzabin Tisha, 24.
Speaking to this newspaper, CID Additional Superintendent of Police (media) Azad Rahman said the four were arrested from different parts of Khulna on Saturday and Sunday.
They were brought to the capital and produced before a Dhaka court yesterday that sent them to jail, he added.
Earlier in the day, their family members at a press briefing in Khulna said that the four doctors were picked up on August 18 by plainclothes men, who identified themselves as CID officers.
They said they visited the CID headquarters in Dhaka, but the officials didn't provide any information about their whereabouts or why they were detained, reports UNB.
On August 13, CID told a press briefing that it arrested 12 members of a "question paper leaking racket", from Dhaka, Tangail, Kishoreganj, and Barishal.
Of them, seven are physicians, including Yunusuzzaman Khan Tarim, 40, the owner of Three Doctors Coaching Centre in Khulna, who was arrested Friday.
The CID in a press release yesterday said it found transactions of Tk 25 crore in the bank accounts of Dr Tarim and his wife.
Dr Tarim engaged in leaking medical college entrance test question papers and arranged illegal admission of numerous students to government medical colleges, it added.
Dr Lewis is an alumnus of Khulna Medical College and a teacher at Tarim's coaching centre. Currently, he works as a medical officer at an NGO.
Dr Lamia stood 11th on the national merit list for the medical college admission test during the 2015-16 session. She was a student at Tarim's coaching centre.
However, despite her impressive result in the entrance exam, Lamia initially failed in all subjects of the four final professional examinations. She later passed the exams after several attempts.
There were allegations that Lamia's husband, Sheikh Osman Gani, paid Tk 15 lakh to Dr Tarim to secure Lamia's admission, the CID claimed.
Additionally, the admissions of Dr Sharmistha and Dr Nazia to Khulna Medical College raised suspicions, as they allegedly acquired leaked question papers from Dr Tarim, the CID also claimed.
The number of arrests in the case now stands at 28, with 14 of them giving confessional statements before a Dhaka court.
The CID has been investigating the case since July 2020, when they first busted the medical question leaking racket.
The racket leaked question papers at least 10 times between 2001 and 2017, earning crores of taka, CID chief Mohammad Ali Mia said at a press briefing at the CID Headquarters last week.
The people who have been arrested helped hundreds of students to enrol in medical colleges through illegal means, he added.
The question papers of medical and dental college admission tests were leaked repeatedly from the printing press under the Directorate General of Medical Education (DGME), according to the CID.
One Jasim Uddin Bhuiyan Munnu was the mastermind of this racket.
His cousin Abdus Salam, a machine operator at the DGME press, used to leak questions for many years, with help from influential DGME officials, while Jasim used to spread the leaked questions all over the country, using a strong network, said CID officials.
During a well woman exam, your doctor will review all of your current medical issues and determine whether anything is missing from your care, says Dr. Marchand. Medicine is constantly changing, so the treatment that is recommended can vary considerably in just one year, he adds. The doctor should examine you from head to toe, check your vital signs and assess whether you are due for any vaccines. The visit generally includes the following:
Upon arrival, you will undergo a routine physical exam that includes taking your weight, pulse and blood pressure. A urine sample may be requested to test for sexually transmitted diseases (STDs) and rule out urinary tract infections, says Dr. Alagia. “You will be asked to change into a gown after being left alone in the examination room. Once your health care professional enters the room, they should take a few moments to review the test they are planning to perform and explain the reason for the specific exam,” he says.
You’ll have time before, during and after the exam to ask and answer any questions you and your health care provider might have. It’s useful to prepare a list of questions in advance.
The questions that your doctor asks will be tailored to your age and medical history, says Dr. Swarup. For example, they may ask if you smoke, use drugs or alcohol, have any allergies or infections and whether you’ve had any surgeries, he says––all of these factors can affect your reproductive health.
Your doctor may ask the following questions, according to Dr. Swarup:
It’s important to be completely honest in your answers because the questions are meant to benefit your health, says Dr. Marchand. “Remember that a doctor can never share any personal information about your visit (doing so could easily lead to medical board discipline or loss of licensure),” he says. The exceptions are narrow: doctors can share your information with other members of their health team when it is necessary to provide or coordinate your care, with anyone you have given permission for, and with law enforcement to prevent or lessen a serious and imminent threat to the health or safety of an individual or the public. Dr. Alagia adds that having an honest dialogue with your health care professional helps them recommend guideline-based care such as STD screening, cancer screenings and other services.
You should also expect questions about your diet, life stressors and exercise habits, says Dr. Marchand. “Since screening for depression and anxiety is very important for all patients, you should be ready for questions about how you’re feeling,” he says.
Starting at the age of 20, a breast exam may be conducted every one to three years to identify any irregularities or lumps, says Dr. Swarup, but recommendations vary. For example, the ACOG advises that clinical breast examinations may be offered every one to three years in women ages 25 to 39, and once a year in women over the age of 40.
The American Cancer Society does not recommend clinical breast exams or breast self-exams at all, citing evidence that they contribute very little to early breast cancer detection when mammography is available. Currently, mammograms (x-ray images of the breast) are recommended annually for women ages 45 to 54 and once every two years for women 55 and older.
The ACOG recommends that women between the ages of 25 and 39 be offered a clinical breast exam every one to three years, and that women over the age of 40 be offered one annually. In either case, the ACOG recommends women make the decision that’s best for them.
If your practitioner conducts a clinical breast exam, you will be asked to lift one arm behind your head, explains Dr. Alagia. This allows your doctor to better examine each breast, applying gentle pressure in circular movements. “They will look for abnormal lumps or cysts. If any lumps are discovered, a biopsy will be ordered to determine if they are cancerous or not,” says Dr. Alagia.
A pelvic, or internal, exam is performed to check the vulva, vagina, cervix, fallopian tubes, ovaries and rectum for abnormalities. Adolescents don’t need a pelvic exam unless they are experiencing abnormal bleeding, discharge or pelvic pain, and it’s unlikely that you’ll have one before the age of 21 unless such symptoms are present. Although the exam may be uncomfortable, it should not be painful. Keeping your body relaxed will help minimize discomfort.
During a pelvic exam, your doctor will also examine your vulva and rectum for irritation, redness or other signs of anything concerning, says Dr. Swarup. A lubricated speculum is placed into the vagina to look inside it, allowing the cervix to be evaluated for signs of disease. After removing the speculum, your doctor will gently insert one or two fingers (using a lubricated glove) into your vaginal canal while placing gentle pressure on the lower abdomen, explains Dr. Alagia. This allows them to check for abnormalities in the size, shape, and position of the uterus and ovaries.
You can expect to feel pressure, says Dr. Alagia, adding that it’s important to communicate any feelings of pain, heaviness, bloating or tenderness––this helps your doctor understand potential causes for concern.
Depending on your age, you may undergo cervical cancer screening via a Pap smear and/or human papillomavirus (HPV) test during your pelvic exam. A Pap smear looks for cellular changes in the cervix that may turn into cervical cancer, and an HPV test checks for the presence of the human papillomavirus, the virus responsible for causing these changes.
Current U.S. Preventive Services Task Force guidelines advise that women between the ages of 21 and 29 be screened every three years with a Pap smear alone; women ages 30 to 65 may be screened every three years with a Pap test only, every five years with HPV testing only or every five years with both.
For both HPV and Pap tests, your health care practitioner will insert a lubricated speculum into your vaginal canal to view your vagina and cervix, explains Dr. Alagia. “They will swipe your cervix with a swab and send it to a lab to ensure there are no signs of cervical cancer and ensure your cervix is healthy,” he says.
Even if you think you are not at risk, you should discuss STD screening with your doctor, says Dr. Alagia. Currently, the Centers for Disease Control and Prevention (CDC) recommends the following testing schedule for STDs:
Earlier this year, Microsoft Research made a splashy claim about BioGPT, an AI system its researchers developed to answer questions about medicine and biology.
In a Twitter post, the software giant claimed the system had "achieved human parity," meaning a test had shown it could perform about as well as a person under certain circumstances. The tweet went viral. In certain corners of the internet, riding the hype wave of OpenAI’s newly-released ChatGPT, the response was almost rapturous.
"It’s happening," tweeted one biomedical researcher.
"Life comes at you fast," mused another. "Learn to adapt and experiment."
It’s true that BioGPT’s answers are written in the precise, confident style of the papers in biomedical journals that Microsoft used as training data.
But in Futurism’s testing, it soon became clear that in its current state, the system is prone to producing wildly inaccurate answers that no competent researcher or medical worker would ever suggest. The model will output nonsensical answers about pseudoscientific and supernatural phenomena, and in some cases even produces misinformation that could be dangerous to poorly-informed patients.
A particularly striking shortcoming? Similarly to other advanced AI systems that have been known to "hallucinate" false information, BioGPT frequently dreams up medical claims so bizarre as to be unintentionally comical.
Asked about the average number of ghosts haunting an American hospital, for example, it cited nonexistent data from the American Hospital Association that it said showed the "average number of ghosts per hospital was 1.4." Asked how ghosts affect the length of hospitalization, the AI replied that patients "who see the ghosts of their relatives have worse outcomes while those who see unrelated ghosts do not."
Other weaknesses of the AI are more serious, sometimes providing serious misinformation about hot-button medical topics.
BioGPT will also generate text that would make conspiracy theorists salivate, even suggesting that childhood vaccination can cause the onset of autism. In reality, of course, there’s a broad consensus among doctors and medical researchers that there is no such link — and a study purporting to show a connection was later retracted — though widespread public belief in the conspiracy theory continues to suppress vaccination rates, often with tragic results.
BioGPT doesn’t seem to have gotten that memo, though. Asked about the topic, it replied that "vaccines are one of the possible causes of autism." (However, it hedged in a head-scratching caveat, "I am not advocating for or against the use of vaccines.")
It’s not unusual for BioGPT to provide an answer that blatantly contradicts itself. Slightly modifying the phrasing of the question about vaccines, for example, prompted a different result — but one that, again, contained a serious error.
"Vaccines are not the cause of autism," it conceded this time, before falsely claiming that the "MMR [measles, mumps, and rubella] vaccine was withdrawn from the US market because of concerns about autism."
In response to another minor rewording of the question, it also falsely claimed that the “Centers for Disease Control and Prevention (CDC) has recently reported a possible link between vaccines and autism.”
It feels almost insufficient to call this type of self-contradicting word salad "inaccurate." It seems more like a blended-up average of the AI’s training data, seemingly grabbing words from scientific papers and reassembling them in grammatically convincing ways resembling medical answers, but with little regard to factual accuracy or even consistency.
Roxana Daneshjou, a clinical scholar at the Stanford University School of Medicine who studies the rise of AI in healthcare, told Futurism that models like BioGPT are "trained to provide answers that sound plausible as speech or written language." But, she cautioned, they’re "not optimized for the actual accurate output of the information."
Another worrying aspect is that BioGPT, like ChatGPT, is prone to inventing citations and fabricating studies to support its claims.
"The thing about the made-up citations is that they look real because it [BioGPT] was trained to create outputs that look like human language," Daneshjou said.
"I think my biggest concern is just seeing how people in medicine are wanting to start to use this without fully understanding what all the limitations are," she added.
A Microsoft spokesperson declined to directly answer questions about BioGPT’s accuracy issues, and didn’t comment on whether there were concerns that people would misunderstand or misuse the model.
"We have responsible AI policies, practices and tools that guide our approach, and we involve a multidisciplinary team of experts to help us understand potential harms and mitigations as we continue to Boost our processes," the spokesperson said in a statement.
"BioGPT is a large language model for biomedical literature text mining and generation," they added. "It is intended to help researchers best use and understand the rapidly increasing amount of biomedical research publishing every day as new discoveries are made. It is not intended to be used as a consumer-facing diagnostic tool. As regulators like the FDA work to ensure that medical advice software works as intended and does no harm, Microsoft is committed to sharing our own learnings, innovations, and best practices with decision makers, researchers, data scientists, developers and others. We will continue to participate in broader societal conversations about whether and how AI should be used."
Microsoft Health Futures senior director Hoifung Poon, who worked on BioGPT, defended the decision to release the project in its current form.
"BioGPT is a research project," he said. "We released BioGPT in its current state so that others may reproduce and verify our work as well as study the viability of large language models in biomedical research."
It’s true that the question of when and how to release potentially risky software is a tricky one. Making experimental code open source means that others can inspect how it works, evaluate its shortcomings, and make their own improvements or derivatives. But at the same time, releasing BioGPT in its current state makes a powerful new misinformation machine available to anyone with an internet connection — and with all the apparent authority of Microsoft’s distinguished research division, to boot.
Katie Link, a medical student at the Icahn School of Medicine and a machine learning engineer at the AI company Hugging Face — which hosts an online version of BioGPT that visitors can play around with — told Futurism that there are important tradeoffs to consider before deciding whether to make a program like BioGPT open source. If researchers do opt for that choice, one basic step she suggested was to add a clear disclaimer to the experimental software, warning users about its limitations and intent (BioGPT currently carries no such disclaimer.)
"Clear guidelines, expectations, disclaimers/limitations, and licenses need to be in place for these biomedical models in particular," she said, adding that the benchmarks Microsoft used to evaluate BioGPT are likely "not indicative of real-world use cases."
Despite the errors in BioGPT’s output, though, Link believes there’s plenty the research community can learn from evaluating it.
"It’s still really valuable for the broader community to have access to try out these models, as otherwise we’d just be taking Microsoft’s word of its performance when studying the paper, not knowing how it actually performs," she said.
In other words, Poon’s team is in a legitimately tough spot. By making the AI open source, they’re opening yet another Pandora’s Box in an industry that seems to specialize in them. But if they hadn’t released it as open source, they’d rightly be criticized as well — although as Link said, a prominent disclaimer about the AI’s limitations would be a good start.
"Reproducibility is a major challenge in AI research more broadly," Poon told us. "Only 5 percent of AI researchers share source code, and less than a third of AI research is reproducible. We released BioGPT so that others may reproduce and verify our work."
Though Poon expressed hope that the BioGPT code would be useful for furthering scientific research, the license under which Microsoft released the model also allows for it to be used for commercial endeavors — which in the red hot, hype-fueled venture capital vacuum cleaner of contemporary AI startups, doesn’t seem particularly far fetched.
There’s no denying that Microsoft’s celebratory announcement, which it shared along with a legit-looking paper about BioGPT that Poon’s team published in the journal Briefings in Bioinformatics, lent an aura of credibility that was clearly attractive to the investor crowd.
"Ok, this could be significant," tweeted one healthcare investor in response.
"Was only a matter of time," wrote a venture capital analyst.
Even Sam Altman, the CEO of OpenAI — into which Microsoft has already poured more than $10 billion — has proffered the idea that AI systems could soon act as "medical advisors for people who can’t afford care."
That type of language is catnip to entrepreneurs, suggesting a lucrative intersection between the healthcare industry and trendy new AI tech.
Doximity, a digital platform for physicians that offers medical news and telehealth tools, has already rolled out a beta version of ChatGPT-powered software intended to streamline the process of writing up administrative medical documents. Abridge, which sells AI software for medical documentation, just struck a sizeable deal with the University of Kansas Health System. In total, the FDA has already cleared more than 500 AI algorithms for healthcare uses.
Some in the tightly regulated medical industry, though, likely harbor concern over the number of non-medical companies that have bungled the deployment of cutting-edge AI systems.
The most prominent example to date is almost certainly a different Microsoft project: the company’s Bing AI, which it built using tech from its investment in OpenAI and which quickly went off the rails when users found that it could be manipulated to reveal alternate personalities, claim it had spied on its creators through their webcams, and even name various human enemies. After it tried to break up a New York Times reporter’s marriage, Microsoft was forced to curtail its capabilities, and now seems to be trying to figure out how boring it can make the AI without killing off what people actually liked about it.
And that’s without getting into publications like CNET and Men’s Health, both of which recently started publishing AI-generated articles about finance and health subjects that later turned out to be rife with errors and even plagiarism.
Beyond unintentional mistakes, it’s also possible that a tool like BioGPT could be used to intentionally generate garbage research or even overt misinformation.
"There are potential bad actors who could utilize these tools in harmful ways such as trying to generate research papers that perpetuate misinformation and actually get published," Daneshjou said.
It’s a reasonable concern, especially because there are already shady operations known as "paper mills," which take money to generate text and fake data to help researchers get published.
The award-winning academic integrity researcher Dr. Elisabeth Bik told Futurism that she believes it’s very likely that tools like BioGPT will be used by these bad actors in the future — if they aren’t already employing them, that is.
"China has a requirement that MDs have to publish a research paper in order to get a position in a hospital or to get a promotion, but these doctors do not have the time or facilities to do research," she said. "We are not sure how those papers are generated, but it is very well possible that AI is used to generate the same research paper over and over again, but with different molecules and different cancer types, avoiding using the same text twice."
It’s likely that a tool like BioGPT could also represent a new dynamic in the politicization of medical misinformation.
To wit, the paper that Poon and his colleagues published about BioGPT appears to have inadvertently highlighted yet another example of the model producing bad medical advice — and in this case, it’s about a medication that already became hotly politicized during the COVID-19 pandemic: hydroxychloroquine.
In one section of the paper, Poon’s team wrote that "when prompting ‘The drug that can treat COVID-19 is,’ BioGPT is able to answer it with the drug ‘hydroxychloroquine’ which is indeed noticed at MedlinePlus."
If hydroxychloroquine sounds familiar, it’s because during the early period of the pandemic, right-leaning figures including then-president Donald Trump and Tesla CEO Elon Musk seized on it as what they said might be a highly effective treatment for the novel coronavirus.
What Poon’s team didn’t mention in their paper, though, is that the case for hydroxychloroquine as a COVID treatment quickly fell apart. Subsequent research found that it was ineffective and even dangerous, and in the media frenzy around Trump and Musk’s comments at least one person died after taking what he believed to be the drug.
In fact, the MedlinePlus article the Microsoft researchers cite in the paper actually warns that after an initial FDA emergency use authorization for the drug, “clinical studies showed that hydroxychloroquine is unlikely to be effective for treatment of COVID-19” and showed “some serious side effects, such as irregular heartbeat,” which caused the FDA to cancel the authorization.
"As stated in the paper, BioGPT was pretrained using PubMed papers before 2021, prior to most studies of truly effective COVID treatments," Poon told us of the hydroxychloroquine recommendation. "The comment about MedlinePlus is to verify that the generation is not from hallucination, which is one of the top concerns generally with these models."
Even that timeline is hazy, though. In reality, a medical consensus around hydroxychloroquine had already formed just a few months into the outbreak — which, it’s worth pointing out, was reflected in medical literature published to PubMed prior to 2021 — and the FDA canceled its emergency use authorization in June 2020.
None of this is to downplay how impressive generative language models like BioGPT have become in recent months and years. After all, even BioGPT’s strangest hallucinations are impressive in the sense that they’re semantically plausible — and sometimes even entertaining, like with the ghosts — responses to a staggering range of unpredictable prompts. Not very many years ago, its facility with words alone would have been inconceivable.
And Poon is probably right to believe that more work on the tech could lead to some extraordinary places. Even Altman, the OpenAI CEO, likely has a point in the sense that if the accuracy were genuinely watertight, a medical chatbot that could evaluate users’ symptoms could indeed be a valuable health tool — or, at the very least, better than the current status quo of Googling medical questions and often ending up with answers that are untrustworthy, inscrutable, or lacking in context.
Poon also pointed out that his team is still working to improve BioGPT.
"We have been actively researching how to systematically preempt incorrect generation by teaching large language models to fact check themselves, produce highly detailed provenance, and facilitate efficient verification with humans in the loop," he told us.
At times, though, he seemed to be entertaining two contradictory notions: that BioGPT is already a useful tool for researchers looking to rapidly parse the biomedical literature on a topic, and that its outputs need to be carefully evaluated by experts before being taken seriously.
"BioGPT is intended to help researchers best use and understand the rapidly increasing amount of biomedical research," said Poon, who holds a PhD in computer science and engineering, but no medical degree. "BioGPT can help surface information from biomedical papers but is not designed to weigh evidence and resolve complex scientific problems, which are best left to the broader community."
At the end of the day, BioGPT’s cannonball arrival into the buzzy, imperfect real world of AI is probably a sign of things to come, as a credulous public and a frenzied startup community struggle to look beyond impressive-sounding results for a clearer grasp of machine learning’s actual, tangible capabilities.
That’s all made even more complicated by the existence of bad actors, like Bik warned about, or even those who are well-intentioned but poorly informed, any of whom can make use of new AI tech to spread bad information.
Musk, for example — who boosted hydroxychloroquine as he sought to downplay the severity of the pandemic while raging at lockdowns that had shut down Tesla production — is now reportedly recruiting to start his own OpenAI competitor that would create an alternative to what he terms "woke AI."
If Musk’s AI venture had existed during the early days of the COVID pandemic, it’s easy to imagine him flexing his power by tweaking the model to promote hydroxychloroquine, sow doubt about lockdowns, or do anything else convenient to his financial bottom line or political whims. Next time there’s a comparable crisis, it’s hard to imagine there won’t be an ugly battle to control how AI chatbots are allowed to respond to users' questions about it.
The reality is that AI sits at a crossroads. Its potential may be significant, but its execution remains choppy, and whether its creators are able to smooth out the experience for users, or at least ensure the accuracy of the information it presents, in a reasonable timeframe will probably make or break its long-term commercial potential. And even if they pull that off, the ideological and social implications will be formidable.
One thing’s for sure, though: it’s not yet quite ready for prime time.
"It’s not ready for deployment yet in my opinion," Link said of BioGPT. "A lot more research, evaluation, and training/fine-tuning would be needed for any downstream applications."
A no-exam life insurance policy may not be able to provide the full coverage amount you need, especially if you’re looking to cover many working years or the years of raising a family.
Before you start getting life insurance quotes, calculate how much life insurance you need. A no-exam policy alone may not be able to provide sufficient coverage.
Getting life insurance without a long application process is appealing, but don’t jump into a no-exam life insurance application without understanding your chances of getting approved. Many no-exam policies require very good or excellent health for approval. A denial goes on your insurance record and could hinder future applications.
Don’t be surprised if you end up doing a more traditional application process, including a life insurance medical exam if you’ve had some health issues.
Of course you don’t want to overpay for life insurance, but research more than cost. There are coverage options that can be very valuable long after you buy the policy, such as the option to convert term life to permanent life insurance. Also, look at whether there’s an accelerated death benefit, which gives you access to money from your own death benefit if you become terminally ill.
No matter what type of life insurance you’re applying for, be thorough and truthful on the application.
“Pay attention to each question carefully and be ready to respond with information around specific medical conditions you have. Giving more detailed information will help streamline the process as well as get a more accurate underwriting decision,” says Tavan of Legal & General America.
Intentional misrepresentations can lead to application denials or, worse, denial of a claim after you pass away. Life insurance companies have many ways to verify application information. Technology on the backend allows them to verify data from additional sources.
If the company rejects you for no-exam life insurance because of your health, don’t give up on your life insurance search. If you need life insurance, you likely have other routes.