A Comparative Assessment of Large Language Models in Congenital Hypothyroidism: Reliability, Quality and Readability
Original Article

J Clin Res Pediatr Endocrinol. Published online 21 April 2026.
1. University of Health Sciences Türkiye, Antalya City Hospital, Department of Pediatric Endocrinology, Antalya, Türkiye
Received Date: 29.01.2026
Accepted Date: 10.04.2026
E-Pub Date: 21.04.2026
Abstract

Objective

To comparatively evaluate the reliability, quality, and readability of responses generated by widely used large language model (LLM)–based chatbots to congenital hypothyroidism (CH)–related patient questions.

Methods

Forty CH frequently asked questions (FAQs), derived from clinician-reviewed patient education resources, were submitted under standardized conditions (December 2025) to ChatGPT-4, ChatGPT-5.2, Gemini, and Copilot. The modified DISCERN (mDISCERN) instrument was used to assess reliability, whereas the Global Quality Score (GQS) was used to evaluate quality. Readability was evaluated using Flesch Reading Ease (FRE), Flesch–Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman–Liau Index (CLI), and Simple Measure of Gobbledygook (SMOG). Scores were compared using Friedman tests with Bonferroni-corrected post hoc analyses.
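For reference, the two Flesch indices named above are computed from simple text statistics. The sketch below shows the published Flesch Reading Ease and Flesch–Kincaid Grade Level formulas; the word, sentence, and syllable counts are assumed to come from a separate text-analysis step (the study does not specify its counting tool), and the example numbers are illustrative, not from the study data.

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """FRE: higher scores indicate easier text (60-70 is roughly plain English)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """FKGL: approximate US school grade level required to understand the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Illustrative example: a 100-word passage with 5 sentences and 130 syllables.
fre = flesch_reading_ease(100, 5, 130)    # ~76.6 -> "fairly easy"
fkgl = flesch_kincaid_grade(100, 5, 130)  # ~7.6 -> about 7th-8th grade
```

A sixth-grade target, the patient-education benchmark cited in the Results, corresponds roughly to FRE above 80 and FKGL at or below 6.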

Results

Median mDISCERN scores were 5.0 for ChatGPT-4, ChatGPT-5.2, and Gemini, and 4.0 for Copilot. Median GQS scores were 5.0 for ChatGPT-4, ChatGPT-5.2, and Gemini, and 4.0 for Copilot. Differences among models were significant for both mDISCERN and GQS (p<0.001), with ChatGPT-5.2 outperforming others in key pairwise comparisons. Readability differed significantly across all indices (all p<0.001). ChatGPT-5.2 demonstrated the highest FRE and lowest FKGL, whereas Gemini produced the most complex text. However, all models exceeded the recommended sixth-grade reading level.

Conclusion

LLM-based chatbots generated CH information of generally moderate-to-high quality, but readability remains suboptimal for patient education. ChatGPT-5.2 showed the best overall performance. LLM outputs may support patient information needs but should complement, not replace, clinician-provided counseling.

Keywords:
Artificial intelligence, ChatGPT, congenital hypothyroidism, Copilot, Google Gemini, large language models