Disability Misinformation on Facebook: A Comparison of LLM-Based Fact-Checking Tools

Social media has become a prominent space for seeking and sharing information, but it also enables misinformation to spread. For disability-related information, such as how to apply for a Medicaid waiver, understanding and detecting false information on social media is further complicated by policies that vary across programs and change over time. To provide an initial exploration of this problem space, we investigated the propensity for misinformation within disability-related Facebook groups, the organizational factors associated with it, and how well AI fact-checking tools detect this type of information. We first identified target Facebook groups through a large-scale survey and then web-scraped 1,407 informational, fact-checkable posts from 20 public Facebook groups mentioned in the survey. GPT-4o, GPT-o1, and Originality.ai were used to classify the posts, and the results were validated against manually generated ground-truth labels. Our findings reveal that groups centered on developmental disabilities are more vulnerable to misinformation. The AI fact-checking tools were effective at classifying accurate information but showed varying performance in detecting misinformation. This work provides a preliminary assessment of the prevalence of disability-related misinformation on Facebook and of the performance of LLM-based fact-checking tools.

* This paper is an outcome of the 2024 ASSIP internship at Mason, a program designed for high school and undergraduate students.

Venue: 
[Forthcoming] iConference 2026 Proceedings, Mar. 29-Apr. 2, Edinburgh, U.K.
Authors: 
Ian Prazak
Leah Padovani
Yool Lim
Julia H.P. Hsu
Myeong Lee