Abstract
Generative Artificial Intelligence (GenAI) applications, such as ChatGPT, are transforming how individuals access health information, offering conversational and highly personalized interactions. While these technologies can enhance health literacy and decision-making, their capacity to generate deeply tailored—hypercustomized—
responses risks amplifying confirmation bias by reinforcing pre-existing beliefs, obscuring medical consensus, and perpetuating misinformation, posing significant challenges to public health. This paper examines GenAI-mediated confirmation bias in health information seeking, driven by the interplay between GenAI’s hypercustomization capabilities and users’ confirmatory tendencies. Drawing on parallels with traditional online information-seeking behaviors, we identify three key “pressure points” where biases might emerge: query phrasing, preference for belief-consistent content, and resistance to belief-inconsistent information. Using illustrative
examples, we highlight the limitations of existing safeguards and show how even minor variations in an application’s configuration (e.g., Custom GPTs) can exacerbate these biases along those pressure points. Given the widespread adoption and fragmentation (e.g., OpenAI’s GPT Store) of GenAI applications, their influence on health-seeking behaviors demands urgent attention. Since technical safeguards alone may be insufficient, we propose a set of interventions, including enhancing digital literacy, empowering users with critical engagement strategies, and implementing robust regulatory oversight. These recommendations aim to ensure the safe integration
of GenAI into daily life, supporting informed decision-making and preserving the integrity of public understanding of health information.
| Original language | English |
| --- | --- |
| Journal | Annals of the New York Academy of Sciences |
| Publication status | Submitted - 2025 |