Empowering women’s health literacy
Well Informed
Challenge
Despite information being just one click away, women searching for endocrine-health guidance find the web awash in gender-biased myths.
Endocrine disorders (hypothyroidism, diabetes, PCOS, endometriosis) affect 6–12% of women worldwide, yet 83% report self-diagnosing incorrectly or self-medicating after encountering bad advice, straining both wallets and doctor-patient trust.¹
A Plan International study shows 91% of girls worry about online health misinformation,² while Frost & Sullivan notes 75–85% already rely on digital tools for care.³ Faced with unmoderated forums, influencer posts, and ad-driven blogs, women shoulder the invisible labour of fact-checking, often blaming themselves when the “facts” fail them.
“After getting burned once, I spent two hours fact-checking a single blog post,” said a research participant.
Understanding how search tools amplify health misinformation and its impact was the first step toward solutions that (1) give women back the power to assess claims about their bodies and (2) lift the exhausting burden of constant fact-checking.
Research
To bring clarity to a complex topic, I focused my thesis on four key themes: how women seek health information online, whom they trust, their experiences with misinformation, and how they ultimately use what they find.
I recruited participants from online communities: Reddit, Facebook groups, endocrine health forums, and physical spaces like campus boards and local parks. It wasn’t easy. Gaining trust in spaces where misinformation often thrives required careful, respectful outreach. (I leaned on trauma-informed design principles throughout the process, ensuring that every step, from outreach to interviews, centered safety, empathy, and participant autonomy.)
I received 114 survey responses and conducted 12 interviews with women aged 18–35 from around the world: India, Canada, Europe, Australia, and the U.S. One-third of respondents said they’d encountered misleading health information; 36% had been directly impacted by it. That meant 41 women had suffered—financially, emotionally, physically—because of false online advice. That number stayed with me.
So I prepared for the interviews differently. I wanted them to feel more like safe conversations than structured research. Most women had PCOS or thyroid-related concerns, and many had tried everything from “natural” supplements to expensive programs. Misinformation hadn’t just misled them; it had cost them money, hope, and time.
Several only realized they’d been misled after joining online communities, where they heard similar stories. These spaces offered solidarity, but not always safety. Without moderators, bad advice could still spread. One participant told me it took her two years to learn how to spot misinformation. Another spent over two hours a day trying to verify what she read, still unsure what to trust.
Most relied on blogs, social media, or research papers. Only two participants felt confident navigating information, and both had health-related backgrounds or access to professionals. Many lacked the time or resources to validate sources or see doctors regularly.
Throughout these conversations, what struck me most was how deeply community mattered. For many, it was the only source of comfort and support. But the community alone wasn’t enough. This insight led me to a bigger question: how might design and technology amplify trust while minimizing harm? What if we could equip people not just to connect, but to discern?
Several participants shared that even their experiences with doctors left them confused or dismissed. The emotional toll of misinformation was tangible—it led to anxiety, shame, delays in care, and in some cases, worsened outcomes. This underscored the urgent need for accessible, reliable, and compassionate health information.
Through synthesizing responses, I identified a five-stage journey most participants experienced: getting diagnosed, finding resources to understand their condition, forming a perception of the diagnosis, managing it, and working toward improved outcomes. The “resource” stage—when women actively search for information—was the most vulnerable to misinformation, and deeply affected the rest of their journey.
Many participants didn’t just search using clinical keywords. They often typed full questions, described symptoms, and included emotional cues. Search results varied drastically depending on how these queries were framed. This observation revealed a gap between how people naturally search for help and how online health information is structured.
Although health literacy and misinformation awareness are often promoted as solutions, 66% of participants said they never learned how to evaluate information critically, neither at school nor at home. While tools like educational campaigns, health tracking apps, and peer support platforms exist, they haven’t proven effective in addressing the scale or complexity of misinformation’s spread.
Without expert moderation, community groups—though emotionally supportive—sometimes became echo chambers of poor advice. This misinformation, in turn, created ripple effects: misallocated health resources, increased pressure on providers, delayed care, and even illegal sales of unregulated treatments.
The insight was clear: there’s a pressing need to embed health professionals into community spaces—not to dominate them, but to gently validate, educate, and empower. Without trusted guidance and verifiable resources, women are left to sift through noise at their most vulnerable moments. And that, in itself, is a system-level failure.
This research reaffirmed for me that when it comes to health, access isn’t just about availability—it’s about trust, clarity, and care.
I didn’t start this project with a fully formed product idea. I simply wanted to solve a problem that kept surfacing in my research: how can we help women reclaim power over the health information they consume—without carrying the exhausting burden of constant fact-checking?
Initially, I considered building a community platform. But after digging deeper, I realized that the community already had places to connect. What was missing was a sense of clarity and trust in the information being shared. That’s when the idea for Well Informed emerged—a health information reliability checker designed for women navigating endocrine conditions, beginning with PCOS.
The project was co-designed with healthcare professors, an endocrinologist, medical students, and my research participants. Together, we prototyped Well Informed not as a final product, but as a concept to test: What if there were a tool that made accurate, reliable health information easier to find, understand, and trust?
Well Informed combines curated PCOS content, user-tailored information delivery, and AI-powered content analysis using natural language processing and machine learning. The prototype included a tag-based system to help users assess the trustworthiness of articles. For complex or unclear claims, users could request human review—an added layer of validation provided by a panel of healthcare professionals.
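To make the tag-based assessment concrete, here is a minimal illustrative sketch of how an article could be mapped to the three labels used in testing (Reliable, Ambiguous, Not Reliable). This is not the prototype’s actual model—Well Informed used NLP and machine learning—and the phrase lists and rules below are hypothetical stand-ins for learned features:

```python
from dataclasses import dataclass

# Hypothetical feature lists; a real system would learn these signals.
RED_FLAG_PHRASES = ["miracle cure", "detox", "guaranteed results"]
CREDIBILITY_SIGNALS = ["peer-reviewed", "randomized controlled trial", "clinical guideline"]

@dataclass
class Report:
    label: str       # "Reliable" | "Ambiguous" | "Not Reliable"
    flags: list      # red-flag phrases found in the article
    signals: list    # credibility signals found in the article

def assess(text: str) -> Report:
    """Tag an article with a reliability label from simple keyword features."""
    t = text.lower()
    flags = [p for p in RED_FLAG_PHRASES if p in t]
    signals = [s for s in CREDIBILITY_SIGNALS if s in t]
    if signals and not flags:
        label = "Reliable"
    elif flags and not signals:
        label = "Not Reliable"
    else:
        # Mixed or no evidence: route to the human-review panel.
        label = "Ambiguous"
    return Report(label, flags, signals)
```

In the prototype’s flow, an “Ambiguous” result is exactly where the human-review layer (the healthcare panel) would be offered to the user.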
Instead of searching through dozens of tabs, users would receive personalized, verified information aligned with their questions. Small in-app nudges aimed to build health literacy without overwhelming the user.
Through prototype testing, I observed how this tool helped women feel more in control, less anxious, and more confident in their decision-making. It didn’t replace their communities or doctors—it supported them in navigating both.
This wasn’t about building the next big health app. It was about exploring whether a tool like Well Informed could meaningfully shift the experience of finding information from overwhelming to empowering.
The idea still lives as a tested prototype. But the vision remains clear: give women the tools to assess claims about their own bodies—and relieve them from the exhausting work of constantly questioning every headline, post, or “expert” opinion they see online.
To evaluate the success of the Well Informed MVP, I conducted two rounds of testing: a 5-day article review study via Mechanical Turk, and Zoom-based usability testing with five new participants.
For the article review study, I recruited five participants and asked them to evaluate one article each day for five days. Articles ranged from blogs and news sites to academic papers. Participants categorized each article as Reliable, Ambiguous, or Not Reliable before receiving an auto-generated report in a text-based format. To assess whether users would trust expert input, I asked: Would you want a health expert’s opinion on this article? If they answered yes, they were shown a pre-collected expert tip, helping validate their interest in seeking professional advice.
This test revealed that participants could easily identify open-source research as reliable but struggled with the credibility of blogs and news pieces. The "Ask a Health Expert" feature was most used when an article was flagged as unreliable or ambiguous, confirming its usefulness in moments of uncertainty. To keep participants engaged, I sent a health literacy tip at the end of each session. Feedback on these tips was overwhelmingly positive, prompting me to consider embedding a “daily tip” feature in future iterations.
In parallel, I conducted usability testing with five new female participants aged 18–29: three with experience managing PCOS, and two seeking to learn more about the condition. Through moderated Zoom sessions, I observed how they interacted with the app prototype.
Participants found the Health Expert Vote feature intuitive, but asked for more context—Who are these experts? What makes them credible? This feedback highlighted a need to add credentials and background information to boost user trust. Additionally, they wanted a notification feature to confirm when votes were submitted and when results would be available, as the current prototype did not update in real-time.
Both testing rounds offered valuable insights. If I had more time, I’d love to expand this study to include a broader range of content types and further explore how people naturally gather, assess, and act on health information.
Well Informed was never just about presenting information—it was about giving women their time, confidence, and clarity back. By helping users assess the reliability of health content related to endocrine conditions like PCOS, the tool aims to reduce misinformation fatigue, build health literacy, and support more informed decision-making.
Even in its early prototype form, Well Informed showed promise. Testing confirmed that features like article tagging and “Ask a Health Expert” provided a sense of validation and guidance. Participants felt less overwhelmed and more confident navigating uncertain information.
But alongside this impact comes ethical responsibility.
1. Accuracy and Algorithmic Bias
While Well Informed uses AI to evaluate content, its performance depends on the quality of training data. If the data contains bias or gaps, certain narratives or sources could be unintentionally excluded. It's essential to continuously audit these models and include diverse medical voices to ensure inclusive and balanced content evaluation.
2. Privacy and Data Ethics
Though the prototype didn’t collect personal data or symptom information, I designed it with a privacy-first mindset from the start. The goal is to uphold anonymity, data minimization, and user control. That means only collecting what’s necessary to provide value—without ever asking for sensitive health details unless the user explicitly consents. Future iterations would include transparent privacy policies, optional data sharing, and localized data storage where feasible.
Designing for privacy isn’t just compliance—it’s a form of care. Especially when working with populations who are already vulnerable to misinformation, shame, and health stigma.
Well Informed holds potential not just as a tool for better health decisions, but as a model for how we design ethical, empowering digital health experiences. With further development and care, it could reshape how we approach information trust, expert access, and personal health autonomy—without compromising privacy.