In June 2025, the International Diabetes Federation established its first Technology and AI Working Group, marking a landmark step towards integrating digital innovation into the global response to diabetes.
Diabetes currently affects over half a billion people worldwide. AI technologies are already being deployed across the care continuum — from risk prediction and retinopathy screening to insulin dosing and closed-loop systems.
However, the rapid expansion of AI-enabled diabetes technologies has outpaced the development of standards to govern their use. Critical gaps persist in data quality, algorithmic fairness, regulatory harmonisation, and equitable access. Many direct-to-consumer apps bypass rigorous oversight entirely. Without globally coordinated action, AI risks deepening health inequities rather than closing them.
In addition, most AI tools are trained predominantly on Western population data, raising serious concerns about their performance in Asian, African, and Middle Eastern populations, where diabetes phenotypes differ significantly.
“We are entering a transformative era in healthcare, where technology and AI offer powerful tools to personalise and optimise diabetes care. But innovation without guardrails risks deepening the very inequities we seek to close. This is why the International Diabetes Federation has taken the lead — to ensure that the AI revolution in diabetes serves everyone: from high-resource urban centres to the most remote rural clinics. It is not just about technology; it is about trust, transparency, and the fundamental right of every person living with diabetes to benefit from progress safely and equitably.” Prof Peter Schwarz, IDF President
Vision
A world where every person living with diabetes benefits from AI and digital innovation — safely, equitably, and ethically — regardless of where they live.
Mission
To develop globally agreed strategies, guidelines, and educational resources that ensure AI-driven solutions in diabetes care are evidence-based, transparent, inclusive, and aligned with the needs of clinicians, patients, and health systems worldwide.
“Artificial intelligence is no longer a futuristic concept in diabetes care — it is here, now and in the hands of clinicians and people living with diabetes across the world. Over 200 AI-enabled tools are already in use, yet we have no globally agreed-upon standards to ensure they are safe, fair, and effective for diverse populations. Our activities will deliver the world’s first consensus guidelines on responsible AI in diabetes care, grounded in evidence, shaped by regional realities, and driven by the voices of clinicians and people with diabetes. We owe it to the more than half a billion people living with diabetes to get this right.” Dr Amit Kumar Dey, Chair, IDF Technology & AI Working Group
Responsible AI in a healthcare context must be:
- Ethical and accountable: With clear frameworks for governance, liability, and professional oversight.
- Transparent and explainable: Algorithms must not be opaque “black boxes” — their decision-making should be understandable to clinicians and patients.
- Fair and inclusive: Actively minimising bias and ensuring models are trained on diverse, representative population data.
- Safe: With robust protocols for monitoring and reporting adverse events.
- Protective of privacy: Upholding data security and recognising the patient as the ultimate owner of their data.
- Accessible and affordable: Ensuring innovation does not widen the digital divide, particularly in low- and middle-income countries.
| Members | Advisers |
| --- | --- |
| Chair: Dr Amit Kumar Dey (India) | Prof Peter Schwarz (Germany) – IDF President |
| Dr Daphne Gardner (Singapore) | Dr Banshi Saboo (India) – Chair, IDF South-East Asia Region |
| Dr Denise Franco (Brazil) | Prof Moshe Phillip (Israel) – Co-Chair, ATTD |
| Dr Elaine Chow (Hong Kong) | |
| Dr Hossam Arafa Ghazi (Egypt) | |
| Dr Inge Van Boxelaer (Belgium) | |
| Dr Manoj Chawla (India) | |
| Dr Viral Shah (USA) | |
Global roundtable programme
Between August and December 2025, the Working Group convened five regional roundtables and one policy-facing session, engaging over 40 experts from more than 20 countries. Each session was structured around the 20 needs gaps identified to guide the Working Group's activities, while also capturing regional nuances.
August 2025: Mumbai, India (DRS 2025)
The inaugural roundtable addressed India’s urban-rural digital divide, linguistic diversity requiring multilingual AI tools, affordability barriers, and the critical need for local validation of algorithms developed on Western data.
September 2025: Vienna, Austria (EASD 2025)
Europe’s strong regulatory environment (EU AI Act, GDPR, MDR) framed this discussion on regulatory harmonisation, standardised outcome metrics, the central role of patient organisations, and reimbursement variability across member states.
September 2025: Geneva, Switzerland
This high-level policy session at the Global Health Campus engaged WHO, FIND, ITU/FG-AI4H, Ministries of Health, and the World Diabetes Foundation — translating clinical priorities into implementable policy and framing the “Geneva Statement on AI for Diabetes.”
December 2025: Singapore (ATTD-Asia 2025)
This roundtable highlighted extreme regional heterogeneity, the B2C “wellness app” loophole that bypasses regulation, the fundamental data deficit, and a powerful patient perspective noting that people with diabetes sometimes prefer AI support because it is free from judgment.
December 2025: Dubai, UAE (PASID Gulf 2025)
This roundtable addressed severely fragmented health data, absent national diabetes registries, acute representation bias and a “literacy-first” imperative. Participants stressed the need to demonstrate cost-effectiveness in the “language of economists” to engage policymakers.
Future direction
The activities of the Working Group are progressing from consultation and landscape mapping towards structured prioritisation and agreement-building, followed by synthesis into clear, usable outputs. This pathway will translate shared learnings into practical guidance and resources. These will include consensus recommendations, policy-oriented summaries, implementation tools, and capacity-building/education modules to support safe, effective, and equitable adoption of technology and AI in diabetes care.
“The future of diabetes care, augmented by AI and technology, must be safe, fair, and accessible for all. Together, we can make it so.” Dr Amit Kumar Dey
The promise of AI and how the diabetes community can evolve with it
AI holds extraordinary potential: predictive algorithms that identify risk years before onset; retinopathy screening in primary care clinics; closed-loop insulin delivery; and coaching apps that support self-management around the clock. However, achieving this promise demands responsibility — validating tools across diverse populations, making them affordable, training users, and holding developers accountable.
- Clinicians can embrace AI as a clinical partner — augmenting, never replacing, their judgment.
- Patient organisations can champion participatory design, ensuring those most affected shape the tools that serve them.
- Regulators can use IDF’s forthcoming guidelines as a framework for harmonising standards across borders.
- Researchers can prioritise inclusive datasets, long-term studies, and CONSORT-AI/SPIRIT-AI reporting standards.
- Industry can commit to transparency, local validation, and equitable pricing.