Summary:
- Over the last five years, GiveDirectly has used AI/ML technology to improve our programs delivering cash aid to people living in poverty.
- AI/ML models come with risk, so we’ve developed a Responsible AI/ML Framework to illustrate how we are using AI/ML and share our current principles and guardrails.
- We hope this framework will encourage organizations to create guardrails for how they deploy AI/ML and collaborate with us to develop best practices.
We use big data and AI to swiftly and accurately deliver cash relief to people in need
GiveDirectly has leveraged big data and artificial intelligence (AI), including machine learning (ML)¹, to identify people in need and deliver cash directly to them 4-8x faster and with improved accuracy. This has proven particularly useful for delivering cash relief in response to crises like hurricanes, floods, and COVID-19, and in contexts that lack up-to-date registries of the most vulnerable households.
We must balance the benefits of AI/ML against the potential risks
We need to ensure our use of AI/ML protects the best interests of recipients (our first value) while still helping us deliver cash with greater speed, scale, and cost-effectiveness. These tools also come with risks we must mitigate, including that:
- AI/ML can be opaque and hard to explain;
- AI/ML needs large amounts of training data, which require careful handling and storage;
- Models can reproduce biases or inaccuracies in training data;
- AI/ML systems can reduce human oversight.
AI/ML is developing faster than the guardrails necessary to ensure it is used well. While many aid organizations are using AI/ML in their programs, there is little concrete, transparent documentation of how they are handling the risks and tradeoffs.
That said, newly published AI standards like the ISO standard on AI management and calls for a humanitarian manifesto for AI can help guide organizations to adopt ethical AI and manage these risks.
Our Responsible AI/ML Framework combines ethical principles and practical protocols, informed by past work and expert insights
In the spirit of transparency and open learning, we’re sharing GiveDirectly’s Responsible AI/ML Framework: a set of principles and actionable protocols guiding how we use this technology. We built this framework by combining existing principles for responsible AI use with lessons from our past work and interviews with GiveDirectly recipients in Malawi and Kenya, GiveDirectly staff, and outside academic experts.
Read GiveDirectly’s Responsible AI/ML Framework 👇
Join us in shaping responsible AI/ML guidelines that mitigate risks and are centered on community needs
We encourage other organizations to build on this framework and develop guidelines that center the needs of communities and reduce the risk of harm. We welcome input and will iterate as our understanding of risks, community preferences, and uses of AI/ML develops. If you’d like to exchange ideas or partner on developing best practices for responsible AI/ML use, please reach out at [email protected].
Footnotes
- Artificial Intelligence, or AI, is an umbrella term for tools that automate human tasks. Machine learning is a type of AI that is trained on large amounts of data to make predictions or identify patterns in that data (Spencer, 2024). In this paper, we will use the term “AI/ML” to encompass the suite of tools we use to make predictions for our programs.