This is a supplement to the main AI Readiness report. All results refer only to responses from India, unless otherwise specified.
Table of contents
For a better experience, we recommend viewing this report on a desktop browser rather than a mobile device.
Introduction
Executive summary
In March 2024, GivingTuesday’s Generosity AI Working Group launched the AI Readiness Survey to assess how nonprofit organisations worldwide are adapting to the growing influence of artificial intelligence (AI) and other emerging technologies. The survey explored organisations’ capacity to adopt AI, their comfort with these tools, and the barriers they face. This research is a critical benchmark in the efforts to ensure that the social sector is not left behind as AI becomes increasingly relevant.
A key element of this research was to define “AI readiness” by measuring who is already using AI, how it is being utilised, and identifying the organisational traits linked to AI adoption. This helps establish a shared understanding and common language to facilitate the responsible and effective deployment of AI technologies in the social sector.
The India AI Readiness Report was developed as a supplement to the global study, focusing exclusively on insights from 251 organisations across the country. It was created to explore the unique dynamics of AI adoption in India’s nonprofit sector, and establish a better understanding of the distinct challenges and opportunities Indian organisations face with AI.
We hope this report will help anchor your decision-making in relation to AI, whether you’re a nonprofit considering how you compare to others on the AI adoption scale, a technology platform looking to better understand current nonprofit needs, an educator looking to develop resources for nonprofits, or a funder considering how best to support the sector’s AI adoption. AI systems, tooling, and education must be designed to meet nonprofits where they are while also inspiring meaningful progress. Without this alignment, we risk perpetuating a cycle of AI development that fails to address the unique needs of nonprofits, potentially leading to inefficiencies rather than improvements. Our aim is for this research to help nonprofits not just utilise AI, but use it well.
By spotlighting India’s progress, we hope to spark further conversation around how to collectively approach AI adoption in the social sector and guide Indian nonprofits, funders, and stakeholders in their decision-making and collaborations.
Key takeaways:
- India is still in the early stages of AI adoption, particularly within the nonprofit sector, and this nascent stage influences several aspects of its use. Many organisations are just beginning to explore AI, often starting with simpler tools like generative AI. As a result, key factors such as data governance and risk awareness are underdeveloped compared to findings from the global sample.
- Two groups emerged from the sample: early adopters and late adopters. Early adopters tended to be larger, urban-based organisations, while late adopters were smaller, rural-focused, and less likely to use AI beyond generative AI. Early adopters used data and AI more extensively, with a higher desire to expand AI usage compared to late adopters, who largely remained uncertain about future AI use.
- Early adopters are more likely to use a wider range of AI tools, and show more interest in using these tools for future applications. Early adopters in India are showing a strong desire to expand their use of AI beyond generative tasks, with 60-80% of this group interested in utilising AI across various applications. This contrasts sharply with late adopters, where only about 10% show interest in using AI for future applications beyond generative tools.
- Indian organisations are comfortable with AI, but remain reserved in their views on broader adoption. Indian respondents focus more on the benefits of AI, and have fewer concerns about the risks and dangers, while globally, comfort with AI is generally lower, and scepticism about AI’s risks versus rewards is more pronounced. This may be because India’s culture and legislative landscape around data protection regulation, including AI, are less established.
- Indian respondents predominantly expressed hopes tied to efficiency, productivity, and resource optimisation, while their fears focused on job displacement, skill gaps, data privacy, and bias. Concerns about AI’s role in impeding creativity were also more common in India. The global sample mirrors India’s concerns, but notably, ethical implications and broader societal impacts are more frequently highlighted in global responses, suggesting that Indian organisations may benefit from greater engagement with these broader discussions about AI ethics.
- Indian organisations, particularly smaller ones, lag in implementing data-use policies and cloud data storage compared to the global averages. Interestingly, larger Indian organisations employ MERL and tech personnel at higher rates than the global norm, but are less likely to engage in data-sharing agreements. Globally, organisations are more advanced in data policies and infrastructure, suggesting that India’s nonprofits may need targeted support to develop these foundational elements of AI readiness.
How to use this report
This report goes in-depth into the results of the AI readiness survey with details on methodology, key findings, and visualisations of the data sets. While we encourage you to read the entirety of this report, different readers may find certain sections more relevant for their work.
Our suggestions
- If you are seeking immediate insights, you may find the Key takeaways and Discussion sections most helpful. These sections summarise the primary results of our analysis and the main areas for future work.
- If you are a nonprofit intermediary supporting tool building or educational resources for nonprofits in relation to AI adoption, we suggest reviewing Findings from India, including those on organisational capacity and AI readiness, organisation personas and trends, and data use.
- If you are a researcher or analyst, scan the Table of contents to find the topics that align with your area of research. For more detailed breakdowns of data sources and further analysis that wasn’t included in the report, utilise the Footnotes, or the Research supplement from the global report.
- If you are interested in utilising the data for your own research, learn more in the Data access section.
- If you would like to download, browse, or utilise any of the data charts from this report for your own content, you can access them in our Visualizations Library.
For nonprofits
While this report goes into detail regarding the current state of AI readiness and utilisation, you won’t find resources on how to integrate or adopt AI into your work. However, if you’re looking for resources at any stage of AI utilisation and readiness – whether you’re not yet using AI, just beginning to integrate it, experimenting, building, or learning, we have compiled a variety of resources that can be found at the end of this report. These include people and organisations to connect with, training, and off-the-shelf resources and guides.
- Explore more resources: Interested in diving deeper into resources, tools, and datasets for AI experimentation? Check out our resource libraries and datasets that are searchable and accessible via GivingTuesday’s Generosity AI Working Group.
- Contribute to the problem library: Have a hypothesis or question about AI use cases? Register and contribute to our growing problem library by sharing your questions and problems to inform forward progress of responsible AI research, work and experimentation to investigate or address these challenges.
- Share your resources: Have resources that you’d like to share with others in the sector? We’d love to feature them. Submit them here.
About the sample
In all, we heard from 251 organisations in India1, spread across six regions2.
Smaller organisations were more common in our sample, with those having 6-15 employees being the most prevalent.
We combined a binary organisation size (those with 15 or fewer staff versus those with more) with the type of geography served by their programming (rural, urban, or metro) to create cohorts. We used the Reserve Bank of India (RBI) definitions, which classify rural areas as having fewer than 10,000 people, urban (and peri-urban) areas as 10,000 to 1 million, and metro areas as those exceeding 1 million. The number of respondents in each combination is shown on the bars below.
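The RBI population thresholds described above can be expressed as a small classification rule. This is an illustrative sketch only; the function name and the idea of classifying from a raw population number are our own framing, not part of the survey methodology:

```python
def classify_geography(population: int) -> str:
    """Classify a service area using the RBI population thresholds
    cited in this report: rural < 10,000; urban (and peri-urban)
    10,000 to 1 million; metro > 1 million."""
    if population < 10_000:
        return "rural"
    elif population <= 1_000_000:
        return "urban"  # includes peri-urban
    else:
        return "metro"

print(classify_geography(5_000))      # rural
print(classify_geography(250_000))    # urban
print(classify_geography(5_000_000))  # metro
```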
We also asked organisations to describe the type of work they do and identify whether they were focused on a specific region, or multiple regions within India, or in multiple countries. About half of all organisations said their programming was locally-focused or community based. The focus area categories, which were generated using ChatGPT based on the open-text descriptions of all 930 responses in this survey worldwide, show that organisations tend to do similar work regardless of local or regional focus. Education was the most common focus overall, followed by community development for local and regional organisations. Youth organisations tended to be national.
This breakdown is likely more accurate and representative than other categorisation approaches. Had we used a predefined set of categories in this exercise, labels such as “Community Development” and “Youth and Family Services” would have been split across other categories. An organisation could appear in multiple categories based on its own open-text description of the work it does, so this better reflects organisations that do many kinds of work and are not defined by a single focus area. In that context, it is telling that the international organisations in our survey were most often categorised as “other” by ChatGPT. Descriptions of their work seldom overlap with any of the categories shown.
In Figure 1.5 we used the Indian state where people took the survey to categorise organisations into one of six geographic Indian regions, seen in Figure 1.13. The overall spread is similar to that shown in Figure 1.5, with a few exceptions: education organisations are far more prevalent in the Central region; community development and economic empowerment in the Eastern region; arts & culture and social services in the North-Eastern region; and community development and youth & family services in the Western region.
About the people (survey respondents)
Over half of our sample were senior leaders and co-founders. Just four percent of respondents were technical staff or Monitoring, Research, Evaluation, and Learning (MERL) staff.
The number of years each respondent had been at their respective organisation (i.e. their tenure) was higher than one would expect for the nonprofit sector. Only 8.8% of respondents had been at their current organisation two or fewer years, compared to the 40% seen in sector benchmarking surveys4. 57% had been at their current organisation for ten or more years.
Findings
The analysis was conducted using the following key parameters to reveal trends in readiness, adoption, and organisational differences in AI use across Indian nonprofits:
- Organisational capacity:
- Staff size (number of employees)
- Age of the organisation
- Geographical reach (local, regional, national, or international scope)
- Organisational characteristics:
- Region of operation (e.g., Western, Central, North-Eastern)
- Metro vs. urban vs. rural operational coverage
- FCRA approval (used as a proxy for higher capacity)
- Data Governance:
- Presence of a data-use policy
- Use of cloud storage for data
- Data-sharing agreements (whether the organisation has agreements that govern how they share data with other entities)
- AI Readiness:
- Data-use questions (number and variety of ways organisations collect data)
- Technical personnel (whether the organisation employs a technical person)
- AI-usage questions (how many and which AI tools the organisation uses)
- Presence of a MERL (Monitoring, Evaluation, Research, and Learning) specialist
- AI Adoption:
- Whether the organisation currently uses AI
- Number of forms of AI the organisation has tried or plans to try
- Future interest in expanding AI use
- Comfort with AI:
- Self-reported comfort level with using AI at work
- Risk-reward perception regarding AI (whether the organisation sees AI as more risky or rewarding)
The following sections discuss how these parameters interact and what they reveal about AI readiness in Indian organisations.
Findings from India
 | Percent | Sample (out of 251)
---|---|---
Used generative AI to write text or create images | 55 | 137
Generative AI, and any other form | 43 | 109
Any form of AI, excluding generative AI | 59 | 147
Never used AI | 30 | 76
Use all forms of AI | 5 | 13
Organisational capacity is weakly correlated with AI readiness
Funding organisations have typically sought to categorise organisations by whether they can implement small- or large-scale services and interventions, and those categories depend on a multitude of factors: staff size, age, geographic reach, and local or subject-matter expertise. We wanted to know whether these factors were good predictors of whether an organisation was incorporating AI into its work. And if not, what was? Funders would likely want to track good proxy indicators before implementing AI-reliant programs at organisations that weren’t prepared to manage them. In part, one of the aims of our research was to better define what “AI readiness” means by measuring who is using it, and how, layered over other organisational characteristics that would be associated with readily using AI.
We used all relevant survey answers to calculate two composite scores for organisational capacity and AI readiness.
- Capacity reflects an organisation’s overall ability to absorb funding and implement projects affecting a larger geographical scope (specifically, we factor in number of staff, age of the organisation, and regionality).
- Readiness scores add up all of the behaviours that would logically indicate an organisation’s ability to adopt and implement AI solutions: how and what data an organisation collects, technical staff, data policies, and current utilisation of AI (essentially every technical question on our survey that is not an opinion about AI).
These scores are positively skewed, meaning more organisations have a lower capacity, similar to what we found in our global sample.
Organisations with higher capacity are more likely to see bigger gains in AI readiness, but this is mostly true for those with already high capacity. On average, for every one-point increase in an organisation’s capacity, their AI readiness score goes up by only 0.23 points—showing that capacity is not a strong predictor of AI readiness overall5. However, when looking at organisations with high capacity scores (above 13), the relationship becomes stronger, with a one-point rise in capacity linked to a 0.51-point increase in AI readiness. This suggests that organisations with substantial capacity benefit more from improvements in AI readiness compared to those starting with lower capacity, where the impact is smaller and making gains is more of an uphill battle.
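The slopes reported above come from linear fits of readiness on capacity, once for the full sample and once for the high-capacity subset. The sketch below illustrates the method on synthetic data; the generated data and its underlying 0.2/0.5 slopes are invented for illustration, and only the capacity > 13 cutoff comes from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: capacity on a 0-25 scale, with readiness
# coupled more tightly to capacity above the 13-point cutoff.
capacity = rng.uniform(0, 25, 500)
noise = rng.normal(0, 2, 500)
readiness = 0.2 * capacity + 0.3 * np.clip(capacity - 13, 0, None) + noise

# Slope of a simple least-squares fit: readiness ~ capacity
slope_all = np.polyfit(capacity, readiness, 1)[0]

# Refit on the high-capacity subset only
high = capacity > 13
slope_high = np.polyfit(capacity[high], readiness[high], 1)[0]

print(f"slope (all organisations): {slope_all:.2f}")
print(f"slope (capacity > 13):     {slope_high:.2f}")
```

As in the report's findings, the fitted slope on the high-capacity subset comes out noticeably steeper than the slope over the whole sample.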
The Foreign Contribution Regulation Act (FCRA) is a law in India that regulates the acceptance and use of foreign donations by nonprofits and other entities. Organisations must secure FCRA approval to receive foreign funding for programs with a social intent, which requires them to meet strict administrative and compliance standards. Notably, about half of the organisations in our sample had FCRA registration. There was no clear relationship between FCRA approval and an organisation’s capacity or AI readiness, but organisations with FCRA approval did stand out in other ways—they were collecting data in more varied forms and were slightly more likely to be using some form of AI.
Plotted below is the relationship between organisations’ overall capacity and their technical capacity (to use emergent technologies, such as AI)6. The overall correlation between the two measures was 0.30, a weak positive correlation. Only considering larger organisations (greater than 13 on our 25-point scale) raised this correlation to 0.44. However, the more we look into sub-groups (below), the clearer it gets that organisational capacity is not a good predictor of AI readiness. Instead, our cluster model provides a more concise narrative of what matters to two organisation profiles, explained in the next section.
Use of data and other technologies separates early from late AI adopters
We repeated a cluster analysis similar to the global report using data from India. This involved applying a variety of models7 to cluster responses into organisational profiles that might shed light on patterns in AI readiness. Our model included multiple choice (pick any) questions around what people have done with AI and what they would like to do, alongside our other organisational capacity and readiness data. We did not include questions that probed attitudes about AI (such as comfort with or willingness to use it) in the cluster analysis. Overall, the models consistently split our sample into organisations that had been using AI and wanted to use it more, and organisations that had not yet explored AI. We found evidence for two segments, which are best described as early AI adopters and late AI adopters.
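A two-segment split of this kind can be sketched as follows. This is not the authors' model7 (the report applied several model families); it is a minimal k-means illustration on synthetic binary features, assuming the kind of yes/no usage and capacity indicators the survey collected:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary feature matrix: rows are organisations, columns are
# hypothetical indicator features (e.g. "uses generative AI", "has a
# data-use policy"). Two latent groups with different base rates stand
# in for early and late adopters.
n_early, n_late, n_feat = 100, 150, 8
early = (rng.random((n_early, n_feat)) < 0.8).astype(float)
late = (rng.random((n_late, n_feat)) < 0.1).astype(float)
X = np.vstack([early, late])

def kmeans(X, k=2, iters=100):
    """Plain k-means with deterministic farthest-point initialisation."""
    centroids = [X[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[dists.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign each organisation to its nearest centroid...
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # ...then move each centroid to the mean of its cluster.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

labels, centroids = kmeans(X)
# The cluster with the higher average feature rate plays the role of the
# "early adopters"; the lower one, the "late adopters".
print("cluster mean feature rates:", np.round(np.sort(centroids.mean(axis=1)), 2))
```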
The organisational capacity trends in India reveal significant differences between early and late adopters of AI, with capacity playing a crucial role in how these groups engage with technology. Early adopters of AI tend to be larger organisations with greater resources and infrastructure. These organisations often operate in urban areas, have more staff, and demonstrate higher levels of data collection and technical capacity, enabling them to experiment with and implement AI in more advanced ways. They are more likely to have formal data-use policies, cloud-based data storage, and technical personnel such as IT staff or MERL (Monitoring, Evaluation, Research, and Learning) experts, all of which contribute to their ability to adopt AI effectively.
In contrast, late adopters typically have lower organisational capacity, which is reflected in their size, geographic focus, and limited technical resources. These organisations are often smaller, with fewer staff and more constrained budgets. They tend to be located in rural or peri-urban areas, where access to digital infrastructure may be more limited. Late adopters are less likely to have data-use policies in place or use cloud storage, and they often do not employ dedicated technical personnel. As a result, their engagement with AI is generally more limited, with most using AI in simpler, more accessible ways—if at all. For example, generative AI, which requires less specialised knowledge, is often their first experience with AI technologies.
Early AI adopters have more advanced technology and data systems. Few of those in the late adopter group are currently using AI in any form other than generative AI. Both groups use data in similar ways, just to different degrees. Notably, the late adopter group tends to be made up of older organisations with slightly fewer staff; they are less likely to collect tabular data, have a data-use policy, or use cloud storage, but more likely to employ a Monitoring, Evaluation, Research, and Learning (MERL) person.
Early and late adopter groups also differed in how they currently used data (for non-AI use-cases) and how they intended to use AI in the future. Late AI adopters use data less in every manner we asked about (listed on the y axis in Figure 3.2), and only about 10 percent of this group wants to use AI in the future in any form beyond generative AI.
We also looked at how the size of an organisation and how urbanised the places where it typically works affected whether they were early or late AI adopters. The trends here were not pronounced, but late adopters tended to be smaller organisations serving rural populations, and early adopters tended to be working in large metropolitan areas (exceeding a million people, Figure 3.4).
Figure 3.2 demonstrates the ways that early AI adopters use data and AI more as a group. Roughly 60-80 percent of early adopters would like to use AI in each of the ways we asked about. Most early adopters have tried AI in three or more ways, while few of the late adopters have experimented with anything beyond generative AI.
Another key distinction between the early and late adopters is that the appetite for expanding AI use is two or three times higher among the early adopter group. About 40 percent of the late adopter group still doesn’t know exactly what they will use AI for yet. One finding we cannot explain is that 18 percent of the late AI adopters claim to be using AI in some other way, not mentioned in these categories. This is the only case where they exceed the adoption rate of the early AI adopters.
Only considering organisation size, we found that nearly all organisations in India reported collecting data, similar to what we saw worldwide. Far fewer organisations have a data-use policy or use cloud data storage in India, compared to elsewhere.
We also examined how early and late adopters differed across the regions of India, and found no meaningful difference in adoption based on geography. However, there was some variation in data use, AI use, and AI wants across regions. Note that we heard from too few organisations in the Northern and North-Eastern regions to make these results meaningful, but the remaining regions offer some meaningful patterns about adoption. Demand for AI appears to be highest among Western region organisations, but this region also has the lowest percentage of organisations with MERL and tech people employed.
When we examined what people said they wanted to do with AI compared to what they were already doing, we found demand in India matched that seen worldwide. Generative AI already served almost everyone who wanted this sort of tool, but all other AI use-cases lagged far behind demand. Interactive chatbots and automated transcription/translation services are showing the highest amount of uptake.
People’s feelings about AI are dramatically different between early and late AI adopters. Early adopters see AI as having more reward than risk, and are more comfortable using it at work. Late AI adopters feel the opposite. About half of the late adopters don’t know how to evaluate the risks or see risk and reward as being equal. About a quarter of both groups are neutral in their level of comfort using AI.
Indian organisations are hopeful about AI gains in efficiency
We explored the hopes and fears that organisations had towards AI by feeding their open-text responses into ChatGPT8, which categorised common themes and sentiments. The results are presented below. Efficiency and productivity was mentioned most, and comments were overwhelmingly associated with hopes (92 mentions) over fears. In contrast, fears dominated the remaining categories.
We then broke each of these categories into more granular subcategories and compiled all themes that were mentioned at least five times. For “efficiency, productivity and resource optimisation,” respondents’ hopes about AI focus on potential efficiency gains. Within the category of “data analysis, decision making and communication”, respondents were most hopeful about AI facilitating data analysis and decision making within the organisation. Concerns voiced were around security, privacy, lack of knowledge and training.
We identified only two subcategories with more than five mentions across the remaining two categories. Under “innovation, collaboration and creativity”, respondents expressed worries that AI may reduce human creativity and originality (3.7% of all mentions). In terms of “improving services and personalisation”, respondents mentioned fears about the ethical implications of leveraging AI in everyday operations and broader societal applications (3.7% of all mentions).
We also looked at respondents from the top four cause categories (‘education’, ‘community development’, ‘women’s empowerment’ and ‘health and wellness’, respectively) and summarised their attitudes towards AI. Respondents working in education expressed optimism about AI enhancing learning opportunities through personalised teaching and by automating administrative tasks. However, the same group also expressed concerns that students’ over-reliance on AI may reduce creativity and critical thinking. For those working in community development, respondents saw AI as an opportunity to optimise resource allocation and identify needs, while others expressed concerns about AI perpetuating existing regional inequities. Respondents working in women’s empowerment echoed the fear of AI worsening gender biases. These respondents also feared AI worsening the employment outcomes of women through job displacement and skill erosion (due to over-reliance on AI). Relating to health, AI was seen as a tool to improve diagnostics and treatment for marginalised groups, though some fear exists about the privacy of sensitive health data.
Indian organisations are comfortable using AI at work
Our Indian cohort was more comfortable using AI at work than seen elsewhere. 29 percent gave it a 10 out of 10. However, if we view AI as a product and apply the net promoter correction – which predicts the success of products and companies better than raw scores – AI is still slightly negative overall, with a -3 score on a -100 to +100 scale. This is higher than the -16 seen worldwide, but lower than the industry averages for Computer Software and IT services of 36 and 41, respectively.
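The net promoter correction mentioned above can be computed as follows. This sketch uses a hypothetical ratings distribution, not the survey's raw data; the standard NPS bands (promoters 9-10, detractors 0-6) are assumed:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 ratings: percent promoters (9-10) minus percent
    detractors (0-6), yielding a score on a -100 to +100 scale.
    Passives (7-8) count only in the denominator."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / n)

# Hypothetical distribution for illustration only (100 respondents):
sample = [10] * 29 + [9] * 10 + [8] * 15 + [7] * 12 + [6] * 10 + [5] * 24
print(net_promoter_score(sample))  # 39 promoters - 34 detractors -> 5
```

This correction explains how a cohort in which 29 percent give a 10 out of 10 can still land near zero overall: every rating of 6 or below pulls the score down.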
When asked how people felt about the risk-reward tradeoff of AI, more of the Indian cohort either said the risks and benefits were equal or didn’t know / couldn’t evaluate than we saw globally, but the result was very similar overall.
India in the global context
This analysis was conducted looking specifically at trends from India and how they compare to findings from the global sample.
- Indian organisations are more likely to have technology or data staff, but lag behind the global sample on adoption of associated practices. Unlike the trend from the global sample, organisations in India were twice as likely to employ a tech person (147 vs 87) and a Monitoring, Evaluation, Research, and Learning (MERL) person (77 vs 154). Both early and late adopter groups in India were equally likely to have employed a tech person. This pattern in India runs counter to what we observed globally, where hiring a MERL and/or tech person was associated with a higher likelihood of using AI. The difference between these two groups in terms of data use and staff makeup is smaller than that seen in the clusters from our global sample, about 10 percent.
- When we looked at the Indian subset compared to the global sample, we noticed that Indian organisations that had hired a tech person and/or MERL person did not necessarily score higher on AI readiness. In fact, they were less likely to be using AI. Organisations that collect data, have a tech person, or have a MERL person were less likely to have organisational data-sharing agreements in India, compared to elsewhere. This suggests that the presence of this kind of expertise on the team is not a major differentiator in India, compared to elsewhere.
- When we asked organisations how they were using AI, there was one option to say “we’re not currently using AI” and overall, Indian organisations were no different from elsewhere (35% say “not using AI” vs 34% elsewhere). However, within each of these more tech-savvy subgroups, far more organisations within India said they were not using AI:
- Indian organisations that aren’t using AI yet have a tech person nearly 3x more often, employ a MERL person 1.5x more, and have entered into data agreements 2x more. These organisations collect data at the same rate as elsewhere, but fewer of them have data use policies.
- Fewer Indian orgs claim to have a data policy compared to elsewhere. While having a data policy does not alter the percent who are using AI, those that do tend to enter into organisation data agreements with others more often (44% in India vs 34% elsewhere).
Organisations reporting that they are not using AI
 | Non-Indian orgs (%) | Indian orgs (%)
---|---|---
Employs a tech person | 15 | 42
Employs a MERL person | 24 | 35
Has entered organisational data sharing agreements with others | 17 | 37
Collects data | 85 | 85
Has an organisational data use policy | 70 | 51
Conclusion
Discussion
AI has been one of the most talked about topics around the world in 2024. Predictions run the gamut from AI marking the beginning of the end of all things (see also “the singularity”9) to merely being the next phase in automation, akin to the computerisation of business in the early 1990s.
In this section, we discuss potential implications and conclusions drawn from the survey data in the hopes that we can present balanced insights about where we are, where we are going, what harms to mitigate, and what boons to anticipate. For the nonprofit sector, many questions are emerging about the extent to which it should use AI technology in its work, as well as whether it has the interest and capacity to do so. This study is unique in that it provides more granular data about how attitudes towards AI intersect with presumed capacity. AI stands to disrupt many of the ways nonprofits work, as well as introduce new risks and the need for more training and guidelines.
On AI use and adoption in India
India is at a nascent stage of the AI adoption journey, and members of the social sector ecosystem are still in the process of exploring and understanding the various use cases for AI in the context of their work.
The capacity gap between early and late adopters in India also reflects a difference in future AI ambitions. Early adopters are not only more likely to be using AI in several ways already but also have a stronger appetite for expanding their use of AI across various organisational functions. On the other hand, many late adopters remain uncertain about how they will use AI in the future and are hesitant to invest in more advanced AI applications due to resource constraints and concerns about the risks involved.
When exploring ways to encourage AI uptake and use, it’s crucial to first explore AI’s relevance and potential use cases. The distinction between early and late adopters in terms of organisational capacity highlights the importance of providing targeted support for smaller, resource-constrained nonprofits. Enhancing their digital infrastructure, improving access to technical expertise, and developing capacity-building programs focused on AI adoption could help bridge the gap and create meaningful pathways for all types of organisations to fully leverage AI’s potential.
Indian NPO ecosystem has fewer concerns around data risks
In our survey comparing Indian responses to those from elsewhere, we see that the Indian nonprofit community focuses more on the benefits of AI, and has fewer concerns about the risks and dangers. This may be because India’s culture and legislative landscape around data protection regulation in general, including AI, are at a nascent stage10. Indian NPOs appear to think less about data protection/privacy issues, and are therefore more likely to share data without having data-use or sharing agreements in place.
Indian NPO hopes and fears around AI
Indian organisations cited job displacement, skill gaps in effectively using AI, and becoming overdependent on AI as their primary concerns. This could indicate that there is a way to go in making social sector practitioners feel more confident in their ability to adopt and effectively use AI as a tool. More concerted efforts to develop an ongoing discourse, share practical knowledge, and build skills amongst NPO practitioners could be a starting point to addressing this.
Voices we didn’t hear from
Anecdotally, we know from our partner networks that some fraction of those invited to take our survey opted out. This reinforces an important caveat when interpreting all the recent surveys about AI: we (and others) don’t hear from those who have no interest in the subject, or who feel unqualified to offer opinions. Estimating the size of this silent group, which represents an “unaddressable market,” remains a future goal, and we may provide more insights about it in a follow-up report. As a result, actual AI use (as a percentage of all nonprofits) is lower than what these and other surveys imply.
Prioritising safeguards
The world is changing rapidly. 73% of the public in India reports using AI11 in 2024, though, according to other sources, only 64% claim to have a good understanding of it. India ranked 23rd out of 32 countries polled by Ipsos for level of public understanding of AI in May 202412. In January 2024, the World Economic Forum rated active disinformation as the #1 global threat over the next two years, and generative AI (e.g. deepfakes) as the most significant technological change that will transform the nature of reality itself. The Forum highlights AI-generated misinformation as a specific threat to democracy in India13. It warns that we are entering a new disinformation age in which “synthetic content will manipulate individuals, damage economies, and fracture societies in numerous ways over the next two years.”
We know from our survey responses and others that the majority of nonprofits are already utilising AI, and that the vast majority of those using AI rely on technology products managed by others. These threats, compounded with growing widespread use, make this an urgent time to centre safeguards. Historically, the nonprofit sector has struggled to drive collaboration and best practices around technology adoption and data policies. However, the sector is uniquely positioned to centre humanity in technology. Collaborative spaces like the Generosity AI Working Group, and frameworks like the Fundraising.AI Framework for fundraisers, are great starting points for engaging with and adopting safeguards.
Our survey data highlights the nuances in current AI adoption and knowledge, and understanding these differences is integral to achieving equitable and beneficial AI adoption. Addressing the systemic challenges of knowledge and infrastructure gaps, which are not unique to AI technology, requires cross-sector collaboration among philanthropic, technology, nonprofit, and research actors. Collectively identifying strategic and tactical use cases, moving the conversation from “how do we use AI” to “what do we need,” will support an ecosystem that centres sector needs in design, development, and governance for equitable outcomes.
AI Readiness resources for Indian practitioners
Templates & frameworks
- The AI Governance Framework for Nonprofits was developed with insights from nearly two dozen nonprofit leaders to help organizations navigate AI adoption and management. The framework was sponsored by Microsoft and created by AI advisor, Afua Bruce.
- Fundraising.AI’s Framework toward Responsible and Beneficial AI for Fundraising: a framework for incorporating ethical standards into business and hiring practices, and building them into the overall design of AI ecosystems.
- NTEN’s Generative AI Use Policy Template is designed to provide organizations with a framework for ethical, responsible, and transparent AI governance.
- Use cases showing how the Urban Institute is piloting guidelines around using AI in its research and policy work.
- USAID Checklist for Artificial Intelligence (AI) Deployment is a tool for policymakers and technical teams preparing to deploy or already deploying AI systems.
- NamasteData’s AI Equity Report is a resource that offers practical insights to help you make AI adoption equitable, effective, and mission-aligned.
Acknowledgments
About us
The Generosity AI Working Group at the GivingTuesday Data Commons is a cross-sector collaboration among practitioners in tech, academia, and the social sector who are exploring ways that artificial intelligence (AI) can be used to advance missions and grow generosity at scale. This working group explores the role of knowledge-sharing platforms, learning networks, and communities of practice in accelerating AI adoption for social good. It identifies strategies to address issues related to resource constraints, governance, data sharing, and equity in collaborative AI projects. The Generosity AI Working Group is connecting workstreams and creating a collaborative space to inform research, product development, and best practices.
The GivingTuesday Data Commons is a global network that enables data collaboration across the social sector. The Data Commons convenes specialist working groups, conducts collaborative research into giving-related behaviors, reveals trends in generosity and donations, and shares findings among its global community. With more than 170 data partners and 1,800 collaborators, the Data Commons is the largest philanthropic data collaboration ever built.
Collaborators
The AI Readiness Survey is conducted by the GivingTuesday Data Commons in partnership with Fundraising.AI and with generous support from Microsoft. This report was made possible through a network of collaborative partners that supported survey design and dissemination, as well as report design and iteration. We thank Tim Lockie, Nathan Chappell, Meena Das, and the numerous organizations in the Generosity AI Working Group for their time, energy, and expertise in reviewing the survey and report. Additionally, we thank our dissemination partners: Good Tech, MERL Tech, Fundraising.AI, NamasteData, The Human Stack, Donor Participation Project, Quiller, Brooklyn Org, Apurva.ai, Atma, BHUMI, ConnectFor, A.T.E. Chandra Foundation, Voluntary Action Network India (VANI), Danamojo, and Sattva – India Partner Network (IPN).
GivingTuesday is grateful to the partners and collaborators who have lent their valuable time and expertise to the team in interpreting and defining key findings for discussion and future work.
Data access
If you are interested in utilizing the data for your own research, you can access the dataset here. This link includes a readme file on how to use the data, as well as the raw data set.
To view other datasets, check out our resource libraries and datasets that are searchable and accessible via GivingTuesday’s Generosity AI Working Group.
Contact us
For press or media inquiries, please contact Shareeza Bhola.
For research or partnership inquiries, please contact Kelsey Kramer.
Footnotes
- This online survey was hosted by Microsoft Forms and disseminated to networks of nonprofits and social sector practitioners in India via key Outreach Partners (see the Acknowledgements section of this report for a full list of names). The survey was live between March and June 2024. ↩︎
- Original map from Wikimedia Commons, by Editor8220 (own work), CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=137854525 ↩︎
- The regions were defined using the grouping of states in the Zonal Councils https://www.mha.gov.in/en/page/zonal-council ↩︎
- CultureAmp tracks the tenure of nonprofit staff and reported that in January 2024, 40% of employees they surveyed had been working at their current employer for at most two years: https://www.cultureamp.com/science/insights/nonprofit-200-500. ↩︎
- Statistical significance, very roughly, means that a relationship as strong as the one observed would be very unlikely to arise by chance alone. ↩︎
- If these two measures were highly correlated, one would expect to see the dots form a trend line that goes up and to the right. Instead, we see a weak correlation (Pearson’s r=0.42). ↩︎
- We experimented with several models. The results presented here are based on UMAP with HDBSCAN, but a K-means approach gave similar results. Both models largely split the sample into people who either felt AI had more benefits than risks, or who saw more risks than benefits, regardless of their actual use. The three-cluster model provided a much more nuanced and complex interpretation of these groups, so we present that instead. ↩︎
- We chose GPT-4o-mini as our large language model. Using the ChatGPT application programming interface (API), we set the temperature of the responses to 0.1. Temperature controls how ‘random’ and ‘creative’ the outputs will be. By setting a low temperature of 0.1, we constrain the large language model to select the most probable and ‘plausible’ responses. ↩︎
- Source: https://en.wikipedia.org/wiki/Technological_singularity ↩︎
- https://www.ipsos.com/en-in/indians-perceive-artificial-intelligence-more-boon-bane-yet-tread-prudence-ai-adoption-ipsos-ai ↩︎
- https://www.salesforce.com/news/stories/generative-ai-statistics/ ↩︎
- https://www.ipsos.com/en-in/indians-perceive-artificial-intelligence-more-boon-bane-yet-tread-prudence-ai-adoption-ipsos-ai ↩︎
- https://www.weforum.org/stories/2024/08/deepfakes-india-tackling-ai-generated-misinformation-elections/ ↩︎
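One of the methodology footnotes above describes setting the model temperature to 0.1 so that outputs stay predictable. As a minimal illustration of the general mechanism (not of any provider’s internals), temperature rescales a model’s raw token scores before they are converted into sampling probabilities. The toy Python sketch below, using hypothetical logit values, shows how a low temperature makes the top choice near-deterministic while a high temperature flattens the distribution:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.
    Lower temperature sharpens the distribution toward the top-scoring
    option; higher temperature flattens it toward uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate tokens.
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.1)   # near-deterministic
high = softmax_with_temperature(logits, 2.0)  # much flatter

print(low[0])   # the top token takes almost all the probability mass
print(high[0])  # probabilities are much closer to uniform
```

This is why a temperature of 0.1 was a reasonable choice for a classification task: the model almost always returns its single most probable label rather than sampling among alternatives.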