Table of contents
Executive summary
In March 2024, GivingTuesday’s Generosity AI Working Group launched the AI Readiness Survey to better understand the current capacity for and utilization of artificial intelligence (AI) and other emerging technologies within the social sector. This research is essential to ensuring nonprofit organizations are not left behind in the rapidly evolving landscape of AI development. We asked questions to gather insight into how comfortable organizations are with AI, how people working in the sector currently use AI and envision using it in the future, and what barriers may prevent nonprofits from adopting AI technologies.
A foundational piece of our research was to better define what “AI readiness” means by measuring who is using it, and how, and then looking at other organization characteristics that would be associated with readily using AI. Why? So that we can develop some shared understanding and language to support a more coordinated and collaborative approach to how AI tools are being built, disseminated, discussed, and safeguarded.
Much of what we’ve uncovered through this research may not surprise you. But we hope it will help anchor your decision making in relation to AI, whether you’re a nonprofit considering where you fall on the AI adoption scale, a technology platform looking to better understand current nonprofit needs, an educator looking to develop resources for nonprofits, or a funder considering how to best support the sector’s AI adoption. AI systems, tooling, and education must be designed to meet nonprofits where they are while also inspiring meaningful progress. Without this alignment, we risk perpetuating a cycle of AI development that fails to address the unique needs of nonprofits, potentially leading to inefficiencies rather than improvements. Our aim is for this research to not just help nonprofits utilize AI, but to use it well.
We hope that this report – Where are we now? AI Readiness and Adoption in the Nonprofit Sector in 2024 – supports your own learning and empowers you to find opportunities for collaboration with your networks.
Key takeaways:
- Organizational capacity is not a good predictor of AI readiness. Instead, the best predictor of AI readiness was whether an organization had hired its first technical or Monitoring, Evaluation, Research, and Learning (MERL) person, which tends to happen at around 15 staff. In short, we define organizational capacity as the ability to absorb funds and manage large projects. (Read about this definition in more detail in Organizational capacity and AI readiness.) We measured both organizational capacity and a variety of specific technical milestones more directly related to AI readiness, and the two were not closely related.
- Adoption: Using this survey, we clustered organizations into three groups based on their relationship to AI: AI Consumers, AI Skeptics, and Late AI Adopters. We believe these clusters will be helpful to anyone who wants to understand how the adoption journey necessarily differs for different types of organizations. Smaller organizations with 15 or fewer staff identified a “lack of knowledge and training about AI” as their number one barrier to adoption.
- Skepticism remains high: 68% of people have already tried AI in their work, yet skepticism about AI’s data protection and bias remains high, even among those using it. Levels of caution about AI differ around the world: the Global South appears less concerned with privacy and model bias than the Global North, and respondents with a more technical background tend to be warier of AI risks than organizational personnel in general.
- Tool opportunities: Looking at the gap between what people are doing and what they say they’d like to do, the biggest unmet AI need is a tool for automatically organizing data.
- Key AI Readiness Segments: Overall, our global sample of organizations can be divided along several useful thresholds or cutoff points:
- Half our sample has tried AI in two or more use-cases.
- Half our sample has concerns about AI, specifically about data breaches, model bias, and intellectual property theft.
- Half our sample either doesn’t know how to evaluate the risks of AI, or thinks the risks and benefits are roughly equal.
How to use this report
This report goes in-depth into the results of the AI readiness survey with details on methodology, key findings, and visualizations of the data sets. While we encourage you to read the entirety of this report, different readers may find certain sections more relevant for their work.
Our suggestions:
- If you are seeking immediate insights, you may find the Key takeaways and What people are saying about AI in the nonprofit sector sections most helpful. These sections summarize the primary results of our analysis and the findings from the report.
- If you are a nonprofit intermediary supporting tool building or educational resources for nonprofits in relation to AI adoption, sections on Organizational capacity and AI readiness, Organization personas and trends, and What people told us may be most interesting to you.
- If you are a researcher or analyst, scan the Table of contents to find the topics that align with your area of research. For more detailed breakdowns of data sources and further analysis that wasn’t included in the report, utilize the Research supplement and Footnotes. If you are interested in utilizing the data for your own research, learn more in the Data access section.
For nonprofits
While this report goes into detail regarding the current state of AI readiness and utilization, you won’t find resources on how to integrate or adopt AI into your work. However, if you’re looking for resources at any stage of AI utilization and readiness – whether you’re not using AI, just beginning to integrate it, experimenting, building, or learning – we have compiled a variety of resources that can be found at the end of this report. These include people and organizations to connect with, trainings, and off-the-shelf resources and guides.
- Explore more resources: Interested in diving deeper into resources, tools, and datasets for AI experimentation? Check out our resource libraries and datasets that are searchable and accessible via GivingTuesday’s Generosity AI Working Group.
- Contribute to the problem library: Have a hypothesis or question about AI use cases? Register and contribute to our growing problem library by sharing your questions and problems, informing responsible AI research, work, and experimentation that investigates or addresses these challenges.
- Share your resources: Have resources that you’d like to share with others in the sector? We’d love to feature them. Submit them here.
About the sample
We received 930 responses between March 7, 2024 and July 9, 2024. Most North American responses came in March 2024, and most responses from outside the U.S. came in May and June of 2024. Of the overall sample, 549 were from GivingTuesday’s network of nonprofits (mostly in the US), 251 were based in India, 86 were from partner networks focused on AI and technology, and 44 were from GivingTuesday’s global network outside North America.
63% of the sample operate in the Global North, 37% in the Global South1. Only 13% of responses came from outside Asia or North America.
Organization characteristics
Our sample mostly represented smaller, community-based organizations: 63% of organizations in our sample had 15 or fewer staff, 50% operate locally, and 16% operate in multiple countries. The worldwide breakdown of organizations by size is somewhat similar to what we observed in our sample.
We calculated a composite general organizational capacity score based on staff size, the age of participating organizations, and whether they operated locally, regionally, nationally, or in multiple countries. We used this approach to test, in the next section, whether organizational capacity correlates with or is distinct from an organization’s ability to adopt AI.
Organization age (0-150 years). For clarity, all organizations that were founded over 150 years ago (the long tail) are lumped into one bar on the right.
Our sample, by continent and cause area
Continent: Because of low sample size for some continents, we only present findings in this report for North America, Asia, and Global North vs Global South. Europe, Africa, South America and Australia were not sufficiently represented to share continent-specific findings.
Cause area: Worldwide, education, community development, and health organizations are the most prevalent in our responses. Based on the ways organizations describe their work in narrative form, Economic Empowerment and Women’s Empowerment organizations are far more prevalent in Asia (mostly India) than in North America. These cause areas were assigned to organizations using ChatGPT2 to categorize their open text descriptions in the survey. An organization could be assigned multiple relevant cause areas3.
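As footnote 2 explains, the categorization ultimately ran as a keyword-matching script (written by ChatGPT after it reviewed the descriptions), with organizations allowed to match multiple cause areas. A minimal sketch of that approach, with illustrative keywords rather than the actual ones used:

```python
# Sketch of keyword-based cause-area assignment (per footnote 2).
# The keyword lists below are illustrative assumptions, not the report's.
CAUSE_KEYWORDS = {
    "Education": ["school", "literacy", "teacher", "student"],
    "Health and Wellness": ["health", "clinic", "wellness", "medical"],
    "Women's Empowerment": ["women", "girls", "gender"],
    "Economic Empowerment": ["livelihood", "microfinance", "employment"],
}

def categorize(description: str) -> list[str]:
    """Return every cause area whose keywords appear in the description."""
    text = description.lower()
    return [cause for cause, words in CAUSE_KEYWORDS.items()
            if any(w in text for w in words)] or ["Other"]

print(categorize("We train women teachers in rural schools."))
# ['Education', "Women's Empowerment"]
```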
We also asked about how locally an organization works. The prevalence of organizations working in each cause area seems to be mostly independent of whether they operate locally or internationally.
About the people (survey respondents)
Roles of people taking the survey: Nearly half of our sample were senior leaders and co-founders. Very few respondents were technical staff or Monitoring, Evaluation, Research, and Learning (MERL) staff.
The number of years each respondent had been at their respective organization (i.e., their tenure) was higher than one would expect for the nonprofit sector. Only 22% of respondents had been at their current organization two or fewer years, compared to the 40% seen in sector benchmarking surveys4. 48% of those with at least three years of tenure were organization leaders or in governance.
Organizational capacity and AI readiness
Funding organizations have typically sought to categorize organizations by whether they can implement small- or large-scale services and interventions; those categories depend on a multitude of factors: staff size, age, geographic reach, and local or subject-matter expertise. We wanted to know whether these factors were good predictors of whether an organization was incorporating AI into its work, and if not, what was. Funders would likely want to track good proxy indicators before implementing AI-reliant programs at organizations that weren’t prepared to manage them. One aim of our research was to better define what “AI readiness” means by measuring who is using it, and how, layered over other organization characteristics associated with readily using AI. (Note: we included a detailed analysis of this in the appendix, and only share the most interesting insights below.)
We used all relevant survey answers to calculate two composite scores for organizational capacity and AI readiness.
- Capacity reflects an organization’s overall ability to absorb funding and implement projects affecting a larger geographical scope (specifically, we factor in number of staff, age of the organization, and regionality).
- Readiness scores add up all of the behaviors that would logically assess an organization’s ability to adopt and implement AI solutions: how and what data an organization collects, technical staff, data policies, and current utilization of AI (essentially every technical question on our survey that is not an opinion about AI). A minimal sketch of this scoring approach follows below.
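As a rough illustration, the sketch below computes both composite scores from hypothetical survey columns and checks their correlation; the field names and equal weighting are assumptions for illustration, not the report’s exact formula.

```python
# A rough illustration of the two composite scores; column names and the
# equal weighting are assumptions, not the report's exact formula.
import pandas as pd

df = pd.read_csv("ai_readiness_survey.csv")  # hypothetical file name

def min_max(s: pd.Series) -> pd.Series:
    """Rescale a numeric column to the 0-1 range."""
    return (s - s.min()) / (s.max() - s.min())

# Capacity: staff size, organization age, and geographic reach
# (assumed ordinal: 1=local ... 4=multiple countries), rescaled and summed.
capacity_cols = ["staff_size", "org_age", "geographic_reach"]
df["capacity_score"] = sum(min_max(df[c]) for c in capacity_cols)

# Readiness: binary technical behaviors (1 = yes), summed directly.
readiness_cols = [
    "collects_tabular_data", "uses_collection_devices", "uses_cloud_software",
    "retains_raw_data", "has_data_use_policy", "has_tech_or_merl_staff",
    "currently_uses_ai",
]
df["readiness_score"] = df[readiness_cols].sum(axis=1)

# The report observes a weak positive correlation (Pearson's r = 0.42).
print(df["capacity_score"].corr(df["readiness_score"]))
```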
We find organizational capacity to have a bimodal distribution (a distribution with two peaks): many lower-capacity organizations, some higher-capacity organizations, and a long tail of a few very-high-capacity organizations. However, the AI readiness scores we calculated based on the questions we asked appear to follow a normal “bell curve” distribution.
We interpret the patterns of scores to indicate that the nonprofit sector is in the middle of the mainstream adoption phase for AI. Had we observed a highly-right-skewed distribution in readiness scores, that would have meant most people had little to no exposure and that a few people (the early adopters) were experimenting with AI heavily. From our results, we observe a normal, non-skewed score distribution suggesting that most organizations have experimented with AI to about the same extent, regardless of their overall capacity.
Plotted below is the relationship between organizations’ growth and overall capacity, and their technical capacity (to use emergent technologies, such as AI)5. The overall correlation between the two measures was 0.42, a weak positive correlation. However, the more we look into sub-groups (below), the clearer it becomes that organizational capacity is not a good predictor of AI readiness. Some aspects of organizations are correlated with AI readiness, and we provide a research supplement with details on those. However, our cluster model provides a more concise narrative of what matters to three organization profiles, explained in the next section.
Organization personas and trends
Cluster analysis
We applied a variety of models6 to cluster responses into organizational profiles that might shed light on patterns in AI readiness. Our model included multiple choice (pick any) questions around what people have done with AI and what they would like to do, alongside our other organizational capacity and readiness data. We did not include questions that probed attitudes about AI (such as comfort with or willingness to use it) in the cluster analysis. Overall, the models consistently split our sample into organizations that had been using AI and wanted to use it more, and organizations that had not yet explored AI.
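For the technically inclined, a minimal sketch of this clustering pipeline (UMAP followed by HDBSCAN, per footnote 6) might look like the following; the column names, filter pattern, and hyperparameters are illustrative assumptions, not the exact specification we used.

```python
# Sketch of the clustering approach from footnote 6: UMAP for
# dimensionality reduction, then HDBSCAN. Feature names and
# hyperparameters are illustrative assumptions.
import pandas as pd
import umap      # pip install umap-learn
import hdbscan   # pip install hdbscan

df = pd.read_csv("ai_readiness_survey.csv")  # hypothetical file name

# One-hot "pick any" answers about current and desired AI use, plus
# capacity/readiness behaviors; attitude questions are excluded.
features = df.filter(regex="^(ai_use_|ai_want_|data_)").astype(float)

# Project the high-dimensional binary answers into 2D.
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(features)

# Density-based clustering; min_cluster_size guards against tiny clusters.
labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(embedding)
df["cluster"] = labels  # -1 marks noise points
print(df["cluster"].value_counts())
```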
Our cluster model divided organizations into three organization personas that combine how they currently use data, how they use AI, and how they plan to utilize more AI in the future:
- AI Consumers (56%, n=523) – AI Consumers currently collect and use data the most in their work, are using some AI now, and want to use AI more in the future;
- AI Skeptics (15%, n=142) – Of the three groups, AI Skeptics currently collect and use data the least in their work, and while they have tried some AI tools, are not eager to try it much more;
- Late AI Adopters (28%, n=265) – Late AI Adopters typically collect and use data in more and better ways than the AI Skeptics, but not quite at the level of the AI Consumers. They have barely tried any AI tools yet, but generally show a greater interest in using AI in more ways in the future than the AI Skeptics.
We asked about how people collect and use data currently, separate from any AI usage, and the differences here are telling about the types of organizations likely to begin or expand their use of AI in the near future. The clustering divided organizations into 142 low-, 265 medium-, and 523 high-data-use organizations. As one would expect, the 523 organizations that were already mostly collecting data in tabular form and through electronic devices were also those most likely to have used AI. This AI Consumers group was also the most eager to expand its use of AI: 68-76% of this group wanted to generate, interpret, organize, predict, or use a digital AI assistant in their future work.
However, the low-data and medium-data-use groups are more nuanced. They best align with AI Skeptics and Late AI Adopters, respectively. We saw drastic differences in how much AI these two groups had already explored, and different appetites for further adoption. 90% of the Late AI Adopters reported not using any AI yet, compared to 38% of the AI Skeptics group7. When we asked about their desire to explore seven AI use-cases8 in the future, 27% of the low-data-use organizations wanted to try each of them on average, compared to 35% of the medium-data-use organizations, and 63% of the high-data-use organizations.
Here, we label the AI Skeptics based on both having already tried AI and having the least appetite to expand their use of AI, compared to the other groups. Given that the AI Skeptics are also the group least likely to be collecting data in tabular form, using devices, using cloud software, or retaining any raw data, we conclude that the AI tools they’ve tried don’t address their needs. Overlaying the cluster analysis with the Diffusion of Innovation and the Gartner Hype Cycle, AI Skeptics may have passed the “peak of inflated expectations” and entered the “trough of disillusionment”9, whereas the Late AI Adopters have yet to traverse these stages.
It might seem counterintuitive that our low, medium, and high data-use organization clusters are currently at a medium, low, and high level of AI adoption, respectively. Note that interest in future exploration and adoption of AI within these groups does closely follow the current levels of using other data collection methods; if an organization has already started to digitize its collection and management processes (going beyond Excel spreadsheets), it is more likely to be interested in AI, but not necessarily already using it. Practically every organization we surveyed collects data about its work, but those least likely to have a tech person or MERL staff member are also least likely to have tested AI, despite claims that these tools are easy to use “off the shelf” and don’t require technical expertise.
We might also conclude that if an organization hasn’t already done much real data tracking, they aren’t going to find AI to be very useful either. Many of our AI Skeptics are also the organizations that track and rely on data the least: Only 29% use software, 50% manually collect tabular data, 42% collect using devices. Only 9% retain original rich data (like audio, video, transcripts, etc). The other groups were all higher.
If these sample percentages are broadly representative of the nonprofit world as a whole10, we’d estimate that just over half of nonprofits are AI Consumers in 2024, and that an additional quarter of all nonprofits might eventually adopt more AI in their work, but that the remaining fraction don’t find AI useful or relevant to their work in its current form. We also did not notice any major differences between the Global North and Global South in this respect (but see our later section on regional differences for more context on what differed). In addition, when we look at attitudes about AI apart from exploration in our sample, we might separate the low-adoption group into those who are true skeptics after testing, and those who are skeptical because the use-cases don’t fit their needs. Late AI Adopters might be hampered by institutional policies and friction in adopting, so we asked about these aspects in follow-up questions (see section Common standards and internal policies).
Looking at what organizations in each cluster primarily do (using ChatGPT to categorize their open-text descriptions11), we see that the current AI Consumers tend to overrepresent nonprofit cause areas that are more easily tracked and measured. The datafication of education and health services has been underway for decades, and multiple commercial products exist to support these areas that are more amenable to AI. We see that these cause areas are overrepresented in our high-data group, whereas the low-data group reflects a more diverse collection of causes, distributed more evenly across categories and concentrated in no one area.
Table 3.3: Cause area prevalence by cluster (% of organizations; an organization may be assigned multiple cause areas)

| Causes | All | Low-data / AI Skeptics | Middle | High-data / AI Consumers |
| --- | --- | --- | --- | --- |
| Education | 26 | 18 | 24 | 29 |
| Community Development | 20 | 19 | 18 | 21 |
| Other | 18 | 18 | 17 | 18 |
| Social Services | 16 | 13 | 20 | 15 |
| Health and Wellness | 16 | 9 | 16 | 19 |
| Youth and Family Services | 12 | 11 | 10 | 14 |
| Arts and Culture | 11 | 6 | 14 | 12 |
| Economic Empowerment | 9 | 6 | 8 | 10 |
| Human Rights and Justice | 9 | 6 | 9 | 10 |
| Women’s Empowerment | 7 | 4 | 8 | 7 |
Data sophistication
We asked about how an organization uses data in six survey questions. This, in part, determined how organizations were clustered. AI Consumers were the most likely group to have hired a tech person and/or a MERL person. When we drilled down into the characteristics of these groups, it became clear that once an organization reaches at least 15 paid staff, it is more likely to be part of the AI Consumers group. However, more established organizations, having been around for 30 or 50 years, were far more likely to be part of the Late AI Adopters group, despite typically having larger staff sizes. We found that organizations founded less than 10 years ago are split basically equally among our clusters, with roughly a third of newly established organizations falling into each group.
Plotting data practices by size of organization (staff size) gives a similar perspective. As expected, nearly all large and medium organizations collect data, but larger organizations are ahead of the rest in having data use policies and being cloud-based. The threshold for becoming data-savvy seems to be around 15 paid staff. Surprisingly, over half of large and medium organizations employ a tech person.
In our clustering model, the single biggest factor in whether an organization had adopted AI was having hired a MERL and/or tech person. We see a large spread across large and small organizations in the percent who have a MERL and/or tech person. Organizations that have both a tech and MERL person on staff (orange trends below) are far more likely to be sophisticated data-enabled organizations than average, and slightly ahead of even large organizations. These organizations also are the only group where a majority have been involved in adopting joint collaborative data agreements or guidelines.
Current AI usage
The dominant meaning of “using AI” in 2024 appears to be generative AI: tools like ChatGPT, Copilot, and DALL-E 3 that create content from text or image prompts. Interactive chatbots appear to be the only other AI tool with over 50% adoption within our AI Consumers sub-group. Only 1 in 6 organizations have tried interpreting data using AI or used an AI-powered task-managing assistant, and only about 1 in 7 have tried using AI for prediction.
Desired future AI usage
A majority of people in the AI Consumers and Late AI Adopters groups want to do more with AI, but less than half of the AI Skeptics show interest. The AI Skeptics are currently using AI to some extent, and unlike the Late AI Adopters, they have a clear idea about what it does and does not do. About half of the Skeptics would like to use AI in two or more ways in the future. Judging by other parts of the survey, this group probably has the strongest reservations about expanding use of AI and, having tested it, recognizes the “hype” as distinct from its actual utility. We also see a thread of concerns about AI overreliance in the open-text essays we received that might reflect this group.
Across all groups, priorities in the various ways to employ AI at work are similar. All groups prioritize the need for a tool to organize their data better, followed by AI virtual assistants, and continued use of generative AI. Interpreting data and prediction are also in high demand, despite the limitation that these tools will necessarily require better-organized data (and larger amounts of it) before they can be effective.
AI tools and market demand
The biggest “market gap” between current use and demand is for data organization tools, followed by predictive models, better interpretation of data, and virtual assistants. Generative AI, it seems, is currently used in-line with demand, and interactive chat bots appear to be less popular but also largely met by current technology. In other words, generative AI and chatbots have nearly saturated their target market, but other AI-uses have yet to make an entrance. AI-aided data organization, annotation, and cleaning tools will likely someday be the most prevalent AI use-case in the sector. Translation and transcription tools are the third most used category of AI tools, but fewer organizations are likely to need these tools, based on this data.
Overall, the nonprofit sector appears to be leaning into AI as a whole, despite reservations that surface when we ask questions about comfort, risk-reward, and policies/guidelines. Attitudes about the power of AI remain positive, but feelings about its use remain much more negative than seen with past technologies. Those interested in growing AI use would be well-served to be more intentional about mitigating the risk of data breaches, bias, and the perceived exploitation of prompt-inputs.
Our comparison of current and future use of AI suggests that the greatest unmet needs are apps that help in organizing and restructuring data, followed by virtual AI assistants, data interpretation/analysis, and predictive models (which require large amounts of this structured data as a key input). Generative AI and chatbot-use have already saturated the market; anyone who wants this has tried it.
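The “market gap” arithmetic here is simple enough to sketch: subtract the share currently using each use-case from the share who say they want it. The percentages below are illustrative placeholders, not the report’s published figures.

```python
# Sketch of the "market gap" calculation: desired use minus current use
# per AI use-case. All percentages are illustrative placeholders.
using = {"generative AI": 60, "chatbots": 40, "organize data": 15,
         "interpret data": 17, "predict": 14, "virtual assistant": 17}
want = {"generative AI": 62, "chatbots": 41, "organize data": 55,
        "interpret data": 45, "predict": 39, "virtual assistant": 50}

gaps = {k: want[k] - using[k] for k in using}
for use_case, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{use_case}: unmet demand = {gap} points")
```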
What people told us: attitudes about AI
Attitudes about AI
We asked people to rate how comfortable they were incorporating AI in their work. When looking at our unweighted scores on a 0-10 range, one can see that there is a generally high level of comfort in using AI. This is consistent with the measured sentiment in our essay question about AI hopes and fears.
However, this positive skew might be misleading when viewed in context. Applying the net-promoter12 correction to our question gives a net comfort score of -16 on a -100 to +100 scale.
This approach subtracts out the “courtesy bias”13 that is often present when respondents feel social pressure to say “yes”, and is widely used to measure customer and product satisfaction14. If AI were considered a product, a typical (successful) company or product would achieve a minimum score of +30 with its customers and users, but AI is, in fact, down in the failing-product range. So, on balance, we conclude the level of excitement around AI is not nearly as high as the raw scores might imply: -16 is significantly worse than the industry-benchmark NPS medians for Computer Software and IT Services, at 36 and 41, respectively15. A new product launch with this level of comfort from its users would be considered to be failing. A minimal sketch of the net-scoring arithmetic follows.
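The sketch below applies the formula from footnote 12 to the 0-10 comfort ratings; the ratings shown are fabricated for illustration.

```python
# Net scoring per footnote 12: percent of promoters (9-10) minus percent
# of detractors (0-6), ignoring passives (7-8).
from typing import Sequence

def net_comfort_score(ratings: Sequence[int]) -> float:
    """Return a net score on the -100 to +100 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Fabricated ratings: many "courteous" 7s and 8s raise the raw mean
# but contribute nothing to the net score.
ratings = [8, 7, 9, 6, 7, 8, 5, 10, 7, 4]
print(net_comfort_score(ratings))  # -10.0: net negative despite a 7.1 mean
```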
Using AI in one’s work might lead to new insights and reduced workloads, but it also carries many kinds of risks. When asked where the balance of risk and reward falls with AI, about a third didn’t feel well-enough informed to weigh in, and the rest were divided but somewhat skewed towards seeing more reward than risk. 29% thought AI would be net good, compared to 19% who saw AI as more risk than reward. 49% saw both as equal, or didn’t know. When we itemized these risks and asked people about each one, it was clear that the more tech-savvy respondents tended to see AI as more risky than people working at nonprofits in general. We conclude from this that while many people have started testing AI tools, half of the population really don’t feel well-enough informed to have made up their minds yet.
Among the common concerns people had about AI, AI-related data breaches were the top concern, by a slight margin. In our cohort of tech-oriented nonprofits, we saw a substantial increase in the level of concern around most aspects of AI, especially bias. Replacing workers with AI was the only area that was not a greater concern among tech-oriented nonprofits.
We also saw a trend in the opposite direction in our India cohort, where fewer people were alarmed about these issues. Concern for AI risks was generally a little lower in the Global South, and a little higher in the Global North.
In a recent survey16, Charities Aid Foundation (CAF, based in the UK) asked the general public in ten countries what they thought charitable organizations should be doing with AI (n=6,102 people online, early 2024). When asked to weigh the risks versus benefits of AI, 37% thought benefits outweighed risks, compared to 22% that felt risks outweighed benefits, and 34% that said they were roughly equal. A much smaller fraction of the public didn’t know (7%), compared to the 34% in our sample of nonprofit staff. They reported a net positive outlook of +15% (the percent who saw more benefits less the percent who saw more risks), compared to our +10% net positive outlook. We share that result here because the public perception appears to differ from that of nonprofit professionals.
Common standards and internal policies
Data policies are essential for protecting privacy, ensuring security, maintaining compliance, managing risks, and building trust. Given the many potential risks AI adoption poses to data privacy and equity, including the introduction of systemic bias, we asked respondents whether their organizations currently have any data policies in place, and how feasible they thought it would be to collaborate with other organizations. While the majority (70%) of organizations did have a data-use policy in place, only 28% had been involved in adopting some kind of collaborative data agreement or guidelines in the past. In line with this, only 17% of respondents felt enthusiastic about the prospect of collaborating with other organizations on AI, and the net feasibility of this idea was -36 on a -100 to +100 scale.
It was unclear from this question whether current data-use policies were written with AI in mind, but our open text essay question made it very clear this is a major hurdle to adoption:
While only 3% of essays mentioned these kinds of policy issues directly, it is clear that a lack of a general “safe and ethical use policy” for AI is an underlying factor in people’s top concerns.
In a later section, we measured how much of a “priming effect” giving people a list of concerns (versus asking for an open ended response) has on the issues that they raise; we found that “AI related data breaches”, “Overreliance on AI” and “AI replacing workers” were their top concerns when unprompted. Clear guidelines could mitigate the harm of each of these concerns.
This is a common need, but also the kind of challenge organizations have rarely collaborated on; standardizing safeguarding approaches together has been rare. Achieving sector-wide standards is going to take a different approach. In the absence of any organizing principles or external forces, every organization will likely eventually update its internal guidelines and policies, but years of AI use and exploration are likely to pass before that use is regulated and self-governed.
Other surveys of nonprofit organizations have similarly found that AI policies are practically non-existent in 2024. Charity Excellence (charityexcellence.co.uk)17 found that 60% of their sample lacked any policies and procedures concerning AI, and only 5% had a clear, satisfactory policy. A separate survey of nonprofit communications professionals found that only 4% had an AI policy, and only 14% were actively working on one18.
Hopes and fears
At the end of the survey we gave people the chance to talk about anything on their minds related to AI. Nearly every respondent chose to write something about their hopes and fears about the promise or peril of AI in the context of their work. We then used ChatGPT to summarize the themes discussed, and the frequency that people mentioned each. We found that similar topics were discussed around the world at similar rates, but that a greater proportion of respondents in North America tended to talk about the efficiency gains from AI. Smaller organizations (with at most 15 staff) identified the “lack of knowledge and training about AI” as their number one concern/barrier to adoption.
Selected quotes:
Efficiency & Productivity
Ethics and Privacy
Lack of Knowledge
Job Displacement
Sentiment19 was slightly positive overall, with an average of +12 on a -100 to +100 scale. 65% of essays fell within the -25 to +25 range, meaning the overwhelming majority were rather neutral or balanced in their comments. People varied widely in how objective they were: based on a language-subjectivity algorithm, 15% of comments were highly objective, and 30% were quite opinionated.
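Footnote 19 notes the essays were scored with Python’s TextBlob library. A minimal sketch of that scoring, where the rescaling of polarity to the -100 to +100 scale and the example essay are our assumptions for illustration:

```python
# Essay scoring with TextBlob (per footnote 19); the polarity rescaling
# to -100..+100 is our assumption about the report's method.
from textblob import TextBlob

def score_essay(text: str) -> tuple[float, float]:
    """Return (sentiment on -100..+100, subjectivity on 0..1)."""
    blob = TextBlob(text)
    return blob.sentiment.polarity * 100, blob.sentiment.subjectivity

essay = "AI could save us hours of reporting, but I worry about data privacy."
sentiment, subjectivity = score_essay(essay)
print(f"sentiment={sentiment:+.0f}, subjectivity={subjectivity:.2f}")
```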
The priming effect
People are much more likely to raise a concern when prompted. To examine this effect, we compared the percentage of comments that mentioned specific AI concerns with the percentage that identified that same concern when provided with a list of options to choose from. For ease of interpretation, we added a third trend series (to the right), based on the ratio between prompted and unprompted concerns (ignore the units on the x-axis for this series; it is unitless). Based on this, people were most concerned about data breaches and AI replacing workers, and were specifically more likely to mention these issues in their essays than when choosing from a list. Conversely, the environmental impact of AI and equity concerns (e.g. that better-resourced organizations are more likely to adopt and benefit from AI) seem to be less top-of-mind in essays.
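A small sketch of this prompted-versus-unprompted comparison follows; all percentages and the cutoff are illustrative placeholders. A low ratio suggests a concern is top-of-mind (raised unprompted nearly as often as when prompted), while a high ratio suggests it mostly surfaces when suggested.

```python
# Ratio of prompted to unprompted mention rates per concern.
# All percentages and the cutoff of 5 are illustrative placeholders.
prompted = {"data breaches": 38, "replacing workers": 30, "model bias": 39,
            "environmental impact": 20, "equity": 22}  # % picking from a list
unprompted = {"data breaches": 12, "replacing workers": 10, "model bias": 6,
              "environmental impact": 2, "equity": 3}  # % raising it in essays

for concern in prompted:
    ratio = prompted[concern] / unprompted[concern]
    flag = "top-of-mind" if ratio < 5 else "prompt-dependent"
    print(f"{concern}: ratio={ratio:.1f} ({flag})")
```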
What people are saying about AI in the nonprofit sector, globally
Discussion and conclusions
AI has been one of the most talked about topics around the world in 2024. Predictions run the gamut from AI marking the beginning of the end of all things (see also “the singularity”20) to merely being the next phase in automation, akin to the computerization of business in the early 1990s.
Instead of prognosticating, we wanted to present a shared perspective from all available surveys, and add our own somewhat global view through another survey, in the hopes that we can present balanced insights about where we are, where we are going, what harms to mitigate, and what boons to anticipate. For the nonprofit sector, many questions are emerging about the extent to which it should be using AI technology in its work, as well as whether it has the interest and capacity to do so. This study is unique in that it provides more granular data about how attitudes towards AI intersect with presumed capacity. AI stands to disrupt many of the ways nonprofits work, as well as introduce new risks and the need for more training and guidelines.
- Use versus attitudes: Overall, about 68% of people have already tried AI in their work, and 28% of them are using it in three or more ways. Generative AI is by far the most common first-use-case among those we asked. And yet skepticism about AI’s data-protection and bias remains high, even among those using it. 39% of those currently using AI worry about bias and 38% worry about data breaches. When viewed as a product, our respondents’ comfort with AI is much more negative, compared to the benchmarks for other technology products. But when asked whether the benefits of AI outweigh the risks, a larger percentage of respondents believed AI offers more upside, aligning with findings from another recent survey21. And yet, half of those surveyed were either uncertain or believed the rewards and risks were evenly balanced, meaning there is a significant amount of uncertainty and lack of understanding about what AI is and what it means for the world.
- Caution versus comfort: Moreover, levels of caution about AI vary widely. Respondents in India are less concerned with privacy and model bias than those in the US. Tech and MERL people worry about more kinds of risks than organization staff in general. While the smallest group, the AI Skeptics epitomize these contradictions: unlike the Late AI Adopters, they’ve tried AI but are the least interested in expanding its use. Among possible uses, they most want AI to better organize their data, and yet they are the group least likely to take advantage of existing data tools to automate or accelerate their work. The largest cluster, the current AI Consumers, are also surprising in that more of them are using generative AI than are collecting tabular data or using collection devices. They’re equally likely to be using software to manage data as to be using a chatbot (about half the group in each case). They are somewhat more comfortable with AI (+10) than the other groups, whose comfort scores were -57 and -28 on a -100 to +100 scale for the Late AI Adopters and AI Skeptics, respectively. And yet nearly everybody seems poised to try AI, despite these reservations and discomfort. If one wanted to quantify the level of hype associated with a thing, these contradictory measures are as good a method as any.
- Voices we didn’t hear from: One important caveat to interpreting all the recent surveys about AI is that we (and others) don’t hear from those who have no interest in the subject, or who feel unqualified to offer opinions. Estimating the size of this silent group remains a future goal. Anecdotally, we know some fraction of those invited to take our survey opted out, because our partner networks told us they were doing so. We may provide more insights about them in a follow-up report, as they represent an “unaddressable market.” As a result, actual AI use (as a percentage of all nonprofits) is lower than what surveys imply.
- Distinction between personas: One might ask whether the AI adoption groups derived from clustering responses to this survey are real. Compared to, for example, our GivingPulse data – a weekly panel of US citizens that engage in different levels of generosity – we observe greater separation within these AI-Readiness/adoption clusters; these groups are more distinct. However, because AI adoption is likely more fluid and rapidly changing than generosity patterns, we would expect the percentage of organizations in each stage of adoption (AI awareness, testing, and adoption) to change each year.
- Prioritizing safeguards: The world is changing rapidly. In January of 2024, the World Economic Forum rated active disinformation as the #1 global threat over the next two years22 and generative AI (e.g. deepfakes) as the most significant technological change that will transform the nature of reality itself. They worry we are entering a new disinformation age where “synthetic content will manipulate individuals, damage economies, and fracture societies in numerous ways over the next two years.”23 We know from our survey responses and others24 that the majority of nonprofits are already utilizing AI and that the vast majority of those using AI are using technology products managed by others. These threats, compounded with increasingly widespread use, make this an urgent time to center safeguards. Historically, the nonprofit sector has struggled to drive collaboration and best practices around technology adoption and data policies. However, the nonprofit sector is uniquely positioned to center humanity in technology. Collaborative spaces like the Generosity AI Working Group, and frameworks like the Fundraising.AI Framework for fundraisers, are great starting points for engaging with and adopting safeguards.
Our survey data highlights the nuances in current AI adoption and knowledge; understanding these differences is integral to achieving equitable and beneficial AI adoption. Addressing the systemic challenges of knowledge and infrastructure gaps, which are not unique to AI technology, requires cross-sector collaboration among philanthropy, technology, nonprofit, and research actors in the sector. Collectively identifying strategic and tactical use cases to move us from “how do we use it” to “what do we need” will support an ecosystem that centers sector needs in the design, development, and governance of AI for equitable outcomes.
AI Readiness resources for nonprofits
At the Generosity AI Working Group, we aim to foster meaningful and resourced collaboration across the social sector. One way we are doing this is by consolidating and cataloging resources on AI tooling, capacity building, guidelines, and standards in our knowledge bank, datasets, tools, and problem statement libraries. You can access those here. We’re highlighting a few resources from our network for nonprofits at any stage of AI utilization and readiness – whether you’re not using AI, just beginning to integrate it, experimenting, building, or learning.
Templates & frameworks
- The AI Governance Framework for Nonprofits was developed with insights from nearly two dozen nonprofit leaders to help organizations navigate AI adoption and management. The framework was sponsored by Microsoft and created by AI advisor, Afua Bruce.
- Fundraising.AI’s Framework toward Responsible and Beneficial AI for Fundraising is a framework for incorporating ethical standards and business and hiring practices into the overall design of AI ecosystems.
- NTEN’s Generative AI Use Policy Template is designed to provide organizations with a framework for ethical, responsible, and transparent AI governance.
- Use cases from how Urban Institute is piloting guidelines around using AI in their research and policy work.
- USAID Checklist for Artificial Intelligence (AI) Deployment is a tool for policymakers and technical teams preparing to deploy or already deploying AI systems.
- NamasteData’s AI Equity Report is a resource that offers practical insights to help you make AI adoption equitable, effective, and mission-aligned. Download the report here.
Trainings & learning
- The Human Stack’s AI For Anyone! is a fast, affordable, entry-level, hands-on AI training for anyone. AI Readiness Report readers get 15% off with coupon code gtaireport.
- Dev Design is a digital literacy studio that helps nonprofits advance their AI adoption and digital transformation journey. Their workshops, resources, and 1:1 training cultivate data and AI literacy, building practical skills that people across all nonprofit job types can leverage to accelerate productivity, creativity, and impact.
- Stanford Institute for Human-Centered AI (HAI) and the Hasso Plattner Institute for Design (d.school) join forces to offer a highly engaging social sector educational experience that blends cutting-edge research and application with human-centered AI and social systems design frameworks. Apply to the two-day Designing Your Human-Centered AI Strategy: The Social Sector Cohort hosted at Stanford in October 2024.
- DataKind Learning Circles enable professionals at social impact organizations to engage in peer-to-peer knowledge sharing with a cohort, alongside data strategy support from a trusted expert that enables organizational data-maturity growth.
Acknowledgments
About us
The Generosity AI Working Group at the GivingTuesday Data Commons is a cross-sector collaboration between practitioners in tech, academics, and the social sector who are exploring ways that artificial intelligence (AI) can be used to advance missions and grow generosity at scale. This working group explores the role of knowledge-sharing platforms, learning networks, and communities of practice in accelerating AI adoption for social good. It identifies strategies to address issues related to resource constraints, governance, data sharing, and equity in collaborative AI projects. The Generosity AI Working Group is connecting workstreams and creating a collaborative space to inform research, product development, and best practices.
The GivingTuesday Data Commons is a global network that enables data collaboration across the social sector. The Data Commons convenes specialist working groups, conducts collaborative research into giving-related behaviors, reveals trends in generosity and donations, and shares findings among its global community. With more than 170 data partners and 1,800 collaborators, the Data Commons is the largest philanthropic data collaboration ever built.
Collaborators
The AI Readiness Survey is conducted by the GivingTuesday Data Commons in partnership with Fundraising.AI and with generous support from Microsoft. This report was made possible through a network of collaborative partners that supported the survey design and dissemination and report design and iteration. We thank Tim Lockie, Nathan Chappell, Meena Das, and the numerous organizations in the Generosity AI Working Group for their time, energy, and expertise in reviewing the survey and report. Additionally, we thank our dissemination partners: Good Tech, MERL Tech, Fundraising.AI, NamasteData, The Human Stack, Donor Participation Project, Quiller, Brooklyn Org, Apurva.ai, Atma, BHUMI, ConnectFor, A.T.E. Chandra Foundation, Voluntary Action Network India (VANI), Danamojo, and Sattva – India Partner Network (IPN).
Data access
If you are interested in utilizing the data for your own research, you can access the dataset here. This link includes a readme file on how to use the data, as well as the raw data set.
To view other datasets, check out our resource libraries and datasets that are searchable and accessible via GivingTuesday’s Generosity AI Working Group.
Contact us
For press or media inquiries, please contact Shareeza Bhola.
For research or partnership inquiries, please contact Kelsey Kramer.
Research supplement
Supported by
Footnotes
- For the purposes of this survey we defined the Global North as North America, Europe, and Australia, and the Global South as South America, Africa, and Asia. Note that we did not receive any responses from countries that would be ambiguous, such as Turkey, parts of the Middle East, or Russia. ↩︎
- We tried several ChatGPT prompts asking for a list of topics from the texts that captured the most frequent categories. ChatGPT’s best result simply wrote a python script that assigned topics to organizations based on keywords in their descriptions, and allowed organizations to fit in multiple categories. The keyword approach was effective because it was based on ChatGPT “seeing” all the descriptions first. ↩︎
- We decided not to use a categorical taxonomy in our survey because there is no one “best” version that works worldwide, and researchers can take our raw data and recategorize this sample of organizations to fit any taxonomy, if they wish. In fact, ChatGPT can be prompted to do this about as accurately as a human. ↩︎
- CultureAmp tracks the tenure of nonprofit staff and reported that in January 2024, 40% of employees they surveyed had been working at their current employer for at most two years: https://www.cultureamp.com/science/insights/nonprofit-200-500. ↩︎
- If these two measures were highly correlated, one would expect to see the dots form a trend line that goes up and to the right. Instead, we see a weak correlation (Pearson’s r=0.42). ↩︎
- We experimented with several models. The results presented here were based on UMAP with HDBSCAN, but the K-means approach gave similar results. Both models largely split the sample into people who either felt AI had more benefits than risks, or that saw more risks than benefits, regardless of their actual use. The three-cluster model provided a much more nuanced and complex interpretation of these groups, so we presented that. ↩︎
- 19% (+/- 9%) of the low-data-group currently was using each form of AI, compared to 34% (+/-22%) of the medium-data-group. ↩︎
- For analysis of AI use cases here, we excluded answers “Don’t Know” and “Other” from the calculations and only included the 7 explicit uses. ↩︎
- See the Gartner Hype Cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle ↩︎
- Our survey is not geographically representative of organizations worldwide, so our conclusions may not apply to continents where we did not gather a sufficient sample. However, it is representative of small, medium, and large organizations, and reflects a similar thematic breakdown to the global population of organizations. To the extent that data is available in similar surveys, we have no reason to believe that AI adoption is wildly different in places we didn’t sample; further, we provide a detailed summary of these parallel findings in Section 6: Discussion. ↩︎
- Prompt: Turn this list of text into a BRIEF list of nonprofit categories, and count the number of organizations in each category, where newline is a new organization. Name the 6 most common categories then combine the smaller categories into an “other” list. ↩︎
- In brief, you take the percent who gave 9 or 10 and subtract the percent who gave scores below 7, ignoring 7s and 8s. Here’s an interactive calculator: https://delighted.com/nps-calculator ↩︎
- Read more about courtesy bias from Listen for Good: a multi-funder effort to improve feedback loops in nonprofits: https://listen4good.org/feedback101/courtesy-bias/ ↩︎
- We feel justified that there is a courtesy bias present because some of the partner networks that helped to distribute the AI survey told us that some of their members refused to fill out the survey precisely because they felt uninformed or unwilling to engage on the topic. That means those who did fill out are skewed (predisposed) to be more informed and more positive in their views. And it is likely that they felt some social pressure to rate their comfort a little higher as part of self-selecting into the camp that wants to explore AI further. ↩︎
- Source: https://customergauge.com/benchmarks/blog/technology-industry-nps-benchmarks ↩︎
- Source: https://www.cafonline.org/about-us/research/what-the-public-think-of-charities-using-ai ↩︎
- Source: https://www.charityexcellence.co.uk/charity-ai-benchmarking-survey/ ↩︎
- Source: https://www.charitycomms.org.uk/salary-and-organisational-survey-2023 ↩︎
- Using python TextBlob library ↩︎
- Source: https://en.wikipedia.org/wiki/Technological_singularity ↩︎
- Source: https://www.cafonline.org/insights/research/what-the-public-think-of-charities-using-ai We adopted our Risk-Reward question from CAF’s survey of the public about nonprofit use of AI, and they found that 37% of the public saw AI as having more reward, compared to 22% who saw AI as too risky. ↩︎
- Source: https://www.weforum.org/agenda/2024/01/ai-disinformation-global-risks (“Disinformation Tops Global Risks 2024 as Environmental Threats Intensify”), naming disinformation the #1 global threat in the next two years. See also the January 2024 press release: https://www.weforum.org/press/2024/01/global-risks-report-2024-press-release/ (“Two-thirds of global experts anticipate a multipolar or fragmented [world] order to take shape over the next decade.”) ↩︎
- Source: https://www.weforum.org/publications/global-risks-report-2024/in-full/global-risks-2024-at-a-turning-point/ ↩︎
- Source: https://www.charityexcellence.co.uk/charity-ai-benchmarking-survey/ (57% are using ChatGPT) ↩︎