Where Are We Now?

AI Readiness and Adoption in the Nonprofit Sector in 2024

Results from GivingTuesday’s AI Readiness Survey



Executive summary

In March 2024, GivingTuesday’s Generosity AI Working Group launched the AI Readiness Survey to better understand the current capacity for and utilization of artificial intelligence (AI) and other emerging technologies within the social sector. This research is essential to ensuring nonprofit organizations are not left behind in the rapidly evolving landscape of AI development. We asked questions to gather insight into how comfortable organizations are with AI, how people working in the sector currently use AI and envision utilizing it in the future, and what barriers may prevent nonprofits from adopting AI technologies.

A foundational piece of our research was to better define what “AI readiness” means by measuring who is using AI, and how, and then looking at other organizational characteristics associated with readily using it. Why? So that we can develop a shared understanding and language to support a more coordinated and collaborative approach to how AI tools are built, disseminated, discussed, and safeguarded.

Much of what we’ve uncovered through this research may not surprise you. But we hope it will help anchor your decision making in relation to AI, whether you’re a nonprofit considering where you fall on the AI adoption scale, a technology platform looking to better understand current nonprofit needs, an educator looking to develop resources for nonprofits, or a funder considering how best to support the sector’s AI adoption. AI systems, tooling, and education must be designed to meet nonprofits where they are while also inspiring meaningful progress. Without this alignment, we risk perpetuating a cycle of AI development that fails to address the unique needs of nonprofits, potentially leading to inefficiencies rather than improvements. Our aim is for this research to help nonprofits not just utilize AI, but use it well.

We hope that this report – Where are we now? AI Readiness and Adoption in the Nonprofit Sector in 2024 – supports your own learning and empowers you to find opportunities for collaboration within your networks.

Key takeaways:


How to use this report

This report goes in depth into the results of the AI Readiness Survey, with details on methodology, key findings, and visualizations of the data sets. While we encourage you to read the report in its entirety, different readers may find certain sections more relevant to their work.

Our suggestions:

For nonprofits

While this report goes into detail regarding the current state of AI readiness and utilization, you won’t find resources on how to integrate or adopt AI into your work. However, if you’re looking for resources at any stage of AI utilization and readiness – whether you’re not using AI, just beginning to integrate it, experimenting, building, or learning – we have compiled a variety of resources that can be found at the end of this report. These include people and organizations to connect with, trainings, and off-the-shelf resources and guides.

About the sample

We received 930 responses between March 7, 2024 and July 9, 2024. Most North American responses came in March 2024, and most responses from outside the U.S. came in May and June of 2024. Of the overall sample, 549 were from GivingTuesday’s network of nonprofits (mostly in the US), 251 were based in India, 86 were from partner networks focused on AI and technology, and 44 were from GivingTuesday’s global network outside North America.

63% of the sample operate in the Global North, 37% in the Global South1. Only 13% of responses came from outside Asia or North America.

Organization characteristics

Our sample mostly represented smaller, community-based organizations. 63% of organizations in our sample had 15 or fewer staff. 50% operate locally, and 16% operate in multiple countries. Worldwide, the breakdown of organizations by size is somewhat similar to what we observed in our sample.

We calculated a composite general organization capacity score based on staff size, the age of participating organizations, and whether they operated locally, regionally, nationally, or in multiple countries. We used this approach to see whether organizational capacity correlates with, or is distinct from, an organization’s ability to adopt AI, explored in the next section.

Organization age (0-150 years). For clarity, all organizations that were founded over 150 years ago (the long tail) are lumped into one bar on the right.

Our sample, by continent and cause area

Continent: Because of low sample size for some continents, we only present findings in this report for North America, Asia, and Global North vs Global South. Europe, Africa, South America and Australia were not sufficiently represented to share continent-specific findings.


Cause area: Worldwide, education, community development, and health organizations are the most prevalent in our responses. Based on the ways organizations describe their work in narrative form, Economic Empowerment and Women’s Empowerment organizations are far more prevalent in Asia (mostly India) than in North America. These cause areas were assigned to organizations using ChatGPT2 to categorize their open text descriptions in the survey. An organization could be assigned multiple relevant cause areas3.
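Footnote 2 describes the approach: ChatGPT drafted a Python script that tags organizations with cause areas based on keywords in their open-text descriptions. A minimal sketch of that style of script follows; the keyword lists are our hypothetical stand-ins, not the ones ChatGPT generated for this report.

```python
# Minimal sketch of keyword-based cause-area tagging (hypothetical keywords,
# not the actual lists ChatGPT produced for this analysis).
KEYWORDS = {
    "Education": ["school", "literacy", "students", "tutoring"],
    "Health": ["health", "clinic", "medical", "patients"],
    "Community Development": ["community", "neighborhood", "local development"],
    "Economic Empowerment": ["livelihood", "income", "microfinance", "employment"],
    "Women's Empowerment": ["women", "girls", "gender"],
}

def assign_cause_areas(description: str) -> list[str]:
    """Return every cause area whose keywords appear in the description,
    so an organization can carry multiple relevant cause areas."""
    text = description.lower()
    matches = [area for area, words in KEYWORDS.items()
               if any(w in text for w in words)]
    return matches or ["Other"]

print(assign_cause_areas("We run literacy programs for women in rural clinics"))
# -> ['Education', 'Health', "Women's Empowerment"]
```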

We also asked about how locally an organization works. The prevalence of organizations working in each cause area seems to be mostly independent of whether they operate locally or internationally.

About the people (survey respondents)

Roles of people taking the survey: Nearly half of our sample were senior leaders and co-founders. Very few respondents were technical staff or Monitoring, Research, Evaluation, and Learning (MERL) staff.

The number of years each respondent had been at their organization (i.e., their tenure) was higher than one would expect for the nonprofit sector. Only 22% of respondents had been at their current organization for two or fewer years, compared to the 40% seen in sector benchmarking surveys4. 48% of those with at least three years of tenure were organization leaders or in governance roles.

Organizational capacity and AI readiness

Funding organizations have typically sought to categorize organizations by whether they can implement small- or large-scale services and interventions; those categories depend on a multitude of factors: staff size, age, geographic reach, and local or subject-matter expertise. We wanted to know whether these factors were good predictors of whether an organization was incorporating AI into its work. And if not, what was a good predictor? Funders would likely want to track good proxy indicators before implementing programs that relied on AI at organizations that weren’t prepared to manage them. One aim of our research was to better define what “AI readiness” means by measuring who is using AI, and how, layered over other organizational characteristics associated with readily using AI. (Note: we include a detailed analysis of this in the appendix, and only share the most interesting insights below.)

We used all relevant survey answers to calculate two composite scores for organizational capacity and AI readiness.

  • Capacity reflects an organization’s overall ability to absorb funding and implement projects affecting a larger geographical scope (specifically, we factor in number of staff, age of the organization, and regionality). 
  • Readiness scores add up all of the behaviors that would logically assess an organization’s ability to adopt and implement AI solutions: things like how and what data an organization collects, technical staff, data policies, and current utilization of AI (essentially every technical question on our survey that is not an opinion about AI).
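To make the two composites concrete, here is a minimal sketch in Python of how they could be computed. The column names, equal weights, and min-max normalization are our illustrative assumptions, not the exact formulas used in the analysis.

```python
import pandas as pd

def minmax(s: pd.Series) -> pd.Series:
    """Rescale a column to 0-1 so differently-scaled components are comparable."""
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical survey extract; the real data has one row per organization.
df = pd.DataFrame({
    "staff_size": [3, 20, 150],
    "org_age":    [5, 30, 80],
    "reach":      [1, 2, 4],   # 1 = local ... 4 = multiple countries
    "collects_tabular_data": [0, 1, 1],
    "has_tech_staff":        [0, 0, 1],
    "has_data_policy":       [0, 1, 1],
    "uses_ai_today":         [0, 1, 1],
})

# Capacity: equal-weight average of normalized size, age, and reach (assumed weights).
df["capacity"] = (minmax(df["staff_size"]) + minmax(df["org_age"])
                  + minmax(df["reach"])) / 3

# Readiness: sum of the yes/no technical behaviors (the non-opinion survey items).
readiness_cols = ["collects_tabular_data", "has_tech_staff",
                  "has_data_policy", "uses_ai_today"]
df["readiness"] = df[readiness_cols].sum(axis=1)

print(df[["capacity", "readiness"]])
```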

We find organizational capacity to have a bimodal distribution (a curve with two peaks): many lower-capacity organizations, a second group of higher-capacity organizations, and a long tail of a few very high-capacity organizations. However, the AI readiness scores we calculated from our questions appear to follow a normal “bell curve” distribution.

We interpret these score patterns to indicate that the nonprofit sector is in the middle of the mainstream adoption phase for AI. Had we observed a highly right-skewed distribution in readiness scores, that would have meant most people had little to no exposure while a few early adopters were experimenting with AI heavily. Instead, we observe a normal, non-skewed score distribution, suggesting that most organizations have experimented with AI to about the same extent, regardless of their overall capacity.
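As a sketch of how this shape argument can be checked, assuming the scores sit in simple arrays (the synthetic data below merely mimics the two shapes described):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
# Stand-ins for the real score columns: a two-peaked mixture for capacity
# and a roughly normal bell curve for readiness.
capacity = np.concatenate([rng.normal(2, 0.5, 600), rng.normal(6, 1.0, 300)])
readiness = rng.normal(5, 1.5, 900)

# Near-zero skew is consistent with a symmetric bell curve; a clearly positive
# value indicates a right-skewed "few heavy experimenters" pattern. Skewness
# alone cannot reveal two peaks, so pair it with a histogram (np.histogram
# or matplotlib) when checking for bimodality.
print(f"capacity skew:  {skew(capacity):+.2f}")
print(f"readiness skew: {skew(readiness):+.2f}")
```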

Plotted below is the relationship between organizations’ growth and overall capacity and their technical capacity (to use emergent technologies, such as AI)5. The overall correlation between the two measures was 0.42, a weak positive correlation. However, the more we look into sub-groups (below), the clearer it gets that organizational capacity is not a good predictor of AI readiness. Some aspects of organizations are correlated with AI readiness, and we provide details on those in the research supplement. However, our cluster model provides a more concise narrative of what matters for three organization profiles, explained in the next section.
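The correlation in footnote 5 is a standard Pearson computation. A self-contained sketch, using synthetic scores constructed to correlate at roughly the observed level:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
capacity = rng.normal(size=900)
# Synthetic readiness built to correlate weakly with capacity, mirroring the
# observed relationship (r ~ 0.42); noise supplies the remaining variance.
readiness = 0.42 * capacity + np.sqrt(1 - 0.42**2) * rng.normal(size=900)

r, p = pearsonr(capacity, readiness)
print(f"Pearson's r = {r:.2f}")  # ~0.42: a weak positive correlation
```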

Cluster analysis

We applied a variety of models6 to cluster responses into organizational profiles that might shed light on patterns in AI readiness. Our model included multiple choice (pick any) questions around what people have done with AI and what they would like to do, alongside our other organizational capacity and readiness data. We did not include questions that probed attitudes about AI (such as comfort with or willingness to use it) in the cluster analysis. Overall, the models consistently split our sample into organizations that had been using AI and wanted to use it more, and organizations that had not yet explored AI.
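Footnote 6 names the pipeline we settled on: UMAP for dimensionality reduction followed by HDBSCAN for clustering (K-means gave similar results). A minimal sketch of that pipeline, with placeholder data and hypothetical parameter choices:

```python
import numpy as np
import umap     # pip install umap-learn
import hdbscan  # pip install hdbscan

rng = np.random.default_rng(42)
# Stand-in for the survey matrix: one row per organization, one binary column
# per pick-any answer (current AI uses, desired uses, capacity/readiness items).
X = rng.integers(0, 2, size=(930, 25)).astype(float)

# Project the binary answers into 2D, then find density-based clusters.
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(embedding)

# -1 marks points HDBSCAN treats as noise; the rest are cluster assignments.
for label in sorted(set(labels)):
    print(f"cluster {label}: {np.sum(labels == label)} organizations")
```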

Our cluster model divided organizations into three organization personas that combine how they currently use data, how they use AI, and how they plan to utilize more AI in the future: 

We asked about how people collect and use data currently, separate from any AI usage, and the differences here are telling about the types of organizations likely to begin or expand their use of AI in the near future. The clustering divided organizations into 142 low-, 265 medium-, and 523 high-data-use organizations. As one would expect, the 523 organizations that were already mostly collecting data in tabular form and through electronic devices were also those most likely to have used AI. This AI Consumers group was also the most eager to expand its use of AI: 68-76% of this group wanted to generate, interpret, organize, predict, or use a digital AI assistant in their future work.

However, the low-data and medium-data-use groups are more nuanced. They best align with AI Skeptics and Late AI Adopters, respectively. We saw drastic differences in how much AI these two groups had already explored, and different appetites for further adoption. 90% of the Late AI Adopters reported not using any AI yet, compared to 38% of the AI Skeptics group7. When we asked about their desire to explore seven AI use-cases8 in the future, 27% of the low-data-use organizations wanted to try each of them on average, compared to 35% of the medium-data-use organizations, and 63% of the high-data-use organizations. 

We label this group AI Skeptics because, compared to the other groups, they have both already tried AI and shown the least appetite to expand their use of it. Given that the AI Skeptics are also the group least likely to be collecting data in tabular form, using devices or cloud software, or retaining any raw data, we conclude that the AI tools they’ve tried don’t address their needs. Overlaying the cluster analysis with the Diffusion of Innovations model and the Gartner Hype Cycle, AI Skeptics may have passed the “peak of inflated expectations” and entered the “trough of disillusionment”9, whereas the Late AI Adopters have yet to traverse these stages.

It might seem counterintuitive that our low-, medium-, and high-data-use organization clusters are currently at medium, low, and high levels of AI adoption, respectively. Note that interest in future exploration and adoption of AI within these groups does closely follow current levels of using other data collection methods; if an organization has already started to digitize its collection and management processes (going beyond Excel spreadsheets), it is more likely to be interested in AI, but not necessarily already using it. Practically every organization we surveyed collects data about its work, but those least likely to have a tech person or MERL staff member are also least likely to have tested AI, despite claims that these tools are easy to use “off the shelf” and don’t require technical expertise.

We might also conclude that if an organization hasn’t already done much real data tracking, it isn’t going to find AI very useful either. Many of our AI Skeptics are also the organizations that track and rely on data the least: only 29% use software, 50% manually collect tabular data, and 42% collect using devices. Only 9% retain original rich data (like audio, video, or transcripts). The other groups were all higher on each of these measures.

If these sample percentages are broadly representative of the nonprofit world as a whole10, we’d estimate that just over half of nonprofits are AI Consumers in 2024, that an additional quarter of all nonprofits might eventually adopt more AI in their work, and that the remaining fraction don’t find AI useful or relevant to their work in its current form. We did not notice any major differences between the Global North and Global South in this respect (but see our later section on regional differences for more context on what differed). In addition, when we look at attitudes about AI apart from exploration in our sample, we might separate the low-adoption group into those who are true skeptics after testing, and those who are skeptical because the use-cases don’t fit their needs. Late AI Adopters might be hampered by institutional policies and friction in adopting, so we asked about these aspects in follow-up questions (see the section Common standards and internal policies).

Looking at what organizations in each cluster primarily do (using ChatGPT to categorize their open-text descriptions11), we see that the current AI Consumers tend to overrepresent nonprofit cause areas that are more easily tracked and measured. The datafication of education and health services has been underway for decades, and multiple commercial products exist to support these areas, making them more amenable to AI. These cause areas are overrepresented in our high-data group, whereas the low-data group reflects a more diverse collection of causes, distributed evenly across categories and concentrated in no one area.

Table 3.3

Data sophistication

We asked about how an organization uses data in six survey questions. This, in part, determined how organizations were clustered. AI Consumers were the most likely group to have hired a tech person and/or a MERL person. When we drilled down into the characteristics of these groups, it became clear that once an organization reaches at least 15 paid staff, it is more likely to be part of the AI Consumers group. However, more established organizations, having been around for 30 or 50 years, were far more likely to be part of the Late AI Adopters group, despite typically having larger staff sizes. Organizations founded less than 10 years ago are split roughly evenly among our clusters, making up about three-quarters of each group.

Plotting data practices by size of organization (staff size) gives a similar perspective. As expected, nearly all large and medium organizations collect data, but larger organizations are ahead of the rest in having data use policies and being cloud-based. The threshold for becoming data-savvy seems to be around 15 paid staff. Surprisingly, over half of large and medium organizations employ a tech person.

In our clustering model, the single biggest factor in whether an organization had adopted AI was having hired a MERL and/or tech person. We see a large spread across large and small organizations in the percent who have a MERL and/or tech person. Organizations that have both a tech and MERL person on staff (orange trends below) are far more likely to be sophisticated data-enabled organizations than average, and slightly ahead of even large organizations. These organizations also are the only group where a majority have been involved in adopting joint collaborative data agreements or guidelines.  

Current AI usage

The dominant meaning of “using AI” in 2024 appears to be generative AI: tools like ChatGPT, Copilot, and DALL-E 3 that create content based on text or image prompts. Interactive chatbots appear to be the only other AI tool adopted by a majority of our AI Consumers sub-group. Only 1 in 6 organizations have tried interpreting data using AI or used an AI-powered task-managing assistant, and only about 1 in 7 organizations have tried using AI for prediction.

Desired future AI usage

A majority of people in the AI Consumers and Late AI Adopters groups want to do more with AI, but less than half of the AI Skeptics show interest. The AI Skeptics are currently using AI to some extent and, unlike the Late AI Adopters, have a clear idea about what it does and does not do. About half of the Skeptics would like to use AI in two or more ways in the future. Judging by other parts of the survey, this group probably has the strongest reservations about expanding use of AI and, having tested it, recognizes the “hype” as distinct from its actual utility. We also see a thread of concern about AI overreliance in the open text essays we received from respondents that might reflect this group.


Across all groups, priorities among the various ways to employ AI at work are similar. All groups prioritize the need for a tool to organize their data better, followed by AI virtual assistants and continued use of generative AI. Interpreting data and prediction are also in high demand, despite the limitation that these tools will require better-organized data (and larger amounts of it) before they can deliver value.

AI tools and market demand

The biggest “market gap” between current use and demand is for data organization tools, followed by predictive models, better interpretation of data, and virtual assistants. Generative AI, it seems, is currently used in line with demand, and interactive chatbots appear to be less popular but also largely met by current technology. In other words, generative AI and chatbots have nearly saturated their target market, but other AI uses have yet to make an entrance. AI-aided data organization, annotation, and cleaning tools will likely someday be the most prevalent AI use-case in the sector. Translation and transcription tools are the third most used category of AI tools, but fewer organizations are likely to need them, based on this data.

Overall, the nonprofit sector appears to be leaning into AI as a whole, despite reservations that surface when we ask questions about comfort, risk-reward, and policies/guidelines. Attitudes about the power of AI remain positive, but feelings about its use remain much more negative than seen with past technologies. Those interested in growing AI use would be well-served to be more intentional about mitigating the risk of data breaches, bias, and the perceived exploitation of prompt-inputs. 

Our comparison of current and future use of AI suggests that the greatest unmet needs are apps that help in organizing and restructuring data, followed by virtual AI assistants, data interpretation/analysis, and predictive models (which require large amounts of this structured data as a key input). Generative AI and chatbot-use have already saturated the market; anyone who wants this has tried it.

What people told us: attitudes about AI

We asked people to rate how comfortable they were incorporating AI in their work. When looking at our unweighted scores on a 0-10 range, one can see that there is a generally high level of comfort in using AI. This is consistent with the measured sentiment in our essay question about AI hopes and fears.

However, this positive skew might be misleading when compared in context. When applying the net-promoter12 correction to our question, the net comfort score is -16 on a -100 to +100 scale. 

This approach subtracts out the “courtesy bias”13 that is often present when respondents feel social pressure to answer positively, and is widely used to measure customer and product satisfaction14. If AI were considered a product, a typical successful product would achieve a minimum score of +30 with its customers and users; at -16, AI sits in the failing-product range, significantly below the industry benchmark NPS medians for Computer Software (36) and IT Services (41)15. On balance, we conclude the level of excitement around AI is not nearly as high as the raw scores might imply.
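For readers who want the arithmetic, footnote 12 gives the scoring rule. A minimal worked sketch, where the rating distribution is hypothetical and chosen only to land near our observed -16:

```python
def net_score(ratings: list[int]) -> int:
    """Net-promoter-style score: % of 9-10 ratings minus % of 0-6 ratings,
    ignoring the passive 7s and 8s. Range: -100 to +100."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return round(100 * (promoters - detractors) / n)

# Hypothetical distribution of 100 comfort ratings on the 0-10 scale.
sample = [9] * 20 + [8] * 25 + [7] * 19 + [5] * 36
print(net_score(sample))  # -> -16: 20% promoters minus 36% detractors
```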

Using AI in one’s work might lead to new insights and reduced workloads, but it also carries many kinds of risk. When asked where the balance of risk and reward falls with AI, about a third didn’t feel well-enough informed to weigh in, and the rest were divided but somewhat skewed towards seeing more reward than risk: 29% thought AI would be net good, compared to 19% who saw AI as more risk than reward; 49% saw the two as equal, or didn’t know. When we itemized these risks and asked people about each one, it was clear that the more tech-savvy respondents tended to see AI as riskier than people working at nonprofits in general. We conclude that while many people have started testing AI tools, about half really don’t feel well-enough informed to have made up their minds yet.

Among the common concerns people had about AI, AI-related data breaches were the top concern, by a slight margin. In our cohort of tech-oriented nonprofits, we saw a substantial increase in the level of concern around most aspects of AI, especially bias. Replacing workers with AI was the only area that wasn’t a greater concern among tech-oriented nonprofits.

We also saw a trend in the opposite direction in our India cohort, where fewer people were alarmed about these issues. Concern for AI risks was generally a little lower in the Global South, and a little higher in the Global North.

In a recent survey16, Charities Aid Foundation (CAF, based in the UK) asked the general public in ten countries what they thought charitable organizations should be doing with AI (n=6,102 people online, early 2024). When asked to weigh the risks versus benefits of AI, 37% thought benefits outweighed risks, compared to 20% who felt risks outweighed benefits, and 34% who said they were roughly equal. A much smaller fraction of the public didn’t know (7%), compared to the 34% in our sample of nonprofit staff. They reported a “net positive” outlook of +15% (the percent who saw more benefits less the percent who saw more risks), compared to our +10% net-positive outlook. We share that result here because the public’s perception appears to differ from that of nonprofit professionals.

Common standards and internal policies

Data policies are essential for protecting privacy, ensuring security, maintaining compliance, managing risks, and building trust. Given the many risks that AI adoption poses to data privacy and equity, including the introduction of systemic bias, we asked respondents whether their organizations currently have any data policies in place, and how feasible they thought it would be to collaborate with other organizations. While the majority (70%) of organizations did have a data-use policy in place, only 28% had been involved in adopting some kind of collaborative data agreement or guidelines in the past. In line with this, only 17% of respondents felt enthusiastic about the prospect of collaborating with other organizations on AI, and the net feasibility of this idea was -36 on a -100 to +100 scale.

  • We do not yet know how to evaluate AI systems for safety and negative impact. … We also don’t know how to govern these systems and have mostly left them to the devices of the market, which does not have social well being at heart. We desperately need better policy and governance systems.
  • …tackling a tool that requires complexity (policies, experimentation, learning, etc.) is difficult to accomplish in an organizational way. Nobody is funding our basic technology needs …
  • We would love to do more with AI, but due to privacy concerns and compliance regulations we would need enterprise AI tools accessible across the organization with data privacy agreements, an understanding of how the LLM(s) were trained and created (framework, design, bias, etc.).
  • I worry about poorly configured security on a black-box model that we can’t identify what comes in, where it goes, or how it’s used — specifically with sensitive PII or Health data.
  • Once the AI climate becomes regulated to protect and compensate or credit original IP, then I will consider using it.

While only 3% of essays mentioned these kinds of policy issues directly, it is clear that the lack of a general “safe and ethical use policy” for AI is an underlying factor in people’s top concerns.

In a later section, we measure how much of a “priming effect” giving people a list of concerns (versus asking for an open-ended response) has on the issues they raise; we found that “AI related data breaches”, “Overreliance on AI”, and “AI replacing workers” were the top concerns when unprompted. Clear guidelines could mitigate the harm behind each of these concerns.

This is a common need, yet also the kind of challenge that organizations have rarely collaborated on by standardizing their safeguarding approaches together. Achieving sector-wide standards will take a different approach. In the absence of any organizing principles or external forces, every organization will likely update its internal guidelines and policies eventually, but years of AI use and exploration are likely to pass before the space is regulated and self-governed.

Other surveys of nonprofit organizations have similarly found that AI policies are practically non-existent in 2024. Charityexcellence.co.uk17 found that 60% of their sample lacked any policies and procedures concerning AI, and only 5% had a clear, satisfactory policy. A separate survey of communications professionals in nonprofits found that only 4% had an AI policy, and only 14% were actively working on one18.

Hopes and fears

At the end of the survey we gave people the chance to talk about anything on their minds related to AI. Nearly every respondent chose to write something about their hopes and fears regarding the promise or peril of AI in the context of their work. We then used ChatGPT to summarize the themes discussed and the frequency with which people mentioned each. We found that similar topics were discussed around the world at similar rates, but that a greater proportion of respondents in North America tended to talk about the efficiency gains from AI. Smaller organizations (with at most 15 staff) identified a “lack of knowledge and training about AI” as their number one concern and barrier to adoption.

Selected quotes:

  • I can create, manage, and deploy content with 3 fewer staff members.
  • My hope is that AI will be each staff person’s personal assistant which will allow us to be more innovative and operate more efficiently.
  • AI can revolutionize sectors like healthcare, education, and sustainability, solving global problems.
  • AI will take the human-ness out of our work.
  • AI is unreliable due to its tendency to hallucinate

Ethics and privacy

  • However I am concerned about the inherent bias in the system about the lack of transparency in how the data is being collected and the assumptions being used to make predictions. I’m also concerned about becoming over reliant on something we don’t fully understand.
  • AI is machine learning and the support is only as good as inputted data. 
  • Copyright infringement
  • Fear of AI becoming too knowledgeable about our staff and the people we care for through our organization. We respect privacy
  • My fear is a bias towards those who create it – mostly affluent non-BIPOC in the western world.

Lack of Knowledge

  • Concerned that our small volunteer-led organization will not have the time, training, and personnel to adapt to a new environment where AI is a necessary tool.
  • The sector needs to collaborate radically to build up AI knowledge and access, otherwise we will be left [behind] by the private sector. We need to do it in a way that is horizontally governed, open, live, and managed with values at the centre, and impactful.
  • It’s another set of training I have to take to adopt in my own work, which is already at full capacity.

Job Displacement

  • You won’t be replaced by AI, you’ll be replaced by someone who knows how to use AI.
  • Where does digital equity fall into the AI discussion? We are not ready for the job loss that is coming.
  • AI is likely to change the nature of my work and other knowledge workers in the nonprofit sector. Therefore, the changes will be tumultuous and cause worker displacement but at the same time AI will help speed things up that were taking time away from other, more important tasks.

Sentiment19 was slightly positive overall, with an average of +12 on a -100 to +100 scale. 65% of essays fell within the -25 to +25 range, meaning that the overwhelming majority were rather neutral or balanced in their comments. People varied widely in how objective they were: based on a language subjectivity algorithm, 15% of comments were highly objective, and 30% were quite opinionated.
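Footnote 19 identifies the tool (the Python TextBlob library). A minimal sketch of how each essay could be scored; the -100 to +100 rescaling is our reading of how the reported average was produced:

```python
from textblob import TextBlob  # pip install textblob

def score_essay(text: str) -> tuple[int, float]:
    """Return (polarity rescaled to -100..+100, subjectivity on 0..1).
    TextBlob's polarity is -1..+1; subjectivity is 0 (objective) to 1 (opinionated)."""
    sentiment = TextBlob(text).sentiment
    return round(sentiment.polarity * 100), sentiment.subjectivity

essay = "AI could help us operate more efficiently, but I worry about bias."
polarity, subjectivity = score_essay(essay)
print(polarity, subjectivity)  # e.g. a mildly positive, fairly subjective comment
```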

The priming effect

People are much more likely to raise a concern when prompted. To examine this effect, we compared the percentage of comments that mentioned specific AI concerns with the percentage that identified that same concern when provided with a list of options to choose from. For ease of interpretation, we added a third trend series (to the right), based on the ratio between prompted and unprompted concerns (ignore the units on the x-axis for this; the ratio is unitless). Based on this, people were most concerned about data breaches and AI replacing workers, and were relatively more likely to raise these issues unprompted in their essays than the prompted list results alone would suggest. Conversely, the environmental impact of AI and equity concerns (e.g., that better-resourced organizations are more likely to adopt and benefit from AI) seem to be less top-of-mind in essays.
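A minimal sketch of the prompted-versus-unprompted comparison; the percentages below are placeholders, not the survey’s actual figures:

```python
# Hypothetical prompted vs. unprompted mention rates (percent of respondents).
prompted = {"data breaches": 60, "AI replacing workers": 45, "environmental impact": 30}
unprompted = {"data breaches": 20, "AI replacing workers": 15, "environmental impact": 2}

# A low ratio means the concern surfaces on its own in essays; a high ratio
# means it is mostly raised when a list puts it in front of people
# (i.e., a stronger priming effect for that concern).
for concern in prompted:
    ratio = prompted[concern] / unprompted[concern]
    print(f"{concern}: prompted/unprompted ratio = {ratio:.1f}")
```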

What people are saying about AI in the nonprofit sector, globally

Discussion and conclusions

AI has been one of the most talked about topics around the world in 2024. Predictions run the gamut from AI marking the beginning of the end of all things (see also “the singularity”20) to merely being the next phase in automation, akin to the computerization of business in the early 1990s.

Instead of prognosticating, we wanted to present a shared perspective from all available surveys, and add our own somewhat global view through another survey, in the hope that we can present balanced insights about where we are, where we are going, what harms to mitigate, and what boons to anticipate. For the nonprofit sector, many questions are emerging about the extent to which it should use AI technology in its work, and whether it has the interest and capacity to do so. This study is unique in that it provides more granular data about how attitudes towards AI intersect with presumed capacity. AI stands to disrupt many of the ways nonprofits work, as well as introduce new risks and the need for more training and guidelines.

Our survey data highlights the nuances in current AI adoption and knowledge; understanding these differences is integral to achieving equitable and beneficial AI adoption. Addressing the systemic challenges of knowledge and infrastructure gaps, which are not unique to AI technology, requires cross-sector collaboration among philanthropy, technology, nonprofit, and research actors in the sector. Collectively identifying strategic and tactical use cases to move us from “how do we use AI” to “what do we need” will support an ecosystem that centers sector needs in design, development, and governance for equitable outcomes.

AI Readiness resources for nonprofits

At the Generosity AI Working Group, we aim to foster meaningful and resourced collaboration across the social sector. One way we are doing this is by consolidating and cataloging resources on AI tooling, capacity building, guidelines, and standards in our knowledge bank, datasets, tools, and problem statement libraries. You can access those here. We’re highlighting a few resources from our network for nonprofits at any stage of AI utilization and readiness – whether you’re not using AI, just beginning to integrate it, experimenting, building, or learning.

Templates & frameworks

Trainings & learning

Acknowledgments

About us

The Generosity AI Working Group at the GivingTuesday Data Commons is a cross-sector collaboration between practitioners in tech, academics, and the social sector who are exploring ways that artificial intelligence (AI) can be used to advance missions and grow generosity at scale. This working group explores the role of knowledge-sharing platforms, learning networks, and communities of practice in accelerating AI adoption for social good. It identifies strategies to address issues related to resource constraints, governance, data sharing, and equity in collaborative AI projects. The Generosity AI Working Group is connecting workstreams and creating a collaborative space to inform research, product development, and best practices.

The GivingTuesday Data Commons is a global network that enables data collaboration across the social sector. The Data Commons convenes specialist working groups, conducts collaborative research into giving-related behaviors, reveals trends in generosity and donations, and shares findings among its global community. With more than 170 data partners and 1,800 collaborators, the Data Commons is the largest philanthropic data collaboration ever built.

Collaborators

The AI Readiness Survey is conducted by the GivingTuesday Data Commons in partnership with Fundraising.AI and with generous support from Microsoft. This report was made possible through a network of collaborative partners that supported the survey design and dissemination and report design and iteration. We thank Tim Lockie, Nathan Chappell, Meena Das, and the numerous organizations in the Generosity AI Working Group for their time, energy, and expertise in reviewing the survey and report. Additionally, we thank our dissemination partners: Good Tech, MERL Tech, Fundraising.AI, NamasteData, The Human Stack, Donor Participation Project, Quiller, Brooklyn Org, Apurva.ai, Atma, BHUMI, ConnectFor, A.T.E. Chandra Foundation, Voluntary Action Network India (VANI), Danamojo, and Sattva – India Partner Network (IPN).

Data access

If you are interested in utilizing the data for your own research, you can access the dataset here. This link includes a readme file on how to use the data, as well as the raw data set.

To view other datasets, check out our resource libraries and datasets that are searchable and accessible via GivingTuesday’s Generosity AI Working Group.

Contact us

For press or media inquiries, please contact Shareeza Bhola.

For research or partnership inquiries, please contact Kelsey Kramer.

Research supplement

Supported by

Footnotes

  1.  For the purposes of this survey we defined the Global North as North America, Europe, and Australia, and the Global South as South America, Africa, and Asia. Note that we did not receive any responses from countries that would be ambiguous, such as Turkey, parts of the Middle East, or Russia. ↩︎
  2.  We tried several ChatGPT prompts asking for a list of topics from the texts that captured the most frequent categories. ChatGPT’s best result was simply to write a Python script that assigned topics to organizations based on keywords in their descriptions, allowing organizations to fit multiple categories. The keyword approach worked well because it was based on ChatGPT “seeing” all the descriptions first. ↩︎
  3.  We decided not to use a categorical taxonomy in our survey because there is no one “best” version that works worldwide, and researchers can take our raw data and recategorize this sample of organizations to fit any taxonomy, if they wish. In fact, ChatGPT can be prompted to do this about as accurately as a human. ↩︎
  4.  CultureAmp tracks the tenure of nonprofit staff and reported that in January 2024, 40% of employees they surveyed had been working at their current employer for at most two years: https://www.cultureamp.com/science/insights/nonprofit-200-500↩︎
  5.  If these two measures were highly correlated, one would expect to see the dots form a trend line that goes up and to the right. Instead, we see a weak correlation (Pearson’s r=0.42). ↩︎
  6.  We experimented with several models. The results presented here were based on UMAP with HDBSCAN, but the K-means approach gave similar results. Both models largely split the sample into people who either felt AI had more benefits than risks or saw more risks than benefits, regardless of their actual use. The three-cluster model provided a much more nuanced and complex interpretation of these groups, so we present that. ↩︎
  7.  19% (+/- 9%) of the low-data-group currently was using each form of AI, compared to 34% (+/-22%) of the medium-data-group. ↩︎
  8.  For analysis of AI use cases here, we excluded answers “Don’t Know” and “Other” from the calculations and only included the 7 explicit uses. ↩︎
  9.  See the Gartner Hype Cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle ↩︎
  10.  Our survey is not geographically representative of organizations worldwide, so our conclusions may not apply in continents where we did not gather a sufficient sample size. However, it is representative of small, medium, and large organizations, and reflects a similar thematic breakdown to the global population of organizations. To the extent that data is available in similar surveys, we have no reason to believe that AI adoption is wildly different in places we didn’t sample; further, we provide a detailed summary of these parallel findings in Section 6: discussion. ↩︎
  11.  Prompt: Turn this list of text into a BRIEF list of nonprofit categories, and count the number of organizations in each category, where newline is a new organization. Name the 6 most common categories then combine the smaller categories into an “other” list. ↩︎
  12.  In brief, you take the percent who gave a 9 or 10 and subtract the percent who gave scores below 7, ignoring 7s and 8s. Here’s an interactive calculator: https://delighted.com/nps-calculator ↩︎
  13.  Read more about courtesy bias from Listen for Good: a multi-funder effort to improve feedback loops in nonprofits: https://listen4good.org/feedback101/courtesy-bias/ ↩︎
  14.  We feel justified that there is a courtesy bias present because some of the partner networks that helped to distribute the AI survey told us that some of their members refused to fill out the survey precisely because they felt uninformed or unwilling to engage on the topic. That means those who did fill out are skewed (predisposed) to be more informed and more positive in their views. And it is likely that they felt some social pressure to rate their comfort a little higher as part of self-selecting into the camp that wants to explore AI further. ↩︎
  15.  Source: https://customergauge.com/benchmarks/blog/technology-industry-nps-benchmarks  ↩︎
  16.  Source: https://www.cafonline.org/about-us/research/what-the-public-think-of-charities-using-ai  ↩︎
  17.  Source: https://www.charityexcellence.co.uk/charity-ai-benchmarking-survey/  ↩︎
  18. Source: https://www.charitycomms.org.uk/salary-and-organisational-survey-2023 ↩︎
  19.  Using python TextBlob library ↩︎
  20.  Source: https://en.wikipedia.org/wiki/Technological_singularity  ↩︎
  21. Source: https://www.cafonline.org/insights/research/what-the-public-think-of-charities-using-ai We adopted our Risk-Reward question from CAF’s survey of the public about nonprofit use of AI, and they found that 37% of the public saw AI as having more reward, compared to 22% who saw AI as too risky. ↩︎
  22. Source: https://www.weforum.org/agenda/2024/01/ai-disinformation-global-risks (“Disinformation Tops Global Risks 2024 as Environmental Threats Intensify”), which ranks disinformation as the #1 global threat in the next two years. See also https://www.weforum.org/press/2024/01/global-risks-report-2024-press-release/: “Two-thirds of global experts anticipate a multipolar or fragmented [world] order to take shape over the next decade.” From January 2024. ↩︎
  23. Source: https://www.weforum.org/publications/global-risks-report-2024/in-full/global-risks-2024-at-a-turning-point/ ↩︎
  24. Source: https://www.charityexcellence.co.uk/charity-ai-benchmarking-survey/ (57% are using ChatGPT) ↩︎