AI, philanthropy, and the pitfalls of machine-controlled giving
AI is changing the way we create, work, and learn. Many fear it could even change the way we think. And while much has been made of how AI could affect education, the arts, and business, the impact of artificial intelligence on philanthropy has been more of an afterthought.
AI’s claim to free users from the tedium of work — what its makers consider “drudgery” — is a big selling point. In fact, it’s a core tenet of many “AI for good” initiatives. Developers pitch AI as a tool for expediency, automation, and equity within the world of nonprofits, which usually operate with tight budgets and small staffs. And many philanthropic leaders see AI as a life-changing investment for nonprofits at large, especially small, community-oriented organizations just trying to survive.
But we also know that society is facing a crisis of care, in which more and more people report intense feelings of hopelessness and apathy. Does adding human-less, digital automation to one of the ways we provide care for others exacerbate those growing feelings of dissociation? There’s a second battle raging too: a crisis of attention, in which the rapidly moving images on screens all around us have become more appealing than the slower, grittier world creating them. Is AI the right answer to the problem of grabbing the public’s attention, getting them to care, and maintaining their investment in the cause?
Nonprofits are looking to AI to fill historical gaps — to aid customer service, ease administrative burdens, and get the attention of those with deep pockets. For many leaders in the giving world, the question remains whether those benefits outweigh the drawbacks.
Google Search: A window into the problem
In May, Google launched Search Labs’ AI Overviews, an AI-summarizing feature you have definitely seen but have certainly forgotten the name of. It was a tentpole addition amid a flurry of sparkling AI features, intended to make searching for information even easier (who wants to scroll through multiple pages anymore?).
Overviews appear in their own highlighted box under the normal Google Search bar, with a small conical beaker logo meant to indicate to the searcher that the results are still being tested. That’s important. The early launch of Overviews wasn’t just lackluster; it was worrisome. Results were muddy, often nonsensical, becoming the new carriers of absurd memes and fake screenshots; people scrolled right past them. Mashable’s own testing found a mix of genuinely helpful answers and glaringly off AI hallucinations. (The feature has yet to fully roll out to all searches.)
Weeks out, journalists were rallying a movement against the flurry of misinformation and misappropriated bylines spawned by the still-limited run of AI Overviews. The tool introduced a potential “catastrophe” to content visibility and online traffic, some publishers said, screwing with established metrics for appearing, with credit, at the top of news results. Not long after, the feature was rumored to be adding integrated, revenue-generating advertisements.
But it wasn’t just the news media that was worried, and it wasn’t just about profit. “What you’re seeing in the for-profit sector is certainly going to affect the nonprofit sector,” said Kevin Scally, chief development officer at nonprofit ratings site Charity Navigator. Just as journalists and creatives sounded the alarm over ethically dubious results, and users pointed out absurdly unhelpful responses, Scally and his colleagues saw the streamlined search summaries as a potential problem for the less discussed world of charity.
Such AI tech could potentially hide legitimate nonprofits behind ambiguous summaries or outright false results, these advocates warned. Its search summaries raise questions of algorithmic bias, and with them questions of funding and visibility — the same issues already plaguing the sector, but at a synthetically enhanced scale.
Finding the right charity amid a slog of information
AI isn’t new in the sector, but the timeline has sped up. Dave Hollander, data science manager at nonprofit data site Candid, explained that his organization and others have spent the past several years building discovery tools and audiences for nonprofits, exploring how AI can help underserved populations access resources online. Since resources like Charity Navigator and Candid work primarily with large, complex data sets, collated from federal sources and from nonprofits themselves, AI tools are an incredibly useful way to cut down on the administrative heft. Other nonprofits may use AI to fill staffing gaps, with site customer service bots helping donors find resources and organizations.
“The general availability of these AI tools, and the accessibility of it, could potentially help organizations improve their search engine optimization,” Hollander explained, “where in the past that would have been an insurmountable task for them. But discoverability through search has long been a problem for a lot of organizations, even before AI. And then AI comes and can also exacerbate that problem.”
A simple illustration: How would an AI-boosted search choose between organizations with confusingly similar names? In 2020, for example, as the global community rallied behind the work of racial justice advocates and police abolitionists, millions of dollars in donations were funneled to activist organizations. Bad actors using SEO-gaming names that included the phrase “Black Lives Matter” managed to siphon off thousands from well-meaning donors.
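The ambiguity is easy to reproduce. Below is a minimal sketch in Python, using the standard library’s difflib and names modeled on coverage of that 2020 episode, showing how little a naive name match has to go on:

```python
from difflib import SequenceMatcher

# Names modeled on coverage of the 2020 confusion; illustrative only.
query = "black lives matter"
candidates = [
    "Black Lives Matter Global Network Foundation",  # movement-affiliated
    "Black Lives Matter Foundation",                 # unaffiliated lookalike
]

for name in candidates:
    score = SequenceMatcher(None, query, name.lower()).ratio()
    print(f"{score:.2f}  {name}")

# The unaffiliated lookalike scores HIGHER on this naive string match
# (~0.77 vs. ~0.58), because its shorter name is a closer fit to the
# query. A system keyed to name similarity alone gets it backwards.
```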
Ambiguities like these are already a problem, a natural product of an overloaded internet and not enough names to go around. Other problems arise with the repeated recommendation of the same big-name organizations (say, the Bill & Melinda Gates Foundation) over smaller, localized nonprofits doing the same work.
And organizations already vie for the spotlight in a charitable ecosystem moving toward less frequent, more reactive giving. “The risk that runs [with AI Overviews] is, if we’re getting it wrong, it’s not just a matter of a funny screenshot,” Scally warned. “It could be a matter of the organization’s reputation and their funding. Then you play that forward. If that’s happening at scale, where information about those organizations is getting twisted up, it has real ramifications for the programs they serve.”
Recently, Google announced new updates to AI Overviews to try to curb publishers’ worries, including prioritizing direct links to sources — but they’re still being tested. Other information-gathering sites, like TikTok, are facing similar misinformation issues with AI-supported searches.
AI is only as specific as the prompt it’s given and the data it’s fed. Search Overviews summarize already-populated results and prioritize high-ranking links. If a smaller nonprofit isn’t active online, and isn’t already surfacing in Google results, it has little chance of becoming AI’s recommended click.
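A toy sketch makes the visibility problem concrete. The organizations and scores below are invented, and real overview systems weigh far more signals, but the structural point survives: whatever never ranks is never summarized.

```python
# Toy model of a summarize-the-top-results pipeline.
# All names and scores are invented for illustration.
results = [
    {"name": "Big National Charity", "rank_score": 0.92},
    {"name": "Mid-Size Foundation", "rank_score": 0.71},
    {"name": "Local Food Pantry", "rank_score": 0.08},  # barely any web presence
]

TOP_K = 2  # the overview only reads the highest-ranking hits


def overview(results, k=TOP_K):
    top = sorted(results, key=lambda r: r["rank_score"], reverse=True)[:k]
    return "Consider donating to: " + ", ".join(r["name"] for r in top)


print(overview(results))
# "Consider donating to: Big National Charity, Mid-Size Foundation"
# The Local Food Pantry never reaches the summarizer, so it can never
# be recommended, however good its work is.
```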
Understanding the true meaning behind a nonprofit’s work
Within AI summaries, the nuance of a nonprofit’s mission, and exactly how its goals are accomplished, is sacrificed for the ease of a simplified answer. Google itself pitched the service with: “Google will do the Googling for you.” But AI doesn’t have a human brain, and it can’t incorporate the nuances involved in the processes of helping our fellow humans.
There’s a lengthening list of media and AI literacy questions to address, first. In an AI-enhanced future, how will individuals learn to properly search, vet, and align their charity on their own, with and without the aid of an AI bot? What do we lose when we stop doing the “hard” work of searching for ourselves?
The hypothetical solution is for the sector to offer up even more data to the AI tools’ developers: data from nonprofits, data from organizations like Charity Navigator, and personalized behavioral data from donors (read: internet users), all in service of solving the specificity problem. AI’s proponents love personalization. But that would stir up even more problems.
“I think that there’s inherently risks with that. Does technology really know the true me? How comfortable am I having Meta and Google and Microsoft essentially build profiles about me?” Scally said.
AI’s data hunger has worried many privacy advocates and proponents of data autonomy, a concern now spreading through the world of nonprofits as well. Handing over people’s personal data cuts against the values of many of the world’s most effective social sector actors: those who avoid overlapping their work with Big Tech, those who cannot feasibly gather such data (or choose not to among their communities), and especially those trying to decolonize their work from historic power holders.
As a wave of new perspectives on charitable giving emerges — including the idea of unrestricted, community-driven funding that intentionally eschews traceable nonprofit data — many nonprofits have already made AI safety commitments that would block deeper personalization. Candid, along with its acquired GuideStar database, doesn’t allow its data to be used for training third-party models, and uses only a nonprofit’s publicly accessible tax data for internal projects.
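Mechanically, one common way a data publisher signals “don’t train on this” is a robots.txt file that disallows known AI crawlers. The sketch below, built on Python’s standard library, is generic: the crawler tokens (GPTBot, Google-Extended, CCBot) are real, but whether Candid relies on this particular mechanism is an assumption.

```python
from urllib.robotparser import RobotFileParser

# A generic opt-out robots.txt. GPTBot (OpenAI), Google-Extended
# (Google AI training), and CCBot (Common Crawl) are real crawler
# tokens; the file itself is a hypothetical example.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "https://example.org/990-data"))        # False
print(rp.can_fetch("SomeSearchBot", "https://example.org/990-data"))  # True
```

The catch is that robots.txt is advisory rather than enforceable, which is why organizations pair it with explicit licensing terms.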
AI could make charity feel like another investment, without the “warm glow of giving”
The problem with AI implementation is that it’s happening at hyperspeed. Designed by large tech industry leaders to streamline people’s digital lives, and implemented with little outside input, AI can just as easily strip away one of the core purposes of charitable giving: human-to-human connection.
According to recent numbers from Giving USA, U.S. charitable giving decreased by 2.1 percent in 2023, following a record high set by social and public health organizing in 2021. What did grow in 2023 were donor-advised funds, a controversially favored way of donating among the wealthy elite. Donor-advised funds are managed and sponsored by public charities and nonprofits; donors’ contributions earn an immediate tax deduction, then sit invested until they’re paid out to charities. As Scally explained, the funds write out what are essentially grants to organizations, but individual givers stay uninvolved and potentially emotionally uninvested. Givers, then, are no longer doing the work.
Scally sees an obvious connection between these trends and tools like AI Overviews: Humans are becoming more disconnected from the physical act of handing over their money and resources to the people, or causes, most in need, often in favor of others (or even bots) telling them where to turn. This comes in spite of a social shift toward mass community giving and a revived interest in the concept of mutual aid.
“If you’re doing a search, finding the organization through an AI Overview, then making a grant through your donor-advised fund… What connection do you have to that organization?” asks Scally. “How invested are you to continue to support that organization, when you don’t feel that warm glow of giving?”
In a recent New Yorker article by speculative fiction author and frequent AI commentator Ted Chiang, growing fear of AI’s art takeover is presented as misleading, even as developers try to commandeer creative fields. “The businesses promoting generative-AI programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration — but these things cannot be easily separated,” Chiang writes. What AI rids humans of, the writer argues, is self-confidence, not drudgery. And it’s devaluing the effort and importance of human attention in favor of the technology’s processing power.
Art and philanthropy are not so different when it comes to the need for human intention and creativity — compassionate human connection takes work and time, things that AI’s efficiency goals are working to make a thing of the past. As Chiang wrote, “It is a mistake to equate ‘large-scale’ with ‘important’ when it comes to the choices made when creating art; the interrelationship between the large scale and the small scale is where the artistry lies.” And humanity at the small scale is where charity does its greatest work.
There’s good in AI, if we can use it wisely
Individual nonprofits (and even their supporters, like Candid and Charity Navigator) aren’t turning away from AI completely. In fact, Scally scoffs at the idea of an evil AI takeover. “Instead of a Terminator, or Matrix, or a Robocop scenario, how can we actually use this for good, and have a good balance against it?”
Candid has been testing AI in its work since Hollander started there in 2015. The organization has continued to explore generative AI as a solution to problems facing smaller nonprofits, including drafting documents like grant proposals and letters of intent.
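For a sense of what that drafting assistance can look like, here is a minimal sketch using the OpenAI Python client. The model name, the prompt, and the helper itself are illustrative assumptions, not Candid’s actual tooling:

```python
# Illustrative sketch of AI-assisted grant drafting; not Candid's
# actual implementation. Requires `pip install openai` and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def draft_letter_of_intent(org: str, mission: str, funder: str, amount: str) -> str:
    """Produce a first draft for a human to edit, not to send as-is."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You draft concise, factual letters of intent for small nonprofits."},
            {"role": "user",
             "content": (f"Draft a one-page letter of intent from {org} "
                         f"(mission: {mission}) to {funder}, requesting {amount}. "
                         "Leave [BRACKETED PLACEHOLDERS] for any facts you don't have.")},
        ],
    )
    return response.choices[0].message.content


print(draft_letter_of_intent(
    "Riverside Food Pantry", "ending food insecurity in Riverside County",
    "Example Community Foundation", "$25,000"))
```

The bracketed-placeholder instruction is the load-bearing design choice: it keeps a human in the loop to supply and verify facts, so that a drafting aid doesn’t quietly become a hallucination pipeline.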
And even with Google’s own AI technologies under critique, the company has been putting its money back into AI’s social sector benefits. In April, the company announced a $20 million investment in its latest Google “AI for Good” accelerator program. The initiative funneled cash into what Google deemed “high-impact” nonprofits, like the World Bank, Justicia Lab, and Climate Policy Radar, to accelerate the integration of AI into their work. Google recently expanded the initiative.
Charity Navigator received Google backing to explore natural language processing and is internally testing AI-powered assistance for site visitors. They are spurred on by successful integrations among fellow nonprofits, like the Trevor Project’s Crisis Contact Simulator (also backed by Google).
“I don’t think it’s fair to discount AI and say it will never be able to get the intelligence it needs to really navigate nuanced areas of social good,” Scally reflected. “I think things are evolving — AI six months ago looks very different than it does now.” It comes down to more data, casting a wider net, and doing a better job at eliminating bias, Scally said.
Social sector guardians, then, could form something like a symbiotic relationship with Big Tech’s AI investments: the technology enables these organizations’ work, while things like recommendations stay with human professionals. The outline is already visible: rather than inundating search overviews with something like advertising, AI could offer more context, more links, more information.
Still, questions remain. Can AI actually close equity gaps? Could its pervasiveness make full participation easier for everyone? The answers haven’t revealed themselves. But that isn’t to say we can’t formulate a more compassionate plan as the technology advances. Even as we add “humans in the loop,” a sense of humanity has to remain at the forefront.