Surdna President Don Chen on Philanthropy and Artificial Intelligence

On March 20, 2024, Don Chen, president of the Surdna Foundation, delivered the closing keynote at the Philanthropy and AI Forum hosted by Partnership on AI. He spoke about the societal impacts of Artificial Intelligence (AI) and the importance of diversity and democracy in creating the ethical frameworks, standards, and societal norms that govern AI’s use.

He also discussed the need to form cross-sector partnerships to ensure responsible, equitable, and safe AI outcomes, asking: what would it look like to develop AI tools that can help repair historical harms and bridge divides? Read his full speech below:

Good afternoon. I’m delighted to be here to speak about this critically important set of topics from my perspective in the philanthropic sector. The Surdna Foundation is a 107-year-old private family foundation whose mission is to advance social and racial justice in U.S. communities. We have four major grantmaking programs on climate justice, economic inclusion, arts and culture, and youth justice.

I’m just re-reading Parable of the Sower, with the new foreword by N.K. Jemisin, in which she talks about the power of threes. So my keynote will include three threes: Surdna’s three institutional outcomes, three reasons why I’m glad we’re part of the Partnership on AI, and three examples of how Surdna is working with partners to begin applying a sociotechnical lens to our work.

To start with, Surdna seeks to make our grantmaking add up to three institutional outcomes:

1) democratic participation, especially to eliminate barriers to civic engagement, such as voting or adding voices to public decision-making,

2) wealth building, with a focus on closing the gaps in opportunity, capital, and the life outcomes supported by having assets, and

3) greater transparency and accountability, because our democracy and economic system should provide equal protection, opportunity, and access to all Americans, but they often don’t, and we need to hold ourselves to our own high standards.

So, our grantmaking is devoted to closing these gaps in democratic participation, wealth building, and transparency and accountability, so that a person’s race (or gender, income, or zip code, for that matter) doesn’t unfairly determine their life outcomes.

I’m delighted that Surdna is a member of the Partnership on AI for several reasons, and I’ll mention three.

First, the most obvious reason: AI and other forms of technology continue to affect our lives and our work in multiple profound ways, many of which have been completely unexpected. None of us at Surdna are technologists, not like many of you are. But we understand that we need to think about how AI can and will affect everything from our day-to-day work to the societal conditions we’re trying to improve. And that requires all of us, especially those of us in philanthropy, to prioritize a public interest approach to technology in everything we do.

Second, as foundations, especially endowed private foundations, we reside in a very privileged part of American society. In doing our mission-driven work, we have tremendous latitude and face relatively few accountability requirements. Sounds pretty good, right?

But we always need to recognize that how we use this privilege and latitude must be driven by a sense of civic responsibility. That’s the real reason why we exist. And because of our philanthropic dollars and our relative independence, foundations not only have a responsibility to be good civic partners in ensuring that technology serves the public interest; we’re also in a relatively good position to serve as honest brokers, helping the field prioritize a set of ethical standards and apply a sociotechnical lens to this work: in other words, integrating the technical aspects of AI development with a deep understanding of social, ethical, and racial equity considerations.

Third, we need to build power. We need to build our influence. Now, you might be thinking, wait a minute, aren’t the tech companies already omnipotent and omniscient? Isn’t government already incredibly powerful? Aren’t the big foundations powerful? Well, yes, in certain ways, there’s no denying that. But I would humbly propose that those of us thinking about public interest technology, ethical standards, regulatory frameworks, and responsible practices for AI, and about transparency and accountability in the deployment of technology, are not yet powerful enough to ensure that technology will be used responsibly well into the future. That’s the whole reason why we’re here, right?

I say all that recognizing how much benefit AI can potentially deliver: easier access to information, meaningful signals drawn from vast volumes of big data noise, and more time freed up for personal interaction and leisure, to name just a few. And yet many of those benefits could come at a huge cost to society. Will intellectual property, sensitive data, and privacy be protected? Will all users have equal access to resources? Will algorithmic bias erase Black history, much as the book bans sweeping across our country are doing? Will it entrench patterns of systemic racism in who gets hired, who gets a mortgage, who gets good medical care, who gets accurate voting information, and even who goes to prison? Or will algorithms be responsible, fair, and inclusive, and even help us close those longstanding gaps?

These are big, weighty questions and challenges, and I know we can’t address them alone. So, I’ll highlight a few examples of how we’re working with grantee partners to grapple with them together. In each example, our goal is to ensure that a wide variety of voices (civil society, academia, grassroots communities, communities of color) are part of the conversation about the development and deployment of AI technologies, demanding that these innovations serve to improve societal well-being and promote justice. In the words of Mutale Nkonde, founder of AI for the People, “techno-social means that AI is not simply a technology in a vacuum, but a phenomenon that affects people’s lives in very real ways.”

So here’s my third of the threes: examples of how Surdna is beginning to apply a sociotechnical, or techno-social, lens to our work. The first is related to ensuring safety and doing no harm, the second is about improving democratic participation, and the third is about partnerships that enable a variety of stakeholders to unpack these ideas, work together strategically, and, hopefully, gain the influence to prioritize the public interest.

So let’s start with safety, and I’m not just talking about physical safety, but also other ways to prevent and mitigate harm. We see examples of harm or potential harm everywhere we look, from the displacement of jobs in the creative industries to the unfair targeting of people by police. Fighting this type of bias is one of the central goals of AI for the People, one of Surdna’s grantee partners. For the past five years, they’ve been fighting racial bias in the algorithms that determine people’s access to opportunities and other life outcomes, developing a policy framework to ensure civil rights protections, and creating cultural narratives that increase people’s awareness of the intersection between technology and racial justice. One example is their advocacy for the introduction of the 2019 Algorithmic Accountability Act in Congress, language that was also integrated into the American Data Privacy and Protection Act of 2022.

The second example builds on the “do no harm” theme by highlighting the state of our democratic system, which feels particularly imperiled this year. We’re all seeing self-interested parties employ chatbots to spread misinformation and disinformation, and it’s likely to get worse before it gets better. Some of these efforts are fueling the disenfranchisement of Black voters, undermining Americans’ trust in the integrity of our institutions, and reinforcing false narratives that divide our people and communities.

So, how do we keep those nefarious forces in check? Well, at the very least, we need public interest evaluation of these chatbots’ impact on our democracy. One of our grantees, Proof, has launched the AI Democracy Projects to help inform public understanding of AI risks, develop innovative ways to evaluate and benchmark the performance of AI datasets and technology, and create high standards and expectations for bias-free information that AI chatbots should be able to deliver. This is a step toward greater transparency in the field. If successful, this and related efforts will help hold industry accountable to public interest goals.

The third and final example is about the importance of partnerships across sectors. I’ve already shared my reasons why I’m glad Surdna is a member of PAI, including the need to ensure that ethicists and public interest leaders are more influential in the development and deployment of AI. I’ll just underscore that the Partnership on AI also creates space for a wide variety of stakeholders (racial justice advocates, corporations, climate activists, and financial institutions) to dig deep on questions and sectors where AI is already having an impact, such as the intersection of labor and the economy, media integrity, and inclusivity in the development of AI tools, to name a few. It’s a partnership that I know will only grow in value as AI adoption grows.

I’ll conclude by restating the themes of our three examples:

1) the necessity of investing in AI safety to combat racial bias,

2) the imperative of holding industry accountable for equitable outcomes, and

3) the importance of collaborative efforts with multiple sectors and stakeholders to ensure responsible AI development and deployment.

These are all interconnected and essential to achieving a just and equitable society in the age of AI.

I’ll leave you with one final thought. What would it take to optimize the positive potential of these powerful technologies, and to develop ways to prevent and minimize the downsides? We spend a lot of time talking about doing no harm and mitigating harms. What would it look like to develop AI tools that can help repair historical harms? In other words, a “reparative AI.” What AI tools can help us bridge divides across race, class, and even ideology to develop solutions that benefit us all?

Thank you to the Partnership on AI for having me. I’m so looking forward to working on this agenda with all of you, together.

