Thoughts on the OpenAI Foundation
An Excess of Capital
This is an edited version of my submission to Dwarkesh Patel’s blog prize, intended to be an exploration of novel ideas rather than a considered recommendation for what they should do.
With OpenAI’s new raise at an $852B valuation, the OpenAI Foundation’s stake is now worth $180B, and the Foundation has pledged $25B over the coming years to Life Sciences and AI Resilience. Meanwhile, Anthropic’s cofounders have pledged to donate 80% of their wealth. Yet nobody seems to have a concrete idea of how to deploy this much capital productively to “make AI go well”.
As a new fund launching in 2026, the OpenAI Foundation is joining an already fairly crowded impact-driven philanthropy landscape, just as other players are also ramping up their spend (e.g., Coefficient Giving, Schmidt Sciences, or Renaissance Philanthropy). After 2027, we might also expect Donor-Advised Funds downstream of Anthropic employees to be another source of billions, with goals that partly overlap with the Foundation’s. Almost all existing donors also share the belief that AI is moving fast, so capital is better deployed sooner rather than later.
Incubation
A common way to approach this allocation problem is to survey the landscape of directions, rank them by impact, and grant accordingly—for example, AI resilience, pro-social applications of AI like helping to cure disease, or economic impact studies. Indeed, existing players like Coefficient Giving already do this and donate in the hundreds of millions a year. But how can funding scale up 10x or 100x beyond the existing ecosystem? What new opportunities exist, and what are the real bottlenecks once money is no longer an issue?
We can look at the case studies of cybersecurity and biosecurity. Interventions in these areas have gained popularity on the AI-risk side in recent years, since they aim to make the world more resilient to diffuse AI capabilities and their downstream risks. This general direction, known as def/acc or d/acc, has the advantage of improving the odds that “AI goes well” on the default trajectory, rather than trying to affect some low-probability but high-damage scenario, or aiming for a “moonshot”-style AI policy intervention.
Def/acc as a whole has nice properties for absorbing lots of capital: its goals are achieved through concrete interventions in the world, like researching new applications of AI, deploying infrastructure across the economy & society, and advocating for specific, actionable policies. The spread is broad enough to support many new organizations, a handful of which could grow to absorb at least $1B over their lifetimes. This creates a venture-capital-like dynamic: organization sizes follow a power law, so successfully deploying capital at scale depends on having many “shots on goal” to produce a small number of outsized outcomes.
Incubating or encouraging enough organizations to do this would be heavily talent-constrained at every stage: finding the founders needed to start new organizations, and the higher-level problem of finding talented “general managers” for each subfield. Since the Foundation can pursue deep research into different areas to narrow down promising ideas for new organizations, maximizing its capital-allocation potential via incubation looks like becoming a large venture studio: a fund with concrete, well-scoped ideas that searches for the founding team and closely incubates the new organization. For reference, Sutter Hill, one of the most successful venture studios, deployed $3.1B in 2021, so deployment on the level of billions per year is possible this way.
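The “shots on goal” intuition can be made concrete with a toy Monte Carlo simulation. All parameters here are illustrative assumptions, not estimates: lifetime capital absorption per organization is drawn from a Pareto distribution, with a tail exponent of roughly 1.16 (the value that produces an 80/20 concentration):

```python
import random

def simulate_portfolio(n_orgs, alpha=1.16, x_min=1.0, seed=0):
    """Draw lifetime capital absorption (in $M) for each incubated org
    from a Pareto (power-law) distribution and summarize the portfolio.

    alpha: tail exponent (~1.16 gives roughly 80/20 concentration)
    x_min: minimum org size in $M (illustrative floor)
    """
    rng = random.Random(seed)
    # Inverse-CDF sampling from a Pareto distribution: P(X > x) = (x_min / x)^alpha
    sizes = [x_min / (1 - rng.random()) ** (1 / alpha) for _ in range(n_orgs)]
    total = sum(sizes)
    biggest = max(sizes)
    return total, biggest, biggest / total

for n in (10, 100, 1000):
    total, biggest, share = simulate_portfolio(n)
    print(f"{n:>5} orgs: total ${total:,.0f}M, largest ${biggest:,.0f}M ({share:.0%} of total)")
```

With a tail exponent this close to 1, a single outlier typically accounts for a large fraction of total deployment in any given run, which is why the number of incubated organizations, not the average bet size, drives how much capital the portfolio can ultimately absorb.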
But should the Foundation fund for-profits or non-profits in any given area1? Would a for-profit startup scale more quickly to the >$100M spend level, and ultimately have a larger impact? Or are there unique advantages to creating a 501(c)(3), such as less pressure to make money and the ability to focus entirely on research or building public infrastructure? Prior cases of non-profit-to-for-profit conversion (OpenAI, as well as smaller examples like Edison Scientific or Apollo Research) suggest that for-profit can be the more impactful choice when available, thanks to more predictable capital, stronger talent attraction, and ultimately a higher ceiling on scale. So where the Foundation can be opinionated on directions like AI for biosecurity, it should pursue this approach of incubating organizations that could scale to usefully absorb capital.
Capital as Leverage
However, this alone is likely insufficient to deploy a meaningful chunk of the Foundation’s resources, especially if we expect its total assets to increase rapidly over the coming years. So what other, more creative options are available? The answer is to focus on interventions where capital itself is the bottleneck: using the Foundation’s assets as leverage to underwrite, market-make, or otherwise reduce the cost of capital for AI-related investments that seem broadly positive. Here are three examples:
An AI Catch-Up Facility. Low- and middle-income countries like India, the Philippines, and Nigeria face a combination of often-weak state capacity and an economic shape primed for disruption from AI. Catch-up growth, enabling these countries to skip stages of economic development, may become harder as AI erodes the value of low-skill knowledge work. These governments will need to be more dynamic to adapt well. An AI Catch-Up Facility might be jointly capitalized by the Foundation, development finance, and the governments in question—and would aim to subsidize the deployment of vendor-neutral AI at below-market rates to improve state capacity. A first focus could be tax and customs systems, where increasing automation can simultaneously reduce corruption, improve revenue, and increase optionality for these governments during rapid economic transitions.
An AI Risk Reinsurer. Many commercial insurers have not yet launched AI-specific insurance products, and underinsured but socially valuable organizations like hospitals, utilities, and local governments are unlikely to buy them when they do. The Foundation could capitalize a reinsurance vehicle to give commercial insurers confidence and provide a “catastrophic loss” backstop for risks including rogue agents, AI-enabled cyber- or bio-attacks, or more prosaic incidents like jailbreaks in deployment.
An AMC Program. Advance market commitments (AMCs) are binding contracts to purchase products once they are developed and meet certain criteria, such as quality and price. AI-driven R&D could create an explosion of new potential products, some of which may be capital-intensive or risky to scale to production. By guaranteeing an AMC for categories of AI-driven research advances (e.g., in the biosciences), the Foundation could create a “pull” to incentivize earlier formation of, and investment in, new companies & research institutes.
Other examples in this category include creating a compute futures market to reduce the risk of a frontier AI lab bankruptcy, or underwriting zero-interest loans for small manufacturing shops to adopt robotics faster.
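The contractual core of an AMC is simple enough to sketch in a few lines. This is a toy model under assumed terms (the field names, criteria, and numbers are hypothetical, not a real instrument): the commitment pays nothing until a product clears the quality and price criteria, then buys up to a committed volume at a guaranteed price.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AdvanceMarketCommitment:
    """Toy AMC: a binding pledge to buy up to `units_committed` units at
    `price_per_unit`, but only for products that clear both criteria."""
    price_per_unit: float     # guaranteed purchase price per unit ($)
    units_committed: int      # maximum volume the funder will buy
    min_quality: float        # quality threshold the product must meet
    max_market_price: float   # price ceiling the supplier must offer ($)

    def payout(self, quality: float, market_price: float, units_supplied: int) -> float:
        # No payout unless the product meets the quality and price criteria.
        if quality < self.min_quality or market_price > self.max_market_price:
            return 0.0
        # Pay the guaranteed price on supplied volume, capped at the commitment.
        return self.price_per_unit * min(units_supplied, self.units_committed)
```

The “pull” incentive comes from the cap and the criteria: a funder’s maximum exposure is known in advance (`price_per_unit * units_committed`), while suppliers see a credible revenue floor worth investing against before any product exists.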
Each of these shares a simple principle: use the Foundation’s assets as a “big stick” where the activity is socially valuable, private actors underinvest, and the Foundation’s guarantee can unlock other capital sooner. If the Foundation’s assets grow in value to the hundreds of billions, as many expect, these strategies may be among the few realistic ways to deploy most of its capital in service of its mission.
Thank you to Tamay Besiroglu for the compute futures market idea. I’m now returning to regular publication here, after my long hiatus. I have much to share coming over the next few weeks!
1. Note that there are many avenues for philanthropy to legally fund for-profits with non-dilutive capital, including long-term 0%-interest loans with repayments tied to achieving large scale, tightly scoped grants, or advance commitments to buy the core product if it’s valuable to society.