Based on a talk delivered at the conference on Existential Threats and Other Disasters: How Should We Address Them? May 30-31, 2024 – Budva, Montenegro – sponsored by the Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Center for Practical Ethics.
Introduction
For twenty years, I have been pointing to old-age dependency ratios as an argument for universal basic income and for investing in anti-aging therapies that keep elders healthy longer. A declining number of young workers supporting a growing number of retirees is straining many welfare systems. Healthy seniors cost less and work longer. UBI is more intergenerationally equitable, especially if we face technological unemployment.
But as a person anticipating grandchildren, I find the declining-fertility side of the demographic shift more on my mind. It is apparently on the minds of a growing number of people, including folks on the Right, from those worried that feminists are pushing humanity toward suicide, or that there won't be enough of their kind of people in the future, to those worried about the health of innovation and the economy. The Left's reluctance to entertain any pronatalism is understandable, given the reactionary ways it has been promoted. But I believe a progressive pro-family agenda is possible.
When the influence of futurist philosophy on AI regulation became a big story last year, a spotlight fell on the conservative, anti-feminist pronatalism promoted by people like Elon Musk, which was in turn blamed on the effective altruists and longtermists. So, I intend to outline the argument for a politically progressive pronatalism given the central premises of longtermism. I propose that we should take into account the future interests of the currently living, and possibly the as yet unborn, and that the adoption of liberal pronatal policies would be in both groups' interest. Unfortunately, radical uncertainty imposes a sharp discount on our ability to take any actions on their behalf, or even to know whether pronatal policies would be in their interest. So, preserving and expanding liberal and social democracy is something we can do to ensure that future generations can make collective decisions in their interests with better information.
Longtermism
Longtermists start with the assumption of "temporal impartiality," that future lives have the same value as lives today, and then try to determine what actions today will have the maximum positive influence on the long-term future. Strong longtermists accept that this may oblige significant sacrifices today for hypothetical future lives. Weak longtermists argue for discounting future lives so that future interests don't overwhelm current interests.
Longtermists draw on more than a century of utilitarian thought since Sidgwick, but sociologically they are a product of the effective altruism movement. Effective altruists attempt to apply consequentialism to charitable giving and public policy, supporting the causes likely to reduce the most suffering. For effective altruists and longtermists, existential risks or "x-risks," risks that could extinguish all future value and all potential future lives, become very high priorities, a style of reasoning critics deride as a "Pascal's Mugging."
Some effective altruism groups like Open Philanthropy have programs devoted to reducing x-risks like bioweapons and nuclear war. Still, the most popular x-risk in tech milieus is runaway AI, especially since the publication of Nick Bostrom's Superintelligence. So there are now many affluent people for whom futurism, effective altruism, longtermism, and AI x-risk mitigation have fused into a quasi-religious mission. For people outside of this milieu, however, it seems evident that we can argue for mitigating x-risk without reference to 10^25 people in the year 3 billion.
Very few advocates of these ideas are billionaires, but advocacy by people like Musk and Sam Bankman-Fried brought them far more funding and attention. Hundreds of millions of dollars and crypto coins have flowed into philanthropic support of effective altruist causes and charities, especially those associated with "AI safety." About a hundred "AI safety" experts have been embedded in federal agencies and Congress, and they actively lobby for their brand of AI regulation in the UK and European Union. OpenAI and Anthropic were founded by people with this "superalignment" agenda.
Musk and SBF's reputational collapse and the enormous wealth trying to shape AI regulation spotlighted the growing influence of these ideas. The Left just saw white tech bros promoting science fiction scenarios to distract from corporate crimes, poverty, racism, and war. But after the failed coup at OpenAI was partly attributed to effective altruist board members, longtermism and "AI safety" are also being challenged by libertarian titans of industry as hostile to capitalism and innovation, who champion "effective accelerationism" instead.
Ironically, some longtermists have pointed out that the maths can make low-probability positive interventions with potentially astronomical future benefits as important as x-risk mitigation. So, statistical traps and strange attractors can also be found in the positive intervention ledger. Adopting pronatalist policies that ensure that the global population grows instead of shrinking could be as important as ensuring that we don't commit collective suicide. For consequentialists, all else being equal, more lives produce more aggregate utility. Parfit added that a society that reached maximum aggregate utility through population growth would have a lower average utility than we would find attractive, which he called the "repugnant conclusion." While there is no wisdom in repugnance, any attempt at taking future people into account has to grapple with "population ethics": whether we are obliged to ensure future people are born, who they should be, and how many would be best.
Priors: Eudaemonia, Capabilities Theory, Liberalism
Turning to my intuitions, I have four priors. First, that consequentialism is the appropriate logic for public policy. For instance, I advocated for a Quality-Adjusted Life Year (QALY) approach to regulating human enhancement therapies in Citizen Cyborg.
Second, I believe that the consequences public policy should seek are eudaemonic, not hedonic, something like the Sen/Nussbaum "capabilities approach," which attempts to operationalize public policy for flourishing lives, ensuring both freedom to and the enablement to pursue life projects.
Third, liberal and social democracies in general, and reproductive freedom in particular, are the best regimes for maximizing human flourishing and capabilities, at least so far. Reproductive freedom guarantees the ability to pursue life projects for everyone, especially girls and women.
Fourth, I accept that we should adopt impartiality and reject kinship bias. While we may expect kinship bias in daily life, we should not value our own descendants over others in public policy.
Radical Uncertainty and Epistemic Discounting
Even if we believe that we should take future interests into account, we are not obliged to try if we are completely uncertain about what we can do to improve the future.
As Sidgwick said 140 years ago:
There is no abstract reason why the interest of future generations should be less considered than that of the now existing human beings; allowance being made for the greater uncertainty that the benefits intended for the former will actually reach them and actually be benefits. (Sidgwick, 1883)
It is amusing to me that longtermists are the familial descendants of the concept of the Singularity, which equated the advent of AGI with entering the event horizon of a black hole. Post-Singularity scenarios ranged from apocalypse to utopia, but the levers to pull for one outcome or the other, or even what the choices might be, were implied to be beyond our capacity to comprehend. Our best bet was to build friendly AIs in the hope that they would take over, crush the evil AIs, and figure out what is best for us.
Today, the Singularitarians' grandchildren are trying to statistically quantify whether the population count at the end of time would be higher from a temporary pause on AI development.
For me, the sources of radical uncertainty include:
· Chaos and non-linear complexity in natural and social systems
· Uncertainties about the nature and moral standing of future persons (animals, posthumans, machines, hybrids, collectivities, etc.)
· Axiological uncertainty – values will change rapidly
· The poor track record of futurist predictions and the eschatological biases in our far-future imaginaries
The future will be stranger than we can now imagine, the epistemic horizon is less than a century, and our descendants will bring different values and better information to their decisions. Any sufficiently advanced futurism is indistinguishable from eschatology. Trying to think about what we can do for the people in the 22nd century is pointless, and I am inclined to radically discount lives between now and 2100.
Person-Affecting View and Non-Identity
If we include the interests of hypothetical future people who could be born between now and 2100, the initial assumption would be that the more people there are, the more utility there will be, albeit with declining certainty. However, adopting the "person-affecting view" (PAV) takes the pronatal imperative off the table and asks instead what the interests of currently living people are expected to be. PAV is politically appealing since its advocates argue that we only have commitments to other people alive now, who are also the people we need to convince to act. Some may be moved by the argument that they should sacrifice for unborn people, but we should start with current people's self-interests. As a Buddhist Parfitian, however, I would also apply an epistemic discount to the interests of the currently living, since our identity with our future selves declines over time; the child does not predict the elder.
If we roughly model the life expectancies of currently living persons and apply a steady discount rate that reduces all moral weight to 0 by 2100 due to epistemic uncertainty and non-identity, it yields something like this weighting.
In this accounting, actions that we take today that pay off by 2035 are worth six times more than identical attempts to impact the year 2080. In the case of pronatalism, this accounting argues we can be six times more certain about the social benefit of a baby born in 2035 than one born in 2080.
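The exact discount curve is not specified above, so as a purely illustrative sketch, here is one way to reproduce the six-to-one weighting: an exponential annual discount truncated to zero at 2100, with the rate reverse-engineered from the 2035/2080 ratio. The roughly 4% rate, `BASE_YEAR`, and `moral_weight` are my assumptions, not the author's actual model.

```python
import math

# Hypothetical illustration: an exponential annual epistemic discount,
# truncated to zero at 2100, with the rate chosen so that
# weight(2035) / weight(2080) = 6.
BASE_YEAR, HORIZON = 2024, 2100
RATE = math.log(6) / (2080 - 2035)  # ~0.04, i.e. roughly a 4% annual discount


def moral_weight(year: int) -> float:
    """Epistemic weight on outcomes in `year`: 1.0 today, 0.0 from 2100 on."""
    if year >= HORIZON:
        return 0.0
    return math.exp(-RATE * (year - BASE_YEAR))


print(round(moral_weight(2035) / moral_weight(2080)))  # 6
```

Under these assumptions the implied annual discount comes out near 4%, which happens to sit in the range of rates used in conventional cost-benefit analysis.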
However, the strong assumption of epistemic opacity by 2100 is doing most of the work here. Including the interests of babies likely to be born between now and 2100 only changes the shape of the curve. Applying the same rate of discount to the median and pessimistic (lower) population estimates from the International Institute for Applied Systems Analysis (IIASA) only marginally changes the shape of our future moral weights, so that actions we take to impact 2035 are only four times more weighty than in 2080.
Slowing Population Aging and Decline
The IIASA is an Austrian think tank that includes trends in family planning and education in its modeling, and its population model is used by the Intergovernmental Panel on Climate Change. The IIASA's median model predicts global population peaking in the 2080s, then beginning a slow decline to less than 10 billion by 2100. Their lower growth model predicts a peak in the 2060s, declining to 8 billion by 2100. Also by 2100, the proportion of youth and working-age adults in the global population is expected to decline, and the proportion of seniors is expected to double or triple. That we will need to adapt to an older and smaller global population this century is as certain as any prediction can be, and many parts of the world are already struggling with this demographic shift.
Considering all the other factors around population growth, such as migration and environmental sustainability, is a longer discussion. Suffice it to say that sustainability is more impacted by technological innovation than by population, and that liberalizing migration would allow a better balance between shrinking and growing nations while there are still growing nations. Setting much of that aside, we have good economic reasons to assume that the future well-being of both existing people and the unborn will be better the more slowly populations age and the less they shrink. So the interests of the currently living and of the babies to be born this century warrant at least near-term pronatalist interventions, even if our certainty about the benefits of more babies declines to zero by 2100.
Two factors that will change how we think about well-being and population are how long people live and how much labor we can replace with automation. Radical life extension will be a net good but will force us to adapt to a grayer, smaller future more quickly. A future with radically extended life expectancy would be so different that we have an even stronger rationale for epistemic discounting.
As to automation, much of the argument for the impacts of population decline on well-being rests on its economic impacts, from the burden of pensioners on tax-payers to labor shortages and reduced economic vitality. We may find that an automated economy can provide a high level of well-being for everyone without population growth. Automation and robotics will partly address a decline in young workers, and an automated economy could more easily support a new normal of longer lives and even lower birth rates.
The Value of Freedom and Democracy
My rejection of the conservative framing of pronatalism is based on my commitment to reproductive freedom, feminism, and liberal individualism, which in turn can be grounded in capabilities consequentialism. Historically, liberal and social democracies have been best for promoting individual freedom and collective empowerment, and thus for maximizing individual and aggregate flourishing.
In particular, the choice of whether and how many children to have shapes all of life's other options, making reproductive freedom essential for capabilities consequentialism. While there are lifeboat situations in which we might conclude "You three have to have kids," only policies that encourage but do not require child-bearing are consistent with maximal future well-being.
Liberal and social democracies are also the best systems we have found so far for aggregating collective interests into policy. 2100 may have better mechanisms for social choice, but from 2024, more liberal, egalitarian, and democratic societies still look like the best bet, conceptually and empirically. Insofar as liberal and social democracy and reproductive rights are on the defensive, promoting these social orders should also be a moral priority.
Progressive Pronatalism
If we accept that there are moral grounds for pronatalism and reject any restriction of reproductive freedom to achieve pronatal goals, we are left with creating more generous financial incentives to become parents. Unfortunately, most of the experiments with pronatal financial incentives so far, from Denmark to Japan, have had only marginal impacts on birth rates.
We probably won't be able to return to the replacement rate (2.1 children per woman) this century, but I also believe no country has really tried to meet the price point yet. Child subsidies could be more generous, and daycare and university could be free. A society that acknowledges parenting as a part-time job worthy of social compensation and genuinely invests in each child as a precious resource to be nurtured to their full potential would have a higher birth rate. Social democratic policy analyst Matthew Bruenig recently packaged the kinds of policies I have in mind as a "Family Fun Pack":
• Child allowance (tax credits and subsidies)
• Generous parental leave policies
• Free childcare
• Free pre-K
• Free healthcare
• Free school lunch
I would add free higher education.
Reflecting on the radical changes to the political economy that a more generous, purely financial, pronatalism would require, we can understand why conservatives disparage their practicality in favor of an (even more quixotic) return to faith and family.
While progressives warmly endorse these policies, they also often believe calling these policies "pronatalist" implies that governments should have population targets and restrict reproductive freedom. While I share the anxiety that pronatalism can provide ammunition for reactionary attacks on reproductive freedom, I worry progressives are ceding the argument to reactionaries as the world becomes more focused on the problem of falling birth rates.
Longtermism: Right Questions, Suspect Answers
I am in awe that the longtermist bundle of futurist philosophy has become so influential and controversial. While I am hostile to the reactionary and elitist, even racist, appropriations of these ideas by self-interested tech bros, the work that the "effective altruist" and "AI safety" communities have attempted is extremely important. Most of the people in these communities are liberals, not crypto-racists or corporate shills. Their ideas are as vulnerable to cooptation by self-interested elites as any, but no more. The biggest error was not being explicit enough about the need for social and political change to promote freedom and equality in the future. Being more explicit about the expected impacts of different political and economic arrangements on future utility might make fundraising for effective altruism or AI safety a little harder, but would help demonstrate the communities' political diversity and disambiguate them from problematic celebrities and philanthropists.
Longtermism asks fundamental questions and promotes the kind of consequentialism that should guide public policy. However, longtermist thinking about the far future reads more like eschatology than policy analysis. The field could benefit from more epistemic humility. Existential risks don't need to be jazzed up with far-future speculation to be pressing priorities.
As to pronatalism, the future well-being of people in this century will probably be better in a world that makes it as easy as financially possible to become a parent while preserving reproductive choice, even if this doesn't get us back to replacement. And by 2100 or so, all bets are off.