
There is growing concern in the scientific community and the wider public about existential risks caused by scientific experiments and technological progress. An existential risk is one that threatens to cause the extinction of all human beings on earth. Hotly debated areas include, for instance, specific types of research in biotechnology, certain types of geoengineering, and the development of artificial intelligence (AI). One can think of a virus that is modified so that its pathogenicity is enhanced, enabling it to cause a global pandemic (so-called gain-of-function (GOF) experiments); or of experiments in which large amounts of sulfur dioxide gas are injected into the stratosphere so that more sunlight is reflected, potentially changing our climate system (so-called solar radiation management).

Discussing the topic of existential risks means dealing with essential questions concerning the future of humankind: What responsibilities do States, and even the international community as a whole, have to reduce and manage these risks? Can these risks be effectively minimized without disproportionately restricting legitimate science and technological progress? How can we rationally deal with high-risk scenarios in scientific research if the research is intended to solve problems of humankind (for instance, fighting global warming or an epidemic)? Are there gaps in the legal framework that must be addressed at the international level?

One important aim of the debate is to develop rules and standards that govern these existential risks. Institutes at highly ranked universities (Oxford/UK, Cambridge/UK, Boston/US) already deal with these kinds of existential risks, seeking answers to questions about the ethical limits of certain types of scientific and technical developments.

In our view it is important that legal arguments are brought into this debate. We want to stress the importance of human rights and other rules and principles of public international law, since they bind – depending on their source – at least a significant number of states in the international community and can be implemented by courts or through other institutional means. We want to explore the link between the precautionary principle and human rights, especially the duty to protect the right to life, and ask whether risk reduction duties binding all states can be deduced from it; we want to look into the questions of state liability and operator liability and how they are enshrined in international law as it stands today. And we want to explore whether there are other international law arguments that provide answers for the proper governance of existential risks. In doing this, we seek an "ethical" governance regime in which lacunae in the rights and legal principles are filled by ethical principles that are justified and coherent with the international legal order.

Existential and Catastrophic Risks

An existential risk is one that threatens to cause the extinction of all human beings on earth. This does not mean, however, that other risks are unimportant or may be neglected: in our view one must also deal, especially from an international law and governance perspective, with global catastrophic risks, i.e. risks that might – directly or indirectly – kill or injure many people without threatening the survival of humankind as a whole. Hence we seek a meaningful categorization of the different types of risks caused by research and technical developments. This will provide the analytical basis for treating like cases alike, which is important if we are to find rational and proportionate regulations to limit those risks.