Preventing Hell on Earth


To fulfill its mission, a human-centered paradigm as envisioned by the World Academy of Art & Science should combine optimism with pessimism. An essential meta-value is avoiding the bad, in addition to achieving “the good”. Realistic assessment of human beings is a must. An appropriate phased time horizon of 10 to 80 years should frame the paradigm. Evaluation of emerging science and technology with very dangerous potentials, such as those posed by synthesizing viruses and radical “human enhancement,” followed perhaps by human cloning and deep genetic engineering, is essential. Thinking ahead realistically about alternative futures of the human species as a whole, and about their drivers, is a must, giving due weight to the dangerous propensities as well as the virtues of human beings.
Only a small minority of humanity and its political leaders have the understanding essential for coping with the fateful choices increasingly facing humanity. Essential, inter alia, is the regulation of dangerous research and technologies, enforced by a strict global regime headed by a duly constituted and circumscribed global authority. An upgraded genre of political leaders within a redesigned democracy is essential. No human-centered paradigm should ignore such requirements.
All this leads to my suggestion to focus the paradigm on what is most important and urgent: what Dag Hammarskjöld appropriately called “preventing Hell on Earth”.

1. Introductory Note

This essay is a contribution to discourse on a human-centered paradigm, or set of guiding principles. It is largely based on my books Avant-Garde Politician: Leaders for a New Age (2014) and The Capacity to Govern: A Report to the Club of Rome (2001), which also detail most of the sources on which the present paper is based. But this essay focuses on “preventing Hell on Earth,” including averting self-destruction of the human species, which is at the center of concerns.

2. Realistic Vision

The conceptual framework for a human-centered paradigm, which is being developed by WAAS, aims at guiding action directed at assuring, as far as humanly possible, a better future for humans and humanity as a whole. Accordingly, it belongs to the category of “realistic visions,” in partial contrast to “realistic” in the narrow incremental sense of “the art of the possible,” but also in contrast to counter-factual utopian visions.

To fulfill its action-guiding aims, a realistic vision must meet three main criteria: (1) directed at well-considered and explicated values; (2) accepting constraints imposed by rigid features of reality; and (3) dealing with clarified time horizons phased according to the natural time cycle of the relevant issues.

It seems to me that the WAAS discourse on a human-centered paradigm meets the value criterion of advancing “the good” as accepted by the best of contemporary moral discourse and global declarations. But it misses an essential meta-value: avoiding the bad, which is distinct in many respects from achieving “the good”, despite some logical and operational overlaps. Also, most of the discourse ignores the very vexing issues of judging what endangers the welfare, and perhaps the existence, of humans and what enhances them, including emerging technologies that will be usable both for the better and the worse. Artificial Intelligence (AI for short), synthetic biology and human enhancement illustrate such domains of science and technology, in respect of which salient values are missing or at best underdeveloped. The question to what extent and under what conditions the processes and tools provided by novel science and technology are likely to advance human welfare or endanger it, and what to do about it, remains wide open.

Also missing is an overriding imperative which guides specific human-serving values and helps to establish an action agenda. “Preventing Hell on Earth,” with a continuously developing scope, is proposed as such an overriding imperative, as expounded in this essay.

Moving on to the “realistic” aspect, I have grave doubts about crucial assumptions concerning human beings, as well as unavoidable power structures, which nearly all discourse on a human-centered paradigm takes implicitly for granted. These are discussed below.

Furthermore, as far as I understand the publications and declarations dealing with the human-centered paradigm, the time horizons dealt with are not clarified. This undermines their essential realism by permitting “mental time travel” into undefined futures far beyond maximum foresight abilities, and thus makes the vision, at least in part, more an exercise in fantasy than creative but action-oriented contemplation. Therefore, I start my substantive discourse by proposing a phased time horizon.

3. Phased Time Horizon

The time horizon which I suggest for the paradigm is between the near future, say ten years, and a maximum of about eighty years, divided into phases as fits the specific domains under consideration.

Publications on expectations for the 20th century written around the end of the 19th century were completely wrong. All the more so, outlooks presuming to cover the rest of the 21st century are at least very doubtful and most likely largely mistaken, because of the accelerated rate and steeper degree of non-linear and contingent change, and also some phase-jumps, adding up to the beginnings of a largely opaque metamorphosis of the human condition.

Still, an effort, however provisional, to engage in thinking about the future, preferably in the form of more or less possible and in part likely “alternative futures” and their drivers, is of critical and perhaps fateful importance, because of emerging dangers in addition to novel opportunities that require proactive creative adjustments, most of which have to be radical rather than incremental.

Cascading into metamorphosis with habits, institutions and frames of mind largely fixated on rear-view mirrors is very dangerous. But dreaming of a never-never future will not help. Therefore, I adopt a time horizon long enough to encompass radical transformations foreseeable in part as between possible and likely (to use multimodal logic terminology), but short enough, taking into account the longer life expectancy of humans, not to get lost in too much speculation. Thinking and acting in time frames of between about 10 and 80 years meets these criteria, more or less.

Even within this relatively short time horizon, presently “inconceivable” events and processes are likely, resulting in harsh transition crises. Gearing up for them, and for using such crises as opportunities for necessary radical innovations which are not feasible without reality-undermining events, is essential and should be included in all humanity-centered paradigms. Thus, a mass-killing conflict using mutated viruses may clear the way for setting up a strict global security regime.

However, a longer time horizon is a must when we move from a human-centered paradigm to a human species-centered paradigm. This adds the long-term imperative to prevent any action that endangers the very existence of the human species, together with being very cautious about human enhancements that may change basic features of the human species.

Emerging technologies are likely to provide tools that may result in the end of humankind in one way or another (as studied, inter alia, at the Oxford University Future of Humanity Institute), in addition to the continuing possibility of nuclear self-destruction and escalating damage to the environment. Therefore, I suggest that these imperatives be added with absolute priority to any human-centered paradigm.

4. Rigid Realities

I have serious doubts about the underlying assumptions on human beings on which the proposed WAAS paradigm seems to rely, however unexplicated. As a mood-setter, let me take up for a critical look a widely accepted recommendation which illustrates a dangerous neglect of stubborn facts that should be regarded as rigid, at least within the proposed time horizon.

The idea of a global parliament elected democratically is often discussed as if feasible in the foreseeable future. But to demonstrate the illusory nature of such thinking for at least the next 80 years, and probably much longer, it is enough to mention the demographic fact that a global body elected according to the democratic principle of “one person, one vote” would be completely dominated by a few Asian countries. China, India and Indonesia alone add up to about 40 percent of humanity! This clearly would not be acceptable to most of the global powers, and rightly so given present and foreseeable states of being of large parts of humanity, in addition to undermining the pluralism of composition, in terms of civilizations, needed in a global parliament.

Mobilizing massive grass-roots support for measures essential for the welfare of humans is important and perhaps essential. Limitations on both nuclear weapons and climate-changing activities have benefitted from bottom-up pressures, however inadequately. But most of the emerging dangers to humans and the species as a whole are very complex, as are the required countermeasures. Thus, the potential dangers of AI are hotly debated, and what can be done about them is far from clear, all the more so as AI can provide enormous benefits for humankind. The same is true, mutatis mutandis, for synthetic biology and, most challenging of all, for human enhancement.

It is hard to imagine that large parts of humanity will understand the complexities of such domains, which tax to the utmost the capacities of the minds of outstanding philosophers, scientists and other highly qualified thinkers. Mass petitions and referenda on them therefore cannot make sense within the proposed time horizon. This illustrates critical issues on which only a very small percentage of humanity can express plausible opinions; and, much worse, on which politicians who lack any real understanding of the issues and of what is at stake will have to make decisions impacting the future of generations to come.

Critical for crafting human-centered paradigms are foundational assumptions on human beings. In particular, it is very dangerous and perhaps fatal to base a realistic vision on much too optimistic views on human beings while ignoring or underrating dangerous propensities built into them, as revealed throughout history and exposed by many psychological and sociological studies.

Without underrating the great importance of altruism, artistic creativity, advances in widely accepted humanistic values and other achievements of humanity over its history, which has its own ups and downs, let me focus on seven examples of very disturbing cardinal proclivities of the vast majority of human beings, as individuals, groups, and societies:

  1. Human beings have the dangerous propensity to regard it often as their moral duty to kill other humans, and also sacrifice their own lives in order to do so. “True believers” and fanaticism demonstrating this propensity are an integral part of human history and show no sign of disappearing or at least abating.
  2. Human beings seek power and superiority, wanting to be the “chosen” and “special,” while being envious of others who do so and often hostile towards them.
  3. Greed for more of what one or others like is a very strong attribute.
  4. Tribalism, in the sense of distinguishing between “us” and “others,” frequently accompanied by hostility to different “others”, is widespread.
  5. Humans seek leaders, look up to them, and follow them in doing good and often evil.
  6. In collectives, mass psychology phenomena take over, many of them full of dangerous potentials. Hopes that social networks and other internet collectives will reduce collective vices have not been realized, the opposite being just as likely.
  7. Even the most “civilized” of groups and societies seek “enemies to blame” and show signs of barbarism when put under pressure. The reaction of some of the European countries regarded as the most liberal of all to the influx of Muslim immigrants is just a relatively small indicator of how thin the veneer of “civilization” often is.

I do not presume to go, in this short essay, into the deeper layers of such features and their causes, as discussed, but not satisfactorily explained, by evolutionary psychology, genetics, depth psychology and so on. Most probably they are “animalistic” features built into humanity by evolutionary processes, which can also metaphorically be viewed as a kind of “original sin”. But one point needs emphasis: efforts to change such basic propensities through education into what is regarded in different periods and places as “better” ones have not proven themselves. Even totalitarian efforts to produce a “new human being” have failed dismally.

It would be too pessimistic to conclude that dangerous human propensities are immutable. During about 800 to 200 BCE there occurred in China, India, and the Occident the so-called “Axial Age,” which transformed human self-understanding and transcendental views in ways that still dominate most civilizations. It may be that a Second Axial Age is in the making, driven by the capacity of humanity to destroy or transform itself, hopefully together with future peak value creators, transforming human self-understanding for the better relatively rapidly, though this is far from assured. But this is too much of a speculation to serve as a basis for a new human-centered paradigm.

Alternatively, “human enhancement” by chemicals or genetic engineering, with all its dangers, may enable a “reengineering” which reduces dangerous human propensities, though the risks of doing so are surely very high. But as long as human propensities remain as they have been throughout the history of the species, and as they surely will be within the proposed time horizon and probably for much longer, all proposed paradigms must take them seriously into account. This is not done in most human-centered paradigms, which therefore suffer from a great deal of “wishful thinking”, making them, at least in part, nice utopian visions but not reliable foundations for action.

Yehezkel Dror: Emeritus Professor of Political Science, Hebrew University, Jerusalem; Fellow, World Academy of Art & Science
