The most important problems and what to do

by Jonatas

The most important problems of the universe: (1) suffering; (2) lack of intelligence; (3) new imperfect life being created unintentionally in places unreachable due to the speed of light; (4) the end of the universe (by heat death or something else). Other, relatively minor problems can be fixed over time by fixing problem 2. Problems 3 and 4 seem impossible to fix completely, although problem 3 could perhaps be diminished by a kind of collective action in which all intelligent beings in the universe replace the imperfect life on planets near them, and problem 4 could be subjectively delayed by accelerating the rate at which conscious beings function, so that more subjective time passes per unit of physical time; in artificial beings this could be taken to extreme levels, effectively multiplying the remaining time in the universe many times over. Mortality is not really a major problem (see Daniel Kolak’s open individualism, and http://en.wikipedia.org/wiki/Daniel_Kolak).
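
As a toy illustration of the time-multiplication point above, here is a minimal sketch; the speedup factor and the remaining lifetime of the universe are arbitrary assumptions chosen for illustration, not figures from this text:

    # Toy arithmetic for the subjective-time argument. Both numbers
    # below are hypothetical assumptions, used only for illustration.
    YEARS_UNTIL_HEAT_DEATH = 1e100  # assumed remaining physical time
    SPEEDUP_FACTOR = 1e3            # assumed acceleration of an artificial mind

    # A mind running SPEEDUP_FACTOR times faster experiences that many
    # subjective years per physical year, multiplying its usable time.
    subjective_years = YEARS_UNTIL_HEAT_DEATH * SPEEDUP_FACTOR

    print(f"Physical years remaining:   {YEARS_UNTIL_HEAT_DEATH:.0e}")
    print(f"Subjective years available: {subjective_years:.0e}")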

What we should focus on most now: (A) making the population sympathetic to transhumanism and willing to work for it (this includes trying to weaken opposing forces, such as religion, and in general propagating the whole mindset that leads one to accept transhumanism), since transhumanism has the potential to solve problems 1 and 2 for us, and fixing problem 2 will, with time, take care of many other minor issues (lack of knowledge, mortality, the absence of virtual paradises, etc.); (B) preventing global catastrophic risks, the most important of which seems to be bio- or nanotechnological terrorism, which does not yet exist. There is little we can do about B now, except try to convince legislators and politicians of the risk without making them averse to goal A.

About the relationship between goals A and B: holding back goal A will do little to help B. In fact, placing too many security restrictions on goal A could allow less ethics-conscious actors to advance the technology first. Instead, the technology should be advanced as fast as possible, with some restrictions, and a global authority with strict international surveillance against bio- and nanotechnological terrorism should be created.

About artificial intelligence: it will not, by itself, solve problems 1 and 2 for us, unless it acquires consciousness and replaces us completely (and I don’t see that happening in the near future). As long as people still lack intelligence, bad political decisions and all the problems related to stupidity will continue to exist, and the same goes for suffering. Artificial intelligence, if properly planned, should not be considered a relevant global catastrophic risk, because it can be easily contained (inside reality simulations, or through many limitations: physical, in terms of knowledge, of access, etc.) and because, if it is very intelligent, it should not behave unproductively.

Once we fix problems 1 and 2 for ourselves with transhumanism, we can explore all nearby planets, and if we find forms of life that still suffer from problems 1 and 2, we can either solve those problems for them (additionally giving them knowledge, immortality, virtual paradises, etc.) or replace them. Advanced aliens should be expected to do the same.

What keeps people from solving these problems, or from seeing the need to solve them? (Z) Thinking that education will solve problem 2; (Y) thinking that problem 2 doesn’t need to be solved because we are already so intelligent; (X) thinking that suffering is somehow necessary and shouldn’t be avoided; (W) thinking that it is against their God’s rules to fix these problems. The absurdity of these (Z, Y, X, W) should already be evident to anyone reading this, so there is no need to explain it.