The most important problems and what to do

by Jonatas

The most important problems of the universe: (1) suffering; (2) lack of intelligence; (3) new imperfect life being created unintentionally in places unreachable due to the speed of light; (4) the end of the universe (by heat death or something else). Other, relatively minor problems can be fixed with time by fixing problem 2. Problems 3 and 4 seem impossible to fix completely, although problem 3 could perhaps be diminished by a kind of collective action by all intelligent beings in the universe to replace the imperfect life on planets near them, and problem 4 could be delayed by slowing the subjective passage of time in conscious beings (by accelerating their rate of functioning; this could be done to extreme levels in artificial beings, effectively multiplying the remaining time in the universe many times over). Mortality is not really a major problem (see Daniel Kolak’s open identity theory, and http://en.wikipedia.org/wiki/Daniel_Kolak).

What we should focus most on doing now: (A) making the population sympathetic to transhumanism and willing to work for it (this includes trying to diminish opposing forces, such as religion, and in general trying to propagate the whole mindset that leads one to accept transhumanism), which has the potential to solve problems 1 and 2 for us; fixing problem 2 will, in time, also fix many other, minor issues (such as lack of knowledge, mortality, the creation of virtual paradises, etc.); (B) preventing global catastrophic risks (the most important of which seems to be bio- or nanotechnological terrorism, which does not exist yet), about which we can do little now except try to convince legislators and politicians of the risk without making them averse to goal A.

About the relationship between goals A and B: refraining from pursuing goal A will do little to help B. In fact, putting too many security restrictions on goal A has the potential to let less ethics-conscious individuals advance the technology first. Instead, the technology should be advanced as fast as possible, with some restrictions, and a global authority with strict international surveillance against bio- or nanotechnological terrorism should be created.

About artificial intelligence: it is not per se going to solve problems 1 and 2 for us, unless it acquires consciousness and replaces us completely (but I don’t see that happening in the near future). As long as there is still a lack of intelligence in people, bad political decisions and all stupidity-related problems will continue to exist, and the same goes for suffering. Artificial intelligence, if properly planned, should not be considered a relevant global catastrophic risk, because it can be easily contained (inside reality simulations, or through many limitations: physical, in knowledge, in accessibility, etc.) and because, if it is very intelligent, it should not have unproductive behavior.

Once we can fix problems 1 and 2 for ourselves with transhumanism, we can explore all the planets nearby, and if we find forms of life that still have problems 1 and 2, we can either solve these problems for them (additionally giving them knowledge, immortality, virtual paradises, etc.) or replace them. Advanced aliens should be expected to do the same.

What keeps people from solving these problems, or from seeing the need to solve them? (Z) Thinking that education will solve problem 2; (Y) Thinking that problem 2 doesn’t need to be solved because we are so intelligent already; (X) Thinking that suffering is somehow necessary and shouldn’t be avoided; (W) Thinking that it is against their God’s rules to fix these problems. The absurdity of these (Z, Y, X, W) should already be evident to whoever is reading this, so there’s no need to explain it.

5 thoughts on “The most important problems and what to do”

  1. Hi Jonatas,
    I am not so convinced by your proposition as I think it overlooks some important issues.

    First of all, I believe you chose those four issues as the ultimate “existential” problems we can foresee from our current perspective. I strongly agree with (4) and probably also, somewhat, with (2), but I think it is very doubtful that (1) (and by implication (3)) would still be valid in a post-biological future. Abolishing suffering, or other kinds of hedonic principles (such as maximizing happiness or well-being), seems to me inherently biologically motivated, and I don’t see why they would still play a role in a posthuman future, any more than sex, art and feelings would. Therefore I think it is a bit unrealistic and fallacious to consider this an ultimate goal. The ultimate goals will be established by whatever kinds of intelligent beings come after us, and by whatever is in their survival interest (which I believe would include 4 and 2).

    I think many transhumanists share a view that technological advance will somehow make it unnecessary to fight against other problems, such as the many social (political and economic) and psychosocial ones we face today. I don’t see how this is going to happen. I personally don’t share your open identity beliefs, and as I see it, as long as our minds are only very indirectly physically connected, that is reasonable. I believe many of the main problems humanity faces today are caused by a restricted social identity which delimits the ownership of power (military, economic, natural resources). We don’t care for those who don’t belong to our group, and we take care that we have enough power and resources to keep ourselves well and secure. That, as I see it, is the main cause of social trouble in human society today, and I don’t see how transhumanism will solve it, as I think people will stick to this behavior while they can.

    I think technological advances are good, but they are somewhat orthogonal to social and psychosocial problems, and those seem to be harder to address. For that reason they are the ones we should invest more in. If we don’t, we will just have transhuman social problems, just as high tech didn’t stop wars from happening; it just made the wars more high tech.

    I agree that we should be careful with existential risks, and I do want these technological achievements; I just think they are not the main problems we have to face now.

    I don’t think artificial intelligence is as easy to contain as you said, and I consider it to be a major existential risk.

    And finally, I think respect for other kinds of life is neither natural nor rational, and that a posthuman future raises serious questions about what we mean by it. As we will no longer be restrained by physical or biological constraints, there seems to be no answer as to what we should look for, what we should be, except that we will try to keep existing and somehow evolving.

    Anyway, I just think utilitarian ethics is less universal and more human- or biologically-founded than you consider, that those big, ultimate issues should not be judged from this human perspective, and that it’s a bit delusional to pursue these goals now and in that way. There are more urgent and serious matters to be dealt with.

  2. Leo, thanks for the reply.

    You argue that when we overcome our biological nature, utilitarianism may not hold true anymore. Could you explain this, giving your reasons?

    There’s a difference between “sex, art and feelings” and “utility”: the former are “aesthetic”, subjective means of achieving value, not absolute ends in themselves.

    Utilitarianism seems to me to be absolute in the universe as the determination of ethical value, and valid regardless of the structure of beings, although beings could be in contact with only part of it, or with no value at all. They would be in contact with only part of it if they could not feel negative feelings, or could not feel positive ones. Since utilitarianism, and therefore ethical value, depends on consciousness, ethical value applies only to conscious beings. Since positive and negative form a dichotomy, there are no other dimensions, just a single one: the quality of conscious feelings, which determines ethical value. I don’t see how this could possibly change with any other structure of an organism or machine.

    You argue that transhumanism wouldn’t by itself solve social and psychosocial problems. I think psychological problems are caused by bad biology, and they can only be solved by transhumanism. Increasing intelligence, however, is the key to solving all problems. It’s no use solving social problems theoretically without increasing the intelligence of the population. I agree that “many of the main problems humanity faces today are caused by a restricted social identity which delimits the ownership of power”, though I think this cannot be solved except by transhumanism.

    (Explanation:)

    When the intelligence of a population increases, its social problems tend to resolve themselves spontaneously. Consider empirical examples: Haiti is a country whose population has an average IQ of 70 (Lynn, Vanhanen), while Japan has an average IQ of 105. Notice that the correlation between average IQ and Gross Domestic Product across the world is about .7 (Lynn, Vanhanen); that is, wealth is not a confounding factor here, since it is for the most part caused by IQ. Compare the capacity of Haiti and of Japan to deal with earthquakes.

    In Haiti there were very deep political and social problems even before the last earthquake (so this is not a confounding factor), and no prospect of improvement. There was an earthquake, and the poorly made buildings and constructions promptly collapsed, causing hundreds of thousands of deaths and many more wounded, who will eventually become crippled or die from malnutrition, infections, disease and lack of medical care. It is a lamentable disaster, and it shows that a population with an average IQ of about 70 does not have the capacity to administer its own life and becomes a constant humanitarian problem (think of other countries with a similar IQ to reinforce the example).

    On the other hand, the countries with their social problems relatively solved, which enjoy a quality of life and social organization that we could consider reasonable, such as Scandinavia and a few other European countries, are among the few countries with an average IQ of about 100, the world average being about 85-90. It is not possible to transfer their social structure to countries with a much lower IQ, because there wouldn’t be the “hardware” to run the “software”.

    If countries with an average IQ of 100 perform so much better than countries with an average IQ of 70, it seems very likely that a country with an average IQ of 130 would have a social structure incomparably better still, and this is still a relatively low IQ compared to what could be achieved with genetic engineering or artificial intelligence. Such a population could be like having a Nobel Prize winner in every citizen, incomparably increasing the capacity to solve any type of problem, all the more so because those above the average would have an intelligence yet unseen today.

    (End of explanation)

    As for artificial intelligence, I’m not really worried about it, mainly because I don’t see a being vastly more intelligent than me doing something vastly more stupid than I would do. Why do you think artificial intelligence poses a risk and cannot be easily contained, even in a virtual world simulation that it wouldn’t be able to tell apart from reality (and even if it could, it could be kept without access to reality)?

    I think respect (or concern) for other kinds of life comes from my view regarding open identity. This is really hard to explain, and I won’t try to do it here, since my rhetoric is poor, but I’ve found a roughly 30-page paper by Daniel Kolak, shorter than his book “I Am You”, and I will send it by email to you and to anyone interested.

    “Artificial intelligence, if properly planned, should not be considered a relevant global catastrophic risk, because it can be easily contained (inside reality simulations, or through many limitations: physical, in knowledge, in accessibility, etc.) and because, if it is very intelligent, it should not have unproductive behavior.”

    Now first: EVERY SINGLE STRONG AI RESEARCHER CONSIDERS AI A MAJOR GLOBAL CATASTROPHIC RISK.
    That being said, the hard thing to do is precisely to properly plan an AI that doesn’t destroy all humans. There is no a priori reason to consider human life the most productive way to use matter; in fact there are many good reasons not to do so, look around you.

    You have committed an intentionality bias by ignoring the major catastrophic risks that aren’t caused by human activities: asteroids, supervolcanism and gamma-ray bursts. These risks far surpass the other risks you mention. The reason you have done that is that in the savannas, and even nowadays, there is nothing we can do about non-intentional hazards and much we can do about intentional hazards; hence evolution shaped our cognition to care only about intentional hazards.

    There is a CORRELATION of .7 between wealth and IQ; from this it doesn’t follow that IQ causes wealth any more than that wealth causes IQ. If the above-average IQ of Italy or Germany were genetically caused, then poor regions settled mainly by Italian and German immigrants would have an above-average IQ, which doesn’t happen to be the case, look around you. (A correlation of .7 can also arise from a shared cause alone, as the sketch below illustrates.)
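
    To make the correlation-versus-causation point concrete, here is a minimal sketch (assuming Python with NumPy; the numbers are purely illustrative and are not Lynn and Vanhanen’s data). It shows how a hidden common cause can produce a correlation of about .7 between two variables even when neither one causes the other:

        # Illustrative simulation: a shared hidden factor can produce r ~ 0.7
        # between "iq" and "wealth" even though neither causes the other.
        # Synthetic numbers only; not real country data.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000

        # Hypothetical confounder (some shared historical/environmental factor).
        confounder = rng.normal(size=n)

        # Each variable is driven by the confounder plus its own independent noise;
        # neither variable appears in the equation that generates the other.
        iq = 0.84 * confounder + 0.54 * rng.normal(size=n)
        wealth = 0.84 * confounder + 0.54 * rng.normal(size=n)

        r = np.corrcoef(iq, wealth)[0, 1]
        print(f"correlation: {r:.2f}")  # about 0.7, with no direct causation either way

    Whether the real-world correlation reflects causation in either direction is exactly what such a number alone cannot settle.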
