Altruistic Awesomeness, Your Challenge

So you woke up in this universe, which contains not only yourself, but the planet on which you live, a few billion galaxies, religious grandmothers, cookies, weekends, and downloadable series you can watch any time. Eventually you noticed that people are far more intentional than cookies; people always want something. Everyone also told you that you happen to be a person.

Then you asked the obvious question: ¿What do I want?

Let us assume you are a very intelligent person (we know you are, ¿right?).

Not just that: you have a deep grasp of biological evolution and of what life is.

You understand intelligence better than most, and you know the difference between a soul and an evolutionarily designed gadget whose function amounts, roughly, to that of an optimization program: one that optimizes for some rough guidelines brought forth by genes and memes in a silent, purposeless universe.

You know that human thinking works mainly through analogy, and that the best way to explain how the mind works is to divide what it does into simpler steps that can be accomplished by less intelligent systems. That is, you realize that explaining intelligence amounts to explaining it without using “intelligence” as part of the explanation.

You know that only a fool would think emotions are opposed to reason, and that our emotions were engineered by evolution to work in fluid, peaceful coalition with reason: not just as best friends, but as a symbiotic system.

You have perused the underlying laws of physics, and not only did you find Schrödinger’s equation, you understood that it implies a series of counter-intuitive things, such as: there are many worlds splitting all the time into even more worlds, and I am splitting just like everything else within this model. In fact, there is a tree with ever more branches, so I can always trace my self backward, but there are too many selves forward. You wonder whether you are all of them, or just one, and which.

[Image: Everett branches of a person, splitting into the future]

You basically have enough intelligence (which would probably correlate with some nice IQ measurement, in the 125+ range… but NEVER worry about IQ; that number is just a symbol to remind you that you are smarter than most of your teachers, your village elders, etc., and to give you motivation to actually DO the stuff you’ve been considering doing all this time. IQ is, basically, a symbolic statement that you may disrespect authority).

Then you thought: Wow, it turns out I feel very good being Nice to other people. I am a natural altruist.

¿How can I put my intelligence to work for a better world, without being sucked into the void of EVIL-DARKNESS? [your choice of master evil here, be it capitalism, common sense, politics, religion, stupidity, non-utilitarian charity, etc.]

Since you grasp evolution, you know that there is no ultimate morality. There is no one great principle, just as there is no one great god.

On the other hand, it seems that happiness is great, and that the best parts of life, both for you and for your friends, are those which are Awesome: amazing, fantastic, delicious, unbearably happy, unimaginably joyful. This, of course, as opposed to those parts which are miserable, unfortunate, sad, full of ennui, so awful you want to cry.

So you decided you want to have a life that is 1) Awesome and 2) Altruistic.

Now, you ask the second question:

¿What should I do?

Then you put all your intelligence to work on that, and started finding out what the other Awesome Altruists have been doing. You stopped reading Vogue and newspapers, and read about people who loved mankind and tried to do stuff: Gandhi, Mandela, Russell, Bill Gates, Angelina Jolie, Frederick II, Nick Bostrom, Bono, Bentham, Eliezer Yudkowsky, Bakunin, Ettinger, Mother Teresa, Marx, among a few others.

“Let us understand, once and for all, that the ethical progress of society depends, not on imitating the cosmic process, still less in running away from it, but in combating it.”
— T. H. Huxley (“Darwin’s bulldog”, early advocate of evolutionary theory)

You have started to analyse their actions counterfactually. You learned that the right question to ask, in order to figure out what really matters, is: ¿What is the difference between our world, in which X did what X did, and the world in which X had not done it?

You noticed that people have a blind spot around this question: they always forget to ask “¿Would someone else have done that, had X not done it?” So you set aside a special cozy place in your brain where a huge neon sign scintillates “If YES, then X’s work does not make a difference” every time you ponder the issue.

So you noticed that the most important altruistic acts are not just those with the greatest impact and the strongest effect. You realized that the fewer people are working on something of impact and effect, the more difference each one of them makes. There is no point in doing what will be done by others anyway; what should be done is that which, if you did not do it, would not get done at all.
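A minimal sketch of that test in code (the scenarios and every number below are invented purely for illustration): what counts is not the outcome that follows your action, but the gap between the world where you act and the world where you do not.

```python
# Toy illustration of counterfactual impact (all numbers hypothetical).

def counterfactual_impact(outcome_if_you_act, outcome_if_you_dont):
    """The difference your action makes, once replaceability is accounted for."""
    return outcome_if_you_act - outcome_if_you_dont

# Crowded cause: had you stayed home, someone else would have done nearly the same.
crowded = counterfactual_impact(outcome_if_you_act=100, outcome_if_you_dont=98)

# Neglected cause: had you stayed home, nothing would have happened at all.
neglected = counterfactual_impact(outcome_if_you_act=40, outcome_if_you_dont=0)

print(crowded, neglected)  # 2 vs 40: the "smaller" project makes the bigger difference
```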

Applying this reasoning, you have excluded most of your Awesome Altruists from the list of people it would be great to be like.

Some remain. You notice that those who do are all either very powerful (money-wise) or transhumanists. You begin to think about that…

¿Why is it, you ask, that everyone who stands a chance of creating a much, much better universe is concerned with these topics?

1) Promoting the enhancement and improvement of the human condition through use of technology

2) Reducing the odds of catastrophic events that could destroy the lives of, say, more than 50 million people at once.

3) Creating a world, through extended use of technology, in which some of our big unsolved problems no longer exist. (Ageing, unhappiness, depression, akrasia, ennui, suffering, idiocy, starvation, disease, the impossibility of keeping a backup of oneself in case of a car crash, not having a very, very delicious life, bureaucracy, and Death, to name a few.)

It is now that you begin to realize that, just as science is common sense applied over and over again to itself, just as science is iterated common sense, transhumanism is iterated altruistic awesomeness.

Sometimes, something that comes from science seems absurd to our savannah minds (splitting quantum worlds, remote-controllable beetles, mindless algorithms that create mindful creatures). But then you realize that if you take everything you grasp as common sense, and apply common sense once again to it, you will get a few things that look a little less commonsensical than the first ones. Then you do it again, a little more. And another time. Each step takes you only a little further away from what your savannah mind takes as obvious. But 100 steps later, we are talking about light coming from a huge exploding ball of hydrogen and helium very far away, which disturbs space in predictable ways and which we perceive as sunlight. We call this iterated common sense Science, for short.

Now, ¿what if you are a nice person, and you enjoy knowing that your actions made a difference? Then you start measuring it. It seems intuitive at first that some actions will be good; telling the truth, for instance. But in further iterations, when you apply the same principles again, you find exceptions like “you are fat”. As you go through a few iterations, you notice the same emotional reaction you felt when common sense was slipping away while you learned science. You start noticing that giving to beggars is worse than giving to organized institutions, that your vote does not change who is elected, that education pays off in the long term, and you understand why states are banning tobacco everywhere. You rediscover the classic “prevention is the best remedy”. Here is the point where you became a humanist. Congratulations! Very few have gotten this far.

It turns out, though, that you happen to know science. So there are more steps to take. You notice that we are in one of the most important centuries in evolution’s course, because memes are overtaking genes, and we have just found out about computers and the size of the universe. We are aware of how diseases are transmitted, and we can take people’s bodies to the Moon, and their minds across most of the Earth’s surface and to some other planets and galaxies. So you figure that once we merge with technology, the outcome will be huge. You notice it will probably happen within your lifetime, whether you like it or not.

It will be so huge, in fact, that there is probably nothing you can do, in any other area whatsoever, that stands an awesome altruistic chance against increasing the probability that we end up in a Nice Place to Live, and not in “Terrible Dystopian Scenario Number 33983783, the one in which we fail to realize that curing cancer was only worth it if it was not necessary to destroy the Earth to carry out the computations needed for the cure”.

Dawkins points out that there are many more ways of being dead than alive; there are more designs for unsustainable animals than for sustainable ones. Yudkowsky points out that there are many more ways of failing in our quest to find a Nice Place to Live than of succeeding. Design space is huge, and the Dystopian space is much greater than the Utopian space. Also, the two are not complementary: ruling out dystopia does not by itself give you utopia.
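A toy simulation of that design-space argument (the space, the targets, and every number below are made up for illustration): when good configurations are a narrow target in a huge space, random outcomes almost never land on them, while failure regions get hit constantly.

```python
# Hypothetical "design space": 2**30 possible configurations of a world.
import random

random.seed(0)
BITS = 30
TRIALS = 100_000

def is_utopian(config):
    # Invented narrow target: every single bit has to be set just right.
    return config == (1 << BITS) - 1

def is_dystopian(config):
    # Invented broad failure region: lose more than half the bits and it's bad.
    return bin(config).count("1") < BITS // 2

samples = [random.getrandbits(BITS) for _ in range(TRIALS)]
print(sum(map(is_utopian, samples)) / TRIALS)   # ~0.0: utopias are vanishingly rare
print(sum(map(is_dystopian, samples)) / TRIALS) # ~0.43: failure is easy to hit by chance
```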

So you kept up your Altruistic Awesomeness reasoning with your great intelligence. Guess what: you found out that other people who do that call themselves “transhumanists”, and that they are working either to avoid global catastrophic risks, or to create a world of cognition, pleasure, and sublime amazement beyond what is currently conceivable to any earthling.

You also found out that there are very few of these people. This gave you mixed feelings.

On the one hand, you felt a little worried, because no one in your tribe of friends, acquaintances, and authorities respects this kind of thinking. They want to preserve tradition, their salaries, one or another political view, the welfare state, teen-tribal values, the status quo, ecology, their grades, socialist ideals, or something to that effect. So you were worried because you identified yourself as something different from most people you know, and your ideals do not necessarily hold the promise of gaining you status among your peers, since those ideals relate to the greater good of all humans and sentient life, present and future, including your peers themselves, who simply have no clue what the hell you are talking about, and are beginning to find you a bit odd.

On the other hand, when you found out that there are few, you felt like the second shoe salesman, who went to an underdeveloped land and sent a message to the king after his friend, the first salesman, had sent another from the northern areas of the land.

First Salesman: Situation Hopeless, they don’t wear any shoes…

Second Salesman: Glorious Opportunity, they don’t have any shoes yet!

It took you a long time to learn all this science and to deeply grasp morality. You have crossed dark abysses of the human mind in which many of our greatest have failed. Yet you made it through, and your Altruistic Awesomeness was iterated, again and again, undeterred by the daunting tasks required of those who want to truly do good, as opposed to just pretending. The mere memory of the whole process gives you chills. Now, with hindsight, you can look back and realize it was worth it, and that the path that lies ahead is paved, unlike hell, not with good intentions, but with good actions. It is now time to realize that if you have made it through this step, if all your memes cohered into a transhumanist self, then congratulations once again, for you are effectively part of the people on whom the fate of everything we value rests. ¿Glorious opportunity, isn’t it?

Now take a deep breath. Fill your lungs. Think about how much all this matters, how serious it is. How awesome it is. Feel how altruistic you truly are, from the bottom of your heart. How lucky you are to be at once so smart and so genuinely nice, and to be born at a time when people like you are so few, so very few, that what you personally choose to do will make a huge difference. It is not only a glorious opportunity; it deserves to be remembered as one of life’s most precious gifts. This feeling is disorienting and incandescent at the same time, but for now it must be put in a safe haven. Get back to the ground, watch your step, breathe normally again, and let us take a look at what lies ahead of you.

From this day on, what matters is where you direct your efforts. ¿How are you going to guarantee a safer and more plentiful future for everyone? ¿Have you checked out what other people are doing? ¿Have you considered which human values you want to preserve? ¿Are you aware of Nick Bostrom, who is guiding the Future of Humanity Institute at Oxford towards a deep awareness of our path ahead, and who has co-edited a book on global catastrophic risks? ¿Do you know that Eliezer Yudkowsky figured it all out at age 16 after leaving high school, and has been working on a friendly form of artificial intelligence, trying to stop anyone from making the classic mistake of assuming that a machine would behave or think as a human being would? ¿Have you found out yet that Aubrey de Grey is dedicating his life to an institution whose main goal is to end the madness of ageing, and has raised millions for a prize to be awarded when someone stops a mouse from ageing?

The issues that face us are not trivial. It is very dangerous to think that just because you know this stuff, you are already doing something useful. Beware of things that are too much fun to argue about. There is actual work that needs to be done, and on this work may rest the avoidance of cataclysm, the stymying of nanotechnological destruction. The same line of work holds the promise of a world so bright that it is as conceivable to us as ours is to shrimp. A pleasure so high that the deepest, most shining emotions a known drug can induce are to depressed, orphaned loneliness as one second of this future mental state is to a month of known drugs’ paradisiac peaks. Thinking about it won’t cut it. Talking about it won’t cut it. There is only one thing that will cut it: work. Loads of careful, conscious, extremely intelligent, precise, awesomely altruistic, and deeply rewarding work.

There are two responsible things to be done. One, which this post is all about, is spreading the word: showing the smart, altruistic, awesome people around that there are actual things that can be done, should be done, are decisive on a massive scale, and are not overdetermined by someone else doing them anyway with the same effects.

The other is actually devising utopia. This has many sides to it. No skilled, smart person is below the threshold. No willing, altruistic, awesome fellow is unneeded. Everyone should be trying. Coordination is crucial. To increase the probability of utopia, you either decrease the probability of dystopia, cleaning the available future space of terrible places to live, or you accelerate and increase the odds of getting to a Nice Place to Live. Even if you know everything I’ve been talking about up to here, a good description of what devising utopia amounts to, feels like, and intends would take about two books, a couple dozen equations, some graphs, and at least some algorithms… (here are some links which you can take a look at after finishing this reading).
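As a toy decomposition of those two levers (every number below is hypothetical), the chance of ending up in utopia can be sketched as the chance of avoiding catastrophe multiplied by the chance of steering to a Nice Place to Live once catastrophe has been avoided; improving either factor raises the product.

```python
# Hypothetical numbers: two levers for raising the probability of a good outcome.

def p_utopia(p_catastrophe, p_nice_given_survival):
    """P(utopia) = P(no catastrophe) * P(Nice Place to Live | no catastrophe)."""
    return (1 - p_catastrophe) * p_nice_given_survival

baseline = p_utopia(p_catastrophe=0.30, p_nice_given_survival=0.20)  # 0.14
safer    = p_utopia(p_catastrophe=0.20, p_nice_given_survival=0.20)  # 0.16 (reduce risk)
steered  = p_utopia(p_catastrophe=0.30, p_nice_given_survival=0.30)  # 0.21 (better steering)

print(baseline, safer, steered)
```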

This post is a centralizer. If you have arrived at this spot, and you tend to see yourself as someone who agrees with a third of what is here, you may be an awesome intelligence floating around alone which, if connected to a system, would become an altruistic engine of power beyond your current imagination.

I’m developing transhumanism in Latin America. No, I’m not the only one. And no, transhumanism has no borders.

Regardless, I’ll be taking any offers of work (¿got time? ¿got money? ¿got enthusiasm? send it along) in case someone feels like it. I’ll also advise (as opposed to co-work with) any newcomers who are lone riders: lone wolves, and people who, in any case, do not like working alongside others.

Here, have my e-mail: diegocaleiro atsymbol gmail dotsymbol com

There is a final qualification that must be made to the “¿got time?” question. Seriously, if you are an altruist, and you are as smart as we both know you are: ¿what could possibly be more worth your time than the one thing that will make you counterfactually more likely to be among those who ended the misery of Darwinian psychological tyranny, and helped inaugurate the era of everlasting, quasi-immortal happiness and vast, fast, astonishing intelligence which defies any conception of paradise?

If you do have a proper, intelligent response of more than five lines to the above question, please do send it to my e-mail. After all, there is no point at which I’ll be completely convinced I have arrived at the best answer. I’ve only been researching the “¿what to do?” question for 8 years. To think I had arrived at the best possible answer would be to commit the Best Impossible Fallacy, and I’m past that trivial kind of mistake.

Otherwise, in case you still agree with the two hundred of us that transhumanism is the most moral answer to the “¿What should I do?” question… then please send me your wishes, profile, expertise, curriculum, or just how much time you have to dedicate to it. This post is a centralizer. I’m trying to bring the effort together, for now you know: there are others like yourself out there. We have thought a lot about how to make a better world, and we are now working hard towards it. We need your help. The worst that could happen to you is losing a few hours with us and then figuring out that, in your conception, there are actually other things which compose a better meta-level iteration of your Altruistic Awesomeness. But don’t worry, that will not happen.

Here, have my e-mail: diegocaleiro atsymbol gmail dotsymbol com

Two others have joined already. (EDIT: after writing this text there are already six of us.) The only required skill is intelligence (and I’m not talking about the thing IQ tests measure); being a fourteen-year-old is a plus, not an onus, as is having published dozens of articles on artificial intelligence. Dear Altruistic Awesome, the future is yours.

But it is only yours if you actually go there and do it.

5 comments on “Altruistic Awesomeness, Your Challenge”

  1. “In order to get at you individually, I must talk in the first person. I have to get you to drop modesty and say to yourself, ‘Yes, I would like to do first-class work.’ Our society frowns on people who set out to do really good work. You’re not supposed to; luck is supposed to descend on you and you do great things by chance. Well, that’s a kind of dumb thing to say. I say, why shouldn’t you set out to do something significant. You don’t have to tell other people, but shouldn’t you say to yourself, ‘Yes, I would like to do something significant.’”

    http://www.paulgraham.com/hamming.html
