Doomsday and Conscious Machines

Penultimate Draft, 17 Dec 2008

Please help me improve this article by commenting on it before it is sent to press.

Every Conscious Machine Drives us Closer to Death

“Every time the clock ticks ‘plus one’, ‘plus one’, ‘plus one’,

it will be telling you ‘one less’, ‘one less’, ‘one less’…”

Abril Despedaçado

The Doomsday Argument is alive and kicking. Since its formulation at the beginning of the Eighties by the astrophysicist Brandon Carter, it has gained wide attention, been strongly criticized, and been described through many different, and sometimes non-interchangeable, analogies. I will briefly present the argument here and, departing from Nick Bostrom’s interpretation, defend that doom may be sooner than we think if we start building conscious machines in the near future.

The Argument

From Bostrom [1996]:

The core idea is this. Imagine that two big urns are put in front of you, and you know that one of them contains ten balls and the other a million, but you are ignorant as to which is which. You know the balls in each urn are numbered 1, 2, 3, 4 … etc. Now you take a ball at random from the left urn, and it is number 7. Clearly, this is a strong indication that that urn contains only ten balls. If originally the odds were fifty-fifty, a swift application of Bayes’ theorem gives you the posterior probability that the left urn is the one with only ten balls. (P_posterior(L = 10) = 0.999990). But now consider the case where instead of the urns you have two possible human races, and instead of balls you have individuals, ranked according to birth order. As a matter of fact, you happen to find that your rank is about sixty billion. Now, say Carter and Leslie, we should reason in the same way as we did with the urns. That you should have a rank of sixty billion or so is much more likely if only 100 billion persons will ever have lived than if there will be many trillion persons. Therefore, by Bayes’ theorem, you should update your beliefs about mankind’s prospects and realise that an impending doomsday is much more probable than you have hitherto thought.

So what the argument states is simply this: if you are willing to concede that you are a random possible human, and you are aware that you are (approximately) the 60 billionth person on this planet, then you should be willing to shift your predictions about the end of the world (meaning the end of your class of people) to a much sooner time than you previously did.
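Bostrom’s urn figure can be checked directly. Here is a minimal sketch in Python; the fifty-fifty prior and the 10 versus 1,000,000 ball counts are the numbers from the quoted example:

```python
# Bayesian update for the two-urn example: one urn holds 10 numbered
# balls, the other 1,000,000; the prior is fifty-fifty; we draw ball #7.

def posterior_small_urn(prior_small=0.5, n_small=10, n_large=1_000_000):
    """P(the urn we drew from is the small one | we drew a ball whose
    number, like #7, is possible under both hypotheses)."""
    like_small = 1 / n_small   # chance of drawing any given ball from the small urn
    like_large = 1 / n_large   # chance of drawing any given ball from the large urn
    evidence = like_small * prior_small + like_large * (1 - prior_small)
    return like_small * prior_small / evidence

print(posterior_small_urn())  # ≈ 0.999990, matching Bostrom's figure
```

The posterior is overwhelming because drawing a low-numbered ball is one hundred thousand times more likely under the small-urn hypothesis.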

Several objections have been put forth against this standard formulation of the Doomsday argument, ranging from the counter-intuitiveness of the conclusion to claims that the analogy fails for many different reasons: that it has no temporal component, that birth ranks are indexicals, that one could not have been a merely possible human rather than an actual one, among others. Still, counterarguments have been offered against all these objections [Bostrom 1999, 2001], and it is far from clear that we have any reason to cast doubt on the central argument, let alone consider it refuted.

The most usual objections to the Doomsday argument rely on an intuitive misapprehension of the basic ideas underlying it, which is why I will quote another version of it here, from Bostrom [2001]. This version specifies a particular hypothesis regarding prior probabilities that will be used in this article as a basis for reasoning about the consequences of creating new forms of consciousness with regard to our distance to Doomsday.

The Self-Sampling Assumption and its use in the Doomsday argument

Let a person’s birth rank be her position in the sequence of all observers who will ever have existed. For the sake of argument, let us grant that the human species is the only intelligent life form in the cosmos. Your birth rank is then approximately 60 billionth, for that is the number of humans who have lived before you. The Doomsday argument proceeds as follows:

Compare two hypotheses about how many humans there will have been in total:

h1: “There will have been a total of 200 billion humans.”

h2: “There will have been a total of 200 trillion humans.”

Suppose that after considering the various empirical threats that could cause human extinction (species-destroying meteor impact, nuclear Armageddon, self-replicating nanobots destroying the biosphere, etc.) you still feel fairly optimistic about our prospects:

Pr(h1) = .05
Pr(h2) = .95

But now consider the fact that your birth rank is 60 billionth. According to the doomsayer, it is more probable that you should have that birth rank if the total number of humans that will ever have lived is 200 billion than if it is 200 trillion; in fact, your having that birth rank is one thousand times more probable given h1 than given h2:

Pr(“My rank is 60 billionth.” | h1) = 1/200 billion
Pr(“My rank is 60 billionth.” | h2) = 1/200 trillion

With these assumptions, we can use Bayes’s theorem to derive the posterior probabilities of h1 and h2 after taking your low birth rank into account:

Pr(h1 | R = 60 B) = Pr(R = 60 B | h1) Pr(h1) / [Pr(R = 60 B | h1) Pr(h1) + Pr(R = 60 B | h2) Pr(h2)] ≈ .98

Your rosy prior probability of 5% of our species ending soon (h1) has mutated into a baleful posterior of 98%.
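The same update can be sketched in a few lines of Python. The 5%/95% priors and the 200 billion / 200 trillion totals are Bostrom’s illustrative working assumptions, not established figures:

```python
# Bayesian update for the Doomsday argument with Bostrom's numbers:
# prior 5% that humanity totals 200 billion, 95% that it totals
# 200 trillion, and an observed birth rank of about 60 billion.

def doomsday_posterior(prior_h1, total_h1, total_h2):
    """Posterior probability of h1 after conditioning on a birth rank
    that is possible under both hypotheses (self-sampling assumption)."""
    prior_h2 = 1 - prior_h1
    like_h1 = 1 / total_h1   # any particular rank is uniformly likely: 1/total
    like_h2 = 1 / total_h2
    evidence = like_h1 * prior_h1 + like_h2 * prior_h2
    return like_h1 * prior_h1 / evidence

p = doomsday_posterior(prior_h1=0.05, total_h1=200e9, total_h2=200e12)
print(round(p, 3))  # ≈ 0.981
```

Note that the observed rank (60 billion) drops out of the final ratio: any rank possible under both hypotheses gives the same thousand-to-one likelihood ratio, which is what drives the prior from 5% to about 98%.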

Prior Probabilities

The greatest problem with using Bayesian reasoning in arguments such as the Doomsday argument is that we have no method whatsoever for determining the prior probabilities of the outcomes. We cannot know whether the possibilities range from there being, all in all, 100 billion humans to there being 100 trillion, or from 100 billion to a googol humans, nor how likely each option is. Since we do not know these prior probabilities, we must rely on one intuition or another about the probability distribution if we are to take our actual case into consideration.

Before turning to the concrete case of mankind in the early 21st century, I want to point out that at an abstract level the argument is sound and works no matter what the prior probabilities are. Even though we cannot say with any certainty by how much we should shift the probability of extinction within, say, 200 years, we can be sure that we should make the shift, and think of extinction as much more probable than we usually do. The abstract Bayesian reasoning is sound independently of the specific values involved, and therefore the belief that we are likely to go extinct sooner than we think is independent of any belief about how much sooner we should expect doom. What is important is to understand that this reasoning, if applicable, slides the probability towards a sooner catastrophe, and that any further considerations we apply within this line of reasoning will slide it towards or away from our new set-point, whichever it is.

For mankind in the 21st century, we have the datum that you are around the 60 billionth person ever to live, and since, as I said, we have no way of being sure about prior probabilities, we can use as a working hypothesis the same simplified case that Bostrom used: that the two prior possibilities are that there will be 200 billion or 200 trillion people during the whole history of man. This is just a working hypothesis, and it does not have to be anywhere near the truth for the consequences we draw from it to be useful. Even if it turned out that the options were 500 billion with 1/3 prior chance, 229 googols with 1/3 prior chance, and 12 with 1/3 prior chance, the sliding of our belief would still occur, and the reasoning remains sound as long as we are not epistemically aware of the prior probabilities (which we never will be, since they are prior).

If that is the case then, as Bostrom’s reasoning shows, we have grounds to believe that we are 98% likely to be in a world that will hold 140 billion more people, and 2% likely to be in a world that will hold 199,940,000,000,000 more people, which is a lot more than 140 billion.

But then along comes the question: how soon is that? Or, for that matter, how soon are the predictions made so far on the basis of other prior probabilities? In a recent article, Jason Matheny [2007] sums up a few predictions:

While it may be physically possible for humanity or its descendents to flourish for 10^41 years, it seems unlikely that humanity will live so long. Homo sapiens have existed for 200,000 years. Our closest relative, homo erectus, existed for around 1.8 million years (Anton, 2003). The median duration of mammalian species is around 2.2 million years (Avise et al., 1998).

A controversial approach to estimating humanity’s life expectancy is to use observation selection theory. The number of homo sapiens who have ever lived is around 100 billion (Haub, 2002). Suppose the number of people who have ever or will ever live is 10 trillion. If I think of myself as a random sample drawn from the set of all human beings who have ever or will ever live, then the probability of my being among the first 100 billion of 10 trillion lives is only 1%. It is more probable that I am randomly drawn from a smaller number of lives. For instance, if only 200 billion people have ever or will ever live, the probability of my being among the first 100 billion lives is 50%. The reasoning behind this line of argument is controversial but has survived a number of theoretical challenges (Leslie, 1996). Using observation selection theory, Gott (1993) estimated that humanity would survive an additional 5,000 to 8 million years, with 95% confidence.”

So the weather forecast is already dark grey, and here I intend to make it only worse. Going back to our assumption of 200 billion against 200 trillion, we have foreseen that there are probably only 140 billion of us coming along for the ride. Before going into all the birth rates and population predictions, we must stop and analyse what the “us” is when I say that there are 140 billion of us coming along for the ride.

The Reference Class Problem

The Doomsday argument works once you consider your birth rank in relation to your reference class: the class you belong to that matters for Doomsday reasoning. This could be any of these:

(1) Beings that have read, understood, and believed the Doomsday Argument

(2) Beings who could have mastered the argument

(3) Human Beings

(4) Conscious Beings

(5) Conscious Intelligent Beings

As things stand, there is no settled position on which of these reference classes we should count ourselves in when reasoning about Doomsday. The intuitive grasp is that we should count our birth rank as humans, but that can be deceptive, since there are no strict frontiers that determine humanity (or any of these classes), and some consider it likely that even you could one day become some sort of transhuman, super-human or post-human of a kind. Intuitively, that should not change the predictions about doom you made before you upgraded, so we have some reason to believe that class (3) is not the best bet for Doom predictions.

Most of us only care about our lives insofar as we are conscious, so that if someone were to keep us in deep anesthesia, in a coma, or in a dreamless sleep, most of us would not like the idea. We hold a tacit conception that what matters about us is consciousness, meaning that were we not conscious (i.e. if philosophical zombies were possible) life would be pointless. Also, we make ethical considerations regarding other entities in terms of consciousness: “don’t hurt that squirrel, he can feel it.” This is not a specific argument in favour of using the class of conscious observers when analysing global catastrophic risks, but it is a general argument in favour of favouring consciousness over other things, whatever consciousness turns out to be.

From now on I will assume that the important reference class when analysing the Doomsday Argument is indeed the class of conscious beings, and I will also assume that there is no such thing as half-conscious or partly conscious. We will pretend that it is very clear who is and who is not conscious, and that each conscious being counts equally in Doomsday reasoning (independently of how long it lives, how powerful its mind is, etc.).

We are also not particularly interested in knowing when the Doomsday of all humans, supercomputers and squirrels will be; not, at least, if we can instead know when the Doomsday of all humans and supercomputers only will be. So, even though the debate goes on about squirrels’ consciousness (and, why not, bats’ as well), we will consider our reference class to be observers who are both conscious and intelligent. This comes from the simple fact that we want to predict the Doom of these fellows, not of squirrels, nor of superpowerful intelligent unconscious machines. To be in our reference class, we demand intelligence from squirrels and consciousness from machines; if they do not present these to us, we stand where we are.

Conscious Observers in an Atemporal World

The underlying reasoning behind Doomsday presupposes a sort of atemporality that has been much discussed. Since we are counting as part of the reference class beings from the future, who do not yet exist, how can we use them in our reasoning? Two lines of objection have been put forth: one says that you cannot use them at all, since they do not exist, and another says that if the world is indeterminate (i.e. quantum physics etc.) then we cannot use them to calculate anything.

I think that these objections miss the point of the Doomsday argument. As Daniel Dennett said: “The future is going to happen, and that is true whether determinism is true or whether indeterminism is true; there is going to be a future”. There are two very different senses of being determinate. The more usual one is the classical formulation of determinism, epitomized by Laplace’s Demon thought experiment. We are asked to imagine an omni-intelligent being that can compute all the laws of physics (whatever they are) and that knows the position of all particles at one particular moment. By definition, if this demon is able to know the future and the past, then the universe is determinate; otherwise, it is indeterminate, or open. Then there is another, less used sense of determinate; let us call it God’s Eye Determination. Instead of the Demon, we have an omniscient God that knows all non-indexical facts, past and future, all the particles, everything that can be known by one being about the universe. A weak sort of determinateness, which is the one Dennett alludes to when forecasting the future, is the one in which this God knows the future. That only means that the future will come (if it comes) and that what happens in it will happen in it (it is as tautological as it sounds).

The reason I set out these two senses of determinate is that the objections against Doomsday that rely on the argument being temporal, whereas the urns with 10 balls or a million balls are not, mix up these two senses. For the mathematical assumption that you are a randomly chosen member of a reference class to work, all you need is God’s Eye Determination: there has to be a fact of the matter as to how many beings there will ever be, but it is completely irrelevant whether this information could be known by a Laplacean Demon, calculated by our best computers, or accessed in any other fashion. The reasoning that gives soundness to the Doomsday argument is completely independent of the future and of the level of determination of reality (in the sense of predictability). This may seem counterintuitive at first, but it is quite logical, since the Doomsday Argument is a mostly mathematical argument, which implies it probably needs very thin grounding in the nature of reality to work.

Can a Machine Be Conscious?

So the Doomsday argument is sound, works well, and predicts dark weather for our world, with not so many people (lato sensu) to come after you, since you are the 60 billionth person around. Let us now turn to the reference class. We have decided to consider only intelligent conscious beings as part of our supposed reference class, and that brings about the age-old question: can a machine be conscious?

Within philosophy of mind this is one of the most discussed topics of the late 20th century. For starters, there are at least four widely used senses of the word “consciousness”, which have been elegantly split up by Ned Block in Concepts of Consciousness. If we are phenomenal realists, like Block, Chalmers and Searle, that is, if we attribute reality to phenomenal qualities (i.e. qualia), then the sense that matters to us is the one Block calls P-consciousness (short for phenomenal consciousness). If we are materialistic monists, like Dennett, then what people call phenomenal consciousness actually stands for a bunch of interacting physical entities and their relations, not for phenomenal qualities. In this case, to ask whether a machine is conscious is to ask whether it can perform certain kinds of activities and behave in such and such a way; it is an empirical question.

I will remain neutral as to whether we should be phenomenal realists or materialistic monists. Since philosophers are allowed to suppose contradictory things, as long as they do so one at a time, I will work on both hypotheses.

      1. Phenomenal Realism is true: Supposing that phenomenal realism is true, it remains to be seen whether consciousness is a physical process (type or token identity and physical emergentism would qualify here) or non-physical (here come all sorts of dualisms). For the sake of brevity, I won’t discuss Idealism. Another option is that consciousness is in fact part of a physical process (property dualism, as well as qualia being the intrinsic nature of matter, opposed to the physical spectrum, which describes the relational nature).

      2. Materialistic Monism is true: An empirical theory of consciousness would have to account for everything we call conscious phenomena in an explanatory and clear way. Alternatively, it could be true yet undiscoverable (because we lack the means to perform such a discovery), but these details should not divert us from what matters for Doomsday, so we can assume that it is discoverable.

As many options as there are for the phenomenal realist, most of them have a non-investigable outlook. Dualist formulations are almost always unverifiable, epiphenomenalism in particular. Even within Cartesian dualism, if there were inter-substantial causality, all we could analyse from outside is that the physics, say, of a brain is not working as expected, but that does not entail that it is consciousness doing the job. If it is consciousness, we have no way to find out. And if consciousness is the intrinsic nature of matter, then, since all our measurement apparatus measures only relational aspects, we could not know whether a machine was conscious either.

As things stand, if phenomenal realism is true, we have no way of finding out whether a machine is conscious or not, and we are condemned to remain forever thinking through analogies, just as we do today with chimps and squirrels, guessing from their distance to us whether, and how much, they are conscious. So perhaps there could be conscious machines, but we would not be able to acknowledge them as such.

If, on the other hand, Materialistic Monism is right, then we can assume that we will find a standard definition of consciousness and more or less direct ways of testing whether it applies to different beings. Some believe we already have the necessary apparatus. In any case, it is technically feasible that we will one day devise a consciousness-meter and know whether machines are conscious or not. Note also that since we are assuming that we are conscious, it is a settled fact that there can be conscious machines, because there already are; it only remains to be seen whether we will be able to produce non-biological machines that are conscious as well.

Thus far I have addressed the epistemological grounds for machine consciousness, and argued that in both cases it is possible (it does not contradict any central thesis) that machines are conscious.

Both Phenomenal Realists (of most kinds) and Materialistic Monists would be ready to acknowledge at least the possibility of machine consciousness, so our reference class, which includes the future, seems to be increasing in size. But how much is it increasing?

How Long Do We Have?

The Doomsday argument purports to show that hypotheses with fewer individuals (say, 200 billion) are more likely than those with many individuals (200 trillion). Our reference class is much more likely to number in the billions than in the trillions. Now, what is the consequence of increasing the size of the reference class that we think will actually live? In other words, what is the consequence of thinking it likely that we will soon be able to create conscious machines?

For the argument, it is, unfortunately (I’ll explain soon), none. The only important data when reasoning are the set from which you decided you are a random sample and your birth rank within that set. That is well established: we have decided that we are around the 60 billionth person and that our reference set is that of conscious intelligent beings. That is all the information we need! You already know, right now, that the world is much more likely to have 140 billion more intelligent conscious beings than to have several trillion. If I add a new piece of information, it will not change your calculations, but I will do so anyway:

New information: Within the first half of the 21st century, we will be able to create intelligent conscious machines.

Many people, most prominently Ray Kurzweil, have defended this hypothesis as highly likely. Moore’s law seems to be still working, technology is developing quickly, brain-computer interfaces are getting better every day, IBM has a brain-simulation project, our best computers perform computations only 2 or 3 orders of magnitude below the human brain, etc. In other words, it is a likely possibility, and we should give it careful thought.

I said that it doesn’t make any difference for the Doomsday Argument, and that is true, but that does not mean it doesn’t make any difference for Doomsday itself. Doomsday, in our hypothetical scenario, is to take place whenever the 200 billionth conscious intelligent being is born, or created (remember that the argument works independently of the numbers we assumed; the same follows if we had chosen other numbers as prior possibilities instead of 200 billion). Doomsday will come not at a specific when, but at a specific if. If the 200 billionth being is born, then (per Armageddon?) the reference class will be destroyed (or stop reproducing). Since no one forecasts that humans or machines will suddenly stop reproducing unless they run out of fuel, Armageddon is more likely than immortality without children.

It is estimated that around 350,000 people are born every day, which amounts to some 130 million born every year. Supposing that current trends of decreasing population growth continue, we can say that the 21st century will see some 5 billion more people being born. That is not so bad (given that we did not assign any prior probability to, say, 63 billion all in all, because that would scare us too much). But now suppose that we do create intelligent machines; not only that, but machines that can create copies of themselves, just like we do. The difference is that they are much faster. There is no theoretical obstacle to their creating, say, 145 billion copies of themselves within 15 years. That is almost sure doom for us, and for them.
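To see why self-replication compresses the timeline so drastically, here is a back-of-the-envelope sketch in Python. The 145-billion-in-15-years figure is the hypothetical from the text, not a forecast; the calculation only shows what doubling rate that scenario implies:

```python
import math

# Starting from one machine that copies itself, the population doubles
# each generation. How many doublings, and how short a doubling time,
# would reach the text's hypothetical 145 billion copies in 15 years?
target = 145e9
years = 15

doublings = math.log2(target)             # doublings needed from 1 machine
doubling_days = years * 365 / doublings   # implied time per doubling

print(round(doublings, 1))   # ≈ 37.1 doublings
print(round(doubling_days))  # ≈ 148 days per doubling
```

In other words, a machine population that merely doubled every five months, a pace with no human analogue but no obvious engineering barrier, would exhaust the 200-billion bound well within a single human generation.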

It can still get worse. Let us suppose (also a highly likely possibility) that we create simulations of societies, just like our current videogames, but with conscious beings in them. One simulation could simultaneously run a very large number of conscious beings, say, 200 million. Or more, much more; the only limit is computational power, and that has been more than doubling every two years for decades.

So, how long do we have, in fact? It is impossible to forecast that, for a great number of reasons: (1) we do not have the prior possibilities and their probabilities; (2) we do not know if there were, or are, others in our reference class alive today (aliens etc.); (3) even if we did know that we are alone, and that the prior probabilities were such and such, this would still give us only a likelihood distribution, and we would have no way of telling which specific instance of it we were. Like all Bayesian reasoning based on unknown prior probabilities, the Doomsday argument is more an argument towards a shift in our current beliefs than a settlement of what we should believe.

In this article, I hope I have made a strong case for another shift. Even though we cannot be sure whether machines are conscious or not, whether we will ever build simulations, whether Moore’s law will keep its pace, etc., we have to shift upwards the fear we have of creating more individuals of our reference class (especially in ways that look dangerous). Doomsday, which will happen even if only in the heat death of the universe, shifts towards us every time a conscious intelligent observer is created, and we should really take that into consideration when making future plans about building intelligent machines, at least if our mathematical and computational abilities manage to make us understand what may be blatantly obvious to the machines, but not so much to old apes from the savannahs.

References

Bostrom, N. (1999). “The Doomsday Argument is Alive and Kicking”. Mind, Vol. 108, No. 431, pp. 539-550.

Bostrom, N. (2001). “The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe”. Synthese, Vol. 127, Issue 3, pp. 359-387.

Matheny, J. G. (2007). “Reducing the Risk of Human Extinction”. Risk Analysis.

Franceschi, P. (2005). “A Third Route to the Doomsday Argument”. Preprint, University of Corsica, revised May 2005.

Bennett, M. & Hacker, P. M. S. Philosophical Foundations of Neuroscience.

9 comments on “Doomsday and Conscious Machines”

  1. Hi, since you’ve asked for criticism, I’ll show a critic side, but that doesn’t mean that I didn’t find your article good and valid. I’ll try to criticize the points I’ve found weak, and probably my criticism could also be criticized, that’s the best way for our reasoning and our knowledge to evolve.

    The argument about how many people will still exist is entirely based upon how many people already exist or existed, and in what’s the chance that we are in the group of the people that already exist or existed, since it is impossible to be in the group of those who will come to exist (except in statistical or mathematical arguments, it seems). Making the question in this time, what’s the probability that we are in the 60 billion out of X? That’s 100%.

    All that we know about this is that we are somewhere between the numbers 54 billion and 60 billion, approximately—depending on whether we consider some of our ancestors more or less human, but whatever, for the sake of argument—and that this interval is the only possible at this time. Such a statistical inference can only be based in a population of 6 billion, because it can only consider the present time. A statistical calculation of the probability of a sample cannot be based in a population that does not exist in the present.

    What’s the chance of the Doomsday question being made in the present time and not in the past? It is precisely because it was made in the present that we are in this number, and not in another. The probability of us being in the number 60 billion, whatever the total number of humans, is 100%. We are not in a random point in the scale of all possible points in human history, because in the time of the Doomsday question, the present time, the only possible point is that in which we are.

    The argument tries to apply statistics and mathematics in a situation in which they’re not applicable, that asks more for practical and empirical reasoning. The probabilities found of us being in one or other point in the scale of total human beings are based in the error of supposing the probability of getting a real sample from an unreal population.

    Since the Doomsday argument is completely blind to all real conditions, if we make the question again when humanity reaches 200 billion, it would give us the number 400 billion as the most likely, and so on. At 120 human beings it would give us 240, and at 240 it would give us 480. Besides, the total number of human beings is completely irrelevant in temporal terms if our life span is greatly changed; if we decide to maintain a smaller and better served population; if we become something else from humans, as mentioned in the article.

    The Doomsday argument supposes extinction as certain and unavoidable, which is really a wrong assumption. We have a final limit when the universe ends, but even this could perhaps be avoided by some future human intelligence. As you’ve said: “Doomsday, which will happen even if only in the heat death of the universe”.

    However, the arguments based not on blind statistics but on concrete reasons that offer us motives for considering our probability of survival are valid. I think you could modify the article to show how despite the Doomsday argument being invalid, other motives make our survival questionable. Caution should be taken not to consider some motives that presuppose a static, stupid and unprepared humanity: we can run out of oil and still develop new sources of energy; we can run out of drinking water and find a cheap way to get it from salt water (like the dikes in the Netherlands); even our planet or solar system may collapse in the distant future and we may still be able to survive. Still, the argument of substituting entirely humanity for only hypothetically conscious beings may pose a real risk.

    I think even in dualism we could probably find out by indirect inferences if a given system generates consciousness or not. When we have a better knowledge of how consciousness is generated in the human brain, and of how it is not generated, we can delineate the differential aspects involved specifically with it, and have a considerable flexibility to modify the other irrelevant aspects, including important and desirable modifications in artificial organisms. An entirely robotic consciousness is a further away and riskier step, but I have the strong impression that before substituting all humans for only hypothetically conscious robots we’ll have a much higher intelligence ourselves, by which we may be better able to evaluate whether these robots will have consciousness or not. Nonetheless, the argument presented in the article is very pertinent, since it brings this problem to attention right from the present time.

    Sorry for only criticizing. Overall I think the text is quite good.

  2. There are few things that I enjoy more than disagreeing with intelligent people. One of the best opportunities is the Doomsday argument.

    “Since the Doomsday argument is completely blind to all real conditions, if we make the question again when humanity reaches 200 billion, it would give us the number 400 billion as the most likely, and so on. At 120 human beings it would give us 240, and at 240 it would give us 480.”
    Call this sentence above (S1)
    you have criticized the temporality of the doomsday argument.
    As I understand it, your critique is:

    P1: The doomsday argument implies (S1)
    P2: (S1) is absurd

    C : The Doomsday argument is absurd.

    I beg to differ. I think it is true that the Doomsday argument implies S1. So people in the future should reason like that, and expect catastrophes that are not so near. I agree that everyone will see doom somewhat further off than they are.
    That does not go against the core argument. The argument does not say when doom is coming, but how likely it is to come in the next X years… Likelihoods are never predictions, but they are reasons for making predictions.
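    The Bayesian update this relies on can be made concrete. A minimal numeric sketch of the urn example from the article's opening (assumed numbers taken from that example: a 10-ball urn versus a 1,000,000-ball urn, equal priors, ball number 7 drawn):

```python
# Bayes update for the urn example: drawing ball #7 is a hundred
# thousand times likelier from the 10-ball urn than from the
# 1,000,000-ball urn, which overwhelms the fifty-fifty prior.
def posterior_small_urn(prior_small=0.5, n_small=10, n_big=1_000_000):
    like_small = 1 / n_small   # P(draw #7 | small urn), for any number <= 10
    like_big = 1 / n_big       # P(draw #7 | big urn)
    joint_small = prior_small * like_small
    joint_big = (1 - prior_small) * like_big
    return joint_small / (joint_small + joint_big)

print(round(posterior_small_urn(), 6))  # -> 0.99999
```

    This reproduces Bostrom's posterior of about 0.999990 for the left urn containing only ten balls; the Doomsday step then replaces urns with hypotheses about the total number of humans.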

    If you knew 100% surely that by doing X you have a 90% chance of dying, you wouldn’t do X.

    In the same fashion, if you knew with reasonable certainty that the world is 90% likely to end before the 200 billionth person shows up, you should not create new people.

    What is lacking in your counter-argument is the understanding that when there are 120 billion people, the argument WILL NOT CHANGE to 240 for US. It will be 240 for THEM. The Doomsday argument applies separately to each individual, so the prediction it makes for you is not the same one it makes for your grandson.

    That is because the likelihood of doom is proportional to your birth rank, and people have different birth ranks.
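    One way to make the rank-dependence concrete is the Gott-style "delta t" variant of the argument (my own illustrative assumption here, not necessarily the formulation defended above): if your birth rank r is treated as uniformly distributed over the total number N of humans who will ever live, then r/N is uniform on (0, 1), so there is a 50% chance that N is less than 2r, and every birth rank gets its own median prediction:

```python
# Gott-style sketch (assumed variant): with rank r uniform over (0, N],
# P(r/N > 1/2) = 1/2, so P(N < 2r) = 1/2 -- the median estimate of the
# total number of humans is twice your own birth rank.
def median_total_humans(birth_rank):
    return 2 * birth_rank

# A person ranked 120 billion gets a different prediction than one
# ranked 60 billion: "240 for THEM", not for us.
print(median_total_humans(60_000_000_000))   # -> 120000000000
print(median_total_humans(120_000_000_000))  # -> 240000000000
```

    So at rank 60 billion the median estimate is 120 billion, while a grandson at rank 120 billion gets 240 billion: different predictions for different people, as the comment above argues.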

    Overall, I think your critique does not apply on these grounds.

    This question is meaningless: “Making the question in this time, what’s the probability that we are in the 60 billion out of X? That’s 100%.”
    Yes, it is 100%. So what? No one is asking that!
    The point is not that we are asking the question at this time; the point is that our birth rank is X. X is the determining factor, not time. Asking what the probability is that we are here now GIVEN that we are here now makes no more sense than asking why am I me?

    Thanks for the criticism, I hope others join you in this task.

  3. Well, you are right in your argument:

    “What is lacking in your counter-argument is the understanding that when there are 120 billion people, the argument WILL NOT CHANGE to 240 for US. It will be 240 for THEM. The Doomsday argument applies separately to each individual, so the prediction it makes for you is not the same one it makes for your grandson.”

    That’s indeed so. However, I still won’t accept the Doomsday argument, for another reason, which is the same reason why I asked that silly question:

    “Making the question in this time, what’s the probability that we are in the 60 billion out of X? That’s 100%.”

    The argument is essentially based on the chance of drawing a random sample from the total birth scale of humanity (say, 120 billion), when actually it is not possible to draw the sample from the entire scale, so the probability of our being at the point where we are says nothing in terms of chance. It is like calculating the chance of drawing a given ball in the lottery when, instead of 60 balls, there is only one ball to pick from.

    I think the Doomsday argument makes a similar mistake. The probability of how long humanity will continue is based on the chance of occupying the current position out of the total number of possible positions, but that is meaningless: just as in the lottery example, you could not be anywhere else on the time scale than where you are now, even hypothetically, because this is the time when the question is asked.

    At least my intuition says so. Another reason I intuitively doubt the argument is that it only considers numbers… when there are so many other, more important factors involved. Who knows, maybe for a small population in the animal kingdom, reproducing more (increasing the numbers) actually increases the species’ chance of survival rather than limiting its future, the opposite of what the argument predicts. The argument is equally applicable to humans, animals, plants, or raindrops.

    So I think it would be better to downplay the Doomsday argument and to focus on the other arguments.

  4. While it was a fun read, I think the whole line of reasoning is flawed in that it assumes there is going to be a doomsday. If you order a pizza, every minute that passes makes it more likely that the pizza will arrive in the next minute. Nobody ordered the doomsday, so each generation is just as likely as the generation before to get one.

  5. If we disregard the physical laws, the probability of a quantity of particles organizing themselves into a star, into galaxies or into human beings would be extremely low. As the constraints on the system increase, those constraints make the probabilities of certain configurations, which were otherwise negligible, become high. To predict the extinction of humanity in the way it was considered here is to ignore those constraints as relevant and to consider only our prediction. The extinction of humanity is determined by the physical laws or, at a higher level of description, by evolution and geological changes; estimates based on these are realistic, and those that are not so based are not.

  6. João, do you have any argument in favor of considering all the information we possess TOGETHER, as opposed to also considering it separately, or are you just going to assert that we must consider all the information together?

    Any argument against considering separately the datum about how many people have already existed?

    Anything that has to do with what I wrote?

    Or is your comment based only on the “sense of reality” of a savanna primate living in a city of stone?

  7. If you think we should not consider the extinction of other species, the trajectories of asteroids, the frequency with which they strike the Earth, and all the other information about things that could wipe out our species when predicting the end of our species, then you have to present arguments. If you do not accept that, there is no rational discussion, only abstract bluster that may be very intriguing, stir many emotions and make us think about the end of everything, but that is not science, and if it is mathematics, it is useless mathematics. If we consider only these probabilistic arguments we will arrive at a result that is not compatible with the scientific predictions, and in that case it is evident who must yield.
    Again, my argument is that you do not consider the other information as such; you consider the prediction that that information gives you, and when the information enters the equation in that form it weighs much less than it should. I am not going to criticize what you wrote in itself: if you accept that Bayesian probability is the one that must be used, everything follows the way you described it; I am criticizing the epistemological presuppositions. I do not think we know so little about extinction, and therefore I consider that classical probability should be used. Maybe my comment has more to do with the Doomsday argument than with your text, and maybe I have not understood it properly, but even so I thought it was worth commenting. If you take the trouble to argue why we should consider that we know so little about our own extinction, I would be grateful. 🙂
    Unless you claim that Bayesian probability was bequeathed to us by some divine entity or some alien civilization, it too is part of the sense of reality of a savanna primate living in a city of stone.

  8. You claimed that I think a bunch of things I do not think.
    My question is whether we should not consider the data separately IN ADDITION to considering them together; your answer reads as if I had said that considering them together is stupid. At this point in the game, expect more from me. Both things can and should be done, and both have consequences.

    We know so little about our extinction because we have no ancestor who went extinct; we know a lot about spiders because we have plenty of ancestors who were bitten by spiders. We cannot base our analyses of our extinction only on arguments drawn from the world around us, because of the anthropic principle. That is, because one of the preconditions for thinking about one’s own extinction is not being extinct in the first place.

    That is why it is worthwhile and makes sense to consider the Doomsday Argument, which is a probabilistic argument (it is a pity that nobody has commented on what I wrote, instead contesting the argument I start from. The idea is to accept the argument and check whether the text brings anything interesting to bear on it).

    Since the argument is hypothetical and mathematical, it is not based on the sense of reality of a savanna primate; it presupposes quite strange counterfactual possibilities, etc.… I understand what you meant, insofar as it was thought up by savanna primates. My point is that it would equally have been thought up by beings living in another universe with other laws of physics. Not only by people living in a society like ours. Or by Martians, who live in a similar universe. That is, it does not depend on savanna primatehood to work.

    Unfortunately nobody commented on the article itself, only contesting an argument that has been established for more than 20 years and that I am not interested, at least here, in contesting. I will move on to the work of revision and modification, and hope for better luck with version 2.0.
