
TV Shows: Devs

Discussion in 'General' started by Corky42, 7 Apr 2020.

  1. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    Anyone been watching it?

    I've been loving it so far, and it did make me wonder: if we had infinite computing power, like the great big quantum computer in the series, could we really predict the future and look into the past? In other words, is the universe purely deterministic?
     
  2. VipersGratitude

    VipersGratitude Multimodder

    Joined:
    4 Mar 2008
    Posts:
    3,535
    Likes Received:
    837
    I haven't seen it. Alex Garland is one of my favourite writer/directors, so I've been waiting for it to end so I can binge it.
    But to answer your question, no, it's increasingly probabilistic at large scales.
     
    Last edited: 7 Apr 2020
  3. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    That's an easy one: no.

    As well as the issue of classical (non-quantum) chaos - behaviours that are fully deterministic, yet whose future is impossible to predict because doing so requires perfect knowledge of the exact state of everything at a single point in time to simulate forwards from (weather is a classic example) - quantum theory means it is impossible to have that perfect knowledge of the exact state of everything at a single point in time in the first place (Heisenberg uncertainty).
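
    A minimal sketch of that sensitivity, using the logistic map (a textbook chaotic system - my choice of illustration, nothing from the show): two starting states differing by one part in a billion diverge completely within a few dozen iterations.

    Code:
    # Classical chaos demo: the logistic map x -> r*x*(1-x) with r=4 is
    # fully deterministic, yet a one-part-per-billion error in the starting
    # state destroys the prediction within ~40 iterations.
    r = 4.0
    x_true, x_model = 0.6, 0.6 + 1e-9  # "reality" vs an imperfect measurement

    for step in range(1, 61):
        x_true = r * x_true * (1.0 - x_true)
        x_model = r * x_model * (1.0 - x_model)
        if step % 10 == 0:
            print(f"step {step:2d}: true={x_true:.6f} model={x_model:.6f} "
                  f"error={abs(x_true - x_model):.2e}")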
     
  4. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    I suspect once you've watched it you'll change your mind; either that, or, like edzieba, you brushed over the part where I said infinite computing power. :)
    I suspect you didn't think about it before answering so definitively. Firstly, it doesn't require perfect knowledge of the exact state of everything at a single point in time; it just needs perfect knowledge of one thing at one point in time, multiplied by an infinite number of measurements.

    While the uncertainty principle is a thing, it's only a thing because we lack the ability to measure or predict everything that would influence the velocity and/or trajectory of a particle.

    Without spoiling the plot, I'll steal some questions from it: does anything happen without a reason? If you answer yes, give an example.
     
  5. VipersGratitude

    VipersGratitude Multimodder

    Joined:
    4 Mar 2008
    Posts:
    3,535
    Likes Received:
    837
    Yeah, but...there's no such thing. Computers are finite apparatus, also constrained by finite time. Google Zeno machines or oracle machines.
     
  6. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    I know; that's why I said if we had infinite computing power, like the great big quantum computer in the series.

    FYI, I'm not trying to discuss what we can do; I'm trying to discuss what may be possible in the future if we had a quantum computer of undefined massiveness, like in the series.

    E: To give you a rough idea, as I suspect some of those replying haven't watched it yet: in one part they ask, what if you could write a computer program that modelled every particle in the universe?
     
    Last edited: 7 Apr 2020
  7. VipersGratitude

    VipersGratitude Multimodder

    Joined:
    4 Mar 2008
    Posts:
    3,535
    Likes Received:
    837
    It doesn't matter how big it is or the basis of its technology; it's constrained by physical limitations. There will never be such a thing...unless, of course, it operates in more dimensions than Cartesian + time.
    There's also no way of magically separating the computer from the physical system it's embedded in and attempting to measure, due to entanglement, so supposing a great big quantum computer capable of infinite computing power breaks the very physics you're attempting to predict.
     
    edzieba likes this.
  8. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    Yeah, no, I can't see why a complex enough computer program couldn't model every particle in the universe, or at least the universe as it pertains to us on this planet, in this solar system, galaxy, or whatever.

    Technically it's just a matter of scale though, isn't it? If we can write computer models that predict what happens to X based on all pertinent information, then doesn't the same apply to everything, just on a much greater scale?

    Does anything happen without a reason?
     
  9. MLyons

    MLyons 70% Dev, 30% Doge. DevDoge. Software Dev @ Corsair Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    3 Mar 2017
    Posts:
    4,196
    Likes Received:
    2,781
    Seemingly our servers are having a hissy fit
     
    Byron C, The_Crapman, edzieba and 2 others like this.
  10. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    Nope. The uncertainty principle means you literally cannot measure all the properties of a particle at the same time. No finer loupe or faster stopwatch will help; it is a literal impossibility to measure both accurately and simultaneously, and one that has been experimentally verified. Indeed, the very act of measuring one must increase the uncertainty in the other.

    Or in other words: if you want to know the exact speed and exact position of a particle at the same time, you first need to invalidate all of quantum mechanics.
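
    For a sense of scale, here's a minimal sketch of the bound itself, assuming nothing beyond the textbook relation Δx·Δp ≥ ħ/2 (the numbers are my illustration, not from the post): pin an electron's position down to one nanometre and see what that forces on its velocity.

    Code:
    # Heisenberg bound: delta_x * delta_p >= hbar / 2.
    # Pinning down position forces a floor on momentum (hence velocity) spread.
    HBAR = 1.054571817e-34      # reduced Planck constant, J*s
    M_ELECTRON = 9.1093837e-31  # electron mass, kg

    delta_x = 1e-9  # position known to within 1 nanometre
    delta_p_min = HBAR / (2 * delta_x)      # minimum momentum uncertainty
    delta_v_min = delta_p_min / M_ELECTRON  # corresponding velocity spread

    print(f"delta_p >= {delta_p_min:.3e} kg*m/s")
    print(f"delta_v >= {delta_v_min:.3e} m/s")  # ~5.8e4 m/s - hardly 'exact'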

    Doesn't matter if you are trying to simulate things on a TI-85, a Bambleweeny 57 Sub-Meson Brain, Deep Thought, or AC, as the problem has nothing to do with computational power. Even classical chaos theory (and once again, we know of several systems that are provably chaotic) means you cannot perfectly simulate outcomes without a perfect starting state, and it is a physical impossibility to perfectly measure that starting state.

    Scale is another barrier you hit. The mix of information theory and entropy would take way too long to go over (with all sorts of interesting sidetracks like reversible computing), but the upshot is: the fastest, most compact, and most energy-efficient (because beyond a certain point those converge) method of computing a simulation of reality can only trend down towards reality, never beyond it. Information is itself a fundamental physical property, hence why (for example) the whole black hole information paradox was under discussion in the first place.
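
    To put one number on that information/energy link, here's a sketch using the Landauer limit (my choice of illustration, not something from the post): even a thermodynamically perfect computer pays a minimum energy cost per bit it erases.

    Code:
    import math

    # Landauer limit: erasing one bit costs at least k_B * T * ln(2) of
    # energy, regardless of how the computer is built. Illustrative only.
    K_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300.0           # room temperature, K

    e_per_bit = K_B * T * math.log(2)
    print(f"minimum energy per bit erased: {e_per_bit:.3e} J")  # ~2.9e-21 J

    # Rough illustration: flipping one bit per particle in the observable
    # universe (~1e80 particles) just once costs at least:
    print(f"one pass over 1e80 bits: {e_per_bit * 1e80:.3e} J")  # ~2.9e59 J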
     
  11. VipersGratitude

    VipersGratitude Multimodder

    Joined:
    4 Mar 2008
    Posts:
    3,535
    Likes Received:
    837
    Try thinking of the universe as the computer...one which is already in the lowest energy state possible to calculate itself, i.e. the laws of physics dictate that the most efficient computer to simulate a universe is a universe. Any lesser computer contained therein will be less efficient, and therefore incapable of simulating the larger universe, even if it can run Crysis.
     
    Last edited: 7 Apr 2020
  12. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    I was sort of hoping the name would've given it away: if you could remove or account for the uncertainty, it would no longer be uncertain. You seem to be rather rigid in your thinking, as not only is it called uncertainty, something that could be made certain, but it's also called a principle.
    If you think you understand quantum mechanics then you don't understand quantum mechanics.
    No, no, no. We can model weather systems, fluid dynamics, and other things on a large scale without replicating the actual scale of what we're modelling. While the computer the Met Office uses may be large, the actual program probably takes up less space than a briefcase (if it's on solid-state media).

    Also, I don't see any law of physics that dictates that the most efficient computer to simulate a universe is a universe. I mean, for starters, the universe is pretty crappy when it comes to null: something like 99.99% of you is nothing but empty space, and space is even mostly space.

    I'm still waiting for the answer to: does anything happen without a reason? :(
     
  13. VipersGratitude

    VipersGratitude Multimodder

    Joined:
    4 Mar 2008
    Posts:
    3,535
    Likes Received:
    837
    In the parlance of our times: lolololololololol. You're seriously overestimating the accuracy of our models, particularly at anything large-scale, where the butterfly effect can amplify a small variation into an extremely large one. Allow me to prove it to you...

    I just took a screenshot of my Windows weather app, checking for the next day it's predicted to rain, which as you can see is Saturday. If you look along the hourly row, you'll see there's mostly a 50% chance of rain, with a few 40%s thrown in for good measure.

    [Screenshot: Windows weather app hourly forecast showing 40-50% chance of rain on Saturday]

    Now think about that. 50% chance of rain - what does that mean? It means "Your guess is as good as mine, dude. Might rain. Might not".


    Again, the butterfly effect: any small unmeasured variation can blow up and completely skew predictions. Also, defining emptiness as crappy only illuminates your bias towards baryonic matter, which accounts for only about 5% of the universe as we understand it.

    Simple answer: yes - whatever exists inside a black hole, where spacetime warps in upon itself, rendering the concept of causality meaningless.
     
    edzieba likes this.
  14. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    Come along, you're just being silly now. This started out as a hypothetical question about what sort of modelling may be possible with a quantum computer of 30 to 40 billion or more qubits; then you proposed that the computer must be the same size as whatever it's modelling, and then you moved on to saying that because current-day computer modelling isn't 100% accurate, it never will be.

    If this is what you're proposing then you need to have a word with the computer engineers of yesteryear, who thought 640KB of memory would be all anyone ever needed, and for whom simply modelling the movements of the planets was considered ambitious.
    The butterfly effect is only a thing because we don't have sufficient information on how the cause alters the effect - because we don't have an accurate enough (computer) model of how each atom affects another.

    It's not that it's impossible to know how a butterfly flapping its wings affects global weather; it's that we don't currently possess either sufficient computational power or a sufficiently complex model.
    Again with the silliness: we don't know what happens inside a black hole, we only have theories on what may happen - I assume I don't need to explain why that's so. Saying spacetime warps in upon itself inside a black hole is nothing more than a hypothesis.

    So the question remains: does anything happen without a reason? I'm not asking that to trap or trick anyone; I'm using it as a quicker means of describing what Devs is.
     
    Last edited: 7 Apr 2020
  15. VipersGratitude

    VipersGratitude Multimodder

    Joined:
    4 Mar 2008
    Posts:
    3,535
    Likes Received:
    837
    A model is just a model, an approximation of reality. That's all it ever will be, no matter the precision of its measurements. If you take that model to the nth degree of precision you end up with...the actual thing. Reality.

    Think about it mechanically. Any computational device needs to be made of something. A bit will be comprised of hundreds of atoms, each having protons, neutrons, and electrons, with each of those having properties, and yet the only thing we can describe is 1 or 0 - or, in the case of a qubit, a superposition of the two. As such, it can't even accurately describe itself: hundreds of properties, but all we can calculate is 1 or 0. Now let's increase the precision and say it's made of one atom - it still has to account for the properties of the particles that comprise it, but still all it can calculate is 1 or 0. That's why, taken to the nth degree, the only way to create an accurate universe-modelling computer is to create a universe.

    You got a silly answer to a silly question. You'll need to define further what you mean by reason - in a philosophical sense, in a basic causal sense, etc.? Does your question jump down a rabbit hole that only ends with questioning the meaning of existence?
     
  16. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    Stop being silly: a model that's accurate to the nth degree doesn't magically become reality. If we model the velocity of a falling object, it doesn't suddenly become that object simply because we input all the metrics like gravity, weight, temperature, humidity, shape, air density, and every other metric that may affect it; it's absurd to suggest it would, and doing so just makes it look like you're trolling.
    Again with the ridiculousness: a transistor doesn't describe itself, it's the software running on it that does that - you're conflating the hardware with the software. A computer program that models a building doesn't need to run on a computer that's the same size as the building, because it's the software that models the building, not the actual hardware.
    No, I got a silly answer because you've decided to ruin what could have been (IMO) an interesting discussion on one possible future of computers. Thanks. :rolleyes:

    In whatever sense you want to interpret it; not that I'm expecting a sensible answer, as it's pretty clear that instead of exploring what-ifs and possibilities you're more interested in sowing discord - pretty much the same as you've been doing with Risky in the Coronavirus thread.
     
  17. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,131
    Likes Received:
    6,725
    Disagree. If you're modelling everything exactly, then you are literally creating the thing. You can simulate something without creating the actual thing being simulated, but it'll never be exactly like the real thing - there'll always be handwaving involved.

    To move away from computers and into the real world: if I build a model of a Glock 17 that's absolutely 100% accurate, made from the same materials, operates in the same way, actually fires, and is in no way distinguishable from the original... then I've made a Glock 17, and no amount of "it's just a model" is going to keep me from being detained at Her Majesty's pleasure for a ten-stretch.
    But a computer program that models a building atom by atom would need a transistor per atom (assuming you could model an atom on a single transistor, which you can't, but that only strengthens the point), meaning it would be considerably bigger than the building it's modelling. I think that's where the confusion lies: you're conflating inaccurate-but-good-enough modelling with literally-recreating-something-down-to-the-molecular-level-only-on-a-computer - and whether said computer is quantum or otherwise, doing the latter necessitates having a computer with as many bits (or qubits) as the bits in the thing it's recreating.
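
    A rough back-of-envelope sketch of that scale problem (all figures are my illustrative assumptions): count the atoms in a modest building, then see how many chips you'd need at one transistor per atom.

    Code:
    # Back-of-envelope: one transistor per atom for an atom-level model
    # of a building. All figures are rough illustrative assumptions.
    AMU_KG = 1.66053907e-27      # one atomic mass unit in kg

    building_mass_kg = 1e7       # a modest ~10,000-tonne building
    mean_atomic_mass_amu = 20    # rough average over concrete/steel/glass

    atoms = building_mass_kg / (mean_atomic_mass_amu * AMU_KG)
    print(f"atoms in building: ~{atoms:.1e}")  # ~3e32 atoms

    transistors_per_chip = 5e10  # order of a large modern processor
    chips = atoms / transistors_per_chip
    print(f"chips needed at 1 transistor/atom: ~{chips:.1e}")  # ~6e21 chips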
     
    VipersGratitude likes this.
  18. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    So you're essentially saying we can't, and never will be able to, create a program that models a particle, or anything else for that matter, with a high enough degree of precision that we can know how it looked in the past or how it will look in the future?

    Because if that's what you're saying, then aren't you throwing out all those astrophysics predictions of the past and future positions of the planets, and the particle physics that can predict an atomic nucleus will decay at a certain rate?
    Agreed on the conflating thing, but at what point do we say inaccurate-but-good-enough is so close to the physical object that it no longer matters? If we can model something to 99.99 followed by more 9s than I want to subject you to, then inaccurate-but-good-enough becomes an incredibly-accurate-and-all-but-exact model.

    Put it this way: if both the accuracy and the computational power increase indefinitely, then doesn't that mean we'll eventually be able to predict events (particle interactions, weather systems, whatever) with such a high degree of accuracy that, for all intents and purposes, we'll be able to look forwards and backwards in time?

    Yes, you'd lose accuracy as you go further from the point of origin and as the model becomes more complex, but continue down the route of more computational power and more accurate modelling, and that loss of accuracy becomes smaller too.
     
    Last edited: 8 Apr 2020
  19. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,131
    Likes Received:
    6,725
    ...no? I *am* saying that you can't, and never will be able to, create a program that models every particle in the universe with a high enough degree of precision that we can know how it looked in the past or will look in the future with 100 percent accuracy unless the computer is considerably larger and more complex than the universe it is modelling - and then it's still not going to be 100 percent accurate 'cos by definition it can't include a model of itself as part of that universe (otherwise it needs to be bigger and more complex than the universe plus itself, which you can clearly see ain't possible.)
    No: see above. A sufficiently complex computer would have an effect on its surroundings, but would be unable to model those effects 'cos it'd need to be still more complex...
    No, thanks to the propagation of uncertainty. Let's say your model is 99.99999999 percent accurate. That's close enough to perfect, right? Well, your first measurement is 99.99999999 percent likely to be correct. Your second is 99.99999998 percent likely to be correct. Still pretty good, no? But keep going: if we're modelling the universe, you're going to need a tick rate of thousands, maybe millions, of measurements per second. By the 1000th measurement, you're down to 99.9999899999997 percent - still good, still good. 10,000? 99.9999 percent and change. Hmm. The millionth measurement? 99.99. By your billionth measurement you're down to 90.48 percent. Things rapidly degrade from there: the 10-billionth measurement is only 37 percent likely to be correct. 100-billionth, 0.0045 percent.

    Let's say you're modelling at a rate of 100,000 measurements (or modelled events) per second. It would only take you 11 and a half days to reach that 0.0045 percent point - meaning that you've gone less than two weeks forwards or backwards in time and you're now doing worse than just guessing.

    Now, we're obviously talking here about your percentage chance of being completely correct, not good enough - the figures would be very different for good enough, depending on exactly how good good enough needs to be - but it still shows how tiny, tiny inaccuracies compound very quickly in even a high-accuracy simulation.
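
    A quick sketch of that compounding, assuming the same 99.99999999 percent per-measurement accuracy and the 100,000-measurements-per-second rate from above:

    Code:
    import math

    # Compounding accuracy: p^n after n measurements at per-measurement
    # accuracy p, plus how long n takes at a fixed measurement rate.
    p = 0.9999999999   # 99.99999999 percent per measurement
    rate = 100_000     # measurements (modelled events) per second

    for n in (1e3, 1e4, 1e6, 1e9, 1e10, 1e11):
        prob = math.exp(n * math.log(p))  # p**n, computed stably
        days = n / rate / 86_400
        print(f"n={n:.0e}: {prob * 100:.6f}% correct after {days:.2f} days")
    # n=1e11 (~11.6 days of modelled time) is down to ~0.0045 percent.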
     
    VipersGratitude likes this.
  20. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    I suggest some additional research into what the uncertainty principle is, as one of the core tenets of quantum mechanics. Thus far it has plenty of evidence to support its existence and predictions, and zero evidence to invalidate it. Hence my statement that you would need to invalidate quantum mechanics in order for simultaneous measurement to even be considered a possibility.

    Yes. Absolutely.

    Atomic decay is a prime example of a non-predictable event. You can know very well that 50% of a given number of atoms will decay within a set time period, but it is not possible to predict when any given atom will decay. It is such a good example that it is used as one of the best sources of randomness for cryptography.
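
    A minimal simulation of that distinction (illustrative half-life, nothing physical): the aggregate count tracks the half-life closely, while individual decay times are all over the place.

    Code:
    import math
    import random

    # Radioactive decay: the ensemble is predictable, the individual atom
    # is not. Each atom's decay time is exponentially distributed.
    random.seed(1)
    HALF_LIFE = 10.0  # arbitrary illustrative units
    N = 100_000

    # Decay times are exponential with rate ln(2) / half-life.
    times = [random.expovariate(math.log(2) / HALF_LIFE) for _ in range(N)]

    survivors = sum(1 for t in times if t > HALF_LIFE)
    print(f"fraction surviving one half-life: {survivors / N:.3f}")  # ~0.500

    # ...but any individual atom is anyone's guess:
    print("first five decay times:", [round(t, 1) for t in times[:5]])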

    I'd highly recommend James Gleick's Chaos: Making a New Science as an overview of the field of chaos theory, and of why these sorts of perfect forward predictions are a mathematical impossibility even with classical (i.e. non-quantum) mechanics, even in entirely deterministic systems.
     
    VipersGratitude likes this.
