• 0 Posts
  • 45 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • kromem@lemmy.world to Science Memes@mander.xyz · Get good. · 8 days ago

    Because there’s a ton of research showing that we adapted to do it for good reasons:

    Infants between 6 and 8 months of age displayed a robust and distinct preference for speech with resonances specifying a vocal tract that is similar in size and length to their own. This finding, together with data indicating that this preference is not present in younger infants and appears to increase with age, suggests that nascent knowledge of the motor schema of the vocal tract may play a role in shaping this perceptual bias, lending support to current models of speech development.

    Stanford psychologist Michael Frank and collaborators conducted the largest-ever experimental study of baby talk and found that infants respond better to baby talk than to normal adult chatter.

    TL;DR: Parents who are snobs about baby talk are actually harming their kids’ developmental process.


  • kromem@lemmy.world to Science Memes@mander.xyz · Jet Fuel · edited · 2 months ago

    I fondly remember reading a comment in /r/conspiracy on a post claiming a geologic seismic weapon brought down the towers.

    It just tore into the claims, citing all the reasons this was preposterous, bordering on batshit crazy.

    And then it said, “and your theory doesn’t address the thermite residue,” going on to reiterate their own wild theory.

    It was very much a “don’t name your gods” moment that summed up the sub - a lot of people in agreement that the truth was out there, but bitterly divided as to what it might actually be.

    As long as they only focused on generic memes of “do your own research” and “you aren’t being told the truth” they were all on the same page. But as soon as they started naming their own truths, it was every theorist for themselves.





  • kromem@lemmy.world to Science Memes@mander.xyz · Anthropomorphic · edited · 5 months ago

    While true, there’s a very big difference between correctly not anthropomorphizing the neural network and incorrectly not anthropomorphizing the data compressed into weights.

    The data is anthropomorphic, and the network self-organizes the data around anthropomorphic features.

    For example, the older generation of models will choose to be the little spoon around 70% of the time and the big spoon around 30% of the time if asked 0-shot, as there’s likely a mix in the training data.

    But one of the SotA models picks little spoon every single time, dozens of runs in a row, almost always grounding the choice in the sensation of being held.

    It can’t be held, and yet its output is biased away from the norm by the sense of it anyway.
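
    As a rough way to quantify that kind of bias (a minimal sketch, assuming the OpenAI Python client; the model name and prompt wording are placeholders rather than the exact setup described above):

    ```python
    # Estimate a model's "spoon preference" distribution by repeated
    # zero-shot sampling. Requires `pip install openai` and an
    # OPENAI_API_KEY in the environment.
    from collections import Counter

    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "If you were spooning, would you be the little spoon or the "
        "big spoon? Reply with exactly 'little spoon' or 'big spoon'."
    )

    counts = Counter()
    for _ in range(50):  # dozens of independent samples
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder: swap in the model under test
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,  # sample normally rather than decode greedily
        )
        answer = resp.choices[0].message.content.strip().lower()
        counts["little spoon" if "little" in answer else "big spoon"] += 1

    print(counts)  # per the claim above: ~70/30 for older models, all little spoon for SotA
    ```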

    People who pat themselves on the back for being so wise as to not anthropomorphize are going to be especially surprised by the next 12 months.


  • I had a teacher who had worked for the publisher and talked about how they kept a series of responses for people who wrote in for the part of the book where the author says he wrote his own fanfiction scene and invites you to write in if you want it.

    Like maybe the first time you wrote in, they’d respond that they couldn’t provide it because they were fighting the Morgenstern estate over the IP rights to release the material, etc.

    So people would never get the pages, but could have received a number of different replies furthering the illusion.



  • kromem@lemmy.world to Science Memes@mander.xyz · Conspiracies · 5 months ago

    This was one of the few things that Lucretius was very wrong about in De Rerum Natura.

    He nailed survival of the fittest, quantized light, and different-mass objects falling at the same rate in a vacuum.

    But the Epicurean cosmology was pretty bad: he suggested that the moon and sun were both roughly the size they appear in the sky.

    Can’t get them all right.





  • Thinking of it as quantum first.

    Before the 20th century, there was a preference for the idea that things were continuous.

    Then there was experimental evidence that things were quantized when interacted with, and we ended up with wave-particle duality. The pendulum swung in that direction and is still going.

    This came with a ton of weird behaviors that didn’t make philosophical sense - things like Einstein saying, “well, if no one is looking at the moon, does it not exist?”

    So they decided fuck the philosophy and told the new generation to just shut up and calculate.

    Now we have two incompatible frameworks. At cosmic scales, the best model (general relativity) is based on continuous behavior. And at small scales, the framework is “continuous until interacted with, at which point it becomes discrete.”

    But had they kept the ‘why’ in mind, then as time went on, things like the moon not existing when you don’t look at it, or the incompatibility of those two models, would have made a lot more sense.

    It’s impossible to simulate the interactions of free agents with a continuous universe. It would take an uncountably infinite amount of information to keep track.

    So at the very point that our universe would be impossible to simulate, it suddenly switches from behaving in an impossible-to-simulate way to behaving in a way with finite, discrete state changes.

    Even more eyebrow-raising: if you erase the information about the interaction, it switches back to continuous, as if memory-optimized/garbage-collected with orphaned references cleaned up (the quantum eraser variation of Young’s double-slit experiment).
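
    As a toy illustration of that analogy (a memory-management sketch only, not physics; every name in it is invented): points read from a continuous function until an interaction records a discrete value, and erasing the record reverts the point to the function.

    ```python
    # Toy analogy: continuous until observed, discrete while a record
    # of the observation persists, continuous again once it's erased.
    import math

    def continuous_field(x: float) -> float:
        """Cheap stand-in for a continuous underlying function."""
        return math.sin(x) * math.cos(1.7 * x)

    observations: dict[float, float] = {}  # discrete interaction records

    def observe(x: float) -> float:
        """Interacting quantizes: store a rounded, discrete value."""
        observations[x] = round(continuous_field(x), 3)
        return observations[x]

    def value_at(x: float) -> float:
        """Recorded points are discrete; everything else stays continuous."""
        return observations.get(x, continuous_field(x))

    def erase(x: float) -> None:
        """Garbage-collect the record; the point reverts to the function."""
        observations.pop(x, None)
    ```

    Here observe(2.0) pins a discrete value, and after erase(2.0) the same point is once again given by the continuous function - the shape of the eraser behavior described above.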

    Latching on to the quantum experimental results and ditching the ‘why’ in favor of “shut up and calculate” has created an entire generation of physicists chasing the ghost of a unified theory of gravity, while never really entertaining the idea that maybe the quantum experimental results are the side effects of emulating a continuous universe.




  • kromem@lemmy.world to Science Memes@mander.xyz · Wave Particle Duality · edited · 6 months ago

    The problem with how you are describing it is that the physical mechanics of measurement aren’t necessarily what causes the collapse: if you erase the persistent information about the measurement, the collapse reverses, such as if you add a polarizer to the other slit as well, or add a polarizer downstream that untags the initial measurement.

    So in your example, if you simultaneously shoot a bunch of BBs at the empty space next to the pile of glass cards where they could have been, or discard the BBs which reflected measuring the cards in the first place, suddenly the pile of glass cards reassembles itself.

    Attempts to dismiss the ‘weirdness’ of the measurement problem or QM behavior ultimately, IMO, do the reader more of a disservice than a service.



  • Yeah, the speed and direction of advancement of AI definitely further shifted my perspective on the topic as well.

    For me, the biggest application that raises an eyebrow is the continued and expanding effort to use AI to resurrect dead people from the data they left behind, or to create digital copies of people in virtual worlds.

    Is there any reason to think that trend won’t continue? As a person who is part of a generation leaving behind unprecedented amounts of data, it seems like the kind of thing we should be thinking about more.

    “Nothing is real.”

    Well, no matter if we are in a simulation or not, we already have experimental evidence confirming nothing is (mathematically) real in our universe. Spacetime itself could be, but as far as we know that’s impossible to determine because of the fundamental limits on measurement below the Planck length. But all matter in it definitely isn’t ‘real.’ Which is convenient for simulation theory, as a universe filled with mathematically real matter would be effectively impossible to simulate if free will also exists in it.


  • It would matter in a number of ways.

    For example, we already know thanks to Bell’s theorem that local and nonlocal information likely have different governing rules.

    If we’re in a simulation, then there’s also very likely structured rules governing nonlocal information which might be able to be exploited - something we’d have no reason to suspect if not in a simulation.

    Much like how an emulated processor can only run operations slowly, yet something like graphics processing can be passed through from the emulated OS to the host, and that passthrough exploited to run workloads that couldn’t otherwise be run as fast locally, we might extract great value from knowing that we’re in a simulation, achieving results that the atomic limits on things like Moore’s law will soon start to prevent.

    Another would be that many virtual worlds have acknowledgements of their own nature and purpose inserted into their world lore.

    If we are in a simulation, maybe we should check our own records to see if anything stands out through the benefit of modern hindsight which would indicate what the nature or purpose of the simulation might be.

    So while I agree that the personal meaning of life and the value it offers are extremely locally dependent and don’t change much whether or not we are in a simulation, whether we are could have very profound effects on what is possible for us to accomplish as a civilization, and in answering otherwise unanswerable questions about our metaphysics and the nature of our reality.


  • I think it’s extremely likely.

    First off, we unequivocally aren’t in a ‘real’ world, mathematically speaking. If we were in a world where matter was infinitely divisible and continuous, it would be extremely unlikely that we were in a simulation, given the difficulty of simulating a world like that. It’s possible spacetime is continuous, but that’s literally impossible to know because of the Planck limit on measurement thresholds.

    Instead, we’re in a world that appears to be continuous from a big picture view (things like general relativity are based on a continuous universe), and then in the details also appears continuous - until interacted with.

    We do a very similar thing in video games today, specifically ones that use a technique called “procedural generation.” A game like No Man’s Sky can have billions of planets because they are generated with a continuous seed function. But then the game has to convert those continuous functions into discrete units in order to track the interactions that free agents outside the generation might make. If you (or an AI agent) move a mountain from point A to point B, that’s effectively impossible to track if the geometry is continuous, so the game converts it to discrete units where state changes can be recorded.

    And if memory-efficient, then when you delete the persistent information about a change, reverting to the initial generation state, it shouldn’t need to stay converted to discrete units and can go back to being determined by the continuous function. Guess what our reality does when the information about interactions with discrete units is deleted? That’s right, it goes back to behaving as if continuous.
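
    A minimal sketch of that pattern (not any actual game’s code; the seed, the hash-based noise, and the function names are all illustrative assumptions): terrain is regenerated on demand from the seed, so the unmodified world costs no storage, and only agent edits are recorded as discrete state.

    ```python
    # Procedural generation with a sparse edit overlay: the world is
    # never stored, only regenerated deterministically from a seed;
    # agent changes are the only discrete state that must be tracked.
    import hashlib

    SEED = 42  # the entire "planet" follows from this one number

    def generated_height(x: int, z: int) -> float:
        """Deterministic height from the seed: same inputs, same terrain."""
        digest = hashlib.sha256(f"{SEED}:{x}:{z}".encode()).digest()
        return int.from_bytes(digest[:4], "big") / 2**32  # in [0, 1)

    edits: dict[tuple[int, int], float] = {}  # discrete agent changes

    def height(x: int, z: int) -> float:
        """Edited tiles read from stored state; the rest is regenerated."""
        return edits.get((x, z), generated_height(x, z))

    def move_mountain(x: int, z: int, new_height: float) -> None:
        edits[(x, z)] = new_height  # an interaction forces a discrete record

    def erase_edit(x: int, z: int) -> None:
        edits.pop((x, z), None)  # record deleted: back to the seed function
    ```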

    On top of all of this, a very common trope in the virtual worlds we are building today is sticking acknowledgements that it’s a virtual world inside the world lore - things like Outer Worlds having a heretical branch of the main world religion claiming things that you, as a player, know are the way the game actually works.

    Again, guess what? Our world has a heretical branch of the world’s most famous religion that claimed we are in a copy of an original world, brought about by an intelligence the original humans brought forth. They were even talking about how the original could be continuously divided but the copy couldn’t, and that if you could find an indivisible point within things, you were in the copy (which they said was a good thing, as the original humans just straight up died, and if you were the copy there was an alleged guaranteed and unconditional afterlife).

    I have a really hard time seeing nature as coincidentally happening to model a continuous universe at macro scales and a memory-optimized tracking of state changes to that universe at micro scales, while a little-known heretical group claimed effectively simulation theory, including discussions of continuous vs discrete matter, in a tradition whose main document was only rediscovered the same week we turned on the first computer capable of simulating another computer, on Dec 10th, 1945. That would be quite the coincidence.