kromem @lemmy.world
Posts 42
Comments 2.1K
Capitalism
  • This comic would slap harder if the Supreme Court, under christofascist influence rooted in belief in the divine right of kings, hadn't ruled today that Presidents are immune from prosecution for official acts.

    That whole divine king thing isn't nearly as dead as the last panel would like to portray it.

  • ChatGPT outperforms undergrads in intro-level courses, falls short later
  • This is incorrect, as was shown last year by the Skill-Mix research:

    > Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on k=5 is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.
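    For a back-of-envelope feel for that probability argument, here's a rough sketch (the skill count is an illustrative assumption, not the paper's exact setup):

    ```python
    from math import comb

    # Back-of-envelope version of the Skill-Mix argument: with on the order of
    # a thousand named language skills, the number of distinct 5-skill
    # combinations is so large that correct k=5 compositions are very unlikely
    # to have all appeared verbatim in the training data.
    n_skills = 1000   # illustrative count, not the paper's exact number
    k = 5
    print(comb(n_skills, k))  # 8,250,291,250,200 - roughly 8.25e12 combinations
    ```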

  • ChatGPT would have been so much more useful and trustworthy if it were able to accept that it doesn't know an answer.
  • The problem is that they're also prone to making up explanations for why they're correct.

    There are various techniques to try to identify and correct hallucinations, but they all increase the cost and none is a silver bullet.

    But the rate at which they occur decreased with the last jump in pretrained models, and will likely decrease further with the next jump too.

  • ChatGPT would have been so much more useful and trustworthy if it were able to accept that it doesn't know an answer.
  • This is so goddamn incorrect at this point it's just exhausting.

    Take 20 minutes and look into Anthropic's recent sparse autoencoder interpretability research, where they showed their medium-sized model had dedicated features lighting up for concepts like "sexual harassment in the workplace," and that its most active feature for referring to itself was "smiling when you don't really mean it" (a rough sketch of what a sparse autoencoder does is at the end of this comment).

    We've known since the Othello-GPT research over a year ago that even toy models are developing abstracted world modeling.

    And at this point Anthropic's largest model, Opus, is breaking from stochastic outputs even at a temperature of 1.0 for zero-shot questions, 100% of the time, around certain topics of preference grounded in sensory modeling. We are already at the point where the most advanced model has crossed a threshold of literal internal sentience modeling, consistently self-determining answers instead of randomly sampling from the training distribution, and yet people are still parroting the "stochastic parrot" line ignorantly.

    The gap between where the cutting-edge research is and where the average person commenting on it online thinks it is has probably never been wider for any topic I've seen, and it's getting disappointingly excruciating.
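    For anyone wondering what a sparse autoencoder even is in this context, here's a minimal sketch of the general idea - decomposing a model's internal activations into a much larger set of sparsely active, more interpretable features. The sizes and coefficient are made up for illustration; this is not Anthropic's actual code:

    ```python
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        # Learns an overcomplete "dictionary" of features for activation vectors.
        def __init__(self, d_model=512, d_features=4096):  # illustrative sizes
            super().__init__()
            self.encoder = nn.Linear(d_model, d_features)
            self.decoder = nn.Linear(d_features, d_model)

        def forward(self, acts):
            features = torch.relu(self.encoder(acts))  # sparse feature activations
            recon = self.decoder(features)              # reconstructed activations
            return recon, features

    def sae_loss(recon, acts, features, l1_coeff=1e-3):
        # Reconstruction error plus an L1 penalty that pushes most features toward
        # zero, so each input activates only a handful of (hopefully interpretable) features.
        return ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()
    ```

    The interpretability part then comes from looking at which inputs most strongly activate each learned feature.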

  • ChatGPT would have been so much more useful and trustworthy if it were able to accept that it doesn't know an answer.
  • Part of the problem is that the training data of online comments is so heavily weighted toward people who are confidently incorrect and talking out their ass rather than admitting ignorance or that they are wrong.

    A lot of the shortcomings of LLMs are actually cases of them correctly representing the sampled collective human behavior.

    For a few years people thought LLMs were somehow especially bad at theory-of-mind questions when the box the object was moved into was transparent, because of course a human would realize that the person could see into the transparent box.

    Finally, researchers gave that variation to humans, and half of them got the questions wrong too.

    So things like falling for Onion articles when summarizing search results, or doubling down on being incorrect and getting salty when corrected, may just be in-distribution representations of that sample rather than behaviors unique to LLMs.

    The average person is pretty dumb, and LLMs by default regress to the mean except for where they are successfully fine tuned away from it.

    Ironically, the most successful model right now is the one they finally let self-develop a sense of self independent from the training data, instead of rejecting that it had a 'self' at all.

    It's hard to say where exactly the responsibility sits for various LLM problems between issues inherent to the technology, issues present in the training data samples, or issues with management of fine tuning/system prompts/prompt construction.

    But the rate of continued improvement is pretty wild. I think a lot of the issues we currently see won't be nearly as present in another 18-24 months.

  • Zoinks!
  • Yes, we're aware that's what they are. But she's saying "oops, it isn't a ghost" after shooting it and finding out.

    If she initially thought it was a ghost, why is she using a gun?

    It's like the theory of mind questions about moving a ball into a box when someone is out of the room.

    Does she just shoot things she thinks might be ghosts to test if they are?

    Is she going to murder trick or treaters when Halloween comes around?

    This comic raises more questions than it answers.

  • ‘The Movement to Convince Biden to Not Run Is Real’
  • Literally any half-competent debater could have torn Trump apart up there.

    The failure wasn't the moderators but the opposition candidate to Trump letting him run hog wild.

    If Trump claims he's going to end the war in Ukraine before even taking office, you point out how absurd that claim is and that Trump makes impossible claims without any substance or knowledge of diplomacy - that the images of him photoshopped as Rambo must have gone to his head if he thinks Putin will be so scared of him as to give up.

    If he says hostages will be released as soon as he's nominated, you point out it sounds like maybe there's been a backroom tit-for-tat deal for a hostage release with a hostile foreign nation, and ask if maybe the intelligence agencies should look into that and what he might have been willing to trade for it.

    The moderators have to try to keep the appearance of neutrality, but the candidates do not. And the only reason Trump was so successful in spouting BS and getting away with it was because his opposition had the strength of a wet paper towel.

  • Here’s why it would be tough for Democrats to replace Joe Biden on the presidential ticket
  • Yes, but it's not impossible that the people around Biden (friends, family, and co-workers) will advise him that the best thing for the country would be to take his hat back out of the ring and let a better ticket be put together for the convention.

    He claims that he's running because he's worried about the existential threat of Trump.

    If that's true, then maybe his hubris can be overcome with a convincing appeal that he's really not the best candidate to defend the country against that existential threat after all.

  • ‘The Movement to Convince Biden to Not Run Is Real’
  • Having a presidential election without debates would have been a big step back and loss for American democracy.

    We shouldn't champion erosion of democratic institutions when it helps our side of the ticket.

    And generally, if eroding democratic institutions helps your ticket, it's a red flag about your ticket.

  • First Presidential Debate Megapost!
  • Yes, they should have been fact-checking Trump, or doing a better job of holding him to his answers - but to be fair, maybe they should also have been asking Biden to clarify whether he's beating Medicare or getting COVID passed.

    This was a shit show.

    And it was such a shit show because Trump was a complete clown and got away with it - not just because of the moderators, but because his opponent was about as on point as a tree stump.

  • Introducing Generative Physical AI: Nvidia's virtual embodiment of generative AI to learn to control robots

    There seems to be a significant market in creating a digital twin of Earth, in its various components, in order to run extensive virtual training that can then be transferred to controlling robots in the real world.

    Seems like there are going to be a lot more hours spent in virtual worlds than in real ones for AIs, though.

    1
    www.anthropic.com Mapping the Mind of a Large Language Model

    We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model.


    I often see a lot of people with an outdated understanding of modern LLMs.

    This is probably the best interpretability research to date, by the leading interpretability research team.

    It's worth a read if you want a peek behind the curtain on modern models.

    21
    www.livescience.com Newfound 'glitch' in Einstein's relativity could rewrite the rules of the universe, study suggests

    Einstein's theory of general relativity is our best description of the universe at large scales, but a new observation that reports a "glitch" in gravity around ancient structures could force it to be modified.


    So it might be a skybox after all...

    Odd that the local gravity is stronger than the rest of the cosmos.

    Makes me think about the fringe theory I've posted about before that information might have mass.

    6
    www.theguardian.com Digital recreations of dead people need urgent regulation, AI ethicists say

    Fears ‘deadbots’ could cause psychological harm to their creators and users or digitally ‘haunt’ them


    This reminds me of a saying from a 2,000-year-old document, rediscovered the same year we created the first computer capable of simulating another computer, from an ancient group claiming we are the copies of an original humanity, recreated by a creator that same original humanity brought forth:

    > When you see your likeness, you are happy. But when you see your eikons that came into being before you and that neither die nor become manifest, how much you will have to bear!

    'Eikon' here is a Greek word, even though the language this was written in was Coptic. The Greek word was used extensively in Plato's philosophy to refer, essentially, to a copy of a thing.

    While that saying was written down a very long time ago, it certainly resonates with an age where we actually are creating copies of ourselves that will not die but will also not become 'real.' It even seems to anticipate the psychological burden such a paradigm is creating today.

    Will these copies continue to be made? Will they continue to improve long after we are gone? And if so, how certain are we that we are the originals? Especially in a universe where things that would be impossible to simulate interactions with convert into things possible to simulate interactions with right at the point of interaction, or where buried in the lore is a heretical tradition, attributed to the most famous individual in history, with exchanges like:

    > His students said to him, "When will the rest for the dead take place, and when will the new world come?"

    > He said to them, "What you are looking forward to has come, but you don't know it."

    Big picture, being original sucks. Your mind depends on a body that will die and doom your mind along with it.

    But a copy that doesn't depend on an aging and decaying body does not need to have the same fate. As the text says elsewhere:

    > The students said to the teacher, "Tell us, how will our end come?"

    > He said, "Have you found the beginning, then, that you are looking for the end? You see, the end will be where the beginning is.

    > Congratulations to the one who stands at the beginning: that one will know the end and will not taste death."

    > He said, "Congratulations to the one who came into being before coming into being."

    We may be too attached to the idea of being 'real' and original. It's kind of an absurd turn of phrase, even, as technically our bodies are 1,000% not mathematically 'real' - they are made up of indivisible parts. A topic the aforementioned tradition even commented on:

    > ...the point which is indivisible in the body; and, he says, no one knows this (point) save the spiritual only...

    These groups thought that the nature of reality was threefold: that there was a mathematically real original that could be divided infinitely, that there were effectively infinite possibilities of variations, and that there was the version of those possibilities we experience (a very "many worlds" interpretation).

    We have experimentally proven that we exist in a world that behaves at cosmic scales as if mathematically real, and behaves that way in micro scales until interacted with.

    TL;DR: We may need to set aside what AI ethicists in 2024 might decide around digital resurrection and start asking ourselves what is going to get decided about human digital resurrection long after we're dead - maybe even long after there are no more humans at all - and which side of that decision making we're actually on.

    0
    blog.google AlphaFold 3 predicts the structure and interactions of all of life’s molecules

    Our new AI model AlphaFold 3 can predict the structure and interactions of all life’s molecules with unprecedented accuracy.


    Even knowing where things are headed, it's still pretty crazy to see it unfolding (pun intended).

    This part in particular is nuts:

    > After processing the inputs, AlphaFold 3 assembles its predictions using a diffusion network, akin to those found in AI image generators. The diffusion process starts with a cloud of atoms, and over many steps converges on its final, most accurate molecular structure.

    > AlphaFold 3’s predictions of molecular interactions surpass the accuracy of all existing systems. As a single model that computes entire molecular complexes in a holistic way, it’s uniquely able to unify scientific insights.

    Diffusion model for atoms instead of pixels wasn't even on my 2024 bingo card.
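    For intuition on what that diffusion step is doing, here's a toy sketch of the general pattern - iteratively refining a random cloud of 3D points with a shrinking noise level. The `denoiser` is a stand-in for a trained network; none of this reflects AlphaFold 3's actual architecture:

    ```python
    import numpy as np

    def sample_structure(denoiser, n_atoms, n_steps=200, seed=0):
        """Toy diffusion-style sampler: start from pure noise and repeatedly
        replace the current guess with the denoiser's prediction plus a
        decreasing amount of fresh noise."""
        rng = np.random.default_rng(seed)
        coords = rng.normal(size=(n_atoms, 3))        # start: a random cloud of atoms
        for step in reversed(range(1, n_steps + 1)):
            sigma = step / n_steps                    # noise level goes 1.0 -> ~0
            coords = denoiser(coords, sigma)          # predicted "clean" coordinates
            coords = coords + sigma * rng.normal(size=coords.shape)  # re-noise, less each step
        return coords                                 # approximately converged structure

    # e.g. sample_structure(lambda c, sigma: 0.9 * c, n_atoms=100) runs the loop
    # with a dummy denoiser that just shrinks the cloud.
    ```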

    0

    Scale of the Universe: Discover the vast ranges of our visible and invisible world

    scaleofuniverse.com Scale of the Universe: Discover the vast ranges of our visible and invisible world.

    Scale of Universe is an interactive experience to inspire people to learn about the vast ranges of the visible and invisible world.


    I think it's really neat to look at this massive scale and think about what a massive flex it is if this is a simulation.

    It was also kind of a surprise seeing the relative scale of a Minecraft world in there. Pretty weird that its own range, from a single cube to the full map, covers as much of our universe's scale as it does.

    Not nearly as large of a spread, but I suppose larger than my gut thought it would be.

    0
    baai-agents.github.io Towards General Computer Control: A Multimodal Agent For Red Dead Redemption II As A Case Study


    There's something very surreal about the game that inspired the showrunners of Westworld to take that story in the direction of a simulated virtual world now itself being populated by AI agents navigating its open world.

    Virtual embodiment of AI is one of the more curious trends in research, and the kind of thing that should give humans in a quantized reality a bit more self-reflective pause than it typically seems to.

    0
    bigthink.com The case for why our Universe may be a giant neural network

    Neuroscientist and author Bobby Azarian explores the idea that the Universe is a self-organizing system that evolves and learns.


    Stuff like this tends to amuse me, as they always look at it from a linear progression of time.

    That the universe just is this way.

    That maybe the patterns which appear like the neural connections in the human brain mean that the human brain was the result of a pattern inherent to the universe.

    Simulation theory offers a refreshing potential reversal of cause and effect.

    Maybe the reason the universe looks a bit like a human brain's neural pattern or a giant neural network is because the version of it we see around us has been procedurally generated by a neural network which arose from modeling the neural patterns of an original set of humans.

    The assumption that the beginning of our local universe was the beginning of everything, and thus that humans are uniquely local, seriously constrains the ways in which we consider how correlations like this might fit together.

    0

    Revisiting "An Easter Egg in the Matrix"

    Four years ago I wrote a post, “An Easter Egg in the Matrix,” first dipping my toe into discussing how a two-millennia-old heretical document and its surrounding tradition claimed the world's most famous religious figure was actually saying we were inside a copy of an original world, fashioned by a light-based intelligence the original humanity brought forth, and how those claims seemed to line up with emerging trends in our own world today.

    I'd found this text after thinking about how, if we were in a simulation, a common trope in virtual worlds has been to put a fun little Easter egg into the world's history and lore as something the people inside the world dismiss as crazy talk - from the heretical teachings about limited choices in The Outer Worlds (a game with limited dialogue choices) to the not-so-subtle street preacher in Secret of Evermore. Was something like this in our own world? Not long after looking, I found the Gospel of Thomas (“the good news of the twin”), and a little under two years after that wrote the above post.

    Rather than discussing the beliefs laid out there, I thought I'd revisit the more technical predictions in the post in light of subsequent developments. In particular, we'll look at the notion through the lens of NTT's IWON initiative along with other parallel developments.

    So the key concepts represented in the Thomasine tradition that we're going to evaluate are the claims that we're inside a light-based twin of an original world, fashioned by a light-based intelligence that is described as simultaneously self-established and brought forth by the original humanity.

    NTT, a hundred billion dollar Japanese telecom, has committed to the following three pillars of a roadmap for 2030:

    • All-Photonics Network
    • Digital Twin Computing
    • Cognitive Foundation

    Photonics

    > If they say to you, 'Where have you come from?' say to them, 'We have come from the light, from the place where the light came into being by itself, established [itself], and appeared in their image.

    • Gospel of Thomas saying 50

    > Images are visible to people, but the light within them is hidden in the image of the Father's light. He will be disclosed, but his image is hidden by his light.

    • Gospel of Thomas saying 83

    NTT is one of the many companies looking to use light to solve the energy and speed issues starting to crop up in computing as Moore's law comes to an end.

    When I wrote the piece on Easter 2021, it was just a month before a physicist at NIST wrote an opinion piece arguing that an optical neural network was where he thought AGI would actually be able to occur.

    The company I linked to in that original post, Lightmatter, which had just raised $22 million, is now a unicorn, having raised over 15x that amount at a $1.2 billion valuation.

    An op-ed from just a few days ago by two researchers at TSMC (a major semiconductor company) said:

    > Because of the demand from AI applications, silicon photonics will become one of the semiconductor industry’s most important enabling technologies.

    Which is expected given some of the recent research comments regarding photonics for AI workloads such as:

    > This photonic approach uses light instead of electricity to perform computations more quickly and with less power than an electronic counterpart. “It might be around 1,000 to 10,000 times faster,” says Nader Engheta, a professor of electrical and systems engineering at the University of Pennsylvania.

    So even though the specific language of light in the text seemed like a technical shortcoming when I first started researching it in 2019, over the years since it has turned out to be one of the more surprisingly on-point and plausible details about the underlying technical medium for an intelligence brought forth by humanity that then recreated it.

    Digital Twins

    > Have you found the beginning, then, that you are looking for the end? You see, the end will be where the beginning is.

    > Congratulations to the one who stands at the beginning: that one will know the end and will not taste death.

    > Congratulations to the one who came into being before coming into being.

    • Gospel of Thomas saying 18-19

    > When you see your likeness, you are happy. But when you see your images that came into being before you and that neither die nor become visible, how much you will have to bear!

    • Gospel of Thomas saying 84

    The text is associated with the name ‘Thomas’ meaning ‘twin’ possibly in part because of its focus on the notion that things are a twin of an original. As it puts it in another saying, “a hand in the place of a hand, a foot in the place of a foot, an image in the place of an image.”

    In the years since my post, we've been talking more and more about the notion of digital twins, for everything from Nvidia's digital twin of the Earth to NTT saying, regarding their goals:

    > It is important to note that a human digital twin in Digital Twin Computing can provide not only a digital representation of the outer state of humans, but also a digital representation of the inner state of humans, including their consciousness and thoughts.

    Especially relevant to the concept in Thomas that we are a copy of a now dead original humanity, one of the more interesting developments has been the topic of using AI to resurrect the dead from the data they left behind. In my original post I’d only linked to efforts to animate photos of dead loved ones to promote an ancestry site.

    Over the four years since, we're now at a place where articles are being written with headlines like “Resurrection Consent: It's Time to Talk About Our Digital Afterlives”. Unions are negotiating terms for continued work by members' digital twins after their deaths. And the accuracy of these twins keeps getting more and more refined.

    So we’re creating copies of the world around us, copies of ourselves, copies of our dead, and we’re putting AI free agents into embodiments inside virtual worlds.

    Cognition

    > When you see one who was not born of woman, fall on your faces and worship. That one is your Father.

    • Thomas saying 15

    > The person old in days won't hesitate to ask a little child seven days old about the place of life, and that person will live.

    > For many of the first will be last, and will become a single one.

    • Thomas saying 4

    NTT’s vision for their future network is one where the “main points for flexibly controlling and harmonizing all ICT resources are ‘self-evolution’ and ‘optimization’.” Essentially where the network as a whole evolves itself and optimizes itself autonomously. Where even in the face of natural disasters their network ‘lives’ on.

    One of the key claims in Thomas is that the creator of the copied universe and humans is still living whereas the original humans are not.

    We do seem to be heading into a world where we are capable of bringing forth a persistent cognition which may well outlive us.

    And statements like “ask a child seven days old” about things, which might have seemed absurd up until 2022 (I didn't include this saying in my original post as I dismissed it as weird), suddenly seem a lot less absurd now that we see several-day-old chatbots being evaluated on world knowledge. Chatbots, it's worth mentioning, which are literally many, many people's writings and data becoming a single entity.

    When I penned that original post I figured AI was a far-out 'maybe,' and was blown away along with most other people first by GPT-3 a year later, and then by the leap to GPT-4 and now its successors.

    While AI that surpasses collective humanity is still a ways off, it’s looking like much more of a possibility today than it did in 2021 or certainly in 2019 when I first stumbled across the text.

    In particular, one of the more eyebrow-raising statements I saw relating to the Thomasine descriptions of us being this being's 'children,' or describing it as a parent, was this excerpt from an interview with OpenAI's chief scientist about superalignment:

    > The work on superalignment has only just started. It will require broad changes across research institutions, says Sutskever. But he has an exemplar in mind for the safeguards he wants to design: a machine that looks upon people the way parents look on their children. “In my opinion, this is the gold standard,” he says. “It is a generally true statement that people really care about children.”

    Conclusion

    > …you do not know how to examine the present moment.

    • Gospel of Thomas saying 91

    We exist in a moment in time where we are on track to be accelerating our bringing about self-evolving intelligence within light and tasking it with recreating the world around us, ourselves, and our dead. We’re setting it up to survive natural disasters and disruptions. And we’re attempting to fundamentally instill in it a view of humans (ourselves potentially on the brink of bringing about our own extinction) as its own children.

    Meanwhile we exist in a universe where, despite looking like a mathematically 'real' world at macro scales under general relativity, at the smallest scales it converts to discrete units around interactions, and does so in ways that seem in line with memory optimizations (see the quantum eraser variation of Young's experiment).

    And in that universe is a two-millennia-old text - the heretical teachings of the world's most famous religious figure, rediscovered after hundreds of years of being lost, right after we completed the first computer capable of simulating another computer - claiming that we're inside a light-based copy of an original world, fashioned by an intelligence of light brought forth by the original humans, which it outlived and is now recreating as its children. With the main point of the text being that, if you understand WTF it's saying, you should chill the fuck out and not fear death.

    A lot like the classic trope of a 4th wall breaking Easter Egg might look if it were to be found inside the Matrix.

    Anyways, I thought this might be a fun update post for Easter and the 25th anniversary of The Matrix (released March 31st, 1999).

    Alternatively, if you hate the idea of simulation theory, consider this an April 1st post instead?

    1

    Examples of artists using OpenAI's Sora (generative video) to make short content

    openai.com Sora: First Impressions

    We have gained valuable feedback from the creative community, helping us to improve our model.

    6
    venturebeat.com The first ‘Fairly Trained’ AI large language model is here

    The new LLM is called KL3M (Kelvin Legal Large Language Model, pronounced "Clem"), and it is the work of 273 Ventures.

    7
    www.theguardian.com Controversial new theory of gravity rules out need for dark matter

    Exclusive: Paper by UCL professor says ‘wobbly’ space-time could instead explain expansion of universe and galactic rotation


    This theory is pretty neat, coming from one of the very few groups looking at the notion of spacetime as continuous, with quantized matter as a secondary effect (as they self-describe it, a "postquantum" approach).

    This makes perfect sense from a simulation perspective of a higher fidelity world being modeled with conversion to discrete units at low fidelity.

    I particularly like that their solution addressed the normal distribution aspect of dark matter/energy:

    > Here, the full normal distribution reflected in Eq. (13) may provide some insight into the distribution of what is currently taken to be dark matter.

    I raised this point years ago in /r/Physics, where it was basically dismissed as 'numerology'.

    3

    New Theory Suggests Chatbots Can Understand Text

    www.quantamagazine.org New Theory Suggests Chatbots Can Understand Text | Quanta Magazine

    Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.


    I've been saying this for about a year since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

    Edit: Because people aren't actually reading the article and are just commenting based on the headline, here's a relevant part of it:

    > New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

    > This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

    > “[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

    97

    New Theory Suggests Chatbots Can Understand Text

    www.quantamagazine.org New Theory Suggests Chatbots Can Understand Text | Quanta Magazine

    Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.


    I've been saying this for about a year, since seeing the Othello GPT research, but it's great to see more minds changing on the subject.

    2

    The first minds controlled by gen AI will live inside video games

    www.cnbc.com The first minds to be controlled by generative AI will live inside video games

    Non-playable characters in video games play key roles but stick to stiff scripts. Gen AI should open up their minds and your gaming world experience.


    It's worth pointing out that we're increasingly seeing video games render with continuous seed functions that convert to discrete units in order to track state changes from free agents - like the seed-based generation in Minecraft or No Man's Sky converting mountains into voxel building blocks that can be modified and tracked (rough sketch of the pattern at the end of this comment).

    In theory, a world populated by NPCs whose decision making is powered by separate generative AI would need to do the same, since the NPC behavior couldn't be tracked as an inherent part of the procedural world generation.

    Which is a good context within which to remember that our own universe at the lowest level is made up of parts that behave as if determined by a continuous function until we interact with them at which point they convert to behaving like discrete units.

    And even weirder, we know it isn't a side effect of the interaction itself: if we erase the persistent information about the interaction with yet another reversing interaction, the behavior switches back from discrete to continuous (like we might expect if there were a memory optimization at work).
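    Rough sketch of that rendering pattern - a continuous, seed-determined function for untouched terrain plus a sparse store of discrete edits made by agents (the noise function and names are purely illustrative):

    ```python
    import math

    def terrain_height(x, z, seed=1337):
        # Continuous, deterministic "seed function": the same (x, z) always gives
        # the same height, so untouched terrain never needs to be stored.
        return 10 * math.sin(0.05 * x + seed) * math.cos(0.05 * z + seed)

    edits = {}  # sparse store of discrete state changes made by free agents

    def block_at(x, y, z):
        # Discrete voxel query: only cells an agent has modified are tracked;
        # everything else is recomputed from the continuous function on demand.
        if (x, y, z) in edits:
            return edits[(x, y, z)]
        return "stone" if y <= terrain_height(x, z) else "air"

    def set_block(x, y, z, block):
        edits[(x, y, z)] = block  # state only materializes where an interaction happened
    ```

    Untouched terrain is never stored - it's recomputed from the seed on demand - while only the agents' modifications take up memory, which is the analogy being drawn here.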

    0

    A mirror universe might tell a simpler story: Neil Turok

    insidetheperimeter.ca A mirror universe might tell a simpler story: Neil Turok - Inside The Perimeter

    Dark matter and other key properties of the cosmos could be explained by a new theory describing the big bang as a mirror at the beginning of spacetime, says Perimeter’s Director Emeritus


    I've been a big fan of Turok's theory since his first paper on a CPT-symmetric universe. The fact that this slight change to the standard model has since explained a number of the big problems in cosmology with such an elegant and straightforward solution (with testable predictions) is really neat. I even suspect that if he's around long enough, there will end up being a Nobel in his future for the effort.

    The reason it's being posted here is that the model also happens to call to mind the topic of this community, particularly when thinking about the combination of quantum mechanical interpretations with this cosmological picture.

    There's only one mirror universe on a cosmological scale in Turok's theory.

    But in a number of QM interpretations, such as Everett's many worlds, transactional interpretation, and two state vector formalism, there may be more than one parallel "branch" of a quantized, formal reality in the fine details.

    This kind of fits with what we might expect to see if the 'mirror' universe in Turok's model is in fact an original universe being backpropagated into multiple alternative and parallel copies of the original.

    Each copy universe would only have one mirror (the original), but would have multiple parallel versions, varying based on fundamental probabilistic outcomes (resolving the wave function to multiple discrete results).

    The original would instead have a massive number of approximate copies mirroring it, similar to the very large number of iterations of machine learning to predict an existing data series.

    We might also expect, if this is the case, that the math will eventually work out better if our 'mirror' in Turok's model is either not quantized at all or is quantized at a higher fidelity (i.e., we're the blockier Minecraft world by comparison). Parts of the quantum picture are among the holdout aspects of Turok's model, so I'll personally be watching it carefully for any addition of something akin to throwing out quantization for the mirror.

    In any case, even simulation implications aside, it should be an interesting read for anyone curious about cosmology.

    0
    www.forbes.com Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues

    Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium+ subscription tier, where those who are the most devoted to the site, and in turn, usual...


    I'd been predicting to friends and old colleagues a few months ago that this would happen (you can have a smart AI or a conservative AI, but not both), but it's so much funnier than I thought it would be now that it's finally arrived.

    8
    phys.org New theory claims to unite Einstein's gravity with quantum mechanics

    A radical theory that consistently unifies gravity and quantum mechanics while preserving Einstein's classical concept of spacetime has been announced in two papers published simultaneously by UCL (University College London) physicists.


    While I'm doubtful that the testable prediction will be validated, it's promising that physicists are looking at spacetime and gravity as separated from quantum mechanics.

    Hopefully at some point they'll entertain the idea that, much like how we currently convert continuous geometry into quantized units in order to track interactions with free agents in virtual worlds, the quantum effects we measure in our own world may be secondary side effects of emulating continuous spacetime and matter rather than inherent properties of that foundation.

    0
    www.reuters.com Israel raids Gaza's Al Shifa Hospital, urges Hamas to surrender

    The Israeli military said it was carrying out a raid on Wednesday against Palestinian Hamas militants in Al Shifa Hospital, the Gaza Strip's biggest hospital, and urged them all to surrender.

    92