locallynonlinear @awful.systems
Posts 1
Comments 92
We can protect artists
  • Ha! Nope, not buying it.

    nasty license Ironic, considering that their work directly builds upon Stable Diffusion.

    Funny you mention licenses, since stable diffusion and leading AI models were built on labor exploitation. When this issue is finally settled by law, history will not look back well on you.

    So I’m not allowed to have the discussion I’m currently having

    Doesn't seem to prevent you from doing it anyways. Does any license slow you down? Nope.

    nor to include it in any Linux distro

    Not sure that's true, but also unnecessary. Artists don't care about this or need it to be. I think it's a disingenuous argument, made in the astronaut suit you wear on the high horse drawn from work you stole from other people.

    This is not only an admission of failure but a roadmap for anybody who wants to work around Nightshade.

    Sounds like an admission of success given that you have to step out of the shadows to tell artists on mastodon not to use it because, ahem, license issues?????????

    No. Listen. The point is to alter the economics, to make training on images from the internet actively dangerous. It doesn't even take much. A small amount of actively poisoned internet data forces future models to use alignment work to bypass it, increasing the (thin) marginal costs of training on work cheated out of people.

    Shame on you dude.

    If you want to hurt the capitalists, consider exfiltrating weights directly, as was done with LLaMa, to ruin their moats.

    Good luck on competing in the arms race to use other people's stuff.

    @[email protected] can we ban the grifter?

  • We can protect artists

    Remember how we were told that genAI learns "just like humans", and how the law has nothing to say about fair use, and I guess now all art is owned by big tech companies?

    Well, of course it's not true. Exploiting a few of the ways in which genAI --is not-- like human learners, artists can filter their digital art in such a way that if a genAI tool consumes it, it actively reduces the quality of the model, undoing generalization and bleeding into neighboring concepts.

    Can an AI tool be used to undo this obfuscation? Yes. At scale, however, doing so drives compute costs up more and more. This also looks like an improvable method, not a dead end -- adversarial input design is a growing field of machine learning, with more and more techniques becoming widely available. Imagine it as a sort of "cryptography for semantics", in the sense that it imposes asymmetrical work on AI consumers (while leaving the human eye much less affected). There's a toy sketch of the underlying idea at the end of this comment.

    Now we just need labor laws to catch up.

    Wouldn't it be funny if not only does generative AI not lead to a boring dystopia, but the proliferation and expansion of this and similar techniques to protect human meaning eventually put a lot of grifters out of business?

    We must have faith in the dark times. Share this with your artist friends far and wide!
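
    For anyone curious what "cryptography for semantics" might look like mechanically, here's a minimal sketch of the general adversarial-perturbation idea this family of tools builds on. To be clear, this is not Nightshade's actual method; the tiny linear "encoder", the random "artwork" and "decoy" images, and the epsilon budget below are all made-up placeholders. It only shows the asymmetry: a cheap optimization on the artist's side, a barely visible pixel change, and a shifted embedding on the scraper's side.

    ```python
    # Toy sketch (NOT Nightshade's algorithm): nudge an image within a tiny pixel
    # budget so a feature extractor embeds it near an unrelated "decoy" concept.
    # Everything here (encoder, images, constants) is a made-up placeholder.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Stand-in for a real image encoder; any differentiable extractor works for the demo.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))

    artwork = torch.rand(1, 3, 32, 32)   # the artist's original image (random placeholder)
    decoy = torch.rand(1, 3, 32, 32)     # an unrelated concept to poison toward (placeholder)

    with torch.no_grad():
        target = encoder(decoy)          # embedding we want the artwork to be mistaken for

    epsilon = 4 / 255                    # perceptibility budget: keep pixel changes tiny
    perturbed = artwork.clone().requires_grad_(True)

    for _ in range(50):
        loss = F.mse_loss(encoder(perturbed), target)
        loss.backward()
        with torch.no_grad():
            # Step toward the decoy embedding, then project back into the epsilon-ball
            # around the original so the human eye barely notices the change.
            perturbed -= 0.01 * perturbed.grad.sign()
            delta = (perturbed - artwork).clamp(-epsilon, epsilon)
            perturbed.copy_((artwork + delta).clamp(0.0, 1.0))
        perturbed.grad.zero_()

    print("max pixel change:", (perturbed - artwork).abs().max().item())
    print("distance to decoy embedding:", F.mse_loss(encoder(perturbed), target).item())
    ```

    The projection step is where the asymmetry comes from: the artist pays one small optimization per image, while anyone scraping at scale has to detect or undo perturbations they can't see, across millions of images, against methods that keep changing.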

    Cultists Draw a Boogeyman on Cardboard, Become Afraid Of It
  • Scientists terrified to discover that language, the thing they trained into a highly flexible matrix of nearly arbitrary numbers, can end up existing in multiple forms, including forms unintended by the matrix!

    What happens next, the kids lie to their parents so they can go out partying after dark? The fall of humanity!

  • [SOLVED] cannot login using mobile Firefox
  • Fair!

    That said, don't be a stranger to family and friends about this kind of thing. I've been surprised to find out people I knew were hiding their unemployment from me for months, which is totally fine and understandable -- they don't owe me anything -- but sometimes, faith that people want to help each other with meaningful things when times are hard is one of those things worth testing.

  • Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability.
  • Also seems relevant

    Like in the deer, the large-scale target morphology can be revised – the pattern memory re-written – by transient physiological experience. The genetics sets the hardware with a default pattern outcome, but like any good cognitive system, it has a re-writable memory that learns from experience.

  • Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability.
  • I wonder if Scott is the person who stood up during Michael Levin's talk on (non-genetic) bio-electric circuits storing morphological memory across time and said, “those animals can’t exist!”

    Just like neuroscientists try to read out and decode the memories inside a living brain, we can now read and write (a little bit…) the anatomical goals and memories of the collective intelligence of morphogenesis. The first time I presented this at a conference – genetically wild-type worms with a drastically different, rewritten, permanent, target morphology – someone stood up and said that this was impossible and “those animals can’t exist”. Here’s a video taken by Junji Morokuma, of them hanging out.

  • here in Top Pedophiles Of Twitter, my "friend" thinks about race so very little that he shit-tests every new person he meets with a racial slur
  • you forgot the last stage of the evolution,

    you'll later find out that people were talking about you, your actions, your words, and that being ghosted was in fact the consequence of your actions, and then you'll have one last opportunity to turn it all around

    1. do some self-introspection, reconcile what actually happened vs what you intended to happen, and decide that it is in fact possible to create relationships without trying to meta-discomfort people for your purposes specifically

    or

    1. wokeism is the reason, so this time you need to be even MORE obnoxious, to filter people out who would talk behind your back even strongester! (repeat from the top of your flow)
  • good news, everyone! eliezer is writing fiction again
  • I love DnD and TTRPGs. I even love watching some streams when the quality is high. But I'm with you *slides in pocket protector* -- I don't generally like this new wave of people who bring the expectation to my tables that every scene and every situation is a massive melodrama Mary Sue projection for their OC that must be maximized.

    What was that about wit and brevity? Simple done well?

  • "if you're not stupid, it doesn't matter if COVID was a lab leak"
  • So far, there has been zero or one[1] lab leak that led to a world-wide pandemic. Before COVID, I doubt anyone was even thinking about the probabilities of a lab leak leading to a worldwide pandemic.

    So, actually, many people were thinking about lab leaks, and the potential of a worldwide pandemic, despite Scott's suggestion that stupid people weren't. For years now, bioengineering has been concerned with accidental lab leaks, because it was widely understood that the risk existed.

    But the reality is that guessing at probabilities of this sort of thing still doesn't change anything. It's up to labs to pursue safety protocols, which happens at the economic edge of the opportunity vs the material and mental cost of being diligent. Lab leaks may not change the probabilities much, but yes, the event of one occurring does cause trauma, which acts not as some Bayesian correction but as an emotional correction, so that people's motivation to at least pay more attention increases for a short while.

    Other than that, the greatest rationalist on earth can't do anything with their statistics about lab leaks.

    This is the best paradox. Not only is Scott wrong to suggest people shouldn't be concerned about major events (the traumatic update to an individual's memory IS valuable), but he's wrong to suggest that anything he or anyone else does after updating their probabilities could possibly help them prepare meaningfully.

    He's the most hilarious kind of wrong.

  • "if you're not stupid, it doesn't matter if COVID was a lab leak"
  • Ah, if only the world wasn't so full of "stupid people" updating their bayesians based off things they see on the news, because you should already be worried about and calculating your distributions for... *inhales deeply* terrorist nuclear attacks, mass shootings, lab leaks, famine, natural disasters, murder, sexual harassment, conmen, the decay of society, copyright, taxes, spitting into the wind, your genealogy results, comets hitting the earth, UFOs, politics of any and every kind, and tripping on your shoelaces.

    What... insight did any of this provide? Seriously. Analytical statistics is a mathematically consistent means of being technically not wrong, while using a lot of words, in order to disagree on feelings, and yet saying nothing.

    Risk management is not, in fact, a statistical question. It's an economics question about your opportunities. It's why prepping is better seen as a hobby and a coping mechanism, not as a viable means of surviving the apocalypse. It's why even when an EA uses their superpowers of Bayesian rationality, the answer in the magic eight ball is always just "try to make money, stupid".