imadabouzu @awful.systems
Posts 0
Comments 54
Ilya Sutskever's new AI super-intelligence startup raises a billion dollars. Unclear what they actually do.
  • I'm actually not convinced that AI meaningfully beyond human capability makes any sense, either. The most likely thing is that after stopping the imitation game, an AI developed further would just... have different goals than us. Heck, it might not even look intelligent at all to half of human observers.

    For instance, does the Sun count as a super intelligence? It has far more capability than any human, or humanity as a whole, on the current time scale.

  • Ilya Sutskever's new AI super-intelligence startup raises a billion dollars. Unclear what they actually do.
  • I don't get it. If scaling is all you need, what does a "cracked team" of 5 mean in the end? Nothing?

    What's the difference between superintelligence being scaling, and superintelligence being whatever happens? Can someone explain to me the difference between what is and what SUPER is? When someone gives me the definition of superintelligence as "the power to make anything happen," I always beg, again, "and how is that different, precisely, from not that?"

    The whole project is tautological.

  • Bostrom's advice for the ethical treatment of LLMs: remind them to be happy
  • When it comes to cloning or copying, I always have to remind people: at least half of what you are today is the environment of today. And your clone X time in the future won't and can't have that.

    The same thing is likely true for these models. Inflate them again 100 years in the future, and maybe they're interesting to inspect as a historical artifact, but they most certainly wouldn't be used the same way as they had been here and now. It'd just be something different.

    Which would beg the question, why?

    I feel like a subset of sci-fi and philosophical meandering really is just increasingly convoluted paths of trying to avoid or come to terms with death as a possibly necessary component of life.

  • Disapproving of automated plagiarism is classist ableism, actually: Nanowrimo
  • I don't entirely agree, though.

    That WAS the point of NaNoWriMo in the beginning. I went there because I wanted feedback, and feedback from people who cared (no offense to my friends, but they weren't interested in my writing, and that's totes cool).

    I think it is a valid core desire to want constructive feedback on your work, and to acknowledge that you are not a complete perspective, even on yourself. Whether the AI can or does provide that is questionable, but the starting place, "I want /something/ accessible to be a rubber ducky," is valid.

    My main concern here is, obviously, that it feels like NaNoWriMo is taking the easy way out for the $$$ and likely its Silicon Valley connections. Wouldn't it be nice if NaNoWriMo said something like, "Whatever technology tools exist today or tomorrow, we stand for writers' essential role in the process, and against the unethical labor implications of indiscriminate, non-consensual machine learning as the basis for any process."

  • NaNoWriMo gets AI sponsor, says not writing your novel with AI is ‘classist and ableist’
  • NovelAI

    I'll step up and say: I think this is fine, and I support your use. I get it. I think there are valid use cases for AI where the unethical labor practices become unnecessary, and where ultimately the work still starts and ends with you.

    In a world, maybe not too far in the future, where copyright law is strengthened, where artist and writer consent is respected, and where it becomes cheap and easy to use a smaller model trained on licensed data and your own inputs, I can definitely see how a contextual autocomplete that follows your style and makes suggestions is totally useful and ethical.

    But I understand people's visceral reaction to the current world. I'd say it's ok to stay your course.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 September 2024
  • Maybe hot take, but when I see young people (recent graduates) doing questionable things in pursuit of attention and a career, I cut them some slack.

    Like, it's hard for me to be critical of someone starting off, trying to make it in, um, gestures at all of this, world today. Besides, they'll get the sense knocked into them through pain and tears soon enough.

    I don't find it strange or malicious; I see it as a symptom of why it was easier for us to find honest work then, and harder for them now.

  • Bostrom's advice for the ethical treatment of LLMs: remind them to be happy
  • This kind of thing is a fluff piece, meant to be suggestive but ultimately saying nothing at all. There are many reasons to hate Bostrom (just read his words), but this is two philosophers who apparently need attention because they have nothing useful to say. All of Bostrom's points here could be summed up as "don't piss on things, generally speaking."

    As for consciousness: honestly, my brain turns off instantly when someone tries to make any point about it. Seriously though, does anyone actually use the category of "conscious / unconscious" to make any decision?

    I don't disrespect the dead (not conscious). I don't bother animals or insects when I have no business with them (conscious? maybe not conscious?). I don't treat my furniture or clothes like shit, and am generally pleased they exist (not conscious). When encountering something new or unusual, I just ask myself, "is it going to bite me?" first (consciousness is irrelevant). I know some of my actions do harm, either directly or indirectly, to other things, by eating, or consuming, or making mistakes, or just being. But I don't presume to be a hero or arbiter of moral integrity; I merely acknowledge it and do what I can. Again, consciousness is kind of irrelevant.

    Does anyone run consciousness litmus tests on their friends or associates first before interacting, ever? If so, does it sting?

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024
  • Oh man, anyone who runs on such existential maximalism has such infinite power to state things as if their conclusion has only one possible meaning.

    How about invoking the Monkey's Paw: what if every statement is true, but just not in the way they think?

    1. A perfect memory which is infinitely copyable and scalable is possible. And it's called: all the things in nature, in sum.
    2. In fact, we're already there today, because it is, quite literally, the sum of nature. The question for tomorrow is, "so, like, what else is possible?"
    3. And it might not even have to try or do anything at all, especially if we don't bother to save ourselves from ecological disaster.
    4. What we don't know can literally be anything. That's why it's important not to project fantasy, but to conserve the fragile beauty of what you have, regardless of whether things will "one day fall apart". Death and taxes, mate.

    And Yud can be both technically right one day and someone whose interpretations today are dumb and worthy of mockery.

  • The UK wants AI in schools to mark kids’ homework
  • The issue isn't even that AI is doing grading, really. There are worlds where using technology to assist in grading isn't a loss for a student.

    The issue is that all of this is an excuse not to invest in students at all, and the turn here is purely a symptom of that. Because in a world where we invest in technology to assist in education, the first thing that happens is we recognize the completely unsexy and obvious things that also need to happen, like funding for maintenance of school buildings, basic supplies, balancing class sizes by hiring and redistricting, you know. The obvious shit.

    But those things don't attract the attention of the debt metabolism; they're too obvious and don't offer more leverage for short-term futures. To believe there is a future for the next generation is inherently risky and ambiguous. You can only invest in it if you actually care.

  • No, OpenAI Strawberry isn’t imminent — but it sure trolled the AI doomers
  • Yeah, this lines up with what I have heard, too. There is always talk of new models, but even the stuff in the pipeline, not yet released, isn't that distinguishable from the existing stuff.

    The best explanation of Strawberry is that it isn't any particular thing; it's rather a marketing and project framing, both internal and external, that amounts to... cost optimizations and hype driving. Shift the goalposts, tell two stories. One: if we just get affordable enough, genAI in a loop really can do everything (probably much more modest in practice: when genAI gets cheap enough by several means, it'll have several more modest and generally useful use cases, and it also won't have to be so legally grey). The other: we're already there, and one day you'll wake up and your brain won't be good enough to matter anymore, or something.

    Again, this is apparently the future of software releases. :/

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024
  • Yes, and this is what I keep hearing internally as well.

    Even OpenAI employees admit frankly that the current models are nothing to be scared of, and that the advancements have largely been in product and economics. But also, rattles bones, AGI is still definitely coming in a few years, maybe. And why aren't world governments taking THAT seriously yet?

    It's. It's marketing. This is the future of a software release I guess.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 25 August 2024
  • Yeah, that's totally fair. I was just tailgating the sneer, I guess.

    "Almost never do they find a solution in anything on the left side of politics."

    That's a good point, and I think it speaks well to their savior complex. They want above all to push the guilt and discomfort of social issues away so they don't have to live in the discomfort of reality. Dogma does this, and it really doesn't matter whether it wears the veneer of science or of mythology.

  • Silicon Valley getting back to its roots: the Y Combinator cruise missile
  • I kind of wonder if this whole movement of rationalists believing they can "just" make things better than the people already in the field comes from the contracting sense that being rich and having an expensive educational background may, in the future, matter less than background experience and situational context: two things they loathe?

  • Google is no longer asking — feed the AI or you’re not in search results
  • It's... it's almost as if the law about shareholder value was intended as a metaphor for accountability, not a literal, reductive claim that results in an ouroboros. Almost like our economic system is supposed to be a means, not an end in and of itself?

    No. Definitely can't be that.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 25 August 2024
  • Procreate is an example of what good AI deployment looks like. They do use technology, and even machine learning, but they do it in obviously constructive scopes, right where the artist's attention is focused. And they're committed to that because... there's no value for them in being a thin wrapper around an already completely commoditized technology that's on its way to the courtroom to be challenged by landmark rulings, with no more ceiling to grow into whooooooops.