BrickedKeyboard @awful.systems
Posts 1
Comments 40
I am extremely curious what the general take around here is on the Singularity
  • now if that isn’t just the adderall talking

    Nail on the head. Especially in internet/'tech bro' culture. All my leads at work also have that kind of "extreme OCD" attitude. Sorry if that came across as offensive; I didn't mean it that way.

    The rest of your post is ironically very much something that Eliezer posits a superintelligence would be able to do. Or from the anime Death Note. I use a few words or phrases, you analyze the shit out of them and try to extract all the information you can and have concluded all this stuff like

    opening gambit

    “amongst friends”

    hiding all sorts of opinions behind a borrowed language

    guff about “discovering reality”

    real demands as “getting with the right programme”,

    allegedly, scoring points “off each other”

    “Off each other” was another weasel phrase

    you know that at least at first blush you weren’t scoring points off anyone

    See, everything you wrote above is a possibly correct interpretation of what I wrote. It's like English-lit analysis after the author's dead. Eliezer posits a superintelligence could use this kind of analysis to convince operators with admin authority to break the rules, and L in Death Note uses it to almost catch the killer.

    It's also all false in this case (which is also why a superintelligence probably can't actually do this). I've been on the internet long enough to know it's almost impossible to convince someone of anything, unless they were already willing and you just link some facts they didn't know about. So my gambit was actually something very different.

    Do you know how you get people to answer a question on the internet? You post something that's wrong*. And it clearly worked: there's more discussion in this thread than in this entire forum across several pages, maybe since it was created.

    *Ironically, in this case I posted what I think is the correct answer, but it disagrees with your ontology. If I wanted lesswrongers to comment on my post I would need a different OP.

  • I am extremely curious what the general take around here is on the Singularity
  • Which is fine. The bigger topic is: could you leave a religion if the priest's powers were real*, even if the organization itself was questionable?

    *Real as in generally held to be real by all the major institutions in the world around you. Most world governments and stock-market investors are investing in AI; they believe they will get an ROI somehow.

  • I am extremely curious what the general take around here is on the Singularity
  • Next time it would be polite to answer the fucking question.

    Sorry sir:

    I have to ask, on the matter of (2): why?

    I think I answered this.

    What’s being signified when you point to “boomer forums”? That’s an “among friends” usage: you’re free to denigrate the boomer fora here. And then once again you don’t know yet if this is one of those “boomer forums”, or you wouldn’t have to ask.

    What people in their droves are now desperate to ask, I will ask too: which is it, dummy? Take the stopper out of your speech hole and tell us how you really feel.

    I am not sure what you are asking here, sir. It's well known to those in the AI industry that a profound change is upon us, that GPT-4 shows generality for its domain, and that robotics generality is likely also possible using a variant technique. So individuals unaware of this tend to be retired people who have no survival need to learn any new skills, like my boomer relatives. I apologize for using an ageist slur.

  • Building an entirely new city from scratch in Northern California: an environmentally friendly way of providing more housing, according to some silicon valley types
  • Doesn't the futurism/hopium idea of building an ideal city go back to Disney, who more or less has feudal stronghold rights over Florida?

    https://en.wikipedia.org/wiki/EPCOT_(concept)

    Because of these two modes of transportation, residents of EPCOT would not need cars. If a resident owned a car, it would be used "only for weekend pleasure trips."[citation needed] The streets for cars would be kept separate from the main pedestrian areas. The main roads for both cars and supply trucks would travel underneath the city core, eliminating the risk of pedestrian accidents. This was also based on the concept that Walt Disney devised for Disneyland. He did not want his guests to see behind-the-scenes activity, such as supply trucks delivering goods to the city. Like the Magic Kingdom in Walt Disney World, all supplies are discreetly delivered via tunnels.

    Or The Line in Saudi Arabia.

    Definitely sneer-worthy, though it's sometimes worked. Napoleon III had Paris redesigned (Haussmann's renovation), which was probably a good thing. But the city is stuck with that design to this day, which is probably bad.

  • I am extremely curious what the general take around here is on the Singularity
  • The counter-argument is GPT-4. For the domains this machine has been trained on, it shows a large amount of generality - it captures a lot of that real-world complexity and dirtiness. Reinforcement learning can make it better.

    Or in essence, if you collect colossal amounts of information, yes pirated from humans, and then choose what to do next by 'what would a human do', this does seem to solve the generality problem. You then fix your mistakes with RL updates when the machine fails on a real world task.

  • I am extremely curious what the general take around here is on the Singularity
  • Did this happen with Amazon? The VC money is a catalyst. It's advancing money for a share of future revenues. If AI companies can establish a genuine business that collects revenue from customers they can reinvest some of that money into improving the model and so on.

    OpenAI specifically seems to have needed about 5 months to reach $1 billion in annual revenue; by the way tech companies are valued, that's already more than $10 billion of intrinsic value.

    If they can't - if the AI models remain too stupid to pay for, then obviously there will be another AI winter.

    https://fortune.com/2023/08/30/chatgpt-creator-openai-earnings-80-million-a-month-1-billion-annual-revenue-540-million-loss-sam-altman/
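    The run-rate claim above is simple annualization; a minimal sketch of the arithmetic, assuming the ~$80M/month figure from the Fortune article and a 10x revenue multiple (the multiple is my assumption for illustration, not a figure from the article):

```python
# Annualizing a monthly revenue figure and applying a revenue multiple.
monthly_revenue = 80e6     # ~$80M/month, per the Fortune article
annual_run_rate = monthly_revenue * 12

# Assumed tech-company revenue multiple (illustrative, not sourced).
revenue_multiple = 10
implied_value = annual_run_rate * revenue_multiple

print(f"annual run rate: ${annual_run_rate / 1e9:.2f}B")  # ~$0.96B
print(f"implied value:   ${implied_value / 1e9:.1f}B")    # ~$9.6B
```

    This is the "run rate" convention: one month's revenue extrapolated to a year, not audited annual revenue.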

  • I am extremely curious what the general take around here is on the Singularity
  • I agree completely. This is exactly where I break with Eliezer's model. Yes obviously an AI system that can self improve can only do so until it's either (1) the best algorithm that can run on the server farm (2) finding a better algorithm takes more compute than is worth the investment in current compute

    That's not a god. You do this in an AI experiment now and it might crap out at double or less the starting performance and not even be above the SOTA.

    But if robots can build robots, and the current AI progress shows a way to do it (foundation model on human tool manipulation), then...

    Genuinely asking, I don't think it's "religion" to suggest that a huge speedup in global GDP would be a dramatic event.

  • I am extremely curious what the general take around here is on the Singularity
  • Currently, the global economy doubles every ~23 years. Robots building robots and robot-making equipment can probably double faster than that. It won't be in a week or a month; energy requirements alone limit how fast it can happen.

    Suppose the doubling time is 5 years, just to put a number on it. The economy's growth rate would then be roughly 4.6 times what it was previously (23/5). This continues until the solar system runs out of matter.

    Is this a relevant event? Does it qualify as a singularity? Genuinely asking, how have you "priced in" this possibility in your world view?
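    The comparison of doubling times above is just exponential-growth arithmetic; a minimal sketch, assuming continuous compounding:

```python
import math

def annual_growth_rate(doubling_time_years: float) -> float:
    """Continuous-compounding growth rate implied by a doubling time."""
    return math.log(2) / doubling_time_years

baseline = annual_growth_rate(23)    # ~3%/year, the historical figure above
accelerated = annual_growth_rate(5)  # the hypothetical robots-building-robots rate

print(f"baseline:    {baseline:.3f} per year")
print(f"accelerated: {accelerated:.3f} per year")
print(f"speedup:     {accelerated / baseline:.1f}x")  # 23/5 = 4.6x
```

    The speedup ratio is just the ratio of doubling times, whatever compounding convention you pick.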

  • I am extremely curious what the general take around here is on the Singularity
  • I wanted to know what you know and I don't. If rationalists are all scammers and not genuinely trying to be, per the name, "less wrong" in their view of reality, then what's your model of reality? What do you know? So far, unfortunately, I haven't seen anything. Sneer Club's "reality model" seems to be "whatever the mainstream average person knows + 1 physicist", and it exists to make fun of the mistakes of rationalists while, I assume, ignoring any successes if there are any.

    Which is fine, I guess? Mainstream knowledge is probably usually correct. It's just that I already know it; there's nothing to be learned here.

  • I am extremely curious what the general take around here is on the Singularity
  • This pattern shows up often when people criticize Tesla or SpaceX. And yeah, if you measure "current reality" vs "promises of their hype man/lead shitposter and internet troll", absolutely. Tesla probably will never achieve full self-driving using anything like its current approach. But if you compare Tesla to other automakers - to most automakers that ever existed - or SpaceX to any rocket company since 1970, there's no comparison. If you're going to compare the internet to pre-internet, compare it to BBSes you would access via modem, or to fax machines, or libraries. No comparison.

    Similarly, you should compare GPT-4, and the next large model to be released, Gemini, vs all AI software for all time. There's no comparison.

  • I am extremely curious what the general take around here is on the Singularity
  • take some time and read this

    I read it. I appreciated the point that human perception of current AI performance can scam us, though this is nothing new. People were fooled by Eliza.

    It's a weak argument, though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means "if the machine is given a task, what is the probability it completes the task successfully?" Theoretically, an infinite Chinese room can have functional intelligence (the machine just looks up the sequence of steps for any given task).

    People have benchmarked GPT-4, and it has general functional intelligence at tasks that can be done on a computer. You can also just go pay $20 a month and try it. It's below human level overall, I think, but still surprisingly strong given it's emergent behavior from computing tokens.
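    The "functional intelligence" definition above reduces to estimating a task-success rate from trials. A minimal sketch (the agent and tasks here are hypothetical stand-ins, not a real benchmark):

```python
from typing import Callable, Iterable

def success_rate(agent: Callable[[str], bool], tasks: Iterable[str]) -> float:
    """Empirical functional intelligence: fraction of tasks the agent completes.

    `agent` is a hypothetical callable returning True on task success.
    """
    results = [agent(t) for t in tasks]
    return sum(results) / len(results)

# Toy stand-in agent that only "solves" addition-style tasks.
toy_agent = lambda task: task.startswith("add")

print(success_rate(toy_agent, ["add 2 2", "write an essay", "add 1 3", "fold laundry"]))  # 0.5
```

    Real benchmarks differ mainly in how task success is judged, but the headline number is this kind of ratio.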

  • I am extremely curious what the general take around here is on the Singularity
  • I appreciated this post because it never occurred to me that the thumb might be on the scales for the "rules for discourse" that seem to be the norm around the rat forums. I personally ignore most of it. However, the "epistemic status" rat phrase is simply saying, "I know we humans are biased observers; this is where I'm coming from". If the topic were renewable energy and I were the head of extraction at BP, you could expect that whatever I have to say is probably biased against renewable energy.

    My other thought reading this was: what about the truth? Maybe the mainstream is correct about everything. "Sneer club" seems to be mostly mainstream opinions. That's fine, I guess, but the mainstream is sometimes wrong about issues that have been poorly examined, or about near-future events. The collective opinions of everyone don't really price in things that are about to happen, even if they're obvious to experts. For example, the mainstream opinion on COVID usually lagged several weeks behind Zvi's posts on lesswrong.

    Where I am going with this: you can point out bad arguments on my part, but in the end, does truth matter? Are we here to score points off each other, or to share what we think reality is, or very soon will be?

  • I am extremely curious what the general take around here is on the Singularity
  • To be clear - maybe you will be unimpressed with this - scale matters. I said in the above text "10 times current industrial output. Within 17 years, RMR, robots making robots." If you already priced that in, OK, that's an acceptable position, but the magnitude of a singularity matters, not just that it's happening.

  • I am extremely curious what the general take around here is on the Singularity
  • And just to be clear, for one to be "lost in the AI religion", the claims have to be false, correct? We will not see the things I mentioned within the timeframes I gave (7 years, 17 years - and implicitly, if there is no immediate progress towards the nearer deadline within 1 year, it's not going to happen).

    Google's Gemini will not be multimodal or capable of learning to do tasks by reinforcement learning at human level, right? Robotics foundation models will not work.

  • I am extremely curious what the general take around here is on the Singularity

    First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses. That somehow 'rationalists' are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they need to commit any violent act needed to stop AI from being developed.

    The flaw here is that there are 8 billion people alive right now, and we don't actually know what the future is. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying "fuck 'em". This could only be worth it if you actually somehow knew trillions of people were going to exist, had a low future discount rate, and so on. This seems deeply flawed, and seems to be one of the points here.

    But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can't solve - robotics, continuous learning, module reuse; the things needed to reach a general level of capabilities and for AI to do many but not all human jobs - are near-future. I can link DeepMind papers with all of these, published in 2022 or 2023.

    And if AI can be general and control robots, and since making robots is a task human technicians and other workers can do, this does mean a form of Singularity is possible. Maybe not the breathless utopia by Ray Kurzweil but a fuckton of robots.

    So I was wondering what the people here generally think. There are "boomer" forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as hypesters who collect 300k to edit JavaScript and drive Teslas*.

    I also have noticed that the whole rationalist schtick of "what is your probability" seems like asking for "joint probabilities", aka smoke a joint and give a probability.

    Here's my questions:

    1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?

    2. Do you consider it likely that, before 2040, those domains will include robotics?

    3. If AI systems can control robotics, do you believe a form of Singularity will happen? This means hard exponential growth in the number of robots, scaling past all industry on Earth today by at least 1 order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

    4. Do you think a mass transition, where most human jobs we have now are replaced by AI systems, will happen before 2040?

    5. Is AI system design an issue? I hate to say "alignment", because I think that's hopeless wankery by non-software-engineers, but given these will be robot-controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

    *"epistemic status": I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas..
