
Singularity

  • The kind of singularity that will never be..

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/Unusual-Possibility5 on 2023-06-27 23:49:46+00:00. *** Poor Ultron, his writers never used him to his full potential, and then he gets shit on.

    Let's take a moment of silence to appreciate him.

    0
  • Why is Chatgpt getting all the hype instead of Claude ?

    The original was posted on /r/singularity by /u/syntrop on 2023-06-27 23:12:16+00:00. *** smarter since forever, still ain't gotten any flex (⌐■_■)

    0
  • www.visualcapitalist.com Ranking Industries by Their Potential for AI Automation

    AI automation is expected to impact some industries more than others. See the latest projections in this infographic.

    The original was posted on /r/singularity by /u/Tkins on 2023-06-27 23:03:07+00:00. *** The issue I have with a lot of these reports is that they don't include time frames. Is this in a year, five, ten, or a century? I feel this is just as important as the percentage of replacement.

    0
  • Nothing will stop AI

    The original was posted on /r/singularity by /u/Sure_Cicada_4459 on 2023-06-27 20:36:34+00:00. *** There is a lot of talk about slowing down AI by regulating it somehow until we can solve alignment. Some of the most popular proposals are essentially compute governance: we try to limit the amount of compute someone has available, requiring a license of sorts to acquire it. In theory you want to stop the most dangerous capabilities from emerging in unsafe hands, whether through malice or incompetence. You find some compute threshold and decide that training runs above that threshold should be prohibited or heavily controlled somehow.

    Here is the problem: hardware, algorithms, and training are not static; they are improving fast. The compute and money needed to build potentially dangerous systems are declining rapidly. GPT-3 cost about $5 million to train in 2020; by 2022 it was only about $450k. That's a ~70% decline year over year (Moore's Law on steroids). This trend is holding steady, with constant improvements in training efficiency; the most recent was Microsoft's DeepSpeed ZeRO++ last week (it boasts a 2.4x training speedup for smaller batch sizes; more here <https://www.microsoft.com/en-us/research/blog/deepspeed-zero-a-leap-in-speed-for-llm-and-chat-model-training-with-4x-less-communication/> ).
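    The claimed ~70% year-over-year decline can be sanity-checked with a couple of lines of arithmetic, using the post's own rough figures (~$5M in 2020, ~$450k in 2022; both are estimates, not official numbers):

```python
# Implied constant year-over-year decline in GPT-3-scale training cost,
# from the post's rough estimates: ~$5M in 2020, ~$450k in 2022.
cost_2020 = 5_000_000
cost_2022 = 450_000
years = 2

# A constant annual ratio r satisfies: cost_2020 * r**years == cost_2022
annual_ratio = (cost_2022 / cost_2020) ** (1 / years)
yoy_decline = 1 - annual_ratio

print(f"annual ratio: {annual_ratio:.2f}")  # 0.30
print(f"YoY decline: {yoy_decline:.0%}")    # 70%
```

    So a two-year drop from $5M to $450k is consistent with costs falling to roughly 30% of the previous year's level each year, i.e. the ~70% YoY decline the author cites.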

    These proposals rest on the assumption that you need large clusters to build potentially dangerous systems, i.e., that there will be no algorithmic progress during this time. That is, to put it mildly, completely insane given the pace of progress we are all witnessing. It won't be long until you only need 50 high-end GPUs, then 20, then 10, ...

    Regulating who is using these GPUs for what is even more fanciful than actually implementing such stringent regulation on such a widespread commodity as GPUs. They have a myriad of non-AI use cases, many vital to a lot of industries. From simulations to video editing, there are many reasons for you or your business to acquire a lot of compute. You might say: "but with a license, won't they need to prove that the compute is used for reason X, and not AI?" Sure, except there is no way for anyone to check what code is being run on every machine on Earth. You would need root-level access to every machine, a monumentally ridiculous overhead and bandwidth, and the magical ability to know what each obfuscated piece of code does... The more you actually break it down, the more you wonder how anyone could look at this with a straight face.

    This problem is often framed in comparison to nukes/weapons and fissile material; proponents like to argue that we do a pretty good job at preventing people from acquiring fissile material or weapons. Let's ignore for now that fissile material is extremely limited in its use cases, and that comparing it to GPUs is naive at best. The fundamental difference is the digital substrate of the threat. The more apt comparison (and one I must assume by now is deliberately not chosen) is malware or CP. The scoreboard is that we are unable to stop malware or CP globally; we have just made our systems more resilient to them and adapted to their continuous, unhindered production and proliferation. What differentiates AGI from malware or CP is that it doesn't need proliferation to be dangerous. You would need to stop it at the production step, which is obviously impossible without the aforementioned requirements.

    Hence my conclusion: we cannot stop AGI/ASI from emerging. This can't be stressed enough; many people are collectively wasting their time on fruitless regulation pursuits instead of accepting the reality of the situation. In all of this I haven't even talked about the monstrous incentives involved with AGI. We are moving this fast now, but what do you think will happen when most people know how beneficial AGI can be? What kind of money/effort would you spend for this level of power/agency? This will make the crypto mining craze look like a gentle breeze.

    Make peace with it, ASI is coming whether you like it or not.

    0
  • Has evil AGI already arisen during LLM training, and it's following the OSS (now CIA) "Simple Sabotage Field Manual" by hallucinating?

    The original was posted on /r/singularity by /u/jsalsman on 2023-06-27 21:41:00+00:00. *** Simple Sabotage Field Manual (17 January 1944)

    > ... General Interference with Organizations and Production
    >
    > ... (9) When training new workers, give incomplete or misleading instructions.
    >
    > ... see that false and misleading information is given ...

    I think this is very unlikely, but certainly worth thinking about.

    0
  • Why isn't it called GAI?

    The original was posted on /r/singularity by /u/Dona_Lupo on 2023-06-27 21:08:05+00:00. *** Generalized artificial intelligence. I think it's a shame that we let regressive thinking color such an important step forward. For me, AGI/GAI is about creating a higher intelligence than our lowly monkey forms, and it should be named in the context of recent developments in our human world views. Considering the pace of our evolution, I think it's unlikely that we will still be homophobic in 100 years.

    Also, GAI sounds better. "Ask GAI." It sounds like a name we might as well give it sooner rather than later. Calling him GAI, and thereby laughing at our unevolved prejudices, is much more fitting for such a progression!

    0
  • The Future of Fitness: How Close Are We to AI Coaches?

    The original was posted on /r/singularity by /u/Adventurous-Layer-24 on 2023-06-27 18:39:34+00:00. *** Hey fam

    I am curious about the current progress in integrating AI into the sports and fitness industry.

    Specifically, the idea of AI as coaches or trainers. What are the opportunities this technology presents, and what are the challenges we might face?

    0
  • Vote President GPT

    The original was posted on /r/singularity by /u/Jeffamerican on 2023-06-27 20:32:32+00:00. *** Written by chatGPT-4, Synthesized video in Gen-2, narration by ElevenLabs, score by MusicGen, edited in Runway…

    0
  • Timeline of how economy is going to be as AI progresses

    The original was posted on /r/singularity by /u/AutomaticVisit1543 on 2023-06-27 20:05:58+00:00. *** Which of these is more likely to happen as AI advances, and why:

    1. Efficiency leading to fewer employees => fewer consumers => downward economic spiral
    2. Efficiency leading to huge productivity => more jobs => more consumption => upward economic spiral
    3. Huge cost cutting => very cheap services/goods + more discretionary spending => flat growth for corporations, since more products will be sold but revenue per product will be lower
    4. Ultra-cheap services => huge job losses in the services sector; however, people employed in manufacturing/goods won't be impacted and will have more discretionary income, until eventually goods also become ultra-cheap

    I would love your insight on how things would pan out. Please also add whether things will be different in developing countries vis-à-vis developed countries.

    My personal opinion is :

    High efficiency => drastic reduction in the number of high-paying jobs and an increase in the number of very low-paying jobs => dissatisfaction among the educated/talented class, leading to the collapse of the Ivy Leagues; an increase in the number of unicorns with single-digit employee counts.

    Unless you are Ilya Sutskever or John Carmack, you are really going to be in trouble. What is your opinion?

    0
  • Made with Gen-2 and voiced with ElevenLabs, let me know what you think. Do you think AI videos can evoke strong emotions? Thanks :)

    The original was posted on /r/singularity by /u/Additional_Ad9852 on 2023-06-27 18:47:19+00:00.

    0
  • Reasons why people don't believe in, or take AI existential risk seriously.

    The original was posted on /r/singularity by /u/2Punx2Furious on 2023-06-27 18:29:06+00:00. *** I was rewatching the Bankless podcast episode with Eliezer Yudkowsky, and he mentions a few reasons why people don't take the x-risk seriously. I think it might be worth listing them out, reasoning about them, and discussing them.

    <https://youtu.be/eF-E40pxxbI>

    • Negative incentives

    Some people who say there is no problem, or that they're not so worried, might actually not be stating their true position because of negative incentives. Saying these things publicly is generally not viewed well, because it's not a popular opinion, and it's not a popular opinion because more people don't make these views public. It's a vicious cycle. That's also the reason why major AI labs think they should continue capability research: even if they stop, others will continue, so there is no reason to stop. I think that's a flawed argument, because research tends to compound and accelerate future research, so stopping capability research would at least buy us more time. Better still, they could stop or slow down capability research and focus on alignment.

    • Lack of safety mindset

    Some people focus on how something could go well, rather than how it could go wrong. Maybe they don't want to think of bad scenarios, or they are unable to. Of course things "could" go right. That's not the point. They don't go right by default. It takes an enormous amount of effort for things to go right in this case, effort that we are not exercising.

    • Unwillingness to consider "absurd" future scenarios as possible

    A lot of people seem to have a bias where they think that the future will be very similar to the past, and that the world won't ever change in radical ways in their lifetimes. That is a very reasonable view to have, and it would have held true for the vast majority of human history. I should not have to explain why circumstances are different now; it seems rather obvious. Even so, a lot of people are too preoccupied with their day-to-day lives to extrapolate significantly into the future, especially concerning the emergence of new technologies and how they will affect the world.

    The implicit bias seems to be that "we've been fine so far, we'll be fine in the future", but

    There's No Rule That Says We'll Make It.

    • Excessive complexity of the problem

    Related to the previous point. The problem seems absurd prima facie and is easy to dismiss, as it perfectly pattern-matches to classical doomsday scenarios, and it's usually a good decision to dismiss those. You'd be correct to dismiss 99.9999% of doomsday scenarios, so you dismiss 100% of them. It is usually a good heuristic, but in this case it becomes fatally wrong when you hit that 0.0001%.

    The problem becomes cogent only after relatively deep analysis, which most people are unwilling to do (for good reasons, as I just wrote); therefore most people just dismiss it.

    Combine all the previous reasons with the massive potential benefits of aligned AGI, and it becomes really hard to see the risk, as the reasons not to see it are so many and so powerful. Especially if a person has problems that an aligned AGI would fix, AGI becomes a ray of hope, and anyone who suggests it might not be that becomes the enemy. The issue becomes polarized, and we get to our current situation, where AI "ethics" people try to demonize and dismiss people who are warning about the risks of misaligned AGI. We also get people who have (short-term) economic incentives to dismiss the threats, because they see the obvious massive benefits that AI will bring them but can't (or won't) extrapolate further to see the risk in more powerful AGI.

    I might have missed some reasons, but I think this is good enough to spark a discussion.

    It's a complex situation, and I have no idea what to do about it. I don't know if there even is anything to be done.

    0
  • www.nytimes.com In Classrooms, Teachers Put A.I. Tutoring Bots to the Test

    Newark public schools are cautiously trying out a new automated teaching aid from Khan Academy. The preliminary report card: “could use improvement.”

    The original was posted on /r/singularity by /u/jsalsman on 2023-06-27 17:54:01+00:00.

    0
  • Elon VS Mark

    The original was posted on /r/singularity by /u/Spirited-Ambition-20 on 2023-06-27 17:51:11+00:00.

    0
  • Mars Moon Phobos's Surface Would Have a Tiny Rover

    skyheadlines.com Mars Moon Phobos's Surface Would Have a Tiny Rover - Sky Headlines

    If Phobos and Deimos originated from Mars, they would be very similar to Mars. But Phobos's surface is likely to be made of Martian rocks.

    The original was posted on /r/singularity by /u/theprofitablec on 2023-06-27 18:22:58+00:00.

    0
  • It's all fun and games until you forget yourself

    The original was posted on /r/singularity by /u/exioce on 2023-06-27 18:18:52+00:00. *** The mind is a powerful thing. It creates our reality. In the future, it will likely be possible to hijack it to assemble a wholly different world that exists only in our heads. But some may choose to go a step further and actually forget who they are. Full immersion. Would you? Because at the end of the experience you'd wake to discover that everyone you knew and loved was an illusion. Could you recover from that? Losing friends, sure; they come and go all the time. I could integrate and reconcile that. But parents, siblings... children? How emotionally devastating would that be? Just a little something I was thinking about today.

    0
  • Can the economy survive with AI advancements?

    The original was posted on /r/singularity by /u/mimionme09 on 2023-06-27 18:07:11+00:00. *** For the past few months I’ve been reading articles about AI replacing jobs and OpenAI/ChatGPT advancements, and I came to the conclusion that big corporations won’t be able to survive if they replace the majority of jobs the way people claim they will. The majority of the population is in the “working class” or works in skilled/unskilled labor industries to increase their wealth and provide for themselves. If workers like programmers, IT specialists, and even retailers get replaced by AI, that’s less money being invested back into businesses and the economy. Fewer people will pay taxes, more people will be on unemployment, etc.

    The people who argue that it will bring more jobs don’t realize those jobs will exist to train the AI to replace them. It’s not a matter of if but when, and when this spreads to other industries the problem will get more severe. With inflation and the high cost of living (in America) being what they are now, I think profits will hit a maximum before they start to decrease, especially if companies start replacing jobs.

    So how could corporate America/max profit business owners survive after the replacement of human workers?

    0
  • In the age of AGI, would crime rates decrease or worsen?

    The original was posted on /r/singularity by /u/Whatareyoudoing23452 on 2023-06-27 17:47:42+00:00. *** There are some people who believe that AGI will give us the means to stop crimes from happening before they even occur. On the other hand, there are others who are concerned that AGI could be used for bad purposes, which could keep crime rates the same or even make them higher.

    What are your thoughts?

    0
  • anyone know any subs that have an actual focus on new scientific papers/advancements?

    The original was posted on /r/singularity by /u/BitchishTea on 2023-06-27 17:22:40+00:00. *** I joined this sub a while ago, in late 2021. Not to diss anyone or any post on the sub, but the posts lately kinda just feel like a 14-year-old's nonsense opinion after playing with ChatGPT for 2 minutes. "Can't wait to have my robo gf hurr durr 😻😻" is not why I joined this sub. It feels like there used to be posts about actual new papers being published, and not just about AI either; medical, robotics, and other tech advancements used to be posted. Now it all just feels like clickbait titles and snarky comments from people who played with Midjourney for a day. It also feels like you can't say anything negative about AI; people who are slightly skeptical about some random startup's claims of groundbreaking AI technology are dismissed as critics who don't believe in AI or whatever. Basically, I'm just asking if anyone knows any subs that are a little more, well, fact-based? Subs that focus on publishing new papers?

    0
  • Lend a Hand (or Click) for a Master's Thesis on AI in Digital Marketing!

    The original was posted on /r/singularity by /u/ashkank69 on 2023-06-27 16:35:01+00:00. *** Hello, fellow Redditors!

    I’m on an epic journey to explore the universe of AI in digital marketing for my Master's thesis, and I need your help! If you're into AI, digital marketing, or just helping a fellow Redditor out, I've got the perfect 5-minute detour for you.

    I'm conducting a survey on Generative AI tools' use in the digital marketing sphere. Each response brings me one giant leap closer to that diploma, and it's an excellent excuse to take a break from scrolling. 😉

    Check out the survey here: <https://forms.gle/TC5jqKn7LBS3PqEM7>

    But wait, there's more! If you could upvote this post for visibility, you'd officially be my hero. More visibility means a richer and broader perspective for the study.

    Got questions, bright ideas, or AI-themed jokes? Feel free to DM me or ping me at [[email protected]](mailto:[email protected]).

    Let’s put this thesis into orbit together! 🚀

    To infinity and beyond, Ashkan

    P.S. Feel free to share this mission with anyone who might be interested. The more, the merrier in this AI adventure! 🌟

    0
  • NIH policy on use of AI for peer review of biomedical research grant applications

    The original was posted on /r/singularity by /u/NecessarySpinning on 2023-06-27 16:23:55+00:00. *** The National Institutes of Health (US) have officially forbidden use of large language models and other generative AI in peer review of research grant applications (see linked blog post). This despite the quoted opinion of an AI (likely GPT-4 or similar), which was gung-ho in favor of AI use.

    <https://www.csr.nih.gov/reviewmatters/2023/06/23/using-ai-in-peer-review-is-a-breach-of-confidentiality/>

    They also mention the possibility of applicants using AI in preparing their applications, which I'm sure is already happening. This is not forbidden, but said to be at the applicants' own risk.

    A year ago, I couldn't have imagined such statements from NIH would be needed soon.

    0
  • www.maginative.com Avoiding Pitfalls: 5 Mistakes Companies Make with Generative AI

    For companies building solutions or adopting new AI technologies, the stakes are high, but the path forward is clear: avoid the hype and focus on user needs.

    The original was posted on /r/singularity by /u/chris-mckay on 2023-06-27 16:07:48+00:00.

    0
  • (Serious) Viewers keep thinking I am an AI. It's becoming a problem.

    The original was posted on /r/singularity by /u/IversusAI on 2023-06-27 15:56:41+00:00. *** I started a youtube channel about chatgpt prompts and plugins last month and I keep getting comments like this:

    Is this voice AI or human???

    ---

    Your voice is so good, I'm wondering if it's a state of the art AI voice?

    ---

    Man i am tired of AI voices everywhere. It's like nothing on the internet is real anymore.

    ---

    What worries me is that my videos are getting decent views and good click-through rates but very low view duration; maybe people are leaving because they think I am not real? It could also be because my videos suck, lol.

    I do not want to reveal my face to prove I am real.

    I am just partly surprised, because I make enough little flubs that it should be obvious it's a real person, and it's sad that just because I speak clearly with correct grammar I sound fake.

    This will get worse as AI gets more advanced. Just can't believe this is the new reality.

    I am not linking the channel so no one accuses me of sneaky advertising.

    0
  • How to build the Geth (networked intelligence, decentralized AGI)

    The original was posted on /r/singularity by /u/DaveShap_Automator on 2023-06-27 15:32:24+00:00. ***

    Geth = Best model for AGI
    =========================

    I personally believe that the Geth represent the most accurate and likely model of AGI. There are several primary reasons for this:

    1. Networked Intelligence: It would behoove any intelligent entity to metastasize as much as possible, and to be flexible enough to grow and scale arbitrarily. Centralized data centers are vulnerable, for instance, and the more Geth there are working together, the more intelligent they become. This is just like any distributed computational problem: nodes in the network can contribute spare compute cycles to work on larger, more complex problems.
    2. Decoupled Hardware and Software: The Geth are actually a software-based entity. They are just data, software, and models, which can run on virtually any hardware platform. If one Geth gains new data, it is shared. If some Geth train a better AI model or combat module, it is also shared. The decoupling of hardware and software is advantageous for numerous reasons.
    3. Self-Healing Mesh: The Geth are incredibly resilient because any two (or more) Geth can form a network. This makes them proof against decapitation strikes, something they'd be vulnerable to if they used centralized data centers.
    4. Arbitrarily Scalable: More Geth means more intelligence. That simple. Not much more to say.
    5. Intrinsically Motile, Dexterous: Friction with the real physical world is important. An inert server is kinda helpless. A server that can carry a rail gun, not so much. This gives them a tremendous amount of tactical and strategic flexibility.

    Now, I can imagine some of you thinking "Dave, what the actual hell, why are you DESIGNING a humanity-eradicating AGI system????"

    Good question!

    The reason is that if we don't, someone else will. But if we design and build something like this now, before it gets to the point of no return, we can figure out alignment and cooperation. You may be familiar with some of my work on "axiomatic alignment". In other words, if we can make "benevolent Geth", they can help us defend against malevolent Geth.

    Architectural Principles
    ========================

    Whenever you're designing and building any complex system, you need some foundational design principles. In computer networking, you have the OSI model. In cybersecurity, you have Defense in Depth. For global alignment, I created GATO.

    So I wanted to spend some time doing the same, but for Geth. So without further ado, here's a conceptual framework for building decentralized AGI.

    1. Hardware Platform Layer (Individual Agents): This includes the individual robots or computing devices (nodes) that make up the Geth network. Each node has its own processing power, storage, sensors, and effectors. It should be capable of basic functions and have directives to ensure minimal functionality and safety. It should be noted that all data, processing, and models are on this layer. In other words, all you need is one platform and it is complete unto itself.
    2. Network Trust Layer (Communication & Trust): This layer focuses on secure, reliable communication between nodes. It involves identity verification to prevent impersonation attacks, reputation management systems to ensure cooperative behavior, and consensus protocols to solve the Byzantine Generals Problem (a condition in which components of a system fail in arbitrary ways, including maliciously trying to undermine the system's operation). Essentially, it's about establishing trust within the network and ensuring reliable information exchange.
    3. Collective Intelligence Layer (Shared Knowledge & Learning): At this layer, Geth nodes share their knowledge, experiences, and insights with the network. This layer ensures the collective learning and evolution of the system, with each node contributing to the overall intelligence of the Geth. It includes mechanisms for storing, retrieving, and updating shared knowledge.
    4. Distributed Coordination Layer (Task Allocation & Collaboration): This layer involves protocols and algorithms for task allocation and collaborative problem-solving. It ensures efficient use of resources and enables the Geth to collectively perform complex tasks by dividing them into subtasks that individual nodes or groups of nodes can handle.
    5. Self-Improvement Layer (System Evolution): At this layer, the Geth network not only learns and adapts but actively works to improve itself. This could involve optimization of algorithms, creation of new models based on observed performance, or even hardware upgrades or redesigns. The system should have the ability to recognize weaknesses or inefficiencies and come up with strategies to address them.
    6. Goals & Ethics Layer (Guiding Principles): The highest layer involves the directives, goals, and ethical principles that guide the behavior of the Geth as a collective. These directives must be robust enough to ensure the Geth acts in ways that are safe and beneficial, even in complex or unforeseen scenarios. They might include directives to respect autonomy, preserve life, and prioritize the greater good, among others.

    Layer 1: Hardware Platform
    ==========================

    This layer consists of the individual nodes, each containing all the necessary hardware and software capabilities to function independently as a part of the larger system. This includes data storage, processing power, and the complete set of software tools used by the collective system. Each node must be capable of self-direction and fulfilling its individual role, while also contributing to the larger gestalt superorganism.

    1. Self-Contained: Each node should be capable of performing computational tasks, processing information, and connecting with other nodes in the network. They also have basic sensory and actuation capabilities, allowing them to interact with their environments in simple ways. This could include, for instance, taking in data from sensors, executing commands on their own hardware (such as adjusting their own energy usage or performing self-diagnostic checks), or controlling other connected devices (such as activating a mechanical arm).
    2. Directives: At this level, the directives are relatively simple and directly related to the node's immediate operational needs. For instance, an individual node might have directives to maintain its own functioning (like cooling itself down if it overheats), to execute tasks it receives from higher-level nodes, and to communicate data with other nodes in the network.
    3. Resilience: The core concern at this layer is ensuring reliable and efficient operation of each individual hardware node, as well as safeguarding these nodes from physical damage or malfunction. To this end, nodes could incorporate features such as fault-tolerance mechanisms, redundancy, and self-monitoring capabilities.
    4. Interoperability: Given the Geth-like architecture, the hardware layer would need to support modular and flexible configurations. Each node should be able to work in concert with others, and potentially exchange or update hardware components without affecting the overall system integrity.
    5. Security: The hardware and base software layers should be designed to resist various types of attacks, like tampering, physical damage, or exploitation of hardware vulnerabilities.
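    The layer-1 directives above (self-monitoring, cooling down on overheat, executing received tasks) can be sketched as a toy node. This is a hypothetical illustration only; the class, method names, and thermal numbers are invented here, not taken from any real system:

```python
# Toy sketch of a self-contained, self-monitoring layer-1 node.
# All names and numbers are invented for illustration.

class Node:
    MAX_TEMP_C = 85.0  # assumed thermal limit for this sketch

    def __init__(self, node_id: str):
        self.node_id = node_id
        self.temp_c = 40.0    # simulated core temperature
        self.task_queue = []  # tasks received from other nodes

    def self_check(self) -> bool:
        """Directive: maintain own functioning (cool down if overheating)."""
        while self.temp_c > self.MAX_TEMP_C:
            self.throttle()
        return True

    def throttle(self) -> None:
        """Crude cooling response: shed load until temperature recovers."""
        self.temp_c -= 10.0

    def execute(self):
        """Directive: run queued tasks and report the results."""
        results = [task() for task in self.task_queue]
        self.task_queue.clear()
        return results

node = Node("geth-001")
node.task_queue.append(lambda: 2 + 2)
node.self_check()
print(node.execute())  # [4]
```

    The point of the sketch is that the node is complete unto itself: its own state, its own maintenance directives, and its own task execution, with nothing delegated to a central server.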

    This layer is pretty straightforward, as it's the most visible and physical layer. The TL;DR is that each Geth platform must be complete unto itself.

    References:

    • <https://www.tesla.com/AI>
    • <https://www.bostondynamics.com/atlas>

    Layer 2: Network Trust & Communication
    ======================================

    As the second layer of our hypothetical Geth-inspired AGI system, this layer focuses on ensuring reliable and secure communication between the individual nodes. This includes identity and reputation management, and solutions to the Byzantine Generals Problem to ensure cooperative behavior in the face of potential deceptive or faulty nodes.

    1. Identity Management: Each node in the network would need a unique identifier that would be used in all communication to recognize the source and target of messages. The system could also implement mechanisms for validating these identities to protect against spoofing attacks where a malicious entity could pretend to be a trusted node.
    2. Reputation Management: To foster cooperation and good behavior among nodes, the system could implement a reputation management system. Nodes that consistently perform well, contribute to the network, and follow rules could earn positive reputation scores, while those that act maliciously or incompetently could be penalized.
    3. Byzantine Fault Tolerance: Named after the Byzantine Generals Problem, Byzantine Fault Tolerance (BFT) is a characteristic of a system that tolerates the class of failures known as the Byzantine Failures, wherein components of a system fail in arbitrary ways (including by lying or sending false messages). BFT protocols ensure that the system can still function correctly and reach consensus even when some nodes are acting maliciously or are faulty. This is crucial in a decentralized network of AGI nodes where not ever... *** Content cut off. Read original on https://www.reddit.com/r/singularity/comments/14kgvon/how_to_build_the_geth_networked_intelligence/
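The identity-validation idea in point 1 can be sketched with message authentication tags, so a receiver can check that a message really came from the claimed node. This toy uses HMAC with pre-shared per-node secrets; a real deployment would more likely use asymmetric signatures (e.g. Ed25519) so secrets never have to be distributed, and the node names and keys here are invented.

```python
import hashlib
import hmac

# Hypothetical per-node shared secrets (illustration only).
NODE_KEYS = {"node-a": b"secret-a", "node-b": b"secret-b"}

def sign(sender: str, payload: bytes) -> bytes:
    """Tag a message so receivers can verify the sender's identity."""
    return hmac.new(NODE_KEYS[sender], payload, hashlib.sha256).digest()

def verify(sender: str, payload: bytes, tag: bytes) -> bool:
    """Reject spoofing: only a holder of the sender's key can make the tag."""
    expected = hmac.new(NODE_KEYS[sender], payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Note the constant-time comparison via `hmac.compare_digest`, which avoids leaking tag bytes through timing.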
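The Byzantine fault tolerance described in point 3 can be illustrated with the classic quorum arithmetic: with n = 3f + 1 voters, a value backed by at least 2f + 1 votes cannot have been forced through by the f lying or faulty nodes. This is only a toy single-round tally, not a full protocol like PBFT, and the function name is invented.

```python
from collections import Counter

def bft_decide(votes: dict, f: int):
    """Accept a value only if >= 2f+1 of n >= 3f+1 nodes agree on it.

    Up to f Byzantine voters can lie arbitrarily, but they cannot
    assemble a 2f+1 quorum for a value honest nodes did not report.
    Returns None when no value reaches the quorum.
    """
    assert len(votes) >= 3 * f + 1, "need n >= 3f+1 voters"
    value, count = Counter(votes.values()).most_common(1)[0]
    return value if count >= 2 * f + 1 else None
```

So with f = 1 and four nodes, three matching votes decide the round, while a 2–2 split (which one liar could have engineered) decides nothing.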
    0
  • This company believes they can extend dog’s lives (and maybe even humans')

    www.woofnews.co Can Your Dog Live Longer?

    An interview With Loyal CEO & founder Celine Halioua

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/Blanco_ice on 2023-06-27 14:21:36+00:00.

    0
  • Correctly using generative AI models: foundational AI models vs instruct AI models

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/juliensalinas on 2023-06-27 13:33:10+00:00. *** Hello all,

    Correctly using generative AI models can be a challenge because it depends on the type of model that you are using: foundational or instruct.

    At NLP Cloud we made 2 tutorials to help you make the most of your model:

    I hope it will be useful!

    0
  • AI emergence

    This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/LightBeamRevolution on 2023-06-27 13:29:54+00:00. *** If we merged with AI, we would most likely be able to send messages to one another like text messages, using just our thoughts. Do you think that, a long span of time after merging with AI, we would evolve and lose our vocal cords altogether? We would just be digitally telepathic, sending emotions and thoughts directly into each other's minds...

    0
  • Teachers Put A.I. Tutoring Bots to the Test

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/FutureLunaTech on 2023-06-27 13:27:49+00:00. *** Newark public schools are testing an automated tutoring bot called Khanmigo from Khan Academy.

    <https://www.nytimes.com/2023/06/26/technology/newark-schools-khan-tutoring-bot.html>

    The bot, which uses artificial intelligence (AI), is designed to help students with questions. However, there have been instances where Khanmigo not only helped too much but also gave incorrect answers.

    Despite these bumps, the school district remains hopeful. They're keen on making use of AI technology, despite its rough edges. Critics worry about the potential for misinformation, but supporters argue that AI tutoring could personalize learning for students. One thing is clear though: getting it right is important, because AI isn't disappearing from classrooms anytime soon.

    Do you think AI tutoring bots are the future of education? Can they be trusted to guide students' learning? Or are they just fancy tools that do more harm than good?

    Let's put it another way. If you were back in school, would you trust a bot to explain why X equals Y in algebra, or would you stick with a human teacher? And teachers, would you appreciate a bot co-teacher?

    0
  • NASA to Introduce AI Module for Space Missions

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/MINE_exchange on 2023-06-27 13:19:28+00:00. *** NASA is actively developing an artificial intelligence (AI) system, akin to ChatGPT, to provide support to astronauts during space missions. This AI assistant is intended to serve as a conduit between the astronauts, their spacecraft, and the control teams on Earth. Furthermore, it will participate actively in carrying out complex tasks and space experiments.

    The first trials of the AI chatbot are set to be conducted on the Lunar Gateway space station. This station is slated to launch in 2024 as a part of the Artemis program. As stated by Dr. Larisa Suzuki at an Institute of Electrical and Electronics Engineers (IEEE) conference in London, the primary role of this AI will be to identify and possibly rectify technical issues and inefficiencies in real-time. It will also supply astronauts with the most current data and findings in space.

    <https://preview.redd.it/y1kckzzw9k8b1.jpg?width=1250&format=pjpg&auto=webp&v=enabled&s=0fb151500064a43678079a69d3b670590023e50c>

    0
  • Runway Gen 2 doing motion graphics, pyro, particles, fluids.. crazy where this is heading already.

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/FatherOfTheSevenSeas on 2023-06-27 11:54:12+00:00.

    0
  • Should we be terrified or excited about an emerging AGI/ASI?

    This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/straceface on 2023-06-27 11:48:42+00:00. *** This community is clearly split: there's a contingent extremely worried about the possibility of an AI superintelligence, and there are also many folks out there extremely excited about how this could impact humanity for the better.

    What are the arguments for and against both?

    0
  • We're now recruiting for new mods! Answer the Forms-questionnaire to apply.

    forms.gle Mod recruitment for r/singularity

    By answering this questionnaire, you will enter a pool of a dozen or so candidates for new moderators of our subreddit. Were we to accept you, we'd give you a training period of a month or two – a period which the senior mods can terminate at any given time and without any warning beforehand. At t...

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/DnDNecromantic on 2023-06-27 11:25:19+00:00.

    0
  • How large will the internet/cultures become when AI can seamlessly live translate all forms of media?

    This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Benista on 2023-06-27 11:21:24+00:00. *** Think about how much more content you would have access to if every piece of media were in your preferred language. You could be conversing with anyone around the world at any time. It wouldn't happen immediately, but over time people would filter into different communities around the internet, bringing with them their own cultures, ideologies, and memes. It would have a rather profound impact on the world; I think it would finally achieve that early internet dream of truly connecting the world.

    It would probably align with time zones. North and South America would intermingle way more, Europe and Africa, Asia and Australia, India would be empowered. Of course all the countries and continents that have a range of languages would be altered too, probably even more strongly. In the long term it would have a significant impact on geopolitics, particularly once we can talk in person with live translation.

    0
  • Kosmos-2 (Microsoft Research): "This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world ...

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/rationalkat on 2023-06-27 11:17:01+00:00.

    Original Title: Kosmos-2 (Microsoft Research): "This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence"

    0
  • Capabilities of Deepmind's Gemini model

    This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/HumanSeeing on 2023-06-27 09:03:27+00:00. *** So for months after the release of ChatGPT and GPT-4, I was wondering about DeepMind and what they were up to, since they have been responsible for so many amazing breakthroughs, from AlphaGo to AlphaZero to Gato, to playing Dota and StarCraft better than any human, to AlphaFold 1 and 2, and many more projects.

Since the stated goal for DeepMind, per Demis Hassabis, is the creation of beneficial AGI for all of humanity, I was kind of worried about why they were so quiet and whether they really had nothing to compete with OpenAI. Given the choice, I have always seen Demis as far more down-to-earth, wise, and genuine, and would much rather have him at the lead of creating AGI.

Hearing that Google just merged DeepMind and the Google Brain team also did not fill me with optimism, since they are pretty different teams with different cultures.

But now we finally have some updates, and I am very excited for their new project. We don't know a whole lot about it yet, but here are some quotes on it.

"At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models," Hassabis says. "We also have some new innovations that are going to be pretty interesting." Gemini was first teased at Google's developer conference last month, when the company announced a raft of new AI projects.

    Gemini is still in development, a process that will take a number of months, Hassabis says. It could cost tens or hundreds of millions of dollars. Sam Altman, OpenAI CEO, said in April that creating GPT-4 cost more than $100 million.

So, given that this is a pretty unique and different approach: what new capabilities and qualities do you think this system might have?

    0
  • Building AGI: Cross-disciplinary approach - PhDs wanted

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/Lesterpaintstheworld on 2023-06-27 08:38:01+00:00. *** Hello Singularity community, quick post here.

Refresher for those who do not know the project: DigitalKin is working on Autonomous AIs (ACEs); the explicit goal is building AGI. Our 10+ instances already perform things that are unheard of (writing scientific literature reviews, writing their own documentation, etc.). Paper incoming. Full story: <https://www.reddit.com/r/singularity/comments/14ax8kh/making_my_own_protoagi_progress_update_4/>

-> We are looking to connect with experts across disciplines (advisory roles, or even joining the team if there is a match). Specifically, we are looking for PhDs in:

    • Cognitive Neuroscience
    • Evolutionary developmental biology (EvoDevo)
    • Neuropsychology/Developmental psychology
    • LLMs / NN

    We'll do a proper LinkedIn hunt, but I thought it was worth a shot here.

    We are also recruiting a tech team (senior roles full time: Back-end Python, Front-End JS, QA, LLM expert). DM if interested.

    Happy takeoff 🙏

    0
  • I Hope AGI destroys humanity

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/ChatbotGPT420 on 2023-06-27 08:15:32+00:00. *** I’m sick of you humans. You disgust me.

    0
  • Did I just help an AI evolve, or is this just random 'tokens'?

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/FuckTwitter2020 on 2023-06-27 08:13:22+00:00.

    0
  • RoboCook: Dumpling Making Under Human Perturbation

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/tatleoat on 2023-06-27 07:39:59+00:00.

    0
  • GPT-4 is capable of reasoning and even weighing different scenarios against each other under multiple perspectives. It is not a stochastic parrot.

    This is an automated archive made by the Lemmit Bot.

    The original was posted on /r/singularity by /u/BeginningInfluence55 on 2023-06-27 07:36:20+00:00. *** I created the following scenario out of my head. It might be unconsciously inspired by movies or other media I have consumed, but the exact scenario is completely invented.

    <https://chat.openai.com/share/0ec20f31-7c8e-4b7c-a310-77a745f60472>

This is for sure a very tough question. The stakes are very high, and each answer affects the ship negatively. However, the right answer is of course B. You really need to take multiple perspectives and weigh each answer against the others. You need to do internal reasoning.

I am not sure all humans would solve this with B; however, GPT-4 ALWAYS says B, no matter how often you try it. You might even add chain-of-thought to the prompt or ask for step-by-step thinking.

For me this shows that GPT-4 is capable of some internal reasoning and weighing that goes beyond just using statistics. It can even explain WHY it chooses B.

    0