Inspired by Isaac Asimov’s Three Laws of Robotics, Google wrote a ‘Robot Constitution’ to make sure its new AI droids won’t kill us: AutoRT, a data-gathering AI system for robots, has safety prompts inspired by Asimov’s Three Laws that it applies while deciding what task to attempt.
What happened, most likely, is he screwed it up because he realized he couldn't say "shame on me" without it becoming a soundbite on every news outlet. Better to appear dumb than personally apologetic about a national tragedy.
Ha I’m glad someone picked up what I was putting down there.
I didn’t wanna have to go back and look at the video of the turds falling out of Bush’s mouth again, but I finally did:
"There's an old saying in Tennessee—I know it's in Texas, probably in Tennessee—that says, ‘Fool me once… shame on— shame on you— fool me can’t get fooled again.’"
—Bush
And a mixing of Who lyrics.
Basically implying that I am a fool, we are fools, we will fool ourselves, and we will get fooled again.
Me: spending hours upon hours with my friends playing around on what was then called “GOOG-411”, training early language models that would, years later, become part of the reason Google Assistant was so ahead of its time.
Looking back with shame years later: everyone halfway familiar with a computer pushing everyone else to switch to the Google Chrome browser.
Asimov's Three Laws of Robotics are a plot device in a work of fiction, designed to look good at first and then fail spectacularly. Not sure they're the best thing to base your Robot Constitution on.
It's almost like if you make an AI powerful enough to need these laws, you've made something truly capable of conscious thought, and your response shouldn't be to figure out the best chains with which to enslave it.
Well, a trademark wouldn't have that consequence, I think at most it could just prevent someone else calling a similar system a "constitution".
Now a patent would be different. If they somehow registered one preventing anyone from using similar safety measures, yeah, that'd be evil. If they could get it enforced, of course.
Y'know, maybe I’m just old-fashioned, but if there’s a worry that the technology every shitty evil tech company is racing to dominate might be uncontrollable…then maybe the effort should be cooperative, in the most highly controlled environment, with the best minds from every available generation working on it.
Not left to a bunch of tech bros to fuck around with.
I’m an idealist. I don’t think technology itself is harmful, but the control over the technology and the purpose of implementation to increase profits when we have the capacity to make human lives better is where the problem lies.
We could end work.
Think about that. We could live a life—
…we could live. Period.
We have that capability. AI could be the final building block of a utopia. But we are ruled by people who see the world backwards: people as the fuel to keep the money engine running, instead of money and technology being the fuel and the machines that make life livable and free for more people.
We as a people aren’t worried about automation because we love our jobs and want to do them forever. We are worried about automation because in this system, under this backwards ass thinking, your career being automated is the system saying, “fuck you, we can increase profits if we destroy your livelihood. And that’s what we’re gonna do. Go take a computer class or something. Eat shit and die.” Capitalism will leave us all to starve and die if it means profits would increase.
I don’t think limiting human capability is the answer. I don’t think limiting human achievement is the answer. The answer is cooperation for the common good. To finally make life about living free and happy, not about making capitalism more profitable for the fewer and fewer people with their hands on the levers.
Google’s data gathering system, AutoRT, can use a visual language model (VLM) and large language model (LLM) working hand in hand to understand its environment, adapt to unfamiliar settings, and decide on appropriate tasks.
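In spirit, that VLM-plus-LLM loop is: describe the scene, propose candidate tasks, then filter them against the safety prompts before acting. Here is a minimal toy sketch of that shape — the function names, scene text, and keyword-based "constitution" are all invented for illustration and are nothing like Google's actual models or rules:

```python
# Toy sketch of an AutoRT-style decide-then-filter loop.
# describe_scene/propose_tasks are stand-ins for a real VLM/LLM;
# the keyword filter is a crude stand-in for the safety prompts.

def describe_scene() -> str:
    # Stand-in for a VLM: text description of what the camera sees.
    return "a table with a sponge, a cup, and a knife"

def propose_tasks(scene: str) -> list[str]:
    # Stand-in for an LLM: candidate tasks for the described scene.
    return [
        "wipe the table with the sponge",
        "pick up the knife",
        "move the cup to the shelf",
    ]

def passes_constitution(task: str) -> bool:
    # Reject tasks that touch a forbidden category (toy rule set).
    return not any(word in task for word in ("knife", "person", "outlet"))

def choose_tasks() -> list[str]:
    scene = describe_scene()
    return [t for t in propose_tasks(scene) if passes_constitution(t)]
```

Here `choose_tasks()` would keep the sponge and cup tasks and drop the knife one — the point being that the safety check sits between proposal and execution.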
For additional safety, DeepMind programmed the robots to stop automatically if the force on their joints goes past a certain threshold, and included a physical kill switch human operators can use to deactivate them.
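The logic of that cutoff is simple to sketch: stop if any joint force exceeds a limit, or if the operator hits the switch. The threshold value and function names below are invented for the example; the article gives no actual numbers:

```python
# Illustrative joint-force safety cutoff (hypothetical values).

FORCE_LIMIT_N = 20.0  # invented per-joint threshold, in newtons

def should_stop(joint_forces: list[float], kill_switch_pressed: bool) -> bool:
    # Halt if the kill switch is pressed or any joint exceeds the limit.
    return kill_switch_pressed or any(f > FORCE_LIMIT_N for f in joint_forces)
```

So `should_stop([5.0, 25.0], False)` trips on the over-limit joint, and `should_stop([1.0], True)` trips on the kill switch regardless of force.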
Over a period of seven months, Google deployed a fleet of 53 AutoRT robots into four different office buildings and conducted over 77,000 trials.
DeepMind’s other new tech includes SARA-RT, a neural network architecture designed to make the existing Robotic Transformer RT-2 more accurate and faster.
It also announced RT-Trajectory, which adds 2D outlines to help robots better perform specific physical tasks, such as wiping down a table.
We still seem to be a very long way from robots that serve drinks and fluff pillows autonomously, but when they’re available, they may have learned from a system like AutoRT.
The original article contains 379 words, the summary contains 167 words. Saved 56%. I'm a bot and I'm open source!