The same question was asked a million times during the crypto boom. "They're insisting that [some-crypto-project] is a safe passive income when people have proven that it's a ponzi scheme. Who do they expect to believe them?" And the answer is, zealots who made crypto (or in this case, AI) the basis of their entire personality.
Their target audience is the most gullible tech evangelists in the world, the ones who think AI is magic. If there were a limit to the lies those people are willing to believe, they wouldn't be buying the thing to begin with.
They came up with a specific design for the device, with its own interaction modality, and created a product that is more than just software.
So I don't get why people dismiss it as being just an app. Does it make it worth less because it runs on Android? Many devices, e.g. e-readers, are essentially just Android apps as well. If it works, it works.
In this case it doesn't, so why not focus on that?
The point is, they are charging 200 bucks for superfluous, low-end hardware wrapped around an incomplete software experience that could be delivered as an app without any of it. The question is, are you going to give up your smartphone for this new device? Are you going to carry both? Probably not.
"It can do 10% of the shit your phone can do, only slower, on a smaller screen, with its own data connection, and inaccurately because you have to hope that our "AI" is sufficiently advanced to understand a command, take action on that command, and respond in a short amount of time. And that's not to even speak about the horrible privacy concerns or that it's a brick without connection!"
Everything about this project seems lackluster at best, other than maybe the aesthetic design from teenage engineering, but even then, their design work seems a bit repetitive. But that may be due to how the company is asking for the work. "We wanna be like Nothing and Playdate!!" "I gotchu fam!"
To address your point about e-readers: they have specific use cases. Long battery life, large, efficient e-ink displays, and the convenience of having all your books, or a large subset, available to you offline! But when those things aren't a concern, yeah, an app will do.
Like with most contemporary product launches, I simply find myself asking, "Who is this for?"
Anything and everything this square does, my phone can do better already and has the added benefit of already being in my pocket and not a pain in the ass to use.
An ereader is a piece of hardware that has a distinct purpose that cannot be matched by other hardware (high quality, high contrast, low power draw static content). Some of them do run Android, and that's a huge value add. But the actual hardware is the reason it exists.
This is just a dogshit Android phone. There is no unique hardware niche it's filling. It's an extremely obvious scam that is very obviously, massively downgraded in value, utility, and performance by being forced onto separate hardware.
Repackaging ChatGPT is arguably a very nice potential value add, because going to a website is not always very convenient. But it needs to be done right to convince users to use a new method to access ChatGPT instead of just using their website.
What's interesting about this device is that it (supposedly) learns how apps work and how people use them, so if you ask it something that requires using an app it could do it.
So while it might be "just an android app", if it does what's advertised that would be impressive.
Apps are designed to be easy to use. If this device works as advertised (and that's a huge if), then it wouldn't offer much in the way of convenience anyway. From what I've been reading, it doesn't work well at all.
Unless you have tons of money, why preorder? Just wait for the company to inevitably go under and people start reselling their now-useless devices, and then scoop as many as you want from Ebay. Even if the company survives for a while, the functionality is so underwhelming they might start getting rid of them way sooner.
Why are there AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I'd gain nothing from asking this glorified e-waste something and then pulling out my phone to verify the answer anyway.
They have pushed AI so hard in the last couple of years that they have convinced many that we are one year away from the Terminator travelling back in time to prevent the apocalypse.
Because money, both from tech-hungry but not very savvy consumers, and from the inevitable advertisers that will pay for the opportunity to have their names ejected from these boxes as part of a perfectly natural conversation.
I don't necessarily disagree. You can certainly use LLMs and achieve something in less time than without them. Numerous people here are talking about coding, and while I had no success with them, they can work with more popular languages. The thing is, these people use LLMs as a tool in their process. They verify the results (or the compiler does it for them). That's not what this product is. It's a standalone device you talk to. It's supposed to replace pulling out your phone to answer a question.
I haven't seen many of them here, but I use other media too. E.g., not long ago there was a lot of coverage of the "Humane AI Pin", which was utter garbage and even more expensive.
I just started diving into the space from a localized point yesterday. And I can say that there are definitely problems with garbage spewing, but some of these models are getting really really good at really specific things.
A biomedical model I saw was lauded for its consistency in pulling relevant data from medical notes for the sake of patient care instructions, important risk factors, fall risk level, etc.
So although I agree they're still giving well-phrased garbage for big general cases (and GPT-4 seems to be much more 'savvy'), the specific use cases are getting much better and I'm stoked to see how that continues.
I think it's a delayed development reaction to Amazon Alexa from 4 years ago. Alexa came out, voice assistants were everywhere. Someone wanted to cash in on the hype but consumer product development takes a really long time.
So the product is finally finished (a mobile Alexa), and they label it AI to hype it, as well as to make it work without the hard work of parsing Wikipedia for good answers.
Alexa is a fundamentally different architecture from the LLMs of today. There is no way that anyone with even a basic understanding of modern computing would say something like this.
The most convincing answer is the correct one. The correlation of AI answers with correct answers is fairly high; numerous tests show that. The models have also improved significantly (especially the paid versions) since their introduction just two years ago.
Of course that does not mean it can be trusted as much as Wikipedia, but it is probably a better source than Facebook.
"Fairly high" is still useless (and doesn't actually quantify anything, depending on context both 1% and 99% could be 'fairly high'). As long as these models just hallucinate things, I need to double-check. Which is what I would have done without one of these things anyway.
I just used ChatGPT to write a 500-line Python application that syncs IP addresses from asset management tools to our vulnerability management stack. This took about 4 hours using AutoGen Studio. The code just passed QA and is moving into production next week.
It's a shortcut for experience, but you lose a lot of the tools you get with experience. If I were early in my career I'd be very hesitant to rely on it, as it's a fragile ecosystem right now that might disappear, in the same way that you want to avoid tying your skills to a single company's product. In my workflow it slows me down because the answers I get are often average or wrong; it's never "I'd never have thought of doing it that way!" levels of amazing.
You used the right tool for the job, and it saved you hours of work. General AI is still a very long way off, and people expecting the current models to behave like one are foolish.
Are they useless? For writing code, no. For most other tasks, yes, or worse, as they will be confidently wrong about what you ask them.
First off, this is not the kind of code I write on my end, and I don't think I'm the only one not writing scripts all day. There's a need for scripts at times in my line of work but I spend more of my time thinking about data structures, domain modelling and code architecture, and I have to think about performance as well. Might explain my bad experience with LLMs in the past.
I have actually written similar scripts in comparable amounts of time (a day for a working proof of concept that could have gone to production as-is) without LLMs. My use case was parsing JSON crash reports from a provider (undisclosable due to NDAs) to serialize them into my company's binary format. A significant portion of that time was spent deciding what I cared about and which JSON fields I should ignore. I could have used ChatGPT to find the command-line flags for my Docker container, but it didn't exist back then, and Google helped me just fine.
Assuming you had to guide the LLM throughout the process, this is not something that sounds very appealing to me. I'd rather spend time improving on my programming skills than waste that time teaching the machine stuff, even for marginal improvements in terms of speed of delivery (assuming there would be some, which I just am not convinced is the case).
On another note...
There's no need for snark, just detailing your experience with the tool serves your point better than antagonizing your audience. Your post is not enough to convince me this is useful (because the answers I've gotten from ChatGPT have been unhelpful 80% of the time), but it was enough to get me to look into AutoGen Studio which I didn't know about!
I don't think LLMs are useless, but I do think little SoC boxes running a single application that will vaguely improve your life with loosely defined AI features are useless.
There's no sense trying to explain to people like this. Their eyes glaze over when they hear AutoGen, agents, CrewAI, RAG, Opus... To them, generative AI is nothing more than the free version of ChatGPT from a year ago. They've not kept up with the advancements, so they argue from a point in the distant past. The future will be hitting them upside the head soon enough, and they will be the ones complaining that nobody told them what was coming.
In all reality, it is a ChatGPTitty "fine"tune on some datasets they cobbled together for VQA and Android app UI driving. They did the initial test finetune, then apparently the CEO or whatever was drooling over it and said "lEt'S mAkE aN iOt DeViCe GuYs!!1!" after their paltry attempt to racketeer an NFT metaverse game.
Neither this nor Humane do any AI computation on device. It would be a stretch to say there's even a possibility that the speech recognition could be client-side, as they are always-connected devices that are even more useless without Internet than they already are with.
Make no mistake: these money-hungry fucks are only selling you food cans labelled as magic beans. You have been warned and if you expect anything less from them then you only have your own dumbass to blame for trusting Silicon Valley.
If the Humane could recognise speech on-device, and didn't require its own data plan, I'd be reasonably interested, since I don't really like using my phone for structuring my day.
I'd like a wearable that I can brain-dump to, quickly check things on without needing to unlock my phone, and use to keep on top of my schedule. Sadly for me, it looks like I'll need to go the DIY route with an ESP32 board and an e-ink display, and drop any kind of STT + TTS plans.
I think the issue is that people were expecting a custom (enough) OS, software, and firmware to justify asking $200 for a device that's worse than a $150 phone in most every way.
I don't know how much work they put into customizing it, but being derived from Android does not mean it isn't custom. Ubuntu is derived from Debian; that doesn't mean it isn't a custom OS. The fact that you can run the APK on other Android devices isn't a gotcha. You can run Ubuntu .deb files on other Debian distros too. An OS is more a curated collection of tools; you should not be going out of your way to make applications for a derivative OS incompatible with other OSes derived from the same base distro.
Without thinking too hard about it, I would have expected some more custom hardware, some on-device AI acceleration happening. For someone to go out and purchase the device, it should have been more than just an Android app.
The best way to do on-device AI would still be a standard SoC. We tend to forget that these mass-produced mobile SoCs are modern miracles for the price, despite the crappy software and firmware support from the vendors.
No small startup is going to revolutionize this space unless some kind of new physics is discovered.
The hardware seems very custom to me. The problem is that the device everyone carries is a massive superset of their custom hardware making it completely wasteful.
Qualcomm is listed as having $10 billion in yearly profits (Intel has ~$20B, Nvidia ~$80B); the news articles I can find about Rabbit say it's raised around $20 million in funding ($0.02 billion). It takes a lot of money to make decent custom chips.
Isn't Lemmy supposed to be tech savvy? What do people think the vast majority of Linux OSs are? They're derivatives of a base distribution. Often they're even derivatives of a derivative.
Did people think a startup was going to build an entire OS from scratch? What would even be the benefit of that? Deriving from Android is the right choice here. The R1 is dumb, but this is not why.
The processing was done server-side as it is with the other thing. If you find a way to do it client-side let me know otherwise I'm not interested in your dumb product.
What, you aren't excited about a future where everything is cloud computing spyware that sends all your activity to an AI to be analyzed and picked apart by strangers?
So it's just a single app running on a minimal Android implementation, the AI is done on remote servers and it still gets lousy battery life? Sounds like they dropped the ball on design. Nevertheless, no one is going to carry this that doesn't already have a phone that can do everything the Rabbit does. It has no reason to exist.
It's just marketing to be like "look at how capable our AI is with just one button". I mean if you want to be charitable it's an interesting design exercise, but wasteful and frivolous when everyone is already carrying devices that are far more capable supersets of this.
My understanding is that if you only add modules on top, those can stay closed source. It's possible the AOSP portion of the stack is still stock and untouched.
I don't know, one of the reasons they're decrying everyone running the APK is they claim they've made a bunch of "bespoke alterations" to the AOSP version they're using
AOSP is fully Apache-2.0 licensed except for the Linux kernel, so only their kernel changes would have to be released. It's also an important reason why Android was/is so successful.
Having seen what this device does, they may not even have had to alter anything to the base AOSP image. Just set your app as the launcher and you're good to go.
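If that's all they did, the "customization" could be nothing more than a standard manifest entry. A minimal sketch, assuming a stock AOSP build (the activity name here is hypothetical): any activity declared with the `HOME` category can be chosen as the device's launcher, replacing the stock home screen without touching AOSP source at all.

```xml
<!-- Hypothetical AndroidManifest.xml snippet: declaring an activity with
     the HOME and DEFAULT categories lets Android treat it as a launcher,
     so it takes over the whole screen on boot with zero OS changes. -->
<activity android:name=".RabbitLauncherActivity">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.HOME" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
```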
Depends on which part is altered. Lots of Linux distros are just curated collections of software, drivers, and configuration. You can easily achieve your OS goals without touching the code of the base distro at all. If they didn't need to modify the base code then there's nothing to distribute back. That would be like distributing your personal OS power user config settings. If you're not touching source there's nothing to contribute.
This is why I cringe at cell phone manufacturers gating cloud and AI features behind specific phone models. WTF, you're not running that cloud on the handset, so why gatekeep the product behind that model? It can't require that many on-device resources; it's a cloud app!
I know what you're getting at and this isn't directed at you and I know this is why it's done, but the capabilities of the phone don't have any bearing on the use of the AI so why gatekeep it? It's a dumb way to make a profit.
Their page for linking accounts to it was not a real web app; it was a noVNC page that connected to an Ubuntu VM running Chrome with no sandboxing and a basic password store, under the Fluxbox WM.
Holy shit, that's actually hilarious. I imagine someone would have noticed when their paste/auto-type password managers didn't work.
For those confused: instead of making a real website, they spin up a VM, embed a remote desktop tool into their website, and have you log in through Chrome running on their VM. This is sooooo sketchy, it's unreal anyone would ship this in a public product.
Imagine if, to sign into Facebook from an app, you had to go to someone else's computer, log in, and save your credentials on their PC. Would that be a good idea?
What I don’t understand is why. This sounds like way more work than spinning up some out-of-the-box framework with OAuth or a Google login and hosting it on Lambda or Azure. What is logging in on a VM box even going to do for the device?
The issue isn’t even with what it runs on, albeit selling it as specialized hardware is really bizarre, when it’s just a glorified embedded platform with a scroll wheel
You say "bizarre", they say "marketing strategy"... They chose to do this knowing people couldn't be milked for $200 for an app, but if they make it look like a device, the sheep will be lured in.
An app that would require root access to fully operate. It is designed to run and use apps automatically. "Large Action Model", I think. The easiest way to ship this is a standalone device.
I may not fully understand the situation, but AOSP offers an API called Accessibility that allows an app to hook into and modify how the user interacts with the UI. The best example is probably TalkBack.
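As a rough sketch of what that looks like, assuming they use the standard `AccessibilityService` API (the service class and the button label here are made up for illustration):

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Hypothetical sketch: an accessibility service that finds and clicks a
// button in whatever app is in the foreground. This is roughly how an
// agent could "drive" other apps' UIs without root access.
class ActionService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        val root = rootInActiveWindow ?: return
        // Look up on-screen nodes by their visible text and click the
        // first clickable match.
        root.findAccessibilityNodeInfosByText("Order")
            .firstOrNull { it.isClickable }
            ?.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    override fun onInterrupt() { /* nothing to clean up */ }
}
```

The service still has to be declared in the manifest and explicitly enabled by the user in Settings, but no root is needed.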
Such a bad comparison. In the case of Debian and Ubuntu, you run the apps on hardware you already have. In the case of the Rabbit, you could just run the app on your phone instead of buying a Rabbit. The Rabbit does not offer anything more than its app does when installed on an Android phone. It's even better on an Android phone, because the phone is faster.
The difference here is that Ubuntu is open about the fact that they stand on the shoulders of something greater than them.
Rabbit, in contrast, pretends that everything they've built is proprietary, and therefore that no one could possibly come up with something similar.
When it's clearly not the case.
This is critical, not for the purpose of sales, but for the purpose of retaining investor value.
The whole thing reeks of an exercise to generate artificial investor value.
If investors find out that their so-called innovation can actually be done by anyone with some coding skills and connectivity to OpenAI, then the company's value will drop like a hot turd.
No, it was revealed to not be a specific design at all. The device is actually a terrible phone with fewer features than a regular phone, nothing more. The app would likely run as-is on any Android phone, with 100% of the features intact.
Paying $200 for a bottom of the line smartphone that can't smartphone is a bit much.
What do you mean, not a specific design? Android apps are just programs. What were any of you expecting it to be programmed with? A brand-new programming language?
Look I think this little thing is a cool gimmicky thing that is cheap and doesn't really have any practical application at all... I forgot where I was going with that comment but I guess it pretty much sums up my thoughts.