- Rabbit R1 AI box is actually an Android app in a limited $200 box, running on AOSP without Google Play.
- Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
- AOSP is a logical choice for mobile hardware as it provides essential functionalities without the need for Google Play.
It’s so weird how they’re just insisting it isn’t an Android app even though people have proven it is. Who do they expect to believe them?
The same question was asked a million times during the crypto boom. “They’re insisting that [some-crypto-project] is a safe passive income when people have proven that it’s a ponzi scheme. Who do they expect to believe them?” And the answer is, zealots who made crypto (or in this case, AI) the basis of their entire personality.
It’s the Juicero strategy.
“You can’t squeeze our juice packs! Only our special machine can properly squeeze our juice packs for optimal taste!”
Ahh, the good ol’ days, before we knew how batshit AvE was.
I’m assuming you’re talking about the YouTuber. I haven’t watched AvE since before the pandemic; what did he do?
Leaned hard into anti-vax views and sympathy for the Canadian trucker protests, and made it a fairly prominent part of his videos. Not entirely surprising that he held some of those views, but he got high on his own LIBERTARIAN!!! supply and started thinking that if he thought it, his audience must want to hear it.
Ah, what a shame, he had some good stuff.
Reviewer proceeds to squeeze more juice out with their hands than the machine managed.
Investors who don’t bother reading past the letters A and I in the prospectus.
The AI boom in a nutshell. Repackaged software and content with a shiny AI coat of paint. Even the AI itself is often just repackaged ChatGPT.
Perplexity for this device. Still, excited to get my pre-order, if only to add to my Teenage Engineering collection.
It certainly looks sleek. Too bad that’s all it has in its favor.
Why are there AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I’ve gained nothing from asking this glorified e-waste something and then pulling out my phone to verify it.
It’s not black or white.
Of course AI hallucinates, but not everything an LLM produces is garbage.
Don’t expect a “living” Wikipedia or Google, but it sure can help with things like coding or translating.
I don’t necessarily disagree. You can certainly use LLMs and achieve something in less time than without them. Numerous people here are talking about coding, and while I’ve had no success there, it can work with more popular languages. The thing is, these people use LLMs as a tool in their process. They verify the results (or the compiler does it for them). That’s not what this product is. It’s a standalone device which you talk to. It’s supposed to replace pulling out your phone to answer a question.
I have now heard of my first “ai box”. I’m on Lemmy most days. Not sure how it’s an epidemic…
I haven’t seen many of them here, but I use other media too. E.g., not long ago there was a lot of coverage of the “Humane AI Pin”, which was utter garbage and even more expensive.
I think it’s a delayed development reaction to Amazon Alexa from 4 years ago. Alexa came out, and voice assistants were everywhere. Someone wanted to cash in on the hype, but consumer product development takes a really long time.
So the product is finally finished (a mobile Alexa) and they label it AI to hype it, as well as to make it work without the hard work of parsing Wikipedia for good answers.
Alexa is a fundamentally different architecture from the LLMs of today. There is no way that anyone with even a basic understanding of modern computing would say something like this.
Alexa is a fundamentally different architecture from the LLMs of today.
Which is why I explicitly said they used AI (LLM) instead of the harder to implement but more accurate Alexa method.
Maybe actually read the entire post before being an ass.
I just started diving into the space yesterday, from the local/self-hosted side. And I can say that there are definitely problems with garbage spewing, but some of these models are getting really, really good at really specific things.
A biomedical model I saw was lauded for its consistency in pulling relevant data from medical notes for the sake of patient care instructions, important risk factors, fall risk level, etc.
So although I agree they’re still giving well phrased garbage for big general cases (and GPT4 seems to be much more ‘savvy’), the specific use cases are getting much better and I’m stoked to see how that continues.
I just used ChatGPT to write a 500-line Python application that syncs IP addresses from asset management tools to our vulnerability management stack. This took about 4 hours using AutoGen Studio. The code just passed QA and is moving into production next week.
https://github.com/blainemartin/R7_Shodan_Cloudflare_IP_Sync_Tool
Tell me again how LLMs are useless?
To be honest… that doesn’t sound like a heavy lift at all.
You used the right tool for the job, and it saved you hours of work. General AI is still a very long way off, and people expecting the current models to behave like one are foolish.
Are they useless? For writing code, no. For most other tasks, yes, or worse, as they will be confidently wrong about what you ask them.
Are they useless?
Only if you believe most Lemmy commenters. They are convinced you can only use them to write highly shitty and broken code and nothing else.
This is my experience with LLMs. I have gotten them to write code that can at best be used as a scaffold. I personally do not find much use for them, as you functionally have to proofread everything they do. All it does is change the workload from a creative process to a review process.
I don’t agree. Just a couple of days ago I went to write a function to do something sort of confusing to think about. By the name of the function, copilot suggested the entire contents of the function and it worked fine. I consider this removing a bit of drudgery from my day, as this function was a small part of the problem I needed to solve. It actually allowed me to stay more focused on the bigger picture, which I consider the creative part. If I were a painter and my brush suddenly did certain techniques better, I’d feel more able to be creative, not less.
Who’s going to tell them that “QA” just ran the code through the same AI model and it came back “Looks good”?
:-)
There’s no sense trying to explain to people like this. Their eyes glaze over when they hear AutoGen, agents, CrewAI, RAG, Opus… To them, generative AI is nothing more than the free version of ChatGPT from a year ago. They’ve not kept up with the advancements, so they argue from a point in the distant past. The future will be hitting them upside the head soon enough, and they will be the ones complaining that nobody told them what was coming.
They aren’t trying to have a conversation; they’re trying to convince themselves that the things they don’t understand are bad and that they’re making the right choice by not using it. They’ll be the boomers who needed millennials to send emails for them. I’ve been through that, so I just pretend I don’t understand AI. I feel bad for the zoomers and Gen Alphas who will be running AI and futilely trying to explain how easy it is. It’s been a solid 150 years of extremely rapid invention and innovation of disruptive technology. But THIS is the one that actually won’t be disruptive.
Wonderfully said, this is a very good point.
I heard someone even leaked the apk. LMAO, it’s hilarious that your $200 product can literally be pirated.
You wouldn’t download a bunny…
I would stew a bunny…
Does the apk have unlimited access to Perplexity AI?
I’m confused by this revelation. What did everybody think the box was?
Without thinking too hard about it, I would have expected some more custom hardware, with some on-device AI acceleration happening. For someone to go and purchase the device, it should have been more than just an Android app.
The best way to do on-device AI would still be a standard SoC. We tend to forget that these mass-produced mobile SoCs are modern miracles for the price, despite the crappy software and firmware support from the vendors.
No small startup is going to revolutionize this space unless some kind of new physics is discovered.
I think the plausibility comes from the fact that a specialized AI chip could theoretically outperform a general-purpose chip by several orders of magnitude, at least for inference. And I don’t even think it would be difficult to convert an NN design into a chip, or that it would need to be made on a bleeding-edge node to get that much more performance. The trade-off would be that it can only do a single NN (or any NN that single one could be adjusted to behave identically to, e.g. to remove a node you could just adjust the weights so that it never triggers; see the toy sketch below).
So I’d say it’s more accurate to put it as “the easiest/cheapest way to do an AI device is to use a standard SoC”, but the best way would be to design a custom chip for it.
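To make the “never triggers” part concrete, here’s a toy sketch (the numbers are made up and this is plain arithmetic, not anything from a real chip): a neuron is just a weighted sum, so pinning one weight at zero is equivalent to removing that input, even when the multiply-accumulate hardware is fixed in silicon.

```kotlin
// Toy example only: a "neuron" as a weighted sum, with made-up numbers.
fun neuron(inputs: DoubleArray, weights: DoubleArray): Double =
    inputs.zip(weights).sumOf { (x, w) -> x * w }

fun main() {
    val inputs = doubleArrayOf(0.5, 1.0, -2.0)
    println(neuron(inputs, doubleArrayOf(0.3, 0.8, 0.1))) // all three inputs contribute
    println(neuron(inputs, doubleArrayOf(0.3, 0.8, 0.0))) // third input can never trigger anything
}
```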
They’re not a chip ~~manufacturer~~ designer though, and modern phone processors are already fast enough to do near-real-time text generation and fast image generation (20 tokens/second Llama 2, ~1 second for a distilled SD 1.5, on a Snapdragon 8 Gen 3).
Unfortunately, the cheapest phones with that processor seem to be about $650, and the Rabbit R1 costs $200 and uses a MediaTek Helio P35 from late 2018.
Neither AMD nor nVidia are chip manufacturers. They just design them and send them off to TSMC or Samsung to get made.
The hardware seems very custom to me. The problem is that the device everyone carries is a massive superset of their custom hardware making it completely wasteful.
I think the issue is that people were expecting a custom (enough) OS, software, and firmware to justify asking $200 for a device that’s worse than a $150 phone in most every way.
I don’t know how much work they put into customizing it, but being derived from Android does not mean it isn’t custom. Ubuntu is derived from Debian; that doesn’t mean it isn’t a custom OS. The fact that you can run the apk on other Android devices isn’t a gotcha. You can run Ubuntu .deb files on other Debian distros too. An OS is more of a curated collection of tools; you should not be going out of your way to make applications for a derivative OS incompatible with other OSes derived from the same base distro.
The Rabbit OS is running server side.
Same. As soon as I saw the list of apps they support, it was clear to me that they’re running Android. That’s the only way to provide that feature.
Most of the processing is done server side though.
Isn’t Lemmy supposed to be tech savvy? What do people think the vast majority of Linux OSs are? They’re derivatives of a base distribution. Often they’re even derivatives of a derivative.
Did people think a startup was going to build an entire OS from scratch? What would even be the benefit of that? Deriving from Android is the right choice here. The R1 is dumb, but this is not why.
I don’t even understand what the point is of this product. Seems like e-waste at first glance.
It’s just marketing to be like “look at how capable our AI is with just one button”. I mean if you want to be charitable it’s an interesting design exercise, but wasteful and frivolous when everyone is already carrying devices that are far more capable supersets of this.
This is why I cringe at cell phone manufacturers selling cloud and AI features based on phone models, because wtf, you’re not running that cloud on that handset, so why do you gatekeep the product behind that model? It can’t require that many resources; it’s a cloud app!
It is to make you spend more to buy the better model. If you really want that AI, you won’t mind spending a bit more.
I know what you’re getting at, and this isn’t directed at you, and I know this is why it’s done, but the capabilities of the phone don’t have any bearing on the use of the AI, so why gatekeep it? It’s a dumb way to make a profit.
It’s a dumb way to make a profit.
If it works, is it dumb?
When VHS was still around, DVDs were priced higher even though they were much cheaper to produce. If people are willing to pay more, producers/distributors will charge more. Yay capitalism.
I know. It’s dumb as hell. Just like everything being priced at 4.99 instead of 5.00. People are just stupid, and it seems to work out for the companies.
Should’ve named the company Snake Oil Inc.
Their page to link accounts to it was not a real web app; it was a noVNC page that would connect to an Ubuntu VM running Chrome with no sandboxing and the basic password store, under the Fluxbox WM.
Someone dumped the home directory from it.
I wish I knew what any of this meant…
It basically implies that they cobbled together some standard technology but they didn’t even put it together very well.
It’s like a solution that’s held in place with chewing gum and Band-Aids.
• Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
All Android devices are “emulators” then; it’s not like their hardware is special.
The issue isn’t even what it runs on, although selling it as specialized hardware is really bizarre when it’s just a glorified embedded platform with a scroll wheel.
The company is known for making quirky hardware, usually niche musical instruments with creatively chosen knobs, switches, cute UIs and such.
They figured they could ride the AI hype wave based on their expensive niche audience but it’s blowing up in their face.
Is that not custom hardware? I really don’t see any issue with how they built this thing. The issue is what they built.
You should see the rest of the overpriced toys these guys have marketed as genius over the years.
An app that would require root access to fully operate. It is designed to run and use apps automatically (the Large Action Model, I think). The easiest way to get this out is a standalone device.
I may not fully understand the situation, but AOSP offers an API called Accessibility that allows an app to hook into and modify how the user interacts with the UI. The best example is probably TalkBack.
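For anyone curious what that looks like, here’s a rough sketch of an accessibility service that clicks a button in whatever app is in the foreground. The class name and the “Play” label are made up for illustration, and a real service also has to be declared in the manifest and explicitly enabled by the user under Settings > Accessibility.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Hypothetical service name; this is the same mechanism TalkBack (and automation
// apps) use to observe and drive the UI of other apps without root.
class ActionService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // Fired on window changes, clicks, text changes, etc. in the foreground app.
        val root: AccessibilityNodeInfo = rootInActiveWindow ?: return

        // Example: find a clickable node labelled "Play" and click it on the user's behalf.
        root.findAccessibilityNodeInfosByText("Play")
            .firstOrNull { it.isClickable }
            ?.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    override fun onInterrupt() {
        // Required override; called when the system wants the service to stop its feedback.
    }
}
```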
Programmed electronic device revealed to be running a program?
No, revealed to not be a specific design at all. The device is actually a terrible phone with fewer features than a phone, nothing more. The app would likely run as-is on any Android phone, with 100% of the features provided.
Paying $200 for a bottom of the line smartphone that can’t smartphone is a bit much.
What do you mean not specific design? Android apps are just programs. What were any of you expecting it to be programmed with? A brand new programming language?
They were expecting it to not be Android, but something more custom. Like I feel even just bare bones Linux would’ve been more acceptable.
Linux isn’t custom, it’s just another open-source OS. This is ridiculous.
Dude it’s an android app they are trying to sell for $200. Why apologize for this thing? Get some better expectations.
You say “Android app” like that makes it worthless. It’s software they made running on hardware they designed.
Would you pay $200 for an Android app?
This is not about the programming language nor the OS. It’s about masquerading a cheap, butchered Android phone as a brand-new device. If it were some custom, optimized hardware connecting the main I/O (camera, touchscreen, buttons) to a piece of software that communicates with a remote server, it would justify the price. But as it is, it’s borderline refurbished, weak phone hardware sold for $200.
This is not about the programming language nor the OS
It’s about masquerading a cheap, butchered Android phone
Wait till you hear what Android is. Android isn’t a company that makes phones.
I mean, isn’t all software just an app that runs on hardware?
Next you’re going to tell me that Lemmy is one of these dirty, dirty apps.
Nahhh, not as dirty as Reddit.
lmao threatening action against their own imminent irrelevance, more like
Not cool guys, not cool at all
And get serious - fuck your “proprietary” details, fuck lying/misrepresentation for money, and fuck you for trying a stunt like this.
Call me when you actually put the genie in the bottle!