When the Rabbit R1 was first announced, it generated a lot of buzz—this tiny, stylish device promised to reinvent personal computing with the help of artificial intelligence. But as a blind accessibility specialist, I didn’t rush to pre-order one. In fact, it was my friend Michael who picked one up when it launched—and after a few underwhelming attempts to use it, the device sat untouched for months.
That changed one day when we were bored. Out of curiosity (and maybe stubbornness), we decided to give the Rabbit R1 another spin.
My Background: Accessibility Matters, But I Love Good AI
Before diving in, it’s worth mentioning where I come from. I’m an accessibility specialist by trade—ensuring digital experiences work for blind and low vision users. But I’m also someone who loves good AI. When I see potential in a tool, I don’t write it off just because it’s not perfect. I see it as a challenge and an opportunity for improvement.
The First Time Around: Meh
When Michael first unboxed the Rabbit R1, nothing about it screamed “usable” for blind users. No screen reader. No haptic cues. No audio guidance. It felt like another AI device that forgot we exist. So we set it aside.
What Is the Rabbit R1?
The Rabbit R1 is a handheld AI-powered device built around a system the company calls a Large Action Model (LAM). Unlike traditional voice assistants such as Siri or Alexa, the R1 is designed to do things: log into websites, automate tasks, and control other apps or services based on your requests.
It includes:
– A push-to-talk button
– A scroll wheel
– A rotating camera (Rabbit Eye)
– A touchscreen
– A USB-C port
But where it really shines is online, through a tool called the Rabbit Hole.
Into the Rabbit Hole
The Rabbit Hole is Rabbit’s web interface—this is where the magic really starts for those of us who rely on screen readers.
Once logged in, I explored several modes, including:
Playground
This is where you can type out any task in natural language. I told it: "Update my server." It asked for my login credentials, connected, and walked me through the entire update process. Within 10–15 minutes, the task was done. This kind of real-world automation, without needing a traditional terminal, was a huge win.
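Rabbit hasn't published how the Playground carries out a task like this, but the general pattern, an agent proposing shell commands and running each one only after the user approves it, can be sketched in Python. Everything below, including the `run_update_steps` helper and the Debian-style commands, is a hypothetical illustration of that pattern, not Rabbit's actual implementation:

```python
import subprocess
from typing import Callable, List, Optional

def run_update_steps(
    steps: List[List[str]],
    confirm: Callable[[str], bool],
    runner: Optional[Callable[[List[str]], int]] = None,
) -> List[str]:
    """Walk through update commands, asking for confirmation before each.

    Returns the list of commands that were actually executed.
    """
    if runner is None:
        # Default: actually execute the command on the local machine.
        runner = lambda cmd: subprocess.run(cmd, check=True).returncode
    executed = []
    for cmd in steps:
        label = " ".join(cmd)
        # The agent narrates each step; the user approves or skips it.
        if confirm(f"Run '{label}'?"):
            runner(cmd)
            executed.append(label)
    return executed

# Typical Debian-style update steps (hypothetical example).
steps = [["apt-get", "update"], ["apt-get", "-y", "upgrade"]]
```

Passing a custom `runner` makes the flow easy to dry-run or test without touching a real server.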
Cookie Jar
This is where Rabbit stores login credentials for services. The catch? It uses a virtual browser window that’s not accessible. I had to rely on NVDA OCR to locate fields and type in my credentials. Frustrating, but I made it work.
Real Tasks I Completed
Once I got the hang of things, I started pushing Rabbit’s limits:
– Described collectibles on Michael’s bookshelf
– Researched business strategies
– Debugged Python code
– Found cheap 3D printer filament
– Ran server commands
– Opened multiple windows for parallel tasks
Intern Mode: Rabbit’s Own AI Agent
Rabbit recently introduced Intern, a new mode that acts as your AI agent, taking on multi-step tasks. Some tasks it can perform include:
– Creating online courses
– Writing Python apps
– Summarizing news in Word documents
However, it has limitations:
– Audio editing produced strange artifacts
– Video uploads failed
– Audio-to-text transcription didn't work
– Editing Squarespace sites was unsuccessful
One win: generating alt text for images worked.
Today’s Test: Navigating the R1’s On-Device Menu
I wanted to figure out how to navigate the R1's on-device menu. First, I tried my Meta Ray-Ban smart glasses, but they weren't much help: they read some of the on-screen text, but the information was often inaccurate.
Then, I used Seeing AI. I pressed the R1's side button and turned the scroll wheel while Seeing AI was in Short Text mode. It read out items like Settings and Updates, but it didn't indicate which item was selected. I had to rely on my remaining vision to spot the red selection highlight.
I counted five items down to reach Updates and used the side button to select it. It wasn’t perfect, but it was usable with some effort.
Why I Bought One Anyway
After testing Michael’s device, I saw real potential and ordered my own Rabbit R1 from Best Buy. It will arrive Thursday. Michael will help me set it up, and I’m fine with that. This device, despite its flaws, shows what’s possible when AI meets utility.
Looking Ahead: Opening a Dialogue
I don’t expect the Rabbit R1 to be perfect yet. But I believe in progress. I plan to start a dialogue with Rabbit’s team about how to make the device more accessible to blind and low vision users. Accessibility isn’t an afterthought—it’s a foundation for innovation, and I’m excited to help drive that conversation forward.
Check out my work at