Why I Bought the Rabbit R1 (Even Though It’s Not Accessible)

When the Rabbit R1 was first announced, it generated a lot of buzz—this tiny, stylish device promised to reinvent personal computing with the help of artificial intelligence. But as a blind accessibility specialist, I didn’t rush to pre-order one. In fact, it was my friend Michael who picked one up when it launched—and after a few underwhelming attempts to use it, the device sat untouched for months.

That changed one day when we were bored. Out of curiosity (and maybe stubbornness), we decided to give the Rabbit R1 another spin.

My Background: Accessibility Matters, But I Love Good AI

Before diving in, it’s worth mentioning where I come from. I’m an accessibility specialist by trade—ensuring digital experiences work for blind and low vision users. But I’m also someone who loves good AI. When I see potential in a tool, I don’t write it off just because it’s not perfect. I see it as a challenge and an opportunity for improvement.

The First Time Around: Meh

When Michael first unboxed the Rabbit R1, nothing about it screamed “usable” for blind users. No screen reader. No haptic cues. No audio guidance. It felt like another AI device that forgot we exist. So we set it aside.

What Is the Rabbit R1?

The Rabbit R1 is a handheld AI-powered device built around a system the company calls a Large Action Model (LAM). Unlike traditional voice assistants like Siri or Alexa, Rabbit is built to do things—log into websites, automate tasks, and control other apps or systems based on your requests.

It includes:
– A push-to-talk button
– A scroll wheel
– A rotating camera (Rabbit Eye)
– A touchscreen
– A USB-C port

But where it really shines is online, through a tool called the Rabbit Hole.

Into the Rabbit Hole

The Rabbit Hole is Rabbit’s web interface—this is where the magic really starts for those of us who rely on screen readers.

Once logged in, I explored several modes, including:

Playground

This is where you can type out any task in natural language. I told it: “Update my server.” It asked for my login credentials, then connected and walked me through the entire update process. Within 10–15 minutes, it had completed the task. This kind of real-world automation—without needing a traditional terminal—was a huge win.
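Rabbit handled this step conversationally, but for readers curious what a comparable scripted update looks like, here is a minimal Python sketch. It assumes a Debian/Ubuntu server reachable over SSH with key-based authentication already configured; the hostname and username are placeholders, and this is not a description of how Rabbit itself works internally.

```python
import subprocess

def build_update_command(host: str, user: str) -> list[str]:
    """Compose the SSH invocation for a routine Debian/Ubuntu package update."""
    return [
        "ssh",
        f"{user}@{host}",
        # Refresh package lists, then apply upgrades non-interactively.
        "sudo apt-get update && sudo apt-get upgrade -y",
    ]

def update_server(host: str, user: str) -> None:
    # check=True raises CalledProcessError if the remote update fails.
    subprocess.run(build_update_command(host, user), check=True)
```

Splitting command construction from execution keeps the risky part (actually running the update) easy to review before it touches a real machine.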

Cookie Jar

This is where Rabbit stores login credentials for services. The catch? It uses a virtual browser window that’s not accessible. I had to rely on NVDA OCR to locate fields and type in my credentials. Frustrating, but I made it work.

Real Tasks I Completed

Once I got the hang of things, I started pushing Rabbit’s limits:
– Described collectibles on Michael’s bookshelf
– Researched business strategies
– Debugged Python code
– Found cheap 3D printer filament
– Ran server commands
– Opened multiple windows for parallel tasks

Intern Mode: Rabbit’s Own AI Agent

Rabbit recently introduced Intern, a new mode that acts as your AI assistant. Some tasks it can perform include:
– Creating online courses
– Writing Python apps
– Summarizing news in Word documents

However, it has limitations:
– Audio editing had strange sounds
– Video uploads failed
– Audio-to-text didn’t work
– Editing Squarespace sites was unsuccessful

One win: generating alt text for images worked.

Today’s Test: Navigating the R1’s On-Device Menu

I wanted to figure out how to manage the R1’s menu. First, I tried using my Meta Ray-Ban smart glasses, but they weren’t helpful: they read some of the text but gave inaccurate or unreliable information.

Then, I used Seeing AI. I pressed the side button and used the scroll wheel while in short text mode. Seeing AI read out items like Settings and Updates, but it didn’t indicate what was selected. I had to rely on my remaining vision to identify the red selection highlight.

I counted five items down to reach Updates and used the side button to select it. It wasn’t perfect, but it was usable with some effort.

Why I Bought One Anyway

After testing Michael’s device, I saw real potential and ordered my own Rabbit R1 from Best Buy. It will arrive Thursday. Michael will help me set it up, and I’m fine with that. This device, despite its flaws, shows what’s possible when AI meets utility.

Looking Ahead: Opening a Dialogue

I don’t expect the Rabbit R1 to be perfect yet. But I believe in progress. I plan to start a dialogue with Rabbit’s team about how to make the device more accessible to blind and low vision users. Accessibility isn’t an afterthought—it’s a foundation for innovation, and I’m excited to help drive that conversation forward.

Check out my work at

https://taylorarndt.substack.com

iOS 18: The Ultimate Upgrade

iOS 18 Icon

Apple has unveiled iOS 18, a major update packed with new features, enhancements, and improvements that redefine the iPhone experience. This release introduces Apple Intelligence, a suite of personalized features that make your iPhone more intuitive and helpful. Here’s an in-depth look at everything iOS 18 has to offer.

Apple Intelligence: Your Personalized Assistant

Apple Intelligence is the highlight of iOS 18, offering a new level of customization and smart capabilities across the entire operating system. Designed to understand your personal context, Apple Intelligence suggests actions, assists with writing, and offers tailored recommendations. This feature brings a smarter, more context-aware Siri, new writing tools, and more precise notifications, making your iPhone experience more seamless and intuitive. I currently use the iPhone 14 and look forward to upgrading to take full advantage of these features.

Enhanced Siri Experience

Siri has received a significant upgrade in iOS 18, featuring improved language understanding and contextual awareness. Siri now anticipates your needs, offers real-time suggestions, and understands commands based on your current activity, integrating even more closely into your daily routine. A potential application I am exploring is whether Siri’s screen-aware feature can read unlabeled buttons on the screen, which would greatly enhance usability.

Customization at Your Fingertips

iOS 18 allows you to personalize your iPhone like never before:

– Rearrange Apps and Widgets: Easily customize your Home Screen layout by rearranging apps and widgets.

– New App Icon Look: Choose a Dark mode look, tint app icons with any color, or let iOS suggest a color based on your wallpaper.

– Locked and Hidden Apps: Secure sensitive apps with Face ID, keeping your data safe when sharing your device.

Redesigned Control Center

The Control Center receives a complete overhaul with new groups of controls that are accessible with a single swipe. You can customize controls, resize them, and group them as you like. The new Controls Gallery allows you to add your favorite controls from various apps, enhancing personalization.

Photos App: A New Way to Relive Memories

The Photos app has been completely redesigned, making it easier to organize and access your library:

– Browse by Topic: Collections organize your photos by recent days, people, pets, and trips, providing a more intuitive browsing experience.

– Customize Collections: Pin your favorite collections, ensuring your most cherished photos are always easy to find.

Messages: More Fun and Functional

iOS 18 brings exciting new ways to communicate in Messages:

– Text Effects: Apply animated effects to text, words, or emojis, with suggestions appearing as you type.

– Tapback with Any Emoji or Sticker: Express yourself with a wider variety of emojis and stickers in your responses.

– Messages via Satellite: Stay connected without Wi-Fi or cellular, using satellite technology on supported iPhones.

– Schedule Messages: Use the Send Later feature to schedule messages for a specific time, ensuring you never forget to send an important text. This feature is a welcome addition as it allows scheduling communications at the most appropriate times.

Mail: Coming Soon with New Features

Later this year, iOS 18 will introduce Mail improvements with automatic categorization and a focus on important messages. The new Primary category will help users manage their inbox more effectively, prioritizing time-sensitive and significant emails. With the volume of emails I receive, this enhancement will be transformative in streamlining my communication management.

Safari: Smarter Browsing

Safari in iOS 18 introduces Highlights, automatically detecting relevant information on a page and making it easily accessible. A redesigned Reader mode now includes a table of contents and high-level summaries, allowing users to get a quick overview of articles before diving in.

Maps: Explore Like Never Before

iOS 18 brings new topographic maps and trail networks, making it easy to plan hikes and outdoor activities. Users can create custom routes, download maps for offline use, and access detailed hiking information, including trail length and elevation. I am particularly interested in exploring whether the custom routes feature can work like waypoints, enhancing navigation similar to GoodMaps.

Game Mode: Elevate Your Gaming Experience

Game Mode minimizes background activity to maintain high frame rates and reduce audio latency, especially when using AirPods and wireless game controllers. This ensures smooth gameplay and an immersive gaming experience.

New Wallet Features

The Wallet app now supports Tap to Cash, allowing iPhone users to complete transactions by simply bringing their devices together. This new capability will make Apple Cash transactions even more convenient. Additionally, users can now pay with rewards and set up installment payments for Apple Pay, offering greater flexibility in managing payments.

Enhanced Accessibility Features

iOS 18 introduces revolutionary accessibility updates:

– Eye Tracking: Control your iPhone using just your eyes.

– Music Haptics: Sync the iPhone Taptic Engine with the rhythm of songs, enhancing the music experience for users who are deaf or hard of hearing.

– Vocal Shortcuts: Record specific sounds to trigger actions on iPhone, assisting those with atypical speech in communicating more effectively.

Privacy and Security Enhancements

Privacy remains a priority with redesigned Privacy and Security settings, offering easier ways to manage what information you share with apps. New contact-sharing controls and improved Bluetooth privacy provide users with more control over their data.

Additional Updates

iOS 18 brings a host of other features, including:

– Live Call Transcription: Record and transcribe phone calls directly from the Phone app. This feature is invaluable for capturing discussions and sharing notes within my team.

– New Calculator Features: Access the Math Notes calculator and explore unit conversion and history features in a new portrait mode, potentially revolutionizing accessibility in math.

– Freeform Updates: New diagramming modes, alignment tools, and improved sharing options make Freeform boards even more versatile.

iOS 18 Release Date and Compatibility

iOS 18 is set to be released on September 16th and will be compatible with a wide range of iPhone models, from the iPhone 11 up to the latest iPhone 16 series. With so many new features, iOS 18 promises to be the most powerful and personalized iPhone experience yet.

Apple Event Recap: A New Era of Innovation, Intelligence, and Accessibility

Apple’s recent event at Apple Park was not just about new products; it was a showcase of how technology can empower, connect, and enhance the lives of all users, including those with disabilities. The event highlighted Apple’s ongoing commitment to accessibility, ensuring that its innovations are designed to be inclusive and usable by everyone. With major announcements around Apple Watch, AirPods, and iPhone, Apple continues to lead the way in integrating advanced technologies that redefine our interactions with the world.

Apple Watch Series 10: The Thinnest, Most Advanced Apple Watch Ever

Apple Watch Series 10 made its debut with an up to 30% larger, more advanced display, designed for enhanced readability and a sleek look in polished finishes like Jet Black and Rose Gold. Featuring a wide-angle OLED display, improved brightness, and power efficiency, the Series 10 redefines interaction by making it easier to view the watch from any angle.

The Series 10 is Apple’s thinnest design yet, measuring just 9.7 mm, and incorporates advanced technologies such as the S10 SiP and watchOS 11. These features enable intelligent capabilities like sleep apnea detection, advanced workout metrics, and new water-based activity tracking, positioning it as the perfect companion for any lifestyle. I am definitely considering purchasing this in the future. However, I want the phone first.

Apple Watch Ultra 2: The Ultimate Sports Watch

Apple Watch Ultra 2 was introduced as Apple’s most rugged and capable smartwatch to date. With new Black Titanium finishes, the Ultra 2 offers advanced GPS, extended battery life, and enhanced sensors for underwater activities, making it ideal for athletes and outdoor enthusiasts.

AirPods 4: Redefining Personal Audio

Apple unveiled the next generation of AirPods, focusing on comfort, audio quality, and intelligent features. Powered by the H2 chip, AirPods 4 deliver superior sound with richer bass, personalized spatial audio, and new machine learning features like voice isolation and intuitive Siri interactions. For the first time, AirPods 4 are available with Active Noise Cancellation and Transparency mode, adapting automatically to different environments. USB-C and wireless charging options further improve convenience. However, I’m hesitant to purchase a pair, as they may not fit well given my smaller ear size. I was impressed by the lower prices, though.

AirPods Pro 2: Health-Focused Audio Innovations

Apple introduced revolutionary health features in AirPods Pro 2, including hearing protection, a clinically validated Hearing Test, and an over-the-counter Hearing Aid feature. These additions make AirPods Pro 2 a transformative tool for those with hearing challenges, providing accessible hearing support without compromising audio quality. I believe this is truly a game-changer.

iPhone 16 and iPhone 16 Pro: The Next Level of Apple Intelligence

The iPhone 16 lineup marks the beginning of a new era, integrating Apple Intelligence at its core. With the A18 chip and a 16-core Neural Engine, the new iPhones deliver enhanced on-device intelligence. Features such as the customizable Action button and advanced camera systems make the iPhone 16 the most capable and personal iPhone yet. My favorite feature is the visual intelligence, which is a standout addition.

Visual Intelligence was a major highlight of the iPhone 16, transforming the device into a powerful tool for everyday interactions. The new Camera Control on iPhone 16 allows users to instantly learn about their surroundings by simply pointing the camera. This feature leverages on-device intelligence and Apple services to provide real-time information without storing images, ensuring privacy. For example, users can identify a restaurant, view ratings, check hours, and even add events from a flyer directly to their calendar with a simple click. It also integrates with third-party tools, allowing users to search for products online or get academic help with a single tap.

The Pro models elevate the experience further with new Titanium finishes, larger displays, and superior gaming capabilities, all driven by the A18 Pro chip. Apple Intelligence integrates deeply into the system, enhancing communication, reliving memories, and even personalizing Siri to better assist with day-to-day tasks. The iPhone 16 Pro’s Camera Control not only enhances photography but also provides users with access to powerful AI-driven insights, making it an invaluable tool for visually impaired users and beyond.

Commitment to Accessibility

From the outset, the Apple Event placed a strong emphasis on accessibility, with several speakers acknowledging the profound impact Apple products have had on people with disabilities. Apple’s dedication to accessibility was evident across all announcements, as the company showcased features like on-device intelligence that respects user privacy, the health-focused innovations in AirPods, and the Visual Intelligence capabilities in iPhone 16 that make information more accessible. The entire event served as a testament to Apple’s vision of technology that is not just cutting-edge but also inclusive, ensuring that everyone, regardless of their abilities, can benefit from the latest advancements.

The Apple Event highlighted the company’s relentless drive for innovation, underpinned by a strong commitment to accessibility and user empowerment. From the thinnest Apple Watch ever to health-focused audio solutions and iPhones that redefine personal intelligence and accessibility, Apple continues to set the standard for how technology should integrate into and enhance our daily lives. Comment below and tell us your thoughts on the event.

My Experience Using Meta Ray-Bans for Shopping


I’ve been curious about whether my Meta Ray-Ban smart glasses could help me shop independently, so I decided to put them to the test at CVS and Natural Grocers in Austin. My experience was filled with both challenges and moments of shock, especially when the people around me saw how I navigated the store using these glasses.

Testing the Meta Ray-Ban Glasses at CVS
I had some errands to run at CVS, including picking up items at the pharmacy, so I thought this would be the perfect opportunity to see how the glasses could assist me. To get there, I booked an Uber, using my Meta Ray-Bans to help me identify the car when it arrived. The first Uber wasn’t the right one. I asked the glasses, “Look and tell me what car this is,” and they responded, “This is a white sedan.” I knew immediately that this wasn’t my ride.

When the correct Uber finally pulled up, I used the glasses to confirm by asking, “Look and tell me what color this car is.” The response was “white.” I then asked, “What car is this?” and it correctly identified it as a white Honda. I double-checked by asking for the license plate number, and the glasses gave me the right details. Feeling confident, I got in and made my way to CVS.

Once at the store, I quickly grabbed my pharmacy items but wanted to explore further to see how much the glasses could assist me in navigating the aisles. I asked, “Look and tell me a detailed description of this aisle,” and it responded with descriptions of greeting cards for graduations and birthdays. Moving down another aisle, it identified the snacks section, describing candies and other treats.

When I picked up items I was interested in, I asked, “Look and tell me what I am holding.” Unfortunately, one of the specific items I was searching for was out of stock, but this process made finding and verifying products a lot easier. Before leaving, I asked the glasses, “Look and tell me if you see the exit sign,” and they guided me accurately to it. I repeated the same steps to find my Uber, verifying the car details to ensure I got into the right one.

Walking to Natural Grocers
The next day, I decided to visit Natural Grocers, which is close enough for me to walk. As I approached, I used my Meta Ray-Ban glasses to ensure I was at the correct location by asking, “Look and tell me if this is Natural Grocers.” The glasses confirmed it was indeed the right business, so I walked in confidently.

Once inside, I began exploring the store using the glasses to assist me. I moved from aisle to aisle, asking for descriptions. When I stood in front of a freezer, I asked, “Look and tell me a detailed description of the contents in this freezer.” The glasses provided descriptions like “pre-made meals,” and when I asked for specifics, it detailed items such as “chicken pot pie” and “chicken tenders.”

I picked up a chicken pot pie and asked, “Look and tell me what I am holding,” which confirmed it was the right item. I followed up with, “Look and tell me the directions,” to verify the cooking instructions, and finally, “Look and tell me the price.” With all the information I needed, I headed to the checkout.

A Moment of Astonishment at the Checkout
As I approached the checkout, the cashier was visibly surprised. The glasses had guided me independently through the store, and it was clear that everyone around was amazed. The cashier couldn’t believe that a blind person could shop without assistance, using only smart glasses to navigate and identify items. The entire store seemed to be in shock and awe, with people watching as I smoothly completed my shopping without needing help from anyone.

After checking out, I wanted to share this experience with my friend Michael. I used the glasses to initiate a WhatsApp call, showing off my purchase. The process was a bit tricky at first, as the call initially connected to my phone instead of my glasses, making it hard to hear him. I manually switched the call back to the glasses and pressed the capture button twice to activate the glasses’ camera. Once I got it right, the video call worked smoothly, and I was able to share my shopping adventure with Michael.

Final Thoughts
Using Meta Ray-Ban glasses for shopping was an empowering experience, allowing me to navigate stores like CVS and Natural Grocers independently. Despite a few initial challenges, such as finding the correct Uber and setting up video calls, the glasses proved invaluable. They helped me verify car details, identify products, read instructions, and guide me to exits.

The reaction from the people at Natural Grocers was particularly rewarding—they were astonished to see how technology enabled me to shop confidently on my own. These smart glasses are more than just a cool gadget; they’re transforming everyday experiences, making them more accessible and enjoyable.

Ray-Ban Meta Smart Glasses for Blind Users: Complete Guide

Ray-Ban Meta Smart Glasses combine style with advanced technology, offering unique benefits for blind and visually impaired users. These glasses are equipped with AI-driven features that provide hands-free accessibility, making daily tasks easier. However, it’s important to understand their limitations and how they fit into your overall accessibility toolkit. In this guide, we’ll explore the styles, key features, available commands, and important considerations when using these glasses.

Where to Buy and Available Styles

Ray-Ban Meta Smart Glasses are available at the Meta Store, Ray-Ban’s website, and major retailers like Amazon and Best Buy. Prices start at $299, with additional costs for custom lenses such as prescription or polarized options. The glasses come in various styles, including the feminine Skylar, the classic Wayfarer, and the retro Headliner, each offering different color and lens configurations.

Key Features and Accessibility Commands

The glasses are equipped with a 12MP ultra-wide camera, open-ear speakers, and advanced AI. Here are some essential commands:

– “Hey Meta, look and tell me what you see”: Identifies objects or people in view.

– “Hey Meta, look and give me a detailed description”: Provides a detailed analysis of what the camera sees.

– “Hey Meta, look and tell me everything you see”: Offers a comprehensive overview of all visible elements.

– “Hey Meta, look and read this”: Reads text aloud, ideal for reading signs, menus, or documents.

– “Hey Meta, translate this”: Translates foreign text into your language.

Limitations and Features Under Development

While Ray-Ban Meta Smart Glasses offer many useful features, they currently do not support popular accessibility services like Be My Eyes or Aira. Meta is working on expanding the AI capabilities, but these features are not yet available. You may come across commands online that claim to offer extended functionality; however, results can vary. This is because Meta often rolls out new features gradually or tests them with select users, meaning not all commands will work consistently.

What Not to Use These Glasses For

These glasses are not designed to replace critical tools like canes or guide dogs. They are not suitable for recognizing medications, people, or performing tasks related to health and safety. The glasses’ AI is not intended for precise navigation, identifying health hazards, or making decisions about personal safety. They should be viewed as a supplementary aid rather than a primary accessibility solution.

Important Tips for Using Ray-Ban Meta Smart Glasses

1. Listen for the Beep: After a command response, a beep indicates that Meta is ready for your next command.

2. Experiment with Commands: The AI’s performance can vary, so it’s important to try different commands and learn which work best for your needs.

3. Be Aware of Limitations: Always use these glasses as an additional tool, not as a substitute for traditional mobility aids.

4. Avoid Using for Health and Safety Tasks: The AI is not equipped to handle critical safety-related identifications or medical advice.

Ray-Ban Meta Smart Glasses provide a stylish and accessible solution that enhances everyday experiences for blind and visually impaired users. With continuous updates and evolving features, these glasses are poised to become even more functional. However, it’s crucial to recognize their current limitations and use them in conjunction with traditional accessibility aids.

Comparing Meta AI, Be My AI, and Access AI

AI-powered accessibility tools like Meta AI, Be My AI, and Access AI from Aira are significantly enhancing how visually impaired users interact with the world. Each of these tools has distinct approaches, features, and benefits. Below, we compare these solutions in detail, including Aira’s new AI initiatives that are shaping the future of accessible technology.

Meta AI

Meta AI is a broad, general-purpose AI assistant integrated into Meta’s platforms, such as Facebook, Instagram, and WhatsApp. It leverages advanced language models like Llama to offer generative AI capabilities, including text, image recognition, and chat-based assistance. Meta AI’s strength lies in its powerful generative features and widespread integration, which makes it suitable for a wide range of everyday tasks beyond just accessibility.

However, Meta AI is not specifically tailored to the needs of visually impaired users. It focuses on general interaction improvements, and while it offers high-level image descriptions, it lacks the accessibility-specific refinements that specialized tools provide. Meta is currently expanding its AI reach but faces regulatory delays in Europe due to privacy and data use concerns. As part of its commitment to responsible AI development, Meta AI allows users to control data usage and offers transparency about its data handling practices.

Be My AI

Be My AI is a feature within the Be My Eyes app that uses AI, powered by OpenAI’s GPT-4 Vision model, to provide detailed descriptions of images. This tool complements the live assistance offered by sighted volunteers, allowing users to access quick and descriptive feedback on visual content. Be My AI’s strength is in its conversational style, where users can ask follow-up questions to gain deeper context about what is being seen.

The focus of Be My AI is on providing accurate and responsive descriptions specifically for visually impaired users. It excels in making AI interactions feel personal and relevant, offering a straightforward, user-friendly experience tailored to individual needs. However, unlike Aira’s Access AI, Be My AI does not offer human verification, which can be a critical feature for ensuring high trust in certain situations.

Access AI from Aira

Access AI is part of Aira’s broader vision of integrating AI into its existing visual interpreting services. It allows users, known as Explorers, to capture or upload images and receive instant AI-generated descriptions. What sets Access AI apart is the optional human verification through Aira Verify, where a professional visual interpreter can review and confirm the AI’s responses. This combination of AI and human input ensures that the service remains highly accurate, secure, and reliable.

Access AI also includes features like multi-photo upload, verbosity controls, and chat history, which enhance user interaction and personalization. Additionally, Aira’s commitment to privacy means that no Access AI sessions are shared with third parties, safeguarding user data. Aira’s new Build AI initiative further advances its AI capabilities by allowing users to contribute to AI development in a secure and controlled manner. This program, available primarily in the US, collects real-world data to improve future AI features, enhancing Aira’s service without compromising user privacy. Access AI is free for the time being.

Each of these AI tools offers unique benefits, catering to different needs and preferences. Whether you’re looking for a general-purpose AI assistant like Meta AI, a visually impaired-focused tool like Be My AI, or a hybrid solution with human verification like Access AI, there’s a tool that can help enhance accessibility in your daily life.

My Experience with Gemini vs. ChatGPT: Why Google’s AI Didn’t Meet My Expectations

When Google announced Gemini, I was excited about the prospect of a new AI that promised to elevate productivity and simplify daily tasks. With advanced capabilities like managing Google Calendar events and understanding complex queries, I believed it would be a valuable addition to my workflow. However, after trying Gemini, my experience didn’t match the high expectations set by Google’s marketing.

High Hopes from the Google Event

Google’s event showcased Gemini as a cutting-edge AI assistant designed to handle everyday tasks more intelligently. They highlighted its ability to manage calendar events with simple voice commands, interact naturally, and offer better contextual understanding. It felt like the AI we had been waiting for—a smart assistant that could handle scheduling, reminders, and more with ease.

I quickly set up the AI and started using it for tasks like scheduling appointments, checking my calendar, and managing daily reminders. Unfortunately, the reality didn’t live up to the presentation.

Disappointing Performance with Calendar Integration

One of the most anticipated features of Gemini was its ability to manage Google Calendar events. The promise was that you could simply say, “Add an event to my calendar,” and the AI would take care of the rest. However, my experience was far from seamless:

1. **Unreliable Command Execution**: The AI frequently misinterpreted commands or failed to respond altogether. This was especially frustrating when I was trying to quickly add or adjust calendar events. Instead of simplifying my scheduling, Gemini often made it more cumbersome, requiring multiple attempts to get a simple task done.

2. **Dependence on Google Assistant**: Gemini heavily relied on Google Assistant to perform calendar tasks, which often led to confusion and delays. This dependence made the experience feel disjointed, as it wasn’t always clear whether Gemini or Assistant was handling the request. The reliance on Assistant highlighted that Gemini was not a fully independent AI solution, detracting from the seamless integration promised by Google.

3. **Contextual Understanding Issues**: One of the biggest letdowns was Gemini’s lack of contextual awareness. It struggled to maintain the flow of a conversation, especially when handling follow-up questions or complex scheduling scenarios. This shortcoming made it difficult to rely on Gemini for anything beyond basic, straightforward commands.

Coding Challenges: Gemini Falls Short

As someone who frequently works with code, I was particularly interested in seeing how Gemini could assist in coding tasks. However, I found that Gemini struggled significantly when it came to providing accurate and useful coding assistance:

– **Limited Code Understanding**: Gemini often generated incorrect or incomplete code snippets, which required extensive corrections. It didn’t handle complex coding scenarios well and frequently misunderstood the context of what I was trying to achieve.

– **Poor Debugging Assistance**: One of my main frustrations was Gemini’s inability to effectively help debug code. Unlike ChatGPT, which can provide detailed explanations and suggestions for fixing errors, Gemini’s responses were often too vague or off-target to be helpful.

– **Inconsistent Code Formatting**: Even basic code formatting suggestions were inconsistent, making it difficult to rely on Gemini for any serious coding assistance. This was a significant drawback, especially when compared to the more polished and reliable performance of ChatGPT in handling code-related queries.

Why ChatGPT Outshines Gemini

After struggling with Gemini, I returned to ChatGPT, which consistently outperformed Google’s AI. ChatGPT’s strengths became clear in contrast:

– **Consistent Contextual Awareness**: ChatGPT handles complex queries with ease, maintaining context throughout conversations without needing constant corrections. This ability makes it far more reliable for tasks that go beyond simple commands.

– **Seamless Integration and Performance**: Unlike Gemini, ChatGPT works consistently across platforms and devices. Whether I’m using it on my desktop, phone, or through various apps, the performance is smooth and responsive, with no awkward transitions or delays.

– **Superior Command and Code Handling**: ChatGPT executes commands accurately and quickly, whether I’m asking it to manage a schedule, answer questions, or provide detailed responses. It’s especially strong in coding tasks, offering reliable code generation, debugging assistance, and well-formatted snippets that save time and effort.

My experience with Gemini was a reminder that high expectations don’t always align with real-world performance. While Google’s AI shows potential, especially with continued updates, it still has a long way to go before matching the reliability, contextual understanding, and ease of use offered by ChatGPT.

If you’re exploring alternatives or have suggestions, I’d love to hear from you! Leave a comment on this post or join the iAccessibility community, where we discuss various accessibility tools and strategies, including AI-driven solutions. Let’s collaborate and find the best ways to make these tools work for everyone!

For now, ChatGPT remains my go-to assistant, delivering consistent results and adapting seamlessly to my needs. As AI continues to evolve, I look forward to seeing how both Gemini and ChatGPT improve, but until then, ChatGPT is the tool that truly supports my day-to-day tasks with reliability and precision.

If you want to talk AI, please email me at taylor@techopolis.online

Exploring AI for the Blind: Tools and Technologies

Introduction to AI

Artificial Intelligence (AI) is a technology that mimics human intelligence to perform tasks like understanding language, recognizing objects, and making decisions. While AI might seem complex, it’s quickly becoming an integral part of everyday life, offering incredible tools for everyone, including those who are blind or visually impaired. AI can help you navigate the world, access information, and communicate more effectively.

How AI Can Help You

For blind users, AI can be a game-changer, making many daily tasks easier and more accessible. Here are some of the most impactful AI tools you can start using today:

ChatGPT: Your Conversational Assistant

What It Does: ChatGPT is an AI you can talk to like a friend or assistant. You can ask it questions, get help writing emails, or even have it explain complex topics in simple terms.

How to Use It: To get started, visit chat.openai.com in your web browser. You can type or speak your questions, and ChatGPT will respond with helpful information. Whether you need to draft a message, learn something new, or just chat, ChatGPT is there to assist you.

Free Plan and Pricing:

Free Plan: ChatGPT offers a free tier that gives you access to the basic features of the AI. This plan is great for getting started and exploring what the AI can do without any financial commitment.

ChatGPT Plus: For those who want more advanced features, faster response times, and priority access during high-demand periods, OpenAI offers a paid plan called ChatGPT Plus. This plan costs $20 per month and provides enhanced capabilities, making it ideal for users who rely on ChatGPT for frequent or complex tasks.

Getting Started:

You don’t need an account to start using the free version of ChatGPT, but signing up can provide a more personalized experience. If you’re new to AI, I highly recommend exploring the free plan first to see how it can fit into your daily routine. As you become more comfortable, you can decide if upgrading to ChatGPT Plus is right for you.

Seeing AI: Your Eyes for the Visual World

What It Does: Developed by Microsoft, Seeing AI is a free app that narrates the world around you. It can read text, describe people and objects, and even identify colors. For example, you can use it to read a letter, check the color of your clothes, or even recognize the faces of people in photos.

How to Use It: Download the Seeing AI app from the App Store. Once installed, you can use your phone’s camera to point at text or objects, and the app will describe what it sees. Seeing AI is particularly useful for handling tasks like reading grocery labels, identifying ingredients, and even recognizing currency.

Be My AI: A Companion for Visual Assistance

What It Does: Be My AI is a feature within the Be My Eyes app, which connects blind users with sighted volunteers via video call. With its integration of GPT-4, Be My AI can now provide instant descriptions of photos, identify objects, and help with tasks like setting up a new device or reading product labels—all through AI, without needing a human volunteer.

How to Use It: Start by downloading the Be My Eyes app. Within the app, you can use the Be My AI feature to ask the AI to describe what’s in front of your camera. This is incredibly useful for quick, real-time assistance with visual tasks.

Trust But Verify: A Note on AI Limitations

While AI can significantly enhance your daily life, it’s essential to remember that it isn’t perfect. AI tools like ChatGPT, Seeing AI, and Be My AI are incredibly advanced, but they can sometimes make mistakes or misinterpret information. Therefore, it’s always a good practice to trust but verify. Double-check important details when possible, and use AI as a helpful assistant rather than a sole source of truth.

Everyday Uses of AI

AI isn’t just for special occasions—it can be part of your daily routine:

• Morning Routine: Check the weather, get news updates, or listen to your schedule for the day using AI tools.

• Shopping: Use AI to read labels, compare products, or find the best deals.

• Traveling: Get AI-assisted navigation to ensure you reach your destination safely and efficiently.

• Entertainment: Discover new books, movies, or music recommendations based on your preferences.

Resources

• ChatGPT: https://chat.openai.com

• Seeing AI: https://apps.apple.com/us/app/seeing-ai/id999062298

• Be My AI: https://www.bemyeyes.com

Marvel Unlimited

App Name

Marvel Unlimited

App Version

1.76.0

Platform

iOS/iPadOS

Category

Books

Description

Marvel Unlimited is the premier subscription service to access over 30,000 digital comics.

Marvel Unlimited features all of your favorite characters from Marvel movies, TV shows, and video games. Read the comic books that inspired your favorite super heroes and villains on the big screen! Start your free 7-day trial today!

Experience the all-new digital comic format, Marvel’s Infinity Comics available exclusively on Marvel Unlimited. Featuring in-universe stories from top creators told in visionary vertical format, designed for your device.

Read comics and stories about Spider-Man, Iron Man, Captain America, Captain Marvel, The Avengers, Thor, Hulk, the X-Men, the Guardians of the Galaxy, Star Wars, Doctor Strange, Deadpool, Thanos, Mysterio, Ant-Man, The Wasp, Black Panther, Wolverine, Hawkeye, Wanda Maximoff, Jessica Jones, the Defenders, Luke Cage, Venom, and many more!

Wondering where to start? Check out endless reading guides curated by Marvel experts to guide you through the last 80 years of the Marvel Universe. Read about comic events that inspired the movies such as the Spider-Verse, Civil War, Thanos and the Infinity Gauntlet, and even Star Wars!

Unlimited downloads allows you to read as many comics as you want offline and on-the-go! Follow your favorite characters, creators and series and get notified when new issues come out! Marvel Unlimited is available on mobile phones, tablets, and anywhere you can access the web.

Key Features:
• Access over 30,000 Marvel comics at your fingertips
• Infinity Comics, in-universe stories from top creators designed for your device
• Endless reading guides
• Unlimited downloads to read anywhere
• Personalized comic book recommendations
• Sync progress across devices
• New comics and old classics added every week
• No commitments. Cancel online at any time.

Choose from three different Marvel Unlimited comic subscription plans as follows:
• Monthly – Our most popular plan!
• Annual – Great savings!
• Annual Plus – Get a new, exclusive merchandise kit each year you’re a member! (US Only)

Links:
Terms of Use: https://disneytermsofuse.com
Privacy Policy: https://disneyprivacycenter.com/
Subscriber Agreement: https://www.marvel.com/corporate/marvel_unlimited_terms
California Privacy Rights: https://privacy.thewaltdisneycompany.com/en/current-privacy-policy/your-california-privacy-rights/
Do Not Sell My Info: https://privacy.thewaltdisneycompany.com/en/dnsmi

Subscription via iTunes:

Download the app and sign up for your free one-week trial to start reading. If you are enjoying your free trial, do nothing and your membership will automatically renew each month. You can cancel at any time. Your subscription automatically renews unless auto-renew is turned off at least 24 hours before the end of the then-current subscription period. Your payment method associated with your iTunes account will automatically be charged at the same price for renewal, as stated above, within 24 hours prior to the end of the then-current billing period. You can manage your subscription and/or turn off auto-renewal by visiting your iTunes Account Settings after purchase. Any unused portion of a free trial period will be forfeited when the user purchases a subscription to that publication.

Before you download this app, please consider that it may include or support advertising, some of which the Walt Disney Family of Companies may target to your interests. You may choose to control targeted advertising within mobile applications by using your mobile device settings (for example, by re-setting your device’s advertising identifier and/or opting out of interest-based ads).

Free or Paid

Free

Devices you’ve tested on

iPhone and iPad

Accessibility Rating

2 – Needs Work

Accessibility Comments

All buttons are labeled for VoiceOver users, but the comic pages themselves are not read or described.

Screen Reader Performance

The app is easy to navigate with VoiceOver.

Button Labeling

All buttons are labeled for use with VoiceOver.

Usability

This app is very usable with VoiceOver, with the comics themselves being the only exception.

Other Comments

This app is extremely usable for low vision users who take advantage of VoiceOver. Perhaps AI could be used to read each comic book.

App Store Links

https://apps.apple.com/us/app/marvel-unlimited/id607205403

Developer Website

https://www.marvel.com/unlimited

Current City

App Name

Current City

App Version

2.0.1

Platform

iOS/iPadOS

Category

Travel

Description

Current City is an app developed by Techopolis Online Solutions, LLC to log your travels. It helps users determine their current city and state, which is particularly useful for travelers who need quick, precise information about their location. You can view your photos and make notes about your travels. Current City can also log the cities you’ve visited, provide information about each city, and show any photos taken there.

Free or Paid

Paid

Price

$0.99

Devices you’ve tested on

iPhone and iPad

Accessibility Comments

The app is fully accessible.

Screen Reader Performance

The app is fully accessible with VoiceOver.

Button Labeling

All buttons are labeled.

Usability

The app is completely usable with a screen reader.

Other Comments

App Store Links

https://apps.apple.com/us/app/current-city/id1097557845

Developer Website

Home
