
My Experience with Gemini vs. ChatGPT: Why Google’s AI Didn’t Meet My Expectations

When Google announced Gemini, I was excited about the prospect of a new AI that promised to elevate productivity and simplify daily tasks. With advanced capabilities like managing Google Calendar events and understanding complex queries, I believed it would be a valuable addition to my workflow. However, after trying Gemini, my experience didn’t match the high expectations set by Google’s marketing.

High Hopes from the Google Event

Google’s event showcased Gemini as a cutting-edge AI assistant designed to handle everyday tasks more intelligently. They highlighted its ability to manage calendar events with simple voice commands, interact naturally, and offer better contextual understanding. It felt like the AI we had been waiting for—a smart assistant that could handle scheduling, reminders, and more with ease.

I quickly set up the AI and started using it for tasks like scheduling appointments, checking my calendar, and managing daily reminders. Unfortunately, the reality didn’t live up to the presentation.

Disappointing Performance with Calendar Integration

One of the most anticipated features of Gemini was its ability to manage Google Calendar events. The promise was that you could simply say, “Add an event to my calendar,” and the AI would take care of the rest. However, my experience was far from seamless:

1. **Unreliable Command Execution**: The AI frequently misinterpreted commands or failed to respond altogether. This was especially frustrating when I was trying to quickly add or adjust calendar events. Instead of simplifying my scheduling, Gemini often made it more cumbersome, requiring multiple attempts to get a simple task done.

2. **Dependence on Google Assistant**: Gemini heavily relied on Google Assistant to perform calendar tasks, which often led to confusion and delays. This dependence made the experience feel disjointed, as it wasn’t always clear whether Gemini or Assistant was handling the request. The reliance on Assistant highlighted that Gemini was not a fully independent AI solution, detracting from the seamless integration promised by Google.

3. **Contextual Understanding Issues**: One of the biggest letdowns was Gemini’s lack of contextual awareness. It struggled to maintain the flow of a conversation, especially when handling follow-up questions or complex scheduling scenarios. This shortcoming made it difficult to rely on Gemini for anything beyond basic, straightforward commands.

Coding Challenges: Gemini Falls Short

As someone who frequently works with code, I was particularly interested in seeing how Gemini could assist in coding tasks. However, I found that Gemini struggled significantly when it came to providing accurate and useful coding assistance:

– **Limited Code Understanding**: Gemini often generated incorrect or incomplete code snippets, which required extensive corrections. It didn’t handle complex coding scenarios well and frequently misunderstood the context of what I was trying to achieve.

– **Poor Debugging Assistance**: One of my main frustrations was Gemini’s inability to effectively help debug code. Unlike ChatGPT, which can provide detailed explanations and suggestions for fixing errors, Gemini’s responses were often too vague or off-target to be helpful.

– **Inconsistent Code Formatting**: Even basic code formatting suggestions were inconsistent, making it difficult to rely on Gemini for any serious coding assistance. This was a significant drawback, especially when compared to the more polished and reliable performance of ChatGPT in handling code-related queries.

Why ChatGPT Outshines Gemini

After struggling with Gemini, I returned to ChatGPT, which consistently outperformed Google’s AI. ChatGPT’s strengths became clear in contrast:

– **Consistent Contextual Awareness**: ChatGPT handles complex queries with ease, maintaining context throughout conversations without needing constant corrections. This ability makes it far more reliable for tasks that go beyond simple commands.

– **Seamless Integration and Performance**: Unlike Gemini, ChatGPT works consistently across platforms and devices. Whether I’m using it on my desktop, phone, or through various apps, the performance is smooth and responsive, with no awkward transitions or delays.

– **Superior Command and Code Handling**: ChatGPT executes commands accurately and quickly, whether I’m asking it to manage a schedule, answer questions, or provide detailed responses. It’s especially strong in coding tasks, offering reliable code generation, debugging assistance, and well-formatted snippets that save time and effort.

My experience with Gemini was a reminder that high expectations don’t always align with real-world performance. While Google’s AI shows potential, especially with continued updates, it still has a long way to go before matching the reliability, contextual understanding, and ease of use offered by ChatGPT.

If you’re exploring alternatives or have suggestions, I’d love to hear from you! Leave a comment on this post or join the iAccessibility community, where we discuss various accessibility tools and strategies, including AI-driven solutions. Let’s collaborate and find the best ways to make these tools work for everyone!

For now, ChatGPT remains my go-to assistant, delivering consistent results and adapting seamlessly to my needs. As AI continues to evolve, I look forward to seeing how both Gemini and ChatGPT improve, but until then, ChatGPT is the tool that truly supports my day-to-day tasks with reliability and precision.

If you want to talk AI, please email me at taylor@techopolis.online

Notification Summaries in iOS and iPadOS 18.1 with Apple Intelligence

Apple is adding artificial intelligence in iOS 18.1 for iPhone 15 Pro models and later, and for other devices with M-series processors. One of the features coming is the ability to have notifications summarized on the Lock Screen through Apple Intelligence.

How Does This Work?

Sometimes an app will show several notifications on your Lock Screen, and you have to expand the group of notifications to see what everything is about. Clicking through all of your notifications, or scrolling through them if you have low vision, takes time and effort. Notification summaries show a summary of your notifications at the top of the stack, with a number next to the app icon indicating how many items are in that stack.

Example

Here’s an example of how this helps. I have a Discord bot that posts to a text channel whenever someone joins my Discord server, which lets us know when someone is online so we can all join the conversation. Apple Intelligence saw several of these notifications and summarized them as something like, “Multiple people have joined General, including @Person1 and @person2.” This makes it much easier to understand these notifications without moving through each item in the notification stack.

Conclusion

I think Notification Summaries are going to be extremely helpful to iOS users, and I look forward to seeing what they do in the full release. We are starting to see some major improvements to iOS through AI, and it will only get better from here.

Do you use Apple Intelligence, or have you tried this feature? Let us know in the comments, or anywhere in the community.

The Beginning of a New Era: AppleVis Returns September 9th

AppleVis announced that it would be closing down on July 27th, 2024. The AppleVis team stated that the site would stay in read-only mode until August 31, 2024, and would then shut down.

On August 28th, 2024, AppleVis announced that it would be acquired by Be My Eyes and that the site would re-open on September 9th. This means that AppleVis and all of its wonderful resources will continue to exist past the planned shutdown date.

It is unclear at this point how the website will change after it comes back, but I think it is very exciting that this resource will continue to exist for everyone to take advantage of. A lot of content has been posted to AppleVis over the years, and I think it is important that it remains online.

You can learn more about this by reading the press release from Be My Eyes.

Be My Eyes Acquires AppleVis to Secure Its Future and to Invest For Growth

Exploring AI for the Blind: Tools and Technologies

Introduction to AI

Artificial Intelligence (AI) is a technology that mimics human intelligence to perform tasks like understanding language, recognizing objects, and making decisions. While AI might seem complex, it’s quickly becoming an integral part of everyday life, offering incredible tools for everyone, including those who are blind or visually impaired. AI can help you navigate the world, access information, and communicate more effectively.

How AI Can Help You

For blind users, AI can be a game-changer, making many daily tasks easier and more accessible. Here are some of the most impactful AI tools you can start using today:

ChatGPT: Your Conversational Assistant

What It Does: ChatGPT is an AI you can talk to like a friend or assistant. You can ask it questions, get help writing emails, or even have it explain complex topics in simple terms.

How to Use It: To get started, visit chat.openai.com in your web browser. You can type or speak your questions, and ChatGPT will respond with helpful information. Whether you need to draft a message, learn something new, or just chat, ChatGPT is there to assist you.

Free Plan and Pricing:

Free Plan: ChatGPT offers a free tier that gives you access to the basic features of the AI. This plan is great for getting started and exploring what the AI can do without any financial commitment.

ChatGPT Plus: For those who want more advanced features, faster response times, and priority access during high-demand periods, OpenAI offers a paid plan called ChatGPT Plus. This plan costs $20 per month and provides enhanced capabilities, making it ideal for users who rely on ChatGPT for frequent or complex tasks.

Getting Started:

You don’t need an account to start using the free version of ChatGPT, but signing up can provide a more personalized experience. If you’re new to AI, I highly recommend exploring the free plan first to see how it can fit into your daily routine. As you become more comfortable, you can decide if upgrading to ChatGPT Plus is right for you.

Seeing AI: Your Eyes for the Visual World

What It Does: Developed by Microsoft, Seeing AI is a free app that narrates the world around you. It can read text, describe people and objects, and even identify colors. For example, you can use it to read a letter, check the color of your clothes, or even recognize the faces of people in photos.

How to Use It: Download the Seeing AI app from the App Store. Once installed, you can use your phone’s camera to point at text or objects, and the app will describe what it sees. Seeing AI is particularly useful for handling tasks like reading grocery labels, identifying ingredients, and even recognizing currency.

Be My AI: A Companion for Visual Assistance

What It Does: Be My AI is a feature within the Be My Eyes app, which connects blind users with sighted volunteers via video call. With its integration of GPT-4, Be My AI can now provide instant descriptions of photos, identify objects, and help with tasks like setting up a new device or reading product labels—all through AI, without needing a human volunteer.

How to Use It: Start by downloading the Be My Eyes app. Within the app, you can use the Be My AI feature to ask the AI to describe what’s in front of your camera. This is incredibly useful for quick, real-time assistance with visual tasks.

Trust But Verify: A Note on AI Limitations

While AI can significantly enhance your daily life, it’s essential to remember that it isn’t perfect. AI tools like ChatGPT, Seeing AI, and Be My AI are incredibly advanced, but they can sometimes make mistakes or misinterpret information. Therefore, it’s always a good practice to trust but verify. Double-check important details when possible, and use AI as a helpful assistant rather than a sole source of truth.

Everyday Uses of AI

AI isn’t just for special occasions—it can be part of your daily routine:

• Morning Routine: Check the weather, get news updates, or listen to your schedule for the day using AI tools.

• Shopping: Use AI to read labels, compare products, or find the best deals.

• Traveling: Get AI-assisted navigation to ensure you reach your destination safely and efficiently.

• Entertainment: Discover new books, movies, or music recommendations based on your preferences.

Resources

• ChatGPT: https://chat.openai.com

• Seeing AI: https://apps.apple.com/us/app/seeing-ai/id999062298

• Be My AI: https://www.bemyeyes.com

Tip: Type The Apple Logo on Apple Devices

Did you know you can type the  (Apple) logo on any Apple device with a keyboard? Here’s how to do it.

  1. Enter a text field on any Apple device that has a keyboard.
  2. Use the keystroke Option + Shift + K and you will see the  logo.

NOTE: This may not work on other platforms, including Windows or Android.
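
For the technically curious, the  symbol isn’t magic: it is the Unicode private-use code point U+F8FF, which Apple’s fonts render as the Apple logo (and which other platforms may show as a placeholder box). Here is a minimal Swift sketch producing the same character in code; the variable name is just for illustration:

```swift
// The Apple logo is the private-use Unicode code point U+F8FF.
// Apple's fonts render it as the logo; other platforms may show
// a placeholder glyph instead, which is why this tip is Apple-only.
let appleLogo = "\u{F8FF}"
print("Typed with Option + Shift + K: \(appleLogo)")
```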

Spotlight: Perspective AI Vision – Early Beta

A pair of white glasses on a blue gradient background.

The iAccessibility Report has showcased many apps over the years, and we want to continue that trend. Taylor Arndt has been working on an app that uses on-device AI models to recognize text and objects, and she is planning to add face detection as well.

The interesting thing about this app is that everything runs on device, and no data is stored or sent to the cloud. This does mean the app is limited to what the built-in models can recognize, but it returns responses much faster.
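
As a rough illustration of what on-device recognition can look like, here is a minimal Swift sketch using Apple’s Vision framework, which performs text recognition entirely on device. This is only an assumption about one way such a feature could be built, not Perspective AI Vision’s actual code, and the function name is hypothetical:

```swift
import UIKit
import Vision

// A minimal sketch of on-device text recognition with Apple's Vision
// framework. This is an illustrative assumption, not the app's real code.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }
    DispatchQueue.global(qos: .userInitiated).async {
        // VNRecognizeTextRequest runs entirely on device; nothing is uploaded.
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request])
            let observations = request.results ?? []
            // Keep the best candidate string for each detected text region.
            completion(observations.compactMap { $0.topCandidates(1).first?.string })
        } catch {
            completion([]) // Surface failures as an empty result.
        }
    }
}
```

A sibling request type, VNClassifyImageRequest, can classify objects in a scene, which matches the kind of object recognition the app describes.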

Taylor has created a public beta, and we wanted to share it with everyone so she can get feedback on the app. We encourage other developers to submit a report with their betas too, so that the iAccessibility community can provide feedback on them as well.

App Name

Perspective AI Vision

TestFlight Link

https://testflight.apple.com/join/GqLbLyS5

Feedback Email

taylor@techopolis.online

Roadmap

There have been a lot of changes here at iAccessibility this month, and there are many more to come. In this post, I want to discuss where we’ve been and where we’re going as iAccessibility, so that we are all on the same page.

The Past of iAccessibility

iAccessibility started in 2010 as an app review blog called the iAccessibility Report. I, Michael Doise, reviewed apps on this blog to determine what worked with VoiceOver. The blog then expanded to other technology content, and we started a podcast in 2015 called the iACast. In 2016, we started our WhatsApp community and TeamTalk server, which later became a Discord community server. iAccessibility has also had the chance to work with some amazing podcasters like Unmute Presents and many others.

Where We Are Going

iAccessibility has been a part of my life for the past 14 years, and the goal has always been to build a thriving community of technology users who help each other learn and use technology. The items below describe where we are and where iAccessibility is going.

  • iAccessibility has built an app directory so that blind and low vision technology users can find out whether an app is accessible, or at least usable. The directory covers Apple platforms, but also includes Windows and Android. It will have a layout similar to what you may expect, plus features like AI-based search and more.
  • The iACast will continue, and we will be improving it with more conversations and interactive experiences. These may include town halls and interactive calls on Zoom and Clubhouse.
  • iAccessibility has created a new forums system using Discourse. We plan to work with Discourse to increase the accessibility of these forums, so that other forum communities can benefit from these accessibility changes.
  • Bug tracking is very important, and we plan to start tracking bugs on Apple platforms. We are considering creating a bug tracker with Bugzilla for users to submit bugs through, and these will be reflected in blog posts on the iAccessibility website.
  • We will also be opening up opportunities for users to contribute. Right now, users can contribute to the App Directory, but we will soon allow users to submit posts to the Report blog or to submit podcast content. All posts will need to be approved, but this allows the site to be run and driven by the community.
  • iAccessibility can’t be run by one person, and one person should not run a site this big. Therefore, the team has decided to create a nonprofit organization to run and manage iAccessibility. The board has been named, and documentation will be filed to form the nonprofit in the coming weeks. This will allow the team and community to make decisions that drive the forward progress of the website, forums, and other services provided by iAccessibility.

Conclusion

iAccessibility has always been a place to help people connect with each other and learn to use technology. It is my hope that this roadmap helps everyone know where we’ve been and where we are going in the future. We have been around for 14 years, and I think the next 14 will be even more amazing with all of you building what iAccessibility is going to be. Thank you for reading, and please feel free to reach out with any questions you have, either on Mastodon at https://iaccessibility.social/@iaccessibility or on X as @iaccessibility1. We are still working on email, but you can reach mikedoise@icloud.com until everything is set up. Thank you again, and I look forward to speaking with everyone in the community.

Logging In To The Discord Desktop App Using A Mobile Device

Discord app icon

Discord is a very popular app among gamers, who use it to communicate during gameplay. Many other communities use Discord as well, and it has become a popular mainstream alternative to TeamTalk. One obstacle, though, is that the Discord desktop app is difficult to navigate and difficult to log in to. The login doesn’t need to be difficult, however, if you use your mobile device for the process. Here’s how this works.

Bypassing the Desktop Login Process

The Discord app has always let users log in with their username and password, but the desktop app may also require the user to solve an hCaptcha, which shows images on the screen to be solved. There is an accessibility cookie that can be installed in the browser, but setting that up can also be difficult.

Another solution that exists though is that the Discord Desktop app also shows a QR code on screen. This QR code is the key to a fast and easy login process.

Using the QR code in the Desktop App

Use the following process to log in to the Discord desktop app using the mobile app.

  1. Log in to the Discord mobile app.
  2. Open the QR code scanner, either from the Camera app or Control Center on your mobile device, and point the camera at your computer’s monitor.
  3. Select the option that appears to open the Discord mobile app.
  4. Approve the login request on your phone.

You should now be logged in to your Discord account on your PC or Mac computer.

Conclusion

Logging in to Discord on the computer can be fairly complicated, and I have found it difficult even as a low vision user. Using the QR code really streamlines the process and takes a lot of the struggle out of logging in.

The End of an Era: Reflecting on the Closure of AppleVis

In the ever-evolving landscape of technology and accessibility, few resources have stood as tall and steadfast as AppleVis. Since its inception, AppleVis has been a cornerstone for the visually impaired community, offering invaluable resources, a vibrant community, and a platform for advocacy and education. It is with a heavy heart that we acknowledge the recent announcement of its closure.

A Legacy of Empowerment

AppleVis quickly became a beacon of hope and empowerment for users of Apple products who are blind or visually impaired. Through detailed app reviews, accessibility guides, and community forums, AppleVis provided a wealth of information that was often hard to find elsewhere. The site’s commitment to inclusivity and accessibility helped countless individuals navigate the world of technology with confidence and independence.

Community and Collaboration

One of the most remarkable aspects of AppleVis was its thriving community. Users from all over the world came together to share their experiences, offer support, and collaborate on solutions to common challenges. This sense of community was not just about sharing information; it was about fostering a sense of belonging and mutual support.

The forums were filled with discussions ranging from troubleshooting technical issues to celebrating the latest advancements in accessibility. The collective knowledge and camaraderie found on AppleVis were unparalleled, and many users found lifelong friends through their interactions on the site.

A Source of Advocacy and Change

AppleVis was not just a passive resource; it was a powerful advocate for change. By highlighting accessibility issues and providing direct feedback to developers, AppleVis played a crucial role in pushing for improvements in software and app accessibility. The site’s reviews and recommendations often served as a catalyst for developers to prioritize accessibility in their products.

Through their efforts, AppleVis helped shape a more inclusive digital landscape. The site’s influence extended beyond the visually impaired community, impacting the broader tech industry and raising awareness about the importance of accessibility for all users.

The Announcement and Its Opportunities

The announcement of AppleVis’s closure marks the end of an era, but it also opens the door to new opportunities. The void left by AppleVis creates a unique space for innovation and fresh perspectives in the field of accessibility. Now, more than ever, there is a chance for new platforms and resources to emerge, building on the legacy of AppleVis while introducing innovative solutions to the challenges faced by the blind and visually impaired community.

Looking Ahead

As we reflect on the closure of AppleVis, it’s essential to focus on the future and the opportunities it brings. At iAccessibility, we are committed to continuing the work that AppleVis started and to pushing the boundaries of what is possible in accessibility. We will strive to provide valuable resources, foster a sense of community, and advocate for greater accessibility in technology. The legacy of AppleVis will inspire us to innovate, ensuring that the visually impaired community has the tools and support they need to thrive in an increasingly digital world.

In closing, we extend our heartfelt thanks to AppleVis for all that it has given to the community. Your contributions have made a lasting impact, and your legacy will not be forgotten. We look forward to building on that legacy and exploring new frontiers in accessibility.

Voice-Activated Showdown: Google Assistant Actions vs Amazon Alexa Skills!

In the fast-paced world of smart home automation and voice-activated control, two giants, Google Assistant and Amazon Alexa, are vying for our attention. They entice us with their unique offerings—Google Assistant Actions and Amazon Alexa Skills. They’re not just platforms; they’re the pathway to a future where our voices replace the remote, and conversations with gadgets become the norm. As we dive deeper into the comparison, we’re not just looking at features; we’re peeking into the future, one command at a time.

Google Assistant Actions

  1. Ease of Use
    Google Assistant is like that friend who just gets you. Its intuitive interface and natural language processing are akin to having a casual chat, making interactions feel seamless and conversational. Plus, the straightforward setup and user-friendly nature make it a breeze to integrate into your daily routine.
  2. Accessibility
    Google Assistant doesn’t just hear; it listens. Features like Voice Match and the ability to understand multiple languages and accents showcase its commitment to inclusivity. The straightforward setup process is the cherry on top, making it a front-runner in the accessibility race.
  3. Integration
    If you’re a Google aficionado, the deep integration with Android devices and Google services will feel like home. It’s not just an assistant; it’s an extension of the Google ecosystem, making your smart home feel like part of a bigger, smarter family.
  4. Availability of New Actions
    Though the number of Actions might not hit the high notes like Alexa, the steady growth tells a story of a platform evolving. With over 4,000 Actions in the U.S., it’s like a library steadily stocking up on new titles, each one expanding the horizon of what’s possible.
  5. Community and Developer Support
    The developer community around Google Assistant is buzzing, with forums and support channels that are bustling marketplaces of ideas and solutions. It’s a nurturing ground for innovation, and every query finds an echo.
  6. Future Prospects
    Google Assistant Actions are on a rising tide, with the waves of AI and machine learning propelling it forward. The competitive landscape is a playground, and Google Assistant is gearing up for a game of epic proportions.

Amazon Alexa Skills

  1. Ease of Use
    Amazon Alexa, with its treasure trove of Skills, is like a Swiss Army knife of voice-activated functionalities. It asks for a bit of structure in conversation but rewards you with a vast realm of possibilities, making the learning curve worth every moment.
  2. Accessibility
    Alexa extends its hand to everyone with a suite of accessibility features. The setup is intuitive, and though mastering the myriad of Skills might require a beat, the rhythm of voice-activated control soon becomes second nature.
  3. Integration
    The realm of Amazon Alexa Skills is a playground for third-party integrations. It’s an open house for external developers and services, making the smart home experience a rich tapestry of functionalities.
  4. Availability of New Skills
    Alexa is like a seasoned librarian with a vast collection of Skills. With roughly 60,000 in the U.S. and 80,000 overall, it’s a testament to a mature developer community that’s been busy weaving a wide web of voice-activated functionalities.
  5. Community and Developer Support
    The bustling bazaar of Alexa Skills is a testament to a vibrant and engaged developer community. It’s a well-trodden path with well-documented guides, making the journey of development less of a trek and more of an exploration.
  6. Future Prospects
    With the dawn of generative AI technologies, the horizon is aglow with potential for Amazon Alexa Skills. The evolving competitive landscape is a narrative of endless possibilities, and Alexa is scripting its chapters with a steady hand.

As we saunter through the realms of Google Assistant Actions and Amazon Alexa Skills, we’re not just comparing platforms; we’re exploring narratives of innovation, user-centricity, and the relentless pursuit of making our smart homes a tad smarter. The choice between these two isn’t black and white; it’s a palette of preferences, existing device ecosystems, and envisioned smart home narratives. It’s a glimpse into a future where our voices are the keys to boundless possibilities.
