Testing Gemini 2.0's VISION MODE: Better than ChatGPT?
Yesterday Google introduced Gemini 2.0, the major update behind Project Astra, with a vision mode that interprets visual information in seconds. In this video I put it to the test in Google AI Studio: I summarize web pages, describe books on the fly, and explore its potential for creating content in real time. I also consider the impact this innovation will have once OpenAI brings vision to ChatGPT's Advanced Voice Mode. The competition between Google and OpenAI is more exciting than ever, and it could change the way we interact with AI.
🎬 Video highlights:
0:00 Introduction and overview
1:31 Deep Research demo: advanced research
2:42 Introduction to Stream Realtime: the multimodal assistant
4:00 Testing real-time interaction with Gemini
7:26 Creating a post with Gemini's vision mode
12:35 Testing on iPhone: real-time visual analysis
15:33 Comparison with OpenAI and future expectations
18:20 Wrap-up: the future of AI assistants
✨ Video tags - #gemini #chatgpt #google
------
🌐 Links you may find useful:
VIP community with exclusive content - https://lamanzanamordida.com/m
Apple news, tutorials and more - https://lamanzanamordida.net/t
La Manzana Mordida Telegram channel - https://t.me/LaMMordidaa
------
📩 Social networks and contact:
X - https://www.twitter.com/lammordida
Instagram - https://www.instagram.com/lammordida
Facebook - https://www.facebook.com/LaMMordida
TikTok - https://www.tiktok.com/@lammordida
Email - fernandodelmoralgarcia@gmail.com
------