Power Up Your Workflow
Learn how new AI releases like ChatGPT group chats, Gemini 3 and the Android Auto Gemini upgrade boost your workflow, skills, safety and daily productivity.

Welcome to Tech Momentum!
You want to understand what truly matters in AI — and these four new releases from OpenAI and Google deliver real-world impact you can use. They upgrade how you work, build, decide and travel. No hype, just practical breakthroughs that move your digital life forward. This edition shows you exactly why these shifts matter for you.
Let’s break it all down!
Updates and Insights for Today
Unlock Group Chats in ChatGPT — Collaborate Instantly
Unlock AI Growth for Your Small Business
You Can Build with Gemini 3 Now
You Drive — Gemini Takes the Wheel
The latest in AI tech
AI Tutorials: New NotebookLM Updates are INSANE (FREE!)
AI News
Unlock Group Chats in ChatGPT — Collaborate Instantly
Quick Summary
You can now invite others — friends, family or coworkers — into the same ChatGPT conversation and work together. OpenAI says your private memory stays separate and you control participation.
Key Insights
Group chats allow up to 20 participants via a shareable link and include ChatGPT as a member.
ChatGPT adapts to group dynamics — it may respond when needed, stay quiet when not, and can use emojis and group photos.
Your personal ChatGPT memory isn’t used in group chats; they’re treated as separate from your private conversations.
Roll-out is starting for Free, Go, Plus and Pro users in certain regions; a broader global launch is underway.
Why It’s Relevant
You get to turn ChatGPT into a true collaborative tool, no longer limited to one-on-one conversations. This means you can plan trips, audit projects, brainstorm ideas, or make decisions together with your team or group in real time. Because memory and privacy controls are built in, you retain ownership and visibility of your data. For anyone relying on ChatGPT for both social and professional collaboration, this update makes it much more versatile and group-friendly.
📌 Read More: OpenAI
Unlock AI Growth for Your Small Business
Quick Summary
OpenAI, together with DoorDash and SCORE, hosted the “Small Business AI Jam” on November 20: a one-day, hands-on workshop for small business leaders to build real AI workflows using ChatGPT.
Key Insights
The event spanned five U.S. hubs (San Francisco, New York, Houston, Detroit, Miami), with up to 1,000 small business owners participating.
Participants learned to create AI-powered tools for daily business tasks like marketing copy, scheduling, customer messages and bookkeeping.
No coding background required; the event targeted “Main Street” businesses with 1–100 employees, including restaurants, shops and service providers.
Free registration, with follow-up online sessions and office hours available for attendees to keep building after the main event.
Why It’s Relevant
For you as a small business owner, this initiative means you no longer need to wait to adopt AI — you can start today with guided support from OpenAI and partners. The framework is built for business-people, not just developers, so it lowers the barrier to entry and brings immediate value. By participating, you gain skills and a usable workflow you can implement right away to save time, reach more customers or streamline operations. This move reflects a broader shift: AI is becoming accessible for small businesses, not just large enterprises.
📌 Read More: OpenAI
You Can Build with Gemini 3 Now
Quick Summary
Gemini 3 from Google is now live, offering its most advanced AI model yet — including expanded reasoning, multimodal capabilities, and agentic workflows.
It is available across Google’s ecosystem for developers and enterprises, bringing a 1-million-token context window and a major leap in coding, visual and spatial understanding.
Key Insights
Gemini 3 Pro outperforms previous versions in benchmarks, including a high score in coding tasks such as Terminal-Bench 2.0.
The model supports advanced multimodal reasoning across text, images, video, audio and spatial context, enabling new workflows in robotics, XR and autonomous systems.
Developers gain new capabilities like “vibe coding” (build apps from natural language prompts) and agent tools via Google Antigravity.
Enterprises can integrate Gemini 3 through Vertex AI, Google AI Studio and other platforms for tasks including legal contract analysis, supply-chain planning, document and video understanding.
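For developers, the Google AI Studio integration mentioned above is reachable from Python. Here is a minimal sketch, with the caveat that the `google-genai` SDK usage reflects Google's current Python client and the model id "gemini-3-pro-preview" is an assumption, so check Google's model list for the exact name:

```python
import os

# Hypothetical model id for illustration; confirm the exact id in Google AI Studio.
MODEL_ID = "gemini-3-pro-preview"

def build_request(prompt: str, model: str = MODEL_ID) -> dict:
    """Assemble the keyword arguments for a generate_content call."""
    return {"model": model, "contents": prompt}

def ask_gemini(prompt: str) -> str:
    """Send a single prompt to Gemini and return the text response.

    Requires `pip install google-genai` and a GEMINI_API_KEY environment variable.
    """
    from google import genai  # imported lazily so the sketch loads without the SDK
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(**build_request(prompt))
    return response.text
```

For example, `ask_gemini("Summarize the key risks in this contract: ...")` would return the model's text answer; enterprise Vertex AI setups can instead construct the client with `genai.Client(vertexai=True, project=..., location=...)`.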
Why It’s Relevant
For you, Gemini 3 represents a major upgrade in AI access: whether you’re a developer building next-gen apps, a business automating complex workflows, or a creative professional working with visuals and code, this model broadens what you can accomplish with AI. Because it supports multimodal input and large context windows, tasks that once required human teams can now be handled more efficiently, accurately and at scale. As AI embeds deeper into tools and business systems, adopting such a frontier model gives you a strategic edge — not just in experimentation but in production readiness.
📌 Read More: Google
You Drive — Gemini Takes the Wheel
Quick Summary
Google is rolling out Gemini to Android Auto globally, replacing Google Assistant in car infotainment systems for drivers who have upgraded their phones.
The update supports 45 languages and brings more conversational, multimodal capabilities to driving tasks.
Key Insights
Gemini replaces Google Assistant in Android Auto, enabling back-and-forth natural language interaction while driving.
With Gemini you can ask for route stops, nearby businesses, translations, auto-texting, and playlist creation via voice.
The rollout begins globally for users who have installed Gemini on their phone; the upgrade appears via a tooltip on the car display.
The upgrade impacts over 250 million cars that support Android Auto, though timing varies by region and device.
Why It’s Relevant
If you use Android Auto, this update changes how you interact with your car’s infotainment system — you get a far more capable voice assistant, not just simple commands. That means you could do things like ask for “the best open café on my route that accepts dogs” or “send ETA to my friend and translate into Spanish” without touching your phone. From a safety and productivity perspective, that’s a meaningful step forward in the in-car experience. And for tech watchers or product developers, it signals how generative AI is now moving into driving environments, not just phones.
📌 Read More: TechCrunch
AI Tutorial
New NotebookLM Updates are INSANE (FREE!)

Quick Summary
NotebookLM introduces two major upgrades: automatic infographics and full slide-deck generation powered by Google’s new image model, Nano Banana Pro. These features convert your research into professional visuals in seconds, saving you time and removing manual design work from your workflow.
Key Insights
NotebookLM now creates infographics directly from your source documents.
Slide-deck generation builds full presentations with structure, visuals and layouts.
Nano Banana Pro produces accurate, context-aware graphics using Google’s knowledge base.
Rollout starts with Pro users, with free-tier access coming soon.
What Can I Learn?
How to turn raw notes or research papers into clear infographics.
How to generate slide decks with structure, visuals and audience-specific tone.
How to write prompts that produce better visual results.
How to use visual summaries to explain complex ideas faster.
Which Benefits Do I Get?
You save hours normally spent on designing slides or graphics.
You present research more clearly with accurate visuals.
You simplify complex topics for teams, clients or students.
You improve productivity with one tool that handles research and visuals.
Why It Matters
These upgrades turn NotebookLM into a full research-to-presentation engine. You reduce friction between analysis and communication, letting you focus on ideas instead of formatting. Visuals become easier to produce, more accurate and more consistent. If you work with information, this update gives you a serious speed and clarity advantage.
📌 Watch the full video: YouTube

1. Google Rolls Out Gemini 3 Search Mode
Gemini 3 arrives in Google Search with a new AI Mode that blends answers, visuals and reasoning in one interface. Users can upload photos or use voice to ask complex questions; the system produces explanatory responses and actionable next steps. The rollout signals Google’s intent to make search more interactive, intuitive and capable of handling multifaceted tasks.
📌 Read More: Google
2. UK Regulator Examines AI-Generated Political Ads
The UK’s Electoral Commission has launched a probe into AI-generated campaign materials, citing concerns over transparency and fairness. The investigation targets how political parties and campaigns deploy synthetic imagery and text ahead of elections. It raises questions about regulation, trust and the integrity of public discourse in the age of generative AI tools.
📌 Read More: BBC
3. Hugging Face CEO Warns of LLM Bubble
Hugging Face CEO Clément Delangue says we are in a “large language model (LLM) bubble, not an AI bubble,” cautioning that enthusiasm around multimodal models may outpace practical business returns. His comments highlight the gap between hype and scalable deployment. Investors and builders may need to adjust expectations around timelines, revenue models and real-world impact.
📌 Read More: TechCrunch
4. TikTok Lets Users Set AI-Content Levels
TikTok now gives users a setting to control how much AI-generated content shows up in their feed. The move aims to boost user agency and transparency in content discovery. With this option, people can limit, allow or prioritize synthetic-media posts, addressing growing concerns about authenticity, manipulation and platform trust.
📌 Read More: TechCrunch
Thanks for sticking with us to the end!
We'd love to hear your thoughts on today's email!
Your feedback helps us improve our content.
⭐⭐⭐Superb
⭐⭐Not bad
⭐ Could've been better
Not subscribed yet? Sign up here and send it to a colleague or friend!
See you in our next edition!
Tom




