Don’t miss out! Get FREE access to my Skool community — packed with resources, tools, and support to help you with Data, Machine Learning, and AI Automations! 📈 https://www.skool.com/data-and-ai

Hire me for n8n Automations: https://ryanandmattdatascience.com/hire-n8n-automation-engineer/

*Get 10% off your Hostinger n8n Self-Hosted plan here: https://hostinger.com/datascience
*Get n8n Cloud here: https://n8n.partnerlinks.io/zbf786z9qbko

🚀 Hire me for Data Work: https://ryanandmattdatascience.com/data-freelancing/
👨‍💻 Mentorships: https://ryanandmattdatascience.com/mentorship/
📧 Email: ryannolandata@gmail.com
🌐 Website & Blog: https://ryanandmattdatascience.com/

🍿 WATCH NEXT
n8n Playlist: https://www.youtube.com/watch?v=MYsr7EIbDG0&list=PLcQVY5V2UY4K0mpuJ-oYO_LI25w5VDUD5

In this video, I put Claude Sonnet 4.6 and Google Gemini 3.1 Pro through three real-world AI tests to find out which model delivers better value for the price. Both models just launched this week, and the AI community is buzzing about which one actually performs better in production scenarios.

I test each model across three critical use cases: business thinking analysis, tool calling with real integrations, and information extraction from complex documents. For the business scenario, I task each AI with solving a SaaS growth problem and compare how deeply they analyze the challenge. In the tool calling test, I give both models access to Perplexity, Google Sheets, and Gmail to see how they handle multi-step workflows. Finally, I feed them a dense financial earnings transcript to test their ability to extract specific data points and perform calculations.

Throughout all tests, I track execution speed, output quality, and how well each model follows instructions. The results reveal some surprising differences in speed, accuracy, and attention to detail between Claude Sonnet 4.6 and Gemini 3.1. I also break down the pricing differences and explain when the extra cost might actually be worth it for your specific use case.

If you are building AI agents or trying to decide which large language model to use in your automation workflows, this comparison will help you make an informed decision based on real performance data, not just hype.

TIMESTAMPS
00:00 Introduction & Model Comparison Overview
01:31 Pricing Breakdown: Sonnet 4.6 vs Gemini 3.1
02:08 Test 1: Business Thinking Scenario
05:17 Test 1 Results: Gemini Performance
07:32 Test 1 Results: Sonnet Performance & Math Error Detection
09:00 Test 2: Tool Calling with Perplexity, Sheets & Gmail
11:13 Test 2 Results: Anthropic Claude Tools Performance
12:43 Test 2 Results: Gemini Tools Performance
14:16 Test 3: Information Extractor from Financial Data
16:02 Test 3 Results: Enterprise Customer Count Analysis
17:12 Final Comparison: Speed & Performance Review
19:00 Final Comparison: Output Quality & Token Usage

OTHER SOCIALS:
Ryan’s LinkedIn: https://www.linkedin.com/in/ryan-p-nolan/
Matt’s LinkedIn: https://www.linkedin.com/in/matt-payne-ceo/
Twitter/X: https://x.com/RyanMattDS

*This is an affiliate program. We receive a small portion of the final sale at no extra cost to you.