Vigyata.AI

Claude Cowork Scrapes 99% of Sites — Here's How

3.3K views· 67 likes· 22:18· Mar 12, 2026


💼 Business owner or operator with a team? We build AI automation systems that cut costs and scale ops — done for you: https://ryanandmattdatascience.com/ai-consultant/

🚀 Want to make money with AI skills? Join our free community — real projects, real client strategies, and the exact stack we use: https://www.skool.com/data-and-ai

I asked Claude Cowork to scrape every sold eBay listing for vintage cards in the last 30 days — no code, just plain English. In minutes I had a clean spreadsheet with titles, prices, and sale types ready to analyze. In this video I'll show you exactly how to set it up, walk through two real scraping examples, and give you the honest truth about when Claude Cowork scraping is worth it (and when it isn't).

⚠️ Before you scrape anything — I cover the legal and safety side too, including how to read a robots.txt file and which sites will get you banned.

What's covered:
How to set up Claude Desktop + Claude in Chrome for scraping
How to read a robots.txt file and check terms of service
Live eBay scrape: sold listings → spreadsheet
Live 130point.com scrape → PowerPoint slide deck with images
Token usage breakdown (14% of my $100 plan for 3 pages)
When to use Cowork scraping vs. Python, n8n, or third-party tools

TIMESTAMPS
00:00 - What we're building: eBay scrape with no code
00:41 - Setup: Claude Desktop + Claude in Chrome
04:20 - Scraping safety: robots.txt and terms of service
07:25 - Example 1: Scraping eBay sold listings
09:19 - Writing the scraping prompt
10:47 - Running the scrape + handling connection errors
12:17 - Results: spreadsheet with prices, dates, and sale types
13:00 - Token usage: is Cowork scraping efficient?
14:08 - Example 2: Scraping 130point.com + building a slide deck
17:04 - Slide deck results (and what went wrong)
19:49 - Token usage check: 14% for 3 pages scraped
20:22 - Final verdict: when to use Cowork vs. Python or n8n

🚀 Hire me for Data Work: https://ryanandmattdatascience.com/data-freelancing/
👨‍💻 Mentorships: https://ryanandmattdatascience.com/mentorship/
📧 Email: ryannolandata@gmail.com
🌐 Website & Blog: https://ryanandmattdatascience.com/

OTHER SOCIALS:
Ryan's LinkedIn: https://www.linkedin.com/in/ryan-p-nolan/
Matt's LinkedIn: https://www.linkedin.com/in/matt-payne-ceo/
Twitter/X: https://x.com/RyanMattDS

*This is an affiliate program. We receive a small portion of the final sale at no extra cost to you.
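As a rough back-of-envelope on the token figure from the video (3 pages scraped consumed 14% of a $100 plan), here is a hypothetical cost estimate. This assumes plan usage translates linearly into dollars, which real billing may not; the numbers below are only the ones stated in the description.

```python
# Back-of-envelope cost per scraped page, using the figures stated
# in the video: 14% of a $100 plan consumed by a 3-page scrape.
plan_cost = 100.00     # monthly plan price in USD (from the video)
fraction_used = 0.14   # share of the plan consumed by the scrape
pages_scraped = 3

cost_for_run = plan_cost * fraction_used       # ~$14.00 for the run
cost_per_page = cost_for_run / pages_scraped   # ~$4.67 per page

print(f"${cost_for_run:.2f} total, ${cost_per_page:.2f} per page")
```

At roughly $4–5 per page under this assumption, it is easy to see why the video's verdict favors Python or n8n for repeated, high-volume scraping jobs.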

About This Video

In this video, I tested Claude Cowork as a "no code, plain English" web scraper by pulling every sold eBay listing for a vintage card search over the last 30 days. I walk you through the exact setup: installing Claude Desktop, enabling Cowork, and turning on the Claude in Chrome connector so Claude can actually control a browser tab (you'll see the gold outline when it's driving). Then I show the prompt I used, how I handled the annoying connection/proxy error, and what the output looked like when it worked: a clean spreadsheet with sold date, title, price, and sale type (auction vs. Buy It Now), plus a summary tab.

Before any scraping, I also cover the part most people skip: safety and rules. I show how to check a site's robots.txt and remind you to read the terms of service — because scraping the wrong site (LinkedIn is the classic example) can get you banned, and paywalled or sensitive sites can be blocked outright.

Then I run a second demo on 130point.com, scraping two search terms and asking Claude to build a slide deck with the top sales and images. It worked, but it was slow and token-heavy (14% of my $100 plan for three pages), so my honest verdict is: Cowork is great for one-off scrapes, but for repeated workflows you're usually better off with Python, n8n, or third-party scrapers like Firecrawl.
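The robots.txt check described above can be sketched in Python with the standard library's urllib.robotparser. This is a minimal illustration, not the video's exact method; the rules and URLs below are made-up placeholders, and for a real site you would point the parser at its live robots.txt instead of parsing inline rules.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules (placeholder, not any real site's file):
# all user agents may crawl everything except paths under /private/.
rules = """\
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# can_fetch() reports whether the given user agent may crawl a URL.
print(rp.can_fetch("*", "https://example.com/search"))      # True
print(rp.can_fetch("*", "https://example.com/private/x"))   # False
```

For a live site you would call `rp.set_url("https://example.com/robots.txt")` followed by `rp.read()` before checking paths. Note that robots.txt is only part of the picture: a site's terms of service can still prohibit scraping that robots.txt technically allows.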
