Right Tool for the Right Job - Windows is that tool!

293 views · 23 likes · 32:36 · Nov 8, 2025

Right Tool for the Right Job - Windows is that tool! I show off the power of running Windows Server on the right equipment for the right tasks. Dual Tesla P40s make light work of my LLM queries.

Get those Server 2022 wallpapers here: https://wallpaperaccess.com/windows-server-2022

CREDITS:
"Subscribe Button" by MrNumber112: https://youtu.be/Fps5vWgKdl0
Music I use: https://www.bensound.com/free-music-for-videos (license code: MS6H2K6QACHPUSXU)

Visit our sister channel, Unkyjoe's Aquatics: https://www.youtube.com/channel/UCWHrQHkTZ1iACO8VxIXkBzw

Thanks for watching! I hope you all enjoy...

Join us on LBRY: https://lbry.tv/$/invite/@unkyjoesplayhouse:9
Facebook: https://www.facebook.com/UnkyjoesPlayhouse/
Twitter: https://twitter.com/ujplayhouse
Email: unkyjoesplayhouse@gmail.com

For PayPal or Patreon donations to Unkyjoe's Playhouse, please visit the "About" section on my channel. All cash donations go directly back into Unkyjoe's Playhouse channel projects. I cannot respond to all emails, but give it a go!

*PLEASE NOTE* I do not respond to YouTube or Google+ private messages. Please contact me via the official Facebook page or via my email address to get in touch.

About This Video

Well, greetings people of the internet—Uncle Joe here. In this one I’m making the case for “right tool for the right job,” and yeah… sometimes that tool is Windows. I dragged my Dell R730 back into service (dual Xeon E5-2697 v4, 256GB RAM, and dual Tesla P40s with 24GB VRAM each) because I got tired of having good gear just sitting there. I’m not trying to slice these GPUs up for a bunch of VMs anymore—I want to use them natively for stuff like local LLM work, and Windows Server makes that dead simple compared to GPU passthrough gymnastics. I show LM Studio seeing both P40s (in WDDM mode with an older-but-working driver) and splitting a 20B model across both cards, giving me very usable performance—often 20–40+ tokens/sec depending on the prompt. Then I load a 70B model just to prove the point: it’ll run, but you’re trading speed for size (I saw ~3–4 tokens/sec). From there I pivot into why Hyper-V is still one of my favorite “boring and stable” tools for lab VMs, plus a quick look at how I’m doing Windows storage pools (spinning-rust RAID10-ish mirror and a simple SSD pool). Bottom line: Linux is great, Proxmox is great—but when the workload is native Windows + GPUs + Hyper-V, Windows Server is absolutely the right tool.
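If you want to follow along at home, the easiest way to poke at a model hosted this way is LM Studio's built-in local server, which speaks the OpenAI-compatible chat API. Here's a minimal sketch, assuming the local server is enabled on its default port 1234 and that the loaded model answers to the placeholder name "local-model"; neither detail is confirmed in the video, so adjust both to match your own setup.

```python
# Minimal sketch: send one chat prompt to a model loaded in LM Studio's local server.
# Assumptions (not shown in the video): the server runs on the default port 1234 and
# the model identifier "local-model" is a placeholder -- change both for your setup.
import requests

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # OpenAI-compatible endpoint

def ask_local_llm(prompt: str) -> str:
    """Send a single chat prompt to the locally hosted model and return its reply text."""
    payload = {
        "model": "local-model",  # placeholder; LM Studio may ignore or require this name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    response = requests.post(LMSTUDIO_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_llm("Why might a Tesla P40 still be useful for local LLM inference?"))
```

With the 20B model split across both P40s you'd expect replies in the tens-of-tokens-per-second range mentioned above; the same call against the 70B model still works, just far slower.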
