Welcome back to Mr. Fred’s Tech Talks, the podcast where we make technology simple, fun, and easy enough to share at the dinner table!
In Episode 6, we scratched the surface of Artificial Intelligence and Large Language Models (LLMs). Now, in Episode 7, we’re peeling back another layer and diving deeper into how these models are actually trained.
And to keep things fun? We’re using the analogy of a parrot that never stops practicing.
Episode Highlights:
- Nostalgic sound bites from classic movies & TV (see if you can name them all!)
- The parrot analogy: how it helps us understand LLMs.
- The 9-step training pipeline: from collecting data to fine-tuning.
- What “neurons” in an AI model really are (hint: tiny math functions, not brain cells).
- Probability explained with loaded dice: why LLMs predict words instead of “understanding” them.
- Why training quality, fine-tuning, and safety filters matter.
- Tech Tip of the Week: Ask AI how it got its answer, not just for the answer itself.
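For the tinkerers in the audience: the episode's hint that a "neuron" is a tiny math function, not a brain cell, can be sketched in a few lines of Python. This is a simplified illustration (the weights and inputs here are made-up numbers, and real models use millions of these functions chained together):

```python
# A "neuron" is just a small math function: multiply each input by a
# weight, add a bias, then squash the result into the range 0 to 1.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # the sigmoid "squash"

# Two made-up inputs, two made-up weights, one made-up bias:
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

That's it: arithmetic in, a number between 0 and 1 out. Stack enough of these and you get the "layers" people talk about when they describe an LLM.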
Key Takeaway
Large Language Models don’t think or understand the way humans do. They’re parrots powered by math, predicting the most likely next word based on patterns in data. When trained well, they’re powerful tools. When trained poorly, they’re just noisy parrots repeating junk.
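If you want to see the "loaded dice" idea from the episode in action, here's a toy Python sketch. The practice sentences are invented for illustration; real models learn from billions of examples, but the principle is the same: count the patterns, then roll weighted dice for the next word.

```python
# Toy "loaded dice" next-word predictor: count which word follows
# "the parrot" in the practice sentences, then pick the next word
# with probability proportional to how often it appeared.
import random
from collections import Counter

sentences = [
    "the parrot squawks", "the parrot squawks",
    "the parrot squawks", "the parrot sleeps",
]

counts = Counter(s.split()[-1] for s in sentences)
words, weights = zip(*counts.items())

# The single most likely next word after "the parrot":
print(counts.most_common(1)[0][0])  # squawks

# Roll the loaded dice: "squawks" comes up 3 times out of 4.
print(random.choices(words, weights=weights)[0])
```

Notice the model never "understands" parrots; it only knows that "squawks" followed "the parrot" more often than "sleeps" did.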
Listen Now
👉 Listen on Spotify
👉 Listen on Apple Podcasts
👉 Listen on Acast
Bonus Freebie 🛠️
Download the free companion guide: How a Large Language Model Learns (PDF), a one-page explainer for parents, teachers, and curious kids.
Join the Conversation
- Visit GetMeCoding.com for coding projects, resources, and tech guides.
- Share this episode with a friend who’d love a simple take on AI.
- Don’t forget to subscribe so you don’t miss Episode 8: How AI Learns: The Student with the Lopsided Textbook.