LLMs and GenAI

Articles about Large Language Models and Generative AI, exploring their architectures, training methods, and real-world applications.

6 items
The Mathematics Behind LLM Fine-Tuning: A Beginner's Guide to How and Why Fine-Tuning Works
Data, Analytics & AI · Software Engineering · LLMs and GenAI

Jul 28, 2025 · 11 min read

Understand the mathematical foundations of LLM fine-tuning with clear explanations and minimal prerequisites. Learn how gradient descent, weight updates, and Transformer architectures work together to adapt pre-trained models to new tasks.

Adapting LLMs: Off-the-Shelf vs. Context Injection vs. Fine-Tuning — When and Why
Data, Analytics & AI · Software Engineering · LLMs and GenAI

Jul 22, 2025 · 12 min read

A comprehensive guide to choosing the right approach for your LLM project: using pre-trained models as-is, enhancing them with context injection and RAG, or specializing them through fine-tuning. Learn the trade-offs, costs, and when each method works best.

Building Intelligent Agents with LangChain and LangGraph: Part 1 - Core Concepts
Notebook · Data, Analytics & AI · Software Engineering · LLMs and GenAI

Jul 21, 2025 · 5 min read

Learn the foundational concepts of LLM workflows - connecting language models to tools, handling responses, and building intelligent systems that take real-world actions.

What are AI Agents, Really?
Data, Analytics & AI · Software Engineering · LLMs and GenAI

May 27, 2025 · 8 min read

A comprehensive guide to understanding AI agents, their building blocks, and how they differ from agentic workflows and agent swarms.

Understanding the Model Context Protocol (MCP)
Data, Analytics & AI · Software Engineering · LLMs and GenAI

May 22, 2025 · 5 min read

A deep dive into how MCP makes tool use with LLMs easier, cleaner, and more standardized.

Why Temperature=0 Doesn't Guarantee Determinism in LLMs
Data, Analytics & AI · Software Engineering · LLMs and GenAI

May 18, 2025 · 10 min read

An exploration of why setting temperature to zero doesn't eliminate all randomness in large language model outputs.

