
Building Intelligent Agents with LangChain and LangGraph: Part 2 - Agentic Workflows
Learn how to build agentic workflows with LangChain and LangGraph.
Articles about Large Language Models and Generative AI, exploring their architectures, training methods, and real-world applications.

A comprehensive guide covering the 2021 Foundation Models Report published by Stanford's CRFM. Learn how this influential report formally defined foundation models, provided a systematic framework for understanding large-scale AI systems, analyzed opportunities and risks, and shaped research agendas and policy discussions across the AI community.

Understand the mathematical foundations of LLM fine-tuning with clear explanations and minimal prerequisites. Learn how gradient descent, weight updates, and Transformer architectures work together to adapt pre-trained models to new tasks.

A comprehensive guide to choosing the right approach for your LLM project: using pre-trained models as-is, enhancing them with context injection and RAG, or specializing them through fine-tuning. Learn the trade-offs, costs, and when each method works best.

Learn the foundational concepts of LLM workflows - connecting language models to tools, handling responses, and building intelligent systems that take real-world actions.

A comprehensive guide covering Dense Passage Retrieval (DPR) and Retrieval-Augmented Generation (RAG), the 2020 innovations that enabled language models to access external knowledge sources. Learn how dense vector retrieval transformed semantic search, how RAG integrated retrieval with generation, and their lasting impact on knowledge-aware AI systems.

A comprehensive guide to understanding AI agents, their building blocks, and how they differ from agentic workflows and agent swarms.

The application of deep neural networks to speech recognition in 2012, led by Geoffrey Hinton and his colleagues, marked a breakthrough that transformed automatic speech recognition. This work demonstrated that deep neural network acoustic models could dramatically outperform the Gaussian mixture models used in traditional HMM-based systems, achieving error rates previously thought out of reach and validating deep learning as a transformative approach for AI.

A deep dive into how MCP makes tool use with LLMs easier, cleaner, and more standardized.

An exploration of why setting temperature to zero doesn't eliminate all randomness in large language model outputs.

DeepMind's WaveNet revolutionized text-to-speech synthesis in 2016 by generating raw audio waveforms directly using neural networks. Learn how dilated causal convolutions enabled natural-sounding speech generation, transforming virtual assistants and accessibility tools while influencing broader neural audio research.

In 2005, the PropBank project at the University of Pennsylvania added semantic role labels to the Penn Treebank, creating the first large-scale semantic annotation resource compatible with a major syntactic treebank. By using numbered arguments and verb-specific frame files, PropBank enabled semantic role labeling as a standard NLP task and influenced the development of modern semantic understanding systems.

In 1998, Charles Fillmore's FrameNet project at ICSI Berkeley released the first large-scale computational resource based on frame semantics. By systematically annotating frames and semantic roles in corpus data, FrameNet revolutionized semantic role labeling and information extraction, reshaped how NLP systems understand event structure, and established frame semantics as a practical framework for computational semantics.

In 2002, IBM researchers introduced BLEU (Bilingual Evaluation Understudy), revolutionizing machine translation evaluation by providing the first widely adopted automatic metric that correlated well with human judgments. By comparing n-gram overlap with reference translations and adding a brevity penalty, BLEU enabled rapid iteration and development, establishing automatic evaluation as a fundamental principle across all language AI.
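To make the mechanics concrete, here is a minimal sentence-level sketch of that idea (clipped n-gram precision combined with a brevity penalty), not the official BLEU implementation; the helper names and the toy sentences are illustrative only:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: candidate counts are capped by reference counts."""
    cand_counts = ngrams(candidate, n)
    ref_counts = ngrams(reference, n)
    clipped = sum(min(count, ref_counts[gram]) for gram, count in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped precisions times a brevity penalty."""
    precisions = [modified_precision(candidate, reference, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0  # the geometric mean collapses if any precision is zero
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages candidates that are shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * geo_mean

candidate = "the cat sat on the mat".split()
reference = "the cat is on the mat".split()
print(round(bleu(candidate, reference, max_n=2), 3))  # ~0.707 with unigram and bigram precision
```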

In 2001, Lafferty and colleagues introduced conditional random fields (CRFs), a powerful probabilistic framework that revolutionized structured prediction by modeling entire sequences jointly rather than making independent predictions. By capturing dependencies between adjacent elements through conditional probability and feature functions, CRFs became essential for part-of-speech tagging and named entity recognition, and established principles that would influence all future sequence models.

Natural language processing underwent a fundamental shift from symbolic rules to statistical learning. Early systems relied on hand-crafted grammars and formal linguistic theories, but their limitations became clear. The statistical revolution of the 1980s transformed language AI by letting computers learn patterns from data instead of following rigid rules.

Claude Shannon's 1948 work on information theory introduced n-gram models, one of the most foundational concepts in natural language processing. These deceptively simple statistical models predict language patterns by looking at sequences of words. They laid the groundwork for everything from autocomplete to machine translation in modern language AI.
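As a toy illustration of the n-gram idea (a bigram model estimated from a made-up corpus, not Shannon's original formulation), the sketch below predicts the next word purely from co-occurrence counts:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for real training text (illustrative only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(prev):
    """Maximum-likelihood bigram probabilities P(next word | previous word)."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probs("the"))  # e.g. {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probs("sat"))  # {'on': 1.0}
```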

In 1950, Alan Turing proposed a deceptively simple test for machine intelligence, originally called the Imitation Game. Could a machine fool a human judge into thinking it was human through conversation alone? This thought experiment shaped decades of AI research and remains surprisingly relevant today as we evaluate modern language models like GPT-4 and Claude.

Joseph Weizenbaum's ELIZA, created in 1966, became the first computer program to hold something resembling a conversation. Using clever pattern-matching techniques, its famous DOCTOR script simulated a Rogerian psychotherapist. ELIZA showed that even simple tricks could create the illusion of understanding, bridging theory and practice in language AI.

Hidden Markov Models revolutionized speech recognition in the 1970s by introducing a clever probabilistic approach. HMMs model systems where hidden states influence what we can observe, bringing data-driven statistical methods to language AI. This shift from rules to probabilities fundamentally changed how computers understand speech and language.

In 1958, Frank Rosenblatt created the perceptron at Cornell Aeronautical Laboratory, the first artificial neural network that could actually learn to classify patterns. This groundbreaking algorithm proved that machines could learn from examples, not just follow rigid rules. It established the foundation for modern deep learning and every neural network we use today.

In 1968, Terry Winograd's SHRDLU system demonstrated a revolutionary approach to natural language understanding by grounding language in a simulated blocks world. Unlike earlier pattern-matching systems, SHRDLU built genuine comprehension through spatial reasoning, reference resolution, and the connection between words and actions. This landmark system revealed both the promise and profound challenges of symbolic AI, establishing benchmarks that shaped decades of research in language understanding, knowledge representation, and embodied cognition.

Bernard Widrow and Marcian Hoff built MADALINE at Stanford in 1962, taking neural networks beyond the perceptron's limitations. This adaptive architecture could tackle real-world engineering problems in signal processing and pattern recognition, proving that neural networks weren't just theoretical curiosities but practical tools for solving complex problems.

In 1991, IBM researchers revolutionized machine translation by introducing the first comprehensive statistical approach. Instead of hand-crafted linguistic rules, they treated translation as a statistical problem of finding word correspondences from parallel text data. This breakthrough established principles like data-driven learning, probabilistic modeling, and word alignment that would transform not just translation, but all of natural language processing.

In 1995, recurrent neural networks (RNNs) revolutionized sequence processing by giving neural networks memory: connections that loop back on themselves, allowing machines to process information that unfolds over time. This breakthrough enabled speech recognition and language modeling, and established the sequential processing paradigm that would influence LSTMs, GRUs, and eventually transformers.

In 1997, Hochreiter and Schmidhuber introduced Long Short-Term Memory networks, solving the vanishing gradient problem through sophisticated gated memory mechanisms. LSTMs enabled neural networks to maintain context across long sequences for the first time, establishing the foundation for practical language modeling, machine translation, and speech recognition. The architectural principles of gated information flow and selective memory would influence all subsequent sequence models, from GRUs to transformers.

In the 1980s, neural networks hit a wall—nobody knew how to train deep models. That changed when Rumelhart, Hinton, and Williams introduced backpropagation in 1986. Their clever use of the chain rule finally let researchers figure out which parts of a network deserved credit or blame, making deep learning work in practice. Thanks to this breakthrough, we now have everything from word embeddings to powerful language models like transformers.

In the mid-1990s, Princeton University released WordNet, a revolutionary lexical database that represented words not as isolated definitions, but as interconnected concepts in a semantic network. By capturing relationships like synonymy, hypernymy, and meronymy, WordNet established the principle that meaning is relational, influencing everything from word sense disambiguation to modern word embeddings and knowledge graphs.

In 1988, Yann LeCun introduced Convolutional Neural Networks at Bell Labs, forever changing how machines process visual information. While initially designed for computer vision, CNNs introduced automatic feature learning, translation invariance, and parameter sharing. These principles would later revolutionize language AI, inspiring text CNNs, 1D convolutions for sequential data, and even attention mechanisms in transformers.

In 1987, Slava Katz solved one of statistical language modeling's biggest problems. When your model encounters word sequences it has never seen before, what do you do? His elegant solution was to "back off" to shorter sequences, a technique that made n-gram models practical for real-world applications. By redistributing probability mass and using shorter contexts when longer ones lack data, Katz back-off allowed language models to handle the infinite variety of human language with finite training data.
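The sketch below illustrates the back-off idea only in miniature: it uses a fixed absolute discount on a made-up corpus rather than Katz's Good-Turing-based discounting, but it shows how probability mass reserved from seen bigrams is redistributed to unseen ones via unigram estimates:

```python
from collections import Counter

# Toy corpus; a real model would be trained on far more text.
tokens = "the cat sat on the mat the cat lay on the rug".split()

bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)
DISCOUNT = 0.5  # fixed absolute discount; Katz derived discounts from Good-Turing counts

def backoff_prob(prev, word):
    """P(word | prev): use the discounted bigram if seen, otherwise back off to unigrams."""
    if bigrams[(prev, word)] > 0:
        return (bigrams[(prev, word)] - DISCOUNT) / unigrams[prev]
    # Mass freed by discounting seen bigrams is redistributed over unseen continuations.
    seen = [w for (p, w) in bigrams if p == prev]
    reserved = DISCOUNT * len(seen) / unigrams[prev]
    unseen_unigram_mass = sum(unigrams[w] for w in unigrams if w not in seen)
    return reserved * unigrams[word] / unseen_unigram_mass

print(backoff_prob("the", "cat"))  # seen bigram, discounted estimate: 0.375
print(backoff_prob("the", "sat"))  # unseen bigram, backed-off unigram estimate: ~0.047
```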

In 1987, Alex Waibel introduced Time Delay Neural Networks, a revolutionary architecture that changed how neural networks process sequential data. By introducing weight sharing across time and temporal convolutions, TDNNs laid the groundwork for modern convolutional and recurrent networks. This breakthrough enabled end-to-end learning for speech recognition and established principles that remain fundamental to language AI today.

In 2004, ROUGE and METEOR addressed critical limitations in BLEU's evaluation approach. ROUGE adapted evaluation for summarization by emphasizing recall to ensure information coverage, while METEOR enhanced translation evaluation through semantic knowledge incorporation including synonym matching, stemming, and word order considerations. Together, these metrics established task-specific evaluation design and semantic awareness as fundamental principles in language AI evaluation.