SHRDLU - Understanding Language Through Action


Michael Brenndoerfer•October 1, 2025•4 min read•588 words•Interactive

Terry Winograd's 1968 SHRDLU system took a revolutionary approach to language understanding. Instead of relying on pattern matching, it pursued genuine comprehension within a simulated blocks world. SHRDLU could parse complex sentences, track the state of its world, and demonstrate that a computer could understand language when that language was grounded in a physical context.

1968: SHRDLU

Two years after ELIZA demonstrated the power of pattern matching, Terry Winograd's SHRDLU took a fundamentally different approach to natural language understanding. Rather than using shallow tricks to simulate conversation, SHRDLU attempted to create genuine understanding within a constrained domain, a simulated world of colored blocks.


Understanding Through Action

SHRDLU operated in a "blocks world," a simplified environment containing pyramids, blocks, and boxes of various colors that could be manipulated by a simulated robot arm. What made SHRDLU revolutionary was its ability to:

  • Parse complex sentences using formal grammar rules
  • Maintain world state by tracking the positions and properties of objects
  • Execute actions based on linguistic commands
  • Answer questions about its world and previous actions

Unlike ELIZA's pattern-matching approach, SHRDLU represented a genuine attempt at understanding. It built an internal model of its world, tracked changes over time, and could reason about spatial relationships and object properties. This was the first system to demonstrate that a computer could understand language in the context of a physical (albeit simulated) environment.

The Blocks World: A Perfect Testing Ground

The blocks world wasn't chosen arbitrarily. It provided the perfect balance of complexity and constraint:

  • Finite vocabulary: Objects had clear, unambiguous names and properties
  • Clear semantics: Actions like "pick up," "put down," and "stack" had precise meanings
  • Visual feedback: The system could verify its understanding through action
  • Spatial reasoning: Required understanding of concepts like "on," "under," "beside"

This constrained environment allowed Winograd to focus on the core challenges of language understanding without getting lost in the infinite complexity of open-domain conversation.
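Spatial concepts like "on" and "under" reduce to small, checkable relations over the world state. A minimal sketch, assuming a simple dictionary representation (not SHRDLU's actual data structures):

```python
def is_on(world: dict, a: str, b: str) -> bool:
    """True if object a rests (directly or transitively) on object b."""
    support = world[a]["on"]
    while support is not None:
        if support == b:
            return True
        support = world.get(support, {}).get("on")
    return False

# "on" relations: the pyramid sits on the red block, which sits on the table
world = {
    "pyramid": {"on": "red_block"},
    "red_block": {"on": "table"},
    "table": {"on": None},
}
print(is_on(world, "pyramid", "table"))  # → True
print(is_on(world, "table", "pyramid"))  # → False
```

Note that "under" falls out for free: `a` is under `b` exactly when `is_on(world, b, a)` holds.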

Sample Interaction

Here's a classic exchange that demonstrates SHRDLU's sophisticated capabilities:

Person: Pick up a big red block.
SHRDLU: OK.
Person: Grasp the pyramid.
SHRDLU: I don't understand which pyramid you mean.
Person: Find a block which is taller than the one you are holding and put it into the box.
SHRDLU: By "it", I assume you mean the block which is taller than the one I am holding. OK.

What Made This Interaction Revolutionary

This seemingly simple conversation demonstrates multiple breakthroughs in language understanding:

1. Reference Resolution

When the human says "Grasp the pyramid," SHRDLU recognizes the ambiguity—there are multiple pyramids in its world. Rather than guessing or failing silently, it explicitly requests clarification. This shows genuine understanding of the reference problem.
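The underlying logic is simple to state: match the description against the world, and treat anything other than a unique match as a reason to respond differently. A hedged sketch (the function and reply strings are invented for illustration, though the ambiguous reply echoes SHRDLU's actual behavior):

```python
def resolve(description: dict, world: list[dict]):
    """Find the unique object matching a description, or report the problem."""
    matches = [obj for obj in world
               if all(obj.get(k) == v for k, v in description.items())]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        return "I don't see any such object."
    return "I don't understand which one you mean."  # ambiguous reference

world = [
    {"type": "pyramid", "color": "green"},
    {"type": "pyramid", "color": "red"},
    {"type": "block", "color": "red"},
]
print(resolve({"type": "pyramid"}, world))  # two pyramids → ask for clarification
print(resolve({"type": "block"}, world))    # unique match → the red block
```

The key design choice is that ambiguity is a first-class outcome, not an error: the system answers with a question instead of guessing.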

2. Complex Spatial Reasoning

The command "Find a block which is taller than the one you are holding" requires SHRDLU to remember what it's currently holding, compare heights of other objects, identify the target object, and plan and execute the movement.
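The comparison step alone can be sketched in a few lines, assuming the world tracks object heights (a deliberately simplified stand-in for SHRDLU's reasoning):

```python
def taller_than_held(heights: dict[str, int], holding: str) -> list[str]:
    """Return names of blocks taller than the one currently in hand."""
    held_height = heights[holding]
    return [name for name, h in heights.items()
            if name != holding and h > held_height]

heights = {"small_red": 1, "held_green": 2, "tall_blue": 4}
print(taller_than_held(heights, "held_green"))  # → ['tall_blue']
```

The harder parts in practice, and the parts SHRDLU genuinely handled, are parsing the relative clause "which is taller than the one you are holding" into such a query and then planning the arm movement that satisfies it.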

3. Pronoun Resolution

When SHRDLU says "By 'it', I assume you mean the block which is taller than the one I am holding," it demonstrates sophisticated understanding of pronoun reference. It doesn't just resolve the pronoun—it explains its reasoning.
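A crude version of this behavior is a recency heuristic plus an explanation string. SHRDLU's actual resolution used deeper syntactic and semantic reasoning; this sketch only shows the "explain your binding" idea, with invented names:

```python
def bind_pronoun(pronoun: str, discourse: list[str]) -> tuple[str, str]:
    """Bind a pronoun to the most recent mention and explain the choice."""
    antecedent = discourse[-1]  # naive heuristic: most recent mention wins
    explanation = f'By "{pronoun}", I assume you mean {antecedent}.'
    return antecedent, explanation

mentions = ["the one you are holding",
            "the block which is taller than the one you are holding"]
ref, why = bind_pronoun("it", mentions)
print(why)
```

Surfacing the explanation matters as much as the binding itself: it lets the user catch and correct a wrong interpretation before the system acts on it.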

Limitations and Legacy

Despite impressive demonstrations, SHRDLU's limitations revealed fundamental challenges with rule-based language understanding:

  • Domain Brittleness: SHRDLU was completely dependent on its blocks world domain. Adding new object types required extensive modifications.
  • Combinatorial Scaling: Each new sentence pattern required new hand-written grammar rules, so the rule set grew unmanageably as coverage expanded.
  • The Frame Problem: Complete symbolic representation becomes intractable in complex domains.
  • The Symbol Grounding Problem: How do linguistic symbols connect to real-world meaning?

Historical Impact

SHRDLU fundamentally changed how researchers thought about language understanding:

  1. Language Understanding Standards: Provided the first convincing demonstration of genuine computer language understanding
  2. Symbol Grounding: Highlighted how language needs to be grounded in action and world knowledge
  3. Microworlds Methodology: Established an approach that dominated AI research for two decades
  4. Embodied AI: Anticipated modern research integrating language with perception and action

SHRDLU represented both the culmination and the beginning of the end of rule-based NLP, pointing toward the need for statistical and machine learning approaches.


About the author: Michael Brenndoerfer

All opinions expressed here are my own and do not reflect the views of my employer.

Michael currently works as an Associate Director of Data Science at EQT Partners in Singapore, where he drives AI and data initiatives across private capital investments.

With over a decade of experience spanning private equity, management consulting, and software engineering, he specializes in building and scaling analytics capabilities from the ground up. He has published research in leading AI conferences and holds expertise in machine learning, natural language processing, and value creation through data.
