Of Models and Metaphors

Models are how we understand, describe and predict the world around us. They are the basis of science and knowledge. Metaphors can give an intuitive feel for a model.

Metaphors are useful for avoiding a complete understanding of something: being content with a simple abstract idea instead of the full picture. Feynman refused to give examples of what physical phenomena are "like", because there is nothing like an electron spinning around a nucleus that gives a better understanding of it. It is not like planets around the sun; the forces involved are completely different!

Metaphors have their place. In a literary sense they create a connection with your audience and convey feeling and empathy. In a technical setting a metaphor can be used effectively to highlight fundamental properties of a model. They provide an excellent communication device. But beware of their use if you are searching for a precise understanding.

Peter Norvig presents an excellent deep dive into models and theories from a computer scientist's perspective. The key theme to me is that rich data produces better models than clever algorithms. There is a large class of problems that can be solved very well by models that are statistical in nature and, by virtue of their size, not intuitively understandable. Computers are allowing us to build models in ways that are beyond metaphor.

At Outpace we use simple machine learning over large datasets to personalize content. It makes our customers lots of money. We routinely see a 20-60% lift in conversions, compared to business as usual. The value of data is very obvious in such a setting. We literally convert data into money.

In science, access to data leads to amazing progress. Examples such as the human genome project illustrate the huge leap modeling can take when backed by rich data.

For these reasons I am suspicious of explanations by metaphor. They rarely offer the fidelity or utility of an explanation drawn from data. The best explanations present a model, or a description of a model, grounded in data.


  1. What would you make of the idea that the only way we can understand complex things is through the construction of networks of thin metaphors that have tangible meaning in context?
    Symbolic representations do not necessarily equate to solutions in Euclidean spaces - how are we then to understand these without metaphor?

    1. Your statement certainly provides a rich source for thought on what knowledge is, how the brain works, the role of metaphor in learning, and undecidability. I am focused on a narrower point: one's understanding of a subject is always deeper once one transitions from a metaphor to a (useful) model. So I welcome broadening the discussion.

      I think the mechanisms of the brain, what little is known of them, deserve to be considered first-class phenomena insofar as we don't have a precise biological or physical explanation. Certainly we know the brain consists of networks of connections: synapses that fire in patterns which reinforce, match and remap. The definition of meaning is elusive; I can't imagine a definition of meaning that is not self-referential. So in the context of "how does the brain work", I would argue that "a network of thin metaphors" is less than what is known about the brain, and more than I can really say about meaning.

      In the context of learning, a network of metaphors is vital. If I'm learning how to program and am introduced to a "stack" that I can "push" values onto and "pop" values off, a stack of papers is a hugely valuable metaphor. I might learn about functions by using Turtles, where I move a "turtle" with commands to go forward, left, forward again, etc., then combine those instructions into a function. As I learn the basics, I can follow links to new abstractions that build on those fundamentals. In that sense, learning, teaching, popularizing and communicating all benefit greatly from the "network of metaphors".
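      The stack-of-papers metaphor maps almost directly onto code. As a minimal Python sketch (the page names here are just illustrative):

      ```python
      # A stack: the last value pushed is the first popped,
      # like adding and removing papers from the top of a pile.
      stack = []

      stack.append("first page")   # push
      stack.append("second page")  # push

      top = stack.pop()            # pop the most recently pushed value
      print(top)    # second page
      print(stack)  # ['first page']
      ```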


    2. One might wonder how I can say that metaphors are vital to learning in one breath and say I am suspicious of metaphors in the other. Well, once I learn about stacks I need to go on to learn about computer memory. I might learn about computer memory initially using a metaphor of a filing cabinet that can be arranged in various ways to provide abstractions such as a stack. I might eventually learn about the electronics involved, even Maxwell's equations... at some point I'm content to say I don't need to know how X works. Metaphors helped me to quickly explore a vast universe of knowledge. But what do I really know about stacks? Why do I get stack overflows when I make a recursive call? How big can my stack be? How fast are algorithms that use stacks? To understand these I need a more precise model. That model can be understood well now that I have a notion of the hardware. Where exactly is the line between a metaphor and a model? At the extremes I think we can agree there is a shallow metaphoric notion of "how things work" and at the other end a precise model that matches some specific notion (in this case computer memory management).
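      Those stack questions get concrete answers once the model includes a call stack of bounded size. A small Python sketch of one of them (assuming CPython, which caps recursion at a configurable frame limit, commonly around 1000):

      ```python
      import sys

      def depth(n=0):
          # Every recursive call pushes another frame onto the call stack.
          try:
              return depth(n + 1)
          except RecursionError:
              # The frame limit was hit: Python's version of a stack overflow.
              return n

      print(sys.getrecursionlimit())  # the configured frame limit
      print(depth())                  # depth reached, a little below that limit
      ```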

      Now memory management is in itself not easily understandable, even with a precise model. Garbage collection is a metaphor that blankets a complex model of memory management. If I go to the effort of learning that model, I will have a deeper understanding of that specific notion. Hence I feel it is entirely consistent to see metaphors as a way to quickly explore knowledge, while recognizing that if I stop there I will only have a sense of what could otherwise be precise.
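      Python makes a slice of that blanketed model visible: it combines reference counting with a cycle detector, and the `gc` module lets you poke at the latter. A minimal sketch:

      ```python
      import gc

      class Node:
          def __init__(self):
              self.ref = None

      # Two objects referencing each other form a cycle; reference
      # counting alone can never free them.
      a, b = Node(), Node()
      a.ref, b.ref = b, a
      del a, b  # no external references remain, yet the cycle persists

      # The cycle detector finds and reclaims the unreachable objects.
      print(gc.collect())  # count of unreachable objects found
      ```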

      So if I notice that I think I understand something and that understanding is based in a metaphor, probably there is a deeper understanding to be had. Similarly, if someone is trying to convince me of something by metaphor, they probably have the good intention of conveying some understanding I am missing. If we are choosing a lunch spot I might be OK with that. But if we are deciding how much steel to buy for a bridge, I'm not content to be convinced until I understand (and believe) the model as well as the notion. This is really the key thing I'm driving at... I find it useful to be suspicious of metaphors, and I think many other people may also find this a useful technique for examining their learning, work, decisions and communication.

      Well, that's a lot of assertions and we haven't even gotten to undecidability... so I'll just observe that undecidability is a pain, but maths is still useful :)

      Thanks for the thought provoking comment.