In the summer of 1984, starting out as an economic journalist for The Boston Globe, I published The Idea of Economic Complexity (Viking).  “Complexity,” I wrote, “is an idea on the tip of the modern tongue.”

About that much, at least, I was right.

My book was received with newspaperly courtesy by The New York Times, but it was soon eclipsed by three much more successful titles. Chaos: The Making of a New Science (Viking), by James Gleick, appeared in 1987. Complexity: The Emerging Science at the Edge of Order and Chaos (Simon & Schuster), by M. Mitchell Waldrop; and Complexity: Life at the Edge of Chaos (Macmillan), by Roger Lewin, both appeared in 1992.  The reviewer for Science remarked that the latter read like the movie version of the former.

Gleick reported on the doings of a community of physicists, biologists and astronomers, including mathematician Benoit Mandelbrot, who were studying, among other things, “the butterfly effect.”  Lewin and Waldrop both wrote mainly about W. Brian Arthur, of the Santa Fe Institute.  I had pinned my hopes on Peter Albin, of the City University of New York, whose students hoped he would be the next Joseph Schumpeter.

When the famously pessimistic financial economist Hyman Minsky retired, Albin was chosen to replace him at the Levy Institute at Bard College, but he suffered a massive stroke before he could take the job. Duncan Foley, then of Barnard College, edited and introduced a volume of Albin’s papers: Barriers and Bounds to Rationality: Essays on Economic Complexity and Dynamics in Interactive Systems (Princeton, 1998). Arthur went on to win many awards and write a well-regarded book, The Nature of Technology: What It Is and How It Evolves (Free Press, 2009).

By then complexity had become a small industry, powered by a vigorous technology of agent-based modeling. Publisher John Wiley & Sons started a journal, Princeton University Press a series of titles, and Ernst & Young opened a practice. Among the barons who came across my screen were John Holland, Scott Page, Robert Axelrod, Leigh Tesfatsion, Seth Lloyd, Alan Kirman, Blake LeBaron, J. Barkley Rosser Jr., and Eric Beinhocker, as well as three men who became good friends: Joel Moses, Yannis Ioannides, and David Colander. All extraordinary thinkers. I dropped off the chase long ago.

Two of the most successful expositors of economic complexity were research partners, at least for a time: Ricardo Hausmann, of Harvard University’s Kennedy School of Government, and physicist César Hidalgo, of MIT’s Media Lab. They worked with a gifted mathematician, Albert-László Barabási, of Northeastern University, to produce a highly technical paper; then, with colleagues, assembled The Atlas of Economic Complexity: Mapping Paths to Prosperity (MIT, 2011), a data-visualization tool that continues to function online. Meanwhile, Hidalgo’s Why Information Grows: The Evolution of Order, from Atoms to Economies (Basic, 2015) remains an especially lucid account of humankind’s escape (so far) from the Second Law of Thermodynamics, but there is precious little economics in it. For the economics of international trade, see Gene Grossman and Elhanan Helpman.
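Behind the Atlas’s country and product rankings is a calculation simple enough to sketch. The snippet below is a minimal, illustrative version of the “method of reflections” iteration from the Hausmann–Hidalgo line of work, run on a made-up binary export matrix; the numbers and the iteration count are assumptions for illustration, not Atlas data.

```python
import numpy as np

# Toy country-product export matrix (rows: countries, columns: products).
# M[c, p] = 1 means country c exports product p competitively.
M = np.array([
    [1, 1, 1, 1],   # a diversified economy
    [1, 1, 0, 0],
    [1, 0, 0, 0],   # exports only the most ubiquitous product
], dtype=float)

diversity = M.sum(axis=1)   # k_c,0: how many products each country exports
ubiquity = M.sum(axis=0)    # k_p,0: how many countries export each product

# Method of reflections: each round, a country's score becomes the average
# score of the products it exports, and vice versa for products.
kc, kp = diversity.copy(), ubiquity.copy()
for _ in range(10):
    kc, kp = (M @ kp) / diversity, (M.T @ kc) / ubiquity
```

After a few rounds, diversified countries exporting rare products separate from countries exporting only ubiquitous ones, which is the intuition the Atlas’s rankings formalize.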

That leaves economist Martin Shubik, surely the second most powerful mind among economists to have tackled the complexity problem (John von Neumann was first). Shubik pursued an overarching theory of money all his life, one in which money and financial institutions emerge naturally, instead of being given. In The Guidance of an Enterprise Economy (MIT, 2016), he considered that he and physicist Eric Smith had achieved it. Shubik died last year, at 92. His ideas about strict definitions of “minimal complexity” will take years to resurface in others’ hands.

So what have I learned?  That the word itself was clearly shorthand: complexity of what? One possible phenomenon is complexity of the division of labor, or the extent of aggregate specialization in an economic system.

I came close to saying as much in 1984. The book began,

To be complex is to consist of two or more separable, analyzable parts, so the degree of complexity of an economy consists of the number of different kinds of jobs in the system and the manner of their organization and interdependence in firms, industries, and so forth. Economic complexity is reflected, crudely, in the Yellow Pages, by occupational dictionaries, and by standard industrial classification (SIC) codes.  It can be measured by sophisticated modern techniques such as graph theory or automata theory. The whys and wherefores of complexity are not our subject here, however; it is with the idea itself that we are concerned. A high degree of complexity is what hits you in the face in a walk across New York City; it is what is missing in Dubuque, Iowa. A higher degree of specialization and interdependence – not merely more money or greater wealth – is what makes the world of 1984 so different from the world of 1939.
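The crude measure the book described can be made concrete. Here is a toy illustration (the firms and occupations are invented, not drawn from the book): count the kinds of jobs in the system, then read interdependence off a simple graph linking firms that share an occupation.

```python
from collections import defaultdict

# Hypothetical toy economy: the kinds of jobs each firm employs.
firms = {
    "bakery": {"baker", "clerk", "driver"},
    "mill":   {"miller", "driver", "mechanic"},
    "garage": {"mechanic", "clerk"},
}

# Crude complexity: the number of different kinds of jobs in the system.
kinds_of_jobs = set().union(*firms.values())
print(len(kinds_of_jobs))  # → 5 distinct occupations

# Interdependence: link two firms whenever they share an occupation --
# a minimal graph-theoretic reading of "organization and interdependence".
links = defaultdict(set)
names = list(firms)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if firms[a] & firms[b]:
            links[a].add(b)
            links[b].add(a)
```

Walk from Dubuque to New York, on this reading, and both numbers explode: more kinds of jobs, and a far denser graph connecting them.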

I was interested in specialization as a way of talking about why the prices of everyday goods and services were what they were, apart from the quantity of money. I was writing towards the end of forty years of steadily rising prices. I had become entranced by some painstaking work published twenty-five years before by economists E.H. Phelps Brown and Sheila Hopkins: measurements of both the money cost of living in England and the purchasing power of builders’ wages over seven centuries. The price level exhibited a step-wise pattern, relentlessly up for a century, steady the next; purchasing power, a jagged but ultimately steady increase (sorry, only JSTOR subscription links).

[W]hen we find the craftsmen who have been building Nuffield College in our own day earning a hundred fifty pennies in the time it took their forebears building Merton to earn one, the impulse to break through the veil of money becomes powerful: we are bound to ask, what sort of command over the things that builders buy did these pennies give from time to time?

It turned out that the higher the money price, the more prosperous was the craftsman’s lot, at least in the long run, though sometimes after periods of immiseration lasting decades. That was much as Adam Smith led readers to expect in the first sentence of The Wealth of Nations: “The greatest improvement in the productive powers of labor, and the greater part of the skill, dexterity, and judgement, with which it is directed, or applied, seem to have been the effects of the division of labor.” Today’s builders rely on a bewildering array of materials and machines to pursue their tasks, compared to those who built Merton College.

What interested me were intricate questions about the direction of causation.  Had prices grown higher because the number of pennies had increased?  Or had the supply of pennies grown to accommodate an increasing overall division of labor? To put it slightly differently, in those periods of “industrial revolution” – there had been at least two or three such events – had prices risen because the size of the market and the division of labor had grown, and the quantity of money along with them?  Or was it the other way around?

Economists had no hope of answering questions like this, it seemed to me, because they had no good way of posing them. They were in the grip of the quantity theory of money, which, at least since the time of the first European voyages to the West, has held that “the general level of prices” is proportional to the quantity of money in the system available to pay for goods. This is, I thought, little more than an analogy with Boyle’s Law, one of the most striking early successes of the scientific revolution, which holds that the pressure and volume of a fixed amount of gas are inversely proportional. Release the contents of a steel cylinder into a balloon and the balloon expands. But it still contains no more gas than before. Something like that must have been in the mind of the first person to speak of “inflating” the currency. From there it was a short jump to the way that classical quantity theory relies on the principle of plenitude – the age-old assumption, inherited from Plato, that there can be nothing truly new under the sun, that the collection of goods behind the “general price level” was somehow fixed.
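The analogy is easy to put side by side in symbols. A sketch, using Irving Fisher’s equation-of-exchange form of the quantity theory (my notation, not the book’s):

```latex
\[
\underbrace{P\,V = k}_{\text{Boyle's law}}
\qquad\qquad
\underbrace{M\,V = P\,T}_{\text{equation of exchange}}
\]
% Left: for a fixed amount of gas at constant temperature, pressure P and
% volume V trade off against a constant k.
% Right: M is the money stock, V its velocity of circulation, P the price
% level, T the volume of transactions. Hold V and T fixed -- the plenitude
% assumption that the collection of goods cannot change -- and the price
% level P moves in strict proportion to the quantity of money M.
```

The question I was asking was precisely whether T, the range and volume of what is traded, could be treated as fixed across an industrial revolution.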

But I was no economist. My book found no traction.  By then, however, I was hooked; and within a few years I had found my way to a circle of economists at whose center was Paul Romer, then a professor at the University of Rochester.  Romer was in the process of putting the growth of knowledge at the center of economics, but that turns out not to be the whole story, just the beginning of it.

The Yellow Pages are all but gone, casualties of search advertising; other industries that supported themselves by assembling audiences have shrunk (newspapers, magazines, broadcast television). Still others have grown (internet firms, Web vendors, producers of streaming content).  Tens of thousands of jobs have been lost; hundreds of thousands of jobs have been created.

I still have the feeling that the important changes in the global division of labor have something to do with the behavior of traditional macroeconomic variables.  Romer once surmised that the way into the problem was via Gibson’s paradox – a strong and durable positive empirical correlation between interest rates and the general level of prices, where theory expected to find the reverse.  Meanwhile, central bankers are fathoming the mysteries of the elusive Phillips curve, the inverse relationship between unemployment and inflation.

Which brings me back to 1984. Also in that year, Michael Piore and Charles Sabel published The Second Industrial Divide: Possibilities for Prosperity (Basic). They found their new highly flexible manufacturing firms in northwestern and central Italy instead of Silicon Valley. Their entrepreneurs had ties to communist parties and the Catholic Church rather than libertarian sympathies. But the idea was much the same: computers would be the key to flexible specialization. For all the talk since about economic complexity, that is the book about the changing division of labor worth re-reading.