How government engineers the development and function of capitalism
Review of Ages of American Capitalism by Jonathan Levy
If you want to get an historical grounding in the American economy and politics, this is the best book I know. The issues are clear from the beginning: how can we explain the explosive economic growth of the last few centuries? Why are depressions – panics, liquidity traps, and the like – apparently inevitable? What impact do government policies have? Finally, are different forms of capitalism possible or desirable? While Jonathan Levy can’t answer these questions with complete certainty or objectivity, this book is a phenomenal read that points in many directions.
Levy starts by defining capitalism: 1) as an investment process through which a capital asset is supposed to yield future profit; in which 2) the profit motive is crucial, however much the motives of capitalists are a mix of the rational and irrational; and that 3) it always faces the choice between the impulse to hoard wealth in the present (“liquidity preference” for security) versus invest for the longer term in “illiquid factors of production”.
Levy sees 4 developmental stages (or “ages”) of American capitalism.
First, to the 1860s, there was the “age of commerce”, which combined mercantilism, empire, slavery, international trade, even piracy. Mercantilism was a form of proto-capitalism, in which the state was involved with private interests in an attempt to maximize exports while minimizing imports. This could involve the building of empire, which guaranteed a large market for controlled trade and was supported by military intervention when necessary. After the Glorious Revolution, the British Empire became increasingly oriented toward supporting private commercial interests, with Whigs deemphasizing aristocratic privilege in favor of gentlemen farmers seeking to maximize yield and profits; these farmers were to become some of the first capitalist entrepreneurs. In addition, imperial trade began to exhibit a “multiplier effect” of wealth creation, i.e. economic development was self-reinforcing as the accumulation of cash reserves (“transactional liquidity”) encouraged productive investment, both to satisfy growing demand and to put accumulated assets to more profitable use.
Slavery was an integral part of this system, particularly with the lethally brutal labor involved in establishing agrarian colonies in undeveloped areas such as the Caribbean – working them to death was more profitable than the use of indentured servants, whose contracts implied survival for eventual release. To justify this and create hierarchical division, racist ideologies were invented. Later, as agricultural laborers, slaves produced 80% of exports back to the home country; they also served as “stable capital assets”.
A crucial restraint in this period was the “organic economy” or, simply put, nature: given agricultural technology and practices, land could only produce so much before reaching diminishing returns and soil exhaustion; to produce more, you needed more land. As the population rose in old-regime Europe, this led to the extraction of increasing surpluses from the peasants, many of whom were gradually forced to cede their rights in the “commons” to landlords who claimed the lands as their own. In the colonies, where there was abundant land so long as native populations could be contained or disposed of, white males stood to gain more rights than peasants would have dared to dream (i.e. freedom from arbitrary power, including legal rights to the fruits of land improvements by their own labor); they could head a household, secure a living for their family and employees or slaves, and even begin to seek investment opportunities. (Levy argues there were “gamblers” who chased profit in trade as well as more autarkic subsistence farmers.)
At the time of the American revolution, a sharp distinction between the public and private sectors did not yet exist – the colonies were still profiting hugely from their membership in a mercantile empire. Upon independence, US citizens faced economic depression, inflation, and a severe credit crunch. In effect, the lack of a national trade policy rendered the Articles of Confederation impotent against British mercantilism, devolving into policies of austerity by default. Depression proved a fertile setting for the 1787 Constitutional Convention.
As Treasury Secretary in the first Administration of the Federal Government, Alexander Hamilton pursued a policy of paying off foreign creditors and refinancing the national debt. He believed that the US should institute a policy of permanent, rolling national debt backed by the authority of a Bank of the US (BUS) and financed by anticipated revenues that private banks would be handling, i.e. a “fiscal-military state” along the model of the Bank of England. In this way, he argued, long-term capitalism could develop dynamically. The BUS would be mandated to expand the money supply as needed by profit-seeking merchant elites; to do so, the BUS would leverage its funds (lend much more than it held), “quickening” the circulation of credit. He was also planning to create an industrial manufacturing base directly under the aegis of the Federal Government.
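The leveraging Levy describes here (a bank lending much more than it holds) can be made concrete with a toy fractional-reserve calculation. This is my own illustrative sketch, not an example from the book, and the figures are assumptions:

```python
# Toy fractional-reserve sketch (my illustration, not the book's):
# with a reserve ratio r, each loan is spent and redeposited, so
# total credit approaches initial_deposit / r as a geometric series.

def total_credit(initial_deposit: float, reserve_ratio: float,
                 rounds: int = 100) -> float:
    """Sum the geometric series of successively re-lent deposits."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lent fraction returns as a new deposit
    return total

# With an assumed 20% reserve ratio, $1m of specie supports
# nearly $5m of circulating credit:
print(round(total_credit(1_000_000, 0.20)))  # 5000000
```

This "quickening" is exactly why the system is dynamic in booms and fragile in panics: the same multiplication runs in reverse when depositors demand their money back.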
Opponents believed Hamilton’s scheme not only privileged elites but betrayed the republican revolution itself; Federalist policies, they feared, would lead to corruption and patronage, financial oligarchy, parasitical bureaucracy, etc. Instead of that policy course, Jefferson argued that the government should promote the interests of rural households – the “Yeoman farmers” – rather than the capitalists. It was the safer option, based as it supposedly was on “personal virtue” and “hard work”. In Jefferson’s system, trade should be predominantly internal, opening to the west rather than competing with European industry in Atlantic commerce. Though unable to completely roll back Hamilton’s policies, this effectively ended the massively profitable “triangle of trade” of the 17th and 18th Centuries, according to which European manufactures went to Africa; the next cargo was slaves for export to the Americas; in the final leg, the Americas provided raw materials to Europe. (Profits from the triangle had led to an accumulation of capital that arguably funded the initial stages of the industrial revolution.)
While internal trade grew robustly and trade relations with Britain resumed, this early form of capitalism encouraged credit-led booms and busts, that is, speculative bubbles inevitably collapsed into credit-crunch panics, followed by deflation and recession. Two severe panics in the first decades of the 19th Century paved the way for Andrew Jackson’s populism. According to Jackson, due to elite corruption in the BUS, government was to blame; the private market, in his view, should guide the economy without interference from the state (the “laissez-faire” ideology). The political genius of Jackson’s program was to attract a new constituency: the rural, white-male voter. With no national plan or BUS, the states were left to their own devices, resulting in the chartering of state banks with their own currencies as well as helter-skelter investments in infrastructure.
Most importantly, the conception of the corporation changed fundamentally in the 1840s. Rather than a semi-public legal entity that required a “grant” or “concession of sub-sovereignty” from the state, the creation of a corporation became a “democratic” right available to the common man. While this opened vital opportunities for entrepreneurs, it also resulted in a focus on the short-term credit cycle and freed corporations from any long-term direction by the state – governmental controls on business purpose, direction and operation decisively lost ground. Perhaps the most significant political development was that private corporations came to serve as “Caesarian vehicles” for power (think Robber Barons), an outcome that was not anticipated in the Constitution; by extending the protection of “freedom of contract”, the Supreme Court blocked states from “infringing” on the rights of corporations.
Slavery continued to be a political issue, but it was only one of the varieties of capitalism that was developing at the time. In the south, with its dependence on continued expansion west to maintain cotton crops, slaves served as the principal form of stored wealth (or “portable liquidity”), indeed the 4 million slaves were worth an estimated $3 billion – more than the land itself. It is little wonder that southern potentates resisted their emancipation.
Meanwhile, with the rise of the industrial economy, the north was undergoing a fundamental transition, not only to overcome the limits of the organic economy via new methods and organization, but by investing in factories, where unprecedented productivity growth was centered. Jobs became wage based, with the right to quit but also a loss of security, i.e. workers became vulnerable to the credit cycle in new ways. This represented the beginning of the “age of capital”, the second developmental stage of American capitalism; it lasted into the 1930s.
Emancipation destroyed, in a stroke, the basis of southern wealth. The sharecropping system – a form of quasi-slavery for an economy based on cheap labor – froze the south into a state of under-development for over a century.
In the Union States during the Civil War, Lincoln had proved that an activist Federal Government could control the economy through national planning, i.e. a mix of neo-mercantilism and industrial policy; with the National Banking Acts, a system of federally chartered banks was established, ending the chaos and currency fragmentation of the Jacksonian era and proving that state-led credit creation could stimulate economic growth, much as Hamilton had advocated. At the end of the war, the return to the gold standard (sponsored by the “Resumptionists”) ended these experiments in activist economic policies; the gold standard linked the US and UK credit markets and ensured that fragility and ideological rigidity would frame economic policymaking for the next 80 years.
As they increased in power and size, corporations became massive bureaucracies that applied the “modern management” techniques under development. Having studied railroads as a speculator, Andrew Carnegie became a trailblazing type of entrepreneur: he invested heavily in fixed long-term assets (steel) continually for productivity gains (“capital deepening”), cut prices to crush rivals and dominate the market, and operated his mills at full capacity. At this time, steel was an intermediary product, diffusing into the rest of the economy as railroads were doing for transportation and logistics, all to initiate a self-sustaining kind of growth; based on fossil fuels and urbanization, the transformation was “civilizational” in scope in a way perhaps as fundamental as the agricultural revolution in Mesopotamia. US cities became centers to process raw materials as well as manufacture finished products; even farming was becoming more commercially entrepreneurial rather than subsistence based.
Of course, this new civilization altered the basic conditions of working life, creating new political divisions and stresses. Capital deepening gradually eliminated most skilled, artisanal labor, which became an easily discarded factor in production, enabling capitalists to pay a lower wage. Just about anyone could learn to operate in highly specialized mechanical jobs, as engineered through “Taylorism”; reduced to simple repetitive tasks, laborers lost prestige and control in the workplace, even their male mojo. This led to social unrest and ideological dissent, ranging from romantics like Thoreau to revolutionaries like Marx; the Union movement arose, but was tainted with racism and exclusionary factions that guaranteed political fragmentation. Even as wealth increased at an accelerating rate, the vulnerability and flaws of the integrated capitalist economy were reflected in ever more severe financial panics and depressions. The tools of government – essentially austerity – were unable to boost the economy out of the doldrums, let alone to alleviate the human suffering that resulted.
The next step was the mass production of consumer products, as pioneered by Henry Ford. Taking his cue from the example of Carnegie, Ford built “Fordism” on investment in fixed capital that 1) enhanced the use of energy in the production process and 2) transformed the factory from a disparate series of workshops into a fully integrated system, an architecture that treated the worker as a cog in an intricately designed machine. Not yet the eccentric, antisemitic crank that he was later to become, Ford stated that his mission was to “re-make man”, to forge a new civilization for which employees should be grateful, even worshipful. Workers, he insisted, should be able to buy his cars. Not only would he pay better for higher productivity, but he was able to produce more at sharply lower cost. Soon, competitors like GM studied his methods and surpassed him: corporations were becoming bigger, better organized, and unprecedentedly productive.
The Great Depression illustrates the fallacies of the gold standard in a financial environment that allowed “uninhibited capital mobility” or convertibility. To borrow money or trade internationally, the reasoning went, the nation needed to adopt the gold standard, whose universal convertibility was believed to be the only way to promote “sufficient trust”. Gold set rigid limits on the leveraging that created liquidity as exchanges and investment increasingly required: if a bank lent out too much, the nation’s central bank would not have enough gold to convert upon demand. When the gold “fled” the country, the central bank would be forced to raise interest rates to attract it back, which led directly to financial panic and deflation. This is essentially what happened in 1929.
The financial mechanics of the 1929 crash were complex. Most importantly, with the 1920s boom, banks could borrow from the Fed and raise interest rates on short-term loans offered to speculators in the stock market. This “convertibility” proved so profitable that European capital (hence, gold) flooded into Wall Street as did manufacturers’ profits. However, to keep the system going, stock prices had to continually rise. When the British banks perceived the need to attract gold back into Europe, they raised interest rates, which the US felt compelled to do in turn. This led to a precipitous collapse of prices in the stock market – speculators could no longer count on cheap loans to bid up stock prices – so suddenly, loans that could not be paid were called in and consumer demand tanked, which annihilated corporate profit, forcing them to lay off workers, further lowering demand. It was a perfect storm, a liquidity trap that resulted in a catastrophic hoarding of assets. An additional factor was the unwieldy war-reparation loop: US loans went to Germany, which then paid WWI reparations to France and the UK, which the latter could then cycle back to service their debts to the US. Taken together, the entire system seized up in a deflationary spiral, sucking the world economy into the worst depression yet.
To address the crisis, President Hoover pursued the “politics of administrative intelligence”, according to which the government might aid industry, but not coerce it because that would “snuff out the divine spark” of entrepreneurialism. Arguing that banks should voluntarily “band together”, Hoover injected liquidity in the banking system, but it was too little, too late. Moreover, he continued to adhere strictly to the gold standard, which limited his policy options to austerity, essentially to let the economy bottom out without government intervention. This perpetuated the liquidity trap, a spectacular failure of policy. Economic actors continued to hoard their money. Businesses feared to invest because of low demand. Hoover’s veto of the Relief Bill – “it is not the business of government” – only made things worse.
The third developmental stage of capitalism – “the age of control” – involved direct government intervention, a major departure from the laissez-faire policies of the Republican Party. This was intended to jumpstart the economy out of the depression. With the New Deal, President Franklin Roosevelt initiated a series of new policies, but at the start, he principally addressed the psychology of the country, offering hope and the promise of security – so that people would begin to spend their money rather than hoard it and independent companies would again invest in fixed capital and start hiring. Social Security would become the cornerstone of the welfare state, providing some economic security. Later, FDR’s “developmental policies” had two forms: 1) state-funded public corporations that gave contracts to the private sector; 2) direct public investment outside of private channels.
During the first 100 days, FDR shored up the banking system with regulations and liquidity guarantees. Because easy capital convertibility was “the source of the bust”, his immediate goal was to “slow capital down”; this would curb speculative panics and make banks both less dependent on transactional liquidity and less vulnerable to leveraging. To accomplish this, the Glass-Steagall Act separated commercial banks from investment banks. In addition, the Agricultural Adjustment Administration moved to restrict commodity over-supply, essentially paying farmers to produce less. In the industrial sector, the National Recovery Administration sought to accomplish a similar rise in prices, wages, and profits, again via limiting supply. This early activism went a long way to restoring confidence that something could be done to escape the deflationary trap and even to return to growth.
The immediate crisis over, political polarization set in. Feeling safer, bankers opposed all further government direction and regulation. Social Security was slanted to help white male wage earners in manufacturing, in effect ignoring rural and domestic labor, women, and non-white races, and hence fomenting political division within the New Deal coalition. By creating an institution to legally enforce collective bargaining rights (the National Labor Relations Board), FDR allied with the working class and earned the bitter enmity of much of the business elite. Moreover, for ideological reasons, conservatives opposed the New Deal’s deficit spending.
By 1936, the economy had substantially recovered, at least in measures of price, production, and income. However, because extensive retooling led to a surge in productivity, unemployment levels remained unacceptably high – less labor was needed. By 1937, FDR hesitated to continue deficit spending for the usual concerns: inflation, uncertainty about government actions, and not least, the fears of white supremacists as blacks entered the wage work force. While many in his Administration argued that only government spending could bring full recovery and hence must continue, FDR decided to cut back. The result of his deferral to conservative ideology was a severe economic downturn in 1937.
As Keynes observed, it appeared politically impossible for the capitalist democracies to intervene at sufficient levels to end the depression. Monetary policy alone – making credit available by setting low interest rates – was not up to the task. What was needed, in his view, were public investments that led to expenditures on capital goods, then consumer goods, igniting the “fiscal multiplier”, i.e. to spur private-sector investment, thereby boosting industrial development into a “virtuous spiral” of self-reinforcing growth. In his view, nothing short of the “socialization of investment” would work.
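Keynes’s “fiscal multiplier” reduces to simple arithmetic. The sketch below is my own illustration, not an example from the book, and the marginal propensity to consume (mpc) is an assumed figure:

```python
# Keynes's fiscal multiplier (my arithmetic; the mpc value is assumed):
# if households spend a fraction mpc of each new dollar of income,
# $1 of public investment generates 1 + mpc + mpc**2 + ... dollars
# of total expenditure, i.e. 1 / (1 - mpc).

def fiscal_multiplier(mpc: float) -> float:
    """Total spending generated per dollar of public investment."""
    assert 0 <= mpc < 1, "marginal propensity to consume must be in [0, 1)"
    return 1 / (1 - mpc)

def total_spending(public_outlay: float, mpc: float) -> float:
    """Public outlay plus all the induced rounds of private spending."""
    return public_outlay * fiscal_multiplier(mpc)

# With an assumed mpc of 0.75, $1bn of public investment yields
# $4bn of total expenditure:
print(total_spending(1.0, 0.75))  # 4.0
```

The same geometric logic explains why austerity runs the spiral in reverse: each dollar withdrawn shrinks total expenditure by the multiplied amount.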
What finally overcame the depression was putting the world economy on a war footing. American factories would beat fascism. Interestingly, US economic mobilization was capital intensive and labor saving, whereas the Axis powers relied on labor-intensive policies, including slave labor. While big government came to be viewed as a fully legitimate force in terms of counter-cyclical fiscal policy, a coalition of Republicans and southern Democrats persisted in attempting to roll back FDR’s ambitions, such as his Bill of Economic Rights, which was intended to guarantee jobs and provide housing, medical care, education and a living wage. Truman was completely hamstrung by this opposition, settling for the Fair Deal – moderate income redistribution, timid adversarial regulation, and subsidization of some private investment; business investment was encouraged largely by tax incentive rather than direct public investment. Virtually all long-term fixed capital investment would be decided in corporations.
The allies set up the Bretton Woods system in the aftermath of WWII. Favoring free trade in goods to promote the multiplier effect, the US insisted the Dollar should function as the world’s reserve currency (pegged at $35 per ounce of gold). The International Monetary Fund was set up to enable and enforce the requirements of Bretton Woods, i.e. to manage trade imbalances, provide emergency liquidity, etc., in exchange for commitments to certain economic policies. Flaws in the system – the buildup of US Dollars outside the US and the inherent risk in the gold peg – did not appear for decades. With the 1948 establishment of the General Agreement on Tariffs and Trade (GATT), trade preferences of the past were regulated and largely abandoned, though some protectionist measures were tolerated.
Given the drop in military spending, it was fortunate that explosive mass consumerism (with suburbanization, the automobile, credit cards, big box retail, and TV) could fill the looming gap in demand. Consumption became a national phenomenon, even a civic identity, a quest for perpetual satisfaction and entertainment – perhaps almost a religion. Meanwhile, Eisenhower invested heavily in infrastructure, notably the national highway system.
In the US, there was no overarching national economic plan, though the military-industrial complex exerted a decisive influence on investments. One of the most important characteristics that differentiated the US was the “fiscal triangle”, which consisted of government (25% of GDP), the private economy, and a civic sector (501(c) for non-profit entities that were exempt from Federal taxes); tax exemption enabled the civic sector to invest in social initiatives, such as daycare, at a far greater level than other developed countries, which depended largely on their governments to do so. This had the effect of delegitimizing government-sponsored social initiatives in the US.
In a further development, the Fordist trajectory – squeezing every last ounce of productivity out of centralized factories – was approaching the point of diminishing returns, leading into consolidation (or “recomposition”) of corporations into larger entities rather than innovative micro-ventures. In addition, as war-torn nations recovered brilliantly, the dollar came under speculative pressure because of its over-valuation. Ironically, the defeated Axis powers, Germany and Japan, began to radically experiment with industrial organization, creating just-in-time manufacturing and global decentralized networks of industrial suppliers.
Finally, the marginalized and repressed (minorities, women, rural backwaters, etc.) began to demand more, which, combined with the youth revolt, led to a tumultuous decade in the 1960s. This signaled the end of the liberal coalition: Democratic policies lacked effective institutional mechanisms to address the complex societal issues that were emerging and largely outside the scope of macroeconomic tools – of justice, a fairer redistribution of wealth, and “equal rights”. Blue collar workers began to join the Republican Party, which promised lower taxes and the elimination of the “nanny state”, at least to the “undeserving”; this was reflected in Nixon’s “southern strategy”.
By the end of the 1960s, serious problems began to surface in the American economy. For starters, corporate profit began to stagnate due to international competition. With an over-valued dollar, the trade deficit became chronic and ever-enlarging, putting pressure on the gold peg, thereby threatening the world trade system. Nixon was forced to abandon the gold standard. The OPEC cartel was able to quadruple oil prices, only the most prominent of primary-material industrial inputs that became more expensive as supply limits were hit. As a result, productivity growth slowed – corporations invested less in fixed assets, sought opportunities to exploit low-cost labor in regions outside the “rust belt”, and faced the limits of expansion – saturated markets – in a number of essential product lines. (For example, you rarely need more than 2 cars in a family.) Well-paying manufacturing jobs began to give way to the service sector, which offered fewer career opportunities and lower wages. This became the era of “stagflation”, the combination of slow growth and inflation that appeared to repudiate the “tradeoff” in Keynesian economics.
The emerging economy, Levy says, was exemplified by Houston. With its industries highly automated and Texas oil a limited sector, the principal engine of economic growth was real estate: houses, offices and all related services spread into unregulated spaces without any long-term development plan. The potential scope of this kind of growth, of course, is limited and far less dynamic than the creation of a corporation that would seek to continue to produce into the future. What you get are McMansions, massive office towers that are increasingly difficult to fill, and an industry that feeds on legal and financial fees; beyond construction, the working class is left with service jobs.
Following the defection of the blue collar vote to the GOP, the Democratic Party entered a period of decline. Not only did stagflation tarnish its record for economic stewardship, but its factionalism and inability to formulate and unite around new progressive policies left a void for the Republican party to fill. From here on, neo-liberalism – the unfettered “free market” with minimal government – was placed at the center of political debate, supposedly a neutral mechanism that would somehow make the difficult societal decisions for us with the greatest efficiency. Any government action, in this schema, would inevitably distort the cleansing and rational action of the “real” market, i.e. there was a “natural employment level” and a Panglossian equilibrium to expect. Milton Friedman argued that the only role for government was to control the growth of the money supply, which Fed Chair Paul Volcker pursued in the “Volcker Shock”. In the election year 1980, the GDP shrank 7.9%, dooming Jimmy Carter’s reelection bid.
The fundamental shift of the “Reagan Revolution” embodies the fourth stage of American capitalism, which Levy dubs “the age of chaos”. According to Arthur Laffer’s precepts of supply side economics, government should cut taxes and deregulate the economy, staying out so that entrepreneurs could run the show; economic growth would raise tax receipts while the welfare state was rendered obsolete. What resulted was not what Laffer predicted: tax cuts did not lead to a revival of manufacturing; the budget was not balanced (i.e. tax cuts led to chronic budget deficits); we did not outgrow the need for welfare; the trade deficit did not shrink, but grew massively. Instead, with a mass of roving capital resulting from the tax cuts, there was a significant divestment in industry – investors preferred short-term speculation, which heightened instability when compared to the fixed investments of the recent past. Crucially, longer-term income generation shifted from production to the appreciation of assets, that is, stocks and real estate. Speculation depended on confidence that assets would continue to appreciate. It became the task of the Fed to maintain that confidence. Essentially, the rich got richer. Finally, the rise in asset prices was buttressed by inflows of foreign capital, from a mixture of chronic trade deficits and confidence that the US Government would protect property rights.
Perhaps worse, the new goal of many corporations became profit from financial manipulation. In concrete terms, quarterly profits had to be convertible and liquid, which was the exact opposite of fixed capital investments, where the time horizon for profit had been 20 years or more; in other words, wealth had been based on the depreciation of fixed assets of the means of production and now it was based on stock appreciation. Indeed, the Reagan boom was the first in the history of capitalism in which fixed investment declined as a percentage of GDP. Stock price became the new measure of corporate success and by the mid-80s the profit target deadline had fallen to 2 years. Accounting value no longer derived from the “historical” projection of productive capital’s past use onto the future, but from the present value of expected future incomes, as reflected in the stock price. This sparked an industry of accounting gimmicks and paper entrepreneurialism. The line between reality and vision was blurring, a truly postmodern adaptation of capitalism.
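The valuation shift Levy describes amounts to a discounted-cash-flow calculation: price the firm by its expected future incomes rather than by the depreciated cost of its fixed capital. A minimal sketch with assumed numbers (mine, not the book’s):

```python
# Discounted-cash-flow sketch (assumed numbers, not the book's):
# value an asset by the present value of expected future incomes
# rather than by the depreciated cost of its fixed capital.

def present_value(cash_flows: list[float], discount_rate: float) -> float:
    """Discount a stream of expected annual incomes back to today."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

# A firm expected to earn $10m a year for 5 years, discounted at 10%,
# is "worth" about $37.9m today, regardless of what its mills
# originally cost:
print(round(present_value([10.0] * 5, 0.10), 1))  # 37.9
```

Since the inputs are expectations rather than records, the calculation invites exactly the “paper entrepreneurialism” the review mentions: inflate the projected cash flows and the “value” rises with them.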
Leveraged buyouts came to dominate Wall Street. Corporate raiders bought shares backed by “junk bonds”, which were high-risk, high-interest financial vehicles. In order to meet debt payments, raiders sold off company assets and cut costs, often laying off managers in a clear sign of the power shift in favor of finance, then would sell back the shares on Wall Street at a profit. While this process “hollowed out” many corporations, a fictive world of trading took over, that is, trading between raiders and profiting by assigning ever higher asset prices. Moreover, as the dollar was the world’s reserve currency, the US could attract investments from abroad to finance its debts, a unique advantage that other developed economies did not enjoy.
After a brief recession that marred the George HW Bush Presidency, in 1993 the Clinton Administration initiated another macroeconomic expansion with Rubinomics. With capital mobility and the globalization of manufacturing and finance accepted by this time as a given, the Fed lowered long-term interest rates and banks’ profits rose, thereby expanding credit, which together ensured that asset prices would continue rising to ever-more-unprecedented levels. In addition, Clinton paved the way for further globalization in a series of treaties, leading to a surge in cross-border investments for the long term, effectively siphoning off capital that might have been invested in US factories.
With the computer chip and internet, optimism for the future remained strong in the “New Economy”. Though there were many failures, e.g. the dot-com bubble, a number of firms figured out a way to be profitable in both data mining (Google) and a drive to monopoly retail (Amazon). If Silicon Valley became a hotbed of “innovation” and speculative investment, its economic impact appears less revolutionary than advertised. Levy does not address these issues at sufficient length and it is a subject of controversy in academic economics. I will write more about them in subsequent essays.
W tried to extend the asset boom in what he called the “ownership society”, which in theory opened enrichment to anyone who could afford a home. Levy writes: “a Wall Street funding, rating, and trading cartel” emerged, i.e. investment banks, commercial banks, and rating agencies colluded to draw fresh investors into the real estate market. This involved a number of financial inventions that concealed leveraging and risk, such as mortgage-backed securities or collateralized-debt obligations, yet maintained the appearance of perpetual gains in real estate. At every stage, cartel members charged exorbitant fees and got rich trading amongst themselves. The only reality constraint was that people had to continue buying, which cartel participants accomplished by offering ever-riskier loans to less and less qualified borrowers.
It worked until it didn’t. Suddenly, prices began to fall, payment delinquencies and bankruptcies exploded, “securitized” loans were called in, and the system stalled as fresh loans became unobtainable – obligations could no longer be rolled over. It was a classic panic, baked into the logic of the system. Once again, the Fed stepped in, first to stop the panic as the lender of last resort, then to purchase assets that no one else wanted (“quantitative easing”). Lacking imagination and vision, the Obama Administration merely propped up the asset-based macroeconomy, in effect re-establishing the conditions for its perpetuation via cheap credit and deficit spending rather than seeking to initiate some new phase of capitalism – the Fed subsidized the banks, which passed the financial costs on to homeowners. Burdened by debt, stagnant wages, and depreciating home values, consumers spent less, rendering the recovery slow, if steady. It was criticized as another “jobless recovery” with insufficient fiscal stimulus, a recipe for political failure.
At present, we remain trapped in a pattern dependent on converting leveraged capital into fresh income for investment in real estate and stocks. Faith in the finance-led vision of the global economy is waning. We know this cannot go on forever, as we saw in 1929 and 2008. But no new vision has yet emerged. Clearly, Levy thinks US capitalism is ripe for some kind of fundamental transformation. State action, he observes, is always required to initiate a new stage of capitalism. Since this book is a history, one must read between the lines to get at his opinion on what should be done.
On the one hand, Levy appears to believe that the US should raise labor income as well as guide investment to productive uses, which were the goals of the age of control, as advocated by Keynes. I do not see how this is politically feasible, unless of course we take advantage of a crisis (which, he points out, the Obama Administration failed to do). On the other hand, Levy implies that more productive investment at home would help. In light of the so-called China threat, I see this as broadly plausible, but it raises the question of how we might develop, and in which direction.
In my opinion, much of the problem that industrial capitalism faces stems from the saturation of our basic needs: not even the New Economy – the information age, the internet, the computer revolution, or what have you – proved capable of catapulting us to a genuinely higher level of prosperity. We have enough cars, enough TVs, clean water, and an electric grid, all of which were engines of growth in the golden era that ended in 1973. What else would we need? Sufficient food and entertainment, to be sure, but also a new definition of the meaning of life, beyond consumerism.
Given the environmental crisis we are facing, I think we need to abandon our obsession with GDP growth. We are rapidly reaching the limits of what the planet can sustain. Though this is beyond the scope of Levy’s book, perhaps we should begin to seek satisfaction in having enough rather than endlessly wanting more. Some form of wealth redistribution or even a guaranteed minimum income for everyone would help, particularly given the threat that robotics and AI pose to employment. I will explore these issues in later essays.
My criticisms of the book are minor. I learned an immense amount; it put into context things I have studied and puzzled over for 40 years. I believe it is a masterpiece that deserves to become an academic standard. Indeed, as we face great uncertainty, I feel optimistic that we can forge a new direction for capitalism, a new age that could be more equitable, sustainable, and stable. The alternative is bleak.
Note. Forgive me if you’ve already read this, but it was one of my first Substack essays and I believe many have never seen it. The essay was an experiment, far deeper than I normally dare to go. If you like it, I will do more like this.
Related reviews: