Blood and Money: War, Slavery, and the State by David McNally (2020)
In all class societies money is enmeshed in practices of domination and expropriation. That entanglement began with enslaved, commodified humans, taken as plunder and readily translatable into a monetary equivalent. In the ancient Greco-Roman world, slavery, markets, and money evolved in tandem, each indispensable to the others. Snatched from their tribe or community, slaves, most often women and children used as domestic servants, were kin-less, no longer human, living “pieces of property” (Aristotle), like cattle. The luxury of having slaves allowed the ancient Greeks and Romans to create “civilization.” Ultimately, as Nietzsche recognized, slavery is a condition of “every higher culture.”
Ancient Greece is rightly recognized as the first extensively monetized society. Not that markets and commodity exchange were unique to Greece. On the contrary, these were widespread in the ancient world, as were a variety of special-purpose monies, items that performed a range of monetary functions, such as measuring the value of things, without operating as universal currencies. In ancient Egypt and Mesopotamia, for example, payments were frequently made by means of a common standard unit, or measure of value, such as copper, silver, or grain, in which the values of goods or services could be expressed. When two traders wanted to exchange quantities of cloth and barley, for instance, each good might be said to be worth a particular amount of copper. This enabled a comparison of quantities of one thing with another. Copper served here as a unit of account, a standard measure that could be used as a reference point in trade, with little or no copper, grain, or silver actually changing hands. A document from the New Kingdom in Egypt (1550–1070 BCE), for instance, registering the purchase of an ox, indicates that its price was fifty deben of copper (a little over 4.5 kilograms). But only five deben of the fifty were actually paid in copper; the balance was rendered in other commodities whose values were expressed in copper. A similar pattern is observed in Mesopotamia, where various legal codes regulated payments like fines and wages, as well as commodity prices, in fixed weights of silver. Grain, too, could be used for payments, according to an established silver/grain ratio. In Homer we frequently observe cattle serving as a standard of value.
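To make the unit-of-account arithmetic explicit, the ox purchase above can be laid out as a simple breakdown (an illustrative rendering of the figures just cited; the particular goods making up the balance are not specified here):

\[
\underbrace{50 \ \text{deben}}_{\text{price of the ox}} \;=\; \underbrace{5 \ \text{deben}}_{\text{paid in actual copper}} \;+\; \underbrace{45 \ \text{deben}}_{\text{other goods valued in copper}}
\]

The copper thus measures the values being exchanged even though very little copper itself changes hands.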
These special-purpose “monies” didn’t operate as the unique socially sanctioned representative of value that moved through circuits of exchange in the way that full-fledged or general-purpose money does. And while the kingdom of Lydia in Asia Minor (now part of western Turkey) seems to have been the first in the Mediterranean world to have introduced coinage bearing an insignia of the state, the absence of small coins there suggests it wasn’t a highly monetized society. Among the states of classical Greece, particularly Athens, however, a unified coinage issued by the state would become fundamental to daily life, mediating incessant exchanges of goods and services. Coins functioned as a means of exchange and as general equivalents, capable of being traded for everything from olive oil to sex, or for paying government fines. Most significant here is the archaeological recovery of large quantities of small coins from the classical period, indicating that they weren’t simply used by the wealthy for large purchases of land, enslaved people, and livestock, but were also deployed in small denominations to facilitate everyday transactions among ordinary citizens. State-issued money in Athens (among other Greek city-states) was found everywhere, forming an indispensable element of daily life and provoking insecurities, lest one not have enough of it. Not for nothing did Aristotle declare that money “is a measure of everything,” and that it functions as such by “making things commensurable.”
Homeric Greece, or at least the life of the Greek heroes depicted in the epics, revolved around an aristocratic gift economy. Social status was tied to “competitive generosity,” the ability to bestow lavish gifts on one’s peers along with displays of munificence toward those beneath one in the social order. Gift-giving was essential to the exercise and display of noble virtue. Guest-friendship and gift exchange bound people together in a nexus of reciprocal obligation, in which receivers owed their benefactors services, assistance, and counter-gifts. Gift-giving systems can certainly be hierarchical ones, reproducing patterns of social inequality. But some critics have mistakenly assumed that these patterns involved a sort of capitalist calculation. They were in fact something entirely different. As much as gifts involved expectations of counter-gifts (hospitality, services, or prized items), the wealthy were driven not to accumulate goods as ends in themselves, but rather to outdo other aristocrats in the extravagance of their generosity. More accurately, perhaps, they were driven to accumulate wealth in order to disperse it. Alongside valor on the battlefield, gift giving was the key to honor and power.
Critical anthropology has identified three dominant forms of socioeconomic reciprocity: generalized, balanced, and negative. Generalized reciprocity approximates a system of “pure” gift giving in which those who can give do so, and those in need receive. Balanced reciprocity involves direct exchange (barter) of goods considered in some sense to be equivalents. Negative reciprocity entails getting something for nothing, or at least for as little as possible, by means of plunder, theft, and other forms of violent appropriation. As Marshall Sahlins suggests, these three modes of reciprocity are governed by social distance. The closer and more intimate the social bond (e.g., kinship), the more the pure gift-giving of generalized reciprocity will prevail. The greater the social distance, the more negative reciprocity (plunder and theft) will be the norm. Where two or more of these modes of interaction are in play, relations between them often become unstable. Indeed, it was characteristic of the archaic Greek world before coinage that a dramatic growth in the sphere of negative reciprocity, particularly raiding and plunder, began to undermine the generalized reciprocity that prevailed within more intimate social relations.
Before turning to those developments, let’s note other crucial features of non-monetary economies based on reciprocity. To begin with, they’re multicentric systems. Their socioeconomic organization involves distinct spheres, governed by divergent principles and values. A realm in which direct exchange (trade, or balanced reciprocity) occurs may coexist with a domain of negative reciprocity (plunder and theft). Trade and plunder (balanced and negative reciprocity) are permissible only with “outsiders.” The sphere of generalized reciprocity, within the domain of the clan, tribe, or village community, is allergic to such practices, and there gift-giving dominates. In the intimate world of the local community, not only are theft and plunder considered evil, but so are efforts to commodify and instrumentalize people and most goods. The bulk of goods move through reciprocal circuits of societal reproduction, preserving and reproducing people and their social bonds in an ethos of mutuality. Rather than things belonging to people, persons adhere to things. People belong, for instance, to the land, the rivers, and the forest. They belong to their ancestors and their gods, to their community and kin, to the shared histories of the living and the dead. Individuals belong to their dwellings and to the places of public life – the assembly spaces, sacred trees, temples, and communal sites. In fact, the Homeric adjective indicating that an individual is free, eleutheros, actually refers to a state of belonging to others and to the community. To be free is to belong, whereas the unfree lack belonging, like a captured foreigner ripped from her community, with whom the conquerors acknowledge no shared histories or communal obligations. Whereas the unfree inhabit a realm of disconnection, the free belong to a social world in which persons and things are alive with memory and current life: they connect groups in the here and now and link generations through time.
In aristocratic gift-giving, the nature of a prestigious gift was that it had a distinguished history. The most prized shield was worn by a legendary hero in an epic battle; a valued goblet touched the lips of a great king; a treasured cauldron had been passed down through the generations of a noble household. The same was true, on a more modest scale, for commoners – land, jewelry, treasured heirlooms and pieces of clothing all had embodied histories and identities. They were no more alienable by the individual than their own body (a condition reserved for slaves). They were incommensurable – their histories were specific to persons and groups. They couldn’t be traded, because they had no equivalent. A society of this sort may well have had special-purpose monies, that is, specific goods used to measure value or serve as a means of payment, but they would have operated largely in the sphere of trade with strangers. Goods were exchanged in Homeric Greece (1100–800 BCE), but within local communities, this predominantly took the form of gift, not market, exchange.
Aristocratic gift-giving reproduced rank and social difference at the same time that it reknitted networks of reciprocity. And for the poor, obligations to provide flows of wealth to those above them – what would later develop into rent and taxes – were modeled on the image of the gift. In the world of the epics, the term gift included a disparate set of goods and services, including prizes, rewards, fines, taxes, fees, and even loans. Yet fines, fees, and taxes could easily morph into exactions, just as loans might culminate in debt service and even debt bondage. In all these ways, the “gift economy” could shade into relations of appropriation and class exploitation, especially if intra-ruling-class competition were heightened in response to new forms of accumulation, such as raiding and plundering. It’s significant that during the heroic age much of the wealth that could be dis-accumulated through gift-giving was acquired via warfare, raiding, and slaving. Gift economies could thus be deeply imbricated with slavery and debt bondage, as was certainly the case in ancient Greece. Nevertheless, however much hierarchical social relations were reproduced in and through some forms of gift economy, the latter obeyed a social logic foreign to commodity exchange. The appropriateness of expressions of generosity and hospitality among aristocrats, for instance, was not determined by market considerations of equivalence. What was appropriate was governed by historical patterns of reciprocity and by social standing. Similarly, social norms were meant to govern the sharing of booty (goods acquired through negative reciprocity). Yet, these norms might become unstable and contested, and turning to Homer’s texts, we readily detect symptoms of precisely such a breakdown in aristocratic norms. From the outset of the Iliad, we encounter a collapse of noble leadership, centered on a crisis over distribution of wealth acquired through war, plunder, and trade. By the 8th century BCE, raiding was an established aristocratic practice for accumulating wealth in archaic Greece. Using longboats rowed by dozens of men, raiders launched surprise attacks in order to capture cattle, enslaved women, and items of precious metal. So routine was such plundering activity that in his Politics Aristotle describes warfare as “a way of acquiring wealth.” So inextricably connected were plunder and trade in the ancient world that markets were literally powered by warfare. Yet as accumulation by raiding grew from about 800 BCE on, it also produced powerful conflicts over distributive norms within the dominant group.
At the beginning of the Iliad, we learn that the noble warrior Achilles had been angered by Agamemnon, leader of the Greek war against the Trojans. Having had to part with some of his booty in order to appease gods and men, Agamemnon demands that he be granted Briseis, daughter of a noble Trojan family, who was earlier awarded to Achilles as a war prize. Agamemnon’s acquisitiveness violates customary norms of distribution. The crisis sketched in the Iliad, which also runs throughout the Odyssey, reflected the undermining of more traditional relations by new patterns of war, exchange, and accumulation, which were also creating preconditions for the emergence of coinage.
Archaic Greece (roughly 700–480 BCE) underwent a protracted expansion in people, settlement, and trade. As population increased, so did the movements of persons and goods. New colonies were established, and new cultural contacts created. With these came an upsurge in raiding and communal warfare, the first records of which date from the late 8th century. All of this drove a series of socioeconomic transformations, culminating in the political revolutions that would usher in both the classical polis and coinage. While historians continue to debate the scale of the demographic takeoff in the archaic era, there is little doubt that crucial changes in diet and metallurgy propelled population growth, which in turn sent land-hungry people in search of colonies. Fueled though it may have been by a desire for agricultural land, colonization also fostered slaving and market exchange. The capture of women as unfree wives and concubines was nothing new, but not all the women and children captured would necessarily be incorporated into the household. There was increasingly an option to sell them.
Early Greek colonists, at least on the Black Sea coast, collaborated with local chieftains in the slave trade. Conquest of land involved not just capture of persons, but also the sale of some as human commodities. Colonization additionally involved the establishment of Greek trading posts. Agricultural produce comprised a considerable part of what Greek colonists sent to these markets. But shares of the booty claimed by raiding parties, from precious metal goods to enslaved people, would have played no small part. The fruits of this plunder were often exchanged for manufactured goods from the East, such as iron, fabrics, metal objects, and precious ornaments. Indeed, Greeks may well have deliberately increased their slaving activities to enhance trade with the East.
While colonization and raiding opened up new market-oriented activities bound up with slaving, we shouldn’t assume that this involved the rise of a new merchant class. On the contrary, raiding was a preeminently aristocratic activity. It was wealthy nobles (aristoi) who had the resources to build or purchase boats and equip them with both rowers and weapons, and the epics clearly depict them as leaders of raiding parties. In particular, younger sons of aristocratic families were often raiders, just as they were among the most numerous of colonizers. The distinction between warriors and traders is thus a fluid one, turning on “little but an ideological hairline.”
The sustained wave of colonization, which created new settlements and trading centers, removed the young men of the colonies from traditional spaces of aristocratic reciprocity. At sea for extended periods and no longer living alongside the old noble families, colonists and traders advanced more in the world by piling up personal wealth than by relying on customary networks of obligation. Indeed, when establishing constitutions for new poleis, they marked their novel status with more egalitarian laws, at least for those deemed citizens. As a result, the colonies often contributed to the growth of an anti-aristocratic ethos that became increasingly potent after about 700. As new fortunes were made in the nexus of raiding, slaving, and trading, a new fluidity entered the order of rank and power, with lesser lords sometimes acquiring fortunes exceeding those of their former superiors. At the same time, craftsmen and soldiers increasingly entered into market transactions. With these changes came the cultural transformations associated with “the orientalizing revolution.” Luxury goods from the East entered the lives of Greek aristocrats on a growing scale, and both traders and migrant craftsmen established new cultural contacts. The myths, rituals, and household goods of the nobility increasingly reflected admiration of Eastern societies – their art and their highly stratified social orders. In the words of one historian, “The man who draws his boat down into the sea and sails it is no longer tied to the man who had previously ordered his life across the boundary of his fields…The potter who sells his vases by the docks must make what the foreigner wants, not what the basileus [lord] used to demand…The mercenary must learn to take orders from any general set over him, not just from the commander of his phratry [extended kinship group].”
This wasn’t a market society in any modern sense of the term, never mind a capitalist one. The vast majority of people were peasants producing most of their own subsistence goods through the labors of their households. Nonetheless, more members of all social classes, including peasant farmers, did enter markets, and the wealth and sensibilities generated there disrupted older patterns of social life, putting considerable strain on traditional forms of reciprocity. Wealth derived at a spatial distance from the community augmented the social distances between its members. Distancing, as we’ve already observed, was fundamental to negative reciprocity. It permitted unbalanced transactions, including violent ones. And as colonization, slaving, and the growth of trade networks all developed, wealth based on negative reciprocity began to have ever more profound impacts within traditional Greek communities.
Perhaps most significant, these new dynamics of accumulation shook up long-standing relations between rich and poor. To begin with, it seems clear that private ownership of land became more entrenched during this period, and that land was increasingly alienable. Moreover, the general direction of land transfers was from the poor to the rich, implying both growing dispossession, on the one hand, and increased concentration of landed wealth, on the other. Concomitantly, the practice of smallholders honoring the nobility with “gifts” was being displaced by regular payment of rents. By 600 BCE substantial numbers of small peasant farmers were classified as hektemoroi, tenants who turned over one-sixth of their produce to a lord. These rents had lost all semblance of gifts by the time of Draco’s legal codes (621 BCE), which harshly punished the poor for transgressions of the law, including failure to make payments. When rent was too onerous, or other economic hardships intervened, the small farmer, unable to count on the generosity associated with older forms of reciprocity, would have had to contract a debt. But debts incurred in this way were negotiated against the security of their land or their body (or that of one of their kin). Indeed, for the rich, the whole purpose of loans may by now have been to indebt the poor as a step toward dispossessing them of their holdings. For the peasant, of course, to be displaced and rendered landless could only be a catastrophe. Dispossession often went hand in hand with enslavement or exile.
By Solon’s time (590s BCE), the social tensions depicted by Homer and Hesiod had erupted into intense social conflict. So profound was the turmoil brought on by popular insurgence that the Athenian aristocracy, seeing no way out, conferred on Solon dictatorial authority to rewrite the laws in an effort to end the crisis and restabilize the polis. Echoing Hesiod, Solon plainly signaled that rapacious practices of exploitation would have to be drastically curbed. The abolition of rents on land was a landmark moment in the process that gave birth to the independent peasant-citizen, a unique social type that united labor and self-rule, and that comprised the most radical ingredient of ancient democracy.
For the rich, the relaxation of exactions on the Athenian poor meant that the bulk of their surplus product would henceforth have to come from enslaved people. Moreover, if poor members of the community could no longer be subjected to bondage in any form, then it followed that enslaved people would henceforth enter the society principally as commodities purchased on the market. One of the reasons that commercial exchange had been kept at a distance from communal life probably had to do with the fact that the principal commodity exchanged with foreigners was often enslaved persons. The ancient Greek world was by no means unique in this regard. Enslaved people appear to have been the earliest goods traded in Neolithic Europe and the principal item of commerce among the indigenous peoples of the Pacific Northwest. In some societies, enslaved people in fact appear to have been the first acceptable form of private property, particularly where land wasn’t privatized. We find evidence of slave sales in Babylonia and Assyria around 2400 BCE. And further south, in Sumer, temple records indicate the presence of enslaved people by 2700 BCE and of an active market in foreign captives by about 2000 BCE.
As early as 1580 BCE, a large-scale trade in enslaved people had developed across the Indian Ocean – a case that prompted Orlando Patterson to remark, “Slavery was intricately tied up with the origins of trade itself.” Indeed, outside the ambit of the Greco-Roman world, the evidence strongly suggests that the Arab slave trade that began in the 8th and 9th centuries CE exceeded in size the staggering scale of the Roman trade, which at its height involved 250,000 to 400,000 enslaved people per year. But even though slave markets came later to the Greco-Roman world than to these other regions, by the time of Homer, war, slavery, and market exchange were inextricably connected, and the Mediterranean was home to some of the most active slave markets anywhere.
So closely integrated were warfare and trade that slave dealers in the ancient Greek world trailed behind armies, purchasing their captives. Throughout the ancient Greek world, violence and enslavement were central to a heroic narrative that pivoted, as we have seen, on the exploits of the aristocratic warrior. To be sure, there were other routes to enslavement. Debt bondage, or the sale of children and kin in order to raise money or acquire food and land, defined alternate passages to slavery. By the classical periods in Greece and Rome, however, both these routes were closed off following uprisings of the demos that won legal protection against the enslavement of citizens. Henceforth, enslaved people in these societies would be foreigners: outsiders seized in war or imported for sale.
This growth of markets in enslaved people prompted Giuseppe Salvioli to declare that enslaved people were the first commodities regularly sold for a profit in the Greek world. While the evidence is sketchy, it does seem that enslaved people were among the first commodities bought and sold, and among the most significant. Not only were ancient wars “slave hunts,” as Max Weber suggested; they also fueled some of the most active large-scale markets in the world at the time. In the Greek society captured in Homer’s epics, the prevailing aristocratic-warrior ethos meant that slavery was highly gendered. The victors typically killed the men who survived the field of battle, while enslaving the women and children. Warrior elites appropriated for themselves a select number of female and child captives, distributed some as booty, and sold off others for commercial gain. Over time, however, enslaved adult males, too, became available through commercial markets, and even the epics portray such sales.
We know that merchants trafficking in the “barbarian” regions on the edges of the Greek world carried thousands of enslaved people, many from the Danubian and Black Sea areas, to markets on the main trade routes, like Delos, Corinth, Chios, and Rhodes. Following the end of the Peloponnesian War (431–404 BCE), an extremely active trade in war captives and victims of raids developed at Delos, perhaps peaking around 100 BCE. And by the classical period of the fifth and fourth centuries BCE, public slave auctions were held every month in the Athenian agora, by which time commerce had supplanted warfare as the principal source of enslaved people in Greece. By the second half of the 1st century CE, chains with manacles were part of the standard equipment of the Roman soldier.
Enslaved people were a possession vital to aristocratic comfort in the same way that cattle were necessary to a flourishing agricultural estate.
Slave trading was the exchange of (socially dead) outsiders with other outsiders to whom one owed no communal obligations. And money, as the means of conducting such trade, was a medium for these alienated social interactions. It should come as little surprise, then, that in many societies, enslaved people have also served as money, or at least as a measure of value. Given their frequent status as the primary items of trade, enslaved people readily became the item by which other goods were measured. We observe this sort of slave-money in a wide range of societies, including many in precolonial and colonial Africa, in the Pacific Northwest of what is now Canada, among the Obydo people of what is now Brazil, and, perhaps most famously, in early Christian Ireland.
Monetized relations elevated the principle of abstract generality as a primary “way of seeing,” with philosophy increasingly imagining the world as consisting of universal substances or forms.
The modern capitalist operates as “producer” of commodities not by way of labor, but with his money. Money enables this by taking command of the labor of others. Its social power resides in the domination of their bodies (repositories of labor power). In the fully developed capitalist mode of production, this happens primarily by way of its domination of slave or wage labor.
One meaning of the 5th-century BCE Greek word for law, nomos, was distribution. At the root of law in the democratic polis, we thus find the idea of appropriate sharing in a communal meal: the just allocation of food. A just republic is therefore defined as one in which wealth is properly shared. As we shall see, the word for money, nomisma, carries this semantic charge as well. In addition to its long-standing link to distribution, the word nomos involved a whole series of connotations that suggested order, way of life, societal norms, and appropriate social relations. The social order in question was shared by both men and gods, who communicated and exchanged gifts with one another. The human side of this equation required sacrifices and feasts, often the responsibility of priests associated with hereditary groups known as gene (singular: genos). These esteemed private citizens continued to preside over communal rituals even as the democratic polis developed throughout the 6th and 5th centuries BCE, when city-funded festivals came to dominate the Athenian calendar.
From Solon’s early 6th-century reforms on, religious life and public ritual were increasingly governed by the city-state rather than noble households. The laws of distribution, nomoi, were regulated by public officials, working with the priests of the temple, as were the rites observed in sacrifices. Crucially, payments to the gods were matters of sacred custom, governed by civic statutes, not market relations. People did not haggle with the gods; they repaid the debt of life according to community norms.
Monetary substitution was widespread throughout the ancient world, with norms for measuring the value of bodies and lives originating at least as much in the domain of law as in that of the market. This included state-regulated, not market-determined, wage rates and prices.
The polis made its coinage legal tender, a state-sanctioned currency that all sellers and creditors were obliged to accept. This was reinforced by the state’s insistence on conducting its own business with coins, especially by requiring them in payment of taxes and fines. In important respects, therefore, monetization was a political process as well as a commercial one.
Enslaved bodies were acquired through war, which had to be fought by common soldiers. The ruling class conceived of these soldiers as not unlike enslaved people. As much as sacrifice of animals had been at the root of Greek culture, the sacrifice of soldiers was at the root of its wars and empire. And this sacrifice was obtained with money.
With the growth of the polis as the space of public life, sacred ritual increasingly revolved around public temples, many of which were built during the 8th century BCE. With older forms of reciprocity breaking down, aristocrats were pressured to redirect generosity by means of dedications to these temples. A new egalitarian sensibility identified public-spiritedness with redistribution via public institutions, over which the demos had increasing influence. This change has been described as a shift of redistribution, from gifts-to-men to gifts-to-gods; but it was also a shift from aristocratic charity to the poor, to (rich) citizens’ obligations to the state. As temples became repositories of wealth (and in some cases the first banks in the ancient Greek world), they also took over from private patrons the distribution of wealth to commoners by way of civic feasts. In a society in which the civic and the religious were integrated, temples became sites of economic transactions. They sponsored marketplaces and fairs; issued loans to city authorities; paid out wages, particularly for labor employed in temple-building projects; and sold donated objects in order to raise money to fund their diverse functions. In all these ways, temples became focal points for market transactions, while organizing the ritual events at which animals were sacrificed and food shared. Another way of saying all this is that with the erosion of clientelist relations of aristocratic life, the temple-agora-polis nexus arose in part to subordinate wealth to the community as a whole.
Coinage emerged as part of a project to assert the supremacy of polis-produced tokens of value over transactional spheres dominated by aristocratic luxury goods. The first coins, made of electrum, an alloy of gold and silver, were produced (as noted above) by the Lydian monarchy in Asia Minor around 600 BCE. Half a century later, the last of the Lydian kings, Croesus, had the first coins struck separately in pure gold and in pure silver. By then or shortly thereafter, a number of Greek city-states, Athens, Corinth, and Aegina among them, began producing silver coins. Coinage overcame noble exclusiveness by putting stamped precious metals into general circulation.
Tyranny as a political form emerged across the Greek world in the century after 650 BCE. Corinth experienced perhaps the most long-lasting succession of tyrannies (655 to 585 BCE), but Athens, too, knew tyrannical rule for much of the half-century from 560 to 510. Often, tyrants were dissident nobles seeking to break the domination of an elite network of aristocrats. Tyrants not only concentrated powers in their own hands; typically, they also appealed to the people for support, enhancing the rights of commoners in efforts to curb aristocratic influence. The emergence of such tyrannies seems to have been a product of major social transformations. First, the growth of poverty, indebtedness, and dispossession was rendering aristocratic rule increasingly intolerable for many of the lower sort. It’s instructive that Corinth, where tyranny arose first and endured longest, was also the most commercially developed Greek city at the time. If trade and commercial wealth contributed to social differentiation and class grievances, then it’s no surprise that a popular reaction against noble authority should have come first in a major trading city. Second, with the rise, toward the end of the 8th century BCE, of hoplite warfare, based on a mass of heavily armed troops recruited from the middling sort, war and politics became more reliant on non-aristocratic groups. The ethos of the new warfare foregrounded mass action, rather than the skill and valor of the heroic individual. Hoplites represented perhaps one-third of the men of a city-state at the time, and an aristocrat seeking to break the power of traditional noble households might easily appeal to this group, whose members possessed arms and a commitment to democratization. But as much as aspiring tyrants might mobilize this social layer for incursions against the old power structure, as appears to have taken place at Corinth, these democratically inclined citizens could also launch such upheavals themselves.
Tyrants frequently encouraged the growth of trade and colonization, perhaps as sources of funding for a new kind of state. After all, the new infrastructures of public space and power all came with considerable costs, whether these were used to expand the agora, construct temples, foster urban festivals, create water systems, or finance warfare. But if trade and colonization were to provide the wealth indispensable to emerging forms of governance, they could more readily do so if they were integrated into monetary circuits dominated by state-sanctioned means of payment and exchange. Just as politics were increasingly recentered, shifting from the noble household and the aristocratic symposium toward the agora and the assembly, so were the circuits of wealth, as coins bearing the authority of the city-state displaced precious metals exchanged between aristocratic households.
The upsurge in temple building in the late 8th century involved large communal efforts that accompanied the political rise of the demos. By the time of Solon’s reforms (the 590s BCE), the polis and urban religion were advancing together, as exemplified in the erection of stone temples, the reorganization of festivals, and dedications of marble statues. All of these processes were extended under the tyrannies of Peisistratus and his sons, who dominated most of the half century from 561 to 510, making religion more accessible to all. Private wealth was thereby rendered accountable to a polis that expressed a new civic consciousness. This visibilization of power also involved a spatial revolution, and places of assembly, ritual celebration, theater, and market activity were fundamental to the development of the city-state around 600 BCE. Thus the expansion of the agora and the erection of temples were accompanied by the construction of a new theater of Dionysus to accommodate tragic performances. Such spatial transformations invariably involve economies of human labor and expenditures of wealth to pay wages, and to purchase tools and building materials. After their construction, many of the activities conducted within these public spaces also required ongoing expenditures, from the purchase of meat for festivals to the payment of poets for their performances. Tellingly, the new festivals offered cash prizes in their competitions, making monetary payments to poets in place of precious-metal gifts like goblets or cauldrons. In addition, the preeminent Greek coin, the Athenian owl, first produced around 515 BCE, carried the seal of the polis and was backed by its laws. Monetization was thus as much about new practices related to law, religion, and the state as it was about novel patterns of trade. Nevertheless, the owl was legendary for the purity of its silver content, which in part enabled it to become the first genuine world money, accepted throughout the entire Mediterranean trading region (and regularly copied by other states). Archaeologists have found owls in southern Anatolia, Syria, Egypt, Cyprus, and Afghanistan. Athens’s owl coins thus represented a unique fusion of political and economic dynamics: they bore the imprint of a powerful state that could enforce their circulation within its sovereign domain; and they were made of such high-quality silver that they were widely accepted by merchants, state officials, and others far beyond the field of Athenian jurisdiction. They thus represented money’s first full-fledged modular form, metallic coinage.
The silver for the owls came from Attica’s silver mines at Laurium, and these mines were worked by the largest concentration of enslaved people in the Greek world: perhaps as many as 30,000 in the late 4th century BCE. The value of Athenian owls, which created the dominant modular form of money for nearly 2,500 years, was therefore rooted in the past labor of thousands of enslaved people. Equally crucial, the reach of this world money owed more to war than it did to trade, more to blood than it did to markets. Like enslaved people, soldiers were acquired through money.
For much of human history, the mobilization of military forces was among the largest of state undertakings. Large-scale warfare requires the recruitment of thousands of soldiers, their provision with weapons and equipment (swords, ships, horses, and shields), and their provisions (food, tents, clothing), along with monetary reserves for wages. As time went on, mercenary soldiers were employed on ever-larger scales, not just as bodyguards, but as soldiers and sailors for military campaigns. By the time of the Peloponnesian War between Sparta and Athens (431–404 BCE), both sides were utilizing hired fighters on a substantial scale. The year after the war’s end, during the violent class struggles that shook Athens, oligarchs and democrats alike hired armed combatants. This was followed almost immediately by Cyrus the Younger’s ostensible recruitment of 10,000 Greek mercenaries in a campaign to win the Persian throne. Even larger numbers of hired fighters from Greece are said to have served in Egypt over a period lasting a century and a half. By then, large-scale hiring of Greek mercenaries – in the thousands or tens of thousands – was common throughout the Mediterranean region, North Africa, and Persia. In fact, one historian has proposed that a “military-coinage complex” was in place throughout the second half of the first millennium BCE.
To get a rudimentary sense of the expenses involved and their impact on the supply of coinage, consider that to support just one legion cost Rome around 1,500,000 denarii a year, so that the main reason for the regular annual issue of silver denarii was simply to pay the army. One historian has suggested that the only reason silver coins were issued in the late Roman Empire, where the monetary standard was based on gold, was for military payments. It was entirely impractical to pay soldiers on the move in cattle, enslaved people, or raw bullion. Coins were readily portable and, if made of high-quality precious metal, as Athenian owls were, concentrated high value in a compact, mobile form. But, of course, soldiers on the move would also spend some of their wages on food, sex, clothing, and other goods. Not only did this contribute to rural monetization, as peasant farmers often sold produce to troops; it also expanded the circuits of Greek coins. At least equally significant, hired fighters, cut off from kin networks and any means of subsistence of their own, became more dependent upon money than before, and more accustomed to its usage.
What was the status of these men who fought for wages? To modern eyes, they seem like wage laborers. Yet to the ancients, particularly aristocratic commentators, to sell one’s body – and this was the predominant understanding of wage labor – was to be enslaved. The modern liberal distinction between selling one’s labor, as does a wage worker, and selling one’s body, like an enslaved person, seems not to be found in ancient texts. Repeatedly, those who work for wages, including mercenaries, are compared to enslaved people. This merging of the statuses of enslaved and wage laborer was widespread in ancient texts, which typically described those working for wages as doulos, the most frequent term for a chattel slave, or as latris, a term meaning “hired man” or “servant,” as well as “slave.” This semantic ambiguity must have owed something to the fact that of those who arrived every day at the Athenian market for day laborers, the overwhelming majority were enslaved. It wasn’t uncommon in either Greece or Rome for enslaved people to earn wages, which they shared with their masters. In Athens, enslaved people were almost certainly the largest group working for wages, and in Rome some enslaved people, particularly those with a craft skill, received a regular monthly wage from their masters. Thus, when mercenaries hired themselves out for money, they were engaged in an activity that most observers associated with enslaved people. Enslaved people and wage earners shared the condition of being under the control of someone else, a control registered in the surrender of their bodies (and liberties), even if temporarily, for money. These associations continued at least into the early modern period in Europe.
This brings us to Marx’s insight that “it was in the army that the ancients first fully developed a wage system.” Elaborating on this point, he indicates that he’s thinking of both mercenaries and citizen-soldiers in the Greco-Roman world as wage laborers. After all, in the imperial armies of Greece and Rome, citizen-soldiers received pay, as did mercenaries. By the early 200s CE, when Rome’s imperial army peaked at about 450,000 soldiers, military service was unquestionably the primary site of wage labor. The wage system flourished at the nexus of money and war. “Soldiers and money,” Julius Caesar is said to have proclaimed, “if you lack one, you will soon lack the other.”
We’ve noted that production of coinage increased dramatically in Athens in the period following the Persian Wars (479–431 BCE), just as it did in Rome during the first two Punic Wars (264–201 BCE). Indeed, by the end of the 6th century BCE, over 100 mints were at work producing coins throughout the Greek world. By the 2nd century CE, it’s estimated that the annual budget of the Roman imperial state was around 225 million denarii, fully three-quarters of which went to pay the wages of the empire’s 400,000 soldiers. Military requirements drove the spread of coinage, not only throughout the Roman Republic, Carthage, and the Hellenistic kingdoms established by Alexander the Great, but also in Persia and the Celtic states. In the case of Gaul, coinage was introduced as payment to Celtic mercenaries for service in the Macedonian armies of Philip II, Alexander III, and their successors. Not surprisingly, when they began to strike coins of their own, the Celts used Macedonian and other Greek coins as their models. They did so largely for purposes of financing their own armies as they centralized political power and undertook a concerted program of state-building. Among the best-studied cases of the symbiosis of money and state-building is Ptolemaic Egypt, which developed from Alexander’s conquest in 332 BCE. Following the conqueror’s death in 323 BCE, Ptolemy became governor (satrap), and over the course of more than a century, he and his three successors ruled Egypt without interruption, using monetization to promote state-building. Not that ancient Egypt had been unfamiliar with coinage, but its usage had been largely confined to Mediterranean trade and some large luxury purchases. What changed with the Ptolemies was the extent of cash transactions, as monetization joined hands with militarization. Once again, much of the process began with the hiring of mercenaries.
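To put the Roman budget figures above in rough perspective, a simple back-of-the-envelope division of the numbers just cited (my own illustrative arithmetic, not McNally’s) gives the order of magnitude of military pay per man:

\[
0.75 \times 225{,}000{,}000 \approx 169{,}000{,}000 \ \text{denarii per year for military wages}
\]
\[
\frac{169{,}000{,}000 \ \text{denarii}}{400{,}000 \ \text{soldiers}} \approx 420 \ \text{denarii per soldier per year}
\]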
To absorb the costs of mercenaries and Egyptian soldiers in addition to those of his Macedonian troops, Alexander relied not only on his imperial coinage minted in Macedonia, but also on coins from newly established local mints. The Ptolemies inherited this monetary system from Alexander, and they continued to use it to wage wars, fund proxies, build armed power, and hire mercenaries. Beyond the introduction of a plethora of new taxes, the Ptolemies also fostered monetization by requiring payment in money rather than kind (e.g., grain), with the exception of levies on land and grain. The production and sale of textiles, fruit, papyrus, beer, salt, and fodder crops were all taxed in money, not in kind, as were real estate sales, transport, and services. Perhaps no levy contributed to monetization more than the salt tax, introduced in 263 BCE. Effectively a poll tax, the levy on salt was applied to all men and women. Its enforcement drove substantial numbers of Egyptians into episodic wage labor, so that they might earn the money (typically bronze coins) needed to pay it. Indeed, for many inhabitants of Egypt taxation was the only reason for entering the monetary cycle at all. [The same thing in colonized Africa and other colonized territories like North and South America. In Kenya, for example, peasants were forced into the money economy by the imposition of poll taxes so that they would lose their land/independence.]
This should remind us that as much as the development of money is closely connected to trade and markets, there is no automatic process by which these give rise to full-fledged money. Historical accounts that focus exclusively on barter and exchange ignore the decisive role of a new configuration of an institution that’s been central to the history of organized violence and warfare – the state. Indeed, there’s a compelling truth to historian W.V. Harris’s claim that “the economic power of the state is historically the crucial element in the history of monetization.” As the Egyptian case further shows, money can as much be an instrument of imperial rule as one of democratic power.
After the collapse of the Roman Empire, money persisted, but not on the same scale. Only with the emergence of European colonial expansion in the early modern period did monetization surge forward again. And once more, it rose on the tides of war.
As Walter Benjamin reminds us, in class society, all progress in civilization is also progress in barbarism. Accompanying the growth in the arts and sciences in Muslim societies was a boom in slaving in Arab regions and parts of Europe. As trade revived across Eurasia, so did the ancient link between markets and enslaved people. The commercial growth of the Muslim empire fueled new slave trades, and the Vikings were central actors here as they journeyed south from northern Europe to trade with the Islamic world. In the course of their voyages, the Viking Rus’ seized upon a prized commodity – captive Slavs, from whom we derive the term slave. These were joined by enslaved Celts and Scandinavians. By the 9th century CE, enslaved people auctioned by the Rus’ were pouring into Scandinavia, north Africa, Spain, Baghdad, and parts of Asia. It’s said that there were 13,000 enslaved Slavs at the Muslim court in Córdoba, and in 950 perhaps 15% of the population of Europe consisted of bonded persons. Yet, the trade in Slavs, Celts, and Scandinavians was eclipsed by the trans-Saharan slave trade, which persisted for thirteen centuries. Millions of bonded Africans were transported across the Sahara, while millions more were captured in east Africa in a trade in human flesh whose scale may have exceeded the later Atlantic slave trade. The Arab slave trade from Africa soon eclipsed that of the Roman Empire at its peak, which, as we have seen, may have involved up to 400,000 persons per year.
It’s highly revealing that as the slave trade surged, so did the presence of coins throughout central Asia, as, once more, slaving and monetization grew in tandem. Western Europe soon felt the effects, as Muslim traders set up slaving centers in cities like Marseilles and Rome. While some commentators celebrate the commercial and cultural flourishing of the Italian city-states a short time later, they frequently forget that their renaissance, too, was based on slaving.
As innovative as China’s rulers had been in monetary affairs, their reliance on taxes raised on marginal peasant surpluses couldn’t provide the wealth necessary to sustain imperial power or to underpin a new mode of world money. When a genuinely successful paper currency arrived, via the Bank of England in the 1690s, it would be on the wings of a new (and capitalist) regime of empire and war.
Before the Crusades began in the 1090s, the Italian city-states, particularly Venice, had entered into a new cycle of expansion, largely due to their participation in the medieval slave trade. Soon they were extending their reach into north Africa, a harbinger of things to come. By 1180, more than a third of Genoa’s trade was with Africa’s northern region. Already, foundations were being laid for the great wave of slave-based colonization that would make the modern world. However, we ought not be deceived: the European colonization of the late medieval period took a predominantly feudal form.
The framework for substantial growth in trade and markets was created across the Mongol “global century,” 1250–1350, in which the Mongol Empire stitched together a trading system that oversaw 100 years of commercial growth and increasing monetization. Large chunks of Europe prospered throughout the Mongol century, the Italian city-states among them, while southern India and China, too, experienced a century-long economic upswing. Lubricating the commercial expansion was silver, which played the role of world money across Eurasia as it flowed from western Europe, the Near East, Africa, and parts of Asia in exchange for goods from the Far East, such as spices, ceramics, raw silk, and silk textiles. Gold had become the primary currency of Europe by the 14th century, a position that would be further consolidated as Portugal increased its African supplies of the precious metal, and even gold migrated east, albeit not as dramatically as did silver.
By 1450, a new international constellation was forming around three vital elements. The first we might call the Columbian moment: the new burst of Atlantic colonization that began with Portugal’s seizure of islands off the northwest coast of Africa, then extended to the Americas and further parts of both Asia and Africa. The second was the extensive looting of precious metals. Gold was critical in the early going, with modest west African supplies swelled by violent expropriation of treasures from Mexico. Soon, however, New World silver moved to the forefront. At Potosí, in what is now Bolivia, the Spanish extracted mind-boggling stocks of silver. By 1600, this mining town had 150,000 inhabitants, and for over a century it churned out half the world’s silver supply. So gigantic were the shipments of gold and silver from the Americas that their value rocketed from just over a million pesos in the early years of the 16th century (1503–10) to almost 70 million in its final decade (1591–1600). By that time, the Spanish “piece of eight,” a silver coin worth eight reals, had become a world currency. The third element of the emerging global configuration was a new wave of enslavement of Africans, which included putting black labor to the cultivation of sugar.
European colonizers first undertook sugar production, often by means of slave labor, in Syria and Palestine during the era of the Crusades. Sugar plantations also developed in Sicily and Muslim Spain in the late medieval period. By the 15th century, Portugal and Spain were conducting slave-based sugar cultivation in Madeira and the Canary Islands, and in the coming decades, Portugal would extend its sugar-slave complex to Brazil, and Spain to Española (Haiti). Before long, Spain and Portugal would encounter a new imperial rival in Holland. Dutch hegemony, in turn, soon confronted another challenger. From 1655, the year England seized Jamaica from Spain, its military and colonial undertakings began to surpass those of its Dutch rivals. English dominance would be world transformative, as it reconfigured the three elements of money, colonization, and slaving on the foundations of an emerging capitalist mode of production. An epochal shift was in the making, one that would usher in a global empire based on new dynamics of exploitation and a novel form of money. In the 16th century, the westward shift of global power might have been masked by continuing and massive flows of New World gold and silver to India, China, and central Asia. But by the middle of the 17th century there could be no doubt: the world order was undergoing profound transformation. Looking back, it’s easy to see why. In the words of one historian, “It was Europe’s entrenched relationship with violence and militarism that allowed it to place itself at the center of the world after the great expeditions of the 1490s.” Violence and militarism were also being reorganized on an unprecedented socioeconomic terrain, one involving new modalities of money and exploitation. With these, a new order of empire was about to be unleashed upon the world.
There was a dialectical tension between monarch and money in the emergence of modern forms of power. A capitalist society can’t accommodate any equivalence here: personal rule must give way to the impersonal rule of capital. A crucial English judicial ruling was made in 1605. Elizabeth I had recently reduced the amount of silver in the coins her government circulated in Ireland. When an Irish merchant used these new coins to pay an English creditor, the latter rejected them, complaining that they contained less metal than did the coins in use at the time they’d entered into their contract. The English justices nevertheless ruled that the new coins were legal tender and had to be accepted – a judgment that has rightly been described as a victory of nominalism over metallism. According to the ruling, the value of a coin wasn’t determined by the metal it contained, but by its worth as decreed by the monarch.
From about 1500, waging war on a dramatically larger scale became the central preoccupation of all European governments. This had to do with ruling-class responses to the general crisis of the feudal mode of production during the 14th century. Following the disintegration of the Roman Empire in the 5th century CE under the combined weight of overexpansion, social and regional revolts, and barbarian invasions, decentralized military lordship gradually emerged as the pivot of social organization. This occurred only through a protracted period of fragmented armed conflict, and the failure of the Carolingian Empire (800–888) to reconstitute imperial power in central and western Europe. Those who prospered in this period of instability and breakdown were those who excelled in war, raiding, and plunder. Eventually, the plunderers settled down as landlords, forming the core of the feudal ruling class. The feudal aristocracy wasn’t just an arms-bearing class pledging military support to a monarch; it was a class constituted through war and a specific sort of war economy. In the early feudal period across western and central Europe, “the pursuit and intensification of warfare revitalized an economy based on forcible capture and pillage,” in the words of French historian Georges Duby. Warfare was the key to seizing tracts of land and concentrating weapons, knights, and resources. Occupation of land also involved the conquest of peasant communities, which were then subjected to systematic pillage in the form of regularized rents, services, and obligations (such as marriage fines), rather than the episodic plunder of a marauding warrior class.
The center of the feudal arrangement was the lord’s manor, which was both a unit of landed economy and of political domination. As a social-geographic entity, the manor comprised four types of land. First was the lord’s demesne, the produce of which was appropriated directly by the lord, and which was worked by his serfs (who owed specific amounts of labor and other services), as well as by enslaved people and hired wage laborers. Next came the customary land worked by unfree peasants, who effectively belonged to the lord. These bonded tenants owed labor services on the lord’s demesne (which might be commuted to cash rents), were required to pay a variety of fines (such as entry fines in taking up a tenancy; chevage, for the right to live outside the manor; and merchet, for permission to marry), and were bound to the rulings of their lord’s manor court in any issue concerning their tenancies. The third form of landed property consisted of free holdings, whose occupants were required to pay rents (usually lower than those of servile tenants) and sometimes to provide services. But they were free of many other obligations, and had the right of access to the king’s courts. Finally, there were extensive common lands, which were the property of the community and were available to all its members for grazing animals; gathering wood, straw, and berries; and for hunting and fishing. For smallholders, in particular, common rights could be the key to the continued survival and reproduction of the peasant household. In England during the late 13th century, perhaps three-fifths of the population consisted of unfree persons. These unfree peasants found themselves in a class relation with lords fragmented across multiple territorial units; based on a manor house, castle, or monastery; and each operating its own courts and armed forces. Lordly domination thus involved a relatively decentralized network of power. The parcelized sovereignty of classic feudalism is integral to its social form, in which local lords and bishops exercised extensive legal and military powers, while pledging military support (initially in the form of the feudal levy – the direct provision of knights, soldiers, horses, and weapons, which was incrementally displaced as time went on by financial contributions) to defend the territory and common interests of the association of noble families that comprised the feudal state. So confined was this ruling class that from 1160 to 1220, there were just over 150 great lords (or magnates) in England.
The powers of monarchical states were significantly limited by both ancient and modern standards. In their immediate domains, individual lords had their own manor courts (and sometimes their own gallows), and they made local law with next to no oversight. They appropriated directly from their peasants with little regulation from above, and they marauded and pillaged with a considerable degree of autonomy. Beyond this, monarchical power was hemmed in by the international power of the Catholic Church – not only the dominant ideological institution of the era, but also an enormous property owner with its own courts and diplomatic offices, and the principal arbiter in European interstate relations. The extent of church power can be gleaned from the fact that in 1086 it appropriated one-quarter of all the landed revenues of England, a share that appears to have been typical across most of Europe.
The state, in short, wasn’t much more than a large noble household, able to call upon aristocratic support in times of war.
For about 250 years, from roughly 1000 to 1240 CE, the amount of land under cultivation, the social surplus product (rents and tithes), and the population grew persistently, the latter roughly doubling in England between 1100 and 1300. Especially around 1150 or so, the process of reclaiming uncultivated land for farming proceeded rapidly. New villages sprang up as the agricultural frontier was extended, and new revenues flowed to the lords. In England, perhaps a million acres of woodland, heath, moor, and fens were brought into productive use. The feudal system showed a weak tendency toward technological innovation, manifest in its use of animal traction and water power, in particular. As a result, while agricultural productivity grew, it appears generally to have lagged behind population increase, lacking anything comparable to the capitalist tendency toward systematic technological revolutions of the means of production. As output and surpluses expanded, the feudal economy also fostered the growth of market exchange, monetary transactions, and urban development. Yet, by the mid to late 13th century (perhaps around 1230–40 in France and 1280 in England), this growth wave had exhausted itself. The centers of growth shifted to towns whose merchants could profit through the growth of the Eurasian economy that emerged during the Mongol century (1250–1350).
Because the primary dynamic of feudal economic growth was an extensive one – increasing the amount (or extent) of land under cultivation – it produced diminishing returns.
The priority of political accumulation for lords also meant that military investment in men and arms often trumped investment on the land. At the same time, the poverty of the peasantry set real limits to their capacities for reinvestment of wealth. Furthermore, there was no systematic imperative for urban merchants to invest in manufacturing industries, either to increase their scale or to improve their technologies. Inevitably, then, feudal expansion came to a halt. Demographic crises like the Black Death (1346–53) also wiped out tens of millions, with effects lasting for generations. Not only did life expectancy fall, but more than a century after the Black Death (that is, by about 1470), most European villages were only half as populous as they had been in 1300. Across this era of contraction, between 20 and 30% of settlements disappeared in Germany, while in England we find at least 2,000 abandoned rural settlements. Regression on this scale crushed manorial revenues, which plummeted in the century after the Black Death by as much as 70%. The feudal mode of production had entered into a downward spiral.
Given the feudal mode’s internal constraints on sustained expansion, there was an inbuilt tendency for lords to construct larger and more effective military forces in order to clamp down on peasant resistance to heightened exploitation and forcibly encroach on other lords’ resources. All-out attacks on village communities had severely limited prospects, however, given the capacities for peasants, particularly in times of falling population, to flee to the estates of other lords or rise in rebellion, which could occur in insurrectionary fashion, as in the French jacquerie of 1358, or the English Peasants’ Revolt of 1381. Given these constraints on efforts to squeeze peasants, lords frequently turned on other lords, the result of which was a generalized tendency to intra-lordly competition and conflict, something that made it necessary to accumulate ever more military resources, beginning with land and retainers. War became a semi-permanent state of affairs. Intra-lordly military conflicts might take the form of civil wars within the territories of a kingdom, or of wars between kingdoms, such as the Hundred Years’ War (1337–1453) that pitted England against France. The ultimate objective of such confrontations was the conquest of foreign lands, including the peasants that went with them (and the surplus product they could produce), to be incorporated as new territories into the marauding states, and whose proceeds were to be shared as war booty. Conflicts on this scale required that nationally based nobilities band together and endow monarchs with sufficient military and financial resources to wage longer wars in larger spaces. Monarchs needed to mobilize larger armies, technologically improved weapons, greater financial resources, and the augmented political and administrative powers these involved.
Larger armies, new weaponry (such as cannons), new arts of fortification, and unending conflict all greatly raised military costs. During his reign (1413–22), Henry V of England had spent more than two-thirds of the royal budget, plus most of the revenue from his lands in France, on his army and navy, and in financing war-related debts. Nearly two centuries later, during the last five years of her reign (1598–1603), Elizabeth I directed three-quarters of her budget to war-related expenses. During a similar five-year period (1572–76), the Hapsburg dynasty devoted more than 75% of its revenues to defense and war-induced debts, as did the French monarchy. Yet the capacities of European monarchies to cover escalating military costs were strikingly restricted. Incapable of significantly expanding financial capacities and resources, states across Europe confronted incessant fiscal catastrophes driven by mounting war costs. The result was an unending series of crisis measures – emergency borrowing, forced loans, debasements of the coinage, and debt repudiations – that undermined the longer-term financial viability of the monarchies involved. Indeed, for two hundred years after 1485, most European states repeatedly struggled to finance permanent debts brought on by the costs of war. Even the Spanish Crown, recipient of 40% of all the New World silver plundered between 1503 and 1660, came away from its military campaigns deeply indebted.
In this context, any state that could establish a stable system of war finance was sure to accrue an enormous advantage. This breakthrough was made by the English state after 1689, incarnated in the Bank of England, formed in 1694. But the road to this breakthrough was neither smooth nor inevitable. It would require a prolonged period of upheaval (1640–89) to usher in deep social transformations through which new forms of impersonal power displaced older forms intimately linked to personalized rule. In the process, new modalities of money emerged in direct association with war, colonialism, and slavery. The transformations of the English state under the Tudor monarchs (1485–1603) involved crucial moves toward political centralization and the subordination of contending sources of power. The most dramatic events involved its external relations as a nation. The effect of the Protestant Reformation (accelerated by Henry VIII’s demand for papal support for his divorce), for example, was to eliminate political interference by the foreign power of the Church of Rome, and of bishops and prelates loyal to it. In addition to this, Tudor monarchs systematically curtailed the military and political powers of the nobility. The power of violence and war was henceforth the domain of the Crown and its state apparatuses.
Early English colonialism had three distinctive features, the combination of which contributed to the consolidation of capitalist social relations in England. The first was the relative weakness of the Crown in military and colonial affairs in the early period, which brought private interests into war finance. The Crown’s weakness was displayed especially clearly throughout the conflict with Spain, which broke out in 1585 and continued for eighteen years. With annual revenues of roughly 300,000 pounds at her disposal, Elizabeth I was in no position to lead a military confrontation with the dominant imperial power of the time. A relatively weak state at this stage fostered the emergence of new forms of bourgeois power, providing a source of strength by bringing bourgeois fortunes into the financial affairs of the state. This is why the Spanish War of 1585–1604 was commanded by private capitalists – merchants, landed gentry, and prosperous sea captains. This has led to the apt description of early English colonialism as a system of privateering.
Two examples from the war with Spain illustrate the point. Francis Drake’s 1585 expedition to the West Indies is the stuff of English lore, yet the queen supplied only two of 25 ships. The rest were provided by private investors looking to profit from the looting of Spanish vessels. Two years later, of the 23 ships in the English military expedition to Cádiz, eleven were launched by a consortium of London merchants, and four by traders from Plymouth, while two belonged to the lord admiral, and six to the queen. War was thus directly bound up with private investment and profit making. Investors in military expeditions expected to claim “prizes,” particularly commodities such as wine, olives, raisins, figs, oils, and nuts, plundered from Spanish and Portuguese vessels. Another distinctive characteristic of early English colonialism was the central role of the landed gentry in privateering, trade, and colonization. It’s been estimated that half of the peers of England invested in foreign trade between 1575 and 1630, a practice unparalleled in continental Europe. Landowners were also major investors in joint-stock companies devoted to trading and establishing plantation colonies, showing a distinctly capitalist commitment to long-term investments that might not yield a profit in the short run. With the establishment of plantation colonies in Ireland in the second half of the 16th century, gentry investors once again played a critical role. Not only were trade and plunder inseparable in the 16th century, both also involved collaboration between merchants and landed gentlemen.
If England’s landed gentry was unique in its commitment to trade, plunder, and colonization, the question is: Why was it, in this country and at this time, that the gentry broke with the most persistent tradition of their class throughout western Europe? To answer this question requires examining the transformation of large English landowners (aristocracy and gentry) into a class of agrarian capitalists. In other words, we must investigate the history of early modern plunder at home, which laid the basis for colonial plunder abroad. The crisis of the feudal mode of production in Europe, or the “great medieval depression” of the 13th and 14th centuries, produced a series of divergent societal trajectories. In much of the eastern part of the continent, the bonds of serfdom were reimposed on a considerable scale, quite intensely in Bohemia and Prussia. In other areas, notably France, absolute monarchies powerfully centralized political authority while preserving peasant property as the basis of state revenues via taxes. In much of the Netherlands and the Italian city-states, highly commercialized merchant republics evolved on the basis of expanding markets for foreign trade. Here merchants’ capital predominated, but often without transition to a full-fledged capitalist mode of production. Such a transition did, however, occur in England, where it was predicated upon late medieval/early modern transformations in the countryside that ushered in agrarian capitalist social relations.
Between 1350 and 1520 a stratum of rich peasants, often referred to as yeomen, arose who were crucial to the emergence of capitalist farming. The flip side of the yeomanry’s rise was the development of a huge layer of effectively landless poor peasants whose condition increasingly resembled that of an agrarian proletariat. The collapse of population brought on by the Black Death caused wages to rise, and if they hoped to retain or attract tenants, lords had few options but to relax obligations, lighten services, and offer attractive rents. Feudal bonds that had persisted gradually dissolved: between 1350 and 1450 serfdom largely disappeared from English manors. Lords began leasing out their estate lands, either in blocks to a number of better-off peasants, or sometimes in their entirety to a single well-off tenant. Perhaps 25% of the cultivated area of English manors was so rented in the decades around 1400. Frequently, this also involved a conversion of land from arable (crop growing) to pasture (livestock grazing), the latter of which required fewer tenants and less labor. Crucially, pasturelands needed to be enclosed, which meant a radical transformation in the social geography of English life, as the open-field system so central to peasant life went into decline.
Unenclosed farms (open fields), linked to common lands, gave way to an increasingly enclosed and privatized agricultural system, dominated by large-scale commercial farms employing wage laborers. Enclosure also meant the spatial consolidation of lands, as peasants and incipiently capitalist farmers swapped, or bought and sold, strips of land to create single contiguous units in place of dispersed plots. As enclosure (privatization) and engrossment (the expansion of individual holdings by richer farmers) proceeded, a consistent pattern emerged, after 1348, of a declining number of tenants throughout the English manorial economy. Large farmers were acquiring more land, poor farmers were losing theirs.
By the first half of the 16th century, perhaps 45% of English lands had been enclosed, with large yeoman farmers the principal beneficiaries, working lands of up to 200 acres in arable regions, and as much as 500–600 acres in grazing areas. By most sensible criteria, they could no longer be considered peasants. Increasingly, they were commercial farmers, investing in enclosure, livestock, marling, and other improvements, and producing specialized agricultural commodities for the market, while frequently hiring wage laborers and encroaching on the rights and properties of their poorer neighbors. As enclosed sheep walks and cattle granges appeared where open fields, marshes, and publicly accessible wastes had once stood, wealthy farmers with larger herds of sheep and cattle frequently overstocked the commons, exceeding customary practices and intruding on the access of the poorest. The related decline of open-field systems and the erosion of wastes and commons meant that life became more precarious for cottagers and poor peasants, whose personal plots were inadequate to household subsistence. By the 1520s, an enormous process of social differentiation had transpired.
As much as wealthy peasants were the original drivers, landlords soon grasped the advantages that enclosure, engrossment, and eviction might bring. By the 1520s, with the resumption of population growth, leaseholds became hugely advantageous for landowners. As prices rose and demand for land mounted, landowners could regularly raise rents that accorded with what the wealthiest yeoman farmers could pay. By the 16th century, roughly seven-eighths of free tenants held fewer than 20 acres of land, the minimum necessary for household subsistence. Meanwhile, those wealthy farmers who were accumulating land in order to more efficiently specialize in commercial production were more and more compelled (by rising rents and competition from other yeoman farmers) to invest to raise the productivity of farm labor. Lords, too, now had a competitive incentive to spend on their estates by way of enclosure, drainage, irrigation, and more in order to attract the most prosperous tenants able to pay top market rents. Through these processes, both wealthy farmers (producing for the market) and landlords (making the investments that would attract them and their rents) were becoming market dependent. In all this, the “improving” lord was considerably assisted by the social differentiation of the peasantry, which, according to Hilton, “destroyed the internal cohesion of the medieval rural community,” thereby making it much more difficult to mount coordinated peasant resistance.
Commercial tenant farmers used wage labor to produce for the market, while charging market-determined rents. Meanwhile, peasant dispossession was generating a class of propertyless laborers at a faster rate than they could be productively absorbed. During the period from 1560 to 1625, for instance, England’s vagrant population grew twelve times over.
It’s this set of social relations, based around the triad landlord/capitalist–tenant farmer/wage laborer, that we have in mind when we describe England by the time of the revolution of the 1640s as a society based upon agrarian capitalism.
The transformation of the English landed classes into capitalist landowners wouldn’t have assumed the form or observed the pace it did without the phenomenal plunder of church lands stimulated by the English Reformation. It’s possible that these hugely enhanced revenues and properties might have funded moves toward a more autonomous, centralized state. But Henry VIII was caught in the crossfire of war, and this led him to squander what he’d plundered. The king commenced war with Scotland and France in 1543, and concluded peace three years later, having expended the weighty sum of £2 million on his campaigns. Even the extraordinary taxation of 1540–47, which raised £650,000, could cover only a third of these costs. To dig out, the king sold Crown lands, particularly those seized from the monasteries. Altogether, the government took in another £800,000 in this way – enough to stay afloat and to pay back foreign loans. But in surrendering huge tracts of land (and the revenues they provided), the king was undermining the long-term financial independence of the Crown. What the king lost, the prosperous commercial gentry gained. For it was they – and this includes those younger sons who’d thrived as merchants, manufacturers, lawyers, and state officials – who bought up the bulk of these manor estates. The Crown’s great plunder was thereby shared out, benefiting the most commercially minded sections of the landed class. Equally significant, layers of “new men” entered landed society, including wealthy clothiers, merchants, and prosperous yeoman farmers, all of whom bought Crown lands. The combined result of these processes was a stunning growth in the number and the combined wealth of the landed gentry.
If wealthy yeoman farmers had been the principal agents of agrarian change for a century and a half after 1370, they were now increasingly overtaken by landed gentlemen. Enclosure moved into a higher gear, and capital investment on the land assumed grander dimensions. Twice as much enclosure took place in the 17th century as in any other, eclipsing both what had come before and what would occur later. As we’ve noted, between 1600 and 1760, nearly 30% of all English lands were enclosed. So active was the land market that perhaps a quarter of all the land in England changed owners (often many times) between 1500 and 1700, becoming concentrated in ever-fewer hands. “The gainers in this process,” observes one rural historian, “were the great landowners and the gentry, the losers the institutional holders, crown and church, and the peasants, perhaps in roughly equal proportions.” For the small tenants, the end point of these processes meant displacement from their lands, via eviction, increased fines on renewal of leases, enclosure of the commons, and rack renting, to the point where up to three-quarters of all land was gathered into the hands of large landowners. With the great wave of enclosure by act of Parliament between 1760 and 1830, the destruction of the English peasantry was completed, the tipping point having been reached much earlier.
By 1640, something in the neighborhood of four in ten peasants were no longer connected to a manor – nearly quadruple the proportion of a century earlier. And of these, two million were entirely landless. These people had been thoroughly proletarianized, alongside millions more who, clinging to a cottage and whatever common rights remained, survived through sub-subsistence farming supplemented by wages. English society was now a predominantly agrarian capitalist one. Feudalism and the classic manorial economy were dead or dying, and with them the traditional peasantry. Large-scale, market-oriented farming now dominated economic life, revolving around commercial farms worked by wage laborers in the employ of capitalist farmers, who rented from a commercialized landowning class. It’s important to add that many wage laborers were contracted as servants in husbandry, not as ideal-type “free laborers.” Between 1574 and 1821, servants comprised between one-third and one-half of the agricultural workforce. Similar forms of indentured labor figured in manufacturing industries in the guise of apprenticeships. In this regard, as in many others, rapidly growing industries in the countryside and the towns were developing in symbiosis with capitalist reorganization on the land, and in conjunction with bonded forms of colonial labor. All of these metamorphoses involved the profound monetization of relations between people, and between individuals and the land.
Land had now been substantially commodified, its value no longer registered in terms of communal memory and belonging, but in terms of market rents and prices. Older peasant practices had involved the re-creation of social space through annual perambulations, where members of the community walked the boundaries of the parish, the village, and its components, orally recording boundaries, common rights, and practices. With their bodies, they traced their shared belonging to the land and its custodians. Within these customary practices, which, to be sure, had their oppressive features, land was integrated with people; it expressed their histories and communal relations. Enclosed land, on the other hand, was bounded, measured, and monetized; it was set off from all but its direct owners, extracted from communal and customary relations. Against the concreteness of bodies and collective histories, enclosed land asserted the dominance of money and abstract measurement. A piece of land was so many acres, capable of producing a crop worth so much per acre. Mapping, which was virtually unknown in the English countryside before 1500, captured this reorganization of land into units of abstract space, areas bereft of actual people and their histories of belonging. Commodified land was open to the highest bidder, meant to be used to generate the largest monetized surplus product possible. Money governed who got land and food, and who got dispossessed.
We see all of this at work in the Cromwellian conquest of Ireland, where William Petty undertook in the 1650s to survey 22 Irish counties. The lands of Irish peasants and their landlords were thus subjected to the rule of number, quantified and placed on grids, the better to expropriate and enclose them. Petty’s famous Down Survey subjected Ireland not just to a spectatorial gaze, but to the quantifying logic of money, all of it backed up by troops and terror. The world was indeed seen anew: through the lenses of profit and dispossession.
It should now be clear how England’s landed class came, uniquely in Europe, to figure as pioneers of trade, plunder, and colonization. This was a class that had been reshaped along capitalist lines. Large English landowners comprised a class accustomed to plunder (by way of enclosure and eviction of peasant holders), to accumulation, and to capital investment in the countryside. It was no great stretch to extend those practices to the seas and beyond.
By 1550, the English gentry had defeated the great uprisings against enclosure and eviction that had swept much of the country the previous year. The rebellions of 1549 were so widespread and insurrectionary that they have fairly been described as the closest thing Tudor England saw to a class war. Centered in Norfolk, they drew on manifold social and religious grievances, with anti-enclosure riots at the forefront. The insurgents tore down fences and hedges, grazed animals on the wastes, and demanded restoration of customary rights and reductions in rents and other exactions. Had they succeeded, they might well have clipped the wings of rural capitalism. Their defeat, however, paved the way for the plundering class to turn outward in the first great wave of English colonization.
Members of the English gentry first sought to transplant agrarian capitalism to Ireland. Beginning in 1565, plans were made to clear native inhabitants off their lands to make way for English colonizers, many of them soldiers. To their proficiency as enclosers, English colonizers added expertise in the use of armed force. In 1574, they massacred all 600 inhabitants of Rathlin Island, before wiping out a couple of hundred supporters of Brian McPhelim O’Neill and his family at a Christmas feast later that year. Massacre as official policy was decreed by Humphrey Gilbert, English colonel for Ireland, when he ordered that “the heddes of all those which were killed in the daie should be cutte off from their bodies” and laid outside his tent, in order to terrorize the Irish who might come to see him. Conquest, dispossession, plantation, and terror were joined as integral elements of a program for settler colonialism. And while the success of the colonial projects of 1565–76 was mixed, they established a pattern of conquest that came to fruition most thoroughly in the Americas. Indeed, English families involved as “adventurers” in Ireland frequently went on to establish plantations in Virginia. In this they were following the lead of Gilbert himself, who planted the first English colonial outpost in North America at Newfoundland in 1583.
Among the distinguishing features of the late 16th- and early 17th-century occupation of Ireland was the conquerors’ emphasis on commercial agriculture and their use of “improvement” as a legitimating device. Sir John Davies, lawyer, writer, and colonizer, was among the chief architects of English imperial rule in Ireland. In a 1610 letter to the Earl of Salisbury, he laid out the legal and moral case for seizing Irish lands: “His Majesty is bound in conscience to use all lawful and just courses to reduce his people from barbarism to civility. They [the Irish] would never, to the end of the world, build houses, make townships, or villages, or manure or improve the land as it ought to be; therefore it stands neither with Christian policy nor conscience to suffer so good and fruitful a country to lie waste like a wilderness, when his Majesty may lawfully dispose of it to such persons as will make a civil plantation thereupon.” Of course, this claim had already been developed as a rationale for enclosure with respect to common and “waste” lands in England itself. But its extension to settler-colonial contexts was momentous, signaling as it did that agrarian capitalism would be joined to a global project of colonial conquest.
However, none of this could be accomplished without first overcoming Spain, whose monarchy had created the largest overseas empire the world had ever seen. Direct hostilities began in May 1585, when crews of English ships were arrested in Spanish harbors and their goods confiscated. Immediately, English merchants began petitioning their government for letters of reprisal authorizing them to use armed vessels in pursuit of private retribution via the capture of Spanish ships and their goods. By the summer, the first of such private war parties took to the sea, followed shortly thereafter by Francis Drake’s expedition to the West Indies. Over the next 18 years, until the war’s end in 1603, hundreds of naval war parties were launched. During just three years at the height of the conflict (1589–91), over 200 private ships set sail in search of plunder. These voyages were effectively state-sanctioned piracy, and gentry investors were the key to their fortunes. Until the 1650s, privateering of this sort was the essence of English naval conflicts around the globe.
Early English colonialism followed different patterns in the west and the east. Where the latter was concerned, commerce dominated, often based around large trading concerns like the Levant and East India Companies, controlled by London’s great city merchants. To the west, in contrast, trade was accompanied by settler colonization and the development of plantation economies. Lesser traders and merchants from outside the privileged networks of the city elite played prominent roles in the colonization of the Americas, frequently in league with gentry investors. As we approach the English Revolution of the 1640s, it is these “new merchants” who come to the fore as the most dynamic commercial capitalists. Moving beyond the merely mercantile business of carrying goods from one market to another, these new merchants began investing in commodity production in emerging planter colonies, and eventually in slavery. The new traders broke from the practices of the great city merchants and the likes of the East India Company, who were unwilling to take the risks or make the new types of investment in plantation production that the colonial trade demanded. By the 1620s, all of the original companies formed for colonial trade with the Americas had collapsed because of the reluctance of big merchant capital from the city to invest in labor and means of production. Henceforth, accelerated colonial development was carried out by an entirely different set of traders, who supplied labor directly by financing the voyages of indentured servants, whose contracts were purchased by planters. Soon, many of them also financed slave ships.
The new breed of merchant could rarely afford to operate on their own, and typically needed partners from the landed classes in order to finance investment on the scale necessary. And they readily found gentry capitalists eager to collaborate. In fact, roughly half of England’s peers of the realm invested in trade between 1575 and 1630, with nearly 1,200 members of the gentry and nobility putting capital into joint-stock companies specializing in overseas commercial ventures. Taking surpluses that originated as rents and agricultural profits, they eagerly put forward capital for colonial projects that might take some years to show a return.
Plantation colonies – in contrast to exercises in commercial plunder, such as those carried out by the East India Company – required the organization of a labor force. Here, the English ruling classes were again pioneers. Starting with the “surplus” population in Britain generated by enclosure and dispossession from the land, they constructed a transatlantic labor supply system based on indentureship, a practice, as we’ve seen, that had a long history in Britain. Service in husbandry was a rite of passage for many young adults from rural households: a majority of agricultural wage laborers in early modern England would have spent time as servants, bound to a master for a year at a time. Under the law, any unmarried and propertyless person under the age of sixty could be forced into service. Similarly, in English manufacturing many apprenticeships involved indentured service of up to seven years. But a transnational indenture system and the veritable industry in the export of bonded laborers from Britain to the New World that it involved required the huge reserve army of labor that emerged with primitive capitalist accumulation on the land. In fact, 17th-century England outperformed all competitors in this area. Though its population averaged under 5 million, the country shipped out 700,000 migrant laborers across the century, around half a million of whom went to Britain’s New World colonies. The ability to export one-seventh of the population and a substantially higher percentage of the young adult population spoke to the scale of the mass dispossession brought on by primitive accumulation, and to the rapid increases in labor productivity associated with agrarian and emerging industrial capitalism.
By 1700, the English had established 17 colonies in the Americas, encompassing roughly 100,000 square miles and containing a population of around 400,000. France, with a domestic population four times larger and a colonial land mass twice as large as England’s, had settled only 70,000, while the Dutch Republic had fewer than 20,000 New World settlers. As a consequence, Dutch colonialism remained trade-based, while the English bounded ahead in the production of plantation commodities.
For a prolonged period, Dutch merchants led Europe in the noxious trade of buying and selling enslaved Africans, though England eventually surpassed them in this bloody business as well. But England ran rings around its European competitors in the development of slave-based production in the New World. Contrary to those formalisms that counterpose capitalism to slavery and bonded labor, the historical relation was just the opposite. The world’s first full-fledged capitalist power was in fact the most massive trader and exploiter of enslaved labor.
[According to Wikipedia, “The term ‘English Revolution’ has been used to describe two different events in English history. The first to be so called – by Whig historians – was the Glorious Revolution of 1688, whereby James II was replaced by William III and Mary II as monarch and a constitutional monarchy was established. In the 20th century, however, Marxist historians introduced the use of the term ‘English Revolution’ to describe the period of the English Civil Wars and Commonwealth (1640–1660), in which Parliament challenged King Charles I’s authority, engaged in civil conflict against his forces, and executed him in 1649. This was followed by a ten-year period of bourgeois republican government, the Commonwealth, before monarchy was restored in the shape of Charles’ son, Charles II, in 1660.”] The English Revolution [in the Marxist sense] abolished the king’s arbitrary legal powers, eliminated the last remnants of monarchical jurisdiction over lordly property, and destroyed the rights of peasants. Through these measures, agrarian capitalists were emancipated from constraints imposed from above and from below. Government was now overwhelmingly accountable to the landed gentlemen assembled in Parliament.
If the revolution dealt shocks to internal opponents of the new capitalist order, it also prepared attacks on England’s external enemies. Under the rule of Oliver Cromwell and his generals (1646–58), England’s new rulers aggressively expanded their global reach by building up shipping, trading, slaving, and colonization. The Navigation Acts of 1651, for instance, required that all goods produced and traded by the English and their colonies be carried in English ships. This was a transformational move against Dutch dominance in world trade and shipping, provoking the first of a series of wars in 1652–54. Rather than a battle waged by privateers, this Anglo-Dutch war was the first state-backed imperialist adventure in English history. For this purpose, between 1651 and 1660, over 200 ships were added to the British navy. At the same time, a government-initiated conquest of Ireland was afoot. The 1652 Act for the Settlement of Ireland authorized the seizure of two-thirds of Irish lands and their transfer to settler-colonists. Ireland became the first example of English settler colonialism.
England’s rulers coveted Spain’s American possessions as much as they sought to capture much of Dutch trade and shipping, and war with Spain soon became Cromwell’s order of the day. While frustrated on some fronts, his troops managed to seize Jamaica in 1655, a victory that proved momentous. The Navigation Acts had already required that people enslaved for English colonies be sent in English ships. Now, having inflicted military defeats on the Dutch and seized Jamaica, the English cornered even more of the slave trade: their ships carried more than 350,000 Africans to New World slavery during the latter half of the 17th century. In the meantime, the Royal African Company, chartered in 1672, shipped 60,000 Africans to the Americas in the 1680s alone. [Wikipedia says that “The Royal African Company was an English mercantile trading company set up in 1660 by the royal Stuart family and City of London merchants to trade along the west coast of Africa. It was led by the Duke of York, who was the brother of Charles II and later took the throne as James II. It shipped more African slaves to the Americas than any other institution in the history of the Atlantic slave trade. It was established after Charles II gained the English throne in the Restoration of 1660. While its original purpose was to exploit the gold fields up the Gambia River, it soon developed and led a brutal and sustained slave trade. After becoming insolvent in 1708, it survived in a state of much reduced activity until 1752 when its assets were transferred to the new African Company of Merchants, which lasted until 1821.”]
White indentured servants from Europe were increasingly displaced by enslaved Africans. In 1650, for instance, there were roughly 100,000 English settlers in North America and the Caribbean, and only a few thousand enslaved Africans. [Note that, according to Wikipedia, not all white indentured servants were voluntary. Some were English “criminals” and kidnapped poor children, and 10,000 were Irish “rebels,” subjected to forced labor for a given period.] In 1700 there were 260,000 English settlers alongside 150,000 enslaved Africans. Barbados underwent a spectacularly rapid metamorphosis. In the twenty years after 1640, the number of enslaved Africans on the island increased fifty times over.
Both economic and political considerations drove this shift to African bonded labor. Rising wages in England from 1660 reduced the flow of indentured servants and drove up their price, just as New World planters were experiencing greater needs for labor power. And joint insurrectionary conspiracies by servants and enslaved people in Barbados and Virginia added a socio-political impetus for separating “races” (first by creating the concept of them) in order to break their sense of shared interests. For these reasons and others, by 1700 three-quarters of all arrivals to these regions came from Africa. In the first decade of the 18th century, British ships transported more than 100,000 enslaved Africans. Henceforth, until the trade was abolished in 1807, the English held their place as Europe’s reigning slave traders.
In a European world economy increasingly characterized by the global movement of goods, an international means of payment was crucial. In the absence of a national currency that could function as international money, as had the Athenian owl, global payments relied on precious metals, be they in the form of bullion or high-quality coins. In the 16th century, the acquisition of gold became the obsessive aim of European colonial policy. By the 1470s, Portuguese mariners were reaping the stuff along Africa’s “Gold Coast.” Meanwhile, Spain, the first European power with extensive colonies in the Americas, displayed a devotion to gold (and soon after to silver) that can only be described as maniacal. Yet, as Adam Smith saw during the second half of the 18th century, wealth accrues ultimately to those who succeed in raising the productivity of labor, capturing markets in the process, not those who pursue money as an end in itself. Win the battle for markets in goods, and money will flow your way. Spain’s leaders, like Portugal’s monarchs, were led astray in seeking to build up great hoards of gold rather than investing to raise the efficiency of labor.
By the 1580s, Spain controlled vast New World territories, along with trading posts in India, Africa, the Philippines, and beyond. Its inflows of silver and gold were staggering. To all appearances, its imperial power was unrivaled. Yet already it was reeling from massive financial crises based on imperial overextension and the weakness of domestic production. By establishing plantation colonies in the New World, building up a protected shipping industry, investing in colonial trade and production, and backing all of this up with unrivaled military force, England was moving into first place in the new imperial order. One ominous indicator of this is that between 1697 and 1702, as a new financial order was being established, the monetary value of enslaved people exported from Africa exceeded that of gold. The means of producing wealth – enslaved labor power, not the metallic means of payment – had become central to New World fortunes. Nevertheless, prior to the 1690s, England’s rulers were financially constrained from fully unleashing English imperial power. These constraints would be burst only via revolution in state and finance.
The problem of financing war and colonialism had emerged with Oliver Cromwell’s aggressive imperial policies. To carry through the conquest of Ireland and Scotland and the First Dutch War (1652–54), Cromwell sold off Crown lands and the estates of royalist opponents, along with dean and chapter lands. And, even then, he was forced to raise taxes. Taxation, however, encountered resistance. Cromwell’s regime, resting as it did on an army rooted in the lower and middling strata, wasn’t sufficiently a government of the ruling class for the latter to readily accept new taxes. This was its fatal flaw, and a paramount reason that the Stuart monarchy was restored in 1660 with Charles II. Yet, Charles’s regime, too, failed to win the confidence of England’s dominant class. He and his advisors inevitably gave priority to new wars and renounced past debt payments. European monarchies were famous for such maneuvers. However, the English state was in the midst of transformation into a bourgeois monarchy, and debt-holders insisted that it adhere to a corresponding regime of property and power. The viability of a large, liquid market in state debt required the abiding confidence of financial markets in the integrity of government borrowing. So long as the monarch insisted on sovereign immunity before the law, which amounted to its right to renege on contracts, its credit would be suspect. The bourgeois principle had been established by the courts: in financial matters the Crown was to be treated like any other party to a contract. The state was to be subsumed to impersonal power – to the compulsions of money and the market.
The Stuart monarchs weren’t prepared to observe these obligations, and this is what sealed their fate. Their demise had little to do with the actual performance of the economy. In fact, the Cromwellian matrix of war, colonialism, and slaving prospered after the Stuarts were restored in 1660. Enclosure and agricultural improvement continued apace; colonial investment thrived. Soaring shipments of British manufactures to the North American colonies financed massive British imports of slave-produced goods like tobacco, cotton, and sugar. In this environment, the big trading companies flourished like never before. The East India Company may have doubled its capital between 1660 and 1668, but it was outdone by the Hudson’s Bay Company, whose assets tripled, and by the Royal African Company, which quadrupled in size during the same period. No, it wasn’t a faltering economy that doomed the Stuarts. What sealed their fate in the economic domain was their handling of finance and taxation. Their financial autonomy from Parliament, the king’s growing army, and the king’s pro-Catholic religious and foreign policy all fueled fears of absolutism.
A fully capitalist monetary system couldn’t be constructed until money was extracted from the bones of princes and transplanted into the blood of the commonwealth. This is what brought about the revolutionary upheaval of 1688–89, which carried William of Orange to the throne in the guise of “liberator” at the invitation of seven of the most powerful men in Britain. James Stuart fled to France, and the elite of the country’s landed gentlemen, great merchants, and manufacturers rallied behind William’s invasion force. Parliament assigned the new king an annual income of 700,000 pounds, much less than had been granted to Charles II or James II, making it impossible for him to pay an army or wage war without seeking Parliament’s authorization to raise funds.
A leash having been placed on the monarch, Parliament was again ready to wage war. Nearly 50,000 troops were sent to crush a new rebellion in Ireland. The long-range military goal, however, was to break France, England’s imperial rival since Spain had been vanquished under Elizabeth and the Netherlands under Cromwell and the Stuarts. If England was to achieve imperial hegemony – dominance of world trade and colonization, and military supremacy in Europe – then France, with its powerful absolute monarchy and a population four times greater, would have to be subdued. For 20 of the next 25 years, the two nations waged an epic struggle for European supremacy (the Nine Years War of 1689–97 and the War of the Spanish Succession, 1701–14). Decisive to victory was England’s alliance with the Dutch, Spanish, and Austrian states. And what held the alliance together was money.
Everywhere in Europe, it seemed, England’s rulers were subsidizing troops and sending money. In the late stages of the war (1710–11), the English state was paying to keep 171,000 troops and their officers in action across Europe, almost 114,000 of whom were of foreign origin. And this was just the beginning. For the next century, the pit of military spending seemed bottomless, as wave after wave of troops was armed, fed, and sent to theaters of combat, while warships were built in the thousands. In terms of capital investment, nothing in the early 18th century compared with building a warship. A large multistory cotton mill cost around £5,000 at the time, while a first-rate battleship required an expenditure of up to £39,000. Altogether, the British navy had an investment of around £2.25 million in its ships during the first half of the century, at a time when the total capital investment in the 243 woolen mills in the West Riding equaled a little over £400,000. In 1691 the English government was spending £3 million for military purposes. Within four years the figure had reached £8 million. Yet, tax revenues over those same years averaged just £4.5 million. As always, the gap had to be covered by borrowing. And now, in the context of the new bourgeois monarchy, an innovative structure of debt-based war finance could emerge, and with it the foundations of a capitalist banking system.
Enter the Bank of England, formed to fund war with France. Even its most farsighted founders couldn’t have grasped the revolution in monetary arrangements the bank’s first loans would trigger. But as war became effectively permanent for a century after 1689, so did the need for war finance. It was thus as an instrument for permanent war finance that the bank transformed England’s financial architecture. In May 1694 the government accepted a proposal from William Paterson, a Scottish merchant and financier, who offered to gather subscriptions for a large perpetual loan to the state, the subscribers to which would be incorporated as the Bank of England, with the authority to take deposits and circulate paper instruments like notes, bills, and checks. Nothing in the act that created the bank, nor anything in its charter, suggested that it would become the most revolutionary innovation in monetary practice since the invention of ancient coinage. But it did – because it stumbled on the means by which to monetize public debt. Turning government debts into money would become the foundation of modern finance, and of money’s second modular form. Two essential features of its early loan made the bank truly innovative. First, for the first time in English history, a loan to the state would be made largely in paper currency (banknotes and bills), rather than in gold and silver coins. Second, those who subscribed to the loan – that is to say, those who bought interest-bearing shares in the bank – would be able to cash out at any time by selling their shares (their part of the loan to the state) on the developing stock market. Because a paper currency arrangement like this was so novel, the government insisted that the bank maintain a £200,000 gold reserve to back up its bills and notes, believing this would reassure the public as to their solidity. As essential as this metallic reserve was, it was the circulation of a privately produced paper currency based on public debt that revolutionized the production of money.
In 1698 the government started to accept the bank’s paper instruments in payment of taxes, turning bills issued by a private bank into money, though they did not formally become legal tender for another 135 years. Year after year, decade after decade, as it provided more finance to the Crown, the bank acquired enhanced powers: to issue notes as legal tender (first to the treasury and then more generally); to enjoy a monopoly on the issue of such notes; to have the forgery of its notes subject to the death penalty, just like the king’s coinage; to have its property exempted from taxation; and to serve as the official agent for the exchange of bills issued by the Exchequer. No longer simply an investment trust for a public loan, in just a few decades the bank had become the pivot upon which both public and private finance turned. It was the principal lender to the state and the supplier of liquidity to the financial system as a whole.
The Athenian owl had been based upon past labor, that of enslaved people in the Laurian silver mines. Bank of England notes, however, carried an index of future labor, a slice of social wealth (derived from labor) that would find its way to the state in the form of taxes. The bank agreed in principle to exchange its notes for silver or gold coin, even though its precious metal reserves amounted to merely 12 to 15% of its money stock. Paper currency revolved on the government’s use of future tax revenues to meet the interest payments on its past debts.
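The arithmetic behind this arrangement can be sketched in a few lines. In the minimal sketch below, only the £200,000 gold reserve and the roughly 12 to 15 percent reserve ratio come from the text; the size of the subscribed loan and the interest rate are illustrative assumptions, not figures given here.

```python
# Illustrative sketch of the Bank of England's debt-monetization arithmetic.
# Only the 200,000-pound gold reserve and the ~12-15% reserve ratio are taken
# from the text; the loan size and interest rate below are assumed figures.

loan_to_state = 1_200_000   # assumed perpetual loan subscribed to the state (pounds)
interest_rate = 0.08        # assumed annual interest paid by the state on that loan
gold_reserve = 200_000      # metallic reserve required by the government (from the text)

# Notes and bills circulate against the loan, not against gold held one-for-one,
# so the metallic backing is only fractional.
notes_in_circulation = loan_to_state
reserve_ratio = gold_reserve / notes_in_circulation

# Interest on the perpetual loan is met out of future tax revenue -- the sense
# in which the notes index future, rather than past, labor.
annual_interest_from_taxes = loan_to_state * interest_rate

print(f"fractional reserve backing the notes: {reserve_ratio:.0%}")
print(f"interest serviced from future taxes: {annual_interest_from_taxes:,.0f} pounds per year")
```

On these assumed figures the gold covers only about a sixth of the notes issued, in line with the 12 to 15 percent of the money stock cited above; the remainder of the currency's value rests on the state's credible claim on future tax revenue.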
As England’s rulers imposed the disciplines of financial markets on the Hanoverian monarchs, they also granted them, subject to parliamentary consent, the powers of taxation required to finance a century of war. An unwavering commitment to honoring its debts thus allowed the British state to establish sovereignty over the disposal of military force.
There was tremendous growth in the government’s tax revenues after the 1680s. The English were especially heavily taxed. By the first quarter of the 18th century, they were paying more than twice as much as were the French per capita. By the 1780s, they were paying nearly three times as much. The representatives of the English ruling class who assembled in Parliament didn’t oppose taxes, but they’d authorize them only for a regime that demonstrated accountability to financial markets, the investors who dominated them, and their class representatives in Parliament. Increasingly, the state’s tax income was derived from the customs and excise taxes, rather than levies on landed wealth. These revenues were granted in large measure because they were funding capitalist expansionism, not courtly extravagance. It was war, the foundation of foreign trade and colonization, that drove the ever-rising budgets of the state. Throughout the 18th century, current military expenditures plus spending on debts from past wars consumed between 75 and 85% of total public expenditure. By any reasonable definition, this was a fiscal-military state.
The shift of British capitalism’s center of gravity toward manufacturing industries and related changes in trade patterns may have facilitated the abolition of the slave trade, and then of slavery itself in the British Empire. While plantation slavery didn’t cause the industrial revolution in Britain between 1760 and 1820, it did play an active role in its pattern and timing. While the growth of the home market must be seen as a central driver of the process, there can be little doubt that it was considerably accelerated by colonial exports. During the last two decades of the 18th century, a crucial moment in the industrial takeoff in Britain, almost 60% of increased industrial output was exported, with exports to Africa and America leading the way. At the beginning of the century, just as the large integrated plantation was emerging, only 10% of British exports went to those regions; by the end of the century it was 40%. During the critical third quarter of the century (1750–75), nearly two-thirds of Britain’s increased exports derived from expanding sales in Africa and America. Most importantly, manufacturing industries were prime beneficiaries of export growth, as fully one-third of British manufactured output went to foreign markets during the first three-quarters of the century, with Africa and America once more in the forefront. Finally, a considered estimate suggests that in the period 1775–1815, a quarter or more of fixed capital formation in Britain came from domestic reinvestment of profits from the triangular trade.
[According to Wikipedia, New World sugar, tobacco, and cotton were sent to the metropole (colonizing country), which then sent textiles, rum, and manufactured goods to Africa to trade for slaves, to be transported to the Americas. The metropole could also be bypassed, with the West Indies, for example, sending slave-produced sugar to New England, and rum made from the sugar and other goods traded in Africa for slaves. “A classic example of the triangular trade is the colonial molasses trade. Merchants purchased raw sugar (often in its liquid form, molasses) from plantations in the Caribbean and shipped it to New England and Europe, where it was sold to distillery companies that produced rum. The profits from the sale of sugar were used to purchase rum, furs, and lumber in New England which merchants shipped to Europe. With the profits from the European sales, merchants purchased Europe’s manufactured goods, including tools and weapons. Then the merchants shipped those manufactured goods, along with the American sugar and rum, to West Africa where they were bartered for slaves. The slaves were then brought back to the Caribbean to be sold to sugar planters. The profits from the sale of slaves in Brazil, the Caribbean islands, and the American South were then used to buy more sugar, restarting the cycle.
The first leg of the triangle was from a European port to Africa, in which ships carried supplies for sale and trade, such as copper, cloth, trinkets, slave beads, guns and ammunition. When the ship arrived, its cargo would be sold or bartered for slaves. On the second leg, ships made the journey of the Middle Passage from Africa to the New World, where enslaved survivors were sold in the Caribbean or the American colonies. The ships were then prepared to get them thoroughly cleaned, drained, and loaded with export goods for a return voyage, the third leg, to their home port. From the West Indies the main export cargoes were sugar, rum, and molasses; from Virginia, tobacco and hemp. The ship then returned to Europe to complete the triangle. In actual practice, cash crops were transported mainly by a separate fleet which only sailed from Europe to the Americas and back. The triangular trade is a trade model, not an exact description of the ship’s route.
A 2017 study provides evidence for the hypothesis that the export of gunpowder to Africa increased the transatlantic slave trade: “A one percent increase in gunpowder set in motion a 5-year gun-slave cycle that increased slave exports by an average of 50%, and the impact continued to grow over time.”
Newport, Rhode Island was a major port involved in the colonial triangular slave trade. Many significant Newport merchants and traders participated in the trade working closely with merchants and traders in the Caribbean and Charleston, South Carolina.
According to research provided by Emory University, as well as Henry Louis Gates Jr., an estimated 12.5 million slaves were transported from Africa to colonies in North and South America.”
The heyday of African slavery was accompanied, especially in England, with harsh treatment of the poor — the poorhouse, capital punishment or deportation for relatively petty crimes, or wage slavery under the worst conditions — all to further enrich the already well-to-do.]
The global dominance of the British Empire enabled the pound to function as world money for nearly 200 years. But just as the pound ascended on the wings of war, its descent, too, would be war-induced. This time it would take another Thirty Years’ War – comprising the two World Wars of 1914–45 – to reorganize the regime of world money.
The dollar’s rise to global dominance began with Virginia tobacco receipts and notes based on mortgaged land. The first, tobacco money, became legal tender in 1642 and persisted for nearly 200 years. The second, currency issued by land banks, originated in South Carolina in 1712 and would soon be found in eight more American colonies. The first, of course, was a receipt on the product of slave labor, and the second was backed by lands appropriated from indigenous peoples. Across these monies, therefore, run traces of the barely concealed sources of American capitalism: slavery and settler colonialism.
Typically, the historical study of money in the United States is posed in terms of a number of binary oppositions: paper money versus metallic coins; debtors versus creditors; and state banks versus central banks. Undoubtedly, all of these oppositions are significant and important. But as explanatory matrices, they all elide fundamental questions of violence, expropriation, domination, and labor. They wash blood from money, obscuring the roots of US currencies in war and subjugation. The metamorphosis of land from communal to private property in the Americas was accomplished by centuries of war. Looting, scalping, raping, pillaging, shooting, and burning villages were the techniques of organized slaughter that enabled settler colonialists to displace and eliminate indigenous peoples via extirpative war.
During and after the American Revolution, federal military forces moved to the forefront of “Indian removal,” taking over the role previously assumed by armed settler groups. In 1779, George Washington ordered his troops to undertake “the total destruction and devastation” of the Iroquois Confederacy. Forced relocation, bribery, exploitation of debts, and tribal animosities would all be tools to this end, deployed strategically by President Jefferson in a campaign of ethnic cleansing in the early 1800s. The War of 1812 against Britain drove militarized expropriation to even higher levels, creating the conditions for General Andrew Jackson’s murderous rampages through Georgia, Alabama, and Florida (1812–25), the conquest of Texas beginning in 1825, and that of New Mexico, Arizona, California, Nevada, Colorado, and Utah across the 1840s, followed by the systematic “cleansing” of the West after the Civil War. By 1887, indigenous peoples in the United States had been dispossessed of nearly three billion acres of land, or more than 98% of the land mass of the continental United States, in one of history’s most colossal and merciless processes of primary capitalist accumulation.
The commodification of land was sealed in violence against indigenous bodies. And that violence, both material and symbolic, was directly monetized in the form of scalp bounties. Rewards for Indian scalps appeared in American colonial laws beginning in the 1670s. Massachusetts and South Carolina were among the most aggressive promoters, with the former offering ten pounds sterling for a scalp, about ten times the maximum day wage of a laborer. Even “pacific” Pennsylvania got into the act, regularly increasing scalp bounties as years went by. This grim commerce excelled at something slavery pioneered: the reduction of persons to monetary sums.
A large number of colonial currencies were debt notes issued for war finance. And these debts were paid off with land sales once indigenous peoples had been pushed out of their environments. It’s instructive that America’s first major financial crash in 1792 was triggered by the defeat of the United States Army by Little Turtle and the Indians of the Western Confederacy in what’s now northwestern Ohio. Since military defeat meant no new lands, and thus no land sales to pay off war debts, it immediately induced economic panic.
It seems fitting that America’s first president began his professional life as a surveyor, since military displacement of Indians generated a huge demand for the mapping of expropriated land. Like many in his trade, Washington found time to snatch up speculative holdings for himself, purchasing a thousand acres in the Shenandoah Valley in 1750. Then, before the decade was out, his marriage to Martha Custis made him one of northern Virginia’s largest landowners. To these estates he added 25,000 acres, his reward for military service, much of it against indigenous peoples in the French and Indian Wars. Continued war service brought him 45,000 more acres in 1773. None of these lands were worth much without labor, and on Virginia’s large estates the work was done by enslaved people of African descent. At his death, Washington owned 277 slaves, having excelled in the two practices foundational to planter capitalism: indigenous displacement and African enslavement.
War pivots on finance, and breeds new forms of it. Two-thirds of the cost of the Revolutionary War was funded by the use of bills of credit. To the good fortune of the Americans, a market in these promissory notes developed among investors in Paris and Amsterdam willing to bet on a victory for the colonial rebels (and on the land seizures that would accompany it). Massachusetts issued its first war bonds in May 1775. The following month, the Continental Congress started printing a national paper currency, known as continentals. Alexander Hamilton, the future Treasury secretary, was soon to propose a national bank with powers to print notes, coin money, receive deposits, and make private and public loans. A decade later, this proposal would come to fruition with the creation of the First Bank of the United States in 1791. But even without a national bank, the Americans found means of war finance in the form of “floods of paper money,” as future president John Adams put it. By the time the shooting had stopped, Congress had issued $226 million in notes, while an additional $100 million in paper currency had flowed from the states. Americans had already demonstrated a unique fondness for paper money, much to the dismay of the imperial metropole, which repeatedly prohibited it (in laws of 1720, 1741, and 1751). But the Revolution scaled a new summit. Paper money would reign supreme in the United States for the next fifty years, notwithstanding widespread fetishism of precious metals as the only “true” money.
As early as 1794, when four chartered banks could be found in the whole of the British Isles, the United States already hosted eighteen. By 1825, the United States had nearly two and a half times as much banking capital as did England and Wales. Riding this precocious financialization, bank assets as a share of aggregate US output rose steadily from 1785, hitting levels in the 1820s comparable to those that many countries reached only in the 1990s.
The First Bank of the United States (BUS) faced hostility from its inception. Created in 1791 as a component of Alexander Hamilton’s program for an activist state promoting capitalist development, the Philadelphia-based BUS was modeled on the Bank of England. From the start, powerful Virginia tobacco planters opposed Hamilton’s national bank, mistrusting any shift of financial power out of the South. Republicans like Thomas Jefferson inflected this opposition with anti-centralist rhetoric, and in 1801 they created the US Land Office as an alternative. Then, in the wake of monetary scams and financial panics, an awkward coalition of agrarian populists, financial capitalists outside Philadelphia’s Chestnut Street, and working-class radicals turned their enmity on the bank and the very idea of central banking. By 1811, the BUS couldn’t muster enough support in the Senate to get its charter renewed.
With the elimination of the BUS’s power to regulate banking, new banks could multiply like weeds. The more than 200 banks to be found in 1815 reported a combined $82 million in capital, only one-fifth of which was backed by silver and gold. By this time, canal companies, railways, blacksmiths, and various academies were also issuing a dazzling array of notes that circulated as money. American capitalism had set off down the road of fragmented finance. Yet, down that road lay the potholes of scams and panics that regularly provoked anti-banking fevers, and a fetish of precious metal.
No sooner had the First Bank expired than support for a strong central bank was renewed, following the trauma of the War of 1812, which saw the British burn down the White House, provoking a financial panic during which banks suspended conversion of notes into specie. With public expenditures running two to three times higher than government income, financing the war required substantial sales of interest-bearing Treasury notes. Accepted as legal tender for all government transactions, including taxes, these notes became a crucial part of the money supply. In 1816, shortly after the war’s end, Congress approved the launch of the Second Bank of the United States, though it remained little more than an accessory to the Treasury, the real central bank at the time. The Second Bank soon found itself with half of all bank-held specie, and it began to assume many of the coordinating and regulating functions of a modern central bank. However, its history was plagued by crises, scandals, and intensifying political opposition. Particularly during the tenure of Nicholas Biddle as BUS president (1823–36), with the United States in the throes of intensifying capitalist transformation, the bank became a lightning rod for social grievances against moneyed interests and Washington officials. The acclaimed Indian killer, Andrew Jackson, shrewdly mobilized these sentiments on his road to the presidency, fostering a market populism that extolled economic individualism while condemning bankers and bureaucrats. Dynamic capitalist development in the United States was thereby joined to a fragmented, decentralized, and largely unregulated banking system.
Monetary fragmentation didn’t significantly hinder capitalist accumulation in the United States, however. There has been a widespread tendency, particularly since the onset of global financialization in the 1970s, to treat finance as the prime mover of capitalist development, a view that all too easily meshes with neoclassical conceptions of capitalism as a “money economy” fueled by individual property rights. Such perspectives miss the vital sources of capitalist growth in labor, exploitation, and accumulation of means of production. For it was as a powerful machinery for harnessing human labor that the US economy thrived in the decades after 1800, notwithstanding its highly localized financial system.
A growing body of research has demonstrated that this phase of vigorous capitalist growth had agrarian roots. Where dispossession of indigenous people had been largely completed, the transition from independent farming to agrarian capitalism occurred in large measure through household production, rather than via its eradication, as had been the story in Britain with the expropriation of small tenants. In the US case, family farms were rendered market-dependent via the effects of land prices, mortgages, and market pressures. Farmers were increasingly compelled to produce monetizable cash crops in order to make debt and mortgage payments to government and banks, and to purchase farm implements and household goods. All of these capitalist relations subjected petty commodity producers to the imperatives of the market.
Well before the outbreak of the Civil War, a market-integrated, commodity-producing agriculture held sway in the dominant regions of the US economy, generating monetized surpluses and increasing demand for manufactured goods. This in turn stimulated the large-scale industrial manufacture of shoes and textiles, especially in Massachusetts; the development of water-driven mills and cotton-spinning machinery; the concentration of urban populations; and the expansion of roads, turnpikes, and canals, to be followed by steamboats and railroads. While powerful internal transformations propelled these developments, European wars again intervened – this time the conflicts of 1793–1815 provoked by the revolution in France – enabling US shipping to emerge as the world’s premier mover of goods, and enticing European investors to enter American markets. American growth further heightened the country’s attractive power to immigrants. Population soared from 3.9 million in 1790 to 9.6 million twenty years later, just as capitalist industrialization pushed the number of cotton mills from 15 to 87 in the space of four years, at the same time as the quantity of spindles increased tenfold. Beginning in the 1790s, the corporate form of organization emerged in the North, soon becoming widespread. By 1861, American states had incorporated over 22,000 enterprises, making the United States the original “corporation nation.” The precocious rise of the corporation also fostered the growth of finance, as joint-stock firms took out loans and issued equities, bonds, and other securities. This was the stimulus for a wave of new banks, whose numbers jumped from four in 1791 to 250 by 1816. Banking in the United States was thus a major beneficiary of a feverish process of social and geographic expansion of commodity production and trade, alongside the corporatization of American business. None of these processes was unduly hindered by the fragmented character of US banking.
It would be easy to imagine that the localism of US banking owed much to peculiar forms of finance in the Southern slave states. But, as much as the market in enslaved people lent distinctive features to finance in the South, banks there were tightly connected with both Northern and British mercantile groups. In fact, given that cotton was the world’s most widely traded commodity by the 1830s, banking in the South was a force for financial integration, not fragmentation. The same was true for the commerce in enslaved people. Enslaved people were the largest capital investment in the Southern economy, and slave trading was a powerfully rationalized business. Slaves were widely used as collateral in debt transactions, from the purchase of shares in Louisiana banks to the contracting of a mortgage. In addition to collateralizing investments, enslaved people comprised one of the commodities most actively bought and sold across the South, and banks were keen to provide funds to businesses engaged in buying and selling them. In the case of the Bank of North Carolina, perhaps two-thirds of its loans were made to slave traders. Significantly, those states in the Deep South that imported the most enslaved people, and thus had the most active markets in bonded persons, were also the most monetized. So much did slave markets foster banking that by 1840, Louisiana, Mississippi, Alabama, and Florida were circulating more bank money per capita than any other US states. In this respect, as in many others, there was nothing premodern about the Southern economy.
As indicated, Southern slave-based banking was thoroughly integrated into financial markets in the eastern United States, as well as the global market based in London. Baltimore’s premier merchant bank, Alexander Brown & Sons, eventually the nation’s second-largest mercantile exchange, connected investors in Liverpool, London, South America, Africa, and beyond to the purchase and sale of cotton and enslaved people. Lending in the Mississippi Valley Cotton Belt was dominated by the Second Bank of the United States, thus incorporating slave and cotton finance into the monetary circuits of America’s de facto central bank. Many banks originating in the South, like the Consolidated Association of Planters of Louisiana, issued loans backed by collateralized enslaved people. Slave trader Baptiste Moussier, working out of New Orleans, partnered with a Virginia bank to build an interlocking slave-trading network that featured branches in New York, London, Le Havre, and New Orleans. Banking in the American South was thus integral to international capital flows that dealt in financialized instruments secured by enslaved bodies.
Jackson’s hostility to bankers was genuine. But it was also shrewd politics, for it enabled him to channel the grievances of farmers and workers undergoing the pressures of capitalist transition into an expansionist program of indigenous displacement that glorified the virtues of the white male yeoman farmer and artisan. Class antagonisms could thus be mobilized against entrenched privilege of the sort represented by the Philadelphia bankers who dominated the Second BUS, while glorifying a market individualism based on white male producers. The strident campaign Jackson waged against the Second BUS during his second presidential term (1833–37), which included withdrawing all its federal deposits in 1833, effectively destroyed the institution, whose charter wasn’t renewed in 1836. Yet, as much as this appeared as a victory for popular forces, the destruction of the bank was in fact largely a blow at an older set of capitalists by a newer, more numerous set. Destruction of the bank ended federal regulation of bank credit and shifted the money center of the country from Chestnut Street to Wall Street.
In addition to shifting the center of financial power, the demise of the Second Bank, like that of its predecessor, opened the floodgates to the pell-mell creation of new banks and paper monies. There was an irony here, since Jackson was a “hard money” man, as manifest in the “specie circular” he and his Treasury secretary issued in 1836, which directed federal land agents to accept only silver and gold in payment for relatively large parcels of public lands. Yet, an expanding US capitalism could not function on the limited monetary resources of gold and silver bullion and coin. So, having destroyed the effective central bank of the United States, the hard-money president unwittingly oversaw a manic proliferation of paper, much of it produced by so-called wildcat banks, operating on next to no capital and prey to counterfeiters when they weren’t generating funny money themselves. By 1860, some 7,000 different banknotes could be found circulating in the United States, courtesy of 1,600 state banks. Traveling alongside those were up to 4,000 counterfeit issues.
American capitalism lacked a typical central bank at the time. But the independent Treasury fulfilled many central bank functions. We’ve seen the decisive role of Treasury notes during the War of 1812; and these were also widely deployed in response to the Panic of 1837 in order to stimulate the economy, and in 1847 to cover deficits incurred during the Mexican-American War. In providing monetary stimulus during panics and in financing wartime deficits, the Treasury performed key central bank functions. Congress authorized such practices in both 1840 and 1846, when it passed Independent Treasury Acts as an alternative to sanctioning a national bank.
The US Civil War (1861–65) “nationalized” the state and the banking system, a metamorphosis that began, predictably, with war finance. As new troops swelled the ranks of the army, war costs skyrocketed. During Lincoln’s first fiscal year, which closed on June 30, 1861, US government expenditures totaled $67 million. A year later they’d climbed to $475 million, topping out at $1.3 billion in 1865 – a level they didn’t reach again for over fifty years. While the Union government managed to improve and augment its revenue collection, taxes couldn’t possibly keep pace with the mushrooming costs of war. Lincoln’s tax revenues comprised only about one-quarter of what his government spent. The rest had to be covered by borrowing (selling bonds), or by printing money and declaring it legal tender. And even this required significant increases in taxes, since bonds can be sold only if creditors believe in the government’s capacity to make its interest payments. The war couldn’t have been waged, never mind won, without innovative debt instruments and new forms of money, underwritten by a tremendous expansion of state powers. The Revenue Act of July 1862 enshrined federal powers of taxation, laid the basis for an income tax, and radically expanded the powers of the federal government, but it was an earlier bill, the Legal Tender Act of February 1862, that had inadvertently launched a revolution in money and finance. From the early days of the Civil War, the Union had been issuing Treasury notes as a means to raise funds. One version of these, known as demand notes, didn’t pay interest and was used widely as currency for everyday payments. The Treasury secretary, Salmon Chase (another gold and silver bug), resisted making these notes legal tender, though this would eventually become inevitable. With the Legal Tender bill, the famous notes known as greenbacks were born, backed by nothing more than the state’s promises to pay, and enforced as legal tender by an act of government.
At one level, the US government now had something equivalent to the currency issued by the Bank of England: legal tender notes backed by the credit of the state. But whereas pound notes were legally convertible into gold, Lincoln’s government went a major step further, in imitation of British wartime practice, by prohibiting convertibility of greenbacks for specie, a suspension that would persist for 17 years, long past the end of the conflict. Karl Marx, observing the situation from London, was not in the least surprised. The triumph of “the paper operations of the Yankees,” he wrote, derived from three social factors: confidence in Lincoln’s government and its cause; the desperate need for currency in the US West; and the Union’s favorable balance of trade. He might have added that the latter was a product of the vitality of its agrarian and industrial capitalism. In 1860, for instance, 110,000 manufacturing enterprises were active in the North, compared to 18,000 in the South. The South possessed no machine shops that could build marine engines for its navy, while during the war the Union constructed 671 warships, 236 of them steam-powered vessels. Its industrial base enabled the North to manufacture 1.7 million rifles; the South produced barely any. Rather than being sent into disarray by civil war, the Northern economy surged industrially, as Congress provided massive land grants to launch the Union Pacific and Central Pacific railways. Railroad expansion stimulated domestic steel production, and the first commercial steel using the Bessemer process appeared in 1864. Output in industries such as iron ore, machine tools, wool, and lumber jumped two to three times between 1861 and 1865. Even a tripling of the money supply from 1860 to 1865 didn’t create monetary instability. In the Confederacy, on the other hand, the more than $1.5 billion in notes (“graybacks”) pumped into circulation plummeted in value rapidly – not only because they weren’t legal tender, but also due to economic dislocation and financial disorder.
Notwithstanding strategic bungling and political hesitations on the part of the Union, its economic resources (its industrial system) enabled it to wage war for as long as proved necessary. In political-military terms, it was the self-emancipating activity of African Americans, including the “general strike of the slaves” and the entry of 200,000 black troops into the Union Army, that would prove decisive. But whereas black insurgency would be broken in the Reconstruction era, the key wartime monetary transformations that underwrote military success would persist, sometimes in modified forms, in the capitalist economic expansion that followed Union victory. These transformations required, however, that the banking system be “nationalized,” and state banks brought to heel. The federal government would thus have to become the regulator of the American financial system. Lincoln pushed this process in a letter of January 19, 1863, which advocated “a uniform currency” supplied by a new system of federally constituted banks. The real heavy lifting on the banking front was done, however, by Ohio Senator John Sherman.
Nothing fosters state centralization like war, and it wasn’t long before Sherman laid out a program to nationalize and centralize government authority. The Ohio Republican steered through the Senate a bill inaugurating a system of national banks. When many state banks continued to print their own notes rather than switch over to greenbacks, he launched a tax offensive against them. When his initial 2% tax didn’t bring the state banks to heel, he pushed through legislation raising it to 10%. The government was now dictating what would and wouldn’t function as money. Determined to annihilate the notes issued by state banks, Sherman invoked constitutional powers. “It was the intention of the framers of the Constitution,” he urged, “to destroy absolutely all paper money, except that issued by the United States.” Early the next year, as the 10% tax on currency-issuing state banks took effect, large numbers of them nationalized themselves, agreeing to the exclusive use of greenbacks and national banknotes. American finance thus revolved around two national currencies, both issued by government: United States notes (greenbacks) and national notes tied to government bonds.
For the dollar to become a principal currency of international business, multiple transformations would be necessary – economic, political, and institutional – and over the half century that followed the Civil War, these would all be put in place. Yet the processes of change were typically piecemeal and confused. Having created a national fiat money, America’s rulers now tried to get rid of it by restoring convertibility of banknotes to gold. Yet so ham-fisted were the efforts that it took them nearly fifteen years to pull it off. Immediately after the Civil War, the government began retiring and destroying greenbacks, just as economic growth turned up and state revenues leapt higher. The money supply thus contracted, while demand for money rose in response to the rising volume of transactions. The result was predictable: as the money stock declined by about 7% a year, prices dropped by 8% annually from 1866 to 1868. Declining prices are almost always disastrous under capitalism, as investors and individuals postpone purchases and investments in order to take advantage of the lower prices expected next week, next month, or next year. The economic effects were worst in the South and West, already cash-starved. These regions strenuously opposed the resumption of dollar–gold convertibility and frequently advocated for silver to become an additional component (sometimes the primary one) of the money supply. Yet, notwithstanding widespread public hostility, the party of resumption eventually prevailed, though it wasn’t until 1879 that the convertibility of dollars for gold came into effect.
If the United States was to be a top-tier player in international markets, it needed a currency that carried global legitimacy, one that might become a recognized instrument of global finance. What brought about the internationalization of the dollar was a dynamic process of capital accumulation in industry, agriculture, and transportation. Agricultural expansion remained vital to these developments after the Civil War, though it was now increasingly integrated with finance and burgeoning manufacturing industries. During the 1870s, farm acreage grew by 44%. This extensive growth dovetailed with increases in the productivity of agricultural labor to generate often-staggering rises in output. Between 1866 and 1886 the corn produced in Kansas rose from 30 million bushels to 750 million. The wheat crop of North Dakota, not quite 3 million bushels in 1880, passed 60 million in 1887. These figures had no historical precedent. Between the end of the Civil War (1865) and the 1898 Spanish-American War, US wheat production jumped by more than 250%, and the output of corn by over 220%. All of this was tied, of course, to industrial transformations of the landscape: canals and waterways, steamboats, the telegraph, and, most dramatically, the railways. Over the course of the 1850s, the national railroad system more than tripled in size, from 9,000 miles of track to 30,000. And no city benefited more than Chicago, by this time a hub of railways, grain trading, meat-packing, and finance.
Not only did US capitalism witness, as we have seen, the most widespread adoption of the corporate form; it also developed some of the most intricately structured financial markets in the world. What was emerging in the United States, as illustrated in the case of Chicago, was a dynamic symbiosis between agriculture and manufacturing, in which finance served as a leavening agent. Between 1869 and 1883, the American economy grew faster than ever before, at a rate of about 9% a year. Railroads were at the heart of this boom, with more than 162,000 miles of track laid in the forty years after 1860. By 1890 the US produced more steel than Britain, and a decade later, its total manufacturing output had overtaken Britain’s. Meanwhile, the American economy had entered a phase of furious concentration and centralization of capital. Huge conglomerates emerged, like the Standard Oil Trust (1892), the US Steel Company (1901), and General Motors (1908). By 1902, there were nearly 100 industrial firms with capitalizations in excess of ten million dollars, a size that was extraordinarily rare just a decade earlier. So overweening was the power of giant corporations that by 1909 nearly two-thirds of all manufacturing workers labored for fewer than 5% of all industrial enterprises.
The dynamism of post–Civil War American capitalism and its growing financial sophistication positioned the US economy for a jump to global status. Booming exports of manufactures and semifinished goods delivered a favorable balance of trade virtually every year after 1873. But this rise posed significant questions about the US monetary order. Gold was the measure of world value as well as its embodiment as the store of global value. It was the means of comparing national prices and earnings and, thereby, of globally measuring what one nation owed another in the course of world trade and investment flows (tallied via each state’s balance of payments). Great Britain had fixed the price of an ounce of gold at £4.247, while the US government set its value at $20.671. This determined a rate of exchange between dollars and pounds based on gold. Each major currency had a gold value, which provided for straightforward conversions of one currency into another. At the end of the Civil War, the United States wasn’t on the gold standard: it had a pure fiat money until 1879. And even after its return to gold, strong forces continually pressed for a silver standard, or a bimetallic standard (gold and silver). However, those sections of American business that were eyeing a jump to global status knew that their project required a dollar tied exclusively to gold. The urgency of committing the dollar to gold became more pressing in 1871, when, having militarily defeated France, German Chancellor Otto von Bismarck took his newly unified state onto gold. This quickly forced France onto the gold standard, as well. In rapid order, four other European nations converted to gold, as did Japan, India, Russia, and Argentina throughout the 1890s, followed in the next decade by Austria-Hungary, Mexico, Brazil, and Thailand. As the gold standard became genuinely international, Britain, its originator, established itself as the center of world finance.
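A brief arithmetic aside may help make the mechanism concrete. Using only the two official gold prices cited above (not market quotations), the fixed dollar–sterling exchange rate under the gold standard falls out of a simple division, yielding the familiar par of roughly $4.87 to the pound:

\[
\frac{\$20.671 \ \text{per ounce of gold}}{\pounds 4.247 \ \text{per ounce of gold}} \;\approx\; \$4.87 \ \text{per} \ \pounds 1
\]

This is all a “fixed” exchange rate amounted to under the gold standard: two national gold parities implying a ratio between the currencies.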
For any government entering the late-19th-century scramble for empire, or even simply building its armed forces for defense against aggressors, gold was essential as payment for weapons, ships, armored vehicles, aircraft, provisions, cotton and linen for uniforms, and iron – increasingly the world’s key industrial material. It’s no accident, therefore, that the internationalization of the gold standard occurred in the decades leading to the First World War, when military budgets were soaring amid colonial scrambles. Total military expenditures for Britain, Germany, Russia, Austria-Hungary, France, and Italy more than tripled between 1880 and 1914, as the imperial powers lurched toward war.
In 1898, US capital turned its sights on Spain, a colonial power in terminal decline. Offering little real resistance to American military aggression, Spain handed over its colonies in the Philippines, Guam, and Puerto Rico, while Cuba was made a US protectorate. American capitalism was now asserting itself as the rising imperial power of the era. But it still needed to establish itself as a heavyweight in the sphere of world trade, investment, and finance. Twenty-four months after Spain was vanquished, the Gold Standard Act of 1900 was passed. The dollar was now officially tied to gold and backed by a $150 million gold reserve at the Treasury. Progress toward central banking was slow, however, in part because some sections of capital distrusted any state initiative that seemed too readily influenced by popular forces. Then the Panic of 1907 struck, rocking markets and severely damaging the international reputation of America’s financial system. When a central bank was finally established with the Federal Reserve Act of 1913, it came too late to address the trauma that seized global financial markets with the outbreak of world war in August 1914.
By the time war erupted, the US economy had become the world’s largest, responsible for one-third of global industrial output – nearly as much as that of Britain, France, and Germany combined. Yet, notwithstanding its industrial heft, America was a lightweight in world financial markets, where Britain reigned supreme. When countries needed to raise money, they sought out British pounds, effectively as good as gold. Even the German mark and the French franc received more use as a means of international payment than did the dollar. Certainly, some countries tapped US markets for loans, though rarely were the bonds issued in New York denominated in dollars, given the US currency’s lack of global reach. But this mismatch between industrial and financial power wouldn’t last. The game-changer, once again, was war and war finance. In fact, America’s rise to financial supremacy would take a mere six months of war. And this time, gold would work in its favor. One after another, the belligerent states of 1914 were forced to suspend gold convertibility, as the demands of war finance exhausted their supplies. France, Germany, and Russia did so in August, the first month of the conflict. Given the scale of mobilization, which went on for nearly five years, the primary combatants were starved for cash, while America, which would wait three years before entering the conflict, was flush with loanable funds. And this time America’s leaders, intent on achieving global status, refused to be pushed off the gold standard.
Adhering to gold was far from easy. As Europe moved to war in late July 1914, its governments feverishly stockpiled bullion. To do so, they began selling their US financial assets, like stocks and bonds, and cashing in the dollars they received for US gold, which was immediately shipped across the ocean (or sometimes to Canada, in the case of Britain). During the final week of July, $25 million in gold was drained from America’s reserves in this way. This seemed bad enough. But the drain had actually started earlier, with $9 million in gold exports in May, rising to $44 million a month later. Altogether, $83 million worth of US gold had been siphoned to Europe, even before the shooting started. Of course, the American government could take the dollar off gold for the duration of the war, but this posed two problems. First, America was not at war (and would stay out of the fighting for three more years), and war was the only accepted rationale for suspending dollar-gold convertibility. Second, notwithstanding its entry into the conflict, Britain was committed to maintaining the gold standard in order to preserve the unique role of the pound as a world currency, and that of London as the center of world finance. For the United States to close the gold window would be an admission that it couldn’t compete with sterling as a world money, setting back New York’s emergence as a site of world finance. All of this compelled America’s leaders to cling to gold. Expressing a growing bourgeois consensus, Benjamin Strong, first president of the Federal Reserve Bank of New York, vowed to make the dollar “an international currency” by building “confidence in the redeemability of dollars in gold at all times.” Among other things, this also implied a growing global role for American banks, and to this end the Federal Reserve Act authorized US national banks with at least $1 million to set up foreign branches, a privilege of which the largest quickly availed themselves.
Finally, Treasury Secretary William McAdoo decided to shut down Wall Street in order to close off the main avenue by which European investors could sell their American stock holdings for gold. He arranged for the New York Stock Exchange to close its doors on July 31, 1914. And closed it would remain for the next four months. McAdoo knew he was simply buying time, but by the time the market reopened, the US had pumped out exports, particularly of grain and cotton, to European states desperate to feed and clothe their troops. By autumn, US exports were booming, the value of the dollar was ticking upward, and European states were hurrying to New York to borrow money. Gold was no longer flowing out, and foreign money was pouring in. By the time the United States entered the global conflagration in 1917, foreign governments had raised more than $2.5 billion in wartime funds by selling dollar-denominated securities in New York. The dollar had now displaced sterling as the global currency of choice. By the early 1920s, interest rates were lower in New York than in London, which encouraged governments and investors to continue to do business there. At long last, American capitalism had mastered money’s second modal form – central bank currency backed by government debt and redeemable in gold.
The total wars of the first half of the 20th century were conflicts characterized by the industrialization of killing: the use of machine technology – tanks, modern battleships, bomber aircraft, chemical weapons, and the atom bomb – in order to murder and destroy. More than sixteen million people probably perished in the first global conflagration, vindicating Rosa Luxemburg’s verdict that “shamed, dishonored, wading in blood and dripping with filth – thus stands bourgeois society.” Twenty years later, the science and technology of destruction had become frighteningly more powerful as the Second World War exterminated at least 60 million people, 3% of the world population. And now a second feature of total war – the systematic killing (genocide) and displacement of civilian populations – came fully into its own, producing the industrialized death camp (Auschwitz), and giving rise to the new human categories of refugees and the stateless.
In the interregnum between the two concentrated periods of global slaughter, capitalism underwent its most devastating slump, the Great Depression of the 1930s, reminding us why millions came to reject a system that seemed capable of nothing but murder and economic hardship. The American economy was particularly hard hit by the slump, but it was far from isolated. Seventeen other countries saw bank panics and collapses, and one after another, each went off the gold standard. The United States ended gold convertibility for domestic purposes with the Gold Reserve Act of 1934, though it retained the yellow metal for settlement of international transactions. Trade protectionism and currency devaluations induced a massive contraction in world trade, which underwent a decline of more than a third by 1932. After more than five years of persistent decline, it seemed the bottom had been touched. Output and investment started to grow, unemployment eased a bit. Then, in 1937, the bottom fell out once more. But this new slump wouldn’t drag on like that of 1929–35. What ended it was war – total war to be precise, and human carnage on a planetary scale. World capitalism revived on the back of state-directed war economies.
Much as war revived world capitalism, its effects were highly uneven. And there was no greater beneficiary of this unevenness than Uncle Sam. To begin with, the United States didn’t enter the war until late 1941, more than two years after the rest of its allies. Yet, while it remained outside the conflict, it nonetheless profited from it, producing ever more goods for its European trading partners, and providing them loans. And even when it did enter the brutal fray, the United States didn’t experience the massive physical and industrial decimation that accompanied the human slaughter elsewhere. Japan, for instance, lost one-quarter of its factories and a third of its industrial equipment as a result of the carnage. Italy’s steel industry saw a quarter of its capacity destroyed, while Germany suffered damage to 17% of its stock of fixed capital. One-tenth of France’s industrial stock was wiped out. While economies like these were ravaged by war, the United States boomed. At the commencement of hostilities in 1939, the US economy was about one-half the combined size of those of Europe, Japan, and the Soviet Union. Six years later, when the war ended, it was larger than all of them combined. By that point, the United States accounted for half of global industrial production and held almost three-quarters of the world’s supplies of gold. Lifted ever higher on oceans of blood, the US now dominated the international economy, with the dollar its unrivaled world money.
Once the bombs had stopped falling, the losers’ countries had been occupied, and the victors’ lust for conquest (temporarily) satisfied, world capitalism entered a sustained quarter-century expansion (1948–73). There were cyclical fluctuations across the Great Boom, with periodic domestic recessions. But overall, global capitalism experienced a period of high growth rates, robust profits and investment, low unemployment, expanding world trade, and generally rising living standards. This “golden age” for Western economies had its brutal undersides, of course: imperial interventions, racist violence, the Cold War arms race, and an intensified regulation of gender and sexuality. Yet, Western economies kept humming. World manufacturing output quadrupled between the early 1950s and the early 1970s, while world trade in manufactures leapt ten times higher. The stock of plant, machinery, and equipment per worker more than doubled, driving labor productivity forward at record pace. Food production rose more quickly than world population, with grain yields doubling over a 30-year period. Rates of economic growth hit record highs in every major capitalist nation, with the exceptions of Great Britain and the United States. As labor militancy was contained and pro-business policies were consolidated, the era of high profits created the space for governments to expand social provision (“the welfare state”) without compromising capital accumulation. A myth of social harmony triumphed, according to which capital, labor, and the unemployed could prosper together in a new managed capitalism.
Although often described as the “Keynesian era,” the boom had considerably less to do with the influence of British economist John Maynard Keynes than is often supposed. What fueled expansion wasn’t deft fiscal and monetary management so much as the maintenance of high rates of profitability. Through a complex algebra – whose variables included the impacts of wartime destruction of capital, the stimulating effects of permanent arms budgets, the mobilization of new technologies, the containment of labor insurgency, and the reconstitution of reserve armies of labor – capitalism seemed to have found a new growth trajectory that would forever eliminate crises and depressions.
Floating atop the boom, coordinating its flows, was the almighty dollar. Having climbed into the top tier during the First World War, the greenback became incontestably supreme after the second global conflagration. Dollar hegemony was enshrined when Western leaders gathered at Bretton Woods, New Hampshire, in the summer of 1944 to hammer out a new monetary regime. Contrary to a present-day myth, this conference was anything but a harmonious gathering of responsible leaders seeking the common good. Bretton Woods was another exercise in great-power politics, with the United States refusing proposals from Keynes, the British representative, in order to impose dollar supremacy. The global economy would henceforth revolve around the dollar as the world’s primary reserve currency. All major currencies would be pegged at fixed rates to the dollar, which in turn was to be tied to gold at a ratio of $35 to an ounce. The major powers also agreed to finance a new institution, the International Monetary Fund, empowered to provide loans should a member country encounter a currency crisis or severe balance-of-payments difficulties.
The main challenge to this arrangement in the early years of the postwar boom was a dollar shortage in Europe. By mid-1947, the United States was running a $20 billion export surplus – goods that could only be paid for by America’s trade partners with dollars or gold, each of which was in short supply due to their massive concentration in US hands. Without an outflow of dollars to Europe and Japan to support international trade and capital flows, the dollar-based system risked seizing up. The conundrum was at first evaded when the American government announced the Marshall Plan and its program of aid to Europe. But it was the decision to rearm Europe, and provide military aid to this end, that really turned the tide. This decision crystallized into strategic policy direction in 1950 with the Korean War. The policy was accelerated in the face of the election of left-leaning governments in France and Italy, worker revolts in countries like Japan, and fears that the end of war would allow another wave of working-class and socialist insurgency. As arms spending soared, and the American state shifted to “military Keynesianism” at home and continuous military assistance to allies abroad, an outflow of dollars lubricated the gears of the world economy. But US corporations were soon to augment these outflows by increasing their offshore investments in order to build multinational operations and thereby profit from sales in rebounding European economies. The “solution” initially provided by militarism was now amplified by corporate globalization. The result was a consolidation of the new regime of world money, as persistent US balance-of-payments deficits driven by overseas military spending and aid, alongside foreign investment, generated the dollar liquidity essential to the smooth working of the world economy.
This postwar balance wouldn’t last, however. As the imperial hegemon, America was paying for troops and tanks, bombs and aircraft, and for ever more expensive ballistic missiles. Simultaneously, it was exporting dollars to maintain foreign bases, and to feed, clothe, house, and pay wages to soldiers overseas – all costs that rose with the escalation of the war in Vietnam. In 1964, America’s debts to foreign central banks exceeded its gold reserves for the first time in the postwar period, due to the balance-of-payments deficit stemming from Vietnam-driven foreign spending.
The dollar crisis erupted in 1971. By that point, US military outlays abroad exceeded its foreign sales of armaments by almost $3 billion, with some analysts estimating that the foreign costs of empire amounted to $8 billion per year. The dollar outflow and the weakening position of the greenback that accompanied it were thus directly connected to the costs of war and empire. To be sure, the US state believed in the necessity of such imperial expenditures. But the empire itself appeared to be in decline at the time, nowhere more so than in Vietnam, where the US was losing the war despite ongoing military escalation. As the rest of the world watched the gyrations of the dollar and the humbling of America’s war machine at the hands of Vietnam’s national liberation movement, the US empire underwent a credibility crisis. Underlying this challenge to hegemony was American capitalism’s relative economic decline. While the US government spent extravagantly on empire, its domestic economy performed much more sluggishly than did the economies that had seemed so weak and broken in 1945. America’s listless progress can be seen in growth rates of “capital stock” in manufacturing – the accumulation of plants, machines, and equipment. During the critical 15-year period from 1955 to 1970, US capital stock in industry grew by 57%. In Europe, it grew twice as rapidly (116%), while in Japan it rose an incredible nine times faster (500%). By 1972, American business had spent several years at the bottom of the rankings of industrial nations for reinvestment of corporate earnings. The United States was thus among the least dynamic of the major capitalist economies. As a result, it now began to lose markets, especially for manufactured goods, to up-and-coming rivals. To make matters immeasurably worse, the so-called Great Boom was winding down, ordaining the downfall of the Bretton Woods system.
The decline of the Great Boom was driven by the overaccumulation of capital and a downward movement in the general rate of profit that began around 1968. Like all economic slowdowns, this one had effects that were borne unevenly. Having lost ground to its main competitors, American capitalism endured some of the harshest blows, among them a full-fledged dollar crisis. As early as 1960, while Europe and Japan recovered, the reserves of dollars held overseas exceeded America’s supply of gold. Had all those dollars been converted for the precious metal, the United States would have been forced off the gold standard. Instead, US governments enforced a series of stopgap measures, some of them at odds with the spirit of Bretton Woods. As early as 1961, Americans were prohibited from holding gold outside the country. Then European governments were strong-armed into contributing to a Gold Pool, and urged to refrain from converting dollars into gold. Shortly after, Americans were barred from collecting gold coins. But none of these ad hoc moves could offset the structural trend: the US was now importing more goods than it exported, and shipping out billions to finance overseas military spending, while covering the shortfall with a currency in which its trade partners were drowning.
To make matters worse, the era of escalating war in Vietnam was also one of steadily rising price inflation. The real buying power of the dollar – the actual material goods it could purchase – declined persistently. The dollar was no longer as good as gold, but it could still be redeemed for the precious metal by foreign central banks. Notwithstanding US threats, foreign governments and investors rushed to do just that. By 1968, more than 40% of the US gold reserves had left the country. Finally, in 1971 President Richard Nixon slammed shut the gold window. No longer would the US Treasury provide bullion for dollars, even to foreign central banks.
The buying and selling of currencies now became a world growth industry. In 1973, the daily turnover in foreign exchange (forex) markets amounted to $15 billion; by 2007 it had grown more than two hundred times, to $3.2 trillion a day. The daily turnover in nontraditional forex markets also exploded, reaching $4.2 trillion in 2007. With financial movements this massive, there was no way for governments to set the value of currencies. Having abandoned a world money anchored to gold, capitalism found itself in a new era of floating exchange rates, changing every day under the influence of massive flows of finance across global markets. This fostered a proliferation of new financial instruments, particularly financial derivatives meant to hedge the risks associated with volatile money, which actually had the effect of heightening instability by producing even more complex means of speculation. The global financial crisis of 2008–2009 was in large part an expression of these volatilities. Lurking behind this monetary instability was the reality of a dollar that had become a global fiat money. It’s important to emphasize here that fiat refers only to the ability of the state to enforce the acceptance of its currency, not to impose its value. The latter is determined in the long run by the relative productivity of the capital within the nation-state in question. This is approximated by the rate of exchange a national currency maintains with others and with commodities.
The inherent paradox of a currency that serves as world money is that it’s both a credit money produced by a single nation-state and a global means for measuring value and making international payments. The system works most effectively when the imperial hegemon spends extensively outside its borders, thereby furnishing the system with liquidity, while also maintaining decided advantages in the production of vital goods and services, so that holders of the world money require it for ongoing transactions. This was the story of the dollar for the quarter century from 1945 to 1970. But once the United States lost its decisive strength in world manufacturing, dollars became little more than inconvertible IOUs accumulating in the hands of its major trading partners. Henceforth, the outflow of dollars in order to police world capitalism became an economic liability. This is what produced the run on US gold reserves, the collapse of the dollar-gold standard, and a decade of profound turmoil in the global economy.
Theory hasn’t generally caught up with the dollar as international (and imperial) fiat money. Consider that, more than four decades after it was taken off gold, the dollar is used in 85% of all foreign exchange transactions; it’s the medium in which the world’s central banks hold nearly two-thirds of their currency reserves; it’s the currency in which over half of the world’s exports are priced; and it’s the money in which roughly two-thirds of international bank loans are denominated. In short, notwithstanding its detachment from gold or America’s enduring and massive payments deficits, a reconstituted dollar dominates the world economy. And with that domination we find ourselves in the age of the third modal form of money: one in which a national fiat money, untethered to any sort of physical commodity, operates as world money. I refer to this as global fiat money.
We can understand this changed modality by returning to the earlier discussion of Bank of England notes. These, as we observed, were essentially private monetary instruments based on government debt. What prevented them from being pure fiat money was the bank’s obligation to redeem them for gold or silver at a guaranteed rate. During wars, however, the Bank of England often suspended convertibility, just as Lincoln’s government did during the Civil War. Throughout those periods, money was backed by nothing more than state debt and the government’s injunction that these currencies had to be accepted as legal tender (fiat monies). What distinguishes the period after 1973, when Nixon announced that there would be no return to a gold-based dollar, is that inconvertible fiat money was made permanent. The dollar is no longer exchangeable with gold at a fixed rate guaranteed by the US state, but it can still be readily exchanged for gold or any other commodity at prevailing market rates. Rather than money becoming inconvertible, then, it’s more accurate to say that it’s become destabilized in a regime of floating exchange rates.
During the neoliberal period (since the late 1970s), global wage labor has expanded immensely. The global paid working class has effectively doubled over about a quarter century, from roughly 1.5 billion to 3 billion wage workers. At the heart of this great doubling was the drawing of hundreds of millions of newly dispossessed laborers in Asia, especially China, into capitalist production. The new forms of money and finance we see today are integrally related to this expansion of capital accumulation and the working class on a world scale.
Writing about the post-1973 advent of “pure paper money,” radical geographer David Harvey observes that when money supply is “liberated from any physical production constraints, the power of the state becomes much more relevant, because political and legal backing must replace the backing provided by the money commodity.” What backs the US dollar today isn’t gold, but government debt (US Treasury bills and bonds, in particular) and government fiat, the state’s declaration that a given currency is legal tender. What’s distinctive about money in its third modal form is that, released from its formal ties to gold, it obeys neither of its former masters: not the credit system nor bullion. With the constraints imposed by the latter now removed, we move in a world of full-fledged credit money. “In contemporary economies, then,” economist Duncan Foley points out, “a fictitious capital, the liability of the state, rather than a produced commodity, functions as the measure of value.” When I receive a US dollar, I accept a note based exclusively on future payments derived from government revenues. Of course, since the state has made this note legal tender, within the United States I’m obliged, like everyone else, to take it as a means of payment. But why should US government debt be a functional basis of world money? Why, in other words, should foreign central banks and international investors find an exclusively debt-backed dollar an acceptable and legitimate means of regulation and coordination of world payments and finance? One tempting answer is that the United States has largely forced the dollar on the rest of the world, reaping an enormous advantage in being able to provide “decommodified” money for the world’s goods and services. Over $500 billion in US currency circulates outside the United States, for which foreigners have had to provide an equivalent in goods and services. Well over half of all dollars circulate outside the United States, representing cost-free imports to the US economy (IOUs that are never cashed in), and as of the late 1990s three-quarters of each year’s new dollars stayed abroad. Equally significant, by early 2018 foreign governments had accumulated $6.25 trillion in US Treasury securities. Dollar-receiving countries, in other words, unable to convert their dollar holdings into a higher form of money, like gold, have often used them to purchase American government debt, as well as other US assets. In essence, they’ve loaned back to the US state the same dollars Americans have spent to import goods and services or purchase foreign assets. The effect of this arrangement is to exempt the United States from a balance-of-payments constraint. Rather than having to boost exports or cut imports (and domestic consumption) in the event of sustained deficits in its payments with the rest of the world, the United States can simply issue IOUs that are redeemable primarily for its own government debt. This is an “exorbitant privilege,” as a former French finance minister complained, amounting to allowing the United States – and it alone – to issue as means of global payment IOUs that in principle never have to be repaid.
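To make the circularity of this recycling arrangement concrete, here is a minimal toy sketch in Python. It is not drawn from McNally’s text; the flat $500 billion annual deficit and the assumption that every surplus dollar is parked in Treasury securities are illustrative simplifications only.

```python
# Toy model of the dollar-recycling loop described above.
# All figures and assumptions are illustrative, not data from the book.

def recycle_dollars(years, annual_us_trade_deficit_bn):
    """Simulate the circular flow: the US pays for imports in dollars,
    the surplus country parks those dollars in US Treasury securities,
    and the dollars return to the US as loans rather than as a demand
    to settle in some 'higher' money such as gold."""
    foreign_treasury_holdings = 0.0   # IOUs accumulated abroad, $bn
    us_external_debt = 0.0            # what the US owes, in its own currency, $bn

    for year in range(1, years + 1):
        dollars_paid_abroad = annual_us_trade_deficit_bn    # imports exceed exports
        foreign_treasury_holdings += dollars_paid_abroad    # surplus recycled into T-bills
        us_external_debt += dollars_paid_abroad             # the deficit is financed, not corrected
        print(f"Year {year}: foreign Treasury holdings = ${foreign_treasury_holdings:,.0f}bn, "
              f"US external debt = ${us_external_debt:,.0f}bn")

# A purely hypothetical $500bn annual deficit over five years:
recycle_dollars(years=5, annual_us_trade_deficit_bn=500)
```

The point of the sketch is simply that, so long as surplus countries recycle their dollars into US government debt, the deficit is financed rather than corrected, which is the “exorbitant privilege” described above.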
The US dollar provides a highly liquid world money in an age of exceptional globalization of production, investment, and exchange. It’s precisely this that has made the dollar significantly “functional” for world capitalism. Nevertheless, its role in providing a world measure of value and a highly liquid means of payment and exchange also involves contradictory dynamics – ones that might ultimately upset the very global structure it’s meant to support.
World Money in the Age of Floating Currencies and Financial Turbulence
Viewed as an overarching historical process, what the US state did in the decade after 1973 was to redesign American finance in keeping with the already-multinational configuration of industrial capital. To be sure, new forms of imperial hegemony were constructed in the process. But, and this is a point downplayed by those who incline to a theory of American super-imperialism, US rulers were able to do this in large measure because the arc of transformation was conducive to global capital in general. As economic geographer Neil Smith observed, “However powerful US capital and the American state are, globalization isn’t the same as Americanization. Ruling classes around the world are heavily invested in globalization for their own interests.”
If the rise of globalized manufacturing was the dominant economic story of the 1960s, financial globalization was the saga of the 1970s. Forex trading became a mode of gambling, the placing of currency bets in the roaring markets of casino capitalism. Whereas 80% of forex transactions were tied to regular business activities in 1975, and merely 20% to speculation, by the early 1990s speculative trading had come to account for 97% of forex transactions, a level it’s sustained since then.
While all this was transpiring, one government after another recognized the writing on the wall and signed on to the so-called “financial revolution” of the 1980s and 1990s. After all, if governments endeavored to regulate the national space, firms could simply enter “stateless” zones like the Eurodollar market, where borrowing was often cheaper and less constrained by regulations. In order to retain financial business, nation-states bent to the rules of the game, mimicking stateless spaces by deregulating finance and eliminating capital controls. As a result of this financial deregulation, gross capital outflows from the 14 largest industrial economies jumped from an average of about $65 billion a year in the late 1970s to $460 billion a year by 1989. Capital now flowed more readily across borders than at any time since before the Great Depression, as the global investment activities of multinational firms were complemented by globalizing finance. Increasingly, the US domestic economy was a chief beneficiary. During the 1980s, after the American economy had been restructured and stabilized, capital flows into the United States, especially for purchases of US bonds and equities, grew twenty times over in real terms. Where capital outflows driven by US multinationals had propelled economic globalization in the 1950s and 1960s, capital inflows now joined continuing outflows as part of the complex financial architecture of the dollar-based regime of global fiat money.
Under the gold standard, labor could be disciplined when, in the face of currency decline and a rush to gold, the central bank raised interest rates in order to draw gold back to its coffers. Rising interest rates in a moment of financial crisis tended to induce deep slumps that, in pushing hundreds of thousands out of work, acted to reduce wages and worker militancy. In the post–Bretton Woods monetary order, however, central bank policy has substituted for the disciplinary effects of the gold standard. Through control over the interest rate charged to commercial banks (the discount rate), central bankers assume the enforcement role previously performed by the treasury in its obligation to convert notes into gold. This took one of its most dramatic forms in the punishing deployment of record-high interest rates by the Federal Reserve under Paul Volcker in the late 1970s and early 1980s. Draconian levels of interest drove the annual inflation rate in the United States down from 14 to 3%, restoring global investor confidence in the dollar at the price of a bruising world recession. More recently, the German central bank, the Bundesbank, has attempted to impose a similar sort of financial discipline on the Eurozone, notwithstanding the anti-stimulative effects of such policy in the midst of a global slump. The irony is that since the onset of the 2007 downturn, central banks have struggled to avoid deflation of the sort that has ailed Japan since the 1990s, where it’s produced chronic stagnation. Generating inflation, rather than curbing it, became the order of the day in a period of slump. The sort of obsession with financial discipline displayed by German governments is not, in these circumstances, geared toward fighting inflation. Instead, it’s about the exercise of class discipline over labor: the use of austerity as a means to compress wages in the interests of profitability.
Private banks today create 95% of all money, with central banks issuing merely 5%. Contrary to most economic theory, the bulk of money originates as bank credit, in loans to borrowers for investments, mortgages, credit card payments, student loans, etc. When my local bank agrees to provide me with a line of credit or a mortgage of $100,000, it doesn’t actually go and find already-existing dollars to lend me; it creates the money ex nihilo and registers the amount digitally in an account bearing my name. I can now proceed to spend that money by promising to make payments from future earnings. My bank has increased the money supply by $100,000 – an amount I have merely promised to repay – with the stroke of a few keys on a keyboard. Of course, my mortgage or line of credit is small potatoes compared to the billions being created every day to satisfy the borrowing needs of corporations, investors, and governments.
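A minimal sketch of the double-entry logic at work may help here. The ToyBank class below is a hypothetical illustration, not a description of any actual banking software, and it deliberately ignores reserve and capital requirements: it shows only that granting a loan books an asset and a matching deposit liability, so the deposit is new money.

```python
# Minimal sketch of credit-money creation by a commercial bank, under the
# simplifying assumptions of the paragraph above. Names and figures are
# illustrative only; no reserve or capital requirements are modelled.

class ToyBank:
    def __init__(self):
        self.loans = {}     # assets: borrowers' promises to repay
        self.deposits = {}  # liabilities: money the bank owes its customers

    def extend_loan(self, borrower, amount):
        """Grant a loan by simultaneously booking an asset (the loan) and a
        liability (a new deposit). No pre-existing dollars are transferred;
        the deposit is new money, created 'with the stroke of a few keys'."""
        self.loans[borrower] = self.loans.get(borrower, 0) + amount
        self.deposits[borrower] = self.deposits.get(borrower, 0) + amount

    def broad_money_created(self):
        return sum(self.deposits.values())

bank = ToyBank()
bank.extend_loan("householder", 100_000)   # the $100,000 mortgage of the example
print(bank.broad_money_created())          # 100000: the money supply has grown
```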
All of this private credit-money creation proceeds smoothly until a downturn in the economy or a financial shock induces a credit crisis. It then becomes clear that much of the created bank credit money, alongside much of the stockpile of private non-bank credit instruments (corporate bonds, commercial paper, etc.) is as worthless as the IOUs passed by a penniless person. At such moments there ensues a stampede to “safety,” represented by the world’s most valued currencies and gold. Where financial institutions have been buying and selling toxic fictitious capitals to one another, the effects of a credit crunch can be calamitous, as they were in 2007–2009, when a global financial meltdown rocked the international banking system.
When such crises occur outside the imperial centers of the system, local economies are frequently forced to endure the devastations of “structural adjustment” in order to borrow to pay off creditors and thus exit the crisis. Most African countries, Mexico, Brazil, Argentina, Thailand, Malaysia, Iceland, Latvia, Greece, and others have had such programs inflicted on them by international financial institutions. But when panic shakes the core of the system, the rules abruptly change. Rather than cutting its debt, as most states are compelled to do, the US imperial state massively expanded its borrowing throughout the crisis that began in 2008. Operating as the world central bank, the US Federal Reserve intervened to monetize trillions of dollars of pre-validated credit money created by private banks, alongside trillions more in toxic assets produced by banks and other institutions. Assets owned by US Federal Reserve banks more than quintupled between 2008 and 2014 as the central bank monetized holdings throughout the global credit system.
Removed from the gold constraint, central banks can easily reflate in the face of a credit crunch. This means pushing down interest rates in order to encourage borrowing (which, as we’ve seen, is a form of monetary expansion) and directly expanding the money supply. (Often this is accomplished by giving commercial banks “high-powered money” – central bank cash – in exchange for their toxic paper assets.) Put slightly differently, central banks can act to preserve the values pre-validated by private banks (as credit money) since they’re under no requirement to convert them to gold. Removed from the metallic barrier, a central bank can offer high-powered money for debased credit money virtually without limit, which is why the money supply keeps expanding. This means that financialization continues in the face of financial crisis. In fact, the panic of 2008–2009 proved just how malleable central bank policy could be in that regard, as trillions upon trillions were pumped into a financial system in the throes of a global meltdown, and “quantitative easing” was used as if there were no limit to the production of money. As early as November 2011, the US Federal Reserve alone had already pumped over $13 trillion into the rescue of the global banking system. And China’s massive bailout and stimulus program, along with more modest interventions by the Eurozone, added many trillions more to the global rescue package.
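The mechanics of that reflation can be sketched in the same toy style, under the assumption (consistent with the discussion above) that the central bank simply pays for impaired private assets with newly created reserves. The class and figures below are hypothetical illustrations, not the Fed’s actual operating procedures.

```python
# Hedged sketch of the reflation mechanism described above: a central bank
# unbound by any gold constraint buys impaired private assets and pays with
# newly created reserves ('high-powered money'). Illustrative assumptions only.

class ToyCentralBank:
    def __init__(self):
        self.assets_held = 0.0      # securities taken onto the balance sheet, $bn
        self.reserves_issued = 0.0  # high-powered money credited to banks, $bn

    def monetize(self, face_value):
        """Exchange newly created reserves for private credit instruments,
        preserving their pre-validated value. Under a gold standard the bank's
        bullion stock would cap this; here there is no such ceiling."""
        self.assets_held += face_value
        self.reserves_issued += face_value
        return self.reserves_issued

cb = ToyCentralBank()
for tranche in [800, 600, 1200]:           # hypothetical asset purchases, $bn
    cb.monetize(tranche)
print(cb.assets_held, cb.reserves_issued)  # balance sheet and monetary base grow in step
```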
As we’ve seen, the Great Depression of 1929–39 was the paradigmatic expression of crisis under the gold standard, where slumps were deep and enduring, involving massive destruction of capital in the form of widespread business bankruptcies and mass unemployment. Global capitalism drifted off the gold standard during that crisis, never to fully return (the gold–dollar exchange standard devised in 1944 was already a half step in the direction of a post-gold monetary order). The world financial meltdown that broke out in 2007, triggering a global slump, is symptomatic of the new pattern of capitalist crisis under the third modal form of money. Unconstrained by a metallic barrier, the third modal form enables central banks to monetize private credit monies, as discussed above, in order to avert a 1930s-style slump. However, this tends to block the purge of inefficient capitals on the massive scale necessary to open vistas for new waves of investment and accumulation. As a result, the decade of economic recovery after 2009 was the slowest on record, due, in particular, to sluggish capital investment. While profitability was restored in the United States after 2009, thanks to the squeezing of workers by way of layoffs, precarious forms of employment, speedups, and wage compression, followed by corporate tax cuts, no sustained investment boom has been generated, nor is one likely in the absence of a deep slump that destroys the least efficient capitals. The result is a sluggish economy awash in cheap money: relative economic stagnation combined with repeated bubbles in financial investments like stocks, collateralized mortgage obligations, emerging market debt, “junk” bonds, and so on.
The dollar-based world monetary order is characterized by two axes of conflict that might yet undo dollar hegemony in the third modal form. The first of these is inter-capitalist tensions that provoke rival blocs to search for alternatives to the dollar as world money. The second, and ultimately the most crucial, is the set of class tensions running through the financialized regime of late capitalist dollar hegemony.
By creating another transnational currency – the euro is now the second-largest reserve currency in the world – and one that dominates trade in Europe, the Eurozone reduced the volume of formally inconvertible American IOUs it’s forced to accept in the course of international trade and finance. Notwithstanding all the economic and institutional turmoil of the Eurozone since 2009, the social logic of the euro project is to reduce the bounds of dollar hegemony. More recently, China has started to move down a similar track, albeit one with Chinese characteristics. In 2016, the yuan was recognized as a world currency by the International Monetary Fund, which incorporated it into the basket of currencies that make up IMF “special drawing rights.” Two years later, the Chinese government relaxed restrictions on banking and finance, making it much easier for foreign banks and nonfinancial corporations to buy and sell yuan and invest in China’s banking sector. At roughly the same time, China’s leaders launched the Shanghai oil futures market, where prices are denominated in yuan. All of these moves are designed to position the yuan as a more genuinely global currency. And while the dollar won’t be dethroned in short order, it’s significant that the yuan bloc, as measured by analysts at the International Monetary Fund, is now the world’s second-largest currency zone.
China’s financial liberalization is part of a campaign to push world capitalism toward a new set of monetary arrangements in which at least three major currencies – the dollar, the euro, and the yuan – would co-exist as world monies.
China has cut deals with oil companies in Russia, Iran, and Venezuela to accept the yuan in payment for China’s foreign purchases of oil. Because China is the world’s largest oil importer, this move will incrementally reshape global currency markets. So will its $1 trillion One Belt, One Road initiative, which will link it more powerfully with economies across Europe, Africa, and Asia. The program has also recently been supplemented by the expansion of the Shanghai Cooperation Organisation, which integrates the Chinese economy ever more closely with those of Russia, India, and Pakistan. China’s track toward monetary diversification is fueled by the enduring dilemma posed for capitalism’s most dynamic trading nations by a dollar-based regime of global fiat money: in payment for goods, they’re forced to accumulate dollars that have little use other than to purchase American financial assets. What’s more, those very assets are inherently unstable under a world monetary regime rife with endemic financial speculation, asset bubbles, and financial crises. It’s difficult to foresee a scenario in which the American government would tolerate such a power grab and accept the dethroning of the dollar without extremely bitter conflict, involving at least the threat of war.
Late capitalism threatens humankind not only with an intensification of violence and war, but also with catastrophic climate change. The growing restiveness of China’s massive industrial working class, along with recent, if short-lived, seizures of city squares – from Occupy Wall Street, to Tahrir Square in Cairo, to Taksim Gezi Park in Istanbul, to the insurgent streets of Sudan, Chile, Ecuador, and Lebanon – in the name of the struggle against austerity and inequality, represents, at least in part, a rebellion against this system, as do community uprisings like those in Ferguson, Missouri, with their insistence that Black Lives Matter, and International Women’s Strikes and climate justice rebellions in country after country.
The poor have suffered. The blood has flowed. Everywhere, the bleeding has occurred in the service of war, empire, slavery, and money. Yet, beyond suffering, there is joy, love, community, festivity, defiance, and celebration. All of this nourishes what Walter Benjamin called a “tradition of the oppressed” – of solidarity and insurgence – that sustains stories and practices at odds with those of the conquerors. In that space reside resources of hope and intimations of utopia.
There are currents of human life that move outside the circuits of money and violence. The image of liberation refuses to die, notwithstanding a world of violence, hunger, bondage, and catastrophic climate change. “I hear a drum beating on the far bank of the river. A breeze stirs and catches it. The resonant pounding is borne on the wind.” In his 1802 sonnet to Toussaint Louverture, William Wordsworth called that breeze “the common wind.” It speaks of hope and resistance. Of a world without war and cruelty. Of an end to the chains of oppression. It whispers, “No more blood for money.”