Charles’ Personal Rule: A Stable or Tyrannised England?

Within discussions of England’s political history, the most famous moments are well known and widely discussed – the Magna Carta of 1215 and the Cromwellian Protectorate of the 1650s spring immediately to mind. However, the revival of an almost-mediaeval style of monarchical absolutism in the 1630s has proven both overlooked and underappreciated as a period of historical interest. Indeed, Charles I’s rule without Parliament has faced an identity crisis amongst more recent historians – was it a period of stability or tyranny for the English people?

If we are to consider the Personal Rule in sufficient depth, the years leading up to the dissolution of Charles’ Third Parliament (in 1629) must first be understood. Charles succeeded his father James I in 1625, and his personal style and vision of monarchy would prove incompatible with the expectations of his Parliaments. Having enjoyed a strained but respectful relationship with James, MPs would come to question Charles’ authority and choice of advisors in the years that followed. Indeed, it was Charles’ stubborn adherence to the doctrine of the Divine Right of Kings – he once wrote that “Princes are not bound to give account of their actions but to God alone” – that led him to regard compromise as defeat, and any pushback against him as a sign of disloyalty.

Constitutional tensions between King and Parliament proved the most contentious of all issues, especially regarding the King’s role in taxation. At war with Spain between 1625 and 1630 (and having just dissolved the 1626 Parliament), Charles was short of funds. Thus, he turned to non-parliamentary forms of revenue, most notably the Forced Loan (1627) – declaring a ‘national emergency’, Charles demanded that all his subjects make a gift of money to the Crown. Whilst theoretically optional, those who refused to pay were often imprisoned; the most notable example was the Five Knights’ Case, in which five knights were imprisoned for refusing to pay (with the court ruling in Charles’ favour). This would eventually culminate in Charles’ signing of the Petition of Right (1628), which protected the people from non-parliamentary taxation, as well as from other controversial powers Charles had chosen to exercise, such as arrest without charge, martial law, and the billeting of troops.

The role played by George Villiers, the Duke of Buckingham, was another major factor contributing to Charles’ eventual dissolution of Parliament in 1629. Having dominated the court of Charles’ father, Buckingham came to enjoy a similar level of unrivalled influence over Charles as his de facto Foreign Minister. It was, however, in his position as Lord High Admiral that he further worsened Charles’ already-negative view of Parliament. Responsible for both major foreign policy disasters of Charles’ early reign (Cadiz in 1625 and La Rochelle in 1627, both of which achieved nothing and cost between 5,000 and 10,000 lives), he was deemed by the MP Edward Coke to be “the cause of all our miseries”. The Duke’s influence over Charles’ religious views also proved highly controversial – at a time when anti-Calvinism was rising, driven by figures such as Richard Montague and his pamphlets, Buckingham encouraged the King to continue his support of the leading anti-Calvinist of the time, William Laud, at the York House Conference in 1626.

Charles remained heavily dependent on the counsel of Villiers until the latter’s assassination in 1628; it was, in fact, Parliament’s threat to impeach the Duke that encouraged Charles to agree to the Petition of Right. Fundamentally, Buckingham’s poor decision-making meant serious criticism from MPs, and a King who believed this criticism to be Parliament overstepping the mark and questioning his choice of personnel.

By 1629, then, Charles viewed Parliament as a method of restricting his God-given powers – one that had attacked his decisions, provided him with essentially no subsidies, and forced him to accept the Petition of Right. Writing years later in 1635, the King claimed that he would do “anything to avoid having another Parliament”. Amongst historians, the significance of this final dissolution is fiercely debated: some, such as Angela Anderson, don’t see the move as unusual; there were, for example, seven years between two of James’ Parliaments (1614 and 1621) – at this point in English history, “Parliaments were not an essential part of daily government”. On the other hand, figures like Jonathan Scott viewed the principle of officially governing without Parliament as new – indeed, the decision was made official by a royal proclamation.

Now free of parliamentary constraints, the first major issue Charles faced was his lack of funds. Denied the usual method of taxation and in desperate need of upgrading the English navy, the King revived ancient taxes and levies, the most notable being Ship Money. Originally a wartime levy on coastal towns (to fund the building of fleets), Ship Money was extended to inland counties in 1635 and made an annual tax in 1636. This inclusion of inland towns was construed as a new tax levied without parliamentary authorisation. For the nobility, Charles revived the Forest Laws (demanding landowners produce the deeds to their lands), as well as fines for breaching building regulations.

The public response to these new fiscal expedients was one of broad annoyance but general compliance. Indeed, between 1634 and 1638, 90% of the expected Ship Money revenue was collected, providing the King with over £1m in annual revenue by 1637. Despite this, the Earl of Warwick questioned its legality, and the clerical leadership referred to all of Charles’ tactics as “cruel, unjust and tyrannical taxes upon his subjects”. However, the most notable case of opposition to Ship Money was the John Hampden case of 1637. A gentleman who refused to pay, Hampden argued that England wasn’t at war and that Ship Money writs gave subjects seven months to pay – enough time for Charles to call a new Parliament. Despite the Crown winning the case, it inspired greater widespread opposition to Ship Money, such as the 1639–40 ‘tax revolt’, involving non-cooperation from both citizens and tax officials. Opposing this view, however, stands Kevin Sharpe, who claimed that “before 1637, there is little evidence at least, that its [Ship Money’s] legality was widely questioned, and some suggestion that it was becoming more accepted”.

In terms of religion – both his personal views and his wider vision for the country – Charles had been an open supporter of Arminianism from as early as the mid-1620s: a movement within Protestantism that staunchly rejected the Calvinist teaching of predestination. As a result, the sweeping changes to English worship and Church government that the Personal Rule would oversee were unsurprisingly extremely controversial amongst his Calvinist subjects, in all areas of the kingdom. In considering Charles’ religious aims and their consequences, we must focus on the impact of one man in particular: William Laud. Having given a sermon at the opening of Charles’ first Parliament in 1625, Laud spent the next near-decade climbing the ecclesiastical ladder; he was made Bishop of Bath and Wells in 1626, Bishop of London in 1628, and eventually Archbishop of Canterbury in 1633. Now 60 years old, Laud was unwilling to compromise on any of his planned reforms to the Church.

The overarching theme of the Laudian reforms was ‘the Beauty of Holiness’, which aimed to make churches beautiful, almost lavish, places of worship (Calvinist churches, by contrast, were mostly plain, so as not to detract from worship). This was achieved through the restoration of stained-glass windows, statues, and carvings. Additionally, railings were added around altars, and priests began wearing vestments and bowing at the name of Jesus. However, the most controversial change to the church interior proved to be the communion table, which was moved from the middle of the room to the wall at the East end – a change “seen to be utterly offensive by most English Protestants as, along with Laudian ceremonialism generally, it represented a substantial step towards Catholicism. The whole programme was seen as a popish plot”.

Under Laud, the power and influence wielded by the Church also increased significantly – a clear example being the greater autonomy granted to Church courts. Church leaders also became ever more present as ministers and officials within Charles’ government, with the Bishop of London, William Juxon, appointed as Lord Treasurer and First Lord of the Admiralty in 1636. Additionally, despite already having the full backing of the Crown, Laud was not one to accept dissent or criticism and, although the severity of his actions has been exaggerated by recent historians, they could at times be ruthless. The clearest example would be the torture and imprisonment of his most vocal critics in 1637: the religious radicals William Prynne, Henry Burton and John Bastwick.

However successful the Laudian reforms may have been in England (and that is very much debatable), Laud’s attempt to enforce uniformity on the Church of Scotland in the latter half of the 1630s would see the emergence of a united Scottish opposition to Charles, and eventually armed conflict with the King, in the form of the Bishops’ Wars (1639 and 1640). This road to war was sparked by Charles’ introduction of a new Prayer Book in 1637, aimed at bringing English and Scottish religious practices closer together – a move that proved utterly disastrous. Riots broke out across Edinburgh, most notably in St Giles’ Cathedral (where the bishop had to protect himself by pointing loaded pistols at the furious congregation). This displeasure culminated in the National Covenant of 1638 – a declaration of allegiance which bound together Scottish nationalism and the Calvinist faith.

Drawing conclusions about the Laudian religious reforms very much hinges on the fact that, in terms of his and Charles’ objectives, they genuinely overhauled the Calvinist systems of worship, the role of priests, Church government, and the physical appearance of churches. The response from the public, however, ranging from silent resentment to full-scale war, displays how damaging these reforms were to Charles’ relationship with his subjects – coupled with the influence wielded by his Catholic wife Henrietta Maria, public fears about Catholicism very much damaged Charles’ image, and made religion arguably the most intense issue of the Personal Rule. In judging Laud in the modern day, the historical debate has been split: certain historians focus on his radical uprooting of the established system, with Patrick Collinson suggesting the Archbishop to have been “the greatest calamity ever visited upon the Church of England”, whereas others view Laud and Charles as pursuing an entirely reasonable goal: a more orderly and uniform church.

Much as the Personal Rule’s religious direction was defined by one individual, so too was its political direction – by Thomas Wentworth, later known as the Earl of Strafford. Serving as Lord Deputy of Ireland from 1632 to 1640, he set out with the aims of ‘civilising’ the Irish population, increasing revenue for the Crown, and challenging Irish titles to land – all under the umbrella term of ‘Thorough’, which aspired to concentrate power, crack down on opposition figures, and essentially preserve the absolutist nature of Charles’ rule during the 1630s.

Regarding Wentworth’s aims toward Irish Catholics, Ian Gentles’ 2007 work The English Revolution and the Wars in the Three Kingdoms argues that the friendships Wentworth maintained with Laud and with John Bramhall, the Bishop of Derry, “were a sign of his determination to Protestantize and Anglicize Ireland”. Devoted to a Catholic crackdown as soon as he reached Irish shores, Wentworth would subsequently refuse to recognise the legitimacy of Catholic officeholders in 1634, and managed to reduce Catholic representation in Ireland’s Parliament by a third between 1634 and 1640 – this, at a time when Catholics made up 90% of the country’s population. An even clearer indication of Wentworth’s hostility to Catholicism was his aggressive policy of land confiscation. Challenging Catholic property rights in Galway, Kilkenny and other counties, Wentworth would bully juries into returning a verdict favourable to the King, and even those Catholics who were granted their land back (albeit only three-quarters of it) were now required to make regular payments to the Crown. Wentworth’s enforcement of Charles’ religious priorities was further evidenced by his reaction to those in Ireland who signed the National Covenant: the accused were hauled before the Court of Castle Chamber (Ireland’s equivalent of the Star Chamber) and forced to renounce ‘their abominable Covenant’ as ‘seditious and traitorous’.

Seemingly in keeping with the other figures of the Personal Rule, Wentworth was notably tyrannical in his governing style. Sir Piers Crosby and Lord Esmonde were convicted of libel by the Court of Castle Chamber for accusing Wentworth of involvement in the death of Esmonde’s relative, and Lord Valentia was sentenced to death for “mutiny” – in fact, he’d merely insulted the Earl.

In considering Wentworth as a political figure, it is very easy to view him as merely another tyrannical brute, carrying out the orders of his King. Indeed, his time as Charles’ personal advisor (from 1639 onwards) certainly supports this view: he once told Charles that he was “loose and absolved from all rules of government”, and was quick to advocate war with the Scots. However, Wentworth also saw great successes during his time in Ireland; he raised Crown revenue substantially by taking back Church lands, and purged the Irish Sea of pirates. By the time of his execution in May 1641, Wentworth possessed a reputation amongst Parliamentarians very much like that of the Duke of Buckingham; both men came to wield tremendous influence over Charles, as well as great offices and positions.

In the areas considered thus far, opposition to the Personal Rule appears to have been a rare occurrence, especially in any organised or effective form. Indeed, Durston claims that the 1630s saw “few overt signs of domestic conflict or crisis”, viewing the period as altogether stable and prosperous. However, whilst certainly limited, the small amount of resistance can be viewed as representing a far more widespread feeling of resentment amongst the English populace. Whilst many actions received little pushback from the masses, the gentry, many of whom were becoming increasingly disaffected with the Personal Rule’s direction, gathered in opposition. Most notably, John Pym, the Earl of Warwick, and other figures collaborated with the Scots to launch a dissident propaganda campaign criticising the King, as well as encouraging local opposition (which saw some success, such as the mobilisation of the Yorkshire militia). Charles’ effective use of the Star Chamber, however, ensured that opponents – usually those who voiced opposition to royal decisions – were swiftly dealt with.

The historiographical debate surrounding the Personal Rule, and the Caroline Era more broadly, was and continues to be dominated by Whig historians, who view Charles as foolish, malicious, and power-hungry, and his rule without Parliament as destabilising, tyrannical and a threat to the people of England. A key proponent of this view is S.R. Gardiner who, believing the King to have been ‘duplicitous and delusional’, coined an alternative term to ‘Personal Rule’ – the Eleven Years’ Tyranny. This position has survived into the latter half of the 20th Century, with Charles having been labelled by Barry Coward as “the most incompetent monarch of England since Henry VI”, and by Ronald Hutton, as “the worst king we have had since the Middle Ages”. 

Recent decades have seen, however, the attempted rehabilitation of Charles’ image by Revisionist historians, the most well-known, as well as the most controversial, being Kevin Sharpe. Responsible for the landmark study of the period, The Personal Rule of Charles I, published in 1992, Sharpe came to be Charles’ staunchest modern defender. In his view, the 1630s, far from a period of tyrannical oppression and public rebellion, were a decade of “peace and reformation”. During Charles’ time as an absolute monarch, his lack of parliamentary limits and regulations allowed him to achieve a great deal: Ship Money strengthened the Navy’s numbers, Laudian reforms meant a more ordered and regulated national church, and Wentworth dramatically raised Irish revenue for the Crown – all this, and much more, without any real organised or overt opposition figures or movements.

Understandably, the Sharpian view has received significant pushback, primarily for taking an overly optimistic view and selectively mentioning the Personal Rule’s positives. Encapsulating this criticism, David Smith wrote in 1998 that Sharpe’s “massively researched and beautifully sustained panorama of England during the 1630s … almost certainly underestimates the level of latent tension that existed by the end of the decade”. This has been built on by figures like Esther Cope: “while few explicitly challenged the government of Charles I on constitutional grounds, a greater number had experiences that made them anxious about the security of their heritage”.

It is worth noting, however, that a year before his death in 2011, Sharpe came to consider the views of his fellow historians, acknowledging that Charles’ lack of political understanding had endangered the monarchy and, more seriously, that by the end of the 1630s the Personal Rule was indeed facing mounting and undeniable criticism, from both Charles’ court and the public.

Sharpe’s unpopular perspective has been built upon by other historians, such as Mark Kishlansky. Publishing Charles I: An Abbreviated Life in 2014, Kishlansky viewed parliamentarian propaganda of the 1640s, as well as centuries of consistent smearing by historians, as having resulted in Charles being viewed “as an idiot at best and a tyrant at worst”, labelling him “the most despised monarch in Britain’s historical memory”. Charles, however, had no real preparation for the throne – it was always his older brother Henry who was the heir apparent. Additionally, once he was King, Charles’ Parliaments were stubborn and uncooperative – by refusing to provide him with the necessary funding, for example, they forced him to enact the Forced Loan. Kishlansky does, however, concede the damage caused by Charles’ unmoving belief in the Divine Right of Kings: “he banked too heavily on the sheer force of majesty”.

Charles’ personality, ideology and early life fundamentally meant an icy relationship with Parliament, one which grew into mutual distrust and, eventually, dissolution. The period of Personal Rule remains a highly debated topic within academic circles, with the recent arrival of Revisionism posing a challenge to the long-established negative view of the Caroline Era. Whether or not the King’s financial, religious, and political actions were met with a discontented populace or outright opposition, it remains the case that the identity crisis facing the period – that between tyranny and stability – has yet to be conclusively put to rest.



All States Desire Power: The Realist Perspective

Within the West, the realm of international theory has, since 1945, been a discourse dominated almost entirely by the Liberal perspective. Near-universal amongst the foreign policy establishments of Western governments, a focus on state cooperation, free-market capitalism and, more broadly, internationalism is really the only position held by most leaders nowadays – just look at ‘Global Britain’. As Francis Fukuyama noted, the end of the Cold War (and of the Soviet Union) served as a political catalyst, bringing about ‘the universalisation of Western liberal democracy as the final form of human government’.

Perhaps even more impactful, however, were the immediate post-war years of the 1940s. With the Continent reeling from years of physical and economic destruction, the feeling amongst the victors was understandably a desire for greater closeness, security and stability. This resulted in numerous alliances being formed: political (the UN in 1945), military (NATO in 1949), and economic (the various Bretton Woods organisations). For Europe, this focus on integration manifested itself in blocs like the ECSC and EEC, which would culminate in the Maastricht Treaty and the EU.

This worldview, however, faces criticism from the advocates of another: Realism. The concern of states shouldn’t, as Liberals claim, be forging stronger global ties or forming more groups – instead, nations should be domestically minded, concerned with their internal situation and safety. For Realists, this is what foreign relations are about: keeping to oneself, and furthering the interests of the nation above those of the wider global community.

To better understand Realism as an ideological school, we must first look to theories of human nature. From the perspective of Realists, the motivations and behaviour of states can be traced back to our base animalistic instincts, with the work of Thomas Hobbes being especially noteworthy. For the 17th Century thinker, before the establishment of a moral and ordered society (by the absolute Sovereign), Man is concerned only with surviving, protecting selfish interests and dominating other potential rivals. On a global scale, these are the priorities of nation-states and their leaders – Hans Morgenthau famously noted that political man was “born to seek power”, possessing a constant need to dominate others. However much influence or power a state may possess, self-preservation is always a major goal. Faced with the constant threat of rivals with opposing interests, states are always seeking a guarantee of protection – for Realists, the existence of intergovernmental organisations (IGOs) is an excellent example of this. Whilst NATO and the UN may seem the epitome of Liberal cooperation, what they truly represent is states ensuring their own safety.

One of the key pillars of Realism as a political philosophy is the concept of the Westphalian System, and how that relates to relationships between countries. Traced back to the Peace of Westphalia in 1648, the principle essentially asserts that all nation-states have exclusive control (absolute sovereignty) over their territory. For Realists, this has been crucial to their belief that states shouldn’t get involved in the affairs of their neighbours, whether that be in the form of economic aid, humanitarian intervention or furthering military interests. It is because of this system that states are perceived as the most important, influential and legitimate actors on the world stage: IGOs and other non-state bodies can be moulded and corrupted by various factors, including the ruthless self-interest of states.

With the unique importance of states enshrined within Realist thought, the resulting global order is one of ‘international anarchy’ – essentially a system in which state-on-state conflict is inevitable and frequent. The primary reason for this can be traced back to Hobbes’ 1651 work Leviathan: with no higher authority to enforce rules and settle disputes, people (and states) will inevitably come into conflict and lead ‘nasty, brutish and short’ existences (an idea further expanded upon in Hedley Bull’s The Anarchical Society). Left in a lawless situation, with neither guaranteed protection nor guaranteed allies (all states are, of course, potential enemies), it’s every man for himself. At this point, Liberals will be eager to point out supposed ‘checks’ on the power of nation-states. Whilst we’ve already tackled the Realist view of IGOs, the existence of international courts must surely hold rogue states accountable, right? Well, the sanctity of state sovereignty limits the power of essentially all such organisations: the rulings of the International Court of Justice, for instance, lack enforcement and can be blatantly ignored (e.g., the court advised Israel against building a wall along the Palestinian border in 2004, of which the Israelis took no notice). Within the harsh world we live in, states are essentially free to do as they wish, consequences be damned.

Faced with egocentric neighbours, the inevitability of conflict and no referee, it’s no wonder states view power as the means of survival. Whilst Realists agree that all states seek to accumulate power (hard military power in particular), there exists debate as to the intrinsic reason – following this accumulation, what is the ultimate aim? One perspective, posited by thinkers like John Mearsheimer (and Offensive Realists generally), suggests that states are concerned with becoming the undisputed hegemon within a unipolar system, where they face no danger – once the most powerful, your culture can be spread, your economy strengthened, and your interests more easily defended. Indeed, whilst the United States may currently occupy the position of hegemon, Mearsheimer (as well as many others) has been cautiously watching China – the CCP leadership clearly harbour dreams of world takeover.

Looking to history, the European empires of old were fundamentally creations of hegemonic ambition. Able to access the rich resources and unique climates of various lands, nations like Britain, Spain and Portugal possessed great international influence, and at various points dominated the global order. Indeed, when the British Empire peaked in the early 1920s, it ruled close to 500 million people and covered a quarter of the Earth’s land surface – history’s biggest empire. Existing during a period of history in which bloody, expensive wars were commonplace, these countries did what they believed necessary, rising to the top and brutally suppressing those who threatened their positions – regional control was ensured, and idealistic rebels brought to heel.

In stark contrast is the work of Defensive Realists, such as Kenneth Waltz, who suggest that, concerned more with security than global dominance, states accrue power to ensure their own safety and, far from harbouring lofty ideas of hegemony, favour a cautious approach to foreign policy. This kind of thinking was seen amongst ‘New Left’ Revisionist historians in the aftermath of the Cold War, for whom the narrative of Soviet continental dominance (through the takeover of Eastern Europe) was a myth. What Stalin truly desired, they argued, was to solidify the USSR’s position through the creation of a buffer zone, in response to the increasingly anti-Soviet measures of President Truman (which included Marshall Aid to Europe, and the Truman Doctrine).

Considering Realism within the context of the 21st Century, the ongoing Russo-Ukrainian War seems the obvious case study to examine. Within academic circles, John Mearsheimer has been the most vocal regarding Ukraine’s current predicament – a fierce critic of American foreign policy for decades now, he views NATO’s eastern expansion as having worsened relations with Russia and fuelled Putin’s paranoia. From Mearsheimer’s perspective, Putin’s ‘special military operation’ is therefore understandable and arguably justifiable: the West have failed to respect Russia’s sphere of influence, failed to acknowledge them as a fellow Great Power, and consistently thwarted any pursuit of their regional interests.

Alongside this, Britain’s financial involvement in this conflict can and should be viewed as willing intervention, and one that is endangering the already-frail British economy. It is all well and good to speak of defending rights, democracy and Western liberalism, but there comes a point where our politicians and media must be reminded: the national interest is paramount, always. This need not be our fight, and the aid money we’re providing the Ukrainians (in the hundreds of billions) should instead be going towards the police, housing, strengthening the border, and other domestic issues.

Our politicians and policymakers may want a continuance of idealistic cooperation and friendly relations, but the brutal, unfriendly reality of the system is becoming unavoidable. Self-interested leaders and their regimes are constantly looking to gain more power, influence and territory. By and large, bodies like the UN are essentially powerless; their decisions can’t be enforced, and sovereignty acts as an unbreachable barrier. Looking ahead to the UK’s future, we must be more selfish, focused on making British people richer and safer, and on putting our national interests above childish notions of eternal friendship.



John Galt, Tom Joad, and other Polemical Myths

Just about the only titles by Ayn Rand I’d feel comfortable assigning my students without previous suggestion by either student or boss would be Anthem or We the Living, mostly because they fit into the broader genres of dystopian and biographical fiction, respectively, and can, thus, be understood in context. Don’t get me wrong: I’d love to teach The Fountainhead or Atlas Shrugged, if I could find a student nuanced (and disciplined) enough to handle those two; however, if I were to find such a student, I’d probably skip Rand and go straight to Austen, Hugo, and Dostoevsky—again, in part to give students a context of the novelistic medium from which they can better understand authors like Rand.

My hesitation to teach Rand isn’t one of dismissal; indeed, it’s the opposite—I’ve, perhaps, studied her too much (certainly, during my mid-twenties, too exclusively). I could teach either of her major novels, with understanding of both plot and philosophy, having not only read and listened to them several times but also read most of her essays and non-fiction on philosophy, culture, art, fiction, etc. However, I would hesitate to teach them because they are, essentially, polemics. Despite Rand’s claiming it was not her purpose, the novels are didactic in nature: their events articulate Rand’s rationalistic, human-centric metaphysics (itself arguably a distillation of Aristotelian natural law, Lockean rights, and Nietzschean heroism filtered through Franklin, Jefferson, and Rockefeller and placed in a 20th-century American context—no small feat!). Insofar as they do so consistently, The Fountainhead and Atlas Shrugged succeed, and they are both worth reading, if only to develop a firsthand knowledge of the much-dismissed Rand’s work, as well as to understand their place in 20th-century American culture and politics.

All that to say that I understand why people, especially academics, roll their eyes at Rand (though at times I wonder if they’ve ever seriously read her). The “romantic realism” she sought to develop to glorify man as (she saw) man ought to be, which found its zenith in the American industrialist and entrepreneur, ran counter to much that characterized the broader 20th-century culture (both stylistically and ideologically), as it does much of the 21st. Granted, I may have an exaggerated sense of the opposition to Rand—her books are still read in and out of the classroom, and some of her ideas still influence areas of at least American culture—and one wonders if Rand wouldn’t take the opposition, itself, as proof of her being right (she certainly did this in the last century). However, because of the controversy, as well as the ideology, that structures the novels, I would teach her with a grain of salt, not wanting to misuse my position teaching students who are, essentially, other people’s kids and who probably don’t know and haven’t read enough to understand Rand in context. For this fact, if not for the reasoning, I can imagine other teachers applauding me.

And yet, how many academics would forego including Rand in a syllabus and, in the same moment, endorse teaching John Steinbeck without a second thought?

I generally enjoy reading books I happened to miss in my teenage years. Had I read The Great Gatsby any sooner than I did in my late twenties, I would not have been ready for it, and the book would have been wasted on me. The same can be said of The Scarlet Letter, 1984, and all of Dostoevsky. Even the books I did read have humbled me upon rereading; Pride and Prejudice wasn’t boring—I was.

Reading through The Grapes of Wrath for the first time this month, I am similarly glad I didn’t read it in high school (most of my peers were not so lucky, having had to read it in celebration of Steinbeck’s 100th birthday). The fault, dear Brutus, is not in the book (though it certainly has faults) but in ourselves—that we, as teenagers who lack historical, political, and philosophical context, are underlings. One can criticize Atlas Shrugged for presenting a selective, romanticized view of the capitalist entrepreneur (which, according to Rand’s premises, was thorough, correct, consistent, and, for what it was, defensible), a view that might lead teenagers to be self-worshipping assholes who, reading Rand without nuance, take the book as justification for mistaking their limited experience of reality for their rational self-interest. One can say much the same, though for ideas fundamentally opposed to Rand’s, of The Grapes of Wrath.

A member of the Lost Generation, John Steinbeck was understandably jaded in his view of 19th-century American ideals. Attempting to take a journalistic, modern view of the Great Depression and Dust Bowl from the bottom up, he gave voice to the part of American society that, but for him, may have remained inarticulate and unrecorded. Whatever debate can be had about the origins of Black Tuesday (arguably beginning more in Wilson’s Washington and Federal Reserve than on Wall Street), the Great Depression hit the Midwest hardest, and the justifiable sense that Steinbeck’s characters are unfair victims of others’ depredations pervades The Grapes of Wrath, just as it articulates one of the major senses of the time. When I read the book, I’m not only reading of the Joad family: I’m reading of my own grandfather, who grew up in Oklahoma and later Galveston, TX. He escaped the latter effects of the Dust Bowl by going not to California but to Normandy. I’m fortunate to have his journal from his teenage years; other Americans who don’t have such a journal have Steinbeck.

However, along with the day-in-the-life (in which one would never want to spend a day) elements of the plot, the book nonetheless offers a selectively, one might even say romantically, presented ideology in answer to the plot’s conflict. In response to the obstacles and unfairness depicted in The Grapes of Wrath, one finds consistent advocacy of revolution among the out-of-work migrants who comprise most of the book. Versus Rand’s extension of Dagny Taggart or Hank Rearden’s sense of pride, ownership, and property down to the smallest elements of their respective businesses, one finds in Steinbeck the theme of a growing disconnect between legal ownership and the right to the land.

In the different reflections interpolated throughout the Joads’ plot, Steinbeck describes how, from his characters’ view, there had been a steady divorce over the years between legal ownership of the land and appreciation for it. This theme was not new to American literature. The “rural farmer vs city speculator” mythos is one of the fundamental characteristics of American culture, reaching back to Jefferson’s Democratic Republicans’ opposition to Adams’s Federalists, and the tension between the southwest frontiersman and the northeast banker would play a major role in the culture of self-reliance, the politics of the Jacksonian revolution onward, and the literature of Mark Twain and others. Both sides of the tension attempt to articulate in what the inalienable right to property inheres. Is it in the investment of funds and the legal buying and owning of land, or is it in the physical production of the land, perhaps in spite of whoever’s name is on the land grant or deed? Steinbeck is firmly in the latter camp.

However, in The Grapes of Wrath one finds not a continuation of the yeoman farmer mythos but an arguable undermining of the right to property and profit, itself, that undergirds the American milieu which makes the yeoman farmer possible, replacing it with an (albeit understandable) “right” based not on production and legal ownership, but on need. “Fallow land’s a sin,” is a consistent motif in The Grapes of Wrath, especially, argue the characters, when there are so many who are hungry and could otherwise eat if allowed to plant on the empty land. Steinbeck does an excellent job effecting sympathy for the Joads and other characters who, having worked the soil their whole lives, must now compete with hundreds of others like them for jobs paying wages that, due to the intended abundance of applicants, fall far short of what is needed to fill their families’ stomachs.

Similarly, Steinbeck goes to great pains to describe the efforts of landowners to keep crop prices up by punishing attempts to illegally grow food on the fallow land or pick the fruit left to rot on trees, as well as the plot, narrowly evaded by the Joads, to eradicate “reds” trying to foment revolution in one of the Hoovervilles of the book (Tom Joad had, in fact, begun to advocate rising up against landowners in more than one instance). In contrast to the Hoovervilles and the depredations of locals against migrant Okies stands the government camp, safely outside the reach of the local, unscrupulous, anti-migrant police and fitted out with running water, beneficent federal overseers, and social events. In a theme reminiscent of the 19th-century farmers’ looking to the federal government for succor amidst an industrializing market, Steinbeck concretizes the relief experienced in the Great Depression by families like the Joads at the prospects of aid from Washington.

However, just as Rand’s depiction of early twentieth-century America is selective in its representation of the self-made-man ethos of her characters (Rand omits, completely, World War I and the 1929 stock market crash from her novels), Steinbeck’s representation of the Dust Bowl is selective in its omissions. The profit-focused prohibitions against the Joads’ working the land were, in reality, policies required by FDR’s New Deal programs—specifically the Agricultural Adjustment Act, which required the burning of crops and burying of livestock in mass graves to maintain crop prices and which was outlawed in 1936 by the Supreme Court. It is in Steinbeck’s description of this process, which avoids explicitly describing the federal government’s role therein, that one encounters the phrase “grapes of wrath,” presaging a presumable event—an uprising?—by the people: “In the souls of the people the grapes of wrath are filling and growing heavy, growing heavy for the vintage.” Furthermore, while Rand presents, if in the hypothetical terms of narrative, how something as innocuous and inevitable as a broken wire in the middle of a desert can have ramifications that reach all the way to its company’s highest chair, Steinbeck’s narrative remains focused on the Joads, rarely touching on the economic exigencies experienced by the local property and business owners except in relation to the Joads and to highlight the apparent inhumanity of the propertied class (which, in such events as the planned fake riot at the government camp dance party, Steinbeck presents to great polemical effect).

I use “class” intentionally here: though the Great Depression affected all, Steinbeck’s characters often adopt the class-division viewpoint not only of Marx but of Hegel, interpreting the various landowners’ actions as being intentionally taken at the expense of the lower, out-of-work, classes. Tom Joad’s mother articulates to Tom why she is, ultimately, encouraged by, if still resentful of the apparent causers of, their lot:

“Us people will go on living when all them people is gone. Why, Tom, we’re the people that live. They ain’t gonna wipe us out. Why, we’re the people—we go on.”

“We take a beatin’ all the time.”

“I know.” Ma chuckled. “Maybe that makes us tough. Rich fellas come up an’ they die, an’ their kids ain’t no good, an’ they die out. But, Tom, we keep a-comin’. Don’ you fret none, Tom. A different time’s comin’.”

Describing, if in fewer words than either Hegel or Marx, the “thesis-antithesis-synthesis” process of historical materialism, whereby their class is steadily strengthened by its adverse circumstances in ways the propertied class is not, Mrs. Joad articulates an idea that pervades much of The Grapes of Wrath: the sense that the last, best hope and strength of the put-upon lower classes is found in their being blameless amidst the injustice of their situation, and that their numbers make their cause inevitable.

This, I submit, is as much a mythos—if a well-stylized and sympathetically presented one—as Rand’s depiction of the producer-trader who is punished for his or her ability to create, and, save for the discernible Marxist elements in Steinbeck, both are authentically American. Though the self-prescribed onus of late 19th- and early 20th-century literature was partially journalistic in aim, Steinbeck was nonetheless a novelist, articulating not merely events but the questions beneath those events, and concretizing the perspectives and issues involved into characters and plots that create a story in the folk fairy tale sense: a mythos that conveys a cultural identity. Against Rand’s modernizing of the self-made man, Steinbeck resurrects the soul of the Grange Movement of farmers who, for all their work ethic and self-reliance, felt left behind by the very country they fed. That The Grapes of Wrath is polemical—from the Greek πολεμικός for “warlike” or “argumentative”—does not detract from the project (it may be an essential part of it). Indeed, for all the license and selectivity involved in the art form, nothing can give fuel to a cause like a polemical novel—as Uncle Tom’s Cabin, The Jungle, and many others show.

However, when it comes to assigning polemics to students without hesitation, I…hesitate. Again, the issue lies in recognizing (or, for most students, being told) that one is reading a polemic. When one reads a polemical novel, one is often engaging, in some measure, with politics dressed up as story, and it is through this lens and with this caveat that such works must be read—even (maybe especially!) when they are about topics with which one agrees. As in many things, I prefer to defer to Aristotle, who, in the third section of Book I of the Nicomachean Ethics, cautions against young people engaging in politics before they first learn enough of life to provide context:

Now each man judges well the things he knows, and of these he is a good judge. And so the man who has been educated in a subject is a good judge of that subject, and the man who has received an all-round education is a good judge in general. Hence a young man is not a proper hearer of lectures on political science; for he is inexperienced in the actions that occur in life, but its discussions start from these and are about these; and, further, since he tends to follow his passions, his study will be vain and unprofitable, because the end aimed at is not knowledge but action. And it makes no difference whether he is young in years or youthful in character; the defect does not depend on time, but on his living, and pursuing each successive object, as passion directs.

Of course, the implicit answer is to encourage young people (and ourselves) to read not less but more—and to read with the knowledge that their own interests, passions, neuroses, and inertias might be unseen participants in the process. Paradoxically, it may be by reading more that we can even start to read. Rand becomes much less profound, and perhaps more enjoyable, after one reads the Aristotle, Hugo, and Nietzsche who made her, and I certainly drew on American history (economic and political) and elements of continental philosophy, as well as other works of Steinbeck and the Lost Generation, when reading The Grapes of Wrath. Yet, as Aristotle implies, young people haven’t had the time—and, more importantly, the metaphysical and rhetorical training and self-discipline—to develop such reflection as readers (he said humbly and as a lifelong student, himself). Indeed, as an instructor I see this not as an obstacle but an opportunity—to teach students that there is much more to effective reading and understanding than they might expect, and that works of literature stand not as ancillary to the process of history but as loci of its depiction, reflection, and motivation.

Perhaps I’m exaggerating my case. I have, after all, taught polemical novels to students (Anthem among them, as well as, most recently, 1984 to a middle schooler), and a novel I’ve written and am trying to get published is, itself, at least partially polemical on behalf of keeping Shakespeare in the university curriculum. Indeed, Dostoevsky’s polemical burlesque of the psychology behind Russian socialism, Devils, or The Possessed, so specifically predicted the motives and method of the Russian Revolution (and any other socialist revolution) more than fifty years before it happened that it should be required reading. Nonetheless, because the content and aim of a work require a different context for teaching, a unit on Devils or The Grapes of Wrath would look very different from one on, say, The Great Gatsby. While the latter definitely merits offering background to students, the former would need to include enough background on the history and perspectives involved for students to be able to recognize them. The danger of omitting background from Fitzgerald would be an insufficient understanding of and immersion in the plot; from Steinbeck, an insufficient knowledge of the limits of and possible counters to the argument.

Part of the power and danger of polemical art lies in its using a fictional milieu to carry an idea that is not meant to be taken as fiction. The willing suspension of disbelief that energizes the former is what allows the latter idea to slip in as palatable. This can produce one of at least two results, both, arguably, artistic aberrations: either the idea is caught and disbelief is not able to be suspended, leaving the artwork feeling preachy or propagandistic, or the audience member gives him or herself over to the work completely and, through the mythic capability of the artistic medium, becomes uncritically possessed by the idea, deriving an identity from it while believing they are merely enjoying and defending what they believe to be great art. I am speaking from more than a bit of reflection: whenever I see some millennial on Twitter interpret everything through the lens of Harry, Ron, and Hermione, I remember mid-eye-roll that I once did the same with Dagny, Francisco, and Hank.

Every work of art involves a set of values it seeks to concretize and communicate in a certain way, and one culture’s mythos may be taken by a disinterested or hostile observer to be so much propaganda. Because of this, even what constitutes a particular work as polemical may, itself, be a matter of debate, if not personal taste. One can certainly read and gain much from reading any of the books I’ve mentioned (as The Grapes of Wrath’s Pulitzer Prize shows), and, as I said, I’m coming at Grapes with the handicap of its being my first read. I may very well be doing what I warn my students against doing, passing judgment on a book before I understand it; if I am, I look forward to experiencing a well-deserved facepalm moment in the future, which I aim to accelerate by reading the rest of Steinbeck’s work (Cannery Row is next). But this is, itself, part of the problem—or boon—of polemics: that to avoid a premature understanding one must intentionally seek to nuance one’s perspective, both positively and negatively, with further reading.

Passively reading Atlas Shrugged or The Grapes of Wrath, taking them as reality, and then interpreting all other works (and, indeed, all of life) through their lens is not dangerous because they aren’t real, but because within the limits of their selective stylization and values they are real. That is what makes them so powerful, and, as with anything powerful, one must learn how to use them responsibly—and be circumspect when leading others into them without also ensuring they possess the discipline proper to such works.



Eve: The Prototype of the Private Citizen

Written in the 1660s, John Milton’s Paradise Lost is the type of book I imagine one could spend a lifetime mining for meaning and still be left with something to learn. Conceived as an English Epic that uses the poetic forms and conventions of Homeric and Ovidic antiquity to present a Christian subject, it yields as much to the student of literature as it does to students of history and politics, articulating in its retelling of the Fall many of the fundamental questions at work in the post-Civil-War body politic of the preceding decade (among many other things). Comparable with Dante’s Inferno in form, subject, and depth, Paradise Lost offers—and requires—much to and from readers, and it is one of the deepest and most complex works in the English canon. I thank God Milton did not live a half century earlier or write plays, else I might have to choose between him and Shakespeare—because I’d hesitate to simply pick Shakespeare.

One similarity between Milton and Shakespeare that has import to today’s broader discussion involves the question of whether they present their female characters fairly, believably, and admirably, or merely misogynistically. Being a Puritan Protestant from the 1600s writing an Epic verse version of Genesis 1-3, Milton must have relegated Eve to a place of silent submission, no? This was one of the questions I had when I first approached him in graduate school, and, as I had previously found when approaching Shakespeare and his heroines with the same query, I found that Milton understood deeply the gender politics of Adam and Eve, and he had a greater respect for his heroine than many current students might imagine.

I use “gender politics” intentionally, for it is through the different characterizations of Adam and Eve that Milton works out the developing conception of the citizen in an England that had recently executed its own king. As I’ve written in my discussion of Shakespeare’s history plays, justified or not, regicide has comprehensive effects. Thus, the beheading of Charles I on 30 January 1649 had implications for all 17th-century English citizens, many of which were subsequently written about by thinkers like Margaret Cavendish and John Locke. At issue was the question of the individual’s relation to the monarch: does the citizen’s political identity inhere in the king or queen (Cavendish’s perspective), or does he or she exist as a separate entity (Locke’s)? Are they merely “subjects” in the sense of “the king’s subjects,” or are they “subjects” in the sense of being an active agent with an individual perspective that matters? Is it Divine Right, conferred on and descended from Adam, that makes a monarch, or is it the consent of the governed, of which Eve was arguably the first among mankind?

Before approaching such topics in Paradise Lost, Milton establishes the narrative framework of creation. After an initial prologue that pays homage to the classical invocation of the Muses even as it undercuts the pagan tradition and places it in an encompassing Christian theology (there are many such nuances and tensions throughout the work), Milton’s speaker introduces Satan, né Lucifer, having just fallen with his third of heaven after rebelling against the lately announced Son. Thinking, as he does, that the Son is a contingent being like himself (rather than a non-contingent being coequal with the Father, as the Son is shown to be in Book III), Satan has failed to submit to a rulership he does not believe legitimate. He, thus, establishes one of the major themes of Paradise Lost: the tension between the individual’s will and God’s. Each character’s conflict inheres in whether or not they will choose to remain where God has placed them—which invariably involves submitting to an authority that, from their limited perspective, they do not believe deserves their submission—or whether they will reject it and prefer their own apparently more rational interests. Before every major character—Satan, Adam, and Eve—is a choice between believing the superior good of God’s ordered plan and pursuing the seemingly superior option of their individual desires.

Before discussing Eve, it is worth looking at her unheavenly counterpart, Sin. In a prefiguration of the way Eve was formed out of Adam before the book’s events, Sin describes to Satan how she was formed Athena-style out of his head when he chose to rebel against God and the Son, simultaneously being impregnated by him and producing their son, Death. As such she and Satan stand as a parody not only of the parent-progeny-partner relationship of Adam-Eve but also of God and the Son. Describing her illicit role in Lucifer’s rebellion, Sin says that almost immediately after birth,

I pleased and with attractive graces won

The most averse (thee chiefly) who full oft

Thyself in me thy perfect image viewing

Becam’st enamoured and such joy thou took’st

With me in secret that my womb conceived

A growing burden.

—Paradise Lost II.761-767

Here and in other places, Sin shows that her whole identity is wrapped up in Satan, her father-mate. In fact, there is rarely any instance where she refers to herself without also referring to him for context or as a counterpoint. Lacking her own, private selfhood from which she is able to volitionally choose the source of her identity and meaning, Sin lives in a state of perpetual torment, constantly being impregnated and devoured by the serpents and hellhounds that grow out of her womb.

Sin’s existence provides a Dantean concretization of Satan’s rebellion, which is elsewhere presented as necessarily one of narcissistic solipsism—a greatness derived from ignoring knowledge that might contradict his supposed greatness. A victim of her father-mate’s “narcissincest” (a term I coined for her state in grad school), Sin is not only an example of the worst state possible for the later Eve, but also, according to many critics, of women in 17th-century England, both in relation to their fathers and husbands, privately, as well as to the monarch (considered by many the “father of the realm”), publicly. Through this reading, we can see Milton investigating, through Sin, not only the theology of Lucifer’s fall, but also an extreme brand of royalism assumed by many at the time. And yet, it is not merely a simple criticism of royalism, per se: though Milton, himself, wrote other works defending the execution of Charles I and eventually became a part of Cromwell’s government, it is with the vehicle of Lucifer’s rebellion and Sin—whose presumptions are necessarily suspect—that he investigates such things (not the last instance of his work being as complex as the issues it investigates).

After encountering the narcissincest of the Satan-Sin relationship in Book II we are treated to its opposite in the next: the reciprocative respect between the Father and the Son. In what is, unsurprisingly, one of the most theologically-packed passages in Western literature, Book III seeks to articulate the throneroom of God, and it stands as the fruit of Milton’s study of scripture, soteriology, and the mysteries of the Incarnation, offering, perhaps wisely, as many questions as answers for such a scene. Front and center is, of course, the relationship between the Son and Father, Whose thrones are surrounded by the remaining two thirds of the angels awaiting what They will say. The Son and Father proceed to narrate to Each Other the presence of Adam and Eve in Eden and Satan’s approach thereunto; They then discuss what will be Their course—how They will respond to what They, omniscient, already know will happen.

One major issue Milton faced in representing such a discussion is the fact that it is not really a discussion—at least, not dialectically. Because of the triune nature of Their relationship, the Son already knows what the Father is thinking; indeed, how can He do anything but share His Father’s thoughts? And yet, the distance between the justice and foresight of the Father (in no ways lacking in the Son) and the mercy and love of the Son (no less shown in the words of the Father) is managed by the frequent use of the rhetorical question. Seeing Satan leave Hell and the chaos that separates it from the earth, the Father asks:

Only begotten Son, seest thou what rage

Transports our Adversary whom no bounds

Prescribed, no bars…can hold, so bent he seems

On desperate revenge that shall redound

Upon his own rebellious head?

Paradise Lost III.80-86

The Father does not ask the question to remedy any apparent lack of knowledge in the Son, since, divine like the Father, the Son can presumably see what He sees. Spoken in part for the sake of those angels (and readers) who do not share Their omniscience, the rhetorical questions between the Father and Son assume knowledge even while they posit different ideas. Contrary to the solipsism and lack of sympathy between Sin and Satan (who at first does not even recognize his daughter-mate), Book III shows the mutual respect and knowledge in the rhetorical questions between the Father and Son—who spend much of the scene describing Each Other and Their motives (which, again, are shared).

The two scenes between father figures and their offspring in Books II and III provide a backdrop for the main father-offspring-partner relationship of Paradise Lost: that of Adam and Eve—with the focus, in my opinion, on Eve. Eve’s origin story is unique in Paradise Lost: while she was made out of Adam and derives much of her joy from him, she was not initially aware of him at her nativity, and she is, thus, the only character who has experienced and can remember (even imagine) existence independent of a source.

Book IV opens on Satan reaching Eden, where he observes Adam and Eve and plans how to best ruin them. Listening to their conversation, he hears them describe their relationship and their respective origins. Similar to the way the Father and Son foreground their thoughts in adulatory terms, Eve addresses Adam as, “thou for whom | And from whom I was formed flesh of thy flesh | and without whom am to no end, my guide | And head” (IV.440-443). While those intent on finding sexism in the poem will, no doubt, jump at such lines, Eve’s words are significantly different from Sin’s. Unlike Sin’s assertion of her being a secondary “perfect image” of Satan (wherein she lacks positive subjectivity), Eve establishes her identity as being reciprocative of Adam’s in her being “formed flesh,” though still originating in “thy flesh.” She is not a mere picture of Adam, but a co-equal part of his substance. Also, Eve diverges from Sin’s origin-focused account by relating her need of Adam for her future, being “to no end” without Adam; Eve’s is a chosen reliance of practicality, not an unchosen one of identity.

Almost immediately after describing their relationship, Eve recounts her choice of being with Adam—which necessarily involves remembering his absence at her nativity. Hinting that were they to be separated Adam would be just as lost as she, if not more so (an idea inconceivable between Sin and Satan, and one foreshadowing Eve’s justification in Book IX for sharing the fruit with Adam, who finds himself in an Eve-less state), she continues her earlier allusion to being separated from Adam, stating that, though she has been made “for” Adam, he a “Like consort to [himself] canst nowhere find” (IV.447-48). Eve then remembers her awakening to consciousness:

That day I oft remember when from sleep

I first awaked and found myself reposed

Under a shade on flow’rs, much wond’ring where

And what I was, whence thither brought and how.

Paradise Lost IV.449-452

Notably seeing her origin as one not of flesh but of consciousness, she highlights that she was alone. That is, her subjective awareness preexisted her understanding of objective context. She was born, to use a phrase by another writer of Milton’s time, tabula rasa, without either previous knowledge or a mediator to grant her an identity. Indeed, perhaps undercutting her initial praise of Adam, she remembers it “oft”; were this not an image of the pre-Fall marriage, one might imagine the first wife wishing she could take a break from her beau—the subject of many critical interpretations! Furthermore, Milton’s enjambment allows a dual reading of “from sleep,” as if Eve remembers that day as often as she is kept from slumber—very different from Sin’s inability to forget her origin due to the perpetual generation and gnashing of the hellhounds and serpents below her waist. The privacy of Eve’s nativity so differs from Sin’s public birth before all the angels in heaven that Adam—her own father-mate—is not even present; thus, Eve is able to consider herself without reference to any other. Of the interrogative words with which she describes her post-natal thoughts—“where…what…whence”—she does not question “who,” further showing her initial isolation, which is so defined that she initially cannot conceive of another separate entity.

Eve describes how, hearing a stream, she discovered a pool “Pure as th’ expanse of heav’n” (IV.456), which she subsequently approached and, Narcissus-like, looked down into.

As I bent down to look, just opposite

A shape within the wat’ry gleam appeared

Bending to look on me. I started back,

It started back, but pleased I soon returned,

Pleased it returned as soon with answering looks

Of sympathy and love.

Paradise Lost IV.460-465

When she discovers the possibility that another person might exist, it is, ironically, her own image in the pool. In Eve, rather than in Sin or Adam, we are given an image of self-awareness, without reference to any preceding structural identity. Notably, she is still the only person described in the experience—as she consistently refers to the “shape” as “it.” Eve’s description of the scene contains the actions of two personalities with only one actor; that is, despite there being correspondence in the bending, starting, and returning, and in the conveyance of pleasure, sympathy, and love, there is only one identity present. Thus, rather than referring to herself as an image of another, as does Sin, it is Eve who is here the original, with the reflection being the image, inseparable from herself though it be. Indeed, Eve’s nativity thematically resembles the interaction between the Father and the Son, who, though sharing the same omniscient divinity, converse from seemingly different perspectives. Like the Father Who instigates interaction with His Son, His “radiant image” (III.63), in her first experience Eve has all the agency.

As the only instance in the poem when Eve has the preeminence of being another’s source (if only a reflection), this scene invests her interactions with Adam with special meaning. Having experienced this private moment of positive identity before following the Voice that leads her to her husband, Eve is unique in having the capacity to agree or disagree with her seemingly new status in relation to Adam, having remembered a time when it was not—a volition unavailable to Sin and impossible (and unnecessary) to the Son.

And yet, this is the crux of Eve’s conflict: will she continue to heed the direction of the Voice that interrupted her Narcissus-like fixation at the pool and submit herself to Adam? The ambivalence of her description of how she would have “fixed | Mine eyes till now and pined with vain desire” over her image had the Voice not come is nearly as telling as her confession that, though she first recognized Adam as “fair indeed, and tall!” she thought him “less fair, | Less winning soft, less amiably mild | Than that smooth wat’ry image” (IV.465-480). After turning away from Adam to return to the pool and being subsequently chased and caught by Adam, who explained the nature of their relation—how “To give thee being I lent | Out of my side to thee, nearest my heart, | Substantial life to have thee by my side”—she “yielded, and from that time see | How beauty is excelled by manly grace | And wisdom which alone is truly fair” (IV.483-491). One can read these lines at face value, hearing no undertones in her words, which are, after all, generally accurate, Biblically speaking. However, despite the nuptial language that follows her recounting of her nativity, it is hard for me not to read a subtle irony in the words, whether verbal or dramatic. That may be the point—that she is not an automaton without a will, but a woman choosing to submit, whatever be her personal opinion of her husband.

Of course, the whole work must be read in reference to the Fall—not merely as the climax which is foreshadowed throughout, but also as a condition necessarily affecting the writing and reading of the work, it being, from Milton’s Puritan Protestant perspective, impossible to correctly interpret pre-Fall events from a post-Fall state due to the noetic effects of sin. Nonetheless, in keeping with the generally Arminian tenor of the book—that every character must have a choice between submission and rebellion for their submission to be valid, and that the grace promised in Book III is “Freely vouchsafed” and not based on election (III.175)—I find it necessary to keep in mind, as Eve seems to, the Adam-less space that accompanied her nativity. Though one need not read all of her interaction with Adam as sarcastic, in most of her speech one can read a subtextual pull back to the pool, where she might look at herself, alone.

In Eve we see the fullest picture of what is, essentially, every key character’s (indeed, from Milton’s view, every human’s) conflict: to choose to submit to an assigned subordinacy or abstinence against the draw of a seemingly more attractive alternative, often concretized in what Northrop Frye calls a “provoking object”—the Son being Satan’s, the Tree Adam’s, and the reflection (and private self it symbolizes, along with an implicit alternative hierarchy with her in prime place) Eve’s. In this way, the very private consciousness that gives Eve agency is that which threatens to destroy it; though Sin lacks the private selfhood possessed by Eve, the perpetual self-consumption of her and Satan’s incestuous family allegorizes the impotent and illusory self-returning that would characterize Eve’s existence if she were to return to the pool. Though she might not think so, anyone who knows the myth that hers parallels knows that, far from limiting her freedom, the Voice that called Eve from her first sight of herself rescued her from certain death (though not for long).

The way Eve’s subjectivity affords her a special volition connects with the biggest questions of Milton’s time. Eve’s possessing a private consciousness from which she can consensually submit to Adam parallels John Locke’s “Second Treatise of Civil Government” of the same century, wherein he articulates how the consent of the governed precedes all claims of authority. Not in Adam but in Eve does Milton show that monarchy—even one as divine, legitimate, and absolute as God’s—relies on the volition of the governed, at least as far as the governed’s subjective perception is concerned. Though she cannot reject God’s authority without consequence, Eve is nonetheless able to agree or disagree with it, and through her Milton presents the reality that outward submission does not eliminate inward subjectivity and personhood (applicable as much to marriages as to monarchs, the two being considered parallel both in the poem and at the time of its writing); indeed, the inalienable presence of the latter is what gives value to the former and separates it from the agency-less state pitifully experienced by Sin.

And yet, Eve’s story (to say nothing of Satan’s) also stands as a caution against simply taking on the power of self-government without circumspection. Unrepentant revolutionary though he was, Milton was no stranger to the dangers of a quickly and simply thrown-off government, nor to those of an authority misused, and his nuancing of the archetype of all subsequent rebellions shows that he did not advocate rebellion as such. While Paradise Lost has influenced many revolutions (political in the 18th-century revolutions, artistic in the 19th-century Romantics, cultural in the 20th-century New Left), it nonetheless has an anti-revolutionary current. Satan’s presumptions and their later effects on Eve show the self-blinding that is possible for those who, simply trusting their own limited perception, push for an autonomy they believe will liberate them to an unfettered reason but which will, in reality, condemn them to a solipsistic ignorance.

By treating Eve, not Adam, as the everyman character who, like the character of a morality play, represents the psychological state of the tempted individual—that is, as the character with whom the audience is most intended to sympathize—Milton elevates her to the highest status in the poem. Moreover—and of special import to Americans like myself—as an articulation of an individual citizen whose relation to authority cannot be derived without consent, Eve stands as a prototype of the post-17th-century conception of the citizen that would lead not only to further changes between the British Crown and Parliament but also to a war for independence in the colonies. Far from relegating Eve to a secondary place of slavish submission, Milton arguably makes her the most human character in humanity’s first story; wouldn’t that make her its protagonist? As always, let this stimulate you to read it for yourself and decide. Because it integrates so many elements—many of which might defy new readers’ expectations in their complexity and nuance—Paradise Lost belongs as much on the bookshelf and the syllabus as Shakespeare’s Complete Works, and it presents a trove for those seeking to study the intersection not only of art, history, and theology, but also of politics and gender roles in a culture experiencing a fundamental change.



A Romantic Case for Anime

We’ve all felt it—the mixed excitement and dread at hearing a beloved book is set to be made into a movie. They might do it right, capturing not only key plot events but also (and more importantly) how it feels to be swept up in the work as a whole; 2020’s Emma with Anya Taylor-Joy comes to my mind, most of all for the way it captures how someone who understands and loves Austen’s ubiquitous irony might feel when reading her work. However, they also might do it poorly; despite both the 1974 and 2013 attempts being worth watching, I’ve yet to see a rendition of The Great Gatsby that captures the book’s plot and narrative tone in the right proportion (in my opinion, the 1974 version emphasizes the former but misses some of the latter, while parts of the 2013 version exaggerate the latter just to the border of parody). My readers have, no doubt, already imagined examples of works they’ve always wished could be faithfully put onto the screen and others they’d rather not see risked to the vicissitudes of translating from one medium to another.

The last decade has thankfully seen a growth in long-form, box-office-quality productions that makes it more possible than ever to imagine longer works being produced without curtailing their lengthy plotlines—for example, the BBC’s 2016 rendition of War and Peace. However, this leaves another, perhaps more important, hurdle to hazard: while live-action media can now faithfully follow the plots of the originals, there still remains the difficulty of conveying the tone and feel of the works, especially when different media necessarily have different capacities and limitations of representation. Though I’ve enjoyed productions that have been made, I don’t know that I would expect live-action renditions to reproduce the aesthetic impression of, say, Paradise Lost, The Hunchback of Notre Dame, or Crime and Punishment, and I worry that attempts to do so might mar more than measure up. The problem lies in the difficulty of translating characters’ inner experience—which is usually conveyed by a stylizing narrator—via the essentially externalistic medium of the camera eye.

While a live-action movie or series might remain faithful to the selective events in a plot, the lack of an interpretive narrator removes a key element of what defines epic poems and novels. Paradoxically, the narrowing of perspective through a stylizing narrator allows a story to move from the limits of natural events into the limitlessness of human perception and interpretation. Voiceover narrators can provide thematic stylization in film, as well as essential plot coherence, but it is still primarily the camera that replaces the literary narrator as the means of conveyance. Furthermore, if too ubiquitous, voiceovers can separate the audience from the action, which is the focus of film. Film’s power inheres in its ability to place the audience in the midst of a plot, removing as many frames between the watcher and the story’s events as possible. However, this is also why books are so difficult to translate: motion pictures focus on events, whereas the aesthetic experience of literature inheres in how characters and narrator experience said events.

The literary movement that focused most on the character’s experience (and, vicariously, ours) as the purpose of art was Romanticism. Romantic literature and poetry were less concerned with the subject matter than with its effect on the character’s emotions—in the sense that, from the generally Platonic metaphysics of the Romantics, the incidental reaches its fullest meaning by provoking an aesthetic experience far beyond it. From Hawthorne’s rose bush growing outside Salem’s prison, to Shelley’s secondhand rumination on the ruined feet of Ozymandias, to Keats’s apostrophe to the Grecian urn, the Romantics showed how part of the reality of an object involves its significance to the observer, and it was the role of the Romantic narrator and speaker to draw out that effect for the reader.

It is this essential influence of the narrator and characters’ inner lives on the great works’ aesthetic experience that makes me skeptical that even the best acting, camera work, and post-production effects can sufficiently replace them. It may be possible, and, again, I have very much enjoyed some renditions. Furthermore, not wanting to be the audience member who misses the Shakespeare performance for the open copy of the play on their lap, I tend to watch movie adaptations as distinct works rather than in strict relation to the originals. However, this, itself, may be a concession to my hesitance to trust film to live up to the aesthetic experience of certain books. I would, however, trust anime to do so.

While a history of Japanese manga and anime is beyond the scope of this piece (or my expertise), since choosing to explore the art form as a post-grad-school reward (or recovery—one can only stare at the sun that is Paradise Lost for so long) I’ve watched plenty of anime over the past ten years, and I have become convinced that it might serve as, at least, a middle ground when seeking to capture plot, narrative tone, and inner character experience in a motion medium. Anime is capable of handling virtually every story genre, and while it contains many of the same ridiculous hi-jinks and satire as Western cartoons and CG animation, it can also capture tragic pathos and sublime catharsis in ways that would be out of place in the vast majority of Western animation. This makes sense: originating in early 20th-century Japan, manga and anime were not subject to the same skepticism about artistic representations of transcendent value that characterized Western art after the move from 19th-century Romanticism and Realism to 20th-century modernism and post-modernism.

Of course, there have been exceptions; 20th-century Disney animation, or Marvel and DC Comics, were iconic because they attempted to be iconic—they unironically tried to depict in images those values and stories that are transcendent. However, even these were created predominantly with the child (or the childlike adult) in mind. Furthermore, while some ambivalence, if not cynicism, toward anime is certainly deserved, and while there are many incredibly satirical and humorous series, anime as an art form is not implicitly dismissive of narrative trustworthiness and characters’ experience of the transcendent in the same way that much of Western motion art is. Rather, anime conventionally allows for the sublime heights and deepest horrors that previously characterized Romanticism, all of which it presents through the stylization of animation. This stylization is able to act as an interpretive medium just like a novel’s narrator, contextualizing events through the experience of those involved in a way often eschewed by, if not unavailable to, film.

For an example, I submit Kaguya-sama: Love is War (Japanese Kaguya-sama wa Kokurasetai – Tensai-tachi no Ren’ai Zunōsen, “Kaguya Wants to Make Them Confess: The Geniuses’ War of Hearts and Minds”). Though a romantic comedy in the Slice-of-Life genre, it exemplifies anime’s ability to convey the heights and depths of inner experience of the characters—here Kaguya and Miyuki, a pair of high school teenagers who, as student council president and vice president, compete to be top of their class while being secretly in love with each other and too proud to admit it. As the English title conveys, a running metaphor through the show is the bellicose subtext of their attempts to maneuver each other into confessing their love first and, thus, losing the war; think Beatrice and Benedick with the extremizing effect of teenage hormones and motifs of heavy artillery.

Plot-wise, Love is War follows a standard rom-com formula, with tropes recognizable to Western audiences: the pride and prejudices of the characters, the much ado about things that end up being really nothing, the presence of a mutual friend who acts as an oblivious catalyst and go-between in the relationship, etc. However, the show reinvigorates these tropes by portraying via hyperbolic narrator the deuteragonists’ experience of the episodes’ conflicts, bringing audience members into the all-consuming tension of how a teenager might see something as minor as whether to share an item from their lunch. The combination of chess and military metaphors conveys the inner conflicts of the initially cold but gradually warming characters (the “tsundere” character type common in such anime), and the consistency of such motifs creates a unified aesthetic that, due in large part to the disconnect between the over-the-top tone and, in reality, low-stakes subject matter, is hysterical. Another unique aspect of Love is War is that, due to its focus on the characters’ experience of the plot (all the better for being trivially mundane), it’s a technically Romantic romantic comedy.

Love is War is, of course, a low-stakes example of what modern anime can do, though it did score three awards, including Best Comedy, at the 2020 Crunchyroll Anime Awards. A more serious example, Death Note, similarly conveys much of its gravitas through voiceover—this time the first-person narration of protagonist Light Yagami, a high schooler who, with the help of a book from the realm of the dead, is able to kill anyone whose name and face he knows, and of L, a mysterious and reclusive detective charged by Interpol to find him. Throughout the series—which employs attempts by characters to outwit each other similar to those of Love is War, if non-parodic—Light and L articulate their planned maneuvers and the implications thereof through inner voiceover. Not only does the narration lay out elements of their battle of wits that the audience might have missed, but it conveys the growing tension the two experience—especially Light, who, as he amasses fame as both menace and cult hero, experiences a growing egotism and subsequent paranoia around the possibility of being found out.

Just as Love is War is, in many ways, a parallel of Pride and Prejudice (Elizabeth and Darcy, themselves, both being tsundere characters), Death Note’s focus on a young man who wishes to achieve greatness by killing those deserving of death and who subsequently develops a maddening neurosis is virtually the same as that of Crime and Punishment—however enormously their plots and endings differ (Crime and Punishment lacks an explicit demonic presence like Death Note’s Shinigami Ryuk, the Death Note’s otherworldly owner; Dostoevsky would not employ the spectre of a conversant devil until The Brothers Karamazov—yet another point of consanguinity between anime like Death Note and his writing). Regardless of their differing plots, the anime’s inclusion of the characters’ inner thoughts and imaginations conveys an increasingly tense tone similar to how Dostoevsky steadily shows Raskolnikov’s moral unmooring, and the explanations and attempted self-justifications by both Light and L convey more than I think even the best cinema would be capable of showing.

I am not advocating that every narrative motif or figuration be included in page-to-screen renditions, nor that we cease trying to actively reinvigorate great works of art through judicious adaptations into new media. Yet, if the inner lives of teenagers—which are often exaggerated, if at times unnecessarily, to Romantic proportions—can be portrayed by anime to such comic and tragic effect, with the figuration and tone of the characters’ perceptions seamlessly paralleling the literal events without obscuring them, then I’d be interested to see what an anime Jane Eyre, The Alchemist, or Sula might look like. Based on the above examples, as well as anime heavyweights like Fullmetal Alchemist, Cowboy Bebop, and, if one is not faint of heart, Berserk, all of which present events in some measure through the background and perspective of the main characters, I could imagine the works of Milton, Hugo, Austen, Dostoevsky, and others in anime form, with the aesthetic experience of the original narration intact.



The Conservative Cope

According to recent polling by YouGov, a measly 1% of 18- to 24-year-olds plan to vote Conservative at the next general election. Having won roughly 20% of this demographic at the 2019 general election, the Conservative Party has lost 95% of its support amongst Britain’s youngest voters in less than four years.

In reaction to this collapse in support, journalists and commentators have taken to rehashing the same talking-points regarding Tory ineptitude and how to resolve it – build more houses, be more liberal, have younger parliamentarians, and so on.

I don’t intend to add to this ever-growing pile of opinion pieces. Instead, I want to put Tory ineptitude into perspective, in hopes of undermining the entrenched and parochial coping of Britain’s right-leaning politicians and commentariat.

Even though Churchill didn’t coin the phrase, right-leaning talking-heads maintain, even if not articulated as such, that “if you’re not a liberal at 20 you have no heart, if you’re not a conservative at 40 you have no brain”; to them, the progressive and liberal tendencies of the young are annoying, but natural and inevitable.

Of course, this is simply not true. Thatcher won the most support from 18- to 24-year-olds in 1979 and 1983, something which left-wing and right-wing critics are more than happy to point out, yet such doubters of the Iron Law of Liberal Youth have managed to reinvent the law, albeit without the caveat of an inevitable turn to the right in later life.

Socialists and capitalists don’t agree on many things, but they are united by the belief that Britain’s youth is a bastion of progressive leftism, marching in lock-step with other first-time voters around the world. In the former, this inspires great confidence; in the latter, this inspires a sense of foreboding.

Other commentators have blamed Brexit, which is also wrong. Despite the widely-cited age-gap between the average Remainer and Leaver, the UK’s relationship with the EU is pretty far down the average young person’s list of political priorities, hence why almost every avid post-Brexit remainer is a terminally online geriatric. Ironically, data from the British Election Study showed a gradual increase in support for the Conservatives amongst Britain’s younger voters between 2015 and 2019.

Any person that has met the new cohort of young conservatives will attest to their nationalistic and socially conservative modus operandi. With its failures on crime and immigration reduction broadcast across the nation, it’s unsurprising that such people would lose faith in the Conservative Party’s ability to govern as a conservative party.

Indeed, given the Conservative Party’s eagerness to hold onto the Cameronite ‘glory days’ of tinkering managerialism, interspersed with tokenistic right-wing talking-points (i.e., the things which actually matter to the conservative base), it’s little wonder that the Tories have failed to win the young.

The Conservative Party Conference has a less than palatable reputation, but when the bulk of events revolve around uninformed conversations about tech, financial quackery, achieving Net Zero and lukewarm criticisms of The Trans Business, it is unsurprising so many Tory activists choose to preoccupy themselves with cocaine and sodomy.

Contrast this with the European continent, where right-wing populist parties are doing remarkably well with a demographic the Tories have all but officially dismissed. In the second round of France’s 2022 presidential election, incumbent president Emmanuel Macron, a centrist liberal europhile, was re-elected for a second term with more than 58% of the vote. Although Macron obtained the majority of 18- to 24-year-olds who voted, it was the over-60s who provided the backbone of his re-election, giving him roughly 70% of their votes.

Moreover, whilst she was most popular with older voters (50- to 59-year-olds), the right-wing Marine Le Pen secured a sizeable portion of voters across all age brackets, especially those aged between 25 and 59, filling the chasm left behind by Macron’s near-monopolisation of France’s oldest citizens.

These patterns were generally replicated in the first round of voting, although the far-left Mélenchon garnered the most support from France’s youngest voters. At first glance, most right-leaning commentators would flippantly dismiss this as the wholesale liberal indoctrination of the youth, overlooking the astonishing fact that roughly 25% of France’s youngest voters support right-wing nationalism, whether that be Marine Le Pen or Eric Zemmour.

Due to growing suspicion of the two main parties in Germany, the centre-right Christian Democratic Union (CDU, otherwise known as the Union) and the centre-left Social Democratic Party (SPD), third parties have gained support from the disaffected young, such as the centre-left Greens, the centre-right Free Democratic Party (FDP) and the right-wing Alternative for Germany (AfD).

Whilst it’s not doing as well as the Greens with first-time voters on the national stage, the AfD is making strides at the state level and is doing noticeably well with Germans in their 30s, which isn’t insignificant in a country with a median age of 45. Compare this to Britain’s Conservatives, who start to falter with anyone below the age of 40!

Moreover, the AfD is effectively usurping the CDU as the main right-leaning political force in many parts of Germany. For example, the AfD was the most popular party with voters under 30 in the CDU stronghold of Saxony-Anhalt during the last state election, a forebodingly bittersweet centrist victory.

Similarly, Meloni’s centre-right coalition, dominated by the nationalist Brothers of Italy party, didn’t lead amongst the nation’s youngest voters (18- to 34-year-olds) at the last general election, but it came extremely close, gaining 30% of their votes compared to the centre-left coalition’s 33% – and it won every other age bracket. Again, not bad for a country with a median age just shy of 50.

Moreover, these trends transcend Western Europe, showing considerable signs of life in the East. Jobbik, the far-right opposition to Viktor Orban’s right-wing Fidesz party, is highly popular with university students, and despite losing the recent election, Poland’s right-wing Law and Justice party obtained roughly a third of first-time votes in the election four years prior.

Roughly a quarter of first-time voters in Slovakia opted for the People’s Party-Our Slovakia, a far-right party with neo-Nazi roots, and roughly 35% of Bulgarian voters between 18- and 30-years-old voted for the right at the last parliamentary election, centre-right and far-right included.

Evidently, the success of right-wing nationalism amongst young voters across Europe isn’t confined to republics. European constitutional monarchies, such as Sweden, Norway, and Spain, have also proven fertile ground for right-wing electoral success.

The Moderate Party, Sweden’s main centre-right political force, won the largest share of voters aged between 18 and 21, with the insurgent right-wing Sweden Democrats placing second amongst the same demographic, coming only a few points behind the Moderates, to whom they provide confidence-and-supply in government.

Further broken down by sex, the Sweden Democrats were distinctly popular with young Swedish men, and tied with the Social Democrats as the most popular party with Swedish men overall. Every age bracket below 65 was a close race between the Social Democrats and either the Moderates or the Sweden Democrats, whilst those aged 65 and over overwhelmingly voted for the Social Democrats.

As in the Netherlands, whilst the Labour Party and the Socialist Left Party were popular among young voters at the last Norwegian general election, support for the centre-right Conservative Party and the right-wing Progress Party didn’t trail far behind, with support for centre-left and centre-right parties noticeably increasing with age.

Whilst their recent showing wasn’t the major upset pollsters had anticipated, Spain’s right-wing Vox remains a significant political force, both as a national party and amongst the Spanish youth, being the third most popular party with voters aged 18 to 24.

Meanwhile, the centre-right People’s Party (PP) is the most popular party with voters aged between 18 and 34, with the centre-left Spanish Socialist Workers’ Party (PSOE) drawing most of its support from voters aged 55 and older, especially voters over 75.

Still, it is easy to see how sceptics might blame the right’s alleged inability to win over the young on our cultural differences with the European continent. After all, it’s clear that youth politics is taken more seriously on the continent. The JFvD, the youth wing of the right-wing Forum for Democracy (FvD) in the Netherlands, regularly organises activities which extend beyond campaign drudgery, from philosophy seminars to beach parties. Contrast this with the UK, where youth participation begins and ends with bag-carrying and leafleting; the drudgery of campaigning is only interspersed by instances of sexual harassment and other types of degenerate behaviour.

However, this suspicion is just as easily put to rest when we compare Britain to the rest of the Anglosphere, especially New Zealand, Canada, and the United States of America.

In the run-up to New Zealand’s general election, polling from The Guardian indicated greater support amongst voters aged 18 to 34 for the centre-right National Party (40%) than for the centre-left Labour Party (20%), a total reversal of the previous election, defying purported trends of a global leftward shift amongst younger generations.

More to the point, support was not going further left, with the centre-left Labour-Green coalition accounting for 34% of millennial votes, compared to the centre-right coalition’s rather astounding 50%; again, a complete reversal of previous trends and further proof that so-called ‘youthquakes’ aren’t as decisive as commentators and activists would have us believe.

Despite Labour’s success with young voters in 2017 and 2019, when the voter turnout of younger generations is as abysmal as Britain’s, it’s not exactly a given that parties and individuals of a non-socialistic persuasion should abdicate Britain’s future to a dopey loon like Corbyn. The creed of Britain’s youth isn’t socialism, but indifference.

If anything, right-leaning parties are more than capable of producing ‘youthquakes’ of their own. At a time when the British Conservatives are polling at 1% with their native young, Canada’s Conservative Party is the most popular party with young voters, polling at around 40% amongst 18- to 29-year-olds, and despite his depiction as a scourge upon America’s youth, Trump comfortably won white first-time voters in both 2016 and 2020. Perhaps age isn’t the main dividing line in the Culture War after all!

In conclusion, the success of the Conservative Party with younger voters does not hinge upon our electoral system, our constitutional order, our place in Europe or the Anglosphere. Simply put, the Tories’ inability to win over the young is not an inability at all, but the result of coping; a stubborn and ideological unwillingness motivated by geriatric hubris, disproven time and time again by the success of other right-wing parties across the Western world.



The Obsession with News

In 1980, Ted Turner and Reese Schonfeld co-founded the Cable News Network (CNN). Despite derision over the idea of a 24-hour rolling news channel, CNN became a massive hit and the forefather of today’s news system. In the 43 years since CNN first aired, news channels have changed from having bulletins every few hours to being on air 24/7. Our parents had to wait for the top of the hour for news, unless breaking news broke into programming, whilst we can just turn it on at the press of a button.

Whilst many may marvel at the idea of 24-hour news, it is part of why news today has its problems. As a result of constant media absorption, competition from social media and the internet, as well as a fast-paced world, society itself has become obsessed with the news. Every tiny little story becomes splashed across screens, both large and small, in a desperate attempt to capture the moment before it vanishes.

Everything is Breaking News

If, like me, you have BBC News app alerts on your phone, then this will be a familiar tale. The alert goes off. You check it. Whilst it’s officially classed as ‘Breaking News,’ it’s not really that important. Some things are of course important. Look at the death of Her Majesty The Queen last year. That was a news story that knocked everything else off the air. Considering that she had been our monarch since 1952, it’s fair to say that this was incredibly important breaking news.

Generally, the app applies the term ‘Breaking News’ rather liberally. Holly Willoughby leaving This Morning after fourteen years is not worth your phone going off. Beyoncé removing ‘offensive lyrics’ from an old song isn’t worth it either.

That also applies to news channels. Sky News and the BBC will quite happily run that ticker across the bottom of the screen for just about any reason. Rare is the day when the bottom of Sky News is not a flash of yellow and black. Even a slow news day will have breaking news just to keep things a bit fresh.

It’s understandable really. In this day and age, news travels fast. It comes and goes in the blink of an eye. News companies want to have their hold on a story before the next one comes. When Twitter/X or Facebook gets the news first, well, that’s one less story that they’ve managed to break to viewers. The big media organisations may have the means to research the stories and get the scoops, but they don’t ever get them out first. One is more likely to find out about a story through social media than through 24-hour news or a news app.

Considering the point of the 24-hour news cycle is to be fresh, that’s not really a good thing.

Every Little Story, Made Bigger 

On the 18th of April 1930, the BBC announced that “there is no news.”

Can you imagine that today? Another issue with the 24-hour cycle and news today is the fact that there’s a desperation to find something to report on. When channels and apps are never off, they can’t have a rest. Something must be going on. It doesn’t matter what it is, but it must be something.

Perhaps it’s a take on a news story through the issue of race, gender or sexuality. Perhaps it’s a random study from Australia. Whatever it is, it’s got a place in the news because it’s something.

Take for example the Daily Climate Show on Sky News. What was originally a daily, thirty-minute slot in prime time was axed and reduced to a weekend event. It’s not hard to see why. In its desperation to make more news out of something, Sky took a risk by devoting half an hour every day to the exact same topic. Considering how divisive a subject climate change and its presentation are, it was hardly a risk worth taking. Changing it to every weekend was still a poorly thought-out move.

Repetition

You might turn the news on when you get up at seven in the morning. You might turn the news on at ten before you go to bed. What might link those two viewings is that they are exactly the same.

When the media can’t slot a new story in, they’ll just repeat it. If it’s an unfolding story, then of course you’ll see it or read about it again later because there are new things to be said. The problem occurs when it’s the same story over and over again.

Nobody wants to hear the same story they did fifteen hours ago without new information. It’s tiresome.

The Fear Factor

Then there’s the fear on which the media thrives.

From the moment that Boris Johnson told us that we now had to stay in our homes because of COVID, the media was all over the pandemic – perhaps even before then. With nothing else happening because everyone was locked down, all the media could do was run constant stories about the ever-climbing death toll. At first, well, it was what we expected. Then it started to get a bit repetitive.

These stories tend to get a much frostier reception if reported today. Commentators scold the media for trying to scare us or create fear. 

They could, however, get away with it during those early months. With nothing else to do, we had more time for the news. Their stories were constantly about the deaths and after-effects of COVID. We were already unable to leave our homes and live our daily lives, with constant mask-wearing when we went out, so did we need to be intimidated even more?

It’s not just COVID. Look at the climate protestors, especially the young ones, when interviewed. Some of them cry in fear for their future, weeping at the thought of a planet that could be gone by the time they reach adulthood. Considering the constant doomsday coverage of climate change in the news, it’s easy to see where this fear comes from. Kids’ news shows like Sky’s awful FYI focus on the topic regularly. It’s constantly on mainstream news.

Children are more in tune with the world today. With all the darkness in the news and on social media, some will blame this exposure for the declining mental health we are seeing in young people. Indeed, where is the hope? Well, people don’t watch the news to hear about new innovations or cute animals being born in zoos. Fear is more gripping than hope, and a bigger seller too, but it’s not good for morale.

It’s vitally important that we know what’s going on in the world, but too much news is bad for the soul. In a world where it’s all too accessible and the media makes money on constant news, we can’t rely on it for real information. We’re either fed fear or repetitiveness. The obsession with news is, ironically, making us less knowledgeable. Resist the urge to keep up beyond what is needed. It’s better for you.



Out of the Cauldron: America’s Intervention in Somalia, Thirty Years Later

Since the end of World War Two, the United States has received a heightened amount of criticism for how it has conducted itself abroad. Its interventionism, and its choices of where and when to act, have thrown up many a challenge to its own legitimacy and credibility as a superpower. In the following short article, I will argue that, at its core, some of America’s interventionism is more a sign of a flawed but idealistic attempt at helping the world.

We can think of many circumstances in which America’s strategic choices to intervene have resulted in large-scale failure. From Southeast Asia to Latin America, there are countless examples of it propping up corrupt dictatorial figures or even right-wing paramilitary death squads. This criticism has come not just from the proverbial left wing but even from non-interventionist right-wing and libertarian circles. These failures have sadly detracted from the times America was arguably right to say it had a moral obligation to do something. Its critics have largely been correct, with figures like Odd Arne Westad, Stephen Walt and Vincent Bevins being the most notable at generating specific criticism of American foreign policy failings, rather than traditional critics like Noam Chomsky and Howard Zinn – commentators who prefer to use certain political arguments while ignoring other uncomfortable and less than ideologically convenient truths about America.

As such, thirty years ago, America offered an example of standing up to do what is right and just, and being punished for it anyway. Directly after Desert Storm and the first Gulf War, America turned its attention to Somalia and the Horn of Africa in 1993. Although the circumstances remain complex and disputed, what is known is that the nation of Somalia had largely fallen into a state of civil war by the end of the 1980s. This conflict continues to this day and still displaces many within the region at large. By the early 1990s, the nation remained a lawless place, in which differing power factions and warlords were fighting over what was left. It was to this that many were exposed prior to American involvement.

American involvement came as a means to support and back up the United Nations personnel who were being targeted within the country. From this involvement, America turned its attention to targeting the figures and warlords who had declared war on the United Nations personnel, such as Mohamed Farrah Aidid. The mission to find him was Operation Gothic Serpent.

Out of Gothic Serpent came what became the most famous and defining image of this conflict: the ‘Battle of Mogadishu’. The Battle of Mogadishu, fought in early October 1993, became famous through writer Mark Bowden’s book, ‘Black Hawk Down: A Story of Modern War’, for which Ridley Scott’s 2001 war movie was named. This battle became emblematic of America’s foreign policy failures, as its soldiers fought the toughest house-to-house fighting since the Tet Offensive in Vietnam in 1968 (something unmatched until the Second Battle of Fallujah in 2004). The outcome of the battle was the death of some eighteen Americans and nearly 300 Somalis. Dead American bodies were dragged naked through the streets and shown on CNN, and the two Black Hawk helicopters that had been shot down became symbols of America’s failure and of the Somalis’ success. The sight of American soldiers running back to base after being chased out of the city by armed militia showed they were far from welcome.

How did America end up here? In the years prior, Somalia had descended into being a failed state, which had been a complete disaster for all within its borders. Compounding this, a large famine had begun to grip the nation, and by 1992 some 200,000-300,000 individuals had succumbed to starvation. Alongside high rates of looting and hoarding of food aid, the famine was used as a tool to wage war and genocide against others within the nation. There was no fixing this; there was little anyone could do to stop the nation from ripping itself apart at the seams. Out of this mess, and the targeting of United Nations personnel, America decided to support those on the ground. Within several years, it had all but left completely.

What can we learn from such events? Three decades on, has Somalia improved in any way? Well, of course not. Has Afghanistan improved in any way since 2001? Of course not. As such, this attempt at fixing the problem only inflamed the situation and resulted in the deaths of more individuals. More than just the US State Department, the entire world must realise its problems cannot be solved by sacrificing the lives of white boys from Arkansas and Ohio.

One might begin to wonder, had the Somali Civil War and its aftermath never occurred, whether America might have seen itself as politically able and morally obliged to intervene in other African nations that went through genocide in the subsequent years, such as Rwanda and Burundi. In this sense, its attempts are not merely flawed but tragic. Imagine if Vietnam had never occurred – whether America would then have had the stomach to stop the Cambodian Genocide.

Heavy is the head that wears the crown, some might say. I would personally lean towards the viewpoint that America’s intervention in Somalia during this time is indicative of a wider tragic sense of action that has haunted America since the end of World War Two. In many ways, America is damned if it does and damned if it doesn’t. It is forced to choose between inaction, which brings fickle condemnation and disgraces its reputation as a superpower, and military action, which produces images of dead soldiers being dragged naked through the streets, leading many to ask the question: what was it all for?



Diversity: A Pyrrhic Victory

The Russo-Ukraine war has underscored the arduous, industrialised drudgery which characterises modern warfare; the mechanised obliteration made possible by modern technology has minimised opportunities for combatants to attain individual recognition and perform feats of life-affirming glory.

In continuation of this grim rediscovery, a revitalised war between Israel and Palestine has revealed the metaphysics to which modern warfare owes its preference for annihilation over capitulation: the depoliticization of combatants, a dehumanising process in which Palestinians become “human animals” and Israelis become “filthy pigs”.

Those who say “Israel’s security is our security” are wrong, but they’re less wrong than those who believe Britain is unaffected by the recent attacks in the south of the country. Over the course of decades, Britain’s policy of mass immigration has produced a series of immigrant enclaves in towns and cities up and down the country, many of which dislike each other far more than they dislike the native white British population, for a variety of historic reasons; a fact which has been made apparent to everyone after members of several of these groups poured into London, to celebrate and to mourn the outbreak of war.

However, as one can clearly see in the videos, with Turkish and Palestinian flags fluttering side-by-side, it’s not merely a matter of Britain’s Jewish and Palestinian diasporas being at each other’s throats; it’s a matter of every ethnic diaspora and commune piling into coalition with one another, further diminishing social trust and charging historic grievances.

Across all of England, from Oldham to Stoke, from Birmingham to Burnley, from Peckham to Kensington, from Rotherham to Dover, Britain’s post-war policy of mass immigration has gradually turned the Land of Hope and Glory into a giant drop-zone for an inter-ethnic Battle Royale.

Far from being a cohesive unit, Britain is now a place where it is near impossible to walk through the middle of London without encountering a protest dedicated to the interests of another nation. When the government sought to curb illegal migration, Britain’s Albanian diaspora descended upon London in boisterous assembly, decrying the government’s rhetoric as racist and a xenophobic slight against the disproportionately Albanian ‘asylum-seekers’ crossing the English Channel.

Then again, why shouldn’t they turn out to show support for their Albanian brothers and sisters? Aren’t public protest and freedom of speech cornerstones of our liberal democracy? Surely, the same can be said about the pro-Palestine demonstrations? Weren’t their ‘fiery but mostly peaceful’ demonstrations indicative of their successful integration into Modern British society, underpinned by the civic values of diversity and inclusion, liberty and tolerance? Let’s face it: diversity hasn’t failed. Diversity has triumphed and everyone hates it.

Before projecting the Israeli flag onto 10 Downing Street and the House of Commons, Prime Minister Rishi Sunak, who is of Indian descent, condemned the attack in the strongest possible terms:

“As the barbarity of today’s atrocities becomes clearer, we stand unequivocally with Israel. This attack by Hamas is cowardly and depraved. We have expressed our full solidarity to Benjamin Netanyahu and will work with international partners in the next 24 hours to co-ordinate support.”

Many have humorously remarked on the staunch, some might say excessive, support for Israel amongst Indians and those of Indian descent, but such solidarity is entirely rational. Given their historic enmity with Pakistan, it’s unsurprising that Indians would support the group with a grievance against a comparable ethnoreligious enemy. In blunt terms, the Indian support for Israel isn’t derived from a fondness for Jews, but from a general dislike of Muslims.

The tendency of our politicians to talk about hatred and division in the same breath overlooks the fact that ‘hatred’ is just as capable of uniting people as it is of dividing them. Of course, Sunak is not your typical member of Britain’s Indian diaspora, but given the riots in Leicester during the autumn of last year, it’s safe to say that if such grievance can be imported intact from the Indian subcontinent to the English Midlands, it certainly extends from the English Midlands to the nations of the Levant.

Meanwhile, north of Hadrian’s Wall, Scottish First Minister Humza Yousaf, who is of Pakistani descent, issued a more lukewarm response to the widely publicised atrocities:

“My wife Nadia and I spent this morning on the phone to her family in Gaza. Many others in Scotland will be deeply worried about their families in Israel and Palestine. My thoughts and prayers are very much with those worried about loved ones caught up in this awful situation.”

Whilst many found the latter’s statement wavering and distasteful, it’s important to see things from Yousaf’s perspective. After all, he has family in Gaza, and it is highly unlikely that this does not affect his view on such matters.

For readers who don’t recall, Yousaf made national news attacking then-SNP leadership contender Kate Forbes for her Christian view on gay marriage, suggesting her stance made her unfit to be First Minister. A matter of days later, it was revealed that Yousaf had dodged a crucial Holyrood vote to liberalise marriage laws due to pressure from fellow members of the local Muslim population.

Evidently, he is trying to balance his ethnoreligious and familial interests and emotions with his official responsibilities as leader of the SNP and First Minister of Scotland. Indeed, this is impossible for most and far from easy for him – especially given his scornful opinions of the people he governs – yet it’s clear, given his unique position, that he is forced to show more consideration than most people; that is, people who lack the responsibilities of public office.

On her way to the Israeli embassy to pay her respects, Bella Wallersteiner, a liberal-conservative commentator of Jewish descent, encountered a large celebration of the attack on Israel. In response to the public display of support for Hamas and Palestine, she posted:

“I’ve left as didn’t feel safe. I tried speaking to a few protestors and making the point that it was totally inappropriate to hold a demonstration of this kind after a heinous terrorist attack. As you can imagine, I didn’t get very far. I’d advise people avoid the area.”

As someone who has routinely championed immigration and cosmopolitanism, Wallersteiner only now felt threatened by the implications of diversity and mass immigration because they negatively implicated her own ethnic group. It goes without saying that homogenous societies are hard enough to maintain, even when their inhabitants adhere to pro-social values. As such, you can’t advocate the creation of a multi-ethnic, multicultural society right up until it affects you; such an ethnocentric outlook is unlikely to produce good results, for oneself or for other people.

Of course, Wallersteiner is not the only one guilty of ethno-narcissism. Diane Abbott’s letter to The Observer, which ignited accusations of anti-semitism, anti-ziganism, and anti-Irishness, leading to her suspension from the Labour Party, drew a qualitative distinction between racism and prejudice. According to Abbott, whilst Jews, Roma, and the Irish have been victims of prejudice, the experience of racism is particular to black people. In summary: “You’re an Other, and therefore you’re a victim, but at least you’re a White Other, unlike me – a BLACK woman.”

Essentially, anti-semitism is bad, but anti-blackness is worse. The aforementioned minority groups aren’t immune to discrimination, but they are immune to its most egregious forms due to their ‘whiteness’, or their relative proximity thereto; a notion which critics called a “hierarchy of racism”.

One might say this dispute has served as a proxy war between vying wings of the Labour Party, which is partially true. However, it’s evident that ethnic grievance plays a far more important role. Corbynites did take to Twitter/X (where else?) to complain about Abbott’s suspension, but their gripe had next-to-nothing to do with Blairite manoeuvring.

Instead, they targeted the implicit anti-blackness of Abbott’s critics and the publicity those critics received. In their view, the critics were the ones perpetuating a “hierarchy of racism”: by privileging concerns about anti-semitism over anti-blackness, and by seemingly ignoring Abbott’s comments regarding the Roma and the Irish, they undermined their own outrage and revealed their own ethnically motivated hypocrisy.

Every faction involved lays claim to real ‘anti-racism’. They agree that, compared to other social ills, racism is evil, yet each group believes some evils are eviller than others. They agree on the general qualitative assessment but disagree on the comparative one; they agree on whites as the common enemy, but not on who, other than whites themselves, benefits most from the racist superstructure of Western society.

Even when considered non-white, Jews are perceived as ‘white(r)’ than their comrades. As such, non-Jews band together to push concerns about anti-semitism to the periphery of ‘anti-racism’. Just as minority activists align themselves against whites on account of their general non-whiteness, increasingly collectivised ‘Black and Brown’ activists align themselves against Jews on account of their distinct non-whiteness, pushing their own interests up the priorities list of the ‘anti-racist’ movement.

Indeed, the anti-white intersectional logic of the anti-racist coalition, having ejected the white working class from the political left and thereby laid the groundwork for the Conservative electoral landslide of 2019 (a victory now being undone because the Tories severely underdelivered on their promise to lower immigration), is now problematising a faction which helped that process along.

Arguably parallel to the peripheralisation of ‘cisgender’ women within anti-sexism in pursuit of ‘trans rights’, both Jews and ‘cisgender’ women are prone to flock to right-leaning media, which herald them as martyrs cancelled by the Social Justice Mob and so on. Just as ‘TRAs’ and ‘TERFs’ appeal to the external enemy of the sexist heterosexual man, accusing each other of jeopardising the safety of women – as if the nature of womanhood weren’t the source of the conflict to begin with – the vying ethnic factions of the anti-racist coalition accuse each other of playing into the hands of white supremacy by advancing their respective interests.

The UK government fuels this dynamic all the time. The hegemonic obsession with diversity amongst the political and media class has given rise to legal commitments to support and promote Diversity, Equality, and Inclusion, as per the Equality Act (2010), and the state-backed intersectional diversity this encourages necessarily inflames tensions between minority groups and the white British majority.

In an attempt to hold warring minority groups together, to offset the explosive potential of re-opened historic grievances, and to integrate a growing migrant and migrant-descended population (one which emerged from a policy the British people have consistently opposed whenever given the chance), every facet of public life has become infected with anti-white sentiment: from Access UK’s state-funded hotep workshops to fabricated history about the British Isles, from the insertion of slavery and racism into every facet of media to the covering-up of racially motivated grooming gangs to protect ‘social cohesion’.

However, whilst minority groups view the anti-racist coalition as a means of affirming their uniquely serious grievance – discrimination against their particular group – it becomes apparent that opposition to whites merely aligns ethnic grievances; it does not assess their validity or resolve them. As such, the potential for conflict remains, overflowing into violence and aggression every time there is an international crisis or domestic dispute.

The direct consequence of this is the antithesis of what every self-appointed champion of small government and liberal values theoretically wants: more power for the state to interfere in people’s day-to-day lives through censorship and to distort public opinion through social engineering.

Sadiq Khan’s recent announcement of increased ‘anti-hate’ patrols is just one such example. In any other circumstance, conservatives and libertarians would dismiss such measures as pedantic, overbearing, and ideologically driven, yet nobody seems concerned that the attack in southern Israel is being used to empower an apparatus which spends every other day arresting people for ‘hate speech’.

The protection of people and property is the primary function of the police, so I severely doubt that specific ‘anti-hate’ measures will be limited to arresting people who smash up shopfronts and graffiti public property, especially since the police cannot be relied upon to fulfil even their most basic functions, as revealed by their indifference to serious crimes and the public’s rapidly declining trust.

Moreover, what are new arrivals to this country supposed to integrate into? Democracy? What is democracy without a demos? Civil liberties? Which are routinely trampled by the managerial state? Capitalism? Do you seriously expect society to be held together by consumerism? People will eventually ask for something more than material security and economic growth, both of which we are failing to procure anyway; what holds society together then?

Integration is a necessarily particular process: it assumes a particular group and set of customs into which people can be integrated over time. You can’t ‘integrate’ people into a global matrix of sustenance. You can’t ‘integrate’ people into a group which you allow to be displaced through migration. You can’t ‘integrate’ people into a value system designed to accommodate everyone, lest you plan on hollowing out every religion on Earth, forcing people to treat their symbols as quirky cultural tokens and their prophets as secularised self-help gurus.

How perversely ironic that the liberal-left obsession with diversity has emerged from an inability to comprehend that people genuinely are different from one another. If anything, it is the native population which has been told to ‘integrate’, to tolerate and adhere to the ways and customs of the new arrivals, not the other way around.

The Labour Party, almost certainly the next party of government, issued a document titled “Report of the Commission on the UK’s Future”. According to the report, the commission identified three concepts “originally used in the first democracies in Ancient Greece – that are critical for the success of any nation, with Britain being no exception” – demos (shared identity), telos (shared ambitions), and ethos (shared values).

Curiously, the report left out another concept of great importance to the Ancient Greeks: ethnos (shared character; ethnicity). According to the ancients, a society which lacks a sufficient degree of homogeneity inevitably suffers a loss of social trust; a lack of social trust inevitably breeds factions; and factions inevitably lead to the outbreak of disorder and even civil war. As such, in an attempt to ensure its own survival, the state must micromanage society down to the last snivelling minutia to tie everything together; a far cry from the unarmed, gentle-natured, and almost passive policemen of George Orwell’s England Your England.

As Singapore shows, a diverse society is only manageable if you have a stable demographic supermajority and reliable public institutions, especially when it comes to the bare necessities of public order, such as preventing violence and theft. The UK has neither. As per the most recent census, the white British majority is declining; meanwhile, crime has been all but decriminalised.

As such, if things continue at their current rate and on their current course, we’re going to need more than ‘anti-hate’ patrols, Tebbit’s Cricket Test, and Hotep Histories to integrate an increasingly diverse populace; dear reader, we’re going to need the Katechon. Indeed, diversity is not the fancy of freedom lovers, but of tyrants, as Aristotle elucidates in Politics:

“It is a habit of tyrants never to like anyone who has a spirit of dignity and independence. The tyrant claims a monopoly of such qualities for himself; he feels that anybody who asserts a rival dignity, or acts with independence, is threatening his own superiority and the despotic power of his tyranny; he hates him accordingly as a subverter of his own authority. It is also a habit of tyrants to prefer the company of aliens to that of citizens at table and in society; citizens, they feel, are enemies, but aliens will offer no opposition.” (1313B29)

I started this article with a reference to the wars in Ukraine and Israel, yet these are not the only major conflicts which 2023 has endured. War between Armenia and Azerbaijan reignited after the latter launched a large-scale military invasion of the breakaway region of Nagorno-Karabakh, violating the 2020 ceasefire agreement between the two nations and leading to the expulsion of over 100,000 Armenians.

Whilst Nagorno-Karabakh is internationally recognised as part of Azerbaijan, most of its territory was governed by ethnic Armenians. Without this natural fraternity, this sense of demos, the Republic of Artsakh simply could not have existed, nor would the Azerbaijani government now need to re-constitute the state through Asiatic authoritarianism. Even for us moderns, it should be clear that diversity is not the basis of peaceful and stable self-government. The more we stray from this fact, the more we deny ourselves that which we have always wanted: the ability to discriminate and enjoy people as individuals and exceptions, rather than as monoliths towards which we are forced to remain diffident, for the sake of ourselves and others.

Therefore, to conclude, I shall leave you with this passage from Aristotle’s Politics, in which the great philosopher outlines the natural conclusion for a society which does not treat the diversity of its constituents with any prudence or honesty:

“Heterogeneity of stocks may lead to faction – at any rate until they have had time to assimilate. A city cannot be constituted from any chance collection of people, or in any chance period of time. Most of the cities which have admitted settlers, either at the time of their foundation or later, have been troubled by faction. For example, the Achaeans joined with settlers from Troezen in founding Sybaris, but expelled them when their own numbers increased; and this involved their city in a curse. At Thurii the Sybarites quarreled with the other settlers who had joined them in its colonization; they demanded special privileges, on the ground that they were the owners of the territory, and were driven out of the colony. At Byzantium the later settlers were detected in a conspiracy against the original colonists, and were expelled by force; and a similar expulsion befell the exiles from Chios who were admitted to Antissa by the original colonists. At Zancle, on the other hand, the original colonists were themselves expelled by the Samians whom they admitted. At Apollonia, on the Black Sea, factional conflict was caused by the introduction of new settlers; at Syracuse the conferring of civic rights on aliens and mercenaries, at the end of the period of the tyrants, led to sedition and civil war; and at Amphipolis the original citizens, after admitting Chalcidian colonists, were nearly all expelled by the colonists they had admitted.” (1303A13)


Rousseau and the Legacy of Romanticism

One idea that I’ve fondly taken from Augusto del Noce is that ideologies have an internal logic which unfolds as they interact with real-world events. Philosophies aren’t static, but constantly changing as they play out against one another historically. This view, while similar to the Marxist notion of praxis, finds its ultimate inspiration in Joseph de Maistre. It’s de Maistre who writes that the French Revolution sweeps men up against their will and devours its own children.

Once put into practice, revolutions take on a life of their own, and like a wild tiger on a leash, drag their authors to new and unheard-of places. This isn’t to deny human free will; something I strongly affirm. It’s rather to recognise that rational humans, faced with circumstances, strive to act consistently with what they believe. This mechanism of consistency is what causes ideologies to evolve over time.

This insight is a powerful tool for understanding the long-term consequences of philosophies. The social sciences may quantify popular actions and opinions, but because human wishes are often nebulous, and because people are fond of lying to themselves, there’s always room to dispute the results. Tracing an ideology’s internal logic, by contrast, offers a surer guide to its long-term destination. The romantic primitivism of Jean-Jacques Rousseau is one such ideology. Since being unleashed into the world from the bloody womb of the Reign of Terror, it has branched in many directions and morphed into shapes Rousseau himself wouldn’t have recognised.

To define romanticism, I turn to Irving Babbitt and his 1919 work Rousseau and Romanticism. Romanticism sets itself in opposition to classicism. Classicism seeks standards for ethics and culture in universal types which it deems natural. It’s not the case that classicism seeks rules necessarily (lest we confuse it with Kantianism). Aristotle is the foremost classicist, yet he denies that norms are truly codifiable into rules. A universal type is rather an ideal based upon the nature of something. A classicist (like Aristotle) might say that being polite at the dinner table is something we should do, and this politeness consists of showing due moderation in eating, drinking, talking etc. But this isn’t a rule so much as a way of displaying the excellence proper to a human being.  

Classicism isn’t opposed to emotion either. Rather, it subdues emotion to rational norms. Human nature has a standard of excellence which demands the proper use of emotion. In other words, emotions are good or bad depending on how we wield them, according to a standard for the human species. The one able to do this is a universal type, what Aristotle calls the phronimos, or wise man. The later Stoics didn’t condemn emotions entirely, contrary to the popular misconception of them; rather, they encouraged natural emotions and discouraged unnatural ones. Again, nature is a standard for ideal behaviour, external to individual fancy.

Romanticism, on the other hand, seeks standards in what’s unique and unrepeatable. Instead of conforming to generic ideals, goodness comes from spontaneous individual acts and thoughts. The cause for this is Rousseau’s doctrine of original sanctity. Classicism makes a distinction between ‘things-as-they-are’ and ‘things-as-they-ought’. Humans, animals, and plants don’t come into the world fulfilling an ideal; they arrive imperfect and must strive after their ideal. If we deny this, as Rousseau does, then to be good just is to be what one is. The generic ideal has no purpose and drops out. Authenticity to oneself as one is becomes the aim of life, and this can only find expression in unrepeatable spontaneous acts.

Indeed, once authenticity becomes central, it’s but a short step to rebelling against all standards which society imposes on the self. Since whatever standards society imposes must be ideal repeatable types, and no ideal repeatable types are authentic, no socially imposed standards can be authentic. And since goodness lies in authenticity, being truly good means casting off the standards society has imposed.

As Alasdair MacIntyre wryly observes in After Virtue, Enlightenment philosophes have the least self-awareness of all thinkers. They create new and revolutionary systems, but the content of their morality is entirely inherited from the very civilisation they despise. Thus, Rousseau’s ethics are stuffed full of quaint and puritanical Calvinist ideas from his Genevan upbringing. “Effeminacy” is one of his constant worries, and he applies the term, in boyish fashion, to anything he doesn’t like. Hence, in the Discourse on the Origin of Inequality, he can condemn civilised man:

“By becoming sociable and enslaved, he becomes weak, fearful, and grovelling, and his soft and effeminate way of life ends by enervating both his strength and his courage.”

Take these relics away, however, and Rousseau’s romanticism has only its sentimental primitivism to act as a limiting moral principle. Goodness is whatever lies in the untainted human heart, freed from social corruption. What becomes of it then? I wager it must enter an eternal spiral of liberation. Romanticism is built on the idea that we’ll be truly happy only when we free ourselves from all external rules and uncover a pre-social authenticity. Since this is a lie, no amount of liberation will ever create happiness. So, to remain consistent with itself, romanticism must seek ever more shackles of oppression to shatter. It’s either that or admit error.

The progressive radicalisation built into romanticism is visible everywhere. The sexual revolution, for example, has no brakes, because it’s built on a romanticised and primitivist vision of sex that would be falsified the moment brakes were applied. The radicals of the mid-twentieth century believed that socialised sexuality was corrupt, and that once the orgasm was freed from all external restraints, pure happiness would result (Wilhelm Reich, for example, thought free love was the precondition of utopian social democracy). Free love hasn’t made us happier, however. So the answer is to find ever more previously unknown sexual taboos, whose chains we must shatter if we’re at last to be free.

In everyday morality, romantic assumptions have remade the quest each of us undertakes for goodness into a quest for authenticity. Finding one’s true self is now a drain on the wallet of the entire Western bourgeoisie. People of ages past underwent transformative moral journeys that turned them from sinners into saints, but theirs wasn’t a trek for authenticity. They did something far more mundane: they changed their minds. There’s an implicit vanity in the true-self doctrine. Changing your mind means admitting error. Finding your true self means you were right all along, but just didn’t notice it, because society was keeping you blind.

The cultural production of this quest is, I believe, simply inferior to that of a mind which looks outwards from itself onto something else. Someone obsessed with finding his authentic self doesn’t have time to stand in awe of things greater than himself. What is falling in love, if not to be overcome by a sense of the intrinsic, irreplaceable value of another person, without reference to oneself? We have all effectively become Rousseau writing his Confessions: a man who delighted in nature and other people only as frissons to express his authentic self, and who could begin his book with the words:

“Here is the only portrait of a man, painted exactly after nature and in all her truth, that exists and probably ever will exist.”

In education, romantic ideas have done away with the rote learning that characterised pedagogy from Ancient Greece, through the Middle Ages, down to the Victorian age. Twentieth-century educators like John Dewey, following in Rousseau’s footsteps, sought to remake schooling around the true-self doctrine. Instead of moulding a pupil to conform to an ideal (a gentleman or a citizen), modern education exists to help him discover his uncorrupted pre-social self. Self-expression without rules has become the educational norm, with the result that we have people who are expert in analysing their own minds and emotions, but incapable of self-denial or rigour. Excellence of mind and body requires constant training. We accept this more readily about the body because physical fitness is visible. But the mind, which is invisible, needs just as much training to be fit for purpose.

In the end, I see romanticism as an enormous civilisational gamble. The difference between classicism and romanticism comes down to what we think reality is truly like. The classicist sees a human race born lacking, and sees culture as a scaffold to a building: an aid to human completion. The romantic, meanwhile, claims that human nature isn’t completable but already complete, and merely corrupted. He wagers that if we accept this idea, we can remake the world for the better. Like any gambler, he doesn’t think about the stakes should the wager be lost; here, the stakes are social catastrophe if the assumption is untrue. If the truth is classical, then romanticism is akin to raising a lion on a strict vegetarian diet.

