Month: October 2023

Atatürk: A Legacy Under Threat

The founders of countries occupy a unique position within modern society. They are often viewed either as heroic, mythical figures or as deeply problematic by today’s standards – take the obvious example of George Washington. Long held up by Americans as a man unrivalled in his courage and military strategy, he is now a figure of vilification for leftists, who are eager to point out his ownership of slaves.

Whilst many such figures face similar shaming nowadays, none are suffering complete erasure from their own society. That is the fate currently facing Mustafa Kemal Atatürk, whose era-defining liberal reforms and state secularism now pose a threat to Turkey’s authoritarian president, Recep Tayyip Erdoğan.

To understand the magnitude of Atatürk’s legacy, we must understand his ascent from soldier to president. For that, we must go back to the end of World War One, and Turkey’s founding.

The Ottoman Empire officially ended hostilities with the Allied Powers via the Armistice of Mudros (1918), which, amongst other things, completely demobilised the Ottoman army. Following this, British, French, Italian and Greek forces arrived in and occupied Constantinople, the Empire’s capital. Thus began the partitioning of the Ottoman Empire, which had existed since 1299: the Treaty of Sèvres (1920) ceded large amounts of territory to the occupying nations, divided primarily between France and Great Britain.

Enter Mustafa Kemal, known years later as Atatürk. An Ottoman Major General and fervent anti-monarchist, he and his revolutionary organisation (the Committee of Union and Progress) were greatly angered by Sèvres, which partitioned portions of Anatolia, a peninsula that makes up the majority of modern-day Turkey. In response, they formed a revolutionary government in Ankara, led by Kemal.

Thus, the Turkish National Movement fought a four-year war against the invaders, eventually pushing back the Greeks in the west, the Armenians in the east and the French in the south. Following a threat by Kemal to invade Constantinople, the Allies agreed to peace, with the Treaty of Kars (1921) establishing borders and the Treaty of Lausanne (1923) officially settling the conflict. Finally free from fighting, Turkey declared itself a republic on 29 October 1923, with Mustafa Kemal as president.

His rule of Turkey began with a radically different set of ideological principles from those of the Ottoman Empire – life under a Sultan had been overtly religious, socially conservative and multi-ethnic. By contrast, Kemalism was best represented by the Six Arrows: Republicanism, Populism, Nationalism, Laicism, Statism and Reformism. Let’s consider the four most significant.

We’ll begin with Laicism. Believing Islam’s presence in society to have been impeding national progress, Atatürk set about fundamentally changing the role religion played both politically and societally. The Caliph, believed to be the spiritual successor to the Prophet Muhammad, was deposed. In his place came the office of the Directorate of Religious Affairs, or Diyanet – through its control of all Turkey’s mosques and religious education, it ensured Islam’s subservience to the State.

Under a new penal code, all religious schools and courts were closed, and the wearing of headscarves was banned for public workers. However, the real nail in the coffin came in 1928: that was when an amendment to the Constitution removed the provision declaring that the “Religion of the State is Islam”.

Moving on to Nationalism. With its roots in the social contract theories of thinkers like Jean-Jacques Rousseau, Kemalist nationalism defined the social contract as its “highest ideal” following the Empire’s collapse – a collapse Kemalists saw as a key example of the failures of a multi-ethnic, multi-cultural state.

The 1930s saw the Kemalist definition of nationality integrated into the Constitution, legally defining every citizen as a Turk, regardless of religion or ethnicity. Despite this, however, Atatürk fiercely pursued a policy of forced cultural conformity (Turkification), similar to that of the Russian Tsars in the previous century. Both regimes had the same aim – the creation and survival of a homogeneous and unified country. As such, non-Turks were pressured into speaking Turkish publicly, and those with minority surnames had to change them to ‘Turkified’ forms.

Now Reformism. A staunch believer in both education and equal opportunity, Atatürk made primary education free and compulsory for both boys and girls. Alongside this came the opening of thousands of new schools across the country. The results are undeniable: between 1923 and 1938, the number of students attending primary school increased by 224%, while middle-school attendance rose 12.5-fold.

Staying true to his egalitarian convictions, Atatürk enacted monumentally progressive reforms in the area of women’s rights. For example, 1926 saw a new civil code, and with it came equal rights for women concerning inheritance and divorce. In many of these gender reforms, Turkey was well ahead of other Western nations: Turkish women gained the vote in 1930, followed by universal suffrage in 1934. By comparison, France passed universal suffrage in 1945, Canada in 1960 and Australia in 1967. Fundamentally, Atatürk didn’t see Turkey truly modernising whilst Ottoman gender segregation persisted.

Lastly, let’s look at Statism. As both president and leader of the Republican People’s Party, Atatürk was essentially unquestioned in his control of the State. However, despite his dictatorial tendencies (primarily the purging of political enemies), he was firmly opposed to dynastic rule, as had been the case with the Ottomans.

But under Recep Tayyip Erdoğan, all of this could soon be gone.

Having been a high-profile political figure for 20 years, Erdoğan has cultivated a positive image domestically, one focused on his support for public religion and Turkish nationalism; internationally, he has received far more negative attention, focused on his growing authoritarianism. Widely regarded by historians as the very antithesis of Atatürk, Erdoğan has pushed back against state secularism in what is perhaps the most significant attack on the founder’s legacy.

This has been most clearly displayed within the education system. 2017 saw a radical shift in school curriculums across Turkey, with references to Charles Darwin’s theory of evolution greatly reduced. Meanwhile, the number of religious schools has increased exponentially, furthering Erdoğan’s professed goal of raising a “pious generation of Turks”. Additionally, the Diyanet has seen a huge increase in its budget under Erdoğan and, with the launch of Diyanet TV in 2012, has extended Quranic education to young children and boarding schools.

The State has roles to play in society, but depriving schoolchildren of vital scientific information while funding religious indoctrination is beyond outrageous. Soner Cagaptay, author of The New Sultan: Erdoğan and the Crisis of Modern Turkey, referred to the changes as “a revolution to alter public education to assure that a conservative, religious view of the world prevails”.

There are other warning signs more broadly, however. The past 20 years have seen the headscarf make a gradual reappearance in Turkish life, with Erdoğan having first campaigned on the issue back in 2007. Furthermore, Erdoğan’s Justice and Development Party (AKP), with its strong base of support amongst extremely orthodox Muslims, has faced repeated accusations of being an Islamist party – as per the constitution, no party can “claim that it represents a form of religious belief”.

Turkish women, despite being granted legal equality by Atatürk, remain the regular victims of sexual harassment, employment discrimination and honour killings. Seemingly intent on destroying all the positive achievements of the founder, Erdoğan withdrew from the Istanbul Convention (which obliges parties to investigate, punish and crack down on violence against women) in March 2021.

All of these reversals of Atatürk’s policies reflect the larger-scale attempt to delete him from Turkey’s history. His image is now a rarity in school textbooks, at national events, and on statues; his role in Turkey’s founding has been criminally downplayed.

President Erdoğan presents an unambiguous threat to the freedoms of the Turkish people, through both his ultra-Islamic policies and his authoritarian manner of governance. Unlike Atatürk, Erdoğan seemingly has no problem with ruling as a dictator for life, and would undoubtedly love to establish a family dynasty. With no one willing to challenge him, he appears to be dismantling Atatürk’s reforms one law at a time, reducing the once-mythical Six Arrows of Kemalism to a footnote in textbooks.

A man often absent from the school curriculums of Western history departments, Mustafa Kemal Atatürk proved one of the most consequential leaders in both Turkish history, and the 20th Century. A radical and a revolutionary he may have been, but it was largely down to him that the Turkish people received a recognised nation-state, in which state secularism, high-quality education and equal civil rights were the norm.

In our modern world, so many of our national figures now face open vilification from the public and politicians alike. But for Turkey, future generations may grow up not even knowing the name or face of their George Washington. Whilst several political parties and civil society groups are pushing back against this anti-Atatürk agenda, the sheer determination displayed by Erdoğan shows how far Turks must yet go to preserve the founder’s legacy.



The Nationalist Case for Caution

Over the last few weeks, we’ve been experiencing a rare phenomenon: politicians seem to have grown a spine. There’s tough talk of deportations and of standing up to Islamic extremism. It’s far too good to be true. Many MPs, and our Prime Minister, have been living vicariously through Israel, springing to the defence of the Israelis and British Jews with extreme fervour. Cross-party support from Sunak and Starmer has been unwavering.

However, whilst the latter is facing rebellion from his immigrant and militant leftist contingent as Israeli aggression continues unabated, Sunak engaged in the humiliation ritual of meeting Israeli leaders at the King David Hotel. I’m unsure if a meeting between a sitting British PM and the Israeli leadership has been held there since Jewish terrorists bombed it, killing many Britons, but it’s certainly something I would not like to see again from a future PM.

The conflict in this area of the world does not particularly interest me. There are so many domestic problems facing Britain that I am somewhat dismayed that a sclerotic, otherwise necrotic government can reanimate so rapidly when something so detached from us reaches its desks. As this latest crisis has rolled along, several unfortunate reality checks have hit Britain. In an ideal world, we could completely wash our hands of it, but we are not in that position. Imported ethnic conflicts are coming to fruition, and we need to navigate them as best we can.

Many Israelis were killed and over 100 hostages were taken; Israel retaliated with its usual tactic of bombing Gaza with extreme prejudice. The actions of Hamas provoked disgust from the wider British public – seeing people murdered in their homes does not sit particularly well, or so you would have thought. After the Israeli response, pro-Palestinian demonstrations erupted across the UK. Among the usual suspects of white leftists was a sizeable ethnic minority contingent. London drew the most attention, at home and internationally, and most of the attendees were of minority ethnic backgrounds.

Between the messages of “Free Palestine” and “From the River to the Sea”, less familiar ones started to emerge: the idea that what Hamas had achieved was anti-colonialism in action. This is where alarm bells should begin to ring, especially if you have been listening to leftist talking points in recent years or paying attention to protest actions. If decolonisation is not in fact simply tearing down statues or renaming streets, but murdering your “oppressors”, then this large and young contingent of resentful ethnic minorities could turn out to be a life-threatening problem. As the country moves toward the White British becoming a minority, as in London, this is a ticking time-bomb. When, at the most recent protest, White people defending the Cenotaph were attacked and called “White Trash” by immigrants, the nature of these protests as anti-White decolonisation action became clearer.

MPs and media personalities began the tough talk immediately: “deportations for anti-semitic students” and possible prison time for Hamas supporters. Nothing has come to fruition in that regard, and deportations are unlikely to happen because the Tories have failed to dismantle the legal apparatus that allows deportation efforts to be successfully appealed – efforts consistently hampered by an industry of state-leeching legal practices and lawyers. A more devious development appeared during the most recent protest, with the publication of an article exploring the “English roots of anti-Semitism” – a feeble attempt to paint these events as a native White issue, an extension of historical anti-Semitism rather than something artificially imported.

It was at this point I began to think: “Where was this talk when minority rape gangs were exposed, or when terrorist attacks were carried out here?” It was non-existent. We were instead encouraged to “not look back in anger”, with special government networks rolling out I “heart” (city/town affected by terrorism) campaigns almost immediately, and a deafening silence with regard to the former. Anybody who has spoken out about these issues has been demonised or silenced; they have not enjoyed even a fraction of the Government support that Israel and British Jews have received over the last few weeks.

In response to the hostage crisis in Israel, British Jews have been running a poster campaign and projecting the images on the backs of trucks in London. These efforts have been disrupted, with posters torn down or the vehicles stopped by the police themselves for the sake of “public order”. A substantial amount of the poster-tearing has been done by minorities; in one particularly amusing clip, a Jewish man tells an unconcerned Black woman that he “supported Black Lives Matter”. In his mind he had been a good ally – how could these people betray him? Don’t they know they’re only meant to undermine White British society? The great irony that many British Jews are pro-immigration and support the leftist causes that have led to this is not lost on us. Indeed, many leftist Jews were marching for Palestine in a somewhat annihilationist expression of self-determination.

The police have been no less cowardly than usual in their reactions: violent rhetoric against Jews and Israel, including actual calls to jihad, has been hand-waved away. As one would expect, British people with Union flags or the St George’s Cross have been arrested, escorted away or spoken to with typical condescension. Since the Oldham and Bradford riots, the British police have been deathly afraid of policing minority issues for fear of triggering riots or acts of terrorism. This fear needs to be presented to the public, as the police are utterly incapable of presenting the issue themselves. We saw a little bit of spine from the armed officers a few weeks ago, but they have since stood down, happy to operate in a system that oppresses Whites and will throw them under the bus for political expedience. I’m not sure what is going on with the police in the UK; large swathes captured by leftists is the easiest answer, but many officers must be “lying back” and “thinking of the pension”. In the most recent protests we have seen police officers injured, although it’s hard to muster sympathy. Police support on the Right is definitely on the wane.

The online nationalist space has been interesting. Many Third Positionists have naturally aligned themselves with Palestine and other aggrieved minorities in order to “strike out” at “Jewish power structures” and to rebuke Israel itself as a “colonial holdout”. Fundamentally, the minorities with whom they are in a temporary alliance see no difference between Jews or Israelis and Whites. Their incredibly short-sighted tactic stokes the anti-White rhetoric that is all too often used to put White British interests down.

The counter-jihad types have once again insisted that this is proof that we are indeed in a civilisational war with Islam, but I tentatively disagree. We have a demographic problem that extends beyond the Islamic population here. This should not be a theological debate; it’s a question of race – the preservation of the White British. I would be lying if I said that the large protests taking on Islamic characteristics didn’t alarm me; it’s as if we were seeing a vision of the future. The prospect of large Islamic voting blocs is something that could have us leaning more on counter-jihad ideology in the future. Thankfully, our electoral system suppresses this somewhat, although that is a double-edged sword, suppressing us too.

Others have seen the Government’s bold talk of deportations as a good opportunity to begin broaching the demographic question. I am not convinced that pushing this under the guise of dealing with “anti-Semitism” will serve us in any real capacity long term. Like all rushed legislation, it has the potential to hurt us in the long run: if you set the precedent that anti-Semitism is somehow antithetical to life in Britain, then many similar pieces of legislation could follow. We are already staring down the possibility of a Labour government that wants to make “misogyny” a “hate crime”; we don’t need more restrictions on speech before then if they can be avoided.

One silver lining from this mess, if not a vehicle to directly push deportations, is that it shows the obvious failure of the enforced multiracial project we call Multiculturalism – a failure heavily disputed in recent weeks after being somewhat cynically pointed out by Suella Braverman. One other potential benefit is that it once again allows us to hammer home the point about the ECHR’s blocks on deportations, as well as the taxpayer-funded legal practices opposing them.

The left has revealed its agenda by living vicariously through the actions of Hamas, and the government has revealed its priorities in jumping to the defence of Israel and British Jews. Nationalists are effectively homeless; we have to advocate for ourselves, leading from our principles towards our goals and not simply hoping to achieve something by serving others who have so often opposed us. Immigration, repatriation, withdrawal from the ECHR, and smashing the Equality Act and Human Rights Act are all things that take precedence. Can we take a step toward any of these with this crisis? Undoubtedly – but we must be cautious.



Charles’ Personal Rule: A Stable or Tyrannised England?

Within discussions of England’s political history, the most famous moments are known and widely discussed – the Magna Carta of 1215, and the Cromwell Protectorate of the 1650s spring immediately to mind. However, the renewal of an almost-mediaeval style of monarchical absolutism, in the 1630s, has proven both overlooked and underappreciated as a period of historical interest. Indeed, Charles I’s rule without Parliament has faced an identity crisis amongst more recent historians – was it a period of stability or tyranny for the English people?

If we are to consider the Personal Rule in enough depth, the years leading up to the dissolution of Charles’ Third Parliament (in 1629) must first be understood. Succeeding his father James I in 1625, Charles’ personal style and vision of monarchy would prove incompatible with the expectations of his Parliaments. Having enjoyed a strained but respectful relationship with James, MPs would come to question Charles’ authority and choice of advisors in the coming years. Indeed, it was Charles’ stubborn adherence to the doctrine of the Divine Right of Kings – he once wrote that “Princes are not bound to give account of their actions but to God alone” – that meant he believed compromise to be defeat, and any pushback against him a sign of disloyalty.

Constitutional tensions between King and Parliament proved the most contentious of all issues, especially regarding the King’s role in taxation. At war with Spain between 1625 and 1630 (and having just dissolved the 1626 Parliament), Charles was lacking in funds. Thus, he turned to non-parliamentary forms of revenue, notably the Forced Loan (1627): declaring a ‘national emergency’, Charles demanded that his subjects all make a gift of money to the Crown. Whilst theoretically optional, those who refused to pay were often imprisoned; a notable example is the Five Knights’ Case, in which five knights were imprisoned for refusing to pay (with the court ruling in Charles’ favour). This would eventually culminate in Charles’ signing of the Petition of Right (1628), which protected the people from non-parliamentary taxation, as well as from other controversial powers that Charles chose to exercise, such as arrest without charge, martial law, and the billeting of troops.

The role played by George Villiers, the Duke of Buckingham, was another major factor that contributed to Charles’ eventual dissolution of Parliaments in 1629. Having dominated the court of Charles’ father, Buckingham came to enjoy a similar level of unrivalled influence over Charles as his de facto Foreign Minister. It was, however, in his position as Lord High Admiral that he further worsened Charles’ already-negative view of Parliament. Responsible for both major foreign policy disasters of Charles’ early reign (Cadiz in 1625 and La Rochelle in 1627, which achieved nothing and cost between 5,000 and 10,000 men), he was deemed by the MP Edward Coke to be “the cause of all our miseries”. The duke’s influence over Charles’ religious views also proved highly controversial – at a time when anti-Calvinism was on the rise, promoted by figures such as Richard Montague and his pamphlets, Buckingham encouraged the King to continue his support of the leading anti-Calvinist of the time, William Laud, at the York House Conference in 1626.

Charles was heavily dependent on the counsel of Villiers until the Duke’s assassination in 1628; it was, in fact, Parliament’s threat to impeach the Duke that encouraged Charles to agree to the Petition of Right. Fundamentally, Buckingham’s poor decision-making meant serious criticism from MPs, and a King who believed this criticism to be Parliament overstepping the mark and questioning his choice of personnel.

Fundamentally, by 1629 Charles viewed Parliament as a method of restricting his God-given powers, one that had attacked his decisions, provided him with essentially no subsidies, and forced him to accept the Petition of Right. Writing years later, in 1635, the King claimed that he would do “anything to avoid having another Parliament”. Amongst historians, the significance of this final dissolution is fiercely debated: some, such as Angela Anderson, don’t see the move as unusual – there were, for example, seven years between two of James’ Parliaments (1614 and 1621), and at this point in English history “Parliaments were not an essential part of daily government”. On the other hand, figures like Jonathan Scott viewed the principle of officially governing without Parliament as new – indeed, the decision was made official by a royal proclamation.

Now free of parliamentary constraints, the first major issue Charles faced was his lack of funds. Lacking his usual method of taxation and in desperate need of upgrading the English navy, the King revived ancient taxes and levies, the most notable being Ship Money. Originally a tax levied on coastal towns during wartime (to fund the building of fleets), Charles extended it to inland counties in 1635 and made it an annual tax in 1636. The inclusion of inland towns was construed as a new tax levied without parliamentary authorisation. For the nobility, Charles revived the Forest Laws (demanding landowners produce the deeds to their lands), as well as fines for breaching building regulations.

The public response to these new fiscal expedients was one of broad annoyance but general compliance. Indeed, between 1634 and 1638, 90% of the expected Ship Money revenue was collected, providing the King with over £1m in annual revenue by 1637. Despite this, the Earl of Warwick questioned its legality, and the clerical leadership referred to all of Charles’ tactics as “cruel, unjust and tyrannical taxes upon his subjects”. However, the most notable case of opposition to Ship Money was the John Hampden case of 1637. A gentleman who refused to pay, Hampden argued that England wasn’t at war and that the Ship Money writs gave subjects seven months to pay – enough time for Charles to call a new Parliament. Despite the Crown winning the case, it inspired greater widespread opposition to Ship Money, such as the 1639–40 ‘tax revolt’, involving non-cooperation from both citizens and tax officials. Opposing this view, however, stands Sharpe, who claimed that “before 1637, there is little evidence at least, that its [Ship Money’s] legality was widely questioned, and some suggestion that it was becoming more accepted”.

In terms of his religious views, both personally and in his wider visions for the country, Charles had been an open supporter of Arminianism from as early as the mid-1620s – a movement within Protestantism that staunchly rejected the Calvinist teaching of predestination. As a result, the sweeping changes to English worship and Church government that the Personal Rule would oversee were, unsurprisingly, extremely controversial amongst his Calvinist subjects in all areas of the kingdom. In considering Charles’ religious aims and their consequences, we must focus on the impact of one man in particular: William Laud. Having given a sermon at the opening of Charles’ first Parliament in 1625, Laud spent the next near-decade climbing the ecclesiastical ladder; he was made Bishop of Bath and Wells in 1626, of London in 1629, and eventually Archbishop of Canterbury in 1633. Now 60 years old, Laud was unwilling to compromise on any of his planned reforms to the Church.

The overarching theme of the Laudian reforms was ‘the Beauty of Holiness’, which aimed to make churches beautiful, almost lavish places of worship (Calvinist churches, by contrast, were mostly plain, so as not to detract from worship). This was achieved through the restoration of stained-glass windows, statues, and carvings. Additionally, railings were added around altars, and priests began wearing vestments and bowing at the name of Jesus. The most controversial change to the church interior, however, proved to be the communion table, which was moved from the middle of the room to the wall at the East end – a change “seen to be utterly offensive by most English Protestants as, along with Laudian ceremonialism generally, it represented a substantial step towards Catholicism. The whole programme was seen as a popish plot”.

Under Laud, the power and influence wielded by the Church also increased significantly – a clear example being the greater autonomy granted to Church courts. Church leaders also became ever more present as ministers and officials within Charles’ government, with the Bishop of London, William Juxon, appointed Lord Treasurer and First Lord of the Admiralty in 1636. Finally, despite already having the full backing of the Crown, Laud was not one to accept dissent or criticism and, although the severity of his actions has been exaggerated by recent historians, they could be ruthless at times. The clearest example is the torture and imprisonment of his most vocal critics in 1637: the religious radicals William Prynne, Henry Burton and John Bastwick.

However successful Laudian reforms may have been in England (and that is very much debatable), Laud’s attempt to enforce uniformity on the Church of Scotland in the latter half of the 1630s would see the emergence of a united Scottish opposition to Charles, and eventually armed conflict with the King, in the form of the Bishops’ Wars (1639 and 1640). The road to war was sparked by Charles’ introduction of a new Prayer Book in 1637, aimed at bringing English and Scottish religious practices closer together – this would prove disastrous. Riots broke out across Edinburgh, most notably in St Giles’ Cathedral (where the bishop had to protect himself by pointing loaded pistols at the furious congregation). This displeasure culminated in the National Covenant of 1638 – a declaration of allegiance binding together Scottish nationalism and the Calvinist faith.

Any conclusion about the Laudian religious reforms must hinge on the fact that, in terms of his and Charles’ objectives, they very much overhauled the Calvinist systems of worship, the role of priests, Church government, and the physical appearance of churches. The public response, however, ranging from silent resentment to full-scale war, displays how damaging these reforms were to Charles’ relationship with his subjects – coupled with the influence wielded by his wife Henrietta Maria, public fears about Catholicism greatly damaged Charles’ image, and made religion arguably the most intense issue of the Personal Rule. In judging Laud today, historical opinion is split: certain historians focus on his radical uprooting of the established system, with Patrick Collinson suggesting the Archbishop to have been “the greatest calamity ever visited upon the Church of England”, whereas others view Laud and Charles as pursuing an entirely reasonable goal: a more orderly and uniform church.

Much like the Personal Rule’s religious direction, its political direction was defined by one individual: Thomas Wentworth, later the Earl of Strafford. Serving as Lord Deputy of Ireland from 1632 to 1640, he set out with the aims of ‘civilising’ the Irish population, increasing revenue for the Crown, and challenging Irish titles to land – all under the umbrella term of ‘Thorough’, which aspired to concentrate power, crack down on opposition figures, and essentially preserve the absolutist nature of Charles’ rule during the 1630s.

Regarding Wentworth’s aims toward Irish Catholics, Ian Gentles’ 2007 work The English Revolution and the Wars in the Three Kingdoms argues that the friendships Wentworth maintained with Laud and with John Bramhall, the Bishop of Derry, “were a sign of his determination to Protestantize and Anglicize Ireland”. Devoted to a Catholic crackdown as soon as he reached Irish shores, Wentworth refused to recognise the legitimacy of Catholic officeholders in 1634, and reduced Catholic representation in Ireland’s Parliament by a third between 1634 and 1640 – this at a time when Catholics made up 90% of the country’s population. An even clearer indication of Wentworth’s hostility to Catholicism was his aggressive policy of land confiscation. Challenging Catholic property rights in Galway, Kilkenny and other counties, Wentworth would bully juries into returning verdicts favourable to the King, and even those Catholics who were granted their land back (albeit only three-quarters of it) were now required to make regular payments to the Crown. Wentworth’s enforcement of Charles’ religious priorities was further evidenced by his reaction to those in Ireland who signed the National Covenant: the accused were hauled before the Court of Castle Chamber (Ireland’s equivalent of the Star Chamber) and forced to renounce ‘their abominable Covenant’ as ‘seditious and traitorous’.

Seemingly in keeping with figures from the Personal Rule, Wentworth was notably tyrannical in his governing style. Sir Piers Crosby and Lord Esmonde were convicted of libel by the Court of Castle Chamber for accusing Wentworth of involvement in the death of Esmonde’s relative, and Lord Valentia was sentenced to death for “mutiny” – in fact, he’d merely insulted the Earl.

In considering Wentworth as a political figure, it is very easy to view him as merely another tyrannical brute, carrying out the orders of his King. Indeed, his time as Charles’ personal advisor (1639 onwards) certainly supports this view: he once told Charles that he was “loose and absolved from all rules of government” and was quick to advocate war with the Scots. However, Wentworth also saw great successes during his time in Ireland; he raised Crown revenue substantially by taking back Church lands, and purged the Irish Sea of pirates. Fundamentally, by the time of his execution in May 1641, Wentworth possessed a reputation amongst Parliamentarians very much like that of the Duke of Buckingham; both men came to wield tremendous influence over Charles, and to hold great offices and positions.

In the areas considered thus far, opposition to the Personal Rule appears to have been a rare occurrence, especially in any organised or effective form. Indeed, Durston claims the decade of the 1630s to have seen “few overt signs of domestic conflict or crisis”, viewing the period as altogether stable and prosperous. However, whilst certainly limited, the small amount of resistance can be viewed as representing a far more widespread feeling of resentment amongst the English populace. Whilst many royal actions received little pushback from the masses, the gentry, many of whom were becoming increasingly disaffected with the Personal Rule’s direction, gathered in opposition. Most notably, John Pym, the Earl of Warwick, and other figures collaborated with the Scots to launch a dissident propaganda campaign criticising the King, as well as encouraging local opposition (which saw some success, such as the mobilisation of the Yorkshire militia). Charles’ effective use of the Star Chamber, however, ensured that opponents – usually those who voiced opposition to royal decisions – were swiftly dealt with.

The historiographical debate surrounding the Personal Rule, and the Caroline Era more broadly, was and continues to be dominated by Whig historians, who view Charles as foolish, malicious, and power-hungry, and his rule without Parliament as destabilising, tyrannical and a threat to the people of England. A key proponent of this view is S.R. Gardiner who, believing the King to have been ‘duplicitous and delusional’, coined an alternative term to ‘Personal Rule’ – the Eleven Years’ Tyranny. This position has survived into the latter half of the 20th Century, with Charles having been labelled by Barry Coward as “the most incompetent monarch of England since Henry VI”, and by Ronald Hutton, as “the worst king we have had since the Middle Ages”. 

Recent decades have seen, however, the attempted rehabilitation of Charles’ image by Revisionist historians, the most well-known, as well as most controversial, being Kevin Sharpe. Responsible for the landmark study of the period, The Personal Rule of Charles I, published in 1992, Sharpe came to be Charles’ staunchest modern defender. In his view, the 1630s, far from a period of tyrannical oppression and public rebellion, were a decade of “peace and reformation”. During Charles’ time as an absolute monarch, his lack of Parliamentary limits and regulations allowed him to achieve a great deal: Ship Money saw the Navy’s numbers strengthened, Laudian reforms meant a more ordered and regulated national church, and Wentworth dramatically raised Irish revenue for the Crown – all this, and much more, without any real organised or overt opposition figures or movements.

Understandably, the Sharpian view has received significant pushback, primarily for taking an overly optimistic view and selectively mentioning the Personal Rule’s positives. Encapsulating this criticism, David Smith wrote in 1998 that Sharpe’s “massively researched and beautifully sustained panorama of England during the 1630s … almost certainly underestimates the level of latent tension that existed by the end of the decade”. This has been built on by figures like Esther Cope: “while few explicitly challenged the government of Charles I on constitutional grounds, a greater number had experiences that made them anxious about the security of their heritage”.

It is worth noting, however, that a year before his death in 2011, Sharpe came to consider the views of his fellow historians, acknowledging that Charles’ lack of political understanding had endangered the monarchy, and, more seriously, that by the end of the 1630s the Personal Rule was indeed facing mounting and undeniable criticism, from both Charles’ court and the public.

Sharpe’s unpopular perspective has been built upon by other historians, such as Mark Kishlansky. Publishing Charles I: An Abbreviated Life in 2014, Kishlansky viewed parliamentarian propaganda of the 1640s, as well as consistent smearing by historians over the centuries, as having resulted in Charles being viewed “as an idiot at best and a tyrant at worst”, labelling him “the most despised monarch in Britain’s historical memory”. Charles, however, faced no real preparation for the throne – it was always his older brother Henry who was the heir apparent. Additionally, once King, Charles’ Parliaments were stubborn and uncooperative – by refusing to provide him with the necessary funding, for example, they forced Charles to enact the Forced Loan. Kishlansky does, however, concede the damage caused by Charles’ unmoving belief in the Divine Right of Kings: “he banked too heavily on the sheer force of majesty”.

Charles’ personality, ideology and early life fundamentally meant an icy relationship with Parliament, one which grew into mutual distrust and eventual dissolution. Ultimately, the period of Personal Rule remains a highly debated topic within academic circles, with the recent arrival of Revisionism posing a challenge to the long-established negative view of the Caroline Era. Whether or not the King’s financial, religious, and political actions were met with a discontented populace or outright opposition, the identity crisis facing the period – that between tyranny and stability – has yet to be conclusively put to rest.



All States Desire Power: The Realist Perspective

Within the West, the realm of international theory has, since 1945, been a discourse dominated almost entirely by the Liberal perspective. Near-universal amongst the foreign policy establishments of Western governments, a focus on state cooperation, free-market capitalism and more broadly, internationalism, is really the only position held by most leaders nowadays – just look at ‘Global Britain’. As Francis Fukuyama noted, the end of the Cold War (and the Soviet Union) served as political catalysts, and brought about ‘the universalisation of Western liberal democracy as the final form of human government’.

Perhaps even more impactful, however, were the immediate post-war years of the 1940s. With the Continent reeling from years of physical and economic destruction, the feeling amongst the victors was understandably a desire for greater closeness, security and stability. This resulted in numerous alliances being formed, including political (the UN in 1945), military (NATO in 1949), and also economic (with the various Bretton Woods organisations). For Europe, this focus on integration manifested itself in blocs like the EEC and ECSC, which would culminate in the Maastricht Treaty and the EU.

This worldview, however, faces criticism from advocates of another school: Realism. The concerns of states shouldn’t, as Liberals claim, be on forging stronger global ties or forming more groups – instead, nations should be domestically-minded, concerned with their internal situation and safety. For Realism, this is what foreign relations are about: keeping to oneself, and furthering the interests of the nation above those of the wider global community.

To better understand Realism as an ideological school, we must first look to theories of human nature. From the perspective of Realists, the motivations and behaviour of states can be traced back to our base animalistic instincts, with the work of Thomas Hobbes being especially noteworthy. For the 17th Century thinker, before the establishment of a moral and ordered society (by the absolute Sovereign), Man is concerned only with surviving, protecting selfish interests and dominating other potential rivals. On a global scale, these are the priorities of nation-states and their leaders – Hans Morgenthau famously noted that political man was “born to seek power”, possessing a constant need to dominate others. However much influence or power a state may possess, self-preservation is always a major goal. Faced with the constant threat of rivals with opposing interests, states are always seeking a guarantee of protection – for Realists, the existence of intergovernmental organisations (IGOs) is an excellent example of this. Whilst NATO and the UN may seem the epitome of Liberal cooperation, what they truly represent is states ensuring their own safety.

One of the key pillars of Realism as a political philosophy is the concept of the Westphalian System, and how that relates to relationships between countries. Traced back to the Peace of Westphalia in 1648, the principle essentially asserts that all nation-states have exclusive control (absolute sovereignty) over their territory. For Realists, this has been crucial to their belief that states shouldn’t get involved in the affairs of their neighbours, whether that be in the form of economic aid, humanitarian intervention or furthering military interests. It is because of this system that states are perceived as the most important, influential and legitimate actors on the world stage: IGOs and other non-state bodies can be moulded and corrupted by various factors, including the ruthless self-interest of states.

With the unique importance of states enshrined within Realist thought, the resulting global order is one of ‘international anarchy’ – essentially a system in which state-on-state conflict is inevitable and frequent. The primary reason for this can be linked back to Hobbes’ 1651 work Leviathan: with no higher authority to enforce rules and settle disputes, people (and states) will inevitably come into conflict, and lead ‘nasty, brutish and short’ existences (an idea further expanded upon in Hedley Bull’s The Anarchical Society). Left in a lawless situation, with neither guaranteed protection nor guaranteed allies (all states are, of course, potential enemies), it’s every man for himself. At this point, Liberals will be eager to point out supposed ‘checks’ on the power of nation-states. Whilst we’ve already tackled the Realist view of IGOs, the existence of international courts must surely hold rogue states accountable, right? Well, the sanctity of state sovereignty limits the power of essentially all organisations: for the International Court of Justice, this means its rulings both lack enforcement and can be blatantly ignored (e.g., the court advised Israel against building a wall along the Palestinian border in 2004, advice of which the Israelis took no notice). Within the harsh world we live in, states are essentially free to do as they wish, consequences be damned.

Faced with egocentric neighbours, the inevitability of conflict and no referee, it’s no wonder states view power as the way of surviving. Whilst Realists agree that all states seek to accumulate power (and hard military power in particular), there exists debate as to the intrinsic reason – essentially, following this accumulation, what is the ultimate aim? One perspective, posited by thinkers like John Mearsheimer (and Offensive Realists), suggests that states are concerned with becoming the undisputed hegemon within a unipolar system, where they face no danger – once the most powerful, your culture can be spread, your economy strengthened, and your interests more easily defended. Indeed, whilst the United States may currently occupy the position of hegemon, Mearsheimer (along with many others) has been cautiously watching China – the CCP leadership clearly harbour dreams of world takeover.

Looking to history, the European empires of old were fundamentally creations of hegemonic ambition. Able to access the rich resources and unique climates of various lands, nations like Britain, Spain and Portugal possessed great international influence, and at various points, dominated the global order. Indeed, when the British Empire peaked in the early 1920s, it ruled close to 500 million people and covered a quarter of the Earth’s land surface, making it history’s biggest empire. Existing during a period of history in which bloody, expensive wars were commonplace, these countries did what they believed necessary, rising to the top and brutally suppressing those who threatened their positions – regional control was ensured, and idealistic rebels brought to heel.

In stark contrast is the work of Defensive Realists, such as Kenneth Waltz, who suggest that concerned more with security than global dominance, states accrue power to ensure their own safety, and, far from lofty ideas of hegemony, favour a cautious approach to foreign policy. This kind of thinking was seen amongst ‘New Left’ Revisionist historians in the aftermath of the Cold War – the narrative of Soviet continental dominance (through the takeover of Eastern Europe) was a myth. Apparently, what Stalin truly desired was to solidify the USSR’s position through the creation of a buffer wall, due to the increasingly anti-Soviet measures of President Truman (which included Marshall Aid to Europe, and the Truman Doctrine).

Considering Realism within the context of the 21st Century, the ongoing Russo-Ukrainian War seems the obvious case study to examine. Within academic circles, John Mearsheimer has been the most vocal regarding Ukraine’s current predicament – a fierce critic of American foreign policy for decades now, he views NATO’s eastern expansion as having worsened relations with Russia, and only served to fuel Putin’s paranoia. From Mearsheimer’s perspective, Putin’s ‘special military operation’ is therefore understandable and arguably justifiable: the West have failed to respect Russia’s sphere of influence, failed to acknowledge them as a fellow Great Power, and consistently thwarted any pursuits of their regional interests.

Alongside this, Britain’s financial involvement in this conflict can and should be viewed as willing intervention, and one that is endangering the already-frail British economy. It is all well and good to speak of defending rights, democracy and Western liberalism, but there comes a point where our politicians and media must be reminded – the national interest is paramount, always. This need not be our fight, and the aid money we’re providing the Ukrainians (in the billions) should instead be going towards the police, housing, strengthening the border, and other domestic issues.

Our politicians and policymakers may want a continuance of idealistic cooperation and friendly relations, but the brutal, unfriendly reality of the system is becoming unavoidable. Fundamentally, self-interested leaders and their regimes are constantly looking to gain more power, influence and territory. By and large, bodies like the UN are essentially powerless; decisions can’t be enforced, and sovereignty acts as an unbreachable barrier. Looking ahead to the UK’s future, we must be more selfish, focused on making British people richer and safer, and on putting our national interests above childish notions of eternal friendship.



John Galt, Tom Joad, and other Polemical Myths

Just about the only titles by Ayn Rand I’d feel comfortable assigning my students without previous suggestion by either student or boss would be Anthem or We the Living, mostly because they both fit into broader genres of dystopian and biographical fiction, respectively, and can, thus, be understood in context. Don’t get me wrong: I’d love to teach The Fountainhead or Atlas Shrugged, if I could find a student nuanced (and disciplined) enough to handle those two; however, if I were to find such a student, I’d probably skip Rand and go straight to Austen, Hugo, and Dostoevsky—again, in part to give students a context of the novelistic medium from which they can better understand authors like Rand.

My hesitation to teach Rand isn’t one of dismissal; indeed, it’s the opposite—I’ve, perhaps, studied her too much (certainly, during my mid-twenties, too exclusively). I could teach either of her major novels, with understanding of both plot and philosophy, having not only read and listened to them several times but also read most of her essays and non-fiction on philosophy, culture, art, fiction, etc. However, I would hesitate to teach them because they are, essentially, polemics. Despite Rand’s claiming it was not her purpose, the novels are didactic in nature: their events articulate Rand’s rationalistic, human-centric metaphysics (itself arguably a distillation of Aristotelian natural law, Lockean rights, and Nietzschean heroism filtered through Franklin, Jefferson, and Rockefeller and placed in a 20th-century American context—no small feat!). Insofar as they do so consistently, The Fountainhead and Atlas Shrugged succeed, and they are both worth reading, if only to develop a firsthand knowledge of the much-dismissed Rand’s work, as well as to understand their place in 20th-century American culture and politics.

All that to say that I understand why people, especially academics, roll their eyes at Rand (though at times I wonder if they’ve ever seriously read her). The “romantic realism” she sought to develop to glorify man as (she saw) man ought to be, which found its zenith in the American industrialist and entrepreneur, ran counter to much that characterized the broader 20th-century culture (both stylistically and ideologically), as it does much of the 21st. Granted, I may have an exaggerated sense of the opposition to Rand—her books are still read in and out of the classroom, and some of her ideas still influence areas of at least American culture—and one wonders if Rand wouldn’t take the opposition, itself, as proof of her being right (she certainly did this in the last century). However, because of the controversy, as well as the ideology, that structures the novels, I would teach her with a grain of salt, not wanting to misuse my position teaching those who are, essentially, other people’s kids, who probably don’t know and haven’t read enough to understand Rand in context. For this fact, if not for the reasoning, I can imagine other teachers applauding me.

And yet, how many academics would forego including Rand in a syllabus and, in the same moment, endorse teaching John Steinbeck without a second thought?

I generally enjoy reading books I happened to miss in my teenage years. Had I read The Great Gatsby any sooner than I did in my late twenties, I would not have been ready for it, and the book would have been wasted on me. The same can be said of The Scarlet Letter, 1984, and all of Dostoevsky. Even the books I did read have humbled me upon rereading; Pride and Prejudice wasn’t boring—I was.

Reading through The Grapes of Wrath for the first time this month, I am similarly glad I didn’t read it in high school (most of my peers were not so lucky, having had to read it in celebration of Steinbeck’s 100th birthday). The fault, dear Brutus, is not in the book (though it certainly has faults) but in ourselves—that we, as teenagers who lack historical, political, and philosophical context, are underlings. One can criticize Atlas Shrugged for presenting a selective, romanticized view of the capitalist entrepreneur (which, according to Rand’s premises, was thorough, correct, consistent, and, for what it was, defensible) which might lead teenagers to be self-worshipping assholes who, reading Rand without nuance, take the book as justification for mistaking their limited experience of reality as their rational self-interest. One can do much the same, though for ideas fundamentally opposed to Rand’s, for The Grapes of Wrath.

A member of the Lost Generation, John Steinbeck was understandably jaded in his view of 19th-century American ideals. Attempting to take a journalistic, modern view of the Great Depression and Dust Bowl from the bottom up, he gave voice to the part of American society that, but for him, may have remained inarticulate and unrecorded. Whatever debate can be had about the origins of Black Tuesday (arguably beginning more in Wilson’s Washington and Federal Reserve than on Wall Street), the Great Depression hit the Midwest hardest, and the justifiable sense that Steinbeck’s characters are unfair victims of others’ depredations pervades The Grapes of Wrath, just as it articulates one of the major senses of the time. When I read the book, I’m not only reading of the Joad family: I’m reading of my own grandfather, who grew up in Oklahoma and later Galveston, TX. He escaped the latter effects of the Dust Bowl by going not to California but to Normandy. I’m fortunate to have his journal from his teenage years; other Americans who don’t have such a journal have Steinbeck.

However, along with the day-in-the-life (in which one would never want to spend a day) elements of the plot, the book nonetheless offers a selectively, one might even say romantically, presented ideology in answer to the plot’s conflict. Responding to the obstacles and unfairness depicted in The Grapes of Wrath, one can find consistent advocacy of revolution among the out-of-work migrants who comprise most of the book. Versus Rand’s extension of Dagny Taggart’s or Hank Rearden’s sense of pride, ownership, and property down to the smallest elements of their respective businesses, one finds in Steinbeck the theme of a growing disconnect between legal ownership and the right to the land.

In the different reflections interpolated throughout the Joads’ plot, Steinbeck describes how, from his characters’ view, there had been a steady divorce over the years between legal ownership of the land and appreciation for it. This theme was not new to American literature. The “rural farmer vs city speculator” mythos is one of the fundamental characteristics of American culture, reaching back to Jefferson’s Democratic Republicans’ opposition to Adams’s Federalists, and the tension between the southwest frontiersman and the northeast banker would play a major role in the culture of self-reliance, the politics of the Jacksonian revolution onward, and the literature of Mark Twain and others. Both sides of the tension attempt to articulate in what the inalienable right to property inheres. Is it in the investment of funds and the legal buying and owning of land, or is it in the physical production of the land, perhaps in spite of whoever’s name is on the land grant or deed? Steinbeck is firmly in the latter camp.

However, in The Grapes of Wrath one finds not a continuation of the yeoman farmer mythos but an arguable undermining of the right to property and profit, itself, that undergirds the American milieu which makes the yeoman farmer possible, replacing it with an (albeit understandable) “right” based not on production and legal ownership, but on need. “Fallow land’s a sin,” is a consistent motif in The Grapes of Wrath, especially, argue the characters, when there are so many who are hungry and could otherwise eat if allowed to plant on the empty land. Steinbeck does an excellent job effecting sympathy for the Joads and other characters who, having worked the soil their whole lives, must now compete with hundreds of others like them for jobs paying wages that, due to the intended abundance of applicants, fall far short of what is needed to fill their families’ stomachs.

Similarly, Steinbeck goes to great pains to describe the efforts of landowners to keep crop prices up by punishing attempts to illegally grow food on the fallow land or pick the fruit left to rot on trees, as well as the plot, narrowly evaded by the Joads, to eradicate “reds” trying to foment revolution in one of the Hoovervilles of the book (Tom Joad had, in fact, begun to advocate rising up against landowners in more than one instance). In contrast to the Hoovervilles and the depredations of locals against migrant Okies stands the government camp, safely outside the reach of the local, unscrupulous, anti-migrant police and fitted out with running water, beneficent federal overseers, and social events. In a theme reminiscent of the 19th-century farmers’ looking to the federal government for succor amidst an industrializing market, Steinbeck concretizes the relief experienced in the Great Depression by families like the Joads at the prospects of aid from Washington.

However, just as Rand’s depiction of early twentieth-century America is selective in its representation of the self-made-man ethos of her characters (Rand omits, completely, World War I and the 1929 stock market crash from her novels), Steinbeck’s representation of the Dust Bowl is selective in its omissions. The profit-focused prohibitions against the Joads’ working the land were, in reality, policies required by FDR’s New Deal programs—specifically the Agricultural Adjustment Act, which required the burning of crops and burying of livestock in mass graves to maintain crop prices and which was outlawed in 1936 by the Supreme Court. It is in Steinbeck’s description of this process, which avoids explicitly describing the federal government’s role therein, that one encounters the phrase “grapes of wrath”, presaging a presumable event—an uprising?—by the people: “In the souls of the people the grapes of wrath are filling and growing heavy, growing heavy for the vintage.” Furthermore, while Rand presents, if in the hypothetical terms of narrative, how something as innocuous and inevitable as a broken wire in the middle of a desert can have ramifications that reach all the way to its company’s highest chair, Steinbeck’s narrative remains focused on the Joads, rarely touching on the economic exigencies experienced by the local property and business owners except in relation to the Joads and to highlight the apparent inhumanity of the propertied class (which, in such events as the planned fake riot at the government camp dance party, Steinbeck presents for great polemical effect).

I use “class” intentionally here: though the Great Depression affected all, Steinbeck’s characters often adopt the class-division viewpoint not only of Marx but of Hegel, interpreting the various landowners’ actions as being intentionally taken at the expense of the lower, out-of-work, classes. Tom Joad’s mother articulates to Tom why she is, ultimately, encouraged by, if still resentful of the apparent causers of, their lot:

“Us people will go on living when all them people is gone. Why, Tom, we’re the people that live. They ain’t gonna wipe us out. Why, we’re the people—we go on.”

“We take a beatin’ all the time.”

“I know.” Ma chuckled. “Maybe that makes us tough. Rich fellas come up an’ they die, an’ their kids ain’t no good, an’ they die out. But, Tom, we keep a-comin’. Don’ you fret none, Tom. A different time’s comin’.”

Describing, if in fewer words than either Hegel or Marx, the “thesis-antithesis-synthesis” process of historical materialism, whereby their class is steadily strengthened by its adverse circumstances in ways the propertied class is not, Mrs. Joad articulates an idea that pervades much of The Grapes of Wrath: the sense that the last, best hope and strength of the put-upon lower classes is found in their being blameless amidst the injustice of their situation, and that their numbers make their cause inevitable.

This, I submit, is as much a mythos—if a well-stylized and sympathetically presented one—as Rand’s depiction of the producer-trader who is punished for his or her ability to create, and, save for the discernible Marxist elements in Steinbeck, both are authentically American. Though the self-prescribed onus of late 19th- and early 20th-century literature was partially journalistic in aim, Steinbeck was nonetheless a novelist, articulating not merely events but the questions beneath those events and concretizing the perspectives and issues involved into characters and plots that create a story, in the folk fairy tale sense, a mythos that conveys a cultural identity. Against Rand’s modernizing of the self-made man Steinbeck resurrects the soul of the Grange Movement of farmers who, for all their work ethic and self-reliance, felt left behind by the very country they fed. That The Grapes of Wrath is polemical—from the Greek πολεμικός for “warlike” or “argumentative”—does not detract from the project (it may be an essential part of it). Indeed, for all the license and selectivity involved in the art form, nothing can give fuel to a cause like a polemical novel—as Uncle Tom’s Cabin, The Jungle, and many others show.

However, when it comes to assigning polemics to students without hesitation, I…hesitate. Again, the issue lies in recognizing (or, for most students, being told) that one is reading a polemic. When one reads a polemical novel, one is often engaging, in some measure, with politics dressed up as story, and it is through this lens and with this caveat that such works must be read—even (maybe especially!) when they are about topics with which one agrees. As in many things, I prefer to defer to Aristotle, who, in the third section of Book I of the Nicomachean Ethics, cautions against young people engaging in politics before they first learn enough of life to provide context:

Now each man judges well the things he knows, and of these he is a good judge. And so the man who has been educated in a subject is a good judge of that subject, and the man who has received an all-round education is a good judge in general. Hence a young man is not a proper hearer of lectures on political science; for he is inexperienced in the actions that occur in life, but its discussions start from these and are about these; and, further, since he tends to follow his passions, his study will be vain and unprofitable, because the end aimed at is not knowledge but action. And it makes no difference whether he is young in years or youthful in character; the defect does not depend on time, but on his living, and pursuing each successive object, as passion directs.

Of course, the implicit answer is to encourage young people (and ourselves) to read not less but more—and to read with the knowledge that their own interests, passions, neuroses, and inertias might be unseen participants in the process. Paradoxically, it may be by reading more that we can even start to read. Rand becomes much less profound, and perhaps more enjoyable, after one reads the Aristotle, Hugo, and Nietzsche who made her, and I certainly drew on American history (economic and political) and elements of continental philosophy, as well as other works of Steinbeck and the Lost Generation, when reading The Grapes of Wrath. Yet, as Aristotle implies, young people haven’t had the time—and, more importantly, the metaphysical and rhetorical training and self-discipline—to develop such reflection as readers (he said humbly and as a lifelong student, himself). Indeed, as an instructor I see this not as an obstacle but an opportunity—to teach students that there is much more to effective reading and understanding than they might expect, and that works of literature stand not as ancillary to the process of history but as loci of its depiction, reflection, and motivation.

Perhaps I’m exaggerating my case. I have, after all, taught polemical novels to students (Anthem among them, as well as, most recently, 1984 to a middle schooler), and a novel I’ve written and am trying to get published is, itself, at least partially polemical on behalf of keeping Shakespeare in the university curriculum. Indeed, Dostoevsky’s polemical burlesque of the psychology behind Russian socialism, Devils, or The Possessed, so specifically predicted the motives and method of the Russian Revolution (and any other socialist revolution) more than fifty years before it happened that it should be required reading. Nonetheless, because the content and aim of a work require a different context for teaching, a unit on Devils or The Grapes of Wrath would look very different from one on, say, The Great Gatsby. While the latter definitely merits offering background to students, the former would need to include enough background on the history and perspectives involved for students to be able to recognize them. The danger of omitting background from Fitzgerald would be an insufficient understanding of and immersion in the plot; from Steinbeck, an insufficient knowledge of the limits of and possible counters to the argument.

Part of the power and danger of polemical art lies in its using a fictional milieu to carry an idea that is not meant to be taken as fiction. The willing suspension of disbelief that energizes the former is what allows the latter idea to slip in as palatable. This can produce one of at least two results, both, arguably, artistic aberrations: either the idea is caught and disbelief cannot be suspended, leaving the artwork feeling preachy or propagandistic, or the audience member gives himself or herself over to the work completely and, through the mythic capability of the artistic medium, becomes uncritically possessed by the idea, deriving an identity from it while believing they are merely enjoying and defending what they believe to be great art. I am speaking from more than a bit of reflection: whenever I see some millennial on Twitter interpret everything through the lens of Harry, Ron, and Hermione, I remember mid-eye-roll that I once did the same with Dagny, Francisco, and Hank.

Every work of art involves a set of values it seeks to concretize and communicate in a certain way, and one culture’s mythos may be taken by a disinterested or hostile observer to be so much propaganda. Because of this, even what constitutes a particular work as polemical may, itself, be a matter of debate, if not personal taste. One can certainly read and gain much from any of the books I’ve mentioned (as The Grapes of Wrath’s Pulitzer Prize shows), and, as I said, I’m coming at Grapes with the handicap of its being my first read. I may very well be doing what I warn my students against doing, passing judgment on a book before I understand it; if I am, I look forward to experiencing a well-deserved facepalm moment in the future, which I aim to accelerate by reading the rest of Steinbeck’s work (Cannery Row is next). But this is, itself, part of the problem—or boon—of polemics: that to avoid a premature understanding one must intentionally seek to nuance one’s perspective, both positively and negatively, with further reading.

Passively reading Atlas Shrugged or The Grapes of Wrath, taking them as reality, and then interpreting all other works (and, indeed, all of life) through their lens is not dangerous because they aren’t real, but because within the limits of their selective stylization and values they are real. That is what makes them so powerful, and, as with anything powerful, one must learn how to use them responsibly—and be circumspect when leading others into them without also ensuring they possess the discipline proper to such works.



Eve: The Prototype of the Private Citizen

Written in the 1660s, John Milton’s Paradise Lost is the type of book I imagine one could spend a lifetime mining for meaning and still be left with something to learn. Conceived as an English Epic that uses the poetic forms and conventions of Homeric and Ovidic antiquity to present a Christian subject, it yields as much to the student of literature as it does to students of history and politics, articulating in its retelling of the Fall many of the fundamental questions at work in the post-Civil-War body politic of the preceding decade (among many other things). Comparable with Dante’s Inferno in form, subject, and depth, Paradise Lost offers—and requires—much to and from readers, and it is one of the deepest and most complex works in the English canon. I thank God Milton did not live a half century earlier or write plays, else I might have to choose between him and Shakespeare—because I’d hesitate to simply pick Shakespeare.

One similarity between Milton and Shakespeare that has import to today’s broader discussion involves the question of whether they present their female characters fairly, believably, and admirably, or merely misogynistically. Being a Puritan Protestant from the 1600s writing an Epic verse version of Genesis 1-3, Milton must have relegated Eve to a place of silent submission, no? This was one of the questions I had when I first approached him in graduate school, and, as I had previously found when approaching Shakespeare and his heroines with the same query, I found that Milton understood deeply the gender politics of Adam and Eve, and he had a greater respect for his heroine than many current students might imagine.

I use “gender politics” intentionally, for it is through the different characterizations of Adam and Eve that Milton works out the developing conception of the citizen in an England that had recently executed its own king. As I’ve written in my discussion of Shakespeare’s history plays, justified or not, regicide has comprehensive effects. Thus, the beheading of Charles I on 30 January 1649 had implications for all 17th-century English citizens, many of which were subsequently written about by writers like Margaret Cavendish and John Locke. At issue was the question of the individual’s relation to the monarch: does the citizen’s political identity inhere in the king or queen (Cavendish’s perspective), or does he or she exist as a separate entity (Locke’s)? Are they merely “subjects” in the sense of “the king’s subjects,” or are they “subjects” in the sense of being active agents with individual perspectives that matter? Is it Divine Right, conferred on and descended from Adam, that makes a monarch, or is it the consent of the governed, of which Eve was arguably the first among mankind?

Before approaching such topics in Paradise Lost, Milton establishes the narrative framework of creation. After an initial prologue that pays homage to the classical invoking of the Muses even as it undercuts the pagan tradition and places it in an encompassing Christian theology (there are many such nuances and tensions throughout the work), Milton’s speaker introduces Satan, né Lucifer, having just fallen with his third of heaven after rebelling against the lately announced Son. Thinking, as he does, that the Son is a contingent being like himself (rather than a non-contingent being coequal with the Father, as the Son is shown to be in Book III), Satan has failed to submit to a rulership he does not believe legitimate. He thus establishes one of the major themes of Paradise Lost: the tension between the individual’s will and God’s. Each character’s conflict inheres in whether or not they will choose to remain where God has placed them—which invariably involves submitting to an authority that, from their limited perspective, they do not believe deserves their submission—or whether they will reject it and prefer their own apparently more rational interests. Before every major character—Satan, Adam, and Eve—is a choice between believing in the superior good of God’s ordered plan and pursuing the seemingly superior option of their individual desires.

Before discussing Eve, it is worth looking at her unheavenly counterpart, Sin. In a prefiguration of the way Eve was formed out of Adam before the book’s events, Sin describes to Satan how she was formed, Athena-style, out of his head when he chose to rebel against God and the Son, simultaneously being impregnated by him and producing their son, Death. As such, she and Satan stand as a parody not only of the parent-progeny-partner relationship of Adam-Eve but also of God and the Son. Describing her illicit role in Lucifer’s rebellion, Sin says that almost immediately after birth,

I pleased and with attractive graces won
The most averse (thee chiefly) who full oft
Thyself in me thy perfect image viewing
Becam’st enamoured and such joy thou took’st
With me in secret that my womb conceived
A growing burden.

Paradise Lost II.761-767

Here and in other places, Sin shows that her whole identity is wrapped up in Satan, her father-mate. In fact, there is rarely any instance where she refers to herself without also referring to him for context or as a counterpoint. Lacking her own, private selfhood from which she is able to volitionally choose the source of her identity and meaning, Sin lives in a state of perpetual torment, constantly being impregnated and devoured by the serpents and hellhounds that grow out of her womb.

Sin’s existence provides a Dantean concretization of Satan’s rebellion, which is elsewhere presented as necessarily one of narcissistic solipsism—a greatness derived from ignoring knowledge that might contradict his supposed greatness. A victim of her father-mate’s “narcissincest” (a term I coined for her state in grad school), Sin is not only an example of the worst state possible for the later Eve but also, according to many critics, of women in 17th-century England, both in relation to their fathers and husbands, privately, and to the monarch (considered by many the “father of the realm”), publicly. Through this reading, we can see Milton investigating, through Sin, not only the theology of Lucifer’s fall but also an extreme brand of royalism assumed by many at the time. And yet, it is not merely a simple criticism of royalism, per se: though Milton, himself, wrote other works defending the execution of Charles I and eventually became a part of Cromwell’s government, it is with the vehicle of Lucifer’s rebellion and Sin—whose presumptions are necessarily suspect—that he investigates such things (not the last instance of his work being as complex as the issues it investigates).

After encountering the narcissincest of the Satan-Sin relationship in Book II, we are treated to its opposite in the next: the reciprocative respect between the Father and the Son. In what is, unsurprisingly, one of the most theologically packed passages in Western literature, Book III seeks to articulate the throne room of God, and it stands as the fruit of Milton’s study of scripture, soteriology, and the mysteries of the Incarnation, offering, perhaps wisely, as many questions as answers for such a scene. Front and center is, of course, the relationship between the Son and Father, Whose thrones are surrounded by the remaining two thirds of the angels awaiting what They will say. The Son and Father proceed to narrate to Each Other the presence of Adam and Eve in Eden and Satan’s approach thereunto; They then discuss what will be Their course—how They will respond to what They, omniscient, already know will happen.

One major issue Milton faced in representing such a discussion is the fact that it is not really a discussion—at least, not dialectically. Because of the triune nature of Their relationship, the Son already knows what the Father is thinking; indeed, how can He do anything but share His Father’s thoughts? And yet, the distance between the justice and foresight of the Father (in no way lacking in the Son) and the mercy and love of the Son (no less shown in the words of the Father) is managed by the frequent use of the rhetorical question. Seeing Satan leave Hell and the chaos that separates it from the earth, the Father asks:

Only begotten Son, seest thou what rage
Transports our Adversary whom no bounds
Prescribed, no bars…can hold, so bent he seems
On desperate revenge that shall redound
Upon his own rebellious head?

Paradise Lost III.80-86

The Father does not ask the question to mediate the Son’s apparent lack of knowledge, since, divine like the Father, the Son can presumably see what He sees. Spoken in part for the sake of those angels (and readers) who do not share Their omniscience, the rhetorical questions between the Father and Son assume knowledge even while they posit different ideas. Contrary to the solipsism and lack of sympathy between Sin and Satan (who at first does not even recognize his daughter-mate), Book III shows the mutual respect and knowledge of the rhetorical questions between the Father and Son—who spend much of the scene describing Each Other and Their motives (which, again, are shared).

The two scenes between father figures and their offspring in Books II and III provide a backdrop for the main father-offspring-partner relationship of Paradise Lost: that of Adam and Eve—with the focus, in my opinion, on Eve. Eve’s origin story is unique in Paradise Lost: while she was made out of Adam and derives much of her joy from him, she was not initially aware of him at her nativity, and she is, thus, the only character who has experienced and can remember (even imagine) existence independent of a source.

Book IV opens on Satan reaching Eden, where he observes Adam and Eve and plans how to best ruin them. Listening to their conversation, he hears them describe their relationship and their respective origins. Similar to the way the Father and Son foreground their thoughts in adulatory terms, Eve addresses Adam as, “thou for whom | And from whom I was formed flesh of thy flesh | and without whom am to no end, my guide | And head” (IV.440-443). While those intent on finding sexism in the poem will, no doubt, jump at such lines, Eve’s words are significantly different from Sin’s. Unlike Sin’s assertion of her being a secondary “perfect image” of Satan (wherein she lacks positive subjectivity), Eve establishes her identity as being reciprocative of Adam’s in her being “formed flesh,” though still originating in “thy flesh.” She is not a mere picture of Adam, but a co-equal part of his substance. Also, Eve diverges from Sin’s origin-focused account by relating her need of Adam for her future, being “to no end” without Adam; Eve’s is a chosen reliance of practicality, not an unchosen one of identity.

Almost immediately after describing their relationship, Eve recounts her choice of being with Adam—which necessarily involves remembering his absence at her nativity. Hinting that were they to be separated Adam would be just as lost, if not more, than she (an idea inconceivable between Sin and Satan, and foreshadowing Eve’s justification in Book IX for sharing the fruit with Adam, who finds himself in an Eve-less state), she continues her earlier allusion to being separated from Adam, stating that, though she has been made “for” Adam, he a “Like consort to [himself] canst nowhere find” (IV.447-48). Eve then remembers her awakening to consciousness:

That day I oft remember when from sleep
I first awaked and found myself reposed
Under a shade on flow’rs, much wond’ring where
And what I was, whence thither brought and how.

Paradise Lost IV.449-452

Notably seeing her origin as one not of flesh but of consciousness, she highlights that she was alone. That is, her subjective awareness preexisted her understanding of objective context. She was born, to use a phrase by another writer of Milton’s time, tabula rasa, without either previous knowledge or a mediator to grant her an identity. Indeed, perhaps undercutting her initial praise of Adam, she remembers it “oft”; were this not an image of the pre-Fall marriage, one might imagine the first wife wishing she could take a break from her beau—the subject of many critical interpretations! Furthermore, Milton’s enjambment allows a dual reading of “from sleep,” as if Eve remembers that day as often as she is kept from slumber—very different from Sin’s inability to forget her origin due to the perpetual generation and gnashing of the hellhounds and serpents below her waist. The privacy of Eve’s nativity so differs from Sin’s public birth before all the angels in heaven that Adam—her own father-mate—is not even present; thus, Eve is able to consider herself without reference to any other. Of the interrogative words with which she describes her post-natal thoughts— “where…what…whence”—she does not question “who,” further showing her initial isolation, which is so defined that she initially cannot conceive of another separate entity.

Eve describes how, hearing a stream, she discovered a pool “Pure as th’ expanse of heav’n” (IV.456), which she subsequently approached and, Narcissus-like, looked down into.

As I bent down to look, just opposite
A shape within the wat’ry gleam appeared
Bending to look on me. I started back,
It started back, but pleased I soon returned,
Pleased it returned as soon with answering looks
Of sympathy and love.

Paradise Lost IV.460-465

When she discovers the possibility that another person might exist, it is, ironically, her own image in the pool. In Eve, rather than in Sin or Adam, we are given an image of self-awareness, without reference to any preceding structural identity. Notably, she is still the only person described in the experience—as she consistently refers to the “shape” as “it.” Eve’s description of the scene contains the actions of two personalities with only one actor; that is, despite there being correspondence in the bending, starting, and returning, and in the conveyance of pleasure, sympathy, and love, there is only one identity present. Thus, rather than referring to herself as an image of another, as does Sin, it is Eve who is here the original, with the reflection being the image, inseparable from herself though it be. Indeed, Eve’s nativity thematically resembles the interaction between the Father and the Son, who, though sharing the same omniscient divinity, converse from seemingly different perspectives. Like the Father Who instigates interaction with His Son, His “radiant image” (III.63), in her first experience Eve has all the agency.

As the only instance in the poem when Eve has the preeminence of being another’s source (if only a reflection), this scene invests her interactions with Adam with special meaning. Having experienced this private moment of positive identity before following the Voice that leads her to her husband, Eve is unique in having the capacity to agree or disagree with her seemingly new status in relation to Adam, having remembered a time when it was not—a volition unavailable to Sin and impossible (and unnecessary) to the Son.

And yet, this is the crux of Eve’s conflict: will she continue to heed the direction of the Voice that interrupted her Narcissus-like fixation at the pool and submit herself to Adam? The ambivalence of her description of how she would have “fixed | Mine eyes till now and pined with vain desire,” over her image had the Voice not come is nearly as telling as is her confession that, though she first recognized Adam as “fair indeed, and tall!” she thought him “less fair, | Less winning soft, less amiably mild | Than that smooth wat’ry image” (IV.465-480). After turning away from Adam to return to the pool and being subsequently chased and caught by Adam, who explained the nature of their relation—how “To give thee being I lent | Out of my side to thee, nearest my heart, | Substantial life to have thee by my side”—she “yielded, and from that time see | How beauty is excelled by manly grace | And wisdom which alone is truly fair” (IV. 483-491). One can read these lines at face value, hearing no undertones in her words, which are, after all, generally accurate, Biblically speaking. However, despite the nuptial language that follows her recounting of her nativity, it is hard for me not to read a subtle irony in the words, whether verbal or dramatic. That may be the point—that she is not an automaton without a will, but a woman choosing to submit, whatever be her personal opinion of her husband.

Of course, the whole work must be read in reference to the Fall—not merely as the climax which is foreshadowed throughout, but also as a condition necessarily affecting the writing and reading of the work, it being, from Milton’s Puritan Protestant perspective, impossible to correctly interpret pre-Fall events from a post-Fall state due to the noetic effects of sin. Nonetheless, in keeping with the generally Arminian tenor of the book—that every character must have a choice between submission and rebellion for their submission to be valid, and that the grace promised in Book III is “Freely vouchsafed” and not based on election (III.175)—I find it necessary to keep in mind, as Eve seems to, the Adam-less space that accompanied her nativity. Though one need not read all of her interaction with Adam as sarcastic, in most of her speech one can read a subtextual pull back to the pool, where she might look at herself, alone.

In Eve we see the fullest picture of what is, essentially, every key character’s (indeed, from Milton’s view, every human’s) conflict: to choose to submit to an assigned subordinacy or abstinence against the draw of a seemingly more attractive alternative, often concretized in what Northrop Frye calls a “provoking object”—the Son being Satan’s, the Tree Adam’s, and the reflection (and private self it symbolizes, along with an implicit alternative hierarchy with her in prime place) Eve’s. In this way, the very private consciousness that gives Eve agency is that which threatens to destroy it; though Sin lacks the private selfhood possessed by Eve, the perpetual self-consumption of her and Satan’s incestuous family allegorizes the impotent and illusory self-returning that would characterize Eve’s existence if she were to return to the pool. Though she might not think so, anyone who knows the myth that hers parallels knows that, far from limiting her freedom, the Voice that called Eve from her first sight of herself rescued her from certain death (though not for long).

The way Eve’s subjectivity affords her a special volition connects with the biggest questions of Milton’s time. Eve’s possessing a private consciousness from which she can consensually submit to Adam parallels John Locke’s “Second Treatise of Government” of the same century, wherein he articulates how the consent of the governed precedes all claims of authority. Not in Adam but in Eve does Milton show that monarchy—even one as divine, legitimate, and absolute as God’s—relies on the volition of the governed, at least as far as the governed’s subjective perception is concerned. Though she cannot reject God’s authority without consequence, Eve is nonetheless able to agree or disagree with it, and through her Milton presents the reality that outward submission does not eliminate inward subjectivity and personhood (applicable as much to marriages as to monarchs, the two being considered parallel both in the poem and at the time of its writing); indeed, the inalienable presence of the latter is what gives value to the former and separates it from the agency-less state pitifully experienced by Sin.

And yet, Eve’s story (to say nothing of Satan’s) also stands as a caution against simply taking on the power of self-government without circumspection. Unrepentant revolutionary though he was, Milton was no stranger to the dangers of a quickly and simply thrown-off government, nor of an authority misused, and his nuancing of the archetype of all subsequent rebellions shows that he did not advocate rebellion as such. While Paradise Lost has influenced many revolutions (political in the 18th-century revolutions, artistic in the 19th-century Romantics, cultural in the 20th-century New Left), it nonetheless has an anti-revolutionary current. Satan’s presumptions and their later effects on Eve show the self-blinding that is possible to those who, simply trusting their own limited perception, push for an autonomy they believe will liberate them to an unfettered reason but which will, in reality, condemn them to a solipsistic ignorance.

By treating Eve, not Adam, as the everyman character who, like the character of a morality play, represents the psychological state of the tempted individual—that is, as the character with whom the audience is most intended to sympathize—Milton elevates her to the highest status in the poem. Moreover—and of special import to Americans like myself—as an articulation of an individual citizen whose relation to authority cannot be derived without consent, Eve stands as a prototype of the post-17th-century conception of the citizen that would lead not only to further changes in the relationship between the British Crown and Parliament but also to a war for independence in the colonies. Far from relegating Eve to a secondary place of slavish submission, Milton arguably makes her the most human character in humanity’s first story; wouldn’t that make her its protagonist? As always, let this stimulate you to read it for yourself and decide. Because it integrates so many elements—many of which might defy new readers’ expectations in their complexity and nuance—Paradise Lost belongs as much on the bookshelf and the syllabus as Shakespeare’s Complete Works, and it presents a trove for those seeking to study the intersection not only of art, history, and theology, but also of politics and gender roles in a culture experiencing a fundamental change.



A Romantic Case for Anime

We’ve all felt it—the mixed excitement and dread at hearing a beloved book is set to be made into a movie. They might do it right, capturing not only key plot events but also (and more importantly) how it feels to be swept up in the work as a whole; 2020’s Emma with Anya Taylor-Joy comes to my mind, most of all for the way it captures how someone who understands and loves Austen’s ubiquitous irony might feel when reading her work. However, they also might do it poorly; despite both the 1974 and 2013 attempts being worth watching, I’ve yet to see a rendition of The Great Gatsby that captures the book’s plot and narrative tone in the right proportion (in my opinion, the 1974 version emphasizes the former but misses some of the latter, while parts of the 2013 version exaggerate the latter just to the border of parody). My readers have, no doubt, already imagined examples of works they’ve always wished could be faithfully put onto the screen and others they’d rather not be risked to the vicissitudes of translating from one medium to another.

The last decade has thankfully seen a growth in long-form, box-office-quality productions that makes it more possible than ever to imagine longer works being produced without curtailing their lengthy plotlines—for example, the BBC’s 2016 rendition of War and Peace. However, this leaves another, perhaps more important, hurdle to hazard: while live-action media can now faithfully follow the plots of the originals, there still remains the difficulty of conveying the tone and feel of the works, especially when different media necessarily have different capacities and limitations of representation. Though I’ve enjoyed productions that have been made, I don’t know that I would expect live-action renditions to reproduce the aesthetic impression of, say, Paradise Lost, The Hunchback of Notre Dame, or Crime and Punishment, and I worry that attempts to do so might mar more than measure up. The problem lies in the difficulty of translating characters’ inner experience—which is usually conveyed by a stylizing narrator—via the essentially externalistic medium of the camera eye.

While a live action movie or series might remain faithful to the selective events in a plot, the lack of an interpretive narrator removes a key element of what defines epic poems and novels. Paradoxically, the narrowing of perspective through a stylizing narrator allows story to move from the limits of natural events into the limitlessness of human perception and interpretation. Voiceover narrators can provide thematic stylization in film, as well as essential plot coherence, but it is still primarily the camera that replaces the literary narrator as the means of conveyance. Furthermore, if too ubiquitous, voiceovers can separate the audience from the action, which is the focus of film. Film’s power inheres in its ability to place the audience in the midst of a plot, removing as many frames between the watcher and the story’s events as possible. However, this is also why books are so difficult to translate: motion pictures focus on events when the aesthetic experience of literature inheres in how characters and narrator experience said events.

The literary movement that focused most on the character’s experience (and, vicariously, ours) as the purpose of art was Romanticism. Romantic literature and poetry were less concerned about the subject matter than about their effect on the character’s emotions—in the sense that, from the generally Platonic metaphysics of the Romantics, the incidental reaches its fullest meaning by provoking an aesthetic experience far beyond it. From Hawthorne’s rose bush growing outside Salem’s prison, to Shelley’s secondhand rumination on the ruined feet of Ozymandias, to Keats’s apostrophe to the Grecian urn, the Romantics showed how part of the reality of an object involves its significance to the observer, and it was the role of the Romantic narrator and speaker to draw out that effect for the reader.

It is this essential influence of the narrator and characters’ inner lives on the great works’ aesthetic experience that makes me skeptical that even the best acting, camera work, and post-production effects can sufficiently replace them. It may be possible, and, again, I have very much enjoyed some renditions. Furthermore, not wanting to be the audience member who misses the Shakespeare performance for the open copy of the play on their lap, I tend to watch movie adaptations as distinct works rather than in strict relation to the originals. However, this, itself, may be a concession to my hesitance to trust film to live up to the aesthetic experience of certain books. I would, however, trust anime to do so.

While a history of Japanese manga and anime is beyond the scope of this piece (or my expertise), since choosing to explore the artform as a post-grad-school reward (or recovery—one can only stare at the sun that is Paradise Lost for so long) I’ve watched plenty of anime over the past ten years, and I have become convinced that it might serve as, at least, a middle ground when seeking to capture plot, narrative tone, and inner character experience in a motion medium. Anime is capable of handling virtually every story genre, and while it contains many of the same ridiculous hi-jinks and satire of Western cartoons and CG animation, it can also capture tragic pathos and sublime catharsis in ways that would be out of place in the vast majority of Western animation. This makes sense: originating in early 20th-century Japan, manga and anime were not subject to the same skepticism about artistic representations of transcendent value that characterized Western art after the move from 19th-century Romanticism and Realism to 20th-century modernism and post-modernism.

Of course, there have been exceptions; 20th-century Disney animation and Marvel and DC Comics were iconic because they attempted to be iconic—they unironically tried to depict in images those values and stories that are transcendent. However, even these were created predominantly with the child (or the childlike adult) in mind. Furthermore, while anime certainly has deserved elements of ambivalence, if not cynicism, and while there are many incredibly satirical and humorous series, anime as an artform is not implicitly dismissive of narrative trustworthiness and characters’ experience of the transcendent in the same way that much of Western motion art is. Rather, anime conventionally allows for the sublime heights and deepest horrors that previously characterized Romanticism, all of which it presents through the stylization of animation. This stylization is able to act as an interpretive medium just like a novel’s narrator, contextualizing events through the experience of those involved in a way often eschewed by, if not unavailable to, film.

For an example, I submit Kaguya-sama: Love is War (Japanese Kaguya-sama wa Kokurasetai – Tensai-tachi no Ren’ai Zunōsen, “Kaguya Wants to Make Them Confess: The Geniuses’ War of Hearts and Minds”). Though a romantic comedy in the Slice-of-Life genre, it exemplifies anime’s ability to convey the heights and depths of inner experience of the characters—here Kaguya and Miyuki, a pair of high school teenagers who, as student council president and vice president, compete to be top of their class while being secretly in love with each other and too proud to admit it. As the English title conveys, a running metaphor through the show is the bellicose subtext of their attempts to maneuver each other into confessing their love first and, thus, losing the war; think Beatrice and Benedick with the extremizing effect of teenage hormones and motifs of heavy artillery.

Plot-wise, Love is War follows a standard rom-com formula, with tropes recognizable to Western audiences: the pride and prejudices of the characters, the much ado about things that end up being really nothing, the presence of a mutual friend who acts as an oblivious catalyst and go-between in the relationship, etc. However, the show reinvigorates these tropes by portraying via hyperbolic narrator the deuteragonists’ experience of the episodes’ conflicts, bringing audience members into the all-consuming tension of how a teenager might see something as minor as whether to share an item from their lunch. The combination of chess and military metaphors conveys the inner conflicts of the initially cold but gradually warming characters (the “tsundere” character type common in such anime), and the consistency of such motifs creates a unified aesthetic that, due in large part to the disconnect between the over-the-top tone and, in reality, low-stakes subject matter, is hysterical. Another unique aspect of Love is War is that, due to its focus on the characters’ experience of the plot (all the better for being trivially mundane), it’s, technically, a Romantic romantic comedy.

Love is War is, of course, a low-stakes example of what modern anime can do, though it did score three awards, including Best Comedy, at the 2020 Crunchyroll Anime Awards. A more serious example, Death Note, similarly conveys much of its gravitas through voiceover—this time the first-person narration of protagonist Light Yagami, a high schooler who, with the help of a book from the realm of the dead, is able to kill anyone whose name and face he knows, and L, a mysterious and reclusive detective charged by Interpol to find him. Throughout the series—which employs similar, if non-parodic, attempts by characters to outwit each other as Love is War—Light and L articulate their planned maneuvers and the implications thereof through inner voiceover. Not only does the narration lay out elements of their battle of wits that the audience might have missed, but it conveys the growing tension the two experience—especially Light, who, as he amasses fame as both a menace and cult hero, experiences a growing egotism and subsequent paranoia around the possibility of being found out.

Just as Love is War is, in many ways, a parallel of Pride and Prejudice (Elizabeth and Darcy, themselves, both being tsundere characters), Death Note’s focus on a young man who wishes to achieve greatness by killing those deserving of death and who subsequently develops a maddening neurosis is virtually the same as Crime and Punishment—however enormously their plots and endings differ (Crime and Punishment lacks an explicit demonic presence like Death Note’s Shinigami Ryuk, the Death Note’s otherworldly owner; Dostoevsky would not employ the spectre of a conversant devil until The Brothers Karamazov—yet another point of consanguinity between anime like Death Note and his writing). Regardless of their differing plots, the anime’s inclusion of the characters’ inner thoughts and imaginations conveys an increasingly tense tone similar to how Dostoevsky steadily shows Raskolnikov’s moral unmooring, and the explanations and attempted self-justifications by both Light and L convey more than I think even the best cinema would be capable of showing.

I am not advocating that every narrative motif or figuration be included in page-to-screen renditions, nor that we cease trying to actively reinvigorate great works of art through judicious adaptations into new media. Yet, if the inner lives of teenagers—which are often exaggerated, if at times unnecessarily, to Romantic proportions—can be portrayed by anime to such comic and tragic effect, with the figuration and tone of the characters’ perceptions seamlessly paralleling the literal events without obscuring them, then I’d be interested to see what an anime Jane Eyre, The Alchemist, or Sula might look like. Based on the above examples, as well as anime heavyweights like Fullmetal Alchemist, Cowboy Bebop, and, if one is not faint of heart, Berserk, all of which present events in some measure through the background and perspective of the main characters, I could imagine the works of Milton, Hugo, Austen, Dostoevsky, and others in anime form, with the aesthetic experience of the original narration intact.


Photo Credit.

The Conservative Cope

According to recent polling by YouGov, a measly 1% of 18- to 24-year-olds plan to vote Conservative at the next general election. Having won roughly 20% of this demographic in 2019, the Conservative Party has lost 95% of its support amongst Britain’s youngest voters in less than four years.

In reaction to this collapse in support, journalists and commentators have taken to rehashing the same talking-points regarding Tory ineptitude and how to resolve it – build more houses, be more liberal, have younger parliamentarians, and so on.

I don’t intend to add to this ever-growing pile of opinion pieces. Instead, I want to put Tory ineptitude into perspective, in the hope of undermining the entrenched and parochial coping of Britain’s right-leaning politicians and commentariat.

Even though Churchill didn’t coin the phrase, right-leaning talking-heads maintain, even if not articulated as such, that “if you’re not a liberal at 20 you have no heart, if you’re not a conservative at 40 you have no brain”; the progressive and liberal tendencies of the young are annoying, but natural and inevitable.

Of course, this is simply not true. Thatcher won the most support from 18- to 24-year-olds in 1979 and 1983, something which left-wing and right-wing critics are more than happy to point out, yet such doubters of the Iron Law of Liberal Youth have managed to reinvent the law, albeit without the caveat of an inevitable turn to the right in later life.

Socialists and capitalists don’t agree on many things, but they are united by the belief that Britain’s youth is a bastion of progressive leftism, marching in lock-step with other first-time voters around the world. In the former, this inspires great confidence; in the latter, this inspires a sense of foreboding.

Other commentators have blamed Brexit, which is also wrong. Despite the widely-cited age gap between the average Remainer and Leaver, the UK’s relationship with the EU is pretty far down the average young person’s list of political priorities, hence why almost every avid post-Brexit remainer is a terminally online geriatric. Ironically, data from the British Election Study showed a gradual increase in support for the Conservatives amongst Britain’s younger voters between 2015 and 2019.

Any person that has met the new cohort of young conservatives will attest to their nationalistic and socially conservative modus operandi. With its failures on crime and immigration reduction broadcast across the nation, it’s unsurprising that such people would lose faith in the Conservative Party’s ability to govern as a conservative party.

Indeed, given the Conservative Party’s eagerness to hold onto the Cameronite ‘glory days’ of tinkering managerialism, interspersed with tokenistic right-wing talking-points (i.e., the things which actually matter to the conservative base), it’s little wonder that the Tories have failed to win the young.

The Conservative Party Conference has a less than palatable reputation, but when the bulk of events revolve around uninformed conversations about tech, financial quackery, achieving Net Zero and lukewarm criticisms of The Trans Business, it is unsurprising so many Tory activists choose to preoccupy themselves with cocaine and sodomy.

Contrast this with the European continent, where right-wing populist parties are doing remarkably well with a demographic the Tories have all but officially dismissed. In the second round of France’s 2022 presidential election, incumbent president Emmanuel Macron, a centrist liberal europhile, was re-elected for a second term with more than 58% of the vote. Although Macron obtained the majority of 18- to 24-year-olds who voted, it was the over-60s who provided the backbone of his re-election, securing roughly 70% of their votes.

Moreover, whilst she was most popular with older voters (50- to 59-year-olds), the right-wing Marine Le Pen secured a sizeable portion of voters across all age brackets, especially those aged between 25 and 59, filling the chasm left behind by Macron’s near-monopolisation of France’s oldest citizens.

These patterns were generally replicated in the first round of voting, although the far-left Melenchon garnered the most support from France’s youngest voters. At first glance, most right-leaning commentators would flippantly dismiss this as the wholesale liberal indoctrination of the youth, overlooking the astonishing fact that roughly 25% of France’s youngest voters support right-wing nationalism, whether that be Marine Le Pen or Eric Zemmour.

Due to growing suspicion of the two main parties in Germany, the centre-right Christian Democratic Union (CDU, otherwise known as Union) and the centre-left Social Democratic Party (SPD), third parties have gained support from the disaffected young, such as the centre-left Greens, the centre-right Free Democratic Party (FDP) and the right-wing Alternative for Germany (AfD).

Whilst it’s not doing as well as the Greens with first-time voters on the national stage, the AfD is making strides at the state level and is doing noticeably well with Germans in their 30s, which isn’t insignificant in a country with a median age of 45. Compare this to Britain’s Conservatives, who start to falter with anyone below the age of 40!

Moreover, the AfD is effectively usurping the CDU as the main right-leaning political force in many parts of Germany. For example, the AfD was the most popular party with voters under 30 in the CDU stronghold of Saxony-Anhalt during the last state election, a forebodingly bittersweet centrist victory.

Similarly, Meloni’s centre-right coalition, dominated by the nationalist Brothers of Italy party, didn’t lead amongst the nation’s youngest voters (18 to 34 years old), but it came extremely close, gaining 30% of their votes compared to the centre-left coalition’s 33% – and it won every other age bracket in the last general election. Again, not bad for a country with a median age just shy of 50.

Moreover, these trends transcend Western Europe, showing considerable signs of life in the East. Jobbik, the right-leaning opposition to Viktor Orban’s right-wing Fidesz party, is highly popular with university students, and despite losing the recent election, Poland’s right-wing Law and Justice party obtained roughly a third of first-time votes in the election four years prior.

Roughly a quarter of first-time voters in Slovakia opted for the People’s Party-Our Slovakia, a far-right party with neo-Nazi roots, and roughly 35% of Bulgarian voters between 18 and 30 years old voted for the right at the last parliamentary election, centre-right and far-right included.

Evidently, the success of right-wing nationalism amongst young voters isn’t confined to Europe’s republics. Constitutional monarchies, such as Sweden, Norway, and Spain, have also produced right-wing electoral successes.

The Moderate Party, Sweden’s main centre-right political force, won the largest share of voters aged 18 to 21, with the insurgent right-wing Sweden Democrats placing second amongst the same demographic, coming only a few points behind their centre-right recipients of confidence-and-supply in government.

Further broken down by sex, the Sweden Democrats were distinctly popular with young Swedish men, and tied with the Social Democrats as the most popular party with Swedish men overall. Every age bracket below 65 was a close race between the Social Democrats and either the Moderates or the Sweden Democrats, whilst those aged 65 and over overwhelmingly voted for the Social Democrats.

Similar to the Netherlands, whilst the Labour Party and Socialist Left Party were popular among young voters at the last Norwegian general election, support for the centre-right Conservative Party and the right-wing Progress Party didn’t trail far behind, with support for centre-left and centre-right parties noticeably increasing with age.

Whilst their recent showing wasn’t the major upset pollsters had anticipated, Spain’s right-wing Vox remains a significant political force, as a national party and amongst the Spanish youth, being the third most popular party with voters aged 18 to 24.

Meanwhile, the centre-right People’s Party (PP) is the most popular party with voters aged between 18 and 34, with the centre-left Spanish Socialist Workers’ Party (PSOE) drawing most of its support from voters aged 55 and older, especially those over 75.

Still, it is easy to see how sceptics might blame the right’s alleged inability to win over the young on our cultural differences with the European continent. After all, it’s clear youth politics is taken more seriously there. The JFvD, the youth wing of the right-wing Forum for Democracy (FvD) in the Netherlands, is the largest political youth movement in the Benelux. The JFvD regularly organises activities which extend beyond campaign drudgery, from philosophy seminars to beach parties. Contrast this to the UK, where youth participation begins and ends with bag-carrying and leafleting; the drudgery of campaigning is only interspersed by instances of sexual harassment and other degenerate behaviour.

However, this suspicion is just as easily put to rest when we compare Britain to the rest of the Anglosphere, especially New Zealand, Canada, and the United States of America.

In the run-up to New Zealand’s general election, polling from The Guardian indicated greater support for the centre-right National Party (40%) amongst voters aged 18 to 34 than the centre-left Labour Party (20%), a total reversal of the previous election, defying purported trends of a global leftward shift amongst younger generations.

More to the point, support was not going further left, with the centre-left Labour-Green coalition accounting for 34% of millennial votes, compared to the centre-right coalition’s rather astounding 50%; again, a complete reversal of previous trends, and further proof that so-called ‘youthquakes’ aren’t as decisive as commentators and activists would have us believe.

Despite Labour’s success with young voters in 2017 and 2019, when the voter turnout of younger generations is as abysmal as Britain’s, it’s not exactly a given that parties and individuals of a non-socialistic persuasion should abdicate Britain’s future to a dopey loon like Corbyn. The creed of Britain’s youth isn’t socialism, but indifference.

If anything, right-leaning parties are more than capable of producing ‘youthquakes’ of their own. At a time when the British Conservatives are polling at 1% with their native young, Canada’s Conservative Party is the most popular party with young Canadians, polling at around 40% amongst 18- to 29-year-olds, and despite his depiction as a scourge upon America’s youth, Trump comfortably won white first-time voters in both 2016 and 2020. Perhaps age isn’t the main dividing line in the Culture War after all!

In conclusion, the success of the Conservative Party with younger voters does not hinge upon our electoral system, our constitutional order, our place in Europe or the Anglosphere. Simply put, the Tories’ inability to win over the young is not an inability at all, but the result of coping; a stubborn and ideological unwillingness motivated by geriatric hubris, disproven time and time again by the success of other right-wing parties across the Western world.


The Obsession with News

In 1980, Ted Turner and Reese Schonfeld co-founded the Cable News Network (CNN). Despite derision over the idea of a 24 hour rolling news channel, CNN became a massive hit and the forefather of the news system we have today. In the 43 years since CNN first aired, news channels have changed from having bulletins every few hours to being on air 24/7. Our parents had to wait for the top of the hour for news, unless breaking news broke into programming, whilst we can just turn it on at the press of a button.

Whilst many may marvel at the idea of 24 hour news, it is part of why news today has its problems. As a result of constant media absorption, competition from social media and the internet, as well as a fast-paced world, society itself has become obsessed with the news. Every tiny little story becomes splashed across screens, both large and small, in a desperate attempt to capture the moment before it vanishes. 

Everything is Breaking News

If, like me, you have the BBC news app alert on your phone, then this will be a similar tale. The alert goes off. You check it. Whilst it’s officially classed as ‘Breaking News,’ it’s not really that important. Some things are of course important. Look at the death of Her Majesty The Queen last year. That was a news story that knocked everything else off the air. Considering that she had been our monarch since 1952, it’s fair to say that this was incredibly important breaking news. 

Generally, the app applies the term ‘Breaking News’ rather liberally. Holly Willoughby leaving This Morning after fourteen years is not worth your phone going off. Beyoncé removing ‘offensive lyrics’ from an old song isn’t worth it either.

That also applies to news channels. Sky News and the BBC will quite happily run that ticker across the bottom of the screen for just about any reason. Rare is the day when the bottom of Sky News is not a flash of yellow and black. Even a slow news day will have breaking news just to keep things a bit fresh.

It’s understandable really. In this day and age, news travels fast. It comes and goes in the blink of an eye. News companies want to have their hold on the story before the next one comes. When Twitter/X or Facebook gets the news first, well, that’s one less story that they’ve managed to break to viewers. The big media organisations may have the means to research the stories and get the scoops, but they don’t ever get it out first. One is more likely to find out a story through social media than through 24 hour news or an app.

Considering the point of the 24 hour news cycle is to be fresh, that’s not really a good thing.

Every Little Story, Made Bigger 

On the 18th April 1930, BBC news announced that “there is no news.”

Can you imagine that today? Another issue with the 24 hour cycle and news today is the fact that there’s a desperation to find something to report on. When channels and apps are never off, they can’t have a rest. Something must be going on. It doesn’t matter what it is, but it must be something.

Perhaps it’s a take on a news story through the issue of race, gender or sexuality. Perhaps it’s a random study from Australia. Whatever it is, it’s got a place in the news because it’s something.

Take, for example, The Daily Climate Show on Sky News. What was originally a daily, thirty-minute slot in prime time was axed to a weekend event. It’s not hard to see why. In its desperation to make more news out of something, Sky took a risk by devoting half an hour every day to the exact same topic. Considering how divisive climate change and its presentation are, it was hardly a risk worth taking. Changing it to every weekend was still a poorly thought-out move.

Repetition

You might turn the news on when you get up at seven in the morning. You might turn the news on at ten before you go to bed. What might link those two viewings is that they are exactly the same.

When the media can’t slot a new story in, they’ll just repeat it. If it’s an unfolding story, then of course you’ll see it or read about it again later because there are new things to be said. The problem occurs when it’s the same story over and over again.

Nobody wants to hear the same story they did fifteen hours ago without new information. It’s tiresome.

The Fear Factor

Then there’s the fear in which the media thrives.

From the moment that Boris Johnson told us that we now had to stay in our homes because of COVID, the media was all over the pandemic – perhaps even before then. With nothing else happening because everyone was locked down, all the media could do was run constant stories about the ever-climbing death toll. At first, well, it was what we expected. Then it started to get a bit repetitive.

These stories tend to get a much frostier reception if reported today. Commentators scold the media for trying to scare us or create fear. 

They could, however, get away with it during those early months. With nothing else to do, we had more time for the news. Their stories were constantly about the deaths and after effects of COVID. We were already unable to leave our homes and live our daily lives, with constant mask wearing when we went out, so did we need to be intimidated even more?

It’s not just COVID. Look at the climate protestors, especially the young ones, when interviewed. Some of them cry in fear for their future, weeping about the thought of a planet that could be gone when they have reached adulthood. Considering the constant doomsday coverage of climate change in the news, it’s easy to see where this fear comes from. Kids’ news shows like Sky’s awful FYI focus on the topic regularly. It’s constantly on mainstream news. 

Children are more in tune with the world today. With all the darkness in the news and on social media, some will blame it for the declining mental health we are seeing in young people. Indeed, where is the hope? Well, people don’t watch the news to hear about new innovations or cute animals being born in zoos. Fear is more gripping than hope, and a bigger seller too, but it’s not good for morale.

It’s vitally important that we know what’s going on in the world, but too much news is bad for the soul. In a world where it’s all too accessible and the media makes money on constant news, we can’t rely on it for real information. We’re either fed fear or repetition. The obsession with news is, ironically, making us less knowledgeable. Resist the urge to keep up beyond what is needed. It’s better for you.


The Great Exhibition of 2025

The story of the Great Exhibition of 1851 begins with Prince Albert. Having spent time as President of the Society for the Encouragement of Arts, Commerce and Manufactures a few years prior, and having seen the success of the Exhibition of Products of French Industry, he created the Royal Commission for the Exhibition of 1851. Spearheaded by Victorian England’s greatest industrial thinkers, including Henry Cole, Owen Jones, and Isambard Kingdom Brunel, plans were immediately drawn up.

Joseph Paxton’s Crystal Palace was the chosen design for the structure. Measuring 1,851 feet with an interior height of 128 feet, the purpose of the structure was not only to be practical and cost effective, but a statement too. The Crystal Palace was representative of Victorian ingenuity and innovation, combining practicality with beauty. Without a need for interior lighting and being of a temporary nature, the structure was perhaps the greatest exhibit of all. Inside, exhibits included the Koh-I-Noor diamond, the Trophy Telescope, the first voting machine, and the prototype for the 1851 Colt Navy. Over the course of the next six months, over six million people would visit the exhibition, including the likes of Charles Darwin, Karl Marx, and Queen Victoria on 33 occasions.

The Victorians were longtermists, and thus understood that influencing the long-term future was essential to maintaining a strong and capable society. Indeed, many civilisations of the past were longtermist, and they understood the importance of building not only for functionality but for its wider contribution to culture. The Great Exhibition was such an example, proving wildly popular not only for its technological vigour, but for its cultural significance. Considering the success of the Great Exhibition and the exhibitions that preceded and followed it, I care to ask why we haven’t hosted such an exhibition in recent history.

If a new Great Exhibition is to succeed, it really needs to capture the spirit of 1851. It cannot be used as a soulless quest for more tourism. It needs to be genuinely meaningful, productive, and awe-inspiring. Recent attempts to plan and host such an event, namely the Festival of Great Britain and Northern Ireland, probably passed by without many of us knowing of its existence, let alone caring. These events are terribly unproductive, and undoubtedly cost more than the value they return. A genuine event of real value needs to break boundaries.

What would a modern-day exhibition look like? It should be noted that the structure used in 1851 was itself ‘modern’ and ‘different’ by 19th-century standards. The point here is to create and innovate. To create something new, even if inspired by the past. Perhaps this would serve as an opportunity for architects to search for a new architectural design, one to move us past the heavily criticised architecture of recent times. Whatever it is, it needs to be magnificent and grand. The Great Exhibition showed us that technological advancement does not have to be clouded in grey.

Moreover, the exhibits should be genuinely innovative and valuable. In 2013, Peter Thiel famously said that “we wanted flying cars, instead we got 140 characters.” Technological innovation has been narrowing towards internet-based services, whether that be social media or management software, and has left us in a ‘great stagnation’. There is huge opportunity to innovate outside of this and break the frontiers of technology, particularly as the United Kingdom begins to pivot towards a tech-based economy. We can be the first to lead the revolutions in medicine, transport, manufacturing, and energy.

In 1850, Prince Albert said: “The exhibition of 1851 is to give us a true test and a living picture of the point of development at which the whole of mankind has arrived in this great task, and a new starting point from which all nations will be able to direct their further exertions.”

This speaks greatly of the attitude needed today, and hopefully the attitude a new Great Exhibition can birth. We need to once again strive to break the frontiers of industry and technology, and to understand its importance to long-term civilisational thinking. Only by innovating can the United Kingdom hope to regather its former power.

Ultimately, this is about developing and refining a culture, just as Prince Albert pursued his vision of Victorian industrialism. What is our vision? Who do we seek to be? Much has been said of the United Kingdom’s lack of identity and purpose, but the answers need to stretch further than basic policy reforms. Groundbreaking change is needed to sway the current direction of the country, to alter its path. Provided it is planned correctly, a new Great Exhibition can do just that.

