
Islam as Arabism

‘Here the initiative individual […] regains his place as a formative force in history. […] If he is a prophet like Mohammed, wise in the means of inspiring men, his words may raise a poor and disadvantaged people to unpremeditated ambitions and surprising power.’

– Will and Ariel Durant, The Lessons of History

That Islam is a sociopolitical ideology as well as a religion hardly requires demonstration. It included a political component from its very inception, since tradition has it that Muhammad was the Muslims’ worldly ruler as well as their spiritual leader. The caliphs succeeded him (‘caliph’ means ‘successor’) in that capacity: they, too, were political and religious rulers in one. If the caliphate had not been abolished in 1924, non-Muslims would likely be much less blind to Islam’s political side.

This political side is too rarely acknowledged. However, even less attention has been paid to the ethnic aspect of Islam’s politics. Hardly any commentators seem to mention the undercurrent of Arabism present in the Mohammedan creed – yet once one has noticed it, it is impossible to ignore. Islam is not just any ideology; it is a vehicle of Arab imperialism.

Some readers may not readily see any such ethnic element, but others will likely find it obvious. In Algeria, for instance, Islam is widely taken to be a facet of ‘Arabdom,’ which is why proud Berbers tend not to be passionate Muslims. It is not just non-Arabs who believe that Islam and Arabdom are intimately linked. Consider that Tunisia’s ‘Arab Muslim’ character is mentioned in the preamble to the country’s constitution. Likewise, Morocco’s constitution states that Moroccan national identity is ‘forged by the convergence of its Arab-Islamic, Amazigh and Saharan-Hassanic components.’ Such language underscores the essential connection between Arab identity and Islam. What follows is a brief overview of some aspects of this connection.

The Traditions

The traditional accounts of Islam’s early history, including the hadith, contain plenty of naked Arabism. In this context, we can largely set aside the question of whether these accounts are reliable. For the most part, it scarcely matters whether the traditions are true or fabricated; it only matters that they are believed.

Perhaps the most infamous racist hadith is the one in which Muhammad describes black people as seeming to have raisins for heads. The saying in question is Number 256 in Book 89 of volume nine of Bukhari’s anthology: ‘You should listen to and obey[…] your ruler even if he was an Ethiopian (black) slave whose head looks like a raisin.’

Some Muslims try to divert attention from the questionable physical description and onto the statement’s supposed egalitarianism. They claim this passage expresses a progressive sentiment that people of any race could be worthy rulers. However, one should bear in mind the context: the next two hadiths likewise extol obedience to rulers. For example, Number 257 has Muhammad say: ‘A Muslim has to listen to and obey (the order of his ruler) whether he likes it or not, as long as his orders involve not one in disobedience (to Allah).’ The common theme in these stories is the requirement to submit to those in power. Against this backdrop, the hypothetical Ethiopian ruler is clearly mentioned in order to emphasise how absolute this duty is: it applies even if the ruler belongs to an inferior ethnic group. Similar examples of racism in the hadith and other Islamic sources are listed by Isaac Marshall.

As Robert Spencer shows in Did Muhammad Exist?, early Arab politics under the Abbasid dynasty was marked by references to Muhammad’s example to promote various causes, notably including ‘the rapid expansion of the Arab Empire.’ This sometimes included strong ethnic undertones. As Spencer notes, Muhammad was reported to have said that Muslims would conquer ‘the palaces of the pale men in the lands of the Byzantines’ and to have announced: ‘the Greeks will stand before the brown men (the Arabs) in troops in white garments and with shorn heads, being forced to do all that they are ordered.’ Why mention the Byzantines’ lighter complexion? Presumably, this served to underscore their ethnic distinctness (non-Arabness) and, by implication, their inferiority. As for the second quote, it clearly portrays Muhammad as having wished for the Arabs specifically, rather than Muslims of any ethnicity, to dominate the Greeks.

According to tradition, having garnered only a handful of followers in Mecca, Muhammad achieved his first major success in Yathrib (later Medina). This milestone was made possible by an ethnic conflict between Arabs and Jews in which the former deemed him useful for their cause. ‘The Arabs of Yathrib,’ explains Ali Sina in Understanding Muhammad and Muslims, ‘accepted Muhammad readily, not because of the profundity of his teachings, […] but because of their rivalry with the Jews.’ It was in Medina that Islam’s trademark Jew-hatred truly began to burgeon.

Over a millennium later, the resources of Muslims worldwide are still being drained in service to an Arab struggle against Jews in Israel – and Islam is the tool through which those resources are extracted. Of course, not everyone in the Muslim world is content with this arrangement. In Iran, which is now a mostly non-Muslim country, protestors chant: ‘Forget about Palestine, forget about Gaza, think about us.’ Likewise, the Moroccan Amazigh Democrat Party (a Berber organisation now renamed ‘Moroccan Ecologist Party – Greens’) stands for both secularism and ‘normalizing relations with Israel.’ The more a group is free from Islam, it seems, the less need it feels to sacrifice its own interests in order to help Middle Eastern Arabs re-conquer Israel.

The History

Islam’s history shows it to be, from its beginnings, fundamentally intertwined with Arab identity. In Arabs: A 3,000-Year History of Peoples, Tribes and Empires, Tim Mackintosh-Smith provides such manifold examples of this pattern that it would be plagiaristic to reproduce them all here. Drawing on Muslim historian al-Baladhuri’s description of the Arab conquests of the seventh century AD, he writes that the Taghlib, despite being Christian, were made exempt from the ‘poll-tax’ which unbelievers must pay under Islamic law. The reason was that the Taghlib were Arabs, and could thus make the case that they were different from the ‘conquered barbarians’ to whom the tax was normally applied. ‘Islam in its expansive period had as much to do with economics and ethnicity as with ethics.’ During the later centuries of Islam, other groups – most notably, the Ottomans – appeared to take the lead in the Muslim world. Nevertheless, ‘the centuries of “invisibility” in fact conceal an Arab expansion almost as remarkable for its extent as the first eruption of Islam,’ though this second phase occurred ‘through the Arab world’s back door, into the Indian Ocean.’

For Mackintosh-Smith, Islam should be viewed ‘as a unifying national ideology, and Muhammad as an Arab national hero.’ It may be worthwhile to mention, in this context, the theory that Muhammad never existed and was instead a character popularised decades after his supposed death. Robert Spencer summarises the case for this position in Did Muhammad Exist?. Despite dating Islam’s emergence to the early eighth century, Spencer notes that two inscriptions from Arab-ruled lands during the second half of the seventh century refer to some watershed moment which had occurred in 622. As he states, this is the traditional date of the Hijra, when Muhammad supposedly fled from Mecca to Medina. Interestingly, one of the inscriptions was made 42 years (on the lunar calendar) after 622, yet it purports to have been written in ‘the year 42 following the Arabs.’ Why the odd phrasing? Spencer argues that, in 622, the Byzantines inflicted a heavy defeat on the Persian Empire, sending it into decline. The Arabs were quick to take advantage of the resultant ‘power vacuum’ and soon conquered Persia. Consequently, he speculates: ‘What became the date of the Hijra may have originally marked the beginning of the Arabians as a political force to be reckoned with on the global scene.’ If this idea is correct – and it certainly makes sense of the strange phrase ‘the year 42 following the Arabs’ – then the very year with which the Islamic calendar begins, 622, may originally have been commemorated in celebration of Arab military expansion. This would also make it all the more ironic for anyone conquered by Arabs, and especially Iranians, to be a Muslim.

Still, the conquest of non-Arabs by Arabs is sanctified in Islam even if one utterly rejects the thesis Spencer propounds. Since the expansion of early Islam – and much of later Islam – was inseparable from Arab expansion into surrounding territories, being Muslim practically forces one to look back with approval on the conquests of non-Arabs by Arabs. (The spread of other world religions did not involve a comparable dependence on armed subjugation.) As Raymond Ibrahim has written, ‘the historic Islamic conquests are never referred to as “conquests” in Arabic and other Muslim languages; rather, they are futuhat—literally, “openings” for the light of Islam to enter.’

Throughout Islam’s history, jihadism and Islamic expansionism have gone hand in hand with Arab supremacism. This has perhaps been most apparent in Sudan and Mauritania, where Islamism has long been inextricably linked to racism and genocide against, and enslavement of, non-Arab blacks. Serge Trifkovic makes this point powerfully in The Sword of the Prophet, highlighting the irony of black Muslims in America who consider Islam a natural part of African heritage.

In addition to the racism already found in Islamic scriptures, the slave trade which has flourished under Islamic rule and been legitimised in conjunction with jihad ideology has also spawned racialist justifications. Trifkovic comments: ‘The Muslims’ view on their two main sources of slaves, sub-Saharan Africa and Slavic Eastern Europe, developed into the tradition epitomized by a tenth-century Islamic writer:

“The people of Iraq […] are the ones who are done to a turn in the womb. They do not come out with something between blond, blanched and leprous coloring, such as the infants dropped from the wombs of the women of the Slavs and others of similar light complexion; nor are they overdone in the womb until they are […] black, murky, malodorous, stinking, and crinkly-haired, with […] deficient minds, […] such as the Ethiopians and other blacks[.]”’

Islam’s Arab Character

Despite claims of divine revelation and the notion that the Qur’an existed from the beginning of time, Islamic doctrine is wholly permeated by mediaeval Arab culture and the paganism of pre-Islamic Arabia. Thus, Samuel Zwemer notes that the belief in jinn reflects a ‘substratum of paganism.’ Nor is this belief peripheral to Islam; numerous verses in the Qur’an discuss these supposed spirits and Muhammad is claimed, writes Zwemer, to have been ‘sent to convert the Jinn to Islam as well as the Arabs.’ It is also a well-known fact that the pilgrimage to Mecca goes back to pre-Islamic paganism.

The creed’s ethical teachings, furthermore, are deeply shaped by its origins among mediaeval Arabs. In many ways, it represents an alien culture imposed on other peoples by Arab conquest. One might object that Europe is Christian and Christianity is likewise an alien influence on it, having come from the Middle East. Yet Christianity’s Middle Eastern origins have been greatly exaggerated. It is a fundamentally European religion, having arisen in the Roman Empire and been shaped by Greek philosophy from its fount. Even pre-Christian Judaism had been heavily shaped by Hellenic thought, as Martin Hengel showed in his classic Judaism and Hellenism. In any event, Christianity is far less intrusive than Islam, which seems intent on micro-managing every aspect of the believer’s life.

An obvious example of how Islam imposes alien values on the societies it conquers is the role it mandates for women. Apostate Prophet, a German-American ex-Muslim of Turkish descent, avers that ‘the Turks […] treated their women much, much better before they converted to Islam.’ Current scholarship appears to bear this notion out. One author concludes that, in pre-Islamic times, ‘Turkish women ha[d] a much more free life than women of other communities and that women within Turkish communities [during that period] can be seen as sexless and they can take part in men’s positions.’ This is obviously far different from women’s role in Islamic societies. The difference was famously demonstrated by Turkey’s Deputy Prime Minister Bülent Arınç, founding member of the ruling Islamist group, the Justice and Development Party (AKP). On the occasion of the Islamic holiday Eid al-Fitr, Arınç urged Turks to pay greater heed to the Qur’an and stated that women should ‘not laugh in public.’ If conditions in Turkey are not as bad as in other Islamic countries, where practices like female genital mutilation are common, that is in large part thanks to the secularising revolution of Kemalism.

However, to say that Islam’s ethics fully reflect the norms of pre-Islamic Arabia would be unfair to the Arabs of the time. For instance, Ali Sina argues that, ‘prior to Islam, women in Arabia were more respected and had more rights than at any time since’ (Understanding Muhammad and Muslims). Even within the context of that undeveloped region, it seems that Islamisation represented a step back.

Islam’s Arab character has serious practical consequences which work to Arabs’ relative advantage and other groups’ relative disadvantage – although, naturally, adherence to Islam represents a net disadvantage for all groups. As Hugh Fitzgerald observes, Islam makes people ‘pray five times a day in the direction of Arabia (Mecca), ideally take Arab names, read the Qur’an in Arabic, and sometimes even construct a false Arab ancestry (as the “Sayeeds” of Pakistan).’ The requirement to fast throughout the day during Ramadan appears tailored to the Arabian Peninsula and is ill-suited to life in certain other regions. Moreover, Islam proves highly effective at funneling money from the whole Muslim world into Arabia. The required pilgrimage to Mecca earns Saudi Arabia ten to fifteen billion US dollars per annum; added to this are another four to five billion gained through ‘the umra, a non-obligatory pilgrimage to Mecca.’ ‘Pilgrimage income,’ adds the same source, ‘also accounts for the second largest share of [Saudi] government revenue after hydrocarbon sales.’

Will the Awakening Come?

‘Although Islam presents itself as a universal religion,’ writes Robert Spencer, ‘it has a decidedly Arabic character’ which has consistently aided ‘Arabic supremacists’ in Muslim areas. As stated, Islam is detrimental to all people, but it seems especially absurd that any non-Arab would be a Muslim. Hopefully, the other nations ensnared by this ideology will find the backbone to break free of it sooner rather than later.

Some such stirrings, though faint, can already be seen. As of this writing, Apostate Prophet’s video Islam is for Arabs has garnered nearly 200,000 views in five years. We have noted the distaste for Islam among many Algerian Berbers, and a similar pattern has been recorded in Morocco: ‘for some Berbers, conversion [to Christianity] is a return to their own roots.’ Should this trend continue, it could, in theory, become quite significant. As of 2000, Arabs constituted only 44% of Morocco’s population, just under the combined share of Arabised Berbers (24%) and other Berbers (21%).

Iran is an even more promising case. As mentioned, it appears that most of the country’s population is no longer Muslim. National pride seems to have played a part in this spectacular sea change, as evidenced by the popularity of Zoroastrianism among some Iranians. Perhaps Iran, once liberated, could act as a model for other non-Arab Muslim countries with a sense of dignity.

The national issue may not prove potent enough to de-Islamise societies completely. However, that may not be required. A major tipping point would be reached once criticism of Islam could no longer be stifled. Islam’s success depends on using fear to keep people from opposing it. Thus, in environments where adherence to it is not socially enforced – for instance, in Western societies – deconversion rates tend to be high. Wherever the compulsion to obey Islam is defeated, the main battle will have been won.



Right Place, Right Time, Wrong Movement

In an interview for the H. L. Mencken Club, the writer Derek Turner described political correctness as a ‘clown with a knife’: it combines petty nanny-state tendencies with a totalitarian aim, and it gains considerable headway precisely because no one takes it seriously enough. In a previous article, the present author linked this notion to the cover-up of grooming gangs across Britain – one of its most obvious embodiments, given how many lives were ruined because the fear of violating that ‘principle’ was too strong for anyone to take action.

The other notion linked to it was Islamic terrorism, serious discussion of which (much less an orderly response to it) is hindered by the fear of violating political correctness – allowing both terrorism and Islamic extremism to gain much headway in turn. Instead, the establishment falls back on two familiar responses. At best, it treats any such event with copious amounts of sentimentality, promising that such acts won’t divide the country and that we are all united in whatever communitarian spirit is convenient to the storyline.

At worst, such attacks aren’t discussed at all, memory-holed so as not to upset the current state of play. Neither attitude does much good – the former especially, since sentimentality can be, as Theodore Dalrymple noted, the ‘forerunner and accomplice of brutality whenever the policies suggested by it have been put into place’. The various ineffective crackdowns on civil liberties following these attacks attest to that.

However, while there is no serious political challenge to radical Islam today, there was for a time an alternative movement which, despite never becoming completely mainstream, was serious enough to leave its mark.

That was Britain’s Counter-Jihad movement, a political force that lived up to its name for those who remember it. A loud and noisy affair, it protested up and down the country against everything connected with Islamism, from terrorism to grooming gangs. It combined working-class energy with militant secularism, its supposed influences ranging from Winston Churchill to Christopher Hitchens. It was reactionary in many of its viewpoints yet appealed to left-wing cultural hegemony: it was as likely to attack Islam for undermining women’s and LGBT rights as for its demographic ramifications through mass immigration.

While hard to imagine now, it was the real deal, with many of its faces and names becoming countercultural icons among the British right. Tommy Robinson, Anne Marie Waters, Paul Weston, Pat Condell and Jonaya English, among many others, fitted this description with varying degrees of success. It had more respectable intellectual faces in Douglas Murray and Maajid Nawaz, and it even touched mainstream politics on occasion, most notably when Nigel Farage and UKIP (especially under the leadership of Gerard Batten) flirted with it from time to time.

A constant minor mainstay of British politics in the early part of the 21st century, the movement reached its zenith in 2017. The succession of Islamic terrorist attacks that year – Westminster Bridge, Manchester Arena and London’s Borough Market, as well as the failed Parsons Green Tube bombing – left the movement (cynically or otherwise) feeling horribly vindicated in many of its concerns. Angst among the public was high and palpable, to the point that even the BBC pondered whether 2017 had been ‘the worst year for UK terrorism’. Douglas Murray released his magnum opus, The Strange Death of Europe, which became an instant best-seller and critical darling while offering a blunt and honest examination of many issues, including radical Islam in Britain and across much of the continent – something that would previously have been dismissed as mere reactionary commentary. And at the end of the year, the anti-Islam populist party For Britain began in earnest, with its founder and leader, Anne Marie Waters, promising to use it as a voice for those in Britain who ‘consider Islam to be of existential significance’.

In short, the energy was there, the timing was (unfortunately) right and the platforms were finally available to take such a concern to the mainstream. To paraphrase the Radiohead song, everything (seemed to be) in its right place.

Despite this, things would ironically never get better for the movement; its decline and fall came slowly but surely afterwards. This was most symbolically displayed in mid-2022 when For Britain folded, with Waters citing both far-left harassment and a lack of financial support amid the ongoing cost-of-living crisis in her decision to discontinue. This came shortly after its candidate Frankie Rufolo quite literally jumped for joy after coming last in the Tiverton and Honiton by-election, the last the party would contest. The movement is now a textbook case of how quickly fortunes can change.

What was once a sizeable movement within British politics is now as much a relic of 2017 as the last hurrah of BGMedia, the jokes about Tom Cruise’s abysmal iteration of The Mummy (half-finished trailer and film alike) and the various viral Arsenal Fan TV videos that have aged poorly for… obvious reasons. Its grassroots are now alienated and isolated once more, presumably resorting to sucking a lemon. Why its complete demise happened is debatable, but some factors are more obvious than others.

The most common explanation is the one the right in general has blamed for all its woes in recent years – what Richard Spencer dubbed ‘The Great Shuttening’. The idea is that reactionary forces would eventually become powerful enough in the political arena that the establishment would do all it could to restrict their future reach. It played out following the populist victories of Brexit and Trump, largely (and ironically) because of the convenient seppuku the alt-right committed with the Unite the Right rally in Charlottesville in 2017, which handed the establishment the pretext – the event itself and the death left in its wake – for a wave of censorship, on social media especially.

Needless to say, it wasn’t simply Spencer and his ilk who were affected, confined to Bitchute or obscure websites in sharp contrast to their early-2010s heyday. Counter-jihad was another casualty, with many of its organisations and figureheads banned from social media and online payment services, limiting the growth they might otherwise have enjoyed in 2017 and beyond. In turn, their only remaining access to the mainstream came through the various hit pieces written about them, which unsurprisingly did not endear many people to these characters and groups.

But if they couldn’t gain grassroots support, on social media or off it, another obvious reason for the collapse suggests itself: the movement was not an organically developed one, which made its downfall somewhat inevitable. Much of its cheerleading and backing came from the conservative elite (or Conservatism Inc., for pejorative purposes) on both sides of the Atlantic, rather than from the public at large. This was most apparent with Tommy Robinson, the movement’s unofficial figurehead for the longest time.

On the British end, it was a matter of promoting Robinson in differing ways. At best, they tactfully agreed with him while disagreeing with his broader behaviour and antics; at worst, they promoted him as someone who wasn’t as bad as much of the press claimed. He was given friendly interviews in the Spectator and puff pieces in the Times, while shows such as This Morning and The Pledge allowed right-wing commentators to claim that he was highlighting supposedly legitimate grievances of the masses.

American conservative support came through similar promotion, mostly during his various court cases in 2018 and 2019, when many major networks framed him as the victim of a kangaroo court and a political prisoner (all while failing to understand basic British contempt-of-court law, wrapped up as it was in ‘muh freedom’ rhetoric). Most of the important American support, however, was financial. It often came directly from neoconservative think tanks – chiefly the Middle East Forum, which gave Robinson substantial financial support, as did similar organisations. To what end is unknown, but given the war-hawk views of some of those involved (including MEF head Daniel Pipes), it is reasonable to assume something sinister lay behind that kind of help.

This in turn compounded another central reason for the movement’s collapse: its genuine lack of authenticity as a whole. The movement’s pandering to secularism and left-wing thought, described earlier, was already acceptable within mainstream political discourse, and the sharp contrast between the inherently left-wing Robinson and Waters and their ideologically reactionary base made the movement unstable from the get-go. Much of it was a liberal movement designed to attack Islam for undermining the West as defined by the cultural revolution of the 1960s, not a reactionary one attacking that revolution as a whole – much to the chagrin of its supporters.

Counter-jihad was therefore simply a more radical version of the acceptable establishment attack on Islamism. As Paul Gottfried wrote in a recent Chronicles column, ‘Those who loudly protest that Muslims oppose feminism and discriminate against homosexuals are by no means conservative. They are simply more consistent in their progressive views than those on the woke left who treat Islamic patriarchy indulgently’. It is for this reason that the mainstream right was far kinder to counter-jihad and Robinson in the early 2010s than to actual right-wingers like Nigel Farage and the Bow Group under its current leadership.

It is no surprise, then, that a movement with such inauthentic leadership and contradictory ideology would collapse once those problems became too big to ignore, with Robinson himself the main fall guy for its fate. With questions about his background becoming too numerous, his constant begging for donations increasingly suspect, and people finally fed up with his pantomime of self-inflicted arrests and scandals, his time in the spotlight came to a swift end. His former supporters abandoned him in droves, and his performance in the 2019 European Elections was equally dismal: he came in below the often-mocked Change UK in the North West region, to audible laughter. Following his surprise return to X, formerly Twitter, and his antics during Remembrance Day, scepticism regarding his motives, especially amongst people who would otherwise support him, has only increased.

This article is not designed to attack British Counter-Jihad wholesale; it is meant to highlight the movement’s successes and failings for the sake of better attempts in the future. As others have discussed elsewhere when noting the failings of the 2010s right, good leadership, a strong mass movement and sound financial backing are key.

Those that get this right have been successful in recent years. The Brexit campaign did so by having moderate and popular characters like Nigel Farage, eccentric Tories and prominent left-wingers like George Galloway as its faces, while drawing funding from millionaires like Arron Banks and Tim Martin, who could keep their noses mostly clean. The MAGA movement stateside is a similar venture, with Donald Trump, Ron DeSantis and Tucker Carlson as its faces and Peter Thiel as its (mostly) clean billionaire financier.

The British Counter-Jihad movement had none of that. Its leadership were often questionable rabble-rousers who, while enjoying some sympathy among the working class, terrified much of the Middle England vote whose support was needed to get anywhere. Its grassroots were often of a similar ilk, ideologically out of step with the leadership and lacking the necessary restraint, which allowed for easy demonisation by a sneering, classist establishment. The funny money from neocon donors made it a movement whose ulterior motives were troublesome, to say the least.

Hence counter-jihad collapsed, and its main figurehead’s only use now is living rent-free in the minds of the progressive left, cynical politicians and even cringeworthy pop stars, acting as a necessary bogeyman for the regime to keep its base ever wary of such politics reappearing in the future.

This is not, however, a good thing for Britain overall, as the country needs some kind of movement to act as a buffer against such forces in the future. As Robinson admitted in his book Enemy of the State, the problems he ‘highlighted… haven’t gone away. They aren’t going away.’ That was written back in 2015 – and needless to say, the situation has become much worse since then. From violent attacks – the killing of Sir David Amess, the failed bombing of Liverpool Women’s Hospital, the attempted assassination of Sir Salman Rushdie – to intimidation campaigns against Batley schoolteachers, autistic schoolchildren who accidentally scuffed a Quran, and the film The Lady of Heaven, such problems, far from going away, have come roaring back with a vengeance.

In turn, just as the grooming gangs issue cannot be tackled by occasional government rhetoric, tweets of support from the likes of actress Samantha Morton and GB News specials alone, radical Islam is not going to be dealt with single-handedly by rabble-rousing organisations and suspicious overseas money. Moves like Michael Gove firing government workers involved with the Lady of Heaven protests are welcome, but they don’t go anywhere near far enough.

Without a grassroots organisation or a more ‘respectable’ group acting as that buffer, the only alternative is to let the liberal elite control the narrative. At best, they’ll continue downplaying the threat at every turn, joking about ‘Muslamic Ray Guns’ and turning far-left activists who disrupt peaceful protests against Islamist terror attacks into icons.

As for the political establishment, it remains committed to what Douglas Murray describes as ‘Rowleyism’: playing up a false equivalence between Islamism and the far right in terms of the threat they pose. Regime propagandists accordingly continue to portray the far right as the villains in every popular show, from No Offence to Trigger Point. Meanwhile, the Prevent programme is given licence to focus disproportionately on the far right rather than on Islamism, despite the findings of the Shawcross Review.

In conclusion, British Counter-Jihad was a case of right place, right time but wrong movement. That does not mean its concerns should be relegated or confined to certain corners, given what an existential threat radical Islam poses; as Arnold Toynbee noted, any society that fails to solve the crises of its age quickly finds itself in peril. British Counter-Jihad was the wrong movement for that task. It’s time to build something new, and hopefully something better will take its place.



On Setting Yourself on Fire

A man sets himself on fire on Sunday afternoon for the Palestinian cause, and by Monday morning his would-be allies are calling him a privileged white male. At the time of writing, his act of self-immolation has already dropped off the trending tab of Twitter – quickly replaced by the Willy Wonka Experience debacle in Glasgow and Kate Middleton-themed conspiracy theories.

Upsettingly, it is not uncommon for soldiers to take their own lives during and after conflict. This suicide, however, is a uniquely tragic one; Aaron Bushnell was a serving member of the US Air Force working as a software engineer, radicalised by communists and libtards into hating not only his country and his military, but himself. His Reddit history shows his descent into anti-white hatred, describing Caucasians as ‘White-Brained Colonisers’.

White guilt is nothing new; we see it pouring out of our universities and mainstream media all the time. But the fact that this man was so disturbed and affected by it as to make the conscious decision to douse himself in petrol and set his combat fatigues ablaze reminds us of the genuine and real threat it poses to us. Today it is an act of suicide by self-immolation; when will it be an act of suicide by bombing?

I have seen some posters from the right talking about the ‘Mishima-esque’ nature of his self-immolation, but this could not be further from the truth. Mishima knew that his cause was a hopeless one. He knew that his coup would fail. He did not enter Camp Ichigaya expecting to overthrow the Japanese government. His suicide was a methodically planned quasi-artistic act of Seppuku so that he could achieve an ‘honourable death’. Aaron Bushnell, on the other hand, decided to set himself on fire because he sincerely believed it would make a difference. Going off his many posts on Reddit, it would also be fair to assume that this act was done in some way to endear himself to his liberal counterparts and ‘atone’ for his many sins (being white). 

Of course, his liberal counterparts did not all see it this way. As videos of his death flooded the timeline, factions quickly emerged, with radicals trying to decide whether phrases like ‘rest in power’ were appropriate. That slogan is, of course, reserved only for black victims of white violence.

Some went even further, and began to criticise people in general for feeling sorry for the chap. In their view, his death was just one less ‘white-brained coloniser’ to worry about. It appears that setting yourself on fire, screaming in agony as your skin pulls away, feeling your own fat render off, and writhing and dying in complete torture was the absolute bare minimum he could do. 

There are, of course, those who have decided to treat him as a martyr and lionise him. It is hard to discern which side is worse. At least those who ‘call him out’ are making a clear case to left-leaning white boys that nothing they do will ever be enough. By contrast, people who cheer this man on and make him into some kind of hero are only helping to stoke the next bonfire and are implicitly normalising the idea of white male suicide as a form of redemption.

Pick up your phone and scroll through your friends’ Instagram stories and you will eventually find at least one person posting about the Israel-Palestine conflict. It might be some banal infographic, or a photo carefully selected to tug at your heartstrings; this kind of ‘slacktivism’ has become extremely common in the last few years.

Dig deeper into the accounts that produce these kinds of infographics, however, and you will find post after post discussing the ‘problems’ of whiteness, maleness, heterosexuality and so on. These accounts, often hidden from the view of the right wing by the various algorithms that curate what we see, get incredible rates of interaction.

The mindset of Westerners who champion these kinds of statements is completely suicidal. They are actively seeking out allies amongst people who would see them dead in a ditch given the chance. Half of them would cheer for you as you put the barrel of a gun to your forehead, and the other half would still hate you after your corpse was cold.

There are many on the right who believe that if we just ‘have conversations’ with the ‘sensible left wing’ we will be able to achieve a compromise that ‘works for everyone’. This is complete folly. The centre left will always make gradual concessions to the extreme left – that is where it sources its energy and (eventually) its ideas. Pandering to these people and making compromises is, in essence, making deals with people who hate you. If you fall into one of the previously discussed categories, you are the enemy of goodness and peace. You are eternally guilty – so guilty, in fact, that literally burning yourself alive won’t save you.



On the Alabama IVF Ruling

On 19th February 2024, the Alabama Supreme Court ruled that embryos created through IVF are “children”, and should be legally recognised as such. The issue was brought by three couples suing their IVF providers under the state’s existing Wrongful Death of a Minor statute over the destruction of their children while cryogenically stored. The statute explicitly covered foetuses (presumably to allow compensation to be sought by women who had suffered preventable miscarriages or stillbirths), but there was some ambiguity over whether IVF embryos were covered prior to the ruling that it applies to “all unborn children, regardless of their location”. It has since been revealed that the person responsible for the destruction was a patient at the clinic in question, so while mainstream outlets have stated that the damage was ‘accidental’, I find this rather implausible given the security in place for accessing cryogenic freezers. It is the author’s own suspicion that the person responsible was in fact an activist who foresaw the consequences of a successful Wrongful Death of a Minor lawsuit against the clinic for the desecration of unborn children outside the womb.

The ruling does not explicitly ban or even restrict IVF treatments; it merely states that the products thereof must be legally recognised as human beings. However, this view is incompatible with multiple stages of the IVF process, which is what makes this step in the right direction a potentially significant victory. For those who may be (blissfully) unaware, the IVF process goes something like this. A woman is hormonally stimulated to release multiple eggs in a cycle rather than the usual one or two. These are then extracted and fertilised with sperm in a lab. There is nothing explicitly contrary to the view that life begins at conception in these first two steps. However, as Elisabeth Smith (Director of State Policy at the Centre for Reproductive Rights) explains, not all of the embryos created can be used. Some are discarded due to genetic abnormalities, and even of those that remain, usually no more than three are implanted into the womb at any given time; the rest can be cryogenically stored for up to a decade and implanted at a later date or into someone else.

With this knowledge, three major problems for the IVF industry in Alabama become apparent. The first is that clinics will not be able to discard the embryos they deem unsuitable for implantation due to genetic abnormalities. This would massively increase the cost to IVF patients, who would have to store all the children created for an unspecified length of time. That assumes storing children in freezers is deemed acceptable at all, which is not a given, since any reasonable person would say that freezing children at later stages of development would be incredibly abusive. The second problem is that even if it remains permissible to create children outside the womb and store them for future implantation (perhaps by only permitting storage for a week or less), it would only be possible to create the number of children the woman is willing to have implanted. This would further increase costs: if the first attempt at implantation fails, the patient would have to go back to the drawing board and have more eggs extracted, rather than trying again from a larger supply already in the freezer. The third problem is that, particularly if the number of stored children increases dramatically, liability insurance would have to cover any loss, destruction, or damage to said children, which would make it a totally unviable business for all but the wealthiest.

The connection between this ruling and the abortion debate has been made explicit by both sides. Given that it already has a total ban on abortion, Alabama seems a likely state to take further steps to protect the unborn, which may spread to other Republican states if they are deemed successful. The states that currently also impose a total ban on abortion, either at any time after conception or after 6 weeks’ gestation (at which point it is only possible to have known of the pregnancy for 2 weeks), are Arkansas, Kentucky, Louisiana, Mississippi, Missouri, Oklahoma, South Dakota, Tennessee, Texas, North Carolina, Arizona, and Utah. Other states allow exceptions only for rape and incest, with some requiring that these be reported to law enforcement.

However, despite the fact that the ruling was made by Republicans appointed to their posts during Donald Trump’s presidency, he has publicly criticised the decision, saying that “we should be making it easier for people to have strong families, not harder”. Nikki Haley initially appeared to support the ruling, but later backtracked on this commitment. In a surprisingly intellectually honest move, The Guardian made an explicit link between the medical hysteria on this topic and the prevalence of female doctors among IVF patients. Glenza (2024) wrote:

“Fertility is of special concern to female physicians. Residents typically finish training at 31.6 years of age, which are prime reproductive years. Female physicians suffer infertility at twice the rate of the general population, because demanding careers push many to delay starting a family.”

While dry and factual, this statement consciously admits that ‘infertility’ is (or at least can be) caused by lifestyle choices and priorities (i.e. prioritising one’s career over using the ideal reproductive years of the 20s and early 30s to marry and have children), rather than by genes or bad luck, and is therefore largely preventable by women making different choices.

I sincerely hope that, despite criticism of the ruling by (disproportionately female) doctors with a vested interest, the rule of law stands firm and that an honest interpretation of this ruling is manifested in reality. This would mean that, for the reasons stated above, it will become unviable to run a profitable IVF business, and that while wealthy couples may travel out of state, a majority of those currently seeking IVF will instead adopt children, and/or face the consequences of their life decisions. Furthermore, I hope that young women on the fence about accepting a likely future proposal, pulling the goalie, or aborting a current pregnancy to focus on a career will consider the long-term consequences of waiting too long to have children.



Joel Coen’s The Tragedy of Macbeth: An Examination and Review

A new film adaptation of Shakespeare’s Scottish tragedy, Joel Coen’s 2021 The Tragedy of Macbeth is the director’s first production without his brother Ethan’s involvement. Released in select theaters on December 25, 2021, and then on Apple TV on January 14, 2022, the production has received positive critical reviews as well as awards for screen adaptation and cinematography, with many others still pending.

As with any movie review, I encourage readers who plan to see the film to do so before reading my take. While spoilers probably aren’t an issue here, I would not want to unduly influence one’s experience of Coen’s take on the play. Overall, though much of the text is omitted, some scenes are rearranged, and some roles are reduced and others expanded, I found the adaptation to be a generally faithful one that only improved with subsequent views. Of course, the substance of the play is in the performances of Denzel Washington and Frances McDormand, but their presentation of Macbeth and Lady Macbeth is enhanced by both the production and supporting performances.

Production: “where nothing, | But who knows nothing, is once seen to smile” —IV.3

The Tragedy of Macbeth’s best element is its focus on the psychology of the main characters, explored below. This focus succeeds in no small part due to its minimalist aesthetic. Filmed in black and white, the play utilizes light and shadow to downplay the external historical conflicts and emphasize the characters’ inner ones.

Though primarily shown by the performances, the psychological value conflicts of the characters are concretized by the adaptation’s intended aesthetic. In a 2020 Indiewire interview, composer and long-time-Coen collaborator Carter Burwell said that Joel Coen filmed The Tragedy of Macbeth on sound stages, rather than on location, to focus more on the abstract elements of the play. “It’s more like a psychological reality,” said Burwell. “That said, it doesn’t seem stage-like either. Joel has compared it to German Expressionist film. You’re in a psychological world, and it’s pretty clear right from the beginning the way he’s shot it.”

This is made clear from the first shots’ disorienting the sense of up and down through the use of clouds and fog, which continue as a key part of the staging throughout the adaptation. Furthermore, the bareness of Inverness Castle channels the focus to the key characters’ faces, while the use of odd camera angles, unreal shadows, and distorted distances reinforce how unnatural is the play’s central tragic action, if not to the downplayed world of Scotland, then certainly to the titular couple. Even when the scene leaves Inverness to show Ross and MacDuff discussing events near a ruined building at a crossroads (Act II.4), there is a sense that, besides the Old Man in the scene, Scotland is barren and empty.

The later shift to England, where Malcolm, MacDuff, and Ross plan to retake their homeland from now King Macbeth, further emphasizes this by being shot in an enclosed but bright and fertile wood. Although many of the historical elements of the scene are cut, including the contrast between Macbeth and Edward the Confessor and the mutual testing of mettle between Malcolm and MacDuff, the contrast in setting conveys the contrast between a country with a mad Macbeth at its head and the one that presumably would be under Malcolm. The effect was calming in a way I did not expect—an experience prepared by the consistency of the previous acts’ barren aesthetic.

Yet, even in the forested England, the narrow path wherein the scene takes place foreshadows the final scenes’ being shot in a narrow walkway between the parapets of Dunsinane, which gives the sense that, whether because of fate or choice rooted in character, the end of Macbeth’s tragic deed is inevitable. The explicit geographical distance between England and Scotland is obscured as the same wood becomes Birnam, and as, in the final scenes, the stone pillars of Dunsinane open into a background of forest. This, as well as the spectacular scene where the windows of the castle are blown inward by a storm of leaves, conveys the fact that Macbeth cannot remain isolated against the tragic justice brought by Malcolm and MacDuff forever, and Washington’s performance, which I’ll explore presently, consistently shows that the usurper has known it all along.

This is a brilliant, if subtle, triumph of Coen’s adaptation: it presents Duncan’s murder and the subsequent fallout as a result less of deterministic fate and prophecy and more of Macbeth’s own actions and thoughts in response to it—which, themselves, become more determined (“predestined” because “wilfull”) as Macbeth further convinces himself that “Things bad begun make strong themselves by ill” (III.2).

Performances:  “To find the mind’s construction in the face” —I.4

Film adaptations of Shakespeare can run the risk of focusing too closely on the actors’ faces, which can make keeping up with the language a chore even for experienced readers (I’m still scarred from the “How all occasions” speech from Branagh’s 1996 Hamlet); however, this is rarely, if ever, the case here, where the actors’ and actresses’ pacing and facial expressions combine with the cinematography to carry the audience along. Yet, before I give Washington and McDormand their well-deserved praise, I would like to explore the supporting roles.

In Coen’s adaptation, King Duncan is a king at war, and Brendan Gleeson plays the role well with appropriate dourness. Unfortunately, this aspect of the interpretation was, in my opinion, one of its weakest. While the film generally aligns with the Shakespearean idea that a country under a usurper is disordered, the before-and-after of Duncan’s murder—which Coen chooses to show onscreen—is not clearly delineated enough to signal it as the tragic conflict that it is. Furthermore, though many of his lines are adulatory to Macbeth and his wife, Gleeson gives them with so somber a tone that one is left emotionally uninvested in Duncan by the time he is murdered.

Though this is consistent with the production’s overall austerity, it does not lend much to the unnaturalness of the king’s death. One feels Macbeth ought not kill him simply because he is called king (a fully right reason, in itself) rather than because of any real affection on the part of Macbeth and his wife for the man himself. However, though I have my qualms, this may have been the right choice for a production focused on the psychological elements of the plot; by downplaying the emotional connection between the Macbeths and Duncan (albeit itself profoundly psychological), Coen focuses on the effects of murder as an abstraction.

The scene after the murder and subsequent framing of the guards—the drunken porter scene—was the one I most looked forward to in the adaptation, as it is in every performance of Macbeth I see. The scene is the most apparent comic relief in the play, and it is placed in the moment where comic relief is paradoxically least appropriate and most needed (the subject of a planned future article). When I realized, between the first (ever) “Knock, knock! Who’s there?” and the second, that the drunk porter was none other than comic actor Stephen Root (Office Space, King of the Hill, Dodgeball), I knew the part was safe.

I was not disappointed. The drunken obliviousness of Root’s porter, coming from Inverness’s basement to let in MacDuff and Lennox, pontificating along the way on souls lately gone to perdition (unaware that his king has done the same just that night) before elaborating to the new guests upon the merits and pitfalls of drink, is outstanding. With the adaptation’s other removal of arguably inessential parts and lines, I’m relieved Coen kept as much of the role as he did.

One role that Coen expanded in ways I did not expect was that of Ross, played by Alex Hassell. By subsuming other minor roles into the character, Coen makes Ross into the unexpected thread that ties much of the plot together. He is still primarily a messenger, but, as with the Weird Sisters whose crow-like costuming his resembles, he becomes an ambiguous figure by the expansion, embodying his line to Lady MacDuff that “cruel are the times, when we are traitors | And do not know ourselves” (IV.2). In Hassell’s excellent performance, Ross seems to know himself quite well; it is we, the audience, who do not know him, despite his expanded screentime. By the end, Ross was one of my favorite aspects of Coen’s adaptation.

The best part of The Tragedy of Macbeth is, of course, the joint performance by Washington and McDormand of Macbeth and Lady Macbeth. The beginning of the film finds the pair later in life, with presumably few mountains left to climb. Washington plays Macbeth as a man tired and introverted, which he communicates by often pausing before reacting to dialogue, as if doing so is an afterthought. By the time McDormand comes onscreen in the first of the film’s many corridor scenes mentioned above, her reading and responding to the letter sent by Macbeth has been primed well enough for us to understand her mixed ambition yet exasperation—as if the greatest obstacle is not the actual regicide but her husband’s hesitancy.

Throughout The Tragedy of Macbeth their respective introspection and ambition reverse, with Washington eventually playing the confirmed tyrant and McDormand the woman internalized by madness. If anyone needed a reminder of Washington and McDormand’s respective abilities as actor and actress, one need only watch them portray the range of emotion and psychological depth contained in Shakespeare’s most infamous couple.

Conclusion: “With wit enough for thee”—IV.2

One way to judge a Shakespeare production is whether someone with little previous knowledge of the play and a moderate grasp of Shakespeare’s language would understand and become invested in the characters and story; I hazard one could do so with Coen’s adaptation. It does take liberties with scene placement, and the historical and religious elements are generally removed or reduced. However, although much of the psychology that Shakespeare includes in the other characters is cut, the minimalist production serves to highlight Washington and McDormand’s respective performances. The psychology of the two main characters—the backbone of the tragedy that so directly explores the nature of how thought and choice interact—is portrayed clearly and dynamically, and it is this that makes Joel Coen’s The Tragedy of Macbeth an excellent and, in my opinion, ultimately true-to-the-text adaptation of Shakespeare’s Macbeth.



The Reality of Degree Regret 

It is now graduation season, when approximately 800,000 (mostly) young people up and down the country decide for once in their lives that it is worth dressing smartly and donning a cap and gown so that they can walk across a stage at their university, have their hands clasped by a ceremonial ‘academic’, and take photos with their parents. Graduation looked a little different for me as a married woman who still lives in my university city, but the concept remains the same. Graduates are encouraged to celebrate the start of their working lives by continuing in the exact same way that they have lived for the prior 21 years: by drinking, partying, and ‘doing what you love’ rather than taking responsibility for continuing their family’s and country’s legacy.

However, something I have noticed this year which contrasts from previous years is that graduates are starting to be a lot more honest about the reality of degree regret. For now, this sentiment is largely contained in semi-sarcastic social media posts and anonymous surveys, but I consider it a victory that the cult of education is slowly but surely starting to be criticised. CNBC found that in the US (where just over 50% of working age people have a degree), a shocking 44% of job-seekers regret their degrees. Unsurprisingly, journalism, sociology, and liberal arts are the most regretted degrees (and lead to the lowest-paying jobs). A majority of jobseekers with degrees in these subjects said that if they could go back, they would study a different subject such as computer science or business. Even in the least regretted majors (computer science and engineering), only around 70% said that they would do the same degree if they could start again. Given that CNBC is hardly a network known to challenge prevailing narratives, we can assume that in reality the numbers are probably slightly higher.

A 2020 article detailed how Sixth Form and College students feel pressured to go to university, and 65% of graduates regret it. 47% said that they were not aware of the option of pursuing a degree apprenticeship, which demonstrates a staggering lack of information. Given how seriously educational institutions supposedly take their duty to prepare young people for their future, this appears to be a significant failure. Parental pressure is also a significant factor, as 20% said that they did not believe their parents would have been supportive had they chosen an alternative such as a degree apprenticeship, apprenticeship, or work. This is understandable given the fact that for our parents’ generation, a degree truly was a mark of prestige and a ticket to the middle class, but due to credential inflation this is no longer the case. Those students were wrong only about the scale of parental disapproval: a survey of parents found that as many as 40% had a negative attitude towards alternative paths.

Reading this, you may think that I am totally against the idea of a university being a place to learn gloriously useless subjects for the sake of advancing knowledge that may in some very unlikely situations become useful to mankind. I am not. Universities should be a place to conceptualise new ways the world could be, and a place where the best minds from around the world gather to genuinely push the frontiers of knowledge forward. What I object to is the idea that universities be a 3-year holiday from the real world and responsibilities towards family and community, a place to ‘find oneself’ rather than finding meaning in the outer world, a dating club, or a tool for social mobility. I do not object to taxpayer funding for research if it passes a meaningful evaluation of value for money and is not automatically covered under the cultish idea that any investment in education is inherently good.

In order to avoid the epidemic of degree regret that we are currently facing, we need to hugely reduce the number of students admitted to the most regretted courses. This is not with the aim of killing off said subjects, but of enhancing the education available to those remaining, as they will be surrounded by peers who genuinely share their interest and will be able to derive more benefit from more advanced teaching and smaller classes. Additionally, we need to stop filling the gaps in our technical workforce with immigration and increase the number of academic and vocational training placements in fields such as computer science and engineering. As for the negative attitudes I described above, these will largely be fixed as the millennial generation, filled with degree regret, comes to occupy senior positions and reduces the stigma of not being a graduate within the workplace. By being honest about the nature of tomorrow’s job market, we can stop children from growing up thinking that walking across the stage in a gown guarantees a lifetime of prosperity.

On a rare personal note, having my hands clasped in congratulations for having wasted three years of my life did not feel like an achievement. It felt like an embarrassment to have to admit that four years ago, when I filled out UCAS applications to study politics, I was taken for a fool. I have not had my pre-existing biases challenged and my understanding of the world around me transformed by my degree, as promised. As an 18-year-old going into university, I knew that my criticisms of the world around me were ‘wrong’, and I was hoping that an education/indoctrination would ‘fix’ me. Obviously, given that three years later I am writing for the Mallard, this is not the case; all I have realised from my time here is that there are others out there, and my thoughts never needed to be fixed.




In Defence of Political Conflict

It’s often said that contemporary philosophy is stuck in an intellectual rut. While our forefathers pushed the boundaries of human knowledge, modern philosophers concern themselves with impenetrable esoterica, or gesture vaguely in the direction of social justice.

Yet venture to Whitehall, and you’ll find that once-popular ideas have been refuted thoroughly by new schools of thought.

Take the Hegelian dialectic, once a staple of philosophical education. According to Hegel, the presentation of a new idea, a thesis, will generate a competing idea or counterargument, an antithesis. The thesis and the antithesis, opposed as they are, will inevitably come into conflict with one another.

However, this conflict is a productive one. With the merits of both the thesis and the antithesis considered, the reasoned philosopher will be able to produce an improved version of the thesis, a synthesis.

In very basic terms, this is the Hegelian dialectic: a means of philosophical reasoning which, if applied correctly, should result in a refinement and improvement of ideas over time. Compelling, on its face.

However, this idea is now an outmoded one. Civil servants and their allies in the media and the judiciary have, in their infinite wisdom, developed a better way.

Instead of bothering with the cumbersome work of developing a thesis or responding to it with an antithesis, why don’t we just skip to the synthesis? After all, we already know What Works through observation of Tony Blair’s sensible, moderate time in No 10 – why don’t we just do that? That way, we can avoid all of that nasty sparring and clock off early for drinks.

This is the grim reality of modern British politics. The cadre of institutional elites who came to dominate our political system at the turn of the millennium have decided that their brand of milquetoast liberalism is the be-all and end-all of political thought. The great gods of this new pantheon – Moderation, Compromise, International Standing, Rule of Law – should be consulted repeatedly until nascent ideas are sufficiently tempered.

The Hegelian dialectic has been replaced by the Sedwillian dialectic; synthesis begets synthesis begets synthesis.

In turn, politicians have become more restricted in their thinking, preferring to choose from a bureaucratically approved list of half-measures. Conservatives, with their aesthetic attachment to moderate, measured Edwardian sensibilities, are particularly susceptible to this school of thought. We no longer have the time or space for big ideas or sweeping reforms. Those who state their views with conviction are tarred as swivel-eyed extremists, regardless of the popularity of those views. Despite overwhelming public dissatisfaction with our porous borders, politicians who openly criticise legal immigration will quickly face calls to moderate. If you’re unhappy with the 1.5 million visas granted by the Home Office last year, perhaps you’d be happy with a mere million?

The result has been decades of grim decline. As our social fabric unravels and our economy stagnates, we are still told that compromise, moderation, and sound, sensible management are the solutions. This is no accident. Britain’s precipitous decline and its fraying social fabric have raised the stakes of open political conflict. Nakedly pitting ideas against each other risks exposing our society’s underlying divisions and shattering the myth of peaceful pluralism on which the Blairite consensus rests. After all, if we never have any conflict, it’s impossible for the Wrong Sorts to come out on top.

The handwringing and pearl-clutching about Brexit was, in part, a product of this conflict aversion. The political establishment was ill-equipped to deal with the bellicose and antagonistic Leave campaign, and the stubbornness of the Brexit Spartans. Eurosceptics recognised that their position was an absolute one – Britain must leave the European Union, and anything short of a full divorce would fall short of their vision.

It was not compromise that broke the Brexit gridlock, but conflict. The suspension of 21 rebel Conservative MPs was followed by December’s general election victory. From the beginning of Boris Johnson’s premiership to the end, he gave no quarter to the idea of finding a middle ground.

Those who are interested in ending our national decline must embrace a love of generative adversity. Competing views, often radical views, must be allowed to clash. We should revel in those clashes and celebrate the products as progress. Conservatives in particular must learn to use the tools of the state to advance their interests, knowing that their opponents would do the same if they took power.

There are risks, of course – open conflict often produces collateral damage – but it would be far riskier to continue on our current path of seemingly inexorable deterioration. We must not let the mediocre be the enemy of the good for any longer.



The Chinese Revolution – Good Thing, Bad Thing?

This is an extract from the transcript of The Chinese Revolution – Good Thing, Bad Thing? (1949 – Present). Do. The. Reading. and subscribe to Flappr’s YouTube channel!

“Tradition is like a chain that both constrains us and guides us. Of course, we may, especially in our younger years, strain and struggle against this chain. We may perceive faults or flaws, and believe ourselves or our generation to be uniquely perspicacious enough to radically improve upon what our ancestors have made – perhaps even to break the chain entirely and start afresh.

Yet every link in our chain of tradition was once a radical idea too. Everything that today’s conservatives vigorously defend was once argued passionately by reformers of past ages. What is tradition anyway if not a compilation of the best and most proven radical ideas of the past? The unexpectedly beneficial precipitate or residue retrieved after thousands upon thousands of mostly useless and wasteful progressive experiments.

To be a conservative, therefore, to stick to tradition, is to be almost always right about everything almost all the time – but not quite all the time, and that is the tricky part. How can we improve society, how can we devise better governments, better customs, better habits, better beliefs without breaking the good we have inherited? How can we identify and replace the weaker links in our chain of tradition without totally severing our connection to the past?

I believe we must begin from a place of gratitude. We must hold in our minds a recognition that life can be, and has been, far worse. We must realize there are hard limits to the world, as revealed by science, and unchangeable aspects of human nature, as revealed by history, religion, philosophy, and literature. And these two facts in combination create permanent unsolvable problems for mankind, which we can only evade or mitigate through those traditions we once found so constraining.

To paraphrase the great G.K. Chesterton: “Before you tear down a fence, understand why it was put up in the first place.” I cannot fault a single person for wishing to make a better world for themselves and their children, but I can admonish some persons for being so ungrateful and ignorant that they mistake tradition itself as the cause of every evil under the sun. Small wonder then that their harebrained alternatives routinely overlook those aspects of society without which it cannot function or perpetuate itself into the future.

And there are other things tied up in tradition besides moral guidance or the management of collective affairs. Tradition also involves how we delve into the mysteries of the universe; how we elevate the basic needs of food, shelter, and clothing into artforms unto themselves; how we represent truth and beauty and locate ourselves within the vast swirling cosmos beyond our all too brief and narrow experience.

It is miraculous that we have come as far as we have. And at any given time, we can throw that all away, through profound ingratitude and foolish innovations. A healthy respect for tradition opens the door to true wisdom. A lack of respect leads only to novelty worship and malign sophistry.

Now, not every tradition is equal, and not everything in a given tradition is worth preserving, but like the Chinese who show such great deference to the wisdom of their ancestors, I wish more in the West would admire or even learn about their own.

Like the Chinese, we are the legatees of a glorious tradition – a tradition that encompasses the poetry of Homer, the curiosity of Eratosthenes, the integrity of Cato, the courage of Saint Boniface, the vision of Michelangelo, the mirth of Mozart, the insights of Descartes, Hume, and Kant, the wit of Voltaire, the ingenuity of Watt, the moral urgency of Lincoln and Douglas.

These and many more are responsible for the unique tradition into which we have been born. And it is this tradition, and no other, which has produced those foundational ideas we all too often take for granted, or assume are the defaults around the world. I am speaking here of the freedom of expression, of inquiry, of conscience. I am speaking of the rule of law, and equality under the law. I am speaking of inalienable rights, of trial by jury, of respect for women, of constitutional order and democratic procedure. I am speaking of evidence-based reasoning and religious tolerance.

Now those are all things I wouldn’t give up for all the tea in China. You can have Karl Marx. We’ll give you him. But these are ours. They are the precious gems of our magnificent Western tradition, and if we do nothing else worthwhile in our lives, we can at least safeguard these things from contamination, or annihilation, by those who would thoughtlessly squander their inheritance.”



Technology Is Synonymous With Civilisation

I am declaring a fatwa on anti-tech and anti-civilisational attitudes. In truth, there is no real distinction between the two positions: technology is synonymous with civilisation.

What made the Romans an empire and the Gauls a disorganised mass of tribals, united only by their reactionary fear of the march of civilisation at their doorstep, was technology. Where the Romans had minted currency, aqueducts, and concrete so effective that we still marvel at how to recreate it, the Gauls fought amongst one another over land they never developed beyond basic tribal living. They stayed small, separated, and never innovated, even with a whole world of innovation at their doorstep to copy.

There is always a temptation to look towards smaller-scale living, see its virtues, and argue that we can recreate the smaller-scale living within the larger scale societies we inhabit. This is as naïve as believing that one could retain all the heartfelt personalisation of a hand-written letter, and have it delivered as fast as a text message. The scale is the advantage. The speed is the advantage. The efficiency of new modes of organisation is the advantage.

Smaller scale living in the era of technology necessarily must go the way of the hand-written letter in the era of text messaging: something reserved for special occasions, and made all the more meaningful for it.

However, no-one would take seriously someone who tries to argue that written correspondence is a valid alternative to digital communication. Equally, there is no reason to take seriously someone who considers smaller-scale settlements a viable alternative to big cities.

Inevitably, there will be those who mistake this for going along with the modern trend of GDP maximalism, but the situation in modern Britain is closer to the opposite. There is only one place generating wealth currently: the South-East. Everywhere else in the country is a net negative to Britain’s economic prosperity. Devolution, levelling up, and ‘empowering local communities’ has been akin to Rome handing power over to the tribals to decide how to run the Republic: it has empowered tribal thinking over civilisational thinking.

The consequence of this has not been to return to smaller-scale ways of life, but instead to rest on the laurels of Britain’s last civilisational thinkers: the Victorians.

Go and visit Hammersmith, and see the bridge built by Joseph Bazalgette. It has been boarded up for four years, and the local council spends its time bickering with the central government over whose responsibility it is to fix the bridge for cars to cross it. This is, of course, not a pressing issue in London’s affairs, as the Vercingetorix of the tribals, Sadiq Khan, is hard at work making sure cars can’t go anywhere in London, let alone across a bridge.

Bazalgette, in contrast to Khan, is one of the few people keeping London running today. Alongside Hammersmith Bridge, Bazalgette designed the sewage system of London. Much of the brickwork is marked with his initials, and he produced thousands of papers going over each junction and pipe.

Bazalgette reportedly doubled the pipes’ diameters, remarking, “we are only going to do this once, and there is always the possibility of the unforeseen”. That decision prevented the sewers from overflowing in the 1960s.

Bazalgette’s genius saved countless lives from cholera, disease, and the general third-world condition of living among open excrement. There is no hope today of a Bazalgette. His plans to change the very structure of the Thames would be Illegal and Unworkable to those with power, and the headlines proposing such a feat (that ancient civilisations achieved) would be met with one million image macros declaring it a “manmade horror beyond their comprehension.”

This fundamentally is the issue: growth, positive development, and a future worth living in is simply outside the scope of their narrow comprehension.

This train of thought, having gone unopposed for too long, has even found its way into the minds of people who typically have thorough, coherent, and well-thought-out views. One friend, in conversation, referred to the current ruling classes of this country as “tech-obsessed”.

Where is the tech-obsession in this country? Is it found in the current government buying 3,000 GPUs for AI, fewer than some hedge funds have for modelling their stock positions? Or is it found in the opposition, who believe we don’t need people learning to code because “AI will do it”?

The whole political establishment is anti-tech, whether it is crushing independent forums and communities via the Online Harms Bill, or making our supposed commitment to be a ‘world leader in AI regulation’ – effectively declaring ourselves the world’s schoolmarm, nagging away as the US, China, and the rest of the world get to play with the good toys.

Cummings relays multiple horror stories about the tech in No. 10: listening to COVID figures down the phone, getting more figures on scraps of paper, punching them into the calculator on his iPhone and writing them up on a whiteboard; fretting over provincial procurement rules over a paltry £500k to get real-time data on a developing pandemic. He may well have been the only person in recent years serious about technology.

The Brexit campaign was won by bringing in scientists, physicists, and mathematicians, and leveraging their numeracy (listen to this to get an idea of what went on) with the latest technology to campaign to people in a way that had not been done before. Technology, science, and innovation gave us Brexit because they allowed people to be organised on a scale and in ways they never had been before. It was only through a novel use of statistics, mathematical models, and Facebook advertising that the campaign reached so many people. The establishment lost on Brexit because they did not embrace new modes of thinking and new technologies. They settled for basic polling of one to ten thousand people and rudimentary mathematics.

Meanwhile, the Brexit campaign reached thousands upon thousands, and applied complex Bayesian statistics to get accurate insights into the electorate. It is those who innovate, evolve, and grow that shape the future. There is no going back to small-scale living. Scale is the advantage. Speed is the advantage. And once it exists, it devours the smaller modes of organisation around it, even when those smaller modes of organisation have the whole political establishment behind them.

When Cummings got what he wanted injected into the political establishment – a data science team in No. 10 – it was excised like a virus from the body the moment a new PM was installed. Tech has no friends in the political establishment: the speed, scale, and efficiency of the thing are anathema to a system which relies on slow-moving processes to keep a narrow group of incompetents in power for as long as possible. The fierce competition inherent to technology is the complete opposite of the ‘Rolls-Royce civil service’, which simply recycles bad staff around so they don’t bother too many people for too long.

By contrast, in tech, second best is close to last. When you run the most popular service, you get the data from running that service. This allows you to make a better service and outcompete others, which gets you more users, which gets you more data, and it all snowballs from there. Google holds 93.12% of the search engine market share. Amazon owns 48% of eCommerce sales. The iPhone is the most popular email client, at 47.13%. Twitch makes up 66% of all hours of video content watched. Google Chrome makes up 70% of web traffic. Its next nearest competitor, Firefox (a measly 8.42%), is only alive because Google gave them 0.5 billion to stick around. Each one of these companies is 2-40 times bigger than its next nearest competitor. Just as with civilisation, there is no half-arseing technology. It is build or die.

Nevertheless, there have been many attempts to half-arse technology and civilisation. When cities began to develop, and it became clear they were going to be the future powerhouses of modern economies, theorists attempted to create a ‘city of towns’ model.

Attempting to retain the virtues of small town and community living in a mass-scale settlement, they argued for a model of cities that could be made up of a collection of small towns. Inevitably, this failed.

The simple reason is that the utility of cities is scale. It is the access to the large labour pools that attracts businesses. If cities were to become collections of towns, there would be no functional difference in setting up a business in a city or a town, except perhaps the increased ground rent. The scale is the advantage.

This has been borne out mathematically. When things reach a certain scale, when they become networks of networks (the very thing you’re using, the internet, is one such example) they tend towards a winner-takes-all distribution.
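
For the mathematically curious, a toy simulation makes the point. The model below is a bare-bones ‘preferential attachment’ sketch – the standard illustrative model of growing networks, with node counts and a random seed chosen purely for illustration, not drawn from any particular study – and it shows how a handful of early, well-connected nodes end up holding a wildly disproportionate share of the links:

import random

def preferential_attachment(n_nodes=10_000, seed=42):
    # Grow a network in which each new node links to an existing node
    # with probability proportional to that node's current degree.
    random.seed(seed)
    degrees = [1, 1]      # two founding nodes joined by one link
    endpoints = [0, 1]    # every link lists both its endpoints, so sampling
                          # from this list is sampling proportional to degree
    for new in range(2, n_nodes):
        target = random.choice(endpoints)
        degrees.append(1)
        degrees[target] += 1
        endpoints.extend([new, target])
    return degrees

degrees = sorted(preferential_attachment(), reverse=True)
top_share = sum(degrees[:10]) / sum(degrees)
print(f"largest node's links: {degrees[0]}, median node's links: {degrees[len(degrees) // 2]}")
print(f"share of all links held by the 10 biggest nodes: {top_share:.1%}")
# A few early winners dominate; most nodes keep the single link they started
# with -- a winner-takes-all distribution rather than an even spread.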

Bowing out of the technological race to embark on some Luddite crusade against modernity, or to settle some grudge against the Enlightenment, signals to the world that we have no interest in carving out our stake in the future. Any nation serious about competing in the modern world needs to understand the unique problems and advantages of scale, and address them.

Nowhere is this more strongly seen than in Amazon, arguably a company that deals with scale like no other. The sheer scale of co-ordination at a company like Amazon requires novel solutions which make Amazon competitive in a way other companies are not.

For example, Amazon currently leads the market in cloud services (one of the few places where a competitor is anywhere near the top: Amazon at 32%, Azure at 23%). Amazon provides data storage in the cloud with its S3 service. Typically, data storage services have to handle peak times, when the majority of their users are online, or when a particularly onerous service dumps its data. However, Amazon serves so many customers that its aggregate demand is broadly flat. This allows Amazon to design its service around balancing a reasonably consistent load, rather than handling peaks and troughs. The scale is the advantage.
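
To see why aggregation flattens demand, consider a deliberately crude simulation: assume each customer draws a steady trickle of load plus one spike at an independently chosen hour of the day. The figures are invented for illustration and have nothing to do with Amazon’s real traffic; the point is the statistical effect of scale.

import random

def peak_to_average(n_customers, hours=24, seed=0):
    # Every customer uses a steady 1 unit per hour, plus a 9-unit spike
    # in one independently chosen hour of the day.
    random.seed(seed)
    load = [float(n_customers)] * hours          # the steady baseline
    for _ in range(n_customers):
        load[random.randrange(hours)] += 9.0     # that customer's spike
    return max(load) / (sum(load) / hours)

for n in (10, 1_000, 100_000):
    print(f"{n:>7} customers -> peak/average load ratio {peak_to_average(n):.2f}")
# With a handful of customers the provider must build capacity for sharp
# peaks; with enough independent customers the spikes average out, demand
# is broadly flat, and capacity can be planned around a steady level.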

Amazon warehouses do not have designated storage space, nor do they even have designated boxes for items. Everything is delivered and everything is distributed into boxes broadly at random, and tagged by machines so the machines know where to find it.

One would think this a terrible way to organise a warehouse. You only know where things are when you go to look for them; how could this possibly be advantageous? The advantage is in the scale, size, and randomness of the whole affair. If things are stored on designated shelves, then when those shelves are empty the space is wasted. If someone wants something from a designated shelf on one side of the warehouse, and something else from the other side, time is wasted going between the two. With randomness, a copy of the desired item is more likely to be close by, as long as you know where it is – and with technology, you do. Again, the scale is the advantage.
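
The logic can be sketched in a few lines of code. This is a deliberately toy model – one long aisle of bins, made-up stock levels, nothing like a real fulfilment system – but it shows why scattering copies at random beats a tidy shelf plan once software remembers every location:

import random

random.seed(1)
N_BINS = 1_000          # bins along a single long aisle
N_SKUS = 50             # distinct item types
COPIES_PER_SKU = 20     # stock held of each item

# Designated storage: every copy of an item sits together on one shelf.
designated = {sku: [random.randrange(N_BINS)] * COPIES_PER_SKU
              for sku in range(N_SKUS)}

# Random stow: each copy lands in a random bin, tracked by the system.
random_stow = {sku: [random.randrange(N_BINS) for _ in range(COPIES_PER_SKU)]
               for sku in range(N_SKUS)}

def average_walk(layout, trials=10_000):
    # Average distance from a random picker position to the nearest
    # copy of a randomly requested item.
    total = 0
    for _ in range(trials):
        picker = random.randrange(N_BINS)
        sku = random.randrange(N_SKUS)
        total += min(abs(picker - bin_) for bin_ in layout[sku])
    return total / trials

print("designated shelves:", round(average_walk(designated)), "bins away on average")
print("random stow:       ", round(average_walk(random_stow)), "bins away on average")
# Scattered copies mean some copy of almost anything is nearby; the software,
# not the shelf plan, keeps track of where everything is.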

The chaos and incoherence of modern life is not a bug but a feature. Just as the death of feudalism called humans to think beyond their glebe, Lord, and locality, the death of legacy media and old forms of communication calls humans to think beyond the 9-5, elected representative, and favourite Marvel movie.

In 1999 – one year after Amazon began selling music and videos, and two years after it went public – Barron’s, a reputable financial magazine created by Dow Jones & Company, ran its now-infamous ‘Amazon.bomb’ cover, dismissing the company’s prospects.

Remember, Barron’s is published by Dow Jones, the same people who run stock indices. If anyone is open to new markets, it’s them. Even they were outmanoeuvred by new technologies, because they failed to understand what technophobes always fail to understand: scale is the advantage. People do not want to buy from five different retailers when they can buy everything all at once.

Whereas Barron’s could be forgiven for not understanding a new, emerging market such as eCommerce, there should be no sympathy for those who spend most of their lives online decrying growth, especially as they occupy a place on the largest public square ever to exist in human history.

Despite claiming they want a small-scale existence, their revealed preference is the same as ours: scale, growth, and the convenience it provides. When faced with a choice between civilisation in the form of technology, and leaving Twitter for a lifestyle closer to that of the past, even technology’s biggest enemies choose civilisation.




Cause for Remembrance

As the poppy-adorned date of Remembrance Sunday moves into view, with ceremonies and processions set to take place on the 12th November, I couldn’t help but recall a quote from Nietzsche: “The future belongs to those with the longest memory.”

Typical of seemingly every Nietzsche quote, it is dropped mid-essay with little to no further context, moulded to fit the essay being written with scant regard for the message Nietzsche is trying to convey to the reader; a message which journalist and philosopher Alain de Benoist outlines with expert clarity:

“What he [Nietzsche] means is that Modernity will be so overburdened by memory that it will become impotent. That’s why he calls for the “innocence” of a new beginning, which partly entails oblivion.”

For Nietzsche, a fixation on remembrance, on recollecting everything that has been and everything that will be, keeps us rooted in our regrets and our failures; it deprives us of the joys which can be found in the present moment and breeds resentment in the minds of men.

As such, it is wise to be selective about what we remember and how we remember it, should we want to spare ourselves a lifetime of dizzying self-pity and further dismay. In my mind, as in millions of others, the most destructive wars in human history would qualify for the strange honour of being ‘remembered’, yet so too would other events, especially those which have yet to achieve fitting closure and continue to encroach upon the present.

As of this article’s publication, it is the 20th anniversary of the disappearance of Charlene Downes, presumed murdered by the Blackpool grooming gangs. Two Jordanian immigrants were arrested in connection with her murder: Iyad Albattikhi was charged with Downes’ murder and Mohammed Reveshi was charged with helping to dispose of her body. Both were later released after denying the charges.

To date, the only person sentenced in relation to the case is Charlene’s younger brother, who was arrested after he punched a man who, according to witness testimony, had openly joked about disposing of Charlene’s body by putting it into kebabs – information which led the police to change their initial missing person investigation into one of murder.

As reported in various media outlets, local and national, throughout their investigation, the police found “dozens more 13- to 15-year-old girls from the area had fallen victim to grooming or sexual abuse” with an unpublished report identifying eleven takeaway shops which were being used as honeypots – places where non-white men could prey on young white girls.

Like so many cases of this nature, investigations into Charlene’s murder have been held up by political correctness. According to conservative estimates, Charlene is just one of thousands of victims, yet only a tiny fraction of these racially motivated crimes has resulted in a conviction, with local councillors and police departments continuing to evade accountability for their role in what is nothing short of a national scandal.

However, it’s not just local officials who have dodged justice. National figures, including those with near-unrivalled influence in politics and media, have consistently ignored this historic injustice, many outrightly denying fundamental and well-established facts about the national grooming scandal.

Keir Starmer, leader of the Labour Party and likely the next Prime Minister, is one such denialist. In an interview with LBC, Starmer said: “the vast majority of sexual abuse cases do not involve those of ethnic minorities.”

If meant to refer to all sexual offences in Britain, Starmer’s statement is highly misleading. Accounting for the 20% of cases in which ethnicity is not reported, only 60% of sexual offenders in 2017 were classed as white, suggesting whites are underrepresented. In addition, the white ethnic category used in such reports includes disproportionately criminal ethnic minorities, such as Muslim Albanians, who are vastly overrepresented in British prisons, further undermining Starmer’s claim.

However, in the context of grooming gangs, Starmer’s comments are not only misleading, but categorically false. Every official report on ‘Group Sexual Exploitation’ (read: grooming gangs) has shown that Muslim Asians were highly over-represented, and the most famous rape gangs (Telford, Rotherham, Rochdale) along with high-profile murders (Lowe family, Charlene Downes) were the responsibility of Asian men.

As shown in Charlie Peters’ widely acclaimed documentary on the grooming gang scandal, 1 in every 1700 Pakistani men in the UK were prosecuted for being part of a grooming gang between 1997 and 2017. In cities such as Rotherham, it was 1 in 73.

However, according to the Home Office, all reports regarding the ethnic composition of grooming gangs necessarily discard large amounts of data, as they only cover a subset of cases. Their estimates of the proportion of grooming gang members who were Asian range from 14% (Berelowitz, 2015) to 84% (Quilliam, 2017) – a significant overrepresentation in either case – and even these figures are skewed by poor reporting, something the reports themselves make clear.

One report, which focused on grooming gangs in Rotherham, stated:

“By far the majority of perpetrators were described as ‘Asian’ by victims… Several staff described their nervousness about identifying the ethnic origins of perpetrators for fear of being thought racist; others remembered clear direction from their managers not to do so” (Jay, 2014)

Another report, which focused on grooming gangs in Telford, stated:

“I have also heard a great deal of evidence that there was a nervousness about race in Telford and Wellington in particular, bordering on a reluctance to investigate crimes committed by what was described as the ‘Asian’ community.” (Crowther, 2022)

If crimes committed by Asians were deliberately not investigated, whether to avoid recording ethnic disparities and so remain in step with legal commitments to Equality, Diversity, and Inclusion, or to avoid appearing ‘racist’ in the eyes of the media, then estimates based on police reports will be too low, especially when threats of violence against the victims are considered:

“In several cases victims received death threats against them or their family members, or threats that their houses would be petrol-bombed or otherwise vandalised in retaliation for their attempts to end the abuse; in some cases threats were reinforced by reference to the murder of Lucy Lowe, who died alongside her mother, sister and unborn child in August 2000 at age 15. Abusers would remind girls of what had happened to Lucy Lowe and would tell them that they would be next if they ever said anything. Every boy would mention it.” (Crowther, 2022)

Overall, it is abundantly clear that deeds, not words, are required to remedy this ongoing scandal. The victims of the grooming gang crisis deserve justice, not dismissal and less-than-subtle whataboutery. We must not tolerate nor fall prey to telescopic philanthropy. The worst of the world’s barbarities will not be found on the distant horizon, for they have been brought to our shores.

As such, we require an end to grooming gang denialism wherever it exists, an investigation by the National Crime Agency into every town, city, council and police department where grooming gang activity has been reported and covered up, and a memorial befitting a crisis of this magnitude. Only then will girls like Charlene begin to receive the justice they deserve, allowing this crisis to become another cause for remembrance, rather than a perverse and sordid aspect of life in modern Britain.



Atatürk: A Legacy Under Threat

The founders of countries occupy a unique position within modern society. They are often viewed either as heroic and mythical figures or as deeply problematic by today’s standards – take the obvious example of George Washington. Long held up by all Americans as a man unrivalled in his courage and military strategy, he is now a figure of vilification by leftists, who are eager to point out his ownership of slaves.

Whilst many such figures face similar shaming nowadays, none are suffering complete erasure from their own society. That is the fate currently facing Mustafa Kemal Atatürk, whose era-defining liberal reforms and state secularism now pose a threat to Turkey’s authoritarian president, Recep Tayyip Erdoğan.

To understand the magnitude of Atatürk’s legacy, we must understand his ascent from soldier to president. For that, we must go back to the end of World War One, and Turkey’s founding.

The Ottoman Empire officially ended hostilities with the Allied Powers via the Armistice of Mudros (1918), which amongst other things completely demobilised the Ottoman army. Following this, British, French, Italian and Greek forces arrived in and occupied Constantinople, the Empire’s capital. Thus began the partitioning of the Ottoman Empire, which had existed since 1299: the Treaty of Sèvres (1920) ceded large amounts of territory to the occupying nations, primarily France and Great Britain.

Enter Mustafa Kemal, known years later as Atatürk. An Ottoman Major General and fervent anti-monarchist, he and his revolutionary organisation (the Committee of Union and Progress) were greatly angered by Sèvres, which partitioned portions of Anatolia, a peninsula that makes up the majority of modern-day Turkey. In response, they formed a revolutionary government in Ankara, led by Kemal.

Thus, the Turkish National Movement fought a 4-year long war against the invaders, eventually pushing back the Greeks in the West, Armenians in the East and French in the South. Following a threat by Kemal to invade Constantinople, the Allies agreed to peace, with the Treaty of Kars (1921) establishing borders, and Lausanne (1923) officially settling the conflict. Finally free from fighting, Turkey declared itself a republic on 29 October 1923, with Mustafa Kemal as president.

His rule of Turkey began with a set of ideological principles radically different from those of the Ottoman Empire – life under a Sultan had been overtly religious, socially conservative and multi-ethnic. By contrast, Kemalism was best represented by the Six Arrows: Republicanism, Populism, Nationalism, Laicism, Statism and Reformism. Let’s consider the four most significant.

We’ll begin with Laicism. Believing Islam’s presence in society to have been impeding national progress, Atatürk set about fundamentally changing the role religion played both politically and societally. The Caliph, who was believed to be the spiritual successor to the Prophet Muhammad, was deposed. In his place came the office of the Directorate of Religious Affairs, or Diyanet – through its control of all Turkey’s mosques and religious education, it ensured Islam’s subservience to the State.

Under a new penal code, all religious schools and courts were closed, and the wearing of headscarves was banned for public workers. However, the real nail in the coffin came in 1928: that was when an amendment to the Constitution removed the provision declaring that the “Religion of the State is Islam”.

Moving onto Nationalism. With its roots in the social contract theories of thinkers like Jean-Jacques Rousseau, Kemalist nationalism defined the social contract as its “highest ideal” following the Empire’s collapse – a key example of the failures of a multi-ethnic and multi-cultural state.

The 1930s saw the Kemalist definition of nationality integrated into the Constitution, legally defining every citizen as a Turk, regardless of religion or ethnicity. Despite this, however, Atatürk fiercely pursued a policy of forced cultural conformity (Turkification), similar to that of the Russian Tsars in the previous century. Both regimes had the same aim – the creation and survival of a homogenous and unified country. As such, non-Turks were pressured into speaking Turkish publicly, and those with minority surnames had to change them – to ‘Turkify’ them.

Now Reformism. A staunch believer in both education and equal opportunity, Atatürk made primary education free and compulsory, for both boys and girls. Alongside this came the opening of thousands of new schools across the country. The results are undeniable: between 1923 and 1938, the number of students attending primary school increased by 224%, while the number attending middle school increased 12.5-fold.

Staying true to his belief in equal opportunity, Atatürk enacted monumentally progressive reforms in the area of women’s rights. For example, 1926 saw a new civil code, and with it came equal rights for women concerning inheritance and divorce. In many of these gender reforms, Turkey was well ahead of other Western nations: Turkish women gained the vote in 1930, followed by universal suffrage in 1934. By comparison, France passed universal suffrage in 1945, Canada in 1960 and Australia in 1967. Fundamentally, Atatürk didn’t see Turkey truly modernising whilst Ottoman gender segregation persisted.

Lastly, let’s look at Statism. As both president and leader of the Republican People’s Party, Atatürk was essentially unquestioned in his control of the State. However, despite his dictatorial tendencies (primarily the purging of political enemies), he was firmly opposed to dynastic rule, as had been the case with the Ottomans.

But under Recep Tayyip Erdoğan, all of this could soon be gone.

Having been a high-profile political figure for 20 years, Erdoğan has cultivated a positive image domestically, one built on his support for public religion and Turkish nationalism, whilst internationally he has received far more negative attention, focused on his growing authoritarian behaviour. Regarded widely by historians as the very antithesis of Atatürk, Erdoğan’s pushback against state secularism is perhaps the most significant attack on the founder’s legacy.

This has been most clearly displayed within the education system. 2017 saw a radical shift in school curriculums across Turkey, with references to Charles Darwin’s theory of evolution being greatly reduced. Meanwhile, the number of religious schools has increased exponentially, promoting Erdoğan’s professed goal of raising a “pious generation of Turks”. Additionally, the Diyanet under Erdoğan has seen a huge increase in its budget, and with the launch of Diyanet TV in 2012, has spread Quranic education to early ages and boarding schools.

The State has roles to play in society, but depriving schoolchildren of vital scientific information and funding religious indoctrination is beyond outrageous. Soner Cagaptay, author of The New Sultan: Erdoğan and the Crisis of Modern Turkey, referred to the changes as “a revolution to alter public education to assure that a conservative, religious view of the world prevails”.

There are other warning signs more broadly, however. The past 20 years have seen the headscarf make a gradual reappearance back into Turkish life, with Erdoğan having first campaigned on the issue back in 2007, during his first run for the presidency. Furthermore, Erdoğan’s Justice and Development Party (AKP), with its strong base of support amongst extremely orthodox Muslims, has faced repeated accusations of being an Islamist party – as per the constitution, no party can “claim that it represents a form of religious belief”.

Turkish women, despite being granted legal equality by Atatürk, remain regular victims of sexual harassment, employment discrimination and honour killings. Seemingly intent on destroying all the positive achievements of the founder, Erdoğan withdrew from the Istanbul Convention (which obliges parties to investigate, punish and crack down on violence against women) in March 2021.

All of these reversals of Atatürk’s policies reflect a larger-scale attempt to delete him from Turkey’s history. His image is now a rarity in school textbooks and at national events, statues of him are increasingly scarce, and his role in Turkey’s founding has been criminally downplayed.

President Erdoğan presents an unambiguous threat to the freedoms of the Turkish people, through both his ultra-Islamic policies and authoritarian manner of governance. Unlike Atatürk, Erdoğan seemingly has no problems with ruling as an immortal dictator, and would undoubtedly love to establish a family dynasty. With no one willing to challenge him, he appears to be dismantling Atatürk’s reforms one law at a time, reducing the once-mythical Six Arrows of Kemalism down to a footnote in textbooks.

A man often absent from the curriculums of Western history departments, Mustafa Kemal Atatürk proved to be one of the most consequential leaders of both Turkish history and the 20th century. A radical and a revolutionary he may have been, but it was largely down to him that the Turkish people received a recognised nation-state, in which state secularism, high-quality education and equal civil rights were the norm.

In our modern world, so many of our national figures now face open vilification from the public and politicians alike. But for Turkey, future generations may grow up not even knowing the name or face of their George Washington. Whilst several political parties and civil society groups are pushing back against this anti-Atatürk agenda, the sheer determination displayed by Erdoğan shows how far Turks must yet go to preserve the founder’s legacy.



Charles’ Personal Rule: A Stable or Tyrannised England?

Within discussions of England’s political history, the most famous moments are known and widely discussed – the Magna Carta of 1215, and the Cromwell Protectorate of the 1650s spring immediately to mind. However, the renewal of an almost-mediaeval style of monarchical absolutism, in the 1630s, has proven both overlooked and underappreciated as a period of historical interest. Indeed, Charles I’s rule without Parliament has faced an identity crisis amongst more recent historians – was it a period of stability or tyranny for the English people?

If we are to consider the Personal Rule as a period in sufficient depth, the years leading up to the dissolution of Charles’ Third Parliament (in 1629) must first be understood. Succeeding his father James I in 1625, Charles’ personal style and vision of monarchy would prove incompatible with the expectations of his Parliaments. Having enjoyed a strained but respectful relationship with James, MPs would come to question Charles’ authority and choice of advisors in the coming years. Indeed, it was Charles’ stubborn adherence to the doctrine of the Divine Right of Kings, writing once that “Princes are not bound to give account of their actions but to God alone”, that meant he believed compromise to be defeat, and any pushback against him to be a sign of disloyalty.

Constitutional tensions between King and Parliament proved the most contentious of all issues, especially regarding the King’s role in taxation. At war with Spain between 1625 and 1630 (and having just dissolved the 1626 Parliament), Charles was lacking in funds. Thus, he turned to non-parliamentary forms of revenue, notably the Forced Loan (1627) – declaring a ‘national emergency’, Charles demanded that his subjects all make a gift of money to the Crown. Whilst theoretically optional, those who refused to pay were often imprisoned; a notable example would be the Five Knights’ Case, in which five knights were imprisoned for refusing to pay (with the court ruling in Charles’ favour). This would eventually culminate in Charles’ signing of the Petition of Right (1628), which protected the people from non-parliamentary taxation, as well as from other controversial powers that Charles chose to exercise, such as arrest without charge, martial law, and the billeting of troops.

The role played by George Villiers, the Duke of Buckingham, was another major factor that contributed to Charles’ eventual dissolution of Parliament in 1629. Having dominated the court of Charles’ father, Buckingham came to enjoy a similar level of unrivalled influence over Charles as his de facto Foreign Minister. It was, however, in his position as Lord High Admiral that he further worsened Charles’ already negative view of Parliament. Responsible for both major foreign policy disasters of Charles’ early reign (Cadiz in 1625 and La Rochelle in 1627, both of which achieved nothing and cost 5,000 to 10,000 men), he was deemed by the MP Edward Coke to be “the cause of all our miseries”. The Duke’s influence over Charles’ religious views also proved highly controversial – at a time when anti-Calvinism was on the rise, stoked by Richard Montague and his pamphlets, Buckingham encouraged the King to continue his support of the leading anti-Calvinist of the time, William Laud, at the York House Conference in 1626.

Charles was heavily dependent on the counsel of Villiers until the latter’s assassination in 1628; it was, in fact, Parliament’s threat to impeach the Duke that encouraged Charles to agree to the Petition of Right. Fundamentally, Buckingham’s poor decision-making meant serious criticism from MPs, and a King who believed this criticism to be Parliament overstepping the mark and questioning his choice of personnel.

Fundamentally, by 1629 Charles viewed Parliament as a method of restricting his God-given powers, one that had attacked his decisions, provided him with essentially no subsidies, and forced him to accept the Petition of Right. Writing years later, in 1635, the King claimed that he would do “anything to avoid having another Parliament”. Amongst historians, the significance of this final dissolution is fiercely debated: some, such as Angela Anderson, do not see the move as unusual; there were seven years, for example, between two of James’ Parliaments (1614 and 1621) – at this point in English history, “Parliaments were not an essential part of daily government”. On the other hand, figures like Jonathan Scott viewed the principle of officially governing without Parliament as new – indeed, the decision was made official by a royal proclamation.

Now free of Parliamentary constraints, the first major issue Charles faced was his lack of funds. Lacking the usual taxation method and in desperate need of upgrading the English navy, the King revived ancient taxes and levies, the most notable being Ship Money. Originally a tax levied on coastal towns during wartime (to fund the building of fleets), Charles extended it to inland counties in 1635 and made it an annual tax in 1636. This inclusion of inland towns was construed as a new tax without parliamentary authorisation. For the nobility, Charles revived the Forest Laws (demanding landowners produce the deeds to their lands), as well as fines for breaching building regulations.

The public response to these new fiscal expedients was one of broad annoyance, but general compliance. Indeed, between 1634 and 1638, 90% of the expected Ship Money revenue was collected, providing the King with over £1m in annual revenue by 1637. Despite this, the Earl of Warwick questioned its legality, and the clerical leadership referred to all of Charles’ tactics as “cruel, unjust and tyrannical taxes upon his subjects”. However, the most notable case of opposition to Ship Money was the John Hampden case in 1637. A gentleman who refused to pay, Hampden argued that England wasn’t at war and that Ship Money writs gave subjects seven months to pay, enough time for Charles to call a new Parliament. Despite the Crown winning the case, it inspired wider opposition to Ship Money, such as the 1639-40 ‘tax revolt’, involving non-cooperation from both citizens and tax officials. Opposing this view, however, stands Sharpe, who claimed that “before 1637, there is little evidence at least, that its [Ship Money’s] legality was widely questioned, and some suggestion that it was becoming more accepted”.

In terms of his religious views, both personal and in his wider vision for the country, Charles had been an open supporter of Arminianism from as early as the mid-1620s – a movement within Protestantism that staunchly rejected the Calvinist teaching of predestination. As a result, the sweeping changes to English worship and Church government that the Personal Rule would oversee were, unsurprisingly, extremely controversial amongst his Calvinist subjects in all areas of the kingdom. In considering Charles’ religious aims and their consequences, we must focus on the impact of one man in particular, William Laud. Having given a sermon at the opening of Charles’ first Parliament in 1625, Laud spent the next near-decade climbing the ranks of the ecclesiastical ladder; he was made Bishop of Bath and Wells in 1626, of London in 1629, and eventually Archbishop of Canterbury in 1633. Now 60 years old, Laud was unwilling to compromise on any of his planned reforms to the Church.

The overarching theme of Laudian reforms was ‘the Beauty of Holiness’, which had the aim of making churches beautiful and almost lavish places of worship (Calvinist churches, by contrast, were mostly plain, to not detract from worship). This was achieved through the restoration of stained-glass windows, statues, and carvings. Additionally, railings were added around altars, and priests began wearing vestments and bowing at the name of Jesus. However, the most controversial change to the church interior proved to be the communion table, which was moved from the middle of the room to by the wall at the East end, which was “seen to be utterly offensive by most English Protestants as, along with Laudian ceremonialism generally, it represented a substantial step towards Catholicism. The whole programme was seen as a popish plot”. 

Under Laud, the power and influence wielded by the Church also increased significantly – a clear example would be the fact that Church courts were granted greater autonomy. Additionally, Church leaders became ever more present as ministers and officials within Charles’ government, with the Bishop of London, William Juxon, appointed as Lord Treasurer and First Lord of the Admiralty in 1636. Moreover, despite already having the full backing of the Crown, Laud was not one to accept dissent or criticism and, although the severity of his actions has been exaggerated by recent historians, they could be ruthless at times. The clearest example would be the torture and imprisonment of his most vocal critics in 1637: the religious radicals William Prynne, Henry Burton and John Bastwick.

However successful Laudian reforms may have been in England (and that is very much debatable), Laud’s attempt to enforce uniformity on the Church of Scotland in the latter half of the 1630s would see the emergence of a united Scottish opposition against Charles, and eventually armed conflict with the King, in the form of the Bishops’ Wars (1639 and 1640). This road to war was sparked by Charles’ introduction of a new Prayer Book in 1637, aimed at making English and Scottish religious practices more similar – it would prove beyond disastrous. Riots broke out across Edinburgh, the most notable being in St Giles’ Cathedral (where the bishop had to protect himself by pointing loaded pistols at the furious congregation). This displeasure culminated in the National Covenant of 1638 – a declaration of allegiance which bound together Scottish nationalism with the Calvinist faith.

Any conclusion about the Laudian religious reforms must acknowledge that, in terms of his and Charles’ objectives, they very much overhauled the Calvinist system of worship, the role of priests, Church government, and the physical appearance of churches. The response from the public, however, ranging from silent resentment to full-scale war, displays how damaging these reforms were to Charles’ relationship with his subjects – coupled with the influence wielded by his wife Henrietta Maria, public fears about Catholicism very much damaged Charles’ image, and meant religion during the Personal Rule was arguably the most intense issue of the period. In judging Laud today, the historical debate has been split: certain historians focus on his radical uprooting of the established system, with Patrick Collinson suggesting the Archbishop to have been “the greatest calamity ever visited upon the Church of England”, whereas others view Laud and Charles as pursuing something entirely reasonable: a more orderly and uniform church.

Much as the Personal Rule’s religious direction was defined by one individual, so too was its political direction, by Thomas Wentworth, later known as the Earl of Strafford. Serving as the Lord Deputy of Ireland from 1632 to 1640, he set out with the aims of ‘civilising’ the Irish population, increasing revenue for the Crown, and challenging Irish titles to land – all under the umbrella term of ‘Thorough’, which aspired to concentrate power, crack down on opposition figures, and essentially preserve the absolutist nature of Charles’ rule during the 1630s.

Regarding Wentworth’s aims toward Irish Catholics, Ian Gentles’ 2007 work The English Revolution and the Wars in the Three Kingdoms argues that the friendships Wentworth maintained with Laud and with John Bramhall, the Bishop of Derry, “were a sign of his determination to Protestantize and Anglicize Ireland”. Devoted to a Catholic crackdown as soon as he reached Irish shores, Wentworth would subsequently refuse to recognise the legitimacy of Catholic officeholders in 1634, and managed to reduce Catholic representation in Ireland’s Parliament by a third between 1634 and 1640 – this, at a time when Catholics made up 90% of the country’s population. An even clearer indication of Wentworth’s hostility to Catholicism was his aggressive policy of land confiscation. Challenging Catholic property rights in Galway, Kilkenny and other counties, Wentworth would bully juries into returning a verdict favourable to the King, and even those Catholics who were granted their land back (albeit only three-quarters of it) were now required to make regular payments to the Crown. Wentworth’s enforcing of Charles’ religious priorities was further evidenced by his reaction to those in Ireland who signed the National Covenant. The accused were hauled before the Court of Castle Chamber (Ireland’s equivalent of the Star Chamber) and forced to renounce ‘their abominable Covenant’ as ‘seditious and traitorous’.

Seemingly in keeping with other figures of the Personal Rule, Wentworth was notably tyrannical in his governing style. Sir Piers Crosby and Lord Esmonde were convicted of libel by the Court of Castle Chamber for accusing Wentworth of involvement in the death of Esmonde’s relative, and Lord Valentia was sentenced to death for “mutiny” – in fact, he had merely insulted the Earl.

In considering Wentworth as a political figure, it is very easy to view him as merely another tyrannical brute, carrying out the orders of his King. Indeed, his time as Charles’ personal advisor (1639 onwards) certainly supports this view: he once told Charles that he was “loose and absolved from all rules of government” and was quick to advocate war with the Scots. However, Wentworth also saw great successes during his time in Ireland; he raised Crown revenue substantially by taking back Church lands and purged the Irish Sea of pirates. Fundamentally, by the time of his execution in May 1641, Wentworth possessed a reputation amongst Parliamentarians very much like that of the Duke of Buckingham; both men came to wield tremendous influence over Charles, as well as great offices and positions.

In the areas considered thus far, opposition to the Personal Rule appears to have been a rare occurrence, especially in any organised or effective form. Indeed, Durston claims the decade of the 1630s to have seen “few overt signs of domestic conflict or crisis”, viewing the period as altogether stable and prosperous. However, whilst certainly limited, the small amount of resistance can be viewed as representing a far more widespread feeling of resentment amongst the English populace. Whilst many royal actions received little pushback from the masses, the gentry, many of whom were becoming increasingly disaffected with the Personal Rule’s direction, gathered in opposition. Most notably, John Pym, the Earl of Warwick, and other figures collaborated with the Scots to launch a dissident propaganda campaign criticising the King, as well as encouraging local opposition (which saw some success, such as the mobilisation of the Yorkshire militia). Charles’ effective use of the Star Chamber, however, ensured that opponents, usually those who voiced opposition to royal decisions, were swiftly dealt with.

The historiographical debate surrounding the Personal Rule, and the Caroline Era more broadly, has been and continues to be dominated by Whig historians, who view Charles as foolish, malicious, and power-hungry, and his rule without Parliament as destabilising, tyrannical and a threat to the people of England. A key proponent of this view is S.R. Gardiner who, believing the King to have been ‘duplicitous and delusional’, coined an alternative term to ‘Personal Rule’ – the Eleven Years’ Tyranny. This position survived into the latter half of the 20th Century, with Charles labelled by Barry Coward as “the most incompetent monarch of England since Henry VI”, and by Ronald Hutton as “the worst king we have had since the Middle Ages”.

Recent decades have seen, however, the attempted rehabilitation of Charles’ image by Revisionist historians, the most well-known, as well as the most controversial, being Kevin Sharpe. Responsible for the landmark study of the period, The Personal Rule of Charles I, published in 1992, Sharpe became Charles’ staunchest modern defender. In his view, the 1630s, far from a period of tyrannical oppression and public rebellion, were a decade of “peace and reformation”. During Charles’ time as an absolute monarch, his freedom from Parliamentary limits and regulations allowed him to achieve a great deal: Ship Money strengthened the Navy’s numbers, Laudian reforms meant a more ordered and regulated national church, and Wentworth dramatically raised Irish revenue for the Crown – all this, and much more, without any real organised or overt opposition figures or movements.

Understandably, the Sharpian view has received significant pushback, primarily for taking an overly optimistic view and selectively emphasising the Personal Rule’s positives. Encapsulating this criticism, David Smith wrote in 1998 that Sharpe’s “massively researched and beautifully sustained panorama of England during the 1630s … almost certainly underestimates the level of latent tension that existed by the end of the decade”. This has been built on by figures like Esther Cope: “while few explicitly challenged the government of Charles I on constitutional grounds, a greater number had experiences that made them anxious about the security of their heritage”.

It is worth noting, however, that a year before his death in 2011, Sharpe came to reconsider the views of his fellow historians, acknowledging that Charles’ lack of political understanding had endangered the monarchy and that, more seriously, by the end of the 1630s the Personal Rule was facing mounting and undeniable criticism from both Charles’ court and the public.

Sharpe’s unpopular perspective has been built upon by other historians, such as Mark Kishlansky. Publishing Charles I: An Abbreviated Life in 2014, Kishlansky viewed parliamentarian propaganda of the 1640s, as well as centuries of consistent smearing by historians, as having resulted in Charles being seen “as an idiot at best and a tyrant at worst”, labelling him “the most despised monarch in Britain’s historical memory”. Charles, however, had no real preparation for the throne – it was always his older brother Henry who was the heir apparent. Additionally, once King, Charles’ Parliaments were stubborn and uncooperative – by refusing to provide him with the necessary funding, for example, they forced him to enact the Forced Loan. Kishlansky does, however, concede the damage caused by Charles’ unmoving belief in the Divine Right of Kings: “he banked too heavily on the sheer force of majesty”.

Charles’ personality, ideology and early life fundamentally meant an icy relationship with Parliament, which grew into mutual distrust and eventual dissolution. The period of Personal Rule remains a highly debated topic within academic circles, with the recent arrival of Revisionism posing a challenge to the long-established negative view of the Caroline Era. Whether the King’s financial, religious, and political actions were met with a merely discontented populace or with outright opposition, the identity crisis facing the period – tyranny or stability – has yet to be conclusively put to rest.



All States Desire Power: The Realist Perspective

Within the West, the realm of international theory has, since 1945, been dominated almost entirely by the Liberal perspective. Near-universal amongst the foreign policy establishments of Western governments, a focus on state cooperation, free-market capitalism and, more broadly, internationalism is really the only position held by most leaders nowadays – just look at ‘Global Britain’. As Francis Fukuyama noted, the end of the Cold War (and of the Soviet Union) served as a political catalyst, bringing about ‘the universalisation of Western liberal democracy as the final form of human government’.

Perhaps even more impactful, however, were the immediate post-war years of the 1940s. With the Continent reeling from years of physical and economic destruction, the feeling amongst the victors was understandably a desire for greater closeness, security and stability. This resulted in numerous alliances being formed: political (the UN in 1945), military (NATO in 1949), and economic (the various Bretton Woods organisations). For Europe, this focus on integration manifested itself in blocs like the ECSC and EEC, which would culminate in the Maastricht Treaty and the EU.

This worldview, however, faces criticism from advocates of another: Realism. The concern of states shouldn’t, as Liberals claim, be forging stronger global ties or forming more groups – instead, nations should be domestically minded, concerned with their internal situation and safety. For Realists, this is what foreign relations are about: keeping to oneself, and furthering the interests of the nation above those of the wider global community.

To better understand Realism as an ideological school, we must first look to theories of human nature. From the perspective of Realists, the motivations and behaviour of states can be traced back to our base animalistic instincts, with the work of Thomas Hobbes being especially noteworthy. For the 17th Century thinker, before the establishment of a moral and ordered society (by the absolute Sovereign), Man is concerned only with surviving, protecting selfish interests and dominating other potential rivals. On a global scale, these are the priorities of nation-states and their leaders – Hans Morgenthau famously noted that political man was “born to seek power”, possessing a constant need to dominate others. However much influence or power a state may possess, self-preservation is always a major goal. Faced with the constant threat of rivals with opposing interests, states are always seeking a guarantee of protection – for Realists, the existence of intergovernmental organisations (IGOs) is an excellent example of this. Whilst NATO and the UN may seem the epitome of Liberal cooperation, what they truly represent is states ensuring their own safety.

One of the key pillars of Realism as a political philosophy is the concept of the Westphalian System, and how that relates to relationships between countries. Traced back to the Peace of Westphalia in 1648, the principle essentially asserts that all nation-states have exclusive control (absolute sovereignty) over their territory. For Realists, this has been crucial to their belief that states shouldn’t get involved in the affairs of their neighbours, whether that be in the form of economic aid, humanitarian intervention or furthering military interests. It is because of this system that states are perceived as the most important, influential and legitimate actors on the world stage: IGOs and other non-state bodies can be moulded and corrupted by various factors, including the ruthless self-interest of states.

With the unique importance of states enshrined within Realist thought, the resulting global order is one of ‘international anarchy’ – essentially a system in which state-on-state conflict is inevitable and frequent. The primary reason for this can be traced back to Hobbes’ 1651 work Leviathan: with no higher authority to enforce rules and settle disputes, people (and states) will inevitably come into conflict and lead ‘nasty, brutish and short’ existences (an idea further expanded upon in Hedley Bull’s The Anarchical Society). Left in a lawless situation, with neither guaranteed protection nor guaranteed allies (all states are, of course, potential enemies), it’s every man for himself. At this point, Liberals will be eager to point out supposed ‘checks’ on the power of nation-states. Whilst we’ve already tackled the Realist view of IGOs, the existence of international courts must surely hold rogue states accountable, right? Well, the sanctity of state sovereignty limits the power of essentially all such organisations: the International Court of Justice’s rulings both lack enforcement and can be blatantly ignored (e.g., the court advised Israel against building a wall along the Palestinian border in 2004, of which the Israelis took no notice). Within the harsh world we live in, states are essentially free to do as they wish, consequences be damned.

Faced with egocentric neighbours, the inevitability of conflict and no referee, it’s no wonder states view power as the means of survival. Whilst Realists agree that all states seek to accumulate power (and hard military power in particular), there exists debate as to the intrinsic reason – following this accumulation, what is the ultimate aim? One perspective, posited by thinkers like John Mearsheimer and other Offensive Realists, suggests that states are concerned with becoming the undisputed hegemon within a unipolar system, where they face no danger – once the most powerful, your culture can be spread, your economy strengthened, and your interests more easily defended. Indeed, whilst the United States may currently occupy the position of hegemon, Mearsheimer (as well as many others) has been cautiously watching China – the CCP leadership clearly harbour dreams of world takeover.

Looking to history, the European empires of old were fundamentally creations of hegemonic ambition. Able to access the rich resources and unique climates of various lands, nations like Britain, Spain and Portugal possessed great international influence and, at various points, dominated the global order. Indeed, when the British Empire peaked in the early 1920s, it ruled close to 500 million people and covered a quarter of the Earth’s land surface, making it history’s biggest empire. Existing during a period of history in which bloody, expensive wars were commonplace, these countries did what they believed necessary, rising to the top and brutally suppressing those who threatened their positions – regional control was ensured, and idealistic rebels brought to heel.

In stark contrast is the work of Defensive Realists, such as Kenneth Waltz, who suggest that, concerned more with security than global dominance, states accrue power to ensure their own safety and, far from harbouring lofty ideas of hegemony, favour a cautious approach to foreign policy. This kind of thinking was seen amongst ‘New Left’ Revisionist historians in the aftermath of the Cold War, for whom the narrative of Soviet continental dominance (through the takeover of Eastern Europe) was a myth. On this reading, what Stalin truly desired was to solidify the USSR’s position through the creation of a buffer zone, in response to the increasingly anti-Soviet measures of President Truman (which included Marshall Aid to Europe, and the Truman Doctrine).

Considering Realism within the context of the 21st Century, the ongoing Russo-Ukrainian War seems the obvious case study to examine. Within academic circles, John Mearsheimer has been the most vocal regarding Ukraine’s current predicament – a fierce critic of American foreign policy for decades now, he views NATO’s eastern expansion as having worsened relations with Russia and served only to fuel Putin’s paranoia. From Mearsheimer’s perspective, Putin’s ‘special military operation’ is therefore understandable and arguably justifiable: the West has failed to respect Russia’s sphere of influence, failed to acknowledge Russia as a fellow Great Power, and consistently thwarted its pursuit of regional interests.

Alongside this, Britain’s financial involvement in this conflict can and should be viewed as willing intervention, and one that is endangering the already-frail British economy. It is all well and good to speak of defending rights, democracy and Western liberalism, but there comes a point where our politicians and media must be reminded: the national interest is paramount, always. This need not be our fight, and the aid money we’re providing the Ukrainians – running into the billions – should instead be going towards the police, housing, strengthening the border, and other domestic priorities.

Our politicians and policymakers may want a continuance of idealistic cooperation and friendly relations, but the brutal, unfriendly reality of the system is becoming unavoidable. Fundamentally, self-interested leaders and their regimes are constantly looking to gain more power, influence and territory. By and large, bodies like the UN are essentially powerless; decisions can’t be enforced and sovereignty acts as an unbreachable barrier. Looking ahead to the UK’s future, we must be more selfish, focused on making British people richer and safer, and on putting our national interests above childish notions of eternal friendship.



John Galt, Tom Joad, and other Polemical Myths

Just about the only titles by Ayn Rand I’d feel comfortable assigning my students without previous suggestion by either student or boss would be Anthem or We the Living, mostly because they both fit into broader genres of dystopian and biographical fiction, respectively, and can, thus, be understood in context. Don’t get me wrong: I’d love to teach The Fountainhead or Atlas Shrugged, if I could find a student nuanced (and disciplined) enough to handle those two; however, if I were to find such a student, I’d probably skip Rand and go straight to Austen, Hugo, and Dostoevsky—again, in part to give students a context of the novelistic medium from which they can better understand authors like Rand.

My hesitation to teach Rand isn’t one of dismissal; indeed, it’s the opposite—I’ve, perhaps, studied her too much (certainly, during my mid-twenties, too exclusively). I could teach either of her major novels, with understanding of both plot and philosophy, having not only read and listened to them several times but also read most of her essays and non-fiction on philosophy, culture, art, fiction, etc. However, I would hesitate to teach them because they are, essentially, polemics. Despite Rand’s claiming it was not her purpose, the novels are didactic in nature: their events articulate Rand’s rationalistic, human-centric metaphysics (itself arguably a distillation of Aristotelian natural law, Lockean rights, and Nietzschean heroism filtered through Franklin, Jefferson, and Rockefeller and placed in a 20th-century American context—no small feat!). Insofar as they do so consistently, The Fountainhead and Atlas Shrugged succeed, and they are both worth reading, if only to develop a firsthand knowledge of the much-dismissed Rand’s work, as well as to understand their place in 20th-century American culture and politics.

All that to say that I understand why people, especially academics, roll their eyes at Rand (though at times I wonder if they’ve ever seriously read her). The “romantic realism” she sought to develop to glorify man as (she saw) man ought to be, which found its zenith in the American industrialist and entrepreneur, ran counter to much that characterized the broader 20th-century culture (both stylistically and ideologically), as it does much of the 21st. Granted, I may have an exaggerated sense of the opposition to Rand—her books are still read in and out of the classroom, and some of her ideas still influence areas of at least American culture—and one wonders if Rand wouldn’t take the opposition, itself, as proof of her being right (she certainly did this in the last century). However, because of the controversy, as well as the ideology, that structures the novels, I would teach her with a grain of salt, not wanting to misuse my position teaching students who are, essentially, other people’s kids and who probably don’t know and haven’t read enough to understand Rand in context. For this fact, if not for the reasoning, I can imagine other teachers applauding me.

And yet, how many academics would forego including Rand in a syllabus and, in the same moment, endorse teaching John Steinbeck without a second thought?

I generally enjoy reading books I happened to miss in my teenage years. Had I read The Great Gatsby any sooner than I did in my late twenties, I would not have been ready for it, and the book would have been wasted on me. The same can be said of The Scarlet Letter, 1984, and all of Dostoevsky. Even the books I did read have humbled me upon rereading; Pride and Prejudice wasn’t boring—I was.

Reading through The Grapes of Wrath for the first time this month, I am similarly glad I didn’t read it in high school (most of my peers were not so lucky, having had to read it in celebration of Steinbeck’s 100th birthday). The fault, dear Brutus, is not in the book (though it certainly has faults) but in ourselves—that we, as teenagers who lack historical, political, and philosophical context, are underlings. One can criticize Atlas Shrugged for presenting a selective, romanticized view of the capitalist entrepreneur (which, according to Rand’s premises, was thorough, correct, consistent, and, for what it was, defensible) which might lead teenagers to be self-worshipping assholes who, reading Rand without nuance, take the book as justification for mistaking their limited experience of reality as their rational self-interest. One can do much the same, though for ideas fundamentally opposed to Rand’s, for The Grapes of Wrath.

A member of the Lost Generation, John Steinbeck was understandably jaded in his view of 19th-century American ideals. Attempting to take a journalistic, modern view of the Great Depression and Dust Bowl from the bottom up, he gave voice to the part of American society that, but for him, may have remained inarticulate and unrecorded. Whatever debate can be had about the origins of Black Tuesday (arguably beginning more in Wilson’s Washington and Federal Reserve than on Wall Street), the Great Depression hit the Midwest hardest, and the justifiable sense that Steinbeck’s characters are unfair victims of others’ depredations pervades The Grapes of Wrath, just as it articulates one of the major senses of the time. When I read the book, I’m not only reading of the Joad family: I’m reading of my own grandfather, who grew up in Oklahoma and later Galveston, TX. He escaped the latter effects of the Dust Bowl by going not to California but to Normandy. I’m fortunate to have his journal from his teenage years; other Americans who don’t have such a journal have Steinbeck.

However, along with the day-in-the-life (in which one would never want to spend a day) elements of the plot, the book nonetheless offers a selectively, one might even say romantically, presented ideology in answer to the plot’s conflict. Responding to the obstacles and unfairness depicted in The Grapes of Wrath, one can find consistent advocacy of revolution among the out-of-work migrants who comprise most of the book. Versus Rand’s extension of Dagny Taggart’s or Hank Rearden’s sense of pride, ownership, and property down to the smallest elements of their respective businesses, one finds in Steinbeck the theme of a growing disconnect between legal ownership and the right to the land.

In the different reflections interpolated throughout the Joads’ plot, Steinbeck describes how, from his characters’ view, there had been a steady divorce over the years between legal ownership of the land and appreciation for it. This theme was not new to American literature. The “rural farmer vs city speculator” mythos is one of the fundamental characteristics of American culture, reaching back to Jefferson’s Democratic Republicans’ opposition to Adams’s Federalists, and the tension between the southwest frontiersman and the northeast banker would play a major role in the culture of self-reliance, the politics of the Jacksonian revolution onward, and the literature of Mark Twain and others. Both sides of the tension attempt to articulate in what the inalienable right to property inheres. Is it in the investment of funds and the legal buying and owning of land, or is it in the physical production of the land, perhaps in spite of whoever’s name is on the land grant or deed? Steinbeck is firmly in the latter camp.

However, in The Grapes of Wrath one finds not a continuation of the yeoman farmer mythos but an arguable undermining of the right to property and profit, itself, that undergirds the American milieu which makes the yeoman farmer possible, replacing it with an (albeit understandable) “right” based not on production and legal ownership, but on need. “Fallow land’s a sin,” is a consistent motif in The Grapes of Wrath, especially, argue the characters, when there are so many who are hungry and could otherwise eat if allowed to plant on the empty land. Steinbeck does an excellent job effecting sympathy for the Joads and other characters who, having worked the soil their whole lives, must now compete with hundreds of others like them for jobs paying wages that, due to the intended abundance of applicants, fall far short of what is needed to fill their families’ stomachs.

Similarly, Steinbeck goes to great pains to describe the efforts of landowners to keep crop prices up by punishing attempts to illegally grow food on the fallow land or pick the fruit left to rot on trees, as well as the plot, narrowly evaded by the Joads, to eradicate “reds” trying to foment revolution in one of the Hoovervilles of the book (Tom Joad had, in fact, begun to advocate rising up against landowners in more than one instance). In contrast to the Hoovervilles and the depredations of locals against migrant Okies stands the government camp, safely outside the reach of the local, unscrupulous, anti-migrant police and fitted out with running water, beneficent federal overseers, and social events. In a theme reminiscent of the 19th-century farmers’ looking to the federal government for succor amidst an industrializing market, Steinbeck concretizes the relief experienced in the Great Depression by families like the Joads at the prospects of aid from Washington.

However, just as Rand’s depiction of early twentieth-century America is selective in its representation of the self-made-man ethos of her characters (Rand omits, completely, World War I and the 1929 stock market crash from her novels), Steinbeck’s representation of the Dust Bowl is selective in its omissions. The profit-focused prohibitions against the Joads’ working the land were, in reality, policies required by FDR’s New Deal programs—specifically the Agricultural Adjustment Act, which required the burning of crops and the burying of livestock in mass graves to maintain crop prices and which was struck down in 1936 by the Supreme Court. It is in Steinbeck’s description of this process, which avoids explicitly describing the federal government’s role therein, that one encounters the phrase “grapes of wrath,” presaging a presumable event—an uprising?—by the people: “In the souls of the people the grapes of wrath are filling and growing heavy, growing heavy for the vintage.” Furthermore, while Rand presents, if in the hypothetical terms of narrative, how something as innocuous and inevitable as a broken wire in the middle of a desert can have ramifications that reach all the way to its company’s highest chair, Steinbeck’s narrative remains focused on the Joads, rarely touching on the economic exigencies experienced by the local property and business owners except in relation to the Joads and to highlight the apparent inhumanity of the propertied class (which, in such events as the planned fake riot at the government camp dance party, Steinbeck presents for great polemical effect).

I use “class” intentionally here: though the Great Depression affected all, Steinbeck’s characters often adopt the class-division viewpoint not only of Marx but of Hegel, interpreting the various landowners’ actions as being intentionally taken at the expense of the lower, out-of-work, classes. Tom Joad’s mother articulates to Tom why she is, ultimately, encouraged by, if still resentful of the apparent causers of, their lot:

“Us people will go on living when all them people is gone. Why, Tom, we’re the people that live. They ain’t gonna wipe us out. Why, we’re the people—we go on.”

“We take a beatin’ all the time.”

“I know.” Ma chuckled. “Maybe that makes us tough. Rich fellas come up an’ they die, an’ their kids ain’t no good, an’ they die out. But, Tom, we keep a-comin’. Don’ you fret none, Tom. A different time’s comin’.”

Describing, if in fewer words than either Hegel or Marx, the “thesis-antithesis-synthesis” process of historical materialism, whereby their class is steadily strengthened by its adverse circumstances in ways the propertied class is not, Mrs. Joad articulates an idea that pervades much of The Grapes of Wrath: the sense that the last, best hope and strength of the put-upon lower classes is found in their being blameless amidst the injustice of their situation, and that their numbers make their cause inevitable.

This, I submit, is as much a mythos—if a well-stylized and sympathetically presented one—as Rand’s depiction of the producer-trader who is punished for his or her ability to create, and, save for the discernible Marxist elements in Steinbeck, both are authentically American. Though the self-prescribed onus of late 19th- and early 20th-century literature was partially journalistic in aim, Steinbeck was nonetheless a novelist, articulating not merely events but the questions beneath those events and concretizing the perspectives and issues involved into characters and plots that create a story, in the folk fairy tale sense, a mythos that conveys a cultural identity. Against Rand’s modernizing of the self-made man Steinbeck resurrects the soul of the Grange Movement of farmers who, for all their work ethic and self-reliance, felt left behind by the very country they fed. That The Grapes of Wrath is polemical—from the Greek πολεμικός for “warlike” or “argumentative”—does not detract from the project (it may be an essential part of it). Indeed, for all the license and selectivity involved in the art form, nothing can give fuel to a cause like a polemical novel—as Uncle Tom’s Cabin, The Jungle, and many others show.

However, when it comes to assigning polemics to students without hesitation, I…hesitate. Again, the issue lies in recognizing (or, for most students, being told) that one is reading a polemic. When one reads a polemical novel, one is often engaging, in some measure, with politics dressed up as story, and it is through this lens and with this caveat that such works must be read—even (maybe especially!) when they are about topics with which one agrees. As in many things, I prefer to defer to Aristotle, who, in the third section of Book I of the Nicomachean Ethics, cautions against young people engaging in politics before they first learn enough of life to provide context:

Now each man judges well the things he knows, and of these he is a good judge. And so the man who has been educated in a subject is a good judge of that subject, and the man who has received an all-round education is a good judge in general. Hence a young man is not a proper hearer of lectures on political science; for he is inexperienced in the actions that occur in life, but its discussions start from these and are about these; and, further, since he tends to follow his passions, his study will be vain and unprofitable, because the end aimed at is not knowledge but action. And it makes no difference whether he is young in years or youthful in character; the defect does not depend on time, but on his living, and pursuing each successive object, as passion directs.

Of course, the implicit answer is to encourage young people (and ourselves) to read not less but more—and to read with the knowledge that their own interests, passions, neuroses, and inertias might be unseen participants in the process. Paradoxically, it may be by reading more that we can even start to read. Rand becomes much less profound, and perhaps more enjoyable, after one reads the Aristotle, Hugo, and Nietzsche who made her, and I certainly drew on American history (economic and political) and elements of continental philosophy, as well as other works of Steinbeck and the Lost Generation, when reading The Grapes of Wrath. Yet, as Aristotle implies, young people haven’t had the time—and, more importantly, the metaphysical and rhetorical training and self-discipline—to develop such reflection as readers (he said humbly and as a lifelong student, himself). Indeed, as an instructor I see this not as an obstacle but an opportunity—to teach students that there is much more to effective reading and understanding than they might expect, and that works of literature stand not as ancillary to the process of history but as loci of its depiction, reflection, and motivation.

Perhaps I’m exaggerating my case. I have, after all, taught polemical novels to students (Anthem among them, as well as, most recently, 1984 to a middle schooler), and a novel I’ve written and am trying to get published is, itself, at least partially polemical on behalf of keeping Shakespeare in the university curriculum. Indeed, Dostoevsky’s polemical burlesque of the psychology behind Russian socialism, Devils, or The Possessed, so specifically predicted the motives and method of the Russian Revolution (and any other socialist revolution) more than fifty years before it happened that it should be required reading. Nonetheless, because the content and aim of a work require a different context for teaching, a unit on Devils or The Grapes of Wrath would look very different from one on, say, The Great Gatsby. While the latter definitely merits offering background to students, the former would need to include enough background on the history and perspectives involved for students to be able to recognize them. The danger of omitting background from Fitzgerald would be an insufficient understanding of and immersion in the plot; from Steinbeck, an insufficient knowledge of the limits of and possible counters to the argument.

Part of the power and danger of polemical art lies in its using a fictional milieu to carry an idea that is not meant to be taken as fiction. The willing suspension of disbelief that energizes the former is what allows the latter idea to slip in as palatable. This can produce one of at least two results, both, arguably, artistic aberrations: either the idea is caught and disbelief cannot be suspended, leaving the artwork feeling preachy or propagandistic, or the audience member gives him or herself over to the work completely and, through the mythic capability of the artistic medium, becomes uncritically possessed by the idea, deriving an identity from it while believing they are merely enjoying and defending what they believe to be great art. I am speaking from more than a bit of reflection: whenever I see some millennial on Twitter interpret everything through the lens of Harry, Ron, and Hermione, I remember mid-eye-roll that I once did the same with Dagny, Francisco, and Hank.

Every work of art involves a set of values it seeks to concretize and communicate in a certain way, and one culture’s mythos may be taken by a disinterested or hostile observer to be so much propaganda. Because of this, even what constitutes a particular work as polemical may, itself, be a matter of debate, if not personal taste. One can certainly read and gain much from any of the books I’ve mentioned (as The Grapes of Wrath‘s Pulitzer Prize shows), and, as I said, I’m coming at Grapes with the handicap of its being my first read. I may very well be doing what I warn my students against doing, passing judgment on a book before I understand it; if I am, I look forward to experiencing a well-deserved facepalm moment in the future, which I aim to accelerate by reading the rest of Steinbeck’s work (Cannery Row is next). But this is, itself, part of the problem—or boon—of polemics: that to avoid a premature understanding one must intentionally seek to nuance one’s perspective, both positively and negatively, with further reading.

Passively reading Atlas Shrugged or The Grapes of Wrath, taking them as reality, and then interpreting all other works (and, indeed, all of life) through their lens is not dangerous because they aren’t real, but because within the limits of their selective stylization and values they are real. That is what makes them so powerful, and, as with anything powerful, one must learn how to use them responsibly—and be circumspect when leading others into them without also ensuring they possess the discipline proper to such works.



Eve: The Prototype of the Private Citizen

Written in the 1660s, John Milton’s Paradise Lost is the type of book I imagine one could spend a lifetime mining for meaning and still be left with something to learn. Conceived as an English Epic that uses the poetic forms and conventions of Homeric and Ovidic antiquity to present a Christian subject, it yields as much to the student of literature as it does to students of history and politics, articulating in its retelling of the Fall many of the fundamental questions at work in the post-Civil-War body politic of the preceding decade (among many other things). Comparable with Dante’s Inferno in form, subject, and depth, Paradise Lost offers—and requires—much to and from its readers, and it is one of the deepest and most complex works in the English canon. I thank God Milton did not live a half century earlier or write plays, else I might have to choose between him and Shakespeare—because I’d hesitate to simply pick Shakespeare.

One similarity between Milton and Shakespeare that has import to today’s broader discussion involves the question of whether they present their female characters fairly, believably, and admirably, or merely misogynistically. Being a Puritan Protestant from the 1600s writing an Epic verse version of Genesis 1-3, Milton must have relegated Eve to a place of silent submission, no? This was one of the questions I had when I first approached him in graduate school, and, as I had previously found when approaching Shakespeare and his heroines with the same query, I found that Milton understood deeply the gender politics of Adam and Eve, and he had a greater respect for his heroine than many current students might imagine.

I use “gender politics” intentionally, for it is through the different characterizations of Adam and Eve that Milton works out the developing conception of the citizen in an England that had recently executed its own king. As I’ve written in my discussion of Shakespeare’s history plays, justified or not, regicide has comprehensive effects. Thus, the beheading of Charles I on 30 January 1649 had implications for all 17th-century English citizens, many of which were subsequently explored by writers like Margaret Cavendish and John Locke. At issue was the question of the individual’s relation to the monarch: does the citizen’s political identity inhere in the king or queen (Cavendish’s perspective), or does he or she exist as a separate entity (Locke’s)? Are they merely “subjects” in the sense of “the king’s subjects,” or are they “subjects” in the sense of being active agents with individual perspectives that matter? Is it Divine Right, conferred on and descended from Adam, that makes a monarch, or is it the consent of the governed, of which Eve was arguably the first among mankind?

Before approaching such topics in Paradise Lost, Milton establishes the narrative framework of creation. After an initial prologue that pays homage to the classical invocation of the Muses even as it undercuts the pagan tradition and places it in an encompassing Christian theology (there are many such nuances and tensions throughout the work), Milton’s speaker introduces Satan, né Lucifer, having just fallen with his third of heaven after rebelling against the lately announced Son. Thinking, as he does, that the Son is a contingent being like himself (rather than a non-contingent being coequal with the Father, as the Son is shown to be in Book III), Satan has failed to submit to a rulership he does not believe legitimate. He thus establishes one of the major themes of Paradise Lost: the tension between the individual’s will and God’s. Each character’s conflict inheres in whether they will choose to remain where God has placed them—which invariably involves submitting to an authority that, from their limited perspective, they do not believe deserves their submission—or whether they will reject it and prefer their own apparently more rational interests. Before every major character—Satan, Adam, and Eve—is a choice between believing in the superior good of God’s ordered plan and pursuing the seemingly superior option of their individual desires.

Before discussing Eve, it is worth looking at her unheavenly counterpart, Sin. In a prefiguration of the way Eve was formed out of Adam before the book’s events, Sin describes to Satan how she was formed Athena-style out of his head when he chose to rebel against God and the Son, simultaneously being impregnated by him and producing their son, Death. As such she and Satan stand as a parody not only of the parent-progeny-partner relationship of Adam-Eve but also of God and the Son. Describing her illicit role in Lucifer’s rebellion, Sin says that almost immediately after birth,

I pleased and with attractive graces won
The most averse (thee chiefly) who full oft
Thyself in me thy perfect image viewing
Becam’st enamoured and such joy thou took’st
With me in secret that my womb conceived
A growing burden.

Paradise Lost II.761-767

In here and other places, Sin shows that her whole identity is wrapped up in Satan, her father-mate. In fact, there is rarely any instance where she refers to herself without also referring to him for context or as a counterpoint. Lacking her own, private selfhood from which she is able to volitionally choose the source of her identity and meaning, Sin lives in a state of perpetual torment, constantly being impregnated and devoured by the serpents and hellhounds that grow out of her womb.

Sin’s existence provides a Dantean concretization of Satan’s rebellion, which is elsewhere presented as necessarily one of narcissistic solipsism—a greatness derived from ignoring knowledge that might contradict his supposed greatness. A victim of her father-mate’s “narcissincest” (a term I coined for her state in grad school), Sin is not only an example of the worst state possible for the later Eve, but also, according to many critics, of women in 17th-century England, both in relation to their fathers and husbands, privately, as well as to the monarch (considered by many the “father of the realm”), publicly. Through this reading, we can see Milton investigating, through Sin, not only the theology of Lucifer’s fall, but also an extreme brand of royalism assumed by many at the time. And yet, it is not merely a simple criticism of royalism, per se: though Milton, himself, wrote other works defending the execution of Charles I and eventually became a part of Cromwell’s government, it is with the vehicle of Lucifer’s rebellion and Sin—whose presumptions are necessarily suspect—that he investigates such things (not the last instance of his work being as complex as the issues it investigates).

After encountering the narcissincest of the Satan-Sin relationship in Book II we are treated to its opposite in the next: the reciprocative respect between the Father and the Son. In what is, unsurprisingly, one of the most theologically-packed passages in Western literature, Book III seeks to articulate the throneroom of God, and it stands as the fruit of Milton’s study of scripture, soteriology, and the mysteries of the Incarnation, offering, perhaps wisely, as many questions as answers for such a scene. Front and center is, of course, the relationship between the Son and Father, Whose thrones are surrounded by the remaining two thirds of the angels awaiting what They will say. The Son and Father proceed to narrate to Each Other the presence of Adam and Eve in Eden and Satan’s approach thereunto; They then discuss what will be Their course—how They will respond to what They, omniscient, already know will happen.

One major issue Milton faced in representing such a discussion is the fact that it is not really a discussion—at least, not dialectically. Because of the triune nature of Their relationship, the Son already knows what the Father is thinking; indeed, how can He do anything but share His Father’s thoughts? And yet, the distance between the justice and foresight of the Father (in no way lacking in the Son) and the mercy and love of the Son (no less shown in the words of the Father) is managed by the frequent use of the rhetorical question. Seeing Satan leave Hell and the chaos that separates it from the earth, the Father asks:

Only begotten Son, seest thou what rage
Transports our Adversary whom no bounds
Prescribed, no bars…can hold, so bent he seems
On desperate revenge that shall redound
Upon his own rebellious head?

—Paradise Lost III.80-86

The Father does not ask the question to mediate the Son’s apparent lack of knowledge, since, divine like the Father, the Son can presumably see what He sees. Spoken in part for the sake of those angels (and readers) who do not share Their omniscience, the rhetorical questions between the Father and Son assume knowledge even while they posit different ideas. Contrary to the solipsism and lack of sympathy between Sin and Satan (who at first does not even recognize his daughter-mate), Book III shows the mutual respect and knowledge of the rhetorical questions between the Father and Son—who spend much of the scene describing Each Other and Their motives (which, again, are shared).

The two scenes between father figures and their offspring in Books II and III provide a backdrop for the main father-offspring-partner relationship of Paradise Lost: that of Adam and Eve—with the focus, in my opinion, on Eve. Eve’s origin story is unique in Paradise Lost: while she was made out of Adam and derives much of her joy from him, she was not initially aware of him at her nativity, and she is, thus, the only character who has experienced and can remember (even imagine) existence independent of a source.

Book IV opens on Satan reaching Eden, where he observes Adam and Eve and plans how to best ruin them. Listening to their conversation, he hears them describe their relationship and their respective origins. Similar to the way the Father and Son foreground their thoughts in adulatory terms, Eve addresses Adam as, “thou for whom | And from whom I was formed flesh of thy flesh | and without whom am to no end, my guide | And head” (IV.440-443). While those intent on finding sexism in the poem will, no doubt, jump at such lines, Eve’s words are significantly different from Sin’s. Unlike Sin’s assertion of her being a secondary “perfect image” of Satan (wherein she lacks positive subjectivity), Eve establishes her identity as being reciprocative of Adam’s in her being “formed flesh,” though still originating in “thy flesh.” She is not a mere picture of Adam, but a co-equal part of his substance. Also, Eve diverges from Sin’s origin-focused account by relating her need of Adam for her future, being “to no end” without Adam; Eve’s is a chosen reliance of practicality, not an unchosen one of identity.

Almost immediately after describing their relationship, Eve recounts her choice of being with Adam—which necessarily involves remembering his absence at her nativity. Hinting that were they to be separated Adam would be just as lost as she, if not more so (an idea inconceivable between Sin and Satan, and one foreshadowing Eve’s justification in Book IX for sharing the fruit with Adam, who finds himself in an Eve-less state), she continues her earlier allusion to being separated from Adam, stating that, though she has been made “for” Adam, he a “Like consort to [himself] canst nowhere find” (IV.447-48). Eve then remembers her awakening to consciousness:

That day I oft remember when from sleep
I first awaked and found myself reposed
Under a shade on flow’rs, much wond’ring where
And what I was, whence thither brought and how.

Paradise Lost IV.449-452

Notably seeing her origin as one not of flesh but of consciousness, she highlights that she was alone. That is, her subjective awareness preexisted her understanding of objective context. She was born, to use a phrase by another writer of Milton’s time, tabula rasa, without either previous knowledge or a mediator to grant her an identity. Indeed, perhaps undercutting her initial praise of Adam, she remembers it “oft”; were this not an image of the pre-Fall marriage, one might imagine the first wife wishing she could take a break from her beau—the subject of many critical interpretations! Furthermore, Milton’s enjambment allows a dual reading of “from sleep,” as if Eve remembers that day as often as she is kept from slumber—very different from Sin’s inability to forget her origin due to the perpetual generation and gnashing of the hellhounds and serpents below her waist. The privacy of Eve’s nativity so differs from Sin’s public birth before all the angels in heaven that Adam—her own father-mate—is not even present; thus, Eve is able to consider herself without reference to any other. Of the interrogative words with which she describes her post-natal thoughts— “where…what…whence”—she does not question “who,” further showing her initial isolation, which is so defined that she initially cannot conceive of another separate entity.

Eve describes how, hearing a stream, she discovered a pool “Pure as th’ expanse of heav’n” (IV.456), which she subsequently approached and, Narcissus-like, looked down into.

As I bent down to look, just opposite
A shape within the wat’ry gleam appeared
Bending to look on me. I started back,
It started back, but pleased I soon returned,
Pleased it returned as soon with answering looks
Of sympathy and love.

Paradise Lost IV.460-465

When she discovers the possibility that another person might exist, it is, ironically, her own image in the pool. In Eve, rather than in Sin or Adam, we are given an image of self-awareness, without reference to any preceding structural identity. Notably, she is still the only person described in the experience—as she consistently refers to the “shape” as “it.” Eve’s description of the scene contains the actions of two personalities with only one actor; that is, despite there being correspondence in the bending, starting, and returning, and in the conveyance of pleasure, sympathy, and love, there is only one identity present. Thus, rather than referring to herself as an image of another, as does Sin, it is Eve who is here the original, with the reflection being the image, inseparable from herself though it be. Indeed, Eve’s nativity thematically resembles the interaction between the Father and the Son, who, though sharing the same omniscient divinity, converse from seemingly different perspectives. Like the Father Who instigates interaction with His Son, His “radiant image” (III.63), in her first experience Eve has all the agency.

As the only instance in the poem when Eve has the preeminence of being another’s source (if only a reflection), this scene invests her interactions with Adam with special meaning. Having experienced this private moment of positive identity before following the Voice that leads her to her husband, Eve is unique in having the capacity to agree or disagree with her seemingly new status in relation to Adam, having remembered a time when it was not—a volition unavailable to Sin and impossible (and unnecessary) to the Son.

And yet, this is the crux of Eve’s conflict: will she continue to heed the direction of the Voice that interrupted her Narcissus-like fixation at the pool and submit herself to Adam? The ambivalence of her description of how she would have “fixed | Mine eyes till now and pined with vain desire,” over her image had the Voice not come is nearly as telling as is her confession that, though she first recognized Adam as “fair indeed, and tall!” she thought him “less fair, | Less winning soft, less amiably mild | Than that smooth wat’ry image” (IV.465-480). After turning away from Adam to return to the pool and being subsequently chased and caught by Adam, who explained the nature of their relation—how “To give thee being I lent | Out of my side to thee, nearest my heart, | Substantial life to have thee by my side”—she “yielded, and from that time see | How beauty is excelled by manly grace | And wisdom which alone is truly fair” (IV. 483-491). One can read these lines at face value, hearing no undertones in her words, which are, after all, generally accurate, Biblically speaking. However, despite the nuptial language that follows her recounting of her nativity, it is hard for me not to read a subtle irony in the words, whether verbal or dramatic. That may be the point—that she is not an automaton without a will, but a woman choosing to submit, whatever be her personal opinion of her husband.

Of course, the whole work must be read in reference to the Fall—not merely as the climax which is foreshadowed throughout, but also as a condition necessarily affecting the writing and reading of the work, it being, from Milton’s Puritan Protestant perspective, impossible to correctly interpret pre-Fall events from a post-Fall state due to the noetic effects of sin. Nonetheless, in keeping with the generally Arminian tenor of the book—that every character must have a choice between submission and rebellion for their submission to be valid, and that the grace promised in Book III is “Freely vouchsafed” and not based on election (III.175)—I find it necessary to keep in mind, as Eve seems to, the Adam-less space that accompanied her nativity. Though one need not read all of her interaction with Adam as sarcastic, in most of her speech one can read a subtextual pull back to the pool, where she might look at herself, alone.

In Eve we see the fullest picture of what is, essentially, every key character’s (indeed, from Milton’s view, every human’s) conflict: to choose to submit to an assigned subordinacy or abstinence against the draw of a seemingly more attractive alternative, often concretized in what Northrop Frye calls a “provoking object”—the Son being Satan’s, the Tree Adam’s, and the reflection (and private self it symbolizes, along with an implicit alternative hierarchy with her in prime place) Eve’s. In this way, the very private consciousness that gives Eve agency is that which threatens to destroy it; though Sin lacks the private selfhood possessed by Eve, the perpetual self-consumption of her and Satan’s incestuous family allegorizes the impotent and illusory self-returning that would characterize Eve’s existence if she were to return to the pool. Though she might not think so, anyone who knows the myth that hers parallels knows that, far from limiting her freedom, the Voice that called Eve from her first sight of herself rescued her from certain death (though not for long).

The way Eve’s subjectivity affords her a special volition connects with the biggest questions of Milton’s time. Eve’s possessing a private consciousness from which she can consensually submit to Adam parallels John Locke’s “Second Treatise on Civil Government” of the same century, wherein he articulates how the consent of the governed precedes all claims of authority. Not in Adam but in Eve does Milton show that monarchy—even one as divine, legitimate, and absolute as God’s—relies on the volition of the governed, at least as far as the governed’s subjective perception is concerned. Though she cannot reject God’s authority without consequence, Eve is nonetheless able to agree or disagree with it, and through her Milton presents the reality that outward submission does not eliminate inward subjectivity and personhood (applicable as much to marriages as to monarchs, the two being considered parallel both in the poem and at the time of its writing); indeed, the inalienable presence of the latter is what gives value to the former and separates it from the agency-less state pitifully experienced by Sin.

And yet, Eve’s story (to say nothing of Satan’s) also stands as a caution against simply taking on the power of self-government without circumspection. Unrepentant revolutionary though he was, Milton was no stranger to the dangers of a quickly and simply thrown-off government, nor of an authority misused, and his nuancing of the archetype of all subsequent rebellions shows that he did not advocate rebellion as such. While Paradise Lost has influenced many revolutions (political in the 18th-century revolutions, artistic in the 19th-century Romantics, cultural in the 20th-century New Left), it nonetheless has an anti-revolutionary current. Satan’s presumptions and their later effects on Eve show the self-blinding that is possible for those who, simply trusting their own limited perception, push for an autonomy they believe will liberate them to an unfettered reason but which will, in reality, condemn them to a solipsistic ignorance.

By treating Eve, not Adam, as the everyman character who, like the character of a morality play, represents the psychological state of the tempted individual—that is, as the character with whom the audience is most intended to sympathize—Milton elevates her to the highest status in the poem. Moreover—and of special import to Americans like myself—as an articulation of an individual citizen whose relation to authority does not exist without consent, Eve stands as a prototype of the post-17th-century conception of the citizen that would lead not only to further changes between the British Crown and Parliament but also to a war for independence in the colonies. Far from relegating Eve to a secondary place of slavish submission, Milton arguably makes her the most human character in humanity’s first story; wouldn’t that make her its protagonist? As always, let this stimulate you to read it for yourself and decide. Because it integrates so many elements—many of which might defy new readers’ expectations in their complexity and nuance—Paradise Lost belongs as much on the bookshelf and the syllabus as Shakespeare’s Complete Works, and it presents a trove for those seeking to study the intersection not only of art, history, and theology, but also of politics and gender roles in a culture experiencing fundamental change.


