Daniel Hawker

In Defence of Marriage

In our 21st Century society, the concepts of love and commitment in relationships have become twisted from what they originally meant to older generations. With the rise of social media (and dating apps in particular), people can form many simultaneous online connections with people they know next to nothing about, then end the messaging and simply forget about them; this is, in my opinion, neither a reliable nor a realistic way to find a compatible partner – we fall in love with souls, personalities and imperfections, not the photoshopped images someone wants us to associate them with.

But putting aside the downsides and problems of technological romance, we need to focus on the root of the bigger problem: many young people have become disillusioned with the idea of marriage, viewing it as an outdated and irrelevant institution with no real place in 21st Century life. Far from the high esteem in which our ancestors held this tradition, millennials today feel that there is no real point, that you can live happily and contentedly with your partner without vows needing to be taken.

But why have attitudes towards marriage changed so much? This can partly be blamed on the economic situation this generation finds itself in compared to that of its parents or grandparents – young people today are the first generation to be less well-off than their parents. Among many millennials, marriage remains the desired outcome for their relationship but simply isn’t financially realistic. In contrast to past generations, where all socio-economic groups married at roughly the same rate, today marriage is more prevalent among those with higher incomes and levels of education. Societal ideas of family and sex also contribute: the ever-growing “spectrum” of gender identities, the decline of the nuclear family in Britain, and the rejection of the importance of values and beliefs in a relationship.

Young people nowadays find themselves wandering aimlessly in the world of dating, unsure of what sort of person they want to spend their life with, with only vague notions of appearance and personality. When they DO find someone, whether through a screen or in person, the concept of marriage and lifelong commitment is a difficult one to approach, especially if you fear losing the person. Whilst this may indeed be a difficult topic to broach, it’s an extremely important one: if you want to marry, and believe yourself to have found a potential future spouse, you should declare your intentions early on – the longer you leave it, the harder it gets.

Many young people nowadays don’t seek a long-term commitment, however, instead opting for casual flings: hook-ups based on shared physical attraction and temporary pleasure. This ‘hook-up culture’ has risen in popularity thanks to the media and its portrayals on television: scenes of clubbing into the early hours of the morning and waking up in the bed of someone you just met certainly attract many teens and young adults, and in doing so have stripped the act of sexual intercourse of any significance it may have had. In the past, this act was reserved for married couples, seen as more moral and pleasurable when conducted with someone you truly care for. Nowadays, it seems, people are perfectly willing to hand out sex to essentially anyone they find remotely attractive, discouraging the idea of long-term stable relationships (and marriages).

Continued mention of differences between the generations will undoubtedly raise questions over what has really changed in terms of attitudes towards marriage and family. Let’s explore.

Ever since religions have existed, marital practices and traditions have been detailed and carried out. Even in the late 1970s, religious ceremonies still accounted for 50% of all marriages in the UK (down from 80% in 1900), with the decline of religious affiliation, particularly among Christian denominations, often cited as a reason for marriage’s rejection by the young (indeed, only 1% of young people aged 18-24 identify as Church of England). Christian affiliation has fallen from 66% in 1983 to only 38% in 2019, whereas no religion has risen in that same time from 31% to 52%. Christian ideals of marriage, between a man and a woman and overseen by God, have certainly come to be seen as more traditional and unaccepting in recent decades, especially with the legalisation of gay marriage across much of the West.

In particular, greater acceptance of divorce as a concept has put people off standing at the altar. Not only has marriage as an idea suffered a decline in popularity over time, but the opposite can be said for divorce – invalidating and belittling the concept of marriage: people in modern Britain will stand before a minister and promise to be with their future spouse ‘till death do them part’, only to divorce them weeks later and repeat the same vows with another person.

Of course, part of this can be blamed on the mainstream media (gossip magazines especially) and their obsession with the high-stakes divorces of wealthy and well-known celebrities – Brangelina immediately spring to mind! But the speed at which you can go from announcing your intent to divorce to actually being divorced has also aided its popularity as an option: on average, a divorce can be legally finalised in 4-6 months, with one party then often receiving a sizeable settlement from the other.

Changing ideas about family and child-rearing have certainly been a large generational shift. The nuclear family (two married parents and their children living together) began to decline in the late 1960s and early 1970s, with many families nowadays consisting of half-siblings, step-siblings and step-parents, or just one parent. This decline has drastically altered children and young people’s views on the benefits of marriage: had they been born in the 1960s, they’d have seen their parents as a loving and dedicated unit, committed in their responsibilities as both spouses and parents (with the evidence showing that having married parents provides children with a more stable childhood than having parents who simply cohabit).

Nowadays, more and more children are growing up with their only perception of marriage coming from the media (many marriages ending in divorce, many couples not having children) or from parents who either aren’t married to each other or whose marriages have failed. This dramatic upheaval of the family structure has blinded younger generations to what marriage truly means, how it differs from cohabitation and how it changes you as a person. Add on top of that the fact that 42% of marriages in England and Wales end in divorce, and it’s no wonder young people get cold feet about the whole affair – if you saw your parents go through that, it definitely wouldn’t be an experience you’d want for yourself and your spouse, especially if you had children who could understand what was happening.

To be married to someone means to be dedicated to building a shared life together, committed to providing financially and emotionally and (ideally) wanting to have children and start a family. It’s the difference between referring to your significant other as your girlfriend, boyfriend or partner and referring to them as your husband or wife. So many dating relationships fail because the participants simply don’t have a plan or a desired outcome – often because they don’t want to commit to one specific goal (e.g. marriage), or are afraid to. Two people may share similar interests and hobbies and be physically attracted to each other, but at some point the tough questions need to be asked and the answers ironed out. What is the plan for this relationship? Do we share the same values (religious, moral, political)? Do we want children, and if so, how would we raise them religiously?

This may seem far too forward for the youth of today, who would rather focus only on one-night stands and shared hobbies, but figuring the important stuff out early on is crucial to not staying in dead-end relationships and instead finding your future spouse. To be married to someone means you want to protect them, commit to them and love them 100%. It is no wonder that studies have repeatedly found that (when all these factors are achieved) those in good marriages are on average happier, healthier and wealthier than those who aren’t.

A common rebuttal by the young to the benefits and joys of marriage is that you can live together perfectly happily in a relationship and NOT be married (and indeed, the freedom to live together out of wedlock is a common and easy alternative to marriage) – but after you take those vows and step back into your house, your life is bonded to another person’s, and the expectations, commitments and obligations you now gain are representative of that bond. Marriage is a symbol of your love and devotion, and of wanting to share everything you have with that person. Cohabitation may stem from the financial inability to rent alone, or from some other mutual need – marriage is, by definition, a commitment you make freely and willingly, knowing beforehand what will change and how your priorities will shift, whether towards children or work.

In a time of so much social and political change, with Black Lives Matter, Brexit and the growing transgender movement, this one staple of devotion and love ought to be pursued by more people, for the joys it can bring are unrivalled apart from having children. So, young people (I among you), I implore you to reject these fantasies of partying forever and seeking casual sex every night, and instead set yourselves the far greater and more fulfilling goal of getting married – your life, and the lives of your future spouse and children, will be infinitely better for it.



Atatürk: A Legacy Under Threat

The founders of countries occupy a unique position within modern society. They are often viewed either as heroic and mythical figures or as deeply problematic by today’s standards – take the obvious example of George Washington. Long held up by Americans as a man unrivalled in his courage and military strategy, he is now a figure of vilification for leftists, who are eager to point out his ownership of slaves.

Whilst many such figures face similar shaming nowadays, none are suffering complete erasure from their own society. That is the fate currently facing Mustafa Kemal Atatürk, whose era-defining liberal reforms and state secularism now pose a threat to Turkey’s authoritarian president, Recep Tayyip Erdoğan.

To understand the magnitude of Atatürk’s legacy, we must understand his ascent from soldier to president. For that, we must go back to the end of World War One, and Turkey’s founding.

The Ottoman Empire officially ended hostilities with the Allied Powers via the Armistice of Mudros (1918), which, amongst other things, completely demobilised the Ottoman army. Following this, British, French, Italian and Greek forces arrived in and occupied Constantinople, the Empire’s capital. Thus began the partitioning of the Ottoman Empire, which had existed since 1299: the Treaty of Sèvres (1920) ceded large amounts of territory to the occupying nations, primarily France and Great Britain.

Enter Mustafa Kemal, known years later as Atatürk. An Ottoman Major General and fervent anti-monarchist, he and his revolutionary organisation (the Committee of Union and Progress) were greatly angered by Sèvres, which partitioned portions of Anatolia, a peninsula that makes up the majority of modern-day Turkey. In response, they formed a revolutionary government in Ankara, led by Kemal.

Thus, the Turkish National Movement fought a four-year war against the invaders, eventually pushing back the Greeks in the West, the Armenians in the East and the French in the South. Following a threat by Kemal to invade Constantinople, the Allies agreed to peace, with the Treaty of Kars (1921) establishing borders, and the Treaty of Lausanne (1923) officially settling the conflict. Finally free from fighting, Turkey declared itself a republic on 29 October 1923, with Mustafa Kemal as president.

His rule of Turkey began with a radically different set of ideological principles to the Ottoman Empire – life under a Sultan had been overtly religious, socially conservative and multi-ethnic. By contrast, Kemalism was best represented by the Six Arrows: Republicanism, Populism, Nationalism, Laicism, Statism and Reformism. Let’s consider the four most significant.

We’ll begin with Laicism. Believing Islam’s presence in society to have been impeding national progress, Atatürk set about fundamentally changing the role religion played both politically and societally. The Caliph, believed to be the spiritual successor to the Prophet Muhammad, was deposed. In his place came the Directorate of Religious Affairs, or Diyanet – through its control of all Turkey’s mosques and religious education, it ensured Islam’s subservience to the State.

Under a new penal code, all religious schools and courts were closed, and the wearing of headscarves was banned for public workers. However, the real nail in the coffin came in 1928: that was when an amendment to the Constitution removed the provision declaring that the “Religion of the State is Islam”.

Moving onto Nationalism. With its roots in the social contract theories of thinkers like Jean-Jacques Rousseau, Kemalist nationalism defined the social contract as its “highest ideal” following the Empire’s collapse – a key example of the failures of a multi-ethnic and multi-cultural state.

The 1930s saw the Kemalist definition of nationality integrated into the Constitution, legally defining every citizen as a Turk, regardless of religion or ethnicity. Despite this, however, Atatürk fiercely pursued a policy of forced cultural conformity (Turkification), similar to that of the Russian Tsars in the previous century. Both regimes had the same aim – the creation and survival of a homogeneous and unified country. As such, non-Turks were pressured into speaking Turkish publicly, and those with minority surnames had to change them to ‘Turkified’ forms.

Now Reformism. A staunch believer in both education and equal opportunity, Atatürk made primary education free and compulsory, for both boys and girls. Alongside this came the opening of thousands of new schools across the country. The results are undeniable: between 1923 and 1938, the number of students attending primary school increased by 224%, and middle-school attendance rose 12.5-fold.

Staying true to his belief in equal opportunity, Atatürk enacted monumentally progressive reforms in the area of women’s rights. For example, 1926 saw a new civil code, and with it came equal rights for women concerning inheritance and divorce. In many of these gender reforms, Turkey was well ahead of other Western nations: Turkish women gained the vote in 1930, followed by universal suffrage in 1934. By comparison, France passed universal suffrage in 1945, Canada in 1960 and Australia in 1967. Fundamentally, Atatürk didn’t see Turkey truly modernising whilst Ottoman gender segregation persisted.

Lastly, let’s look at Statism. As both president and leader of the People’s Republican Party, Atatürk was essentially unquestioned in his control of the State. However, despite his dictatorial tendencies (primarily the purging of political enemies), he was firmly opposed to dynastic rule, as had been the case with the Ottomans.

But under Recep Tayyip Erdoğan, all of this could soon be gone.

Having been a high-profile political figure for 20 years, Erdoğan has cultivated a positive image domestically, one focused on his support for public religion and Turkish nationalism, whilst internationally, he’s received far more negative attention focused on his growing authoritarian behaviour. Regarded widely by historians as the very antithesis of Atatürk, Erdoğan’s pushback against state secularism is perhaps the most significant attack on the founder’s legacy.

This has been most clearly displayed within the education system. 2017 saw a radical shift in school curriculums across Turkey, with references to Charles Darwin’s theory of evolution being greatly reduced. Meanwhile, the number of religious schools has increased exponentially, promoting Erdoğan’s professed goal of raising a “pious generation of Turks”. Additionally, the Diyanet under Erdoğan has seen a huge increase in its budget, and with the launch of Diyanet TV in 2012, has spread Quranic education to early ages and boarding schools.

The State has roles to play in society, but depriving schoolchildren of vital scientific information and funding religious indoctrination is beyond outrageous. Soner Cagaptay, author of The New Sultan: Erdoğan and the Crisis of Modern Turkey, referred to the changes as “a revolution to alter public education to assure that a conservative, religious view of the world prevails”.

There are other warning signs more broadly, however. The past 20 years have seen the headscarf make a gradual reappearance in Turkish life, with Erdoğan having first campaigned on the issue back in 2007, during his first run for the presidency. Furthermore, Erdoğan’s Justice and Development Party (AKP), with its strong base of support amongst extremely orthodox Muslims, has faced repeated accusations of being an Islamist party – as per the constitution, no party can “claim that it represents a form of religious belief”.

Turkish women, despite being granted legal equality by Atatürk, remain the regular victims of sexual harassment, employment discrimination and honour killings. Seemingly intent on destroying all the positive achievements of the founder, Erdoğan withdrew from the Istanbul Convention (which requires parties to investigate, punish and crack down on violence against women) in March 2021.

All of these reversals of Atatürk’s policies reflect the larger-scale attempt to delete him from Turkey’s history. His image is now a rarity in school textbooks, at national events, and on statues; his role in Turkey’s founding has been criminally downplayed.

President Erdoğan presents an unambiguous threat to the freedoms of the Turkish people, through both his ultra-Islamic policies and authoritarian manner of governance. Unlike Atatürk, Erdoğan seemingly has no problems with ruling as an immortal dictator, and would undoubtedly love to establish a family dynasty. With no one willing to challenge him, he appears to be dismantling Atatürk’s reforms one law at a time, reducing the once-mythical Six Arrows of Kemalism down to a footnote in textbooks.

A man often absent from the school curriculums of Western history departments, Mustafa Kemal Atatürk proved one of the most consequential leaders in both Turkish history, and the 20th Century. A radical and a revolutionary he may have been, but it was largely down to him that the Turkish people received a recognised nation-state, in which state secularism, high-quality education and equal civil rights were the norm.

In our modern world, so many of our national figures now face open vilification from the public and politicians alike. But for Turkey, future generations may grow up not even knowing the name or face of their George Washington. Whilst several political parties and civil society groups are pushing back against this anti-Atatürk agenda, the sheer determination displayed by Erdoğan shows how far Turks must yet go to preserve the founder’s legacy.



Charles’ Personal Rule: A Stable or Tyrannised England?

Within discussions of England’s political history, the most famous moments are known and widely discussed – the Magna Carta of 1215, and the Cromwell Protectorate of the 1650s spring immediately to mind. However, the renewal of an almost-mediaeval style of monarchical absolutism, in the 1630s, has proven both overlooked and underappreciated as a period of historical interest. Indeed, Charles I’s rule without Parliament has faced an identity crisis amongst more recent historians – was it a period of stability or tyranny for the English people?

If we are to consider the Personal Rule as a period in enough depth, the years leading up to the dissolution of Charles’ Third Parliament (in 1629) must first be understood. Succeeding his father James I in 1625, Charles had a personal style and vision of monarchy that would prove incompatible with the expectations of his Parliaments. Having enjoyed a strained but respectful relationship with James, MPs would come to question Charles’ authority and choice of advisors in the coming years. Indeed, it was Charles’ stubborn adherence to the doctrine of the Divine Right of Kings – he once wrote that “Princes are not bound to give account of their actions but to God alone” – that meant he believed compromise to be defeat, and any pushback against him to be a sign of disloyalty.

Constitutional tensions between King and Parliament proved the most contentious of all issues, especially regarding the King’s role in taxation. At war with Spain between 1625 and 1630 (and having just dissolved the 1626 Parliament), Charles was short of funds. Thus, he turned to non-parliamentary forms of revenue, notably the Forced Loan (1627): declaring a ‘national emergency’, Charles demanded that his subjects all make a gift of money to the Crown. Whilst theoretically optional, those who refused to pay were often imprisoned; the most notable example was the Five Knights’ Case, in which five knights were imprisoned for refusing to pay (with the court ruling in Charles’ favour). This would eventually culminate in Charles’ signing of the Petition of Right (1628), which protected the people from non-parliamentary taxation, as well as from other controversial powers that Charles chose to exercise, such as arrest without charge, martial law, and the billeting of troops.

The role played by George Villiers, the Duke of Buckingham, was another major factor that contributed to Charles’ eventual dissolution of Parliaments in 1629. Having dominated the court of Charles’ father, Buckingham came to enjoy a similar level of unrivalled influence over Charles as his de facto Foreign Minister. It was, however, in his position as Lord High Admiral that he further worsened Charles’ already-negative view of Parliament. Responsible for both major foreign policy disasters of Charles’ early reign (Cadiz in 1625, and La Rochelle in 1627, both of which achieved nothing and cost 5,000 to 10,000 lives), he was deemed by the MP Edward Coke to be “the cause of all our miseries”. The duke’s influence over Charles’ religious views also proved highly controversial – at a time when anti-Calvinism was rising, with critics such as Richard Montague and his pamphlets, Buckingham encouraged the King to continue his support of the leading anti-Calvinist of the time, William Laud, at the York House Conference in 1626.

Charles was heavily dependent on the counsel of Villiers until his assassination in 1628; it was, in fact, Parliament’s threat to impeach the Duke that encouraged Charles to agree to the Petition of Right. Fundamentally, Buckingham’s poor decision-making meant, in the end, serious criticism from MPs, and a King who believed this criticism to be Parliament overstepping the mark and questioning his choice of personnel.

Fundamentally, by 1629 Charles viewed Parliament as a method of restricting his God-given powers – one that had attacked his decisions, provided him with essentially no subsidies, and forced him to accept the Petition of Right. Writing years later, in 1635, the King claimed that he would do “anything to avoid having another Parliament”. Amongst historians, the significance of this final dissolution is fiercely debated: some, such as Angela Anderson, don’t see the move as unusual; there were, for example, seven years between two of James’ Parliaments (1614 and 1621) – at this point in English history, “Parliaments were not an essential part of daily government”. On the other hand, figures like Jonathan Scott viewed the principle of officially governing without Parliament as new – indeed, the decision was made official by a royal proclamation.

Now free of Parliamentary constraints, the first major issue Charles faced was his lack of funds. Lacking the usual taxation method and in desperate need of upgrading the English navy, the King revived ancient taxes and levies, the most notable being Ship Money. Originally a tax levied on coastal towns during wartime (to fund the building of fleets), Charles extended it to inland counties in 1635 and made it an annual tax in 1636. This inclusion of inland towns was construed as a new tax without parliamentary authorisation. For the nobility, Charles revived the Forest Laws (demanding landowners produce the deeds to their lands), as well as fines for breaching building regulations.

The public response to these new fiscal expedients was one of broad annoyance, but general compliance. Indeed, between 1634 and 1638, 90% of the expected Ship Money revenue was collected, providing the King with over £1m in annual revenue by 1637. Despite this, the Earl of Warwick questioned its legality, and the clerical leadership referred to all of Charles’ tactics as “cruel, unjust and tyrannical taxes upon his subjects”. However, the most notable case of opposition to Ship Money was the John Hampden case in 1637. A gentleman who refused to pay, Hampden argued that England wasn’t at war and that Ship Money writs gave subjects seven months to pay, enough time for Charles to call a new Parliament. Despite the Crown winning the case, it inspired greater widespread opposition to Ship Money, such as the 1639-40 ‘tax revolt’, involving non-cooperation from both citizens and tax officials. Opposing this view, however, stands Sharpe, who claimed that “before 1637, there is little evidence at least, that its [Ship Money’s] legality was widely questioned, and some suggestion that it was becoming more accepted”.

In terms of his religious views, both personally and his wider visions for the country, Charles had been an open supporter of Arminianism from as early as the mid-1620s – a movement within Protestantism that staunchly rejected the Calvinist teaching of predestination. As a result, the sweeping changes to English worship and Church government that the Personal Rule would oversee were unsurprisingly extremely controversial amongst his Calvinist subjects, in all areas of the kingdom. In considering Charles’ religious aims and their consequences, we must focus on the impact of one man, in particular, William Laud. Having given a sermon at the opening of Charles’ first Parliament in 1625, Laud spent the next near-decade climbing the ranks of the ecclesiastical ladder; he was made Bishop of Bath and Wells in 1626, of London in 1629, and eventually Archbishop of Canterbury in 1633. Now 60 years old, Laud was unwilling to compromise any of his planned reforms to the Church.

The overarching theme of Laudian reforms was ‘the Beauty of Holiness’, which had the aim of making churches beautiful and almost lavish places of worship (Calvinist churches, by contrast, were mostly plain, so as not to detract from worship). This was achieved through the restoration of stained-glass windows, statues, and carvings. Additionally, railings were added around altars, and priests began wearing vestments and bowing at the name of Jesus. However, the most controversial change to the church interior proved to be the communion table, which was moved from the middle of the room to the wall at the East end – a move “seen to be utterly offensive by most English Protestants as, along with Laudian ceremonialism generally, it represented a substantial step towards Catholicism. The whole programme was seen as a popish plot”.

Under Laud, the power and influence wielded by the Church also increased significantly – a clear example would be the fact that Church courts were granted greater autonomy. Additionally, Church leaders became ever more present as ministers and officials within Charles’ government, with the Bishop of London, William Juxon, appointed as Lord Treasurer and First Lord of the Admiralty in 1636. Furthermore, despite already having the full backing of the Crown, Laud was not one to accept dissent or criticism and, although the severity of his actions has been exaggerated by recent historians, he could be ruthless at times. The clearest example would be the torture and imprisonment of his most vocal critics in 1637: the religious radicals William Prynne, Henry Burton and John Bastwick.

However successful Laudian reforms may have been in England (and that statement is very much debatable), Laud’s attempt to enforce uniformity on the Church of Scotland in the latter half of the 1630s would see the emergence of a united Scottish opposition against Charles, and eventually armed conflict with the King, in the form of the Bishops’ Wars (1639 and 1640). The road to war was sparked by Charles’ introduction of a new Prayer Book in 1637, aimed at making English and Scottish religious practices more similar – this would prove beyond disastrous. Riots broke out across Edinburgh, the most notable being in St Giles’ Cathedral (where the bishop had to protect himself by pointing loaded pistols at the furious congregation). This displeasure culminated in the National Covenant in 1638 – a declaration of allegiance which bound together Scottish nationalism with the Calvinist faith.

Attempting to draw conclusions about Laudian religious reforms very much hinges on the fact that, in terms of his and Charles’ objectives, they did overhaul the Calvinist systems of worship, the role of priests, Church government, and the physical appearance of churches. The response from the public, however, ranging from silent resentment to full-scale war, displays how damaging these reforms were to Charles’ relationship with his subjects – coupled with the influence wielded by his wife Henrietta Maria, public fears about Catholicism greatly damaged Charles’ image, and meant religion was arguably the most intense issue of the Personal Rule. In judging Laud today, the historical debate has been split: certain historians focus on his radical uprooting of the established system, with Patrick Collinson suggesting the Archbishop to have been “the greatest calamity ever visited upon the Church of England”, whereas others view Laud and Charles as pursuing something entirely reasonable: a more orderly and uniform church.

Much like how the Personal Rule’s religious direction was very much defined by one individual, so was its political one, by Thomas Wentworth, later the Earl of Strafford. Serving as Lord Deputy of Ireland from 1632 to 1640, he set out with the aims of ‘civilising’ the Irish population, increasing revenue for the Crown, and challenging Irish titles to land – all under the umbrella term of ‘Thorough’, which aspired to concentrate power, crack down on opposition figures, and essentially preserve the absolutist nature of Charles’ rule during the 1630s.

Regarding Wentworth’s aims toward Irish Catholics, Ian Gentles’ 2007 work The English Revolution and the Wars in the Three Kingdoms argues that the friendships Wentworth maintained with Laud, and with John Bramhall, the Bishop of Derry, “were a sign of his determination to Protestantize and Anglicize Ireland”. Devoted to a Catholic crackdown as soon as he reached Irish shores, Wentworth would subsequently refuse to recognise the legitimacy of Catholic officeholders in 1634, and managed to reduce Catholic representation in Ireland’s Parliament by a third between 1634 and 1640 – this, at a time when Catholics made up 90% of the country’s population. An even clearer indication of Wentworth’s hostility to Catholicism was his aggressive policy of land confiscation. Challenging Catholic property rights in Galway, Kilkenny and other counties, Wentworth would bully juries into returning a King-favourable verdict, and even those Catholics who were granted their land back (albeit only three-quarters of it) were now required to make regular payments to the Crown. Wentworth’s enforcing of Charles’ religious priorities was further evidenced by his reaction to those in Ireland who signed the National Covenant. The accused were hauled before the Court of Castle Chamber (Ireland’s equivalent of the Star Chamber) and forced to renounce ‘their abominable Covenant’ as ‘seditious and traitorous’.

Seemingly in keeping with other figures of the Personal Rule, Wentworth was notably tyrannical in his governing style. Sir Piers Crosby and Lord Esmonde were convicted of libel by the Court of Castle Chamber for accusing Wentworth of involvement in the death of Esmonde’s relative, and Lord Valentia was sentenced to death for “mutiny” – in fact, he had merely insulted the Earl.

In considering Wentworth as a political figure, it is very easy to view him as merely another tyrannical brute, carrying out the orders of his King. Indeed, his time as Charles’ personal advisor (from 1639 onwards) certainly supports this view: he once told Charles that he was “loose and absolved from all rules of government” and was quick to advocate war with the Scots. However, Wentworth also saw great successes during his time in Ireland: he raised Crown revenue substantially by taking back Church lands, and purged the Irish Sea of pirates. Fundamentally, by the time of his execution in May 1641, Wentworth possessed a reputation amongst Parliamentarians much like that of the Duke of Buckingham; both men came to wield tremendous influence over Charles, as well as great offices and positions.

In the areas considered thus far, opposition to the Personal Rule appears to have been a rare occurrence, especially in any organised or effective form. Indeed, Durston claims the 1630s to have seen “few overt signs of domestic conflict or crisis”, viewing the period as altogether stable and prosperous. However, whilst certainly limited, the small amount of resistance can be viewed as representing a far more widespread feeling of resentment amongst the English populace. Whilst many royal actions received little pushback from the masses, the gentry, many of whom were becoming increasingly disaffected with the Personal Rule’s direction, gathered in opposition. Most notably, John Pym, the Earl of Warwick, and other figures collaborated with the Scots to launch a dissident propaganda campaign criticising the King, as well as encouraging local opposition (which saw some success, such as the mobilisation of the Yorkshire militia). Charles’ effective use of the Star Chamber, however, ensured that opponents, usually those who voiced criticism of royal decisions, were swiftly dealt with.

The historiographical debate surrounding the Personal Rule, and the Caroline Era more broadly, has been and continues to be dominated by Whig historians, who view Charles as foolish, malicious, and power-hungry, and his rule without Parliament as destabilising, tyrannical, and a threat to the people of England. A key proponent of this view is S.R. Gardiner who, believing the King to have been ‘duplicitous and delusional’, coined an alternative term for the Personal Rule – the Eleven Years’ Tyranny. This position survived into the latter half of the 20th Century, with Charles labelled by Barry Coward as “the most incompetent monarch of England since Henry VI”, and by Ronald Hutton as “the worst king we have had since the Middle Ages”.

Recent decades, however, have seen the attempted rehabilitation of Charles’ image by Revisionist historians, the most well-known, as well as most controversial, being Kevin Sharpe. Responsible for the landmark study of the period, The Personal Rule of Charles I (1992), Sharpe came to be Charles’ staunchest modern defender. In his view, the 1630s, far from a period of tyrannical oppression and public rebellion, were a decade of “peace and reformation”. During Charles’ time as an absolute monarch, his freedom from Parliamentary limits and regulations allowed him to achieve a great deal: Ship Money strengthened the Navy’s numbers, Laudian reforms meant a more ordered and regulated national church, and Wentworth dramatically raised Irish revenue for the Crown – all this, and much more, without any real organised or overt opposition figures or movements.

Understandably, the Sharpian view has received significant pushback, primarily for taking an overly optimistic view and selectively mentioning the Personal Rule’s positives. Encapsulating this criticism, David Smith wrote in 1998 that Sharpe’s “massively researched and beautifully sustained panorama of England during the 1630s … almost certainly underestimates the level of latent tension that existed by the end of the decade”. This has been built on by figures like Esther Cope: “while few explicitly challenged the government of Charles I on constitutional grounds, a greater number had experiences that made them anxious about the security of their heritage”.

It is worth noting, however, that a year before his death in 2011, Sharpe came to consider the views of his fellow historians, acknowledging that Charles’ lack of political understanding had endangered the monarchy, and that, more seriously, by the end of the 1630s the Personal Rule was indeed facing mounting and undeniable criticism, from both Charles’ court and the public.

Sharpe’s unpopular perspective has been built upon by other historians, such as Mark Kishlansky. Publishing Charles I: An Abbreviated Life in 2014, Kishlansky viewed parliamentarian propaganda of the 1640s, as well as centuries of consistent smearing by historians, as having resulted in Charles being viewed “as an idiot at best and a tyrant at worst”, labelling him “the most despised monarch in Britain’s historical memory”. Charles, however, received no real preparation for the throne – it was always his older brother Henry who was the heir apparent. Additionally, once King, Charles’ Parliaments were stubborn and uncooperative – by refusing to provide him with the necessary funding, for example, they forced him to enact the Forced Loan. Kishlansky does, however, concede the damage caused by Charles’ unmoving belief in the Divine Right of Kings: “he banked too heavily on the sheer force of majesty”.

Charles’ personality, ideology and early life fundamentally meant an icy relationship with Parliament, one which grew into mutual distrust and, eventually, dissolution. The period of Personal Rule remains a highly debated topic within academic circles, with the recent arrival of Revisionism posing a challenge to the long-established negative view of the Caroline Era. Whether the King’s financial, religious, and political actions were met with a discontented populace or outright opposition, it remains the case that the identity crisis facing the period – between tyranny and stability – has yet to be conclusively put to rest.



All States Desire Power: The Realist Perspective

Within the West, the realm of international theory has, since 1945, been a discourse dominated almost entirely by the Liberal perspective. Near-universal amongst the foreign policy establishments of Western governments, a focus on state cooperation, free-market capitalism and, more broadly, internationalism is really the only position held by most leaders nowadays – just look at ‘Global Britain’. As Francis Fukuyama noted, the end of the Cold War (and the Soviet Union) served as a political catalyst, bringing about ‘the universalisation of Western liberal democracy as the final form of human government’.

Perhaps even more impactful however, were the immediate post-war years of the 1940s. With the Continent reeling from years of physical and economic destruction, the feeling amongst the victors was understandably a desire for greater closeness, security and stability. This resulted in numerous alliances being formed, including political (the UN in 1945), military (NATO in 1949), and also economic (with the various Bretton Woods organisations). For Europe, this focus on integration manifested itself in blocs like the EEC and ECSC, which would culminate in the Maastricht Treaty and the EU.

This worldview, however, faces criticism from advocates of another: Realism. The concerns of states shouldn’t, as Liberals claim, be focused on forging stronger global ties or forming more groups – instead, nations should be domestically minded, concerned with their internal situation and safety. For Realism, this is what foreign relations are about: keeping to oneself, and furthering the interests of the nation above those of the wider global community.

To better understand Realism as an ideological school, we must first look to theories of human nature. From the perspective of Realists, the motivations and behaviour of states can be traced back to our base animalistic instincts, with the work of Thomas Hobbes being especially noteworthy. For the 17th Century thinker, before the establishment of a moral and ordered society (by the absolute Sovereign), Man is concerned only with surviving, protecting selfish interests and dominating other potential rivals. On a global scale, these are the priorities of nation-states and their leaders – Hans Morgenthau famously noted that political man was “born to seek power”, possessing a constant need to dominate others. However much influence or power a state may possess, self-preservation is always a major goal. Faced with the constant threat of rivals with opposing interests, states are always seeking a guarantee of protection – for Realists, the existence of intergovernmental organisations (IGOs) is an excellent example of this. Whilst NATO and the UN may seem the epitome of Liberal cooperation, what they truly represent is states ensuring their own safety.

One of the key pillars of Realism as a political philosophy is the concept of the Westphalian System, and how that relates to relationships between countries. Traced back to the Peace of Westphalia in 1648, the principle essentially asserts that all nation-states have exclusive control (absolute sovereignty) over their territory. For Realists, this has been crucial to their belief that states shouldn’t get involved in the affairs of their neighbours, whether that be in the form of economic aid, humanitarian intervention or furthering military interests. It is because of this system that states are perceived as the most important, influential and legitimate actors on the world stage: IGOs and other non-state bodies can be moulded and corrupted by various factors, including the ruthless self-interest of states.

With the unique importance of states enshrined within Realist thought, the resulting global order is one of ‘international anarchy’ – essentially a system in which state-on-state conflict is inevitable and frequent. The primary reason for this can be traced back to Hobbes’ 1651 work Leviathan: with no higher authority to enforce rules and settle disputes, people (and states) will inevitably come into conflict and lead ‘nasty, brutish and short’ existences (an idea further expanded upon in Hedley Bull’s The Anarchical Society). Left in a lawless situation, with neither guaranteed protection nor guaranteed allies (all states are, of course, potential enemies), it’s every man for himself. At this point, Liberals will be eager to point out supposed ‘checks’ on the power of nation-states. Whilst we’ve already tackled the Realist view of IGOs, the existence of international courts must surely hold rogue states accountable, right? Well, the sanctity of state sovereignty limits the power of essentially all such organisations: the rulings of the International Court of Justice, for instance, both lack enforcement and can be blatantly ignored (e.g. the court advised Israel against building a wall along the Palestinian border in 2004, advice of which the Israelis took no notice). Within the harsh world we live in, states are essentially free to do as they wish, consequences be damned.

Faced with egocentric neighbours, the inevitability of conflict and no referee, it’s no wonder states view power as the means of survival. Whilst Realists agree that all states seek to accumulate power (and hard military power in particular), there exists debate as to the ultimate aim of this accumulation. One perspective, posited by thinkers like John Mearsheimer (and Offensive Realists more broadly), suggests that states are concerned with becoming the undisputed hegemon within a unipolar system, where they face no danger – once the most powerful, your culture can be spread, your economy strengthened, and your interests more easily defended. Indeed, whilst the United States may currently occupy the position of hegemon, Mearsheimer (like many others) has been cautiously watching China – the CCP leadership clearly harbour dreams of world takeover.

Looking to history, the European empires of old were fundamentally creations of hegemonic ambition. Able to access the rich resources and unique climates of various lands, nations like Britain, Spain and Portugal possessed great international influence, and at various points, dominated the global order. Indeed, when the British Empire peaked in the early 1920s, it ruled close to 500 million people and covered a quarter of the Earth’s land surface, making it the largest empire in history. Existing during a period in which bloody, expensive wars were commonplace, these countries did what they believed necessary, rising to the top and brutally suppressing those who threatened their positions – regional control was ensured, and idealistic rebels brought to heel.

In stark contrast is the work of Defensive Realists, such as Kenneth Waltz, who suggest that, concerned more with security than global dominance, states accrue power to ensure their own safety and, far from lofty ideas of hegemony, favour a cautious approach to foreign policy. This kind of thinking was seen amongst ‘New Left’ Revisionist historians in the aftermath of the Cold War, for whom the narrative of Soviet continental dominance (through the takeover of Eastern Europe) was a myth. On this view, what Stalin truly desired was to solidify the USSR’s position through the creation of a buffer zone, in response to the increasingly anti-Soviet measures of President Truman (which included Marshall Aid to Europe, and the Truman Doctrine).

Considering Realism within the context of the 21st Century, the ongoing Russo-Ukrainian War seems the obvious case study to examine. Within academic circles, John Mearsheimer has been the most vocal regarding Ukraine’s current predicament – a fierce critic of American foreign policy for decades now, he views NATO’s eastern expansion as having worsened relations with Russia, and only served to fuel Putin’s paranoia. From Mearsheimer’s perspective, Putin’s ‘special military operation’ is therefore understandable and arguably justifiable: the West have failed to respect Russia’s sphere of influence, failed to acknowledge them as a fellow Great Power, and consistently thwarted any pursuits of their regional interests.

Alongside this, Britain’s financial involvement in this conflict can and should be viewed as willing intervention, one that is endangering the already-frail British economy. It is all well and good to speak of defending rights, democracy and Western liberalism, but there comes a point where our politicians and media must be reminded – the national interest is paramount, always. This need not be our fight, and the billions in aid money we’re providing the Ukrainians should instead be going towards the police, housing, strengthening the border, and other domestic issues.

Our politicians and policymakers may want a continuance of idealistic cooperation and friendly relations, but the brutal, unfriendly reality of the system is becoming unavoidable. Fundamentally, self-interested leaders and their regimes are constantly looking to gain more power, influence and territory. By and large, bodies like the UN are essentially powerless; decisions can’t be enforced, and sovereignty acts as an unbreachable barrier. Looking ahead to the UK’s future, we must be more selfish, focused on making British people richer and safer, and on our national interests over childish notions of eternal friendship.



The Monarchy is Britain’s Soul

With the accession of a new Sovereign and the recent controversy surrounding the coronation, the British republican movement has reared its ugly head once more, spearheading a renewed debate as to the Royal Family’s ‘relevance’ and ‘value for money’ in 2023. Throughout the day we were bombarded with news coverage of anti-monarchist activism, primarily from Republic and their leader Graham Smith. However, with their focus on democracy and the ‘need for modernisation’, left-wingers fail to fully appreciate the Monarchy’s national function.

Having existed since the kingdoms of Anglo-Saxon England, Britain’s constitutional monarchy has been able to develop organically and overcome numerous challenges (from wars and republican dictatorship, to callous individualists like Edward VIII). With a basis on preparing the heir apparent from birth, many of our kings and queens have been embodiments of duty and moral courage – the late Queen Elizabeth II being a prime example. Indeed, alongside an organic and family-based system comes an inherent sense of national familiarity and comfort – they provide the British people with a unifying and quasi-parental figure, and almost a sense of personal connection with the other royals.

As well as this, the institution acts as a crucial barrier against the danger of democratic radicals, and against the idiocy and ineptitude that emanates from the Commons. Our entire political class seek to further their own interests, and with the Lords having suffered terrible reforms under Blair, the Monarchy is left as the People’s last defence against the whims of power-hungry elites.

They also act as a link to Britain’s past and cultural heritage, and as a source of national continuity. The Monarchy embodies our religious character through the Church of England, as well as the nature of constitutional government through its different organs. As Sir Roger Scruton eloquently put it, it acts as ‘the voice of history’. This point speaks fundamentally to the Left’s opposition to the Monarchy’s continuation. They can shout about equality and elected decision-making, but their attack on the Royal Family is inherently an attack on Britain’s history, which they vehemently despise. They want to tear down Britain’s unifying soul and replace it with some soulless political office, one with no roots in national history or organic development.

The renowned Edmund Burke spoke of the need for national myths, a library of inspiring stories and a rich historical character. This is what maintains a nation’s identity and keeps the people united. It is for this reason (amongst others) that he so fiercely opposed the French Revolution, responding with Reflections on the Revolution in France in 1790. These idealist revolutionaries could topple the Bourbon dynasty and establish a new ‘progressive’ society, but based on what? What would these ‘unifying’ ideals be? Without a solid foundation that had developed and grown organically, what could people possibly hold onto?

Now, from the perspective of left-wingers, the transition to a republic would merely be a political one – simply making politics ‘more democratic and egalitarian’. A referendum would most likely be called, people would vote, and the Will of the People would be obeyed absolutely. Consider their preferred alternative, most likely a presidential system. We would be burdened, like so many nations, with yet another incompetent, weak, and self-interested hack at the top – an office created by and for the existing political class to monopolise, the final step in achieving a grey managerialist Britain.

But such an event would in truth represent so much more – a fundamental shift in Britain’s identity. Constitutional monarchy is our one national continuity and forms the basis of our mythos. All else is transient – politicians, the values of the day, social debates. Through the royals, Britons throughout the ages maintain a living link to past generations, and to our Anglo heritage as a people. Once again quoting Scruton, ‘they speak for something other than the present desires of present voters’, they are ‘the light above politics.’

The royals are especially important in Britain’s climate of national decline, with an assortment of failing institutions, from the NHS to the Civil Service to the police. It is increasingly evident that we require a national soul more than ever – to once again enshrine Britain’s history. We can’t survive on the contemporary values of ‘Diversity, Equality, and Inclusion’, on the NHS, on bureaucratisation, or on record-high immigration levels. A return to order and stability, faith and family, and aggressive nationalism is the only way forward – Britons need to feel safe, moral, unified, and proud.

This Third Carolean Era has the opportunity to revitalise the role monarchy plays in peoples’ lives. By making it more divine, more mystical – alongside a conservative revolution – we can ensure Britain’s soul remains whole and pure. 



Fukuyama, Huntington and The New World Order

In the aftermath of the Cold War, a 45-year ideological struggle between the two major superpowers, the USA and USSR, several political scholars have offered forecasts concerning the future of conflict and the geopolitical climate post-1991. Two men rose to dominate the debate, one encapsulating a liberal perspective and the other a realist one – and in the decades since, their ideas have come to form the foundations of modern international relations theory.

The first was the political scientist and economist Francis Fukuyama. A Cornell and Harvard alumnus, Fukuyama proposed his thesis in an essay titled ‘The End of History?’ (1989), and later expanded on it in his book The End of History and the Last Man (1992). Essentially, he posits that with the collapse of the Soviet Union came the resolution of the battle of ideas, with liberal democracy and free trade having emerged as the unchallengeable winners.

Society, according to Fukuyama, had reached the end of its ideological evolution – global politics has, since the fall of the USSR, been witnessing ‘the universalisation of Western liberal democracy as the final form of human government’. Indeed, we’ve certainly seen a massive increase in liberal democracies over the past few decades, jumping from 35 in 1974 to 120 in 2013 (or 60% of states). Additionally, the broad adoption of free trade and capitalism can be seen as delivering benefits to the global economy, which has quadrupled since the late 1990s.

Even communist states, Fukuyama said, would adopt some elements of capitalism in order to be prosperous in a globalised world economy. For example, the late 1970s saw reformists (such as Chen Yun) dominating the Chinese Communist Party and, under Deng Xiaoping’s leadership, the socialist market economy was introduced in 1978. This opened up the country to foreign investment, allowed private individuals to establish their own businesses, and privatised agriculture – these monumental reforms have resulted in spectacular economic growth, with many forecasters predicting that China will overtake the US as the world’s largest economy by around 2028. We’ve seen further evidence of this turn away from communism in favour of capitalism and freedom: upon its founding, the Russian Federation explicitly rejected the ideology, and many former Eastern Bloc states have enthusiastically adopted liberal democracy, with many also having since joined the European Union.

Regarding the example of China, however, the suppression of freedoms and rights has also been a staple of the CCP’s rule, especially under the current leadership of Xi Jinping. This links to a broader and fairly major critique of Fukuyama’s thesis: the growth of authoritarianism across the globe. With Law and Justice in Poland, Jair Bolsonaro in Brazil, and Rodrigo Duterte in the Philippines (not to mention various military coups, including in Turkey in 2016), liberal democracy is undeniably under threat, and clearly not the globally agreed-upon best system of government (this is particularly concerning as it applies to two major powers, China and Russia). Furthermore, 9/11 and the 7/7 bombings serve as harrowing examples of an ideological clash between Western liberalism and Islamic fundamentalism – more broadly, radical Islamism has emerged as an ideological challenger to both the West and to secular governments in the Middle East and North Africa.

The second was the academic and former political adviser Samuel P. Huntington. A seasoned expert in foreign policy (having served as the White House Coordinator of Security Planning for the National Security Council under Jimmy Carter), Huntington laid out essentially a counter-thesis to Fukuyama’s, which first took the form of a 1993 Foreign Affairs article, and then a book in 1996, The Clash of Civilisations and the Remaking of World Order. Conflicts in the past, Huntington argues, had been motivated primarily by a desire for territorial gain and geopolitical influence (e.g. the colonial wars of the nineteenth and twentieth centuries were attempts to expand the economic spheres of influence of Western imperialist powers).

However, in the 21st Century, the primary source of global conflict will be cultural, not political or economic (and will be primarily between Western and non-Western civilisations). Thanks to globalisation and increasing interconnectedness, people will become more aware of their civilisational roots and of their differences with others – they will aim to entrench and protect these differences, rather than seek common ground with other civilisations.

The Clash of Civilisations identified 9 civilisations specifically: Western (USA, Western Europe, Australasia), Orthodox (Russia and the former USSR), Islamic (North Africa and the Middle East), African (Sub-Saharan Africa), Latin American (Central and South America), Sinic (most of China), Hindu (most of India), Japanese (Japan), and Buddhist (Tibet, Southeast Asia and Mongolia).

Huntington also highlighted the possible revival of religion, Islam in particular, as a major potential issue: it would come to represent a challenge to Western hegemony in terms of a rejection of Western values and institutions. His Foreign Affairs article featured the line ‘Islam has bloody borders’, suggesting that the Islamic civilisation tends to become violently embroiled in conflict with periphery civilisations – Huntington cites the conflicts in Sudan and Iraq as major examples.

It is clear, although still a touchy subject for politicians and policymakers, that Radical Islam poses a serious threat to the safety and stability of the Western world. Aside from the aforementioned terror attacks, the rise of extremist fundamentalist groups such as the Taliban in Afghanistan and al-Shabaab in Somalia represents a larger opposition to Western values. However, Huntington’s failure to consider the deep divisions within the Islamic world (especially between Sunnis and Shias) is a major criticism of his argument. Additionally, many of the civilisations he identified show little interest in a clash with the West, mainly as it wouldn’t be in their economic interest (India, Japan and Latin America, for example, are all deeply economically intertwined with Western powers).

The Clash of Civilisations thesis does, however, offer a number of steps that the West could take to prevent a potential clash. It should pursue greater political, economic and military integration, so their differences will be more difficult to exploit. Just last year we saw a clear example of this, in the form of AUKUS, the security pact between Australia, the UK and the US.

NATO and European Union membership should be expanded, with the aim of including former Soviet satellite states, to ensure they stay out of the Orthodox sphere of influence. Fortunately for the West, 2004 alone saw NATO admit Romania, Bulgaria, Latvia, Lithuania, Estonia, Slovakia and Slovenia, followed in 2009 by Albania and Croatia. The military advancement of Islamic nations should be restrained, to ensure they don’t pose a serious threat to the West’s safety – a clear example of this is the 2015 Iran Nuclear Deal, reducing the nation’s stockpile of uranium to ensure it couldn’t become an anti-Western nuclear power.

Finally, the West must come to recognise that intervention in the affairs of other civilisations is ‘the single most dangerous source of instability and conflict in a multi-civilisational world’. This is a message that Western politicians have certainly not heeded, especially with regard to the Islamic world – troops were sent into Afghanistan in 2001, Iraq and Darfur in 2003, and Libya in 2011.

In his 2014 book Political Order and Political Decay: From the Industrial Revolution to the Globalization of Democracy, Fukuyama argues that his ‘End of History’ thesis remains ‘essentially correct’, despite himself recognising the current ‘decay’ of liberal democracy around the world. Both scholars’ predictions have, at periods of time in the post-Cold War era, looked very strong and, at other times, laughably incorrect and misguided. Both Fukuyama and Huntington still offer valuable insights into global dynamics between cultures, as well as the future of global tensions and conflict. However, both theses are challenged by the modern global landscape: democracy is currently in decline, undercutting Fukuyama, whilst civilisational identity remains limited, undercutting Huntington. Regardless of who got it right, both men have undeniably pushed the debate surrounding the international order to new heights, and will no doubt be remembered as intellectual titans in decades to come.



Putin’s War: A Tale of Soviet Romanticism and Western Ignorance | Daniel Hawker 

With Russian troops having begun a full-scale invasion of neighbouring Ukraine, President Joe Biden was recently asked by a journalist “Do you think you may have underestimated Putin?” In response to the question, the supposed ‘most powerful man in the world’ offered merely a smirk and proceeded to sit in silence whilst his team rushed to stop the video recording. This was inevitably due to the honest answer being yes – the warning signs have been evident for decades. Let us first consider the historical basis for the invasion.

Vladimir Putin’s position as a Soviet romantic has come to be a defining aspect of his political image. In his 2005 state of the nation address, he notably referred to the 1991 collapse of the USSR as “the greatest geopolitical catastrophe of the century”, an event which left “tens of millions of our fellow citizens and countrymen … beyond the fringes of Russian territory”. It is this Slavophilic perspective that is paramount in understanding the motives and aims of Russian foreign policy in Eastern Europe. With the fall of the USSR came, according to Russian nationalists, the mass displacement of Soviet citizens outside of the Motherland. Millions of Slavic people, all of whom shared a rich cultural history, now living within the borders of independent states, stripped of their collective identity. At this time, young Vladimir Putin was working for the Mayor of Leningrad, and this moment came to shape his ideology and vision for Russia’s future (and the future of former-Soviet satellite states).

Ukraine, however, has always occupied a special place within Russian romantic nationalism. The Russian Federation actually has its origins in modern-day Ukraine – specifically the Kievan Rus’ federation (consisting of East Slavic, Baltic and Finnic peoples), which existed from the 9th to the 13th century. Linguistic and cultural roots remain strong, with most Ukrainians also speaking Russian, especially in the eastern and southern parts of the country. Whilst a region of the Russian Empire (and later the USSR), Ukraine was a crucial region for agriculture due to its soil, which is exceptionally well-suited to the farming of crops.

Given this intertwined history, a key tenet of Putin’s romantic mindset is the idea that Russians and Ukrainians are one people, and must therefore exist within the same state. This view was most recently revealed in a 2021 article written by the president, titled ‘On the Historical Unity of Russians and Ukrainians’, in which he affirmed that “true sovereignty of Ukraine is possible only in partnership with Russia”. Stella Ghervas, a professor of Russian history at Newcastle University, has explained that “the borders of the Russian Empire in 1914 remain a point of reference from the Kremlin up to this day”.

However, it seems that the West has chosen to ignore not only how ideologically desperate Putin is to reclaim Ukraine, but also how brutally willing he has been to utilise hard power to achieve his expansionist aims. 2008 saw artillery attacks by pro-Russian separatists (backed by Putin) in the South Ossetia region of Georgia; 2014 brought us the infamous annexation of the Crimean Peninsula; and 2021 saw a mass movement of Russian troops and military equipment to the Ukrainian border, raising concerns over a potential invasion. These examples should have clearly demonstrated to Western powers the lack of respect Vladimir Putin has for national sovereignty, and that once his mind becomes fixated on regaining lost Soviet territory, he can’t be easily dissuaded. With this in mind, the invasion of Ukraine should be viewed as the inevitable and long-awaited finale to Putin’s expansionist concerto.

The response to the latest developments is hardly surprising: economic sanctions appear to be a firm favourite amongst Western leaders; Boris Johnson has sanctioned five Kremlin-friendly oligarchs and aims to target “all the major manufacturers that support Putin’s war machine”, whilst Joe Biden has levied penalties against major Russian industries and frozen the bank assets of the regime’s major figures. An international effort has also been undertaken, with the UK, US, EU and Canada agreeing to cut off a number of Russian banks from SWIFT, the international payment system. However, such sanctions, especially those against individuals, have received pushback. Following Crimea in 2014, the late and greatly missed philosopher Sir Roger Scruton published a piece in which he laid out how believing that sanctions against oligarchs “will make the faintest difference to Russia’s expansionist foreign policy is an illusion of staggering naivety” – having faced the threat of increased sanctions since then, Russia has built up foreign currency reserves of $630bn (equivalent to around a third of its economy).

In terms of military responses, the general consensus is that Western troops won’t be deployed, and there is a simple logic to it – Western populations have no real hankering for a war: two recent YouGov polls revealed that 55% of Britons and 55% of Americans oppose sending their own troops to fight in Ukraine (for the United States, last year’s disastrous withdrawal from Afghanistan undoubtedly turned the public off war for a while). However, NATO troops have been deployed to Eastern Europe, and the UK has also sent 1,000 soldiers to Hungary, Slovakia, Romania and Poland, in preparation for the inevitable outpouring of innocent and scared Ukrainian families.

Whilst the objectives of the Putin regime and the long-term naivety of the Western order are the two primary factors, the West’s role in bringing this situation about must also be acknowledged, for the sake of honest discussion. In the early 1990s, Boris Yeltsin expressed his desire for Russia to one day join NATO; Putin echoed this in 2000 when Bill Clinton visited Moscow. Despite Russia at these times being a fledgling democracy, it was turned down by the alliance – given the opportunity to start anew and help the Russian people, the West refused to bring Russia into the international fold.

Further evidence of the West’s culpability is the expansion of NATO’s borders. Although an arrangement with murky origins, the generally understood version is that US Secretary of State James Baker told Mikhail Gorbachev that NATO expansion was ‘not on the agenda’. Regardless, the welcoming of former Eastern Bloc states into the alliance (Romania, Bulgaria, Latvia, Lithuania, Estonia, Slovakia and Slovenia in 2004, and Albania and Croatia in 2009) has only served to worsen relations between Putin and the West – despite the availability of open dialogue for decades, we’ve consistently chosen mistrust when dealing with Russia.

Whilst the West may be shocked that Putin actually went ahead with a military invasion, it can’t seriously claim to have been surprised; the president’s intentions regarding Eastern Europe, and Ukraine especially, have been nefariously evident for at least a decade, in which time we’ve fooled ourselves, downplaying the risk Russia posed. We must remember, however, the most tragic consequences of this entire situation: the many thousands of innocent Ukrainian civilians who’ve lost their lives, their homes and their feeling of safety within their own borders. For Russia, sanctions will hurt its citizens, all whilst their understanding of the situation is distorted through propagandistic state media. This really is a horrific situation, and one that has occurred because of Putin’s worldview and Western leaders’ inability to take Russia seriously as a threat.



From Weimar to the Third Reich: the birth of a dictatorship


History has seen its fair share of wicked and corrupt leaders and regimes, from Ivan the Terrible, to Joseph Stalin and Mao Zedong, to Pol Pot and Saddam Hussein. These men, and others like them, desired to grip the reins of power tightly and not let go, entrenching themselves and their position within the political system. Of all the possible regimes to explore with this piece, Hitler’s Third Reich was chosen because of its dual notoriety and anonymity – Nazi Germany is known to almost everyone as a significant historical period, but the system’s context, its beginnings and how Hitler came to be Führer is a far more elusive story.

A solid understanding of Hitler’s time in power requires some historical context, specifically the end of the First World War and the subsequent years of the democratic Weimar Republic. The country’s crushing defeat, as part of the Central Powers, saw the victors gather to decide how both to punish and subdue Germany – the result was the Treaty of Versailles, signed by Foreign Minister Hermann Müller in 1919. The terms were far harsher than the Germans had anticipated, due mainly to France’s involvement: they accepted war guilt, had to pay 132bn marks in reparations (as well as all war pensions), their army was limited to 100,000 men, and they lost the key territories of the Saar, Alsace-Lorraine and Danzig. From the perspective of almost all Germans, regardless of region or class, Versailles represented the most heinous betrayal by the political elites. Born amid this resentment, the new political system never managed to escape the Treaty’s legacy, its dark shadow tarnishing the very concept of ‘democracy’.

For conservative nationalists and monarchists however, even Germany’s military defeat couldn’t be accepted, resulting in the anti-Semitic ‘Stab in the Back’ myth, which essentially argued that, far from being the fault of the soldiers, Germany’s defeat had actually been the result of traitorous elites and politicians (many of whom were Jewish) working to undermine the country’s war effort. This was a narrative that greatly appealed to Adolf Hitler, who similarly couldn’t accept the reality of the situation – he described Germany’s defeat in Mein Kampf as “the greatest villainy of the century”, and one which anti-German propaganda greatly contributed to, “with Jewish, socialist propaganda spreading doubt and defeatism from within”.

With such widespread shared outrage following Versailles, it is no wonder that the Weimar Republic was plagued with social unrest, political violence and attempted coups from the very beginning. From the communist left, you had the Spartacist Uprising in 1919, fuelled by a desire to replace Weimar with a Soviet-style system (inspired by the Bolshevik Revolution), and from the nationalist right, you had the Kapp Putsch in 1920 (and Hitler’s Munich Putsch in 1923), whose leaders harked back to the authoritarian monarchical style of Kaiser Wilhelm II and Bismarck. Add to this list the regularity of politically-motivated street violence, as well as the assassinations of major politicians (the finance minister in 1921, and the foreign minister in 1922), and you have a government unable to defend either itself, or its citizenry.

Public confidence in the Weimar regime was perhaps most seriously damaged by the country’s economic instability across the entire period, from 1918 to 1933. Already decimated by mass money-printing and borrowing during the War, the German economy would suffer numerous crises, beginning with hyperinflation in 1923. Triggered by the French invasion and occupation of the Ruhr, the crisis forced the government to increase its borrowing, further eroding living standards – increased alcoholism and suicide rates were recorded, along with a decline in law and order and, more generally, in public trust in the government. Although briefly graced with the ‘Golden Years’ (1924–29), Germany once again faced economic devastation in the form of the Great Depression, which again brought mass social unrest, as well as six million unemployed citizens by 1933.

By the early 1930s therefore, extremist anti-Weimar parties were becoming increasingly popular with the angry and struggling German electorate. This is reflected in seat counts: the Nazis went from 12 seats in 1928, to 107 in 1930, before reaching their all-time high of 230 in July 1932 – similarly, the communist KPD reached an impressive 100 in November 1932. Fundamentally, what Hitler and the Nazis were offering increasingly spoke to much of the German population – ending reparations, regaining national pride, the promise of full-employment, a Kaiser-like leader, and an uncompromising stance against the boogeymen of the time, Jews and Communists.

His appointment as Chancellor wasn’t guaranteed by any means, however, thanks to President Paul von Hindenburg. Having once dismissed Hitler as a ‘bohemian corporal’, Hindenburg was concerned by Hitler’s lack of government experience, although he had offered him the position of vice-chancellor in 1932. What forced his hand, however, was the enormous influence wielded by the industrial elites, who viewed Hitler as the authoritarian figure the chancellorship needed, an opinion fuelled by their fear of communism’s increasing popularity.

However, once Hitler’s appointment became necessary and inevitable, Hindenburg, along with former chancellor Papen, conspired to control a Hitler-led government, believing Hitler’s lack of experience meant he would be like a puppet who could be ‘tamed’. The two men thought that, with few Nazis in the Cabinet, and with Papen as Vice-Chancellor, true power could lie with them, with Hitler being Chancellor in name only. How wrong they were.

Having attained the chancellorship in January 1933, Hitler now set about securing his position, first through legislative changes. This came most significantly in two forms: the Reichstag Fire Decree in February 1933, and then the Enabling Act in March. Both pieces of legislation legally grounded the fledgling regime, granting it the authority and power to act as it wished, and to silence those who opposed it.

The hurried passing of the emergency Reichstag Fire Decree came in the wake of a suspected Communist-led arson of the Reichstag building. Arriving at the scene alongside other leading Nazis, Hitler viewed the crime as a blatant assault on the German state, and all it stood for. In response, Hitler pressured Hindenburg to sign the Decree into law, which suspended essentially all freedoms and civil liberties (e.g., the rights to association, speech and freedom of the press). These rights were never restored. Aside from removing freedoms, the Decree also enabled a brutal crackdown on political opponents (as the police no longer required cause, and could hold people indefinitely). Indeed, the first two weeks following the Decree’s signing saw around 10,000 people arrested in Prussia, including many prominent communist leaders. From the perspective of Richard J. Evans, “this was the first of the two fundamental documents on which the dictatorship of the Third Reich was erected”.

Even more significant, however, was the passing of the Enabling Act. Single-handedly transforming Germany into a legal dictatorship, the Act allowed the Cabinet to pass legislation without requiring the consent of either the Reichstag or the president. Indeed, this paved the way for the Nazis to further tighten their grip on the political system – for example, the founding of new parties was banned in July 1933. Fundamentally, with an unrivalled number of NSDAP Reichstag members, Hitler had turned democracy into dictatorship in only a few months – in the words of Evans: “By the summer of 1933 all opposition had been crushed, more than a hundred thousand Communists, Social Democrats and other opponents of the Nazis had been sent to concentration camps, all independent political parties had been forced to dissolve themselves and the Nazi dictatorship had been firmly established”. Combining the offices of president and chancellor in 1934 (following Hindenburg’s death), Hitler’s adoption of the Führer title cemented his authority and that of his party.

In consolidating Nazi power, Hitler made his position within the new hierarchy unmistakably clear. This can be seen in the ‘Hitler Oath’, introduced for the judiciary, military and civil servants. Having previously sworn loyalty to ‘the German Reich’, officials now swore “unconditional obedience” to Hitler personally. This reflected the broader ruling philosophy of the Nazi regime, one in place since the party’s reorganisation in 1924: Führerprinzip (essentially that the Führer’s decisions are always correct, and that he is all-powerful and above the law). With the effective spreading of propaganda, Hitler came to encapsulate a past era of Germany’s history, one dominated by patriotic and statist authoritarians, notably the much-sentimentalised Bismarckian and Wilhelmine Reich.

Regarding institutional control, Hitler set about ensuring political conformity within branches of local government and the Civil Service. Although dominated by conservatives from the Wilhelmine Reich, a purge of ‘enemies of the State’ was still required of the Civil Service – this came in the form of the anti-Semitic ‘Law for the Restoration of the Professional Civil Service’, which banned Jews, progressives and others from the profession. Germany’s Jewish community would, later on, see further exclusion from institutions, including education (both as students and as teachers). Additionally, local government saw its own purge of dissidents, as well as an overhaul of its structure, with Reich Deputies introduced to administer the different states – in doing this, the Nazis ensured that all areas of the country were under their top-down control.

Nazi actions were similar towards the media establishment. Fearing the damaging impacts of rogue leftist reporting, the Reich Association of the German Press was set up, to review all content and keep the journalists, editors and publishers in line with the regime’s messaging. The same happened within the cultural and artistic spheres of German society – fearing the spread of ‘degenerate’ modern art (labelled as Cultural Bolshevism), it was the Reich Chamber of Culture that reversed the artistic progressivism seen in the ‘Golden Years’ of the Weimar Republic.

Whilst these two agencies monitored for anti-Nazi media sentiment, it was through the Ministry of Public Enlightenment and Propaganda (headed by Joseph Goebbels) that the regime was able to most effectively spread its hateful rhetoric, with a unified radio system being established in 1934, and radios being mass-produced for the population. Allowing Hitler to more easily speak to his people, this communications technology proved vital in further cementing Nazism into the everyday lives of the citizenry. Aside from this, the ministry also oversaw the production of pro-Nazi films, which praised Aryan physical qualities, all the while presenting Jews as parasitic, manipulative and barbaric (most famously seen with the release of The Eternal Jew in 1940).

These structural changes however, would only get Hitler’s vision for Germany so far. He could make himself the supreme leader, root out opponents in state institutions and the media, and spread the party’s ideas of racial purity all he wanted, but what the Führer really needed was a population ideologically committed to National Socialism, a concept the Nazis referred to as a Volksgemeinschaft, or ‘national/people’s community’. With the population having been divided under Weimar, Hitler aspired to rule a unified Germany, one with a populace devoted to the Fatherland. For this to work however, all social groups would have to see real life improvements.

With unemployment having reached six million by 1933, Hitler’s aims for Germany’s workers were more jobs and better living conditions. Emphasizing ‘recovery’ during the first few years, unemployment was indeed reduced, falling to 1.6m by 1936. As far as conditions went, the ‘Beauty of Work’ programme managed an overhaul of factories, including improvements to safety measures and the quality of toilets. Alongside this came the ‘Strength through Joy’ (KdF) initiative, which provided workers with cheap leisure activities, such as holidays in the country and trips to the theatre, all of which were eagerly taken advantage of. Despite these steps taken by the German Labour Front (DAF), modern historians have raised the concern that, far from being genuinely dedicated to the regime, workers simply publicly supported Nazism to continue enjoying these benefits.

Having enjoyed uniquely progressive freedoms under the Weimar Constitution, women under the Nazi regime were returned to their traditional domestic, childbearing role. Indeed, women under this system were granted easier access to divorce, as well as the ‘Cross of Honour of the German Mother’, to encourage them to have more children. With many women enamoured by the image of the Führer as the eternal bachelor, the regime saw essentially no organised opposition by women, along with emotional displays of love for Hitler at public events.

The Nazis’ approach to the youth was focused on combining physical war training with lessons in National Socialism. Through both schools and the Hitler Youth (which became compulsory in 1936), young German boys went on hiking and camping trips and were taught new Nazi content, such as racial science and extracts from Mein Kampf. Indeed, many schoolboys became obsessed with the legendary figure of Hitler, and were successfully transformed into puppets of the regime, reporting their neighbours and family members to the authorities for anti-Nazi sentiment. However, the many opposition youth movements of the late 1930s represented a growing disillusionment with the regime and its ideology – groups like the Swingers, who adopted American fashions and jazz music, and the Edelweiss Pirates, who mingled with the opposite sex.

The extent to which a genuine Volksgemeinschaft was actually created however, is greatly debated amongst historians. Fundamentally, whilst these groups may have appeared satisfied and ideologically committed at public events, they were all terrified of what would happen if they weren’t. Everyone in the country was kept in line by the omnipresence of the Reich’s repressive terror apparatus.

Although relatively small in numbers, the Gestapo was, in the mind of the average German, around every street corner – it was this perception that made them so terrifying. Reading mail, making midnight arrests and utilising torture, they served to root out enemies of the regime and strike fear into the population. However effective their own methods were, the Gestapo relied even more heavily on public tip-offs and denunciations of neighbours and acquaintances, from which they received 57% of their information. Established by Goering as the Minister President of Prussia, the Gestapo would soon be transferred over to the head of the SS, Heinrich Himmler.

The parent organisation of the Gestapo, the SS served as the regime’s key intelligence, security and terror agency. Rooting out political enemies (such as remaining party and trade union leaders), it was the SS that oversaw the Fire Decree arrests and executions. Also serving a crucial role in the neutralisation of the regime’s racial targets, the agency would later control the building and running of the concentration (and extermination) camps, as well as the death squads sent into Eastern Europe during the War.

Although these two agencies were ruthless and highly-effective at rooting out opposition, certain figures remained, both within the political and party systems, who posed a serious threat to Hitler’s growing power. This increasing paranoia would culminate in June 1934, with the ‘Night of the Long Knives’. A brutal purge of Hitler’s enemies, it was initiated by growing concern over the direction of the SA, the party’s paramilitary group – they were becoming too brutish and uncontrollable. Thus, to consolidate his position, Hitler had the leadership, including his close friend Ernst Röhm, assassinated in the dead of night. Other victims included internal party rivals, like the progressive Gregor Strasser, and remaining Weimar politicians, like former chancellor Kurt von Schleicher. Serving as a harrowing example of what would become of the regime’s enemies, the purge also guaranteed Hitler the loyalty of the Army, who’d supplied the SS with the necessary weaponry.

Hitler’s rise to power, from a minor nationalist political agitator in the 1920s, to the undisputed supreme leader of Germany a decade later, serves as an extreme example of how charismatic and intelligent figures can take advantage of a people’s anger towards the Establishment, coupled with dire socioeconomic circumstances. Having become engrossed with the anti-Semitism peddled by Richard Wagner and Pan-German groups in the 1910s, Hitler’s vehement racism, combined with his skill for passionate public speaking, would see him go from a semi-homeless failing artist in Vienna, to arguably the most infamous figure in human history, responsible for the deaths of tens of millions.


