In previous articles, I've been too harsh on the people who want to define a woman as 'an adult human female', and probably too harsh on people who genuinely want right-wing ideas to flourish. This time, I want to focus on the beliefs of the modern left and, rather than attack them outright, offer an analogy that will hopefully make it easier for the right to understand where they're coming from – I'll leave the attacking to you at that point.
Despite the refusal of this publication to use a certain word – a refusal I helped to write and one I still stand by – when words are created or used, there is often a reason for it. The speaker, at the very least, feels a need to distinguish this specific thing from the stuff that came before it. And there is something different about the modern left compared with the left of the past – although the difference is one of intensity rather than of content. The trend of Western societies from the French Revolution onwards has been an insistence on a harsh dualism between the external material world and the internal world of the self. The material, by its nature, has no moral character. Few Western societies today affirm a creator and, in insisting on that creator, a rhyme and reason to the way the world is. After all, if God exists and designed our world, the fact that there is day and there is night suggests that some ultimate Being who judges us had a reason for creating such a cycle, and we can move from observations about the material world to moral statements by resolving those statements to that Being.
The secularism of modern life has fairly resolutely burned that bridge. Today, we rely on appeals to notions of reason to affirm or deny moral statements. This marks a change not just in how we consider the nature of morality, but in the nature of our very existence. Whereas before there was some notion of the absolute which subsumed not just the material world but each subjective experience of that world, now there is just a sterile material reality, with internal projections of order onto that reality that we call morality. In the former case, limitations on the subjective experience with reference to the material are to be expected – after all, both are facets of a wider design. In the latter case, limitations on the subjective experience with reference to the material are not only absurd, but denials of the necessarily free internal world of the self.
Perhaps this is all too philosophical and abstract. Perhaps it is easier for me to refer to sentiments you've likely heard before. One such sentiment is the notion that being a woman means different things to different people. Another is that being a woman is nothing more than identifying as a woman. Both presuppose the aforementioned harsh binary between a physical, material world that (in and of itself) tells us nothing about how it should be described, and an internal world of the self that projects descriptions onto it. We can provide descriptions which allow us to make predictions, and that's what we call science – but what exactly we ought to predict, and the emphasis we place on those predictions, is not something that can be resolved to the material world.
For example, nothing about the fact that we can describe the human species as male and female for the purposes of reproduction suggests that human reproduction is good (this is what anti-natalists believe), or that we should assign clothing, roles, attitudes, separate changing rooms and sports, and beliefs to those categories (this is what Queer Theorists believe). With this disconnect, gender becomes something that has no coherent reference to the material world. We can associate the two if we please, but nothing about that association is absolute. At best, gender is the general aggregate of behaviours that come about due to biological impulses – which would suggest that anyone who imitates those behaviours and makes themselves resemble the biological makeup of who they wish to be is, in effect, whatever they want to be.
When understood this way, it's more accurate to see the modern left's conception of gender as closer to a subculture than to an unmoving category that one falls into and cannot escape. You can become a woman as easily as you can become a goth. Wear the right things, act the right way, listen to the right music and call yourself a goth – what right does anyone else have to deny you that? The same is true in modernity of womanhood. A woman is whoever says they are a woman, and just as a goth remains a goth when they listen to classical instead of metal, wear no make-up instead of eyeshadow, or take out their piercings instead of wearing them, a woman remains a woman without wearing dresses, acting feminine, or having certain body parts.
Of course, this is me speaking from the perspective of the modern left – but it also demonstrates the convergence of wants between the modern left and capital. It's easy today to find an endless number of businesses selling masculinity as a product, but why stop there? With an ever-increasing number of genders, there's plenty of business to be had in creating unique modes of dress for each new grouping. The term "transtrender" got some usage for a while, but missed the mark. People don't become trans merely because being trans is trending; rather, transitioning is a trend in and of itself.
The boomer spent his time transitioning from hippy to mod; the zoomer transitions from queer, to genderfluid, to man, and back to woman. Each identity has its own flag, set of styles, modes of dress, and ways to act and behave. In many respects, this is self-expression taken to its most extreme end – a self-expression which demands to express not just an unchanging identity, but the changing nature of identity itself as an extension of self-expression.
“Man cannot remake himself without suffering, for he is both the marble and the sculptor.”
– Alexis Carrel
If modernism was the recognition of man as sculptor and creator, and the affirmation that there is no ultimate sculptor and creator to guide his hand, post-modernism is the recognition of man as both marble and sculptor – extending the unbounded freedom of self-expression not just to the individual, but to the very individuality from which that person emerges.
Charles’ Personal Rule: A Stable or Tyrannised England?
Within discussions of England's political history, the most famous moments are well known and widely discussed – the Magna Carta of 1215 and the Cromwell Protectorate of the 1650s spring immediately to mind. However, the renewal of an almost-mediaeval style of monarchical absolutism in the 1630s has proven both overlooked and underappreciated as a period of historical interest. Indeed, Charles I's rule without Parliament has faced an identity crisis amongst more recent historians – was it a period of stability or tyranny for the English people?
If we are to consider the Personal Rule in enough depth, the years leading up to the dissolution of Charles' Third Parliament (in 1629) must first be understood. Succeeding his father James I in 1625, Charles had a personal style and vision of monarchy that would prove incompatible with the expectations of his Parliaments. Having enjoyed a strained but respectful relationship with James, MPs would come to question Charles' authority and choice of advisors in the coming years. Indeed, it was Charles' stubborn adherence to the doctrine of the Divine Right of Kings – he once wrote that "Princes are not bound to give account of their actions but to God alone" – that meant he believed compromise to be defeat, and any pushback against him to be a sign of disloyalty.
Constitutional tensions between King and Parliament proved the most contentious of all issues, especially regarding the King's role in taxation. At war with Spain between 1625 and 1630 (and having just dissolved the 1626 Parliament), Charles was lacking in funds. Thus, he turned to non-parliamentary forms of revenue, notably the Forced Loan (1627): declaring a 'national emergency', Charles demanded that all his subjects make a gift of money to the Crown. Whilst the loan was theoretically optional, those who refused to pay were often imprisoned; a notable example was the Five Knights' Case, in which five knights were imprisoned for refusing to pay (with the court ruling in Charles' favour). This would eventually culminate in Charles' signing of the Petition of Right (1628), which protected the people from non-Parliamentary taxation, as well as from other controversial powers that Charles had chosen to exercise, such as arrest without charge, martial law, and the billeting of troops.
The role played by George Villiers, the Duke of Buckingham, was another major factor contributing to Charles' eventual dissolution of Parliament in 1629. Having dominated the court of Charles' father, Buckingham came to enjoy a similar level of unrivalled influence over Charles as his de facto Foreign Minister. It was, however, in his position as Lord High Admiral that he further worsened Charles' already-negative view of Parliament. Responsible for both major foreign policy disasters of Charles' early reign (Cadiz in 1625 and La Rochelle in 1627, which achieved nothing and cost between 5,000 and 10,000 lives), he was deemed by the MP Edward Coke to be "the cause of all our miseries". The Duke's influence over Charles' religious views also proved highly controversial – at a time when anti-Calvinism was on the rise, stoked by figures such as Richard Montague and his pamphlets, Buckingham encouraged the King to continue his support of the leading anti-Calvinist of the time, William Laud, at the York House Conference in 1626.
Charles was heavily dependent on the counsel of Villiers until the Duke's assassination in 1628; it was, in fact, Parliament's threat to impeach the Duke that encouraged Charles to agree to the Petition of Right. Fundamentally, Buckingham's poor decision-making meant serious criticism from MPs, and a King who believed this criticism to be Parliament overstepping the mark and questioning his choice of personnel.
By 1629, Charles viewed Parliament as a method of restricting his God-given powers – one that had attacked his decisions, provided him with essentially no subsidies, and forced him to accept the Petition of Right. Writing years later, in 1635, the King claimed that he would do "anything to avoid having another Parliament". Amongst historians, the significance of this final dissolution is fiercely debated: some, such as Angela Anderson, do not see the move as unusual – there were seven years, for example, between two of James' Parliaments (1614 and 1621), and at this point in English history "Parliaments were not an essential part of daily government". On the other hand, figures like Jonathan Scott viewed the principle of officially governing without Parliament as new – indeed, the decision was made official by royal proclamation.
Now free of Parliamentary constraints, the first major issue Charles faced was his lack of funds. Without the usual parliamentary taxation, and in desperate need of upgrading the English navy, the King revived ancient taxes and levies, the most notable being Ship Money. Ship Money was originally a tax levied on coastal towns during wartime (to fund the building of fleets); Charles extended it to inland counties in 1635 and made it an annual tax in 1636. This inclusion of inland counties was construed as a new tax levied without parliamentary authorisation. For the nobility, Charles revived the Forest Laws (demanding landowners produce the deeds to their lands), as well as fines for breaching building regulations.
The public response to these new fiscal expedients was one of broad annoyance but general compliance. Indeed, between 1634 and 1638, 90% of the expected Ship Money revenue was collected, providing the King with over £1m in annual revenue by 1637. Despite this, the Earl of Warwick questioned its legality, and the clerical leadership referred to all of Charles' tactics as "cruel, unjust and tyrannical taxes upon his subjects". The most notable case of opposition to Ship Money, however, was the John Hampden case in 1637. A gentleman who refused to pay, Hampden argued that England wasn't at war and that the Ship Money writs gave subjects seven months to pay – enough time for Charles to call a new Parliament. Although the Crown won the case, it inspired more widespread opposition to Ship Money, such as the 'tax revolt' of 1639–40, which involved non-cooperation from both citizens and tax officials. Opposing this view, however, stands Kevin Sharpe, who claimed that "before 1637, there is little evidence at least, that its [Ship Money's] legality was widely questioned, and some suggestion that it was becoming more accepted".
In terms of his religious views, both personal and in his wider vision for the country, Charles had been an open supporter of Arminianism – a movement within Protestantism that staunchly rejected the Calvinist teaching of predestination – from as early as the mid-1620s. As a result, the sweeping changes to English worship and Church government that the Personal Rule would oversee were, unsurprisingly, extremely controversial amongst his Calvinist subjects in all areas of the kingdom. In considering Charles' religious aims and their consequences, we must focus on the impact of one man in particular: William Laud. Having given a sermon at the opening of Charles' first Parliament in 1625, Laud spent the next near-decade climbing the ecclesiastical ladder; he was made Bishop of Bath and Wells in 1626, Bishop of London in 1629, and eventually Archbishop of Canterbury in 1633. Now 60 years old, Laud was unwilling to compromise on any of his planned reforms to the Church.
The overarching theme of the Laudian reforms was 'the Beauty of Holiness', which aimed to make churches beautiful, almost lavish, places of worship (Calvinist churches, by contrast, were mostly plain, so as not to detract from worship). This was achieved through the restoration of stained-glass windows, statues, and carvings. Additionally, railings were added around altars, and priests began wearing vestments and bowing at the name of Jesus. However, the most controversial change to the church interior proved to be the communion table, which was moved from the middle of the room to the wall at the East end – a move "seen to be utterly offensive by most English Protestants as, along with Laudian ceremonialism generally, it represented a substantial step towards Catholicism. The whole programme was seen as a popish plot".
Under Laud, the power and influence wielded by the Church also increased significantly – a clear example being that Church courts were granted greater autonomy. Church leaders also became ever more present as ministers and officials within Charles' government, with the Bishop of London, William Juxon, appointed Lord Treasurer and First Lord of the Admiralty in 1636. Despite already having the full backing of the Crown, Laud was not one to accept dissent or criticism and, although their severity has been exaggerated by recent historians, his actions could be ruthless at times. The clearest example would be the torture and imprisonment of his most vocal critics in 1637: the religious radicals William Prynne, Henry Burton and John Bastwick.
However successful the Laudian reforms may have been in England (and that is very much debatable), Laud's attempt to enforce uniformity on the Church of Scotland in the latter half of the 1630s would see the emergence of a united Scottish opposition against Charles, and eventually armed conflict with the King in the form of the Bishops' Wars (1639 and 1640). The road to war was sparked by Charles' introduction of a new Prayer Book in 1637, aimed at bringing English and Scottish religious practice closer together – this would prove beyond disastrous. Riots broke out across Edinburgh, the most notable being in St Giles' Cathedral (where the bishop had to protect himself by pointing loaded pistols at the furious congregation). This displeasure culminated in the National Covenant of 1638 – a declaration of allegiance which bound together Scottish nationalism and the Calvinist faith.
Any conclusion about the Laudian religious reforms hinges on the fact that, in terms of his and Charles' objectives, they genuinely did overhaul the Calvinist systems of worship, the role of priests, Church government, and the physical appearance of churches. The response from the public, however, ranging from silent resentment to full-scale war, displays how damaging these reforms were to Charles' relationship with his subjects – coupled with the influence wielded by his wife Henrietta Maria, public fears about Catholicism badly damaged Charles' image, and made religion arguably the most intense issue of the Personal Rule. In judging Laud today, the historical debate has been split: certain historians focus on his radical uprooting of the established system, with Patrick Collinson suggesting the Archbishop to have been "the greatest calamity ever visited upon the Church of England", whereas others view Laud and Charles as pursuing something entirely reasonable: a more orderly and uniform church.
Much like the Personal Rule's religious direction, its political direction was defined by one individual: Thomas Wentworth, later the Earl of Strafford. Serving as Lord Deputy of Ireland from 1632 to 1640, he set out with the aims of 'civilising' the Irish population, increasing revenue for the Crown, and challenging Irish titles to land – all under the umbrella term of 'Thorough', which aspired to concentrate power, crack down on opposition figures, and essentially preserve the absolutist nature of Charles' rule during the 1630s.
Regarding Wentworth's aims towards Irish Catholics, Ian Gentles' 2007 work The English Revolution and the Wars in the Three Kingdoms argues that the friendships Wentworth maintained with Laud and with John Bramhall, the Bishop of Derry, "were a sign of his determination to Protestantize and Anglicize Ireland". Devoted to a Catholic crackdown as soon as he reached Irish shores, Wentworth refused to recognise the legitimacy of Catholic officeholders in 1634, and managed to reduce Catholic representation in Ireland's Parliament by a third between 1634 and 1640 – this at a time when Catholics made up 90% of the country's population. An even clearer indication of Wentworth's hostility to Catholicism was his aggressive policy of land confiscation. Challenging Catholic property rights in Galway, Kilkenny and other counties, Wentworth would bully juries into returning King-favourable verdicts, and even those Catholics who were granted their land back (albeit only three-quarters of it) were now required to make regular payments to the Crown. Wentworth's enforcement of Charles' religious priorities was further evidenced by his reaction to those in Ireland who signed the National Covenant: the accused were hauled before the Court of Castle Chamber (Ireland's equivalent of the Star Chamber) and forced to renounce 'their abominable Covenant' as 'seditious and traitorous'.
Seemingly in keeping with the other figures of the Personal Rule, Wentworth was notably tyrannical in his governing style. Sir Piers Crosby and Lord Esmonde were convicted of libel by the Court of Castle Chamber for accusing Wentworth of involvement in the death of Esmonde's relative, and Lord Valentia was sentenced to death for "mutiny" – in fact, he had merely insulted Wentworth.
In considering Wentworth as a political figure, it is very easy to view him as merely another tyrannical brute carrying out the orders of his King. Indeed, his time as Charles' personal advisor (from 1639 onwards) certainly supports this view: he once told Charles that he was "loose and absolved from all rules of government", and was quick to advocate war with the Scots. However, Wentworth also saw great successes during his time in Ireland: he raised Crown revenue substantially by taking back Church lands, and purged the Irish Sea of pirates. Fundamentally, by the time of his execution in May 1641, Wentworth possessed a reputation amongst Parliamentarians very much like that of the Duke of Buckingham; both men came to wield tremendous influence over Charles, as well as great offices and positions.
In the areas considered thus far, opposition to the Personal Rule appears to have been a rare occurrence, especially in any organised or effective form. Indeed, Durston claims the 1630s saw "few overt signs of domestic conflict or crisis", viewing the period as altogether stable and prosperous. However, whilst certainly limited, the small amount of resistance can be viewed as representing a far more widespread feeling of resentment amongst the English populace. Whilst many royal actions received little pushback from the masses, the gentry, many of whom were becoming increasingly disaffected with the Personal Rule's direction, gathered in opposition. Most notably, John Pym, the Earl of Warwick, and other figures collaborated with the Scots to launch a dissident propaganda campaign criticising the King, as well as encouraging local opposition (which saw some success, such as the mobilisation of the Yorkshire militia). Charles' effective use of the Star Chamber, however, ensured that opponents – usually those who voiced opposition to royal decisions – were swiftly dealt with.
The historiographical debate surrounding the Personal Rule, and the Caroline Era more broadly, was and continues to be dominated by Whig historians, who view Charles as foolish, malicious, and power-hungry, and his rule without Parliament as destabilising, tyrannical and a threat to the people of England. A key proponent of this view is S.R. Gardiner who, believing the King to have been ‘duplicitous and delusional’, coined an alternative term to ‘Personal Rule’ – the Eleven Years’ Tyranny. This position has survived into the latter half of the 20th Century, with Charles having been labelled by Barry Coward as “the most incompetent monarch of England since Henry VI”, and by Ronald Hutton, as “the worst king we have had since the Middle Ages”.
Recent decades have seen, however, the attempted rehabilitation of Charles' image by Revisionist historians, the most well-known, as well as most controversial, being Kevin Sharpe. Responsible for the landmark study of the period, The Personal Rule of Charles I, published in 1992, Sharpe came to be Charles' staunchest modern defender. In his view, the 1630s, far from a period of tyrannical oppression and public rebellion, were a decade of "peace and reformation". During Charles' time as an absolute monarch, his lack of Parliamentary limits and regulations allowed him to achieve a great deal: Ship Money strengthened the Navy's numbers, the Laudian reforms meant a more ordered and regulated national church, and Wentworth dramatically raised Irish revenue for the Crown – all this, and much more, without any real organised or overt opposition figures or movements.
Understandably, the Sharpian view has received significant pushback, primarily for taking an overly optimistic view and selectively citing the Personal Rule's positives. Encapsulating this criticism, David Smith wrote in 1998 that Sharpe's "massively researched and beautifully sustained panorama of England during the 1630s … almost certainly underestimates the level of latent tension that existed by the end of the decade". This has been built on by figures like Esther Cope: "while few explicitly challenged the government of Charles I on constitutional grounds, a greater number had experiences that made them anxious about the security of their heritage".
It is worth noting, however, that a year before his death in 2011, Sharpe came to consider the views of his fellow historians, acknowledging that Charles' lack of political understanding had endangered the monarchy and, more seriously, that by the end of the 1630s the Personal Rule was indeed facing mounting and undeniable criticism, from both Charles' court and the public.
Sharpe's unpopular perspective has been built upon by other historians, such as Mark Kishlansky. Publishing Charles I: An Abbreviated Life in 2014, Kishlansky viewed parliamentarian propaganda of the 1640s, along with centuries of consistent smears from historians, as having resulted in Charles being seen "as an idiot at best and a tyrant at worst", labelling him "the most despised monarch in Britain's historical memory". Charles, however, had no real preparation for the throne – it was always his older brother Henry who was the heir apparent. Additionally, once King, Charles faced Parliaments that were stubborn and uncooperative – by refusing to provide him with the necessary funding, for example, they forced him to enact the Forced Loan. Kishlansky does, however, concede the damage caused by Charles' unmoving belief in the Divine Right of Kings: "he banked too heavily on the sheer force of majesty".
Charles' personality, ideology, and early life fundamentally meant an icy relationship with Parliament, one which grew into mutual distrust and eventual dissolution. The period of Personal Rule remains a highly debated topic within academic circles, with the recent arrival of Revisionism posing a challenge to the long-established negative view of the Caroline Era. Whether or not the King's financial, religious, and political actions were met with a discontented populace or outright opposition, the identity crisis facing the period – stability or tyranny – has yet to be conclusively put to rest.
Little Dark Age and Murdering the Author
Roland Barthes' essay The Death of the Author is required reading for many students of the humanities, such as English Literature. The general thesis of the essay is that the author's narrative intent cannot be discovered, as it is impossible to know what the author's thoughts were at the time of writing. Thus, the death of the author can be understood to mean "art without the artist": the reader's reading is the only true reading. The authority of the author, and therefore the author himself, perishes.
It is an interesting and incredibly influential essay that has played a large part in the development of critical theory over the course of the 20th century. Using this as a basis, it is my belief that we can take the theory further.
Rather than experience art passively – accepting what the author produces as-is and making our own interpretations from that point – I propose that we instead participate actively, taking art from the artist and using it to our own ends. This is much easier to do thanks to the internet and the emergence of meme culture.
It is from meme culture that murdering the author arises. 2016 can be seen as the black swan moment for this, with the election of Donald Trump and the reignition of right-wing populism. In that moment, a new breed of meme was born, and it is one of these memes that I think best exemplifies how effective murdering the author can be.
In 2017, MGMT released their song "Little Dark Age", a protest song lamenting the election of Trump. As the title suggests, the zeitgeist as the artists saw it was regressing into a period of ignorance, ultimately taking the past 70 years of Progress with it. As recently as 2021, however, meme remixes of this song have become increasingly popular. The song is used as a backdrop over footage designed to ignite reactionary pride – praise of Christianity and the heroic spirit is commonplace within this. My personal favourites are the ones that glorify the British Empire.
The popularity of the meme is an example of the remix culture unique to the internet, and of an issue with 21st-century creations in general: 21st-century art is stunted, and we can only find creative outlets in what has come before. This is a problem with all art and culture in the West, but it has been commented on before, so I will not belabour the point, except to say that our obsession with nostalgia seems to have left us bereft of a cultural milieu of our own, forced to stand blindly on the shoulders of giants.
We are indeed in a little dark age, and MGMT clearly felt that. It just isn't the dark age they think it is. For a generation of people brought up in countries whose hour of greatness was over, and on whom all the world's ills could be blamed, it is little surprise that a song like Little Dark Age could be used in the way it was. With lyrics like "Forgiving who you are for what you stand to gain/Just know that if you hide it doesn't go away", the song seems to be calling out to those who are trodden on by the current regime, such as political dissidents, delivering the Evolian message of riding the tiger. In the remix culture that epitomises internet trends, this is an example of destroying the meaning of a talented, well-intentioned but misinformed artist and rewiring it for a different purpose.
No matter how MGMT feels about the current political and cultural climate, the fact remains that Little Dark Age is reactionary. It speaks of cultural degradation, inauthenticity – the sense of something being lost. MGMT have put their finger on the pulse, and their diagnosis seems apt – but the wrong patient has died.
Their anger is correct but misdirected, which is why we on the right see the song as something to be hijacked. We are not witnessing the death of the author here – instead, we are the author’s murderers. We are Lenin storming the Tsar’s palace in 1917. We take what is theirs and subvert it to our own ends.
The fact is that the hegemony in media – be it music, film, literature or television – lies entirely in the left's favour. Reactionary discourse is repeatedly shut out of the Overton window, which is flanked by boomeresque false idols on one side and comical Marxist villains on the other. In order to make a point, we must use the tools of the enemy. We must be the Vietcong stealing M16s from a US military base. We take from the author what is theirs, deconstruct their arms, and create something entirely new using the skeleton of their works.
We are the murderers of the author and this is our strongest weapon.
Immaturity as Slavery
“… but I just hope the lad, now in his thirties, is not living in a world of secondhand, childish banalities.” – Sir Alec Guinness, A Positively Final Appearance.
The opening quote comes from a part of Alec Guinness' 1999 autobiography which greatly amuses me. The actor who played Obi-Wan Kenobi is confronted by a twelve-year-old boy in San Francisco, who tells him of his obsessive love for Star Wars. Guinness asks if the boy could do him the favour of "promising to never see Star Wars again". The lad cries, and his indignant mother drags him away. Guinness ends with the above thought: he hopes the boy is weaned off Star Wars before adulthood, lest he become a pitiful specimen.
Here enters the figure of the twenty-first-century man-child, alias the "kidult". He's been on the radar for a while. The social critic Neil Postman prophesied the coming of "adult-children" in The Disappearance of Childhood (1982). The American journalist Joseph Epstein called this same creature "The Perpetual Adolescent" in a 2004 article of the same name. But the best summary of this character I've yet found is by the writer Jacopo Bernardini, from 2014, to which I can add but little.
The kidult is one who lives his life as an eternal present. As the name suggests, his life is a sort of permanent adolescence. He is sceptical of traditional definitions of adulthood, so he has deliberately shunned milestones like marriage and childbearing in favour of an unattached lifestyle which lasts indefinitely. His relations with other people remain short and shallow, based entirely on fun and mutual use (close friendships and passionate love-affairs are not for him).
Most importantly, the kidult doesn’t change his tastes or buying habits with age. The thresholds of adolescence and maturity have no bearing on the things he likes and purchases, nor how he relates to these things. Not only does he like the same toys and cartoons at thirty as he did at ten, but he continues to obsess over them and impulsively buy them like when he was ten. Enjoying childhood fare isn’t a playful interlude, but a way of life which never ends. He consumes through instant gratification, paying no thought to any long-term pattern or goal.
Although it may not strike the reader as obvious, I think there exists a link between Guinness' "secondhand, childish banalities" and a kind of latter-day slavery. Seeing the link needs some prep work, but once that is laid, I think the reader will see my point.
First, to define servility. I believe the conservative writer Hilaire Belloc gave the best definition, and I shall freely paraphrase him. The great mass of people can be restricted yet not servile. Both monopolistic capitalism and socialism reduce workers to dependency, but neither makes them entirely slaves. Under capitalism, society retains an ideal of freedom, enshrined in law. Even as monopolists manipulate the law with their money, the ideal remains. Under socialism, state ownership is supposed to give all citizens the leisure to do what they want (even as the state strangles them). In either case, then, freedom is present as an ideal in theory even as it ceases to exist in practice. Monopolistic and socialist states don't think of themselves as unfree.
Slavery is different. A slave society has relinquished even the pretence of freedom for a large mass of the working people. Servility exists when a great multitude are forced to work while having no productive property, and no economic independence. That is, a servile person owns nothing (or effectively nothing) and has no choice whatsoever over how much he works or for whom he works. Most ancient civilisations, like Egypt, Greece, and Rome were servile, with servility existing as a defined legal category. That some men were owned by others was as enshrined by law as the ownership of land or cattle.
Let’s put a little Aristotle into the mix. There are two kinds of obedience: from a free subject to a ruler, and from an unfree slave to his master. These are often confused but distinct. For while the former is reasonable, the latter involves no reason and is truly blind.
True authority is neither persuasion nor force. If an officer argues to a soldier why he should obey, then the two are equals, and there’s no chain of command. But if the officer must hold a gun to the man’s head and threaten to execute him lest he do his duty, this isn’t authority either. The soldier obeys because he’s terrified, but not because he respects his superior as a superior. True authority lies in the trust which a subordinate has for the wisdom and expertise of a superior. This only comes if he’s rational enough to understand the nature of what he’s a part of, what it does, and that some people with knowhow must organise it to work properly. A sailor understands he’s on a ship. He understands that a ship has so many complex functions that no one man could know or do them all. He understands that his captain is a wiser and more experienced fellow than he. So, he trusts the captain’s authority and obeys his orders.
I sketch this Aristotelian view of authority because it lets us criticise servility without assuming a liberal social contract idea. What defines slavery isn’t that the slave hasn’t chosen his master. Nor that the slave doesn’t get to argue about his orders. A slave’s duty just is the arbitrary will of his master. He doesn’t have to trust his master’s wisdom, because he doesn’t have to understand anything to be a slave. That is, while a soldier must rationally grasp what the army is, and a citizen must rationally grasp what society is, a slave is mentally passive.
Now, to Belloc's prophecy concerning the fate of the West. The struggle between ownership and labour – between monopoly capitalism and socialism – which existed in his day would, he thought, result in the re-institution of slavery. This would happen through a convergence of interests. The state will take an ever-larger role in protecting workers through a safety net, so that they don't starve when unemployed. It will nationalise key industries; it will tax the rich and redistribute the wealth through welfare. But monopolies will still dominate the private sector.
Effectively, this is slavery. For the worker is protected when unemployed but has entirely lost the ability to choose his employer, or even control his own life. To give an illustration of what this looks like in practice: there are post-industrial towns in Britain where the entire population is either on welfare or employed by a handful of giant corporations (small business having ceased to exist). To borrow from Theodore Dalrymple, the state controls everything about these people, from the house they inhabit to the school they attend. It gives them pocket-money to spend into the private sector dominated by monopolies, and if they want to work, they can only work for monopolists. They fear neither starvation nor a cold night, but they have entirely lost their freedom.
This long preamble has been to show how freedom is swapped for safety in economic terms. But I think there’s more to it. First, the safety may not be economic but emotional. Second, the person willing to enter this swindle must be of a peculiar mindset. He must not know even a glimmer of true independence, lest he fight for it. A dispossessed farmer, for example, who remembers his crops and livestock will fight to regain them. But a man born into a slum, and knowing only wage labour, will crave mere safety from unemployment. Those who don’t know autonomy don’t long for it.
There now exists a troop of companies that market childish goods for adult consumption. They typically do this in one of two ways. The first is offering childish products to adults under the guise of nostalgia. The adult is encouraged to buy things reminding him of his childhood, with the promise that he will relive it. Childish media and products are given an adult spin and remarketed. Toys are rebranded as collectibles. Children's films get unnecessary, adult-oriented sequels or remakes (what Bernardini calls "kidult movies"). Originally child-friendly festivals and theme parks are increasingly marketed to childless adults.
The second is infantilising adult products. Adverts, for example, have gradually replaced the stereotypical busy office workers and exhausted housewives with frolicking kidults. No matter how trivial, every product not related to Christmas is now surrounded by giddy, family-free people engaged in play. The message we're meant to get is that the vacuum cleaner or stapler will free us to act like children. By buying these things, we can create time for the true business of life: bouncing and smiling with one's mouth open.
I believe infantilism to be a kind of mental slavery. In both the above examples, three elements combine: ignorance, anxiety, and mass media. Ignorance and mass media channel anxiety into childishness, and this childishness then binds the victim in servitude to masters who take away his freedom while robbing him in the literal sense.
An artificial ignorance created by modern education is the first parent of the man-child. Absent a proper and classical education, the kidult’s mind is an empty page. Lack of general knowledge separates him from the great achievements of civilisation. He cannot seek refuge in Shakespeare, Dostoyevsky, or Dante, for he has never heard of these. He cannot draw strength from philosophy and religion for the same reason. Neither can he learn lessons from history, for the world begins only with his own birth. Here is a type of mental dispossession parallel to an economic one. Someone utterly ignorant of the answers great people have given to life’s questions will seek only safety, not wisdom.
The second parent is anxiety. Humans have always been terrified of the inevitable decay of their own bodies, followed by death. The wish for immortality is ancient. Yet the modern world, with its scepticism, creates a heightened anxiousness. When all authority and tradition has been deconstructed, there is no ideal for how people ought to live. Without this ideal, humans have no certainty about the future. Medieval people knew that whatever happened, knights fought, villeins worked, and churchmen prayed. Modern man’s world is literally whatever people make of it. It may be utterly transformed in a very short time. And this is anxiety-inducing to all but the most sheltered of philosophers.
Add to this the rise of a selfish culture. As Christopher Lasch tells us, the nineteenth century still carried (in a bastardised way) the ancient ideal of self-sufficiency and virtue. Working and trading were still tied to one's flourishing in society. Since 1960, as family and community have disintegrated, the industrialised world has degenerated into a Hobbesian "war of all against all" – a world of loneliness without parents and siblings, lacking true friends and lovers. When adulthood has become toxic, and maturity means swimming in a sea of dysfunction, vulgarity, substance abuse and pornographic sexuality, it's no surprise some may snap and long for a regression to childhood.
Mass media is the third condition. It floods the void where education and community used to be. The space where general knowledge isn't now gets stamped with fiction, corporate advertising, and state propaganda. These peddle a mass of cliches, stereotypes, and recycled tropes.
My critique of kidults isn’t founded on “good old days” nostalgia, itself a product of media cliches. Fashions, customs, and culture change; and the citizen of today doesn’t have to be a joyless salaryman or housewife to count as an adult. Rather, the man-child phenomenon is a massive transfer of power away from the small and towards the large. The kidult is like an addict, hooked on feelings of cosy fun and nostalgia which are only provided by corporations. These feelings aren’t directed to the good of the kidult but the organisation acting as a dealer. The dealer controls the strength and frequency of the dose to get the wanted behaviour from the addict.
Now we see how kidults can be slaves. First, they’ve traded freedom for safety (false as it is) like Belloc’s proletarians made servile. Unlike the security of a traditional slave, this is an emotional illusion. The man-child believes that there’s safety in the stream of childish images offered to him. He believes that by consuming these the pain of life will cease. Yet man-children get no material or mental benefit from their infantilism. Indeed, they’re fast parted from their money, while getting no skills or virtues in return. The security is merely psychological: a Freudian age regression, but artificially created.
Second, while authority in Aristotle’s sense means to swap another’s judgement for your own, for the sake of a common good you understand; here you submit to another’s judgement for the sake of their private good, which you don’t understand. Organisations seeking only profit or power impose their ideas on the kidult, for their benefit. An immature adult pursues only pleasure, lives only for the present, and thinks only in frivolous stereotypes and cliches implanted during childhood. He’s thus in no position to understand the inner workings of companies and governments. He follows his passions like a sentient puppet obeying an invisible thread, leading always to a hand just out of sight.
In the poem London, William Blake talks about "mind-forg'd manacles". These are the beliefs people hold which constrain their lives in an invisible prison of sorts, for what we think possible or impossible guides our acting. Once mind-forg'd manacles are common to enough people, they form a culture (what is a culture, if not collective ideas on how one should act?). Secondhand childish banalities are such mind-forg'd manacles if we let them determine us wholly. Their "secondhand" nature means the forging has been done for us, and this makes them more insidious than ideas of our own creation. For if what I've said above is true, they threaten to make us servile. If enough people become dependent on secondhand childish banalities, like the boy who met Alec Guinness, then the whole culture becomes servile. Growing up may be painful, but it's a duty to ourselves, that we remain free.