Turning royalty into royalties impoverishes us all

What if we could create a marketplace for relationships, so that – just as we can rent our homes on Airbnb – we had an app that allowed us to sell dinner with our husbands or bedtime with the kids at the market rate?

Marriage is a legally recognised agreement after all, one that has been shown to confer many benefits for health and wellbeing. Why should I not be able to rent my place as wife and mother in my particular family to others who wish to enjoy some of those benefits?

Ryan Bourne of the Cato Institute recently argued that the technology exists to enable us to trade citizenship rights. Calling the right of British nationals to work in the UK’s high-wage economy “an effective property right we own but can’t currently trade”, he suggests we could ease immigration pressures by implementing an Airbnb-style secondary market in working rights.

If we frame citizenship, or marriage, as something owned by an individual, it is simply a set of bureaucratic permissions. Like the right to live in a house, surely this could be traded in a marketplace? And if the technology exists to create a citizenship market, surely we could do the same for marriage? I could sublet my wifedom and nip off for a weekend on the tiles with the proceeds. Why not?

The problem is obvious — my husband and daughter would, not unreasonably, object. She would no more want her bedtime story read by a stranger than my husband would want to share a bed with that stranger.

My marriage is not a good I own but a relationship, created by mutual consent. In a marriage, I give up some of my autonomy, privacy and private property rights by declaring my commitment to the relationship. What I gain is of immeasurable value: a sphere of belonging, the foundation of my existence as a social creature.

Likewise, citizenship implies relations of belonging, both of me to a community but also a community to me. It also implies commitments on behalf of the community of which I am a citizen. And in exchange it requires commitments of me, as a citizen: to uphold the law, to behave according to its customs and so on. As the late Roger Scruton put it in a 2017 speech:

The citizen participates in government and does not just submit to it. Although citizens recognise natural law as a moral limit, they accept that they make laws for themselves. They are not just subjects: they appoint the sovereign power and are in a sense parts of that sovereign power, bound to it by a quasi-contract which is also an existential tie. The arrangement is not necessarily democratic, but is rather founded on a relation of mutual accountability.

Roger Scruton

Just as my husband and daughter have a stake in who is entitled to be called “wife” or “Mummy” in our particular context, so other citizens of a nation have a stake in who is entitled to the rights conferred by citizenship.

In this light we can better understand the revulsion that greeted the actions of the Duke and Duchess of Sussex in trademarking “Sussex Royal” for personal commercial gain. Royalty, after all, does not exist in a vacuum. It is not an intrinsic property of a person, like blue eyes or long legs, but something conferred both by the monarchy and also by the subjects of that monarchy.

As Charles I discovered in 1649, ultimately no king can govern save by the consent of his subjects. Royalty is not a private property, but a relationship. The popular disgust and anger engendered by the Sussexes’ move to transfer their stock of royalty from the relational public sphere to that of private property is in truth anger at their privatising something which does not belong to them but to the whole nation.

In The Question Concerning Technology, writes Josh Pauling, Heidegger argues that technology uncouples humans from what is real, paving the way for a mindset that treats everything as “standing-reserve”, or in other words “resources to be consumed”. For Heidegger, seeing the world thus is dangerous because it flattens all other perspectives:

Commodifying nature and humanity leads us to discard other understandings of being-in-the-world and the practices, beliefs and ideas that accompany them: all aspects of reality are incorporated into the ordering of standing-reserve.

Josh Pauling

My husband’s goodwill would rapidly wear thin were I to Airbnb my role in our family. Similarly, Bourne’s citizenship marketplace fails to consider how the general population would react to seeing fellow citizens renting their right to work to non-citizens and swanning about spending the unearned proceeds. And the goodwill enjoyed by the Duke and Duchess of Sussex while discharging their royal duties has already evaporated, now it transpires they wish to enjoy the privileges of their elevated station without embracing its obligations.

Treated as objects to be exploited, relational meanings wither and die. Treated as dynamic relationships, they are infinitely renewable. In this sense, they are more akin to ecologies in the natural world. In Expecting the Earth, Wendy Wheeler argues that in fact ecologies are systems of meaning: whether at the level of DNA or megafauna, she says, living things deal not in information but in meanings that change dynamically depending on context.

Why does any of this matter? “Modernity is a surprisingly simple deal,”  writes Yuval Noah Harari in Homo Deus. “The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power.” The impressive achievements of modernity might make the loss of meaning seem, to some, a fair exchange.

But if Wheeler is right, meaning is more than an optional seasoning on the mechanistic business of living. In Man’s Search for Meaning, Viktor Frankl observes of his time in Nazi concentration camps that those who felt they had a goal or purpose were also those most likely to survive.

Indeed, the growing phenomenon of “deaths of despair” is driven, some argue, by deterioration in community bonds, good-quality jobs, dignity and social connection — in a word, the relational goods that confer meaning and purpose on life. As Frankl observed, humans need meaning as much as we need air, food and water: “Woe to him who saw no more sense in his life, no aim, no purpose, and therefore no point in carrying on. He was soon lost.”

An order of commerce that treats relational ecologies as objects that can be exploited will exhaust those objects. That is, in the course of its commercial activities it actively destroys one of the basic preconditions for human flourishing: meaning.

The Estonian thinker Ivar Puura has called the destruction of meaning “semiocide”. As concern mounts about the effects of pollution and emissions on the earth, campaigners have called for new laws to criminalise the destruction of ecologies, which they call “ecocide”. Perhaps we should take semiocide more seriously as well.

This piece was originally published at Unherd

John Clare, poet of the Somewheres

Can someone be so much of a Somewhere — so rooted in a place — that the loss of that home could drive them mad? The tragic story of the poet John Clare (1793-1864) would suggest so. A contemporary of the Romantics, Clare was neither an aristocrat like Byron nor a grammar school boy like Wordsworth and Keats, but a farm labourer. And when the Enclosure Acts transformed his birthplace, he was so devastated by the loss of his familiar landscape and way of life that he fell gradually into depression, panic attacks, alcohol abuse and finally psychosis.

Born in 1793 in Helpston, a rural hamlet north of Peterborough, to a barely literate farm labourer father and an illiterate mother, Clare spent most of his working life as a labourer, despite at one point during his lifetime outselling John Keats. Only haphazardly educated, he fell wildly in love with the written word after encountering James Thomson’s The Seasons. He began writing his own verse — at first mainly about the natural world — on whatever scraps of paper he could find, or on his hat when he had no paper.

Clare first sought a publisher in the hope of raising money to stop his parents being evicted from their tenement. When a lucky contact brought him to Taylor & Hessey, his first collection, Poems Descriptive of Rural Life and Scenery, was published in 1820 to wild acclaim.

Clare stands out among the poets of the Romantic era for his understanding of and communion with the natural world he describes. Contemporaries treated the natural world more as emotional stimulus: in Wordsworth’s “Tintern Abbey”, for example, landscape is observed, but with little knowledge. Wordsworth’s “plots of cottage-grounds” are “clad in one green hue”, and this is chiefly an anchor for moral reflection, a means of “hearing oftentimes/The still sad music of humanity”.

Clare, on the other hand, was critical of this mix of ignorance and sentimentality, saying of Keats that “his descriptions of scenery are often very fine but as it is the case with other inhabitants of great cities he often described nature as she appeared to his fancies, and not as he would have described her had he witnessed the things he described”.

Landscape, to Clare, was not a source of high emotion but home, livelihood, work, family and the richness of plant and animal life. His vistas brim with too much knowledge to seem painterly, or to be turned easily into moral metaphor. Colour is shorthand for a natural and farmed landscape intimately known, as in these fields from “A Sunday With Shepherds and Herdboys”:

Square plats of clover red and white
Scented wi’ summer’s warm delight
And sinkfoil of a fresher stain
And different greens of varied grain

In Keats’s famous “Ode to a Nightingale” the bird herself is not even described, serving instead as the focus for reflections on death, history and emotional rapture by a poet “half in love with easeful death”; Clare’s hushed, intimate “The Nightingale’s Nest” is both more prosaic and, in a sense, more faithful to the bird. For Keats, she is a “light-winged Dryad of the trees”. In contrast, Clare describes the materials used to build her nest and, with hushed empathy, the terrified bird:

How subtle is the bird she started out
And raised a plaintive note of danger nigh
Ere we were past the brambles and now near
Her nest she sudden stops — as choaking fear
That might betray her home

There is no need for a moral. For Clare it is enough to observe, then tiptoe away leaving the bird to find her voice again:

We’ll leave it as we found it — safety’s guard
Of pathless solitudes shall keep it still
See there she’s sitting on the old oak bough
Mute in her fears – our presence doth retard
Her joys and doubt turns every rapture chill
Sing on sweet bird may no worse hap befall
Thy visions then the fear that now deceives
We will not plunder music of its dower
Nor turn this spot of happiness to thrall
For melody seems hid in every flower
That blossoms near thy home

Yet Clare’s lack of moralising does not strip his work of emotion. He is as unflinching in his descriptions of the brutality of his world as of its beauty. “The Badger” begins with a description of the animal’s habitat, “A great hugh burrow in the ferns and brakes”, and ends with its death at the hands of a village crowd:

He turns agen and drives the noisey crowd
And beats the many dogs in noises loud
He drives away and beats them every one
And then they loose them all and set them on
He falls as dead and kicked by boys and men
Then starts and grins and drives the crowd again
Till kicked and torn and beaten out he lies
And leaves his hold and cackles groans and dies

His ever-present empathy is with the birds and beasts besieged by humanity, as in “Summer Evening” when he curses the boys who creep into lofts to catch and kill sparrows. He calls on the birds to nest in his house where they will be safe:

My heart yearns for fates like thine
A sparrow’s life’s as sweet as mine

If Clare rises to moralise from his observation of the natural world, it is done without ornament. The last two verses of “To The Snipe” give a reflection at once uplifting and humble:

I see the sky
Smile on the meanest spot
Giving to all that creep or walk or flye
A calm and cordial lot

Thine teaches me
Right feeling to employ
That in the dreariest places peace will be
A dweller and a joy

Clare’s modesty was out-of-step with the mood of the times: Keats wrote of Clare’s poetry that “Images from Nature are too much introduced without being called for by a particular Sentiment”. Romantic poetry often seems as indifferent to the minutiae of the natural world as it is enthralled by the poet’s ability to overlay it with “Sentiment”. This aesthetic was well suited to the political and technological shifts of that age. Whether in poetry or landscape, the movement was away from coexistence with the natural world toward subordinating it to human desires.

In the Middle Ages, much of the arable land in central England was “commons”, which was farmed on a communal basis for a subsistence livelihood. This was the landscape of John Clare’s childhood. But between the 13th and 19th centuries, and accelerating from the Georgian era onward, the land was “enclosed” — that is, turned from common to private property — either by buying the land rights or else forcing enclosure through an Act of Parliament.

Between 1809 and 1820 Enclosure Acts transformed the landscape around John Clare’s birthplace, draining ditches, felling ancient trees and displacing subsistence farmers from once common land. Clare’s 1830s poem “The Lament of Swordy Well” expresses his horror at the process, in the voice of the land itself:

The silver springs grown naked dykes
Scarce own a bunch of rushes
When grain got high the tasteless tykes
Grubbed up trees, banks, and bushes
And me, they turned me inside out
For sand and grit and stones
And turned my old green hills about
And pickt my very bones.

The natural world, which seemed so numinous and eternal in Clare’s early work, is depicted as homeless and starving as a consequence of this exploitation:

The bees flye round in feeble rings
And find no blossom bye
Then thrum their almost weary wings
Upon the moss and die
Rabbits that find my hills turned o’er
Forsake my poor abode
They dread a workhouse like the poor
And nibble on the road

By the age of 30, Clare had six children to feed and his brief fame had dissipated. Displaced from his way of life by enclosures and disturbed by the changing landscape, Clare fell into depression and alcohol abuse. Friends and admirers clubbed together to buy him a cottage three miles from Helpston, with a smallholding, but even this slight move from his birthplace only increased his distress. “The Flitting” captures his desolation. Even the sun, he says, seems lost:

Alone and in a stranger scene
Far far from spots my heart esteems
The closen with their ancient green
Heaths woods and pastures’ sunny streams
The awthorns here were hung with may
But still they seem in deader green
The sun e’en seems to loose its way
Nor knows the quarter it is in

Not long after moving he began to experience hallucinations and was sent to an asylum near London. So desperate was he to return home that four years later he escaped and walked the 70 miles back to his cottage. But he found no solace there, and shortly afterwards he was sent to the Northampton General Lunatic Asylum, where he spent the rest of his life. He died there in 1864, aged 70.

John Clare speaks to us from the other side of an unimaginable gulf. He was so profoundly Somewhere that even David Goodhart’s Somewheres would, to him, seem like Anywheres. His voice, almost modern-sounding, nonetheless hails from an ancient England where the normal livelihood was subsistence farming on land held in common, culture and history were mainly oral and the natural world held a richness of allusion Wordsworth and Keats found in classical mythology.

Yet his politics feel fresh and increasingly urgent, and his empathy for the living world makes him a compelling advocate for change in how we relate to the land that nourishes us — whether via conservation, sustainable farming or land reform.

Clare is both protester and casualty of the Enclosure Acts, as well as a meticulous recorder of what was lost in that founding act of modernity. Enclosure spurred the phenomenal productivity gains of the agricultural revolution, created a labour force for industry — and devastated a whole way of life. His grief and anger at the costs of enclosure, an event largely seen from the perspective of its beneficiaries, remind us that the growing power of individual property rights in the modern era displaced premodern subsistence lifestyles in the United Kingdom as well as in the colonies founded by English explorers overseas.

Clare’s descent into depression and alcoholism is echoed in the shockingly high prevalence of mental health issues and substance abuse in indigenous populations across the world who have been dislocated from their ways of living by modern property-owning relations to the landscape.

UK land ownership today is ever more carefully obfuscated and ever more critical to social and — perhaps — ecological renewal. The industrial capitalist economic model that took root in Clare’s lifetime is now cracking in earnest, along with the ecologies “grubbed up” (like Swordy Well) for “gain”. The “peasant poet” of Northamptonshire has lessons for us today.

This piece was originally published at Unherd

Growth is destroying our prosperity

We started the 2010s reeling from the Great Crash of 2008, and ended the decade with angry populism widespread in the Western world. Today, the global economy limps on more or less as usual, while resentment grows among the “little people” at an economic consensus many feel is rigged without really knowing who is to blame. Over the same period, climate change activism has gone from being a minority pursuit to mainstream debate, occasioning worldwide “school strikes” and, since the beginning of the year, the high-profile and colourful Extinction Rebellion movement.

What these dissatisfactions share is a sense of being trapped in an economic logic whose priorities no longer benefit society as a whole, but that — we are told — cannot be challenged without making things even worse. The foundational premise of that system is continual growth, as measured by GDP (gross domestic product) per capita. Economies must grow; if they do not, then recession sinks jobs, lives, entire industries. Tax receipts fall, welfare systems fail, everything staggers.

But what happens when growth harms societies? And what happens when growth comes at the cost of irreparable harm to the environment?

As Sir David Attenborough put it in 2013, “We have a finite environment – the planet. Anyone who thinks that you can have infinite growth in a finite environment is either a madman or an economist.”

This is the argument of Tim Jackson’s Prosperity Without Growth. The challenge the book sets out is at once simple and staggering. In a finite world, with limited natural resources, how do we deliver prosperity into the future for a human population that keeps on growing?

Jackson, who today is Professor of Sustainable Development at the University of Surrey, and Director of the Centre for the Understanding of Sustainable Prosperity (CUSP), argues that we need to start by scrapping GDP as the core metric of prosperity. As the book puts it: “Rising prosperity isn’t self-evidently the same thing as economic growth.”

The pursuit of growth is also undermining social bonds and overall wellbeing. In 2018, a commission reported on the ‘loneliness epidemic’ that is blighting lives and worsening health across the UK, driven in part by greater mobility of individuals away from family connections. (Mobility of labour, of course, is essential to drive economic growth.)

This year, even The Economist acknowledged that rising growth does not guarantee rising happiness.

If that were not enough, the resources available to supply continued growth are dwindling: “If the whole world consumed resources at only half the rate the US does […] copper, tin, silver, chromium, zinc and a number of other ‘strategic minerals’ would be depleted in less than four decades.” (Prosperity Without Growth)

Rare earth minerals, essential for technologies from circuit boards to missile guidance systems, are projected to be exhausted in less than two decades. 

Inasmuch as the public debate considers these nested dilemmas, the vague sentiment is that technology will save us. The jargon term for this is ‘decoupling’ — that is, the ability of the economy to grow without using more resources, by becoming more efficient. But will decoupling happen?

The theoretical core of Jackson’s book is a detailed unpacking of models that suggest it will not, or that if absolute decoupling is possible it will happen so far into the future we will already have wrecked the climate and run out of everything. Rather than rely on this fantasy, Jackson argues, we must challenge the dependence on growth.

But how? The global economic system depends on growth and in times of recession it is the poorest who suffer first. It is a policy double bind: on the one hand, we must think of the environment, so governments encourage us to buy less, consume less, recycle more and so on. But on the other, they must deliver a growing economy, which means encouraging us to buy more, consume more, keep the economy going. Electorates are, understandably, cynical about the sincerity of this flatly self-contradictory position.

What, then, is the alternative? Jackson is an economist, not a revolutionary firebrand, and his book does not call on us to bring down capitalism. In the second part of Prosperity Without Growth, he instead suggests some quietly radical approaches to bringing the global economy back into the service of human flourishing.

He advocates government intervention to drive much of the change he proposes, including encouraging economies to pivot away from manufacturing, finance and the pursuit of novelty at all costs toward less obviously productive but more human services such as slow food cooperatives, repair and maintenance or leisure services.

He also advocates heavy state support for ecologically-oriented investment. When I contacted him to ask about his book ten years on he spoke positively of the contribution that a “Green New Deal” could make on this front: “It shows a commitment to social and environmental investment that is absolutely essential to achieve a net zero carbon world”, he told me. “Simply put, we just can’t achieve that without this scale of investment, and that form of commitment from Government.”

He also told me he is often criticised for being “too interventionist in relation to the state”, as he puts it. But perhaps (though Jackson does not use the term himself) he might be more fairly described as post-liberal. Prosperity Without Growth is a quiet but excoriating critique of the growing human and ecological costs of liberal economics.

Intriguingly, within Jackson’s proposals lurks another challenge to liberalism, that to date has not been greatly associated with the left: the critique of radical liberal individualism as a social doctrine. Along with state intervention to tweak the economy and drive ecological investment, Jackson argues that governments should promote “commitment devices”: that is, “social norms and structures that downplay instant gratification and individual desire and promote long-term thinking”.

Examples of ‘commitment devices’ include savings accounts and the institution of marriage. Governments should implement policies that clearly incentivise commitment devices, for doing so will promote social flourishing and resilience even as such institutions offer alternative forms of meaning-making to the pursuit of shopping-as-identity-formation.

Thus, we cannot save the earth without reviving some social values and structures today thought of as small ‘c’ conservative: stable marriage, savings, settled and cohesive communities with lower levels of labour mobility.

I asked Jackson whether some of the more vociferous socially liberal proponents of environmental change had cottoned on to these potentially quite conservative implications of his theories. He told me “This is an interesting question, for sure, and one that I don’t think has really been picked up – even by me!” (Except at UnHerd – see here and here for example.) But, he says, it is incumbent on us to set aside political tribalism in the search for solutions to our current dilemmas.

“I believe there are elements of a Burkean conservatism which are profoundly relevant to a politics of the environment, even as I am convinced that the progressive instincts of the left are essential in our response to social and environmental inequality. I see it as incumbent on those working for change both to understand the underlying motivations of different political positions and also to adopt a pragmatic politic in which solutions are suited to the challenges of today rather than the dogma of yesterday.”

Indeed.

This essay was originally published at Unherd

On the censoring of seriousness for children

Our local church runs a monthly service aimed at children, with crafts and without Holy Communion. The team that organises the Friends and Family services are lovely and work very hard to come up with activities and an appealing programme for younger worshippers, and the service is popular with families, many of whom I don’t see at regular services. My daughter (3) loves it.

It’s on the first Sunday of every month, so the first Sunday of Advent coincided with the Friends and Family service. My daughter enjoyed decorating the Christmas tree, making little Christmas crafts and other activities. But one thing puzzled and still puzzles me.

This is one of the songs we were invited to sing. ‘Hee haw, hee haw, doesn’t anybody care? There’s a baby in my dinner and it’s just not fair.’ It’s supposed to be a funny song, from the donkey’s point of view, about the Holy Family in the stable and Jesus in the crib. What I don’t understand is why this should be considered more suitable for children than (say) Away In A Manger.

The former depends, for any kind of impact, on a level of familiarity with the Christmas story that allows you to see it’s a funny retelling and to get the joke. That already makes it more suitable for adults. The latter paints the Christmas scene in simple language and follows it with a prayer that connects the picture with the greater story of the faith it celebrates. The tune is easy to learn and join in with. Why choose the former, with its ironic posture and ugly, difficult tune, over the latter, with its plain language and unforced attitude of devotion?

I’ve wondered for some time what it is about our culture that makes us reluctant to allow children to be serious. Children are naturally reverent: if the adults around them treat something as sacred, even very young children will follow suit without much prompting. This should come as no surprise – the whole world is full of mystery and wonder to a 3-year-old. It is us that fails so often to see this, not the children.

So why do we feel uncomfortable allowing children to experience seriousness? Sacredness? Reverence? How and why have we convinced ourselves that children will become bored or fractious unless even profoundly serious central pillars of our culture, such as the Christmas story, are rendered funny and frivolous?

The only explanation I can come up with is that it reflects an embarrassment among adults, even those who are still observant Christians, about standing quietly in the presence of the sacred. What we teach our children, consciously or unconsciously, is the most unforgiving measure of what we ourselves hold important. But it seems we shift uncomfortably at the thought of a preschool child experiencing the full force of the Christmas story in all its solemnity. Instead we find ourselves couching it in awkward irony, wholly unnecessary for the children but a salve to our own withered sense of the divine.

If it has become generally uncomfortable for us to see reverence in a young child, during Advent, then the Christian faith really is in trouble.

Who gains from the great university scam?

Higher education is big business. Over half of UK young people now attend university, meaning the target first set by Tony Blair 20 years ago has finally been reached. And according to a 2017 report for Universities UK, once you count the (mostly borrowed) money students spend on subsistence, tertiary education generates some £95 billion for the British economy, more than the entire legal sector, the advertising and marketing sector and air and spacecraft manufacturing combined. 

This is true across the country, but its impact is especially noticeable in post-industrial regions. According to a 2017 report, the University of Liverpool alone contributed £652m in gross value added to the Liverpool city region in 2015/16, and supported one in 57 jobs in the region.

Some 11,000 jobs are either directly funded or supported by spending associated with the university — and the University of Liverpool is only one of five or more institutions (depending on how much of the area you count) offering graduate and post-graduate courses in the Liverpool area, meaning the total sum is even greater.

As well as generating jobs and supporting whole industries catering to student life — from nightclubs and cafes to housing rentals – higher education is shaping the very landscape of the cities in which it thrives. As this 2015 report from UCL’s Urban Laboratory shows, universities are increasingly actors in urban development:

“Driven by competition (for reputation, staff and students) in an international marketplace, and released from financial constraints by the lifting of the cap on student fees, [universities] produce locally embedded variants of global higher education models. These assume physical and spatial form within the parameters of distinct, but increasingly similar, city planning and urban regeneration contexts defined by an ‘assemblage of expertise and resources from elsewhere’.”

Some of the money that flows into and through universities and out into local economies of course comes from overseas students, endowments and the like. But to a great extent, these dependent industries, revamped urban landscapes, former factories converted to student accommodation, ancillary services and so on are funded either directly — via government subsidies to higher education — or indirectly, via government-backed student loans.

Though academic research is still heavily subsidised by government via the UK Research and Innovation body, the proportion of direct funding to students has shrunk even as that taken on by students as loans has grown. A January 2019 research briefing from the House of Commons Library stated that the cash value of higher education loans is estimated to be around £20bn by 2023-24. The report also acknowledged that only about half of the money borrowed will ever be repaid, estimating that “The ultimate cost to the public sector is currently thought to be around 47% of the face value of these loans”.

The total cost to the public sector, the report continues, is roughly the same as it was before the funding model changed to scrap maintenance grants and increase tuition fees. That is to say, as the proportion of direct government funding to higher education has been reduced, there has been a corresponding rise in the amount of student debt that will never be repaid and which the government will eventually have to cover.

This money is in all but name a form of government subsidy, funded by government borrowing. But it is counted differently. The briefing notes in passing that “This subsidy element of loans is not currently included in the Government’s main measure of public spending on services and hence does not count towards the fiscal deficit.”

That is to say, billions of pounds are being borrowed by government for disbursement in the higher education sector, and the government already knows much of this will never be paid back. But the money is no longer counted toward the fiscal deficit, as it has been nominally privatised in the form of loans to individual young people.
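The scale of that hidden subsidy follows from simple arithmetic on the figures quoted above. The sketch below is a minimal, illustrative calculation in Python rather than official accounting, and it simply takes the two headline numbers at face value: roughly £20bn of lending a year by 2023-24 and an ultimate cost to the public sector of around 47% of face value.

```python
# Back-of-the-envelope sketch (illustrative only, not official accounting):
# how much of the student lending described above behaves like a hidden subsidy.

annual_lending_bn = 20.0   # projected cash value of higher education loans by 2023-24 (in £bn)
unrepaid_share = 0.47      # estimated ultimate cost to the public sector, as a share of face value

implied_subsidy_bn = annual_lending_bn * unrepaid_share  # roughly £9.4bn a year

print(f"Loans issued per year:  £{annual_lending_bn:.1f}bn")
print(f"Share never repaid:     {unrepaid_share:.0%}")
print(f"Implied annual subsidy: £{implied_subsidy_bn:.1f}bn, excluded from the fiscal deficit")
```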

One might argue that this is unimportant provided the higher education sector is delivering value to those who are nominally its customers — the students. But in 2018, the ONS reported that only 57% of young graduates were in high-skilled employment, a decline of 4.3 percentage points over the decade since the 2008 crash. The ONS speculates that this could reflect “the limited number of high-skilled employment opportunities available to younger individuals and the potential difficulties they face matching into relevant jobs early in their careers”.

Nay-sayers pointed out, when Blair first introduced tuition fees and the 50% graduate target, that the law of supply and demand suggests employers’ willingness to pay a “graduate premium” in wages would shrink as graduates become more plentiful. In 1950 only 17,500 young people graduated from university; but when 1.4 million of them do so, as reported by the House of Commons this year, can they really hope for the same graduate premium?

Results so far suggest that many of them cannot. In a hard-hitting article last August in the New Statesman, Harry Lambert spelled out further the way in which the marketisation of higher education under the Blair rubric has also incentivised grade inflation.

Cui bono, then? Arguably less the students, graduating in ever greater numbers with ever less valuable degrees, than the cities in which they live for three or four years to study, and which have in many cases experienced a renaissance due in large part to the post-Blair expansion of higher education.

In 1981, after the Toxteth riots, Lord Howe advised Margaret Thatcher to abandon the entire city to “managed decline”. In a letter only made available to the National Archives in 2011, following the 30-year rule, Howe wrote:

“We do not want to find ourselves concentrating all the limited cash that may have to be made available into Liverpool and having nothing left for possibly more promising areas such as the West Midlands or, even, the North East. […] I cannot help feeling that the option of managed decline is one which we should not forget altogether.”

Today, Howe’s words remain only as a bitter memory: the regenerated Liverpool city centre hums with tourists, students and shoppers. The Albert Dock area, reimagined from shipping and warehouses to office buildings, shops and leisure, is beautiful, vibrant and popular.

Much of this regeneration has come via the higher education boom. In Liverpool and elsewhere, successive governments have used the higher education sector more or less explicitly as an instrument of regeneration. In effect, government-backed student loans have become part of this: an off-the-books subsidy for depressed post-industrial areas, which have thus been partially rescued from the threat of Thatcherite “managed decline” and reinvented as hubs of the “knowledge economy”, all funded by government debt.

But a conflict of interest lurks beneath this picture. If we work on the assumption that the main beneficiaries of the higher education industry are supposed to be students, then it follows that institutions delivering shoddy teaching and useless degrees should be allowed to fail, as word spreads and students go elsewhere. But what if the main beneficiaries of this industry are in fact the cities regenerated with the borrowed money those students spend there?

In that case, from a policy perspective, the quality of the courses delivered will be less important than that students continue to arrive in their thousands, bringing their borrowed money to the region and spending it on accommodation, lattes, printer paper, fancy dress hire and all the other essentials of student life.

If the aim were indeed less the introduction of market forces than the use of students as a covert form of subsidy, we would surely see market distortions. In order to head off the threat of young people abandoning poor quality higher education, and entice them into shouldering their allotted portion of off-the-books government borrowing, the “graduate premium” would have to be maintained.

And indeed, since Blair’s student attendance target was first introduced, we can see that instead of using market forces to drive up quality the government has conspired with employers to cartelise the world of work. A growing number of roles that were once accessible via on-the-job training have — by government fiat if necessary — been rendered degree-only. Nursing is the classic example, but in 2016 this was even expanded to include the police,  a move so unwelcome in the force that this year Lincolnshire Police Force launched a judicial review against the policy. 

The victims in this situation are the students, who have come of age at a time when to have any hope of snagging a job they are more or less forced to leave their families and shoulder an enormous debt burden — over £50,000 each on average according to the IFS.  They must do so to acquire a degree whose value for money is declining, but which they cannot do without in a cartelised employment climate in which higher education is obligatory even as the grades it confers count for ever less. 

Not only is the government paying for today’s elderly care (and banker bailouts) with borrowing that will fall on tomorrow’s taxpayers, but young people are also being forced to take on huge personal loans to fund degrees; degrees that are less useful as preparations for adult life than as a conduit for indirect subsidies for regional regeneration.

To make matters worse, the government knows that much of this borrowing will never be repaid, which will leave tomorrow’s taxpayer on the hook for yet more billions. It is an accounting fiddle on a gigantic scale, which penalises young people by first saddling them with loans, then devaluing their education, and finally by hiding government borrowing that future taxpayers will somehow have to meet.

Young people already live with the suspicion that overall public sector borrowing is running up a tab today that will be their burden tomorrow. The situation is far worse than they think.

This article originally published at Unherd

Our deteriorating demos

While very high immigration has some economic benefits, at a cultural level it also tends to encourage a society where people draw their identity more from a sense of belonging to particular groups – ethnic subgroups, religions, sexualities and so on – than from identifying with the nation as something worth belonging to. The effect is to replace a democracy’s largest and most vital unit of belonging – the sense of a cohesive national community – with multiple communities united by little more than geography and happenstance.

You might say ‘so what?’ (after all, this is what London is like nowadays and it works okay), but without an overarching sense of community, political solidarity is thin on the ground. Unless people picture their country and feel instinctively that they have a bond with its other citizens – that others are part of the same ‘us’, a group with shared interests within which compromises are desirable to ensure everyone can thrive – political solidarity and compromise are hard to come by. Witness the outpouring of venom from London against others in the nation who disagree with them about the virtues of the European Union.

I’m on dangerous territory here, as this line of thinking slides into ‘blood and soil’ nutterdom if you’re not careful. Some migration is good and healthy; this must go without saying. But the core sense a demos has of itself: can that survive the addition of (say) 25% more people with entirely different traditions and histories? Over time it can certainly adapt to the new inputs, but only if given that time. If a city the size of Hull is added every year, the culture cannot possibly hope to keep up. I wonder if this has contributed to the degraded sense of politics-as-auction-to-interest-groups that has pervaded the national discourse since Blair: that increasingly no-one feels able to speak to the national interest, to the demos as a whole, because the demos’ own sense of itself has been steadily weakening.

I think this is reversible. Not by deporting people or closing the borders. But by a political leader who has the courage to speak to our country as an ‘us’, and to slow the pace of demographic change so that the culture and demos can catch up with what we are now. And who can do this while speaking to the new, nascent ‘us’ of 21st century Britain, to tease out a new sense of solidarity, while taking on the vested interests who undermine it: the gangmasters, property speculators, advocates for Sharia enclaves, policemen who wink at racist sexual exploitation and those in the gutter press who whip up xenophobia. The sweat shop capitalists recruiting overseas for their semi-indentured workforce. The twats who live in digital and geographic echo chambers and think the whole country should be like London. Everyone who tries to sell you identity politics as a substitute for a sense of broad community: the nativists and leftists both. The politicians who flog stale, selfish retail politics when the country is gasping for the cool water of honest debate about real issues.

All these people will tell you a cohesive national culture is a Bad Thing, because they instinctively grasp that it runs counter to their interests. But beware when they also tell you they’re for democracy, because they are against having a demos, without which democracy is just a set of rituals.

Weekend long read: GPS ‘crop circles’

In 2003 science fiction author William Gibson said ‘The future is already here; it’s just not evenly distributed’. I was reminded of that phrase reading ‘Ghost ships, crop circles, and soft gold: A GPS mystery in Shanghai’, my pick for this weekend’s long read from MIT’s Technology Review.

It recounts the discovery of a phenomenon around the port of Shanghai in which the GPS transponders of oceangoing ships have been spoofed, meaning that vessels appear to be in locations where they are not or ‘ghost ships’ appear to be present where in fact no ship exists:

Although the American ship’s GPS signals initially seemed to have just been jammed, both it and its neighbor had also been spoofed—their true position and speed replaced by false coordinates broadcast from the ground. This is serious, as 50% of all casualties at sea are linked to navigational mistakes that cause collisions or groundings.

Mark Harris, MIT Tech Review

AIS transponders on vessels were introduced in order to increase safety and transparency in shipping. But as tracking technology advances, the means of hacking the system has kept pace, for example to obscure the activities of illegal sand dredgers in the Yangtze estuary:

Under the cover of darkness, AIS can be a useful tool for a sand thief. Ships that are not equipped or licensed for sea travel, for example, have been known to clone the AIS systems of seafaring boats to avoid detection.

Nor are sand thieves the only users of hacked AIS technology. In June this year, an oil tanker with a cloned AIS system rammed an MSA patrol boat in Shanghai while trying to evade capture.

Mark Harris

The spoofed GPS signals appeared in a pattern that analysts began to call ‘crop circles’, centred on the Huangpu river near Shanghai and affecting not just ships but all GPS signals. Analysts are still unsure what is causing it. High-tech hacking to conceal illegal resource extraction or oil shipping? New forms of experimental weaponry?

The article leaves the conclusion open but one thing is clear: the age of big data will bring with it not just the potential for advances in medical research and social innovation (or surveillance) but also for unforeseen new kinds of crime and even warfare.

This article originally published at Unherd

Remainers are the ones longing for empire

In his valedictory speech as outgoing European Council President, Donald Tusk described Brexit as a delusion driven by the foolish nostalgia of those Brits still “longing for the empire”. His words prompted the usual harrumphing, but the truth is he has it precisely backwards. It is not Brexiters who are chasing an imperialist high, but those devoted to the European Union.

Since its founding, the EU has self-mythologised as a project of peace, whose principal aim is to prevent a repeat of the two World Wars of 1914 and 1939. The basis for this argument tends to be a notion that the World Wars were caused by an excess of “nationalism”, with the aggressive and expansive German identity promoted by the Nazis held up as the primary exhibit, and that by diluting the power of Europe’s nation states nationalism will also be attenuated.

Lately, despite its convoluted and multivariate origins, the First World War has also been recruited by European leaders as a cautionary tale against nationalism. But the origin of the Second World War can just as reasonably be described as a multi-sided jockeying for power between imperial powers.

And as Yoram Hazony has argued in The Virtue of Nationalism, Hitler was less a nationalist than an imperialist, who sought to expand German-controlled territory and as such was resisted by the rival empires of Britain, the United States and other allies. That is to say, the two World Wars were arguably more driven by the competing interests of imperial players than an excess of national identification as such.

In the course of the horrific bloodshed between 1914 and 1945, these imperial powers lost, or began the irreversible process of losing, their empires. The British Empire was at its greatest, not to mention most crisis-ridden, after the end of the First World War, and by the end of the Second was exhausted to the point where it no longer had either the will or the resources to sustain its imperial reach.

The international order that replaced the Old World empires from 1945 until relatively recently was, in effect, an empire of American-influenced rules underpinned by American military and economic dominance. And in this new age of Pax Americana, international conventions established the right of nations to self-determination. It was no longer the done thing to invade countries halfway round the world for the purpose of grabbing resources, extending geopolitical influence and/or “civilising” the natives.

With no one overseas to colonise, what happened to the old ruling bureaucracies of the formerly imperial nations of Europe? What now for those educated with imperial dreams and a global vision, trained from a young age to run international business and political institutions, dreaming of rule across vast territories and hundreds of millions of benighted souls in need of guidance?

The solution they came up with was to colonise one another. To console themselves for the loss of the riches and ready supply of servants in their overseas colonies, the washed-up post-imperial nations of Europe agreed to pool their reach, influence and unwashed natives into a kind of ersatz empire.

It did not greatly matter whether the natives in question liked the idea or not, as the pooling was undertaken largely without public discussion and in practice (to begin with at least) made little difference to their everyday lives. Rather, the extension of ‘reach’ and ‘influence’ was largely a bureaucratic one, harmonising rules on the kind of trade and manufacturing standards which most ordinary people care very little about.

The result provided an imperial buzz for a cadre of civil servants, who got to dictate standards on the minutiae of countless areas of commerce for hundreds of millions of people rather than mere tens (and enjoy the perks of a colossal corporate lobbying industry in the process).

Even better, they could do all this without any of the demonstrable dangers of the kind of overheated jingoism that came with the style of imperialism that ended in bloodshed with the two world wars. A kind of diet imperialism, if you like: all the fun of civilising the heathens, with none of the guilt.

Their diet empire now constituted, the post-imperial civil servants of each EU member state could enjoy something of the lavish transnational lifestyle, money-no-object pageantry and grand entertaining they missed out on by the unfortunate fact of having been born too late for a career enjoying absolute power in the colonies while feathering their own nests. Indeed, the strange disappearance of a 2014 report on corruption within EU institutions suggests the diet imperialism of Europe offers ample opportunities of the nest-feathering variety.

Those in the administrative class who missed out on the opportunities for self-enrichment in the prewar empires can enjoy instead the huge and relatively unaccountable sums of money that flow around the European Union’s various budgets.

Indeed, even when misbehaviour tips over into outright criminal activity it can sometimes go unpunished, as was the case with IMF head Christine Lagarde, who received a criminal conviction in 2016 for negligence over inappropriate payouts while in the French Government but was nonetheless installed this year as head of the European Central Bank.

The administrative empire also delivers a servant class, at a scale appropriate to the post-imperial nostalgia it serves to alleviate. The debate around the Brexit referendum was full of dire warnings about the looming loss of staff to (among other things) wipe bottoms, look after children, pick fruit  and make lattes.

These laments strongly hint at the preoccupations of a colonial class reluctant in the extreme to let go of a rich supply of subaltern masses whose services were rendered affordable by the expansion of the labour market through freedom of movement.

It is not just the servants. The prospect of losing the European extension to their shrunken, empire-less British geopolitical self-image cuts to the heart of our modern governing class. As one would expect, then, those lamenting Britain’s post-Brexit loss of “standing” or evolution into a “laughing stock” (who cares?) are not the supposedly imperialist and thin-skinned Brexiters but those who wish to remain. Because in their view the only available modern source of the suitably elevated pomp, influence and imperial “standing” to which they feel entitled is our membership of the EU.

Paradoxically, in the act of accusing Brexiters of the imperial nostalgia of which they themselves are guilty, the Remain Europhiles have hit on a term which is more accurate than they realise for their Brexiter foes: Little Englanders. As has been pointed out elsewhere, the original Little Englanders were anti-imperialist, and wanted the borders of the United Kingdom to stop at the edges of the British Isles.

The epithet tends to be used against Brexiters to imply jingoistic and probably racist imperial aspirations, but this is the opposite of what it meant when first used. And taken in its original sense, calling Brexiters Little Englanders is entirely accurate: they would like the borders of the nation to which they belong to be at the edge of the British Isles, not along the edge of Turkey or Russia.

Should they get their way, this will present the United Kingdom with the prospect of life as an independent nation of modest size. We can then look forward to a future going about our business much reduced from the giddy, extractive and racist highs of the early twentieth century but hopefully more stable, more content with ourselves and, importantly, perhaps even finally at ease with the loss of British imperial reach.

For the imperialist nostalgists of Remain, though, unable to reconcile themselves to the notion of the United Kingdom as anything but a world power, this possibility is anathema. The argument tends to be that unless we join a large power bloc we will be ground to dust between them. Gideon Rachman argued recently in the FT  that “the EU needs to become a power project”, saying that future geopolitics will be a contest between four or five large blocs including China and the US and the individual nations of Europe cannot hold a candle to these behemoths.

But must this necessarily be so? Rachman’s future is just a projection, and many projections – such as Fukuyama’s famous one about the “end of history” – have been proved wrong by subsequent events. Admittedly, a multipolar future seems likely. But any age of competing superpowers has always also contained smaller nations that managed to avoid absorption into a larger empire by one means or another. Why should Little England not be one of them?

The only thing holding us back from a post-Brexit and doubly post-imperial future, at ease with our reduction in stature and ready for a new chapter in our national history, is the imperial nostalgia of the Europhiles.

This post originally published at Unherd

Stop telling us feminism means making women work more

Figures released recently by the ONS show that 3 in 4 mothers with dependent children are now in work, more than ever before. Second-wave feminism has long treated women’s entry into the workplace as an unalloyed good. The ONS numbers, and the rise in the number of working women, were met in just this way by Claire Cohen at the Telegraph, who writes that this rise is ‘surely good news, but we need to examine what sort of work these women are doing. Have they been forced to cut their hours to care for their children, as one in three women told the ONS they had compared with just one in 20 men?’

Cohen’s turn of phrase suggests millions of women chased weeping from rewarding careers into resentful servitude wiping bottoms and scrubbing doorsteps. Children are presented as a burden, or an obstacle to the more satisfying business of slaving over a hot spreadsheet. The ONS report is more circumspect, observing rather more neutrally: “Almost 3 in 10 mothers (28.5%) with a child aged 14 years and under said they had reduced their working hours because of childcare reasons. This compared with 1 in 20 fathers (4.8%).” So are women, as Cohen puts it, really being ‘forced’ by the inconvenient demands of dependent children to crush their career aspirations under the burden of school pickups and bottom-wiping?

Back in 2001, sociologist Catherine Hakim presented a multidisciplinary approach to understanding women’s divergent work-related choices in the 21st century: ‘preference theory’. She showed how five transformations have coincided to give the majority of women in advanced liberal economies more choice about their lives than ever before: contraception, equal opportunities, the expansion of white-collar work, the creation of jobs for secondary earners and a general increase in the importance of individual choice within liberalism.

Hakim argues that in the context of these transformations, women’s greater freedom is expressed in three main preference clusters: home-centred, work-centred or adaptive. In other words, given the choice, around 20% of women will choose a home-centred life, 20% of women will choose a work-centred one and the rest – the vast majority – will prefer a mix of both.

The ‘adaptive’ group, she suggests, is the most diverse, containing women who want to combine work and family as well as career ‘drifters’ and those who would prefer not to work but are compelled to from economic necessity. As a group they are responsive to social policy, employment policy, equal opportunities legislation, economic cycles and other policy changes governing areas such as school timetables, part-time working, childcare and so on.

Though Hakim’s work on preference theory is nearly two decades old, Equalities Office statistics released last week support her tripartite breakdown of women’s preferences: around 20% of UK mothers are working full-time, 20% are not working at all, and the rest fall somewhere in between.

The Equalities Office sees women’s divergent employment behaviour post-baby as a bad thing, with the introduction full of discussion of the ‘large pay penalties’ experienced by women with children, and the ways in which taking time out to propagate the species is ‘damaging for career progression’. The implication throughout is that women are compelled through the unfortunate demands of motherhood to cramp their soaring career dreams to accommodate the exigencies of dependent children. The report as a whole is full of bright ideas for policy levers that could be applied in one way or another to equalise this post-baby burden between the sexes and align the working patterns of mothers and fathers. But if Hakim is right, is it not possible that today’s statistics just reflect the way women prefer to work?

Flexible working, maternity leave, good-quality childcare and so on are excellent things and should be celebrated. But so is not feeling pressured by the entire aggregate voice of the policy and media machine to increase our labour-force participation at the expense of time with our families. It should not be beyond the capacity of commentators and policy wonks to accept that plenty of us are working about as much as we’d like to, and that we regard other things as being as important as, or more important than, work: for example, the ability to flex a schedule to nurse a convalescent toddler, rather than chasing her out to the childminder miserable and full of snot. Or (God help me) sewing Halloween costumes. These things matter too, even if no MP can take credit for them and they do not show up in the sainted GDP stats.

Rather than presenting women’s part-time preferences as evidence of the hill feminism still has to climb, or the malign depredations of sexism, those good people at the Equalities Office and elsewhere might turn their minds to ways in which the diverse range of mothers’ work and family priorities could be better supported – including those choosing to work less.

Why liberal feminists don’t care

A society that venerates health, youth and individual autonomy will not much enjoy thinking about birth or death. We are born helpless and need years of care until we reach the happy state of health and autonomy. At the other end of life, the same often applies: the Alzheimer’s Society tells us there are some 850,000 dementia patients in the UK, a figure set to pass a million by 2025 as life expectancy continues to rise.

If we are reluctant to dwell on the reality of human vulnerability at either end of life, we are unwilling to give much thought to its corollary: that (somewhere safely hidden from the more exciting business of being healthy, youthful and autonomous) there must be people caring for those who are unable to do it themselves. Someone is wiping those bottoms.

Traditionally, this job of caring for the very old and the very young has been “women’s work”. To a great extent, it still is: the OECD reports that, worldwide, women do between two and ten times as much caring work as men.

In the UK, this tends in statistics to be framed as “unpaid work”, a sort of poor relation of the economically productive type that happens in workplaces and contributes to GDP.

Carers UK suggests there are around 9 million people caring for others in the UK part- or full-time, of whom up to 2.4 million are caring for both adults and their own children. Women carry out the lion’s share of this work: 60% according to the ONS. Full-time students do the least and, unsurprisingly, mothers with babies do the most. Among those in employment, older women carry the heaviest load, with women in the 50-60 bracket twice as likely as their male counterparts to be carers, whether of a vulnerable adult, a partner, a child or a grandchild.

Second-wave feminism pushed hard against the pressure women experience to take on this work of caring. Within this variant of liberalism, caring work is routinely framed as a burden that imposes an economic “penalty” while harming the economy by keeping skilled women away from the workplace. The OECD report cited above states: “The gender gap in unpaid care work has significant implications for women’s ability to actively take part in the labour market and the type/quality of employment opportunities available to them.”

The implication is that, once freed of this obligation, women can then pursue more fulfilling activities in the workplace.

So what does this liberation look like in practice? According to a 2017 report by the Social Market Foundation, women in managerial and professional occupations are the least likely to provide care, as are people with degree qualifications. The number of people in routine occupations who also provide more than 20 hours a week of care in their own homes is far higher than the number in intermediate or professional occupations.

In other words, higher-earning women are to a far greater extent able to outsource the wiping of bottoms to less well-off people, who are themselves typically women: 90% of nurses and care workers are female.

These women are then too busy to wipe the bottoms of their own old and young, who are sent into institutional care. Such institutions are typically staffed by women, often on zero-hours contracts, paid minimum wage to care for others all day before going home to do so for their own babies and elderly. The liberation of women from caring is in effect a kind of Ponzi scheme.

This is a problem for our liberal society, for two interlocking reasons. Firstly, the replacement of informal family-based care with a paid, institutional variety renders caring impersonal, in a way that invites cruelty. Indeed, cases of care home abuse are well documented, and their number is rising: the CQC received more than 67,500 such reports in 2018, an increase of 82 per cent over the already too high 2014 figure of 37,060.

It is difficult to see how this could be otherwise. Caring for those who are physically or mentally incapacitated is emotionally testing even when we love those we care for. An exhausted worker on a zero-hours contract, paid the minimum wage to perform more home visits than she can manage in the allotted day, is unlikely to have a great store of patience to begin with, let alone when faced with a refractory “client”. The entire system militates against kindness.

Secondly, and relatedly, it turns out that the informal, traditionally female networks in which caring for the young and old once took place were actually quite important. Those networks also ran church groups, village fetes, children’s play mornings – all the voluntary institutions that form the foundation of civil society.

When caring is treated as “unpaid work” and we are encouraged to outsource it in favour of employment, no one of adult working age has time for voluntary civil society activities any more. If the number of people caring informally for relatives is waning, replaced by institutional care, so is voluntarism: between 2005 and 2015 alone there was a 15% drop in the number of hours donated (ONS).

The result is loneliness. Almost 2.5m people aged between 45 and 64 now live alone in the UK, nearly a million more than two decades ago. Around 2.2 million people over 75 live alone, some 430,000 more than in 1996. In 2017, the Cox Commission on loneliness described it as “a giant evil of our time”, stating that a profound weakening of social connections across society has triggered an “epidemic” of loneliness that is having a direct impact on our health.

Several generations into our great experiment in reframing caring as a burden, we are beginning to count the cost of replacing mutual societal obligations with individual self-fulfilment: an epidemic of loneliness, abuse of the elderly and disabled in care homes, substandard childcare. A society liberated from caring obligations is, with hindsight, a society liberated from much that was critically under-valued.

What is the alternative? Some would prefer a more communitarian approach to caring for the old and the young. Giles Fraser recently wrote on this site that caring for the elderly should be the responsibility of their offspring:

“Children have a responsibility to look after their parents. Even better, care should be embedded within the context of the wider family and community. […] Ideally, then, people should live close to their parents and also have some time availability to care for them. But instead, many have cast off their care to the state or to carers who may have themselves left their own families in another country to come and care for those that we won’t.”

These are strong words and there is much to agree with, but the barest glance at the statistics shows that in practice what that means is “women have a responsibility to look after their parents”.

If we are to count the costs of liberating society from mutual caring obligations, we must also count the benefits, as well as who enjoyed them. Society once encouraged men to seek worldly success, underpinned by the imposition of an often-suffocating domestic servitude on women.

Liberalism blew this out of the water by declaring that in fact both sexes were entitled to seek some form of worldly activity and fulfilment. It is not enough to point to the negative side effects of this change and say: “Someone needs to resume these mutual caring obligations or society will disintegrate.”

To women well accustomed to the widespread tacit assumption that it is they who will pick up those underpants, wash up that saucepan, pack that schoolbag and so on, this sounds a lot like a stalking horse for the reversal of societal changes that, on balance, most of us greatly appreciate. In truth no one, whether liberal or post-liberal, wants to confront the enormous elephant that liberal feminism left in society’s sitting room: the question of who cares. Who, now that we are all self-actualising, is going to wipe those bottoms? There are no easy answers.

This article was first published in Unherd