Latest Posts

What's wrong with Global Capitalism?

Global capitalism, the current epoch in the centuries-long history of the capitalist economy, is heralded by many as a free and open economic system that brings people from around the world together to foster innovation in production, to facilitate the exchange of culture and knowledge, to bring jobs to struggling economies worldwide, and to provide consumers with an ample supply of affordable goods.

But while many may enjoy the benefits of global capitalism, others around the world — in fact, most — do not.

The research and theories of sociologists and intellectuals who focus on globalization, including William I. Robinson, Saskia Sassen, Mike Davis, and Vandana Shiva, shed light on the ways this system harms many.

GLOBAL CAPITALISM IS ANTI-DEMOCRATIC

Global capitalism is, to quote Robinson, “profoundly anti-democratic.” A tiny global elite decides the rules of the game and controls the vast majority of the world’s resources. In 2011, Swiss researchers found that just 147 of the world’s corporations and investment groups controlled 40 percent of corporate wealth, and that just over 700 controlled nearly all of it (80 percent). This puts the vast majority of the world’s resources under the control of a tiny fraction of the world’s population. Because political power follows economic power, democracy in the context of global capitalism can be nothing but a dream.

USING GLOBAL CAPITALISM AS A DEVELOPMENT TOOL DOES MORE HARM THAN GOOD

Approaches to development that sync with the ideals and goals of global capitalism do far more harm than good. Many countries that were impoverished by colonization and imperialism are now impoverished by IMF and World Bank development schemes that force them to adopt free trade policies in order to receive development loans.

Rather than bolstering local and national economies, these policies pour money into the coffers of global corporations that operate in these nations under free trade agreements. And because development has focused on urban sectors, hundreds of millions of people around the world have been pulled out of rural communities by the promise of jobs, only to find themselves un- or under-employed and living in densely crowded and dangerous slums. In 2011, the United Nations Habitat Report estimated that 889 million people—or more than 10 percent of the world’s population—would live in slums by 2020.

THE IDEOLOGY OF GLOBAL CAPITALISM UNDERMINES THE PUBLIC GOOD

The neoliberal ideology that supports and justifies global capitalism undermines public welfare. Freed from regulations and most tax obligations, corporations made wealthy in the era of global capitalism have effectively stolen social welfare, support systems, and public services and industries from people all over the world. The neoliberal ideology that goes hand in hand with this economic system places the burden of survival solely on an individual’s ability to earn money and consume. The concept of the common good is a thing of the past.

THE PRIVATIZATION OF EVERYTHING ONLY HELPS THE WEALTHY

Global capitalism has marched steadily across the planet, gobbling up all land and resources in its path.

Thanks to the neoliberal ideology of privatization, and the global capitalist imperative for growth, it is increasingly difficult for people all over the world to access the resources necessary for a just and sustainable livelihood, like communal space, water, seed, and workable agricultural land.

THE MASS CONSUMERISM REQUIRED BY GLOBAL CAPITALISM IS UNSUSTAINABLE

Global capitalism spreads consumerism as a way of life, which is fundamentally unsustainable. Because consumer goods mark progress and success under global capitalism, and because neoliberal ideology encourages us to survive and thrive as individuals rather than as communities, consumerism is our contemporary way of life. The desire for consumer goods and the cosmopolitan way of life they signal is one of the key “pull” factors that draws hundreds of millions of rural peasants to urban centers in search of work.

Already, the planet and its resources have been pushed beyond their limits by the treadmill of consumerism in Northern and Western nations. As consumerism spreads to more newly developed nations via global capitalism, the depletion of the earth’s resources, waste, environmental pollution, and the warming of the planet are increasing to catastrophic levels.

HUMAN AND ENVIRONMENTAL ABUSES CHARACTERIZE GLOBAL SUPPLY CHAINS

The globalized supply chains that bring all of this stuff to us are largely unregulated and systemically rife with human and environmental abuses. Because global corporations act as large buyers rather than producers of goods, they do not directly hire most of the people who make their products. This arrangement frees them from any liability for the inhumane and dangerous work conditions where goods are made, and from responsibility for environmental pollution, disasters, and public health crises. While capital has been globalized, the regulation of production has not. Much of what passes for regulation today is a sham, with private industries auditing and certifying themselves.

GLOBAL CAPITALISM FOSTERS PRECARIOUS AND LOW-WAGE WORK

The flexible nature of labor under global capitalism has put the vast majority of working people in very precarious positions. Part-time work, contract work, and insecure work are the norm, none of which bestow benefits or long-term job security upon people. This problem crosses all industries, from garment and consumer-electronics manufacturing to academia, where most professors at U.S. colleges and universities are hired on a short-term basis for low pay. Further, the globalization of the labor supply has created a race to the bottom in wages, as corporations search for the cheapest labor from country to country and workers are forced to accept unjustly low wages or risk having no work at all. These conditions lead to poverty, food insecurity, unstable housing and homelessness, and troubling mental and physical health outcomes.

GLOBAL CAPITALISM FOSTERS EXTREME WEALTH INEQUALITY

The hyper-accumulation of wealth experienced by corporations and a selection of elite individuals has caused a sharp rise in wealth inequality within nations and on the global scale. Poverty amidst plenty is now the norm. According to a report released by Oxfam in January 2014, half of the world’s wealth is owned by just one percent of the world’s population. At 110 trillion dollars, this wealth is 65 times as much as that owned by the bottom half of the world’s population. The fact that 7 out of 10 people now live in countries where economic inequality has increased over the last 30 years is proof that the system of global capitalism works for the few at the expense of the many. Even in the U.S., where politicians would have us believe that we have “recovered” from the economic recession, the wealthiest one percent captured 95 percent of economic growth during the recovery, while 90 percent of us are now poorer.

GLOBAL CAPITALISM FOSTERS SOCIAL CONFLICT

Global capitalism fosters social conflict, which will only persist and grow as the system expands. Because capitalism enriches the few at the expense of the many, it generates conflict over access to resources like food, water, land, and jobs. It also generates political conflict over the conditions and relations of production that define the system, such as worker strikes, popular protests and upheavals, and protests against environmental destruction. Conflict generated by global capitalism can be sporadic, short-term, or prolonged, but regardless of duration, it is often dangerous and costly to human life. A recent and ongoing example is the conflict surrounding the mining of coltan and other minerals in Africa for use in smartphones, tablets, and other consumer electronics.

GLOBAL CAPITALISM DOES THE MOST HARM TO THE MOST VULNERABLE

Global capitalism hurts people of color, ethnic minorities, women, and children the most. The history of racism and gender discrimination within Western nations, coupled with the increasing concentration of wealth in the hands of the few, effectively bars women and people of color from accessing the wealth generated by global capitalism. Around the world, ethnic, racial, and gender hierarchies influence or prohibit access to stable employment. Where capitalist-based development occurs in former colonies, it often targets those regions because the labor of those who live there is “cheap” by virtue of a long history of racism, subordination of women, and political domination. These forces have led to what scholars term the “feminization of poverty,” which has disastrous outcomes for the world’s children, half of whom live in poverty.

Women in gender-equal countries have better memory

Let’s test you: read the title above once, then cover it and write down, word for word, what you remember. Having difficulties? How well you do may be down to which country you live in.

That’s according to a new study, published in Psychological Science, involving an impressive 200,000 women and men from 27 countries across five continents. It revealed that women from more conservative countries performed worse on memory tests than those from more egalitarian countries.

Demographics expert Eric Bonsang and his colleagues analysed national survey data from individuals above the age of 50. They used existing data from cognitive tests measuring episodic memory (memory of autobiographical events). These involved recalling, within one minute, as many as possible of ten words read out by a researcher, either immediately or after a short delay. The team rated each country’s level of gender equality by looking at the proportion of people agreeing with the statement: “When jobs are scarce, men should have more right to a job than women.”

Women outperformed men on memory in gender-egalitarian countries such as Sweden, Denmark, The Netherlands, the US and most European countries. However, in Ghana, India, China, South Africa and some more gender-traditional European countries (such as Russia, Portugal, Greece and Spain) the pattern reversed. Women in these countries performed worse than men – which was exactly what the researchers had predicted. Interestingly, men in egalitarian countries also scored better than men in conservative countries (but not by as much).

The findings did not depend on world region or the countries’ economic development (gross domestic product per capita in 2010). A factor that may be at play, however, is that modern countries (such as many of the gender-equal ones above) have better health benefits. Older adults may simply be healthier. But that doesn’t necessarily explain the observed gender differences – the study after all found that the effect was stronger for women than for men.

The authors instead argue that a society’s attitudes to gender roles determine which behaviours and characteristics are deemed appropriate for men and women. In turn, these social expectations influence women’s (and men’s) life goals, occupational choices and experiences. As a result, women in more gender-traditional countries may have less exposure to cognitively stimulating activities such as those involved in education and work. Participation in education and work indeed explained 30% of the findings.

Damaging stereotypes

While the study provides some evidence that attitudes based on stereotypes do shape our abilities, a full test of this theory would require a study of aptitudes which are stereotypically considered feminine – such as social sensitivity or linguistic ability.

For example, would men in gender-traditional nations underperform on tests measuring social sensitivity, compared to women? A study conducted on American students showed just that. It may indeed be that this effect is even larger in more conservative countries.

The results of this study were explained in terms of “stereotype threat”, a fear of doing something that would confirm or reinforce the negative traits typically associated with members of stigmatised groups. Say you are a woman sitting a maths test. The common perception that women are not good at maths may play on your mind and your score may suffer as you struggle to concentrate. The fear takes away our cognitive resources and leads to underperformance on tasks deemed challenging for the stereotyped group.

This effect is very powerful and has been shown in a wealth of studies. When reminded of negative stereotypes, women have been shown to underperform on maths tests, or African Americans on tests measuring intellectual ability. Indeed the new study could be interpreted in terms of stereotype threat theory.

We’ve even seen the neurological underpinnings of this effect. Our new study, published in Frontiers in Aging Neuroscience, asked a group of older participants to read an article about memory fading with age (age stereotype). We showed that, as a result, their reaction times in a cognitive task were delayed. What’s more, brain wave activity in these individuals indicated that their thoughts about themselves were more negative. This was seen in data from electroencephalography (EEG), which uses electrodes to track and record brainwave patterns.

Our study shows that short-term exposure to negative stereotypes has detrimental effects on cognitive functioning. Similar processes may have taken place in women continually exposed to negative gender and age stereotypes in gender-conservative countries – explaining their underperformance on the memory test.

What makes a country sexist?

Another consideration which future studies should take into account is a country’s wider political system – not just its gender attitudes. One theory suggests that modernisation leads progressively to democratisation and liberalisation – including of attitudes to gender roles. A society’s heritage, whether political or religious, influences its values.

Indeed, our studies on cross-cultural attitudes to women and men show that they are more liberal in longstanding democracies such as the UK than in countries transitioning to democracy (such as Poland and South Africa). We found that gender attitudes were also affected by the preceding political systems: they were more conservative in post-apartheid South Africa and less conservative in post-communist Poland. So national histories of institutionalised inequality (apartheid) vs forced emancipation (communism) have left a long-lasting impact on national levels of sexism.

Perhaps not coincidentally, some of the longest-standing democracies in the new study happen to be the ones which are more gender-egalitarian. As my research suggests, both democratisation and the reduction of stereotype threat – especially through the mass media, such as advertising involving non-traditional gender roles – are important. These efforts should be our focus in bringing greater equality across a range of skills for women and men across the globe.

Introduction to Ethical Egoism

Ethical egoism is the view that each of us ought to pursue our own self-interest, and no-one has any obligation to promote anyone else’s interests. It is thus a normative or prescriptive theory: it is concerned with how we ought to behave. In this respect, ethical egoism is quite different from psychological egoism, the theory that all our actions are ultimately self-interested. Psychological egoism is a purely descriptive theory that purports to describe a basic fact about human nature.

ARGUMENTS IN SUPPORT OF ETHICAL EGOISM

1. Everyone pursuing their own self-interest is the best way to promote the general good.

This argument was made famous by Bernard Mandeville (1670-1733) in his poem The Fable of the Bees, and by Adam Smith (1723-1790) in his pioneering work on economics, The Wealth of Nations. In a famous passage Smith writes that when individuals single-mindedly pursue “the gratification of their own vain and insatiable desires” they unintentionally, as if “led by an invisible hand,” benefit society as a whole. This happy result comes about because people generally are the best judges of what is in their own interest, and they are much more motivated to work hard to benefit themselves than to achieve any other goal.

An obvious objection to this argument, though, is that it doesn’t really support ethical egoism. It assumes that what really matters is the well-being of society as a whole, the general good.

It then claims that the best way to achieve this end is for everyone to look out for themselves. But if it could be proved that this attitude did not, in fact, promote the general good, then those who advance this argument would presumably stop advocating egoism.

Another objection is that what the argument states is not always true.

Consider the prisoner’s dilemma, for instance. This is a hypothetical situation described in game theory. You and a comrade (call him X) are being held in prison. You are both asked to confess. The terms of the deal you are offered are as follows:

  • If you confess and X doesn’t, you get 6 months and he gets 10 years.
  • If X confesses and you don’t, he gets 6 months and you get 10 years.
  • If you both confess, you both get 5 years.
  • If neither of you confesses, you both get 2 years.

Now here’s the problem. Regardless of what X does, the best thing for you to do is confess: if he doesn’t confess, you’ll get a light sentence; and if he does confess, you’ll at least avoid getting totally screwed! But the same reasoning holds for X as well. Now according to ethical egoism, you should both pursue your rational self-interest. But then the outcome is not the best one possible. You both get five years, whereas if both of you had put your self-interest on hold, you’d each only get two years.

The point of this is simple. It isn’t always in your best interest to pursue your own self-interest without concern for others.
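To see the arithmetic behind this, here is a minimal sketch in Python (the payoffs are taken from the list above; the dictionary and function names are just illustrative). It shows that confessing minimises your sentence whatever X does, yet mutual confession leaves both of you worse off than mutual silence:

```python
# A toy model of the prisoner's dilemma payoffs described above.
# Sentences are in years (6 months = 0.5); names are illustrative only.
SENTENCES = {
    # (your choice, X's choice): (your sentence, X's sentence)
    ("confess", "stay silent"): (0.5, 10),
    ("stay silent", "confess"): (10, 0.5),
    ("confess", "confess"): (5, 5),
    ("stay silent", "stay silent"): (2, 2),
}

def best_response(x_choice):
    """Return the choice that minimises your own sentence, given X's choice."""
    return min(("confess", "stay silent"),
               key=lambda mine: SENTENCES[(mine, x_choice)][0])

# Whatever X does, confessing is your best individual move...
assert best_response("stay silent") == "confess"   # 6 months beats 2 years
assert best_response("confess") == "confess"       # 5 years beats 10 years

# ...yet if both of you follow that self-interested logic, you each get
# 5 years, worse than the 2 years each from mutual silence.
print(SENTENCES[("confess", "confess")])           # (5, 5)
print(SENTENCES[("stay silent", "stay silent")])   # (2, 2)
```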

2.  Sacrificing one’s own interests for the good of others denies the fundamental value of one’s own life to oneself.

This seems to be the sort of argument put forward by Ayn Rand, the leading exponent of “objectivism” and the author of The Fountainhead and Atlas Shrugged.  Her complaint is that the Judeo-Christian moral tradition, which includes, or has fed into, modern liberalism and socialism, pushes an ethic of altruism.  Altruism means putting the interests of others before your own.  This is something we are routinely praised for doing, encouraged to do, and in some circumstances even required to do (e.g. when we pay taxes to support the needy).  But according to Rand, no-one has any right to expect or demand that I make any sacrifices for the sake of anyone other than myself.

A problem with this argument is that it seems to assume that there is generally a conflict between pursuing one’s own interests and helping others.

In fact, though, most people would say that these two goals are not necessarily opposed at all. Much of the time they complement one another. For instance, one student may help a housemate with her homework, which is altruistic. But that student also has an interest in enjoying good relations with her housemates. She may not help everyone in every circumstance, but she will help if the sacrifice involved is not too great. Most of us behave like this, seeking a balance between egoism and altruism.

OBJECTIONS TO ETHICAL EGOISM

Ethical egoism, it is fair to say, is not a very popular moral philosophy. This is because it goes against certain basic assumptions that most people have regarding what ethics involves. Two objections seem especially powerful.

1. Ethical egoism has no solutions to offer when a problem arises involving conflicts of interest.

Lots of ethical issues are of this sort. For example, a company wants to empty waste into a river; the people living downstream object. Ethical egoism just advises both parties to actively pursue what they want. It doesn’t suggest any sort of resolution or commonsense compromise.

2. Ethical egoism goes against the principle of impartiality.

A basic assumption made by many moral philosophers–and many other people, for that matter–is that we should not discriminate against people on arbitrary grounds such as race, religion, sex, sexual orientation or ethnic origin. But ethical egoism holds that we should not even try to be impartial. Rather, we should distinguish between ourselves and everyone else, and give ourselves preferential treatment.

To many, this seems to contradict the very essence of morality. The “golden rule,” versions of which appear in Confucianism, Buddhism, Judaism, Christianity, and Islam, says we should treat others as we would like to be treated. And one of the greatest moral philosophers of modern times, Immanuel Kant (1724-1804), argues that the fundamental principle of morality (the “categorical imperative,” in his jargon) is that we should not make exceptions of ourselves.

According to Kant, we shouldn’t perform an action if we couldn’t honestly wish that everyone would behave in a similar way in the same circumstances.

Just another ‘war on drugs’ disaster

The recent addition to the British statute books, which comes into effect on April 6th, is the latest in a long line of poorly drafted drug laws. The new law, which will act in parallel with the Misuse of Drugs Act 1971, effectively bans all substances – with the exception of alcohol, tobacco and caffeine – with a “psychoactive effect” on “normal brain functioning”. The awful irony of a UK government exempting two of the most individually and socially harmful substances has not been lost on concerned commentators.

So where exactly has this nonsensical law come from? How on earth have we got ourselves into this situation? And will it work? To answer that, it’s worth reflecting on the emergence of novel psychoactive substances (NPS), or so called legal highs.

New highs

In 2009, club drug researchers first heard talk of the stimulant NPS mephedrone or “M-Cat” at UK clubs and after parties. At that time, there was growing disillusionment among users with the purity of popular illegal club drugs – as one of our interviewees put it, there was “a dire drug drought” characterised by low purity MDMA tablets. As another interviewee claimed, there were “no drugs in drugs anymore”. Indeed, between 2007 and 2009, the MDMA content of pills plummeted, fake ecstasy pills containing the headache-inducing, banned substance benzylpiperazine (BZP) were rife, and cocaine purity dropped to less than 10%.

As a consequence, those drug-takers who could afford it switched from ecstasy pills to purer MDMA crystal or powder. And by 2009, club goers – especially those in South London’s gay club scene – also began adding mephedrone to their polydrug repertoires, sometimes with tragic consequences (it is, after all, chemically similar to amphetamine).

Although mephedrone was banned in 2010 by the UK government, its use continues, especially among injecting drug users in poor communities. And there are now many other novel psychoactive substances, principally cheap stimulants and potent herbal smoking mixtures, such as Spice, which are available online, in so-called headshops, or from street dealers who are likely to pick up any business from those shut down by this new law.

One group of novel psychoactive substances that has received less media and academic attention is the benzodiazepine analogues (drugs similar to benzodiazepines, or “benzos”). There were 372 fatalities in England and Wales involving benzodiazepines in 2014-15, up 8% on the previous year and the highest number since records began in 1993, according to the Office for National Statistics.

There were also more than 10m prescriptions for benzodiazepines dispensed in England in 2014, even though use is not recommended beyond four weeks. Yet there are growing concerns about illicit supply through web-based sales. Long-term benzo users suffer both short- and long-term harms and need medical support to get off these drugs.

Putting the genie back in the bottle

Certainly, novel psychoactive substances are the nasty genie that prohibition let out of the bottle. To get around the law, NPS chemists created analogues of existing drugs (such as MDMA, LSD and methamphetamine) whose chemical structures were tweaked just enough to be entirely legal. And as soon as the law caught up with them, they’d just tweak their formulas again. The UK government, by passing this new legislation, is desperately trying to stuff this genie back inside, once and for all.

But it is unlikely to do any good. There is little or no provision for resources to enforce it, nor anything like sufficient funding for drug education, harm reduction, outreach, and mental health and drug treatment services to help people who may be in trouble with psychoactive substances, not least those in our prisons.

When it comes to taking drugs, at least we know about drugs such as cannabis, MDMA and cocaine – although we don’t always know what some of them are cut with. Purity and availability of these traditional street drugs have returned to or exceeded pre-2007-2008 levels, although this may bring its own problems.

But at least we know something about the effects of these more familiar substances and can help people mitigate possible harms. What is clear is that the human desire for intoxication, usually in the pursuit of pleasure, but sometimes at the cost of a person’s health, wealth and even liberty, endures. Without a recognition that demand for psychoactive substances will not go away, banning all psychoactive substances won’t work, just as it hasn’t in the past.

Those who ignore history

Governments worldwide need to learn one crucial lesson from the emergence of NPS. Their emergence is directly related to global prohibition and the war on drugs we have been fighting for over 100 years, a war that has had few successes. Crucially, many concerned commentators continue to chronicle the harmful unintended consequences of prohibiting drugs.

Drug history is always useful in understanding drug presents and futures. This new law gives us more of the same, and so is unlikely to work if success is judged on producing a safer world for the many millions who continue to consume psychoactive substances.

Re-evaluating present human rights for the future

Since the mid-20th century, many have grown used to the idea of having human rights and of invoking them when they feel threatened. Although these rights have a longer heritage, the contemporary understanding of them was largely formed in 1948, when the Universal Declaration of Human Rights (UDHR) was created. This milestone document sought to facilitate a new world order following the devastation of World War II. It declared all humans to be born free and equal. It committed states to protect rights such as those to life, to be free from torture, to work, and to an adequate standard of living.

These promises have since been cemented in international treaties, including the 1966 International Covenants on Civil and Political Rights and Economic, Social and Cultural Rights, and in regional instruments like the 1950 European Convention on Human Rights (ECHR).

More recently, however, states have started to think again. In the US, the first months of Donald Trump’s presidency have involved openly flouting international human rights commitments, most notably through a controversial travel ban targeting those from mainly Muslim countries and refugees.

In France, the ongoing national state of emergency in place since the Paris terror attacks of 2015 has heightened security and police powers.

In the UK, there have been calls to scrap the Human Rights Act. Ahead of Brexit, there is also significant uncertainty over what human rights protections, if any, should be retained after leaving the EU.

These developments raise important questions about what human rights are and what they should be in our changing world. Is it time to adapt them to our current reality? What should the human rights of the future look like? Our understanding of human rights, largely conceived in the 1940s-50s, is no longer tenable. We must be ready and willing to reassess what human rights are. Otherwise governments may do it for us.

The UDHR, the two subsequent International Covenants and the ECHR are foundational documents perceived to lay down the cornerstone provisions of what human rights are. These lists provided a map to navigate the problems of the time. Today’s context, however, is very different. As a result, these lists can no longer be viewed as sacred. They need re-evaluation for the future.

Scientific developments are changing how we relate to our bodies. We can extend human life like never before and use our bodies as commodities (such as by selling hair, blood, sperm or breast milk). In 2016, a 14-year-old girl asked for the right to cryogenically freeze her body. Such situations do not easily fit within the confines of traditional human rights provisions.

Machines are becoming increasingly intelligent, storing and using data about us and our lives. They even have the potential to infringe our cognitive liberty – our ability to control our own minds. This includes reported moves by Facebook to create a brain-computer interface which will allow users to type just by thinking. Do human rights need to protect us from the artificial intelligence we ourselves created?

The same re-evaluation can be applied within the very idea of what it is to be “human” itself. While specific rights provision for children, women, those with disabilities, migrant workers and others has been secured over the past 70 years, the state of being “human” should not be taken as now settled. Do we need to rethink rights to address the experiences of individuals who lie outside our current frameworks of understanding in society? This might include people who identify as gender fluid or non-binary and do not identify as either a man or a woman.

We may also ask whether it is necessary to re-evaluate how we understand humanity itself. We might, for example, seek to better recognise humans as fundamentally interdependent with nature and their environment. As a result, decontextualised human beings may not be the best, or the only, subjects of rights. This could lead to serious consideration of rights provision for entities previously considered non-human, such as the environment.

Envisaging a new Utopia

Human rights offer a way of thinking about the kind of future we want in Utopian terms. This is an element which was important in their post-war foundation, and remains so.

However, this need not be a vision compatible with liberalism, capitalism or statism, as was the case with the human rights of the 1940s-50s. Our current human rights instruments were defined by states and uphold the right to property and to individual liberty, ideas which complement life in liberal, capitalist settings.

Instead, human rights may be used to envisage a new utopia. This could be based on new forms of living, being and structuring society that better speak to the problems of the present. They could be used to think about a society which displaces the centrality of the state. People rather than governments could become the collective definers and gatekeepers of what human rights are and how they are protected.

Similarly, a more communal conception of human rights – furthering the idea of rights as held by humans in communities rather than by individuals – could help us think about forms of structuring society which go beyond the focus on the individual, which is definitive of liberal and capitalist worldviews.

This may involve placing more focus on the idea of group rights whereby human rights are held by a group as opposed to by its individual members. This concept has been employed in relation to indigenous people and cultural identity, but could be expanded further to conceptualise other issues in collective terms. For instance, we might begin to use rights to consider healthcare as collective, involving various protections and obligations held and carried out in relation to others as opposed to an individualised right to health.

Through such actions a modern Utopian vision for rights can be built, based on forms of social relations which are very different to those we currently experience.

Human rights must change to become tools which stimulate critical discussion and debate in the present, helping to carve a new vision for today’s future as opposed to continuing with that of the 20th century. Thought of in this way, human rights can emerge not as a thing of the past, but of the future.

How animals take communal decisions

Today we opt for ballot boxes but humans have used numerous ways of voting to have their say throughout history. However, we’re not the only ones living (or seeking to live) in a democratic society: a new study has suggested that African wild dogs vote to make group decisions.

The study found that these dogs sneeze to decide when to stop resting and start hunting. Researchers found that the rates of sneezing during greeting rallies – which happen after, or sometimes during, a rest period – affect the likelihood of the pack departing to hunt, rather than going back to sleep.

If dominant individuals start the rally it is much more likely to result in a hunt, and only two or three sneezes are required to get the pack started. But if a subordinate individual wishes to start a hunt, they have to sneeze a lot more – around ten times – to get the pack to move off.

The researchers think that this sneezing is the pack members voting on when to start a hunt, since it is often the lower ranking (and therefore the hungriest) dogs who start the rallies.

Communal decisions are essential for social living, and in animals it is rare to find a social system where one individual coerces the rest of the group into performing a particular action. But since animals cannot produce the kind of pre-election propaganda so beloved of human politicians, social groups must have different ways of suggesting and gaining consensus for activities.

1. Baboons: take it or leave it

When members of a baboon troop set off to forage, several members may move in different directions. Other baboons in the group must decide which one to follow, and social dominance has no effect on the likelihood that the majority of the group will follow. Moving purposefully seems to be an important factor in getting other individuals to follow – another parallel with human behaviour, since people will follow whoever seems to have the most confidence.

2. Meerkat voice voting

In meerkat mobs, social cohesion is vital for survival, and moving from one patch to another must be done together. A meerkat going it alone will very soon be an ex-meerkat. In order to get the group to head quickly to a new patch, an individual will emit a “moving call”. If three or more meerkats make moving calls within a short period of time, the group will speed up its movement, but two or fewer individuals calling does not affect the speed. In meerkat mobs, three is evidently considered a quorum.

3. Capuchin monkeys “trill”

White-faced capuchin monkeys at a site in Costa Rica have been heard using “trill” calls to persuade the group to move off in the direction preferred by the caller. However, the callers were not always successful in getting the group to move, and status within the group did not seem to affect the likelihood of persuading the troop to move. Although the researchers did not consider the possibility that these calls were a form of voting, there are similarities between their use and the sneezes used by the wild dogs.

4. Honey bee scouts vote among themselves

Honey bees have an advanced social system with individual workers having different tasks. When a nest becomes overcrowded and some of the bees need to move out, scout bees go off to find a suitable site for a new nest. Of course, they all find different sites and some may find more than one location.

When they return to the swarm, the scouts each perform a dance that gives directions to their chosen site. As time goes on some of the scouts stop advertising their site, and a few will switch to advertising another scout’s site. The swarm will only move when all the scouts that are still dancing are advertising the same site. This process can take several days to complete, but it is a bit like buying a house without having seen it on the say-so of a few estate agents.

5. Ants vote with their feet

Rock ants, found in the south of England, choose a new nest site based on the quality of the site, with entrance size and darkness among the assessed criteria. They appear to use a simple voting system consisting of leaving the nest site if an individual does not perceive the quality to be high enough. When enough ants have accumulated at a site, it is deemed to be of a suitable quality (or perhaps the best that can be found in the area), and the ants move in. If the quality subsequently deteriorates, individuals drift away to another site until enough of the colony have left the original nest and joined the new site. A simple, but apparently effective system.
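The quorum rule described here can be illustrated with a toy simulation (a sketch only, in Python; the quorum size, quality threshold and noise values below are illustrative assumptions, not figures from the research):

```python
import random

# Toy model of quorum-based nest choice, loosely inspired by the rock-ant
# behaviour described above. All numbers are illustrative assumptions.
QUORUM = 10              # scouts that must accumulate before the colony commits
QUALITY_THRESHOLD = 0.6  # minimum acceptable perceived quality

def scouts_staying(site_quality, n_scouts=50, seed=1):
    """Count scouts that remain at a site: each scout leaves if its noisy
    assessment of the site falls below the acceptance threshold."""
    rng = random.Random(seed)
    stayers = 0
    for _ in range(n_scouts):
        perceived = site_quality + rng.uniform(-0.2, 0.2)  # imperfect assessment
        if perceived >= QUALITY_THRESHOLD:
            stayers += 1
    return stayers

for quality in (0.40, 0.55, 0.70, 0.85):
    count = scouts_staying(quality)
    decision = "colony moves in" if count >= QUORUM else "keep searching"
    print(f"site quality {quality:.2f}: {count} scouts stay -> {decision}")
```

Better sites retain more scouts, so the quorum is reached only at sites whose quality most individuals judge acceptable.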

Voting by animals has not been studied to any great extent, although political systems are common among social animals and are quite well documented. But if wild dogs, meerkats and ants are doing it, you can bet your bottom dollar that other species are doing it too.

North Korea is not the biggest nuclear threat to the world

The Democratic People’s Republic of Korea (DPRK) and its dictator, Kim Jong-un, seem determined to demonstrate to the world, and particularly the United States, its nuclear weapons capabilities. With a possible hydrogen bomb test and two missile launches that have passed over Japan this month, tensions have been raised, with U.S. President Donald Trump implementing further sanctions and threatening to rain down “fire and fury” on the country.

But does North Korea pose such a threat with its ongoing testing? Why is the U.S. not considered a threat in similar terms? For one thing, it’s the only country to have actually dropped a nuclear bomb on a civilian population – two, in fact, on Japan – supposedly to avoid a land invasion, save lives and bring about a “swift end” to the Second World War.

Yet this official narrative, pushed for decades, is being questioned by modern historians and no longer appears to hold up as it once did, especially as high-ranking U.S. officials themselves have questioned their use. President Dwight D. Eisenhower, who was a five-star general in the United States Army during World War II and served as Supreme Commander of the Allied Expeditionary Forces in Europe, said in his 1963 memoir, The White House Years: Mandate for Change:

“I had been conscious of a feeling of depression and so I voiced to him [Stimson] my grave misgivings, first on the basis of my belief that Japan was already defeated and that dropping the bomb was completely unnecessary, and secondly because I thought that our country should avoid shocking world opinion by the use of a weapon whose employment was, I thought, no longer mandatory as a measure to save American lives. It was my belief that Japan was, at that very moment, seeking some way to surrender with a minimum loss of face.”

Chief of Staff to presidents Roosevelt and Truman, and former Chief of Naval Operations, William D. Leahy, also wrote in his 1950 memoir: “It is my opinion that the use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan. The Japanese were already defeated and ready to surrender because of the effective sea blockade and the successful bombing with conventional weapons…”.

The authoritative United States Strategic Bombing Survey report, compiled by a board of impartial experts in 1946, also found that: “The Emperor, the Lord Privy Seal, the Prime Minister, the Foreign Minister, and the Navy Minister had decided as early as May of 1945 that the war should be ended even if it meant acceptance of defeat on allied terms”.

This conclusion was possibly influenced by a cable intercepted by the U.S. on May 5 1945, sent to Berlin by the German ambassador in Tokyo after he spoke to a ranking Japanese naval officer. It read: “Since the situation is clearly recognised to be hopeless, large sections of the Japanese armed forces would not regard with disfavour an American request for capitulation even if the terms were hard”.

In his 1986 study, Japan’s War: The Great Pacific Conflict, historian and journalist Edwin P. Hoyt explains why the Japanese were ready to surrender without the use of nuclear weapons:

“The B-29 firebombing campaign had brought the destruction of 3,100,000 homes, leaving 15 million people homeless, and killing about a million of them. It was the ruthless firebombing, and Hirohito’s realisation that if necessary the Allies would completely destroy Japan and kill every Japanese to achieve ‘unconditional surrender’ that persuaded him to the decision to end the war. The atomic bomb is indeed a fearsome weapon, but it was not the cause of Japan’s surrender, even though the myth persists even to this day.”

Historians have also argued that the bombs were dropped to send a message to Russia and ignite the Cold War. The U.S. understood the threat Russia and Communism posed to their expansionist plans to export their capitalist model globally.

General Leslie Groves, who directed the infamous Manhattan Project to build the atomic bomb, acknowledged: “There was never from about two weeks from the time I took charge of this Project any illusion on my part but that Russia was our enemy, and the Project was conducted on that basis. I didn’t go along with the attitude of the country as a whole that Russia was a gallant ally.”

And according to Manhattan Project scientist Leo Szilard, the Secretary of State at the time, James F. Byrnes, understood that the bomb’s main purpose was to “make Russia more manageable in Europe”. Winston Churchill also understood this reasoning when he said: “we now had something in our hands which would redress the balance with the Russians.”

According to Tim Weiner in his book “US Spied on its World War II Allies”, a Venezuelan diplomat reported to his government after a May 1945 meeting that Assistant Secretary of State Nelson Rockefeller had said U.S. officials were “beginning to speak of Communism as they once spoke of Nazism and are invoking continental solidarity and hemispheric defense against it”.

This insight questions the pervasive narrative and reasoning behind the dropping of the bombs on Japan in the Second World War. It also highlights that the possible reasoning behind their use was not to end the war but to contain Russia and the threat of Communism, which brings us to North Korea.

North Korea: where’s the historical context?

In 1945 Dean Rusk, a colonel at the end of World War II, joined the U.S. Department of State and became instrumental in the division of Korea into spheres of U.S. (capitalist south) and Soviet (communist north) influence. According to American historian Bruce Cumings, Rusk had “consulted a map around midnight on the day after we obliterated Nagasaki with an atomic bomb” and with no experts having been consulted, the country was split at the 38th parallel line.

By 1950 the U.S. was at war with North Korea. It is known as the “forgotten war”, partly because it went by a number of different names — President Truman called it a ‘police action’ so he could get it past Congress — but also because many people today are unaware of the scale of destruction and slaughter that was unleashed there. It’s been estimated that 635,000 tons of bombs and 32,557 tons of napalm were dropped on the country, more than had been dropped in the entire Pacific theatre of World War II.

By the end of the war more than three million Koreans were believed to have been killed, mostly in the north, which is more deaths than during the U.S. bombing and destruction of Japan. Head of U.S. Strategic Air Command at the time, Curtis LeMay, said afterwards “we went over there and eventually burned down every town in North Korea…over a period of three years or so, we killed off — what — twenty percent of the population of Korea”. For comparison, 2% of the UK population was killed during World War II. Dean Rusk also later stated in an interview that “between the 38th parallel and the frontier up there we were bombing every brick that was standing on top of another, everything that moved. We had complete air superiority. We were just bombing the heck out of North Korea”. For North Koreans the threat of “fire and fury” from the Americans isn’t a distant memory, but a reality that fuels their xenophobia and hatred of the United States.


The drive by North Korea and Kim Jong-un to obtain nuclear weapons has a historical precedent which is little discussed, or outright avoided, in mainstream discourse. With the ever-present threat of nuclear weapons lurking behind the rhetoric, relevant and important context as to the current thinking of the DPRK and its leader appears largely absent from the debate.

This includes a history of the DPRK making attempts to promote dialogue with the U.S. In 2015 North Korea agreed to halt its nuclear weapons tests if Washington cancelled its joint annual military exercises with South Korea. The U.S., under President Obama, “rebuffed” the offer, stating it was “inappropriate” to link the nuclear tests to its joint military exercises just south of North Korea’s border.

In 2016 North Korea again attempted peace talks, as State Department spokesman John Kirby confirmed: “to be clear, it was the North Koreans who proposed discussing a peace treaty”. Again this was rejected on the grounds that North Korea refuses to “denuclearise”. But if nuclear weapons act as a deterrent, why would North Korea halt its nuclear weapons programme?

As President Trump’s current Director of National Intelligence, Dan Coats, recently stated, “Kim Jong-un has watched, I think, what has happened around the world relative to nations that possess nuclear capabilities and the leverage they have and seen that having the nuclear card in your pocket results in a lot of deterrence capability.” Former director of national intelligence James Clapper also said that nuclear weapons were North Korea’s “ticket to survival.”

And who can forget George W. Bush’s famous 2002 State of the Union address, in which he labelled Iran, Iraq and North Korea as part of an “axis of evil” and sponsors of terrorism seeking weapons of mass destruction. The following year the United States, along with the UK, invaded Iraq without the approval of the United Nations. Its leader, Saddam Hussein, was ultimately hanged, and today the country stands in ruins, with civilians still being bombed by U.S.-led forces and attacked by ISIS terrorists. As Coats says, Kim Jong-un is “not crazy”.

Before the DPRK conducted its missile test over Japan, the U.S. and South Korea were, as in previous years, conducting their annual war games, with one exercise apparently involving a “preemptive offensive against North Korea”. What would Trump’s reaction be if Russia and North Korea decided to conduct war games just over the border in Mexico? The operation, Ulchi-Freedom Guardian, involved UN Command forces from seven sending states: Australia, Canada, Colombia, Denmark, New Zealand, the Netherlands and the United Kingdom.

When it comes to nuclear weapons, the UK government currently holds a position more extreme than the DPRK’s. Both Defence Secretary Michael Fallon and Prime Minister Theresa May have openly stated they wouldn’t rule out the use of nuclear weapons as a first-strike option, meaning the death of potentially tens of thousands of innocent civilians.

Korea has a long history of being invaded and attacked by foreign enemies. Prior to the splitting of the country into north and south, and the Korean War, it spent 35 years under the oppressive colonial rule of Japan, where dissent and freedom of expression were routinely crushed. What impact has this history had on turning North Korea into one of the most repressive regimes in the world today? Why is this history ignored by mainstream media, while the public is instead flooded with dangerous binary narratives of good vs evil and questions of “what would a war with North Korea look like?” Where are the calls for restraint and dialogue?

If the argument for nuclear weapons is that they act as a deterrent, why are we not welcoming the fact that North Korea has them, thereby making us all safer? If all countries have a right to defend themselves, why is it that only some should have nuclear weapons while others are attacked for trying to obtain them? Have Western interventions in countries like Afghanistan, Iraq, Libya and Syria made the world less safe by sending the message that “if you don’t have them, get them”, at a time when the world is working to prohibit them?

Today’s nuclear weapons are a thousand times more powerful than the bombs dropped on Japan in 1945. The ‘Tsar Bomba’ detonated by the Soviet Union in 1961 was 3,333 times more powerful than the bomb used on Hiroshima. Even a regional nuclear conflict today could lead to worldwide suffering and starvation. As the situation continues to escalate, President Trump was asked whether he would attack North Korea; he casually responded “we’ll see”.

Is the Canadian health-care system underperforming?

Canada’s health-care system is a point of Canadian pride. We hold it up as a defining national characteristic and an example of what makes us different from Americans. The system has been supported in its current form, more or less, by parties of all political stripes for nearly 50 years.

Our team at the Queen’s University School of Policy Studies Health Policy Council brings together seasoned and accomplished health-care leaders in health economics, clinical practice, education, research and health policy. We study, teach and comment on health policy and the health-care system from multiple perspectives.

While highly regarded, Canada’s health-care system is expensive and faces several challenges. These challenges will only be exacerbated by the changing health landscape in an aging society. Strong leadership is needed to propel the system forward into a sustainable health future.

A national health insurance model

The roots of Canada’s system lie in Saskatchewan, where then-premier Tommy Douglas’s left-leaning Co-operative Commonwealth Federation (CCF) government first established a provincial health insurance program. This covered hospital costs (in 1947) and then doctors’ costs (in 1962). The costs were shared 50/50 with the federal government, for hospitals beginning in 1957 and for doctors in 1968.

This new model inspired fierce opposition from physicians and insurance groups but proved extremely popular with the people of Saskatchewan and elsewhere. Throughout the 1960s, successive provincial and territorial governments adopted the “Saskatchewan model”, and in 1972 the Yukon Territory became the last sub-national jurisdiction to adopt it.

In 1968, the National Medical Care Insurance Act was implemented, under which the federal government agreed to contribute 50 per cent toward the cost of provincial insurance plans. In 1984, the Canada Health Act outlawed the practice of billing patients directly for amounts over and above the insurance payments made to physicians.

The five core principles of the Canadian system were now established: universality (all citizens are covered), comprehensiveness (all medically essential hospital and doctors’ services), portability (among all provinces and territories), public administration (of publicly funded insurance) and accessibility.

For the last 50 years, Canada’s health-care system has remained essentially unchanged despite numerous pressures.

Long wait times

The quality of the Canadian health-care system has been called into question, however, for several consecutive years now by the U.S.-based Commonwealth Fund. This is a highly respected, non-partisan organization that annually ranks the health-care systems of 11 nations. Canada has finished either ninth or 10th now for several years running.

One challenge for Canadian health care is access. Most Canadians have timely access to world-class care for urgent and emergent problems like heart attacks, strokes and cancer. But for many less urgent problems, they typically wait months or even years.

Patients who require hip or knee replacements, shoulder or ankle surgery, cataract surgery or a consultation with a specialist often wait far longer than is recommended. Many seniors who are not acutely ill also wait in hospitals for months, and on occasion years, for assignment to a long-term care facility.

And it’s not just accessibility that is the problem. Against measures of effectiveness, safety, coordination, equity, efficiency and patient-centredness, the Canadian system is ranked by the Commonwealth Fund as mediocre at best. We have an expensive system of health care that is clearly under-performing.

A landscape of chronic disease

How is it that Canada has gone from a world leader to a middle- (or maybe even a bottom-) of-the-pack performer?

Canada and Canadians have changed, but our health-care system has not adapted. In the 1960s, health-care needs were largely for the treatment of acute disease and injuries. The hospital and doctor model was well-suited to this reality.

Today, however, the health-care landscape is increasingly one of chronic disease. Diabetes, dementia, heart failure, chronic lung disease and other chronic conditions characterize the health-care profiles of many Canadian seniors.

Hospitals are still needed, to be sure. But increasingly, the population needs community-based solutions. We need to “de-hospitalize” the system to some degree so that we can offer care to Canadians in homes or community venues. Expensive hospitals are no place for seniors with chronic diseases.

Another major challenge for Canadian health care is the narrow scope of services covered by provincial insurance plans. “Comprehensiveness” of coverage, in fact, applies only to physician and hospital services. For many other important services, including dental care, out-of-hospital pharmaceuticals, long-term care, physiotherapy, some homecare services and many others, coverage is provided by a mixture of private and public insurance and out-of-pocket payments beyond the reach of many low-income Canadians.

And this is to say nothing of the social determinants of health, like nutrition security, housing and income. None of these have ever been considered a part of the health-care “system,” even though they are just as important to Canadians’ health as doctors and hospital services are.

Aging population, increasing costs

Canada’s health-care system is subject to numerous pressures.

First of all, successive federal governments have been effectively reducing their cash contributions since the late 1970s when tax points were transferred to the provinces and territories. Many worry that if the federal share continues to decline as projected, it will become increasingly difficult to achieve national standards. The federal government may also lose the moral authority to enforce the Canada Health Act.

A second challenge has been the increasing cost of universal hospital insurance. As economic growth has waxed and waned over time, governments have increased their health budgets at different rates. In 2016, total spending on health amounted to approximately 11.1 per cent of the GDP (gross domestic product); in 1975, it was about 7 per cent of GDP.

Overall, total spending on health care in Canada now amounts to over $6,000 (US$4,790) per citizen. Compared with similarly developed countries, Canada’s health-care system is definitely on the expensive side.

Canada’s aging population will apply additional pressure to the health-care system over the next few years as the Baby Boom generation enters its senior years. In 2014, for the first time in our history, there were more seniors than children in Canada.

The fact that more Canadians are living longer, healthier lives than ever before is surely a towering achievement for our society, but it presents some economic challenges. On average, it costs more to provide health care for older people.

In addition, some provinces (the Atlantic provinces, Quebec and British Columbia in particular) are aging faster than the others. This means that these provinces, some of which face the prospects of very modest economic growth, will be even more challenged to keep up with increasing health costs in the coming years.

Actions we can take now

The failure of our system to adapt to Canadians’ changing needs has left us with a very expensive health-care system that delivers mediocre results. Canadians should have a health-care system that is truly worthy of their confidence and trust. There are four clear steps that could be taken to achieve this:

1. Integration and innovation

Health-care stakeholders in Canada still function in silos. Hospitals, primary care, social care, home care and long-term care all function as entities unto themselves. There is poor information sharing and a general failure to serve common patients in a coordinated way. Ensuring that the patient is at the centre — regardless of where or by whom they are being served — will lead to better, safer, more effective and less expensive care. Investments in information systems will be key to the success of these efforts.

2. Enhanced accountability

Those who provide health care to Canadians need to transition to accountability models focused on outcomes rather than outputs. Quality and effectiveness should be rewarded, rather than the amount of service provided. Aligning professional, patient and system goals ensures that everyone is pulling their oars in the same direction.

3. Broaden the definition of comprehensiveness

We know many factors influence the health of Canadians in addition to doctors’ care and hospitals. So why does our “universal” health-care system limit its coverage to doctors’ and hospital services? A plan that seeks health equity would distribute its public investment across a broader range of services. A push for universal pharmacare, for example, is currently under way in Canada. Better integration of health and social services would also serve to address more effectively the social determinants of health.

4. Bold leadership

Bold leadership from both government and the health sector is essential to bridge the gaps and break down the barriers that have entrenched the status quo. Canadians need to accept that seeking improvements and change does not mean sacrificing the noble ideals on which our system was founded. On the contrary, we must change to honour and maintain those ideals. Our leaders should not be afraid to set aspirational goals.

Rising food prices have led to conflict in Syria

In 2015 the Welsh singer and activist Charlotte Church was widely ridiculed in the right-wing press and on social media for saying on BBC Question Time that climate change had played an important part in causing the conflict in Syria.

From 2006 until 2011, [Syria] experienced one of the worst droughts in its history, which of course meant that there were water shortages and crops weren’t growing, so there was mass migration from rural areas of Syria into the urban centres, which put on more strain, and made resources scarce etc, which apparently contributed to the conflict there today.

But what she said was correct – and there will be an increasing convergence of climate, food, economic and political crises in the coming years and decades. We need to better understand the interconnectivity of environmental, economic, geopolitical, societal and technological systems if we are to manage these crises and avoid their worst impacts.

In particular, tipping points exist in both physical and socio-economic systems, including governmental or financial systems. These systems interact in complex ways. Small shocks may have little impact, but a particular shock or set of shocks could tip the system into a new state. This new state could represent a collapse in agriculture or even the fall of a government.
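
To make the tipping-point idea concrete, here is a minimal, purely illustrative sketch in Python. It is not a model of Syria or of any real economy: it uses a textbook double-well system with two stable states, and every number in it is invented for illustration. What it demonstrates is that repeated small shocks are absorbed, while a single sufficiently large shock flips the system into a different state from which it does not return.

```python
# A minimal, illustrative tipping-point model (all numbers invented).
# The system has two stable states, x = -1 ("functioning") and x = +1
# ("collapsed"), separated by an unstable threshold at x = 0.
# Small shocks are absorbed; a shock that carries the state past the
# threshold tips the system into the other basin, where it stays.

def step(x, shock=0.0, dt=0.1):
    """One Euler step of dx/dt = x - x**3 (a double-well system), plus an additive shock."""
    return x + dt * (x - x**3) + shock

def simulate(shocks, x0=-1.0, steps_between=50):
    """Apply a sequence of shocks, letting the system relax in between."""
    x = x0
    trajectory = [x]
    for s in shocks:
        x = step(x, shock=s)
        for _ in range(steps_between):  # relax towards the nearest stable state
            x = step(x)
        trajectory.append(x)
    return trajectory

# Three small shocks are absorbed; adding a fourth, larger one tips the system.
print(simulate([0.3, 0.3, 0.3]))       # stays near -1 throughout
print(simulate([0.3, 0.3, 0.3, 1.2]))  # ends near +1: a qualitatively new state
```

The equation itself is unimportant; what matters is the qualitative behaviour, in which the size and timing of a shock, not merely its presence, determines whether the system tips.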

In 2011, Syria became the latest country to experience disruption in a wave of political unrest crossing North Africa and the Middle East. Religious differences, the ruling regime’s failure to tackle unemployment and social injustice, and the state of human rights all contributed to a backdrop of social unrest. However, these pressures had existed for years, if not decades.

So was there a trigger for the conflict in the region which worked in tandem with the ongoing social unrest?

Syria, and the surrounding region, has experienced significant depletion in water availability since 2003. In particular, an intense drought between 2007 and 2010, alongside poor water management, saw agricultural production collapse and a mass migration from rural areas to city centres. Farmers, who had been relatively wealthy in their rural surroundings, now found themselves among the urban poor, reliant on food imports. Between 2007 and 2009, Syria increased its annual imports of wheat and meslin (a wheat-rye mixture) by about 1.5m tonnes. That equated to a more than ten-fold increase in imports of one of the most basic foods.

Complex system

There is a tendency these days to believe that global trade will protect the world from food production shocks. A small production shock in one region can be mitigated by temporarily increasing food imports or by sourcing food from another region. However, certain shocks, or combinations of shocks, can create an amplifying feedback that cascades into a globally significant event.

The food system today is increasingly complex, and a shock to land, water, labour or infrastructure can create fragility. A large enough perturbation leads to a price response in the global market, which signals other producers to increase their output to make up for the shortfall. While higher prices can benefit farmers and food producers, a sufficiently large price increase can have a significant impact on communities that are net food importers.
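
As a rough illustration of why a modest production shortfall can trigger a large price response, the back-of-envelope sketch below assumes a very short-run market in which supply cannot adjust and demand for staple food is highly price-inelastic. The elasticity value is an assumption chosen for illustration, not a figure from this article.

```python
# Back-of-envelope price response to a supply shortfall (illustrative only).
# Assumes a short-run demand elasticity for staple food of about -0.1,
# an assumed value reflecting that staple-food demand is typically very inelastic.

def price_response(shortfall_pct, demand_elasticity=0.1):
    """Approximate % price rise needed for demand to fall by `shortfall_pct` %."""
    return shortfall_pct / demand_elasticity

# Under these assumptions, a 5% global shortfall implies roughly a 50% price rise.
print(price_response(5))  # 50.0
```

The exact numbers matter less than the shape of the relationship: the more inelastic the demand, the more a small shortfall is amplified into a large price spike, and it is that spike which falls hardest on net food-importing communities.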

Additionally, food production is concentrated both in a relatively small number of commodity crops, such as wheat, rice and maize, and in a relatively small number of regions, such as the US, China and Russia. This concentration means any disruption in those regions will have a large impact on global food supply. Reliance on global markets for sourcing food can therefore be a source of systemic risk.

Rising prices

In 2008, the global price of food increased dramatically. The increase was the result of a complex set of factors, including historically low global food stocks, drought in Australia following several years of poor production elsewhere, commodity speculation and an increase in biofuel production in North America.

This spike in global food price in 2008 was a factor in the initial unrest across North Africa and the Middle East, which became known as the Arab Spring. As prices peaked, violence broke out in countries such as Egypt, Libya and Tunisia.

In Syria, a local drought that coincided with this global shock in food prices resulted in dramatic changes in the availability and cost of food. In response, small groups of individuals protested. The government’s response, combined with a background of rising protests, existing social tensions and instability in the wider region, quickly escalated into the situation we are experiencing today.

The events in Syria, then, appear to stem from a far more complex set of pressures, beyond religious tension and government brutality, with roots in the availability of a natural resource – water – and its impact on food production. This is worrying, as decreasing water availability is far from a localised issue – it is a systemic risk across the Middle East and North Africa. Over the coming decades, this water-security challenge is likely to be further exacerbated by climate change.

To better manage these types of risks in the future, and to build societal resilience, the world needs to understand society’s dependence on natural resources and how it can lead to events such as those that unfolded in Syria. We need analytical, statistical, scenario or war-game-type models to explore different possible futures and policy strategies for mitigating the risk. By understanding the sources of political instability, we can hope to get a better handle on how these types of crises arise.
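
As a hint of what even the simplest scenario model looks like, the sketch below runs a Monte Carlo experiment asking how often a regional drought and a global food-price spike coincide in the same year over a decade. The probabilities are placeholders, not estimates, and the model ignores every interaction described above; it only illustrates how such questions can be posed quantitatively.

```python
# Minimal Monte Carlo scenario sketch (all probabilities are placeholders).
# Question: over a decade, how often do a regional drought and a global
# food-price spike land in the same year?
import random

def decade_has_compound_shock(p_drought=0.15, p_price_spike=0.10, years=10):
    return any(
        random.random() < p_drought and random.random() < p_price_spike
        for _ in range(years)
    )

def estimate(trials=100_000):
    hits = sum(decade_has_compound_shock() for _ in range(trials))
    return hits / trials

# With these placeholder probabilities the answer comes out around 0.14.
print(f"Estimated chance of at least one compound shock per decade: {estimate():.2f}")
```

A real scenario or war-game model would add the couplings this article describes (migration, import dependence, political stress), but even a toy version shows how compound shocks that look individually rare can become material over policy-relevant time horizons.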

Failure of football clubs to pay staff a living wage

English football’s top flight, the Premier League, dominates the sporting world’s league tables for revenue. Star players, managers and executives command lucrative wages. Thanks to the biggest TV deal in world football, the 20 Premier League clubs share £10.4 billion between them.

But this wealth bonanza is not being distributed fairly within clubs. Wages are dramatically lower for staff at the opposite end of the Premier League labour market to players and executives. Many encounter in-work poverty.

Indeed, Everton and Chelsea are the only two Premier League clubs fully accredited with the Living Wage Foundation to pay all lower-paid directly employed staff, as well as external contractors and agency staff, a real living wage. This is a (voluntary) wage that is higher than the legally required national living wage. It is calculated based on what employees and their families need to live, reflecting real rises in living costs. In London it’s £9.75 an hour; elsewhere it’s £8.45.

Of 92 clubs in England and Scotland’s football leagues, only three others – Luton Town, Derby County and Hearts – are also accredited with the Living Wage Foundation. And many club staff – cleaners, caterers, stewards and other match-day roles – are employed indirectly by agencies or contractors and not paid the real living wage.

In 2015, The Independent newspaper asked the 20 Premier League clubs two simple questions: does your club pay the living wage to full-time staff? And does it pay, or is it committed to paying, the living wage to part-time and contracted staff? Seven clubs failed to reply or said “no comment”.

Good business, good society?

Many football clubs are embedded in urban communities, some classified as among the most impoverished places in Western Europe. What does it say about ethics and employment practices, especially of wealthier Premier League clubs, when many match-day staff don’t receive a proper living wage?

Aside from the moral argument that a fairer distribution of wealth is the glue underpinning more equal societies, there is also a good business case for companies to pay a real living wage. According to the Living Wage Foundation, organisations among the 2,900 accredited as paying the voluntary living wage report significant improvements in quality of work, lower staff absence and turnover – and an improved corporate reputation as a result.

Everton FC, located in an area of Liverpool with high social deprivation, has announced that becoming an accredited Living Wage Foundation employer will significantly increase wages for contractors and casual match-day staff. Denise Barrett-Baxendale, the club’s deputy chief executive, has said: “Supporting the accredited living wage is quite simply the right thing to do; it improves our employees’ quality of life but also benefits our business and society as a whole.” Everton’s neighbour, Liverpool FC, has yet to make a similar commitment.

Independent academic research suggests that while workers benefit from the real living wage, it’s not an automatic fix. Higher hourly pay does not necessarily translate into a better standard of living if working hours are too low. The problem is that many living wage jobs are part-time, with few hours, so small income increases are offset by the rising cost of living.
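
To put the hours problem in numbers, here is a quick, purely illustrative calculation in Python. The weekly hours and the statutory rate (around £7.50 an hour at the time) are assumptions made for the example, not figures from this article.

```python
# Illustrative only: how much the real living wage adds for a low-hours,
# match-day role outside London. The hours and the statutory rate
# (~GBP 7.50 at the time) are assumptions made for this example.

def annual_pay(hourly_rate, hours_per_week, weeks=52):
    return hourly_rate * hours_per_week * weeks

hours = 10  # an assumed low-hours, match-day working pattern
statutory = annual_pay(7.50, hours)  # about GBP 3,900 a year
real_lw = annual_pay(8.45, hours)    # about GBP 4,394 a year

print(f"Uplift: about GBP {real_lw - statutory:.0f} a year on {hours} hours a week")
```

On so few hours, even a meaningfully higher hourly rate adds less than £10 a week, which illustrates the research finding above that a higher hourly rate alone does not guarantee a better standard of living.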

Ending foul pay

There has recently been growing mobilisation among the public, civil society, supporters’ groups and some politicians to pressure football clubs to pay the real living wage. The GMB, a large general workers’ union, launched its End Foul Pay campaign. London’s mayor, Sadiq Khan, recently urged every London Premier League club to pay all staff the London living wage.

In Manchester, living wage campaigners have targeted the city’s two big clubs Manchester City and Manchester United. While progress has been reported at Manchester City, Manchester United has yet to commit to extending the living wage to its directly employed part-time match-day staff. By contrast, FC United of Manchester, the breakaway non-league club formed by Manchester United fans disenchanted with the Glazers’ ownership, pays the real living wage to all staff, setting an example to the much richer football giant. Manchester United presently ranks as the “richest club in the world”, having achieved record-breaking revenues of £515.3m in 2015-16.

But despite these grassroots campaigns and political exhortations, few football clubs are taking concrete measures to improve the wages and working conditions of lower-paid staff. It appears that leaving pay determination to the prerogative of club owners and executives is not working. Stronger regulation and political intervention may have to be contemplated – such as raising the legal national living wage and giving better legal rights and protections to indirectly employed staff on precarious contracts.

Such issues clearly go beyond football clubs in an economy that still hasn’t recovered from the 2008 financial crisis. The state of the UK labour market is currently being considered by the government’s review of modern employment practices, but we can expect little to change when the economic model remains fundamentally the same.

The misguided political ideology of self-regulating market forces has created stark inequalities, as wealth continues to trickle up disproportionately to the top 1% while countervailing institutions, particularly trade unions, have been emasculated. Low pay in football clubs and elsewhere reflects this broader systemic context of contemporary capitalism.