Latest Posts

Oman is turning to Twitter to help govern

Research conducted by the Dubai School of Government into the Arab Spring of 2010-11 found that mass protests on the ground were often preceded by revolutionary conversations online, and that social media such as Twitter played a central role in shaping political events. Having studied changes in internet traffic and social media use, the researchers concluded that social media during the Arab Spring played a critical role in “mobilisation, empowerment, shaping opinions, and influencing change”.

In some cases, such as in Dubai, the government used social media to engage citizens and encourage participation in institutional rather than revolutionary change. In other cases, such as in Egypt, governments blocked access to websites used by protesters, or even shut down access to the entire internet.

Following the Arab Spring, citizens of the Persian Gulf state of Oman became aware of Twitter’s potential and decided to adopt it as a platform for addressing social problems, rather than instigating revolutions. For example, unemployment among young people, even those with degrees, is a problem in Oman, as it is in many European nations, and young Omanis took to Twitter to discuss their predicament, gathering around a hashtag that is the Arabic translation of “we need a job”. These tweets were read by Oman’s Sultan, Qaboos bin Said Al Said, who then announced the creation of 25,000 jobs for young people.

Oman is one of the most absolutist states in the world, with political power resting almost entirely in the Sultan’s hands. Yet Twitter gave citizens a feeling that was not possible before: that their concerns are matters of importance to the government. Twitter has become a two-way communication channel for working towards social change, not just a one-way broadcast for promoting celebrities or delivering government pronouncements.

Now, the civil service in Oman is adopting Twitter as part of its provision of public services. A similar approach has been taken elsewhere, for example in the small Spanish town of Jun, where the mayor has put every municipal department on Twitter and encourages citizens to contact them using that public forum. But in Oman’s case the approach has been scaled up to an entire country.

Digital government in Oman

We examined how Twitter was being used in Oman and to what effect. Our research revealed that Omani citizens found Twitter to be “empowering”, as it allows them to identify matters of concern and seek rapid responses and resolutions – something that was a rarity in the past.

One example is refuse collection. Using Twitter, citizens can post photographs of streets where refuse has not been collected at the stated times, or has been left at roadsides causing annoyance and unhygienic conditions. In the past, citizens were left to call or visit local government offices, but any effect this had generally came considerably later than desired, the matter would be handled only if they had a personal connection in the responsible department – or it was not dealt with at all.

Twitter also allows women to get their issues resolved without visiting government department offices. One survey participant commented:

I think that one of the good things about Twitter is flexibility … our culture does not appreciate women going to government organisations. With Twitter, women can communicate with us without having to leave home. They can complain and put across their feedback or suggestions, and at the same time respect tradition and culture. We are getting more complaints from women than before [using Twitter].

A woman participating in the research agreed that Twitter provided her with the means to communicate with government without having to visit in person, saying that she would be “uncomfortable” doing so as she comes from a conservative background. Other participants said that although Oman’s capital, Muscat, is considered more modern, and so cultural practices are more loosely adhered to there, outside urban areas traditions are very well preserved and women are expected to respect them.

Getting things done

Use of Twitter has led to greater transparency and accountability in public sector departments and swifter resolution of issues, which citizens are happy about. All departmental personnel, including heads of department and ministers – not just the government departments themselves – have Twitter accounts available to citizens, leaving few places for civil servants to hide or to delay acting on a complaint or request.

In order to ensure their government does not drop the ball, Omani citizens post their issues to Twitter. Decision makers, mindful that their responses are publicly monitored, communicate with one another using WhatsApp to expedite the process. Once the matter has been resolved, the citizen offers a message of gratitude, which reflects well on the effectiveness of a department at resolving issues.

For public sector staff, this approach has meant longer and more diligent working hours, but because it has resulted in better quality services and new knowledge and skills, workers said they were happy with the transition. One remarked:

We reply to public tweets or complaints even on holidays or weekends … we can’t delay our responses. We have employees who work 24/7 … Twitter has provided us with a new culture of work that was not there before. Our work and success is apparent to the general public which has led them to post messages of appreciation about what we are doing.

From our discussions with government officials it was apparent that Omani citizens are forward-thinking and tech-savvy, and use Twitter to find solutions to their social problems, whether large scale (the unemployment issue) or day-to-day (refuse collection). Omanis feel immense pride in their activities and the government’s responses to their concerns, and feel that Twitter provides them with a platform, previously unavailable, to make themselves heard by the government. The government has even established a Department of Social Media to ensure this approach continues to meet citizens’ requirements. And for its part, the government is able to use Twitter to gauge citizens’ views, something difficult to achieve in the past.

What our study shows is that if digital skills are taught and encouraged, and a clear and transparent vision is provided, this can lead to a widespread change of attitude among citizens, policymakers and the media. Whether implemented in a small town or across an entire nation, it’s clear Twitter could be used in a similar way in other countries, providing digital delivery of effective and responsive modern public services.

Sam Harris

Sam Harris is known for many things, from being one of the leading figures of the New Atheist movement to being a controversial critic of Islam. He is also known for arguing that science can provide answers to questions regarding morality. For him, morality is within the domain of science.

How is this possible, exactly? After all, science deals with facts, not values. Harris proposes that the term “science” is far more inclusive than we normally take it to be. There is no fundamental distinction, for instance, between a scientist working in a laboratory and a plumber identifying problems in a plumbing system. The distinction between them is merely conventional, because what really counts is that doing science means using reason and observation. As long as a given domain can be the subject of reasoned inquiry and observation, it belongs to the broader domain of science.

Here is how Harris himself puts it in his essay responding to Ryan Born’s critique:

“For practical reasons, it is often necessary to draw boundaries between academic disciplines, but physicists, chemists, biologists, and psychologists rely on the same processes of thought and observation that govern all our efforts to stay in touch with reality.”

Also:

“Many people think about science primarily in terms of academic titles, budgets, and architecture, and not in terms of the logical and empirical intuitions that allow us to form justified beliefs about the world.”

As well as:

“I am, in essence, defending the unity of knowledge — the idea that the boundaries between disciplines are mere conventions and that we inhabit a single epistemic sphere in which to form true beliefs about the world.”

It seems from the above that Harris thinks science is just the application of reason and observation in order to arrive at justified true beliefs about the world. To be more precise, what Harris would likely say is that science is conceived as consisting in applying reason and observation with the intention to acquire justified true beliefs (otherwise, every time a scientific notion turns out to be false we would have to conclude that it wasn’t science to begin with).

As long as we use reason and observation with proper epistemic intentions, we are doing science regardless of whether or not we really do acquire true beliefs. Even if we found out that some of our beliefs are false, we can always try to replace them with better justified ones. While this definition too is not without problems, I will assume this is what Harris has in mind. I’ll argue, however, that if science is conceived as just applying reason and observation with an intention to acquire justified true beliefs, this leads to one of the best known problems in the philosophy of science: the demarcation problem.

The demarcation problem is the problem of how we differentiate science from pseudoscience in principle. In other words, the demarcation problem consists in finding principles, criteria, reasons, or conditions for placing something like astronomy under “science” and its counterpart, astrology, under “pseudoscience”. However, for every proposed claim about what distinguishes science from pseudoscience, there is a counter-example. For example, Karl Popper proposed falsification as the criterion for distinguishing science from pseudoscience: if a set of claims or a theory is falsifiable, it belongs to the domain of science; if it is unfalsifiable, it belongs to the domain of pseudoscience. However, this criterion is too strict, because some untestable scientific claims, ranging from string theory to the many-worlds interpretation, are not considered pseudoscience.

How is the problem of demarcation relevant to Harris’ definition of science? If science is any activity that relies on reason and observation (or empirical and logical intuitions) with the intention to produce justified true belief, then the concept includes many things that are considered to be pseudoscience or non-science. Consider three examples.

First, phrenology. Phrenology is now relegated to pseudoscience, but during the 19th century many people took it seriously. Phrenology claims that personalities, emotions, talents, and the like are caused by the activity of very specific regions of the brain. The theory was developed by Franz Joseph Gall on the basis of his observations of the sizes of many skulls. Harris’ definition of science seems to force him to accept that phrenology is in fact a science.

Second, Intelligent Design (ID). Many people like to ridicule proponents of ID as mindless buffoons, but in fact the public figures of ID, such as Stephen C. Meyer and Michael Behe, are well-educated and thoughtful people. This doesn’t mean that their claims are true. After all, it’s possible to be well-educated, thoughtful, and yet fundamentally misguided. But ID proponents like Stephen C. Meyer do in fact use reason and observation to support their claim. They give arguments and provide what they think of as empirical evidence for their conclusion that there must be an intelligent designer. In effect, they are doing science according to Harris’ definition.

As a side note, someone could object that ID proponents are using reason and observation too poorly for what they do to be considered science. Moreover, what they are doing goes against the established body of knowledge. Yet Harris’ definition of science does not include any qualification concerning the quality of the use of reason and observation. Harris could propose to amend his definition to say that reason and observation need to be used well; I shall address this later. As to the second point, going against the body of established knowledge may seem irrational, but we should be careful here, because many scientists who initiated a breakthrough were going against the then-accepted body of knowledge. Albert Einstein’s General Relativity went against Newtonian Mechanics, which was an established body of theoretical knowledge, yet we certainly don’t want to say that Einstein was being irrational.

Third, consider Natural Theology. Regardless of what one may think of Natural Theology, we can all agree that it is not a science. However, natural theologians use observation and reason to support their claim that God exists. One notable example is the fine tuning argument. Natural theologians observe that the values of the cosmological constants are conducive to the existence of life, and they make an inference to the best explanation (at least in their view) that God is responsible for so structuring the universe. Whether or not this is a convincing argument, natural theologians are indeed using both observation and reason to support their claim. According to Harris’ definition of science, Natural Theology is therefore a science.

What I’m trying to argue by way of these counterexamples is that Harris’ conception of science is far too broad. It readily includes a number of notions that most of us wouldn’t consider to be within the domain of science, and reasonably so. In fact, it seems to include things that are considered to be downright pseudoscience, or theology. It should therefore be apparent that Harris’ definition of science is not very helpful, as it exacerbates the demarcation problem.

Harris could reply by arguing that he wants to make a distinction between a rigorous and reliable use of reason and observation and a loose and unreliable use of reason and observation. Anything that counts as science involves a rigorous and reliable use of reason and observation, whereas pseudoscience involves a loose and unreliable use of reason and observation, although both have the intention to produce a justified true belief. With this new distinction, Harris could exclude ID, Phrenology, and Natural Theology from the domain of science because they involve a very sloppy and unreliable use of reason and observation.

However, even Harris’ improved definition wouldn’t work. Even if it succeeds in excluding ID, Phrenology, and Natural Theology, it ends up excluding a lot of science. There is plenty of scientific work that is not very rigorous or reliable. For example, more often than most people realize, peer-reviewed journals publish scientific work afflicted by serious methodological problems.

One also has to consider the kind of scientific work done at the frontier of knowledge. A lot of this work will turn out to be mistaken, because it deals with something that is barely within the grasp of science. Consider, for example, some cutting-edge work in neuroscience. Despite the enormous progress neuroscience has made of late, there is of course still a lot that neuroscientists do not know. Application of the scientific method in this domain began rather poorly (not rigorously, and somewhat unreliably), but eventually improved, and continues to improve. Still, according to Harris’ augmented definition, neuroscience done poorly would not be within the domain of science.

Harris may, of course, continue to modify and improve his definition, but he also wants to keep it as broad as possible, in order to include ethics within the domain of science. This is a very difficult, if not impossible, challenge. What we have seen so far is that he has to narrow down his definition in order to avoid embarrassing counter-examples. But if he is forced to continue on this path, his eventual definition risks ending up either altered beyond recognition or failing in its stated purpose of including branches of philosophy such as ethics.

Some readers may at this point conclude that this is merely a semantic issue. In an important sense, they are correct. After all, Harris provides a definition of science, and I am disputing it. This is a discussion about the meaning of words, that is, about semantics. But contrary to popular understanding, semantic issues aren’t pointless. On the contrary, they are quite instructive. Mine is a cautionary tale about what happens when one broadens the meaning of an important word too much, leading straight into clearly unintended and perhaps even embarrassing consequences. One simply has to be careful with how one uses words, especially when one’s entire argument depends on it. Despite being a good writer, Harris, it turns out, is not careful with his words.

Battles Over Memory Rage On

Throughout the world, post-conflict societies have grappled with bitter wartime memories. Some succeed by casting aside disputes over the past, says David Rieff, author of In Praise of Forgetting: Historical Memory and Its Ironies. Others, like postwar Germany, have repudiated their pasts. White nationalist protests in Charlottesville, Virginia, this month over the removal of Confederate monuments highlight how the United States has had no such reckoning, he says. This stems from the federal government’s failure to impose a narrative of emancipation of African Americans after the Confederacy’s surrender, Rieff says, adding that “the rebellion won the peace, in terms of the memory war.”

How do you distinguish between memory, collective memory, and history?

The only thing that we can call memory is individual memory. You can testify on the basis of your individual memory in a court, but not on the basis of your collective memory. Even if we agreed on what happened [in Charlottesville], that wouldn’t mean we have a collective memory; it would just mean we share the same view. Collective memory is the way that societies agree to remember the past.

Collective memory is the way we think about history, but it’s not history. History is critical, or it’s nothing. Respectable historians don’t write history to serve the interests of the present. They write history to explain what happened in the past, not to tell you, “this was the bad guy, this was the good guy.” Collective memory is about building solidarity, about building community. It is about finding a way to reconcile with the past. That seems to be at the core of the question of these Confederate monuments.

How do these ways of viewing the past relate to debates over monuments commemorating the Confederacy?

We’re always rewriting our past, and when monuments are built they reflect a certain vision. A monument usually honors one side in a conflict. Even in putting up as uncontroversial a monument as one to George Washington, we’re saying George Washington was right, the revolution was right, and the Tory sympathizers—the third of the American population that moved to Canada or back to Britain because they opposed the American Revolution—[were wrong]. There’s no such thing as a monument that exists in a value-free, opinion-free, politics-free, ideology-free context. When you have a tragic and controversial question like that of the American Civil War, that’s exponentially more difficult, and it has to be thought through that much more carefully.

These monuments to the Confederacy seem to defy the cliché that the victors write the history.

What distinguished the American Civil War from so many other conflicts is that the victors didn’t build all the monuments. The Southern version of the past—that of a tragic war between brothers, a noble cause—became the national myth after the murder of Reconstruction. The federal government won the war, and the rebellion won the peace, in terms of the memory war.

Many [Union] veterans were not at all sympathetic to reconciling with their Southern brothers; they didn’t feel that their causes were equivalent. But the Reconstruction, which might have put the United States on a different basis in terms of race relations and the situation of African American people had it persisted, was basically abolished after the Electoral College victory of President [Rutherford B.] Hayes, who was indebted to the South. [Hayes assumed the presidency despite appearing to lose the popular vote, and only after a protracted dispute.]

In a sense, the [U.S.] military ratified the victory of the Southern version of events in the memory wars by naming bases in the states of the old Confederacy, where there are a disproportionate number of military bases, after Confederate generals. The most important one is Fort Bragg, which was named [in 1918] after General Braxton Bragg, who fought to allow the South to secede and remain a slaveholding polity. [It is the] headquarters of the airborne forces and Special Operations forces, which are fighting most of our wars. It’s paradoxical because in many ways, the U.S. military is much better at race relations than the rest of our society, and in many ways their seriousness about confronting these issues should be a model for the rest of U.S. society.

Does the political salience of these monuments wax and wane?

There are [Union] monuments in the Northeast, above all in small-town New England, to those who fell in the Civil War. By the [mid-twentieth century], whatever power they once had, they no longer had, whereas the Southern monuments retained their power, first and foremost over African Americans. Black leaders made heroic efforts to remind people of what Confederate battle flags, the official flag of the Confederacy, and these monuments truly represented, but their voices were largely unheard.

It was only when Dylann Roof, who clearly expressed Confederate views, murdered [nine congregants of the Emanuel African Methodist Episcopal Church, in Charleston, South Carolina, in June 2015] that there was serious support outside the African American community toward bringing battle flags down from the statehouse domes and all the rest of it. Now, after Charlottesville, the thing has gone viral. Cities are removing these statues, partly just in the name of public order but partly because people have realized these are not innocent objects of contemplation of the past, but powerful ideological statements that many, many Americans find hateful.

The periods during which these monuments were built seem to say at least as much about the politics of the moment as about the period they’re commemorating.

There were a great many monuments to the Confederate war dead and to Confederate generals erected between 1865 and the beginning of the First World War for us, in 1917. A great many more were erected during the civil rights struggles that began with [President Harry S.] Truman’s integration of the Army and the strike of the railroad porters in the forties, and then going through the great battles over segregation in the fifties through the early sixties. These monuments kept getting refreshed in the states of the old Confederacy, given new life and used for different purposes, whereas the power of the monuments in the North to command political allegiance, to have moral authority, fell into desuetude.

Have you seen examples from conflicts abroad that might inform how the United States addresses these issues?

If one side wins a war completely—there is unconditional surrender—at least in theory it can impose a version of the past. The Germans transformed their understanding of their own history [after World War II], but that was only possible because they were crushed and the victorious [Western] allies decided that they weren’t going to put up with any compromise over the understanding of just how evil Nazi Germany had been. As an occupying power we were able to impose an anti-Nazi view, which post-occupation German leaders on both the left and right [embraced], so generations of children have grown up with an accurate account of what happened. It would be inconceivable that contemporary Germans would accept the idea of a Fort Rommel [named for the field marshal of the German armed forces in World War II], or that the German military or political elite, right or left, would propose it.

But imposing memories can’t work when there’s no prospect of a victor; most wars do not end in total victory. In Northern Ireland and in the Balkans [in the 1990s] there was no way of imposing one version of the past. They had to agree to disagree; you’re not going to convince either side for the foreseeable future of a version of the past that holds [itself] at fault. Lots of Irish people would agree that it’s better either to forget or to be silent.

Since the German example is one of total victory, could it offer an analogy for the U.S. Civil War?

The Germans were first forced and then themselves chose to write their history in a way that reflected a rejection of evil. We did the opposite after our Civil War. We could have done exactly as the Germans did, but we chose not to, and now we’re paying a steep price for that.

We would have had to take seriously what [President Abraham] Lincoln thought [about the] purpose of the war. It might not have begun as the emancipation of black people, but that was the cause for which the second half of it was pursued. Since that hadn’t been established fully in wartime, it needed to be continued in peacetime. But Hayes comes into office within a little more than a decade of the end of the war, and the Reconstruction, which was meant to impose a different political system on the South, was halted, and the South was allowed to go back to what amounted to apartheid.

Presidents were not willing to fulfill the promises of the Emancipation Proclamation, which should have included preventing the revival of the Confederate view of the world. By the twentieth century, there were presidents who were absolute racist segregationists. Woodrow Wilson undid the moral progress that Teddy Roosevelt had made. Wilson tolerated the Ku Klux Klan, so much so that he held a screening of Birth of a Nation, a movie glorifying the Klan, in the White House. Far from making a federal effort to undo the Southern version, by the time you get to Wilson’s presidency there was actually a federal effort to confirm it.

Can war memorials be compatible with a pluralistic liberal democracy?

Monuments listing the names of the dead themselves are acceptable by any moral yardstick; that is the genius of Maya Lin’s Vietnam War memorial in Washington. [Memorializing] Confederates who were ordinary soldiers and noncommissioned officers who died seems to be fine. The ethical problem is glorifying the cause.

Monuments to the leading military lights of secession, or to the glory or the memory of that state, should be perceived as a rebuke to our conscience and to the history of our country and taken away. Some things cannot be fixed. This is something that can be fixed.

This interview has been edited and condensed.

Why Pay Workers More?

AN ECONOMIST’S GUIDE TO EFFICIENCY WAGES

The term “efficiency wages” or “efficiency earnings” was first introduced by the influential economist Alfred Marshall (1842-1924) to represent a unit of labor based upon the relationship between wage and efficiency. According to Marshall, efficiency wages would require that employers pay based upon output or efficiency, meaning that more efficient workers would be paid more than less efficient workers, which would render the employer theoretically indifferent between workers of differing efficiencies.

Today, the term has taken on a different meaning.

The modern use of the term “efficiency wage” refers to the hypothesis that wages in some markets are not always set by market clearing, the process that requires supply to equal demand. The market-clearing wage is the wage or pay scale at which the quantity of labor supplied is equal to the quantity of labor demanded. It is when supply equals demand that the market is cleared of all excess.

Efficiency Wage Theory

In economic theory, market clearing represents the most efficient outcome. But the idea behind efficiency wage theory is that it may ultimately benefit a firm or company to pay laborers a wage that is higher than the market-clearing level. That is to say, there is an incentive for companies to pay their employees more than the market-clearing or equilibrium wage.

WHY PAY MORE?

In theory, perfect wage equilibrium presents the ideal state for a market.

But in reality, other factors influence an employer’s determination of wages. A company may be incentivized to pay its workers more in order to benefit in other areas, such as increased productivity or a reduction in costs associated with turnover. There are several theories as to why employers may choose to pay above the market-clearing wage, including:

  • Avoiding shirking: By raising the cost of being fired, employers discourage shirking, or doing less work than agreed.
  • Minimizing turnover: The costs of turnover are considerable. In general, it is expensive to hire and train replacement workers. Paying above market-clearing wages can encourage worker loyalty and reduce a worker’s desire to leave or look for employment elsewhere.
  • Sociological benefits: Higher wages can encourage higher morale, which in turn can raise group output norms.
  • Increasing selection: Companies offering higher wages will generally attract more job candidates, which will improve their applicant pool. This can be especially profitable when the position requires skill or experience.

Simply put, labor productivity is positively correlated with the wage in these efficiency wage models, which provides a simple explanation of why employers might stray from paying the market-clearing wage.
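
To make the intuition concrete, here is a minimal numerical sketch in Python. The effort function, the reference wage of 10, and the assumed market-clearing wage of 12 are purely hypothetical choices for illustration, not figures from any study; the point is only that when effort rises with pay, the wage that minimizes the cost per unit of effective labor can sit well above the market-clearing level.

    import numpy as np

    # A toy efficiency-wage sketch. The effort function and every number
    # below are hypothetical assumptions chosen purely for illustration.
    def effort(wage, w0=10.0):
        # Effort rises with pay above a reference level w0, with diminishing returns.
        return np.sqrt(np.maximum(wage - w0, 0.0))

    wages = np.linspace(10.5, 40.0, 2000)
    cost_per_effort = wages / effort(wages)   # wage bill per unit of effective labor

    best_wage = wages[np.argmin(cost_per_effort)]
    market_clearing_wage = 12.0               # assumed equilibrium wage, for comparison

    print(f"cost-minimizing wage:         {best_wage:.2f}")
    print(f"assumed market-clearing wage: {market_clearing_wage:.2f}")
    print(f"cost per unit of effort:      {best_wage / effort(best_wage):.2f}"
          f" vs {market_clearing_wage / effort(market_clearing_wage):.2f}")

With these made-up numbers the cost-minimizing wage comes out near 20, well above the assumed market-clearing wage of 12, because the extra pay buys more than its cost in extra effort.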

By contrast, consider models in which the wage is equal to labor productivity in equilibrium, or models in which wages are set to reduce the likelihood of unionization (union threat models). In these, productivity is not a function of the wage.

Introduction to Holography

If you’re carrying money, a driver’s license, or credit cards, you’re carrying around holograms. The dove hologram on a Visa card may be the most familiar. The rainbow-colored bird changes colors and appears to move as you tilt the card. Unlike a bird in a traditional photograph, a holographic bird is a three-dimensional image. Holograms are formed by the interference of light beams from a laser.

HOW LASERS MAKE HOLOGRAMS

Holograms are made using lasers because laser light is “coherent.” What this means is that all of the photons of laser light have exactly the same frequency and a fixed phase relationship to one another.

Splitting a laser beam produces two beams that are the same color as each other (monochromatic). In contrast, regular white light consists of many different frequencies of light. When white light is diffracted, the frequencies split to form a rainbow of colors.

In conventional photography, the light reflected off an object strikes a strip of film that contains a chemical (e.g., silver bromide) that reacts to light. This produces a two-dimensional representation of the subject. A hologram forms a three-dimensional image because light interference patterns are recorded, not just reflected light. To make this happen, a laser beam is split into two beams that pass through lenses to expand them. One beam (the reference beam) is directed onto high-contrast film. The other beam (the object beam) is aimed at the object. Light from the object beam gets scattered by the hologram’s subject, and some of this scattered light travels toward the photographic film.

The scattered light from the object beam is out of phase with the reference beam, so when the two beams interact they form an interference pattern.

The interference pattern recorded by the film encodes a three-dimensional pattern, because the distance from any point on the object to the film affects the phase of the scattered light.
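
To make the phase-encoding idea concrete, here is a toy one-dimensional simulation in Python. The wavelength, beam geometry and single point-like object are hypothetical assumptions used only for illustration; real recording geometries are more involved, but the sketch shows that moving the object changes the interference pattern the film records.

    import numpy as np

    # A toy one-dimensional sketch of hologram recording. The wavelength,
    # geometry and single point-like object are hypothetical assumptions.
    wavelength = 633e-9                   # a red laser line, in meters
    k = 2 * np.pi / wavelength            # wavenumber
    x = np.linspace(-1e-3, 1e-3, 20001)   # positions across the film (meters)

    # Reference beam: a plane wave arriving at a small angle to the film.
    reference = np.exp(1j * k * x * np.sin(np.deg2rad(2.0)))

    def recorded_pattern(object_distance):
        # Object beam: a spherical wave scattered by a single point
        # located object_distance meters in front of the film.
        r = np.sqrt(object_distance**2 + x**2)          # path length to each film point
        object_wave = object_distance * np.exp(1j * k * r) / r
        return np.abs(reference + object_wave) ** 2     # intensity the film records

    near = recorded_pattern(0.05)   # object 5 cm from the film
    far = recorded_pattern(0.10)    # object 10 cm from the film

    # The patterns differ because the scattered light's phase at each point
    # on the film depends on its distance from the object.
    print(f"mean |difference| between the two patterns: {np.mean(np.abs(near - far)):.3f}")

The nonzero difference is the point: objects at different depths leave different fringe patterns on the same flat film.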

However, there is a limit to how “three-dimensional” a hologram can appear. This is because the object beam only hits its target from a single direction. In other words, the hologram only displays the perspective from the object beam’s point of view. So, while a hologram changes depending on the viewing angle, you can’t see behind the object.

VIEWING A HOLOGRAM

A hologram image is an interference pattern that looks like random noise unless viewed under the right lighting. The magic happens when a holographic plate is illuminated with the same laser beam light that was used to record it. If a different laser frequency or another type of light is used, the reconstructed image won’t exactly match the original. Yet, the most common holograms are visible in white light. These are reflection-type volume holograms and rainbow holograms. Holograms that can be viewed in ordinary light require special processing. In the case of a rainbow hologram, a standard transmission hologram is copied using a horizontal slit. This preserves parallax in one direction (so the perspective can move), but produces a color shift in the other direction.

USES OF HOLOGRAMS

The 1971 Nobel Prize in Physics was awarded to the Hungarian-British scientist Dennis Gabor “for his invention and development of the holographic method”.

Originally, holography was a technique used to improve electron microscopes. Optical holography didn’t take off until the invention of the laser in 1960. Although holograms were immediately popular for art, practical applications of optical holography lagged until the 1980s. Today, holograms are used for data storage, optical communications, interferometry in engineering and microscopy, security, and holographic scanning.

INTERESTING HOLOGRAM FACTS

  • If you cut a hologram in half, each piece still contains an image of the entire object. In contrast, if you cut a photograph in half, half of the information is lost.
  • One way to copy a hologram is to illuminate it with a laser beam and place a new photographic plate such that it receives light from the hologram and from the original beam. Essentially, the hologram acts like the original object.
  • Another way to copy a hologram is to emboss it using the original image. This works much the same way records are made from audio recordings. The embossing process is used for mass production.

The Philosophy of Honesty

What does it take to be honest? Although often invoked, the concept of honesty is quite tricky to characterize. On closer inspection, it turns out to be a notion cognate with that of authenticity. Let’s see why.

TRUTH AND HONESTY

While it may be tempting to define honesty as speaking the truth and abiding by the rules, this is an overly simplistic view of a complex concept. Telling the truth – the whole truth – is at times practically and theoretically impossible, as well as morally unnecessary or even wrong.

Suppose your new partner asks you to be honest about what you have done over the past week, while you were apart: does this mean you have to recount everything you did? Not only may you lack the time, and fail to recall every detail; but, really, is everything relevant? Should you also talk about the surprise party you are organizing for your partner next week?

The relationship between honesty and truth is much more subtle. What is the truth about a person, anyway? When a judge asks a witness to tell the truth about what happened that day, the request cannot be for every particular whatsoever, but only for the relevant ones. And who is to say which particulars are relevant?

HONESTY AND THE SELF

Those few remarks should be sufficient to bring out the intricate relationship between honesty and the construction of a self. Being honest involves the capacity to select, in a way that is context-sensitive, certain particulars about our lives.

At the very least, then, honesty requires an understanding of how our actions do or do not fit within the rules and expectations of the Other – where the latter stands for any person we feel obliged to report to, including ourselves.

HONESTY AND AUTHENTICITY

But there’s more to the relationship between honesty and the self.

Have you been honest with yourself? That is indeed a major question, discussed not only by figures such as Plato and Kierkegaard, but also in David Hume’s “Philosophical Honesty.” To be honest with ourselves seems to be a key part of what it takes to be authentic: only those who can face themselves, in all their peculiarity, seem capable of developing a persona that is true to itself – hence, authentic.

HONESTY AS A DISPOSITION

If honesty is not telling the whole truth, what is it? One way to characterize it, typically adopted in virtue ethics (the school of ethics that developed from Aristotle’s teachings), treats honesty as a disposition. Here is my rendering of the view: a person is honest when she possesses the disposition to face the Other by making explicit all those details that are relevant to the conversation at issue.

The disposition in question is a tendency that has been cultivated over time. That is, an honest person is one who has developed the habit of bringing forward to the Other all those details of her life that seem relevant to the conversation at hand. The ability to discern what is relevant is part of honesty and is, of course, quite a complex skill to possess.

Despite its centrality in ordinary life, as well as in ethics and the philosophy of psychology, honesty is not a major area of research in contemporary philosophical debate – which leaves plenty of room for reflecting further on the challenges the issue poses.

Should the results of Nazi experiments ever be taken up and used?

During World War II, Nazi doctors had unfettered access to human beings they could use in medical experiments in any way they chose. In one way, these experiments were just another form of mass torture and murder so our moral judgement of them is clear.

But they also pose an uncomfortable moral challenge: what if some of the medical experiments yielded scientifically sound data that could be put to good use? Would it be justifiable to use that knowledge?

Using data

It’s tempting to deflect the question by saying the data are useless – that the bad behaviour must have produced bad science, so we don’t even have to think about it. But there is no inevitable link between the two because science is not a moral endeavour. If scientific data is too poor to use, it’s because of poor study design and analysis, not because of the bad moral character of the scientist. And in fact, some of the data from Nazi experiments is scientifically sound enough to be useful.

The hypothermia experiments in which people were immersed in ice water until they became unconscious (and many died), for instance, established the rate of cooling of humans in cold water and provided information about when re-warming might be successful. Data from the Nazi experiments was cited in scientific papers from the 1950s to the 1980s, but with no indication of its nature.

The original source appears to be a paper by Leo Alexander, published in the Combined Intelligence Objectives Subcommittee files. This is an unusual type of publication to be cited in a scientific journal, and the citation gives no indication that the material comes from the trial of Nazi doctors at Nuremberg.

In the late 1980s, US researcher Robert Pozos argued the Nazi hypothermia data was critical to improving methods of reviving people rescued from freezing water after boat accidents, but the New England Journal of Medicine rejected his proposal to publish the data openly.

Use of data generated by the Nazis from the deadly phosgene gas experiments has also been considered, and rejected by the US Environmental Protection Agency, even though it could have helped save lives of those accidentally exposed.

A tricky conundrum

So should the results of Nazi experiments ever be taken up and used? A simple utilitarian response would look to the obvious consequences. If good can come to people now and in the future from using the data, then its use is surely justified. After all, no further harm can be done to those who died.

But a more sophisticated utilitarian would think about the indirect and subtle consequences. Perhaps family members of those who were experimented on would be distressed to know the data was being used, and their distress might outweigh the good that could be done. Or perhaps using the data would send the message that the experiments weren’t so bad after all, and even encourage morally blinkered doctors to engage in their own unethical experiments.

Of course, these bad consequences could be avoided simply by making sure the data is used in secret, never entering the published academic literature. But recommending deception to solve a moral problem is clearly problematic in itself.

The trouble is that focusing on the consequences – whether good or bad – of using Nazi data misses an important point: there’s a principle at stake here. Even if some good could come of using the data, it would simply not be right to use it. Doing so would somehow deny or downplay the evil of the experiments that generated it.

This is a common sentiment, but if it is to hold ethical weight we need to be able to spell it out and give it a solid foundation. A little reflection shows that, as a society, we don’t have an absolute objection to deriving some good out of something bad or wrong. Murder victims sometimes become organ donors, for instance, and there is no concern that this is inappropriate.

Paying our debt

So how to decide when it’s all right to derive some good from a wrongdoing? I think the answer lies in considering what society owes ethically to the victims of a wrongdoing. The ongoing investigations into institutional child sexual abuse in a number of Western countries have brought this question sharply into focus.

The wrongs done to victims of abuse are over but that’s not the end of the matter. Victims are ethically owed many things: recognition that what was done to them was indeed wrong, a credible indication that the society takes this seriously, an effort to identify, apprehend and punish the perpetrators, and compensation for their ongoing suffering and disadvantage. But beyond this, we have an obligation not to forget, and not to whitewash.

Victims of Nazi medical experiments are owed these same things. If society’s obligations to them have broadly been met through the Nuremberg trials and the ongoing global abhorrence of the awful things done to people in World War II, then it might be ethically possible to use the data if it could lead to some good.

But this must only be done with absolute openness about the source of the data, and clear condemnation of the way it was obtained. Citation of the Nazi hypothermia data in the medical and scientific literature from the 1950s to the 1980s gives no hint at all about what is being referred to, and so falls ethically short.

What’s wrong with Global Capitalism?

Global capitalism, the current epoch in the centuries-long history of the capitalist economy, is heralded by many as a free and open economic system that brings people from around the world together, fosters innovations in production, facilitates the exchange of culture and knowledge, brings jobs to struggling economies worldwide, and provides consumers with an ample supply of affordable goods.

But while many may enjoy benefits of global capitalism, others around the world — in fact, most — do not.

The research and theories of sociologists and intellectuals who focus on globalization, including William I. Robinson, Saskia Sassen, Mike Davis, and Vandana Shiva, shed light on the ways this system harms many.

GLOBAL CAPITALISM IS ANTI-DEMOCRATIC

Global capitalism is, to quote Robinson, “profoundly anti-democratic.” A tiny group of global elites decides the rules of the game and controls the vast majority of the world’s resources. In 2011, Swiss researchers found that just 147 of the world’s corporations and investment groups controlled 40 percent of corporate wealth, and that just over 700 controlled nearly all of it (80 percent). This puts the vast majority of the world’s resources under the control of a tiny fraction of the world’s population. Because political power follows economic power, democracy in the context of global capitalism can be nothing but a dream.

USING GLOBAL CAPITALISM AS A DEVELOPMENT TOOL DOES MORE HARM THAN GOOD

Approaches to development that sync with the ideals and goals of global capitalism do far more harm than good. Many countries that were impoverished by colonization and imperialism are now impoverished by IMF and World Bank development schemes that force them to adopt free trade policies in order to receive development loans.

Rather than bolstering local and national economies, these policies pour money into the coffers of global corporations that operate in these nations under free trade agreements. And, by focusing development on urban sectors, they have pulled hundreds of millions of people around the world out of rural communities with the promise of jobs, only for them to find themselves un- or under-employed and living in densely crowded and dangerous slums. In 2011, the United Nations Habitat Report estimated that 889 million people—or more than 10 percent of the world’s population—would live in slums by 2020.

THE IDEOLOGY OF GLOBAL CAPITALISM UNDERMINES THE PUBLIC GOOD

The neoliberal ideology that supports and justifies global capitalism undermines public welfare. Freed from regulations and most tax obligations, corporations made wealthy in the era of global capitalism have effectively stolen social welfare, support systems, and public services and industries from people all over the world. The neoliberal ideology that goes hand in hand with this economic system places the burden of survival solely on an individual’s ability to earn money and consume. The concept of the common good is a thing of the past.

THE PRIVATIZATION OF EVERYTHING ONLY HELPS THE WEALTHY

Global capitalism has marched steadily across the planet, gobbling up all land and resources in its path.

Thanks to the neoliberal ideology of privatization, and the global capitalist imperative for growth, it is increasingly difficult for people all over the world to access the resources necessary for a just and sustainable livelihood, like communal space, water, seed, and workable agricultural land.

THE MASS CONSUMERISM REQUIRED BY GLOBAL CAPITALISM IS UNSUSTAINABLE

Global capitalism spreads consumerism as a way of life, which is fundamentally unsustainable. Because consumer goods mark progress and success under global capitalism, and because neoliberal ideology encourages us to survive and thrive as individuals rather than as communities, consumerism is our contemporary way of life. The desire for consumer goods and the cosmopolitan way of life they signal is one of the key “pull” factors that draws hundreds of millions of rural peasants to urban centers in search of work.

Already, the planet and its resources have been pushed beyond their limits by the treadmill of consumerism in Northern and Western nations. As consumerism spreads to newly developed nations via global capitalism, the depletion of the earth’s resources, waste, environmental pollution, and the warming of the planet are increasing to catastrophic levels.

HUMAN AND ENVIRONMENTAL ABUSES CHARACTERIZE GLOBAL SUPPLY CHAINS

The globalized supply chains that bring all of this stuff to us are largely unregulated and systemically rife with human and environmental abuses. Because global corporations act as large buyers rather than producers of goods, they do not directly hire most of the people who make their products. This arrangement frees them from any liability for the inhumane and dangerous work conditions where goods are made, and from responsibility for environmental pollution, disasters, and public health crises. While capital has been globalized, the regulation of production has not. Much of what stands for regulation today is a sham, with private industries auditing and certifying themselves.

GLOBAL CAPITALISM FOSTERS PRECARIOUS AND LOW-WAGE WORK

The flexible nature of labor under global capitalism has put the vast majority of working people in very precarious positions. Part-time work, contract work, and insecure work are the norm, none of which bestow benefits or long-term job security upon people. This problem crosses all industries, from the manufacture of garments and consumer electronics to teaching at U.S. colleges and universities, where most professors are hired on a short-term basis for low pay. Further, the globalization of the labor supply has created a race to the bottom in wages, as corporations search for the cheapest labor from country to country and workers are forced to accept unjustly low wages or risk having no work at all. These conditions lead to poverty, food insecurity, unstable housing and homelessness, and troubling mental and physical health outcomes.

GLOBAL CAPITALISM FOSTERS EXTREME WEALTH INEQUALITY

The hyper-accumulation of wealth experienced by corporations and a selection of elite individuals has caused a sharp rise in wealth inequality within nations and on the global scale. Poverty amidst plenty is now the norm. According to a report released by Oxfam in January 2014, half of the world’s wealth is owned by just one percent of the world’s population. At 110 trillion dollars, this wealth is 65 times as much as that owned by the bottom half of the world’s population. The fact that 7 out of 10 people now live in countries where economic inequality has increased over the last 30 years is proof that the system of global capitalism works for the few at the expense of the many. Even in the U.S., where politicians would have us believe that we have “recovered” from the economic recession, the wealthiest one percent captured 95 percent of economic growth during the recovery, while 90 percent of us are now poorer.

GLOBAL CAPITALISM FOSTERS SOCIAL CONFLICT

Global capitalism fosters social conflict, which will only persist and grow as the system expands. Because capitalism enriches the few at the expense of the many, it generates conflict over access to resources like food, water, land, and jobs. It also generates political conflict over the conditions and relations of production that define the system, in the form of worker strikes and protests, popular upheavals, and protests against environmental destruction. Conflict generated by global capitalism can be sporadic, short-term, or prolonged, but regardless of duration it is often dangerous and costly to human life. A recent and ongoing example surrounds the mining in Africa of coltan, used in smartphones and tablets, and of many other minerals used in consumer electronics.

GLOBAL CAPITALISM DOES THE MOST HARM TO THE MOST VULNERABLE

Global capitalism hurts people of color, ethnic minorities, women, and children the most. The history of racism and gender discrimination within Western nations, coupled with the increasing concentration of wealth in the hands of the few, effectively bars women and people of color from accessing the wealth generated by global capitalism. Around the world, ethnic, racial, and gender hierarchies influence or prohibit access to stable employment. Where capitalist-based development occurs in former colonies, it often targets those regions because the labor of those who live there is “cheap” by virtue of a long history of racism, subordination of women, and political domination. These forces have led to what scholars term the “feminization of poverty,” which has disastrous outcomes for the world’s children, half of whom live in poverty.

Women in gender-equal countries have better memory

Let’s test you. Read the title above once, then cover it and write down, word for word, what you remember. Having difficulties? How well you do may be down to which country you live in.

That’s according to a new study, published in Psychological Science, involving an impressive 200,000 women and men from 27 countries across five continents. It revealed that women from more conservative countries performed worse on memory tests than those from more egalitarian countries.

Demographics expert Eric Bonsang and his colleagues analysed national survey data from individuals above the age of 50. They used existing data from cognitive performance tests measuring episodic memory (memory of autobiographical events). These involved recalling, within one minute, as many as possible of ten words read out by a researcher, either immediately or after a short delay. The team rated each country’s level of gender equality by looking at the proportion of people agreeing with the statement: “When jobs are scarce, men should have more right to a job than women.”
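
As a rough illustration of this kind of analysis, here is a short Python sketch using made-up numbers. The countries, scores and attitude shares below are hypothetical, not the study’s data; the sketch simply shows how a country-level gender gap in recall can be set against a country-level attitude measure.

    import pandas as pd

    # Made-up illustrative data: average recall scores (out of 10) and the share
    # of respondents agreeing that "when jobs are scarce, men should have more
    # right to a job than women". None of these numbers come from the study.
    data = pd.DataFrame({
        "country": ["Sweden", "Sweden", "Denmark", "Denmark", "India", "India", "Ghana", "Ghana"],
        "sex": ["f", "m", "f", "m", "f", "m", "f", "m"],
        "words_recalled": [6.1, 5.4, 6.0, 5.5, 3.9, 4.6, 3.7, 4.3],
        "traditional_attitude": [0.05, 0.05, 0.07, 0.07, 0.55, 0.55, 0.60, 0.60],
    })

    # Average recall by country and sex, then the female-minus-male gap.
    recall = data.pivot_table(index="country", columns="sex", values="words_recalled")
    gap = (recall["f"] - recall["m"]).rename("female_advantage")

    # Pair each country's gap with its attitude measure and correlate the two.
    attitudes = data.groupby("country")["traditional_attitude"].first()
    summary = pd.concat([gap, attitudes], axis=1)

    print(summary)
    print("correlation:", round(summary["female_advantage"].corr(summary["traditional_attitude"]), 2))

With these invented numbers the correlation comes out strongly negative: the female recall advantage shrinks and then reverses as traditional attitudes become more common, mirroring the pattern the study reports.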

Women outperformed men on memory in gender-egalitarian countries such as Sweden, Denmark, The Netherlands, the US and most European countries. However, in Ghana, India, China, South Africa and some more gender-traditional European countries (such as Russia, Portugal, Greece and Spain) the pattern reversed. Women in these countries performed worse than men – which was exactly what the researchers had predicted. Interestingly, men in egalitarian countries also scored better than men in conservative countries (but not by as much).

The findings did not depend on world region or on the countries’ economic development (gross domestic product per capita in 2010). A factor that may be at play, however, is that modern countries (such as many of the gender-equal ones above) have better health benefits, so older adults there may simply be healthier. But that doesn’t necessarily explain the observed gender differences – the study, after all, found that the effect was stronger for women than for men.

The authors instead argue that a society’s attitudes to gender roles determine which behaviours and characteristics are deemed appropriate for men and women. In turn, these social expectations influence women’s (and men’s) life goals, occupational choices and experiences. As a result, women in more gender-traditional countries may have less exposure to cognitively stimulating activities such as those involved in education and work. Participation in education and work indeed explained 30% of the findings.

Damaging stereotypes

While the study provides some evidence that attitudes based on stereotypes do shape our abilities, a full test of this theory would require a study of aptitudes which are stereotypically considered feminine – such as social sensitivity or linguistic ability.

For example, would men in gender-traditional nations underperform on tests measuring social sensitivity, compared to women? A study conducted on American students showed just that. It may indeed be that this effect is even larger in more conservative countries.

The results of this study were explained in terms of “stereotype threat”, a fear of doing something that would confirm or reinforce the negative traits typically associated with members of stigmatised groups. Say you are a woman sitting a maths test. The common perception that women are not good at maths may play on your mind and your score may suffer as you struggle to concentrate. The fear takes away our cognitive resources and leads to underperformance on tasks deemed challenging for the stereotyped group.

This effect is very powerful and has been shown in a wealth of studies. When reminded of negative stereotypes, women have been shown to underperform on maths tests, or African Americans on tests measuring intellectual ability. Indeed the new study could be interpreted in terms of stereotype threat theory.

We’ve even seen the neurological underpinnings of this effect. Our new study, published in Frontiers in Aging Neuroscience, asked a group of older participants to read an article about memory fading with age (an age stereotype). We showed that, as a result, their reaction times in a cognitive task were delayed. What’s more, brain wave activity in these individuals indicated that their thoughts about themselves were more negative. This was seen in data from electroencephalography (EEG), which uses electrodes to track and record brainwave patterns.

Our study shows that short-term exposure to negative stereotypes has detrimental effects on cognitive functioning. Similar processes may have taken place in women continually exposed to negative gender and age stereotypes in gender-conservative countries – explaining their underperformance on the memory test.

What makes a country sexist?

Another consideration which future studies should take into account is countries’ wider political systems – not just gender attitudes themselves. One theory suggests that modernisation leads progressively to democratisation and liberalisation – including the liberalisation of attitudes to gender roles. A society’s heritage, whether political or religious, also influences its values.

Indeed, our studies on cross-cultural attitudes to women and men show that they are more liberal in longstanding democracies such as the UK than in countries transitioning to democracy (such as Poland and South Africa). We found that gender attitudes were also affected by the preceding political systems: they were more conservative in post-apartheid South Africa and less conservative in post-communist Poland. So national histories of institutionalised inequality (apartheid) versus forced emancipation (communism) have left a long-lasting impact on national levels of sexism.

Perhaps not coincidentally, some of the longest-standing democracies in the new study happen to be the ones that are more gender-egalitarian. As my research suggests, both democratisation and the reduction of stereotype threat – especially through the mass media, for example advertising involving non-traditional gender roles – are important. These efforts should be our focus in bringing greater equality across a range of skills for women and men across the globe.

Introduction to Ethical Egoism

Ethical egoism is the view that each of us ought to pursue our own self-interest, and no-one has any obligation to promote anyone else’s interests. It is thus a normative or prescriptive theory: it is concerned with how we ought to behave. In this respect, ethical egoism is quite different from psychological egoism, the theory that all our actions are ultimately self-interested. Psychological egoism is a purely descriptive theory that purports to describe a basic fact about human nature.

ARGUMENTS IN SUPPORT OF ETHICAL EGOISM

1. Everyone pursuing their own self-interest is the best way to promote the general good.

This argument was made famous by Bernard Mandeville (1670-1733) in his poem The Fable of the Bees, and by Adam Smith (1723-1790) in his pioneering work on economics, The Wealth of Nations. In a famous passage Smith writes that when individuals single-mindedly pursue “the gratification of their own vain and insatiable desires” they unintentionally, as if “led by an invisible hand,” benefit society as a whole. This happy result comes about because people generally are the best judges of what is in their own interest, and they are much more motivated to work hard to benefit themselves than to achieve any other goal.

An obvious objection to this argument, though, is that it doesn’t really support ethical egoism. It assumes that what really matters is the well-being of society as a whole, the general good. It then claims that the best way to achieve this end is for everyone to look out for themselves. But if it could be proved that this attitude did not, in fact, promote the general good, then those who advance this argument would presumably stop advocating egoism.

Another objection is that what the argument states is not always true.

Consider the prisoner’s dilemma, for instance. This is a hypothetical situation described in game theory. You and a comrade (call him X) are being held in prison. You are both asked to confess. The terms of the deal you are offered are as follows:

  • If you confess and X doesn’t, you get 6 months and he gets 10 years.
  • If X confesses and you don’t, he gets 6 months and you get 10 years.
  • If you both confess, you both get 5 years.
  • If neither of you confesses, you both get 2 years.

Now here’s the problem. Regardless of what X does, the best thing for you to do is confess: if he doesn’t confess, you’ll get a light sentence; and if he does confess, you’ll at least avoid the worst outcome. But the same reasoning holds for X as well. According to ethical egoism, then, you should both pursue your rational self-interest. But the outcome is not the best one possible: you both get five years, whereas if both of you had put your self-interest on hold, you’d each get only two years.

The point of this is simple. It isn’t always in your best interest to pursue your own self-interest without concern for others.
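To make the reasoning concrete, here is a minimal sketch in Python (the names are purely illustrative, and the sentence lengths are taken from the four options above) showing that confessing is the dominant strategy for each prisoner, even though mutual silence would leave both better off:

    # A minimal sketch of the prisoner's dilemma described above.
    # Payoffs are sentence lengths in years (smaller is better), taken
    # from the four options listed; the names here are illustrative only.
    SENTENCES = {
        # (your choice, X's choice): (your sentence, X's sentence)
        ("confess", "stay silent"): (0.5, 10),
        ("stay silent", "confess"): (10, 0.5),
        ("confess", "confess"): (5, 5),
        ("stay silent", "stay silent"): (2, 2),
    }

    def best_reply(x_choice):
        """Return the choice that minimises your own sentence, given X's choice."""
        return min(("confess", "stay silent"),
                   key=lambda mine: SENTENCES[(mine, x_choice)][0])

    # Whatever X does, confessing minimises your sentence (a dominant strategy)...
    assert best_reply("confess") == "confess"
    assert best_reply("stay silent") == "confess"

    # ...yet if both players follow that self-interested logic, each serves
    # 5 years, while mutual silence would have cost each only 2 years.
    print(SENTENCES[("confess", "confess")], SENTENCES[("stay silent", "stay silent")])

The two assertions encode the “regardless of what X does” step, and the final comparison shows why the jointly better outcome is missed when both players reason purely egoistically.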

2. Sacrificing one’s own interests for the good of others denies the fundamental value of one’s own life to oneself.

This seems to be the sort of argument put forward by Ayn Rand, the leading exponent of “objectivism” and the author of The Fountainhead and Atlas Shrugged.  Her complaint is that the Judeo-Christian moral tradition, which includes, or has fed into, modern liberalism and socialism, pushes an ethic of altruism.  Altruism means putting the interests of others before your own.  This is something we are routinely praised for doing, encouraged to do, and in some circumstances even required to do (e.g. when we pay taxes to support the needy).  But according to Rand, no-one has any right to expect or demand that I make any sacrifices for the sake of anyone other than myself.

A problem with this argument is that it seems to assume that there is generally a conflict between pursuing one’s own interests and helping others.

In fact, though, most people would say that these two goals are not necessarily opposed at all. Much of the time they complement one another. For instance, a student may help a housemate with her homework, which is altruistic. But that student also has an interest in enjoying good relations with her housemates. She may not help anyone and everyone in all circumstances, but she will help if the sacrifice involved is not too great. Most of us behave like this, seeking a balance between egoism and altruism.

OBJECTIONS TO ETHICAL EGOISM

Ethical egoism, it is fair to say, is not a very popular moral philosophy. This is because it goes against certain basic assumptions that most people have regarding what ethics involves. Two objections seem especially powerful.

1. Ethical egoism has no solutions to offer when a problem arises involving conflicts of interest.

Lots of ethical issues are of this sort. For example, a company wants to empty waste into a river; the people living downstream object. Ethical egoism just advises both parties to actively pursue what they want. It doesn’t suggest any sort of resolution or commonsense compromise.

2. Ethical egoism goes against the principle of impartiality.

A basic assumption made by many moral philosophers – and many other people, for that matter – is that we should not discriminate against people on arbitrary grounds such as race, religion, sex, sexual orientation or ethnic origin. But ethical egoism holds that we should not even try to be impartial. Rather, we should distinguish between ourselves and everyone else, and give ourselves preferential treatment.

To many, this seems to contradict the very essence of morality. The “golden rule,” versions of which appear in Confucianism, Buddhism, Judaism, Christianity, and Islam, says we should treat others as we would like to be treated. And one of the greatest moral philosophers of modern times, Immanuel Kant (1724-1804), argues that the fundamental principle of morality (the “categorical imperative,” in his jargon) is that we should not make exceptions of ourselves.

According to Kant, we shouldn’t perform an action if we couldn’t honestly wish that everyone would behave in a similar way in the same circumstances.