Latest Posts

The hacker ethic

The arrest of a British cybersecurity researcher on charges of disseminating malware and conspiring to commit computer fraud and abuse provides a window into the complexities of hacking culture.

In May, a person going by the nickname “MalwareTech” gained international fame – and near-universal praise – for figuring out how to slow, and ultimately effectively stop, the worldwide spread of the WannaCry malware attack. But in August, the person behind that nickname, Marcus Hutchins, was arrested on federal charges of writing and distributing a different malware attack first spotted back in 2014.

The judicial system will sort out whether Hutchins, who has denied wrongdoing and pleaded not guilty, will face as much as 40 years in prison. But to me as a sociologist studying the culture and social patterns of cybercrime, Hutchins’ experience is emblematic of the values, beliefs and practices of many hackers.

The hacker ethic

The term “hacking” has its origins in the 1950s and 1960s at MIT, where it was used as a positive label to describe someone who tinkers with computers. Indeed, the use of the word “hack,” signifying a clever or innovative use of something, is derived from this original meaning. Although the term may have originated at MIT, young people interested in computer technology were tinkering across the country. Technology journalist Steven Levy, in his well-regarded history of that period, writes that these early tinkerers were influenced by the countercultural milieu of the 1960s.

They developed a shared subculture, combining a disdain for tradition, a desire for an open society and optimistic views of how technology could transform people’s lives. Levy encapsulated this subculture into a series of beliefs he labeled the “hacker ethic.”

People who subscribe to the hacker ethic commonly have a disregard for traditional status markers, like class, age or educational credentials. In this sense, hacking is open, democratic and based on ability. This particular belief has come under scrutiny as some scholars have argued that hacker culture discourages women from joining in. However, many hackers have taken nontraditional career paths, including Hutchins, whose computer skills are self-taught.

Another aspect of hacker subculture is interest in tinkering, changing, modifying and making things work differently or better. This has led to a great deal of innovation, including open-source programs being maintained by collections of coders and programmers – for free.

It is also this tinkering that allows hackers to find vulnerabilities in computers and software. It was through tinkering that Hutchins found a way to slow the WannaCry attack.

Different-colored hats

Members of the hacker subculture don’t all agree on what they should do with those ideas. Typically, they’re divided into three categories, with names inspired by the tropes of Western movies.

“Black hat” hackers are the bad guys. They find vulnerabilities in software and networks and exploit them to make money, whether by stealing data or encrypting data and holding the decryption key for ransom. They also create mischief and havoc, defacing websites and taking over Twitter feeds. The person, or people, who did what Hutchins is charged with – writing and distributing the Kronos malware – sought to hijack victims’ banking information, break into their accounts and steal their money. That’s a clear black hat activity.

“White hat” hackers are the good guys. They often work for technology companies, cybersecurity firms or government agencies, seeking to identify technological flaws and fix them. Some of them also use their skills to catch black hat hackers and shut down their operations, and even identify them so they can face legal repercussions. Hutchins, in his work as a researcher for the Kryptos Logic cybersecurity firm, was a white hat hacker.

A third group occupies a middle ground, that of the “gray hats.” They are often freelancers looking to identify exploits and vulnerabilities in systems for a range of purposes. Sometimes they may submit their findings to corporate or government programs intended to identify and fix problems; other times the same person may sell a new finding to a criminal.

What separates these three groups is not their actions – all three groups find weaknesses and tell someone else about them – but their motives. This makes hacking distinct from other types of criminal behavior: There are no “white hat” burglars or “gray hat” money launderers.

The importance of motivation is why many people are skeptical of the charges against Hutchins, at least at the moment. To hackers, whether someone is doing something wrong depends on what hat or hats he is wearing.

Is hacking a crime?

Prosecutions under the Computer Fraud and Abuse Act are not simple, mainly because the law addresses only actions, not motives. As a result, many things that white hat hackers do, such as public interest research reported in scholarly journals, may be illegal, if prosecutors decide to charge the people involved.

Hutchins’ arrest for his alleged association with the Kronos banking Trojan carries the clear suggestion that he’s a black hat. The charges say that in 2014 an as-yet-unnamed person allegedly posted a YouTube video showing how the attack worked, and then offered it for sale. Hutchins is linked because he and that other person allegedly updated the malware’s code sometime in 2015, after which the other person allegedly sold the malware at least once.

But Hutchins’ white hat job is to find vulnerabilities. Just as he tinkered with the WannaCry code – and found the way to slow it down – he could have been tinkering with the Kronos code. And even if he wrote Kronos – which the government alleges but has not yet proven – that’s not necessarily illegal: Orin Kerr, a George Washington University professor who studies the law of computer crimes, told the Guardian, “It’s not a crime to create malware. It’s not a crime to sell malware. It’s a crime to sell malware with the intent to further someone else’s crime.”

Kerr’s comments suggest a third explanation – that Hutchins may have been wearing a gray hat, creating malware for a criminal to use. But we’re missing two key elements: proof of Hutchins’ actions and any understanding of what his motives might have been. It’s especially hard to be sure about his motives without knowing the details of any connection between Hutchins and the unnamed individual, or even that person’s identity.

It is too early to know what will happen to Marcus Hutchins. But there are precedents. In 1988, Robert Morris wrote the first worm malware while he was a graduate student at Cornell, and earned the dubious distinction of becoming the first person convicted under the Computer Fraud and Abuse Act. He is now a tenured professor at MIT.

Kevin Mitnick served five years in prison for various types of hacking. He now switches between white and gray hats – he is a security consultant and sells zero-day exploits to the highest bidder. And Mustafa Al-Bassam was once a member of the infamous LulzSec hacking group that hacked into the CIA and Sony. After serving a prison sentence, he completed a computer science degree and is now a security adviser. Hackers, unlike other criminals, can doff one hat and don another.

Exploring the benefits of LGBT marriage

For decades, researchers have studied the benefits of marriage, finding that married people are likely to be healthier, wealthier and wiser than their unmarried peers.

But these studies reflected those who were allowed to marry.

Only recently – when states started passing laws guaranteeing same-sex couples the right to marry – could researchers begin to examine how marriage impacted the health of LGBT Americans.

At the University of Washington School of Social Work, our team has conducted the first national study that explores the relationship between marriage, health and quality of life for LGBT adults 50 and older.

The findings reaffirm some of the health benefits associated with marriage in the general population. But they also highlight many of the unique barriers LGBT Americans continue to face.

The benefits of marriage persist

For the study – titled “Aging with Pride: National Health, Aging, and Sexuality/Gender Study (NHAS)” – we analyzed survey responses from 1,821 LGBT older adults who lived in states with legalized same-sex marriage and access to federal benefits (32 states plus the District of Columbia) as of Nov. 1, 2014, the date we distributed the survey. In the sample, 24 percent were legally married, 26 percent were in a committed relationship and 50 percent were single.

Of course, long before same-sex marriage was legalized, LGBT Americans maintained long-term relationships. So most of the married couples in the study had already been in a relationship for a number of years. The average length of a relationship for legally married couples was 23 years, while unmarried partners were together an average of 16 years.

In our study, the LGBT women and men who were legally married had better general health and a higher quality of life. And those who weren’t married but were in a relationship were generally doing better than single LGBT Americans.

Those who were legally married were more likely to be out of the closet. They also had more social resources – like having children or a close network of friends – and tended to possess more socioeconomic advantages: higher levels of education, higher earnings, home ownership and private health insurance.

A number of theories have been put forth to explain the general health benefits associated with marriage (at least, in the general population). Some researchers have found that having a spouse can be a motivation to stay healthy. A spouse can also monitor the health and well-being of his or her partner, which can lead to better health outcomes.

A unique set of challenges

Regardless of benefits associated with marriage, it’s important to note that only half of the couples in long-term committed relationships in this study chose to marry.

Older LGBT adults came of age during turbulent times: Prior to 1962, sodomy was a felony in every state (only in 2003 did the U.S. Supreme Court overturn sodomy laws in the remaining 14 states). Meanwhile, same-sex behavior was considered a mental disorder until 1973, when it was removed from the Diagnostic and Statistical Manual of Mental Disorders.

Against that backdrop, some LGBT older adults continue to conceal their sexual orientation or gender identity due to social stigma and discrimination. Marriage creates a legal, public, searchable record, putting your sexual orientation out in the open. And some LGBT older adults may not want to enter into an institution that has, for centuries, discriminated against them.

Meanwhile, those who remain single – half of the LGBT older adults in our study – aren’t reaping the benefits of being married.

Compared to those who were legally married or in a long-term relationship, single LGBT older adults were at a disadvantage across nearly every socioeconomic, social and health indicator the study measured. They were also more likely to be bisexual and a racial or ethnic minority, two subgroups that already experience heightened disadvantages in health.

When we asked respondents whether they had experienced the death of a spouse or partner, many gay and bisexual older adult men reported that they had – most likely due to the HIV/AIDS pandemic. And single men were significantly more likely to have experienced the death of a spouse or partner at some point in their lives than married and partnered men and single women.

Moving forward

In 1975, a few years after the Stonewall riots, a bill was introduced in Congress to ban discrimination against gay and lesbian individuals in employment, housing and public accommodations. Forty years later, similar proposed legislation still awaits passage.

So while a lot of attention has been given to the historic constitutional right for same-sex couples to say “I do,” this shouldn’t overshadow the struggles many in the LGBT community continue to face: an elevated risk of disability and mental distress compared to straight peers, higher rates of social isolation due to concealment of one’s sexual orientation and lack of culturally relevant care.

Our study also found that bias and victimization due to sexual orientation were the strongest predictors of poor health among LGBT older adults. To this day, discrimination against LGBT individuals remains legal in many cities and counties across the nation. As long as this is the case, many LGBT Americans will continue to suffer poor health outcomes relative to the straight population.

Quantum consciousness

The mere mention of “quantum consciousness” makes most physicists cringe, as the phrase seems to evoke the vague, insipid musings of a New Age guru. But if a new hypothesis proves to be correct, quantum effects might indeed play some role in human cognition. Matthew Fisher, a physicist at the University of California, Santa Barbara, raised eyebrows late last year when he published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain—which would essentially enable the brain to function like a quantum computer.

As recently as 10 years ago, Fisher’s hypothesis would have been dismissed by many as nonsense. Physicists have been burned by this sort of thing before, most notably in 1989, when Roger Penrose proposed that mysterious protein structures called “microtubules” played a role in human consciousness by exploiting quantum effects. Few researchers believe such a hypothesis plausible. Patricia Churchland, a neurophilosopher at the University of California, San Diego, memorably opined that one might as well invoke “pixie dust in the synapses” to explain human cognition.

Fisher’s hypothesis faces the same daunting obstacle that has plagued microtubules: a phenomenon called quantum decoherence. To build an operating quantum computer, you need to connect qubits—quantum bits of information—in a process called entanglement. But entangled qubits exist in a fragile state. They must be carefully shielded from any noise in the surrounding environment. Just one photon bumping into your qubit would be enough to make the entire system “decohere,” destroying the entanglement and wiping out the quantum properties of the system. It’s challenging enough to do quantum processing in a carefully controlled laboratory environment, never mind the warm, wet, complicated mess that is human biology, where maintaining coherence for sufficiently long periods of time is well nigh impossible.

Over the past decade, however, growing evidence suggests that certain biological systems might employ quantum mechanics. In photosynthesis, for example, quantum effects help plants turn sunlight into fuel. Scientists have also proposed that migratory birds have a “quantum compass” enabling them to exploit Earth’s magnetic fields for navigation, or that the human sense of smell could be rooted in quantum mechanics.

Fisher’s notion of quantum processing in the brain broadly fits into this emerging field of quantum biology. Call it quantum neuroscience. He has developed a complicated hypothesis, incorporating nuclear and quantum physics, organic chemistry, neuroscience and biology. While his ideas have met with plenty of justifiable skepticism, some researchers are starting to pay attention. “Those who read his paper (as I hope many will) are bound to conclude: This old guy’s not so crazy,” wrote John Preskill, a physicist at the California Institute of Technology, after Fisher gave a talk there. “He may be on to something. At least he’s raising some very interesting questions.”

Senthil Todadri, a physicist at the Massachusetts Institute of Technology and Fisher’s longtime friend and colleague, is skeptical, but he thinks that Fisher has rephrased the central question—is quantum processing happening in the brain?—in such a way that it lays out a road map to test the hypothesis rigorously. “The general assumption has been that of course there is no quantum information processing that’s possible in the brain,” Todadri said. “He makes the case that there’s precisely one loophole. So the next step is to see if that loophole can be closed.” Indeed, Fisher has begun to bring together a team to do laboratory tests to answer this question once and for all.

Fisher belongs to something of a physics dynasty: His father, Michael E. Fisher, is a prominent physicist at the University of Maryland, College Park, whose work in statistical physics has garnered numerous honors and awards over the course of his career. His brother, Daniel Fisher, is an applied physicist at Stanford University who specializes in evolutionary dynamics. Matthew Fisher has followed in their footsteps, carving out a highly successful physics career. He shared the prestigious Oliver E. Buckley Prize in 2015 for his research on quantum phase transitions.

So what drove him to move away from mainstream physics and toward the controversial and notoriously messy interface of biology, chemistry, neuroscience and quantum physics? His own struggles with clinical depression.

Fisher vividly remembers that February 1986 day when he woke up feeling numb and jet-lagged, as if he hadn’t slept in a week. “I felt like I had been drugged,” he said. Extra sleep didn’t help. Adjusting his diet and exercise regime proved futile, and blood tests showed nothing amiss. But his condition persisted for two full years. “It felt like a migraine headache over my entire body every waking minute,” he said. It got so bad he contemplated suicide, although the birth of his first daughter gave him a reason to keep fighting through the fog of depression.

Eventually he found a psychiatrist who prescribed a tricyclic antidepressant, and within three weeks his mental state started to lift. “The metaphorical fog that had so enshrouded me that I couldn’t even see the sun—that cloud was a little less dense, and I saw there was a light behind it,” Fisher said. Within nine months he felt reborn, despite some significant side effects from the medication, including soaring blood pressure. He later switched to Prozac and has continuously monitored and tweaked his specific drug regimen ever since.

His experience convinced him that the drugs worked. But Fisher was surprised to discover that neuroscientists understand little about the precise mechanisms behind how they work. That aroused his curiosity, and given his expertise in quantum mechanics, he found himself pondering the possibility of quantum processing in the brain. Five years ago he threw himself into learning more about the subject, drawing on his own experience with antidepressants as a starting point.

Since nearly all psychiatric medications are complicated molecules, he focused on one of the simplest, lithium, which is just one atom—a spherical cow, so to speak, that would be an easier model to study than Prozac, for instance. The analogy is particularly appropriate because a lithium atom is a sphere of electrons surrounding the nucleus, Fisher said. He zeroed in on the fact that the lithium available by prescription from your local pharmacy is mostly a common isotope called lithium-7. Would a different isotope, like the much rarer lithium-6, produce the same results? In theory it should, since the two isotopes are chemically identical. They differ only in the number of neutrons in the nucleus.

When Fisher searched the literature, he found that an experiment comparing the effects of lithium-6 and lithium-7 had been done. In 1986, scientists at Cornell University examined the effects of the two isotopes on the behavior of rats. Pregnant rats were separated into three groups: One group was given lithium-7, one group was given the isotope lithium-6, and the third served as the control group. Once the pups were born, the mother rats that received lithium-6 showed much stronger maternal behaviors, such as grooming, nursing and nest-building, than the rats in either the lithium-7 or control groups.

This floored Fisher. Not only should the chemistry of the two isotopes be the same, the slight difference in atomic mass largely washes out in the watery environment of the body. So what could account for the differences in behavior those researchers observed?

Fisher believes the secret might lie in the nuclear spin, which is a quantum property that affects how long each atom can remain coherent—that is, isolated from its environment. The lower the spin, the less the nucleus interacts with electric and magnetic fields, and the less quickly it decoheres.

Because lithium-7 and lithium-6 have different numbers of neutrons, they also have different spins. As a result, lithium-7 decoheres too quickly for the purposes of quantum cognition, while lithium-6 can remain entangled longer.

Fisher had found two substances, alike in all important respects save for nuclear spin, that could have very different effects on behavior. For Fisher, this was a tantalizing hint that quantum processes might indeed play a functional role in cognitive processing.

That said, going from an intriguing hypothesis to actually demonstrating that quantum processing plays a role in the brain is a daunting challenge. The brain would need some mechanism for storing quantum information in qubits for sufficiently long times. There must be a mechanism for entangling multiple qubits, and that entanglement must then have some chemically feasible means of influencing how neurons fire. There must also be some means of transporting quantum information stored in the qubits throughout the brain.

This is a tall order. Over the course of his five-year quest, Fisher has identified just one credible candidate for storing quantum information in the brain: phosphorus atoms, which are the only common biological element other than hydrogen with a spin of one-half, a low number that makes possible longer coherence times. Phosphorus can’t make a stable qubit on its own, but its coherence time can be extended further, according to Fisher, if you bind phosphorus with calcium ions to form clusters.

In 1975, Aaron Posner, a Cornell University scientist, noticed an odd clustering of calcium and phosphorus atoms in his X-rays of bone. He made drawings of the structure of those clusters: nine calcium atoms and six phosphorus atoms, later called “Posner molecules” in his honor. The clusters popped up again in the 2000s, when scientists simulating bone growth in artificial fluid noticed them floating in the fluid. Subsequent experiments found evidence of the clusters in the body. Fisher thinks that Posner molecules could serve as a natural qubit in the brain as well.

That’s the big-picture scenario, but the devil is in the details that Fisher has spent the past few years hammering out. The process starts in the cell with a chemical compound called pyrophosphate. It is made of two phosphates bonded together—each composed of a phosphorus atom surrounded by multiple oxygen atoms with zero spin. The interaction between the spins of the phosphates causes them to become entangled. They can pair up in four different ways: Three of the configurations add up to a total spin of one (a “triplet” state that is only weakly entangled), but the fourth possibility produces a zero spin, or “singlet” state of maximum entanglement, which is crucial for quantum computing.
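
For readers who want the spin bookkeeping made explicit, the four pairings described above are just the standard quantum states of two spin-1/2 nuclei. This is textbook spin addition rather than a detail specific to Fisher's paper:

```latex
% Two phosphorus nuclear spins (each spin 1/2) combine into four states:
% three symmetric "triplet" states with total spin 1, and one antisymmetric,
% maximally entangled "singlet" state with total spin 0. (Requires amsmath.)
\[
\text{triplet (spin 1):}\quad
|\!\uparrow\uparrow\rangle,\qquad
\tfrac{1}{\sqrt{2}}\bigl(|\!\uparrow\downarrow\rangle + |\!\downarrow\uparrow\rangle\bigr),\qquad
|\!\downarrow\downarrow\rangle
\]
\[
\text{singlet (spin 0):}\quad
\tfrac{1}{\sqrt{2}}\bigl(|\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle\bigr)
\]
```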

Next, enzymes break apart the entangled phosphates into two free phosphate ions. Crucially, these remain entangled even as they move apart. This process happens much more quickly, Fisher argues, with the singlet state. These ions can then combine in turn with calcium ions and oxygen atoms to become Posner molecules. Neither the calcium nor the oxygen atoms have a nuclear spin, preserving the one-half total spin crucial for lengthening coherence times. So those clusters protect the entangled pairs from outside interference so that they can maintain coherence for much longer periods of time—Fisher roughly estimates it might last for hours, days or even weeks.

In this way, the entanglement can be distributed over fairly long distances in the brain, influencing the release of neurotransmitters and the firing of synapses between neurons—spooky action at work in the brain.

Researchers who work in quantum biology are cautiously intrigued by Fisher’s proposal. Alexandra Olaya-Castro, a physicist at University College London who has worked on quantum photosynthesis, calls it “a well-thought hypothesis. It doesn’t give answers, it opens questions that might then lead to how we could test particular steps in the hypothesis.”

The University of Oxford chemist Peter Hore, who investigates whether migratory birds’ navigational systems make use of quantum effects, concurs. “Here’s a theoretical physicist who is proposing specific molecules, specific mechanics, all the way through to how this could affect brain activity,” he said. “That opens up the possibility of experimental testing.”

Experimental testing is precisely what Fisher is now trying to do. He just spent a sabbatical at Stanford University working with researchers there to replicate the 1986 study with pregnant rats. He acknowledged the preliminary results were disappointing, in that the data didn’t provide much information, but thinks if it’s repeated with a protocol closer to the original 1986 experiment, the results might be more conclusive.

Fisher has applied for funding to conduct further in-depth quantum chemistry experiments. He has cobbled together a small group of scientists from various disciplines at UCSB and the University of California, San Francisco, as collaborators. First and foremost, he would like to investigate whether calcium phosphate really does form stable Posner molecules, and whether the phosphorus nuclear spins of these molecules can be entangled for sufficiently long periods of time.

Even Hore and Olaya-Castro are skeptical of the latter, particularly Fisher’s rough estimate that the coherence could last a day or more. “I think it’s very unlikely, to be honest,” Olaya-Castro said. “The longest time scale relevant for the biochemical activity that’s happening here is the scale of seconds, and that’s too long.” (Neurons can store information for microseconds.) Hore calls the prospect “remote,” pegging the limit at one second at best. “That doesn’t invalidate the whole idea, but I think he would need a different molecule to get long coherence times,” he said. “I don’t think the Posner molecule is it. But I’m looking forward to hearing how it goes.”

Others see no need to invoke quantum processing to explain brain function. “The evidence is building up that we can explain everything interesting about the mind in terms of interactions of neurons,” said Paul Thagard, a neurophilosopher at the University of Waterloo in Ontario, Canada, to New Scientist. (Thagard declined our request to comment further.)

Plenty of other aspects of Fisher’s hypothesis also require deeper examination, and he hopes to be able to conduct the experiments to do so. Is the Posner molecule’s structure symmetrical? And how isolated are the nuclear spins?

Most important, what if all those experiments ultimately prove his hypothesis wrong? It might be time to give up on the notion of quantum cognition altogether. “I believe that if phosphorus nuclear spin is not being used for quantum processing, then quantum mechanics is not operative in long time scales in cognition,” Fisher said. “Ruling that out is important scientifically. It would be good for science to know.”

 

Atheists must communicate better

Atheism is so often considered in the negative: as a lack of faith, or a disbelief in god; as an essential deprivation. Atheism is seen as being destitute of meaning, value, purpose; unfertile ground for growing the feelings of belonging needed to overcome the alienation that dogs modern life. In more extreme critiques, atheism is considered to be another name for nihilism; a fundamental negation of existence, a noxious blight on creation itself.

Yet atheists – rather than flippantly dismissing the insights of theologians – should take them seriously indeed. Humans, by dint of being human, are confronted with baffling questions about meaning, belonging, direction, our connection to other humans and the fate of our species as a whole. The human impulse is to seek answers, and to date, atheism has been unsatisfactory in its response.

The shackles of humanism

Atheist values are typically defined as humanistic. If we look to the values of the British Humanist Association, we see that it promotes naturalism, rational debate, and the pre-eminence of evidence, cooperation, progress and individual dignity. These are noble aspirations, but they are ultimately brittle when tackling the visceral and existential problems confronting humanity in this period of history.

When one considers the destruction that advanced capitalism visits on communities – from environmental catastrophes to war and genocide – then the atheist is the last person one thinks of calling for solace, or for a meaningful ethical and political alternative.

In the brutal economic reality of a neo-liberal, market-oriented world, these concerns are rarely given due consideration when debating the questions surrounding the existence or non-existence of god. The persistent and unthinking atheist habit is to ground all that is important on individual freedom, individual assertions of non-belief and vacant appeals to scientific evidence. But these appeals remain weak when confronting financial crises, gender inequality, diminished public health and services, food banks, and economic deprivation.

Atheism, suffering and solidarity

The writings of atheist poster boys Sam Harris, Christopher Hitchens, Richard Dawkins and Daniel Dennett do not offer solace in the face of the existential and political realities of our world. In some cases, they can make them worse. Calls for reason and scientific inquiry do not offer any coherent sense of solidarity to those who suffer. The humanist might argue the world would be a far more progressive place if scientific values guided our governments. But the reality is that humanism, together with its ethical correlate of individual dignity, remains ineffectual when it comes to offering a galvanising purpose, or inspiring a meaningful sense of belonging.

The most pressing concerns facing humans are philosophical, and sometimes even metaphysical. Humans have genuine fears that life is excessively cheap, a sense that the collective good is waning, that political action is equivalent to apathy and cynicism, and that any solution to any political problem is the ubiquitous idea of the entrepreneurial human.

This is why atheism, if it is to be relevant, must shed its humanism. The future vitality and relevance of atheism depends on its ability to broaden its focus away from the validity of god’s existence and narrow concerns over individual freedom. Instead, it must turn to address questions about economic causality, belonging and alienation, poverty, collective action, geo-politics, the social causes of environmental problems, class and gender inequality, and human suffering.

Obviously, the best person to consult on the rapidity of climate change is the scientist. But these kinds of appeals to science as a way of understanding the world around us must be supplemented by the core philosophical considerations of humans existing in the world, who grapple daily with the enormity of undeniable problems. Atheism needs to renew itself if it is to be considered relevant for the new century.

Atheist alternatives

But this is not to say that atheism must embrace an insipid, watered-down spiritualism. Instead, we can look to a different breed of atheism, found in the work of continental, anti-humanist philosophers. For example, we can turn to Nietzsche to understand the resentments generated by human suffering. Meanwhile, the Marxist tradition offers us the means to understand the material conditions of unsustainable capitalism. Existentialists such as Jean-Paul Sartre and Albert Camus allow us to comprehend our shared mortality, and the humour and tragedy of life in a godless universe.

There is a whole other philosophical vocabulary for atheism to explore. Both Nietzsche and Sartre observe a different atheism, one embedded in the context of genuine questions of cruelty, economic alienation, anxiety and mortality.

Atheism needs to be attentive to what it means to live with the consequences of violence, senselessness and suffering. The trouble with atheism in its more conventional guises is a nerdish fetishism for all things that work: what is accurate, the instrumental and the efficient. The trouble is, many aspects of our world are not working. Because of this, the atheist is in danger of being perceived as deluded and aloof from the violent mess of the real. Atheism, if it is to be vital, needs to reconnect itself with the more disturbing, darker aspects of the human condition.

Fahrenheit 451: an evergreen dystopian science fiction novel

There’s a reason dystopian science fiction is evergreen—no matter how much time goes by, people will always regard the future with suspicion. The common wisdom is that the past was pretty good, the present is barely tolerable, but the future will be all Terminator-style robots and Idiocracy slides into chaos.

Every few years political cycles cause an uptick in attention being paid to classic dystopias; the 2016 Presidential election pushed George Orwell’s classic 1984 back onto the bestseller lists, and made Hulu’s adaptation of The Handmaid’s Tale a depressingly appropriate viewing event.

The trend continues; recently, HBO announced a film adaptation of Ray Bradbury’s classic 1953 science fiction novel Fahrenheit 451. If it seems surprising that a book published more than six decades ago might still be terrifying for modern audiences, you probably just haven’t read the novel recently. Fahrenheit 451 is one of those rare sci-fi novels that ages wonderfully—and remains just as terrifying today as it was in the middle of the 20th century, for a variety of reasons.

MORE THAN BOOKS

If you’ve been alive for more than a few years, odds are you know the basic logline of Fahrenheit 451: In the future, houses are largely fireproof and firemen have been re-purposed as enforcers of laws that prohibit the ownership and reading of books; they burn the homes and possessions (and books, natch) of anyone caught with contraband literature. The main character, Montag, is a fireman who begins to look at the illiterate, entertainment-obsessed, and shallow society he lives in with suspicion, and begins stealing books from the homes he burns.

This is often boiled down to a slim metaphor on book-burning—which is a thing that still happens—or a slightly more subtle hot-take on censorship, which by itself makes the book evergreen. After all, people are still fighting to have books banned from schools for a variety of reasons, and even Fahrenheit 451 was bowdlerized by its publisher for decades, with a “school version” in circulation that removed the profanity and changed several concepts to less alarming forms (Bradbury discovered this practice and made such a stink the publisher re-issued the original in the 1980s).

But the key to appreciating the terrifying nature of the book is that it isn’t just about books. Focusing on the books aspect allows people to dismiss the story as a book nerd’s nightmare, when the reality is that what Bradbury was really writing about is the effect he saw mass media like television and film (including forms he couldn’t have predicted) would have on the populace: shortening attention spans, training us to seek constant thrills and instant gratification—resulting in a populace that lost not just its interest in seeking the truth, but its ability to do so.

FAKE NEWS

In this new age of “fake news” and Internet conspiracy, Fahrenheit 451 is more chilling than ever because what we’re seeing is possibly Bradbury’s terrifying vision of the future playing out—just more slowly than he imagined.

In the novel, Bradbury has the main antagonist, Captain Beatty, explain the sequence of events: Television and sports shortened attention spans, and books began to be abridged and truncated in order to accommodate those shorter attention spans. At the same time, small groups of people complained about language and concepts in books that were now offensive, and the firemen were assigned to destroy books in order to protect people from concepts they would be troubled by.

Things are certainly nowhere near that bad right now—and yet, the seeds are clearly there. Attention spans are shorter. Abridged and bowdlerized versions of novels do exist. Film and television editing has become incredibly fast-paced, and video games have arguably had an effect on plot and pacing: many of us need stories to be constantly exciting and thrilling in order to keep our attention, while slower, more thoughtful stories seem boring.

THE WHOLE POINT

And that’s the reason Fahrenheit 451 is terrifying, and will remain terrifying for the foreseeable future despite its age: Fundamentally, the story is about a society that voluntarily and even eagerly abets its own destruction. When Montag tries to confront his wife and friends with thoughtful discussion, when he tries to turn off the TV programs and make them think, they become angry and confused, and Montag realizes that they are beyond help—they don’t want to think and understand.

They prefer to live in a bubble. Book-burning began when people chose not to be challenged by thoughts they didn’t find comforting, thoughts that challenged their preconceptions.

We can see those bubbles everywhere around us today, and we all know people who only get their information from limited sources that largely confirm what they already think. Attempts to ban or censor books still get robust challenges and resistance, but on social media you can witness people’s hostile reactions to stories they don’t like, you can see how people create narrow “silos” of information to protect themselves from anything scary or unsettling, how people are often even proud of how little they read and how little they know beyond their own experience.

Which means that the seeds of Fahrenheit 451 are already here. That doesn’t mean it will come to pass, of course—but that’s why it’s a frightening book. It goes far beyond the gonzo concept of firemen burning books to destroy knowledge—it’s a succinct and frighteningly accurate analysis of precisely how our society could collapse without a single shot being fired, and a dark mirror of our modern age where unchallenging entertainment is available to us at all times, on devices we carry with us at all times, ready and waiting to drown out any input we don’t want to hear.

HBO’s adaptation of Fahrenheit 451 doesn’t have an air date yet, but it’s still the perfect time to re-introduce yourself to the novel—or to read it for the first time. Because it’s always a perfect time to read this book, which is one of the most frightening things you could possibly say.

Doctor vs. crowdsourced AI diagnosis app

Shantanu Nundy recognized the symptoms of rheumatoid arthritis when his 31-year-old patient suffering from crippling hand pain checked into Mary’s Center in Washington, D.C. Instead of immediately starting treatment, though, Nundy decided first to double-check his diagnosis using a smartphone app that helps with difficult medical cases by soliciting advice from doctors worldwide. Within a day, Nundy’s hunch was confirmed. The app had used artificial intelligence (AI) to analyze and filter advice from several medical specialists into an overall ranking of the most likely diagnoses. Created by the Human Diagnosis Project (Human Dx)—an organization that Nundy directs—the app is one of the latest examples of growing interest in human–AI collaboration to improve health care.

Human Dx advocates the use of machine learning—a popular AI technique that automatically learns from classifying patterns in data—to crowdsource and build on the best medical knowledge from thousands of physicians across 70 countries. Physicians at several major medical research centers have shown early interest in the app. Human Dx on Thursday announced a new partnership with top medical profession organizations including the American Medical Association and the Association of American Medical Colleges to promote and scale up Human Dx’s system. The goal is to provide timely and affordable specialist advice to general practitioners serving millions of people worldwide, in particular so-called “safety net” hospitals and clinics throughout the U.S. that offer access to care regardless of a patient’s ability to pay.

“We need to find solutions that scale the capacity of existing doctors to serve more patients at the same or cheaper cost,” says Jay Komarneni, founder and chair of Human Dx. Roughly 30 million uninsured Americans rely on safety net facilities, which generally have limited or no access to medical specialists. Those patients often face the stark choice of either paying out of pocket for an expensive in-person consultation or waiting for months to be seen by the few specialists working at public hospitals, which receive government funding to help pay for patient care, Komarneni says. Meanwhile, studies have shown that between 25 percent and 30 percent of such expensive specialist visits could be conducted by online consultations between physicians while sparing patients the additional costs or long wait times.

Komarneni envisions “augmenting or extending physician capacity with AI” to close this “specialist gap.” Within five years Human Dx aims to become available to all 1,300 safety net community health centers and free clinics in the U.S. The same remote consultation services could also be made available to millions of people around the world who lack access to medical specialists, Komarneni says.

HOW IT WORKS

When a physician needs help diagnosing or treating a patient, they open the Human Dx smartphone app or visit the project’s Web page and type in their clinical question as well as their working diagnosis. The physician can also upload images and test results related to the case and add details such as any medication the patient takes regularly. The physician then requests help, either from specific colleagues or the network of doctors who have joined the Human Dx community. Over the next day or so Human Dx’s AI program aggregates all of the responses into a single report. It is the new digital equivalent of a “curbside consult” where a physician might ask a friend or colleague for quick input on a medical case without setting up a formal, expensive consultation, says Ateev Mehrotra, an associate professor of health care policy and medicine at Harvard Medical School and a physician at Beth Israel Deaconess Medical Center. “It makes intuitive sense that [crowdsourced advice] would be better advice,” he says, “but how much better is an open scientific question.” Still, he adds, “I think it’s also important to acknowledge that physician diagnostic errors are fairly common.” One of Mehrotra’s Harvard colleagues has been studying how the AI-boosted Human Dx system performs in comparison with individual medical specialists, but has yet to publish the results.
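
The article does not spell out how those responses are combined, so the sketch below is only a hypothetical illustration of the general idea: collapsing several ranked suggestions into one ordered list. The Borda-style weighting and the example responses are assumptions, not Human Dx's published method.

```python
# Illustrative sketch: combine ranked diagnosis lists from several physicians
# into one overall ranking. The scoring rule is an assumption, not Human Dx's
# actual (unpublished) aggregation algorithm.
from collections import defaultdict

def aggregate_diagnoses(responses):
    """responses: list of ranked diagnosis lists, one per responding physician."""
    scores = defaultdict(float)
    for ranking in responses:
        for position, diagnosis in enumerate(ranking):
            # A simple Borda-style rule: earlier positions earn more weight
            scores[diagnosis.lower()] += 1.0 / (position + 1)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical responses for a hand-pain case like the one described above
responses = [
    ["rheumatoid arthritis", "psoriatic arthritis", "gout"],
    ["rheumatoid arthritis", "lupus"],
    ["psoriatic arthritis", "rheumatoid arthritis"],
]
for diagnosis, score in aggregate_diagnoses(responses):
    print(f"{score:.2f}  {diagnosis}")
```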

Mehrotra’s cautionary note comes from research that he and Nundy published last year in JAMA Internal Medicine. That study used the Human Dx service as a neutral platform to compare the diagnostic accuracy of human physicians with third-party “symptom checker” Web sites and apps used by patients for self-diagnosis. In this case, the humans handily outperformed the symptom checkers’ computer algorithms. But even physicians provided incorrect diagnoses about 15 percent of the time, which is comparable with past estimates of physician diagnostic error.

Human Dx could eventually help improve the medical education and training of human physicians, says Sanjay Desai, a physician and director of the Osler Medical Training Program at Johns Hopkins University. As a first step in checking the service’s capabilities, he and his colleagues ran a study where the preliminary results showed the app could tell the difference between the diagnostic abilities of medical residents and fully trained physicians. Desai wants to see the service become a system that could track the clinical performance of individual physicians and provide targeted recommendations for improving specific skills. Such objective assessments could be an improvement over the current method of human physicians qualitatively judging their less experienced colleagues. The open question, Desai says, is whether the “algorithms can be created to provide finer insights into an [individual] doctor’s strengths and weaknesses in clinical reasoning.”

AI-ASSISTED HEALTH CARE

Human Dx is one of many AI systems being tested in health care. The IBM Watson Health unit is perhaps the most prominent, with the company for the past several years claiming that its AI is assisting major medical centers and hospitals in tasks such as genetically sequencing brain tumors and matching cancer patients to clinical trials. Studies have shown AI can help predict which patients will suffer from heart attacks or strokes in 10 years or even forecast which will die within five. Tech giants such as Google have joined start-ups in developing AI that can diagnose cancer from medical images. Still, AI in medicine is in its early days and its true value remains to be seen. Watson appears to have been a success at Memorial Sloan Kettering Cancer Center, yet it floundered at The University of Texas M. D. Anderson Cancer Center, although it is unclear whether the problems resulted from the technology or its implementation and management.

The Human Dx Project also faces questions in achieving widespread adoption, according to Mehrotra and Desai. One prominent challenge involves getting enough physicians to volunteer their time and free labor to meet the potential rise in demand for remote consultations. Another possible issue is how Human Dx’s AI quality control will address users who consistently deliver wildly incorrect diagnoses. The service will also require a sizable user base of medical specialists to help solve those trickier cases where general physicians may be at a loss.

In any case, the Human Dx leaders and the physicians helping to validate the platform’s usefulness seem to agree that AI alone will not take over medical care in the near future. Instead, Human Dx seeks to harness both machine learning and the crowdsourced wisdom of human physicians to make the most of limited medical resources, even as the demands for medical care continue to rise. “The complexity of practicing medicine in real life will require both humans and machines to solve problems,” Komarneni says, “as opposed to pure machine learning.”

Why do believers deem atheists fundamentally untrustworthy?

Skepticism about the existence of God is on the rise, and this might, quite literally, pose an existential threat for religious believers.

It’s no secret that believers generally harbor extraordinarily negative attitudes toward atheists. Indeed, recent polling data show that most Americans view atheists as “threatening,” unfit to hold public office and unsuitable to marry into their families.

But what are the psychological roots of antipathy toward atheists?

Evolutionary psychologists have argued that atheists have historically been denigrated because God serves as the ultimate source of social power and influence: God rewards appropriate behaviors and punishes inappropriate ones.

The thinking has gone, then, that believers deem atheists fundamentally untrustworthy because they do not accept, affirm and adhere to divinely ordained moral imperatives (i.e., “God’s word”). Research has backed up the deep distrust believers feel toward atheists. For example, in one study, Canadian undergraduates, who are typically less religious than their US counterparts, rated atheists as more untrustworthy than Muslims – and just as untrustworthy as rapists!

Still, it hasn’t been clear why the leeriness of atheists is so profound. We decided to find out, and through two separate studies, discovered that believers’ overwhelming scorn of atheists may come from a surprising source: fear of death.

According to the terror management theory (TMT), human beings are unique in that we are self-aware and can anticipate the future. For the most part, these are highly beneficial cognitive adaptations. They allow us to formulate plans and foresee the consequences of our actions. But they also make us realize that death is inevitable and unpredictable.

These unwelcome thoughts give rise to a potentially paralyzing terror: the fear of death. This fear, then, is “managed” by embracing cultural worldviews – beliefs about reality that we share with others – that provide us with a sense of comfort. It could mean becoming involved in religions that espouse spiritual immortality, or strongly valuing one’s national identity.

This process works the other way around, too: when confronted with threats to our cherished worldview beliefs, our protective “terror management” shield drops and our apprehension about death resurfaces.

We then cling to those beliefs more tightly, and respond more negatively to those who threaten us. For example, research shows that in the wake of the September 11 terrorist attacks, Islamic symbols increased thoughts of death in non-Muslim Americans. Likewise, concern for death increased hostility toward Islam.

So how do existential concerns about death relate to atheism?

Past research has shown that hostility toward atheists is partly driven by the fact that many perceive atheists as a threat to morals and values.

So we reasoned that if atheists threaten values, then they also likely threaten worldview beliefs.

We then hypothesized that atheists, simply by existing, would likely elicit intimations of mortality – which, in turn, would promote increased negativity toward atheists.

We tested this idea in two different experiments. In the first, we recruited 236 students from the College of Staten Island, CUNY. We excluded the few participants who reported as atheist or agnostic, and we asked half of the remaining participants to answer two questions: “What do you think will happen to you as you physically die?” and “What are the emotions that the thought of death causes for you?” The other half responded to similar questions about being in extreme pain.

After thinking about either death or pain, half of the participants were asked to provide their attitudes toward atheists, while the other half responded with their attitudes toward Quakers – a nonthreatening religious group. Participants reported their overall warmth, their levels of trust, and behavioral avoidance by indicating how they felt about these people “marrying into their family” or “working in their office.”

As expected, participants were more negative toward atheists overall than toward Quakers. More importantly, however, we found that thinking about death increased negativity toward atheists – but not toward Quakers.

Those who had pondered their own death showed less warmth, greater behavioral prejudice (also known as social distancing) and greater distrust toward atheists, while thoughts of death did not affect reactions toward Quakers, a fellow theistic group.

In the second experiment, we directly measured whether simply thinking about atheism would increase unconscious thoughts of death. We asked 174 Staten Island students (excluding atheists and agnostics) to describe their emotions toward one of three topics: pain, death or atheism. We then presented them with a word completion task designed to capture thoughts of death. For example, the word “SK – – L” could be completed as either “skill” or “skull” and “COFF – -” could be “coffee” or “coffin.”
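
As a concrete, purely hypothetical illustration of how completions in such a task could be scored, the short sketch below counts the fraction of death-related completions. The word lists and the proportion-based score are assumptions, not the study's actual materials or measure.

```python
# Toy scoring of a death-thought word-completion task. The target words and
# the proportion score are illustrative assumptions, not the authors' method.
DEATH_COMPLETIONS = {"skull", "coffin", "grave", "dead", "buried"}

def death_thought_score(completions):
    """Return the fraction of a participant's completions that are death-related."""
    completions = [word.strip().lower() for word in completions]
    return sum(word in DEATH_COMPLETIONS for word in completions) / len(completions)

# A participant who completed the fragments as "skull", "coffee", "grape", "dead"
print(death_thought_score(["skull", "coffee", "grape", "dead"]))  # 0.5
```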

Not surprisingly, those who pondered their own mortality indicated greater thoughts of death than those who thought about being in pain. However, thinking about atheism also increased thoughts of death – to the same extent as thinking about death itself.

These findings suggest that there is something deeper to the overwhelming negativity people hold toward atheists. Yes, on the conscious level, they’re deemed untrustworthy because in the eyes of believers, they have no God or values.

But at an unconscious level, it seems that atheists threaten our beliefs about the nature of existence itself. They serve as a constant reminder of death by denying the presence of a supernatural power who regulates human affairs and monitors the gateway to immortality.

Of course, atheists are no less moral or trustworthy than their theistic counterparts. In light of these findings, we hope that perhaps believers might temper their contempt for atheists.

An artificial neural network for relational reasoning

How many parks are near the new home you’re thinking of buying? What’s the best dinner-wine pairing at a restaurant? These everyday questions require relational reasoning, an important component of higher thought that has been difficult for artificial intelligence (AI) to master. Now, researchers at Google’s DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test.

Humans are generally pretty good at relational reasoning, a kind of thinking that uses logic to connect and compare places, sequences, and other entities. But the two main types of AI—statistical and symbolic—have been slow to develop similar capacities. Statistical AI, or machine learning, is great at pattern recognition, but not at using logic. And symbolic AI can reason about relationships using predetermined rules, but it’s not great at learning on the fly.

The new study proposes a way to bridge the gap: an artificial neural network for relational reasoning. Similar to the way neurons are connected in the brain, neural nets stitch together tiny programs that collaboratively find patterns in data. They can have specialized architectures for processing images, parsing language, or even learning games. In this case, the new “relation network” is wired to compare every pair of objects in a scenario individually. “We’re explicitly forcing the network to discover the relationships that exist between the objects,” says Timothy Lillicrap, a computer scientist at DeepMind in London who co-authored the paper.
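
As a rough illustration of that wiring, here is a minimal sketch of a relation network in the spirit of the DeepMind paper: a small network g scores every ordered pair of objects (conditioned on the question embedding), the scores are summed over all pairs, and a second network f maps the sum to an answer. The layer sizes, feature dimensions and PyTorch framing below are placeholder assumptions, not the published architecture.

```python
# Minimal relation-network sketch (illustrative dimensions, not the paper's).
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim=8, q_dim=16, hidden=64, n_answers=10):
        super().__init__()
        # g: scores one ordered pair of objects, conditioned on the question
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # f: maps the summed pairwise relations to answer logits
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_answers),
        )

    def forward(self, objects, question):
        # objects: (batch, n_objects, obj_dim); question: (batch, q_dim)
        b, n, d = objects.shape
        o_i = objects.unsqueeze(2).expand(b, n, n, d)   # object i repeated along dim 2
        o_j = objects.unsqueeze(1).expand(b, n, n, d)   # object j repeated along dim 1
        q = question.view(b, 1, 1, -1).expand(b, n, n, question.shape[-1])
        pairs = torch.cat([o_i, o_j, q], dim=-1)        # every (o_i, o_j, question) triple
        relations = self.g(pairs).sum(dim=(1, 2))       # sum over all object pairs
        return self.f(relations)                        # answer logits

# Toy usage: a batch of 4 scenes, each described by 6 object feature vectors
rn = RelationNetwork()
logits = rn(torch.randn(4, 6, 8), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 10])
```

Because the pairwise comparison is built into the module itself, it can be bolted onto separate networks that extract objects from an image and embed a question, which is how the image task described next was handled.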

He and his team challenged their relation network with several tasks. The first was to answer questions about relationships between objects in a single image, such as cubes, balls, and cylinders. For example: “There is an object in front of the blue thing; does it have the same shape as the tiny cyan thing that is to the right of the gray metal ball?” For this task, the relation network was combined with two other types of neural nets: one for recognizing objects in the image, and one for interpreting the question. Over many images and questions, other machine-learning algorithms were right 42% to 77% of the time. Humans scored a respectable 92%. The new relation network combo was correct 96% of the time, a superhuman score, the researchers report in a paper posted last week on the preprint server arXiv.

The DeepMind team also tried its neural net on a language-based task, in which it received sets of statements such as, “Sandra picked up the football” and “Sandra went to the office.” These were followed by questions like: “Where is the football?” (the office). It performed about as well as its competing AI algorithms on most types of questions, but it really shined on so-called inference questions: “Lily is a swan. Lily is white. Greg is a swan. What color is Greg?” (white). On those questions, the relation network scored 98%, whereas its competitors each scored about 45%. Finally, the algorithm analyzed animations in which 10 balls bounced around, some connected by invisible springs or rods. Using the patterns of motion alone, it was able to identify more than 90% of the connections. It then used the same training to identify human forms represented by nothing more than moving dots.

“One of the strengths of their approach is that it’s conceptually quite simple,” says Kate Saenko, a computer scientist at Boston University who was not involved in the new work but has also just co-developed an algorithm that can answer complex questions about images. That simplicity—Lillicrap says most of the advance is captured in a single equation—allows it to be combined with other networks, as it was in the object comparison task. The paper calls it “a simple plug-and-play module” that allows other parts of the system to focus on what they’re good at.

“I was pretty impressed by the results,” says Justin Johnson, a computer scientist at Stanford University in Palo Alto, California, who co-developed the object comparison task, as well as an algorithm that does well on it. Saenko adds that a relation network could one day help study social networks, analyze surveillance footage, or guide autonomous cars through traffic.

To approach humanlike flexibility, though, it will have to learn to answer more challenging questions, Johnson says. Doing so might require comparing not just pairs of things, but triplets, pairs of pairs, or only some pairs in a larger set (for efficiency). “I’m interested in moving toward models that come up with their own strategy,” he says. “DeepMind is modeling a particular type of reasoning and not really going after more general relational reasoning. But it is still a superimportant step in the right direction.”

No more victimisation

At the end of this year’s Cannes Film Festival, actress Jessica Chastain – who was serving as a jury member – said that she found the portrayals of women in the festival’s films “quite disturbing.”

To many, this isn’t exactly news. The lack of women in film – in front of and behind the camera – has been at the forefront of Hollywood criticism in recent years, with scholars and writers detailing the various ways women tend to be underrepresented or cast in stereotypical roles.

University of Southern California communications professor Stacy Smith, who researches depictions of gender and race in film and TV, found that of the 5,839 characters in the 129 top-grossing films released between 2006 and 2011, fewer than 30 percent were girls or women. Meanwhile, only 50 percent of films fulfill the criteria of the Bechdel Test, which asks whether a film features at least two women who talk to each other about something other than a man.

Despite the uphill climb for women in film, it isn’t all doom and gloom. Horror is one genre where women are taking on increasingly prominent parts. Yes, screaming is still a staple feature of a scary flick. But women are assuming central roles – not as victims, but as monsters and heroes.

Bucking the trend

Each year, the Geena Davis Institute on Gender and Media publishes research showing how gender imbalances in film affect women and girls.

For example, they’ve found that positive and prominent roles for women in movies “motivate women to be more ambitious” professionally and personally. But when there is a dearth of women being depicted in positive ways, it has an opposite, negative effect.

A recent study by Google and the Geena Davis Institute examined this phenomenon across genres. The researchers developed a tool called “the GD-IQ” (Geena Davis Inclusion Quotient), which uses machine learning to recognize patterns in gender, screen time and speaking time that the casual movie viewer might overlook. The results told a familiar story: In film, men are seen and heard twice as often as women.

But there was one exception: horror films.

A horror renaissance

In a way, this makes sense. A recent Guardian article describes how women have historically been drawn to the genre. Many beloved horror films have strong female leads: “Carrie,” “The Descent” and “The Witch,” to name a few.

Horror, of course, has always been interested in women: traditionally, women and girls are the victims of crazed killers or monsters. They scream a lot.

Yet the terms have changed along with the times, and a horror renaissance seems to have been taking place over the past decade.

The genre has moved from taking pleasure in victimizing women to focusing on women as survivors and protagonists. It’s veered away from slashers and torture porn to more substantive, nuanced films that comment on social issues and possess an aesthetic vision.

Earlier this year, Jordan Peele’s “Get Out” became a major box-office smash; as it skewered racial politics, it also made a beautiful, young white woman the evil antagonist. In 2015, Robert Eggers’ historical horror film “The Witch” was a surprise hit. With a 91 percent fresh rating on Rotten Tomatoes, “The Witch” captured audiences with a historically accurate tale that included a feminist twist. Set in Puritan America, the film follows a teenage protagonist, Thomasin, who battles her parents and siblings; they assume she’s become a witch, faulting her for all the misfortunes that befall the family. Of course she’s simply a teenage girl – a dangerous creature, the film seems to be saying, in a culture controlled by men.

“Get Out” and “The Witch” join a host of other horror films with women as central characters: “Stoker,” “Under the Skin,” “Rec,” “The Conjuring,” “Ginger Snaps,” “American Mary,” “Jennifer’s Body” and “You’re Next.”

Changing the narrative

For decades, sexually active women in horror movies tended to die first, as punishment for sexual transgression. We see this in “Halloween,” “Friday the 13th,” “The Texas Chain Saw Massacre” and “A Nightmare on Elm Street.”

“It Follows” (2015) upends this narrative. Maika Monroe stars as Jay, a young woman who battles an unseen and unknown predator after having sex with a date. But “It Follows” isn’t interested in punishing Jay – or any other female character – for having sex. One critic makes an intriguing case that “It Follows” actually critiques rape culture by highlighting how rape survivors are often treated by the wider culture, their friends and their families. This creepy and critically acclaimed horror film allows Jay to be the girl we all wish we could be: She investigates, fights back against the predator and ultimately prevails.

Even old and seemingly worn-out franchises are being rebooted with female leads. The original “Amityville Horror” (1979) capitalized on the true story of a house in Amityville, New York. The tale of a disintegrating nuclear family terrorized by a haunted house spawned 12 sequels, prequels and continuations.

But this summer, audiences may get to see yet another addition to the Amityville oeuvre: “Amityville: The Awakening” stars Jennifer Jason Leigh and Bella Thorne as a single mother and her daughter who must endure life in the infamous house. The poster for the movie features an image of Thorne superimposed over the house, suggesting that she is more important (and more powerful) than the terrifying home.

As the role of women in other realms of our society continues to grow, it’s only fitting that they do the same in horror movies. With the massive box-office success of “Wonder Woman,” the hope is that other genres will soon enough take horror’s lead and embrace women as protagonists, heroes and maybe even the occasional witch, too.