In this issue:
Why Are India and China Fighting?
Seattle’s Summer of Love
White Saviors Need to Leave the Room
Reassessing the Guidance on Face Masks
From South American Anthropology to Gender-Crit Cancel Culture: My Strange Feminist Journey
The Ever-Shrinking Transistor and the Invention of Google

EXPLAINER

Why Are India and China Fighting?

Nuclear powers New Delhi and Beijing engage in a skirmish marking the first combat deaths along their border in more than four decades.

Indian protesters burn a poster of Chinese President Xi Jinping along with Chinese items in response to the killing of Indian soldiers by Chinese troops, in Ahmedabad on June 16, 2020. SAM PANTHAKY/AFP VIA GETTY IMAGES

In a major setback to recent measures to de-escalate tensions, India and China engaged in a deadly skirmish along their border on Monday night. While details of the clash are still emerging, the incident marks the first combat deaths in the area since 1975.

An Indian Army statement acknowledged the deaths of an officer and two soldiers, and subsequent reports attributed to officials said that 17 other soldiers had succumbed to their injuries—reports that Foreign Policy has not independently verified. Both sides confirm that Chinese soldiers were also killed, but the number is unknown. (China is traditionally reluctant to report casualty figures, and it erases some clashes from official history.) Critically, neither side is reported so far to have fired actual weapons: Chinese and Indian patrols in the area routinely go unarmed in order to avoid escalation, and the deaths may have resulted from fistfights and possibly the use of rocks and iron rods. It’s also possible, given the extreme heights involved—the fighting took place in Ladakh, literally “the land of high passes”—that some of those killed died from falls.

What is the origin of the conflict?

Despite their early friendship in the 1950s, relations between India and China rapidly degenerated over the unresolved state of their Himalayan border. The border lines, largely set by British surveyors, are unclear and heavily disputed—as was the status of Himalayan kingdoms such as Tibet, Sikkim, Bhutan, and Nepal. That led to a short war in 1962, won by China. China also backs Pakistan in its own disputes with India, and China’s Belt and Road Initiative has stirred Indian fears, especially the so-called China-Pakistan Economic Corridor, a collection of large infrastructure projects.

The current border is formally accepted by neither side and is simply referred to as the Line of Actual Control. In 2017, an attempt by Chinese engineers to build a new road through disputed territory on the Bhutan-India-China border led to a 73-day standoff on the Doklam Plateau, including fistfights between Chinese and Indian soldiers. Following Doklam, both countries built new military infrastructure along the border. India, for example, constructed roads and bridges to improve its connectivity to the Line of Actual Control, dramatically improving its ability to bring in emergency reinforcements in the event of a skirmish. In early May this year, a huge fistfight along the border led both sides to reinforce local units, and there have been numerous light skirmishes—with no deaths—since then. Each side has repeatedly accused the other of deliberately crossing the border. Until Monday’s battle, however, diplomacy seemed to be slowly de-escalating the crisis: The two sides had opened high-level diplomatic communications and appeared ready to find convenient off-ramps that would allow each to save face. And both countries’ foreign ministers were scheduled for a virtual meeting next week.

[For more analysis like this direct to your inbox, sign up for FP’s weekly newsletters South Asia Brief and China Brief.]

Both countries also have a highly jingoistic media—state-run in China’s case, and mostly private in India’s—that can escalate conflicts and drum up a public mood for a fight. Press jingoism, however, can also open strange opportunities for de-escalation: After an aerial dogfight between India and Pakistan in 2019, media on both sides claimed victory of sorts for their respective countries, allowing their leaders to move on.

Compounding the problems is the physically shifting nature of the border, which represents the world’s longest unmarked boundary line; snowfalls, rockslides, and melting can make it literally impossible to say just where the line is, especially as climate change wreaks havoc in the mountains. It’s quite possible for two patrols to both be convinced they’re on their country’s side of the border.

Has there been similar violence in the past?

There have been no deaths—or shots fired—along the border since an Indian patrol was ambushed by a Chinese one in 1975.

But China saw significant clashes with both India and the Soviet Union during the late 1960s, at the height of the Cultural Revolution. In India’s case, that culminated in a brief but bloody clash on the Sikkim-Tibet border, with hundreds of dead and injured on each side. On the Soviet border, fighting along the Ussuri River saw similar numbers of dead, but tensions escalated far higher than with India, leading to fears of a full-blown war and a possible nuclear exchange that were only alleviated by the highest-level diplomacy. In part, those clashes were driven by political needs on the Chinese side; officers and soldiers alike felt the need to demonstrate their Maoist enthusiasm, leading to such actions as swimming across the river waving Mao Zedong’s Little Red Book.

What could happen next?

India has announced that “both sides” are trying to de-escalate the situation, but it has accused China of deliberately violating the border and reneging on agreements made in recent talks between the two sides. China’s response was more demanding, accusing India of “deliberately initiating physical attacks” in a territory—the Galwan Valley in Ladakh, which is claimed by both sides—that has “always been ours.” Army officers are meeting to try to resolve the situation.

While the 2017 Doklam crisis was successfully defused—and was followed by a summit between Chinese President Xi Jinping and Indian Prime Minister Narendra Modi in Wuhan, China—recent events could easily spiral out of control. If there are indeed a high number of deaths from Monday’s skirmish, pressure to react and exact revenge may build. The coronavirus has produced heightened political uncertainty in China, leading to a newly aggressive form of “Wolf Warrior” diplomacy—named after a Rambo-esque film that was a blockbuster in China but a flop elsewhere. Chinese officials are under considerable pressure to be performatively nationalist; moderation and restraint are becoming increasingly dangerous for careers.

On the Indian side, there is increasing nervousness about how Beijing has encircled the subcontinent. China counts Pakistan as a key ally; it has growing stakes in Sri Lanka and Nepal, two countries that have drifted away from India in recent years; and it has made huge infrastructure investments in Bangladesh. Meanwhile, much has changed since the last time India and China had deadly clashes in the 1960s and ’70s, when the two countries had similarly sized economies; today, China’s GDP is five times that of India, and it spends four times as much on defense.

There will likely be a business impact following the latest clash. Indians have already mobilized to boycott Chinese goods: an app called Remove China Apps briefly topped downloads on India’s Google Play Store before Google stepped in and banned it.

Heightened tensions also put Indians in China at risk. Although numbers are somewhat reduced due to the coronavirus crisis, there is a substantial Indian business and student community in the country. During the Doklam crisis, Beijing police lightly monitored Indians in the city, even making home visits.

An escalated crisis doesn’t necessarily mean a full-blown war.

It could mean months of skirmishes and angry exchanges along the border, likely with more accidental deaths. But any one of those could explode into a real exchange of fire between the two militaries. The conditions in the Himalayas themselves severely limit military action; it takes up to two weeks for troops to acclimate to the altitude, logistics and provisioning are extremely limited, and air power is severely constrained. (One worrying possibility for more deaths is helicopter crashes, such as the one that killed a Nepalese minister last year.)

In the event of a serious military conflict, most analysts believe the Chinese military would have the advantage. But unlike China, which hasn’t fought a war since its 1979 invasion of Vietnam, India sees regular fighting with Pakistan and has an arguably more experienced military force.

Is there a permanent solution?

China resolved its border squabbles with Russia and other Soviet successor states in the 1990s and 2000s through a serious diplomatic push on both sides and large exchanges of territory, and those borders have been essentially a nonissue since then. Although the area involved there was much larger, the Himalayan territorial disputes are far more sensitive and harder to resolve.

For one thing, control of the heights along the borders gives a military advantage in future conflicts. Resource issues, especially water, are critical: 1.4 billion people depend on water drawn from Himalayan-fed rivers. And unlike the largely bilateral conflicts along the northern border, multiple parties are involved: Nepal, Bhutan, China, Pakistan, and, of course, India. Add on top of that China’s increasing power and nationalism, matched by jingoism on the Indian side, and the prospects of a long-term solution look small.

James Palmer is a deputy editor at Foreign Policy. Twitter: @BeijingPalmer

Ravi Agrawal is the managing editor of Foreign Policy. Twitter: @RaviReports

https://foreignpolicy.com/

ACTIVISM, BLM, SPOTLIGHT

Seattle’s Summer of Love

The bluest skies you’ve ever seen in Seattle
And the hills the greenest green in Seattle
Like a beautiful child growing up free and wild
Full of hopes and full of fears
Full of laughter full of tears
Full of dreams to last the years in Seattle
In Seattle

So Perry Como sang in the late ’60s. Now it seems the days of beautiful children growing up free and wild are returning to Seattle. Like other American cities over the last three weeks, Seattle saw protests rapidly become violent clashes with police. This ugliness waxed and waned for a fortnight until police withdrew from their East Precinct Building, effectively ceding the surrounding area to the protestors. Barriers were erected around it by activists who initially christened the new territory the Capitol Hill Autonomous Zone (CHAZ), and later renamed it the Capitol Hill Occupied Protest (CHOP). As their quasi-manifesto of June 9th put it, they had “liberated Free Capitol Hill in the name of the people of Seattle.”

Now a tense and potentially dangerous stand-off has developed. What does the city administration intend to do? On June 11th, the Democratic mayor of the city, Jenny Durkan, was interviewed on CNN by a sympathetic Chris Cuomo. Cuomo began by asking if Durkan had lost control of her own city’s streets.

Durkan: We’ve got four blocks in Seattle that you just saw pictures of that is more like a block party atmosphere. It’s not an armed takeover, it’s not a military junta. We will make sure that we can restore this. But we have block parties and the like in this part of Seattle all the time… There is no threat right now to the public and we’re looking, we’re taking that very seriously, we’re meeting with businesses and residents…

Cuomo: The counter will be block parties don’t take over a municipal building, let alone a police station and destroy it, basically thumbing their nose at any sense of civic control. Do you believe that you have control of your city, and that you would be able to clear those streets? Because you haven’t.

Durkan: We do and the chief of police was in that precinct today with her command staff looking and assessing on operational plans. But we saw that it was a point of conflict night after night between the police department and protestors and we wanted to de-escalate that and what we decided was the best way to do that was to re-open the streets, and that in itself ended up with some ramifications for the precinct, to remove anything that was valuable out of that building. But we will make sure that all of Seattle is safe. We take public safety seriously… We have to acknowledge and know that we have a system that is built on systemic racism and we have to dismantle that system piece by piece.

Durkan went on to add:

During this time a number one priority every American city has is to protect the First Amendment right. Our country was born out of protest. The right to gather, the right to protest, the right to challenge government when it is wrong, is our most fundamental constitutional right. It’s a reason it’s the First Amendment. And as a mayor of this city I will do everything to protect that right and balance the public safety. I think not only can we do both, I think we have to do both.

With those words, Durkan, who is an experienced lawyer, endorsed an unusual school of American constitutional jurisprudence. The First Amendment says that Congress shall make no law that abridges “the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” There is no mention of the right to fight street battles with police, to annex and occupy city blocks, or to vandalize buildings including a police station, effectively suspending law enforcement. This is not protecting free speech, still less “balancing” the public safety. And, apropos the signs on the barricades that surround the CHAZ which declare “you are now leaving the United States,” the First Amendment also does not protect secession from the union. The last time that was attempted, things did not go too well.

Durkan did say that she would “restore” the status quo ante, but her description of the occupation as a block party rather undermined any intended resolve. Asked how long the CHAZ might last, she blithely replied, “I don’t know. We could have a summer of love.” The American citizens who reside or work there may not share their mayor’s breezy insouciance, and might be rather less willing to share the love—especially given that summer hasn’t even begun yet and doesn’t end until late September. Even if they are able to enter and leave the autonomous zone unmolested, it remains an affront to their citizenship that they do so only at the sufferance of a regime imposed without their consent.

If the occupiers dig in and have to be forced out, things could get out of hand. It need not take some rash and belligerent action from President Trump. There is a risk that residents and workers may decide to recover the public and private property taken from them. This would be a very bad idea, and hopefully they will, unlike the occupiers, heed police instructions and hold back. The best—and I hope most likely—outcome is that most of the rank-and-file occupiers will weary of the block party, there will be negotiations, and the authorities will at least pretend to capitulate to the demands or a watered-down version of them. (There have been reports that talks of some sort are already happening, although they don’t appear to be making much progress.) Alternatively, protestor fatigue will diminish resistance to the point where police can re-take the area without serious violence. In any event, this situation cannot be allowed to drag on—the Seattle police chief Carmen Best (a black woman, by the way) has already reported a significant increase in violent crimes in the occupied precinct to which her officers are unable to respond.

Although I am not in a position to prove it, I suspect that this situation has come about because of a years-long history of leniency to far-left protestors in cities like Seattle, where mayors often share the protestors’ ideology, or are duped into thinking the protestors are moderate. Public speech and assembly can sometimes be legitimately angry and rowdy, and it can be best, and indeed in conformity with the spirit of the First Amendment, for policing to avoid being too high-handed and officious. But what we see in Seattle goes well beyond that reasonable liberality. It is the difference between the occasional symbolic act of civil disobedience and the lawless rule of a Jacobin mob. The incendiary situation that now exists might well have been avoided had the city administration—like others across the country—enforced the law consistently before now. It is perhaps pertinent here to note that Best has said it was not her decision to withdraw from the precinct, adding “ultimately the city had other plans for the building and relented to severe public pressure.” She bluntly called the decision an insult to her officers and the community.

In the meantime, the occupiers have issued a series of demands (they always demand, a red flag for the authoritarian nature of their politics). These include things one can warmly support such as increased resources for public education and public health, especially for the poor. Others, however, betray childishly utopian thinking:

  • Abolition of the Seattle police force (including, just to add a sting of malice, existing police pensions) and the “attached court system.”
  • Abolition of imprisonment. (The authors are at pains to make clear that “abolition” in these demands really does mean “100 percent of funding.”)
  • Retrials for people of colour (no mention of others) currently serving a prison sentence for violent crime.
  • Replacement (presumably wholesale) of the current criminal justice system by restorative/transformative accountability programs.

Nothing here has been thought through. A measure of the activists’ recklessness is that the first and fourth demands quoted above would appear to annihilate the entirety of the criminal law, an institution that can be traced through its English origins as far back as the Norman conquest. There is no detailed analysis of the sources of police misbehaviour or of how it might be reduced, and no detailed examination of the actual results of actual policies to find out what works and what does not. It’s as if a builder were to set about erecting a house by bellowing “I demand a house,” without bothering to design a floor plan, or to work out where the walls should go, how the bricks and the beams will hold up the ceiling, where the electrical wires and sewage pipes will be laid, and so on.

I suspect that the bad actors who are inevitably attracted to leadership roles in such movements—which officially have no leaders, but of course always do—are perfectly aware of the unrealistic nature of their demands. They are not serious and are not intended to be. By confronting authorities with ultimatums which cannot possibly be met, they entrench the revolutionary posture indefinitely. At the same time, by maintaining that the demands are the only way of correcting pervasive and systemic racism, they assume an air of high and urgent idealism. This attracts the support of the young and the liberal-minded, and makes the authorities, even those most sympathetic to the cause, appear to be the intransigent representatives of a corrupt and racist establishment. By contrast, the appeal for careful analysis and policy-making can be painted, at best, as cavilling or delay, or worse.

The causes of police and prison reform are noble ones. Most of the Black Lives Matter protestors are well-meaning and decent people. They are caught up in the visceral anger felt by African Americans at the aggressive and sometimes brutal over-policing experienced particularly by people of colour in impoverished communities. But as with most political movements, the leadership is more radical—often substantially so—than the troops. At the leadership level, the ideology (and perhaps personnel) of BLM starts to blur into that of groups like Antifa, whose extremism can only set the cause of racial justice back. If their demands seem to say, in effect, that society should be torn down—well, that is indeed the stated aim of movements like these. Their goals are not reformist, they are revolutionary—they seek conflict, not peace, and they have given scant thought to what they wish to build from the rubble of what they destroy. Since they are quite open and vehement about all this, we should probably take them at their word. Summer in Seattle this year may not be so loving.

 

Andrew Gleeson is a writer who lives in Australia.

Feature image: A person walks past an inverted American flag inside the ‘Capitol Hill Organized Protest’ formerly known as the ‘Capitol Hill Autonomous Zone’ in Seattle, Washington on June 14, 2020. (Photo by Noah Riffe/Anadolu Agency via Getty Images)

TOP STORIES

White Saviors Need to Leave the Room

She called herself Kalamity, though that’s not her real name. She’s the white woman who called me a racist for noting that Indigenous children who live in communities where parents own their homes tend to have a higher standard of living and care than those who live in reserve communities where property is owned communally. This was the day after she schooled me, a Desi, together with a Kenyan woman—the only two non-white individuals in our class—on the proper use of people-of-colour nomenclature.

“I’m not sure I like that phrase,” said the Kenyan woman.

I agreed. Of all the ways to describe oneself, why would I self-define as “not white”?

“Women of colour chose it,” Kalamity informed us. By this, I learned, she meant black intersectional feminists.

This wasn’t the first or last time that Kalamity treated us like elementary-school children in catechism class. I didn’t like this feeling.

Kalamity also called me out as racist for disagreeing with her pronouncement that those Charlie Hebdo cartoons from 2015 were racist. That Kalamity could not read French and was unfamiliar with both Charlie Hebdo and the French satirical tradition made no difference to her. Kalamity was acting like a good white person because she was saying the things that good white people are supposed to say.

One might call it White Saviourism. It nourishes the idea that those who have little melanin must adopt a heroic pose in regard to those who have much. So women like me require saving, whether we consent to it or not. Melanin people must know their low place in society, since the conceit of the white saviour depends on the existence of someone in peril. Otherwise, there’s no demand for white saviours to come charging in heroically on their white horses. Why does this remind me of some surreal form of colonization?

For all the talk about “people of colour,” I’ve noticed that East and South Asians lately have been having trouble getting their woke parking stubs validated. The socioeconomic data make the narrative harder to sustain, and you hear a lot about people like me “internalizing” white supremacy. That’s the thing about “whiteness,” we’re told. It can be spread, like a disease.

As a Desi—a person of South Asian ancestry living abroad—I feel like the custard filling in a culture-war mille-feuille, with alt-right xenophobes on one side and Kalamity’s Wokus Pokus acolytes on the other. More and more, this latter group is migrating to new terminology that more explicitly sets out the intersectional status hierarchy, such as “BIPOC”—“black, Indigenous and people of colour.” The old adage among activists was all about a sense of solidarity spanning all people of colour. Now, things are more complicated. In early June, the president of the Canadian Broadcasting Corporation (CBC) was called out by anti-racist activists for not explicitly name-checking anti-black racism in her denunciation of racism more generally. So she had to publish a whole new, and more specific, denunciation of racism.

The kind of supremacy we hear about is white supremacy. But history shows that, when given the chance, everyone likes lording it over everyone else. I think of Guyana, my dad’s South American homeland, a place devastated by racial strife. Europeans in Guyana depended on slave labor until it was abolished in 1838. To run their plantations, colonialists then brought in indentured workers from India. When their descendants speak of the coolie trade, some are reminded that it wasn’t true slavery—and that it’s a form of anti-black racism to compare the two. Taken to its extreme, as it often is, this kind of denial of historical nuance can become a form of collective gaslighting.

In the aftermath of George Floyd’s killing by a Minneapolis police officer, America’s conflicts have spilled into Canada, where I live. We have our own issues north of the border. But I wonder how much of the recent social panic here is just an opportunistic outpouring of pre-existing ideological grievances.

When it comes to the treatment of Canada’s Indigenous population, “reconciliation” has been the watchword for years now. But it’s a collectively told lie, a polite euphemism for coerced public confessions. We are being instructed to validate the oppression experienced by others. Predictably, this sort of exercise becomes a competitive game, since every group has something to complain about. There is never any honest attempt to understand the real human condition we all inhabit.

Just a few years ago, the attention of Canada was focused on Attawapiskat, a tiny Cree community in northern Ontario that had fallen into crisis. Adrian Sutherland, a singer from Attawapiskat, recently told the media that the old problems are still around—undrinkable water, poverty, degraded infrastructure. All those land acknowledgements we love to be seen reciting don’t seem to have helped Attawapiskat much. June is National Indigenous History Month in Canada. But you wouldn’t know it from reading Canadian social media, which is all about Black Lives Matter, the cause du jour.

Last year, a report came out that accused the Canadian government of perpetuating an ongoing genocide against Indigenous women. A few months later, Justin Trudeau was telling the world that Canada deserves a seat on the UN Security Council. Genocide? That was so 2019. It’s all about optics and letting people know how woke you are—as with CBC journalist Piya Chattopadhyay, who instructed her Twitter followers to “diversify your friend group by race, class and gender,” like stocks in a portfolio. As always, buy low, sell high. Black Lives Matter is hot right now, so it might be a good time to invest in Central Asian, working-class, and gay.

A term we hear a lot today is “structural racism,” often called “institutional racism.” The idea here is that some kinds of racism are invisible forces embedded within organizations, laws, or even whole political systems. It’s not a crazy idea. But what the Kalamitys of the world like about it most is that, since structural racism is invisible to lay people, the masses require the priestly guidance of anti-racist experts so they know what to denounce. This week, it’s one thing. Next week, it’ll be another. Check Twitter daily for instructions.

I struggle with the cognitive dissonance that has come to define Canada—a country with a real history of colonial cruelty in regard to Indigenous people, while also becoming (by international standards) a bastion of tolerance and multiculturalism. Which one is my Canada—the one where Indigenous people can’t drink the tap water, or the one that settles tens of thousands of refugees every year? Sometimes, we’re expected to wave the flag and announce our patriotism. At other times, it becomes a thoughtcrime to speak of Canada as anything but a hive of bigotry. As a victim of domestic abuse who sometimes reads these political manias as allegories for the gaslighting I endured at home, I find it hard to divide the personal from the political. It all reminds me of Flowers in the Attic.

Race doesn’t matter: Wasn’t that the whole idea we’d been fighting for in the first place? How has this non-existent category become something that none of us can stop talking about? That’s one of the questions I was looking to answer at Kalamity’s course, which was supposed to help people like me run our own workshops. But after she branded me a racist, I never went back. It upset me, actually. At the time, and for a long time afterwards, I didn’t really understand why. Now I understand that she was gaslighting me, trying to make me believe that I was the crazy one, the racist. Who knows how many other “women of colour” have left her course feeling the same way.

We need to have a discussion about racism—including a discussion about what that word means. I don’t know what that discussion will look like. But I know it will be different from the one we’re having now, with more nuance and fewer accusations. All those white saviors are welcome to attend. But first, they’ll have to dismount from their steeds and take a chair, just like everybody else.

 

Rukhsana Sukhan tweets at @RukhsanaSukhan.

COVID-19, HEALTH, SCIENCE, TECH, STORIES

Reassessing the Guidance on Face Masks

The efficacy of face masks for limiting the spread of SARS-CoV-2 remains uncertain and hotly contested. Recommendations vary between countries, as do the reasons given. In Norway, where I live, masks are not considered necessary because very few people are infected, and efforts to contain the spread of the virus have been quite successful without mandating their use by the general public. But the debate about whether or not masks “work” is complicated and requires attention to numerous variables and contingencies. Even if we can agree that masks do help limit the spread of the virus to some degree, the conditions under which they are most effective differ enormously, and recommendations to wear masks “in public” can mislead more than they illuminate, and even cause some harm.

We now know more than we did even a few weeks ago about how the virus spreads. In a blog post inspired by a Quillette essay analysing superspreader events, immunologist Erin Bromage delineates the salient principle as “Successful Infection = Exposure to Virus x Time.” Exposure time is important because infection requires a minimum number of virus particles (the exact threshold remains unknown, but Bromage estimates 1,000). So, repeated or continued exposure for many minutes or hours progressively increases the risk. Oddly, the discussions I read online, even in forums dedicated to SARS-CoV-2, often ignore exposure duration entirely, and some experts have neglected this important variable too.
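To make the arithmetic of Bromage’s principle concrete, here is a minimal sketch in Python. The 1,000-particle threshold is his rough estimate; the per-minute exposure rates are invented placeholders chosen purely for illustration, not measured values.

```python
# Minimal sketch of Bromage's principle:
#   Successful Infection = Exposure to Virus x Time
# INFECTIOUS_DOSE follows his rough ~1,000-particle estimate. The per-minute
# exposure rates are invented placeholders, not measured values.

INFECTIOUS_DOSE = 1_000  # virus particles needed to establish infection (estimate)

# Hypothetical particles received per minute by someone sharing the room.
exposure_rates_per_minute = {
    "infected person breathing quietly nearby": 20,
    "infected person talking nearby": 200,
    "infected person shouting or singing nearby": 1_000,
}

for scenario, rate in exposure_rates_per_minute.items():
    minutes_to_dose = INFECTIOUS_DOSE / rate
    print(f"{scenario}: ~{minutes_to_dose:.0f} minute(s) to an infectious dose")
```

Under these made-up numbers, a brief encounter stays far below the threshold, while an hour of shared indoor air with someone talking crosses it many times over; that is why exposure duration belongs in any estimate of risk.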

Bromage points out that infection is rare outdoors, and that the risk is much higher if you are in the same room as an infected individual for a period of time, especially if that person is shouting or singing, never mind coughing or sneezing:

The reason to highlight these different outbreaks is to show you the commonality of outbreaks of COVID-19. All these infection events were indoors, with people closely-spaced, with lots of talking, singing, or yelling. The main sources for infection are home, workplace, public transport, social gatherings, and restaurants. This accounts for 90 percent of all transmission events. In contrast, outbreaks spread from shopping appear to be responsible for a small percentage of traced infections.

The environments Bromage lists highlight three problems with recommendations that enjoin mask use “in public.” First, much of the spread occurs in settings that are private, or that people would not normally think of as public, such as workplaces. Second, some public spaces such as restaurants are hardly optimal for mask wearing. Restaurant staff can wear masks, but it is impractical for patrons to wear them while eating, and innovations like the Israeli Pac-Man design are unlikely to catch on. Third, open air public settings are not especially risky.

Public transport is the public setting in which guidance recommending or mandating masks makes most sense. The risk is high due to exposure time, it’s feasible to wear one, and it is often difficult to avoid crowding that forces people into close proximity with one another. In the United States, there is a clear correlation between reliance on public transport and the COVID-19 fatality rate. New York City tops the list of cities with high transit ridership, and a recent analysis of COVID-19 trends in China, New York, and Italy suggests that masking requirements have contributed to reducing the spread. A recent Danish simulation study of superspreading found that “transmission can be controlled simply by limiting contacts such as public transportation and large events.” A New York City subway car can carry up to 250 passengers. Although the duration of a ride may be shorter than, say, a concert, the number of “events” is much higher because travellers sometimes make multiple journeys a day.
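The effect of repeated journeys can also be put in simple probabilistic terms. Assuming, purely for illustration, that each ride carries a small independent chance of infection (the per-ride probabilities below are invented, not drawn from the studies above), the risk compounds across rides:

```python
# Illustration of how repeated transit rides compound risk. The per-ride
# infection probabilities are invented, and treating rides as independent
# events is a simplifying assumption.

def compounded_risk(p_per_ride: float, n_rides: int) -> float:
    """Probability of at least one infection across n independent rides."""
    return 1 - (1 - p_per_ride) ** n_rides

for p_per_ride in (0.001, 0.005):
    for n_rides in (2, 10, 40):  # a day, a work week, a month of commuting
        risk = compounded_risk(p_per_ride, n_rides)
        print(f"p={p_per_ride}, rides={n_rides}: cumulative risk = {risk:.2%}")
```

Even a tiny per-ride probability grows appreciably over a month of twice-daily commuting, which is what makes transit a sensible priority for masking guidance.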

Shops are less hazardous because the risk of spending much time close to an infected person is low unless an unusually high proportion of people present are infected. (The risk is obviously higher for store personnel who spend all day there.) Workplaces may be public in a legal sense, but most employees probably don’t think of themselves as being “in public” at work, and workplaces are typically too varied for a blanket recommendation to be useful. On the other hand, masks worn at home can help limit spread, but only before symptoms appear. This is unlikely to be a general recommendation, but it is worth considering in high-risk cases (such as when an elderly relative lives in the family home). This further highlights the mismatch between guidance recommending the wearing of face masks “in public,” and a more accurate understanding of relative risks posed by various environments.

A recent article in Science supports the assessments of relative risk outlined above:

Thus, it is particularly important to wear masks in locations with conditions that can accumulate high concentrations of viruses, such as healthcare settings, airplanes, restaurants, and other crowded places with reduced ventilation.

The authors conclude by recommending:

For society to resume, measures designed to reduce aerosol transmission must be implemented, including universal masking and regular, widespread testing to identify and isolate infected asymptomatic individuals.

The World Health Organisation’s guidance from June 5th is similar in its particulars. Advice differs depending on whether or not there is “widespread transmission and limited or no capacity to implement other containment measures.” Even when there is no such capacity, the WHO recommends masking in “settings where a physical distancing cannot be achieved,” specifically public transport and high-risk occupational settings. Only where there is widespread transmission does it call for the wearing of masks in a much wider range of settings such as “grocery stores, at work, social gatherings, mass gatherings, closed settings, including schools, churches, and mosques.” All the news coverage I have seen on the WHO guidance appears to have missed this distinction.

So a scientific consensus consistent with the available data appears to be emerging. But it is conveyed to the public in confusing ways that unintentionally leave the impression that public settings are dangerous and private settings are safe. Studies that attempt to model the effects of masking without considering the differences in risk between settings therefore produce misleading findings. A much-publicised study from April 21st (two days before the article in Quillette appeared) claimed that “if 80 percent of Americans wore masks, COVID-19 infections would plummet.” Another recent study found that “When 100 percent of the public wear face masks, disease spread is greatly diminished.” This is an attractive idea. Wearing masks is not much of an inconvenience compared to some of the other restrictions imposed upon societies to limit spread. Wouldn’t it be wonderful if we could all simply wear masks and otherwise get on with our lives as normal without worrying too much?

These papers are modelling studies, not empirical ones using actual people, and as far as I can tell, they pay no attention to length of exposure or the difference in risk between settings. In other words, these studies do not seem to adequately reflect the real-world situations where the virus spreads. The earlier of the two papers advocates universal masking, and the recommendations it proposes echo the idea that masks should be worn “in public.” It does recommend that masking be made mandatory or strongly recommended for the general public when using public transport or in public spaces, for the duration of the pandemic.

The researchers claim to have modelled the effect of mask wearing by a given percentage of the population. But they do not seem to have considered that 80 or 100 percent would not use masks in possibly infectious settings such as restaurants or private gatherings. This might be the equivalent of halving the amount of mask wearing, and we could easily end up in a situation where their own models predict that the effect of masking basically evaporates:

“This is the goal,” De Kai maintained. “For 80 or 90 percent of the population to be wearing masks.” Anything less, he added, doesn’t work as well. “If you get down to 30 or 40 percent, you get almost no [beneficial] effect at all.”
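A toy calculation (not the model used in either study) makes the halving point concrete. Suppose masks cut onward transmission by some fraction, but are worn only in the share of contacts that occur in settings where people actually mask; all numbers below are invented for illustration.

```python
# Toy calculation (not the model from the studies cited above): if masks are
# worn only in "public" contacts, nominal population coverage overstates
# effective coverage across all transmission-relevant contacts.
# R0 and MASK_EFFICACY are invented placeholder values.

R0 = 2.5             # hypothetical baseline reproduction number
MASK_EFFICACY = 0.5  # hypothetical per-contact reduction in transmission

def effective_R(nominal_coverage: float, masked_contact_share: float) -> float:
    # Masks help only in the share of contacts where they are actually worn;
    # restaurants, homes, and private gatherings contribute unmasked contacts.
    effective_coverage = nominal_coverage * masked_contact_share
    return R0 * (1 - MASK_EFFICACY * effective_coverage)

for share in (1.0, 0.5):         # all contacts masked vs. only half of them
    for coverage in (0.8, 0.4):  # the range De Kai discusses
        print(f"nominal coverage {coverage:.0%}, masked contacts {share:.0%}: "
              f"R = {effective_R(coverage, share):.2f}")
```

In this toy setup, 80 percent nominal coverage applied to only half of all contacts yields exactly the same reproduction number as 40 percent universal coverage, the level at which De Kai says the benefit largely disappears.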

Overly broad masking requirements are at best useless, and possibly harmful, since they can cause confusion and prompt at least some to rebel against masking if the practice is too onerous or impractical. People will think for themselves, so if recommendations are to be effective, they need to make sense. The public needs to understand why wearing a mask in certain settings is important and less so in others so they can make informed judgments about risk and safety on their own, instead of being asked to robotically follow a generalised guideline about behaviour “in public.” Clear, comprehensible, and reasonable policies will hopefully provide less room for confusion and for polarised discussions. This is particularly critical at a time when societies are trying to reopen their economies and return their populations to free and normal lives as safely as possible.

 

Dagfinn Reiersøl is a software developer and author of PHP in Action. You can follow him on Twitter @dagfinnr.

Photo by engin akyurt on Unsplash.

ACTIVISM, CANADA, CULTURE WARS, FEMINISM, IDENTITY, SCIENCE, SEX, STORIES, WOMEN

From South American Anthropology to Gender-Crit Cancel Culture: My Strange Feminist Journey

I’m one of the many academics who’ve been “canceled” for having the wrong sort of opinion—or quasi-canceled, at least. As of this writing, I remain an associate professor of anthropology at the University of Alberta. Beginning in July 2019, I also served as the department’s undergraduate programs chair. It was supposed to be a three-year appointment. But in late March, I was dismissed from that position due to informal student complaints to the effect that I had made them feel “unsafe” by articulating feminist critiques of current theories of gender. Earlier this month, my colleague Carolyn Sale wrote up an account of my case for the Centre for Free Expression blog at Ryerson University. As tends to be the case with these controversies, this in turn caused students and colleagues to scour my social media accounts in search of yet more “gender-critical” commentary. When they found it, they demanded that I be fired from my tenured position and charged with hate speech.

Articles of the type you are now reading typically channel great anger, resentment, or sorrow. But in my case, I have to acknowledge that, a decade ago, it’s likely that I would have added my voice to the clamor to put a head like mine (the 2020 version of it) on a metaphorical pike. To my shame, in fact, I’m pretty sure I once joined the campaign to cancel Canadian sexologist Kenneth Zucker after reading an article about his cautious approach in regard to “affirming” the gender dysphoric claims of children. (Zucker’s work had been wrongly denounced as “conversion therapy,” and he later won an apology and large cash settlement from his employer, the Toronto-based Centre for Addiction and Mental Health, which had echoed the false claims against him in a public review of his work.)

In any event, I would like to tell the story of how I got from there to here—because I am hardly the only feminist who’s walked away from the beliefs and postures associated with radicalized trans activism. There will be many more of us coming forward in the years to come. And so it might be useful for readers to understand how one of us became invested in the debate surrounding gender ideology, and subsequently became disillusioned.

* * *

I first became attuned to the subject indirectly, through reading Alice Dreger’s coverage of an unrelated controversy in anthropology. At this point, I should note that I’d trained as a lowland South Americanist anthropologist under the supervision of Dr. Manuela Carneiro da Cunha and Dr. Terence Turner at the University of Chicago during the late 1990s and early 2000s. Dreger, a historian and bioethicist who would begin teaching at Northwestern University in nearby Evanston in 2005, was scathing about Dr. Turner’s involvement in urging the American Anthropological Association to investigate dramatic allegations regarding anthropologist Napoleon Chagnon and geneticist James Neel, especially their work among the Yanomami Indigenous people of Venezuela.

This was a huge scandal in anthropological circles at the time, originating in a 2000 book by journalist Patrick Tierney, Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon. An American Anthropological Association investigation eventually concluded that several of Tierney’s most dramatic allegations could not be substantiated. But I was skeptical of Dreger’s claims to have made a comprehensive and dispassionate analysis of the case. From what I could tell, she’d never spoken to a single Yanomami person in the course of her inquiries, nor bothered to investigate earlier challenges to Dr. Chagnon’s ethics made by the Brazilian Anthropological Association. Moreover, the portrait of Professor Turner she presented seemed to misrepresent his conduct, motivations, and scholarship.

In 2010, I was invited to participate in a roundtable at the American Anthropological Association meetings, entitled “The Yanomami Controversy, A Decade Later.” I decided to discuss Dreger’s handling of the Yanomami controversy in light of her earlier analysis of the ferocious backlash surrounding sexologist J. Michael Bailey’s 2003 book about trans-identified men, The Man Who Would Be Queen: The Science of Gender-Bending and Transsexualism. In both instances, I contended, Dreger mounted a defence of a besieged male academic whose research ethics had come under serious scrutiny. Moreover, both Bailey’s and Chagnon’s theoretical frameworks drew upon principles of evolutionary psychology to which I objected on feminist intellectual grounds. I prepared a paper on the subject entitled “Alice Dreger and the Academic Retrosexuals,” and eventually contacted Andrea James, a trans-identified man who came in for particular opprobrium in Dreger’s work.

Andrea—whom I found friendly, informative, and funny—put me in touch with other trans activists, and gave me helpful commentary on successive drafts of my paper. Eventually, the paper got an encouraging revise-and-resubmit response from a good anthropology journal. But one of the reviewers urged me to rehash the entire El Dorado controversy within the limited space available. Since I preferred to focus on my feminist take on the parallels between the two cases, I withdrew the paper from consideration and decided to rewrite it entirely, this time for a gender-studies journal.

This was in the early 2010s. (I was getting divorced and raising a toddler on my own at the time, so I will confess that my memory of the chronology is somewhat fuzzy.) At this juncture, I’d written only for the anthropological literature, and so was naïve about the state of play on gender issues. I had an inkling that there wasn’t a unified feminist position on trans issues, and that some feminists, such as Janice Raymond, author of The Transsexual Empire: The Making of the She-Male, had expressed hostility to the idea of “transsexuals” (then a commonly used term) back in the late 1970s. But my impression was that Raymond had been part of the feminist second wave, which I’d thought no one bothered with anymore. Just to ground my arguments, however, I figured I’d do a bit of research to make sure my approximate sense of the state of the feminist literature on trans issues was adequate.

I remember coming across a website called Gender Trender, maintained by a pseudonymous lesbian radical feminist named Gallus Mag. At first, I perused it as an anachronistic curiosity—an odd outpost from a bygone feminist era, atavistically hostile to the more up-to-date proposition that trans women are women. But Gallus’s prose style was funny and straightforward, in contrast to most gender-studies academic writing, and I found myself drawn back again and again to see what the old dinosaur (as I imagined her to be) would say next.

But for all Gallus’s sardonic manner of presentation, I came to realize the information she was documenting on her site was quite serious. She covered, among other things, the 2016 murder of two lesbians and their son in East Oakland, a crime for which prosecutors charged a trans-identified man named Dana Rivers (who’d also been an organizer of the “Camp Trans” campaign against a women-only music festival called Michfest during the 1990s). Gallus wasn’t paranoid in describing “female erasure” and “lesbian erasure”; nor in her insistence that gender ideology—which includes the belief that men may become women, and vice versa, by an act of declaration—served male interests.

I also began to appreciate her claim that the mainstream media often was reluctant to report on any facts that cast doubt on orthodox gender ideology. That aforementioned 2016 triple murder, strangely, received little media coverage. And Gallus’s own blog, which had been subject to countless attacks by trans activists, was censored and then taken down by WordPress in 2018 when she broke the story of Jonathan Yaniv—an eccentric Canadian misogynist who’s managed to deplatform dozens of women who express disgust at his aggressive sexual prurience (or who refuse to call him “Jessica”). But by this time, I was also reading Meghan Murphy’s Vancouver-based blog Feminist Current. (Infamously, Murphy herself was tossed off Twitter due to her interactions with Yaniv.) We are only now getting to the point where mainstream outlets, such as the Times of London and Newsweek, are giving air time to the gender-critical side.

A full decade has passed since I began paying sustained attention to trans activism and gender ideology. I have published an essay of my own at Feminist Current, become an active commenter on the feminist social networking site Spinster, signed the Women’s Declaration on Sex-Based Human Rights, and become one of two Canadian country contacts for its campaign. In the process, I’ve met inspiring feminists from all around the world who share my concerns, and who are fighting rollouts of eerily identical gender-identity laws in diverse national contexts.

And yet I have never finished re-writing that El Dorado/Man Who Would be Queen paper that started it all—because the kind of feminist analysis I’d originally designed it around is no longer persuasive to me.

As I’ve changed, so has the world around me. In 2017, Canada passed Bill C-16, which added “gender identity or expression” to the list of prohibited grounds of discrimination in the Canadian Human Rights Act. During the 2019–2020 academic year, the University of Alberta, my employer, brought in a sweeping set of policies aimed at promoting “equity, diversity, and inclusion” (EDI) in the workplace, complete with the obligatory focus on gender at the expense of sex. All of this has arisen at a university that, during my 15 years here, has otherwise provided me with an agreeable and supportive environment. By and large, I’ve been happy at the University of Alberta, and I’m sorry to come into conflict with it.

The conflict, though, is unavoidable. Contemporary gender ideology requires active affirmation of the proposition that men can become women and that women can become men. It further asserts that to refuse to assent to this proposition is to do active “harm” to trans-identified individuals. The doctrine requires uncritical reverence for retrograde gender constructs, such as the idea that a little boy who likes tea parties and pretty dresses can be deemed to have been “born in the wrong body” (and so is actually, in fact, a little girl).

I’m not on Facebook, but I regularly hear from friends that I’ve been charged on this or that Facebook page with “denying the existence of trans people.” My detractors may well be correct that I am in violation of my employer’s EDI policies by insistently bringing up biology, and by engaging in the critique (and sometimes mockery) of gender identity claims. This is my form of political dissent. And I cannot avoid getting into trouble, because I now know things I did not know 10 years ago.

Whatever the initial aims of gender-ideology advocates, this system of beliefs is leading to real horrors being inflicted upon women and children. Activist Heather Mason, for instance, has documented the harassment and abuses that have predictably resulted from transferring purportedly trans-identified men to women’s prisons. In 2005, the year I moved to Edmonton from the United States, a 13-year-old girl named Nina Courtepatte was raped, beaten to death with a hammer, and set on fire on an Edmonton golf course. Her killer now claims to identify as a woman and is housed with female inmates. Is anyone concerned about those women’s right to “safe spaces”? Or read this account of a botched gender-reassignment surgery on a woman, or this one on a child. Read about the deadly risks associated with puberty-blocking drugs. Do you know what a trans widow is? A detransitioner? I do. And I can’t un-learn any of it.

I’m 49 years old. As already noted, my 39-year-old self would almost certainly have been part of the campaign to get me fired. I don’t know how the students and colleagues denouncing me now will look back on their actions in 2030. But I can articulate the principles presently guiding my own behaviour, as borrowed from James Baldwin: “People who shut their eyes to reality simply invite their own destruction, and anyone who insists on remaining in a state of innocence long after that innocence is dead turns himself into a monster.”

If you click on no other link, watch this powerful talk by Chilean feminist Ariel Pereira about her experiences transitioning and detransitioning. If you were a tenured professor, with all of the protections such a position entails, and you heard that talk, would you keep silent when people around you were insisting that nothing is better for young people than to celebrate and affirm gender identity, no hard questions asked?

When a Toronto-based Quillette editor invited me to write about my experiences, he suggested I tell my story and then “expand on it as a microcosm of some larger trends.” But honestly, I don’t think I have a clever general analysis to offer about the nature of our present handbasket.

That said, my experience has brought home to me various lessons I’ve always tried to communicate to students while teaching the history of anthropology. I tell them that, like all social sciences, anthropology often tells a story about itself that suggests its scholars figured out exactly the right thing to study, and at exactly the right time, through sheer cleverness and moral virtue: colonialism, race and ethnicity, gender and sexuality. The truth is that it always has been social movements outside of the discipline that have brought each of these themes to the discipline’s attention. And so detractors aren’t wrong when they say the social sciences are driven by “trends.” But trends can be important. And their influence doesn’t make our enterprise “unscientific.” That’s because our object of study is society itself, and so it makes sense that we focus on the new ideas that circulate in a society as that society changes.

Over the past 10 years, I’ve been seeing that unfold in real time. My own so-called gender-critical feminism isn’t driven by academics, nor astroturfed by funders. There is no Tawani Foundation endowing professorships on my side of things. The women who inspire me now are theorizing on the fly and organizing on a shoestring. Together we are building an active social movement that has emerged to meet a real historical challenge. Someone like Sheila Jeffreys, a long-established feminist academic who has been gender-critical for many years, is rare. Instead, gender-critical feminists are ordinary women such as Meghan Murphy, Nina Paley, Julie Bindel, Hilla Kerner, Eugenia Rodrigues, Allison Bailey, Max Dashu, Posie Parker, Cherry Smiley, Ghislaine Gendron, Raquel Rosario Sánchez, Linda Blade, Renee Gerlich, Jennifer Bilek, M K Fain, Maria Binetti, the late great Magdalen Berns—oh, and now a writer you may have heard of called J. K. Rowling.

J. K. Rowling (@jk_rowling) tweeted a quotation from “Anonymous Letter From a Terrified Lesbian” (thevelvetchronicle.com): “I’ve never felt as shouted down, ignored, and targeted as a lesbian *within* our supposed GLBT community as I have over the past couple of years.” https://thevelvetchronicle.com/anonymous-letter-from-a-terrified-lesbian-thoughtcrime/

This list of courageous women is multi-racial and international in its composition, and it is growing longer every day. But we still face regular, ferocious denunciation, threats to our livelihoods, and at times even violent intimidation from trans activists and trans “allies.” Even as this article goes to press, I am being warned that the student newspaper at my own university will soon be publishing a strident condemnation of me. And even if the university desists in its efforts to censure me, on the basis of academic freedom, school officials already have made it clear that non-academic staff can be legitimately dismissed for expressing views like mine. Unlike me, most women aren’t tenured professors with a faculty association to back them up. So the university can force them to “shut their eyes to reality.”

* * *

I earned my doctorate at the University of Chicago, a rewarding and rigorous educational program. The school’s anthropology department had a reputation for conservatism when I trained there in the late 1990s, principally because we read the history of anthropology and the social sciences at a time when it was becoming fashionable merely to denounce it.

It’s sad to compare that culture of rigor to what I now see today. One of the sources of my present troubles is having attended a University of Alberta Anthropology Department event at which students and several faculty held forth with the view that the gender binary is an artifact of “colonialism,” and that gender-critical feminists are “all white.” I said that this was not true, and specifically mentioned the work of Indigenous activist and educator Fay Blaney, who objects to the appropriation of Indigenous ideas about gender by contemporary trans activists; as well as Vaishnavi Sundar, who has similarly objected to appropriation of hijra identity from South Asia. I also made reference to Linda Bellos, a black lesbian activist who has been de-platformed in the UK for her gender-critical views. What was striking was that no one in the room had heard of any of these women—though at least one attendee felt free to suggest that they’d all simply been brainwashed by white colonialist propaganda.

But the truth is, were it not for what I quite seriously describe as a “post-doctoral education” under the tutelage of pioneering gender-critical feminists, I would never have heard of any of these women either. The contemporary academy has simply stopped paying attention to non-ideologically compliant feminists. Yet even without institutional support, the influence of these so-called “gender crits” is now stronger than it’s been in many years—so much so that academics in the social sciences might eventually have to start paying attention.

I’ll close on a point drawn from my lowland South American research. I worked for over 20 years with Guaraní-speaking Indigenous people, principally in Bolivia. One of the features that marked the encounter of Guaraní-speaking people with European colonizers was the way missionaries—Jesuits, most famously—interpreted Guaraní cosmology as being uniquely compatible with Christian theology. They found features of Guaraní myth and religious practice as either “prefigurative” of the coming of Christ, or as being marked by an actual prior encounter with Christianity: Stories of the culture hero Pai Sume were supposed to encode a memory of a visit from the disciple Thomas, for example.

The way contemporary trans ideology assimilates any number of world cultural practices as evidence of the existential universality of transness—hijras in South Asia, two-spirit people in North America, bancis in Indonesia, bacha bazi and bacha posh in Afghanistan, sworn virgins of the Balkans, and so on—is quite similar. As with European Christian projections on to Guaraní culture, this is what happens when adherents of a totalizing worldview subject the truth about the outside world to their own narrow preconceptions. Anthropology taught me how to spot this instinct. Gender-critical feminists taught me how to stand up to it.

 

Kathleen Lowrey is an associate professor of anthropology in the Faculty of Arts at the University of Alberta.

Featured image: Guaraní figures carved in stone at São João Baptista, Brazil.


TOP STORIES

The Ever-Shrinking Transistor and the Invention of Google

Innovators are often unreasonable people: restless, quarrelsome, unsatisfied, and ambitious. Often, they are immigrants, especially on the west coast of America. Not always, though. Sometimes they can be quiet, unassuming, modest, and sensible stay-at-home types. The person whose career and insights best capture the extraordinary evolution of the computer between 1950 and 2000 was one such. Gordon Moore was at the centre of the industry throughout this period and he understood and explained better than most that it was an evolution, not a revolution. Apart from graduate school at Caltech and a couple of unhappy years out east, he barely left the Bay Area, let alone California. Unusually for a Californian, he was a native, who grew up in the small town of Pescadero on the Pacific coast just over the hills from what is now called Silicon Valley, going to San Jose State College for undergraduate studies. There he met and married a fellow student, Betty Whitaker.

As a child, Moore had been taciturn to the point that his teachers worried about it. Throughout his life he left it to partners like his colleague Andy Grove, or his wife, Betty, to fight his battles for him. “He was either constitutionally unable or simply unwilling to do what a manager has to do,” said Grove, a man toughened by surviving both Nazi and Communist regimes in his native Hungary. Moore’s chief recreation was fishing, a pastime that requires patience above all else. And unlike some entrepreneurs he was—and is, now in his 90s—just plain nice, according to almost everybody who knows him. His self-effacing nature somehow captures the point that innovation in computers was and is not really a story of heroic inventors making sudden breakthroughs, but an incremental, inexorable, inevitable progression driven by the needs of what Kevin Kelly calls “the technium” itself. He embodied that point far better than flamboyant figures like Steve Jobs, who managed to build a personality cult in a revolution that was not really about personalities.

In 1965 Moore was asked by an industry magazine called Electronics to write an article about the future. He was then at Fairchild Semiconductor, having been one of the “Traitorous Eight” who defected from the firm run by the dictatorial and irascible William Shockley to set up their own company eight years before, where they had invented the integrated circuit of miniature transistors printed on a silicon chip. Moore and Robert Noyce would defect again to set up Intel in 1968. In the 1965 article Moore predicted that miniaturization of electronics would continue and that it would one day deliver “such wonders as home computers… automatic controls for automobiles, and personal portable communications equipment”. But that prescient remark is not why the article deserves a special place in history. It was this paragraph that gave Gordon Moore, like Boyle and Hooke and Ohm, his own scientific law:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years.

Moore was effectively forecasting the steady but rapid progress of miniaturization and cost reduction, doubling every year, through a virtuous circle in which cheaper circuits led to new uses, which would lead to more investment, which would lead to cheaper microchips for the same output of power. The unique feature of this technology is that a smaller transistor not only uses less power and generates less heat, but can be switched on and off faster, so it works better and is more reliable. The faster and cheaper chips got, the more uses they found. Moore’s colleague Robert Noyce deliberately under-priced microchips so that more people would use them in more applications, thereby growing the market.

By 1975 the number of components on a chip had passed 65,000, just as Moore had forecast, and it kept on growing as the size of each transistor shrank and shrank, though in that year Moore revised his estimate of the rate of change to doubling the number of transistors on a chip every two years. By then Moore was chief executive of Intel and presiding over its explosive growth and the transition to making microprocessors, rather than memory chips: essentially programmable computers on single silicon chips. Calculations by Moore’s friend and champion, Carver Mead, showed that there was a long way to go before miniaturization hit a limit.
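As a back-of-envelope illustration (my own sketch, not Ridley’s, with assumed starting values), the arithmetic of the two doubling rates is easy to verify. Note how yearly doubling from the mid-1960s lands almost exactly on the 65,000 components cited for 1975:

```python
# Illustrative sketch of Moore's Law growth, assuming ~64 components per
# chip in 1965, doubling yearly until 1975, then every two years thereafter.

def components(year: int, base: int = 64, base_year: int = 1965) -> int:
    """Estimated components per chip under Moore's 1965 and 1975 rates."""
    if year <= 1975:
        doublings = year - base_year           # one doubling per year
    else:
        doublings = 10 + (year - 1975) // 2    # one doubling per two years
    return base * 2 ** doublings

for y in (1965, 1975, 1985, 2000):
    print(y, f"{components(y):,}")
# 1965 64 | 1975 65,536 | 1985 2,097,152 | 2000 268,435,456
```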

Moore’s Law kept on going not just for 10 years but for about 50 years, to everybody’s surprise. Yet it probably has now at last run out of steam. The atomic limit is in sight. Transistors have shrunk to less than 100 atoms across, and there are billions on each chip. Since there are now trillions of chips in existence, that means there are billions of trillions of transistors on Planet Earth. They are probably now within an order of magnitude of equalling the number of grains of sand on the planet. Most sand grains, like most microchips, are made largely of silicon, albeit in oxidized form. But whereas sand grains have random—and therefore probable—structures, silicon chips have highly non-random, and therefore improbable, structures.
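The “billions of trillions” multiplication is easy to check with assumed round figures (the true counts are, of course, unknowable):

```python
# Rough orders of magnitude; both inputs are illustrative assumptions.
transistors_per_chip = 1e9    # "billions on each chip"
chips_in_existence = 1e12     # "trillions of chips"
print(f"{transistors_per_chip * chips_in_existence:.0e} transistors")  # 1e+21
```

That is 10^21, a billion trillion, in line with the text’s estimate.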

Looking back over the half-century since Moore first framed his Law, what is remarkable is how steady the progression was. There was no acceleration, there were no dips and pauses, no echoes of what was happening in the rest of the world, no leaps as a result of breakthrough inventions. Wars and recessions, booms and discoveries, seemed to have no impact on Moore’s Law. Also, as Ray Kurzweil was to point out later, Moore’s Law in silicon turned out to be a progression, not a leap, from the vacuum tubes and mechanical relays of previous years: The number of switches delivered for a given cost in a computer trundled upwards, showing no sign of sudden breakthrough when the transistor or the integrated circuit was invented. Most surprising of all, discovering Moore’s Law had no effect on Moore’s Law. Knowing that the cost of a given amount of processing power would halve in two years ought surely to have been valuable information, allowing an enterprising innovator to jump ahead and achieve that goal now. Yet it never happened. Why not? Mainly because each incremental stage was needed to work out how to reach the next.

This was encapsulated in Intel’s famous “tick-tock” corporate strategy: Tick was a shrink of the manufacturing process, roughly every other year; tock was a new chip design that fine-tuned and exploited that process in the intervening years, preparatory to the next shrink. But there was also a degree of self-fulfilling prophecy about Moore’s Law. It became a prescription for, not a description of, what was happening in the industry. Gordon Moore, speaking in 1976, put it this way:

This is the heart of the cost reduction machine that the semiconductor industry has developed. We put a product of given complexity into production; we work on refining the process, eliminating the defects. We gradually move the yield to higher and higher levels. Then we design a still more complex product utilizing all of the improvements, and put that into production. The complexity of our product grows exponentially with time.

Silicon chips alone could not bring about a computer revolution. For that, there needed to be new computer designs, new software, and new uses. Throughout the 1960s and 1970s, as Moore foresaw, there was a symbiotic relationship between hardware and software, as there had been between cars and oil. Each industry fed the other with innovative demand and innovative supply. Yet even as the technology went global, more and more the digital industry became concentrated in Silicon Valley, a name coined in 1971, for reasons of historical accident: Stanford University’s aggressive pursuit of defence research dollars led it to spawn a lot of electronics startups, and those startups gave birth to others, which spawned still others. Yet the role of academia in this story was surprisingly small. Though it educated many of the pioneers of the digital explosion in physics or electrical engineering, and though of course there was basic physics underlying many of the technologies, neither hardware nor software followed a simple route from pure science to applied.

Companies as well as people were drawn to the west side of San Francisco Bay to seize opportunities, catch talent and eavesdrop on the industry leaders. As Terence Kealey, the biochemist and former vice-chancellor of the University of Buckingham, has argued, innovation can be like a club: you pay your dues and get access to its facilities. The corporate culture that developed in the Bay Area was egalitarian and open: In most firms, starting with Intel, executives had no reserved parking spaces, large offices, or hierarchical ranks, and they encouraged the free exchange of ideas sometimes to the point of chaos. Intellectual property hardly mattered in the digital industry: There was not usually time to get or defend a patent before the next advance overtook it. Competition was ruthless and incessant, but so were collaboration and cross-pollination.

The innovations came rolling off the silicon digital production line: the microprocessor in 1971, the first video games in 1972, the TCP/IP protocols that made the Internet possible in 1973, the Xerox PARC Alto computer with its graphical user interface in 1974, Steve Jobs’s and Steve Wozniak’s Apple I in 1975, the Cray 1 supercomputer in 1976, the Atari video game console in 1977, the laser disc in 1978, the “worm”, ancestor of the first computer viruses, in 1979, the Sinclair ZX80 hobbyist computer in 1980, the IBM PC in 1981, Lotus 1-2-3 software in 1982, the CD-ROM in 1983, the word “cyberspace” in 1984, Stewart Brand’s Whole Earth ’Lectronic Link (WELL) in 1985, the Connection Machine in 1986, the GSM standard for mobile phones in 1987, Stephen Wolfram’s Mathematica language in 1988, Nintendo’s Game Boy and Toshiba’s DynaBook in 1989, the World Wide Web in 1990, Linus Torvalds’s Linux in 1991, the film Terminator 2 in 1991, Intel’s Pentium processor in 1993, the Zip disk in 1994, Windows 95 in 1995, the Palm Pilot in 1996, the defeat of the world chess champion, Garry Kasparov, by IBM’s Deep Blue in 1997, Apple’s colourful iMac in 1998, Nvidia’s consumer graphics processing unit, the GeForce 256, in 1999, the Sims in 2000. And on and on and on.

It became routine and unexceptional to expect radical innovations every few months, an unprecedented state of affairs in the history of humanity. Almost anybody could be an innovator, because thanks to the inexorable logic unleashed and identified by Gordon Moore and his friends, the new was almost always automatically cheaper and faster than the old. So invention meant innovation too.

Not that every idea worked. There were plenty of dead ends along the way. Interactive television. Fifth-generation computing. Parallel processing. Virtual reality. Artificial intelligence. At various times each of these phrases was popular with governments and in the media, and each attracted vast sums of money, but proved premature or exaggerated. The technology and culture of computing were advancing by trial and error on a massive and widespread scale, in hardware, software and consumer products. Looking back, history endows the tryers who made the fewest errors with the soubriquet of genius, but for the most part they were lucky to have tried the right thing at the right time. Gates, Jobs, Brin, Page, Bezos, Zuckerberg were all products of the technium’s advance, as much as they were causes. In this most egalitarian of industries, with its invention of the sharing economy, a surprising number of billionaires emerged.

Again and again, people were caught out by the speed of the fall in cost of computing and communicating, leaving future commentators with a rich seam of embarrassing quotations to mine. Often it was those closest to the industry about to be disrupted who least saw it coming. Thomas Watson, the head of IBM, said in 1943 that “there is a world market for maybe five computers.” Tunis Craven, commissioner of the Federal Communications Commission, said in 1961: “there is practically no chance communications space satellites will be used to provide better telephone, telegraph, television or radio service inside the United States.” Marty Cooper, who has as good a claim as anybody to have invented the mobile phone, or cell phone, said, while director of research at Motorola in 1981: “Cellular phones will absolutely not replace local wire systems. Even if you project it beyond our lifetimes, it won’t be cheap enough.” Tim Harford points out that in the futuristic film Blade Runner, made in 1982, robots are so life-like that a policeman falls in love with one, but to ask her out he calls her from a payphone, not a mobile.

* * *

The surprise of search engines and social media

I use search engines every day. I can no longer imagine life without them. How on Earth did we manage to track down the information we needed? I use them to seek out news, facts, people, products, entertainment, train times, weather, ideas, and practical advice. They have changed the world as surely as steam engines did. In instances where they are not available, like finding a real book on a real shelf in my house, I find myself yearning for them. They may not be the most sophisticated or difficult of software tools, but they are certainly the most lucrative. Search is probably worth nearly a trillion dollars a year and has eaten the revenue of much of the media, as well as enabled the growth of online retail. Search engines, I venture to suggest, are a big part of what the Internet delivers to people in real life—that and social media.

I use social media every day too, to keep in touch with friends, family and what people are saying about the news and each other. Hardly an unmixed blessing, but it is hard to remember life without it. How on Earth did we manage to meet up, to stay in touch or to know what was going on? In the second decade of the 21st century social media exploded into the biggest and second most lucrative use of the Internet and is changing the course of politics and society.

Yet here is a paradox. There is an inevitability about both search engines and social media. If Larry Page had never met Sergey Brin, if Mark Zuckerberg had not got into Harvard, then we would still have search engines and social media. Both already existed when they started Google and Facebook. Yet before search engines or social media existed, I don’t think anybody forecast that they would exist, let alone grow so vast, certainly not in any detail. Something can be inevitable in retrospect, and entirely mysterious in prospect. This asymmetry of innovation is surprising.

The developments of the search engine and social media follow the usual path of innovation: incremental, gradual, serendipitous, and inexorable; few eureka moments or sudden breakthroughs. You can choose to go right back to the posse of MIT defence-contracting academics, such as Vannevar Bush and J. C. R. Licklider, in the post-war period, writing about the coming networks of computers and hinting at the idea of new forms of indexing and networking. Here is Bush in 1945: “The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.” And here is Licklider in his influential essay, written in 1964, on “Libraries of the Future”, imagining a future in which, over the weekend, a computer replies to a detailed question: “Over the weekend it retrieved over 10,000 documents, scanned them all for sections rich in relevant material, analyzed all the rich sections into statements in a high-order predicate calculus, and entered the statements into the data base of the question-answering subsystem.” But frankly such prehistory tells you only how little they foresaw instant search of millions of sources. A series of developments in the field of computer software made the Internet possible, which made the search engine inevitable: time sharing, packet switching, the World Wide Web, and more. Then in 1990 the very first recognizable search engine appeared, though inevitably there are rivals to the title.

Its name was Archie, and it was the brainchild of Alan Emtage—a student at McGill University in Montreal—and two of his colleagues. This was before the World Wide Web was in public use and Archie used the FTP protocol. By 1993 Archie was commercialized and growing fast. Its speed was variable: “While it responds in seconds on a Saturday night, it can take five minutes to several hours to answer simple queries during a weekday afternoon.” Emtage never patented it and never made a cent.

By 1994 WebCrawler and Lycos were setting the pace with their new text-crawling bots, gathering links and key words to index and dump in databases. These were soon followed by AltaVista, Excite, and Yahoo!. Search engines were entering their promiscuous phase, with many different options for users. Yet still nobody saw what was coming. Those closest to the front still expected people to wander into the Internet and stumble across things, rather than arrive with specific goals in mind. “The shift from exploration and discovery to the intent-based search of today was inconceivable,” said Srinija Srinivasan, Yahoo!’s first editor-in-chief.

Then Larry met Sergey. Taking part in an orientation programme before joining graduate school at Stanford, a university addicted by then to spinning out tech companies, Larry Page found himself guided by a young student named Sergey Brin. “We both found each other obnoxious,” said Brin later. Both were second-generation academics in technology. Page’s parents were academic computer scientists in Michigan; Brin’s were a mathematician and an engineer in Moscow, then Maryland. Both young men had been steeped in computer talk, and hobbyist computers, since childhood.

Page began to study the links between web pages, with a view to ranking them by popularity, and had the idea, reportedly after waking from a dream in the night, of cataloguing every link on the exponentially expanding web. He created a web crawler to go from link to link, and soon had a database that ate up half of Stanford’s Internet bandwidth. But the purpose was annotating the web, not searching it. “Amazingly, I had no thought of building a search engine. The idea wasn’t even on the radar,” Page said. That asymmetry again.

By now Brin had brought his mathematical expertise and his effervescent personality to Page’s project, named BackRub, then PageRank, and finally Google, a misspelled word for a big number that worked well as a verb. When they began to use it for search, they realized they had a much more intelligent engine than anything on the market, because it ranked sites that the world thought important enough to link to above those that merely happened to contain key words. Page discovered that three of the four biggest search engines could not even find themselves online. As Walter Isaacson has argued:

Their approach was in fact a melding of machine and human intelligence. Their algorithm relied on the billions of human judgments made by people when they created links from their own websites. It was an automated way to tap into the wisdom of humans—in other words, a higher form of human–computer symbiosis.
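The underlying idea fits in a few lines of code. What follows is a generic power-iteration sketch of link-based ranking, my illustration rather than Google’s actual algorithm or implementation; the toy graph and the conventional 0.85 damping factor are assumptions:

```python
# Minimal PageRank-style sketch: a page's rank is fed by the ranks of the
# pages that link to it (toy graph; not Google's implementation).

def pagerank(links, damping=0.85, iters=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outs in links.items():
            targets = outs or pages    # a dangling page spreads its rank evenly
            for q in targets:
                new[q] += damping * rank[page] / len(targets)
        rank = new
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))  # "c" ranks highest: it has the most inbound links
```

On this four-page toy web, “c” comes out on top because every other page links to it: an automated tally of the human judgments Isaacson describes.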

Bit by bit, they tweaked the programs till they got better results. Both Page and Brin wanted to start a proper business, not just invent something that others would profit from, but Stanford insisted they publish, so in 1998 they produced their now famous paper ‘The Anatomy of a Large-Scale Hypertextual Web Search Engine’, which began: “In this paper, we present Google…” With eager backing from venture capitalists they set up in a garage and began to build a business. Only later were they persuaded by the venture capitalist Andy Bechtolsheim to make advertising the central generator of revenue.

Extracted from How Innovation Works: And Why It Flourishes In Freedom.

Matt Ridley is a British journalist and businessman. He is the author of several books, including The Red Queen (1994), Genome (1999), The Rational Optimist (2010), The Evolution of Everything (2015), and How Innovation Works: And Why It Flourishes In Freedom. You can follow him on Twitter at @mattwridley.
