List of top Verbal Ability & Reading Comprehension (VARC) Questions on Reading Comprehension asked in CAT

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
In the summer of 2022, subscribers to the US streaming service HBO MAX were alarmed to discover that dozens of the platform’s offerings – from the Covid-themed heist thriller Locked Down to the recent remake of The Witches – had been quietly removed from the service . . . The news seemed like vindication to those who had long warned that streaming was more about controlling access to the cultural commons than expanding it, as did reports (since denied by the show’s creators) that Netflix had begun editing old episodes of Stranger Things to retroactively improve their visual effects.
What’s less clear is whether the commonly prescribed cure for these cultural ills – a return to the material pleasures of physical media – is the right one. While the makers of Blu-ray discs claim they have a shelf life of 100 years, such statistics remain largely theoretical until they come to pass, and are dependent on storage conditions, not to mention the continued availability of playback equipment. The humble DVD has already proved far less resilient, with many early releases already beginning to deteriorate in quality. Digital movie purchases provide even less security. Any film “bought” on iTunes could disappear if you move to another territory with a different rights agreement and try to redownload it. It’s a bold new frontier in the commodification of art: the birth of the product recall. After a man took to Twitter to bemoan losing access to Cars 2 after moving from Canada to Australia, Apple clarified that users who downloaded films to their devices would retain permanent access to those downloads, even if they relocated to a hemisphere where the [content was] subject to a different set of rights agreements. Thanks to the company’s ironclad digital rights management technology, however, such files cannot be moved or backed up, locking you into watching with your Apple account.
Anyone who does manage to acquire Digital Rights Management-free (DRM-free) copies of their favourite films must nonetheless grapple with ever-changing file format standards, not to mention data decay – the gradual process by which electronic information slowly but surely corrupts. Only the regular migration of files from hard drive to hard drive can delay the inevitable, in a Sisyphean battle against the ravages of digital time.
In a sense, none of this is new. Charlie Chaplin burned the negative of his 1926 film A Woman of the Sea as a tax write-off. Many more films have been lost through accident, negligence or plain indifference. During a heatwave in July 1937, a Fox film vault in New Jersey burned down, destroying a majority of the silent films produced by the studio.
Back then, at least, cinema was defined by its ephemerality: the sense that a film was as good as gone once it left your local cinema. Today, with film studios keen to stress the breadth of their back catalogues (or to put in Hollywood terms, the value of their IPs), audiences may start to wonder why those same studios seem happy to set the vault alight themselves if it’ll help next quarter’s numbers.

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
The job of a peer reviewer is thankless. Collectively, academics spend around 70 million hours every year evaluating each other’s manuscripts on behalf of scholarly journals — and they usually receive no monetary compensation and little if any recognition for their effort. Some do it as a way to keep abreast of developments in their field; some simply see it as a duty to the discipline. Either way, academic publishing would likely crumble without them.
In recent years, some scientists have begun posting their reviews online, mainly to claim credit for their work. Sites like Publons allow researchers to either share entire referee reports or simply list the journals for whom they’ve carried out a review….
The rise of Publons suggests that academics are increasingly placing value on the work of peer review and asking others, such as grant funders, to do the same. While that’s vital in the publish-or-perish culture of academia, there’s also immense value in the data underlying peer review. Sharing peer review data could help journals stamp out fraud, inefficiency, and systemic bias in academic publishing….
Peer review data could also help root out bias. Last year, a study based on peer review data for nearly 24,000 submissions to the biomedical journal eLife found that women and non-Westerners were vastly underrepresented among peer reviewers. Only around one in every five reviewers was female, and less than two percent of reviewers were based in developing countries…. Openly publishing peer review data could perhaps also help journals address another problem in academic publishing: fraudulent peer reviews. For instance, a minority of authors have been known to use phony email addresses to pose as an outside expert and review their own manuscripts.…
Opponents of open peer review commonly argue that confidentiality is vital to the integrity of the review process; referees may be less critical of manuscripts if their reports are published, especially if they are revealing their identities by signing them. Some also hold concerns that open reviewing may deter referees from agreeing to judge manuscripts in the first place, or that they’ll take longer to do so out of fear of scrutiny….
Even when the content of reviews and the identity of reviewers can’t be shared publicly, perhaps journals could share the data with outside researchers for study. Or they could release other figures that wouldn’t compromise the anonymity of reviews but that might answer important questions about how long the reviewing process takes, how many researchers editors have to reach out to on average to find one who will carry out the work, and the geographic distribution of peer reviewers.
Of course, opening up data underlying the reviewing process will not fix peer review entirely, and there may be instances in which there are valid reasons to keep the content of peer reviews hidden and the identity of the referees confidential. But the norm should shift from opacity in all cases to opacity only when necessary.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
[S]pices were a global commodity centuries before European voyages. There was a complex chain of relations, yet consumers had little knowledge of producers and vice versa. Desire for spices helped fuel European colonial empires to create political, military and commercial networks under a single power.
Historians know a fair amount about the supply of spices in Europe during the medieval period – the origins, methods of transportation, the prices – but less about demand. Why go to such extraordinary efforts to procure expensive products from exotic lands? Still, demand was great enough to inspire the voyages of Christopher Columbus and Vasco Da Gama, launching the first fateful wave of European colonialism. . . .
So, why were spices so highly prized in Europe in the centuries from about 1000 to 1500? One widely disseminated explanation for medieval demand for spices was that they covered the taste of spoiled meat. . . . Medieval purchasers consumed meat much fresher than what the average city-dweller in the developed world of today has at hand. However, refrigeration was not available, and some hot spices have been shown to serve as an anti-bacterial agent. Salting, smoking or drying meat were other means of preservation. Most spices used in cooking began as medical ingredients, and throughout the Middle Ages spices were used as both medicines and condiments. Above all, medieval recipes involve the combination of medical and culinary lore in order to balance food's humoral properties and prevent disease. Most spices were hot and dry and so appropriate in sauces to counteract the moist and wet properties supposedly possessed by most meat and fish. . . .
Where spices came from was known in a vague sense centuries before the voyages of Columbus. Just how vague may be judged by looking at medieval world maps . . . To the medieval European imagination, the East was exotic and alluring. Medieval maps often placed India close to the so-called Earthly Paradise, the Garden of Eden described in the Bible.
Geographical knowledge has a lot to do with the perceptions of spices’ relative scarcity and the reasons for their high prices. An example of the varying notions of scarcity is the conflicting information about how pepper is harvested. As far back as the 7th century Europeans thought that pepper in India grew on trees "guarded" by serpents that would bite and poison anyone who attempted to gather the fruit. The only way to harvest pepper was to burn the trees, which would drive the snakes underground. Of course, this bit of lore would explain the shriveled black peppercorns, but not white, pink or other colors.
Spices never had the enduring allure or power of gold and silver or the commercial potential of new products such as tobacco, indigo or sugar. But the taste for spices did continue for a while beyond the Middle Ages. As late as the 17th century, the English and the Dutch were struggling for control of the Spice Islands: Dutch New Amsterdam, or New York, was exchanged by the British for one of the Moluccan Islands where nutmeg was grown.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
The history of any major technological or industrial advance is inevitably shadowed by a less predictable history of unintended consequences and secondary effects — what economists sometimes call “externalities.” Sometimes those consequences are innocuous ones, or even beneficial. Gutenberg invents the printing press, and literacy rates rise, which causes a significant part of the reading public to require spectacles for the first time, which creates a surge of investment in lens-making across Europe, which leads to the invention of the telescope and the microscope.
Oftentimes the secondary effects seem to belong to an entirely different sphere of society. When Willis Carrier hit upon the idea of air-conditioning, the technology was primarily intended for industrial use: ensuring cool, dry air for factories that required low-humidity environments. But…it touched off one of the largest migrations in the history of the United States, enabling the rise of metropolitan areas like Phoenix and Las Vegas that barely existed when Carrier first started tinkering with the idea in the early 1900s.
Sometimes the unintended consequence comes about when consumers use an invention in a surprising way. Edison famously thought his phonograph, which he sometimes called “the talking machine,” would primarily be used to take dictation….But then later innovators… discovered a much larger audience willing to pay for musical recordings made on descendants of Edison’s original invention. In other cases, the original innovation comes into the world disguised as a plaything…the way the animatronic dolls of the mid-1700s inspired Jacquard to invent the first “programmable” loom and Charles Babbage to invent the first machine that fit the modern definition of a computer, setting the stage for the revolution in programmable technology that would transform the 21st century in countless ways.
We live under the gathering storm of modern history’s most momentous unintended consequence….carbon-based climate change. Imagine the vast sweep of inventors whose ideas started the Industrial Revolution, all the entrepreneurs and scientists and hobbyists who had a hand in bringing it about. Line up a thousand of them and ask them all what they had been hoping to do with their work. Not one would say that their intent had been to deposit enough carbon in the atmosphere to create a greenhouse effect that trapped heat at the surface of the planet. And yet here we are.
Ethyl (leaded fuel) and Freon belonged to the same general class of secondary effect: innovations whose unintended consequences stem from some kind of waste by-product that they emit. But the potential health threats of Ethyl (leaded fuel) were visible in the 1920s, unlike, say, the long-term effects of atmospheric carbon build-up in the early days of the Industrial Revolution….
Indeed, it is reasonable to see CFCs (chlorofluorocarbons) as a forerunner of the kind of threat we will most likely face in the coming decades, as it becomes increasingly possible for individuals or small groups to create new scientific advances — through chemistry or biotechnology or materials science — setting off unintended consequences that reverberate on a global scale.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
(. . .) There are three other common drivers for carnivore-human attacks, some of which are more preventable than others. Natural aggression-based conflicts – such as those involving females protecting their young or animals protecting a food source – can often be avoided as long as people stay away from those animals and their food.
Carnivores that recognise humans as a means to get food are a different story. As they become more reliant on human food they might find at campsites or in rubbish bins, they become less avoidant of humans. Losing that instinctive fear response puts them into more situations where they could get into an altercation with a human, which often results in that bear being put down by humans. “A fed bear is a dead bear,” says Servheen, referring to a common saying among biologists and conservationists.
Predatory or predation-related attacks are quite rare, only accounting for 17% of attacks in North America since 1955. They occur when a carnivore views a human as prey and hunts it like it would any other animal it uses for food. (. . .)
Then there are animal attacks provoked by people taking pictures with them or feeding them in natural settings such as national parks which often end with animals being euthanised out of precaution. “Eventually, that animal becomes habituated to people, and [then] bad things happen to the animal. And the folks who initially wanted to make that connection don’t necessarily realise that,” says Christine Wilkinson, a postdoctoral researcher at UC Berkeley, California, who’s been studying coyote-human conflicts.
After conducting countless postmortems on all types of carnivore-human attacks spanning 75 years, Penteriani’s team believes 50% could have been avoided if humans reacted differently. A 2017 study co-authored by Penteriani found that engaging in risky behaviour around large carnivores increases the likelihood of an attack.
Two of the most common risky behaviours are parents leaving their children to play outside unattended and walking an unleashed dog, according to the study. Wilkinson says 66% of coyote attacks involve a dog. “[People] end up in a situation where their dog is being chased, or their dog chases a coyote, or maybe they’re walking their dog near a den that’s marked, and the coyote wants to escort them away,” says Wilkinson.
Experts believe climate change also plays a part in the escalation of human-carnivore conflicts, but the correlation still needs to be ironed out. “As finite resources become scarcer, carnivores and people are coming into more frequent contact, which means that more conflict could occur,” says Jen Miller, international programme specialist for the US Fish & Wildlife Service. For example, she says, there was an uptick in lion attacks in western India during a drought when lions and people were relying on the same water sources.
(. . .) The likelihood of human-carnivore conflicts appears to be higher in areas of low-income countries dominated by vast rural landscapes and farmland, according to Penteriani’s research. “There are a lot of working landscapes in the Global South that are really heterogeneous, that are interspersed with carnivore habitats, forests and savannahs, which creates a lot more opportunity for these encounters, just statistically,” says Wilkinson.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Fears of artificial intelligence (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.
Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures….What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about Chatgpt and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults…
Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it….
What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture…. Of course, the new power of AI could be used for good purposes as well. I won’t dwell on this, because the people who develop AI talk about it enough….
We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI.… Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy….And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy. This text has been generated by a human. Or has it?
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
There is a group in the space community who view the solar system not as an opportunity to expand human potential but as a nature preserve, forever the province of an elite group of scientists and their sanitary robotic probes. These planetary protection advocates [call] for avoiding “harmful contamination” of celestial bodies. Under this regime, NASA incurs great expense sterilizing robotic probes in order to prevent the contamination of entirely theoretical biospheres. . . .
Transporting bacteria would matter if Mars were the vital world once imagined by astronomers who mistook optical illusions for canals. Nobody wants to expose Martians to measles, but sadly, robotic exploration reveals a bleak, rusted landscape, lacking oxygen and flooded with radiation ready to sterilize any Earthly microbes. Simple life might exist underground, or down at the bottom of a deep canyon, but it has been very hard to find with robots. . . . The upsides from human exploration and development of Mars clearly outweigh the welfare of purely speculative Martian fungi. . . .
The other likely targets of human exploration, development, and settlement, our moon and the asteroids, exist in a desiccated, radiation-soaked realm of hard vacuum and extreme temperature variations that would kill nearly anything. It’s also important to note that many international competitors will ignore the demands of these protection extremists in any case. For example, China recently sent a terrarium to the moon and germinated a plant seed—with, unsurprisingly, no protest from its own scientific community. In contrast, when it was recently revealed that a researcher had surreptitiously smuggled super-resilient microscopic tardigrades aboard the ill-fated Israeli Beresheet lunar probe, a firestorm was unleashed within the space community. . . .
NASA’s previous human exploration efforts made no serious attempt at sterility, with little notice. As the Mars expert Robert Zubrin noted in the National Review, U.S. lunar landings did not leave the campsites cleaner than they found them. Apollo’s bacteria-infested litter included bags of feces. Forcing NASA’s proposed Mars exploration to do better, scrubbing everything and hauling out all the trash, would destroy NASA’s human exploration budget and encroach on the agency’s other directorates, too. Getting future astronauts off Mars is enough of a challenge, without trying to tote weeks of waste along as well.
A reasonable compromise is to continue on the course laid out by the U.S. government and the National Research Council, which proposed a system of zones on Mars, some for science only, some for habitation, and some for resource exploitation. This approach minimizes contamination, maximizes scientific exploration . . . Mars presents a stark choice of diverging human futures. We can turn inward, pursuing ever more limited futures while we await whichever natural or manmade disaster will eradicate our species and life on Earth. Alternatively, we can choose to propel our biosphere further into the solar system, simultaneously protecting our home planet and providing a backup plan for the only life we know exists in the universe. Are the lives on Earth worth less than some hypothetical microbe lurking under Martian rocks?
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Moutai has been the global booze sensation of the decade. A bottle of its Flying Fairy, which sold in the 1980s for the equivalent of a dollar, now retails for $400. Moutai’s listed shares have soared by almost 600% in the past five years, outpacing the likes of Amazon. . . . It does this while disregarding every Western marketing mantra. It is not global, has meagre digital sales and does not appeal to millennials. It scores pitifully on environmental, social and governance measures. In the Boy Scout world of Western business it would leave a bad taste, in more ways than one.
Moutai owes its intoxicating success to three factors—not all of them easy to emulate. First, it profits from Chinese nationalism. Moutai is known as the “national liquor”. It was used to raise spirits and disinfect wounds in Mao’s Long March. It was Premier Zhou Enlai’s favourite tipple, shared with Richard Nixon in 1972. Its centuries-old craftsmanship—it is distilled eight times and stored for years in earthenware jars—is a source of national pride. It also claims to be hangover-proof, which would make it an invention to rival gunpowder....
Second, it chose to serve China’s super-rich rather than its middle class. Markets are littered with the corpses of firms that could not compete in the cut-throat battle for Chinese middle-class wallets. And the country’s premium market is massive—at 73m-strong, bigger than the population of France, notes Euan McLeish of Bernstein, an investment firm, and still less crowded with prestige brands than advanced economies. Moutai is to these well-heeled drinkers what vintage champagne is to the rest of the world.....
Third, Moutai looks beyond affluent millennials and digital natives. The elderly and the middle-aged, it found, can be just as lucrative. Its biggest market now is (male) drinkers in their mid-30s. Many have no siblings, thanks to four decades of China’s one-child policy—which also means their elderly parents can splash out on weddings and banquets. Moutai is often a guest of honour.
Moutai has succeeded thanks to nationalism, elitism and ageism, in other words—not in spite of this unholy trinity. But it faces risks. The government is its largest shareholder—and a meddlesome one. It appears to want prices to remain stable. Exorbitantly priced booze is at odds with its professed socialist ideals. Yet minority investors—including many foreign funds—lament that Moutai’s wholesale price is a third of what it sells for in shops. Raising it could boost the company’s profits further. Instead, in what some see as a travesty of corporate governance, its majority owner has plans to set up its own sales channel.....
In the long run, its biggest risk may be millennials. As they grow older, health concerns, work-life balance and the desire for more wholesome pursuits than binge-drinking may curb the “Ganbei!” toasting culture [heavy drinking] on which so much of the demand for Moutai rests. For the time being, though, the party goes on.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Languages become endangered and die out for many reasons. Sadly, the physical annihilation of communities of native speakers of a language is all too often the cause of language extinction. In North America, European colonists brought death and destruction to many Native American communities. This was followed by US federal policies restricting the use of indigenous languages, including the removal of native children from their communities to federal boarding schools where native languages and cultural practices were prohibited. As many as 75 percent of the languages spoken in the territories that became the United States have gone extinct, with slightly better language survival rates in Central and South America . . .
Even without physical annihilation and prohibitions against language use, the language of the "dominant" cultures may drive other languages into extinction; young people see education, jobs, culture and technology associated with the dominant language and focus their attention on that language. The largest language "killers" are English, Spanish, Portuguese, French, Russian, Hindi, and Chinese, all of which have privileged status as dominant languages threatening minority languages. When we lose a language, we lose the worldview, culture and knowledge of the people who spoke it, constituting a loss to all humanity. People around the world live in direct contact with their native environment, their habitat. When the language they speak goes extinct, the rest of humanity loses their knowledge of that environment, their wisdom about the relationship between local plants and illness, their philosophical and religious beliefs as well as their native cultural expression (in music, visual art and poetry) that has enriched both the speakers of that language and others who would have encountered that culture. . . .
As educators deeply immersed in the liberal arts, we believe that educating students broadly in all facets of language and culture . . . yields immense rewards. Some individuals educated in the liberal arts tradition will pursue advanced study in linguistics and become actively engaged in language preservation, setting out for the Amazon, for example, with video recording equipment to interview the last surviving elders in a community to record and document a language spoken by no children.
Certainly, though, the vast majority of students will not pursue this kind of activity. For these students, a liberal arts education is absolutely critical from the twin perspectives of language extinction and global citizenship. When students study languages other than their own, they are sensitized to the existence of different cultural perspectives and practices. With such an education, students are more likely to be able to articulate insights into their own cultural biases, be more empathetic to individuals of other cultures, communicate successfully across linguistic and cultural differences, consider and resolve questions in a way that reflects multiple cultural perspectives, and, ultimately extend support to people, programs, practices, and policies that support the preservation of endangered languages.
There is ample evidence that such preservation can work in languages spiraling toward extinction. For example, Navajo, Cree and Inuit communities have established schools in which these languages are the language of instruction and the number of speakers of each has increased.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Landing in Australia, the British colonists weren’t much impressed with the small-bodied, slender-snouted marsupials called bandicoots. “Their muzzle, which is much too long, gives them an air exceedingly stupid,” one naturalist noted in 1805. They nicknamed one type the “zebra rat” because of its black-striped rump.
Silly-looking or not, though, the zebra rat—the smallest bandicoot, more commonly known today as the western barred bandicoot—exhibited a genius for survival in the harsh outback, where its ancestors had persisted for some 26 million years. Its births were triggered by rainfall in the bone-dry desert. It carried its breath-mint-size babies in a backward-facing pouch so mothers could forage for food and dig shallow, camouflaged shelters.
Still, these adaptations did not prepare the western barred bandicoot for the colonial-era transformation of its ecosystem, particularly the onslaught of imported British animals, from cattle and rabbits that damaged delicate desert vegetation to ravenous house cats that soon developed a taste for bandicoots. Several of the dozen-odd bandicoot species went extinct, and by the 1940s the western barred bandicoot, whose original range stretched across much of the continent, persisted only on two predator-free islands in Shark Bay, off Australia’s western coast.
“Our isolated fauna had simply not been exposed to these predators,” says Reece Pedler, an ecologist with the Wild Deserts conservation program.
Now Wild Deserts is using descendants of those few thousand island survivors, called Shark Bay bandicoots, in a new effort to seed a mainland bandicoot revival. They’ve imported 20 bandicoots to a preserve on the edge of the Strzelecki Desert, in the remote interior of New South Wales. This sanctuary is a challenging place, desolate much of the year, with one of the world’s most mercurial rainfall patterns—relentless droughts followed by sudden drenching floods.
The imported bandicoots occupy two fenced “exclosures,” cleared of invasive rabbits (courtesy of Pedler’s sheepdog) and of feral cats (which slunk off once the rabbits disappeared). A third fenced area contains the program’s Wild Training Zone, where two other rare marsupials (bilbies, a larger type of bandicoot, and mulgaras, a somewhat fearsome fuzzball known for sucking the brains out of prey) currently share terrain with controlled numbers of cats, learning to evade them. It’s unclear whether the Shark Bay bandicoots, which are perhaps even more predator-naive than their now-extinct mainland bandicoot kin, will be able to make that kind of breakthrough.
For now, though, a recent surge of rainfall has led to a bandicoot joey boom, raising the Wild Deserts population to about 100, with other sanctuaries adding to that number. There are also signs of rebirth in the landscape itself. With their constant digging, the bandicoots trap moisture and allow for seed germination so the cattle-damaged desert can restore itself.
They have a new nickname—a flattering one, this time. “We call them ecosystem engineers,” Pedler says.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Oftentimes, when economists cross borders, they are less interested in learning from others than in invading their garden plots. Gary Becker, for instance, pioneered the idea of human capital. To do so, he famously tackled topics like crime and domesticity, applying methods honed in the study of markets to domains of nonmarket life. He projected economics outward into new realms: for example, by revealing the extent to which humans calculate marginal utilities when choosing their spouses or stealing from neighbors. At the same time, he did not let other ways of thinking enter his own economic realm: for example, he did not borrow from anthropology or history or let observations of nonmarket economics inform his homo economicus. Becker was a picture of the imperial economist in the heyday of the discipline’s bravura.
Times have changed for the once almighty discipline. Economics has been taken to task, within and beyond its ramparts. Some economists have reached out, imported, borrowed, and collaborated—been less imperial, more open. Consider Thomas Piketty and his outreach to historians. The booming field of behavioral economics—the fusion of economics and social psychology—is another case. Having spawned active subfields, like judgment, decision making and a turn to experimentation, the field aims to go beyond the caricature of Rational Man to explain how humans make decisions….
It is important to underscore how this flips the way we think about economics. For generations, economists have presumed that people have interests—“preferences,” in the neoclassical argot—that get revealed in the course of peoples’ choices. Interests come before actions and determine them. If you are hungry, you buy lunch; if you are cold, you get a sweater. If you only have so much money and can’t afford to deal with both your growling stomach and your shivering, which need you choose to meet using your scarce savings reveals your preference. Psychologists take one look at this simple formulation and shake their heads. Increasingly, even some mainstream economists have to admit that homo economicus doesn’t always behave like the textbook maximizer; irrational behavior can’t simply be waved away as extra-economic expressions of passions over interests, and thus the domain of other disciplines….
This is one place where the humanist can help the economist. If narrative economics is going to help us understand how rivals duke it out, who wins and who loses, we are going to need much more than lessons from epidemiological studies of viruses or intracranial stimuli. Above all, we need politics and institutions. Shiller [the Nobel prize winning economist] connects perceptions of narratives to changes in behavior and thence to social outcomes. He completes a circle that was key to behavioral economics and brings in storytelling to make sense of how perceptions get framed. This cycle (perception to behavior to society) was once mediated or dominated by institutions: the political parties, lobby groups, and media organizations that played a vital role in legitimating, representing, and excluding interests. Yet institutions have been stripped from Shiller’s account, to reveal a bare dynamic of emotions and economics, without the intermediating place of politics.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
. . . [T]he idea of craftsmanship is not simply nostalgic. . . . Crafts require distinct skills, an all-round approach to work that involves the whole product, rather than individual parts, and an attitude that necessitates devotion to the job and a focus on the communal interest. The concept of craft emphasises the human touch and individual judgment.
Essentially, the crafts concept seems to run against the preponderant ethos of management studies which, as the academics note, have long prioritised efficiency and consistency. . . . Craft skills were portrayed as being primitive and traditionalist.
The contrast between artisanship and efficiency first came to the fore in the 19th century when British manufacturers suddenly faced competition from across the Atlantic as firms developed the “American system” using standardised parts. . . . the worldwide success of the Singer sewing machine showed the potential of a mass-produced device. This process created its own reaction, first in the form of the Arts and Crafts movement of the late 19th century, and then again in the “small is beautiful” movement of the 1970s. A third crafts movement is emerging as people become aware of the environmental impact of conventional industry.
There are two potential markets for those who practise crafts. The first stems from the existence of consumers who are willing to pay a premium price for goods that are deemed to be of extra quality. . . . The second market lies in those consumers who wish to use their purchases to support local workers, or to reduce their environmental impact by taking goods to craftspeople to be mended, or recycled.
For workers, the appeal of craftsmanship is that it allows them the autonomy to make creative choices, and thus makes a job far more satisfying. In that sense, it could offer hope for the overall labour market. Let the machines automate dull and repetitive tasks and let workers focus purely on their skills, judgment and imagination. As a current example, the academics cite the “agile” manifesto in the software sector, an industry at the heart of technological change. The pioneers behind the original agile manifesto promised to prioritise “individuals and interactions over processes and tools”. By bringing together experts from different teams, agile working is designed to improve creativity.
But the broader question is whether crafts can create a lot more jobs than they do today. Demand for crafted products may rise but will it be easy to retrain workers in sectors that might get automated (such as truck drivers) to take advantage? In a world where products and services often have to pass through regulatory hoops, large companies will usually have the advantage.
History also suggests that the link between crafts and creativity is not automatic. Medieval craft guilds were monopolies which resisted new entrants. They were also highly hierarchical with young men required to spend long periods as apprentices and journeymen before they could set up on their own; by that time the innovative spirit may have been knocked out of them. Craft workers can thrive in the modern era, but only if they don’t get too organised.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
The Second Hand September campaign, led by Oxfam . . . seeks to encourage shopping at local organisations and charities as alternatives to fast fashion brands such as Primark and Boohoo in the name of saving our planet. As innocent as mindless scrolling through online shops may seem, such consumers are unintentionally—or perhaps even knowingly—contributing to an industry that uses more energy than aviation. . . .
Brits buy more garments than any other country in Europe, so it comes as no shock that many of those clothes end up in UK landfills each year: 300,000 tonnes of them, to be exact. This waste of clothing is destructive to our planet, releasing greenhouse gasses as clothes are burnt as well as bleeding toxins and dyes into the surrounding soil and water. As ecologist Chelsea Rochman bluntly put it, “The mismanagement of our waste has even come back to haunt us on our dinner plate.”
It’s not surprising, then, that people are scrambling for a solution, the most common of which is second-hand shopping. Retailers selling consigned clothing are currently expanding at a rapid rate . . . If everyone bought just one used item in a year, it would save 449 million lbs of waste, equivalent to the weight of 1 million Polar bears. “Thrifting” has increasingly become a trendy practice. London is home to many second-hand, or more commonly coined ‘vintage’, shops across the city from Bayswater to Brixton.
So you’re cool and you care about the planet; you’ve killed two birds with one stone. But do people simply purchase a second-hand item, flash it on Instagram with #vintage and call it a day without considering whether what they are doing is actually effective?
According to a study commissioned by Patagonia, for instance, older clothes shed more microfibres. These can end up in our rivers and seas after just one wash due to the worn material, thus contributing to microfibre pollution. To break it down, the amount of microfibres released by laundering 100,000 fleece jackets is equivalent to as many as 11,900 plastic grocery bags, and up to 40 per cent of that ends up in our oceans. . . . So where does this leave second-hand consumers? [They would be well advised to buy] high-quality items that shed less and last longer [as this] combats both microfibre pollution and excess garments ending up in landfills. . . .
Luxury brands would rather not circulate their latest season stock around the globe to be sold at a cheaper price, which is why companies like ThredUP, a US fashion resale marketplace, have not yet caught on in the UK. There will always be a market for consignment but there is also a whole generation of people who have been taught that only buying new products is the norm; second-hand luxury goods are not in their psyche. Ben Whitaker, director at Liquidation Firm B-Stock, told Prospect that unless recycling becomes cost-effective and filters into mass production, with the right technology to partner it, “high-end retailers would rather put brand before sustainability.”
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Many human phenomena and characteristics - such as behaviors, beliefs, economies, genes, incomes, life expectancies, and other things - are influenced both by geographic factors and by non-geographic factors. Geographic factors mean physical and biological factors tied to geographic location, including climate, the distributions of wild plant and animal species, soils, and topography. Non-geographic factors include those factors subsumed under the term culture, other factors subsumed under the term history, and decisions by individual people.... [T]he differences between the current economies of North and South Korea ... cannot be attributed to the modest environmental differences between [them] ... They are instead due entirely to the different [government] policies ... At the opposite extreme, the Inuit and other traditional peoples living north of the Arctic Circle developed warm fur clothes but no agriculture, while equatorial lowland peoples around the world never developed warm fur clothes but often did develop agriculture. The explanation is straightforwardly geographic, rather than a cultural or historical quirk unrelated to geography. . . Aboriginal Australia remained the sole continent occupied only by hunter/gatherers and with no indigenous farming or herding ... [Here the] explanation is biogeographic: the Australian continent has no domesticable native animal species and few domesticable native plant species. Instead, the crops and domestic animals that now make Australia a food and wool exporter are all nonnative (mainly Eurasian) species such as sheep, wheat, and grapes, brought to Australia by overseas colonists.
Today, no scholar would be silly enough to deny that culture, history, and individual choices play a big role in many human phenomena. Scholars don't react to cultural, historical, and individual-agent explanations by denouncing "cultural determinism," "historical determinism," or "individual determinism," and then thinking no further. But many scholars do react to any explanation invoking some geographic role, by denouncing "geographic determinism" ... Several reasons may underlie this widespread but nonsensical view. One reason is that some geographic explanations advanced a century ago were racist, thereby causing all geographic explanations to become tainted by racist associations in the minds of many scholars other than geographers. But many genetic, historical, psychological, and anthropological explanations advanced a century ago were also racist, yet the validity of newer non-racist genetic etc. explanations is widely accepted today. Another reason for reflex rejection of geographic explanations is that historians have a tradition, in their discipline, of stressing the role of contingency (a favorite word among historians) based on individual decisions and chance. Often that view is warranted . . . But often, too, that view is unwarranted. The development of warm fur clothes among the Inuit living north of the Arctic Circle was not because one influential Inuit leader persuaded other Inuit in 1783 to adopt warm fur clothes, for no good environmental reason. A third reason is that geographic explanations usually depend on detailed technical facts of geography and other fields of scholarship ... Most historians and economists don't acquire that detailed knowledge as part of the professional training.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
[Fifty] years after its publication in English [in 1972], and just a year since [Marshall] Sahlins himself died—we may ask: why did [his essay] "Original Affluent Society" have such an impact, and how has it fared since? ... Sahlins's principal argument was simple but counterintuitive: before being driven into marginal environments by colonial powers, hunter-gatherers, or foragers, were not engaged in a desperate struggle for meager survival. Quite the contrary, they satisfied their needs with far less work than people in agricultural and industrial societies, leaving them more time to use as they wished. Hunters, he quipped, keep bankers' hours. Refusing to maximize, many were "more concerned with games of chance than with chances of game." . . . The so-called Neolithic Revolution, rather than improving life, imposed a harsher work regime and set in motion the long history of growing inequality ...
Moreover, foragers had other options. The contemporary Hadza of Tanzania, who had long been surrounded by farmers, knew they had alternatives and rejected them. To Sahlins, this showed that foragers are not simply examples of human diversity or victimhood but something more profound: they demonstrated that societies make real choices. Culture, a way of living oriented around a distinctive set of values, manifests a fundamental principle of collective self-determination. . .
But the point [of the essay] is not so much the empirical validity of the data - the real interest for most readers, after all, is not in foragers either today or in the Paleolithic - but rather its conceptual challenge to contemporary economic life and bourgeois individualism. The empirical served a philosophical and political project, a thought experiment and stimulus to the imagination of possibilities.
With its title's nod toward The Affluent Society (1958), economist John Kenneth Galbraith's famously skeptical portrait of America's postwar prosperity and inequality, and dripping with New Left contempt for consumerism, "The Original Affluent Society" brought this critical perspective to bear on the contemporary world. It did so through the classic anthropological move of showing that radical alternatives to the readers' lives really exist. If the capitalist world seeks wealth through ever greater material production to meet infinitely expansive desires, foraging societies follow "the Zen road to affluence": not by getting more, but by wanting less. If it seems that foragers have been left behind by "progress," this is due only to the ethnocentric self-congratulation of the West. Rather than accumulate material goods, these societies are guided by other values: leisure, mobility, and above all, freedom. . .
Viewed in today's context, of course, not every aspect of the essay has aged well. While acknowledging the violence of colonialism, racism, and dispossession, it does not thematize them as heavily as we might today. Rebuking evolutionary anthropologists for treating present-day foragers as "left behind" by progress, it too can succumb to the temptation to use them as proxies for the Paleolithic. Yet these characteristics should not distract us from appreciating Sahlins's effort to show that if we want to conjure new possibilities, we need to learn about actually inhabitable worlds.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Steven Pinker's new book, "Rationality: What It Is, Why It Seems Scarce, Why It Matters," offers a pragmatic dose of measured optimism, presenting rationality as a fragile but achievable ideal in personal and civic life. ... Pinker's ambition to illuminate such a crucial topic offers the welcome prospect of a return to sanity. ... It's no small achievement to make formal logic, game theory, statistics and Bayesian reasoning delightful topics full of charm and relevance.
It's also plausible to believe that a wider application of the rational tools he analyzes would improve the world in important ways. His primer on statistics and scientific uncertainty is particularly timely and should be required reading before consuming any news about the [COVID] pandemic. More broadly, he argues that less media coverage of shocking but vanishingly rare events, from shark attacks to adverse vaccine reactions, would help prevent dangerous overreactions, fatalism and the diversion of finite resources away from solvable but less-dramatic issues, like malnutrition in the developing world.
It's a reasonable critique, and Pinker is not the first to make it. But analyzing the political economy of journalism - its funding structures, ownership concentration and increasing reliance on social media shares - would have given a fuller picture of why so much coverage is so misguided and what we might do about it.
Pinker's main focus is the sort of conscious, sequential reasoning that can track the steps in a geometric proof or an argument in formal logic. Skill in this domain maps directly onto the navigation of many real-world problems, and Pinker shows how greater mastery of the tools of rationality can improve decision-making in medical, legal, financial and many other contexts in which we must act on uncertain and shifting information. ..
Despite the undeniable power of the sort of rationality he describes, many of the deepest insights in the history of science, math, music and art strike their originators in moments of epiphany. From the 19th-century chemist Friedrich August Kekulé's discovery of the structure of benzene to any of Mozart's symphonies, much extraordinary human achievement is not a product of conscious, sequential reasoning. Even Plato's Socrates - who anticipated many of Pinker's points by nearly 2,500 years, showing the virtue of knowing what you do not know and examining all premises in arguments, not simply trusting speakers' authority or charisma - attributed many of his most profound insights to dreams and visions. Conscious reasoning is helpful in sorting the wheat from the chaff, but it would be interesting to consider the hidden aquifers that make much of the grain grow in the first place.
The role of moral and ethical education in promoting rational behavior is also underexplored. Pinker recognizes that rationality "is not just a cognitive virtue but a moral one." But this profoundly important point, one subtly explored . . .
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question. 
The biggest challenge [The Nutmeg's Curse by Ghosh] throws down is to the prevailing understanding of when the climate crisis started. Most of us have accepted ... that it started with the widespread use of coal at the beginning of the Industrial Age in the 18th century and worsened with the mass adoption of oil and natural gas in the 20th.
Ghosh takes this history at least three centuries back, to the start of European colonialism in the 15th century. He [starts] the book with a 1621 massacre by Dutch invaders determined to impose a monopoly on nutmeg cultivation and trade in the Banda islands in today's Indonesia. Not only do the Dutch systematically depopulate the islands through genocide, they also try their best to bring nutmeg cultivation into plantation mode. These are the two points to which Ghosh returns through examples from around the world. One, how European colonialists decimated not only indigenous populations but also indigenous understanding of the relationship between humans and Earth. Two, how this was an invasion not only of humans but of the Earth itself, and how this continues to the present day by looking at nature as a 'resource' to exploit. ... 
We know we are facing more frequent and more severe heatwaves, storms, floods, droughts and wildfires due to climate change. We know our expansion through deforestation, dam building, canal cutting - in short, terraforming, the word Ghosh uses - has brought us repeated disasters ... Are these the responses of an angry Gaia who has finally had enough? By using the word 'curse' in the title, the author makes it clear that he thinks so. I use the pronoun 'who' knowingly, because Ghosh has quoted many non-European sources to enquire into the relationship between humans and the world around them so that he can question the prevalent way of looking at Earth as an inert object to be exploited to the maximum. 
As Ghosh's text, notes and bibliography show once more, none of this is new. There have always been challenges to the way European colonialists looked at other civilisations and at Earth. It is just that the invaders and their myriad backers in the fields of economics, politics, anthropology, philosophy, literature, technology, physics, chemistry, biology have dominated global intellectual discourse.... 
There are other points of view that we can hear today if we listen hard enough. Those observing global climate negotiations know about the Latin American way of looking at Earth as Pachamama (Earth Mother). They also know how such a framing is just provided lip service and is ignored in the substantive portions of the negotiations. In The Nutmeg's Curse, Ghosh explains why. He shows the extent of the vested interest in the oil economy - not only for oil exporting countries, but also for a superpower like the US that controls oil drilling, oil prices and oil movement around the world. Many of us know power utilities are sabotaging decentralised solar power generation today because it hits their revenues and control. And how the other points of view are so often drowned out.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question. 
In 2006, the Met [art museum in the US] agreed to return the Euphronios krater, a masterpiece Greek urn that had been a museum draw since 1972. In 2007, the Getty [art museum in the US] agreed to return 40 objects to Italy, including a marble Aphrodite, in the midst of looting scandals. And in December, Sotheby's and a private owner agreed to return an ancient Khmer statue of a warrior, pulled from auction two years before, to Cambodia. 
Cultural property, or patrimony, laws limit the transfer of cultural property outside the source country's territory, including outright export prohibitions and national ownership laws. Most art historians, archaeologists, museum officials and policymakers portray cultural property laws in general as invaluable tools for counteracting the ugly legacy of Western cultural imperialism. 
During the late 19th and early 20th century - an era former Met director Thomas Hoving called "the age of piracy" - American and European art museums acquired antiquities by hook or by crook, from grave robbers or souvenir collectors, bounty from digs and ancient sites in impoverished but art-rich source countries. Patrimony laws were intended to protect future archaeological discoveries against Western imperialist designs. ...
I surveyed 90 countries with one or more archaeological sites on UNESCO's World Heritage Site list, and my study shows that in most cases the number of discovered sites diminishes sharply after a country passes a cultural property law. There are 222 archaeological sites listed for those 90 countries. When you look into the history of the sites, you see that all but 21 were discovered before the passage of cultural property laws. ... Strict cultural patrimony laws are popular in most countries. But the downside may be that they reduce incentives for foreign governments, nongovernmental organizations and educational institutions to invest in overseas exploration because their efforts will not necessarily be rewarded by opportunities to hold, display and study what is uncovered. To the extent that source countries can fund their own archaeological projects, artifacts and sites may still be discovered. . . . The survey has far-reaching implications. It suggests that source countries, particularly in the developing world, should narrow their cultural property laws so that they can reap the benefits of new archaeological discoveries, which typically increase tourism and enhance cultural pride. This does not mean these nations should abolish restrictions on foreign excavation and foreign claims to artifacts. 
China provides an interesting alternative approach for source nations eager for foreign archaeological investment. From 1935 to 2003, China had a restrictive cultural property law that prohibited foreign ownership of Chinese cultural artifacts. In those years, China's most significant archaeological discovery occurred by chance, in 1974, when peasant farmers accidentally uncovered ranks of buried terra cotta warriors, which are part of Emperor Qin's spectacular tomb system. 
In 2003, the Chinese government switched course, dropping its cultural property law and embracing collaborative international archaeological research. Since then, China has nominated 11 archaeological sites for inclusion in the World Heritage Site list, including eight in 2013, the most ever for China.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
For early postcolonial literature, the world of the novel was often the nation. Postcolonial novels were usually [concerned with] national questions. Sometimes the whole story of the novel was taken as an allegory of the nation, whether India or Tanzania. This was important for supporting anti-colonial nationalism, but could also be limiting - land-focused and inward looking.
My new book "Writing Ocean Worlds" explores another kind of world of the novel: not the village or nation, but the Indian Ocean world. The book describes a set of novels in which the Indian Ocean is at the centre of the story. It focuses on the novelists Amitav Ghosh, Abdulrazak Gurnah, Lindsey Collen and Joseph Conrad [who have] centred the Indian Ocean world in the majority of their novels. . . Their work reveals a world that is outward-looking, full of movement, border-crossing and south-south interconnection. They are all very different - from colonially inclined (Conrad) to radically anti-capitalist (Collen) - but together draw on and shape a wider sense of Indian Ocean space through themes, images, metaphors and language. This has the effect of remapping the world in the reader's mind, as centred in the interconnected global south. ... The Indian Ocean world is a term used to describe the very long-lasting connections among the coasts of East Africa, the Arab coasts, and South and East Asia.
These connections were made possible by the geography of the Indian Ocean. For much of history, travel by sea was much easier than by land, which meant that port cities very far apart were often more easily connected to each other than to much closer inland cities. Historical and archaeological evidence suggests that what we now call globalisation first appeared in the Indian Ocean. This is the interconnected oceanic world referenced and produced by the novels in my book. For their part Ghosh, Gurnah, Collen and even Conrad reference a different set of histories and geographies than the ones most commonly found in fiction in English. Those [commonly found ones] are mostly centred in Europe or the US, assume a background of Christianity and whiteness, and mention places like Paris and New York. The novels in [my] book highlight instead a largely Islamic space, feature characters of colour and centralise the ports of Malindi, Mombasa, Aden, Java and Bombay. . . . It is a densely imagined, richly sensory image of a southern cosmopolitan culture which provides for an enlarged sense of place in the world.
This remapping is particularly powerful for the representation of Africa. In the fiction, sailors and travellers are not all European. . . African, as well as Indian and Arab characters, are traders, nakhodas (dhow ship captains), runaways, villains, missionaries and activists. This does not mean that Indian Ocean Africa is romanticised. Migration is often a matter of force; travel is portrayed as abandonment rather than adventure, freedoms are kept from women and slavery is rife. What it does mean is that the African part of the Indian Ocean world plays an active role in its long, rich history and therefore in that of the wider world.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Understanding romantic aesthetics is not a simple undertaking for reasons that are internal to the nature of the subject. Distinguished scholars, such as Arthur Lovejoy, Northrop Frye and Isaiah Berlin, have remarked on the notorious challenges facing any attempt to define romanticism. Lovejoy, for example, claimed that romanticism is "the scandal of literary history and criticism"... The main difficulty in studying the romantics, according to him, is the lack of any "single real entity, or type of entity" that the concept "romanticism" designates. Lovejoy concluded, "the word 'romantic' has come to mean so many things that, by itself, it means nothing"...
The more specific task of characterizing romantic aesthetics adds to these difficulties an air of paradox. Conventionally, "aesthetics" refers to a theory concerning beauty and art or the branch of philosophy that studies these topics. However, many of the romantics rejected the identification of aesthetics with a circumscribed domain of human life that is separated from the practical and theoretical domains of life. The most characteristic romantic commitment is to the idea that the character of art and beauty and of our engagement with them should shape all aspects of human life. Being fundamental to human existence, beauty and art should be a central ingredient not only in a philosophical or artistic life, but also in the lives of ordinary men and women. Another challenge for any attempt to characterize romantic aesthetics lies in the fact that most of the romantics were poets and artists whose views of art and beauty are, for the most part, to be found not in developed theoretical accounts, but in fragments, aphorisms and poems, which are often more elusive and suggestive than conclusive.
Nevertheless, in spite of these challenges the task of characterizing romantic aesthetics is neither impossible nor undesirable, as numerous thinkers responding to Lovejoy's radical skepticism have noted. While warning against a reductive definition of romanticism, Berlin, for example, still heralded the need for a general characterization: "[Although] one does have a certain sympathy with Lovejoy's despair...[he is] in this instance mistaken. There was a romantic movement...and it is important to discover what it is" ...
Recent attempts to characterize romanticism and to stress its contemporary relevance follow this path. Instead of overlooking the undeniable differences between the variety of romanticisms of different nations that Lovejoy had stressed, such studies attempt to characterize romanticism, not in terms of a single definition, a specific time, or a specific place, but in terms of "particular philosophical questions and concerns" ...
While the German, British and French romantics are all considered, the central protagonists in the following are the German romantics. Two reasons explain this focus: first, because it has paved the way for the other romanticisms, German romanticism has a pride of place among the different national romanticisms ... Second, the aesthetic outlook that was developed in Germany roughly between 1796 and 1801-02 - the period that corresponds to the heyday of what is known as "Early Romanticism" ... - offers the most philosophical expression of romanticism since it is grounded primarily in the epistemological, metaphysical, ethical, and political concerns that the German romantics discerned in the aftermath of Kant's philosophy.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Umberto Eco, an Italian writer, was right when he said the language of Europe is translation. Netflix and other deep-pocketed global firms speak it well. Just as the EU employs a small army of translators and interpreters to turn intricate laws or impassioned speeches of Romanian MEPs into the EU’s 24 official languages, so do the likes of Netflix. It now offers dubbing in 34 languages and subtitling in a few more. . . .
The economics of European productions are more appealing, too. American audiences are more willing than before to give dubbed or subtitled viewing a chance. This means shows such as “Lupin”, a French crime caper on Netflix, can become global hits. . . . In 2015, about 75% of Netflix’s original content was American; now the figure is half, according to Ampere, a media-analysis company. Netflix has about 100 productions under way in Europe, which is more than big public broadcasters in France or Germany. . . .
Not everything works across borders. Comedy sometimes struggles. Whodunits and bloodthirsty maelstroms between arch Romans and uppity tribesmen have a more universal appeal. Some do it better than others. Barbarians aside, German television is not always built for export, says one executive, being polite. A bigger problem is that national broadcasters still dominate. Streaming services, such as Netflix or Disney+, account for about a third of all viewing hours, even in markets where they are well-established. Europe is an ageing continent. The generation of teens staring at phones is outnumbered by their elders who prefer to gawp at the box.
In Brussels and national capitals, the prospect of Netflix as a cultural hegemon is seen as a threat. “Cultural sovereignty” is the watchword of European executives worried that the Americans will eat their lunch. To be fair, Netflix content sometimes seems stuck in an uncanny valley somewhere in the mid-Atlantic, with local quirks stripped out. Netflix originals tend to have fewer specific cultural references than shows produced by domestic rivals, according to Enders, a market analyst. The company used to have an imperial model of commissioning, with executives in Los Angeles cooking up ideas French people might like. Now Netflix has offices across Europe. But ultimately the big decisions rest with American executives. This makes European politicians nervous.
They should not be. An irony of European integration is that it is often American companies that facilitate it. Google Translate makes European newspapers comprehensible, even if a little clunky, for the continent’s non-polyglots. American social-media companies make it easier for Europeans to talk politics across borders. (That they do not always like to hear what they say about each other is another matter.) Now Netflix and friends pump the same content into homes across a continent, making culture a cross-border endeavour, too. If Europeans are to share a currency, bail each other out in times of financial need and share vaccines in a pandemic, then they need to have something in common—even if it is just bingeing on the same series. Watching fictitious northern and southern Europeans tear each other apart 2,000 years ago beats doing so in reality.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Over the past four centuries liberalism has been so successful that it has driven all its opponents off the battlefield. Now it is disintegrating, destroyed by a mix of hubris and internal contradictions, according to Patrick Deneen, a professor of politics at the University of Notre Dame. . . . Equality of opportunity has produced a new meritocratic aristocracy that has all the aloofness of the old aristocracy with none of its sense of noblesse oblige. Democracy has degenerated into a theatre of the absurd. And technological advances are reducing ever more areas of work into meaningless drudgery. “The gap between liberalism’s claims about itself and the lived reality of the citizenry” is now so wide that “the lie can no longer be accepted,” Mr Deneen writes. What better proof of this than the vision of 1,000 private planes whisking their occupants to Davos to discuss the question of “creating a shared future in a fragmented world”? . . .
Deneen does an impressive job of capturing the current mood of disillusionment, echoing left-wing complaints about rampant commercialism, right-wing complaints about narcissistic and bullying students, and general worries about atomisation and selfishness. But when he concludes that all this adds up to a failure of liberalism, is his argument convincing? . . . He argues that the essence of liberalism lies in freeing individuals from constraints. In fact, liberalism contains a wide range of intellectual traditions which provide different answers to the question of how to trade off the relative claims of rights and responsibilities, individual expression and social ties. . . . liberals experimented with a range of ideas from devolving power from the centre to creating national education systems.
Mr Deneen’s fixation on the essence of liberalism leads to the second big problem of his book: his failure to recognise liberalism’s ability to reform itself and address its internal problems. The late 19th century saw America suffering from many of the problems that are reappearing today, including the creation of a business aristocracy, the rise of vast companies, the corruption of politics and the sense that society was dividing into winners and losers. But a wide variety of reformers, working within the liberal tradition, tackled these problems head on. Theodore Roosevelt took on the trusts. Progressives cleaned up government corruption. University reformers modernised academic syllabuses and built ladders of opportunity. Rather than dying, liberalism reformed itself.
Mr Deneen is right to point out that the record of liberalism in recent years has been dismal. He is also right to assert that the world has much to learn from the premodern notions of liberty as self-mastery and self-denial. The biggest enemy of liberalism is not so much atomisation but old-fashioned greed, as members of the Davos elite pile their plates ever higher with perks and share options. But he is wrong to argue that the only way for people to liberate themselves from the contradictions of liberalism is “liberation from liberalism itself”. The best way to read “Why Liberalism Failed” is not as a funeral oration but as a call to action: up your game, or else.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
The Positivists, anxious to stake out their claim for history as a science, contributed the weight of their influence to the cult of facts. First ascertain the facts, said the positivists, then draw your conclusions from them. . . . This is what may [be] called the common-sense view of history. History consists of a corpus of ascertained facts. The facts are available to the historian in documents, inscriptions, and so on . . . [Sir George Clark] contrasted the "hard core of facts" in history with the surrounding pulp of disputable interpretation, forgetting perhaps that the pulpy part of the fruit is more rewarding than the hard core. . . . It recalls the favourite dictum of the great liberal journalist C. P. Scott: "Facts are sacred, opinion is free." . . .
What is a historical fact? . . . According to the common-sense view, there are certain basic facts which are the same for all historians and which form, so to speak, the backbone of history—the fact, for example, that the Battle of Hastings was fought in 1066. But this view calls for two observations. In the first place, it is not with facts like these that the historian is primarily concerned. It is no doubt important to know that the great battle was fought in 1066 and not in 1065 or 1067, and that it was fought at Hastings and not at Eastbourne or Brighton. The historian must not get these things wrong. But [to] praise a historian for his accuracy is like praising an architect for using well-seasoned timber or properly mixed concrete in his building. It is a necessary condition of his work, but not his essential function. It is precisely for matters of this kind that the historian is entitled to rely on what have been called the "auxiliary sciences" of history—archaeology, epigraphy, numismatics, chronology, and so forth. . . .
The second observation is that the necessity to establish these basic facts rests not on any quality in the facts themselves, but on an a priori decision of the historian. In spite of C. P. Scott's motto, every journalist knows today that the most effective way to influence opinion is by the selection and arrangement of the appropriate facts. It used to be said that facts speak for themselves. This is, of course, untrue. The facts speak only when the historian calls on them: it is he who decides to which facts to give the floor, and in what order or context. . . . The only reason why we are interested to know that the battle was fought at Hastings in 1066 is that historians regard it as a major historical event. . . . Professor Talcott Parsons once called [science] "a selective system of cognitive orientations to reality." It might perhaps have been put more simply. But history is, among other things, that. The historian is necessarily selective. The belief in a hard core of historical facts existing objectively and independently of the interpretation of the historian is a preposterous fallacy, but one which it is very hard to eradicate.