List of top Reading Comprehension questions asked in the CAT Verbal Ability & Reading Comprehension (VARC) section

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
The Second Hand September campaign, led by Oxfam . . . seeks to encourage shopping at local organisations and charities as alternatives to fast fashion brands such as Primark and Boohoo in the name of saving our planet. As innocent as mindless scrolling through online shops may seem, such consumers are unintentionally—or perhaps even knowingly—contributing to an industry that uses more energy than aviation. . . .
Brits buy more garments than any other country in Europe, so it comes as no shock that many of those clothes end up in UK landfills each year: 300,000 tonnes of them, to be exact. This waste of clothing is destructive to our planet, releasing greenhouse gases as clothes are burnt as well as bleeding toxins and dyes into the surrounding soil and water. As ecologist Chelsea Rochman bluntly put it, “The mismanagement of our waste has even come back to haunt us on our dinner plate.”
It’s not surprising, then, that people are scrambling for a solution, the most common of which is second-hand shopping. Retailers selling consigned clothing are currently expanding at a rapid rate . . . If everyone bought just one used item in a year, it would save 449 million lbs of waste, equivalent to the weight of 1 million polar bears. “Thrifting” has increasingly become a trendy practice. London is home to many second-hand, or more commonly coined ‘vintage’, shops across the city from Bayswater to Brixton.
So you’re cool and you care about the planet; you’ve killed two birds with one stone. But do people simply purchase a second-hand item, flash it on Instagram with #vintage and call it a day without considering whether what they are doing is actually effective?
According to a study commissioned by Patagonia, for instance, older clothes shed more microfibres. These can end up in our rivers and seas after just one wash due to the worn material, thus contributing to microfibre pollution. To break it down, the amount of microfibres released by laundering 100,000 fleece jackets is equivalent to as many as 11,900 plastic grocery bags, and up to 40 per cent of that ends up in our oceans. . . . So where does this leave second-hand consumers? [They would be well advised to buy] high-quality items that shed less and last longer [as this] combats both microfibre pollution and excess garments ending up in landfills. . . .
Luxury brands would rather not circulate their latest season stock around the globe to be sold at a cheaper price, which is why companies like ThredUP, a US fashion resale marketplace, have not yet caught on in the UK. There will always be a market for consignment but there is also a whole generation of people who have been taught that only buying new products is the norm; second-hand luxury goods are not in their psyche. Ben Whitaker, director at Liquidation Firm B-Stock, told Prospect that unless recycling becomes cost-effective and filters into mass production, with the right technology to partner it, “high-end retailers would rather put brand before sustainability.”
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Many human phenomena and characteristics - such as behaviors, beliefs, economies, genes, incomes, life expectancies, and other things - are influenced both by geographic factors and by non-geographic factors. Geographic factors mean physical and biological factors tied to geographic location, including climate, the distributions of wild plant and animal species, soils, and topography. Non-geographic factors include those factors subsumed under the term culture, other factors subsumed under the term history, and decisions by individual people.... [T]he differences between the current economies of North and South Korea ... cannot be attributed to the modest environmental differences between [them] ... They are instead due entirely to the different [government] policies ... At the opposite extreme, the Inuit and other traditional peoples living north of the Arctic Circle developed warm fur clothes but no agriculture, while equatorial lowland peoples around the world never developed warm fur clothes but often did develop agriculture. The explanation is straightforwardly geographic, rather than a cultural or historical quirk unrelated to geography. . . Aboriginal Australia remained the sole continent occupied only by hunter/gatherers and with no indigenous farming or herding ... [Here the] explanation is biogeographic: the Australian continent has no domesticable native animal species and few domesticable native plant species. Instead, the crops and domestic animals that now make Australia a food and wool exporter are all nonnative (mainly Eurasian) species such as sheep, wheat, and grapes, brought to Australia by overseas colonists.
Today, no scholar would be silly enough to deny that culture, history, and individual choices play a big role in many human phenomena. Scholars don't react to cultural, historical, and individual-agent explanations by denouncing "cultural determinism," "historical determinism," or "individual determinism," and then thinking no further. But many scholars do react to any explanation invoking some geographic role by denouncing "geographic determinism" ... Several reasons may underlie this widespread but nonsensical view. One reason is that some geographic explanations advanced a century ago were racist, thereby causing all geographic explanations to become tainted by racist associations in the minds of many scholars other than geographers. But many genetic, historical, psychological, and anthropological explanations advanced a century ago were also racist, yet the validity of newer non-racist genetic etc. explanations is widely accepted today. Another reason for reflex rejection of geographic explanations is that historians have a tradition, in their discipline, of stressing the role of contingency (a favorite word among historians) based on individual decisions and chance. Often that view is warranted . . . But often, too, that view is unwarranted. The development of warm fur clothes among the Inuit living north of the Arctic Circle was not because one influential Inuit leader persuaded other Inuit in 1783 to adopt warm fur clothes, for no good environmental reason. A third reason is that geographic explanations usually depend on detailed technical facts of geography and other fields of scholarship ... Most historians and economists don't acquire that detailed knowledge as part of their professional training.
[Fifty] years after its publication in English [in 1972], and just a year since [Marshall] Sahlins himself died—we may ask: why did [his essay] "Original Affluent Society" have such an impact, and how has it fared since? ... Sahlins's principal argument was simple but counterintuitive: before being driven into marginal environments by colonial powers, hunter-gatherers, or foragers, were not engaged in a desperate struggle for meager survival. Quite the contrary, they satisfied their needs with far less work than people in agricultural and industrial societies, leaving them more time to use as they wished. Hunters, he quipped, keep bankers' hours. Refusing to maximize, many were "more concerned with games of chance than with chances of game." . . . The so-called Neolithic Revolution, rather than improving life, imposed a harsher work regime and set in motion the long history of growing inequality ...
Moreover, foragers had other options. The contemporary Hadza of Tanzania, who had long been surrounded by farmers, knew they had alternatives and rejected them. To Sahlins, this showed that foragers are not simply examples of human diversity or victimhood but something more profound: they demonstrated that societies make real choices. Culture, a way of living oriented around a distinctive set of values, manifests a fundamental principle of collective self-determination. . .
But the point [of the essay] is not so much the empirical validity of the data - the real interest for most readers, after all, is not in foragers either today or in the Paleolithic - but rather its conceptual challenge to contemporary economic life and bourgeois individualism. The empirical served a philosophical and political project, a thought experiment and stimulus to the imagination of possibilities.
With its title's nod toward The Affluent Society (1958), economist John Kenneth Galbraith's famously skeptical portrait of America's postwar prosperity and inequality, and dripping with New Left contempt for consumerism, "The Original Affluent Society" brought this critical perspective to bear on the contemporary world. It did so through the classic anthropological move of showing that radical alternatives to the readers' lives really exist. If the capitalist world seeks wealth through ever greater material production to meet infinitely expansive desires, foraging societies follow "the Zen road to affluence": not by getting more, but by wanting less. If it seems that foragers have been left behind by "progress," this is due only to the ethnocentric self-congratulation of the West. Rather than accumulate material goods, these societies are guided by other values: leisure, mobility, and above all, freedom. . .
Viewed in today's context, of course, not every aspect of the essay has aged well. While acknowledging the violence of colonialism, racism, and dispossession, it does not thematize them as heavily as we might today. Rebuking evolutionary anthropologists for treating present-day foragers as "left behind" by progress, it too can succumb to the temptation to use them as proxies for the Paleolithic. Yet these characteristics should not distract us from appreciating Sahlins's effort to show that if we want to conjure new possibilities, we need to learn about actually inhabitable worlds.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Steven Pinker's new book, "Rationality: What It Is, Why It Seems Scarce, Why It Matters," offers a pragmatic dose of measured optimism, presenting rationality as a fragile but achievable ideal in personal and civic life. ... Pinker's ambition to illuminate such a crucial topic offers the welcome prospect of a return to sanity. ... It's no small achievement to make formal logic, game theory, statistics and Bayesian reasoning delightful topics full of charm and relevance.
It's also plausible to believe that a wider application of the rational tools he analyzes would improve the world in important ways. His primer on statistics and scientific uncertainty is particularly timely and should be required reading before consuming any news about the [COVID] pandemic. More broadly, he argues that less media coverage of shocking but vanishingly rare events, from shark attacks to adverse vaccine reactions, would help prevent dangerous overreactions, fatalism and the diversion of finite resources away from solvable but less-dramatic issues, like malnutrition in the developing world.
It's a reasonable critique, and Pinker is not the first to make it. But analyzing the political economy of journalism - its funding structures, ownership concentration and increasing reliance on social media shares - would have given a fuller picture of why so much coverage is so misguided and what we might do about it.
Pinker's main focus is the sort of conscious, sequential reasoning that can track the steps in a geometric proof or an argument in formal logic. Skill in this domain maps directly onto the navigation of many real-world problems, and Pinker shows how greater mastery of the tools of rationality can improve decision-making in medical, legal, financial and many other contexts in which we must act on uncertain and shifting information. . . .
Despite the undeniable power of the sort of rationality he describes, many of the deepest insights in the history of science, math, music and art strike their originators in moments of epiphany. From the 19th-century chemist Friedrich August Kekulé's discovery of the structure of benzene to any of Mozart's symphonies, much extraordinary human achievement is not a product of conscious, sequential reasoning. Even Plato's Socrates - who anticipated many of Pinker's points by nearly 2,500 years, showing the virtue of knowing what you do not know and examining all premises in arguments, not simply trusting speakers' authority or charisma - attributed many of his most profound insights to dreams and visions. Conscious reasoning is helpful in sorting the wheat from the chaff, but it would be interesting to consider the hidden aquifers that make much of the grain grow in the first place.
The role of moral and ethical education in promoting rational behavior is also underexplored. Pinker recognizes that rationality "is not just a cognitive virtue but a moral one." But this profoundly important point, one subtly explored . . .
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question. 
The biggest challenge [The Nutmeg's Curse by Ghosh] throws down is to the prevailing understanding of when the climate crisis started. Most of us have accepted ... that it started with the widespread use of coal at the beginning of the Industrial Age in the 18th century and worsened with the mass adoption of oil and natural gas in the 20th.
Ghosh takes this history at least three centuries back, to the start of European colonialism in the 15th century. He [starts] the book with a 1621 massacre by Dutch invaders determined to impose a monopoly on nutmeg cultivation and trade in the Banda islands in today's Indonesia. Not only do the Dutch systematically depopulate the islands through genocide, they also try their best to bring nutmeg cultivation into plantation mode. These are the two points to which Ghosh returns through examples from around the world. One, how European colonialists decimated not only indigenous populations but also indigenous understanding of the relationship between humans and Earth. Two, how this was an invasion not only of humans but of the Earth itself, and how this continues to the present day by looking at nature as a 'resource' to exploit. ... 
We know we are facing more frequent and more severe heatwaves, storms, floods, droughts and wildfires due to climate change. We know our expansion through deforestation, dam building, canal cutting - in short, terraforming, the word Ghosh uses - has brought us repeated disasters ... Are these the responses of an angry Gaia who has finally had enough? By using the word 'curse' in the title, the author makes it clear that he thinks so. I use the pronoun 'who' knowingly, because Ghosh has quoted many non-European sources to enquire into the relationship between humans and the world around them so that he can question the prevalent way of looking at Earth as an inert object to be exploited to the maximum. 
As Ghosh's text, notes and bibliography show once more, none of this is new. There have always been challenges to the way European colonialists looked at other civilisations and at Earth. It is just that the invaders and their myriad backers in the fields of economics, politics, anthropology, philosophy, literature, technology, physics, chemistry, biology have dominated global intellectual discourse.... 
There are other points of view that we can hear today if we listen hard enough. Those observing global climate negotiations know about the Latin American way of looking at Earth as Pachamama (Earth Mother). They also know how such a framing is paid mere lip service and ignored in the substantive portions of the negotiations. In The Nutmeg's Curse, Ghosh explains why. He shows the extent of the vested interest in the oil economy - not only for oil exporting countries, but also for a superpower like the US that controls oil drilling, oil prices and oil movement around the world. Many of us know power utilities are sabotaging decentralised solar power generation today because it hits their revenues and control. And how the other points of view are so often drowned out.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question. 
In 2006, the Met [art museum in the US] agreed to return the Euphronios krater, a masterpiece Greek urn that had been a museum draw since 1972. In 2007, the Getty [art museum in the US] agreed to return 40 objects to Italy, including a marble Aphrodite, in the midst of looting scandals. And in December, Sotheby's and a private owner agreed to return an ancient Khmer statue of a warrior, pulled from auction two years before, to Cambodia. 
Cultural property, or patrimony, laws limit the transfer of cultural property outside the source country's territory, including outright export prohibitions and national ownership laws. Most art historians, archaeologists, museum officials and policymakers portray cultural property laws in general as invaluable tools for counteracting the ugly legacy of Western cultural imperialism. 
During the late 19th and early 20th century - an era former Met director Thomas Hoving called "the age of piracy" - American and European art museums acquired antiquities by hook or by crook, from grave robbers or souvenir collectors, bounty from digs and ancient sites in impoverished but art-rich source countries. Patrimony laws were intended to protect future archaeological discoveries against Western imperialist designs. ...
I surveyed 90 countries with one or more archaeological sites on UNESCO's World Heritage Site list, and my study shows that in most cases the number of discovered sites diminishes sharply after a country passes a cultural property law. There are 222 archaeological sites listed for those 90 countries. When you look into the history of the sites, you see that all but 21 were discovered before the passage of cultural property laws. ... Strict cultural patrimony laws are popular in most countries. But the downside may be that they reduce incentives for foreign governments, nongovernmental organizations and educational institutions to invest in overseas exploration because their efforts will not necessarily be rewarded by opportunities to hold, display and study what is uncovered. To the extent that source countries can fund their own archaeological projects, artifacts and sites may still be discovered. . . . The survey has far-reaching implications. It suggests that source countries, particularly in the developing world, should narrow their cultural property laws so that they can reap the benefits of new archaeological discoveries, which typically increase tourism and enhance cultural pride. This does not mean these nations should abolish restrictions on foreign excavation and foreign claims to artifacts. 
China provides an interesting alternative approach for source nations eager for foreign archaeological investment. From 1935 to 2003, China had a restrictive cultural property law that prohibited foreign ownership of Chinese cultural artifacts. In those years, China's most significant archaeological discovery occurred by chance, in 1974, when peasant farmers accidentally uncovered ranks of buried terra cotta warriors, which are part of Emperor Qin's spectacular tomb system. 
In 2003, the Chinese government switched course, dropping its cultural property law and embracing collaborative international archaeological research. Since then, China has nominated 11 archaeological sites for inclusion in the World Heritage Site list, including eight in 2013, the most ever for China.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
For early postcolonial literature, the world of the novel was often the nation. Postcolonial novels were usually [concerned with] national questions. Sometimes the whole story of the novel was taken as an allegory of the nation, whether India or Tanzania. This was important for supporting anti-colonial nationalism, but could also be limiting - land-focused and inward looking.
My new book "Writing Ocean Worlds" explores another kind of world of the novel: not the village or nation, but the Indian Ocean world. The book describes a set of novels in which the Indian Ocean is at the centre of the story. It focuses on the novelists Amitav Ghosh, Abdulrazak Gurnah, Lindsey Collen and Joseph Conrad [who have] centred the Indian Ocean world in the majority of their novels. . . Their work reveals a world that is outward-looking, full of movement, border-crossing and south-south interconnection. They are all very different - from colonially inclined (Conrad) to radically anti-capitalist (Collen) - but together draw on and shape a wider sense of Indian Ocean space through themes, images, metaphors and language. This has the effect of remapping the world in the reader's mind, as centred in the interconnected global south. ... The Indian Ocean world is a term used to describe the very long-lasting connections among the coasts of East Africa, the Arab coasts, and South and East Asia.
These connections were made possible by the geography of the Indian Ocean. For much of history, travel by sea was much easier than by land, which meant that port cities very far apart were often more easily connected to each other than to much closer inland cities. Historical and archaeological evidence suggests that what we now call globalisation first appeared in the Indian Ocean. This is the interconnected oceanic world referenced and produced by the novels in my book. For their part, Ghosh, Gurnah, Collen and even Conrad reference a different set of histories and geographies than the ones most commonly found in fiction in English. Those [commonly found ones] are mostly centred in Europe or the US, assume a background of Christianity and whiteness, and mention places like Paris and New York. The novels in [my] book highlight instead a largely Islamic space, feature characters of colour and centralise the ports of Malindi, Mombasa, Aden, Java and Bombay. . . . It is a densely imagined, richly sensory image of a southern cosmopolitan culture which provides for an enlarged sense of place in the world.
This remapping is particularly powerful for the representation of Africa. In the fiction, sailors and travellers are not all European. . . African, as well as Indian and Arab characters, are traders, nakhodas (dhow ship captains), runaways, villains, missionaries and activists. This does not mean that Indian Ocean Africa is romanticised. Migration is often a matter of force; travel is portrayed as abandonment rather than adventure, freedoms are kept from women and slavery is rife. What it does mean is that the African part of the Indian Ocean world plays an active role in its long, rich history and therefore in that of the wider world.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Understanding romantic aesthetics is not a simple undertaking for reasons that are internal to the nature of the subject. Distinguished scholars, such as Arthur Lovejoy, Northrop Frye and Isaiah Berlin, have remarked on the notorious challenges facing any attempt to define romanticism. Lovejoy, for example, claimed that romanticism is "the scandal of literary history and criticism"... The main difficulty in studying the romantics, according to him, is the lack of any "single real entity, or type of entity" that the concept "romanticism" designates. Lovejoy concluded, "the word 'romantic' has come to mean so many things that, by itself, it means nothing"...
The more specific task of characterizing romantic aesthetics adds to these difficulties an air of paradox. Conventionally, "aesthetics" refers to a theory concerning beauty and art or the branch of philosophy that studies these topics. However, many of the romantics rejected the identification of aesthetics with a circumscribed domain of human life that is separated from the practical and theoretical domains of life. The most characteristic romantic commitment is to the idea that the character of art and beauty and of our engagement with them should shape all aspects of human life. Being fundamental to human existence, beauty and art should be a central ingredient not only in a philosophical or artistic life, but also in the lives of ordinary men and women. Another challenge for any attempt to characterize romantic aesthetics lies in the fact that most of the romantics were poets and artists whose views of art and beauty are, for the most part, to be found not in developed theoretical accounts, but in fragments, aphorisms and poems, which are often more elusive and suggestive than conclusive.
Nevertheless, in spite of these challenges the task of characterizing romantic aesthetics is neither impossible nor undesirable, as numerous thinkers responding to Lovejoy's radical skepticism have noted. While warning against a reductive definition of romanticism, Berlin, for example, still heralded the need for a general characterization: "[Although] one does have a certain sympathy with Lovejoy's despair...[he is] in this instance mistaken. There was a romantic movement...and it is important to discover what it is" ...
Recent attempts to characterize romanticism and to stress its contemporary relevance follow this path. Instead of overlooking the undeniable differences between the variety of romanticisms of different nations that Lovejoy had stressed, such studies attempt to characterize romanticism, not in terms of a single definition, a specific time, or a specific place, but in terms of "particular philosophical questions and concerns" ...
While the German, British and French romantics are all considered, the central protagonists in the following are the German romantics. Two reasons explain this focus: first, because it has paved the way for the other romanticisms, German romanticism has a pride of place among the different national romanticisms ... Second, the aesthetic outlook that was developed in Germany roughly between 1796 and 1801-02 - the period that corresponds to the heyday of what is known as "Early Romanticism" ... - offers the most philosophical expression of romanticism since it is grounded primarily in the epistemological, metaphysical, ethical, and political concerns that the German romantics discerned in the aftermath of Kant's philosophy.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Umberto Eco, an Italian writer, was right when he said the language of Europe is translation. Netflix and other deep-pocketed global firms speak it well. Just as the EU employs a small army of translators and interpreters to turn intricate laws or impassioned speeches of Romanian MEPs into the EU’s 24 official languages, so do the likes of Netflix. It now offers dubbing in 34 languages and subtitling in a few more. . . .
The economics of European productions are more appealing, too. American audiences are more willing than before to give dubbed or subtitled viewing a chance. This means shows such as “Lupin”, a French crime caper on Netflix, can become global hits. . . . In 2015, about 75% of Netflix’s original content was American; now the figure is half, according to Ampere, a media-analysis company. Netflix has about 100 productions under way in Europe, which is more than big public broadcasters in France or Germany. . . .
Not everything works across borders. Comedy sometimes struggles. Whodunits and bloodthirsty maelstroms between arch Romans and uppity tribesmen have a more universal appeal. Some do it better than others. Barbarians aside, German television is not always built for export, says one executive, being polite. A bigger problem is that national broadcasters still dominate. Streaming services, such as Netflix or Disney+, account for about a third of all viewing hours, even in markets where they are well-established. Europe is an ageing continent. The generation of teens staring at phones is outnumbered by their elders who prefer to gawp at the box.
In Brussels and national capitals, the prospect of Netflix as a cultural hegemon is seen as a threat. “Cultural sovereignty” is the watchword of European executives worried that the Americans will eat their lunch. To be fair, Netflix content sometimes seems stuck in an uncanny valley somewhere in the mid-Atlantic, with local quirks stripped out. Netflix originals tend to have fewer specific cultural references than shows produced by domestic rivals, according to Enders, a market analyst. The company used to have an imperial model of commissioning, with executives in Los Angeles cooking up ideas French people might like. Now Netflix has offices across Europe. But ultimately the big decisions rest with American executives. This makes European politicians nervous.
They should not be. An irony of European integration is that it is often American companies that facilitate it. Google Translate makes European newspapers comprehensible, even if a little clunky, for the continent’s non-polyglots. American social-media companies make it easier for Europeans to talk politics across borders. (That they do not always like to hear what they say about each other is another matter.) Now Netflix and friends pump the same content into homes across a continent, making culture a cross-border endeavour, too. If Europeans are to share a currency, bail each other out in times of financial need and share vaccines in a pandemic, then they need to have something in common—even if it is just bingeing on the same series. Watching fictitious northern and southern Europeans tear each other apart 2,000 years ago beats doing so in reality.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Over the past four centuries liberalism has been so successful that it has driven all its opponents off the battlefield. Now it is disintegrating, destroyed by a mix of hubris and internal contradictions, according to Patrick Deneen, a professor of politics at the University of Notre Dame. . . . Equality of opportunity has produced a new meritocratic aristocracy that has all the aloofness of the old aristocracy with none of its sense of noblesse oblige. Democracy has degenerated into a theatre of the absurd. And technological advances are reducing ever more areas of work into meaningless drudgery. “The gap between liberalism’s claims about itself and the lived reality of the citizenry” is now so wide that “the lie can no longer be accepted,” Mr Deneen writes. What better proof of this than the vision of 1,000 private planes whisking their occupants to Davos to discuss the question of “creating a shared future in a fragmented world”? . . .
Deneen does an impressive job of capturing the current mood of disillusionment, echoing left-wing complaints about rampant commercialism, right-wing complaints about narcissistic and bullying students, and general worries about atomisation and selfishness. But when he concludes that all this adds up to a failure of liberalism, is his argument convincing? . . . He argues that the essence of liberalism lies in freeing individuals from constraints. In fact, liberalism contains a wide range of intellectual traditions which provide different answers to the question of how to trade off the relative claims of rights and responsibilities, individual expression and social ties. . . . liberals experimented with a range of ideas from devolving power from the centre to creating national education systems.
Mr Deneen’s fixation on the essence of liberalism leads to the second big problem of his book: his failure to recognise liberalism’s ability to reform itself and address its internal problems. The late 19th century saw America suffering from many of the problems that are reappearing today, including the creation of a business aristocracy, the rise of vast companies, the corruption of politics and the sense that society was dividing into winners and losers. But a wide variety of reformers, working within the liberal tradition, tackled these problems head on. Theodore Roosevelt took on the trusts. Progressives cleaned up government corruption. University reformers modernised academic syllabuses and built ladders of opportunity. Rather than dying, liberalism reformed itself.
Mr Deneen is right to point out that the record of liberalism in recent years has been dismal. He is also right to assert that the world has much to learn from the premodern notions of liberty as self-mastery and self-denial. The biggest enemy of liberalism is not so much atomisation but old-fashioned greed, as members of the Davos elite pile their plates ever higher with perks and share options. But he is wrong to argue that the only way for people to liberate themselves from the contradictions of liberalism is “liberation from liberalism itself”. The best way to read “Why Liberalism Failed” is not as a funeral oration but as a call to action: up your game, or else.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
The Positivists, anxious to stake out their claim for history as a science, contributed the weight of their influence to the cult of facts. First ascertain the facts, said the positivists, then draw your conclusions from them. . . . This is what may [be] called the common-sense view of history. History consists of a corpus of ascertained facts. The facts are available to the historian in documents, inscriptions, and so on . . . [Sir George Clark] contrasted the "hard core of facts" in history with the surrounding pulp of disputable interpretation, forgetting perhaps that the pulpy part of the fruit is more rewarding than the hard core. . . . It recalls the favourite dictum of the great liberal journalist C. P. Scott: "Facts are sacred, opinion is free.". . .
What is a historical fact? . . . According to the common-sense view, there are certain basic facts which are the same for all historians and which form, so to speak, the backbone of history—the fact, for example, that the Battle of Hastings was fought in 1066. But this view calls for two observations. In the first place, it is not with facts like these that the historian is primarily concerned. It is no doubt important to know that the great battle was fought in 1066 and not in 1065 or 1067, and that it was fought at Hastings and not at Eastbourne or Brighton. The historian must not get these things wrong. But [to] praise a historian for his accuracy is like praising an architect for using well-seasoned timber or properly mixed concrete in his building. It is a necessary condition of his work, but not his essential function. It is precisely for matters of this kind that the historian is entitled to rely on what have been called the "auxiliary sciences" of history—archaeology, epigraphy, numismatics, chronology, and so forth. . . .
The second observation is that the necessity to establish these basic facts rests not on any quality in the facts themselves, but on an a priori decision of the historian. In spite of C. P. Scott's motto, every journalist knows today that the most effective way to influence opinion is by the selection and arrangement of the appropriate facts. It used to be said that facts speak for themselves. This is, of course, untrue. The facts speak only when the historian calls on them: it is he who decides to which facts to give the floor, and in what order or context. . . . The only reason why we are interested to know that the battle was fought at Hastings in 1066 is that historians regard it as a major historical event. . . . Professor Talcott Parsons once called [science] "a selective system of cognitive orientations to reality." It might perhaps have been put more simply. But history is, among other things, that. The historian is necessarily selective. The belief in a hard core of historical facts existing objectively and independently of the interpretation of the historian is a preposterous fallacy, but one which it is very hard to eradicate.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
RESIDENTS of Lozère, a hilly department in southern France, recite complaints familiar to many rural corners of Europe. In remote hamlets and villages, with names such as Le Bacon and Le Bacon Vieux, mayors grumble about a lack of local schools, jobs, or phone and internet connections. Farmers of grazing animals add another concern: the return of wolves. Eradicated from France last century, the predators are gradually creeping back to more forests and hillsides. "The wolf must be taken in hand," said an aspiring parliamentarian, Francis Palombi, when pressed by voters in an election campaign early this summer. Tourists enjoy visiting a wolf park in Lozère, but farmers fret over their livestock and their livelihoods.
As early as the ninth century, the royal office of the Luparii - wolf-catchers - was created in France to tackle the predators. Those official hunters (and others) completed their job in the 1930s, when the last wolf disappeared from the mainland. Active hunting and improved technology such as rifles in the 19th century, plus the use of poison such as strychnine later on, caused the population collapse. But in the early 1990s the animals reappeared. They crossed the Alps from Italy, upsetting sheep farmers on the French side of the border. Wolves have since spread to areas such as Lozère, delighting environmentalists, who see the predators' presence as a sign of wider ecological health. Farmers, who say the wolves cause the deaths of thousands of sheep and other grazing animals, are less cheerful. They grumble that green activists and politically correct urban types have allowed the return of an old enemy.
Various factors explain the changes of the past few decades. Rural depopulation is part of the story. In Lozère, for example, farming and a once-flourishing mining industry supported a population of over 140,000 residents in the mid-19th century. Today the department has fewer than 80,000 people, many in its towns. As humans withdraw, forests are expanding. In France, between 1990 and 2015, forest cover increased by an average of 102,000 hectares each year, as more fields were given over to trees. Now, nearly one-third of mainland France is covered by woodland of some sort. The decline of hunting as a sport also means more forests fall quiet. In the mid-to-late 20th century over 2m hunters regularly spent winter weekends tramping in woodland, seeking boars, birds and other prey. Today the Fédération Nationale des Chasseurs, the national body, claims 1.1m people hold hunting licences, though the number of active hunters is probably lower. The mostly protected status of the wolf in Europe - hunting them is now forbidden, other than when occasional culls are sanctioned by the state - plus the efforts of NGOs to track and count the animals, also contribute to the recovery of wolf populations.
As the lupine population of Europe spreads westwards, with occasional reports of wolves seen closer to urban areas, expect to hear of more clashes between farmers and those who celebrate the predators' return. Farmers' losses are real, but are not the only economic story. Tourist venues, such as parks where wolves are kept and the animals' spread is discussed, also generate income and jobs in rural areas.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
Starting in 1957, [Noam Chomsky] proclaimed a new doctrine: Language, that most human of all attributes, was innate. The grammatical faculty was built into the infant brain, and your average 3-year-old was not a mere apprentice in the great enterprise of absorbing English from his or her parents, but a “linguistic genius.” Since this message was couched in terms of Chomskyan theoretical linguistics, in discourse so opaque that it was nearly incomprehensible even to some scholars, many people did not hear it. Now, in a brilliant, witty and altogether satisfying book, Mr. Chomsky's colleague Steven Pinker . . . has brought Mr. Chomsky's findings to everyman. In “The Language Instinct” he has gathered persuasive data from such diverse fields as cognitive neuroscience, developmental psychology and speech therapy to make his points, and when he disagrees with Mr. Chomsky he tells you so. . . .
For Mr. Chomsky and Mr. Pinker, somewhere in the human brain there is a complex set of neural circuits that have been programmed with “super-rules” (making up what Mr. Chomsky calls “universal grammar”), and these rules are unconscious and instinctive. A half-century ago, this would have been pooh-poohed as a “black box” theory, since one could not actually pinpoint this grammatical faculty in a specific part of the brain, or describe its functioning. But now things are different. Neurosurgeons [have now found that this] “black box” is situated in and around Broca’s area, on the left side of the forebrain. . . .
Unlike Mr. Chomsky, Mr. Pinker firmly places the wiring of the brain for language within the framework of Darwinian natural selection and evolution. He effectively disposes of all claims that intelligent nonhuman primates like chimps have any abilities to learn and use language. It is not that chimps lack the vocal apparatus to speak; it is just that their brains are unable to produce or use grammar. On the other hand, the “language instinct,” when it first appeared among our most distant hominid ancestors, must have given them a selective reproductive advantage over their competitors (including the ancestral chimps). . . .
So according to Mr. Pinker, the roots of language must be in the genes, but there cannot be a “grammar gene” any more than there can be a gene for the heart or any other complex body structure. This proposition will undoubtedly raise the hackles of some behavioural psychologists and anthropologists, for it apparently contradicts the liberal idea that human behavior may be changed for the better by improvements in culture and environment, and it might seem to invite the twin bugaboos of biological determinism and racism. Yet Mr. Pinker stresses one point that should allay such fears. Even though there are 4,000 to 6,000 languages today, they are all sufficiently alike to be considered one language by an extraterrestrial observer. In other words, most of the diversity of the world’s cultures, so beloved to anthropologists, is superficial and minor compared to the similarities. Racial differences are literally only “skin deep.” The fundamental unity of humanity is the theme of Mr. Chomsky's universal grammar, and of this exciting book.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
Keeping time accurately comes with a price. The maximum accuracy of a clock is directly related to how much disorder, or entropy, it creates every time it ticks. Natalia Ares at the University of Oxford and her colleagues made this discovery using a tiny clock with an accuracy that can be controlled. The clock consists of a 50-nanometre-thick membrane of silicon nitride, vibrated by an electric current. Each time the membrane moved up and down once and then returned to its original position, the researchers counted a tick, and the regularity of the spacing between the ticks represented the accuracy of the clock. The researchers found that as they increased the clock’s accuracy, the heat produced in the system grew, increasing the entropy of its surroundings by jostling nearby particles . . . “If a clock is more accurate, you are paying for it somehow,” says Ares. In this case, you pay for it by pouring more ordered energy into the clock, which is then converted into entropy. “By measuring time, we are increasing the entropy of the universe,” says Ares. The more entropy there is in the universe, the closer it may be to its eventual demise. “Maybe we should stop measuring time,” says Ares. The scale of the additional entropy is so small, though, that there is no need to worry about its effects, she says.
The increase in entropy in timekeeping may be related to the “arrow of time”, says Marcus Huber at the Austrian Academy of Sciences in Vienna, who was part of the research team. It has been suggested that the reason that time only flows forward, not in reverse, is that the total amount of entropy in the universe is constantly increasing, creating disorder that cannot be put in order again.
The relationship that the researchers found is a limit on the accuracy of a clock, so it doesn’t mean that a clock that creates the most possible entropy would be maximally accurate – hence a large, inefficient grandfather clock isn’t more precise than an atomic clock. “It’s a bit like fuel use in a car. Just because I’m using more fuel doesn’t mean that I’m going faster or further,” says Huber.
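Stated compactly, and purely as an illustrative sketch of the relationship the passage reports (the accuracy measure N, the per-tick entropy ΔS_tick, and the constant κ are labels assumed here for exposition, not quantities given in the passage), the finding reads as an upper bound:

\[ N \;\le\; \kappa \, \frac{\Delta S_{\text{tick}}}{k_B} \]

where N is the clock's accuracy (roughly, how many ticks it can produce before drifting by about one tick), ΔS_tick is the entropy generated per tick, k_B is Boltzmann's constant, and κ is an unspecified proportionality constant. Writing it as an inequality rather than an equality captures Huber's fuel analogy: producing more entropy permits, but does not guarantee, a more accurate clock.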
When the researchers compared their results with theoretical models developed for clocks that rely on quantum effects, they were surprised to find that the relationship between accuracy and entropy seemed to be the same for both. . . . We can’t be sure yet that these results are actually universal, though, because there are many types of clocks for which the relationship between accuracy and entropy hasn’t been tested. “It’s still unclear how this principle plays out in real devices such as atomic clocks, which push the ultimate quantum limits of accuracy,” says Mark Mitchison at Trinity College Dublin in Ireland. Understanding this relationship could be helpful for designing clocks in the future, particularly those used in quantum computers and other devices where both accuracy and temperature are crucial, says Ares. This finding could also help us understand more generally how the quantum world and the classical world are similar and different in terms of thermodynamics and the passage of time.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
Back in the early 2000s, an awesome thing happened in the New X-Men comics. Our mutant heroes had been battling giant robots called Sentinels for years, but suddenly these mechanical overlords spawned a new threat: Nano-Sentinels! Not content to rule Earth with their metal fists, these tiny robots invaded our bodies at the microscopic level. Infected humans were slowly converted into machines, cell by cell.
Now, a new wave of extremely odd robots is making at least part of the Nano-Sentinels story come true. Using exotic fabrication materials like squishy hydrogels and elastic polymers, researchers are making autonomous devices that are often tiny and that could turn out to be more powerful than an army of Terminators. Some are 1-centimetre blobs that can skate over water. Others are flat sheets that can roll themselves into tubes, or matchstick-sized plastic coils that act as powerful muscles. No, they won’t be invading our bodies and turning us into Sentinels – which I personally find a little disappointing – but some of them could one day swim through our bloodstream to heal us. They could also clean up pollutants in water or fold themselves into different kinds of vehicles for us to drive. . . .
Unlike a traditional robot, which is made of mechanical parts, these new kinds of robots are made from molecular parts. The principle is the same: both are devices that can move around and do things independently. But a robot made from smart materials might be nothing more than a pink drop of hydrogel. Instead of gears and wires, it’s assembled from two kinds of molecules – some that love water and some that avoid it – which interact to allow the bot to skate on top of a pond.
Sometimes these materials are used to enhance more conventional robots. One team of researchers, for example, has developed a different kind of hydrogel that becomes sticky when exposed to a low-voltage zap of electricity and then stops being sticky when the electricity is switched off. This putty-like gel can be pasted right onto the feet or wheels of a robot. When the robot wants to climb a sheer wall or scoot across the ceiling, it can activate its sticky feet with a few volts. Once it is back on a flat surface again, the robot turns off the adhesive like a light switch.
Robots that are wholly or partly made of gloop aren’t the future that I was promised in science fiction. But it’s definitely the future I want. I’m especially keen on the nanometre-scale “soft robots” that could one day swim through our bodies. Metin Sitti, a director at the Max Planck Institute for Intelligent Systems in Germany, worked with colleagues to prototype these tiny, synthetic beasts using various stretchy materials, such as simple rubber, and seeding them with magnetic microparticles. They are assembled into a finished shape by applying magnetic fields. The results look like flowers or geometric shapes made from Tinkertoy ball and stick modelling kits. They’re guided through tubes of fluid using magnets, and can even stop and cling to the sides of a tube.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
Today we can hardly conceive of ourselves without an unconscious. Yet between 1700 and 1900, this notion developed as a genuinely original thought. The “unconscious” burst the shell of conventional language, coined as it had been to embody the fleeting ideas and the shifting conceptions of several generations until, finally, it became fixed and defined in specialized terms within the realm of medical psychology and Freudian psychoanalysis.
The vocabulary concerning the soul and the mind increased enormously in the course of the nineteenth century. The enrichments of literary and intellectual language led to an altered understanding of the meanings that underlie time-honored expressions and traditional catchwords. At the same time, once coined, powerful new ideas attracted to themselves a whole host of seemingly unrelated issues, practices, and experiences, creating a peculiar network of preoccupations that as a group had not existed before. The drawn-out attempt to approach and define the unconscious brought together the spiritualist and the psychical researcher of borderline phenomena (such as apparitions, spectral illusions, haunted houses, mediums, trance, automatic writing); the psychiatrist or alienist probing the nature of mental disease, of abnormal ideation, hallucination, delirium, melancholia, mania; the surgeon performing operations with the aid of hypnotism; the magnetizer claiming to correct the disequilibrium in the universal flow of magnetic fluids but who soon came to be regarded as a clever manipulator of the imagination; the physiologist and the physician who puzzled over sleep, dreams, sleepwalking, anesthesia, the influence of the mind on the body in health and disease; the neurologist concerned with the functions of the brain and the physiological basis of mental life; the philosopher interested in the will, the emotions, consciousness, knowledge, imagination and the creative genius; and, last but not least, the psychologist.
Significantly, most if not all of these practices (for example, hypnotism in surgery or psychological magnetism) originated in the waning years of the eighteenth century and during the early decades of the nineteenth century, as did some of the disciplines (such as psychology and psychical research). The majority of topics too were either new or assumed hitherto unknown colors. Thus, before 1790, few if any spoke, in medical terms, of the affinity between creative genius and the hallucinations of the insane . . .
Striving vaguely and independently to give expression to a latent conception, various lines of thought can be brought together by some novel term. The new concept then serves as a kind of resting place or stocktaking in the development of ideas, giving satisfaction and a stimulus for further discussion or speculation. Thus, the massive introduction of the term unconscious by Hartmann in 1869 appeared to focalize many stray thoughts, affording a temporary feeling that a crucial step had been taken forward, a comprehensive knowledge gained, a knowledge that required only further elaboration, explication, and unfolding in order to bring in a bounty of higher understanding. Ultimately, Hartmann’s attempt at defining the unconscious proved fruitless because he extended its reach into every realm of organic and inorganic, spiritual, intellectual, and instinctive existence, severely diluting the precision and compromising the impact of the concept.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
It has been said that knowledge, or the problem of knowledge, is the scandal of philosophy. The scandal is philosophy’s apparent inability to show how, when and why we can be sure that we know something or, indeed, that we know anything. Philosopher Michael Williams writes: ‘Is it possible to obtain knowledge at all? This problem is pressing because there are powerful arguments, some very ancient, for the conclusion that it is not . . . Scepticism is the skeleton in Western rationalism’s closet’. While it is not clear that the scandal matters to anyone but philosophers, philosophers point out that it should matter to everyone, at least given a certain conception of knowledge. For, they explain, unless we can ground our claims to knowledge as such, which is to say, distinguish it from mere opinion, superstition, fantasy, wishful thinking, ideology, illusion or delusion, then the actions we take on the basis of presumed knowledge – boarding an airplane, swallowing a pill, finding someone guilty of a crime – will be irrational and unjustifiable.
That is all quite serious-sounding but so also are the rattlings of the skeleton: that is, the sceptic’s contention that we cannot be sure that we know anything – at least not if we think of knowledge as something like having a correct mental representation of reality, and not if we think of reality as something like things-as-they-are-in-themselves, independent of our perceptions, ideas or descriptions. For, the sceptic will note, since reality, under that conception of it, is outside our ken (we cannot catch a glimpse of things-in-themselves around the corner of our own eyes; we cannot form an idea of reality that floats above the processes of our conceiving it), we have no way to compare our mental representations with things-as-they-are-in-themselves and therefore no way to determine whether they are correct or incorrect. Thus the sceptic may repeat (rattling loudly), you cannot be sure you ‘know’ something or anything at all – at least not, he may add (rattling softly before disappearing), if that is the way you conceive ‘knowledge’.
There are a number of ways to handle this situation. The most common is to ignore it. Most people outside the academy – and, indeed, most of us inside it – are unaware of or unperturbed by the philosophical scandal of knowledge and go about our lives without too many epistemic anxieties. We hold our beliefs and presumptive knowledges more or less confidently, usually depending on how we acquired them (I saw it with my own eyes; I heard it on Fox News; a guy at the office told me) and how broadly and strenuously they seem to be shared or endorsed by various relevant people: experts and authorities, friends and family members, colleagues and associates. And we examine our convictions more or less closely, explain them more or less extensively, and defend them more or less vigorously, usually depending on what seems to be at stake for ourselves and/or other people and what resources are available for reassuring ourselves or making our beliefs credible to others (look, it’s right here on the page; add up the figures yourself; I happen to be a heart specialist).
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
I have elaborated . . . a framework for analyzing the contradictory pulls on [Indian] nationalist ideology in its struggle against the dominance of colonialism and the resolution it offered to those contradictions. Briefly, this resolution was built around a separation of the domain of culture into two spheres—the material and the spiritual. It was in the material sphere that the claims of Western civilization were the most powerful. Science, technology, rational forms of economic organization, modern methods of statecraft—these had given the European countries the strength to subjugate the non-European people . . . To overcome this domination, the colonized people had to learn those superior techniques of organizing material life and incorporate them within their own cultures. . . . But this could not mean the imitation of the West in every aspect of life, for then the very distinction between the West and the East would vanish—the self-identity of national culture would itself be threatened. . . . The discourse of nationalism shows that the material/spiritual distinction was condensed into an analogous, but ideologically far more powerful, dichotomy: that between the outer and the inner. . . . Applying the inner/outer distinction to the matter of concrete day-to-day living separates the social space into ghar and bāhir, the home and the world. The world is the external, the domain of the material; the home represents one’s inner spiritual self, one’s true identity. The world is a treacherous terrain of the pursuit of material interests, where practical considerations reign supreme. It is also typically the domain of the male. The home in its essence must remain unaffected by the profane activities of the material world—and woman is its representation. And so one gets an identification of social roles by gender to correspond with the separation of the social space into ghar and bāhir. . . .
The colonial situation, and the ideological response of nationalism to the critique of Indian tradition, introduced an entirely new substance to [these dichotomies] and effected their transformation. The material/spiritual dichotomy, to which the terms world and home corresponded, had acquired . . . a very special significance in the nationalist mind. The world was where the European power had challenged the non-European peoples and, by virtue of its superior material culture, had subjugated them. But, the nationalists asserted, it had failed to colonize the inner, essential, identity of the East which lay in its distinctive, and superior, spiritual culture. . . . [I]n the entire phase of the national struggle, the crucial need was to protect, preserve and strengthen the inner core of the national culture, its spiritual essence. . .
Once we match this new meaning of the home/world dichotomy with the identification of social roles by gender, we get the ideological framework within which nationalism answered the women’s question. It would be a grave error to see in this, as liberals are apt to in their despair at the many marks of social conservatism in nationalist practice, a total rejection of the West. Quite the contrary: the nationalist paradigm in fact supplied an ideological principle of selection.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
It’s easy to forget that most of the world’s languages are still transmitted orally with no widely established written form. While speech communities are increasingly involved in projects to protect their languages – in print, on air and online – orality is fragile and contributes to linguistic vulnerability. But indigenous languages are about much more than unusual words and intriguing grammar: They function as vehicles for the transmission of cultural traditions, environmental understandings and knowledge about medicinal plants, all at risk when elders die and livelihoods are disrupted.
Both push and pull factors lead to the decline of languages. Through war, famine and natural disasters, whole communities can be destroyed, taking their language with them to the grave, such as the indigenous populations of Tasmania who were wiped out by colonists. More commonly, speakers live on but abandon their language in favor of another vernacular, a widespread process that linguists refer to as “language shift” from which few languages are immune. Such trading up and out of a speech form occurs for complex political, cultural and economic reasons – sometimes voluntary, in pursuit of economic and educational opportunity, although often amplified by state coercion or neglect. Decline is not inevitable, however: Welsh, long stigmatized and disparaged by the British state, has rebounded with vigor.
Many speakers of endangered, poorly documented languages have embraced new digital media with excitement. Speakers of previously exclusively oral tongues are turning to the web as a virtual space for languages to live on. Internet technology offers powerful ways for oral traditions and cultural practices to survive, even thrive, among increasingly mobile communities. I have watched as videos of traditional wedding ceremonies and songs are recorded on smartphones in London by Nepali migrants, then uploaded to YouTube and watched an hour later by relatives in remote Himalayan villages . . .
Globalization is regularly, and often uncritically, pilloried as a major threat to linguistic diversity. But in fact, globalization is as much process as it is ideology, certainly when it comes to language. The real forces behind cultural homogenization are unbending beliefs, exchanged through a globalized delivery system, reinforced by the historical monolingualism prevalent in much of the West.
Monolingualism – the condition of being able to speak only one language – is regularly accompanied by a deep-seated conviction in the value of that language over all others. Across the largest economies that make up the G8, being monolingual is still often the norm, with multilingualism appearing unusual and even somewhat exotic. The monolingual mindset stands in sharp contrast to the lived reality of most of the world, which throughout its history has been more multilingual than unilingual. Monolingualism, then, not globalization, should be our primary concern.
Multilingualism can help us live in a more connected and more interdependent world. By widening access to technology, globalization can support indigenous and scholarly communities engaged in documenting and protecting our shared linguistic heritage. For the last 5,000 years, the rise and fall of languages was intimately tied to the plow, sword and book. In our digital age, the keyboard, screen and web will play a decisive role in shaping the future linguistic diversity of our species.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Many people believe that truth conveys power. . . . Hence sticking with the truth is the best strategy for gaining power. Unfortunately, this is just a comforting myth. In fact, truth and power have a far more complicated relationship, because in human society, power means two very different things.
On the one hand, power means having the ability to manipulate objective realities: to hunt animals, to construct bridges, to cure diseases, to build atom bombs. This kind of power is closely tied to truth. If you believe a false physical theory, you won’t be able to build an atom bomb. On the other hand, power also means having the ability to manipulate human beliefs, thereby getting lots of people to cooperate effectively. Building atom bombs requires not just a good understanding of physics, but also the coordinated labor of millions of humans. Planet Earth was conquered by Homo sapiens rather than by chimpanzees or elephants, because we are the only mammals that can cooperate in very large numbers. And large-scale cooperation depends on believing common stories. But these stories need not be true. You can unite millions of people by making them believe in completely fictional stories about God, about race or about economics. The dual nature of power and truth results in the curious fact that we humans know many more truths than any other animal, but we also believe in much more nonsense. . . .
When it comes to uniting people around a common story, fiction actually enjoys three inherent advantages over the truth. First, whereas the truth is universal, fictions tend to be local. Consequently, if we want to distinguish our tribe from foreigners, a fictional story will serve as a far better identity marker than a true story. . . . The second huge advantage of fiction over truth has to do with the handicap principle, which says that reliable signals must be costly to the signaler. Otherwise, they can easily be faked by cheaters. . . . If political loyalty is signalled by believing a true story, anyone can fake it. But believing ridiculous and outlandish stories exacts a greater cost, and is therefore a better signal of loyalty. . . . Third, and most important, the truth is often painful and disturbing. Hence if you stick to unalloyed reality, few people will follow you. An American presidential candidate who tells the American public the truth, the whole truth and nothing but the truth about American history has a 100 percent guarantee of losing the elections. . . . An uncompromising adherence to the truth is an admirable spiritual practice, but it is not a winning political strategy. . . .
Even if we need to pay some price for deactivating our rational faculties, the advantages of increased social cohesion are often so big that fictional stories routinely triumph over the truth in human history. Scholars have known this for thousands of years, which is why they often had to decide whether to serve the truth or social harmony. Should they aim to unite people by making sure everyone believes in the same fiction, or should they let people know the truth even at the price of disunity?
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Cuttlefish are full of personality, as behavioral ecologist Alexandra Schnell found out while researching the cephalopod's potential to display self-control. . . . “Self-control is thought to be the cornerstone of intelligence, as it is an important prerequisite for complex decision-making and planning for the future,” says Schnell . . .
[Schnell's] study used a modified version of the “marshmallow test” . . . During the original marshmallow test, psychologist Walter Mischel presented children between age four and six with one marshmallow. He told them that if they waited 15 minutes and didn’t eat it, he would give them a second marshmallow. A long-term follow-up study showed that the children who waited for the second marshmallow had more success later in life. . . . The cuttlefish version of the experiment looked a lot different. The researchers worked with six cuttlefish under nine months old and presented them with seafood instead of sweets. (Preliminary experiments showed that cuttlefishes’ favorite food is live grass shrimp, while raw prawns are so-so and Asian shore crab is nearly unacceptable.) Since the researchers couldn’t explain to the cuttlefish that they would need to wait for their shrimp, they trained them to recognize certain shapes that indicated when a food item would become available. The symbols were pasted on transparent drawers so that the cuttlefish could see the food that was stored inside. One drawer, labeled with a circle to mean “immediate,” held raw king prawn. Another drawer, labeled with a triangle to mean “delayed,” held live grass shrimp. During a control experiment, square labels meant “never.”
“If their self-control is flexible and I hadn’t just trained them to wait in any context, you would expect the cuttlefish to take the immediate reward [in the control], even if it’s their second preference,” says Schnell . . . and that’s what they did. That showed the researchers that the cuttlefish wouldn’t reject the prawns if they were the only food available. In the experimental trials, the cuttlefish didn’t jump on the prawns if the live grass shrimp were labeled with a triangle—many waited for the shrimp drawer to open up. Each time the cuttlefish showed it could wait, the researchers tacked another ten seconds on to the next round of waiting before releasing the shrimp. The longest that a cuttlefish waited was 130 seconds.
Schnell [says] that the cuttlefish usually sat at the bottom of the tank and looked at the two food items while they waited, but sometimes, they would turn away from the king prawn “as if to distract themselves from the temptation of the immediate reward.” In past studies, humans, chimpanzees, parrots and dogs also tried to distract themselves while waiting for a reward.
Not every species can use self-control, but most of the animals that can do so share a common trait: long, social lives. Cuttlefish, on the other hand, are solitary creatures that don’t form relationships even with mates or young. . . . “We don’t know if living in a social group is important for complex cognition unless we also show those abilities are lacking in less social species,” says . . . comparative psychologist Jennifer Vonk.