List of top Verbal Ability & Reading Comprehension (VARC) Questions on Reading Comprehension

Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Starting in 1957, [Noam Chomsky] proclaimed a new doctrine: Language, that most human of all attributes, was innate. The grammatical faculty was built into the infant brain, and your average 3-year-old was not a mere apprentice in the great enterprise of absorbing English from his or her parents, but a “linguistic genius.” Since this message was couched in terms of Chomskyan theoretical linguistics, in discourse so opaque that it was nearly incomprehensible even to some scholars, many people did not hear it. Now, in a brilliant, witty and altogether satisfying book, Mr. Chomsky's colleague Steven Pinker . . . has brought Mr. Chomsky's findings to everyman. In “The Language Instinct” he has gathered persuasive data from such diverse fields as cognitive neuroscience, developmental psychology and speech therapy to make his points, and when he disagrees with Mr. Chomsky he tells you so. . . .
For Mr. Chomsky and Mr. Pinker, somewhere in the human brain there is a complex set of neural circuits that have been programmed with “super-rules” (making up what Mr. Chomsky calls “universal grammar”), and these rules are unconscious and instinctive. A half-century ago, this would have been pooh-poohed as a “black box” theory, since one could not actually pinpoint this grammatical faculty in a specific part of the brain, or describe its functioning. But now things are different. Neurosurgeons [have now found that this] “black box” is situated in and around Broca’s area, on the left side of the forebrain. . . .
Unlike Mr. Chomsky, Mr. Pinker firmly places the wiring of the brain for language within the framework of Darwinian natural selection and evolution. He effectively disposes of all claims that intelligent nonhuman primates like chimps have any abilities to learn and use language. It is not that chimps lack the vocal apparatus to speak; it is just that their brains are unable to produce or use grammar. On the other hand, the “language instinct,” when it first appeared among our most distant hominid ancestors, must have given them a selective reproductive advantage over their competitors (including the ancestral chimps). . . .
So according to Mr. Pinker, the roots of language must be in the genes, but there cannot be a “grammar gene” any more than there can be a gene for the heart or any other complex body structure. This proposition will undoubtedly raise the hackles of some behavioural psychologists and anthropologists, for it apparently contradicts the liberal idea that human behavior may be changed for the better by improvements in culture and environment, and it might seem to invite the twin bugaboos of biological determinism and racism. Yet Mr. Pinker stresses one point that should allay such fears. Even though there are 4,000 to 6,000 languages today, they are all sufficiently alike to be considered one language by an extraterrestrial observer. In other words, most of the diversity of the world’s cultures, so beloved to anthropologists, is superficial and minor compared to the similarities. Racial differences are literally only “skin deep.” The fundamental unity of humanity is the theme of Mr. Chomsky's universal grammar, and of this exciting book.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Keeping time accurately comes with a price. The maximum accuracy of a clock is directly related to how much disorder, or entropy, it creates every time it ticks. Natalia Ares at the University of Oxford and her colleagues made this discovery using a tiny clock with an accuracy that can be controlled. The clock consists of a 50-nanometre-thick membrane of silicon nitride, vibrated by an electric current. Each time the membrane moved up and down once and then returned to its original position, the researchers counted a tick, and the regularity of the spacing between the ticks represented the accuracy of the clock. The researchers found that as they increased the clock’s accuracy, the heat produced in the system grew, increasing the entropy of its surroundings by jostling nearby particles . . . “If a clock is more accurate, you are paying for it somehow,” says Ares. In this case, you pay for it by pouring more ordered energy into the clock, which is then converted into entropy. “By measuring time, we are increasing the entropy of the universe,” says Ares. The more entropy there is in the universe, the closer it may be to its eventual demise. “Maybe we should stop measuring time,” says Ares. The scale of the additional entropy is so small, though, that there is no need to worry about its effects, she says.
The increase in entropy in timekeeping may be related to the “arrow of time”, says Marcus Huber at the Austrian Academy of Sciences in Vienna, who was part of the research team. It has been suggested that the reason that time only flows forward, not in reverse, is that the total amount of entropy in the universe is constantly increasing, creating disorder that cannot be put in order again.
The relationship that the researchers found is a limit on the accuracy of a clock, so it doesn’t mean that a clock that creates the most possible entropy would be maximally accurate – hence a large, inefficient grandfather clock isn’t more precise than an atomic clock. “It’s a bit like fuel use in a car. Just because I’m using more fuel doesn’t mean that I’m going faster or further,” says Huber.
When the researchers compared their results with theoretical models developed for clocks that rely on quantum effects, they were surprised to find that the relationship between accuracy and entropy seemed to be the same for both. . . . We can’t be sure yet that these results are actually universal, though, because there are many types of clocks for which the relationship between accuracy and entropy hasn’t been tested. “It’s still unclear how this principle plays out in real devices such as atomic clocks, which push the ultimate quantum limits of accuracy,” says Mark Mitchison at Trinity College Dublin in Ireland. Understanding this relationship could be helpful for designing clocks in the future, particularly those used in quantum computers and other devices where both accuracy and temperature are crucial, says Ares. This finding could also help us understand more generally how the quantum world and the classical world are similar and different in terms of thermodynamics and the passage of time.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Back in the early 2000s, an awesome thing happened in the New X-Men comics. Our mutant heroes had been battling giant robots called Sentinels for years, but suddenly these mechanical overlords spawned a new threat: Nano-Sentinels! Not content to rule Earth with their metal fists, these tiny robots invaded our bodies at the microscopic level. Infected humans were slowly converted into machines, cell by cell.
Now, a new wave of extremely odd robots is making at least part of the Nano-Sentinels story come true. Using exotic fabrication materials like squishy hydrogels and elastic polymers, researchers are making autonomous devices that are often tiny and that could turn out to be more powerful than an army of Terminators. Some are 1-centimetre blobs that can skate over water. Others are flat sheets that can roll themselves into tubes, or matchstick-sized plastic coils that act as powerful muscles. No, they won’t be invading our bodies and turning us into Sentinels – which I personally find a little disappointing – but some of them could one day swim through our bloodstream to heal us. They could also clean up pollutants in water or fold themselves into different kinds of vehicles for us to drive. . . .
Unlike a traditional robot, which is made of mechanical parts, these new kinds of robots are made from molecular parts. The principle is the same: both are devices that can move around and do things independently. But a robot made from smart materials might be nothing more than a pink drop of hydrogel. Instead of gears and wires, it’s assembled from two kinds of molecules – some that love water and some that avoid it – which interact to allow the bot to skate on top of a pond.
Sometimes these materials are used to enhance more conventional robots. One team of researchers, for example, has developed a different kind of hydrogel that becomes sticky when exposed to a low-voltage zap of electricity and then stops being sticky when the electricity is switched off. This putty-like gel can be pasted right onto the feet or wheels of a robot. When the robot wants to climb a sheer wall or scoot across the ceiling, it can activate its sticky feet with a few volts. Once it is back on a flat surface again, the robot turns off the adhesive like a light switch.
Robots that are wholly or partly made of gloop aren’t the future that I was promised in science fiction. But it’s definitely the future I want. I’m especially keen on the nanometre-scale “soft robots” that could one day swim through our bodies. Metin Sitti, a director at the Max Planck Institute for Intelligent Systems in Germany, worked with colleagues to prototype these tiny, synthetic beasts using various stretchy materials, such as simple rubber, and seeding them with magnetic microparticles. They are assembled into a finished shape by applying magnetic fields. The results look like flowers or geometric shapes made from Tinkertoy ball and stick modelling kits. They’re guided through tubes of fluid using magnets, and can even stop and cling to the sides of a tube.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Today we can hardly conceive of ourselves without an unconscious. Yet between 1700 and 1900, this notion developed as a genuinely original thought. The “unconscious” burst the shell of conventional language, coined as it had been to embody the fleeting ideas and the shifting conceptions of several generations until, finally, it became fixed and defined in specialized terms within the realm of medical psychology and Freudian psychoanalysis.
The vocabulary concerning the soul and the mind increased enormously in the course of the nineteenth century. The enrichments of literary and intellectual language led to an altered understanding of the meanings that underlie time-honored expressions and traditional catchwords. At the same time, once coined, powerful new ideas attracted to themselves a whole host of seemingly unrelated issues, practices, and experiences, creating a peculiar network of preoccupations that as a group had not existed before. The drawn-out attempt to approach and define the unconscious brought together the spiritualist and the psychical researcher of borderline phenomena (such as apparitions, spectral illusions, haunted houses, mediums, trance, automatic writing); the psychiatrist or alienist probing the nature of mental disease, of abnormal ideation, hallucination, delirium, melancholia, mania; the surgeon performing operations with the aid of hypnotism; the magnetizer claiming to correct the disequilibrium in the universal flow of magnetic fluids but who soon came to be regarded as a clever manipulator of the imagination; the physiologist and the physician who puzzled over sleep, dreams, sleepwalking, anesthesia, the influence of the mind on the body in health and disease; the neurologist concerned with the functions of the brain and the physiological basis of mental life; the philosopher interested in the will, the emotions, consciousness, knowledge, imagination and the creative genius; and, last but not least, the psychologist.
Significantly, most if not all of these practices (for example, hypnotism in surgery or psychological magnetism) originated in the waning years of the eighteenth century and during the early decades of the nineteenth century, as did some of the disciplines (such as psychology and psychical research). The majority of topics too were either new or assumed hitherto unknown colors. Thus, before 1790, few if any spoke, in medical terms, of the affinity between creative genius and the hallucinations of the insane . . .
Striving vaguely and independently to give expression to a latent conception, various lines of thought can be brought together by some novel term. The new concept then serves as a kind of resting place or stocktaking in the development of ideas, giving satisfaction and a stimulus for further discussion or speculation. Thus, the massive introduction of the term unconscious by Hartmann in 1869 appeared to focalize many stray thoughts, affording a temporary feeling that a crucial step had been taken forward, a comprehensive knowledge gained, a knowledge that required only further elaboration, explication, and unfolding in order to bring in a bounty of higher understanding. Ultimately, Hartmann’s attempt at defining the unconscious proved fruitless because he extended its reach into every realm of organic and inorganic, spiritual, intellectual, and instinctive existence, severely diluting the precision and compromising the impact of the concept.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
It has been said that knowledge, or the problem of knowledge, is the scandal of philosophy. The scandal is philosophy’s apparent inability to show how, when and why we can be sure that we know something or, indeed, that we know anything. Philosopher Michael Williams writes: ‘Is it possible to obtain knowledge at all? This problem is pressing because there are powerful arguments, some very ancient, for the conclusion that it is not . . . Scepticism is the skeleton in Western rationalism’s closet’. While it is not clear that the scandal matters to anyone but philosophers, philosophers point out that it should matter to everyone, at least given a certain conception of knowledge. For, they explain, unless we can ground our claims to knowledge as such, which is to say, distinguish it from mere opinion, superstition, fantasy, wishful thinking, ideology, illusion or delusion, then the actions we take on the basis of presumed knowledge – boarding an airplane, swallowing a pill, finding someone guilty of a crime – will be irrational and unjustifiable.
That is all quite serious-sounding but so also are the rattlings of the skeleton: that is, the sceptic’s contention that we cannot be sure that we know anything – at least not if we think of knowledge as something like having a correct mental representation of reality, and not if we think of reality as something like things-as-they-are-in-themselves, independent of our perceptions, ideas or descriptions. For, the sceptic will note, since reality, under that conception of it, is outside our ken (we cannot catch a glimpse of things-in-themselves around the corner of our own eyes; we cannot form an idea of reality that floats above the processes of our conceiving it), we have no way to compare our mental representations with things-as-they-are-in-themselves and therefore no way to determine whether they are correct or incorrect. Thus the sceptic may repeat (rattling loudly), you cannot be sure you ‘know’ something or anything at all – at least not, he may add (rattling softly before disappearing), if that is the way you conceive ‘knowledge’.
There are a number of ways to handle this situation. The most common is to ignore it. Most people outside the academy – and, indeed, most of us inside it – are unaware of or unperturbed by the philosophical scandal of knowledge and go about our lives without too many epistemic anxieties. We hold our beliefs and presumptive knowledges more or less confidently, usually depending on how we acquired them (I saw it with my own eyes; I heard it on Fox News; a guy at the office told me) and how broadly and strenuously they seem to be shared or endorsed by various relevant people: experts and authorities, friends and family members, colleagues and associates. And we examine our convictions more or less closely, explain them more or less extensively, and defend them more or less vigorously, usually depending on what seems to be at stake for ourselves and/or other people and what resources are available for reassuring ourselves or making our beliefs credible to others (look, it’s right here on the page; add up the figures yourself; I happen to be a heart specialist).
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
I have elaborated . . . a framework for analyzing the contradictory pulls on [Indian] nationalist ideology in its struggle against the dominance of colonialism and the resolution it offered to those contradictions. Briefly, this resolution was built around a separation of the domain of culture into two spheres—the material and the spiritual. It was in the material sphere that the claims of Western civilization were the most powerful. Science, technology, rational forms of economic organization, modern methods of statecraft—these had given the European countries the strength to subjugate the non-European people . . . To overcome this domination, the colonized people had to learn those superior techniques of organizing material life and incorporate them within their own cultures. . . . But this could not mean the imitation of the West in every aspect of life, for then the very distinction between the West and the East would vanish—the self-identity of national culture would itself be threatened. . . . The discourse of nationalism shows that the material/spiritual distinction was condensed into an analogous, but ideologically far more powerful, dichotomy: that between the outer and the inner. . . . Applying the inner/outer distinction to the matter of concrete day-to-day living separates the social space into ghar and bāhir, the home and the world. The world is the external, the domain of the material; the home represents one’s inner spiritual self, one’s true identity. The world is a treacherous terrain of the pursuit of material interests, where practical considerations reign supreme. It is also typically the domain of the male. The home in its essence must remain unaffected by the profane activities of the material world—and woman is its representation. And so one gets an identification of social roles by gender to correspond with the separation of the social space into ghar and bāhir. . . .
The colonial situation, and the ideological response of nationalism to the critique of Indian tradition, introduced an entirely new substance to [these dichotomies] and effected their transformation. The material/spiritual dichotomy, to which the terms world and home corresponded, had acquired . . . a very special significance in the nationalist mind. The world was where the European power had challenged the non-European peoples and, by virtue of its superior material culture, had subjugated them. But, the nationalists asserted, it had failed to colonize the inner, essential, identity of the East which lay in its distinctive, and superior, spiritual culture. . . . [I]n the entire phase of the national struggle, the crucial need was to protect, preserve and strengthen the inner core of the national culture, its spiritual essence. . .
Once we match this new meaning of the home/world dichotomy with the identification of social roles by gender, we get the ideological framework within which nationalism answered the women’s question. It would be a grave error to see in this, as liberals are apt to in their despair at the many marks of social conservatism in nationalist practice, a total rejection of the West. Quite the contrary: the nationalist paradigm in fact supplied an ideological principle of selection.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
It’s easy to forget that most of the world’s languages are still transmitted orally with no widely established written form. While speech communities are increasingly involved in projects to protect their languages – in print, on air and online – orality is fragile and contributes to linguistic vulnerability. But indigenous languages are about much more than unusual words and intriguing grammar: They function as vehicles for the transmission of cultural traditions, environmental understandings and knowledge about medicinal plants, all at risk when elders die and livelihoods are disrupted.
Both push and pull factors lead to the decline of languages. Through war, famine and natural disasters, whole communities can be destroyed, taking their language with them to the grave, such as the indigenous populations of Tasmania who were wiped out by colonists. More commonly, speakers live on but abandon their language in favor of another vernacular, a widespread process that linguists refer to as “language shift” from which few languages are immune. Such trading up and out of a speech form occurs for complex political, cultural and economic reasons – sometimes voluntary for economic and educational reasons, although often amplified by state coercion or neglect. Welsh, long stigmatized and disparaged by the British state, has rebounded with vigor.
Many speakers of endangered, poorly documented languages have embraced new digital media with excitement. Speakers of previously exclusively oral tongues are turning to the web as a virtual space for languages to live on. Internet technology offers powerful ways for oral traditions and cultural practices to survive, even thrive, among increasingly mobile communities. I have watched as videos of traditional wedding ceremonies and songs are recorded on smartphones in London by Nepali migrants, then uploaded to YouTube and watched an hour later by relatives in remote Himalayan villages . . .
Globalization is regularly, and often uncritically, pilloried as a major threat to linguistic diversity. But in fact, globalization is as much process as it is ideology, certainly when it comes to language. The real forces behind cultural homogenization are unbending beliefs, exchanged through a globalized delivery system, reinforced by the historical monolingualism prevalent in much of the West.
Monolingualism – the condition of being able to speak only one language – is regularly accompanied by a deep-seated conviction in the value of that language over all others. Across the largest economies that make up the G8, being monolingual is still often the norm, with multilingualism appearing unusual and even somewhat exotic. The monolingual mindset stands in sharp contrast to the lived reality of most of the world, which throughout its history has been more multilingual than unilingual. Monolingualism, then, not globalization, should be our primary concern.
Multilingualism can help us live in a more connected and more interdependent world. By widening access to technology, globalization can support indigenous and scholarly communities engaged in documenting and protecting our shared linguistic heritage. For the last 5,000 years, the rise and fall of languages was intimately tied to the plow, sword and book. In our digital age, the keyboard, screen and web will play a decisive role in shaping the future linguistic diversity of our species.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Many people believe that truth conveys power. . . . Hence sticking with the truth is the best strategy for gaining power. Unfortunately, this is just a comforting myth. In fact, truth and power have a far more complicated relationship, because in human society, power means two very different things.
On the one hand, power means having the ability to manipulate objective realities: to hunt animals, to construct bridges, to cure diseases, to build atom bombs. This kind of power is closely tied to truth. If you believe a false physical theory, you won’t be able to build an atom bomb. On the other hand, power also means having the ability to manipulate human beliefs, thereby getting lots of people to cooperate effectively. Building atom bombs requires not just a good understanding of physics, but also the coordinated labor of millions of humans. Planet Earth was conquered by Homo sapiens rather than by chimpanzees or elephants, because we are the only mammals that can cooperate in very large numbers. And large-scale cooperation depends on believing common stories. But these stories need not be true. You can unite millions of people by making them believe in completely fictional stories about God, about race or about economics. The dual nature of power and truth results in the curious fact that we humans know many more truths than any other animal, but we also believe in much more nonsense. . . .
When it comes to uniting people around a common story, fiction actually enjoys three inherent advantages over the truth. First, whereas the truth is universal, fictions tend to be local. Consequently if we want to distinguish our tribe from foreigners, a fictional story will serve as a far better identity marker than a true story. . . . The second huge advantage of fiction over truth has to do with the handicap principle, which says that reliable signals must be costly to the signaler. Otherwise, they can easily be faked by cheaters. . . . If political loyalty is signalled by believing a true story, anyone can fake it. But believing ridiculous and outlandish stories exacts greater cost, and is therefore a better signal of loyalty. . . . Third, and most important, the truth is often painful and disturbing. Hence if you stick to unalloyed reality, few people will follow you. An American presidential candidate who tells the American public the truth, the whole truth and nothing but the truth about American history has a 100 percent guarantee of losing the elections. . . . An uncompromising adherence to the truth is an admirable spiritual practice, but it is not a winning political strategy. . . .
Even if we need to pay some price for deactivating our rational faculties, the advantages of increased social cohesion are often so big that fictional stories routinely triumph over the truth in human history. Scholars have known this for thousands of years, which is why scholars often had to decide whether they served the truth or social harmony. Should they aim to unite people by making sure everyone believes in the same fiction, or should they let people know the truth even at the price of disunity?
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Cuttlefish are full of personality, as behavioral ecologist Alexandra Schnell found out while researching the cephalopod's potential to display self-control. . . . “Self-control is thought to be the cornerstone of intelligence, as it is an important prerequisite for complex decision-making and planning for the future,” says Schnell . . .
[Schnell's] study used a modified version of the “marshmallow test” . . . During the original marshmallow test, psychologist Walter Mischel presented children between the ages of four and six with one marshmallow. He told them that if they waited 15 minutes and didn’t eat it, he would give them a second marshmallow. A long-term follow-up study showed that the children who waited for the second marshmallow had more success later in life. . . . The cuttlefish version of the experiment looked a lot different. The researchers worked with six cuttlefish under nine months old and presented them with seafood instead of sweets. (Preliminary experiments showed that cuttlefishes’ favorite food is live grass shrimp, while raw prawns are so-so and Asian shore crab is nearly unacceptable.) Since the researchers couldn’t explain to the cuttlefish that they would need to wait for their shrimp, they trained them to recognize certain shapes that indicated when a food item would become available. The symbols were pasted on transparent drawers so that the cuttlefish could see the food that was stored inside. One drawer, labeled with a circle to mean “immediate,” held raw king prawn. Another drawer, labeled with a triangle to mean “delayed,” held live grass shrimp. During a control experiment, square labels meant “never.”
“If their self-control is flexible and I hadn’t just trained them to wait in any context, you would expect the cuttlefish to take the immediate reward [in the control], even if it’s their second preference,” says Schnell . . . and that’s what they did. That showed the researchers that cuttlefish wouldn’t reject the prawns if it was the only food available. In the experimental trials, the cuttlefish didn’t jump on the prawns if the live grass shrimp were labeled with a triangle—many waited for the shrimp drawer to open up. Each time the cuttlefish showed it could wait, the researchers tacked another ten seconds on to the next round of waiting before releasing the shrimp. The longest that a cuttlefish waited was 130 seconds.
Schnell [says] that the cuttlefish usually sat at the bottom of the tank and looked at the two food items while they waited, but sometimes, they would turn away from the king prawn “as if to distract themselves from the temptation of the immediate reward.” In past studies, humans, chimpanzees, parrots and dogs also tried to distract themselves while waiting for a reward.
Not every species can use self-control, but most of the animals that can share another trait in common: long, social lives. Cuttlefish, on the other hand, are solitary creatures that don’t form relationships even with mates or young. . . . “We don’t know if living in a social group is important for complex cognition unless we also show those abilities are lacking in less social species,” says . . . comparative psychologist Jennifer Vonk.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
We cannot travel outside our neighbourhood without passports. We must wear the same plain clothes. We must exchange our houses every ten years. We cannot avoid labour. We all go to bed at the same time . . . We have religious freedom, but we cannot deny that the soul dies with the body, since ‘but for the fear of punishment, they would have nothing but contempt for the laws and customs of society'. . . . In More’s time, for much of the population, given the plenty and security on offer, such restraints would not have seemed overly unreasonable. For modern readers, however, Utopia appears to rely upon relentless transparency, the repression of variety, and the curtailment of privacy. Utopia provides security: but at what price? In both its external and internal relations, indeed, it seems perilously dystopian.
Such a conclusion might be fortified by examining selectively the tradition which follows More on these points. This often portrays societies where . . . 'it would be almost impossible for man to be depraved, or wicked'. . . . This is achieved both through institutions and mores, which underpin the common life. . . . The passions are regulated and inequalities of wealth and distinction are minimized. Needs, vanity, and emulation are restrained, often by prizing equality and holding riches in contempt. The desire for public power is curbed. Marriage and sexual intercourse are often controlled: in Tommaso Campanella’s The City of the Sun (1623), the first great literary utopia after More’s, relations are forbidden to men before the age of twenty-one and women before nineteen. Communal child-rearing is normal; for Campanella this commences at age two. Greater simplicity of life, ‘living according to nature’, is often a result: the desire for simplicity and purity are closely related. People become more alike in appearance, opinion, and outlook than they often have been. Unity, order, and homogeneity thus prevail at the cost of individuality and diversity. This model, as J. C. Davis demonstrates, dominated early modern utopianism. . . . And utopian homogeneity remains a familiar theme well into the twentieth century.
Given these considerations, it is not unreasonable to take as our starting point here the hypothesis that utopia and dystopia evidently share more in common than is often supposed. Indeed, they might be twins, the progeny of the same parents. Insofar as this proves to be the case, my linkage of both here will be uncomfortably close for some readers. Yet we should not mistake this argument for the assertion that all utopias are, or tend to produce, dystopias. Those who defend this proposition will find that their association here is not nearly close enough. For we have only to acknowledge the existence of thousands of successful intentional communities in which a cooperative ethos predominates and where harmony without coercion is the rule to set aside such an assertion. Here the individual’s submersion in the group is consensual (though this concept is not unproblematic). It results not in enslavement but voluntary submission to group norms. Harmony is achieved without . . . harming others.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
For the Maya of the Classic period, who lived in Southern Mexico and Central America between 250 and 900 CE, the category of ‘persons’ was not coincident with human beings, as it is for us. That is, human beings were persons – but other, nonhuman entities could be persons, too. . . . In order to explore the slippage of categories between ‘humans’ and ‘persons’, I examined a very specific category of ancient Maya images, found painted in scenes on ceramic vessels. I sought out instances in which faces (some combination of eyes, nose, and mouth) are shown on inanimate objects. . . . Consider my iPhone, which needs to be fed with electricity every night, swaddled in a protective bumper, and enjoys communicating with other fellow-phone-beings. Does it have personhood (if at all) because it is connected to me, drawing this resource from me as an owner or source? For the Maya (who did have plenty of other communicating objects, if not smartphones), the answer was no. Nonhuman persons were not tethered to specific humans, and they did not derive their personhood from a connection with a human. . . . It’s a profoundly democratising way of understanding the world. Humans are not more important persons – we are just one of many kinds of persons who inhabit this world. . . .
The Maya saw personhood as ‘activated’ by experiencing certain bodily needs and through participation in certain social activities. For example, among the faced objects that I examined, persons are marked by personal requirements (such as hunger, tiredness, physical closeness), and by community obligations (communication, interaction, ritual observance). In the images I examined, we see, for instance, faced objects being cradled in humans’ arms; we also see them speaking to humans. These core elements of personhood are both turned inward, what the body or self of a person requires, and outward, what a community expects of the persons who are a part of it, underlining the reciprocal nature of community membership.
Personhood was a nonbinary proposition for the Maya. Entities were able to be persons while also being something else. The faced objects I looked at indicate that they continue to be functional, doing what objects do (a stone implement continues to chop, an incense burner continues to do its smoky work). Furthermore, the Maya visually depicted many objects in ways that indicated the material category to which they belonged – drawings of the stone implement show that a person-tool is still made of stone. One additional complexity: the incense burner (which would have been made of clay, and decorated with spiky appliques representing the sacred ceiba tree found in this region) is categorised as a person – but also as a tree. With these Maya examples, we are challenged to discard the person/nonperson binary that constitutes our basic ontological outlook. . . . The porousness of boundaries that we have seen in the Maya world points towards the possibility of living with a certain uncategorisability of the world.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
The sleights of hand that conflate consumption with virtue are a central theme in A Thirst for Empire, a sweeping and richly detailed history of tea by the historian Erika Rappaport. How did tea evolve from an obscure “China drink” to a universal beverage imbued with civilising properties? The answer, in brief, revolves around this conflation, not only by profit-motivated marketers but by a wide variety of interest groups. While abundant historical records have allowed the study of how tea itself moved from east to west, Rappaport is focused on the movement of the idea of tea to suit particular purposes.
Beginning in the 1700s, the temperance movement advocated for tea as a pleasure that cheered but did not inebriate, and industrialists soon borrowed this moral argument in advancing their case for free trade in tea (and hence more open markets for their textiles). Factory owners joined in, compelled by the cause of a sober workforce, while Christian missionaries discovered that tea “would soothe any colonial encounter”. During the Second World War, tea service was presented as a social and patriotic activity that uplifted soldiers and calmed refugees.
But it was tea’s consumer-directed marketing by importers and retailers – and later by brands – that most closely portends current trade debates. An early version of the “farm to table” movement was sparked by anti-Chinese sentiment and concerns over trade deficits, as well as by the reality and threat of adulterated tea containing dirt and hedge clippings. Lipton was soon advertising “from the Garden to Tea Cup” supply chains originating in British India and supervised by “educated Englishmen”. While tea marketing always presented direct consumer benefits (health, energy, relaxation), tea drinkers were also assured that they were participating in a larger noble project that advanced the causes of family, nation and civilization. . . .
Rappaport’s treatment of her subject is refreshingly apolitical. Indeed, it is a virtue that readers will be unable to guess her political orientation: both the miracle of markets and capitalism’s dark underbelly are evident in tea’s complex story, as are the complicated effects of British colonialism. . . . Commodity histories are now themselves commodities: recent works investigate cotton, salt, cod, sugar, chocolate, paper and milk. And morality marketing is now a commodity as well, applied to food, “fair trade” apparel and ecotourism. Yet tea is, Rappaport makes clear, a world apart – an astonishing success story in which tea marketers not only succeeded in conveying a sense of moral elevation to the consumer but also arguably did advance the cause of civilisation and community.
I have been offered tea at a British garden party, a Bedouin campfire, a Turkish carpet shop and a Japanese chashitsu, to name a few settings. In each case the offering was more an idea – friendship, community, respect – than a drink, and in each case the idea then created a reality. It is not a stretch to say that tea marketers have advanced the particularly noble cause of human dialogue and friendship.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
In the late 1960s, while studying the northern-elephant-seal population along the coasts of Mexico and California, Burney Le Boeuf and his colleagues couldn’t help but notice that the threat calls of males at some sites sounded different from those of males at other sites. . . . That was the first time dialects were documented in a nonhuman mammal. . . .
All the northern elephant seals that exist today are descendants of the small herd that survived on Isla Guadalupe [after the near extinction of the species in the nineteenth century]. As that tiny population grew, northern elephant seals started to recolonize former breeding locations. It was precisely on the more recently colonized islands where Le Boeuf found that the tempos of the male vocal displays showed stronger differences to the ones from Isla Guadalupe, the founder colony.
In order to test the reliability of these dialects over time, Le Boeuf and other researchers visited Año Nuevo Island in California—the island where males showed the slowest pulse rates in their calls—every winter from 1968 to 1972. “What we found is that the pulse rate increased, but it still remained relatively slow compared to the other colonies we had measured in the past,” Le Boeuf told me.
At the individual level, the pulse of the calls stayed the same: A male would maintain his vocal signature throughout his lifetime. But the average pulse rate was changing. Immigration could have been responsible for this increase, as in the early 1970s, 43 percent of the males on Año Nuevo had come from southern rookeries that had a faster pulse rate. This led Le Boeuf and his collaborator, Lewis Petrinovich, to deduce that the dialects were, perhaps, a result of isolation over time, after the breeding sites had been recolonized. For instance, the first settlers of Año Nuevo could have had, by chance, calls with low pulse rates. At other sites, where the scientists found faster pulse rates, the opposite would have happened—seals with faster rates would have happened to arrive first.
As the population continued to expand and the islands kept on receiving immigrants from the original population, the calls in all locations would have eventually regressed to the average pulse rate of the founder colony. In the decades that followed, scientists noticed that the geographical variations reported in 1969 were not obvious anymore. . . . In the early 2010s, while studying northern elephant seals on Año Nuevo Island, [researcher Caroline] Casey noticed, too, that what Le Boeuf had heard decades ago was not what she heard now. . . . By performing more sophisticated statistical analyses on both sets of data, [Casey and Le Boeuf] confirmed that dialects existed back then but had vanished. Yet there are other differences between the males from the late 1960s and their great-great-grandsons: Modern males exhibit more individual diversity, and their calls are more complex. While 50 years ago the drumming pattern was quite simple and the dialects denoted just a change in tempo, Casey explained, the calls recorded today have more complex structures, sometimes featuring doublets or triplets. . . .
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Vocabulary used in speech or writing organizes itself in seven parts of speech (eight, if you count interjections such as Oh! and Gosh! and Fuhgeddaboudit!). Communication composed of these parts of speech must be organized by rules of grammar upon which we agree. When these rules break down, confusion and misunderstanding result. Bad grammar produces bad sentences. My favorite example from Strunk and White is this one: “As a mother of five, with another one on the way, my ironing board is always up.”
Nouns and verbs are the two indispensable parts of writing. Without one of each, no group of words can be a sentence, since a sentence is, by definition, a group of words containing a subject (noun) and a predicate (verb); these strings of words begin with a capital letter, end with a period, and combine to make a complete thought which starts in the writer’s head and then leaps to the reader’s.
Must you write complete sentences each time, every time? Perish the thought. If your work consists only of fragments and floating clauses, the Grammar Police aren’t going to come and take you away. Even William Strunk, that Mussolini of rhetoric, recognized the delicious pliability of language. “It is an old observation,” he writes, “that the best writers sometimes disregard the rules of rhetoric.” Yet he goes on to add this thought, which I urge you to consider: “Unless he is certain of doing well, [the writer] will probably do best to follow the rules.”
The telling clause here is Unless he is certain of doing well. If you don’t have a rudimentary grasp of how the parts of speech translate into coherent sentences, how can you be certain that you are doing well? How will you know if you’re doing ill, for that matter? The answer, of course, is that you can’t, you won’t. One who does grasp the rudiments of grammar finds a comforting simplicity at its heart, where there need be only nouns, the words that name, and verbs, the words that act.
Take any noun, put it with any verb, and you have a sentence. It never fails. Rocks explode. Jane transmits. Mountains float. These are all perfect sentences. Many such thoughts make little rational sense, but even the stranger ones (Plums deify!) have a kind of poetic weight that’s nice. The simplicity of noun-verb construction is useful—at the very least it can provide a safety net for your writing. Strunk and White caution against too many simple sentences in a row, but simple sentences provide a path you can follow when you fear getting lost in the tangles of rhetoric—all those restrictive and nonrestrictive clauses, those modifying phrases, those appositives and compound-complex sentences. If you start to freak out at the sight of such unmapped territory (unmapped by you, at least), just remind yourself that rocks explode, Jane transmits, mountains float, and plums deify. Grammar is . . . the pole you grab to get your thoughts up on their feet and walking.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Although one of the most contested concepts in political philosophy, human nature is something on which most people seem to agree. By and large, according to Rutger Bregman in his new book Humankind, we have a rather pessimistic view – not of ourselves exactly, but of everyone else. We see other people as selfish, untrustworthy and dangerous and therefore we behave towards them with defensiveness and suspicion. This was how the 17th-century philosopher Thomas Hobbes conceived our natural state to be, believing that all that stood between us and violent anarchy was a strong state and firm leadership.
But in following Hobbes, argues Bregman, we ensure that the negative view we have of human nature is reflected back at us. He instead puts his faith in Jean-Jacques Rousseau, the 18th-century French thinker, who famously declared that man was born free and it was civilisation – with its coercive powers, social classes and restrictive laws – that put him in chains.
Hobbes and Rousseau are seen as the two poles of the human nature argument and it’s no surprise that Bregman strongly sides with the Frenchman. He takes Rousseau’s intuition and paints a picture of a prelapsarian idyll in which, for the better part of 300,000 years, Homo sapiens lived a fulfilling life in harmony with nature . . . Then we discovered agriculture and for the next 10,000 years it was all property, war, greed and injustice. . . .
It was abandoning our nomadic lifestyle and then domesticating animals, says Bregman, that brought about infectious diseases such as measles, smallpox, tuberculosis, syphilis, malaria, cholera and plague. This may be true, but what Bregman never really seems to get to grips with is that pathogens were not the only things that grew with agriculture – so did the number of humans. It’s one thing to maintain friendly relations and a property-less mode of living when you’re 30 or 40 hunter-gatherers following the food. But life becomes a great deal more complex and knowledge far more extensive when there are settlements of many thousands.
“Civilisation has become synonymous with peace and progress and wilderness with war and decline,” writes Bregman. “In reality, for most of human existence, it was the other way around.” Whereas traditional history depicts the collapse of civilisations as “dark ages” in which everything gets worse, modern scholars, he claims, see them more as a reprieve, in which the enslaved gain their freedom and culture flourishes. Like much else in this book, the truth is probably somewhere between the two stated positions.
In any case, the fear of civilisational collapse, Bregman believes, is unfounded. It’s the result of what the Dutch biologist Frans de Waal calls “veneer theory” – the idea that just below the surface, our bestial nature is waiting to break out. . . . There’s a great deal of reassuring human decency to be taken from this bold and thought-provoking book and a wealth of evidence in support of the contention that the sense of who we are as a species has been deleteriously distorted. But it seems equally misleading to offer the false choice of Rousseau and Hobbes when, clearly, humanity encompasses both.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
The word ‘anarchy’ comes from the Greek anarkhia, meaning contrary to authority or without a ruler, and was used in a derogatory sense until 1840, when it was adopted by Pierre-Joseph Proudhon to describe his political and social ideology. Proudhon argued that organization without government was both possible and desirable. In the evolution of political ideas, anarchism can be seen as an ultimate projection of both liberalism and socialism, and the differing strands of anarchist thought can be related to their emphasis on one or the other of these.
Historically, anarchism arose not only as an explanation of the gulf between the rich and the poor in any community, and of the reason why the poor have been obliged to fight for their share of a common inheritance, but as a radical answer to the question ‘What went wrong?’ that followed the ultimate outcome of the French Revolution. It had ended not only with a reign of terror and the emergence of a newly rich ruling caste, but with a new adored emperor, Napoleon Bonaparte, strutting through his conquered territories.
The anarchists and their precursors were unique on the political Left in affirming that workers and peasants, grasping the chance that arose to bring an end to centuries of exploitation and tyranny, were inevitably betrayed by the new class of politicians, whose first priority was to re-establish a centralized state power. After every revolutionary uprising, usually won at a heavy cost for ordinary populations, the new rulers had no hesitation in applying violence and terror, a secret police, and a professional army to maintain their control.
For anarchists the state itself is the enemy, and they have applied the same interpretation to the outcome of every revolution of the 19th and 20th centuries. This is not merely because every state keeps a watchful and sometimes punitive eye on its dissidents, but because every state protects the privileges of the powerful.
The mainstream of anarchist propaganda for more than a century has been anarchist-communism, which argues that property in land, natural resources, and the means of production should be held in mutual control by local communities, federating for innumerable joint purposes with other communes. It differs from state socialism in opposing the concept of any central authority. Some anarchists prefer to distinguish between anarchist-communism and collectivist anarchism in order to stress the obviously desirable freedom of an individual or family to possess the resources needed for living, while not implying the right to own the resources needed by others. . . .
There are, unsurprisingly, several traditions of individualist anarchism, one of them deriving from the ‘conscious egoism’ of the German writer Max Stirner (1806–56), and another from a remarkable series of 19th-century American figures who argued that in protecting our own autonomy and associating with others for common advantages, we are promoting the good of all. These thinkers differed from free-market liberals in their absolute mistrust of American capitalism, and in their emphasis on mutualism.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option that best aligns with the passage.
Mode of transportation affects the travel experience and thus can produce new types of travel writing and perhaps even new “identities.” Modes of transportation determine the types and duration of social encounters; affect the organization and passage of space and time; . . . and also affect perception and knowledge—how and what the traveler comes to know and write about. The completion of the first U.S. transcontinental highway during the 1920s . . . for example, inaugurated a new genre of travel literature about the United States—the automotive or road narrative. Such narratives highlight the experiences of mostly male protagonists “discovering themselves” on their journeys, emphasizing the independence of road travel and the value of rural folk traditions.
Travel writing’s relationship to empire building— as a type of “colonialist discourse”—has drawn the most attention from academicians. Close connections have been observed between European (and American) political, economic, and administrative goals for the colonies and their manifestations in the cultural practice of writing travel books. Travel writers’ descriptions of foreign places have been analysed as attempts to validate, promote, or challenge the ideologies and practices of colonial or imperial domination and expansion. Mary Louise Pratt’s study of the genres and conventions of 18th- and 19th-century exploration narratives about South America and Africa (e.g., the “monarch of all I survey” trope) offered ways of thinking about travel writing as embedded within relations of power between metropole and periphery, as did Edward Said’s theories of representation and cultural imperialism. Particularly Said’s book, Orientalism, helped scholars understand ways in which representations of people in travel texts were intimately bound up with notions of self, in this case, that the Occident defined itself through essentialist, ethnocentric, and racist representations of the Orient. Said’s work became a model for demonstrating cultural forms of imperialism in travel texts, showing how the political, economic, or administrative fact of dominance relies on legitimating discourses such as those articulated through travel writing. . . .
Feminist geographers’ studies of travel writing challenge the masculinist history of geography by questioning who and what are relevant subjects of geographic study and, indeed, what counts as geographic knowledge itself. Such questions are worked through ideological constructs that posit men as explorers and women as travelers—or, conversely, men as travelers and women as tied to the home. Studies of Victorian women who were professional travel writers, tourists, wives of colonial administrators, and other (mostly) elite women who wrote narratives about their experiences abroad during the 19th century have been particularly revealing. From a “liberal” feminist perspective, travel presented one means toward female liberation for middle- and upper-class Victorian women. Many studies from the 1970s onward demonstrated the ways in which women’s gendered identities were negotiated differently “at home” than they were “away,” thereby showing women’s self-development through travel. The more recent poststructural turn in studies of Victorian travel writing has focused attention on women’s diverse and fragmented identities as they narrated their travel experiences, emphasizing women’s sense of themselves as women in new locations, but only as they worked through their ties to nation, class, whiteness, and colonial and imperial power structures.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
I’ve been following the economic crisis for more than two years now. I began working on the subject as part of the background to a novel, and soon realized that I had stumbled across the most interesting story I’ve ever found. While I was beginning to work on it, the British bank Northern Rock blew up, and it became clear that, as I wrote at the time, “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions.” . . . I was both right and too late, because all the groundwork for the crisis had already been done—though the sluggishness of the world’s governments, in not preparing for the great unraveling of autumn 2008, was then and still is stupefying. But this is the first reason why I wrote this book: because what’s happened is extraordinarily interesting. It is an absolutely amazing story, full of human interest and drama, one whose byways of mathematics, economics, and psychology are both central to the story of the last decades and mysteriously unknown to the general public. We have heard a lot about “the two cultures” of science and the arts—we heard a particularly large amount about it in 2009, because it was the fiftieth anniversary of the speech during which C. P. Snow first used the phrase. But I’m not sure the idea of a huge gap between science and the arts is as true as it was half a century ago—it’s certainly true, for instance, that a general reader who wants to pick up an education in the fundamentals of science will find it easier than ever before. It seems to me that there is a much bigger gap between the world of finance and that of the general public and that there is a need to narrow that gap, if the financial industry is not to be a kind of priesthood, administering to its own mysteries and feared and resented by the rest of us. Many bright, literate people have no idea about all sorts of economic basics, of a type that financial insiders take as elementary facts of how the world works. I am an outsider to finance and economics, and my hope is that I can talk across that gulf.
My need to understand is the same as yours, whoever you are. That’s one of the strangest ironies of this story: after decades in which the ideology of the Western world was personally and economically individualistic, we’ve suddenly been hit by a crisis which shows in the starkest terms that whether we like it or not—and there are large parts of it that you would have to be crazy to like—we’re all in this together. The aftermath of the crisis is going to dominate the economics and politics of our societies for at least a decade to come and perhaps longer.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
[There is] a curious new reality: Human contact is becoming a luxury good. As more screens appear in the lives of the poor, screens are disappearing from the lives of the rich. The richer you are, the more you spend to be off-screen. . . .
The joy — at least at first — of the internet revolution was its democratic nature. Facebook is the same Facebook whether you are rich or poor. Gmail is the same Gmail. And it’s all free. There is something mass market and unappealing about that. And as studies show that time on these advertisement-supported platforms is unhealthy, it all starts to seem déclassé, like drinking soda or smoking cigarettes, which wealthy people do less than poor people. The wealthy can afford to opt out of having their data and their attention sold as a product. The poor and middle class don’t have the same kind of resources to make that happen.
Screen exposure starts young. And children who spent more than two hours a day looking at a screen got lower scores on thinking and language tests, according to early results of a landmark study on brain development of more than 11,000 children that the National Institutes of Health is supporting. Most disturbingly, the study is finding that the brains of children who spend a lot of time on screens are different. For some kids, there is premature thinning of their cerebral cortex. In adults, one study found an association between screen time and depression. . . .
Tech companies worked hard to get public schools to buy into programs that required schools to have one laptop per student, arguing that it would better prepare children for their screen-based future. But this idea isn’t how the people who actually build the screen-based future raise their own children. In Silicon Valley, time on screens is increasingly seen as unhealthy. Here, the popular elementary school is the local Waldorf School, which promises a back-to-nature, nearly screen-free education. So as wealthy kids are growing up with less screen time, poor kids are growing up with more. How comfortable someone is with human engagement could become a new class marker.
Human contact is, of course, not exactly like organic food . . . . But with screen time, there has been a concerted effort on the part of Silicon Valley behemoths to confuse the public. The poor and the middle class are told that screens are good and important for them and their children. There are fleets of psychologists and neuroscientists on staff at big tech companies working to hook eyes and minds to the screen as fast as possible and for as long as possible. And so human contact is rare. . . .
There is a small movement to pass a “right to disconnect” bill, which would allow workers to turn their phones off, but for now a worker can be punished for going offline and not being available. There is also the reality that in our culture of increasing isolation, in which so many of the traditional gathering places and social structures have disappeared, screens are filling a crucial void.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
The claims advanced here may be condensed into two assertions: [first, that visual] culture is what images, acts of seeing, and attendant intellectual, emotional, and perceptual sensibilities do to build, maintain, or transform the worlds in which people live. [And second, that the] study of visual culture is the analysis and interpretation of images and the ways of seeing (or gazes) that configure the agents, practices, conceptualities, and institutions that put images to work. . . .
Accordingly, the study of visual culture should be characterized by several concerns. First, scholars of visual culture need to examine any and all imagery – high and low, art and non-art. . . . They must not restrict themselves to objects of a particular beauty or aesthetic value. Indeed, any kind of imagery may be found to offer up evidence of the visual construction of reality. . . .
Second, the study of visual culture must scrutinize visual practice as much as images themselves, asking what images do when they are put to use. If scholars engaged in this enterprise inquire what makes an image beautiful or why this image or that constitutes a masterpiece or a work of genius, they should do so with the purpose of investigating an artist’s or a work’s contribution to the experience of beauty, taste, value, or genius. No amount of social analysis can account fully for the existence of Michelangelo or Leonardo. They were unique creators of images that changed the way their contemporaries thought and felt and have continued to shape the history of art, artists, museums, feeling, and aesthetic value. But study of the critical, artistic, and popular reception of works by such artists as Michelangelo and Leonardo can shed important light on the meaning of these artists and their works for many different people. And the history of meaning-making has a great deal to do with how scholars as well as lay audiences today understand these artists and their achievements.
Third, scholars studying visual culture might properly focus their interpretative work on lifeworlds by examining images, practices, visual technologies, taste, and artistic style as constitutive of social relations. The task is to understand how artifacts contribute to the construction of a world. . . . Important methodological implications follow: ethnography and reception studies become productive forms of gathering information, since these move beyond the image as a closed and fixed meaning-event. . . .
Fourth, scholars may learn a great deal when they scrutinize the constituents of vision, that is, the structures of perception as a physiological process as well as the epistemological frameworks informing a system of visual representation. Vision is a socially and a biologically constructed operation, depending on the design of the human body and how it engages the interpretive devices developed by a culture in order to see intelligibly. . . . Seeing . . . operates on the foundation of covenants with images that establish the conditions for meaningful visual experience.
Finally, the scholar of visual culture seeks to regard images as evidence for explanation, not as epiphenomena.
Direction for Reading Comprehension: The passages given here are followed by some questions that have four answer choices; read the passage carefully and pick the option whose answer best aligns with the passage.
Last year, 174 incidents of piracy were reported to the International Maritime Bureau, with Somali pirates responsible for only three. The rest ranged from the discreet theft of coils of rope in the Yellow Sea to the notoriously ferocious Nigerian gunmen attacking and hijacking oil tankers in the Gulf of Guinea, as well as armed robbery off Singapore and the Venezuelan coast and kidnapping in the Sundarbans in the Bay of Bengal. For [Dr. Peter] Lehr, an expert on modern-day piracy, the phenomenon’s history should be a source of instruction rather than entertainment, piracy past offering lessons for piracy present. . . .
But . . . where does piracy begin or end? According to St Augustine, a corsair captain once told Alexander the Great that in the forceful acquisition of power and wealth at sea, the difference between an emperor and a pirate was simply one of scale. By this logic, European empire-builders were the most successful pirates of all time. A more eclectic history might have included the conquistadors, Vasco da Gama and the East India Company. But Lehr sticks to the disorganised small fry, making comparisons with the renegades of today possible.
The main motive for piracy has always been a combination of need and greed. Why toil away as a starving peasant in the 16th century when a successful pirate made up to £4,000 on each raid? Anyone could turn to freebooting if the rewards were worth the risk . . . .
Increased globalisation has done more to encourage piracy than suppress it. European colonialism weakened delicate balances of power, leading to an influx of opportunists on the high seas. A rise in global shipping has meant rich pickings for freebooters. Lehr writes: “It quickly becomes clear that in those parts of the world that have not profited from globalisation and modernisation, and where abject poverty and the daily struggle for survival are still a reality, the root causes of piracy are still the same as they were a couple of hundred years ago.” . . .
Modern pirate prevention has failed. After the French yacht Le Gonant was ransomed for $2 million in 2008, opportunists from all over Somalia flocked to the coast for a piece of the action. . . . A consistent rule, even today, is that there are never enough warships to patrol pirate-infested waters. Such ships are costly and only solve the problem temporarily; Somali piracy is bound to return as soon as the warships are withdrawn. Robot shipping, eliminating hostages, has been proposed as a possible solution; but as Lehr points out, this will only make pirates switch their targets to smaller carriers unable to afford the technology.
His advice isn’t new. Proposals to end illegal fishing are often advanced, but they are difficult to enforce. Investment in local welfare put a halt to Malaysian piracy in the 1970s, but it depended on money somehow filtering through a corrupt bureaucracy to the poor on the periphery. Diplomatic initiatives against piracy are plagued by mutual distrust: The Russians execute pirates, while the EU and US are reluctant to capture them for fear they’ll claim asylum.