List of top Verbal Ability & Reading Comprehension (VARC) Questions on Reading Comprehension

If life exists on Mars, it is most likely to be in the form of bacteria buried deep in the planet's permafrost or lichens growing within rocks, say scientists from NASA. There might even be fossilised Martian algae locked up in ancient lake beds, waiting to be found. Christopher McKay of NASA's Ames Research Centre in California told the AAAS that astrobiologists, who look for life on other planets, should look for clues among the life forms of the Earth's ultra-cold regions, where conditions are similar to those on Mars. Lichens, for example, are found within some Antarctic rocks, just beneath the surface where sunlight can still reach them. The rock protects the lichen from cold and absorbs water, providing enough for the lichen's needs, said McKay. Bacteria have also been found in 3-million-year-old permafrost dug up from Siberia. If there are any bacteria alive on Mars today, they would have had to have survived from the time before the planet cooled more than 3 billion years ago. Nevertheless, McKay is optimistic: ``It may be possible that bacteria frozen into the permafrost at the Martian South Pole may be viable.'' McKay said algae are found in Antarctic lakes with permanently frozen surfaces. Although no lakes are thought to exist on Mars today, they might have existed long ago. If so, the dried-out Martian lake beds may contain the fossilised remains of algae. On Earth, masses of microscopic algae form large, layered structures known as stromatolites, which survive as fossils on lake beds, and the putative Martian algae might have done the same thing, said Jack Farmer, one of McKay's colleagues. The researchers are compiling a list of promising Martian lake beds to be photographed from spacecraft, said Farmer. Those photographs could help to select sites for landers that would search for signs of life, past or present. ``If we find algae on Mars, I would say the Universe is lousy with algae,'' McKay said. ``Intelligence would be another question.''
Know Your Product. Believe in Your Product and Sell with Enthusiasm
These are the fundamental selling truths. If you don't know your product, people will resent your efforts to sell it; if you don't believe in it, no amount of personality and technique will cover that fact; and if you can't sell with enthusiasm, the lack of it will be infectious.
Nothing turns off a potential customer quicker than a salesman's lack of familiarity with his products. Have you ever walked into a department store, asked a clerk how a particular gadget or appliance worked, and then stood by while he fiddled with the knobs and wondered out loud why they don't make things simple anymore? Even if he finally gets it to work, by that time your interest has diminished and you are not likely to make the purchase.
Knowing your product also means understanding the idea behind its projection, how it is perceived – the relationship between it and what someone wants to buy. How will it help the customer? What problem is it solving? What is its promise?
An understanding of these intangible features is at least as important as knowing a product's mechanical features. Yet precisely because they are intangible, and may even vary from customer to customer, they are more prone to being misinterpreted and misunderstood.
Knowing your product also means understanding the image it is projecting. I believe all products project an image of some sort. It may be a positive one, which you want to promote, or a negative one, which you need to overcome.
The home computer industry, for instance, really didn't take off until it solved its image problem. Here was the device that saved time and simplified all sorts of tasks, yet it looked complicated and difficult to use. Until it was made to seem ``friendlier'', less forbidding, sales lagged.
If translated into English, most of the ways economists talk among themselves would sound plausible enough to poets, journalists, business people, and other thoughtful though non-economical folk. Like serious talk anywhere — among boat designers and baseball fans, say — the talk is hard to follow when one has not made a habit of listening to it for a while. The culture of conversation makes the words arcane. Underneath it all (the economists' favourite phrase), conversational habits are similar. Economics uses mathematical models and statistical tests and market arguments, all of which look alien to the literary eye. But looked at closely, they are not so alien. They may be seen as figures of speech—metaphors, analogies, and appeals to authority.
Figures of speech are not mere frills. They think for us. Someone who thinks of a market as an ``invisible hand'' and the organization of work as a ``production function'' and its coefficients as being ``significant'', as an economist does, gives the language a lot of responsibility. It seems a good idea to look hard at his language.
If economic conversation were found to depend a lot on its verbal forms, this would not mean that economics would not be a science, or just a matter of opinion, or some sort of confidence game. Good poets, though not scientists, are serious thinkers about symbols; good historians, though not scientists, are serious thinkers about data. Good scientists also use language. What is more (though it remains to be shown) they use the cunning of language, without particularly meaning to. The language used is a social object, and using language is a social act. It requires cunning (or, if you prefer, consideration), attention to the other minds present when one speaks.
The paying of attention to one's audience is called ``rhetoric'', a word that I later exercise hard. One uses rhetoric, of course, to warn of a fire in a theatre or to arouse the xenophobia of the electorate. This sort of yelling is the vulgar meaning of the word, like the president's ``heated rhetoric'' in a press conference or the ``mere rhetoric'' to which our enemies stoop. Since the Greek flame was lit, though, the word has been used also in a broader and more amiable sense, to mean one of the ways of accomplishing ends with language: inciting a mob to lynch the accused, to be sure, but also persuading readers of a novel that its characters breathe, or bringing scholars to accept the better argument and reject the worse.
Rhetoric is an economics of language, the study of how scarce means are allocated to the insatiable desires of people to be heard. It seems on the face of it a reasonable hypothesis that economists are like other people in being talkers, who desire listeners when they go to the library or the laboratory as much as when they go to the office or the polls. The purpose is to see if this is true, and to see if it is useful: to study the rhetoric of economic scholarship.
The subject is the conversation economists have among themselves, for purposes of persuading each other that the interest elasticity of demand for investment is zero or that the money supply is controlled by the Federal Reserve.
Unfortunately, though, the conclusions are of more than academic interest. The conversations of classicists or of astronomers rarely affect the lives of other people. Those of economists do so on a large scale. A well known joke describes a May Day march through Red Square with the usual mass of soldiers, guided missiles, rocket launchers. At last come rank upon rank of people in gray business suits. A bystander asks, ``Who are those?'' ``Aha!'' comes the reply, ``Those are economists: you have no idea what damage they can do! Their conversations do it.''
TO EACH WHAT SHE DESERVES
The second plan we now have to examine is that of giving each person what she deserves. Many people, especially those who are comfortably off, think that this is what happens at present: that the industrious and sober and thrifty are never in want, and that poverty is due to idleness, extravagance, drink, betting, dishonesty, and bad character generally.
They can point to the fact that a labourer whose character is bad finds it more difficult to get employment than one whose character is good; that a farmer or country gentleman who gambles and bets heavily, and mortgages his land to live wastefully and extravagantly, is soon reduced to poverty; and that a man of business who is lazy and does not attend to it becomes bankrupt.
But this proves nothing but that you cannot eat your cake and have it too: it does not prove that your share of the cake was a fair one.
It shows that certain vices and weaknesses make us poor, but it forgets that certain other vices make us rich. People who are hard, grasping, selfish, cruel, and always ready to take advantage of their neighbours become very rich if they are clever enough not to overreach themselves.
On the other hand, people who are generous, public-spirited, friendly, and not always thinking of the main chance, stay poor when they are born poor unless they have extraordinary talents.
Also, as things are today, some are born poor and others are born with silver spoons in their mouths: that is to say, they are divided into rich and poor before they are old enough to have any character at all.
The notion that our present system distributes wealth according to merit, even roughly, may be dismissed at once as ridiculous. Everyone can see that it generally has the contrary effect; it makes a few idle people very rich, and a great many hardworking people very poor.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
Often the well intentioned music lover or the traditionally-minded professional composer asks two basic questions when faced with the electronic music phenomenon: (1) . . . is this type of artistic creation music at all? and, (2) given that the product is accepted as music of a new type or order, is not such music “inhuman”? . . . As Lejaren Hiller points out in his book Experimental Music (co-author Leonard M. Isaacson), two questions which often arise when music is discussed are: (a) the substance of musical communication and its symbolic and semantic significance, if any, and (b) the particular processes, both mental and technical, which are involved in creating and responding to musical composition. The ever-present popular concept of music as a direct, open, emotional expression and as a subjective form of communication from the composer, is, of course still that of the nineteenth century, when composers themselves spoke of music in those terms . . . But since the third decade of our century, many composers have preferred more objective definitions of music, epitomized in Stravinsky’s description of it as “a form of speculation in terms of sound and time”. An acceptance of this more characteristic twentieth-century view of the art of musical composition will of course immediately bring the layman closer to an understanding of, and sympathetic response to, electronic music, even if the forms, sounds and approaches it uses will still be of a foreign nature to him.
A communication problem, however, will still remain. The principal barrier that electronic music presents at large, in relation to the communication process, is that composers in this medium are employing a new language of forms . . . where terms like ‘densities’, ‘indefinite pitch relations’, ‘dynamic serialization’, ‘permutation’, etc., are substitutes (or remote equivalents) for the traditional concepts of harmony, melody, rhythm, etc. . . . When the new structural procedures of electronic music are at last fully understood by the listener the barriers between him and the work he faces will be removed. . . .
The medium of electronic music has of course tempted many kinds of composers to try their hand at it . . . But the serious-minded composer approaches the world of electronic music with a more sophisticated and profound concept of creation. Although he knows that he can reproduce and employ melodic, rhythmic patterns and timbres of a traditional nature, he feels that it is in the exploration of sui generis languages and forms that the aesthetic magic of the new medium lies. And, conscientiously, he plunges into this search.
The second objection usually levelled against electronic music is much more innocent in nature. When people speak—sometimes very vehemently—of the ‘inhuman’ quality of this music they seem to forget that the composer is the one who fires the machines, collects the sounds, manipulates them, pushes the buttons, programs the computer, filters the sounds, establishes pitches and scales, splices tape, thinks of forms, and rounds up the over-all structure of the piece, as well as every detail of it.

Understanding the key properties of complex systems can help us clarify and deal with many new and existing global challenges, from pandemics to poverty . . . A recent study in Nature Physics found that transitions to orderly states, such as schooling in fish (all fish swimming in the same direction), can be caused, paradoxically, by randomness, or ‘noise’ feeding back on itself. That is, a misalignment among the fish causes further misalignment, eventually inducing a transition to schooling. Most of us wouldn’t guess that noise can produce predictable behaviour. The result invites us to consider how technology such as contact-tracing apps, although informing us locally, might negatively impact our collective movement. If each of us changes our behaviour to avoid the infected, we might generate a collective pattern we had aimed to avoid: higher levels of interaction between the infected and susceptible, or high levels of interaction among the asymptomatic.
Complex systems also suffer from a special vulnerability to events that don’t follow a normal distribution or ‘bell curve’. When events are distributed normally, most outcomes are familiar and don’t seem particularly striking. Height is a good example: it’s pretty unusual for a man to be over 7 feet tall; most adults are between 5 and 6 feet, and there is no known person over 9 feet tall. But in collective settings where contagion shapes behaviour – a run on the banks, a scramble to buy toilet paper – the probability distributions for possible events are often heavy-tailed. There is a much higher probability of extreme events, such as a stock market crash or a massive surge in infections. These events are still unlikely, but they occur more frequently and are larger than would be expected under normal distributions.
What’s more, once a rare but hugely significant ‘tail’ event takes place, this raises the probability of further tail events. We might call them second-order tail events; they include stock market gyrations after a big fall and earthquake aftershocks. The initial probability of second-order tail events is so tiny it’s almost impossible to calculate – but once a first-order tail event occurs, the rules change, and the probability of a second-order tail event increases.
The dynamics of tail events are complicated by the fact that they result from cascades of other unlikely events. When COVID-19 first struck, the stock market suffered stunning losses followed by an equally stunning recovery. Some of these dynamics are potentially attributable to former sports bettors, with no sports to bet on, entering the market as speculators rather than investors. The arrival of these new players might have increased inefficiencies and allowed savvy long-term investors to gain an edge over bettors with different goals. . . .
One reason a first-order tail event can induce further tail events is that it changes the perceived costs of our actions and changes the rules that we play by. This game-change is an example of another key complex systems concept: nonstationarity. A second, canonical example of nonstationarity is adaptation, as illustrated by the arms race involved in the coevolution of hosts and parasites [in which] each has to ‘run’ faster, just to keep up with the novel solutions the other one presents as they battle it out in evolutionary time.

How can we know what someone else is thinking or feeling, let alone prove it in court? In his 1863 book, A General View of the Criminal Law of England, James Fitzjames Stephen, among the most celebrated legal thinkers of his generation, was of the opinion that the assessment of a person’s mental state was an inference made with “little consciousness.” In a criminal case, jurors, doctors, and lawyers could watch defendants—scrutinizing clothing, mannerisms, tone of voice— but the best they could hope for were clues. . . . Rounding these clues up to a judgment about a defendant’s guilt, or a defendant’s life, was an act of empathy and imagination. . . . The closer the resemblance between defendants and their judges, the easier it was to overlook the gap that inference filled. Conversely, when a defendant struck officials as unlike themselves, whether by dint of disease, gender, confession, or race, the precariousness of judgments about mental state was exposed. In the nineteenth century, physicians who specialized in the study of madness and the care of the insane held themselves out as experts in the new field of mental science. Often called alienists or mad doctors, they were the predecessors of modern psychiatrists, neurologists, and psychologists. . . . The opinions of family and neighbors had once been sufficient to sift the sane from the insane, but a growing belief that insanity was a subtle condition that required expert, medical diagnosis pushed physicians into the witness box. . . . Lawyers for both prosecution and defense began to recruit alienists to assess defendants’ sanity and to testify to it in court.
Irresponsibility and insanity were not identical, however. Criminal responsibility was a legal concept and not, fundamentally, a medical one. Stephen explained: “The question ‘What are the mental elements of responsibility?’ is, and must be, a legal question. It cannot be anything else, for the meaning of responsibility is liability to punishment.” . . . Nonetheless, medical and legal accounts of what it meant to be mentally sound became entangled and mutually referential throughout the nineteenth century. Lawyers relied on medical knowledge to inform their opinions and arguments about the sanity of their clients. Doctors commented on the legal responsibility of their patients. Ultimately, the fields of criminal law and mental science were both invested in constructing an image of the broken and damaged psyche that could be contrasted with the whole and healthy one. This shared interest, and the shared space of the criminal courtroom, made it nearly impossible to consider responsibility without medicine, or insanity without law. . . .
Physicians and lawyers shared more than just concern for the mind. Class, race, and gender bound these middle-class, white, professional men together, as did family ties, patriotism, Protestantism, business ventures, the alumni networks of elite schools and universities, and structures of political patronage. But for all their affinities, men of medicine and law were divided by contests over the borders of criminal responsibility, as much within each profession as between them. Alienists steadily pushed the boundaries of their field, developing increasingly complex and capacious definitions of insanity. Eccentricity and aggression came to be classified as symptoms of mental disease, at least by some.
Studies showing that income inequality plays a positive role in economic growth are largely based on three arguments. The first argument focuses on investment indivisibilities wherein large sunk costs are required when implementing new fundamental innovations. Without stock markets and financial institutions to mobilize large sums of money, a high concentration of wealth is needed for individuals to undertake new industrial activities accompanied by high sunk costs.
One study examines the relation between economic growth and income inequality for 45 countries during 1966–1995. It found that the increase in income inequality has a significant positive relationship with economic growth in the short and medium term. Using system GMM, another study estimated the relation between income inequality and economic growth for 106 countries during the 1965–2005 period. The results show that income inequality has a positive impact on economic growth in the short run, but a negative one in the long run. The second argument is related to moral hazard and incentives. Because economic performance is determined by the unobservable level of effort that agents make, paying compensations without taking into account the economic performance achieved would reduce the overall optimum effort from the agents. Thus, certain income inequalities contribute to growth by enhancing worker motivation and by giving motivation to innovators and entrepreneurs. Finally, some point out that the concentration of wealth or stock ownership in relation to corporate governance contributes to growth. If stock ownership is dispersed among a large number of shareholders, it is not easy to make quick decisions due to the conflicting interests among shareholders, and this may also cause a free-rider problem in terms of monitoring and supervising managers and workers.
Various studies have examined the relationships between income inequality and economic growth, and most of these assert that a negative correlation exists between the two. One analysis of 159 countries for 1980–2012 concludes that there exists a negative relation between income inequality and economic growth: when the income share of the richest 20% of the population increases by 1%, GDP decreases by 0.8%, whereas when the income share of the poorest 20% of the population increases by 1%, GDP increases by 0.38%. Some studies find that inequality has a negative impact on growth due to poor human capital accumulation and low fertility rates, while others point out that inequality creates political instability, resulting in lower investment. Some economists argue that widening income inequality has a negative impact on economic growth because it negatively affects social consensus or social capital formation. One important research topic is the correlation between democratization and income redistribution. Some scholars explain that social pressure for income redistribution rises as income inequality increases in a democratic society. In other words, democratization extends suffrage to a wider class of people; the increased political power of low- and middle-income voters results in broader support for income redistribution and social welfare expansion. However, if the rich have more political influence than the poor, the democratic system actually worsens income inequality rather than improving it.

Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. . . . 
Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context – qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.
Still, many have tried to formalise ethics, by treating certain moral claims not as conclusions, but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived, for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well-suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.
But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth’s centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates – assumptions about motion, force or mass – and derive increasingly complex consequences. . . .
Ethical theories have a similar structure. Like physical theories, they attempt to describe a domain – in this case, the moral landscape. They aim to answer questions about which actions are right or wrong, and why. These theories also diverge, and even when they recommend similar actions, such as giving to charity, they justify them in different ways. Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems.

In 1982, a raging controversy broke out over a forest act drafted by the Government of India. This act sought to strengthen the already extensive powers enjoyed by the forest bureaucracy in controlling the extraction, disposal and sale of forest produce. It also gave forest officials greater powers to strictly regulate the entry of any person into reserved forest areas. While forest officials justified the act on the grounds that it was necessary to stop the continuing deforestation, it was bitterly opposed by representatives of grassroots organisations, who argued that it was a major violation of the rights of peasants and tribals living in and around forest areas. . . . 
The debate over the draft forest act fuelled a larger controversy over the orientation of state forest policy. It was pointed out, for example, that the draft act was closely modelled on its predecessor, the Forest Act of 1878. The earlier Act rested on a usurpation of rights of ownership by the colonial state which had little precedent in precolonial history. It was further argued that the system of forestry introduced by the British—and continued, with little modification, after 1947—emphasised revenue generation and commercial exploitation, while its policing orientation excluded villagers who had the most longstanding claim on forest resources. Critics called for a complete overhaul of forest administration, pressing the government to formulate policy and legislation more appropriate to present needs. . . .
That debate is not over yet. The draft act was shelved, though it has not as yet been formally withdrawn. Meanwhile, the 1878 Act (as modified by an amendment in 1927) continues to be in operation. In response to its critics, the government has made some important changes in forest policy, e.g., no longer treating forests as a source of revenue, and stopping ecologically hazardous practices such as the clearfelling of natural forests. At the same time, it has shown little inclination to meet the major demand of the critics of forest policy—namely, abandoning the principle of state monopoly over forest land by handing over areas of degraded forests to individuals and communities for afforestation.
. . . [The] 1878 Forest Act itself was passed only after a bitter and prolonged debate within the colonial bureaucracy, in which protagonists put forward arguments strikingly similar to those being advanced today. As is well known, the Indian Forest Department owes its origin to the requirements of railway companies. The early years of the expansion of the railway network, c. 1853 onwards, led to tremendous deforestation in peninsular India owing to the railway’s requirements of fuelwood and construction timber. Huge quantities of durable timbers were also needed for use as sleepers across the new railway tracks. Inexperienced in forestry, the British called in German experts to commence systematic forest management. The Indian Forest Department was started in 1864, with Dietrich Brandis, formerly a Lecturer in Botany, as the first Inspector General of Forests. The early years of the forest department, even as it grew, continued to meet the railway needs for timber and wood. These systems first emerged as part of the needs of the expanding empire.

Over the course of the twentieth century, humans built, on average, one large dam a day, hulking structures of steel and concrete designed to control flooding, facilitate irrigation, and generate electricity. Dams were also lucrative contracts, large-scale employers, and the physical instantiation of a messianic drive to conquer territories and control nature. Some of the results of that drive were charismatic mega-infrastructure—the Hoover on the Colorado River or the Aswan on the Nile—but most of the tens of thousands of dams that dot the Earth’s landscape have drawn little attention. These are the smaller, though not inconsequential, barriers that today impede the flow of water on nearly two-thirds of the world’s large waterways. Chances are, what your map calls a “lake” is actually a reservoir, and that thin blue line that emerges from it once flowed very differently. 
Damming a river is always a partisan act. Even when explicit infrastructure goals—irrigation, flood control, electrification—were met, other consequences were significant and often deleterious. Across the world, river control displaced millions of people, threatening livelihoods, foodways, and cultures. In the western United States, dams were often an instrument of colonialism, used to dispossess Indigenous people and subsidize settler agriculture. And as dams slowed the flow of water, inhibited the movement of nutrients, and increased the amount of toxic algae and other parasites, they snuffed out entire river ecologies. Declining fish populations are the most evident effect, but dams also threaten a host of other animals—from birds and reptiles to fungi and plants—with extinction. Every major dam, then, is also a sacrifice zone, a place where lives, livelihoods, and ways of life are eliminated so that new sorts of landscapes can support water-intensive agriculture and cities that sprout downstream of new reservoirs.
Such sacrifices have been justified as offerings at the temples of modernity. Justified by—and for—whom, though? Over the course of the twentieth century, rarely were the costs and benefits weighed thoughtfully and decided democratically. As Kader Asmal, chair of the landmark 2000 World Commission on Dams, concluded, “There have been precious few, if any, comprehensive, independent analyses as to why dams came about, how dams perform over time, and whether we are getting a fair return from our $2 trillion investment.” A quarter-century later, Asmal’s words ring ever truer. A litany of dams built in the mid-twentieth century are approaching the end of their expected lives, with worrying prospects for their durability. Droughts, magnified and multiplied by the effects of climate change, have forced more and more to run below capacity. If ever there were a time to rethink the mania for dams, it would be now.
There is some evidence that a combination of opposition, alternative energy sources, and a lack of viable projects has slowed the construction of major dams. But a wave of recent and ongoing construction, from India and China to Ethiopia and Canada, continues to tilt the global balance firmly in favor of water impoundment.

Once a society accepts a secular mode of creativity, within which the creator replaces God, imaginative transactions assume a self-conscious form. The tribal imagination, on the other hand, is still to a large extent dreamlike and hallucinatory. It admits fusion between various planes of existence and levels of time in a natural and artless manner. In tribal stories, oceans fly in the sky as birds, mountains swim in water as fish, animals speak as humans and stars grow like plants. Spatial order and temporal sequence do not restrict the narrative. This is not to say that tribal creations have no conventions or rules, but simply that they admit the principle of association between emotion and the narrative motif. Thus stars, seas, mountains, trees, men and animals can be angry, sad or happy. 
It might be said that tribal artists work more on the basis of their racial and sensory memory than on the basis of a cultivated imagination. In order to understand this distinction, we must understand the difference between imagination and memory. In the animate world, consciousness meets two immediate material realities: space and time. We put meaning into space by perceiving it in terms of images. The image-making faculty is a genetic gift to the human mind—this power of imagination helps us understand the space that envelops us. With regard to time, we make connections with the help of memory; one remembers being the same person today as one was yesterday.
The tribal mind has a more acute sense of time than of space. Somewhere along the history of human civilization, tribal communities seem to have realized that domination over territorial space was not their lot. Thus, they seem to have turned almost obsessively to gaining domination over time. This urge is substantiated in their ritual of conversing with their dead ancestors: year after year, tribals in many parts of India worship terracotta or carved-wood objects representing their ancestors, aspiring to enter a trance in which they can converse with the dead. Over the centuries, an amazingly sharp memory has helped tribals classify material and natural objects into a highly complex system of knowledge. . .
One of the main characteristics of the tribal arts is their distinct manner of constructing space and imagery, which might be described as ‘hallucinatory’. In both oral and visual forms of representation, tribal artists seem to interpret verbal or pictorial art as demarcated by an extremely flexible ‘frame’. The boundaries between art and non-art become almost invisible. A tribal epic can begin its narration from a trivial everyday event; tribal paintings merge with living space as if the two were one and the same. And within the narrative itself, or within the painted imagery, there is no deliberate attempt to follow a sequence. The episodes retold and the images created take on the apparently chaotic shapes of dreams. In a way, the syntax of language and the grammar of painting are the same, as if literature were painted words and painting were a song of images.

This book takes the position that setting in literature is more than just backdrop, that important insights into literary texts can be gained by paying close attention to how authors craft place, as well as to how place functions in a narrative. The authors included in this reference work engage deeply with either real or imagined geographies. They care about how human decisions have shaped landscapes and how landscapes have shaped human practices and values. Some of the best writing is highly vivid, employing the language of the senses because this is the primary means through which humans know physical space. Literature can offer valuable perspectives on physical and cultural geography. Unlike scientific reports, a literary narrative can provide the emotional component missing from the scientific record. In human experience, geographical places have a spiritual or emotional component in addition to and as part of a physical layout and topography. This emotional component, although subjective, is no less “real” than a surveyor’s map. Human consciousness of place is experienced in a multi-modal manner. Histories of places live on in many forms, one of which is the human memory or imagination.
Both real and imaginary landscapes provide insight into the human experience of place. The pursuit of such a topic speaks to the valuable knowledge produced from bridging disciplines and combining material from both the arts and the sciences to better understand the human condition. The perspectives that most concern cultural geographers are often those regarding movement and migration, cultivation of natural resources, and organization of space. The latter two reflect concerns of the built environment, a topic shared with the field of architectural study. Many of these concerns are also reflected in work sociologists do. Scholars from literary studies can contribute an aesthetic dimension to what might otherwise be a purely ideological approach.
Literature can bring together material that spans different branches of science. For example, a literary description of place may involve not only the environment and geography but the noises and quality of light, or how people from different races or classes can experience the same place in different ways linked to those racial or class disparities. Literary texts can also account for the way in which absence—of other people, animals, and so on—affects a human observer or inhabitant. Both literary and scientific approaches to place are necessary, working in unison, to achieve a complete record of an environment. It is important to note that the interdisciplinary nature of this work teaches us that landscapes are not static, that they are not unchanged by human culture. At least part of their identity derives from the people who inhabit them and from the way space can alter and inspire human perspective. The intersection of scientific and literary expression that happens in the study of literary geography is of prime importance due to the complexity of the personal and political ways that humans experience place.

In my book “Searches,” I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It’s a dynamic that makes up big tech’s accumulation of wealth and power: we’re both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews, and my ChatGPT dialogues. . . . 
People often describe chatbots’ output as “bland” or “generic” – the linguistic equivalent of a beige office building. OpenAI’s products are built to “sound like a colleague”, as OpenAI puts it, using language that, coming from a person, would sound “polite”, “empathetic”, “kind”, “rationally optimistic” and “engaging”, among other qualities. OpenAI describes these strategies as helping its products seem “professional” and “approachable”. This appears to be bound up with making us feel safe . . .
Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft’s Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren’t flukes. Research suggests that both tendencies are widespread.
In my own ChatGPT dialogues, I wanted to enact how the product’s veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech – including editing my description of OpenAI’s CEO, Sam Altman, to call him “a visionary and a pragmatist”. I’m not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn’t attempt to influence users’ thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data – though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: “The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.” . . . OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that “benefits all of humanity”. But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people using products such as ChatGPT even more than they already are – a goal that is easier to accomplish if people see those products as trustworthy collaborators.

Time and again, whenever a population of [Mexican tetra fish] was swept into a cave and survived long enough for natural selection to have its way, the fish adapted. “But it’s not that they have been losing their vision,” as one of the authors of the study explains. “Studies have found that cave-dwelling fish can detect lower levels of amino acids than surface fish can. They also have more taste buds and a higher density of sensitive cells along their bodies that let them sense water pressure and flow . . .”
Killing the processes that support the formation of the eye is quite literally what happens. Just like non-cave-dwelling members of the species, all cavefish embryos start making eyes. But after a few hours, cells in the developing eye shrink until the entire structure has disappeared. (Developmental biologist Melody Riddle thinks this apparent inefficiency may be unavoidable: “The development of the brain and the eye are completely intertwined—so when eyes disappear, it impacts the entire biology of the animal. It’s hard to tell exactly how they happen together,” she says.) That means the last step in survival for eyeless animals may be to start making an eye and then get rid of it. . . .
It’s easy to see why cavefish would be at a disadvantage if they were to maintain excessive tissues they aren’t using. Since relatively little lives or grows in their caves, the fish are likely surviving on a meager diet of mostly bat feces and organic waste that washes in during the rainy season. Researchers keeping cavefish in labs have discovered that cavefish are exquisitely adapted to absorbing and using nutrients. . . .
Fats can be toxic for tissues, [evolutionary physiologist Nicolas] Rohner explains, so they are stored in fat cells. “But when these cells get too big, they can burst, which is why we often see chronic inflammation in humans and other animals that have stored a lot of fat in their tissues.” Yet a 2020 study by Riddle, Rohner and their colleagues revealed that even very well-fed cavefish had fewer signs of inflammation in their fat tissues than surface fish do. Even in their sparse cave conditions, wild cavefish can sometimes get very fat, says Riddle. This is presumably because, whenever food piles up in the cave, the fish eat as much of it as possible, since there might not be enough for a long time to come. Intriguingly, Riddle says, their fat is usually bright yellow, because of high levels of carotenoids, the substance in carrots that your grandmother used to tell you were good for your . . . eyes. “The first thing that came to our mind, of course, was that they were accumulating these compounds because they don’t have eyes,” says Riddle. In this species, such ideas can be tested: scientists can cross surface fish (with eyes) and cavefish (without eyes) and look at what their offspring are like. When that’s done, Riddle says, researchers see no link between eye presence or size and the accumulation of carotenoids. Some eyeless cavefish had fat that was completely white, indicating lower carotenoid levels. Instead, Riddle thinks these carotenoids may be another adaptation to suppress inflammation, which might be important in the wild, as cavefish are likely eating whenever food arrives.

Different sciences exhibit different science cultures and practices. For example, in astronomy, observation – until what is today called the new astronomy – had always been limited to what could be seen within the limits of optical light. Indeed, until early modernity the limits of optical light were also the limits of what humans could perceive with their limited and relative perceptual spectrum of vision. With early modernity and the invention of lenses for optical instruments – telescopes – astronomers could begin to observe phenomena never seen before. Magnification and resolution began to allow what was previously imperceptible to be perceived – but within the familiar limits of optical vision. Galileo, having learned of the Dutch invention of a telescope by Hans Lippershey, went on to build some hundred telescopes of his own, improving on the Dutch design to nearly 30x magnification – which turns out to be the limit of magnifying power without chromatic distortion. And it was with his own telescopes that he made the observations launching early modern astronomy (phases of Venus, satellites of Jupiter, etc.). Isaac Newton’s later improvement with reflecting telescopes expanded upon the magnification-resolution capacity of optical observation; and, from Newton to the twentieth century, improvement continued to the very large arrays of light telescopes today – following the usual technological trajectory of “more-is-better” but still remaining within the limits of the light spectrum. Today’s astronomy has now had the benefit of some four centuries of the optical telescope. The “new astronomy,” however, opens the full known electromagnetic spectrum to observation, beginning with the accidental discovery of radio astronomy early in the twentieth century, and leading today to the diverse variety of EMS telescopes which can explore the range from gamma to radio waves.
Thus, astronomy, now outfitted with new instruments, “smart” adaptive optics, very large arrays, etc., illustrates one style of instrumentally embodied science– a technoscience. Of course astronomy, with the very recent exceptions of probes to solar system bodies (Moon, Mars, Venus, asteroids), remains largely a “receptive” science, dependent upon instrumentation which can detect and receive emissions.
Contemporary biology displays a quite different instrument array and, according to Evelyn Fox Keller, also a different scientific culture. She cites her own experience of coming from mathematical physics into microbiology, and takes account of the distinctive instrumental culture in her Making Sense of Life (2002). Here, particularly with the development of biotechnology, instrumentation is far more interventional than in the astronomy case.
Microscopic instrumentation can be and often is interventional in style: “gene-splicing” and other techniques of biotechnology, while still in their infancy, are clearly part of the interventional trajectory of biological instrumentation. Yet, in both disciplines, the sciences involved are today highly instrumentalized and could not progress successfully without constant improvements upon the respective instrumental trajectories. So, minimalistically, one may conclude that the sciences are technologically, instrumentally embodied. But the styles of embodiment differ, and perhaps the last of the scientific disciplines to move into such technical embodiment is mathematics, which only in contemporary times has come to rely more and more upon the computational machinery now in common use.