Digital poverty deepens poverty
If we’re to solve poverty, we must address digital exclusion. Whether it’s accessing education, the social security system, job opportunities or cheaper gas and electricity, being online is a core part of how we live.

The growing divide

With each new development in technology, more people are left behind. This also makes existing inequities around race, gender, age, ability and income worse.
The fit between words and meanings is much fuzzier and more unstable than we are led to suppose by the static majesty of the dictionary and its tidy definitions. What a word means today is a Polaroid snapshot of its lexical life, long-lived and frequently under transformation. The reason begins with the nature of concepts rather than the words that express them. Concepts shade into one another the way colors do. For example, to be foolish is a form of being weak; one kind of weakness is to be distracted by idle fastidiousness rather than focusing on substance; but fastidiousness is also a way of being careful or observant, of which one form is being socially agreeable — as in “nice.” I raise these examples because the word “nice” actually did describe each of those concepts over the course of several centuries, like a torch passed on from hand to hand in sequence. In 1250, people were called nice when they were dimwitted. Only linguists have any reason to know the circuitous path that took us from that definition to “kind.”
Crucially, this is not some isolated instance, but a typical one. It is why “silly” once meant “blessed,” “obnoxious” once meant “subject to harm,” “generous” once meant “of noble status,” and today we speak of “heading” out from a party and “heading” up the coast without for a minute thinking it has anything to do with our noggins.
Albert Camus
Every man is a criminal who does not know it. (Tout homme est un criminel qui s’ignore.)
William Shakespeare
The fool doth think he is wise, but the wise man knows himself to be a fool.
We’ve seen that people are surprisingly ignorant, more ignorant than they think. We’ve also seen that the world is complex, even more complex than one might have thought. So why aren’t we overwhelmed by this complexity if we’re so ignorant? How can we get around, sound knowledgeable, and take ourselves seriously while understanding only a tiny fraction of what there is to know?
The answer is that we do so by living a lie. We ignore complexity by overestimating how much we know about how things work, by living life in the belief that we know how things work even when we don’t. We tell ourselves that we understand what’s going on, that our opinions are justified by our knowledge, and that our actions are grounded in justified beliefs even though they are not. We tolerate complexity by failing to recognize it. That’s the illusion of understanding.
People who produce facts — scientists, reporters, witnesses — do so from a particular social position (maybe they’re white, male and live in America) that influences how they perceive, interpret and judge the world. They rely on non-neutral methods (microscopes, cameras, eyeballs) and use non-neutral symbols (words, numbers, images) to communicate facts to people who receive, interpret and deploy them from their own social positions.
Call it what you want: relativism, constructivism, deconstruction, postmodernism, critique. The idea is the same: Truth is not found, but made, and making truth means exercising power.
The reductive version is simpler and easier to abuse: Fact is fiction, and anything goes.
While we spent years trying to detect the real prejudices hidden behind the appearance of objective statements, do we now have to reveal the real objective and incontrovertible facts hidden behind the illusion of prejudices? And yet entire Ph.D. programs are still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on, while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives. Was I wrong to participate in the invention of this field known as science studies? Is it enough to say that we did not really mean what we said? Why does it burn my tongue to say that global warming is a fact whether you like it or not? Why can’t I simply say that the argument is closed for good?
The world has changed. The world of research has too. The transfer of knowledge into the implementation of public policy is the challenge for left-wing political action. Uniting democracy, knowledge and robust public policy: that is the wager of collective intelligence and of the politics of ideas championed by Benoît Hamon, a renewal of the left’s software. The heart of ideas still beats, and strongly!
Watson consumed all published literature related to ALS, and learned all the proteins already known to be linked to the disease.
The computing system then ranked nearly 1,500 RNA-binding protein genes in the human genome and predicted which could be associated with ALS. Barrow’s research team examined Watson’s predictions, and found that eight of the 10 genes proposed by the computer were linked to the disease. Of those, five had never before been associated with ALS.
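The passage describes, in effect, a three-step pipeline: score candidate genes against the published literature, rank them, and validate the top predictions against known associations. Here is a minimal sketch of that flow in Python; the scores and the GENE_X-style names are invented for illustration (TARDBP, FUS and SOD1 are real, well-established ALS genes), and Watson’s actual scoring uses far richer NLP features than simple co-occurrence counts.

```python
# Minimal sketch of literature-based candidate-gene ranking.
# Scores are hypothetical co-occurrence counts of each gene with
# ALS-related terms across published papers.

cooccurrence_with_als = {
    "TARDBP": 92, "FUS": 81, "SOD1": 77,                    # known ALS genes
    "GENE_X": 40, "GENE_Y": 35, "GENE_Z": 12, "GENE_W": 3,  # invented names
}

known_als_genes = {"TARDBP", "FUS", "SOD1"}  # already-published links

def rank_candidates(scores, known, top_n=5):
    """Rank genes by literature score; flag those not yet linked to ALS."""
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return [(gene, scores[gene], gene not in known) for gene in ranked]

for gene, score, novel in rank_candidates(cooccurrence_with_als, known_als_genes):
    print(f"{gene:8} score={score:3d}  novel={novel}")
```

The validation step in the excerpt then corresponds to checking the “novel” predictions against wet-lab evidence rather than against the literature itself.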
The sense of understanding is contagious. The understanding that others have, or claim to have, makes us feel smarter. This happens only when people believe they have access to the relevant information: When our experimental story indicated that the scientists worked for the Army and were keeping the explanation secret, people no longer felt that they had any understanding of why the rocks glowed.
The key point here is not that people are irrational; it’s that this irrationality comes from a very rational place. People fail to distinguish what they know from what others know because it is often impossible to draw sharp boundaries between what knowledge resides in our heads and what resides elsewhere.
When considering the importance of science in policymaking, common wisdom contends that keeping science as far as possible from social and political concerns would be the best way to ensure science’s reliability. This intuition is captured in the value-free ideal for science—that social, ethical, and political values should have no influence over the reasoning of scientists, and that scientists should proceed in their work with as little concern as possible for such values. Contrary to this intuition, I will argue in this book that the value-free ideal must be rejected precisely because of the importance of science in policymaking. In place of the value-free ideal, I articulate a new ideal for science, one that accepts a pervasive role for social and ethical values in scientific reasoning, but one that still protects the integrity of science.
It is all too true that from the age of five to fifteen, most children are getting an education that goes only to the head. There is hardly any concern with their emotional life. Yet it is the emotional disturbance in a neurotic child that makes him compulsively steal. All his knowledge of school subjects or his lack of knowledge of school subjects plays no part at all in his larceny.
These discussions of the concepts of truth and value lead us to the final issue that I take to characterize cultural studies of science. Sociological constructivists frequently insist that they merely describe the ways in which scientific knowledge is socially produced, while bracketing any questions about its epistemic or political worth. In this respect, their work belongs to the tradition that posits value-freedom as a scientific ideal. By contrast, cultural studies of scientific knowledge have a stronger reflexive sense of their own cultural and political engagement, and typically do not eschew epistemic or political criticism. They find normative issues inevitably at stake in both science and cultural studies of science, but see them as arising both locally and reflexively. One cannot not be politically and epistemically engaged.
A growing gap in knowledge production exists not only between high-income and other countries but also within the developing world – between a handful of ‘emerging’ countries, intermediary nations numbering five to 10 on each continent, and a remaining 100 countries whose productivity remains very small (60 countries) or minute (40 countries). Stagnating research means some nations have lost their relative share of global knowledge production – but the burning question for the developing world is one of critical mass and the resources required to maintain scientific quality and build a new generation of scientists.
First, in the field of knowledge, there are profound inequalities between rich countries and poor countries. One of the vicious circles of under-development is that it is sustained by the knowledge gap while accentuating it in return. Second, the rise of a global information society has allowed a considerable mass of information or knowledge to be disseminated via the leading media. However, the different social groups are far from having equal access and capacity to assimilate this growing flow of information or knowledge. Not only do the most disadvantaged socio-economic categories often have limited access to information or to knowledge (digital divide), but they also do not assimilate it as well as those who are on the highest rung of the social ladder. Such a divide can also be witnessed between nations. An imbalance is thus created in the actual relationship to knowledge (knowledge divide). Given equal access to it, those who have a high level of education benefit much more from knowledge than those with no or only limited education. The widespread dissemination of knowledge therefore, far from narrowing the gap between developed and less developed countries, may help to widen it.
As the infusion of mass media information into a social system increases, higher socioeconomic-status segments tend to acquire this information faster than lower socioeconomic-status segments, so that the gap in knowledge between the two tends to increase rather than decrease.
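As a toy illustration of the hypothesis (mine, not the source’s): give two groups the same inflow of information but different assumed absorption rates, and the absolute gap between them widens every period even though both groups keep learning.

```python
# Toy knowledge-gap simulation. The rates are invented; the point is
# only that a constant difference in acquisition rates produces an
# ever-widening absolute gap.

info_flow = 100      # new items entering the system each period
rate_high = 0.6      # fraction absorbed by higher-SES segment (assumed)
rate_low = 0.3       # fraction absorbed by lower-SES segment (assumed)

high = low = 0.0
for period in range(1, 6):
    high += info_flow * rate_high
    low += info_flow * rate_low
    print(f"period {period}: high={high:5.0f}  low={low:5.0f}  gap={high - low:5.0f}")

# If acquisition compounds (those who know more learn faster), the
# divergence is steeper still.
```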
What should students KNOW: Facts; Dates; Definitions; Rules; People; Places; Vocabulary; Information.
Students will be able to DO: Basic skills; Communication; Planning/Organization; Thinking skills; Evaluation; Working collaboratively; Skills of the discipline: mapping, graphing, collecting data, showing point of view.
Students will UNDERSTAND that: Essential questions; Theories; “Big” ideas; Important generalizations; Thesis-like statements.
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book-learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings: “catching on,” “making sense” of things, or “figuring out” what to do.
AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’ve yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words “immortality” and “extinction” will both appear in these posts multiple times.
The father of cybernetics cautioned human beings against the desire to be waited upon by intelligent machines that are equipped to improve their minds over time. “We wish a slave to be intelligent, to be able to assist us in the carrying out of our tasks,” Wiener writes. “However, we also wish him to be subservient.” The obvious problem is that keen intelligence and groveling submission do not go hand in hand.
The goal of the Blue Brain Project is to build biologically detailed digital reconstructions and simulations of the rodent, and ultimately the human brain. The supercomputer-based reconstructions and simulations built by the project offer a radically new approach for understanding the multilevel structure and function of the brain. The project’s novel research strategy exploits interdependencies in the experimental data to obtain dense maps of the brain, without measuring every detail of its multiple levels of organization (molecules, cells, micro-circuits, brain regions, the whole brain). This strategy allows the project to build digital reconstructions (computer models) of the brain at an unprecedented level of biological detail. Supercomputer-based simulation of their behavior turns understanding the brain into a tractable problem, providing a new tool to study the complex interactions within different levels of brain organization and to investigate the cross-level links leading from genes to cognition.
We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely.
Computers share knowledge much more easily than humans do, and they can keep that knowledge longer, becoming wiser than humans. Many forward-thinking companies already see this writing on the wall, and are luring the best computer scientists out of academia with better pay and advanced hardware. A world with superintelligent machine-run corporations won’t be that different for humans from the one we have now; it will just be better: with more advanced goods and services available for very little cost, and more leisure time available to those who want it.
Of course, the first superintelligent machines probably won’t be corporate; they’ll be operated by governments. And this will be much more hazardous. Governments are more flexible in their actions than corporations—they create their own laws. And, as we’ve seen, even the best can engage in brutal torture when they consider their survival to be at stake. Governments produce nothing, and their primary modes of competition for survival and propagation are social manipulation, legislation, taxation, corporal punishment, murder, subterfuge, and warfare.
“Think” and “intelligence” are both what Marvin Minsky has called suitcase words. They are words into which we pack many meanings so that we can talk about complex issues in a shorthand way. When we look inside these words we find many different aspects, mechanisms, and levels of understanding. This makes answering the perennial questions of “can machines think?” or “when will machines reach human level intelligence?” fraught with danger. The suitcase words are used to cover both specific performance demonstrations by machines and more general competence that humans might have. People are getting confused and generalizing from performance to competence and grossly overestimating the real capabilities of machines today and in the next few decades.
In 1950, Alan Turing suggested we should ask not “Can Machines Think” but rather “What Can Machines Do?” Edsger Dijkstra got it right in 1984 when he said the question of Can Machine Think “is about as relevant as the question of whether Submarines Can Swim.” By that he meant that both are questions in sociolinguistics: how do we choose to use words such as “think”? In English, submarines do not swim, but in Russian, they do. This is irrelevant to the capabilities of submarines. So let’s explore what it is that machines can do, and whether we should fear their capabilities.
The malleability is understandable when we realize that at the beginning the computing community was very uncertain as to what its topic was really about and got in this respect very little guidance from the confused and confusing world by which it was surrounded.
The Fathers of the field had been pretty confusing: John von Neumann speculated about computers and the human brain in analogies sufficiently wild to be worthy of a medieval thinker and Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim.
A further confusion came from the circumstance that numerical mathematics was at the time about the only scientific discipline more or less ready to use the new equipment. As a result, in their capacity as number crunchers, computers were primarily viewed as tools for the numerical mathematician.
But the greatest confusion came from the circumstance that, at the time, electronic engineering was not really up to the challenge of constructing the machinery with an acceptable degree of reliability and that, consequently, the hardware became the focus of concern.
I called this focus on hardware a distortion because we know by now that electronic engineering can contribute no more than the machinery, and that the general purpose computer is no more than a handy device for implementing any thinkable mechanism without changing a single wire. That being so, the key question is what mechanisms we can think of without getting lost in the complexities of our own making. Not getting lost in the complexities of our own making and preferably reaching that goal by learning how to avoid the introduction of those complexities in the first place, that is the key challenge computing science has to meet.
Nowadays machines are so fast and stores are so huge that in a very true sense the computations we can evoke defy our imagination. Machine capacities now give us room galore for making a mess of it. Opportunities unlimited for fouling things up! Developing the austere intellectual discipline of keeping things sufficiently simple is in this environment a formidable challenge, both technically and educationally.
As computing scientists we should not be frightened by it; on the contrary, it is always nice to know what you have to do, in particular when that task is as clear and inspiring as ours. We know perfectly well what we have to do, but the burning question is, whether the world we are part of will allow us to do it. The answer is not evident at all. The odds against computing science might very well turn out to be overwhelming.
My favorite Edsger Dijkstra aphorism is this one: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” Yet we keep playing the imitation game: asking how closely machine intelligence can duplicate our own intelligence, as if that is the real point. Of course, once you imagine machines with human-like feelings and free will, it’s possible to conceive of misbehaving machine intelligence—the AI as Frankenstein idea. This notion is in the midst of a revival, and I started out thinking it was overblown. Lately I have concluded it’s not.
Knowledge-based jobs were supposed to be safe career choices: the years of study it takes to become a lawyer, say, or an architect or accountant, in theory guaranteeing a lifetime of lucrative employment. That is no longer the case. Now even doctors face the looming threat of possible obsolescence. Expert radiologists are routinely outperformed by pattern-recognition software, diagnosticians by simple computer questionnaires. By some estimates, algorithms and machines could replace 80% of doctors within a generation.
Know-how is more than knowledge. It puts knowledge to work in the real world. It is how scientific discoveries become routine medical treatments, and how inventions — like the iPod or the Internet — become the products and services that change how we work and play.
As the moon-and-ghetto disparity demonstrates, know-how is unevenly distributed. But why?
At the time, the chemicals were used widely as refrigerants and solvents for semiconductors. But no one ended up going without refrigerators or computers.
Whether or not the search yields results, it will at least help us to better understand why we can put a man on the moon, but we cannot manage to improve literacy rates, or shape workable policies on climate change, or reduce global poverty.
Knowing the mechanics that drive the “go” may help us to separate what is practically effective from our value judgments, and come up with a process that spurs solutions to problems as predictably as technological know-how does today.
Thriving Earth Exchange (TEX) helps communities leverage Earth and space science to build a better future for themselves and the planet. TEX does this by bringing together Earth and space scientists and community leaders and helping them combine science and local knowledge to solve on‐the‐ground challenges related to natural hazards, natural resources, and climate change. By 2019, Thriving Earth Exchange will launch 100 partnerships, engage over 100 members, catalyze 100 shareable solutions, and improve the lives of 10 million people. Through the Thriving Earth Exchange, local leaders and Earth and space scientists will create resilient communities that enrich the Earth. Working together, we will create solutions for the planet, one community at a time.
Advancing according to its own logic, much of science has lost sight of the better world it is supposed to help create. Shielded from accountability to anything outside of itself, the “free play of free intellects” begins to seem like little more than a cover for indifference and irresponsibility. The tragic irony here is that the stunted imagination of mainstream science is a consequence of the very autonomy that scientists insist is the key to their success. Only through direct engagement with the real world can science free itself to rediscover the path toward truth.
Everything that can be automated has been automated. The fourth industrial revolution is upon us, with the forces of AI, robotics, and 3D printing disrupting the status quo and pushing outdated processes into oblivion. The Ford factory workers’ jobs have largely been turned over to machines.
But the workforce training process hasn’t kept up with the pace of change.
The education that the workforce received was designed in the previous industrial age: front-loaded for the first 20 years and expected to apply to their jobs for the next 40 to 50 years. Today, we are in the knowledge economy, and there is new knowledge we are required to learn and apply daily. How can we future-proof our workforces to help them prepare for the rapid pace of business transformation?
The whale families we work with, members of the Eastern Caribbean Clan, are shrinking. Their population is declining by as much as 4 percent a year, largely as a result of climate change and the increasing human presence in these waters. We are not just losing specific whales that we have come to know as individuals; we are losing a way of life, a culture — the accumulated wisdom of generations on how to survive in the deep waters of the Caribbean Sea. They may have lived here for longer than we have walked upright.
Every culture, whale or otherwise, is its own solution to the problems of the environment in which it lives. With its extirpation, we lose the traditional knowledge of what it means to be a Caribbean whale and how to exploit the deep-sea riches around the islands efficiently. And that cannot be recovered, not even if the global population of sperm whales were robust enough to support remigration into the Caribbean. These would be different whales, from elsewhere, who do things differently. This region would be profoundly impoverished for the new whales, who would be more vulnerable here. The species as a whole would lose some of its repertoire on how to survive.
Species conservation should not be just about numbers. The definition of biodiversity needs to include cultural diversity.
In physics, experiment and observation seem to be the basis for knowledge. The physicist constructs a hypothesis to explain observations that do not fit current thinking, then devises and performs experiments to test this hypothesis. Results are then collected and analysed and, if necessary, the hypothesis modified to accommodate them.
In history there is no experimentation. Instead, documentary evidence provides the historian with the raw material for interpreting and understanding the recorded past of humanity. By studying these sources carefully a picture of a past event can be built up along with ideas about what factors might have caused it.
In a literature class students set about understanding and interpreting a text. No observation of the outside world is necessary, but there is a hope that the text can shed some light upon deep questions about what it is to be human in a variety of world situations or can act as a critique of the way in which we organize our societies.
Economics, by contrast, considers the question of how human societies allocate scarce resources. This is done by building elaborate mathematical models based upon a mixture of reasoning and empirical observation of relevant economic factors.
In the islands of Micronesia, a steersman successfully navigates between two islands 1,600 km apart without a map or a compass.
Epistemology is the study of knowledge. Epistemologists concern themselves with a number of tasks, which we might sort into two categories. First, we must determine the nature of knowledge; that is, what does it mean to say that someone knows, or fails to know, something? This is a matter of understanding what knowledge is, and how to distinguish between cases in which someone knows something and cases in which someone does not know something. While there is some general agreement about some aspects of this issue, we shall see that this question is much more difficult than one might imagine. Second, we must determine the extent of human knowledge; that is, how much do we, or can we, know? How can we use our reason, our senses, the testimony of others, and other resources to acquire knowledge? Are there limits to what we can know? For instance, are some things unknowable? Is it possible that we do not know nearly as much as we think we do? Should we have a legitimate worry about skepticism, the view that we do not or cannot know anything at all?
The idea of the knowledge economy is appealing. The only problem is it is largely a myth. Developed western economies such as the UK and the US are not brimming with jobs that require degree-level qualifications. For every job as a skilled computer programmer, there are three jobs flipping burgers. The fastest-growing jobs are low-skilled repetitive ones in the service sector. One-third of the US labour market is made up of three types of work: office and administrative support, sales and food preparation.
You may have heard people say that you do not have to cite your source when the information you include is “common knowledge.” But what is common knowledge?
Broadly speaking, common knowledge refers to information that the average, educated reader would accept as reliable without having to look it up. This includes:
Information that most people know, such as that water freezes at 32 degrees Fahrenheit or that Barack Obama was the first American of mixed race to be elected president.
Information shared by a cultural or national group, such as the names of famous heroes or events in the nation’s history that are remembered and celebrated.
Knowledge shared by members of a certain field, such as the fact that the necessary condition for diffraction of radiation of wavelength λ from a crystalline solid is given by Bragg’s law, nλ = 2d sin θ.
However, what may be common knowledge in one culture, nation, academic discipline or peer group may not be common knowledge in another.
Automobile technicians are in demand to repair the software that now powers our cars; manufacturing technicians, to upgrade the numerically controlled machines and 3-D printers that have replaced assembly lines; laboratory technicians, to install and test complex equipment for measuring results; telecommunications technicians, to install, upgrade, and repair the digital systems linking us to one another. Technology is changing so fast that knowledge about specifics can quickly become obsolete. That’s why so much of what technicians learn is on the job.
Computers have long been compared to artificial brains, but now IBM has taken the comparison a step further and built a working artificial neuron. The tech giant’s research center in Zurich created 500 of them to simulate a signal transfer similar to how the process works in an organic brain.
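For a sense of what “a signal transfer similar to how the process works in an organic brain” means, here is a minimal leaky integrate-and-fire neuron in Python. This is only a software analogy under invented parameters; IBM’s neurons are stochastic phase-change hardware devices, not this model.

```python
# Leaky integrate-and-fire neuron: accumulate input, leak charge over
# time, and emit a spike (then reset) when a threshold is crossed.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current   # integrate with leak
        if potential >= threshold:
            spikes.append(1)                     # fire
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.1, 0.8, 0.9]))  # -> [0, 0, 1, 0, 0, 1]
```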
To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.
This prior need to understand the basic science of cognition is where the “singularity is near” arguments fail to persuade us. It is true that computer hardware technology can develop amazingly quickly once we have a solid scientific framework and adequate economic incentives. However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today.
Theorists have proposed, and researchers have reported, that intelligence is a set of relatively stable abilities, which change only slowly over time. Although intelligence can be seen as a potential, it does not appear to be an inherent fixed or unalterable characteristic. … Contemporary psychologists and other scientists hold that intelligence results from a complex interaction of environmental and genetic influences. Despite more than one hundred years of research, this interaction remains poorly understood and detailed. Finally, intelligence is neither purely biological nor purely social in its origins. Some authors have suggested that intelligence is whatever intelligence tests measure.
The technological singularity (also, simply, the singularity) is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Science fiction author Vernor Vinge said in his essay “The Coming Technological Singularity” that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.
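The “runaway reaction” has a simple mathematical reading (a toy model, not from the source): if capability grows at a rate proportional to its own square, so that each smarter generation improves itself faster, the growth curve reaches infinity in finite time, which is the literal sense of “singularity.”

```latex
% Toy intelligence-explosion model: capability I improves itself
% at a rate proportional to I^2 (an assumption for illustration).
\[
\frac{dI}{dt} = k I^{2}, \qquad I(0) = I_{0}
\quad\Longrightarrow\quad
I(t) = \frac{I_{0}}{1 - k I_{0} t},
\]
% which diverges as t approaches the finite time t* = 1/(k I_0).
```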
Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.
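The “decision-theoretic notion of rationality” invoked here reduces, in its simplest form, to choosing the action with the highest expected utility under the agent’s beliefs. A self-contained sketch, with invented states, actions, and utilities:

```python
# Expected-utility maximization: the textbook core of the
# "rational agent" view of intelligence. All numbers are invented.

beliefs = {"rain": 0.3, "sun": 0.7}                    # P(state)
utility = {                                            # U(action, state)
    ("umbrella", "rain"): 5,      ("umbrella", "sun"): 1,
    ("no_umbrella", "rain"): -10, ("no_umbrella", "sun"): 3,
}

def expected_utility(action):
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

actions = ["umbrella", "no_umbrella"]
best = max(actions, key=expected_utility)
print(best, expected_utility(best))  # prints the rational choice and its EU

# A "more intelligent" agent, in this framing, is one with better
# beliefs, a richer action set, or better optimization over both.
```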
Have you heard the story of the architect from Shiraz who designed the world’s most beautiful mosque? No one had ever conjured up such a design. It was breathtakingly daring yet well-proportioned, divinely sophisticated, yet radiating a distinctly human warmth. Those who saw the plans were awe-struck.
Famous builders begged the architect to allow them to erect the mosque; wealthy people came from afar to buy the plans; thieves devised schemes to steal them; powerful rulers considered taking them by force. Yet the architect locked himself in his study, and after staring at the plans for three days and three nights, burned them all.
The architect couldn’t stand the thought that the realized building would have been subject to the forces of degradation and decay, eventual collapse or destruction by barbarian hordes. During those days and nights in his study he saw his creation profaned and reduced to dust, and was terribly unsettled by the sight. Better that it remain perfect. Better that it was never built.