Nobody sits you down and says, here is your cage, please enter. It comes quieter than that. Dressed as advice from well-meaning people. As motivational content. As the collective consensus of everyone around you, steady, ambient, indistinguishable from common sense. By the time you think to question it, you have already embraced it as a way of life.
I grew up being told that a corporate position, a loan, an EMI, and the right social signals constituted a life well-lived. Not in those words. Nobody uses those words. The message arrives through inference, through what is rewarded and what is gently discouraged, through the slow accumulation of what everyone around you seems to take for granted, and what they pursue.
I do not remember choosing to be curious. It chose me, early and without permission. Other children watched cartoons; I watched cartoons too, but I also watched science documentaries: space, marine biology, the physics of how celestial bodies collapse and form. I watched A Brief History of Time and understood, in the particular way a child understands something before language catches up, that the universe was not what I had been handed. It was stranger, larger, and operating on rules that nobody around me seemed particularly exercised about. Simply put: there was more to life.
I asked questions. Adults gave answers designed to end the conversation. So I stopped asking adults.
I had early access to the internet, which my parents found mildly concerning. I would disappear for hours, not into anything assigned, but into whatever I had a question about. I remember one particular afternoon, armed with a retired Nokia that had become my portal. I started reading about the sun on Wikipedia. Not for school. I followed the links. By the following day I had notes on solar flares, nuclear fusion, geomagnetic storms, auroras. I did not understand more than half of what I had written down. I wrote it anyway. Because the questions would not stop.
In junior high, I corrected my computer teacher, informing him that the internet travelled through submarine cables laid on the ocean floor. He looked at me as though I had said something offensive. The next day he returned and told the class I was right. I was not certain whether to feel satisfied or depressed. I think both.
By higher secondary school I had become what my peers called the source code — the lad you consulted when you needed to know something about programming. I was average across most subjects and did not particularly mind. I had been performing consistently; simply not at the things the system was measuring.
Then I arrived at university and made the same mistake everyone makes. I learned frameworks. I pursued the roles the market rewards. I believed that knowing the right libraries was equivalent to understanding engineering. I thrived in computer organisation and networking — the theoretical material — but the arrangement felt hollow, and I did not yet have the vocabulary to explain why. I thought I was slow. I spent considerable time, and considerable mental health, pursuing the wrong things with greater intensity. The hollowness, I would eventually understand, was not a personal failing.
II. The Same Shape Underneath
The hollow feeling sent me looking in other directions. I had always had side quests — intellectual detours that ran parallel to whatever I was officially supposed to be doing. I took a course on the fundamentals of psychology. I went through lectures by Paul Bloom. I found MIT OpenCourseWare and watched Professor Jonathan Gruber's microeconomics lectures at odd hours, which is either a sign of genuine curiosity or a sleep disorder, possibly both.
I remember thinking, while working through markets, money, interest, game theory — that these tools could be used to hack human society. That, I would later understand, is precisely what has been happening.
Professor Gruber made a point in his introductory lecture that stuck with me: unlike engineering or mathematics, where quantities are fixed and precise, economics involves human sentiment, which is extraordinarily difficult to model. This interested me precisely because of what I had been reading in psychology. The precision differs — economics is not physics, and conflating them has caused its own category of damage. But the further I went into each field, the more I noticed something underneath all of them that was the same.
Rules. Constraints. Agents responding to incentives. Outputs that looked chaotic from the outside and inevitable once you saw the mechanism.
That was the click. The computer I was learning to programme, the economy I was learning to understand, the psychology I was reading — each was asking the same structural question in a different domain, with a different level of mathematical precision. The insight lived not in any one field but in the space between them.
I did not have a word for what I was doing at the time. I just kept noticing the same shape underneath everything.
What I found when I went looking deeper was not comforting.
III. The Architecture
The system we live in — the education we received, the media we consume, the work we do, the desires we feel — is not neutral. It was built. And it was built by people with very specific interests that were not yours, not mine, not most people's. Some of it was deliberate design. Most of it was emergent — the natural selection of structures that were good at capturing human attention and energy and converting them into profit.
There is no single villain, in the dramatic sense. It evolved. I am tempted to call it natural selection, but natural selection distributes outcomes across an ecosystem. Here, a very small number of people are doing extraordinarily well while the rest provide the substrate. That is not natural selection. That is something else.
But before I go further — I should show you that this is not just a theory. There is a body of research that explains, with uncomfortable precision, how a system like this operates on human minds. And then there are the case studies — real companies, real documents, real decisions — that show what it looks like when people who understand the mechanism deploy it deliberately.
How the Mechanism Works
In the 1960s, Martin Seligman ran experiments in which dogs received unavoidable electric shocks. After enough repetition, something broke in them — not physically, psychologically. When the conditions changed and escape became possible, the dogs did not try. They had concluded, from experience, that trying was pointless. The cage had been removed. The dog stayed put. Seligman called it learned helplessness, and subsequent research showed the same pattern in humans with depressing consistency. (Seligman & Maier, 1967)
Worth noting: humans are not dogs. We have metacognition, long-term planning, the ability to reframe a situation. There are three responses to a cage — thrash uselessly and burn out, give in and stop trying, or observe, understand the system, wait for the lever, and act when the moment arrives. The learned helplessness research explains why most people end up in response two. This piece is an argument for response three.
Then there is Stanley Milgram at Yale, 1961. Ordinary people — teachers, clerks, engineers — were told by an authority figure in a white coat to administer electric shocks to a stranger in the next room whenever the stranger answered a question incorrectly. The shocks were fake. The screaming was acted. The subjects did not know this. Sixty-five percent of them administered what they genuinely believed was the maximum voltage — labelled on the machine as XXX — Danger: Severe Shock — because a man in a lab coat told them the experiment required it. Not sadists. Not broken people. People responding to a structure that told them what to do and provided just enough institutional cover to keep going. (Milgram, 1963)
Hannah Arendt sat through Adolf Eichmann's trial in 1961 — the same year Milgram was running his experiment — and named what she saw: the banality of evil. Eichmann was not a monster. He was a bureaucrat. He followed orders because the system rewarded compliance and punished questioning. The implication is not that humans are irredeemably obedient. It is that the antidote is prior commitment — deciding in advance, before the pressure arrives, what you will not do regardless of what the system rewards. That decision cannot be made in the moment. The institutional cover is too persuasive.
And then Solomon Asch at Swarthmore — the one that keeps me up at night. He put one real subject in a room with seven confederates — people secretly working with the researcher — and asked them to identify which of three lines on a card matched a reference line. Simple. Obvious. Unambiguous. The confederates all gave the wrong answer. Roughly three-quarters of real subjects conformed to the incorrect answer at least once, despite the evidence of their own eyes. When asked why afterwards, many said they genuinely began to doubt what they were seeing. Not that they felt social pressure and caved — their perception of reality shifted to match the group. (Asch, 1955)
(A brief note: this was also the era when Yale's psychology department was essentially a government-funded laboratory for establishing how far human beings could be pushed before they broke. The regulation came later. The subjects, one assumes, came first and the therapy second. Different times.)
The point these studies collectively make is not that people are weak. It is that behaviour — and in Asch's case, perception itself — is extraordinarily sensitive to the structure of the environment. Change the structure, you change what people do. Change the group, you change what people see. The people running the current arrangement understand this at a level most of us do not. That is why they spend billions engineering the structure — the feed, the algorithm, the social proof, the manufactured consensus — and almost nothing on your capacity to see through it.
What does a system designed to extract maximum labour and consumption need its population to believe? That the current arrangement is natural and inevitable. That alternatives are naïve or dangerous. That discomfort is a personal problem, not a structural one. That ambition beyond a certain scale is unrealistic — not because anyone decided to cap it, but because the system never rewarded it and people stopped expecting it.
The Case Studies
This is not theory. This is documented history.
Tobacco
By the late 1950s, internal documents show, the major tobacco companies knew their products caused cancer. Not suspected. Knew. A 1962 report by R.J. Reynolds scientist Dr Alan Rodgman described the accumulated evidence against cigarettes as "overwhelming." A 1961 Arthur D. Little report sent to Liggett & Myers listed the biologically active materials in cigarette smoke as causing cancer, promoting cancer, and being directly poisonous. (American Association for Cancer Research, 2007)
What happened next is one of the most thoroughly documented cases of deliberate public deception in corporate history. On 14 December 1953, the heads of the major tobacco companies met at the Plaza Hotel in New York and hired a public relations firm — Hill & Knowlton — to manufacture doubt about the science. Not to fund research that might actually resolve the question. To manufacture controversy where there was scientific consensus. They published the "Frank Statement to Cigarette Smokers" in 1954, publicly claiming their products were not injurious to health. They ran this campaign for decades while knowing the opposite was true. In 2006, a US federal court found that the tobacco companies had falsely and fraudulently denied the cancer link, suppressed evidence, manipulated nicotine to enhance addiction, and marketed to children. The court required them to run corrective advertising. They appealed for years to avoid doing it. (Union of Concerned Scientists)
The reason this matters beyond the obvious is structural. The tobacco industry invented the disinformation playbook that every subsequent industry facing inconvenient science has borrowed. Sow doubt. Fund friendly research. Attack the credibility of independent scientists. Manufacture the appearance of controversy. Keep the public confused long enough for the profits to continue. Climate denial, the sugar industry's manipulation of nutrition science, pharmaceutical companies misrepresenting drug safety — the playbook is the same. Tobacco wrote it first.
Purdue Pharma and the Opioid Epidemic
OxyContin was approved in 1996. By 1997, Purdue Pharma's internal emails show executives discussing how to address addiction concerns while simultaneously expanding the drug's market. By 1999, the company's general counsel wrote internally that they had picked up references to abuse of their opioid products online. That same year, company officials knew OxyContin was being described on the street as the hottest thing going — forget Vicodin. (New York Times investigation via NC DOJ)
What did Purdue do with this knowledge? They continued marketing OxyContin as having lower addiction potential than competing opioids — "in the face of this knowledge," as federal prosecutors wrote in 2006. They trained sales representatives to cite a single five-sentence letter to the editor in the New England Journal of Medicine from 1980 — not a peer-reviewed study, a letter — as proof that opioids had less than a one percent addiction rate. That letter was cited over 600 times to support claims about opioid safety. The company maintained an internal roster called "Region Zero" listing over 1,800 doctors suspected of improper prescribing. They reported approximately 8% of them to authorities. They continued supplying the rest. (STAT News)
The number most worth sitting with: over 500,000 Americans died from opioid overdoses between 1999 and 2019. While this was happening, the Sackler family extracted four billion dollars from Purdue in profits. When opioid addiction became a public health crisis, Purdue's internal documents show the company exploring addiction treatment as an attractive market naturally linked to the crisis they had helped create. They named this initiative Project Tango. (STAT News, Massachusetts AG)
Facebook and Instagram
In 2019, Facebook's own internal researchers produced a slide presentation summarising their findings on Instagram and teenage mental health. One slide read: We make body image issues worse for one in three teen girls. Another noted that teenagers blamed Instagram for increases in the rate of anxiety and depression — a response that was unprompted and consistent across all age groups. An internal investigation that same year involved an employee creating a false account as a thirteen-year-old girl interested in diet tips. Within a short time, the algorithm was recommending accounts devoted to binge eating and anorexia. (Wall Street Journal Facebook Files, 2021)
When Mark Zuckerberg was asked at a congressional hearing in March 2021 whether Facebook had studied the app's effects on children's mental health, he said he believed the answer was yes. When senators asked to see the research, Facebook refused to provide it. Six months later, the Wall Street Journal published the research as part of their Facebook Files investigation — leaked by whistleblower Frances Haugen. Between 2010 and 2019, emergency room admissions for self-harm tripled among US girls aged ten to fourteen. Facebook's own researchers described teenagers as having an addict's relationship with the app: wishing they could spend less time on it, but finding themselves unable to stop. (CBS News, 60 Minutes)
The product continued unchanged.
The Three Firms and Voting Power Over Corporate America
In 2017, a research paper published by academics at the University of Amsterdam mapped the ownership of three asset management firms — BlackRock, Vanguard, and State Street — across the US stock market. Their finding: the Big Three had become the largest shareholder in 88% of S&P 500 companies. Together they managed more assets than all sovereign wealth funds combined, and over three times the global hedge fund industry. As of 2024, BlackRock managed $10.5 trillion, Vanguard $9.3 trillion, and State Street $4.3 trillion. (Fichtner, Heemskerk & Garcia-Bernardo, 2017)
Here is the part worth being precise about, because this argument is easy to distort in either direction. These firms do not secretly own corporate America in a conspiratorial sense. Most of the assets they manage belong to ordinary people's retirement accounts and index funds — when you invest in a Vanguard S&P 500 fund, Vanguard's ownership percentage of every company in that index increases slightly, because your money is pooled with millions of others. They are custodians of other people's capital, not owners of it in the traditional sense.
But custodianship at this scale comes with something that functions like ownership: voting power. Because most individual investors do not vote on corporate governance proposals — there are too many, and they are too complicated — the fund managers vote on their behalf through a process called proxy voting. A small team at BlackRock, a small team at Vanguard, a small team at State Street — together making governance decisions that affect nearly every major publicly listed company in America. As the Vanguard CEO himself said in 2015, the firm's predominantly passive management style had been mistakenly assumed to imply a passive attitude toward corporate governance. Nothing, he said, could be further from the truth. The academics who mapped this described it as "structural power" — not sought, not designed, but accumulated through the logic of scale. Three firms. Twenty-four trillion dollars. Voting rights over the majority of corporate America. No election required. (IR Impact, 2025)
These are not fringe cases. They are not exceptions to a generally benign system. They are documented examples of what happens when the incentive structure of profit operates without constraint — when the external cost of harm falls on people who are not at the table where the decisions are made. Each case has the same structure: private knowledge of harm, public denial, continued extraction, crisis, belated accountability. The tobacco playbook, running on new product categories with each generation.
The Operator Who Named It
One person who understood all of this before the case studies existed — who understood the mechanism from first principles and used it without apology — was Edward Bernays. Father of modern public relations, nephew of Sigmund Freud, and by any honest measure one of the most influential and least discussed figures of the twentieth century.
He wrote in his 1928 book Propaganda that the conscious manipulation of the organised habits and opinions of the masses was not just possible but necessary for the functioning of democratic society. He believed, and stated openly, that most people were not capable of determining what was good for them and required guidance from an invisible elite.
He was not a fringe figure. He worked for American presidents, Fortune 500 companies, and the United Fruit Company, for whom he helped manufacture public support for the CIA-backed overthrow of Guatemala's democratically elected government in 1954. He rebranded cigarettes as feminist symbols of liberation — "torches of freedom" — to open the female market for tobacco companies, effectively doubling their addressable customer base in a single campaign.
He used his uncle's research on the unconscious to sell products people did not need, to people he privately considered unable to think for themselves. The remarkable thing is not the technique — it is the confidence. He wrote it all down, published it, and called it Propaganda because in 1928 he saw no reason to hide it. The people he was manipulating were not going to read it.
Here is the part that should sit with you: Bernays explicitly called the masses stupid. Not in private correspondence — in print, on purpose. He needed people to be manageable. An educated, curious, self-directing population was not his customer. It was his problem.
The question I ask myself — and I will return to this honestly toward the end — is whether I am any different, writing a piece that explains the manipulation to people I have already decided can handle the truth. I am not sure the answer is clean.
So when you feel the instinct to dismiss systemic critique as paranoia — sit with that instinct for a moment and ask where it came from. Scepticism toward power is healthy. Reflexive scepticism toward critiques of power is a different thing entirely.
The question is not whether the system shapes your beliefs. It does. The question is whether you examine that shaping or simply inhabit it.
The Italian Marxist philosopher Antonio Gramsci, writing from a fascist prison cell in the 1930s, called this cultural hegemony — the process by which the dominant class maintains control not through force but through the manufacturing of consent, making its own worldview the assumed common sense of the entire society. You do not need a gun if people have already internalised the values that serve you. The most effective prison is the one the prisoner believes is not a prison at all. He was writing about Mussolini's Italy. He might as well have been writing about today's social media.
The word for what Bernays built, and what every algorithm since has refined, is not propaganda. That word has become too dramatic, too foreign, too associated with other countries and other eras. The word that actually fits is normal. The most powerful propaganda is the kind that just feels like Tuesday.
IV. The Default Human
Here is the silver lining — and it is not a small one.
In 1999, Sugata Mitra installed a computer in a wall in a Delhi slum. No instructions. No teacher. No English. Just a screen and a keyboard accessible to children who had never seen one. Within hours they were navigating it. Within days they had taught themselves enough English to use it. Within weeks they were teaching each other. Nobody told them to. Nobody rewarded them for it. They were just humans encountering something they did not understand and doing what humans apparently do by default — trying to figure it out. (Mitra, 2000)
Carol Dweck spent decades demonstrating that intellectual capacity is not fixed at birth — it responds directly to environment and belief. Put a person in a structure that tells them their ability is malleable and their performance improves. Put them in a structure that tells them they are either smart or not, and they stop trying the moment it gets hard. The capacity was always there. The structure determined whether it emerged. (Dweck, 2006)
Then there is the Flynn effect. IQ scores rose roughly three points per decade throughout the twentieth century, across dozens of countries. That rise cannot be genetic — the timeline is too short. It has to be environmental. Conditions improved; cognitive capacity built up in response. And here is the part worth sitting with: the effect has been reversing in some developed countries since the 1990s. Researchers link the reversal to attention fragmentation. The environment built it up. A different environment is pulling it back down. (Bratsberg & Rogeberg, 2018)
A human being's natural state — the evidence suggests — is curiosity, creativity, and the desire to understand and build. It is what the data shows when you remove the suppression and leave a human alone with a question. Everything that deviates from that default is a distortion introduced by a system that benefits from your passivity.
Now — before you accuse me of romanticising this. I am not saying everyone becomes Einstein or Picasso. The distribution stays. People vary in curiosity, focus, tolerance for frustration, and how far they want to go. That is real. What I am arguing is narrower and more provable than utopia. If you change the environment — reduce the noise — you do not erase the curve. You shift it. The centre moves. The floor rises. The path to the deep end becomes visible instead of hidden behind a casino.
Right now, most people never find the door — not because they lack the capacity, but because nobody told them it existed, and the algorithm made sure something louder was always in the way.
Reading, researching, building things, making art, engaging with hard ideas — this is not extraordinary behaviour. It is the baseline. We have just been convinced otherwise.
V. The Chain Nobody Designed
I am an aspiring systems programmer. I am working through books — CS:APP, Patt and Patel, OSTEP, CLRS — that most people in my field have never opened. I want to be careful about how I say this, because it is not about intelligence. The people who do not read these are not less capable. Most of them are extremely capable. The market told them what to be curious about.
Nobody planned this arrangement. It just happened.
Internet Explorer 5, released in 1999, was Microsoft's web browser, and it dominated the market after defeating Netscape. Then Microsoft stopped investing in browser technology; they had an operating system running on most PCs, and that was a far stronger and more reliable business. Then Chrome entered the picture. Google had enormous reasons to move the web forward. Billions were being made, internet companies were everywhere, and the web had to evolve, despite the fact that it was never a particularly reliable platform. The hyper-specialised roles that followed — frontend developer, React developer, CSS developer — emerged because things were made complicated. And so we arrive at job titles that have no equivalent anywhere else in engineering. You will not find a Torque Configuration Engineer in mechanical engineering, or a Weld Bead Developer in aerospace. The analogy is not entirely fair, but it is not entirely wrong either.
A framework economy followed. Stack Overflow's developer surveys show JavaScript becoming the most commonly used programming language for twelve consecutive years from 2012 onward. Bootcamps emerged to supply that economy with labour — by 2020 there were over 500 coding bootcamps in the US alone, almost all of them teaching JavaScript frameworks. Hiring pipelines standardised on framework knowledge because it was cheap to filter for and fast to verify. LinkedIn job postings for "React developer" grew by over 300% between 2017 and 2022.
Nobody at any point in that chain sat down and decided to limit what these kids would know. Each step was locally rational. Google needed speed. Developers followed the jobs. Bootcamps followed the developers. Hiring managers followed the bootcamps. And at the end of that chain, a kid in a small town where just getting a software job feels like winning was told that learning to use a library was the bar. That the layer on top of the layer on top of the layer on top of the actual machine was the destination, not the starting point.
That is what emergent systems do. They produce outcomes that look designed without requiring a designer. The cage has no architect. It has only occupants.
I hope the current wave of LLM agents forces a reckoning here. There will be a period of unemployment and disappointment for those who invested years in market-adjacent skills instead of foundational engineering. That sounds harsh. But sometimes the door only appears after the previous room collapses. When the question becomes unavoidable, perhaps the books get opened.
Note: I am speaking of the trend, not every individual. There are knowledgeable, first-principles engineers who are excellent at problem-solving and whose professional identities do not revolve around a single stack. This is not about them.
VI. The Machine That Produces Machines
Between 1969 and 1972, twelve human beings walked on the moon. The computer that guided them there — the Apollo Guidance Computer — had 4 kilobytes of RAM. Not megabytes. Four kilobytes. Its programme memory was stored in something called core rope memory, where software was literally hand-woven into magnetic cores by women at MIT, nicknamed "little old ladies" for the delicacy of the work. Margaret Hamilton, who led the software team, invented the term software engineering because she wanted the discipline taken seriously.
It is now 2026. We have not been back to the moon in over fifty years. We have not reached Mars. We carry supercomputers in our pockets and use them primarily to watch fifteen-second videos of strangers performing dances.
The game developer Jonathan Blow gave a talk in 2019 called Preventing the Collapse of Civilization in which he argued, with genuine alarm, that software is getting worse — not better — over time. That the compounding of abstractions upon abstractions has produced a civilisation that runs on technology almost nobody fully understands, maintained by a workforce trained to operate at the top of the stack and utterly lost below it. He said something else once, in a live stream, that I think about more than the formal talk — that if you got an internship at NASA or SpaceX and all you did was bring coffee to the engineers, you would learn more about real engineering than you would spending years working inside a web framework. The conversations in the hallway, the problems people are wrestling with, the way engineers think about constraints and trade-offs — that ambient exposure to people solving hard problems against hard reality is worth more than years of tutorials about tools that abstract the hard reality away.
Which raises a question nobody asks in engineering school — at least, not mine: what is engineering, actually?
At its core, engineering is applied science: taking the laws of reality (physics, chemistry, mathematics, information theory) and working within their constraints to build something that functions. The constraints are not obstacles. They are the definition of the problem. Nobody said this to me in school. What was said instead was: here is how to pass the examination; go and memorise. The laws of reality that the tools are built on were apparently someone else's problem.
Decades ago the term "full stack" meant an engineer who understood everything from infrastructure to silicon. Now it means someone who knows a JavaScript framework and can operate an LLM. This is what the market wants — engineers who can produce specific outputs within a specific abstraction layer, quickly and cheaply. Which is why we have arbitrary code execution vulnerabilities in Notepad. In 2024. In a text editor.
Richard Feynman wrote in Surely You're Joking, Mr. Feynman! about his experience with Brazilian physics education. Students could recite definitions and solve standard problems with impressive fluency. But when he asked a question that required applying the concept to a novel situation — silence. They had learned the words without acquiring the understanding. He was disturbed enough to make it the subject of a public lecture. This was Brazil in the 1950s. It is everywhere now.
John Taylor Gatto was New York City Teacher of the Year in 1989 and 1990, and New York State Teacher of the Year in 1991. He then resigned publicly, in an op-ed in the Wall Street Journal, and spent the rest of his career arguing that the school system was not broken — it was working exactly as designed. In Dumbing Us Down he wrote that mass compulsory schooling was built on a Prussian military model adopted in the nineteenth century for the explicit purpose of producing obedient workers and soldiers. People who could follow instructions, perform repetitive tasks, and never ask inconvenient questions about why. The reformers who designed American mass education in the late 1800s wrote openly about their goals. You were not supposed to emerge from school as a thinker. You were supposed to emerge as a productive unit.
It is no coincidence that education as a field of academic research receives pennies compared to the funding that flows into AI or defence. A genuinely better model of education — one that produced critical, curious, systems-literate people at scale — would be among the most disruptive forces in human history. It would undermine every industry built on information asymmetry, every political system built on manufactured consent, every economy built on consuming what you are told to want. The incentive not to fund that research is enormous.
I have a folder on my computer. It contains thirty-one books. CS:APP. CLRS. OSTEP. Crafting Interpreters. Hacking: The Art of Exploitation. Patt and Patel. The Linux Programming Interface. Feynman's memoir. Sapiens. Man's Search for Meaning. The people who wrote these books wrote them the way people write when they are trying to transfer genuine understanding, not produce a product. You can feel the difference on the first page. There is a density and an honesty in a book written by someone who spent thirty years understanding something and then decided to explain it. It dwarfs any Udemy course about a framework that will be deprecated in four years — by a gap so large it becomes almost comic.
Aaron Swartz understood this. He was twenty-six years old when he died in 2013. He had already co-created RSS, contributed to the development of Creative Commons, and helped stop SOPA — the legislation that would have given corporations sweeping power to censor the internet. He was facing federal prosecution for downloading academic journal articles from JSTOR through MIT's network, intending to make them freely available. Academic research largely funded by public money, locked behind paywalls so that publishers could extract rent from human knowledge. He believed that information asymmetry was a form of violence and that open access to knowledge was a moral cause. The prosecutors who pursued him with charges that carried up to thirty-five years in prison disagreed. He did not survive the case.
The information is out there. MIT OpenCourseWare exists because people fought for it and some of them paid a price. What do most people do with it? The same thing the system trained them to do — optimise for the path of least resistance and call it a career. Not because they are lazy. Because nobody told them to look. And the casino is louder than the library.
VII. The Intersection Is Where the Breakthroughs Live
Somewhere in the last century we arrived at the consensus that expertise meant narrowness. Pick a lane. Go deep. The generalist is a dilettante. The polymath is a myth. Focus.
This is historically illiterate. And the history is worth knowing precisely because it preempts the obvious objection — that the examples I am about to give are outliers, geniuses, exceptional people who prove nothing about the general case.
They are not cited as goals. They are cited as existence proofs of what happens at intersections.
Consider Claude Shannon. He studied both electrical engineering and mathematics, then wrote a master's thesis in 1937 applying Boolean algebra — a branch of abstract mathematics developed in the 1800s — to electrical circuits. That thesis is sometimes called the most important master's thesis of the twentieth century. It is the theoretical foundation of every digital computer ever built. The insight came from someone sitting at the intersection of two fields, looking across the gap and seeing what no specialist on either side could see.
Consider the Wright brothers. Bicycle mechanics. No formal engineering training. They read everything they could find about aeronautics and approached the problem of flight with a mechanic's intuition — iterative, empirical, undogmatic. The trained aeronautical engineers of their time were approaching the problem with more credentials and significantly less success.
Consider Darwin. His theory of natural selection was partly inspired by reading Thomas Malthus's Essay on the Principle of Population — a work of economics about resource scarcity and competition. A naturalist, reading economics, synthesised one of the most powerful ideas in the history of science.
Noam Chomsky built the formal grammar theory that underlies every compiler ever written. Then he used that same structural thinking to write Manufacturing Consent, one of the most rigorous analyses of how media serves power ever produced. Same brain. Same method. Two entirely different domains.
John von Neumann contributed foundational work to quantum mechanics, game theory, computer architecture, and economics. Not dabbling. Foundational work.
Richard Feynman played bongo drums, painted under a pseudonym, picked locks as a hobby, and cracked open the physics of superfluidity and quantum electrodynamics. He did not believe these things were in tension.
The pattern is not genius. The pattern is intersection.
Now here is the historical context that makes this argument more honest, not less. The Trivium and Quadrivium — the seven liberal arts that formed the backbone of Western education from ancient Greece through the medieval period — treated knowledge as unified by design. Grammar, logic, and rhetoric first. Then arithmetic, geometry, music, and astronomy. And music there did not mean learning to play an instrument. It meant ratios, harmony, the mathematical structure of sound. When Niels Bohr told Oppenheimer that algebra is like music, he was not being poetic for effect. He was a man trained in that older tradition speaking its native language — mathematics as something you feel the patterns of, not merely compute.
In the period roughly 1500 to 1700, certain educated people could move across domains, follow curiosity, and connect ideas freely, because the system was not yet rigid. You see the catch here — only a handful of people anywhere in the world were educated in that way. For most, learning came through apprenticeship, family trade, farming, craft, survival. No abstract education at all. Then came the transition from pre-industrial to industrial society, roughly 1700 to 1800. Scientific societies like the Royal Society emerged. Journals. More structured disciplines. Physics separated slowly from philosophy, chemistry from alchemy, biology became its own thing. Knowledge started breaking into compartments. The further we progressed, the more education became less deep and less curiosity-driven — but more accessible, organised, and efficient at scale.
So the picture is not simply that education used to be better and now it is worse. We took a deep but narrow system and made it wide but shallow. Mass education is a genuine achievement. The question is whether widening the distribution required trading away depth — or whether that trade was a choice, made by people optimising for scale and compliance, that we could in principle revisit.
The Direct Instruction meta-analysis answers that question with fifty years of data. Siegfried Engelmann's method — breaking skills into explicit components, testing each one, scripting lessons precisely, grouping students by ability rather than age, requiring mastery before advancement — was studied across 328 research designs examining outcomes in reading, mathematics, language, and other subjects. The results were consistently positive across fifty years, across every demographic, every income level, every grade. The authors described most of the estimates as "huge" by the standard criteria used in education research. (Stockard et al., 2018)
The system does not use it. Ask yourself why.
Genuine creativity — the kind that produces new things rather than recombining existing ones — tends to emerge at the intersection of domains. When you understand how information systems work, and also how human psychology works, and also how economic incentives work, you see things that the pure technologist, the pure psychologist, and the pure economist cannot individually see. The insight lives in the gap between the disciplines.
The insistence that you specialise early and narrow is not in your interest. It is in the interest of whoever needs a reliable supply of domain-specific labour. A person who understands only one thing is far more employable and far less dangerous than a person who understands how things connect.
I am not asking for a world of philosophers who refuse to dig ditches. I am asking for a world where the person digging the ditch knows why it matters, who benefits from it, and whether it is worth digging at all. That is not chaos. That is dignity. The difference between a person who chooses to do something and a person who does it because nobody told them they had a choice is not a small difference. It is the whole difference.
VIII. Closing
I want to end honestly rather than inspirationally.
You cannot fully exit the system. You are in it. So am I. The water is everywhere and we are wet.
But there is a difference between drowning and swimming. The difference is whether you can see the water.
Every tool you pick up — whether it is mathematics, history, a programming language, a book about how propaganda works — is a small increment of that vision. It does not make you free in some absolute sense. It makes the next manipulation slightly less effective. It makes the next trap slightly more visible. It shifts, incrementally, the balance between being the object of other people's systems and being the subject of your own.
The people who built the current arrangement were not smarter than you. They were earlier. They understood the tools — psychology, media, incentives, information asymmetry — before most people knew those tools existed. That advantage is shrinking. The information is out there. The books are increasingly free. The gap between what the powerful understand about human behaviour and what an ordinary curious person can understand is smaller now than it has ever been.
The question is whether you care enough to do the work.
I am trying to. This piece is part of that effort. I am a programmer from a small town who reads economics, game theory, mathematics, psychology, art history, and whatever else I find fascinating — not because it was assigned to me, but because I cannot look at a system without wanting to understand how it actually works. I have thirty-one books on my hard drive. I am working through them slowly, building things, and I am not entirely sure where any of it leads — except that it is the only honest way I know how to spend a life.
That instinct, I think, is worth more than any credential or skill.
The cage is real. It is also made of ideas. And ideas, unlike steel, can be seen through.
Resources & Bibliography
Psychology & Behavioural Science
Seligman, M. E. P., & Maier, S. F. (1967). Failure to escape traumatic shock. Journal of Experimental Psychology, 74(1), 1–9. https://doi.org/10.1037/h0024514
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371–378. https://doi.org/10.1037/h0040525
Milgram, S. (1974). Obedience to Authority: An Experimental View. New York: HarperCollins.
Asch, S. E. (1955). Opinions and social pressure. Scientific American, 193(5), 31–35. https://www.scientificamerican.com/article/opinions-and-social-pressure/
Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1–70.
Education & Cognitive Development
Mitra, S., & Rana, V. (2001). Children and the internet: Experiments with minimally invasive education in India. British Journal of Educational Technology, 32(2), 221–232. https://doi.org/10.1111/1467-8535.00233
Dweck, C. S. (2006). Mindset: The New Psychology of Success. New York: Random House.
Stockard, J., Wood, T. W., Coughlin, C., & Rasplica Khoury, C. (2018). The effectiveness of Direct Instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), 479–507. https://doi.org/10.3102/0034654317751919
Intelligence & the Flynn Effect
Pietschnig, J., & Voracek, M. (2015). One century of global IQ gains: A formal meta-analysis of the Flynn effect (1909–2013). Perspectives on Psychological Science, 10(3), 282–306. https://doi.org/10.1177/1745691615577701
Bratsberg, B., & Rogeberg, O. (2018). Flynn effect and its reversal are both environmentally caused. Proceedings of the National Academy of Sciences, 115(26), 6674–6678. https://doi.org/10.1073/pnas.1718793115
Finance & Corporate Power
Fichtner, J., Heemskerk, E. M., & Garcia-Bernardo, J. (2017). Hidden power of the Big Three? Passive index funds, re-concentration of corporate ownership, and new financial risk. Business and Politics, 19(2), 298–326. https://doi.org/10.1017/bap.2017.6 — Open access PDF: https://pure.uva.nl/ws/files/25530222/hidden_power_of_the_big_three.pdf
Propaganda & Manufactured Consent
Bernays, E. (1928). Propaganda. New York: Horace Liveright. — Full text: https://archive.org/details/EdwardBernaysPropaganda
Chomsky, N., & Herman, E. S. (1988). Manufacturing Consent: The Political Economy of the Mass Media. New York: Pantheon Books. — Archive: https://archive.org/details/pdfy-NekqfnoWIEuYgdZl
Corporate Case Studies — Primary & Investigative Sources
Tobacco
Cummings, K. M., et al. (2007). The cigarette controversy. Cancer Epidemiology, Biomarkers & Prevention, 16(6), 1070–1076. https://aacrjournals.org/cebp/article/16/6/1070/260310/
US District Court, District of Columbia. (2006). United States v. Philip Morris USA, Inc., et al., Civil Action No. 99-2496.
Union of Concerned Scientists. The Disinformation Playbook. https://blog.ucs.org/anita-desikan/how-tobacco-companies-created-the-disinformation-playbook/
Proctor, R. N. (2012). Golden Holocaust: Origins of the Cigarette Catastrophe and the Case for Abolition. Berkeley: University of California Press.
Purdue Pharma
NC Department of Justice. Origins of an Epidemic: Purdue Pharma Knew Its Opioids Were Widely Abused. https://ncdoj.gov/opioid/news/origins-of-an-epidemic-purdue-pharma-knew-its-opioids-were-widely-abused/
STAT News. OxyContin: A History Told Through Purdue Pharma Documents. https://www.statnews.com/2019/12/03/oxycontin-history-told-through-purdue-pharma-documents/
STAT News. Purdue Pharma Explored Addiction Treatment as "Attractive Market." https://www.statnews.com/2019/01/30/purdue-pharma-oxycontin-maker-explored-addiction-treatment/
Office of the Massachusetts Attorney General. (2019). Commonwealth of Massachusetts v. Purdue Pharma L.P., et al. https://www.mass.gov/news/ag-healey-files-suit-against-purdue-pharma
Facebook / Instagram
Wells, G., Horwitz, J., & Seetharaman, D. (2021, September 14). Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show. Wall Street Journal. — Congressional submission: https://www.congress.gov/117/meeting/house/114054/documents/HHRG-117-IF02-20210922-SD003.pdf
CBS News, 60 Minutes. (2022). Facebook and Instagram and dangerous content. https://www.cbsnews.com/news/facebook-instagram-dangerous-content-60-minutes-2022-12-11/
Books Referenced in the Text
Arendt, H. (1963). Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Viking Press.
Gatto, J. T. (1992). Dumbing Us Down: The Hidden Curriculum of Compulsory Schooling. Gabriola Island: New Society Publishers.
Gramsci, A. (1971). Selections from the Prison Notebooks. (Q. Hoare & G. N. Smith, Eds. & Trans.) New York: International Publishers. (Written 1929–1935.) https://archive.org/details/prisonnotebooks
Feynman, R. P. (1985). Surely You're Joking, Mr. Feynman! New York: W. W. Norton. https://archive.org/details/surelyyourejokin00feyn
Blow, J. (2019). Preventing the Collapse of Civilization. DevGAMM (recorded talk). https://www.youtube.com/watch?v=pW-SOdj4Kkk
Extended Reading List
Start here — both free on YouTube
- Adam Curtis — Century of the Self (2002): https://www.youtube.com/watch?v=eJ3RzGoQC4s
- Adam Curtis — HyperNormalisation (2016): https://www.youtube.com/watch?v=thLgkQBFTPw
See through systems — foundational
- George Orwell — 1984 (1949): https://www.george-orwell.org/1984
- Aldous Huxley — Brave New World (1932): https://archive.org/details/brave-new-world_aldous-huxley
- Guy Debord — The Society of the Spectacle (1967): https://www.marxists.org/reference/archive/debord/society.htm
Power and capital
- Daron Acemoglu & James Robinson — Why Nations Fail (2012): https://whynationsfail.com/
- David Graeber — Debt: The First 5000 Years (2011): https://archive.org/details/debt-first-5000-years
- Thomas Piketty — Capital in the 21st Century (2013): https://piketty.pse.ens.fr/en/capital21c2
- Ha-Joon Chang — 23 Things They Don't Tell You About Capitalism (2010)
- Naomi Klein — The Shock Doctrine (2007): https://www.naomiklein.org/shock-doctrine
- Shoshana Zuboff — The Age of Surveillance Capitalism (2019): https://shoshanazuboff.com/book/
Game theory and incentive structures
- Robert Axelrod — The Evolution of Cooperation (1984): https://archive.org/details/evolutionofcoope00axel
- Thomas Schelling — The Strategy of Conflict (1960)
- Mancur Olson — The Logic of Collective Action (1965): https://archive.org/details/logiccollectivea00olso
Complexity and systems thinking
- Geoffrey West — Scale (2017)
- Nassim Taleb — Fooled by Randomness → The Black Swan → Antifragile: https://www.fooledbyrandomness.com/
- James C. Scott — Seeing Like a State (1998): https://archive.org/details/seeinglikestate
Mass behaviour and crowd mechanics
- Robert Cialdini — Influence (1984): https://www.influenceatwork.com/book/
- Gustave Le Bon — The Crowd (1895): https://archive.org/details/crowdstudyofpopu00lebo
- Charles Mackay — Extraordinary Popular Delusions and the Madness of Crowds (1841): https://archive.org/details/extraordinarypop00mack
Deep structure and social theory
- Michel Foucault — Discipline and Punish (1975): https://archive.org/details/disciplinepunish00fouc
- Erving Goffman — The Presentation of Self in Everyday Life (1959): https://archive.org/details/presentationofse00goff
- Pierre Bourdieu — Distinction (1979): https://archive.org/details/distinction00pier
Mathematics as a lens on the world
- Darrell Huff — How to Lie with Statistics (1954): https://archive.org/details/how-to-lie-with-statistics
- Jordan Ellenberg — How Not to Be Wrong (2014): https://www.jordanellenberg.com/how-not-to-be-wrong/
- Daniel Kahneman — Thinking, Fast and Slow (2011)
- George Pólya — How to Solve It (1945): https://archive.org/details/howtosolveit00geor
- Richard Hamming — The Art of Doing Science and Engineering (1997): https://archive.org/details/artofdoingscienc0000hamm
- Claude Shannon & Warren Weaver — A Mathematical Theory of Communication (1949): https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
- James Gleick — The Information (2011)
All journal DOIs resolve to stable links. Where papers are behind paywalls, your institutional library or Unpaywall (https://unpaywall.org) may help. Archive.org links are freely accessible.