This is not a conventional programming or tech blog. This is a rant about the system I live in.
To give you context — I grew up being told that a corporate job with a decent paycheck, a loan, an EMI, and social status was the good life. Well, not in those exact words; nobody ever sits you down and says "here is your cage, please enter." It's quieter than that. It comes dressed as advice from well-meaning elders and peers, as motivational content, as the collective consensus of everyone around you.
I do not remember choosing to be curious. It just happened early. While other kids were watching cartoons, I was glued to Discovery Channel and whatever science program I could find — documentaries about space, about animals, about how things were built and why they collapsed. I asked questions. Adults gave me answers that felt like they wanted to end the conversation rather than continue it. So I stopped asking adults.
One fortunate thing was that I had early access to the internet, so I had already developed a habit my parents found a little concerning: I would disappear into my room and read and watch things for hours. Not things that were assigned. Things I wanted to understand. This was also my art phase, my film phase, my fascination-with-perspective-and-lighting-in-painting phase. My parents saw a kid who was not performing consistently across all subjects in school. I was, from my own vantage point, performing extremely consistently — just not at the things school was measuring.
I remember learning that in the early fifteenth century, Filippo Brunelleschi, an architect and goldsmith, not a traditional mathematician, invented linear perspective by mapping geometric optics onto a mirror and a painting panel. He applied ray optics to solve an artistic problem. A craftsman reverse-engineered how the eye processes space and handed Renaissance painters a tool that changed what humans could express on a flat surface for the next five hundred years. It struck me even then, though I did not have the words for it, that folks like Hawking and Brunelleschi were doing the same thing. Pulling back the surface of how things appeared to find the structure of how they actually worked. Science and art as two different entry points into the same obsession.
My parents found this worrying, which I understand. A kid spending hours watching documentaries about black holes, Einstein-Rosen bridges, and spacetime on TV, and reading about art movements when he should have been preparing for entrance exams; from their side of it, that looked like losing the plot. From mine, the questions just would not stop in my head. I lowered myself into the internet looking for answers and kept finding more questions. Which is either the best or the worst thing the internet can do to a teen.
I am an aspiring systems programmer. I am reading about compilers, operating systems, computer architecture. I am working through the books (CS:APP, Patt and Patel, OSTEP, CLRS) that most people in my field will never open. I want to be careful here about how I frame it, because it is not about intelligence. The people who do not read these books are not less capable. Most of them are extremely capable. The market told them what to be curious about, and that is deeply depressing.
Think about what that actually means. Somewhere in a boardroom, incentive structures were designed (for hiring pipelines, for bootcamp curricula, for what gets upvoted on developer forums), and those structures trickled down and told a kid in a small town, where just getting a software job feels like winning, that learning to use a library is the bar. That the layer on top of the layer on top of the layer on top of the actual machine was the destination, not the starting point. Not because anyone sat down and said let us limit what these kids know, but because the market rewarded certain outputs and the education system, as it always has, oriented itself toward those outputs and called it preparation.
I hope the current wave of LLM agents forces a reckoning here. There will be a period of unemployment and disappointment for those who invested years in market-adjacent skills instead of foundational engineering. That sounds harsh. But sometimes the door only appears after the previous room collapses. When the question becomes unavoidable. Maybe then books get opened.
The systems inside the computer and the systems outside it did not feel like separate interests for long.
It started with an online course called Fundamentals of Psychology by Prakhar Gupta. (Yes, he used to do this before he talked about aliens in podcasts.) I wanted to understand my own brain — go meta, if I have to put it simply. Why I thought the way I did, why other people seemed to operate on entirely different logic, whether there was a map underneath the apparent chaos of human behaviour.
Then economics found me, through some random research I had set out on for unrelated reasons — the way most of the important things in my life have arrived. I ended up on MIT OpenCourseWare, watching Professor Jonathan Gruber's microeconomics lectures at odd hours, which is either a sign of genuine curiosity or a sleep disorder, possibly both. And somewhere in those lectures I landed on game theory and thought: wait. This is the same thing. Model the incentives accurately enough and the behaviour follows. You do not need to know the person. You need to know the structure they are operating inside, the values and beliefs. I did not have a word for what I was doing at the time. I was not yet calling myself a systems thinker or anything else. I just kept noticing that the things I found interesting all had the same shape underneath. Rules. Constraints. Agents responding to incentives. Outputs that looked chaotic from the outside and inevitable once you saw the mechanism. That is what I am actually interested in. Not CS plus a hobby. One thing, seen from multiple angles.
What I found when I went looking was not comforting.
The system we live in — the education we received, the media we consume, the work we do, the desires we feel — is not neutral. It was built. And it was built by people with very specific interests that were not yours, not mine, not most people's. Some of it was deliberate design. Most of it was emergent, the natural selection of structures that were good at capturing human attention and energy and converting them into profit. Not a single villain in the dramatic sense; it just evolved. I am tempted to call it natural selection, but there are humans in the loop, so technically I cannot. You get the gist of it, right?
This piece is my attempt to show you the architecture.
Because here is what I believe, and I want to be honest that it is a belief, not just an analysis: a human being's natural state is curiosity, creativity, and the desire to understand and build. Everything that deviates from that is a distortion introduced by a system that benefits from your passivity. Reading, researching, building things, making art, engaging with hard ideas; this is not extraordinary behaviour. It is the baseline. We have just been convinced otherwise.
The First Objection
Before we go further I want to address the thought you might already be having.
"This all sounds a bit much. Isn't this just conspiracy thinking? Isn't the simpler explanation that life is genuinely hard and most people are just tired? Isn't this easier than actually doing the work?"
This is a reasonable objection. It is also — and I say this carefully — exactly the kind of thinking that benefits the system you're exploited by.
Let me be specific. Martin Seligman in the 1960s ran experiments where dogs received unavoidable electric shocks. After enough repetition, something broke in them — not physically, psychologically. When the conditions changed and escape became possible, the dogs did not try. They had concluded, from experience, that trying was pointless. The cage had been removed. The dog stayed put. Seligman called it learned helplessness, and subsequent research showed the same pattern in humans with depressing consistency.
Then there is Stanley Milgram at Yale, 1961. Ordinary people — teachers, clerks, engineers, were told by an authority figure in a lab coat to administer electric shocks to a stranger in the next room whenever they answered a question wrong. The shocks were fake. The screaming was acted. The subjects did not know this. Sixty-five percent of them administered what they genuinely believed was the maximum voltage, labelled on the machine as XXX — Danger: Severe Shock, because a man in a lab coat told them the experiment required it. Not sadists. Not broken people. People responding to a structure that told them what to do and provided just enough institutional cover to keep going.
And then Solomon Asch at Swarthmore, which is the one that keeps me up at night. He put one real subject in a room with seven confederates — people secretly working with the researcher — and asked them all to identify which of three lines on a card matched a reference line. Simple. Obvious. Unambiguous. The confederates all gave the wrong answer. And roughly three quarters of real subjects conformed to the incorrect answer at least once, despite the evidence of their own eyes. When asked why afterward, many said they genuinely began to doubt what they were seeing. Not that they felt social pressure and caved — that their perception of reality shifted to match the group.
(A brief note: this was also the era when Yale's psychology department was essentially a government-funded laboratory for figuring out how far human beings could be pushed before they broke. The regulation came later. The subjects, one assumes, came first and the therapy second. Different times.)
The point these studies collectively make is not that people are weak. It is that behaviour — and in Asch's case, perception itself — is extraordinarily sensitive to the structure of the environment. Change the structure and you change what people do, just as changing the group changes what people see. The people running the current arrangement understand this at a level most of us do not. That is why they spend billions engineering the structure — the feed, the algorithm, the social proof, the manufactured consensus — and almost nothing on your capacity to see through it. Think about it from the overlords' perspective. What does a system designed to extract maximum labour and consumption need its population to believe? That the current arrangement is natural and inevitable. That alternatives are naive or dangerous. That discomfort is a personal problem, not a structural one. That ambition beyond a certain scale is delusion.
One such overlord from the past was Edward Bernays, the father of modern public relations, nephew of Sigmund Freud, and by any honest measure one of the most influential and least discussed figures of the 20th century. He wrote in his 1928 book Propaganda that the conscious manipulation of the organised habits and opinions of the masses was not just possible but necessary for the functioning of democratic society. He believed, and stated openly, that most people were not capable of determining what was good for them and required guidance from an invisible elite.
He was not a fringe figure. He worked for American presidents, Fortune 500 companies, and the United Fruit Company, for whom he helped manufacture public support for the CIA-backed overthrow of Guatemala's democratically elected government in 1954. He rebranded cigarettes as feminist symbols of liberation — "torches of freedom" — to open the female market for tobacco companies, doubling their addressable customer base in a single campaign.
He used his uncle's studies to sell products that people didn't need. He understood that if you could control what people wanted, you controlled what they did. The remarkable thing is not the technique; it is the confidence. He wrote it all down, published it, and called it Propaganda, because in 1928 he saw no reason to hide it. The people he was manipulating were not going to read it.
Here is the part that should sit with you: Bernays explicitly called the masses stupid. Not in private correspondence — in print, on purpose. He justified the manipulation on the grounds that people could not be trusted to know what was good for them. The extraction was not incidental to his worldview — it was load-bearing. He needed people to be manageable. An educated, curious, self-directing population was not his customer.
The question I ask myself — and I will return to this honestly toward the end — is whether I am any different, writing a piece that explains the manipulation to people I have already decided can handle the truth. I am not sure the answer is clean.
So when you feel the instinct to dismiss systemic critique as paranoia — sit with that instinct for a moment and ask where it came from. Skepticism toward power is healthy. Reflexive skepticism toward critiques of power is a different thing entirely.
So the question is not whether the system shapes your beliefs. It does. The question is whether you examine that shaping or simply inhabit it.
The Italian Marxist philosopher Antonio Gramsci, writing from a fascist prison cell in the 1930s, called this cultural hegemony — the process by which the dominant class maintains control not through force but through the manufacturing of consent, by making its own worldview the assumed common sense of the entire society. You do not need a gun if people have already internalized the values that serve you. The most effective prison is the one the prisoner believes is not a prison at all.
He was writing about Mussolini's Italy. He might as well have been writing about your Instagram feed.
The Toolkit
Here is the good news. The system is not magic. It operates through identifiable mechanisms. And mechanisms can be understood, which means they can be navigated.
The most powerful tool I know is what I will call incentive mapping. For any claim, any institution, any piece of media, any policy — ask who benefits from you believing it. Follow that thread with the trail of questions. Bernays got paid by tobacco companies. Facebook's revenue is proportional to your time on platform. The algorithm has shareholders. The news channel has an owner. None of this automatically makes the content wrong — but it gives you a coordinate. It tells you where the gravitational pull is coming from.
The second tool is systems thinking — understanding that most outcomes in complex environments are not produced by individual decisions but by the structure of incentives and feedback loops that individuals are operating within. The employee who cuts corners is responding to a system that rewards speed over quality. The politician who lies is responding to a system that rewards attention over truth. The social media company that harms teenagers is responding to a system that rewards engagement over wellbeing. The Wall Street Journal obtained internal Facebook documents in 2021 showing the company's own researchers had concluded that Instagram caused significant harm to teenage girls' mental health. The company knew. The product continued. Not because the people running it were uniquely evil but because the incentive structure of the system produced that outcome regardless of individual intent.
When you understand systems you stop wasting energy on moral outrage at individuals and start asking the more useful question — what would have to change structurally for the outcome to be different?
The third tool is quantitative literacy — mathematics, probability, basic statistics. Not at a research level necessarily, but enough to understand risk, to read a study without being misled by it, to recognise when a number is being used to obscure rather than illuminate. Nassim Taleb, former derivatives trader and author of The Black Swan, spent a career watching highly educated people make catastrophic errors because they did not understand the difference between risk and uncertainty, between normal distributions and fat tails, between the map and the territory. His argument — and I think he is right — is that much of what we call expertise is sophisticated pattern matching that fails completely in novel conditions. The person who understands the underlying mathematics sees through the expertise.
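The normal-versus-fat-tails distinction is easy to see with a toy simulation. This is an illustrative sketch using only the standard library, with a Pareto distribution (alpha = 1.5) standing in for a fat tail; the specific parameters are my choice, not Taleb's.

```python
# Compare the largest observation in a normal sample vs a fat-tailed sample.
# Under a normal distribution, extremes stay within a handful of sigmas;
# under a fat tail, the single largest draw can dwarf everything else.
import random

random.seed(42)
n = 100_000

# Standard normal: mean 0, standard deviation 1.
normal_max = max(random.gauss(0, 1) for _ in range(n))

# Pareto with alpha = 1.5: finite mean, infinite variance (a classic fat tail).
pareto_max = max(random.paretovariate(1.5) for _ in range(n))

print(f"largest of {n} normal draws : {normal_max:.1f}")  # a handful of sigmas
print(f"largest of {n} Pareto draws : {pareto_max:.1f}")  # far, far larger
```

The practical point: a risk model calibrated on the normal sample would declare the Pareto sample's largest observation impossible, which is roughly how "highly educated people make catastrophic errors" in markets.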
These three tools together — incentive mapping, systems thinking, quantitative literacy — function as an immune system against manipulation.
A brief note on free will, because it is relevant here. Robert Sapolsky, the Stanford neuroscientist who spent decades studying stress and behaviour in both baboons and humans, argues in his book Determined that free will in the traditional sense does not exist — that every decision is the product of biology, environment, and prior conditions cascading forward. I find this argument compelling, and I also find it liberating in a specific way. If the system shapes behaviour through incentives and environment and what people are exposed to, then changing those inputs changes the outputs. You are not fighting human nature. You are fighting a specific configuration of conditions. And conditions can be changed.
Which brings me to the most uncomfortable argument in this piece.
Why were you not taught these things in school?
The Education Shaped to Produce Machines, Not Minds
Between 1969 and 1972, twelve human beings walked on the moon. The computer that guided them there — the Apollo Guidance Computer — had 4 kilobytes of memory. Not megabytes. Four kilobytes. Its program memory was stored in something called core rope memory, where software was literally hand-woven into magnetic cores by women at MIT nicknamed "little old ladies" because of the delicacy of the work. Margaret Hamilton, who led the software team, invented the term "software engineering" because she wanted the work taken seriously as a discipline.
It is now 2026. We have not been back to the moon in over fifty years. We have not reached Mars. We carry supercomputers in our pockets and use them primarily to watch fifteen-second videos of strangers doing dances. Ever wonder how we ended up here as a civilisation?
The game developer and software engineer Jonathan Blow gave a talk in 2019 called Preventing the Collapse of Civilization in which he argued, with genuine alarm, that software is getting worse, not better, over time, even as hardware improves. That the compounding of abstractions upon abstractions upon abstractions has produced a civilisation that runs on technology almost nobody fully understands, maintained by a workforce trained to operate at the top of the stack and utterly lost below it. That we are one or two generations away from genuinely losing the knowledge required to rebuild what we have built. He was not being hyperbolic; he was making an educated guess based on the trends.
He said something else once, in a live stream on Twitch, that I think about more than the formal talk. He said that if you got an internship at NASA or SpaceX and all you did was bring coffee to the engineers, you would learn more about engineering than you would spending years working in a web framework. He meant it seriously. The conversations in the hallway, the problems people are actually wrestling with, the way engineers think about constraints and tradeoffs — that ambient exposure to people solving hard problems against hard reality is worth more than years of tutorials about tools that abstract the hard reality away.
Which raises a question that nobody asks in engineering school, at least not mine: what is engineering, actually?
At its core, engineering is the applied laws of reality, which we also know as science: taking physics, chemistry, mathematics, information theory, and working within their constraints to build something that functions.
Nobody said this to me in school. I am not sure it is said in most engineering schools. What is said instead is: here is how to pass the exam, go read. The laws of reality that the tools are built on — those are apparently someone else's problem.
The market does not want engineers who understand the full stack. Funny enough, decades ago the term "full stack" meant an engineer who knew everything from infrastructure down to silicon; these days it means someone who knows MERN and how to use an LLM. What the market wants is engineers who can produce specific outputs within a specific abstraction layer quickly and cheaply. No wonder we have arbitrary code execution bugs in, of all things, Notepad.
Richard Feynman, physicist, Nobel laureate, and one of the most genuinely curious human beings of the 20th century, wrote in Surely You're Joking, Mr. Feynman! about his experience with Brazilian physics education. Students could recite definitions and solve standard problems with impressive fluency. But when he asked a question that required applying the concept to a novel situation — silence. They had learned the words without acquiring the understanding. He was disturbed enough to make it the subject of a public lecture.
This was Brazil in the 1950s. It is everywhere now.
John Taylor Gatto was New York City Teacher of the Year in 1989 and 1990 and New York State Teacher of the Year in 1991. He then resigned publicly, in an op-ed in the Wall Street Journal, and spent the rest of his career arguing that the school system was not broken; it was a cancer. In Dumbing Us Down he wrote that mass compulsory schooling was built on a Prussian military model adopted in the 19th century for the explicit purpose of producing obedient workers and soldiers. People who could follow instructions, perform repetitive tasks, and not ask inconvenient questions about why.
The reformers who designed American mass education in the late 1800s wrote openly about their goals. You were not supposed to come out of school as a thinker. You were supposed to come out as a productive unit. And if America was doing it, the world would follow.
It is also not a coincidence that education as a field of academic research receives pennies compared to the funding that goes into AI or defence. A genuinely better model of education — one that produced critical, curious, systems-literate people at scale — would be among the most disruptive forces in human history. It would undermine every industry built on information asymmetry, every political system built on manufactured consent, every economy built on consuming what you are told to want. The incentive to not fund that research is enormous.
The evidence is in what is taught and what is not. You learn to solve equations that have one correct answer. You do not learn to evaluate an argument. You learn historical dates. You do not learn how power structures emerge and maintain themselves. You learn the rules of grammar. You do not learn how language is used to frame reality and constrain thought. George Orwell, in his 1946 essay Politics and the English Language, argued that vague and dishonest language was not just a symptom of bad thinking but a cause of it — that the degradation of language actively prevented clear thought about political reality.
I want to say something personal here. I have a folder on my computer's file system. It contains thirty-one books. CS:APP. CLRS. OSTEP. Crafting Interpreters. Hacking: The Art of Exploitation. Patt and Patel. The Linux Programming Interface. Feynman's memoir. Sapiens. Man's Search for Meaning.
The people who wrote these books — Knuth, Bryant, Kernighan, Nystrom, Erickson, Hamilton in their own way — wrote them the way people write things when they are trying to transfer genuine understanding, not produce a product. You can feel the difference on the first page. There is a density and an honesty in a book written by someone who spent thirty years understanding something and then decided to explain it. It dwarfs any Udemy course about a framework that will be deprecated in four years by a gap so large it is almost funny.
Aaron Swartz understood this. He was twenty-six years old when he died in 2013. He had already co-created RSS, contributed to the development of Creative Commons, and helped stop SOPA — the legislation that would have given corporations sweeping power to censor the internet. He was facing federal prosecution for downloading academic journal articles from JSTOR through MIT's network, intending to make them freely available. Academic research that was largely funded by public money, locked behind paywalls so that publishers could extract rent from human knowledge. He believed that information asymmetry was a form of violence and that open access to knowledge was a moral cause. The prosecutors who pursued him with charges that carried thirty-five years in prison disagreed.
The information is out there. The books are increasingly free. MIT OpenCourseWare exists because people fought for it and some of them, like Aaron Swartz, paid a price. What do people do with it? Give in to a life of mediocrity.
And then there is the deepest damage — the specialisation trap.
Somewhere in the last century we arrived at the consensus that expertise meant narrowness. Pick a lane. Go deep. The generalist is a dilettante. The polymath is a myth. Focus.
This is historically illiterate.
Consider Claude Shannon. He studied both electrical engineering and mathematics, then wrote a master's thesis in 1937 applying Boolean algebra — a branch of abstract mathematics developed in the 1800s — to electrical circuits. That thesis is sometimes called the most important master's thesis of the twentieth century. It is the theoretical foundation of every digital computer ever built. The insight came from someone sitting at the intersection of two fields, looking across the gap between them and seeing what no specialist on either side could see.
Consider the Wright brothers. They were bicycle mechanics. They had no formal engineering training. They read everything they could find about aeronautics and then approached the problem of flight with a mechanic's intuition — iterative, empirical, undogmatic. The trained aeronautical engineers of their time were approaching the problem with more credentials and significantly less success.
Consider Darwin. His theory of natural selection was partially inspired by reading Thomas Malthus's Essay on the Principle of Population — a work of economics about resource scarcity and competition. A naturalist, reading economics, synthesised one of the most powerful ideas in the history of science.
Noam Chomsky built the formal grammar theory that underlies every compiler ever written. Then he used that same structural thinking to write Manufacturing Consent, one of the most rigorous analyses of how media serves power ever produced. Same brain. Same method. Two completely different domains.
John von Neumann contributed foundational work to quantum mechanics, game theory, computer architecture, and economics. Not dabbling — foundational work.
Richard Feynman played bongo drums, painted under a pseudonym, picked locks as a hobby, and cracked open the physics of superfluidity and quantum electrodynamics. He did not believe these things were in tension.
Creativity — genuine creativity that produces new things rather than recombining existing things — stems from the intersection of domains. When you know how information systems work and also how human psychology works and also how economic incentives work, you see things that the pure technologist, the pure psychologist, and the pure economist each individually cannot see. The insight lives in the gap between the disciplines.
The insistence that you specialise early and narrow is not in your interest. It is in the interest of whoever needs a reliable supply of domain-specific labour. A person who understands only one thing is far more employable and far less dangerous than a person who understands how things connect.
I want to suggest, and I want to be serious about this, that the artificial separation of science and arts in schools is one of the most destructive structural choices in modern education. Not because arts and sciences are the same thing. Because the person who only has science has tools without judgment. The person who only has arts has judgment without tools. And the person who has neither, which is most people, is navigating reality with neither a map nor a compass.
Geography, economics, history — these should not be separate subjects siloed into separate exam papers. They are different lenses on the same reality. The history of colonialism is an economics story. The economics of development is a geography story. The geography of resource distribution is a power story. Teaching them separately produces people who can pass tests and cannot read the world.
Hacking the System Without Becoming the System
So what do you do with all of this?
Here is where I want to be careful. There is a version of this argument that ends in cynicism — the system is rigged, the educated exploit the ignorant, might as well join the exploiters. That version is intellectually lazy and morally bankrupt, but it can happen. Understanding how a trap works does not obligate you to set it for other people.
But I also want to be honest about something that keeps me up at night.
Larry Page and Sergey Brin started Google with a genuine vision of organising the world's information and making it universally accessible. Mark Zuckerberg started Facebook with the stated goal of connecting the world. Elon Musk's early public motivation for Tesla and SpaceX was, at face value, climate change and the long-term survival of the species. These are not small ambitions. These are not cynical men who set out to build surveillance machines and attention farms. They set out to build something real and the incentive structure of scale and capital and markets converted them, or their companies, or both, into something they may not have intended to become.
I think about Bernays here. He did not begin his career thinking I will spend my life manipulating people into smoking and overthrowing governments. He began with a theory about how the world worked and followed it where it led. The theory was not wrong exactly. It was incomplete. It was missing an ethical constraint that the incentive structure never demanded of him.
I am writing a piece about how systems exploit people. I have a basic understanding of those systems, or so I like to believe. I am ambitious. I want to build things that matter. I am, by my own description, not particularly bound by the social contracts that keep most people safely average.
There is a line from Batman — of all places — that finally clicked for me when I started thinking seriously about this: you either die a hero or live long enough to become the villain. It always sounded like a cool line. It is actually a systems observation. Given enough time, enough power, and the right incentive structure, the trajectory bends. The question is not whether you are a good person when you start. It is whether the structure you are operating inside will eventually make the heroic choice too costly to sustain.
I would be lying if I said the possibility does not exist that I become the thing I am warning you against. It is not impossible.
I don't have a clean resolution to this. I just think it's worth saying out loud.
There is a better version, though. And it comes from an unlikely source.
In 1980 a political scientist named Robert Axelrod ran a computer tournament. He invited game theorists from around the world to submit strategies for an iterated version of the Prisoner's Dilemma — a game where two players simultaneously choose to cooperate or defect, and the payoffs depend on both choices. The question was: what strategy maximises your outcome across many repeated interactions?
The winner was the simplest submission. It was called Tit for Tat, submitted by Anatol Rapoport. Cooperate on the first move. Then do whatever your opponent did last round. Nice, retaliatory, forgiving, clear. It beat every sophisticated strategy in the tournament.
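The whole strategy fits in one line of logic, which is part of why it is so striking. Here is a minimal sketch of the iterated game in Python, assuming the standard payoff values (temptation 5, mutual cooperation 3, mutual defection 1, sucker's payoff 0); the function names are mine, not Axelrod's.

```python
# Iterated Prisoner's Dilemma: PAYOFF[(my_move, their_move)] -> my score.
# 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def always_cooperate(my_history, their_history):
    return 'C'

def play(strat_a, strat_b, rounds=200):
    """Run one match and return (score_a, score_b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))   # TFT loses round one, then punishes
print(play(tit_for_tat, tit_for_tat))     # mutual cooperation: 3 points per round
```

Run it and the structure of the result is exactly the tournament's lesson: the defector beats Tit for Tat head to head by a few points, but two cooperating Tit for Tats each score nearly three times what two mutual defectors would.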
There is a limitation worth acknowledging. Tit for Tat assumes you interact with the same parties repeatedly — that the game is iterated, that the shadow of the future is long enough to make cooperation rational. In the real world, with anonymous populations of eight billion people and institutions that outlive any individual relationship, the conditions are messier. A single defection in Tit for Tat can also spiral into permanent retaliation through misunderstanding.
A more robust version, in my opinion, is Tit for Two Tats: absorb one defection before retaliating. It is more forgiving and more stable in noisy environments. In a world where most harm is systemic rather than personal, where most people are following incentive structures rather than consciously choosing to exploit you, a posture of Tit for Two Tats might be closer to the right model. Cooperative by default, responsive to persistent defection, but not triggered by a single bad interaction.
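The tournament logic can be sketched in a few lines. This is an illustrative reconstruction, not Axelrod's actual tournament code; the strategy functions are my own naming, and the payoff values follow the standard textbook formulation (3 for mutual cooperation, 5 for exploiting a cooperator, 0 for being exploited, 1 for mutual defection).

```python
# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my score.
# 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_hist, their_hist):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return their_hist[-1] if their_hist else 'C'

def tit_for_two_tats(my_hist, their_hist):
    """Defect only after two consecutive defections by the opponent."""
    if len(their_hist) >= 2 and their_hist[-1] == their_hist[-2] == 'D':
        return 'D'
    return 'C'

def always_defect(my_hist, their_hist):
    """Pure defection strategy: wins rounds, loses tournaments."""
    return 'D'

def play(strat_a, strat_b, rounds=200):
    """Run an iterated match and return (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Over 200 rounds, always_defect beats tit_for_tat head-to-head, 204 points to 199: it wins the match by its one exploited round. But two cooperating Tit for Tat players earn 600 each, which is exactly the sense in which defection wins rounds and loses the tournament.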
The implications are significant either way. In a world of repeated interactions — which is the world we actually live in — cooperation is not naive. It is optimal. The defector wins the single interaction and loses the long game. This is why civilisations built on extraction eventually collapse. The incentive structure that Bernays and his descendants optimised for is a defection strategy in an iterated game. It wins rounds. It loses the tournament.
The alternative is not altruism. It is enlightened self-interest extended to a long enough time horizon.
If the resource is information — and increasingly it is — then the logic changes further. Information is not depleted by sharing. When I understand how a system works and I explain it to you, I do not have less understanding. I have more — because articulating something forces clarity. The economics of information sharing are fundamentally different from the economics of material resource sharing. The hoarding of information is not rational in the game-theoretic sense. It is a power strategy, not an efficiency strategy.
This is why I am writing this piece. From the belief that a society in which more people understand how systems work, how incentives shape behaviour, how desire gets manufactured and exploited — that society produces better outcomes for everyone including me.
We have not even begun to approach the ceiling of what human coordination can accomplish when the coordination is not being systematically undermined by attention extraction and manufactured desire. In 1969, with four kilobytes of hand-woven memory, we left the planet. Imagine what a civilisation of genuinely educated, curious, systems-literate people could do with what we actually have.
The problems worth solving — energy, disease, poverty, the organisation of knowledge, the distribution of possibility — are hard. They clearly require many minds working on them. Every mind captured by the attention economy is a mind not working on what actually matters.
That is not a small loss. It is a civilisational loss. Every day.
Closing
I want to end honestly rather than inspirationally.
You cannot fully exit the system. You are in it. So am I. The water is everywhere and we are wet.
But there is a difference between drowning and swimming. The difference is whether you can see the water.
Every tool you pick up — whether it is mathematics, history, a programming language, a book about how propaganda works — is a small increment of that vision. It does not make you free in some absolute sense. It makes the next manipulation slightly less effective. It makes the next trap slightly more visible. It shifts, incrementally, the balance between being the object of other people's systems and being the subject of your own.
The people who built the current arrangement were not smarter than you. They were earlier. They understood the tools — psychology, media, incentives, information asymmetry — before most people knew those tools existed. That advantage is shrinking. The information is out there. The books are increasingly free. The gap between what the powerful understand about human behaviour and what an ordinary curious person can understand is smaller now than it has ever been.
The question is whether you care enough to do the work.
I am trying to. This piece is part of that trying. I am a programmer from a small town who reads economics, game theory, maths, psychology, art history, and whatever I find fascinating, not because it was assigned to me but because I cannot look at a system without wanting to understand how it actually works. I have thirty-one books on my computer's file system. I am working through them slowly, and I am building things, and I am not sure where any of it leads except that it is the only honest way I know how to spend a life.
That instinct — I think — is worth more than any credential or skill.
The cage is real. It is also made of ideas. And ideas, unlike steel, can be seen through.
Resources
Watch first — both free on YouTube
Adam Curtis — Century of the Self (2002)
https://www.youtube.com/watch?v=eJ3RzGoQC4s
Adam Curtis — HyperNormalisation (2016)
https://www.youtube.com/watch?v=thLgkQBFTPw
Jonathan Blow — Preventing the Collapse of Civilization (DevGAMM 2019)
https://www.youtube.com/watch?v=pW-SOdj4Kkk
See through systems — foundational
Noam Chomsky & Edward Herman — Manufacturing Consent (1988)
https://archive.org/details/pdfy-NekqfnoWIEuYgdZl
George Orwell — 1984 (1949)
https://www.george-orwell.org/1984
Aldous Huxley — Brave New World (1932)
https://archive.org/details/brave-new-world_aldous-huxley
Power and capital
Daron Acemoglu & James Robinson — Why Nations Fail (2012)
https://whynationsfail.com/
David Graeber — Debt: The First 5000 Years (2011)
https://archive.org/details/debt-first-5000-years
Thomas Piketty — Capital in the 21st Century (2013)
https://piketty.pse.ens.fr/en/capital21c2
Ha-Joon Chang — 23 Things They Don't Tell You About Capitalism (2010)
https://www.penguinrandomhouse.com/books/310765/23-things-they-dont-tell-you-about-capitalism-by-ha-joon-chang/
Naomi Klein — The Shock Doctrine (2007)
https://www.naomiklein.org/shock-doctrine
Attention, control and surveillance
Shoshana Zuboff — The Age of Surveillance Capitalism (2019)
https://shoshanazuboff.com/book/
Game theory and incentive structures
Robert Axelrod — The Evolution of Cooperation (1984)
https://archive.org/details/evolutionofcoope00axel
Thomas Schelling — The Strategy of Conflict (1960)
https://www.hup.harvard.edu/books/9780674840317
Mancur Olson — The Logic of Collective Action (1965)
https://archive.org/details/logiccollectivea00olso
Complexity and mathematical lens on society
Geoffrey West — Scale (2017)
https://www.penguinrandomhouse.com/books/533051/scale-by-geoffrey-west/
Nassim Taleb — Fooled by Randomness → The Black Swan → Antifragile
https://www.fooledbyrandomness.com/
James C. Scott — Seeing Like a State (1998)
https://archive.org/details/seeinglikestate
Mass behaviour and crowd mechanics
Gustave Le Bon — The Crowd (1895)
https://archive.org/details/crowdstudyofpopu00lebo
Robert Cialdini — Influence (1984)
https://www.influenceatwork.com/book/
Charles Mackay — Extraordinary Popular Delusions and the Madness of Crowds (1841)
https://archive.org/details/extraordinarypop00mack
Anthropological and deep structure
Michel Foucault — Discipline and Punish (1975)
https://archive.org/details/disciplinepunish00fouc
Erving Goffman — The Presentation of Self in Everyday Life (1959)
https://archive.org/details/presentationofse00goff
Pierre Bourdieu — Distinction (1979)
https://archive.org/details/distinction00pier
Antonio Gramsci — Prison Notebooks (1929–1935)
https://archive.org/details/prisonnotebooks
For the deep end
Richard Feynman — Surely You're Joking, Mr. Feynman! (1985)
https://archive.org/details/surelyyourejokin00feyn
Guy Debord — The Society of the Spectacle (1967)
https://www.marxists.org/reference/archive/debord/society.htm
Edward Bernays — Propaganda (1928)
https://archive.org/details/EdwardBernaysPropaganda
Tools from maths — for reading the world quantitatively
Probability and statistics
Darrell Huff — How to Lie with Statistics (1954)
https://archive.org/details/how-to-lie-with-statistics
Jordan Ellenberg — How Not to Be Wrong (2014)
https://www.jordanellenberg.com/how-not-to-be-wrong/
Daniel Kahneman — Thinking, Fast and Slow (2011)
https://us.macmillan.com/books/9780374533557/thinkingfastandslow
Logic and critical thinking
Patrick Hurley — A Concise Introduction to Logic
https://www.cengage.com/c/a-concise-introduction-to-logic/
Eugenia Cheng — The Art of Logic in an Illogical World (2018)
https://eugeniacheng.com/books/the-art-of-logic-in-an-illogical-world/
Information theory
James Gleick — The Information (2011)
https://www.theinformationbook.com/
Claude Shannon & Warren Weaver — A Mathematical Theory of Communication (1949)
https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
For going deeper
George Pólya — How to Solve It (1945)
https://archive.org/details/howtosolveit00geor
Richard Hamming — The Art of Doing Science and Engineering (1997)
https://archive.org/details/artofdoingscienc0000hamm
The psychology research — primary sources
Seligman, M.E.P. & Maier, S.F. (1967). Failure to escape traumatic shock.
https://doi.org/10.1037/h0024514
Milgram, S. (1963). Behavioral study of obedience.
https://doi.org/10.1037/h0040525
Milgram, S. (1974). Obedience to Authority: An Experimental View.
https://archive.org/details/obediencetoautho00milg
Asch, S.E. (1951). Effects of group pressure upon the modification and distortion of judgments.
https://psychclassics.yorku.ca/Asch/
Asch, S.E. (1955). Opinions and social pressure.
https://www.scientificamerican.com/article/opinions-and-social-pressure/