Incentives
The Hidden Engine of Every System
Introduction
If you want to understand why a system produces the outcomes it does, ignore what people say and look at what they are rewarded for. A hospital that gets paid per procedure performs more procedures. A news outlet that earns revenue per click produces more clickable headlines. A student evaluated by test scores optimizes for test scores. In each case, people are not behaving irrationally or maliciously. They are responding logically to the incentive structure they face. Change the incentives, and behavior changes. Leave them intact, and no amount of good intentions will override them.
Charlie Munger put it bluntly: "Show me the incentive and I will show you the outcome." The principle applies everywhere: economics, politics, education, healthcare, technology, and personal relationships. Most of the puzzling, frustrating, or counterintuitive outcomes you observe in the world make immediate sense once you identify who is incentivized to do what. And most attempts to fix problems fail because they address symptoms while leaving the underlying incentive structure untouched.
The Cobra Effect
During British colonial rule in India, the story goes, authorities were concerned about the number of venomous cobras in Delhi. Their solution was straightforward: offer a bounty for every dead cobra brought in. Initially, it worked. People killed cobras and collected rewards. But then something happened that no one had planned for. Enterprising residents began breeding cobras specifically to kill them and collect the bounty. When the government discovered this and scrapped the program, breeders released their now-worthless cobras into the wild. The cobra population ended up larger than before the bounty was introduced. The anecdote may be embellished, but it gave the phenomenon its name.
This pattern repeats with remarkable consistency across centuries and contexts. In Hanoi under French colonial rule, a similar bounty was placed on rats. Residents were required to bring in rat tails as proof of kill. People began cutting tails off live rats and releasing them to breed more tail-producing rats. In modern contexts, the same logic applies. When a city offers bounties for removing invasive species, people sometimes cultivate them. When insurance companies pay for car repairs but not preventive maintenance, cars deteriorate until they qualify for a claim.
The cobra effect is not about stupidity on either side. The people breeding cobras were perfectly rational given the incentive structure. The policymakers were not foolish; they simply failed to anticipate how people would adapt to a new reward. This is the central challenge of incentive design: humans are creative optimizers. They find the shortest path to whatever is rewarded, and that path frequently diverges from what the incentive designer intended. Any system that rewards an output will eventually be gamed by people who find cheaper ways to produce that output, whether or not the underlying goal is achieved.
Goodhart's Law
British economist Charles Goodhart observed a pattern so universal it became a law, usually paraphrased as: when a measure becomes a target, it ceases to be a good measure. This sounds abstract until you see it in practice. A hospital is measured on wait times in its emergency department. To improve the metric, administrators might move patients from the waiting room into hallway beds. Technically, wait time drops. Practically, patients are lying in a hallway instead of a waiting room. The metric improved. The experience did not.
Police departments measured on crime statistics face Goodhart's Law constantly. When precincts are evaluated by crime rates, some have been caught reclassifying felonies as misdemeanors or discouraging victims from filing reports. The numbers look better while actual crime is unchanged or even underreported. In academia, researchers are evaluated by publication count and citation metrics. This incentivizes publishing many small papers instead of fewer significant ones, splitting single findings across multiple publications, and forming citation rings where researchers cite each other to boost metrics. The measure was meant to identify productive scholarship. It instead reshaped scholarship to produce higher measures.
Goodhart's Law is not an argument against measurement. Without metrics, accountability is impossible. But it is a warning that any single metric will be optimized at the expense of everything it does not capture. Wells Fargo set aggressive targets for the number of accounts each employee should open. Employees responded by creating millions of unauthorized accounts in customers' names. The target was met. Customers were harmed. The more pressure is placed on a single number, the more creative effort goes into hitting that number by any means, and the less the number reflects the reality it was supposed to represent.
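The dynamic can be made concrete with a toy model. In the sketch below, every number is an invented assumption: an agent splits a fixed effort budget between genuine quality and metric gaming, and because the metric cannot tell the two apart while gaming is cheaper per point, a rational optimizer abandons quality entirely.

```python
# Toy model of Goodhart's Law: an agent splits effort between genuine
# quality and metric gaming. The measured score rewards both, but gaming
# pays more per unit of effort, so a metric-maximizing agent drops quality.
# All numbers are illustrative assumptions, not empirical estimates.

def measured_score(quality_effort: float, gaming_effort: float) -> float:
    """The proxy metric: it cannot distinguish real quality from gaming."""
    return 1.0 * quality_effort + 3.0 * gaming_effort  # gaming pays 3x per unit

def true_value(quality_effort: float, gaming_effort: float) -> float:
    """What the designer actually cares about: only quality counts."""
    return 1.0 * quality_effort

BUDGET = 10.0  # total effort available

# Search all splits of the budget for the one that maximizes the metric.
best = max(
    ((q, BUDGET - q) for q in [i * 0.5 for i in range(21)]),
    key=lambda split: measured_score(*split),
)
print(f"metric-maximizing split (quality, gaming): {best}")
print(f"measured score: {measured_score(*best):.1f}")
print(f"true value:     {true_value(*best):.1f}")
# The agent puts all 10 units into gaming: the metric reads 30, the true
# value is 0. The measure stopped measuring the moment it became the target.
```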
Teaching to the Test
Standardized testing was designed to measure educational quality: if students perform well, the reasoning goes, schools must be teaching effectively. But once test scores became the basis for school funding, teacher evaluations, and school ratings, teaching shifted toward test preparation. Teachers spend weeks drilling students on the specific format and content of upcoming exams. Subjects that are not tested, such as art, music, and physical education, receive less time. The curriculum narrows to match what is measured, not what education broadly requires.
This is not a failure of teachers. It is a rational response to incentive structures. When a teacher's job security depends on student test scores, allocating time to untested subjects is a professional risk. When a school's budget depends on standardized performance, administrators direct resources toward test preparation. The No Child Left Behind Act in the United States made this dynamic explicit: schools with persistently low scores faced escalating consequences, up to closure. The result was narrower curricula, intensive test prep, and in some documented cases, outright cheating by administrators who altered answer sheets.
The debate is not whether testing has value. Assessments provide useful information about what students know. The question is what happens when assessment becomes the purpose rather than a tool. Students who excel at standardized tests may struggle with creative problem-solving, collaborative work, or applying knowledge in unfamiliar contexts, precisely the skills that testing does not capture. Finland, often cited for educational excellence, uses minimal standardized testing and gives teachers significant autonomy. Singapore uses intensive testing but embeds it within a broader system of teacher development and curricular depth. Both produce strong outcomes through different approaches, suggesting that the incentive structure surrounding tests matters more than the tests themselves.
Algorithms and the Outrage Machine
Social media platforms earn revenue from advertising. Advertising revenue depends on attention. Attention is maximized by engagement. And research consistently shows that content provoking outrage, fear, or moral indignation generates more engagement than content that is calm, nuanced, or balanced. This is not a conspiracy. It is an incentive structure. No executive sat down and decided to make people angry. Engineers optimized for engagement because that is what the business model rewards. The algorithm learned, through billions of interactions, that emotionally charged content keeps people scrolling, clicking, and sharing.
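A stripped-down sketch makes the mechanism visible. The ranker below is not any platform's actual algorithm, and the posts and scores are invented; the point is that nothing in the code mentions outrage, yet outrage-flavored posts rise to the top, because the only objective is a predicted-engagement number.

```python
# A deliberately simplified feed ranker - not any platform's real system.
# Posts carry a predicted-engagement score; the ranker just sorts by it.
# No one codes "promote outrage"; it emerges because outrage correlates
# with engagement in the (invented) prediction signal.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # learned from clicks, shares, dwell time

posts = [
    Post("Fifteen-minute policy analysis, carefully hedged", 0.12),
    Post("Thirty-second clip with a misleading caption", 0.71),
    Post("Calm explainer with sources", 0.18),
    Post("YOU WON'T BELIEVE what they just did", 0.64),
]

# The entire "editorial policy" is one sort key.
feed = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
for rank, post in enumerate(feed, start=1):
    print(f"{rank}. ({post.predicted_engagement:.2f}) {post.title}")
# The provocative posts land on top without any explicit rule about
# content - the incentive is baked into the objective being sorted on.
```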
The consequences ripple outward. Content creators learn what the algorithm rewards and produce accordingly. A thoughtful fifteen-minute analysis of a policy issue gets modest engagement. A provocative thirty-second clip with a misleading caption goes viral. Over time, creators who produce outrage-optimized content grow their audiences, while those producing measured analysis struggle for visibility. This is not because audiences prefer outrage in some deep sense. It is because the algorithmic feed sorts and surfaces content based on engagement metrics, and outrage scores high on exactly those metrics.
Political discourse has been reshaped by this incentive structure. Politicians who make extreme statements get more coverage; moderate voices are comparatively invisible in algorithmic feeds. Internal research at major platforms has documented that recommendation engines push users toward increasingly extreme content, because extreme content retains attention. The platforms are aware of this effect, but changing the algorithm to deprioritize engagement would directly reduce advertising revenue. Incentives and intentions are in direct conflict, and incentives almost always win. Reforming this dynamic requires changing the business model, regulatory intervention, or both, because the current incentive structure makes the outcome predictable and self-reinforcing.
Perverse Incentives in Healthcare
In a fee-for-service healthcare model, doctors and hospitals earn money by performing services: tests, procedures, surgeries, follow-up visits. The more they do, the more they earn. This creates an incentive to do more, not necessarily to make patients healthier. A doctor who resolves your problem with a single visit and some lifestyle advice earns less than one who orders imaging, refers you to a specialist, prescribes medication, and schedules follow-ups. Both might be practicing good medicine. But the system rewards volume over outcomes.
The treating-versus-curing tension is even more stark in pharmaceutical economics. A drug that cures a disease eliminates future revenue from that patient. A drug that manages symptoms indefinitely generates recurring revenue for years. This does not mean companies deliberately withhold cures. Drug development is genuinely difficult, and many diseases are far easier to manage than to cure. But the financial structure does influence where research investment flows. Chronic conditions that affect millions of affluent patients attract more research funding than rare diseases or conditions prevalent in low-income countries, because the revenue potential differs enormously.
Some healthcare systems attempt to realign incentives. Value-based care models pay providers based on patient outcomes rather than service volume. Capitation models give providers a fixed payment per patient per year, incentivizing them to keep patients healthy and out of the hospital. Kaiser Permanente, which both insures and provides care, has an incentive to prevent illness because it bears the cost of treatment. These models are not perfect, and they create their own perverse incentives, such as avoiding the sickest patients who are expensive to treat. But they demonstrate that different incentive structures produce measurably different patterns of care.
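The difference between the two payment models comes down to simple arithmetic, sketched below with invented dollar figures rather than real reimbursement rates.

```python
# Back-of-the-envelope comparison of two payment models. All dollar
# figures are invented for illustration, not actual reimbursement rates.

def fee_for_service_revenue(visits: int, procedures: int) -> int:
    """Provider is paid per unit of service delivered."""
    return visits * 150 + procedures * 2_000

def capitation_revenue(patients: int) -> int:
    """Provider is paid a flat annual amount per enrolled patient."""
    return patients * 1_200

# Under fee-for-service, the patient who receives more care is worth more:
print(fee_for_service_revenue(visits=2, procedures=0))   # 300  (kept healthy)
print(fee_for_service_revenue(visits=8, procedures=3))   # 7200 (heavily treated)

# Under capitation, both patients bring in the same revenue, so every
# avoided procedure is a saved cost rather than lost income:
print(capitation_revenue(patients=1))                    # 1200 either way
```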
The Principal-Agent Problem
Whenever you hire someone to act on your behalf, you face a fundamental problem: their interests may not align with yours. You are the principal. They are your agent. A real estate agent earns a commission based on sale price. You might expect this aligns their incentive with yours: sell your house for the most money possible. But research by Steven Levitt found that when real estate agents sell their own homes, they leave them on the market significantly longer and sell for higher prices than when selling clients' homes. The agent's incentive is not to maximize your price. It is to close the deal. An extra $10,000 on your sale price adds only a few hundred dollars to their commission, not worth weeks of additional effort. But that same $10,000 on their own home goes directly into their pocket.
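That arithmetic is worth spelling out. The sketch below assumes a 6 percent commission split four ways between two agents and their brokerages, a common US arrangement but an assumption here, not a figure from Levitt's study.

```python
# The commission arithmetic behind the misalignment. The 6% rate and the
# four-way split are common US conventions, assumed here for illustration.

SALE_PRICE_GAIN = 10_000   # extra dollars from weeks of additional effort
COMMISSION_RATE = 0.06     # total commission on the sale
AGENT_SHARE = 0.25         # listing agent's cut after splitting with the
                           # buyer's agent and both brokerages

agent_gain = SALE_PRICE_GAIN * COMMISSION_RATE * AGENT_SHARE
owner_gain = SALE_PRICE_GAIN * (1 - COMMISSION_RATE)

print(f"agent's cut of the extra $10,000: ${agent_gain:,.0f}")   # $150
print(f"owner's cut of the extra $10,000: ${owner_gain:,.0f}")   # $9,400
# Selling their own home, the agent pockets the owner's side of this
# arithmetic - which is why they hold out longer for a better price.
```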
Financial advisors face a similar misalignment. An advisor paid by commission earns more by recommending products that generate higher fees, regardless of whether those products are best for you. Even fee-only advisors, who charge a percentage of assets managed, are incentivized to keep your money under their management rather than recommending that you pay off your mortgage or invest in your own business. The advice you receive is filtered through what benefits the advisor, and that filter is often invisible to you.
The principal-agent problem appears in corporate governance, politics, law, and medicine. Corporate executives (agents) may pursue short-term stock price gains that pad their bonuses while undermining the long-term health of the company for shareholders (principals). Politicians (agents) may prioritize reelection over the interests of the voters they represent (principals). Lawyers paid by the hour have less incentive to resolve your case quickly than lawyers on flat fees. In every case, the solution is not to eliminate agents, which is impractical, but to design compensation structures, monitoring systems, and accountability mechanisms that narrow the gap between what the agent wants and what you need. Perfect alignment is usually impossible. Good enough alignment is worth pursuing.
Designing Better Incentives
If incentives are so powerful, can they be designed intentionally to produce better outcomes? Nudge theory, developed by Richard Thaler and Cass Sunstein, argues yes. A nudge is a change in how choices are presented that influences behavior without restricting options. The classic example is retirement savings. When employees must opt in to a 401(k) plan, participation hovers around 50-60%. When the default is switched so employees are automatically enrolled and must opt out, participation jumps above 90%. Same plan, same options, same freedom. Different default, dramatically different outcome.
Choice architecture, the design of environments in which people make decisions, extends this principle. Placing healthier food at eye level in a cafeteria increases healthy eating without removing unhealthy options. Putting a fly sticker in a urinal reduces spillage by giving people something to aim at. Showing homeowners how their energy use compares to their neighbors reduces consumption. These interventions work because they align the path of least resistance with the desired outcome, rather than requiring willpower or conscious effort.
But nudge theory has limits and critics. Libertarian paternalism, as Thaler and Sunstein describe it, assumes that choice architects know what is best. Who decides which direction the nudge should push? Governments and corporations can use the same principles to steer behavior toward outcomes that benefit them, not you. Dark patterns in web design, like making unsubscribe buttons hard to find or pre-checking boxes for marketing emails, are nudges that serve the designer's interest at your expense. The same tools that make good incentive design possible also make manipulation more sophisticated. Understanding incentives is not just about designing better systems. It is about recognizing when systems are designed to work against you, and having the awareness to push back.
AI, Automation, and the Future of Work
"Will AI replace my job?" is a question almost everyone has asked by now, and it is almost always framed wrong. A more useful question is which tasks within a given job will be automated, and what new tasks will emerge to fill the gap. History offers a surprisingly consistent pattern here. When ATMs appeared in the 1970s, banks were supposed to need fewer tellers. Instead, ATMs reduced the cost of operating a branch, so banks opened more branches, and teller employment actually grew for decades. What changed was what tellers did: less cash handling, more relationship management and product sales. Spreadsheets did not eliminate accountants. Email did not eliminate office workers. Each wave of automation destroyed certain tasks, transformed others, and created entirely new roles that nobody predicted before they existed.
What makes this current wave genuinely different is its target. Previous automation waves mostly affected manual and routine cognitive tasks: assembly lines, data entry, basic bookkeeping. AI systems now operate in domains once considered exclusively human: legal research, medical image analysis, software development, translation, and creative writing. Lawyers who spent years learning to review contracts face tools that do it in seconds. Radiologists who trained for a decade find algorithms matching their diagnostic accuracy on certain imaging tasks. Programmers watch AI assistants generate functional code from plain-language descriptions. This does not necessarily mean these professions will vanish, but it does mean the value proposition of each role is shifting. Skills that complemented previous technology become less valuable, while skills that complement AI, such as judgment under ambiguity, ethical reasoning, and creative problem-solving in unstructured environments, become more important.
Who benefits and who loses from this transition depends far more on policy choices than on technology itself. Identical AI capabilities can concentrate wealth in a handful of platform owners or distribute productivity gains broadly, depending on how labor markets are regulated, how education systems adapt, how tax policy treats capital versus labor income, and whether workers have bargaining power to negotiate their share. Scandinavian countries facing similar automation pressures as the United States produce dramatically different outcomes for displaced workers, not because their technology is different but because their institutional choices around retraining, social insurance, and collective bargaining are different. The real anxiety behind automation fear is rarely about machines. It is about whether gains from increased productivity will be shared or concentrated, and recent decades give people legitimate reasons for skepticism on that front.
When a system produces baffling outcomes, the explanation is almost always sitting in the incentive structure, not in the character of the people inside it. Noticing what gets rewarded, rather than what gets announced, is one of the most useful lenses you can carry through any institution, workplace, or relationship. It also raises a harder question: who designs the incentives, and what keeps the designers accountable? That is the territory of governance.