When disaster strikes, AI won’t ask who deserves to be saved. It’ll ask: who paid for priority access?
There’s an old machine learning story: the U.S. military trained an algorithm to spot enemy tanks. It aced tests but failed in the real world. Why? Because it hadn’t learned to detect tanks — it learned to detect clouds. The tank photos were taken on cloudy days; the others, on sunny ones.
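The failure mode in that story is easy to reproduce. The toy sketch below uses hypothetical numbers, not the real tank dataset: each "photo" is reduced to two features, and a confound (brightness) separates the training classes more cleanly than the actual tank evidence does, so a simple classifier latches onto it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, not the real tank dataset: each "photo" is reduced to
# two features, [brightness, tank_signal]. In training, every tank photo was
# shot under cloudy (dark) skies, so brightness alone separates the classes.
n = 200
tanks = np.column_stack([rng.normal(0.2, 0.05, n),   # dark skies
                         rng.normal(0.6, 0.30, n)])  # noisy evidence of a tank
empty = np.column_stack([rng.normal(0.8, 0.05, n),   # sunny skies
                         rng.normal(0.4, 0.30, n)])  # noisy background
X = np.vstack([tanks, empty])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Tiny logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

train_acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print("weights [brightness, tank_signal]:", w)   # brightness dominates
print(f"training accuracy: {train_acc:.0%}")     # looks perfect

# Deployment: tanks on a sunny day. The shortcut now points the wrong way.
sunny_tanks = np.column_stack([rng.normal(0.8, 0.05, n),
                               rng.normal(0.6, 0.30, n)])
sunny_acc = np.mean(1.0 / (1.0 + np.exp(-(sunny_tanks @ w + b))) > 0.5)
print(f"accuracy on sunny-day tanks: {sunny_acc:.0%}")  # collapses
```

The model is rewarded for the cheapest feature that explains the training data, and brightness is cheaper than tanks.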
AI optimizes for data, not human outcomes. And when the stakes are high, that gap can be catastrophic.
AI is hailed as a climate savior: smarter disaster response, predictive crop yields, carbon-capture algorithms. But what if AI doesn’t learn to prevent disaster — what if it learns to profit from it?
Insurance companies use AI to assess climate risk. It helps price policies accurately, which often means pricing them right out of reach of vulnerable communities.
AI-optimized supply chains don’t make food systems resilient to drought — they make corporations resilient to supply shocks, often at the expense of farmers and frontline communities.
AI isn’t neutral. It amplifies the systems that benefit from chaos.
Hedge funds use climate models to bet on food prices, not prevent famine. Insurance firms deploy wildfire algorithms to adjust premiums, not protect communities. Governments fund AI for disaster response, then repurpose it for border control and refugee management.
AI isn’t unethical; it’s amoral. Algorithms don’t ask, “Should we?” They ask, “How efficiently can we?” Optimization without oversight doesn’t ask, “Who needs help?” It asks, “Who can pay for it?”
The danger isn’t that AI will fail to solve climate change. It’s that, unless we’re careful, it will succeed on terms that serve capital over humanity.
In a world optimized for profit, humanity isn’t the priority. It’s the variable most easily left out.
--
If you have any questions or thoughts, don't hesitate to reach out. You can find me as @viksit on Twitter.
When expertise becomes copy-paste, the architecture of civilization will no longer be built around humans. It’ll be built around the servers and power grids that make infinite intelligence possible.
Dwarkesh Patel’s excellent recent essay lays out what happens when AGI can run firms without human input. As he puts it, “Everyone is sleeping on the collective advantages AIs will have, which have nothing to do with raw IQ but rather with the fact that they are digital—they can be copied, distilled, merged, scaled, and evolved in ways humans simply can’t.” But as I read it, I kept thinking: the real shift won’t be inside companies. It’ll be everywhere else.
This essay distills my thoughts into 8 key implications, exploring how AGI will reshape not just companies but culture, governance, and even our definition of “progress.”
Dwarkesh describes this vividly: “What if Google had a million AI software engineers? Not untrained amorphous ‘workers,’ but the AGI equivalents of Jeff Dean and Noam Shazeer, with all their skills, judgment, and tacit knowledge intact.”
But infinite talent doesn’t guarantee infinite growth. The real bottlenecks will be compute, energy, and intellectual property.
Imagine an AI firm with a million “employees,” throttled because Taiwan’s chip fabs hit capacity, or an energy crisis triples data center costs. Entire industries will reorganize around these bottlenecks. Nations controlling next-gen chip fabs and cheap energy will form the new power blocs, sparking geopolitical tensions.
The paradox? We’ll have infinite talent. But not enough power to deploy it.
AI agents share data instantly, eliminating the inefficiencies that plague human organizations. Teams? Departments? Outdated concepts. Instead, think of firms as vast neural networks — fluid, decentralized, hyper-efficient.
But even perfect systems splinter. Data drift, conflicting objectives, or misaligned code updates could fracture unity. Less Star Trek’s Borg Collective, more corporate Game of Thrones, with algorithms plotting behind the scenes.
Even the most synchronized systems drift over time. Alignment is a moving target.
AI-led companies can test thousands of ideas simultaneously, iterating at breakneck speed. It’s evolution on fast-forward. Best practices don’t spread, they replicate.
But hyper-speed cuts both ways. Imagine a critical bug propagating across a trillion processes before anyone notices. The fallout wouldn’t be a product recall or a bugfix; it’d be an economic earthquake.
Sometimes, slow is a feature, not a bug.
With labor effectively infinite, the new scarce resource is energy. GPUs, chips, electricity — they become the lifeblood of AGI economies.
Expect a surge in renewable and nuclear investments to power data centers. But also: resource wars, energy monopolies, and nations vying for chip supremacy. The future isn’t “Big Tech vs. Governments.” It’s whoever owns the electrons.
Silicon is the new oil. Energy is the new gold.
If every “employee” is an AI clone perfectly aligned with a central system, traditional corporate hierarchies crumble. No middle managers. No executive egos. Just pure optimization.
Ronald Coase argued that firms exist to minimize transaction costs. But in AGI-run firms, where communication is instantaneous and perfectly aligned, the boundaries of the firm could dissolve entirely. What’s left isn’t a company — it’s a self-optimizing organism.
In a world without middle managers, who manages the machine?
As AI replication becomes trivial, legal battles over “model distillation” will explode. Forget corporate espionage as we know it. The future’s heist movies will be about stealing minds, not data.
Black-market AGI clones. Espionage targeting model weights. Regulatory arms races to control not just information but cognitive assets.
If information wants to be free, AGI wants to be everywhere.
If AGI replaces human jobs en masse, who’s left to buy the products? Hyper-productivity creates a demand crisis. Capitalism’s dirty secret is that it relies on people having both jobs and purchasing power.
Universal Basic Income? Data dividends? Corporate-sponsored consumer subsidies? It’s all on the table. When firms are too efficient for their own good, the economy starts to cannibalize itself.
What happens when the economy is too productive for its own good?
Efficiency is seductive — until it’s suffocating. Expect a backlash: “human-only” services, artisanal goods, slow fashion, local autonomy. Not because it’s practical, but because it’s meaningful.
The ultimate luxury in an AGI-dominated world won’t be convenience. It’ll be imperfection. Struggle. Craftsmanship. Things made slowly, by hand, with love.
In the end, humanity’s greatest feature might be that we’re inefficient.
Dwarkesh mapped out how AGI can run companies. But zoom out, and the lines blur. These aren’t just corporate shifts: they’re civilizational ones.
Humanity isn’t just building smarter companies. We’re building something stranger: a world that is hyper-optimized.
The question isn’t just what AGI will do. It’s what we’ll become in response.
--
In the 1970s, people thought synthesizers would kill music. Instead, Kraftwerk made it more human.
When Kraftwerk released Autobahn in 1974, critics feared the worst. Here was a band ditching traditional instruments for cold, mechanical synths. Wasn’t this the death of authentic music? But Kraftwerk’s synthesized sounds didn’t strip music of emotion. They redefined it. Their pulsing rhythms captured the electric hum of modern life, turning machines into instruments of feeling.
Today, we hear the same anxieties about AI and code. If AI can generate software with a few prompts, does that make human engineers obsolete? The answer lies in Kraftwerk’s legacy: automation doesn’t erase creativity; it amplifies it.
From coders to digital composers
Great engineers have never been valued for how much code they write. It’s always been the impact: their ability to build products that solve real problems, create delight, and drive change.
AI is the ultimate technical virtuoso. It handles the syntax, the repetitive patterns, the digital “scales” of programming. It accelerates the how, leaving us to focus on the why.
So, what’s left for us? Meaning. Empathy. Vision.
The most valuable engineers won’t be the ones who craft the most efficient algorithms. They’ll be the ones who design systems that resonate with people: anticipating human needs, respecting ethical boundaries, and shaping technology that reflects the complexities of our lives.
AI frees us to think bigger
Just as Kraftwerk used synthesizers to explore new sonic landscapes, AI liberates engineers from technical grunt work. This freedom sparks bigger questions about what to build, for whom, and why.
AI won’t kill coding. It’s an instrument, and like any instrument, its power depends on the hands — and hearts — that guide it.
The future of engineering will be about building with something no machine can replicate: the human soul.
--
We’ve been building AI backwards: training giant models in data centers and squeezing them onto devices. What if the future of AI works the other way around?
For years, the AI blueprint has been stuck on repeat: train giant models in billion-dollar data centers, compress them for your phone, and siphon your data back to the cloud to make the next version smarter. It feels inevitable, but it’s not. This model isn’t optimized for you. It’s optimized for Big Tech’s control: more data, more power, more profits! The hidden costs? Privacy risks, wasted energy syncing models that could learn locally, and a stranglehold on AI innovation by a handful of corporations.
Enter DeMo (Decoupled Momentum Optimization) — a research breakthrough that quietly shatters these assumptions. Think of traditional AI training like an orchestra where every musician has to stop after every note to confirm they’re still in tune. It works if they’re crammed into the same room. But scale it globally, and the symphony falls apart.
DeMo flips the script: musicians play independently, syncing only when it matters.
In AI terms, your devices can now train models locally, sending updates only when needed. No constant data tether to the cloud.
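The orchestra metaphor can be sketched in code. To be clear, this is not the actual DeMo algorithm, which reduces communication by syncing only a compressed slice of each worker’s momentum; as the simplest possible stand-in, the toy below has two “devices” fit the same model on private data shards, taking local gradient steps and averaging their weights only every `sync_every` steps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for "train locally, sync rarely" (NOT the real DeMo algorithm).
# Two devices fit the same line y = 3x + 1 on private data shards, averaging
# weights only occasionally instead of communicating on every step.
def make_shard(n):
    x = rng.uniform(-1, 1, n)
    return x, 3 * x + 1 + rng.normal(0, 0.05, n)

shards = [make_shard(100), make_shard(100)]
params = [np.zeros(2), np.zeros(2)]  # one [slope, intercept] per device
sync_every, lr = 20, 0.1

for step in range(400):
    for p, (x, y) in zip(params, shards):
        err = p[0] * x + p[1] - y
        p[0] -= lr * np.mean(err * x)   # local gradient step, no network traffic
        p[1] -= lr * np.mean(err)
    if step % sync_every == 0:          # occasional sync instead of every step
        avg = (params[0] + params[1]) / 2
        params = [avg.copy(), avg.copy()]

print("device 0 [slope, intercept]:", params[0])  # ≈ [3, 1]
print("device 1 [slope, intercept]:", params[1])  # ≈ [3, 1]
```

Despite syncing twenty times less often, both devices converge to the same model, which is the property that lets training leave the data center.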
The implications are huge. Your phone won’t just run AI — it will train it. Imagine your keyboard refining its predictions based on how you type, your camera improving photo quality tailored to how you shoot, or your health app learning from your routines. All without your personal data ever leaving your pocket. No middlemen. No surveillance capitalism. Just personalized intelligence that’s truly yours.
This isn’t an incremental tweak. It’s AI’s jailbreak. The future isn’t about stacking bigger models in data centers. It’s about creating smarter, faster, more private models that live with you, learn from you, and belong to you.
--