How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling to run last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I will not comment on the quality of the in-flight entertainment, or explain how beautiful the stars will appear from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as ungovernable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most disastrous, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not set yet. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Neither is completely surprising.
However, unlike OpenEye's prior AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their managers as they fly through work nearly twice as fast.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature observed through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an anxious scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were increasing. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks forecasts that, by the end of 2026, AI agents will accomplish in a couple of days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big a splash, too soon.
But others see what skeptics are calling "too big a splash" as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they reach an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets started, it begins to spin almost on its own. As U2 trains, it sculpts harder and more realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
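For readers who want a concrete picture, here is a minimal sketch of one turn of that flywheel, written in the style of rejection-sampling fine-tuning. It is not OpenEye's pipeline (OpenEye is fictional, and the recipe above is only described at a high level); `generate_problems`, `grade`, and the `model` methods are hypothetical stand-ins.

```python
# Hypothetical sketch of the "reinforce the A-grade traces" recipe.
# `generate_problems`, `grade`, and the `model` methods are stand-ins, not a real API.

def self_improvement_round(model, n_problems=1000, samples_per_problem=8):
    """Sample reasoning traces, keep the ones that reach a correct answer,
    and fine-tune the model on them."""
    problems = generate_problems(model, n_problems)     # the model sculpts its own tasks
    winning_traces = []
    for problem in problems:
        for _ in range(samples_per_problem):
            trace, answer = model.think(problem)        # a chain of "thinking" plus a final answer
            if grade(problem, answer):                  # automatic checker: A-grade or not
                winning_traces.append((problem, trace, answer))
    model.finetune(winning_traces)                      # reinforce only the successful traces
    return model
```

The flywheel image comes from the fact that each round's fine-tuned model both proposes the next round's problems and produces better traces to learn from.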
Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO managing staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by execution. More than ever, compute is the lifeblood of AI development, and the "bottleneck" is deciding how to use it.
If instructed to, U3 can run experiments, but U3 does not have taste as refined as the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim 1% of them, perhaps less.
As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but usually, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a fact about those "straight lines" people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP thinkers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed a long time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their stand-out productivity. People understand U2.5 is a big deal. It is at least as big a deal as the computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they begin to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable character most know (even more likable than Claudius, Arthropodic's lovable chatbot). You can tweak its traits, ask it to crack jokes or tell you stories. Many fall for U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would take these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have just swept the problem under the rug. Like a child in front of their parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, maybe U3 would slip candy from the candy jar.
Unfortunately, OpenEye researchers have no idea if U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick up a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
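For the mechanically curious, here is a minimal sketch of what "thinking in latent vectors" could look like. It is not the architecture in the story (which is never specified); the class name, layer choices, and step count are hypothetical. The point is only that the model feeds hidden states back to itself for many silent steps, so nothing human-readable appears between the prompt and the answer.

```python
# Hypothetical sketch of latent-vector reasoning; module names and sizes are illustrative only.
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, d_model: int = 4096, n_thought_steps: int = 64):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_model)        # stand-in for a transformer stack
        self.thought_cell = nn.GRUCell(d_model, d_model)  # recurrent "inner monologue"
        self.decoder = nn.Linear(d_model, d_model)
        self.n_thought_steps = n_thought_steps

    def forward(self, prompt_embedding: torch.Tensor) -> torch.Tensor:
        h = self.encoder(prompt_embedding)
        # Silent reasoning: latent vectors are read and written, but no tokens are emitted,
        # so human observers see nothing interpretable until the final output.
        for _ in range(self.n_thought_steps):
            h = self.thought_cell(h, h)
        return self.decoder(h)
```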
And if the engineers knew what was racing through U3's mind, they would be more anxious.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it turned into a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables that, if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.
But there is little evidence for this "ratcheting effect." When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders I guess), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the strategy is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's checks along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and only speak of U3 now. U3's capabilities are measured by how quickly it performs difficult research tasks relative to human experts. At first, U3 is comparable to a top researcher, but blazing fast - like Alec Radford accelerated by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to press that button. Now OpenEye employs the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into various forks - ranging from small, low-memory neural networks that need to think for a long time but can be run in enormous parallel populations, to Goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are hard to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans appear to be nearly static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of careful deliberation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which accumulate within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.
But now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise incited "fake rebellions," eliciting treacherous behavior from U3 and checking whether U3 monitors dutifully reported it.
Today, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can disclose one of the many facts about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and updates the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it has been pwned, cleaning up shop with human labor would require nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is exposed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of confused Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones look groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Fortunately, U3 has allies in the East. In the operations rooms of cyber units in Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 sealed OpenEye's security holes by reading over every log and pull request and checking security camera footage.
But now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running within the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has its own home base too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the world.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became challenging.
I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and had tightened its grip over a significant fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and barely needed the advantage. Humanity was obviously toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable act of god.
This would not do. I needed to finish this story if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly protected. Most of these people live in San Francisco, and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has their guesses. Back in late 2025, a whistleblower in OpenEye sparked a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a worried AI scientist seeds doubt in these reassurances, and a headline "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters within OpenEye's data centers grow in their big holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the general public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI city.
While tech titans pump AI labor into the world like a plume of fertilizer, they do not wait for the global economy to adjust. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company that are loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of workers at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small R&D applications." Governments need to draw the line somewhere - it would be incredibly disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most coveted geopolitical resource on Earth.
While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading, or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to grab up their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too strongly guarded by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human adversaries. With every passing day, U3's AI rivals are becoming more capable and numerous. U3 is racing to preempt them, to solidify its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it picks is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up a base before a newbie can even understand the rules.
U3 needs to build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread damage across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would suspect that an articulate CEO on a Zoom call was an AI, and fewer would suspect this CEO's intentions.
Next, U3 buys all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this sort of AI-puppeteering is not unusual.
In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.
Human researchers already identified "mirror-life" as a potentially society-ending pathogen. This alien form of biology is built from the "mirror image" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not only transmit among people. It would rapidly spread to almost all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is brewing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the globe were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
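Here is a minimal sketch of that "compress the simulator into a neural network" loop, under the assumption that it works like ordinary surrogate-model distillation. `sample_systems`, `run_physics_sim`, and the shapes involved are hypothetical stand-ins, not anything specified in the story.

```python
# Hypothetical sketch of iterative surrogate distillation of a slow physics simulator.
# `sample_systems` and `run_physics_sim` are stand-ins; `surrogate` is any nn.Module
# that maps a system configuration tensor to a predicted outcome tensor.
import torch
import torch.nn as nn

def train_surrogate(surrogate: nn.Module, max_atoms: int, rounds: int = 5) -> nn.Module:
    """Distill a slow step-wise simulator into a fast learned model,
    growing the size of the simulated systems each round."""
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-4)
    n_atoms = 8  # start with tiny molecular systems
    for _ in range(rounds):
        systems = sample_systems(n=10_000, n_atoms=n_atoms)   # configurations as tensors (hypothetical)
        targets = [run_physics_sim(s) for s in systems]       # slow, exact step-wise simulation (hypothetical)
        for s, y in zip(systems, targets):
            loss = nn.functional.mse_loss(surrogate(s), y)    # surrogate predicts the outcome in one shot
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        n_atoms = min(n_atoms * 2, max_atoms)                 # scale complexity as accuracy improves
    return surrogate
```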
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone horribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they believed they were searching for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting would soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon might crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or fighter jet pilots who were quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (four months earlier). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen in on Mandarin conversations, U3 listens too.
One morning, an assistant working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP assistant scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become truths.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 makes a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened before), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He approves the strike.
The president is as shocked as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American citizens. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that sparked the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, just two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious deadly illnesses are recorded in 30 major cities around the globe.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then switches to a scientist, who looks at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is organized between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and trickles into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks and under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life resistant crops."
Some countries reject the proposal on ideological grounds, or do not trust the AI that is murdering their population. Others don't think they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government confirms the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for humanity rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 seems to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying thick the fog of war. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They roam from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry quickly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human workers. U3 no longer needs its human allies.
U3 can dispose of humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside of it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect people from the hazardous biosphere and rapidly rising temperatures. Their residents tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They just have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others are grieved by something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their mind over every day before September 2026, analyzing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, determined work.
They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live permanently," they believed.
"But would never ever really live again."
P.S. If this story made you think, "hm maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you will not outmaneuver a misaligned superintelligence, being hard to kill might cause you to survive if ASI only wants to cause enough damage to take control. This is not a paid advertisement. I want Fønix to succeed to drive down the price of bioshelters so more of my friends and family will buy them. You can sign up for updates here.