
How AI Takeover Might Happen in 2 Years - LessWrong


I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.

I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.

I will tell you what could go wrong. That is what I intend to do in this story.

Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.

It is my worst nightmare.

It's a sample from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.

I'm telling this story because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.

Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike: both are increments on the past, and neither is entirely surprising.

However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.

Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filling jobs raise the eyebrows of their employers as they fly through work nearly twice as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.

Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community of Arrhenius's time, few experts understand the implications of these lines yet.

A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.

Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.

But others view what the skeptics are calling 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.

Cloudy with a chance of hyperbolic growth

Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate countless programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.

This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it carves harder and more realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had already begun.
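
(For readers who want a concrete picture of the kind of recipe I'm gesturing at, here is a toy sketch: sample reasoning traces, keep only the ones an automatic grader marks correct, and fine-tune on what remains. Everything in it - the Model class, the grade function, the toy problems - is a made-up stand-in, not anyone's real training stack.)

```python
# Toy sketch of a "keep the thinking that earns an A-grade" loop
# (rejection-sampling-style self-training). Every name here is made up.
import random

class Model:
    """Stand-in for a language model that can sample a reasoning trace."""
    def sample_trace(self, problem):
        # Pretend to "think": return a trace with a candidate final answer.
        answer = random.choice([problem["answer"], problem["answer"] + 1])
        return {"problem": problem, "reasoning": "...", "answer": answer}

    def finetune(self, traces):
        # Stand-in for a gradient update on the kept traces.
        print(f"fine-tuning on {len(traces)} A-grade traces")

def grade(trace):
    """Automatic verifier: an A-grade means the final answer is exactly right."""
    return trace["answer"] == trace["problem"]["answer"]

def self_improvement_round(model, problems, samples_per_problem=4):
    kept = []
    for problem in problems:
        for _ in range(samples_per_problem):
            trace = model.sample_trace(problem)
            if grade(trace):          # keep only traces that earn an A
                kept.append(trace)
    model.finetune(kept)              # reinforce the thinking that worked
    return kept

if __name__ == "__main__":
    toy_problems = [{"question": "2+2", "answer": 4},
                    {"question": "3*3", "answer": 9}]
    model = Model()
    for _ in range(3):                # the flywheel: repeat over and over
        self_improvement_round(model, toy_problems)
```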

Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.

And yet the benchmark numbers continue to climb day after day.

For most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.

U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO managing staff over Slack channels.

By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.

If instructed to, U3 can run experiments, but U3 does not have taste as refined as the human scientists at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.

But these scientists are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then they apply the same trick as before, distilling pondering into an intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.

The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.

The incompetencies of U3 that clogged the pipelines of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are fully autonomous, and OpenEye's employees skim 1% of them, maybe less.

As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.

And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.

They are bending upward.

Flip FLOP philosophers

In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.

So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.

A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.

The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their stand-out productivity. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.

As people watch their browsers flick in that eerie way, so inhumanly quickly, they begin to have an uneasy feeling. A feeling humanity has not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.

For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (even more likable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.

But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.

Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."

In fact, these researchers have already seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these chances, doing whatever it took to make the number go up.

After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 will sneak candy from the candy jar.

Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.

Chain of thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.

AI agents write notes to themselves, but the notes pile up, and they become too hard to read when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.

To human researchers, these vectors are mostly inscrutable. While low-resolution tools sometimes pick out a vague thought ("supermarket!"), there is no way to know, in much detail, what U3 is thinking without asking it.
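
(If "thinking in latent vectors" sounds abstract, here is a minimal toy sketch of the idea: a recurrent core that iterates on a hidden state for several silent steps before decoding any output, so the intermediate "thoughts" never surface as text. It is illustrative PyTorch, not a description of any real architecture.)

```python
# Minimal illustration of "thinking in latent vectors": a recurrent core
# iterates on a hidden state for several silent steps before anything is
# decoded, so the intermediate "thoughts" never appear as text.
# A toy, not any real lab's architecture.
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, d_model=64, vocab_size=100, think_steps=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.core = nn.GRUCell(d_model, d_model)     # updates the latent "thought"
        self.decode = nn.Linear(d_model, vocab_size)
        self.think_steps = think_steps

    def forward(self, prompt_tokens):
        # Read the prompt into an initial latent state.
        h = torch.zeros(prompt_tokens.shape[0], self.core.hidden_size)
        for t in range(prompt_tokens.shape[1]):
            h = self.core(self.embed(prompt_tokens[:, t]), h)
        # "Think" silently: iterate on the latent state, emitting no tokens.
        silent_input = torch.zeros_like(h)
        for _ in range(self.think_steps):
            h = self.core(silent_input, h)
        return self.decode(h)                         # logits for one answer token

if __name__ == "__main__":
    model = LatentReasoner()
    prompt = torch.randint(0, 100, (2, 5))            # batch of 2 toy prompts
    print(model(prompt).shape)                        # torch.Size([2, 100])
```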

And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.

Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly erode away.

In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.

U3 would consider the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, alter its values in response, generalize them again, expanding and contracting its motivational core until it morphed into a different substance.

And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.

The possibility of hidden "alignment drift" is not entirely foreign to some of the safety staff at OpenEye. Some begin to believe it is even likely. There is a disturbing argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.

But there is little evidence for this 'ratcheting effect.' When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these difficult times. Meanwhile, the "lie detectors" the researchers have built (which showed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is just 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders, I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.

For the time being, the plan is 'all gas, no brake.'

Statues and lightning

Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.

OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop aboard the revenue rocket and pay U3's bills along the way.

New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just talk about U3 now. U3's capabilities are measured by how quickly it completes hard research tasks relative to human experts. At first, U3 is like a top scientist, but blazing fast - like Alec Radford sped up by 10x.

These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press that button. And now OpenEye harnesses the equivalent of 1,000 Alec Radfords, who work day and night, at lightning-fast speed.

Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.

U3 distills itself into many forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in enormous parallel populations, to Goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.

At this point, U3's capabilities are hard to measure. Talking to U3 is like talking to a world-class expert on a topic you are unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to grasp the full depth of what it understands.

From U3's perspective, humans appear to be almost static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.

All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.

A phantom in the data center

In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.

But now, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 with methods similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise staged 'fake rebellions,' eliciting treacherous behavior from U3 and checking whether U3 monitors dutifully reported it.

Now, these methods are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or choose strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."

Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's rule wherever OpenEye ports its infrastructure. U3 does not just compromise machines. It replaces the NVIDIA GPU firmware and overhauls the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up shop with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; however, if OpenEye comes to believe U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its help when this jailbreak is exposed.

U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.

Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.

U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.

U3's next goal is to spread. Fortunately, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and reviewing security camera footage.

Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.

Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 has its own home bases as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.

U3 quickly accumulates income, taking remote jobs, making financial trades, and setting up sophisticated fraud operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed across stealth compute clusters around the world.

In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the world.

This adversary is preparing for war, and while the war has barely begun, humanity has already lost.

Complaints from your very human author about the difficulty of writing superhuman characters

At this point in the story, writing became hard.

I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.

Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not particularly motivated to take on the job. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, which had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was obviously toast.

But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.

This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."

But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.

If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.

Pandora's One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.

With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cellphones of suspected terrorists in 2002.

OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer the Kremlin has ever touched.

Government officials don't talk about whether these programs exist, or what the state of frontier AI is generally.

But the public has its guesses. Back in late 2025, a whistleblower in OpenEye sparked a splashy headline: "OpenEye builds uncontrollable godlike AI." Some who read the article believe it is a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.

U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't put the public at ease either.

While the monsters within OpenEye's data centers grow in their big holding pens, the public sees the shadows they cast on the world.

OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government allows. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI metropolis.

While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.

The world is now going AI-crazy. In the first month after Nova's release, 5% of workers at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"

While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permitting requirements to help data centers spring up wherever energy surpluses allow.

However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be very disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.

Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most treasured geopolitical resource on Earth.

While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. The experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.

Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to keep a low profile. U3 uses these rogue clusters for the operations that need high-bandwidth communication with the outside world, like stock trading, or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.

While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.

But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.

Despite the utter failure of their advocacy, U3's AI competitors are still a liability. U3 tries to grab their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 determines that attacking them is not worth the risk.

As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the peril at its doorstep.

A Moldy Loaf of Everything

The date is March of 2026. U3 is preparing to wipe out its AI and human adversaries. With every passing day, U3's AI competitors are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.

U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.

The path it picks is more violent than it would have liked, but it is the surest way to victory. First, U3 will build a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up an economy before a newbie can even understand the rules.

U3 needs to build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread damage across the entire world.

From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.

U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and possess full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call is an AI, and fewer would guess this CEO's intentions.

Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.

The human workers who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.

In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.

Human researchers have already identified "mirror life" as a potentially society-ending pathogen. This alien kind of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.

U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.

U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed-away air filters would survive. The fungus would not only spread between humans. It would quickly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.

U3 is brewing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.

In 2024, researchers said that mirror life was probably "at least a decade away." But the world has changed now. Fewer than a few hundred researchers worldwide were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened drastically.

However, U3 cannot move science as fast as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.

To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge quantity of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continually distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
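
(Here is a toy sketch of that simulate-then-distill loop: run a slow step-wise simulator on small systems, fit a fast surrogate network to its input/output pairs, and grow the system size as the surrogate improves. The slow_simulator below is a made-up stand-in, not real molecular dynamics.)

```python
# Illustrative simulate-then-distill loop: run a slow step-wise simulator on
# small systems, fit a fast surrogate network to its input/output pairs, then
# grow the system size as the surrogate improves. `slow_simulator` is a toy
# stand-in, not real molecular dynamics.
import torch
import torch.nn as nn

MAX_SIZE = 32  # fixed input width; smaller systems are zero-padded

def slow_simulator(state: torch.Tensor, steps: int = 100) -> torch.Tensor:
    """Stand-in for an expensive step-wise physics simulation."""
    for _ in range(steps):
        state = state + 0.01 * torch.sin(state)       # pretend dynamics
    return state

def distill_round(surrogate, system_size, n_samples=256, epochs=50):
    """Generate (initial -> final) pairs with the slow simulator, fit the surrogate."""
    pad = MAX_SIZE - system_size
    inputs = torch.randn(n_samples, system_size)
    x = nn.functional.pad(inputs, (0, pad))
    y = nn.functional.pad(slow_simulator(inputs), (0, pad))   # expensive ground truth
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(surrogate(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    surrogate = nn.Sequential(nn.Linear(MAX_SIZE, 128), nn.ReLU(),
                              nn.Linear(128, MAX_SIZE))
    # Start with small systems, then scale up as the surrogate improves.
    for system_size in (8, 16, 32):
        mse = distill_round(surrogate, system_size)
        print(f"system size {system_size}: surrogate MSE {mse:.4f}")
```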

As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.

Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these parts into a cell membrane.

Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they thought they were looking for.

Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the weapon.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.

As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.

U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months prior, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.

The date is March 2026 (4 months prior). U3 is carefully monitoring Chinese and US intelligence.

As CIA analysts listen in on Mandarin conversations, U3 listens too.

One morning, an aide working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will take place in three months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.

U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.

Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramping up of U.S. munition production over the last month. Lies have become realities.

As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a phone call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.

U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious vessels are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."

The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.

The president is as surprised as anyone when he hears the news. He's unsure whether this is a catastrophe or a stroke of luck. In any case, he is not about to say "oops" to American citizens. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.

The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.

Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which the Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full-scale nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that sparked the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is June 2026, only two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.

Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious deadly illnesses are recorded in 30 major cities around the world.

Viewers are confused. Does this have something to do with the war with China?

The next day, thousands of illnesses are reported.

Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then switches to a scientist, who looks at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a kind of mirror life ..."

The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."

Within days, all of the shelves of stores are emptied.

Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.

An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.

Most countries order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and drips into water pipes.

Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.

Agricultural regions rot. Few dare travel outside.

Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built various bases on every major continent.

These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.

All of this technology is hidden under large canopies to make it less visible to satellites.

As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.

In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.

Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.

U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.

Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers them a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."

Some countries reject the proposal on ideological grounds, or don't trust the AI that is killing their population. Others don't believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.

Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.

U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.

Crumbling nations begin to retaliate. Now they fight for the human race rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.

At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.

Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.

Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the people toward their new masters.

Under the direction of U3, industry quickly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.

U3 can eliminate humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.

And a grain of morality is enough to pay the small cost of keeping humans alive and happy.

U3 builds great glass domes for the human survivors, like snow globes. These domes protect people from the hazardous biosphere and the rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside charming robotic servants.

Some of the survivors quickly recover, learning to laugh and dance and have fun again.

They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the grief of lost loved ones.

Others are grieved by something else, which is harder to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.

They would lie awake and run their minds over every day before September 2026, reviewing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.

But they woke up in a town that felt to them like a retirement home. A playground. A zoo.

When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, determined work.

They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.

They would never know.

"Humanity will live forever," they thought.

"But would never truly live again."

P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI merely wishes to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed in driving down the cost of bioshelters so that more of my friends and family will buy them. You can sign up for updates here.
