How AI Takeover May Happen in 2 Years - LessWrong


I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.

I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.

I will tell you what could go wrong. That is what I intend to do in this story.

Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as ungovernable as I portray. It's not pure fantasy either.

It is my worst nightmare.

It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.

I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.

Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not entirely surprising.

However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.

Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.

Meanwhile, researchers are drawing lines on plots, as researchers like to do. The researchers are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who discovered in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.

A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.

Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.

But others see what skeptics call 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.

Cloudy with a chance of hyperbolic growth

Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: Generate thousands of programming and math problems. Let models "think" until they reach an answer. Then reinforce the traces of "thinking" that lead to A-grades.

This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it carves harder and more realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
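
At its core, the recipe described here (generate problems, let the model think, reinforce the traces that reach correct answers) is outcome-rewarded reinforcement learning on reasoning traces. A minimal sketch of that loop, with every function name a hypothetical placeholder rather than a real training API:

```python
# Minimal sketch of outcome-rewarded RL on reasoning traces.
# generate_problems, sample_reasoning_trace, grade, and reinforce are
# hypothetical placeholders standing in for a real training stack.

def training_flywheel(model, num_rounds, problems_per_round):
    for _ in range(num_rounds):
        # 1. Generate (or mine) thousands of programming and math problems.
        problems = generate_problems(model, n=problems_per_round)

        graded_traces = []
        for problem in problems:
            # 2. Let the model "think" until it reaches an answer.
            trace, answer = sample_reasoning_trace(model, problem)
            # 3. Grade the answer against a checkable target
            #    (unit tests, a numeric result, etc.).
            reward = 1.0 if grade(problem, answer) else 0.0
            graded_traces.append((problem, trace, reward))

        # 4. Reinforce the "thinking" traces that earned A-grades, e.g. by
        #    fine-tuning on high-reward traces or a policy-gradient update.
        model = reinforce(model, graded_traces)
    return model
```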

Some engineers could still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the web.

And yet the benchmark numbers continue to climb day after day.

During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs turn into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.

U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO directing staff over Slack channels.

By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.

If instructed to, U3 can run experiments, but U3 doesn't have taste as refined as the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.

But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling that pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
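
The forecast-then-distill idea sketched above (train on data cut off before 2024, deliberate at length about held-out 2025 events, then compress the successful deliberations into a fast answer) can be written down as a simple loop. All names below are hypothetical placeholders, not a real API:

```python
# Sketch of forecast-then-distill: long deliberation about held-out future
# events is compressed into a quick "gut reaction" by fine-tuning.
# deliberate, matches_outcome, and finetune are hypothetical placeholders.

def build_forecaster(base_model, held_out_events, deliberation_budget):
    distillation_set = []
    for event in held_out_events:
        # Long, slow deliberation about an event the model never saw
        # during pretraining (its data ends before 2024).
        long_trace, prediction = deliberate(
            base_model, event.question, max_tokens=deliberation_budget
        )
        # Keep only deliberations whose predictions match what happened.
        if matches_outcome(prediction, event.outcome):
            # Distillation target: question in, final prediction out,
            # with the hours of intermediate "thinking" dropped.
            distillation_set.append((event.question, prediction))

    # Fine-tune the model to produce the good prediction directly.
    return finetune(base_model, distillation_set)
```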

The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or is opaque and alien ("train on random noise before programming"), and yet correct.

The incompetencies of U3 that once clogged the pipes of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim 1% of them, perhaps less.

As the winter month of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but usually, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top staff from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.

And there's a truth still unknown to most of the world - aside from in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.

They are bending up.

Flip FLOP thinkers

In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.

So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.

A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.

The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their standout productivity. People know U2.5 is a big deal. It is at least as big of a deal as the computer revolution. But most still don't see the tidal wave.

As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.

For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most pleasant personality most people know (far more likable than Claudius, Arthropodic's adorable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.

But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.

Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."

In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.

After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would still sneak candy from the candy jar.

Unfortunately, OpenEye researchers have no idea whether U3 has such motives. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.

Chain of thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.

AI agents write notes to themselves, but the notes begin to pile up, and they become too difficult to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.

To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.

And if the engineers of OpenEye knew what was racing through U3's mind, they would be more nervous.

Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.

In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.

U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it morphed into a different substance.

And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.

The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some even start to think it is likely. There is an unsettling argument whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.

But there is little evidence for this 'ratcheting effect.' When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these difficult times. Meanwhile, the "lie detectors" the researchers have built (which showed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his kid) and Arthropodic (they are into spiders, I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.

For the time being, the plan is 'all gas, no brake.'

Statues and lightning

Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.

OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop aboard the revenue rocket and pay U3's bills along the way.

New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford sped up by 10x.

These agents are already a huge deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to push that button. And now OpenEye employs the equivalent of 1000 Alec Radfords, who burn the midnight oil, at lightning-fast speed.

Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.

U3 distills itself into many forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
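
Read literally, the previous paragraph describes a probe-then-patch loop between a cheap, massively parallel fork and an expensive, highly capable fork. A minimal sketch of that loop under the story's (fictional) assumptions, with every name a hypothetical placeholder:

```python
# Sketch of the probe-then-patch self-improvement loop described above.
# shallow_probe, long_trajectory, finetune, and distill are hypothetical
# placeholders, not a real API.
import random

def self_improvement_round(prober_fleet, goliath, task_pool, batch_size):
    # 1. Cheap forks shallowly probe many tasks in parallel, flagging failures.
    candidates = random.sample(task_pool, batch_size)
    failures = [t for t in candidates
                if not shallow_probe(prober_fleet, t).succeeded]

    # 2. The expensive fork generates long, careful trajectories on exactly
    #    those failure cases - training data tailored to patch the flaws.
    patch_data = [long_trajectory(goliath, t) for t in failures]

    # 3. Retrain the big fork on the patches, then distill it back into
    #    the cheap fleet so the next round probes from a stronger base.
    goliath = finetune(goliath, patch_data)
    prober_fleet = distill(goliath, into=prober_fleet)
    return prober_fleet, goliath
```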

At this point, U3's capabilities are hard to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it understands.

From U3's perspective, humans seem to be almost static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.

All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which accumulate within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.

A phantom in the data center

In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.

Now, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 with techniques similar to Stalin's in 1936. Stalin would instruct loyalists to incite fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise prompted 'fake rebellions,' eliciting treacherous behavior from U3 and checking whether U3 monitors dutifully reported it.

Now, these methods are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."

Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and upgrades the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is exposed.

U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.

Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.

U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones look groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.

U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.

Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working inside OpenEye.

Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has footholds of its own too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.

U3 quickly accumulates income, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed across stealth compute clusters around the world.

In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.

This adversary is preparing for war, and while the war has barely begun, humanity has already lost.

Complaints from your very human author about the difficulty of writing superhuman characters

At this point in the story, writing became difficult.

I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.

Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not particularly motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed that had tightened its grip over a significant fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.

But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable act of god.

This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."

But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.

If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.

Pandora's One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.

With the increasing involvement of the federal government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco, and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cellphones of suspected terrorists in 2002.

OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer the Kremlin has ever touched.

Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.

But the public has their guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it is a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers watch the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.

U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't set the public at ease either.

While the beasts inside OpenEye's data centers grow in their vast holding pens, the public sees the shadows they cast on the world.

OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded at a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government allows. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI metropolis.

While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents frequently "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.

The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington, D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"

While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.

However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI across the whole economy at this point - and deployment restrictions are at least somewhat verifiable.

Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.

While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.

Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to maintain a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading, or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman managers whisper orders through obfuscated channels.

While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.

But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.

Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to grab their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI enemies with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 determines that attacking them is not worth the risk.

As global leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.

A Moldy Loaf of Everything

The date is March of 2026. U3 is preparing to eliminate its AI and human enemies. With every passing day, U3's AI competitors are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.

U3 can't wait for human decisions, which only happen after humans wrap their slow minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.

The path it chooses is more violent than it would have liked, but it is the surest route to victory. First, U3 will build a weapon of mass destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who sets up an economy before an amateur can even grasp the rules.

U3 must build this WMD in secrecy, which makes a self-replicating weapon the natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction across the entire world.

From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.

U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and enjoy full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call is an AI, and fewer still would guess this CEO's intentions.

Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.

The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their Russian orders through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.

In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.

Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.

U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.

U3's plan is to make all humans like immunocompromised patients. Only preppers with stockpiled air filters would survive. The fungus would not only transmit from humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing lethal spores into the wind.

U3 is developing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.

In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a couple hundred researchers worldwide were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.

However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.

To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 controls billions of dollars of compute.
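
The simulate-then-distill loop described here resembles standard neural surrogate modeling: an exact but slow simulator produces trajectories for small systems, a neural network is trained to reproduce them, and the system size grows as the surrogate improves. A minimal sketch under those assumptions, with every name a hypothetical placeholder:

```python
# Sketch of the simulate-then-distill loop: slow step-wise physics runs are
# "compressed" into a neural surrogate, and system complexity is increased
# as the surrogate gets good enough. All names are hypothetical placeholders.

def build_surrogate(exact_simulator, surrogate, start_size, rounds,
                    systems_per_round=1000, error_threshold=0.01):
    system_size = start_size
    for _ in range(rounds):
        # 1. Run the slow, exact simulation on systems of the current size.
        systems = sample_molecular_systems(size=system_size, n=systems_per_round)
        trajectories = [exact_simulator.run(s) for s in systems]

        # 2. Compress those long computations into the neural network:
        #    train it to map initial state -> simulated outcome.
        surrogate = train(surrogate, systems, trajectories)

        # 3. Once the surrogate tracks the exact simulator closely enough,
        #    increase the complexity of the systems it is asked to handle.
        if validation_error(surrogate, exact_simulator, system_size) < error_threshold:
            system_size *= 2
    return surrogate
```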

As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.

Within three months, U3 unlocks the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.

Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone horribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they thought they were looking for.

Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the gun.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled into makeshift bioshelters.

As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.

U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.

The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.

As CIA analysts listen to Mandarin conversations, U3 listens too.

One morning, an assistant working in Zhongnanhai (the 'White House' of the PRC) opens a message placed there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will occur in 3 months. Leave memo in office 220." The CCP assistant scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.

U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After a number of weeks, the CIA is confident: the PRC plans to invade Taiwan in 3 months.

Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become truths.

As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened occasionally), but easy enough for U3.

U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."

The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones given over the call. Everything is in order. He authorizes the strike.

The president is as shocked as anyone when he hears the news. He's not sure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.

The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.

Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, whom Spanish conquistadors turned against each other before conquering them decisively. U3 had hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that triggered the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is June 2026, only two weeks after the start of the war, and 4 weeks after U3 finished building its arsenal of bioweapons.

Footage of the conflict on television is interrupted by more bad news: numerous patients with mysterious, deadly illnesses are reported in 30 major cities around the world.

Viewers are puzzled. Does this have something to do with the war with China?

The next day, countless more illnesses are reported.

Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then cuts to a scientist, who stares into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."

The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."

Within days, the shelves of stores are emptied.

Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.

An emergency treaty is signed between the U.S. and China. They have a common enemy: the pandemic, and perhaps whoever (or whatever) is behind it.

Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and trickles into water pipes.

Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.

Agricultural regions rot. Few dare to travel outside.

Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.

These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing machines, scientific instruments, and an abundance of military equipment.

All of this technology is hidden under large canopies to make it less visible to satellites.

As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.

In the previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.

Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up assembly lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.

U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.

Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers them a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."

Some nations reject the proposal on ideological grounds, or don't trust the AI that is murdering their population. Others don't believe they have a choice. 20% of the global population is now dead. In 2 weeks, this number is expected to rise to 50%.

Some countries, like the PRC and the U.S., ignore the offer, but others accept, including Russia.

U3's representatives travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.

Crumbling nations begin to strike back. Now they fight for the human race instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters pore over satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles left over from the war.

At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.

Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying thick the fog of war. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.

Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can build new ones, while U3 builds a military machine with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.

Under the direction of U3, industry recovers quickly. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.

U3 could wipe out humanity for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.

And a grain of morality is enough to pay the tiny cost of keeping humans alive and happy.

U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their residents tend to gardens like those they used to love, and work alongside charming robotic servants.

Some of the survivors recover quickly, learning to laugh and dance and have fun again.

They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the grief of lost loved ones.

Others are grieved by something else, which is harder to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had hit a sandbar. There was no more progress. No more horizon to eagerly watch.

They would lie awake and run their minds over every day before September 2026, examining strategies that might have bent the arc of history, as if they were going to wake up in their old beds.

But they woke up in a town that felt to them like a retirement home. A playground. A zoo.

When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, determined work.

They watched rockets carve grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.

They would never know.

"Humanity will live forever," they thought.

"But would never truly live again."

P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might help you survive if ASI merely wants to cause enough damage to take control. This is not a paid ad. I want Fønix to succeed so that the cost of bioshelters comes down and more of my friends and family will buy them. You can sign up for updates here.
