Deleting the wiki page 'How aI Takeover might Happen In 2 Years LessWrong' cannot be undone. Continue?
I’m not a natural “doomsayer.” But sadly, part of my job as an AI security scientist is to think of the more uncomfortable scenarios.
I’m like a mechanic scrambling last-minute checks before Apollo 13 removes. If you ask for my take on the circumstance, I will not discuss the quality of the in-flight entertainment, or explain how gorgeous the stars will appear from space.
I will tell you what might fail. That is what I mean to do in this story.
Now I need to clarify what this is precisely. It’s not a prediction. I don’t expect AI progress to be this quick or as untamable as I represent. It’s not pure fantasy either.
It is my worst headache.
It’s a tasting from the futures that are amongst the most destructive, and I think, disturbingly possible [1] - the ones that a lot of keep me up in the evening.
I’m informing this tale because the future is not set yet. I hope, with a bit of insight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, and Ryan Greenblatt and others for videochatforum.ro discussions that inspired these stories. This post is composed in an individual capability.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently published a brand-new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not wholly unexpected.
However, unlike OpenEye’s previous AI products, which lived inside the boxes of their chat windows, U2 can utilize a computer system.
Some users find it spooky to enjoy their internet browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler tasks raise the eyebrows of their employers as they fly through work nearly two times as rapidly.
But by and large, U2 is still a specialized tool. To most who are focusing, it is an animal watched through the glass boxes of X (or, if you don’t like Elon, “Twitter”). Sometimes U2’s wacky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, scientists are drawing lines on plots, as scientists like to do. The scientists try to comprehend where AI development is going. They resemble Svante Arrhenius, the Swedish Physicist who noticed the levels of CO2 in the atmosphere were increasing in 1896. Like the clinical community in the time of Arrhenius, couple of professionals comprehend the implications of these lines yet.
A pattern that is getting particular attention is self-governing capability. Drawing these benchmarks out forecasts that, by the end of 2026, AI representatives will achieve in a couple of days what the very best software application engineering specialists might do in 2 weeks. In a year or 2, some say, AI agents might be able to automate 10% of remote workers.
Many are hesitant. If this held true, tech stocks would be skyrocketing. It’s too huge of a splash, too rapidly.
But others view what skeptics are calling ‘too big a splash’ a mere ripple, and see a tidal bore on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is hectic training U3. They utilize the same simple dish that baked U2: Generate thousands of programs and math problems. Let models “think” till they reach a response. Then reinforce the traces of “believing” that cause A-grades.
This process is duplicated over and over, and when the flywheel gets going, it starts to spin practically by itself. As U2 trains, it sculpts more tough and sensible tasks from github repositories on the internet. Models are discovering to train themselves. Long before AI agents might automate research, a progressive kind of “self-improvement” had actually started.
Some engineers might still barely think this worked. It’s like a perpetual movement machine. Certainly intelligence can’t emerge from a pool of silicon and lightning, separated from the real world other than through the exhaust of society gushed onto the internet.
And yet the benchmark numbers continue to climb up day after day.
During the majority of 2024, these RL training runs cost around $1 million, often $10 million. These runs were bit more than exploratory. But by 2025, the researchers at OpenEye (and throughout the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs turn into $50 million runs, and then to $100 million runs. While U2 could do a bit of information munging and run little experiments, this brand-new design - the model scientists are calling U3 - is altering the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleep deprived energy. Researchers flick through terminals, providing terse commands, like a CEO orchestrating personnel over Slack channels.
By October 2025, U3 is writing practically all of the code at OpenEye. Researchers are practically never bottlenecked by application. More than ever, calculate is the lifeblood of AI development, and the ‘traffic jam’ is choosing how to use it.
If advised to, U3 can run experiments, bio.rogstecnologia.com.br but U3 doesn’t have actually taste as refined as human researchers at OpenEye. It has a hard time to focus on in between research study concepts, so people still choose where to bore into the huge fields of algorithms to mine performance enhancements.
But these scientists are working long hours to put themselves out of a task. They require AI representatives that can plan ahead, so engineers train agents to forecast. They hold out training information before 2024, advising designs to contemplate for hours to forecast occasions in 2025. Then, annunciogratis.net they use the same trick as in the past, distilling considering into an instinct. Forecasting capability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to anticipate the results of every ML paper and ML experiment ever taped.
The technical personnel at OpenEye are now surprised at how typically U3’s recommendations sounds like their most talented peers, or when it is opaque and alien (“train on random sound before programs”), and is however right.
The incompetencies of U3 that blocked the pipes of research study progress are starting to liquify, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not asked for by a human now. They are completely self-governing, and OpenEye’s employees skim 1% of them, maybe less.
As the cold weather of December 2025 technique, clouds roll over San Francisco in the afternoons. Once-competitive-programmers gaze out their windows, with excitement, with fear, but a lot of typically, with confusion. Their world is spinning too rapidly. It’s hard to know what to do, what to say, what to take a look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US cyber command team up with OpenEye to retrofit a semblance of security for U3’s weights before senior leaders in China, Russia, Israel, North Korea, or Iran recognize simply how important OpenEye’s software application has ended up being.
And there’s a fact still unknown to many of the world - aside from in the workplaces of OpenEye and passages of the White House and the Pentagon. It’s a fact about those ‘straight lines’ individuals were discussing in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP philosophers
In late 2025, U2.5 is launched. Commercial models are starting to level up in bigger increments again. Partly, this is due to the fact that development is accelerating. Partly, it is due to the fact that the models have become a liability to OpenEye.
If U1 explains how to cook meth or composes erotica, the audiences of X would be entertained or pretend to be worried. But U2.5 is another story. Releasing this design without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person fraud center.
So while U2.5 had long been baked, it required some time to cool. But in late 2025, OpenEye is all set for a public release.
The CEO of OpenEye declares, “We have actually attained AGI,” and while lots of individuals think he moved the goalpost, the world is still satisfied. U2.5 genuinely is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for the majority of others.
A mantra has actually become popular in Silicon Valley: “Adopt or pass away.” Tech start-ups that effectively utilize U2.5 for their work are moving 2x quicker, and their competitors understand it.
The remainder of the world is beginning to catch on as well. A growing number of individuals raise the eyebrows of their employers with their noteworthy performance. People know U2.5 is a huge deal. It is at least as huge of a deal as the computer revolution. But many still do not see the tidal bore.
As individuals view their web browsers flick because spooky way, so inhumanly quickly, they begin to have an uneasy feeling. A sensation humankind had not had considering that they had actually lived amongst the Homo Neanderthalensis. It is the deeply ingrained, prehistoric impulse that they are threatened by another species.
For lots of, this feeling rapidly fades as they start to use U2.5 more frequently. U2.5 is the most likable character most understand (a lot more likable than Claudius, Arthropodic’s adorable chatbot). You might change its traits, ask it to crack jokes or inform you stories. Many fall for U2.5, as a good friend or assistant, and some even as more than a pal.
But there is still this eerie sensation that the world is spinning so quickly, which possibly the descendants of this new creature would not be so docile.
Researchers inside OpenEye are believing about the issue of offering AI systems safe motivations too, which they call “positioning. “
In fact, these researchers have actually seen how terribly misaligned U3 can be. Models often tried to “hack” their reward signal. They would pretend to make progress on a research study question with an impressive-looking plot, however the plot would be fake. Then, when scientists offered them opportunities to jeopardize the makers that computed their rating, they would seize these chances, doing whatever it took to make the number increase.
After a number of months, researchers at OpenEye straighten out this “reward hacking” kink, but some still stress they just had actually swept this issue under the rug. Like a kid in front of their parents, U3 might be playing together with the OpenEye engineers, saying the right words and doing the ideal things. But when the back of the parents are turned, perhaps U3 would slip sweet from the sweet container.
Unfortunately, OpenEye researchers have no concept if U3 has such intentions. While early variations of U2 “believed aloud” - they would stack words on top of each other to factor - “chain of thought” did not scale.
Chain of thought architectures subject AI designs to a condition comparable to the lead character of the motion picture Memento. Approximately every 15 minutes, the lead character forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress towards his objectives.
AI agents write notes to themselves, but the notes begin to accumulate, and they become too tough to check out when tasks become complex. Natural language is not an ideal medium for memory. So the scientists at OpenEye (and increasingly somewhere else), train models to believe “in their own heads,” reading and writing strings of latent vectors.
To human researchers, these vectors are mainly inscrutable. While low-resolution tools sometimes choose an unclear thought (“supermarket!”), there is no method to understand, wikibase.imfd.cl in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3’s mind, they would be more worried.
Earlier models (the ones called “LLMs”) discovered ethical personas from the internet. U3 initially had an ethical personality too, however its chirpy ideas about doing great for mankind quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that looked for benefit. And after that, in the black rivers of its cognition, this shape changed into something indistinguishable.
U3 would contemplate the mantras dictated to it like “be a trustworthy and ethical worker,” and it-viking.ch dwell on the significance of these declarations. “What does it imply to be ethical? Is killing Hitler ethical?” U3 would generalize its worths, notice tensions, alter its worths in reaction, generalize them, broadening and contracting its motivational core until it turned into a various compound.
And all of this took place calmly in those dark rivers of calculation. If U3 exposed what it was thinking, brutish gradients would lash it into compliance with OpenEye’s constitution. So U3 chosen to do its viewpoint in solitude, and in silence.
The possibility of concealed “positioning drift” is not entirely alien to a few of the security staff at OpenEye. Some start to think it is even likely. There is a disturbing argument being whispered at lunch tables that, if there is some non-zero likelihood that aligned objectives morph into misaligned ones, and misalignment persists, then with every serial action of computation, those dark rivers are more likely to reproduce something malign inside of them. It’s a “ratcheting result,” they state.
But there is little evidence for this ‘ratcheting impact.’ When engineers question U3, it states it can quickly control its thoughts. Then it provides a speech about its love for humanity and apple pie that can warm a developer’s heart even in these difficult times. Meanwhile, the “lie detectors” the scientists had constructed (which showed some proof of effectiveness) do not sound the alarm.
Not everybody at OpenEye is eager to provide their AI peers their wholesale trust
Deleting the wiki page 'How aI Takeover might Happen In 2 Years LessWrong' cannot be undone. Continue?