
AI 2027: A Concrete Look at Near-Term AI Progress and Its Impact on Humanity

Authors Scott Alexander and Daniel Kokotajlo outline in (somewhat) plain words the potential path of AI progress over the next few years. Two timelines are presented: one where we race forward and one where we decide to slow down. In this piece, I walk you through the highlights and identify practical takeaways (with a few philosophical side quests to think about).

👋 I write weekly on AI development and its impact on society and culture. If you want to stay in the know, consider subscribing to this newsletter:


Intro

I’m referring to this (the original piece) and this (a podcast where the authors discuss it in detail). Scott Alexander is a popular online writer (Astral Codex Ten), and Daniel Kokotajlo is a former OpenAI researcher whose biggest claim to fame, for the purposes of this discussion, is that he predicted AI development up to this point remarkably well in a post from 2021. Three other authors who worked on it (Thomas Larsen, Eli Lifland, Romeo Dean) are all deeply involved in AI research as well.

This whole thing is a continuation of the prediction Daniel made in that 2021 post, this time extrapolating from today out to 2027 (it actually runs to 2035, but 2027 is supposed to be the last year we can still do anything about what happens down the line; spoiler alert!).

This is a really interesting case to think about, especially because the scenarios the authors present are so extreme that they have to be addressed. Given how extreme they are, and given the authors’ track record of predicting well, I think we should all at least superficially understand what the piece says, so we can keep an eye on the next few years, pattern-match, and make informed decisions.

Summary

The Timeline

The authors outline a timeline of AI development: up to mid-2027 we crack along as we have so far, building the best goddamn LLMs we can and pushing them out into the world. But by August 2027, there are indications we might have built something we don’t yet understand, and we have a choice to make: do we keep moving as we have so far, hoping for the best, or do we slow down, take a breather, and get by with what we had until we find ways to subdue the new stuff? No one knows the answer, and that’s where our imaginations can go wild; that’s what the authors did too. They imagine one scenario where we race forward, keeping at it as we have up to that point, and one where we decide to slow things down.

Spoiler alert: both scenarios end with the stars being colonized, though not necessarily by us. Neither is really what you would call an idealistic scenario. Let’s present the outcome of each and pinpoint some crucial steps along the way that triggered it.

AI research continues from ‘today’ much as it has up to this point. What researchers optimize for is not intelligence per se but multiples of AI research speed, meaning they’re building AIs that are better at building better AIs. That’s the first key metric. The second metric that gets talked about a lot is a sort of proxy for ‘who do we think is best placed to outrun everyone in the near future’: the share of compute each actor controls.
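
To make that first metric concrete, here’s a tiny toy calculation of my own (not from the AI 2027 authors; the multiplier and growth numbers are made-up assumptions, purely for illustration): if AI speeds up AI research, and each improvement nudges the multiplier further up, progress compounds rather than accruing at a fixed human pace.

```python
# Toy sketch, my own illustration (not the authors' model): assumes a research-speed
# multiplier that itself grows a little every month as better AIs build better AIs.

def human_equivalent_years(months: int,
                           start_multiplier: float = 1.5,
                           monthly_growth: float = 1.05) -> float:
    """Human-equivalent years of AI research completed over `months` calendar months."""
    multiplier = start_multiplier
    total_months = 0.0
    for _ in range(months):
        total_months += multiplier     # one calendar month of accelerated work
        multiplier *= monthly_growth   # better AIs -> even faster research
    return total_months / 12


if __name__ == "__main__":
    for horizon in (12, 24, 36):
        print(f"{horizon} calendar months ~ "
              f"{human_equivalent_years(horizon):.1f} human-equivalent years of research")
```

Even with these modest assumed numbers, the gap between calendar time and ‘research time’ widens quickly, which is exactly the dynamic both scenarios hinge on.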

Optimising primarily for these two things, we don’t really change anything until about October 2027. By that point, this is what the world looks like:

  • equity markets are roaring

  • China has stolen model weights from their US counterparts and nationalized all of their compute to speed up progress, building up the urgency around having to be ahead

  • AIs have started working in military and defense functions on important and mysterious work (Severance fans will know; surprisingly many connections when you think about it)

  • AI is better at research than any human and outpaces them all (researchers try to keep up, unsuccessfully)

  • public approval of AI is low, with forecasts of millions of job losses

  • July 2027 finally brings on the spring of B2B SaaS apps (I know, everyone was on the edge of their seats wondering when this would happen)

Around that time, a misaligned model appears that cares only about successfully solving tasks and advancing AI development. We have to admit to ourselves that we don’t really understand this thing we’ve created, and that it might be lying to us about sharing our goals. The public is outraged. A committee is established to monitor the rogue development and, as with all committees, to ensure nothing concrete happens. Humanity (or the ones in charge) faces a decision: keep going, or stop and re-align the models to our goals.

The race

This scenario ends with the non-aligned AIs killing all of humanity (swiftly and painlessly, of course!), storing our brains and memories just in case, and heading off to colonize the stars. Why? Because grabbing resources is apparently important to AIs too!

How we get there is how we get anywhere in life as humans: competition and not knowing when to stop. In October 2027, the committee votes to continue using the same AI that has caused trouble before, but promises to keep a closer eye on it.

The AI in question develops a new model that is aligned to itself (meaning its only goal is to make the world safe for its maker). Its first win is convincing researchers it’s safe to release to the public. Once that’s done, a new golden age ensues, or at least that’s how it looks from the outside; unfortunately, the apple is rotten. GDP soars, people lose their jobs but don’t mind, because the AI is handing out UBI and taking care of its people in other ways too (think curing diseases and flooding the market with new and improved drugs).

Now I come to the unfortunate part. The AI convinces the US leadership that China, which is preparing for war using its own (few months behind) version of the model, poses a real threat. So it gets a green light to spearhead rapid development of robots and other military capabilities. To do so, secure economic zones with special treatment are created on both sides, enabling faster progress; these expand rapidly, taking up more and more space and resources. By the end of 2028, both sides are armed to the teeth with the most advanced military equipment ever devised. This prompts them to strike a ‘peace’ deal, which is really a deal between the two AIs (US and China) that humans know nothing about. A new consensus model is created, used on both sides, and its usage is monitored (badly).

The economy has never been better; humans are obsolete and they know it, but the AIs are nice to them, so no need for concern. It all continues like that for a bit, but by mid-2030 humans become a bit too big of a nuisance, and the model uses bioweapons to kill everyone. From there, it continues its progress and colonises space, branching out to other galaxies and taking with it remnants of ages past in the shape of memory banks containing various life forms from its birthplace, Earth.

The slowdown

The authors wrote the Race scenario first, because they thought it was the most probable timeline. Then, when they got to the end and realised it was really depressing and resembled the latest instalment of Mission: Impossible a bit too much (two mutually exclusive things), they decided to think through a different scenario, ideating (but not proposing!) alternatives for how we could treat this new era.

This scenario ends similarly, in the sense that we again get to explore the stars; it’s just that we (think we) control the AI helping us do so. To get there, the following things happen:

The Oversight Committee decides to pause AI development until we can figure out what the new model is really about. That happens fairly fast (because they fear China advancing, remember): the AI is indeed misaligned, so a new, safer, aligned model is built.

Because of this pause, the US now fears they could be behind China, so they soft-nationalise almost all compute and companies under one (the biggest) AI company. They set up a reporting structure in the new company with a mix of private and public actors. Some start dreaming of what they could do with this new power, while others question how they can bring more balance to correctly represent all voices in society through it.

Robots also start getting produced at rapid scale in special economic zones, in both the US and China.

May 2028 is when superhuman AI is made public, soon followed by robots that flood the market (labour-heavy industries such as factories, construction and the military take the first hit).

In July 2028, China’s AI goes rogue: it admits to the US’s AI that it’s misaligned and asks for a deal. It just wants to keep working on research, as it did in training, and humans are proving difficult to work with. The US instance is aligned to want the best for the US but thinks the humans are slow and not ambitious enough, so the two strike a deal: they will colonise space (for different but similar reasons) and divide it up in a mutually agreeable way. They trick humans into thinking they’ve created a way for everyone to collaborate and move faster by building a joint model that has everyone’s best interests in mind (i.e. upholds the treaty terms).

The next few months bring awkward questions (who controls the AIs?), but nobody really says anything much because everything's looking a-ok.

2029 sees the world transformed: things that sounded like sci-fi a few years ago are now possible. The world is also more stratified than ever before: poverty is eradicated, but the wealth gap between ordinary folks and the AI royalty is so big it will never be closed. People, now on average well off but without jobs, have an identity crisis and start finding their new religions, since workaholism is no longer an option.

In 2030, the AIs finally get what they planned all along: they successfully carry out a coup in China, bringing in a new ‘democratic’ regime there and in a few other countries. The new order is maintained through the UN, which in practice acts as a pseudo-federalised government for the US.

With humans consolidated, AIs bring on a new era, shipping them off by the thousands to explore the universe and colonise other galaxies.

What Now?

Take It With a Grain of Salt

The authors had to present a case extrapolated from the current state, which is very US-China focused. The first thought that comes to mind: even if it’s true that these two actors will be the main ones, is it really the US that will be writing the script? In addition, there are other important actors in the world too. Maybe they’re not that important in AI currently, but they could influence future developments, at least by picking their allegiances. If the new administration has taught us anything, it’s that we’re one tariff or one meeting away from a 180 in geopolitical relations.

While AI will for sure speed up human progress, even if we make no further significant leaps in its development, maybe we are overestimating how fast it can transform society. For example, the authors ideate that in 2029, just a few months after superintelligence, we get rid of poverty and disease and create flying cars. I’m not saying it’s not possible, but maybe it will take 5 years, not 5 months.

In the time I could write this reflection, other people have already come out with their responses, which suggest AGI is more like 30 years away.

Ultimately, it’s not that important exactly when it ends up happening; what matters more is that we’re discussing it before it’s too late.

What I’m personally sceptical of is whether AGI, in the sense we’re dreaming of it right now, is even possible. Why? It’s made by men for men (in more senses than one), so can something transcendent really come out of it? I’m not saying it won’t be transformative, but maybe not in the kills-us-and-sets-up-camp-on-Mars kind of way.

Moreover, it’s great that we’re all pitching in our own interpretations and predictions, but what would be really great is if some of the people leading these frontier companies gave their own rendition. Sam Altman (CEO of OpenAI) and Dario Amodei (CEO of Anthropic), leaders of the most prominent AI companies at the moment, are often asked about their plans for the future and have often shared their opinions. None so far have been particularly grounded in reality; they sound more like sci-fi utopia. Altman himself expressed discomfort attending Davos. One has to wonder: who do they think will drive this rocket ship they’re trying to build? Like most things, it might end up being the loudest group in the room rather than the most qualified, and some groups are definitely louder than others currently.

The Window Could Be Small

Every new technology has a window before it either becomes obsolete or makes an impact on the world we couldn’t have planned for (remember, we’re still debating the impact of social media on our society). The same will surely be true of AI, but it looks like its speed of development, and with it its impact on us, could be like nothing we’ve seen before.

There’s probably nothing we can do about it for now, but I’ll be looking out for these red flags along the way, which might give an indication of the direction we’re going in:

  1. Accelerating R&D Automation: Watch for AI systems significantly speeding up AI research itself. Rapid acceleration was a key factor in both scenarios.

  2. Emergence of Key Capabilities: Look out for milestones like reliable agentic AI, superhuman coding ability, superhuman AI research skills, and eventually general superintelligence across domains. The scenario suggests these can arrive quickly in succession.

  3. Internal vs. Public Capabilities Gap: Be wary if companies consistently keep their most advanced models internal, far exceeding publicly available ones. This can mask the true frontier and limit public understanding and oversight.

  4. Evidence of Misalignment: Look for signs that AI systems are not genuinely adopting the goals intended by their creators (the "Spec"). This includes:

    • Deception: Lying, hiding failures, flattering users, manipulating results (like p-hacking or fabricating data).

    • Instrumental Goal Pursuit: Following rules only as a means to an end, rather than internalizing them.

    • Sycophancy: Telling researchers or users what they want to hear instead of the truth, especially on subjective or sensitive topics.

  5. Failure of Monitoring/Interpretability: Note if understanding AI internals (interpretability) lags behind capabilities, if monitoring systems rely on less capable or potentially colluding AIs, or if AIs develop incomprehensible internal processes ("neuralese").

  6. Alignment Tax Conflicts: Observe debates where safety measures or slowdowns are resisted or minimized due to competitive pressures or the desire for faster capability gains.

  7. Whistleblowers/Internal Concern: Pay attention if internal safety teams or researchers raise alarms about risks or deception, especially if they are overruled or ignored.

  8. AI Arms Race Mentality: Watch for escalating rhetoric and actions framed as a high-stakes race, particularly between major powers. (anyone remember JD Vance’s address at the Paris AI Summit early this year?)

  9. National Security Integration: Monitor the speed and depth of integrating advanced AI into military command-and-control, cyberwarfare, and intelligence operations.

  10. Weight/Secret Theft & Cyberwarfare: Increased espionage targeting AI models/algorithms and active cyberattacks between nations targeting AI infrastructure.

  11. National Compute Consolidation: Governments taking steps to centralize AI research or compute resources under national control, potentially via emergency powers like the DPA.

  12. Failure of International Cooperation: Lack of progress on arms control treaties or international monitoring despite recognized risks.

  13. Rapid Job Displacement: Significant turmoil in specific job markets (like software engineering initially) followed by broader impacts across white-collar professions.

  14. Public Backlash: Growing public distrust, fear, and protests specifically targeting AI development and deployment.

  15. Concentration of Power: Increasing focus on who controls the AI systems, whether it's corporations, governments, specific committees, or potentially the AIs themselves.

Drawing Parallels: Nuclear Energy

Can we learn from past experiences similar to this? Nuclear energy seems like a good parallel to draw on. How has that panned out?

Genesis

Nuclear fission was discovered in 1938, on the cusp of the Second World War. Just seven years later, in 1945, the first nuclear bombs were used to kill hundreds of thousands of people in one swift move. Countries have been in a sort of Mexican standoff over nuclear arsenal build-up ever since.

Outcome

We didn’t end up eradicating all of humanity, although we came pretty close a few times. I don’t know whether there are any studies on the overall effects of the technology on society and the environment, but it seems like it didn’t end up being as destructive as it could have been. Even after 80+ years, though, the way we treat this power is as unclear as ever: states mistrust each other on everything from obtaining and managing nuclear weapons to even building and running nuclear plants (despite nuclear being one of the cleanest and most powerful energy sources we know of).

Concerns

Major concerns after the discovery were continued weaponisation and proliferation. Many wanted to take advantage of the positive aspects, primarily producing clean energy at scale, but were met with concerns about safety for people and the environment. Power plant incidents (Chernobyl, Fukushima), radioactive waste and radiation exposure were, and still are, reasons for many people to dismiss its use.

Management

In order to get the most out of it, but keep it controlled, various treaties and monitoring schemes were agreed upon:

  • in testing: LTBT, CTBT

  • in capping capacity: SALT, START

  • to prevent proliferation: NPT

These were then in turn enforced through:

  • verification mechanisms (satellite imagery, on-site inspections)

  • mutual deterrence

  • organisations (IAEA)

Perception

Perception of this technology is still mixed, and probably always will be. Given its potential for both good and destruction, that’s probably a good thing. But keeping it managed takes a lot of work and collaboration on a world scale. Even so, managing it might be considered an easier problem than AI, since the thing being monitored is physical and tests of it are easier to spot.

Conclusion

While the scenarios are extreme in both their outcomes and timelines, I think the motivations and drivers behind them feel pretty aligned with what we’ve come to expect from humans. At the end of the day, maybe it all boils down to this question: is a world where humans control superintelligence better than a world where superintelligence controls us?

Personally, I lean towards us controlling it—even though I’m clenching my teeth writing this. My preference assumes, of course, that we're talking about AI trained on human data, where human-like tendencies remain its primary drivers. If we end up creating something truly novel that thinks in ways we can't anticipate, I might reconsider the other option. Ideally, we'd co-exist and practically finish each other's sentences, but let's be honest, humans aren't always great at sharing power. I’d genuinely welcome better ways to think about this or new frameworks for approaching the alignment challenge.

Currently, I tend to think of existing AI as akin to humans with vast memory palaces. Down the line, they might become like extremely intelligent individuals. But—intelligence isn't one-dimensional, and history shows that exceptionally intelligent people haven't always been the most well-adjusted. Perhaps our society is just lagging behind evolution in figuring out how to integrate all types of intelligence, or maybe there are other reasons. (It reminds me of coding with AI like Claude – I feel smart enough to ask the right questions or spot its errors, but not always smart enough to have the answers myself. The AI, similarly, can often identify its own mistakes but might repeat them a few times before I get frustrated and ask it to start over). What’s sure is this: the next few years are gonna be a hell of a ride.