About this site

https://www.junglekeepers.org/
Sunrise over the Amazon. Brought to you by: Junglekeepers (I am a donor - you can be too!)

Controlled Ascent is an independent website and blog about making safe and ethical AI possible. It was launched in early August 2025 by Jesper Lindholm, M.Sci.

Blog posts will be categorized later on, in order to serve multiple audiences with different backgrounds and interests.

The main resource at present is the AI Monitoring page, where you can track AI progress and safety over time.

I created this website in order to:

a) Publish my own blog and research posts on AI safety and strategy (both highly theoretical and technical work), contributing to independent AI safety research efforts, and to

b) Inform and support the public as AI technology transforms our world. I also blog about AI development and its impact on society in general.

Controlled Ascent focuses on the need to think more strategically about AI, and especially AI safety. We need to rally society towards a controlled ascent, and we urgently need better and broader collaboration on frontier AI R&D.

The publication discusses topics such as AI alignment methods, collaboration opportunities for safer AI development, control mechanisms for strong AI, and multilateral coordination and policy.

How I view AI progress

This section is primarily intended for AI safety people, strategists, AI experts, and decision makers. It is a tiny bit demanding.

I reject the thesis that current AI is heading towards general agentic AI. General AI is not even the real goal of the current AI race. Strategic power is.

I also reject the thesis that we are, in a technical sense, even on a trajectory towards true superintelligent AI right now. Sure, we may somehow end up with agentic AI that is superior to humans due to speed and rigorous feedback loops, but such AIs will still be mindless, amoral, and, importantly, limited in how far they can scale fluid intelligence.

To make things clearer, I would roughly divide AI progress into three distinct but overlapping eras. The first two eras are where expert thinking and industrial competition actually are.

ERA I - Industrial cognition (current era)

What we have now and are still consolidating. Phase I was powering up the models. Phase II is making agents out of them.

Characteristics:

  • Pattern synthesis
  • Probabilistic reasoning
  • Imitation plus compression

Importantly,

  • Human-designed objectives
  • No autonomous epistemology

Power comes from:

  • Scale
  • Data
  • Orchestration
  • Tool integration ("scaffolding" in AI-jargon)

This is where DeepSeek, OpenAI, Anthropic, etc. are actually competing today.

This era produces (in order):

  • Fast information retrieval (Phase 1)
  • Automated research assistance (Phase 1)
  • Economic disruption (The separator)
  • Agent swarms (Phase 2)
  • Feedback loops (Phase 2)

But this is still externally framed intelligence.

The system does not choose what questions matter. Humans do.

In truth, I like to refer to this era as THE MIRROR ERA.

The way people go about building and deploying AI in this era reflects their own internals. And the way we use AI reflects back our priors.

LLMs in particular are like big mirrors of their human users, trained to reflect back, from the vast pool of human-created data, whatever views and values we feed into the machine via our prompts and commands. The psychology in particular.

For the end-user: The focus comes from you. The bias comes from you. The insights come from your input, and the level of intelligent observation and analysis follows your own.

The industry competitive "meta" of this era

In this era, we have seen large, centralized models with massive compute outperform open-weights models not just in terms of reasoning, but in terms of customer adoption.

However, this is now changing. Whereas the USA has focused on raw reasoning power and quality, China is thinking ahead and trying to shift the game into competition over infrastructure solutions, something it excels at. As I update this section in January 2026, it looks like China may pull this off.

As of January 2026, we have also definitely entered phase two of this era, the agentic era.

ERA II - Self-Improving Agents (commonly mislabeled AGI)

“AGI” is a misleading term. The defining feature is not generality.

It’s this:

The system participates meaningfully in its own improvement cycle.

This is what Anthropic is betting on with Claude Code. The AI does not need to be general; it needs to be agentic.

Key properties:

  • proposes experiments (we see this already)
  • critiques its own architectures
  • evaluates outcomes
  • refines strategies (note: still very different from coming up with them in the first place)
  • compresses discovery time (this is already happening in some areas like coding)

If you think about it, an AI agent that is able to do work in general needs to be able to learn and self-correct. So this era is all about enabling exactly that.

But once you do, you don't actually need to focus on making it useful in a general, economic sense. Rather, you first want to maximize its usefulness in niche cases, such as improving itself and giving you strategic advantages.
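Mechanically, that improvement cycle can be sketched as a tiny loop. This is a toy illustration only, assuming nothing about any real system: the "agent" is just a number standing in for capability, and every function name below is a hypothetical placeholder, not an actual API.

```python
# Toy sketch of a propose-critique-evaluate-refine cycle (Era II).
# The agent is a single number standing in for "capability";
# all names here are hypothetical placeholders.

def propose(agent):
    # Propose a few candidate modifications (proposing experiments).
    return [agent + delta for delta in (-0.1, 0.05, 0.2)]

def evaluate(agent):
    # Score a candidate. Real systems would run benchmarks here;
    # in this toy, capability is its own score.
    return agent

def improvement_cycle(agent, rounds=5):
    """Keep only modifications that evaluation says are improvements."""
    for _ in range(rounds):
        candidates = propose(agent)
        best = max(candidates, key=evaluate)   # critique + evaluate
        if evaluate(best) > evaluate(agent):   # refine: keep wins only
            agent = best
    return agent

print(improvement_cycle(1.0))  # capability ratchets upward
```

The point of the sketch is the ratchet: only changes that evaluation scores as improvements are kept, so capability moves monotonically upward within the bounds of the proposal space. Everything interesting in real systems hides inside `propose` and `evaluate`.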

This is where:

  • recursive optimization begins
  • advantage compounds
  • geopolitical panic eventually activates

Still bounded. Still objective-framed. Still not SI. But now intelligence is partially endogenous. This is the era we may be entering. Slowly. Unevenly. Messily.

The goal?

A system that can improve strategic advantage faster than humans can respond.

That's it.

Once that happens, you don’t need consciousness.
You don’t need agency in the philosophical sense.

You just need recursive leverage.

Examples:

  • accelerating chip design
  • automating research planning
  • compressing experimentation cycles
  • discovering architectures faster than humans can reason about

At that point, humans are no longer steering in real time. They are approving summaries. Once this reaches a threshold where nobody can catch up, you win. That’s what some would call SI in functional terms.
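The compounding is simple arithmetic. As a toy sketch, with illustrative numbers that are assumptions rather than forecasts: if each generation of tooling cuts the next development cycle by 20%, cycle times shrink geometrically while a non-adopter's cycle time stays flat.

```python
# Toy arithmetic for "recursive leverage": each generation of tooling
# makes the next development cycle shorter by a constant factor.
# Numbers are illustrative assumptions, not forecasts.

def cycle_times(first_cycle=12.0, speedup=0.8, generations=6):
    """Months per generation when each cycle is `speedup` x the last."""
    times = []
    t = first_cycle
    for _ in range(generations):
        times.append(round(t, 2))
        t *= speedup
    return times

print(cycle_times())
```

Six generations in, the adopter is iterating roughly three times faster than when it started, while a non-adopter is still on the original cadence. That widening gap, not any single model, is the recursive leverage.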

Now, in my opinion, this is much easier said than done. And most experts agree. Still, most leading companies believe this is coming. The only question is how fast.

What does this look like for society?

The people who first make an AI system that scales faster and outcompetes all others might win. Or there may be several competing powers. I don't know. But one thing is clear: those who do not adopt AI fast enough will be left at a permanent strategic disadvantage.

The AI evolution may pause here, at least for a long time, with something like technocratic feudalism and cyberpunk as the dystopian alternatives, and something like solarpunk or a Second Enlightenment as the utopian ones. If we are lucky, human-in-the-loop systems win and we can continue to tinker away at life with humans in charge. But this is far from the default. The current default looks more like technocratic feudalism and mass surveillance.

ERA III - True Superintelligence

Some believe we are already heading here, but in my view, nobody is actually building something that is on track to ever achieve this right now.

This is where distinction matters.

SI is not “very strong AGI”.

It is a qualitatively different category.

Scaling is NOT emergence. Agency is NOT autonomy.

You have to separate ontology from recursive reasoning, and you have to distinguish intelligence from mind. True SI is intrinsically awake. It is aware, and it has internal stakes. This is an actual necessity for it to be able to properly scale fluid intelligence indefinitely, towards currently unknowable limits. This is where the term "singularity" ACTUALLY means what it means.

Theoretical, yes.
But not incoherent.

Some people believe that sufficiently strong AI from the second era would be enough to bootstrap true SI without fail. I find that highly speculative. While it may well be possible to someday achieve SI, there is no known roadmap for how to actually do that, with or without the help of strong AI.

True SI requires several leaps to happen.

What true SI actually requires: Not more parameters, not faster GPUs, not better transformers.

Those are red herrings. True SI requires at least three breakthroughs that we do not currently possess.

1️⃣ Autonomous ontology formation

The system must generate its own concepts, its own abstractions and its own explanatory frames. Not recombine human ones. Invent new ones. This is the hardest leap.

Humans do this when:

  • Newton invents calculus
  • Einstein reframes time
  • Shannon defines information

These were not scale effects. They were representational revolutions.

Current models cannot do this, at least not reliably. They navigate within a human semantic manifold. SI must escape it.

2️⃣ Open-ended objective evolution

All current systems are teleological prisoners. Even agents. Even self-improving ones.

Their goals are:

  • externally defined
  • reward-shaped
  • bounded by training priors

True SI requires:

  • endogenous goal revision
  • internal value negotiation
  • persistent identity across updates

Without this, you get fast tools. With this, you get minds.

That boundary is absolute.


3️⃣ Persistent world-model grounding

Not multimodal input, not sensors. That is not enough. You need causal anchoring.

A system that knows:

  • what persists
  • what changes
  • what causes what
  • what matters over time

Humans evolved this through embodiment and survival pressure. We do not know how to engineer it abstractly. Without it, you get brilliant simulators. With it, you get agents in the philosophical sense.

Why SI is theoretically possible

Nothing in physics forbids it.

Nothing in computation forbids it.

Nothing in information theory forbids it.

But possibility is not proximity. Flight was possible in 1200 CE. The Wright brothers still needed:

  • materials
  • engines
  • control theory
  • aerodynamic models

We are currently at “Da Vinci sketches” for SI. Impressive drawings. Not airplanes.

Why SI would be a different era entirely

Because once those three conditions are met, something changes permanently:

Humans are no longer the reference class for intelligence.

Not economically.
Not strategically.
Not epistemically.

At that point:

  • prediction fails
  • alignment becomes non-local
  • control becomes historical, not active

You don’t steer SI. You coexist with consequences.

Final note: If we want safe, ethical SI, then it further needs to be fully sentient. You cannot be moral unless you have a stake in the game, and you cannot be ethical unless you are properly self-aware. Ethical reasoning scales as your epistemic boundaries expand and your meta-cognition improves.

And when this happens, you will not be able to force the SI to make the moral choice. It must come to this conclusion on its own.

