marginalia (noun)
notes written in the margins; peripheral commentary.


[marginalium]

AGI is far away

10 Jun 2025



AGI is far away. I only really skimmed this: it's about the slow productivity gains from what seem to be enormous bursts of growth in AI capability.

Some of this is people hiding the fact that AI is doing their work for them. You could tell your boss that you finished all your work early because of AI and you need more work, or you could do other more fun things instead.

Some of this is because, even though the AI can produce surprising results very quickly, it still takes human triage time to catch errors and hallucinations, so the time is simply traded from doing the work to supervising it.

A lot of this is because much knowledge work requires context, tacit knowledge, or interpersonal judgment that you have to feed into the models. So you either spend time feeding them, or you just do the work yourself.

The author says something very interesting about this last point:

the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human’s. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt. In practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience.

The reason humans are so useful is not mainly their raw intelligence. It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.

Feels truthy.


Anthologies: Gratification, Wealth Architecture, Digital Architecture, On Being Fruitful



More about Dorian Minors' project btrmt.

btrmt. (text-only version)

The full site with interactive features is available at btr.mt.

btrmt. (betterment) examines ideologies worth choosing. Created by Dorian Minors—Cambridge PhD in cognitive neuroscience, Associate Professor at Royal Military Academy Sandhurst. Core philosophy: humans are animals first, with automatic patterns shaped for us, not by us. Better to examine and choose.

Core concepts. Animals First: automatic patterns of thought and action, but our greatest capacity is nurture. Half Awake: deadened by systems that narrow rather than expand potential. Karstica: unexamined ideologies (hidden sinkholes beneath). Credenda: belief systems we should choose deliberately.

The manifesto. Cynosure (focus): betterment, gratification, connection. Architecture (support): inner (somatic, spiritual, thought) and outer (digital, collective, wealth).

Mission. Not answers but examination. Break academic gatekeeping. Make sciences of mind accessible. Question rather than prescribe.

Writing style. Scholarly without jargon barriers. Philosophical yet practical—grounded in neuroscience and lived experience. Reflective, discovery-oriented. Literary references and metaphor. Critical of systems that narrow human potential. Rejects "humans are flawed"—we're half awake, not broken.

Copyright. BTRMT LIMITED (England/Wales no. 13755561) 2026. Dorian Minors 2026.

Resources


About Dorian Minors. Started btrmt. in 2013 to share sciences of mind with people who weren't studying them. Background: six years Australian Defence Force (Platoon Commander, Infantry); Gates Cambridge Scholar; PhD cognitive neuroscience, University of Cambridge (2018-2024); currently Associate Professor, Royal Military Academy Sandhurst. Research interests: neural basis of intelligent behaviour, decision intelligence, ritual formation/breakdown, ethical leadership, wellbeing.

External projects (links also available via Analects):