[marginalium]

Pitfalls of AI for information gathering

28 May 2025



Pitfalls of AI for information gathering. Nominally about OpenAI's Deep Research, but applicable to all such tools:

The third point the author raises:

you find yourself taking intellectual shortcuts. Paul Graham, a Silicon Valley investor, has noted that AI models, by offering to do people’s writing for them, risk making them stupid. “Writing is thinking,” he has said. “In fact there’s a kind of thinking that can only be done by writing.” The same is true for research. For many jobs, researching is thinking: noticing contradictions and gaps in the conventional wisdom. The risk of outsourcing all your research to a supergenius assistant is that you reduce the number of opportunities to have your best ideas.

I think this is just unthoughtful use, though, so it will only affect people who are already taking these shortcuts in other areas.

It all still points, at least in the short term, to the need for human supervision. I used o3 to help with my teardown of Positive Intelligence, and while it sped up the process of clarifying my intuitions, it failed to get past superficial critiques and made a bunch of truthy but ultimately inaccurate claims.

All of which raises the question of how we get non-experts (e.g. kids) to the place where they can supervise, when they'll be using AI to get there.


