
Wait, 'Humanity’s Last Exam' is just a standard exam with questions and answers that are hard for *humans*?

Not that it doesn't have value, provided that finding the answers requires a big tree of reasoning steps AND the answers don't end up as training data. But I've long thought that the "last exam", the true test of AGI, is the ability to do what (some) humans can do, such as:

- Teach a course (while incrementally improving teaching techniques based on experimental results and new research papers)

- Build a software product incrementally (with cycles of debugging, unit and integration testing, and refactoring, while learning to use new tools), over a long time period, based on requirements delivered incrementally by a human supervisor

- Learn to convince people to vote for a designated political candidate through one-on-one phone conversations

On the one hand, such capabilities lead to extremely dangerous places, so it might seem "good" that people are focusing on making AIs that "just answer questions" instead. On the other hand, the way we're trying to make AIs that just answer questions is to build huge supercomputers and new categories of specialized AI hardware. If my model is correct, this produces a world with far more compute than we actually need if "all we want" is a billion above-average-human-level AGIs. And while my post to the EA Forum about this was quite unpopular[1], no one tried to argue that I was wrong.

[1] https://forum.effectivealtruism.org/posts/xaamQNSr9AbzCFevd/gpt5-won-t-be-what-kills-us-all
