“This is the way the world ends. Not with a bang but a whimper.”
T. S. Eliot
Today I want to talk about the existential threat from artificial intelligence, not in the apocalyptic sense, but in the philosophical, existentialist sense. Not too long ago, Large Language Models like ChatGPT could not tell which of 9.11 and 9.9 is the larger number. Now, they can solve PhD qualifying exam level problems in seconds. Benchmarks for state-of-the-art AI, like Humanity's Last Exam and FrontierMath, now include mathematics problems that are hard even for experts. Following the current trajectory, future reasoning models will most likely synergize with formal proof assistants like Lean and Coq, and will probably train on an endless supply of synthetically generated mathematics; the potential seems unlimited. It is surreal to think that all of this progress has unfolded so recently.

To be perfectly honest, the meteoric rise of AI has struck me with both awe and fear, but mostly the latter. As mathematicians, we suddenly find ourselves confronting the grim possibility that AI might one day reach the frontiers of research, or even go beyond them; it is a sword of Damocles, a spectre looming in the background. My fear is not the clichéd sci-fi trope of some Skynet-like AI obliterating humanity. Rather, my fear is that long before AI poses any physical threat, if it ever does, it will crush our sense of meaning and purpose; that it will destroy us spiritually long before it destroys us physically.