Oscar Wilde once described fox hunting as the unspeakable in full pursuit of the uneatable.

If he were alive today, he might describe the pursuit of artificial general intelligence (AGI) as the incomprehensible in pursuit of the undefinable.

Even if we accept the term, two concerns remain: what happens if we achieve AGI? And what happens if we don’t?

Some West Coast tech leaders have established a political action committee, backed by $100 million, to support “AI-friendly” candidates in the 2026 midterm elections and to roll back regulations they see as unhelpful. They point to the astonishingly rapid adoption of AI-powered chatbots and dismiss pessimists and advocates of slower progress as obstacles to the US in its technological race with China.

In a survey conducted this year by the Association for the Advancement of Artificial Intelligence, 76% of the 475 respondents, mostly academics, said it is unlikely or very unlikely that current methods will lead to AGI. That could be a problem: US stock markets seem to be priced on the opposite conviction.

AI has already achieved remarkable feats. Google DeepMind’s AlphaFold model predicted the structures of more than 200 million proteins, earning its creators a Nobel Prize.