Hic Sunt Dracones
When change seems all but inevitable.
2025 was the year AI became inevitable. Does anyone else feel like Sam Altman’s weekly reminders that “AGI is coming” were a lifetime ago? We’ve all become a lot less enthusiastic about what LLMs can and can’t do since then, even if we’re still navigating the societal shift the dumber, shittier version of AM, JARVIS or Skynet has brought on.
Dumb as it is, GPT and its cohort of similar general-purpose agents are changing society, and it seems all but irreversible that we’ll soon be living in a world where data centers are as ubiquitous as cellphone towers or TV antennae.
Jules Verne saw international travel through submarines, aerostatic balloons and railways, but he couldn’t predict the airplane. Similarly, there have been moments in time where the overall shift is so obvious it’d be outright stupid to doubt its coming.
Among the great achievements of mankind, there’s a treasured few that seemed inevitable all along.
It’s easy to think of people a hundred, five hundred, a thousand years ago looking up into the sky and picturing what it would feel like to be among the stars. It’s easy to put ourselves in the shoes of seafaring explorers like the Vikings, Polynesian tribes or Spanish conquistadores, and feel, same as they did, that their role was mandated by heaven.
If you live in a small hamlet at the foot of a mountain, the myth isn’t whether there’s a dragon lying dormant at the summit. It’s who’s gonna be brave enough to go kill it, and how long it will take for them to show up.
The scribbles on the map reading “Here be Dragons” are more of an invitation than a warning. Sometimes, humanity needs a great adversary to find out just how far we can push ourselves.
While we’ve explored the confines of the map by now, the allure of the insurmountable is still one of humanity’s greatest drivers, the spirit of the void that pulls us to the edge and compels us to jump.
Looking at what our world has become, it’s hard to imagine any of this going any other way. WWII, the space race, the internet, AGI: it was all inevitable all along.
Would we ever have not built the atom bomb? Would we ever have not reached the moon? These moments are so far beyond us, beyond our natural context and grounding, that even to this day people question how we were able to pull them off.
We’ve lived in a nuclear-threatened world for 80 years. We know for a fact there are human footprints on the moon. We’ve co-existed with the proof, and the challenge, of knowing the dragons at the edge of the map have been conquered, and yet the world keeps spinning.
Spinning the thread of how one piece of “progress” leads to the next has become many people’s biggest source of meaning as we watch the institutions around us start to show cracks. But it’s this same belief that brings on these great leaps, for better and for worse. You materialize what you fear, or desire, just by naming it and giving it shape. Everyone should understand this on a deep personal level: before trying to halt something, try shaping it.
No matter the marketing stunts, I don’t believe we’ll ever create “AGI” as we call it.
We have the logistical capability to build a network of ML models so complex they’re indistinguishable from a human mind, yes. You could 100% cover the entire surface of the planet in processors, “AM”-style, and watch the most sophisticated computational model ever emerge.
But I believe what we’ll find along the way will be much more groundbreaking, so much so it might even break us.
To make true AI is to understand the human mind inside and out. And as Emerson Pugh once said:
“If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”
— Emerson M. Pugh, physicist (c. 1938)
Why do tech people want to bring on AGI in the first place?
There’s no money in solving everyone’s problems. There’s no money in making everyone’s lives better. That’s very likely why we won’t ever reach AGI.
They’ll keep saying it’s close while clawing at power and re-framing institutions to their benefit: public resources pooled towards data center infrastructure, government surveillance and mass data harvesting, advertising that preys on the secrets you wouldn’t even share with your closest friend. All of it sycophantically delivered to you in a friendly tone.
Machine learning, separate from the “AI” moniker, *is* going to be useful in the situations where a human would do it better but isn’t available.
Being the teacher for a remote school. Bringing education to underserved communities that would otherwise have no access to it. Helping people pre-screen their medical conditions or emergencies when their governments or institutions can’t reach them.
AI is useful when it’s locally hosted, publicly available, not monetized, and keeps human life first. That is the main concern, and the main double-edged sword, of AI development today.
Companies are struggling to find a way to monetize it. People hate the way it’s being used and the way it’s being maintained. But that’s because we’re coming at it from the wrong angle. AI is a very expensive technology to produce, but, like many other technologies, it’s only useful when it’s widely distributed and free of baggage.
When we conquered a new world, like we’ll eventually conquer the body and mind if our current shift is to be believed, we found other people already living there.
Problem is, the kind of people who set out into the edges of the map, be it of their own volition or under duress, aren’t the kind to sing kumbaya holding hands at sunset. We’ve made the same mistake several times before. We set out into the unknown, not quite understanding what we’ll find at the other end; so far, so good, that’s adventurous and visionary. But why is it that the second we grasp the implications of the endeavor, our greed, fear and anger show through and corrupt what shouldn’t have to be that way?
Why did we place dragons, sirens and monsters at the edge of the map? Why not bunnies, friendly natives, and greener pastures (as we ended up finding)? It’s because you don’t get to murder a bunny guilt-free.
You don’t mourn a dragon; you did what you had to, to survive, to come out on top.
Our future holds a lot of dragons. Space conquest, biological immortality, quantum communication of information across unthinkable distances within a heartbeat.
But same as with every dragon we’ve conquered before, we’ll realize the end goal was nothing like what we envisioned. We will have to live with the implications of whatever we surface.
We could see the frontier as a place to nurture and steward; but that means you have to go slower, be careful, and share the spoils. Nobody’s got time for that. There’s a race going on, and we sure as hell won’t be the ones to get beaten.
The more we approach godhood, the bigger the weight on our minds and the more numerous the skeletons in our collective closet.




I like the radical optimism in this piece. I don't think we will reach AGI without some kind of new breakthrough I'm not informed enough to see. And I think the pursuit of it is making a very small number of people a very large amount of money without considering the ethics and implications of AI.
(See also, every app trying to ram new AI features down our collective throats).