Artificial Intelligence: Finding God by the Numbers
Has AI stumbled across a new argument for God's existence?
In recent years, artificial intelligence has made huge leaps forward, not just in understanding human language, but also in recognising images. One of the ways it does this is by learning patterns. For example, when an AI reads billions of sentences, it starts to notice which words often appear together, which ones behave similarly, and how they tend to relate. Over time, it builds an internal “map” of language—almost like a mental dictionary—where similar words cluster near each other. Words like cat and dog, or London and Paris, naturally group together because they appear in similar contexts. This all happens beneath the surface, using complex maths, but the result is a surprisingly accurate sense of meaning.1
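The idea of words clustering by context can be sketched in a few lines of code. The vectors below are invented purely for illustration (real models learn hundreds of dimensions from billions of sentences), but the distance measure, cosine similarity, is the one real systems actually use:

```python
import numpy as np

# Toy 4-dimensional "embeddings". These numbers are made up to
# illustrate the idea; they are not taken from any real model.
vectors = {
    "cat":    np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":    np.array([0.8, 0.9, 0.2, 0.1]),
    "london": np.array([0.1, 0.0, 0.9, 0.8]),
    "paris":  np.array([0.0, 0.1, 0.8, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 = similar, near 0.0 = unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["cat"], vectors["dog"]))      # high: similar contexts
print(cosine(vectors["cat"], vectors["london"]))   # low: unrelated contexts
```

In a trained model the same pattern holds: *cat* and *dog* score far closer to each other than either does to *London*, because that is how they behave in text.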
What no one expected was what happened next, and it turns out the implications go far beyond artificial intelligence.
Once an AI language model is trained, this vector space isn’t random noise; it has a remarkable geometry. Researchers have found that the space contains linear subspaces corresponding to grammar (e.g. verb tense, gender, number); curved manifolds representing broader semantic fields like animals, colours, and emotions; and clusters and gradients that can be navigated smoothly: you can interpolate between happy and sad and get meaningful intermediate points.
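That interpolation idea can be shown with a toy sketch. The 2-D vectors below are invented (real embeddings have hundreds of dimensions), but the straight-line walk between two concepts is exactly how one navigates such a space:

```python
import numpy as np

# Invented 2-D vectors for a handful of emotion words, laid out
# along one axis for illustration; not taken from a real model.
words = {
    "happy":   np.array([ 1.0, 0.1]),
    "content": np.array([ 0.5, 0.1]),
    "neutral": np.array([ 0.0, 0.1]),
    "gloomy":  np.array([-0.5, 0.1]),
    "sad":     np.array([-1.0, 0.1]),
}

def nearest(v):
    """Return the word whose vector lies closest to point v."""
    return min(words, key=lambda w: np.linalg.norm(words[w] - v))

# Walk in a straight line from "happy" to "sad" and see which
# word is nearest at each step.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    point = (1 - t) * words["happy"] + t * words["sad"]
    print(t, nearest(point))
```

With these toy vectors the walk passes through *content*, *neutral*, and *gloomy* on the way from *happy* to *sad*: the intermediate points are themselves meaningful.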
Now here’s where things get strange, and deeply intriguing.
If you take a different AI—one that hasn’t read a single word, but has only been trained to look at millions of untagged pictures—it also builds an internal map. It learns to recognise patterns in images: shapes, colours, textures, and eventually objects. Without being told what anything is, it notices that certain kinds of images are similar: all the different pictures of trees, say, or bicycles, or faces. That part isn’t too surprising either. But here is the striking part: when researchers compare the language map (built from words) and the image map (built from pictures), the two maps, trained completely separately, often line up. The “idea” of a dog in the image-trained AI ends up in roughly the same part of the conceptual map as “dog” in the language-trained AI.2
This matters because it’s unexpected and it’s potentially pointing to something deeper than just clever programming. For if language were just a human invention, a set of made-up labels we slap onto the world, then there’s no reason to think that an image-trained AI would carve up reality in the same way. The two systems could have ended up with very different mental maps. But instead, they agree. Even systems with no human-style consciousness, no biology, and no shared training, seem to be converging on the same structures. That suggests that meaning isn’t just something we impose, but rather it’s something that’s already there, waiting to be discovered—whether by people, or by machines.
Even more surprising is how this meaning is structured. These AI maps use mathematics, specifically vector spaces, which are like huge coordinate systems. Every word or image becomes a point in this multi-dimensional space,3 and the AI can measure how close the points are, or even do maths on the concepts. To take one famous example: take an AI’s vector for king, subtract man, then add woman, and you land astonishingly close to queen. That’s not just a gimmick: it reveals that words and ideas behave like objects in space, and that meaning has a kind of geometry to it. It’s a bit like discovering you can solve a riddle using Pythagoras’ Theorem: it shouldn’t work, but it does. It’s as if meaning somehow follows invisible mathematical laws.
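Here is the famous analogy as arithmetic. The vectors are hand-built so that one axis stands for “royalty” and one for “gender”; real models learn such directions from text alone, which is what makes the result startling there:

```python
import numpy as np

# Hand-built 2-D vectors: axis 0 ≈ "royalty", axis 1 ≈ "gender".
# Invented for illustration; real embeddings learn these directions.
emb = {
    "king":  np.array([1.0,  1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
}

# king - man + woman: strip the male direction, add the female one.
result = emb["king"] - emb["man"] + emb["woman"]

# Find the word whose vector lies closest to the result.
nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - result))
print(nearest)  # queen
```

The subtraction removes what *king* and *man* share, and the addition supplies what *woman* contributes, so the result lands on *queen*.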
So here is the question. Why should meaning—something so human, so slippery and abstract—be something we can map with maths? Why should analogies, concepts, and categories follow the rules of geometry? This is the kind of thing that should make us pause. The universe, it seems, is not just material but intelligible. It’s not just stuff; it’s structured. And even machines, deep learning algorithms with no soul and no self-awareness, can discover that structure.
So What’s Going On?
How do we explain this? Well, there are a few options on the table.
Constructivism says that meaning is just something humans made up. But if that were true, then different cultures, different modalities (language vs. images), and different architectures (e.g. modern transformer-based LLMs vs. older embedding systems) should organise meaning in totally different ways. Yet we find remarkable convergence between text-trained and image-trained systems, which undermines the idea that meaning is purely invented. Furthermore, even when trained on entirely different human languages, these systems’ semantic maps can be aligned with a simple mathematical rotation, which suggests that meaning’s geometry transcends culture.
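The “simple mathematical rotation” mentioned above is typically found by orthogonal Procrustes analysis. A minimal sketch with synthetic data: we rotate one toy “embedding space” to create a second one (standing in for a map learned from another language), then recover the rotation from the two point sets alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Space A: five "concepts" in three dimensions (synthetic data).
A = rng.normal(size=(5, 3))

# Space B: the same geometry, but rotated, as if the same concepts
# had been learned in a different language or system.
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
B = A @ R_true

# Orthogonal Procrustes: the rotation best mapping A onto B is
# R = U @ Vt, where U, S, Vt is the SVD of A.T @ B.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

print(np.allclose(A @ R, B))  # True: the two spaces line up
```

Researchers do the same with real embedding spaces: if a single rotation aligns them well, the two systems share one underlying geometry.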
Another option is evolutionary naturalism, which might suggest that our brains evolved to carve up reality in useful ways, and that language reflects those categories. But that doesn’t explain why machines, with no evolutionary history and trained in totally different ways, arrive at the same categories. Even if evolution explains why humans carve up reality as we do, it cannot explain why machines (with no ancestors and no instincts) do likewise. It seems that meaning is something external to us.
At this point, some secular thinkers leap for emergence as their last redoubt. This is the idea that meaning just “emerges” when you have enough complexity. To be fair, emergence is a serious idea: complexity does sometimes produce genuinely novel properties that couldn’t be predicted from the parts alone. But in this case, it’s a conjuring trick with semantics, not an explanation. Why should complex systems just happen to discover shared conceptual structure? That just raises the deeper question: why does meaning emerge at all? And why is it mathematical?
If the materialistic options fail at this point, what about Platonism? This ancient philosophy would say these concepts are eternal abstract truths: that “dog-ness” or “table-ness” or “beauty” live in some transcendent realm of forms. But Platonism doesn’t explain where those forms come from, or why we (or machines!) should be able to access them. Perhaps the best place to locate such universal forms is in some kind of universal Mind. This was the very move Augustine made when he located the Platonic forms in the mind of God.
Which leads us nicely to Theism. If there is a Creator who designed both the universe and human minds, then it makes sense that the world is filled with discoverable structure and that our minds (and even our machines) can tap into it. Christians believe that the universe was made through the Logos, the eternal Word of God (see John 1:1). In Greek philosophy, Logos meant not just ‘word’ but rational principle: the intelligible structure woven into the fabric of reality. This is why Augustine located the Platonic forms in the mind of God: universal meaning needs a universal Mind to ground it. If that is true, then the convergence we see in these AI systems isn’t a coincidence or a curiosity. It is exactly what we should expect in a universe created by a rational, personal God: a universe where meaning isn’t invented but discovered, whether by a poet, a philosopher, or a deep learning algorithm.
THE MIND BEHIND THE MEANING
It has long been observed that there is a mathematical structure underpinning physics, as the Hungarian theoretical physicist Eugene Wigner pointed out in a famous essay.4 And many have argued that this is a powerful argument for God: for if numbers and maths are just human inventions, why should reality pay any heed to them? But now AIs and deep learning algorithms are discovering a similar mathematical structure underpinning language and meaning: the very conceptual framework through which we make sense of nature and truth.
Either we are witnessing a cosmic coincidence of impossible precision—or mathematics is, once again, revealing that the universe has a Mind behind it. There’s something ironic, or perhaps beautiful, in all this: for as we have tried to teach machines to understand the written word, they have accidentally stumbled across the Divine Word.
If you found this piece helpful, thought-provoking, curiosity-sparking or in other ways useful, please consider subscribing. It helps me keep my writing free for those who can’t afford to make a contribution. Thank you.
For an introduction to the maths behind machine learning, see Anil Ananthaswamy, Why Machines Learn: The Elegant Maths Behind Modern AI (London: Allen Lane, 2024).
Santiago Acevedo et al., ‘A Quantitative Analysis of Semantic Information in Deep Representations of Text and Images’, arXiv 2505.17101 (preprint, 2025). Available online at: https://arxiv.org/abs/2505.17101
It is not uncommon for deep learning systems to use vector spaces of 768, 1024, or even more than 4,096 dimensions.
Eugene Wigner, ‘The Unreasonable Effectiveness of Mathematics in the Natural Sciences’, Communications on Pure and Applied Mathematics 13.1 (1960) 1-14. Available online at: https://links.uwaterloo.ca/amath731docs/wigner_unreasonable_effectiveness_1960.pdf