Programming & AI
Let's talk about programming languages and AI, and why talking to AI requires programming languages.
Talking to machines using code
We talk to machines in code -- which we call programming languages. Here are some of the earliest programming languages.
1843 – Ada Lovelace: tabulation algorithm for Babbage's Analytical Engine.
1936 – Alonzo Church: lambda calculus, communication with a non-existent abstract machine.
1937–1945 – Alan Turing: formal Turing machine model and later practical codebreaking machines.
1945 – John von Neumann: EDVAC design, stored-program architecture.
1957 – John Backus: FORTRAN, one of the first high-level programming languages.
1958 – John McCarthy: Lisp, symbolic communication with machines.
Inventing new languages has never stopped:
2020s – Web development: Languages like TypeScript dominate modern web frameworks with strong typing added to JavaScript.
2010s–2020s – Systems programming: Rust introduces memory safety without garbage collection.
2020s – Concurrent programming: Kotlin with coroutines provides structured concurrency and simplified async programming.
The year 2025 has been the dawn of a new family of Turing machines that process input using Large Language Models (LLMs). Guess what? Humans are eager to communicate with these new machines, which we now call AI.
Many great lessons have been learned from our efforts to talk to Turing machines over the last 100 years. As we start the conversation with AIs, these lessons will be absolutely invaluable for building more efficient, more powerful, and more reliable dialogues.
Programming from 1960-2025: type systems, symbols, and refactoring
Programming languages have been developed and revised so that we can control the behaviour of the UNIVERSAL TURING MACHINE while establishing a (very very) high level of trust. Let's look at the elements of programming (omitting many other interesting elements of programming languages).
We will use a concrete programming language (Kotlin), but the syntax doesn't matter too much. What I want to illustrate is the effort software engineers have spent to communicate with the Turing machine.
Type checking
We want to control how data is stored (in memory, on disk, or somewhere in the cloud) by specifying its format, meaning, and behaviour. This is done by a type system. We can write things like:
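A minimal sketch of such a declaration (the names `Result`, `Success`, and `Error` are illustrative; the variance annotation `out T` is what makes the wrapper provide-only):

```kotlin
// A wrapper type: a Result<T> is either a Success carrying a T,
// or an Error carrying a message.
sealed class Result<out T> {
    // `out T` makes Success covariant: it can only produce a T for
    // your program, never consume one from it.
    data class Success<out T>(val data: T) : Result<T>()
    data class Error(val message: String) : Result<Nothing>()
}
```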
This says that the `Result` type is a wrapper around some type `T`, wrapping it as either `Success` or `Error`. But there is something more: the `Result.Success` wrapper can only provide data to your program; it cannot accept data from your program.
Each symbol (outline, section, answer) is bound to a precise value in its own scope. Symbol binding removes ambiguity: we know exactly what data object each symbol refers to (which, as we will see, is not the case in prompt engineering).
That's cool.
Symbol bindings
Data floats around in memory, on disk, or even in the cloud. We can put labels on data with something we call symbols. They are also more commonly known as variables.
We can reuse the same symbol to label different data simultaneously!!! This is done by creating scopes.
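Here is an illustrative sketch: the same symbol `answer` labels different data in different scopes (a function parameter, a local block, and an enclosing function):

```kotlin
// `answer` bound in this function's own scope: it labels a String.
fun shout(answer: String): String = answer.uppercase()

fun main() {
    val answer = "forty-two"    // `answer` bound in main's scope
    run {
        val answer = 42         // a fresh `answer`, shadowing the outer one
        println(answer + 1)     // refers to the Int in this block: 43
    }
    println(shout(answer))      // refers to the String in main: FORTY-TWO
}
```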
In this code, there are several instances of the `answer` symbol, each used in a different scope, and thus we can reason without any doubt about which data object each `answer` refers to during the execution of the program. (Though we wouldn't know the exact value of these data objects.)
Functions
Computation is all about invoking functions. Functions are part of the type system:
We can create data which are functions.
Then, we can bind it to a symbol, and use it to do bigger and better things.
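A sketch of both steps -- a function literal created as data, bound to a symbol, and then handed to a higher-order function (the names are illustrative):

```kotlin
// A function is just data with a function type: (Int) -> Int.
// Here we create it and bind it to the symbol `double`.
val double: (Int) -> Int = { x -> 2 * x }

// Because functions are data, other functions can take them as arguments.
fun applyTwice(x: Int, f: (Int) -> Int): Int = f(f(x))

fun main() {
    println(double(21))              // 42
    println(applyTwice(10, double))  // 40
}
```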
Refactoring
Refactoring means reorganizing symbols, types, and function signatures so that the same external behavior is preserved, but the internal representation is clearer and easier to maintain. For example, consider the running example with Result and askAIQuestion.
Before refactoring, we have some redundancy.
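A hedged reconstruction of the "before" code (here `askAIQuestion` is a stand-in stub rather than a real LLM call; the duplicated `when` blocks are the redundancy in question):

```kotlin
sealed class Result<out T> {
    data class Success<out T>(val data: T) : Result<T>()
    data class Error(val message: String) : Result<Nothing>()
}

// Stub standing in for a real LLM call.
fun askAIQuestion(question: String): Result<String> =
    Result.Success("Answer to: $question")

fun main() {
    // The same when-block appears twice -- this is the redundancy.
    when (val answer = askAIQuestion("What is refactoring?")) {
        is Result.Success -> println(answer.data)
        is Result.Error -> println("Failed: ${answer.message}")
    }
    when (val answer = askAIQuestion("Why do we refactor?")) {
        is Result.Success -> println(answer.data)
        is Result.Error -> println("Failed: ${answer.message}")
    }
}
```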
We can remove the duplicates by refactoring.
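A sketch of the refactored version, with the duplicated when-block factored into a single symbol (again with a stub `askAIQuestion`):

```kotlin
sealed class Result<out T> {
    data class Success<out T>(val data: T) : Result<T>()
    data class Error(val message: String) : Result<Nothing>()
}

// Stub standing in for a real LLM call.
fun askAIQuestion(question: String): Result<String> =
    Result.Success("Answer to: $question")

// The duplicated when-block, factored into one symbol with the
// function signature (Result<String>) -> Unit.
val handleAnswer: (Result<String>) -> Unit = { answer ->
    when (answer) {
        is Result.Success -> println(answer.data)
        is Result.Error -> println("Failed: ${answer.message}")
    }
}

fun main() {
    handleAnswer(askAIQuestion("What is refactoring?"))
    handleAnswer(askAIQuestion("Why do we refactor?"))
}
```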
Here we introduced a new symbol handleAnswer with a function signature (Result<String>) -> Unit. The external behavior is unchanged, but the readability improves and duplicated code is removed.
Software development in a nutshell (1960 - 2025)
Enter the LLM: 2025 - ???
At first glance, talking to LLMs seems to be easier than coding.
WRONG.
I bet every LLM-based application developer would agree that expressing your intentions, needs, and constraints to an LLM via natural-language prompt engineering is a painful struggle that reminds us WHY PROGRAMMING LANGUAGES were invented.
Types for AI?
Modern LLMs are trained to respond in structured data. This is usually achieved through fine-tuning and reinforcement learning, as well as by giving the model examples of JSON, XML, or other formats during training and prompting. Despite this, the output is unreliable: models often drift from the requested schema, omit required fields, or generate inconsistent nesting. The unreliability stems from the fact that the model does not execute a type system or parser internally, but only predicts text patterns, no matter how massive the training corpus may be.
In order to improve accuracy, prompt engineers add further verbose explanations as part of the AI prompt -- just to get the right type of data! We are so desperate that we would have prompts like this:
You are a helpful AI that will give me output like this:
{
"name": "albert",
"hobbies: [ "ski", "tennis", "travel" ]
}
Symbol bindings for AI?
How many times have prompt engineers written prompts like this:
Get the outline of the first PDF document.
Check which section is the most relevant to the second document.
Then extract the content on this section in the first document.
Wouldn't it be that much BETTER if we could write it like this?
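Something along these lines -- a sketch in which `getOutline`, `mostRelevantSection`, and `extractContent` are hypothetical helpers (stubbed here), and each step of the prompt is bound to an unambiguous symbol:

```kotlin
// Hypothetical helpers standing in for real document-processing calls.
fun getOutline(doc: String): List<String> =
    listOf("Intro", "Methods", "Results")

fun mostRelevantSection(outline: List<String>, doc: String): String =
    outline.last()  // stub: pretend the last section is the most relevant

fun extractContent(doc: String, section: String): String =
    "Content of $section in $doc"

fun main() {
    // No "the first document" / "this section" pronoun guessing:
    // every intermediate result has a name in scope.
    val outline = getOutline("doc1.pdf")
    val section = mostRelevantSection(outline, "doc2.pdf")
    val content = extractContent("doc1.pdf", section)
    println(content)
}
```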
Functions for AI?
I am actually really excited by two features of modern-day LLMs: function calling and the Model Context Protocol (MCP).
Both are ways AI can delegate computation to external functions. This allows AI to lean on classical computational models. By doing so, AI systems can integrate with decades of existing technology — from databases to compilers to distributed systems — all with proven guarantees. This integration significantly reduces the risk of communication errors with AI, since type systems, symbol binding, and explicit function signatures enforce clarity and correctness that natural language alone cannot provide.
But there is something about functions in programming languages that is lost in MCP and function calling -- composability. Functions can be chained together, gaining ever-increasing complexity. At least for now, there is no obvious equivalent scaling feature in the LLM world.
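What composability buys us can be shown in a few lines. This is an illustrative sketch; `andThen` is a hypothetical helper, since Kotlin has no built-in composition operator for function types:

```kotlin
// Small functions...
val trim: (String) -> String = { it.trim() }
val shout: (String) -> String = { it.uppercase() }
val exclaim: (String) -> String = { "$it!" }

// ...and a hypothetical composition helper that chains them.
infix fun <A, B, C> ((A) -> B).andThen(g: (B) -> C): (A) -> C =
    { a -> g(this(a)) }

fun main() {
    // Three small functions composed into a bigger one.
    val pipeline = trim andThen shout andThen exclaim
    println(pipeline("  hello  "))  // HELLO!
}
```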
Refactoring for AI?
Ouch... Good luck with refactoring your prompts. Programming tools (linters, IDEs) have been developed to help with refactoring of types and functions. But so far, nothing to my knowledge helps with prompt refactoring.
In conclusion
The LLM is definitely a new generation of the Universal Turing Machine. It consumes symbols on a tape just like a Turing machine, but its control unit is a neural network rather than a finite state machine. This calls for an entirely new generation of programming languages.
Prompt engineering is just a baby step on the long road towards a reliable, efficient, and trustworthy programming language that will facilitate human-AI communication. Along the way, we mustn't forget the wonderful inventions and invaluable lessons we have learned from the hundreds of programming languages we have been using.