When I started hearing about AI for programming, I was skeptical. Of course I was. I’ve been at this a long time - since I was a kid in the 70s. Though I didn’t become a professional programmer until later in life, I first saw AI when I was probably nine years old, when my Dad took us to dinner at the house of his former boss, who was still in the US Air Force. My dad had been stationed at McChord Air Force Base fixing airplanes when he met Larry, the man who showed me my first computer. I had no idea it was a computer at the time. It was a box with a black screen. You could type questions and it would give you an answer.
My dad was amazed by this box. I didn’t really understand it all. That was basically an early form of AI. Although my Dad was a teacher and we lived outside a podunk little town where he taught in a little red brick schoolhouse that still sits at the top of the hill there - across from what was a grain mill and the local rodeo grounds where I once entered a contest to pull ribbons off of calves’ tails - he saw the future.
He was so convinced that computers were the future that he later got into selling Texas Instruments computers. That’s where I was introduced to programming. The rest is my history, which I wrote about here:
I’ve seen a lot over the years. The holy grail of programming for a long time has been “code that writes code.” I’d written various forms of partial code generators in the past - template-driven programming or generating bits of code here and there. Tools came out like FrontPage and Dreamweaver that tried to generate website code - and produced a real messy pile of garbage. You could get something out of them that looked like a website or code, but trying to maintain it was impossible. Note that you’ve likely not heard of many people using those applications recently. As far as I know, no one uses FrontPage anymore. Use of Dreamweaver has dwindled significantly.
So here comes generative AI claiming to write code for me, and of course I’m skeptical. But I’m also one to try things before claiming they will never work. AI has promise. It can write decent code for small bash scripts, and I use it extensively. For example, I recently used it to help produce all the scripts in this post about creating a more secure pentesting network configuration:
https://teriradichel.substack.com/p/securing-pentest-and-bug-bounty-research
One of the things I forgot to add was the routing configuration on the hosts on the private network. Asymmetric routing problems have been a real pain for me in the past. They’re always complicated to set up and troubleshoot when you’re first getting started. In this case, it did take a few rounds and some network log inspection, but I would have spent way more time just figuring out where to add the configuration, let alone writing the scripts to show people what they need to change to make it work. So yes, I do think AI is very powerful, and it’s helping me with just about everything right now.
But there’s one really important thing you need to understand about AI before you start using it. And that is - it is non-deterministic.
What’s the difference between deterministic and non-deterministic?
It boils down to this:
Deterministic - if implemented correctly - is always right 100% of the time.
The logic is solid and there’s no way it can be wrong.
It’s like 1 + 1 = 2.
Non-deterministic - may be right. It might produce different answers for the same prompt (question or input).
It’s like saying, well I saw this data:
1+1=2
1+1=2
1+1=3
1+1=2
1+1=2
1+1=2
1+1=2
So 1 + 1 is probably 2.
Because in all of the data most of the time 1 + 1 = 2.
But there’s a chance that at some point it might say 1 + 1 = 3.
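To make that concrete, here’s a minimal toy sketch in Python - not how a real model works internally, and the function names are just made up for illustration - showing the difference between a deterministic calculation and a probabilistic guess based on observed data:

```python
import random

def add_deterministic(a, b):
    # Deterministic: the same inputs always produce the same (correct) output.
    return a + b

# Toy "training data": answers observed for 1 + 1 - mostly right, one wrong.
observed_answers = [2, 2, 3, 2, 2, 2, 2]

def add_probabilistic():
    # Non-deterministic: pick an answer weighted by how often it appears
    # in the observed data. Usually 2, but occasionally 3.
    return random.choice(observed_answers)

print(add_deterministic(1, 1))                   # always 2
print([add_probabilistic() for _ in range(10)])  # mostly 2s, sometimes a 3
```

Run it a few times and every so often a 3 slips through - which is exactly the point.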
And what will it produce if it sees this data?
1+1=2
1+1=3
1+1=2
1+1=3
1+1=2
1+1=3
1+1=2
1+1=3
With data like that, it’s basically a coin flip. The model is making a probabilistic guess of what the next “token” is. I explained tokens in the post about AWS AI services linked above.
An answer from a probabilistic guessing engine is not a guaranteed answer you can trust and count on 100% of the time. When you use an AI model, you are not getting deductive reasoning or a proof that the answer is correct. You are getting a statistical prediction of what the output should be for the prompt (the question or request) you typed in.
And it can be right a lot! It’s a good tool. You just have to understand the caveats and limitations of an AI model and the risk involved if the tool produces an answer that is not correct.
There are many factors that can influence what an AI model will produce. Weights, adjustments, and tweaks to the model can change the answer to any given prompt. The model is also shaped by whatever data it has been trained on. What if I trained my model on data that looked like this:
1+1=3
1+1=3
1+1=3
1+1=3
1+1=3
1+1=3
1+1=3
What answer will it produce? Most likely, 1 + 1 = 3.
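Here’s a hypothetical sketch of the same idea: estimate the “most likely answer” from three different toy training sets - the mostly-correct one, the 50/50 one, and the poisoned one above. (The function and variable names are made up for illustration.)

```python
from collections import Counter

def most_likely_answer(training_answers):
    # Tally how often each answer appears and return the most common one
    # along with its share of the data - a crude stand-in for the
    # statistical prediction an AI model makes.
    counts = Counter(training_answers)
    answer, count = counts.most_common(1)[0]
    return answer, count / len(training_answers)

mostly_correct = [2, 2, 3, 2, 2, 2, 2]
half_and_half  = [2, 3, 2, 3, 2, 3, 2, 3]
poisoned       = [3, 3, 3, 3, 3, 3, 3]

for name, data in [("mostly correct", mostly_correct),
                   ("half and half", half_and_half),
                   ("poisoned", poisoned)]:
    answer, share = most_likely_answer(data)
    print(f"{name}: 1 + 1 = {answer} (about {share:.0%} of the data)")
```

Poison enough of the data and the “most likely” answer flips entirely - without any change to the code doing the predicting.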
So what if someone can get some incorrect data into the training process? You guessed it - they can influence what answers the AI will produce.
AI models are probabilistic engines built from all the tokens in their training data and the relationships between those tokens. What if someone can insert a token adjacent to all the other tokens in a way that makes it appear related when it really is not?
In addition, models are always changing. Constantly. They are trained and retrained and tweaked and adjusted to produce better results. So you test the model over and over and say, “Hey, this is pretty good!” Then you start relying on it more and more and stop questioning whether it is actually providing true and correct answers.
And then something slides into the training data while you weren’t looking, or the underlying model changes…and suddenly you’re getting incorrect answers and you may not even realize it.
This is akin to when I worked on search engine optimization (SEO) at my former company. You would make all these tweaks to try to get your website to the top of the list. At one point my resume website was ranked number one for “Seattle Programmer.” I had figured out how the search engine bots processed web pages and what I needed to put on my resume page to get it to the top of the list.
Similarly, people are inserting invisible text into resumes to try to trick the automated tools used by human resources departments into pushing their resumes to the top of the stack. (Don’t do that - if you get caught, you’ll likely be banned.) Inserting data into an AI model’s training data can have a similar effect. It can cause the model to produce inaccurate or influenced results.
But the main point is this: AI models are non-deterministic. Even if the underlying data doesn’t change and the model doesn’t change, the model might decide one day to give you that one wrong answer because it’s somewhere in the training data and it is providing a statistical prediction: 1 + 1 = 3.
What sort of problem is that going to cause if you are using that AI model for financial calculations? What if it is evaluating your security policies? What if it is determining if someone is a criminal or not - could it incarcerate an innocent person or let a criminal go free? What if it is driving your car or flying your plane?
That’s why non-deterministic models can be a problem and should be used with the appropriate caution and guardrails.
A non-deterministic algorithm or system is one that can exhibit different behaviors or produce different outcomes on different runs, even when given the same input.
So how can we solve that problem? In my case, I do use AI to generate my code and to answer questions, knowing it could be wrong. I follow that up with deterministic checks and validation to ensure that what it has produced is accurate and correct. I don’t give models my credentials, and I don’t let them run rampant in my email and personal data. I also use AI to produce deterministic code, and I lean on deterministic scripts within my processes when things need to be right or can be completed more efficiently that way.
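As one illustration of what I mean by deterministic checks, suppose an AI code generator produced a small function to calculate sales tax. Before trusting it, I can run it against known inputs with known-correct outputs. This is just a hypothetical sketch - the function name and test values are made up:

```python
# Hypothetical function an AI code generator might have produced.
def calculate_sales_tax(amount, rate):
    return round(amount * rate, 2)

def validate_generated_code():
    # Deterministic checks: known inputs with known-correct expected outputs.
    # If any assertion fails, the generated code does not get used.
    assert calculate_sales_tax(100.00, 0.10) == 10.00
    assert calculate_sales_tax(0.00, 0.10) == 0.00
    assert calculate_sales_tax(19.99, 0.0825) == 1.65
    print("All deterministic checks passed.")

validate_generated_code()
```

The checks themselves are deterministic: the same inputs must produce the same correct outputs every single time, no matter what the model that wrote the code was thinking.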
AI is not going to replace all the programmers in the world, as some people are predicting, for exactly the technical reasons explained above. However, it IS going to replace SOME programming tasks that can be completed faster with AI code generators - or vibe coding.
AI will help us write code much faster. The solution lies in the combination of deterministic and non-deterministic technologies. It also depends on the appropriate use of AI so that the cost does not exceed the return on investment. I’ll be writing on those topics more in future posts.
Subscribe for more stories like this on Good Vibes.
— Teri Radichel


