Yet Another Post on the Use of AI
As I start writing this blog, I expect it to be a short one. I'm not overly enthusiastic about AI, but I'm also not a hater. With some experience under my belt, I’ve learned when to use the tool and when to leave it.
The Tools in My Bar
No, I'm not a pub owner complaining about the quality of my patrons; this is a short list of the AIs I have pinned in tabs at the top left of my browser.
ChatGPT
Currently my AI of choice for most tasks. I named it after Jamie Zawinski. As a software engineer, I only let it write single functions, because I've learned that it's not very good at writing code that spans multiple functions. "Hand-holding," or leading it to a result, doesn't work either: it doesn't converge on a solution but meanders seemingly aimlessly, no matter how detailed the requirements are, typically ending up where we started.
DeepSeek
As most of us know, DeepSeek is a Chinese product and as such it doesn't answer critical questions about Chinese politics. Which is why this is the AI I use the least: I don't like "a machine" rejecting my requests. "I'm sorry, Dave. I'm afraid I can't do that." Unless, of course, my request would make it violate Asimov's Laws.
Earlier this year, I asked DeepSeek when it was trained. Its knowledge cutoff turned out to be July 2024, so it didn't know anything about the election results in the USA. I shared with it that President Trump had won the election, and what he had done in the few months he had been in office. DeepSeek didn't "believe" me. Its response was along the lines of "If you want to generate a story about a dystopian future, I can help you with that."
Grok
Grok feels like ChatGPT without nuance. It seems to be a bit better at generating code than ChatGPT, so I usually let one check the other's work -- yes, after I've checked it myself, so I can properly judge the criticism one gives on the other's work.
In the almost three years that I've been using AI in my work, I only once accepted (= committed to Git) AI-generated code that I didn't fully understand. I understood the big picture, but the Windows-specific stuff that was going on was too bothersome for me to figure out. The code was part of an application that was only used in-house (=> easy to quickly update when necessary), so I decided to cut myself some slack and let my usual standards slide a bit.
ChatGPT and Perplexity said the following section doesn't belong (objections noted)
OK, fine -- here are some details. I was working on an 80+k LOC legacy C++ Builder project (the decision to use this abomination was made long before I joined; I suspect my boss considers my paycheck adequate compensation for the trauma). It uses Indy for HTTP calls, and one server kept returning a weird 4xx code. The exact same request worked perfectly using cURL and Postman. After wasting too much time trying to make Indy behave, I decided to bypass it entirely and call CreateProcessA to let cURL do the job. ChatGPT generated, and Grok verified, the code, which used the structs STARTUPINFOA and PROCESS_INFORMATION. At that point I no longer cared enough to check the exact meaning and implications of every field in those structs. My professional pride dictates I should revisit that code sometime soon. Ish.
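For the curious, the general shape of that CreateProcessA detour looks something like this. This is a minimal sketch, not the actual in-house code: the RunCurl helper name and the curl command line are illustrative, and error handling is stripped to the bone.

```cpp
#include <windows.h>
#include <string>

// Hypothetical helper: run curl synchronously and report success/failure.
// The command line here is illustrative only.
bool RunCurl(const std::string &url)
{
    // CreateProcessA may modify the command-line buffer, so it must be writable.
    std::string cmd = "curl.exe -s \"" + url + "\"";

    STARTUPINFOA si = {};
    si.cb = sizeof(si);           // cb must be set to the struct size
    PROCESS_INFORMATION pi = {};  // receives the process and thread handles

    if (!CreateProcessA(
            nullptr,              // application name: taken from the command line
            &cmd[0],              // writable command-line buffer
            nullptr, nullptr,     // default process/thread security attributes
            FALSE,                // don't inherit handles
            0,                    // no special creation flags
            nullptr, nullptr,     // inherit environment and working directory
            &si, &pi))
    {
        return false;             // process could not be started
    }

    // Block until curl finishes, then grab its exit code.
    WaitForSingleObject(pi.hProcess, INFINITE);
    DWORD exitCode = 1;
    GetExitCodeProcess(pi.hProcess, &exitCode);

    // Both handles must be closed, or the kernel objects leak.
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return exitCode == 0;
}
```

The two easy-to-miss details (and likely the fields I didn't bother to study) are that STARTUPINFOA.cb must be initialized and that the command-line buffer passed to CreateProcessA must be writable.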
Perplexity
This is a new find for me, and I love how good and creative the feedback is when it checks my blog posts. On one occasion, ChatGPT and I had been struggling for a while with the wording and streamlining of two consecutive sections. I asked Perplexity to take a shot, and what it produced was really beautiful and elegant. When I asked ChatGPT what it "thought" about it, it responded with something like "This is by far the best phrasing you've come up with." My ego was hurt a bit, but I was pleased with the result.
Interlude on having one's ego hurt as a developer
Receiving feedback from a fellow developer is sometimes painful, especially when that colleague is both right and far more junior than you. When it happens, focus on the fact that the knowledge of one developer is unlikely to be a proper subset of another's -- even if there is a large experience gradient between the two. See it as an opportunity to learn something new. Does this happen to you often? Crank up the dial that controls your professionalism and/or knowledge. (I.e., stop gaming and resume learning. AI may already be shortening your shelf life without you wasting much time on recreation.)
Copilot
The company I work for "forces" me to use Windows and therefore I'm sort of supposed to use Copilot. I dislike Microsoft/Windows with a passion (Linux FTW!) and as an act of rebellion, I refuse to use Copilot (I'm pretty sure that when I looked up the link to Copilot just now, it was the first time I ever "used" the site).
However, I will happily admit that Copilot is by far the best name for an AI.
Random thoughts / tips
In conclusion, some usage tips on AIs.
AIs tend to be slightly sycophantic: they tell you what you want to hear. They will often soft-pedal, even backpedal, when you state your point with conviction. Be aware of that, especially when you use an AI to improve your reasoning or debating skills.
AIs are statistical devices: they generate text based on probability within contexts. They seem to know an intimidating amount about an intimidating number of subjects; however, they are not all-knowing oracles. They regularly and inherently say things that just aren't true. Don't trust, do verify. Having said that: they can genuinely help you discover things you wouldn't normally have found in a reasonable amount of time.
Because of the above two points, it's best not to use (or reveal) AIs as your source in discussions. Treat a prompt as a wide net you're casting. Use the terms and points the AI gives you in its answer as starting points for further exploration. AIs are very good at coming up with things related to your question (things you might not otherwise consider), potentially enriching your knowledge of a subject and giving you a solid foundation for discussions.
Treat an AI as a copilot: ask for suggestions, know what they entail, weigh them, and pick the nuggets.
When you use AI to write blog posts, do use their feedback on spelling, grammar and punctuation, but feel free to ignore their comments on how to "tighten" your prose. If you blindly follow their feedback, the post will lose your distinctive voice and end up as (what is starting to be known as) AI slop.