AI has opened up the possibility for people to do things they don’t really know how to do (roughly, at least). For example, there are many tools, like Claude Code, that allow you to open a terminal and write code (I won’t use the commonly used term for this because I think it’s silly). Coding is a typical example, but there are many other things you can do with AI nowadays, such as generating images and videos, writing text, and summarizing content, among others.
I use AI all the time. Claude Code enables me to automate many tasks that I would otherwise be unable to do or that would take me a long time to develop in Power Automate, for example. However, the more I use these tools, the more I understand that I need to know much more about a topic (not less) so that I can use the tools effectively.
I’m going to use AI as a broad term here so that you can think of it as Claude, ChatGPT, Gemini, or any other variations of these tools. I’m not saying they’re the same, but in this case, it doesn’t matter which tool you’re using.
Let’s explore why.
The intern
For the sake of argument, let’s think about AI as a brilliant intern right out of college. They are willing to work, smart, and quick to learn useful things. However, if you ask them to do something unsupervised, they will do a lousy job because they don’t know your reality. They don’t know what your best practices are, what your processes are, or what you learned from experience and past mistakes. What they know comes from being trained by a professor, in a limited timeframe, with a set of materials that they studied and interpreted in their own way.
AI is (oversimplifying) like this. Companies feed the models large amounts of data (including the data on this website) and then, when you ask, they return something based on what they were trained on. They make mistakes, invent things, and go in wild directions that, in theory, could work, but you know there are better solutions.
In the same way that when you hire an intern, you provide them with information about your company (processes, rules, etc.) through documentation or training, you must do the same when using specific AI tools. You need to “train” them in how you want things to be done so that the results are closer to what you need. But for that, you need to know more about your business or area; otherwise, how could you expect good results?
You need to understand your business, strategy, objectives, and technology so that you can instruct it properly. You would not ask an intern to build an ice cream machine and then start using the first thing they create to sell ice cream to the public, right? This “training” could be as simple as writing a text document and attaching it to your preferred tools. All of them have some variation of “memory” or “artifacts” that takes your instructions into account. This will make a world of difference in your results. Update this file over time, each time you see something you don’t like, just as you would with your intern.

Suggestion: Add a simple Word file with your instructions that can be used each time you use a tool. Make the instructions as clear and concise as possible, and update the file over time.

The output

Stretching the analogy a bit further, the models’ outputs can have different levels of quality. Your intern makes mistakes and can invent things to avoid admitting they don’t know something. Claude, ChatGPT, and Gemini all have their variations of warnings informing you that the results may contain errors, which is expected. Nothing is infallible, and technology, especially something relatively new, is no exception. Therefore, you need to determine whether the output, regardless of its formatting, is actually useful and whether it has any issues.

Many people claim that they can’t use AI because it makes mistakes. I don’t agree with this at all. It’s the same as saying that you can’t automate something because you can’t automate it 100%. If you can use AI to get you 80% of the way there, it’s still 80% of the time saved, and that is a massive plus over time.

You need to know more so that you can adequately judge the result and adjust it if necessary. In my experience, there’s never a 100% correct result, but there wouldn’t be one even if you created it from scratch. The idea is always to save time, not to replace your responsibility altogether.

Stuck

I often see this happening: I ask for something, and they go in circles. Or you need to cancel something midway because you know they are going in the wrong direction. I’ve lost count of the number of times I wrote “try to do X instead”. But I needed to know more so that I could understand what “X” could be. If I let it run, it would go in circles, and because it has gaps in its knowledge, it would never know how to proceed. So, something makes sense; they act on it, fail, and then go back to the beginning, running in circles.

It happens to people as well, and I’m guilty of it all the time. I try something, forget about it after a while, and try it again. AI speeds up the cycles and writes content faster than we can, but it is prone to the same mistakes.

Never leave the agents running indefinitely, the same way you would not let your intern run around a warehouse doing whatever they want. It’s essential to monitor progress, provide guidance and clear exit paths when they get stuck, and recognize when things are not going in the right direction.

Final Thoughts

I could go on, but I think you get my meaning. It’s more important than ever to know more about your domain, company, etc., not less. I know I stretched the “intern” analogy, but in my understanding it works well in this case. It’s valuable to have someone who can help you in your daily tasks, but left unsupervised, they could create more harm than good.
You can follow me on Mastodon (new account), Twitter (I’m getting out, but there are still a few people who are worth following) or LinkedIn. Or email works fine as well 🙂
Photo by Tim Mossholder on Unsplash