Practical - and quick - guide to Generative AI
Or: AI without blah blah blah
Gabriel Casara, CGO BlueMetrics

AI-generated summary:
In this article, Gabriel Casara explains in a simple, accessible way how Generative AI works, comparing it to a giant sponge that learns from everything publicly available on the internet, recognizes patterns, and responds based on mathematical vectors. He shows how this "digital mind", trained on billions of sentences, becomes truly useful when it is tuned with human curation, specific language, and internal company data, through techniques such as RAG. In the end, he argues that AI should be treated not as magic but as a practical strategic tool.
Barbecue talk
When I say that I work with Artificial Intelligence, there is always someone who asks: "But how does this thing actually work?". Fair question. And since I learned all this without coming from a technical background, I'll explain it here the way I'd want it explained to me: as if we were at a barbecue with friends.
It all starts with a kind of infinite sponge. Imagine a baby's brain with an absurd capacity to absorb information, but instead of taking years to learn to say “daddy,” it starts absorbing everything that humanity has written on the internet: books, news, recipes, forums, everything (as long as it's public, okay?). This sponge doesn't understand content the way we do — it doesn't know what “love” is, but it realizes that “I love you” is usually followed by “my dear.” What it actually does is recognize patterns.
But this sponge doesn't stop there. Besides absorbing, it has gigantic processing capacity; it's almost as if we combined the curiosity of a baby with the calculating power of a supercomputer. And that's how what some call a language model is born. I prefer to think of it as the embryo of a digital Einstein.
Training Einstein
This "Einstein" is trained on billions of sentences and words, constantly challenged to guess the next word based on the previous ones. If it guesses wrong, its weights are adjusted; if it guesses right, it gets positive reinforcement (in technical terms, of course). This happens millions of times until it becomes a master at predicting and stringing together sentences based on what it has learned. And it does this very quickly.
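If you like seeing things in code, here is a toy sketch of that "guess the next word" idea. It just counts which word tends to follow which in a tiny made-up corpus; real models learn billions of adjustable weights instead of raw counts, but the spirit of the prediction is the same.

```python
from collections import Counter, defaultdict

# A tiny "guess the next word" model: it counts which word tends
# to follow which in a made-up corpus. Real models learn billions
# of weights instead of raw counts, but the idea is the same.
corpus = "i love you my dear . i love pizza . you love pizza".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    # Pick the word that most often followed `word` in the corpus.
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("love"))  # 'pizza' (seen twice vs. 'you' once)
print(predict_next("i"))     # 'love'
```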
But hold on: he doesn't think like us. He has no consciousness, no critical sense, no opinion. He just learned so much, so fast, and so well that it seems like he thinks. It seems like he knows more than we do. Sometimes he really does. Other times he's just tripping, the so-called "hallucination."
And there's more: his brain doesn't work with words but with vectors, which are numbers. Everything is transformed into a kind of "mathematical map of language." This lets him compare ideas, contexts, and expressions mathematically. That's why he understands that "car" and "automobile" are close on that map, while "bar" and "barracks" end up far apart, even though they start with the same letters.
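To make that "mathematical map" concrete, here is a sketch with made-up three-number vectors (real models use hundreds or thousands of dimensions). Closeness on the map is typically measured with cosine similarity:

```python
import math

# Made-up "meaning vectors" with only three numbers each, purely
# for illustration; real models use far more dimensions.
vectors = {
    "car":        [0.90, 0.10, 0.05],
    "automobile": [0.85, 0.15, 0.05],
    "bar":        [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    # Near 1.0 means "same direction on the map"; near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(round(cosine_similarity(vectors["car"], vectors["automobile"]), 2))  # high: close meanings
print(round(cosine_similarity(vectors["car"], vectors["bar"]), 2))         # low: far apart
```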
Then comes the fine-tuning: taking this already powerful Einstein and shaping how he responds. This is where curation and guidance (prompts) come in: "speak like a seasoned lawyer who is an expert in…", "write like a poet", "explain like a quantum physics teacher talking to a five-year-old", and so on. The process is overseen by humans who show the model what an acceptable response looks like. It is not yet intelligence in the philosophical sense, but it is starting to look a lot like contextual expertise.
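In practice, that guidance is often just a "system" message sent before your question. Here is a minimal sketch using the OpenAI Python client as one example; the model name and the API key setup are assumptions, and any chat-style API works the same way:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The "system" message is the guidance (prompt) that sets the persona.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever you have access to
    messages=[
        {"role": "system",
         "content": "Explain like a quantum physics teacher talking to a five-year-old."},
        {"role": "user",
         "content": "What is entanglement?"},
    ],
)
print(response.choices[0].message.content)
```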

Now, if you want this Einstein to be truly useful, you need to give him specific knowledge and new disciplines. That's where RAG (Retrieval-Augmented Generation) comes in. Think of it this way: Einstein knows everything... up to 2023. But he doesn't know what's in your company's contracts, internal regulations, or credit policy. With RAG, you give him a "cheat sheet": a document base he can consult before responding. Then he answers with context, and becomes a truly intelligent assistant.
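Here is a deliberately naive RAG sketch: it picks the internal document that shares the most words with your question and pastes it into the prompt. Real systems retrieve with embedding vectors and a vector database, and the documents below are invented examples, but the "cheat sheet" idea is exactly this:

```python
import string

# A minimal RAG sketch: find the most relevant internal document,
# then hand it to the model as context before it answers.
# Retrieval here is naive word overlap; real systems use embedding
# vectors and a vector database.
documents = [
    "Credit policy: loans above R$50,000 require two approvals.",
    "Vacation rules: employees accrue 30 days of vacation per year.",
]

def tokenize(text):
    # Lowercase and strip punctuation so "approvals." matches "approvals".
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve(question):
    # Score each document by how many words it shares with the question.
    q_words = tokenize(question)
    return max(documents, key=lambda doc: len(q_words & tokenize(doc)))

def build_prompt(question):
    # The "cheat sheet": the retrieved context goes into the prompt.
    return (f"Answer using this internal document:\n{retrieve(question)}\n\n"
            f"Question: {question}")

print(build_prompt("How many approvals does a large loan require?"))
```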
By the way, it's no coincidence that mine is called Albert. When he gets confused in his answers, though, I call him Alfred. Nothing against Batman's butler; Alfred just makes mistakes with more class.
AI is not magic
When you put all this together (the absorbing sponge, the calculating brain, the human curation, and the right context), what you get is a digital Einstein trained for your business. In practice, this goes far beyond generating pretty text. You can build assistants that know everything about your business, automate complex processes, give information superpowers to your sales, service, legal, or education teams, and help them make faster, better-informed decisions.
But to do that, you need to stop seeing AI as magic and start treating it as a strategic tool.
And before you hire yet another consultant to complicate what should be simple, I suggest the following: log into your favorite LLM (I won't tell you where I created Albert... yet), feed it your information, your questions, and your data. See what it returns.
After that, if you want to turn this into something practical, real, and useful in your company, with no blah blah blah, talk to us. We'll set everything up with you quickly and efficiently, explained like we're chatting at a barbecue.
Gabriel Casara is CGO at BlueMetrics and passionate about barbecue with friends and AI.