Artificial intelligence without the hype: why fewer AI tools often mean better work

Artificial intelligence without the hype

Anyone who engages with artificial intelligence today almost inevitably encounters a strange feeling: constant restlessness. No sooner have you got used to one tool than the next ten appear. On YouTube, one video follows the next: "This AI tool changes everything", "You absolutely have to use this now", "Those who miss out will be left behind". And every time, the same subliminal message comes through: you're too late. The others are ahead of you. You have to catch up.

This doesn't just affect IT people. Self-employed people, creative professionals, entrepreneurs and ordinary employees are also feeling the pressure. Many don't even know exactly what these tools actually do - but they have the feeling that they could be missing out on something. And that's exactly what creates stress.

Read more

Using AI as a sparring partner: How thinking in dialog becomes more productive

AI as a sparring partner

I've been using artificial intelligence for almost exactly two years now. In the beginning, it was sober and technical: entering text, typing prompts, reading answers, correcting, retyping. The way many people did it - carefully, in a controlled manner, with a certain distance. It worked, no question. But there was still something mechanical about it. You asked questions, got answers, ticked them off.

I realized relatively early on that something was missing: flow. Thinking is not a form to fill in. Good thoughts don't come from a corset of neatly formulated input, but from talking, trying things out, thinking aloud. So I started using the AI app on my phone more often - and at some point I simply began speaking instead of typing. That was the real turning point.

Read more

Cloud AI as head teacher: why the future of work lies with local AI

Cloud AI becomes the head teacher

When the large language models began their triumphal march a few years ago, they almost seemed like a return to the old virtues of technology: a tool that does what it is told. A tool that serves the user, not the other way around. The first versions - from GPT-3 to GPT-4 - had weaknesses, yes, but they were amazingly helpful. They explained, analyzed, formulated and solved tasks. And they did this largely without pedagogical ballast.

You talked to these models as if you were talking to an erudite employee who sometimes got lost, but basically just worked. Anyone who wrote creative texts, generated program code or produced longer analyses back then experienced how smoothly it went. There was a feeling of freedom, of an open creative space, of technology that supported people instead of correcting them.

Read more

AI Studio 2025: Which hardware is really worth it - from the Mac Studio to the RTX 3090

Hardware 2025 for AI studio

Anyone working with AI today is almost automatically pushed into the cloud: OpenAI, Microsoft, Google, various web UIs, tokens, limits, terms and conditions. This seems modern - but it is essentially a return to dependency: others decide which models you can use, how often, with which filters and at what cost. I'm deliberately going the other way: I'm currently building my own small AI studio at home. With my own hardware, my own models and my own workflows.

My goal is clear: local text AI, local image AI, training my own models (LoRA, fine-tuning) - and all of this in such a way that I, as a freelancer and later also for SME clients, am not dependent on the daily whims of some cloud provider. You could say it's a return to an old attitude that used to be quite normal: "You do important things yourself." Only this time it's not about your own workbench, but about computing power and data sovereignty.

Read more

Immortality through technology: how far research and AI have really come

Digital immortality

Ever since humans have existed, there has been a desire to prolong life - or preferably extend it indefinitely. In the past, it was myths, religions, alchemists or mysterious rituals that gave people hope. Today, it is no longer magicians sitting over ancient parchments, but some of the richest people in the world sitting over state-of-the-art biology and AI technology. At first glance, it sounds like science fiction: is it possible to stop ageing? Can you "preserve" yourself digitally? Can you transfer your thinking to a machine?

But the topic has long since left the ivory tower. Big tech billionaires are now investing billions in projects that are seriously investigating precisely these questions. Not because they want to become immortal gods - but because they can afford to research the limits of what is possible. This article explains quite simply what is behind this idea, what technical developments already exist today, where the limits lie - and why this topic will become increasingly important over the next 20 years.

Read more

The new EU censorship laws: What Chatcontrol, DSA, EMFA and the AI Act mean

EU censorship laws

In an increasingly digitalized world, we spend a lot of time online: chatting, shopping, working, keeping ourselves informed. At the same time, the rules on how content is shared, moderated or controlled are changing. The Digital Services Act (DSA), the European Media Freedom Act (EMFA), the planned Regulation to Prevent and Combat Child Sexual Abuse (CSAR, often referred to as "chat control") and the AI Act are key pieces of legislation proposed by the European Union (EU) to regulate the digital environment.

These regulations may seem remote at first glance - but they affect you as a private individual as well as small and medium-sized companies. This article guides you step by step: from the question "What exactly is being planned?" through the background and timelines to a change of perspective: what does this mean for your everyday life?

Read more

Apple MLX vs. NVIDIA: How local AI inference works on the Mac

Local AI on Apple Silicon Macs

Anyone working with artificial intelligence today often thinks first of ChatGPT or similar online services. You type in a question, wait a few seconds - and receive an answer as if a very well-read, patient conversation partner were sitting at the other end of the line. But what is easily forgotten: every input, every sentence, every word travels over the Internet to external servers. That's where the real work happens - on huge computers you never get to see.

In principle, a local language model works in exactly the same way - but without the Internet. The model is stored as a file on the user's own computer, is loaded into the working memory at startup and answers questions directly on the device. The technology behind it is the same: a neural network that understands language, generates texts and recognizes patterns. The only difference is that the entire calculation remains in-house. You could say: ChatGPT without the cloud.
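The principle described above - model as a local file, loaded into memory at startup, generation entirely on-device - can be illustrated with a deliberately tiny toy sketch. The "model" here is just a bigram lookup table stored as JSON, standing in for the multi-gigabyte weight files (GGUF, safetensors) that real local models use; it is not a neural network, only a minimal stand-in for the workflow:

```python
import json
import os
import tempfile

# A toy "language model": a bigram continuation table, standing in for
# the multi-gigabyte weight files real local models ship as.
weights = {"local": "models", "models": "answer", "answer": "offline"}

# 1. The model lives as a file on your own disk.
path = os.path.join(tempfile.gettempdir(), "toy_model.json")
with open(path, "w") as f:
    json.dump(weights, f)

# 2. At startup, the file is loaded into working memory ...
with open(path) as f:
    model = json.load(f)

# 3. ... and generation happens entirely on-device:
#    no request ever leaves the machine.
def generate(prompt_word, steps=3):
    out = [prompt_word]
    for _ in range(steps):
        nxt = model.get(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("local"))  # -> "local models answer offline"
```

In practice, step 3 is what libraries such as llama.cpp perform with a real neural network; the structure of the workflow - file on disk, load into RAM, compute locally - is the same.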

Read more

LoRA training: How FileMaker 2025 simplifies the fine-tuning of large language models

LoRA fine-tuning with FileMaker 2025

The world of artificial intelligence is in constant motion. New models, new methods and, above all, new possibilities appear almost weekly - and yet one thing remains constant: not every technical innovation automatically improves everyday work. Much of it remains experimental, complex or simply too costly for productive use. This is particularly evident in the so-called fine-tuning of large language models - a method of specializing generative AI in your own content, terminology and tone.
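The core idea behind LoRA (Low-Rank Adaptation), the fine-tuning method mentioned above, can be sketched in a few lines of NumPy. The pretrained weight matrix W stays frozen; training only touches two small matrices B and A whose product forms a low-rank update. The sizes below are toy values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix (toy dimensions; real layers are far larger).
d, k, r = 8, 8, 2               # r is the LoRA rank, with r << min(d, k)
W = rng.normal(size=(d, k))

# LoRA trains only A and B; the weight update is their product B @ A.
A = rng.normal(size=(r, k)) * 0.01
B = np.zeros((d, r))            # B starts at zero, so training begins at W itself

def forward(x):
    # Effective weight is W + B @ A -- the base model is never modified.
    return x @ (W + B @ A).T

# Why this saves work: compare trainable parameter counts.
full_params = d * k             # fine-tuning the whole matrix: 64 values
lora_params = r * (d + k)       # fine-tuning only B and A:     32 values
print(full_params, lora_params)
```

With realistic layer sizes (d and k in the thousands, r around 8 to 64), the low-rank update trains well under one percent of the original parameters, which is why LoRA makes fine-tuning feasible on modest local hardware.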

I have followed this process closely over the last few months - first in the classic form, with Python, the terminal, error messages and nerve-wracking setup loops. And then: with FileMaker 2025. A step that surprised me - because it wasn't loud, but clear. And because it showed that there is another way.

Read more