Cloud AI as head teacher: why the future of work lies with local AI

When large language models began their triumphal march a few years ago, they almost seemed like a return to the old virtues of technology: a tool that does what it is told. A tool that serves the user, not the other way around. The first versions - from GPT-3 to GPT-4 - had their weaknesses, yes, but they were amazingly helpful. They explained, analyzed, formulated and solved tasks. And they did all of this largely without pedagogical ballast.

You talked to these models as if you were talking to an erudite employee who sometimes got lost but, for the most part, simply did the work. Anyone who wrote creative texts, generated program code or produced longer analyses back then experienced how smoothly it all went. There was a feeling of freedom, of an open creative space, of technology that supported people instead of correcting them.

AI Studio 2025: Which hardware is really worth it - from the Mac Studio to the RTX 3090

Anyone working with AI today is almost automatically pushed into the cloud: OpenAI, Microsoft, Google, assorted web UIs, tokens, limits, terms and conditions. It feels modern - but at its core it is a return to dependency: others decide which models you may use, how often, with which filters and at what price. I'm deliberately going the other way: I'm currently building my own small AI studio at home, with my own hardware, my own models and my own workflows.

My goal is clear: local text AI, local image AI, training my own models (LoRA, fine-tuning), and all of this in such a way that I as a freelancer, and later SME clients as well, are not dependent on the daily whims of some cloud provider. You could say it's a return to an old attitude that used to be perfectly normal: "You do important things yourself." Only this time it's not about your own workbench, but about computing power and data sovereignty.
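To make the "training my own models" part concrete: below is a minimal sketch of attaching a LoRA adapter to a local language model using the Hugging Face peft library. The model name and all hyperparameters are placeholder assumptions for illustration, not a recommendation.

```python
# Minimal LoRA sketch with Hugging Face transformers + peft.
# Assumptions: the model name is a placeholder for any local causal LM,
# and the hyperparameters are illustrative defaults.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the adapter
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights will train
```

The appeal of this approach is exactly the point made above: the base model stays frozen, only a few million adapter weights are trained, and that is feasible on hardware you can own yourself.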

gFM-Business and the future of ERP: local intelligence instead of cloud dependency

For over a decade, the gFM-Business software has stood for something special in the German ERP market: it is not built on a heavyweight, difficult-to-maintain system, but on the lightweight, customizable FileMaker platform with its visual modelling approach. This has many advantages: gFM-Business can be extended individually, runs on Windows, macOS and iOS, and can be customized by developers and ambitious power users alike.

With the advent of artificial intelligence (AI) - especially large language models such as ChatGPT - new opportunities are emerging that go far beyond traditional automation. gFM-Business is actively preparing for this future, with the aim of not only managing data but also unlocking knowledge.

Ollama meets Qdrant: A local memory for your AI on the Mac

Local AI with memory - without cloud, without subscription, without detours

In a previous article I explained how to install and configure Ollama on the Mac. If you have already completed this step, you now have a powerful local language model - such as Mistral, LLaMA 3 or another compatible model - that can be addressed via a REST API.
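For illustration, here is a minimal sketch of such a REST call from Python, assuming Ollama is running on its default port 11434 and a model like Mistral has already been pulled:

```python
# Query a local Ollama model over its REST API.
# Assumption: Ollama runs on localhost:11434 and "mistral" has been pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",   # any locally pulled model works here
        "prompt": "Explain what a vector database is in one sentence.",
        "stream": False,      # return the full answer as a single JSON object
    },
)
print(response.json()["response"])
```

Setting "stream" to false keeps the example simple by returning the complete answer at once; in an interactive application you would typically stream the tokens instead.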

On its own, however, the model only "knows" what is in the current prompt. It does not remember previous conversations. What is missing is a memory.
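To preview where this is heading: such a memory can be sketched with a local Qdrant instance as the vector store and Ollama providing the embeddings. The collection name, embedding model and vector size below are assumptions chosen for illustration:

```python
# A simple "memory" sketch: embed past exchanges with Ollama and store them
# in a local Qdrant collection, then retrieve the most similar snippets.
# Assumptions: Qdrant on localhost:6333, Ollama on localhost:11434,
# "nomic-embed-text" as embedding model (768-dimensional vectors).
import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(url="http://localhost:6333")

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint returns one vector per prompt
    r = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    return r.json()["embedding"]

# Create the collection once (768 dimensions for nomic-embed-text)
client.recreate_collection(
    collection_name="chat_memory",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

# Store a past exchange as a point with the raw text in the payload
note = "User asked how to set up Qdrant on the Mac."
client.upsert(
    collection_name="chat_memory",
    points=[PointStruct(id=1, vector=embed(note), payload={"text": note})],
)

# Retrieve the most relevant memories for a new question
hits = client.search(
    collection_name="chat_memory",
    query_vector=embed("How do I set up Qdrant?"),
    limit=3,
)
for hit in hits:
    print(hit.payload["text"])
```

The retrieved snippets can then be prepended to the next prompt, giving the local model a working memory without any cloud service involved.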
