AI Studio 2025: Which hardware is really worth it - from the Mac Studio to the RTX 3090

2025 hardware for an AI studio

Anyone working with AI today is almost automatically pushed into the cloud: OpenAI, Microsoft, Google, web UIs, tokens, limits, terms and conditions. This feels modern - but it is essentially a return to dependency: others decide which models you may use, how often, with which filters and at what cost. I am deliberately going the other way: I am currently building my own small AI studio at home, with my own hardware, my own models and my own workflows.

My goal is clear: local text AI, local image AI, training my own models (LoRA, fine-tuning) - and all of it set up so that neither I as a freelancer nor, later, my SME clients depend on the daily whims of some cloud provider. You could call it a return to an attitude that used to be perfectly normal: "You do important things yourself." Only this time it is not about your own workbench, but about computing power and data sovereignty.

Read more

Apple MLX vs. NVIDIA: How local AI inference works on the Mac

Local AI on Apple Silicon Macs

Anyone working with artificial intelligence today tends to think first of ChatGPT or similar online services. You type in a question, wait a few seconds - and receive an answer as if a very well-read, patient conversation partner were sitting at the other end of the line. What is easily forgotten, though: every input, every sentence, every word travels over the Internet to external servers. That is where the real work happens - on huge machines you never get to see yourself.

In principle, a local language model works in exactly the same way - just without the Internet. The model sits as a file on your own computer, is loaded into RAM at startup and answers questions directly on the device. The technology behind it is the same: a neural network that understands language, generates text and recognizes patterns. The only difference is that the entire computation stays in-house. You could say: ChatGPT without the cloud.

Read more

LoRA training: How FileMaker 2025 simplifies the fine-tuning of large language models

LoRA Fine tuning - FileMaker 2025

The world of artificial intelligence is in constant motion. New models, new methods and, above all, new possibilities emerge almost weekly - and yet one thing remains constant: not every technical innovation automatically improves everyday work. Much of it remains experimental, complex or simply too costly for productive use. This is particularly evident in the so-called fine-tuning of large language models - a method of specializing generative AI in one's own content, terminology and tone.

I have followed this process closely over the last few months - first the classic way, with Python, the terminal, error messages and nerve-wracking setup loops. And then: with FileMaker 2025. A step that surprised me - because it was not flashy, but clear. And because it showed that there is another way.

Read more

Artificial intelligence: which jobs are at risk and how we can prepare now

Which jobs will be eliminated by AI in the future

Hardly any other technological shift has made its way into our everyday lives as quickly as artificial intelligence. What was considered visionary future technology yesterday is already reality today - whether in writing, programming, diagnosing, translating or even creating music, art or legal briefs.

Read more

Integration of MLX in FileMaker 2025: Local AI as the new standard

Local AI with MLX and FileMaker

While MLX originally started as an experimental framework from Apple Research, a quiet but significant shift has taken place in recent months: with the release of FileMaker 2025, Claris has integrated MLX directly into the server as a native AI infrastructure for Apple Silicon. This means that anyone working on a Mac with Apple Silicon can not only run MLX models locally, but also use them directly in FileMaker - via native functions, without intermediate layers.

Read more

MLX on Apple Silicon as local AI in comparison with Ollama & Co.

Local AI on the Mac with MLX

At a time when centralized AI services such as ChatGPT, Claude or Gemini dominate the headlines, many professional users are increasingly looking for an alternative: a local AI infrastructure under their own control. Especially for creative processes, sensitive data or recurring workflows, a local solution is often the more sustainable and secure option.

Anyone working on a Mac - especially with Apple Silicon (M1, M2, M3 or M4) - will now find surprisingly powerful tools for running language models directly on the device. At the center of it all is a new, still little-known component: MLX, a machine learning framework developed by Apple that is likely to play an increasingly central role in the company's AI ecosystem in the coming years.

Read more