Reach is not ownership - Why visibility is no longer enough today

Reach vs. ownership

A good ten years ago, I happened upon a lecture on the transition from the information society to the knowledge society. At the time, much of it sounded theoretical, almost academic: concepts such as data sovereignty, ownership of information, and the question of who would actually determine what is accessible in the future - and what is not. Today, with some distance, that lecture seems surprisingly prescient. Much of what was then described as a coming development has since become reality. More and more data has migrated to the cloud. More and more information is no longer stored on in-house systems but in external infrastructures. And increasingly, it is no longer the user who decides what is possible, but a provider, a platform or a set of rules.

To understand this development, it is worth taking a step back. The information society in which many of us grew up was not a normal state. It was a historical exception.

Read more

Apple MLX vs. NVIDIA: How local AI inference works on the Mac

Local AI on Silicon with Apple Mac

Anyone working with artificial intelligence today often thinks first of ChatGPT or similar online services. You type in a question, wait a few seconds - and receive an answer as if a well-read, patient conversation partner were sitting at the other end of the line. What is easily forgotten, however, is that every input, every sentence, every word travels over the Internet to external servers. That is where the real work happens - on huge machines you never get to see.

In principle, a local language model works in exactly the same way - just without the Internet. The model is stored as a file on your own computer, is loaded into memory (RAM) at startup, and answers questions directly on the device. The technology behind it is the same: a neural network that understands language, generates text and recognizes patterns. The only difference is that the entire computation stays in-house. You could say: ChatGPT without the cloud.

Read more

LoRA training: How FileMaker 2025 simplifies the fine-tuning of large language models

LoRA Fine tuning - FileMaker 2025

The world of artificial intelligence is moving fast. New models, new methods and, above all, new possibilities emerge almost weekly - and yet one thing remains constant: not every technical innovation automatically improves everyday work. Much of it remains experimental, complex or simply too costly for productive use. This is particularly evident in the so-called fine-tuning of large language models - a method of specializing generative AI on your own content, terminology and tone.

I have followed this process closely over the last few months - first in the classic form, with Python, the terminal, error messages and nerve-wracking setup loops. And then with FileMaker 2025 - a step that surprised me, not because it was loud, but because it was clear. And because it showed that there is another way.
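The core trick behind LoRA fine-tuning can be shown in a few lines of plain Python, independent of any ML framework: instead of retraining a full weight matrix W (d x k), you learn two small matrices B (d x r) and A (r x k) with a rank r much smaller than d and k, and use W + B·A as the adapted weight. The matrix names and sizes below are the standard LoRA formulation, not anything specific to FileMaker 2025.

```python
# Minimal sketch of the LoRA idea: freeze W, train only B and A.
# Pure Python, no ML framework; real training would use gradients.

def matmul(X, Y):
    """Naive matrix multiplication for lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner))
             for j in range(cols)] for i in range(rows)]

def add(X, Y):
    """Element-wise matrix addition."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def lora_adapted(W, B, A):
    """Effective weight after fine-tuning: W + B @ A."""
    return add(W, matmul(B, A))

def trainable_params(d, k, r):
    """Full fine-tuning trains d*k values; LoRA trains only r*(d+k)."""
    return {"full": d * k, "lora": r * (d + k)}
```

For a typical 4096 x 4096 layer with rank r = 8, LoRA trains 65,536 values instead of 16,777,216 - about 0.4% - which is why fine-tuning becomes feasible on ordinary hardware at all.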

Read more

Electronic invoices for SMEs: XRechnung, ZUGFeRD and ERP at a glance

Overview of the obligation to issue electronic invoices

Germany did not invent the e-invoice overnight - it is the result of years of standardization work (EN 16931), federal and state regulations (B2G) and now, via the Growth Opportunities Act, its gradual expansion into everyday B2B business. Since January 1, 2025, a new legal situation applies: an "electronic invoice" only counts as an e-invoice if it is structured and machine-readable - a plain PDF attached to an email no longer qualifies under this definition. That sounds technical, but it has operational consequences from invoice receipt through accounting to archiving.
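What "structured and machine-readable" means in practice is easiest to see with a small sketch: the invoice data lives in XML elements that a program can read field by field, unlike a scanned or plain PDF. The fragment below only mimics the flavor of an EN 16931-style invoice - it is deliberately simplified and not a schema-valid XRechnung or ZUGFeRD document.

```python
import xml.etree.ElementTree as ET

# Simplified, NON-schema-valid invoice fragment, just to illustrate
# the difference between structured data and a flat PDF. Real
# XRechnung/ZUGFeRD documents use the EN 16931 element model.
SAMPLE = """
<Invoice>
  <ID>RE-2025-0042</ID>
  <IssueDate>2025-01-15</IssueDate>
  <PayableAmount currencyID="EUR">119.00</PayableAmount>
</Invoice>
"""

def read_invoice(xml_text):
    """Extract key fields directly from the structured invoice."""
    root = ET.fromstring(xml_text)
    amount = root.find("PayableAmount")
    return {
        "id": root.findtext("ID"),
        "date": root.findtext("IssueDate"),
        "amount": float(amount.text),
        "currency": amount.get("currencyID"),
    }
```

Because the amount, date and currency are addressable fields rather than pixels in a PDF, invoice receipt, accounting and archiving systems can process them without OCR or manual retyping - which is exactly what the new legal definition is after.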

Read more

Digital dependency: how we have lost our self-determination to the cloud

Digital dependency with cloud systems

I have always considered it a mistake for people to hand over their data - whether to the cloud, to apps or to any number of "free" services. For me, data sovereignty has never been a buzzword but a question of self-respect. Anyone who uses technology without considering the consequences enters into a dependency that often only becomes noticeable years later - and by then cuts all the deeper.

Read more

gFM-Business and the future of ERP: local intelligence instead of cloud dependency

gFM-Business and AI + knowledge graph

For over a decade, the gFM-Business software has stood for something special in the German ERP market: it is built not on a cumbersome, hard-to-maintain system but on the lightweight, customizable, visually modelled FileMaker platform. This brings many advantages: gFM-Business can be extended individually, runs on Windows, macOS and iOS, and can be customized by developers and ambitious power users alike.

With the advent of artificial intelligence (AI) - especially through so-called language models such as ChatGPT - new opportunities are now emerging that go far beyond traditional automation. gFM-Business is actively preparing for this future: with the aim of not only managing data, but also unlocking knowledge.
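"Unlocking knowledge" rather than just managing data can be made concrete with a toy knowledge graph: ERP records become nodes linked by typed relationships, which can then be queried and traversed. This sketch only illustrates the general idea of subject-predicate-object triples; the entities and relations are invented for the example and say nothing about how gFM-Business implements its knowledge graph.

```python
# Toy knowledge graph as subject-predicate-object triples linking
# hypothetical ERP records (customers, orders, products, suppliers).

triples = [
    ("Customer:Meyer", "placed", "Order:1001"),
    ("Order:1001", "contains", "Product:Lamp"),
    ("Product:Lamp", "suppliedBy", "Supplier:Licht GmbH"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the given (optional) fields."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

def neighbors(triples, node):
    """Follow one hop of relationships out of a node."""
    return [o for s, p, o in triples if s == node]
```

The point of the graph form is that questions spanning several records ("which supplier is behind this customer's order?") become simple traversals instead of hand-written joins - the kind of connection a language model can then surface in plain language.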

Read more

FileMaker Conference 2025: AI, community and an unexpected incident

FileMaker Conference 2025: Fire alarm with fire department

The FileMaker Conference 2025 in Hamburg is over - and in many respects it was a special milestone. Not only because this year's conference focused heavily on artificial intelligence, performance and modern workflows, but also because the personal exchange and the "family atmosphere" of the FileMaker community once again shone through. For me personally, it was an intensive, inspiring and thoroughly enriching time - right from the very first evening.

Read more

Integration of MLX in FileMaker 2025: Local AI as the new standard

Local AI with MLX and FileMaker

While MLX originally started as an experimental framework from Apple Research, a quiet but significant development has taken place in recent months: with the release of FileMaker 2025, Claris has integrated MLX directly into FileMaker Server as a native AI infrastructure for Apple Silicon. This means that anyone working on a Mac with Apple Silicon can not only run MLX models locally, but also use them directly in FileMaker - with native functions and without intermediate layers.

Read more