LoRA training: How FileMaker 2025 simplifies the fine-tuning of large language models

The world of artificial intelligence is on the move. New models, new methods and, above all, new possibilities emerge almost weekly - and yet one thing remains constant: not every technical innovation automatically improves everyday work. Much of it remains experimental, complex or simply too costly for productive use. This is particularly evident in the fine-tuning of large language models - a method for specializing generative AI on one's own content, terminology and tone.

I have followed this process closely over the last few months - first in the classic form, with Python, terminal, error messages and nerve-wracking setup loops. And then: with FileMaker 2025. A step that surprised me - not because it was loud, but because it was clear. And because it showed that there is another way.

In a detailed technical article on gofilemaker.de, I have documented precisely this change: the transition from open, flexible, but unstable PEFT-LoRA training (e.g. with Axolotl, LLaMA-Factory or kohya_ss) to the integrated solution from Claris - scripted, local, and traceable.

What is LoRA anyway - and why is it so important?

LoRA stands for Low-Rank Adaptation. Behind this technical-sounding term lies a simple but powerful principle: instead of retraining an entire AI model, only small, specific parts are adapted - so-called adapter weights, which are inserted into the model and trained in a targeted manner. In recent years, this method has established itself as the gold standard for domain-specific fine-tuning, because it requires little computing power and still delivers excellent results.
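The principle can be sketched in a few lines. The following is a minimal, pure-Python illustration of the idea - not FileMaker's or any library's actual implementation: a frozen weight matrix W is combined with a low-rank update B·A, scaled by alpha/r, so that only the two small matrices A and B need to be trained.

```python
# Minimal sketch of the Low-Rank Adaptation (LoRA) idea.
# The full weight matrix W stays frozen; only the small factors
# A (r x in_dim) and B (out_dim x r) would be trained.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, alpha, r):
    """Effective weight: W + (alpha / r) * B @ A."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Toy example: a 2x2 frozen weight and a rank-1 adapter (r = 1).
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]         # r x in_dim
B = [[0.5], [0.25]]      # out_dim x r
W_eff = lora_weight(W, A, B, alpha=1.0, r=1)
print(W_eff)  # [[1.5, 1.0], [0.25, 1.5]]
```

The point of the low-rank trick is visible in the shapes: for a large model, W has millions of entries, while A and B together have only about (in_dim + out_dim) · r trainable values.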

The classic approach requires an arsenal of tools:

  • a functioning Python environment,
  • suitable CUDA and PyTorch versions,
  • a training engine like Axolotl or kohya_ss,
  • GPU resources to handle the whole thing,
  • and last but not least: Patience. A lot of patience.

Because between YAML files, tokenizer conflicts and format conversions (from safetensors to GGUF to MLX and back), it often takes days before a usable result is achieved. It works - but it's not something to do on the side.

And then came FileMaker 2025.

With the introduction of the AI Model Server and a new script step called Fine-Tune Model, Claris is bringing this method into an environment in which it would not have been expected: a relational database.

What sounds unusual at first makes a lot of sense on closer inspection. Because what does good fine-tuning need?

  • Structured data,
  • a stable environment,
  • clear parameters,
  • and a defined application context.

FileMaker offers all of this - which is why the integration of LoRA in this environment does not look like a foreign body, but rather like a logical extension.

Training without a terminal - but not without control

In my article, I describe in detail what the training process in FileMaker feels like:

  • data input directly from existing tables or JSONL files,
  • hyperparameters such as learning rate or layer depth, controlled directly in the script,
  • completely local operation on Apple Silicon - no cloud, no upload,
  • and above all: results that are reproducible and suitable for everyday use.
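To make the JSONL side of this concrete: fine-tuning data is conventionally stored as one JSON object per line. The sketch below shows how records from a table could be exported in that shape. The `prompt`/`completion` field names are an illustrative assumption, not a documented FileMaker requirement - check the Claris documentation for the exact schema the Fine-Tune Model script step expects.

```python
# Hedged sketch: exporting question/answer records to a JSONL file.
# One JSON object per line; field names are illustrative only.
import json

records = [
    {"question": "What does LoRA stand for?",
     "answer": "Low-Rank Adaptation."},
    {"question": "Where does the training run?",
     "answer": "Locally, on the AI Model Server."},
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        line = {"prompt": rec["question"], "completion": rec["answer"]}
        f.write(json.dumps(line, ensure_ascii=False) + "\n")

# Each line of the file is an independent JSON object.
with open("training_data.jsonl", encoding="utf-8") as f:
    lines = [json.loads(l) for l in f]
print(len(lines))  # 2
```

In FileMaker itself this export step would of course be unnecessary when training directly from existing tables; the file format only matters when data comes from outside.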

Of course there are limits. FileMaker does not (yet) allow multi-model serving, layer freeze strategies or export to other formats such as GGUF or ONNX. It is not a research tool, but a tool for clear use cases - such as adapting language models to company-specific terms, responses, product descriptions or internal dialog structures.

And therein lies the charm: it works. Stable. Repeatable. And faster than I ever thought possible.



Who should take a closer look - and why?

This article is aimed at anyone who wants not just to understand AI, but to use it:

  • Managing directors who want to harmonize data protection and efficiency.
  • Developers who do not want to start from scratch every time.
  • Strategists who realize that AI can not only be "bought in" externally, but also be trained internally.

With FileMaker 2025, the fine-tuning of language models becomes part of the workflow - not as a foreign body, but as a real tool. This is a quiet but lasting change that shows how far AI has come in terms of everyday usability.

To the article:
From terminal to script: FileMaker 2025 makes LoRA fine-tuning suitable for everyday use

In the next article, I will describe how a language model can be trained in practice with FileMaker and will also provide a corresponding example script.
