While MLX originally started as an experimental framework from Apple Research, a quiet but significant development has taken place in recent months: with the release of FileMaker 2025, Claris has firmly integrated MLX into FileMaker Server as native AI infrastructure for Apple Silicon. This means that anyone working on a Mac with Apple Silicon can not only run MLX models locally, but also use them directly in FileMaker, with native functions and without any intermediate layers.
MLX
MLX is an open-source machine learning framework from Apple that makes it possible to run powerful AI models (Large Language Models, or LLMs) directly and locally on Apple Silicon Macs, without an internet connection or cloud dependencies. MLX stands for maximum control, data protection and independence: users load a model onto their Mac and can use it offline, for example for text generation, analysis or creative tasks. MLX is a significant step towards sovereign, locally usable artificial intelligence.
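To make the "load a model and use it offline" workflow concrete, a minimal session with Apple's `mlx-lm` tooling might look like the sketch below. This is an illustrative example, not part of the FileMaker integration itself; the model name is one of the quantized community conversions published on Hugging Face, and any similarly packaged model can be substituted.

```shell
# Install the MLX language-model tooling (requires an Apple Silicon Mac)
pip install mlx-lm

# Download a quantized model and generate text locally.
# After the first download the model is cached on disk,
# so subsequent runs work fully offline.
mlx_lm.generate \
  --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
  --prompt "Summarize the benefits of running LLMs locally."
```

Because inference runs entirely on-device, no prompt or response data leaves the Mac, which is exactly the data-protection property described above.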
MLX on Apple Silicon as local AI compared with Ollama and similar tools
At a time when centralized AI services such as ChatGPT, Claude or Gemini dominate the headlines, many professional users are increasingly looking for an alternative: a local, self-controlled AI infrastructure. Especially for creative processes, sensitive data or recurring workflows, a local solution is often the more sustainable and secure option.
Anyone working on a Mac with Apple Silicon (M1, M2, M3 or M4) now has access to surprisingly powerful tools for running language models directly on the device. At the center of this is a still little-known component: MLX, a machine learning framework developed by Apple that is likely to play an increasingly central role in the company's AI ecosystem in the coming years.