While MLX originally started as an experimental framework from Apple's machine learning research, a quiet but significant development has taken place in recent months: with the release of FileMaker 2025, Claris has firmly integrated MLX into FileMaker Server as native AI infrastructure for Apple Silicon. This means that anyone working on a Mac with Apple Silicon can not only run MLX models locally, but also use them directly in FileMaker, with native functions and without any intermediate layers.
Vector database
Articles on vector databases and their importance for AI-supported systems. Topics include data organization, semantic search and integration in business applications.
RAG with Ollama and Qdrant as a universal search engine for own data
In an increasingly confusing world of information, it is becoming ever more important to make your own databases searchable in a targeted way: not via classic full-text search, but through semantically relevant answers. This is exactly where the principle of a RAG database comes into play, an AI-supported search solution consisting of two central components: a vector database for retrieval and a language model for generation.
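The interplay of the two components can be sketched with a self-contained toy example. The bag-of-words "embedding" and the in-memory index here are stand-ins: a real setup would use an embedding model served by Ollama and store the vectors in Qdrant. Document texts and function names are illustrative, not from the articles.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term count.
    # A real setup would call an embedding model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Component 1: the retriever - finds the stored documents most similar to the question.
documents = [
    "Qdrant is a vector database for semantic search.",
    "Ollama runs language models locally on the Mac.",
    "FileMaker is a database platform by Claris.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Component 2: the generator - a language model that answers from the retrieved context.
# Only the prompt construction is shown; the actual call would go to the local model.
def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Which vector database supports semantic search?"))
```

The key design point is the division of labor: the vector database narrows millions of documents down to a handful of relevant passages, and the language model only ever sees that small, relevant slice.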
Ollama meets Qdrant: A local memory for your AI on the Mac
Local AI with memory - without cloud, without subscription, without detour
In a previous article I explained how to install Ollama on the Mac. If you have already completed this step, you now have a powerful local language model, such as Mistral, LLaMA 3 or another compatible model, that can be addressed via its REST API.
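Addressing the model over the REST API needs nothing beyond the Python standard library. The sketch below targets Ollama's default endpoint (`http://localhost:11434/api/generate`); the model name `mistral` is just an example and assumes you have pulled that model.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama to return one complete JSON response
    # instead of a stream of partial tokens.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns the generated text in the "response" field.
        return json.loads(resp.read())["response"]

# Example call (requires a running Ollama instance, e.g. after `ollama pull mistral`):
#   answer = ask("mistral", "In one sentence: what is a vector database?")
```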
On its own, however, the model only "knows" what is in the current prompt. It does not remember previous conversations. What is missing is a memory.
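Such a memory can be sketched as a store of past exchanges that is searched on every new prompt, with the best matches prepended to what the model sees. The word-overlap scoring and the `Memory` class here are illustrative stand-ins; in the setup the article describes, the entries would live as embedding vectors in Qdrant.

```python
import re
from collections import Counter

def tokens(text: str) -> Counter:
    # Toy stand-in for an embedding; a real setup would store vectors in Qdrant.
    return Counter(re.findall(r"\w+", text.lower()))

class Memory:
    """Keeps past exchanges and recalls the ones most similar to a new prompt."""

    def __init__(self) -> None:
        self.entries: list[str] = []

    def remember(self, text: str) -> None:
        self.entries.append(text)

    def recall(self, prompt: str, k: int = 2) -> list[str]:
        q = tokens(prompt)
        scored = sorted(
            self.entries,
            key=lambda e: sum((q & tokens(e)).values()),  # shared-word count
            reverse=True,
        )
        return scored[:k]

memory = Memory()
memory.remember("User's favorite editor is Vim.")
memory.remember("User works on a FileMaker solution for invoicing.")

prompt = "Which editor do I prefer?"
recalled = memory.recall(prompt, k=1)
# The recalled entries would be prepended to the prompt sent to the model,
# so the model "remembers" earlier conversations without retraining.
print(recalled)
```

This is the whole trick behind giving a stateless model a memory: nothing is stored inside the model itself; relevant history is simply retrieved and injected into each prompt.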