In an increasingly confusing flood of information, it is becoming ever more important to make your own data searchable in a targeted way - not via classic full-text search, but through semantically relevant answers. This is exactly where the principle of the RAG database (Retrieval-Augmented Generation) comes into play - an AI-supported search solution consisting of two central components: a vector database that retrieves semantically relevant passages, and a language model that turns them into answers.
Ollama meets Qdrant: A local memory for your AI on the Mac
Local AI with memory - no cloud, no subscription, no detours
In a previous article I explained how to install Ollama on the Mac. If you have already completed this step, you now have a powerful local language model - such as Mistral, LLaMA3 or another compatible model - that can be addressed via its REST API.
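As a quick check that such a model actually responds, here is a minimal sketch of a REST call using only the Python standard library. It assumes Ollama is running on its default port 11434 and that a model named mistral has already been pulled (ollama pull mistral):

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default; "mistral" is assumed
# to be a model you have already pulled via `ollama pull mistral`.
payload = json.dumps({
    "model": "mistral",
    "prompt": "In one sentence: what is a vector database?",
    "stream": False,  # return the full answer as a single JSON response
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```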
However, on its own the model only "knows" what is in the current prompt. It does not remember previous conversations. What is missing is a memory.
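Conceptually, that memory is exactly what the rest of this setup provides: relevant passages are retrieved from Qdrant and prepended to the prompt before the model answers. The following sketch illustrates the idea; it assumes a Qdrant instance on its default port 6333, a hypothetical collection named notes whose points carry a "text" payload field, and the nomic-embed-text embedding model available in Ollama:

```python
import json
import urllib.request

from qdrant_client import QdrantClient


def embed(text: str) -> list[float]:
    """Get an embedding vector from Ollama (default port 11434).

    "nomic-embed-text" is an assumption - any embedding model pulled
    into Ollama works, as long as the collection was built with it.
    """
    payload = json.dumps(
        {"model": "nomic-embed-text", "prompt": text}
    ).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]


question = "What did we decide about the backup strategy?"

# Retrieve the most similar stored snippets from Qdrant ("notes" is a
# hypothetical collection filled earlier with embedded text snippets).
client = QdrantClient(host="localhost", port=6333)
hits = client.search(
    collection_name="notes",
    query_vector=embed(question),
    limit=3,
)
context = "\n".join(hit.payload["text"] for hit in hits)

# The retrieved context acts as the model's "memory": it is simply
# prepended to the prompt that is then sent to the language model.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The design point is that the model itself stays stateless; everything it should "remember" lives in the vector database and is injected into the prompt at query time.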