Ollama meets Qdrant: A local memory for your AI on the Mac

Local AI with memory - no cloud, no subscription, no detours

In a previous article I explained how to install and configure Ollama on the Mac. If you have already completed that step, you now have a powerful local language model - such as Mistral, Llama 3 or another compatible model - that can be addressed via a REST API.
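As a quick check that your model responds, a minimal call to Ollama's REST API can look like the sketch below. Port 11434 is Ollama's default; the model name mistral is an assumption - use whatever model you have pulled.

```python
import json
import urllib.request

# Ask the local Ollama server for a completion (default port 11434).
# "mistral" is an assumption - substitute any model you have pulled.
payload = {
    "model": "mistral",
    "prompt": "Explain in one sentence what a vector database is.",
    "stream": False,  # return the full answer as a single JSON response
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())

print(answer["response"])
```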

On its own, however, the model only "knows" what is in the current prompt. It does not remember previous conversations. What is missing is a memory.
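To make the idea concrete, here is a minimal sketch of the memory pattern this article builds towards: text is converted into vectors via Ollama's embedding endpoint, stored in Qdrant, and retrieved by similarity before the next prompt. The details are assumptions for illustration - the embedding model nomic-embed-text (768 dimensions), the collection name chat_memory, and a Qdrant instance on its default port 6333, accessed via the qdrant-client Python package.

```python
import json
import urllib.request

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

OLLAMA = "http://localhost:11434"


def embed(text: str) -> list[float]:
    """Turn text into a vector via Ollama's embedding endpoint.
    nomic-embed-text (768 dimensions) is an assumption - any pulled
    embedding model works as long as the collection size matches."""
    req = urllib.request.Request(
        f"{OLLAMA}/api/embeddings",
        data=json.dumps({"model": "nomic-embed-text", "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]


# Qdrant running locally on its default port (e.g. via Docker).
client = QdrantClient(host="localhost", port=6333)

# Hypothetical collection for conversation snippets; the vector size
# (768) must match the embedding model used above.
if not client.collection_exists("chat_memory"):
    client.create_collection(
        collection_name="chat_memory",
        vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    )

# Remember something ...
note = "The user's favourite editor is Vim."
client.upsert(
    collection_name="chat_memory",
    points=[PointStruct(id=1, vector=embed(note), payload={"text": note})],
)

# ... and recall it later: fetch the stored entries most similar
# to the new prompt, ready to be pasted into the model's context.
hits = client.search(
    collection_name="chat_memory",
    query_vector=embed("Which editor does the user prefer?"),
    limit=3,
)
for hit in hits:
    print(hit.score, hit.payload["text"])
```

Cosine distance is a common default for text embeddings; the retrieved payload texts are the "memories" you would prepend to the model's next prompt.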

Read more: Local AI on the Mac: How to install a language model with Ollama

Local AI on the Mac has long been practical - especially on Apple Silicon machines (M series). With Ollama you get a lean runtime environment for many open-source language models (e.g. Llama 3.1/3.2, Mistral, Gemma, Qwen). The current Ollama version also ships with a user-friendly app that lets you set up a local language model on your Mac with a few clicks. The linked article offers a pragmatic guide from installation to the first prompt, with practical tips on where things typically go wrong.