Local AI with memory - without cloud, without subscription, without detour
In a previous article I explained how to install and configure Ollama on the Mac. If you have already completed this step, you now have a powerful local language model - such as Mistral, LLaMA3, or another compatible model - that can be addressed via a REST API.
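To make that concrete, here is a minimal sketch of such a REST call in Python. It assumes Ollama is running on its default port 11434 and that a model named "mistral" has been pulled; substitute whichever model you installed.

```python
# Minimal sketch: query the local Ollama REST API.
# Assumes Ollama is running on the default port 11434
# and the "mistral" model is installed locally.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",   # any locally installed model works here
        "prompt": "Explain in one sentence what a language model is.",
        "stream": False,      # return the whole answer as one JSON object
    },
)
print(response.json()["response"])
```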
On its own, however, the model only "knows" what is in the current prompt. It does not remember previous conversations. What is missing is a memory.
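You can see this statelessness directly with two independent calls, as in the small sketch below (again assuming a local "mistral" model as an example). Because no conversation history is sent along, the second request knows nothing about the first.

```python
# Demonstration of the missing memory: two independent requests
# to the same endpoint. No history is passed, so the second call
# cannot refer back to the first.
import requests

def ask(prompt: str) -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
    )
    return r.json()["response"]

print(ask("My name is Anna. Please remember that."))
print(ask("What is my name?"))  # the model cannot know - the first prompt is gone
```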