{"id":2502,"date":"2025-08-20T13:39:23","date_gmt":"2025-08-20T13:39:23","guid":{"rendered":"https:\/\/www.markus-schall.de\/?p=2502"},"modified":"2026-03-16T08:38:18","modified_gmt":"2026-03-16T08:38:18","slug":"ollama-meets-qdrant-a-local-memory-for-your-ki-on-the-mac","status":"publish","type":"post","link":"https:\/\/www.markus-schall.de\/en\/2025\/08\/ollama-meets-qdrant-a-local-memory-for-your-ki-on-the-mac\/","title":{"rendered":"Ollama meets Qdrant: A local memory for your AI on the Mac"},"content":{"rendered":"<h2>Local AI with memory - without cloud, without subscription, without detour<\/h2>\n<p>In a <a href=\"https:\/\/www.markus-schall.de\/en\/2025\/08\/local-ki-on-the-mac-like-this-installo-create-a-language-model-with-ollama\/\"><strong>previous article<\/strong><\/a> I explained how to install and configure Ollama on the Mac. If you have already completed this step, you now have a powerful local language model - such as Mistral, LLaMA3 or another compatible model - that can be addressed via its REST API.<\/p>\n<p>However, on its own the model only \"knows\" what is in the current prompt. It does not remember previous conversations. 
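To make this statelessness concrete, here is a toy sketch (my illustration, not Ollama itself): the stand-in `stateless_answer` function plays the role of the model and can only use what is inside the prompt string it receives, so any "memory" has to be pasted back into each new prompt by hand.

```python
# Toy stand-in for a stateless language model: it can only use
# whatever happens to be inside the current prompt string.
def stateless_answer(prompt: str) -> str:
    if "Markus" in prompt:
        return "Your name is Markus."
    return "I have no idea what your name is."

# Without context, the "model" knows nothing from earlier turns
print(stateless_answer("What is my name?"))
# -> I have no idea what your name is.

# The only cure is to inject the history into each new prompt yourself
history = ["User: My name is Markus."]
print(stateless_answer("\n".join(history) + "\nUser: What is my name?"))
# -> Your name is Markus.
```

This is exactly the gap the rest of this article fills: Qdrant becomes the place where such history is stored and retrieved semantically.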
<strong>What is missing is a memory<\/strong>.<!--more--><\/p>\n<hr \/>\n<p>This is exactly why we use Qdrant, a modern semantic vector database.<br \/>\nIn this article I will show you step by step:<\/p>\n<ul>\n<li>how to install Qdrant on the Mac (via Docker)<\/li>\n<li>how to create embeddings with Python<\/li>\n<li>how to save, search and 
integrate content into the Ollama workflow<\/li>\n<li>and what a complete prompt\u2192memory\u2192response sequence looks like<\/li>\n<\/ul>\n<h2>Why Qdrant?<\/h2>\n<p>Qdrant does not store plain text, but vectors that represent the meaning of a text numerically. This means that content can be retrieved not only by exact match, but also by semantic similarity - even if the wording varies.<\/p>\n<p>Combining Ollama + Qdrant therefore gives you:<\/p>\n<p>A local language model with long-term memory - secure, controllable and expandable.<\/p>\n<h3>Prerequisites<\/h3>\n<ul>\n<li>Ollama is installed and running (\u2192 e.g. ollama run mistral)<\/li>\n<li>Docker is installed: <a href=\"https:\/\/www.docker.com\/products\/docker-desktop\" target=\"_blank\" rel=\"noopener\">https:\/\/www.docker.com\/products\/docker-desktop<\/a><\/li>\n<li>Python 3.9+<\/li>\n<\/ul>\n<h3>Install the required Python packages:<\/h3>\n<pre class=\"notranslate\" data-no-translation=\"\">pip install qdrant-client sentence-transformers<\/pre>\n<h3>Start Qdrant (Docker)<\/h3>\n<pre class=\"notranslate\" data-no-translation=\"\">docker run -p 6333:6333 -p 6334:6334 qdrant\/qdrant<\/pre>\n<p>Qdrant is then reachable at:<\/p>\n<p><strong>http:\/\/localhost:6333 <\/strong>(REST API)<\/p>\n<p>http:\/\/localhost:6334 (gRPC, not required for this article)<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2506\" src=\"https:\/\/www.markus-schall.de\/wp-content\/uploads\/Docker-Qdrant-scaled.jpg\" alt=\"Qdrant on Docker under Apple macOS\" width=\"2560\" height=\"1541\" srcset=\"https:\/\/www.markus-schall.de\/wp-content\/uploads\/Docker-Qdrant-scaled.jpg 2560w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/Docker-Qdrant-300x181.jpg 300w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/Docker-Qdrant-1024x616.jpg 1024w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/Docker-Qdrant-768x462.jpg 768w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/Docker-Qdrant-1536x925.jpg 
1536w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/Docker-Qdrant-2048x1233.jpg 2048w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/Docker-Qdrant-498x300.jpg 498w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" \/><\/p>\n<h2>Python example for Ollama + Qdrant<\/h2>\n<p>We now write a simple basic script that:<\/p>\n<ul>\n<li>accepts the user prompt<\/li>\n<li>generates an embedding vector from it<\/li>\n<li>searches for semantically similar memories in Qdrant<\/li>\n<li>generates the response with this context via Ollama<\/li>\n<li>saves the new conversation as a memory<\/li>\n<\/ul>\n<p>Python script: <strong>ollama_memory.py<\/strong><\/p>\n<pre class=\"notranslate\" data-no-translation=\"\">import requests\r\nfrom sentence_transformers import SentenceTransformer\r\nfrom qdrant_client import QdrantClient\r\nfrom qdrant_client.models import Distance, VectorParams, PointStruct\r\n\r\n# Settings\r\nOLLAMA_URL = \"http:\/\/localhost:11434\/api\/generate\"\r\nCOLLECTION_NAME = \"memory\"\r\nVECTOR_SIZE = 384  # for 'all-MiniLM-L6-v2'\r\n\r\n# Load the embedding model\r\nembedder = SentenceTransformer(\"all-MiniLM-L6-v2\")\r\n\r\n# Connect to Qdrant\r\nqdrant = QdrantClient(host=\"localhost\", port=6333)\r\n\r\n# Create the collection (once)\r\ndef create_collection():\r\n    existing = [c.name for c in qdrant.get_collections().collections]\r\n    if COLLECTION_NAME not in existing:\r\n        qdrant.recreate_collection(\r\n            collection_name=COLLECTION_NAME,\r\n            vectors_config=VectorParams(size=VECTOR_SIZE, distance=Distance.COSINE)\r\n        )\r\n\r\n# Add an entry to the memory\r\ndef add_to_memory(text: str):\r\n    vector = embedder.encode(text).tolist()\r\n    # abs() because Qdrant point IDs must be unsigned integers (or UUIDs)\r\n    point = PointStruct(id=abs(hash(text)), vector=vector, payload={\"text\": text})\r\n    qdrant.upsert(collection_name=COLLECTION_NAME, points=[point])\r\n\r\n# Search the memory\r\ndef search_memory(query: str, top_k=3):\r\n    vector = embedder.encode(query).tolist()\r\n    hits = qdrant.search(\r\n        collection_name=COLLECTION_NAME,\r\n        query_vector=vector,\r\n        limit=top_k\r\n    )\r\n    return [hit.payload[\"text\"] for hit in hits]\r\n\r\n# Send the request to Ollama\r\ndef query_ollama(context: list[str], user_prompt: str):\r\n    prompt = \"\\n\\n\".join(context + [user_prompt])\r\n    response = requests.post(OLLAMA_URL, json={\r\n        \"model\": \"mistral\",\r\n        \"prompt\": prompt,\r\n        \"stream\": False\r\n    })\r\n    return response.json()[\"response\"]\r\n\r\n# Main flow\r\ndef main():\r\n    create_collection()\r\n    print(\"Ask the AI:\")\r\n    user_prompt = input(\"&gt; \")\r\n    context = search_memory(user_prompt)\r\n    answer = query_ollama(context, user_prompt)\r\n    print(\"\\nAnswer from Ollama:\")\r\n    print(answer.strip())\r\n\r\n    # Save the conversation\r\n    full_entry = f\"Question: {user_prompt}\\nAnswer: {answer.strip()}\"\r\n    add_to_memory(full_entry)\r\n\r\nif __name__ == \"__main__\":\r\n    main()<\/pre>\n<h2>Notes on practice<\/h2>\n<p>You can also use other embedding models, e.g. via Ollama (nomic-embed-text) or Hugging Face models.<\/p>\n<p>Qdrant supports payload filters, time ranges and fields (very useful for later expansion!).<\/p>\n<p>The hash(text) ID is sufficient for simple tests; for production applications you should use UUIDs.<\/p>\n<h2>Local AI with memory - and what you can do with it<\/h2>\n<p>In the previous chapters, I showed you how to build a real, local AI memory on a Mac with Ollama and Qdrant. A setup that works without the cloud, without a subscription and without external servers - fast, secure, private.<\/p>\n<h3>But what now?<\/h3>\n<p>What can this technology actually be used for? What is possible with it - today, tomorrow, the day after tomorrow?<\/p>\n<p>The answer: quite a lot.<\/p>\n<p>Because what you have here is more than just a chatbot. It's a platform-independent thinking machine with a long-term memory. And that opens doors.<\/p>\n<h3>\ud83d\udd0d 1. 
Personal knowledge database<\/h3>\n<p>You can use Ollama + Qdrant as your personal long-term memory.<br \/>\nDocuments, notes from conversations, ideas - everything you tell it can be semantically stored and retrieved.<\/p>\n<p><strong>Example:<\/strong><\/p>\n<blockquote><p>\"What was my business idea from last Thursday again?\"<\/p>\n<p>\"Which customers wanted an upgrade in March?\"<\/p><\/blockquote>\n<p>Instead of searching through folders, you simply ask your system. What's particularly exciting is that it also works with imprecise questions, because Qdrant searches semantically, not just for keywords.<\/p>\n<h3>\ud83d\udcc4 2. Automatic logging and summaries<\/h3>\n<p>In combination with audio or text input, the system can keep a running log:<\/p>\n<ul>\n<li>Notes in meetings<\/li>\n<li>Calls with customers<\/li>\n<li>Daily logs or project histories<\/li>\n<\/ul>\n<p>This data is automatically fed into the Qdrant memory and can therefore be queried later like an assistant:<\/p>\n<blockquote><p>\"What did Mr. Meier say again about the delivery?\"<\/p>\n<p>\"What was the process like in project XY?\"<\/p><\/blockquote>\n<h3>\ud83e\udde0 3. Personal coach or diary assistant<\/h3>\n<p>By regularly jotting down thoughts, moods or decisions, you can create a reflective companion:<\/p>\n<blockquote><p>\"What was my biggest progress this month?\"<\/p>\n<p>\"How did I react to setbacks back then?\"<\/p><\/blockquote>\n<p>The system gets to know you over time - and becomes a real mirror, not just a chatbot.<\/p>\n<h3>\ud83d\udcbc 4. 
Business applications with FileMaker<\/h3>\n<p>If you - like me - use FileMaker, you can connect this setup directly:<\/p>\n<ul>\n<li>Send prompts from FileMaker<\/li>\n<li>Automatically retrieve and save answers<\/li>\n<li>Control memory access directly via REST API or shell script<\/li>\n<\/ul>\n<p>This creates an extremely powerful combination:<\/p>\n<ul>\n<li><strong>FileMaker<\/strong> = Front end, user interface, control center<\/li>\n<li><strong>Ollama<\/strong> = Language intelligence<\/li>\n<li><strong>Qdrant<\/strong> = Semantic long-term memory<\/li>\n<\/ul>\n<p>The result: a genuine AI component for FileMaker solutions - local, secure, individual.<\/p>\n<h3>\ud83d\udee0\ufe0f 5. Support in everyday life: reminders, ideas, recommendations<\/h3>\n<blockquote><p>\"Remind me of this idea next week\"<\/p>\n<p>\"What books have I already recommended to you?\"<\/p>\n<p>\"What could I offer Mr. M\u00fcller next?\"<\/p><\/blockquote>\n<p>With dedicated memory logic (timestamps, categories, users), you can structure your memory deliberately and use it for many areas of life and business.<\/p>\n<h3>\ud83e\udd16 6. Basis for an agent system<\/h3>\n<p>If you think ahead, you can also build agent-like systems with this setup:<\/p>\n<ul>\n<li>AI takes over simple tasks<\/li>\n<li>AI recognizes patterns over time<\/li>\n<li>AI gives proactive hints<\/li>\n<\/ul>\n<p><strong>Example:<\/strong><\/p>\n<blockquote><p>\"You've asked the same question four times this week - do you want to save a note?\"<\/p>\n<p>\"A striking number of customers have mentioned this product - shall I summarize that for you?\"<\/p><\/blockquote>\n<h3>\ud83c\udf10 7. 
Integration with other tools<\/h3>\n<p>The system can be easily linked with other tools:<\/p>\n<ul>\n<li><strong>Neo4j<\/strong> to depict semantic relationships graphically<\/li>\n<li><strong>Files &amp; PDFs<\/strong> to index content automatically<\/li>\n<li><strong>Mail parsers<\/strong> to analyze and memorize emails<\/li>\n<li><strong>Voice assistants<\/strong> to interact via voice<\/li>\n<\/ul>\n<h3>\ud83d\udd10 8. Everything remains local - and under control<\/h3>\n<p>The biggest advantage: you decide what is saved. You decide how long it stays saved. And: it never leaves your computer if you don't want it to. In a world where many people blindly rely on cloud AI, this is a powerful counterbalance - especially for freelancers, developers, authors and entrepreneurs.<\/p>\n<hr \/>\n<h3>Current survey on the use of local AI systems<\/h3>\n<p><strong>What do you think of locally running AI software such as MLX or Ollama?<\/strong><\/p>\n<ul>\n<li>Ingenious - finally independent of the cloud (126 votes)<\/li>\n<li>Interesting, but (still) too complicated (24 votes)<\/li>\n<li>I will try it out soon (25 votes)<\/li>\n<li>I don't need it - the cloud is enough for me (5 votes)<\/li>\n<li>I don't know exactly what that's about (3 votes)<\/li>\n<\/ul>\n<hr \/>\n<h2>Tame Ollama + Qdrant: How to give your local AI structure, rules and fine-tuning<\/h2>\n<p>Anyone who has taken the trouble to install Ollama and Qdrant locally on the Mac has already achieved a lot. You now have:<\/p>\n<ul>\n<li>A local language AI<\/li>\n<li>A semantic memory<\/li>\n<li>And a working pipeline that maps prompt \u2192 memory \u2192 Ollama \u2192 response<\/li>\n<\/ul>\n<p>But anyone who works with it quickly realizes: it needs rules. Structure. Order.<br \/>\nBecause without control, your assistant quickly becomes a chatterbox that remembers too much, constantly repeats itself or drags up irrelevant memories.<\/p>\n<h2>\ud83e\udded What's still missing?<\/h2>\n<p>An orchestra also has a conductor. And that's exactly your job now: to control instead of just use.<\/p>\n<h3>Module 1: A \"router\" for memory logic<\/h3>\n<p>Instead of blindly saving everything or searching through everything, you should decide in advance whether anything should be saved or loaded at all. You can do this, for example, with a simple relevance router that you place between the prompt and the memory:<\/p>\n<p><strong>Example:<\/strong> Check relevance via a prompt to Ollama itself<\/p>\n<pre class=\"notranslate\" data-no-translation=\"\">def is_relevant_for_memory(prompt, response):\r\n    pr\u00fcf_prompt = f\"\"\"\r\nThe user asked: \"{prompt}\"\r\nThe AI answered: \"{response}\"\r\nShould this dialog be remembered long-term? Answer only with 'Yes' or 'No'.\r\n\"\"\"\r\n    result = query_ollama([], pr\u00fcf_prompt).strip().lower()\r\n    return result.startswith(\"yes\")<\/pre>\n<p>So you give Ollama the task of evaluating its own answer - and only if the dialog is classified as relevant do you save it in Qdrant.<\/p>\n<h3>Module 2: Exclude older messages (context limitation)<\/h3>\n<p>With longer sessions in particular, it becomes problematic if old messages keep reappearing in the context. The model does not forget - it gets bogged down.<\/p>\n<p><strong>Solution:<\/strong> Limit the context window.<\/p>\n<p>You can do this in two ways:<\/p>\n<p><strong>Method 1<\/strong>: Limit the number of hits<\/p>\n<pre class=\"notranslate\" data-no-translation=\"\">context = search_memory(user_prompt, top_k=3)<\/pre>\n<p>Only what is semantically relevant is loaded here - not everything.<\/p>\n<p><strong>Method 2<\/strong>: Limit by time<\/p>\n<pre class=\"notranslate\" data-no-translation=\"\">from datetime import datetime, timedelta\r\nfrom qdrant_client.models import Filter, FieldCondition, Range\r\n\r\n# Only messages from the last 7 days\r\n# (assumes each point stores a numeric Unix timestamp in its payload)\r\ncutoff = (datetime.utcnow() - timedelta(days=7)).timestamp()\r\ntime_filter = Filter(\r\n    must=[\r\n        FieldCondition(key=\"timestamp\", range=Range(gte=cutoff))\r\n    ]\r\n)<\/pre>\n<p>You can therefore \"cut off\" the past if the system reaches too far back.<\/p>\n<h3>Module 3: Introducing context weights and labels<\/h3>\n<p>Not every entry in your memory is equally valuable. You can give entries weights or categories:<\/p>\n<ul>\n<li><strong>Fixed<\/strong> (e.g. \"User is called Markus\")<\/li>\n<li><strong>Temporary<\/strong> (e.g. \"Today is Tuesday\")<\/li>\n<li><strong>Situational<\/strong> (e.g. \"Chat from today 10:30 am\")<\/li>\n<\/ul>\n<p>Qdrant supports so-called payloads - i.e. additional information per entry. 
This allows you to filter or prioritize later.<\/p>\n<h3>Module 4: Fine-tuning via the prompt<\/h3>\n<p>The prompt itself is a powerful control unit.<br \/>\nHere are a few tricks you can use to make Ollama smarter:<\/p>\n<p><strong>Example prompt with instructions:<\/strong><\/p>\n<blockquote><p>You are a local assistant with a semantic memory. If you find several memories, only use the three most relevant ones. Do not refer to information older than 10 days unless it is explicitly marked. Ignore trivial memories such as \"Good morning\" or \"Thank you\". Answer precisely and in the style of an experienced consultant.<\/p><\/blockquote>\n<p>This allows you to fine-tune behavior directly in the prompt itself - without new models, without training.<\/p>\n<p>And: you can generate the prompt dynamically - depending on the situation.<\/p>\n<h3>Module 5: Storage hygiene<\/h3>\n<p>As the memory grows, it becomes confusing.<br \/>\nA simple maintenance script that deletes irrelevant or duplicate content is worth its weight in gold.<\/p>\n<p><strong>Example:<\/strong><\/p>\n<blockquote><p>\"Forget everything to do with 'weather'.\"<\/p>\n<p>\"Delete entries that are older than 3 months and have never been retrieved.\"<\/p><\/blockquote>\n<p>Qdrant supports this via its API - and you can automate it, for example once a week.<\/p>\n<h3>Module 6: FileMaker as a control panel<\/h3>\n<p>If you - like me - work with FileMaker, you can control all of this remotely via the REST API:<\/p>\n<ul>\n<li>Send prompts<\/li>\n<li>Retrieve context<\/li>\n<li>Receive answers<\/li>\n<li>Have answers evaluated<\/li>\n<li>Save or forget<\/li>\n<\/ul>\n<p>All you need is a small REST module in FileMaker (Insert from URL with JSON) and a few scripts.<\/p>\n<p>The result: an interface that lets you control your AI like a living notebook - but with intelligence.<\/p>\n<h2>\ud83d\udd1a Conclusion: AI is only as good as its leadership<\/h2>\n<p>Ollama is powerful. Qdrant is flexible. 
But without clear rules, both become an unstructured pile of data. The trick is not to store everything - but to keep only what is relevant available and to think in a targeted way instead of just remembering.<\/p>\n<h3>New article series: ChatGPT histories as a knowledge base for your AI<\/h3>\n<p><a href=\"https:\/\/www.markus-schall.de\/en\/2026\/03\/chatgpt-data-export-explains-how-your-ki-chats-become-a-personal-knowledge-system\/\"><img loading=\"lazy\" decoding=\"async\" class=\"alignright size-medium wp-image-5296\" src=\"https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport-300x200.jpg\" alt=\"ChatGPT data export\" width=\"300\" height=\"200\" srcset=\"https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport-300x200.jpg 300w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport-1024x683.jpg 1024w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport-768x512.jpg 768w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport-18x12.jpg 18w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport-200x133.jpg 200w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport-120x80.jpg 120w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport-264x176-264x176.jpg 264w, https:\/\/www.markus-schall.de\/wp-content\/uploads\/ChatGPT-Datenexport.jpg 1536w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>If you have already built up your own AI memory with Ollama and Qdrant, it is worth taking a look at a new series of articles that starts right here. It shows how to <a href=\"https:\/\/www.markus-schall.de\/en\/2026\/03\/chatgpt-data-export-explains-how-your-ki-chats-become-a-personal-knowledge-system\/\"><strong>integrate the ChatGPT data export into this system<\/strong><\/a>. Many users don't even know that they can export their entire chat history - and that this data is a valuable source of knowledge. 
In this series, I will show you how to analyze these conversations, convert them into embeddings and then import them into a vector database. This allows your local AI to access previous conversations later and use them as context for answers. In this way, a personal knowledge archive grows step by step from individual dialogs.<\/p>\n<hr \/>\n<h2>Frequently asked questions<\/h2>\n<ol>\n<li><strong>Why does a local AI need a \"memory\" at all? Is the language model not enough?<\/strong><br \/>\nA language model only works with the current prompt and the context that you are currently giving it. It therefore does not permanently remember previous conversations, documents or information. This is exactly where a local memory comes in. An additional database allows the AI to save previous content and retrieve it when required. The model then not only receives your current question when answering, but also relevant information from this memory. This results in much more consistent and informed answers. Without such a system, a language model basically remains a pure text generator without any long-term knowledge of your own data or projects.<\/li>\n<li><strong>What exactly is Qdrant - and why is it used in this system?<\/strong><br \/>\nQdrant is a modern vector database that has been specially developed for semantic searches. Unlike traditional databases, it stores information not just as text, but as so-called vectors - mathematical representations of meaning. This allows it to search content not only for identical words, but also for proximity of content. So if you ask a question, Qdrant can find suitable text passages from your knowledge base, even if they do not contain exactly the same terms. 
In combination with a language model, this creates a kind of intelligent memory for the AI.<\/li>\n<li><strong>What does the term \"RAG\", which is often used in this context, mean?<\/strong><br \/>\nRAG stands for \"Retrieval-Augmented Generation\". This is a technique in which a language model retrieves additional information from a database before giving an answer. The model therefore not only generates its answer from its training, but also supplements it with suitable information from a knowledge source. This method solves a typical problem of language models: they only know what they learned during training. RAG allows them to access current or personal data instead - such as documentation, websites or their own notes.<\/li>\n<li><strong>How do Ollama and Qdrant actually work together?<\/strong><br \/>\nIn this setup, Ollama takes on the role of the language model, while Qdrant acts as a semantic memory. When you ask a question, Qdrant first searches for relevant text fragments. These results are then passed to the language model together with your question. The model uses this additional information to formulate a well-founded answer. The typical sequence is therefore: Prompt \u2192 Search in memory \u2192 Expand context \u2192 Generate answer.<\/li>\n<li><strong>What types of data can I include in this AI memory?<\/strong><br \/>\nBasically almost anything that can be converted into text. This includes documentation, websites, Markdown files, PDFs, database entries or even personal notes. The only important thing is that the content can be broken down into smaller text sections before it is saved in the database. These so-called \"chunks\" later form the basis for the semantic search. 
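The chunking step just described can be sketched in plain Python. The 500-character chunk size and 50-character overlap are common but arbitrary choices, not values prescribed by any tool:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split a text into overlapping character chunks so that the
    semantic search can later return individual sections."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

document = "A" * 1200  # stands in for any real document text
parts = chunk_text(document)
```

The overlap ensures that a sentence cut at a chunk boundary still appears intact in the neighboring chunk, which noticeably improves retrieval quality in practice.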
This allows the AI to specifically access individual relevant sections instead of having to search through entire documents.<\/li>\n<li><strong>Why is a vector database used instead of a normal text search?<\/strong><br \/>\nClassic search engines usually work with keywords. This means that they only find results that contain exactly the same terms. A vector database, on the other hand, searches for meaning. It can therefore also find texts that are similar in content, even if different words have been used. This is crucial for AI systems because questions are often formulated differently from the original documents. Semantic search makes the link between question and answer much more reliable.<\/li>\n<li><strong>How are texts actually converted into vectors?<\/strong><br \/>\nEmbedding models are used for this purpose. These models analyze texts and convert them into number vectors that represent their meaning. Each section of text is therefore given a mathematical representation in what is known as vector space. Similar content is closer together than completely different topics. If a question is asked later, it is also converted into a vector. Qdrant can then very quickly find the most similar entries in the memory.<\/li>\n<li><strong>Why is Qdrant often installed via Docker?<\/strong><br \/>\nDocker considerably simplifies the installation of complex software. Instead of setting up many individual dependencies manually, Qdrant simply runs in a container. This means that the installation works reliably on different systems and can be started or stopped easily. This method is particularly practical on the Mac because it keeps the system clean and provides a stable environment for the database at the same time.<\/li>\n<li><strong>Can I operate this system completely offline?<\/strong><br \/>\nYes, that is one of the biggest advantages of this architecture. Both the language model and the vector database run locally on your own computer. 
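The vector-space idea behind embeddings, as described in the questions above, can be illustrated without any external model. The three-dimensional vectors below are made up for the example; real embeddings come from an embedding model and have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """1.0 = same direction in vector space, around 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# made-up 3-dimensional "embeddings" for three texts
vec_question = [0.9, 0.1, 0.0]   # "How do I install Ollama?"
vec_match    = [0.8, 0.2, 0.1]   # "Ollama installation guide"
vec_other    = [0.0, 0.1, 0.9]   # "Recipe for apple pie"

sim_match = cosine_similarity(vec_question, vec_match)
sim_other = cosine_similarity(vec_question, vec_other)
```

The question and the matching guide point in nearly the same direction and score close to 1, while the unrelated text scores near 0 - this is the comparison a vector database performs, just over far more dimensions and entries.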
This means that no data is sent to external servers. This creates a completely private AI environment. This is a decisive advantage over cloud systems, especially for sensitive data or internal company documents.<\/li>\n<li><strong>How big can such a local AI memory become?<\/strong><br \/>\nThis depends above all on your storage space and the performance of the system. Modern vector databases can easily manage millions of text fragments. For many personal projects, however, just a few thousand documents are enough to create a very powerful knowledge system. The quality of the data structure is more important than the sheer quantity of information.<\/li>\n<li><strong>Can AI really \"learn\" with this system?<\/strong><br \/>\nNot in the classical sense. The language model itself is not retrained. Instead, the knowledge is stored outside the model and retrieved when required. Although this makes the AI appear capable of learning, it actually only accesses an ever-growing store of knowledge. This approach has one major advantage: new information can be added at any time without having to retrain the model.<\/li>\n<li><strong>What practical applications result from such a local AI memory?<\/strong><br \/>\nThe possibilities are amazingly diverse. For example, you can build up a personal knowledge database, make technical documentation searchable or have internal company documents analyzed. Authors, developers or researchers also benefit from this because they can make large amounts of information accessible in a structured way. Basically, a kind of personal research assistant is created that understands your own data.<\/li>\n<li><strong>Can I integrate several data sources at the same time?<\/strong><br \/>\nYes, Qdrant allows each text fragment to be given additional metadata, such as source, category or language. This allows different databases to be managed together. This metadata can even be specifically filtered during the search. 
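The metadata filtering just described can be sketched in plain Python. In the real setup Qdrant evaluates such payload filters server-side during the vector search; the field names and entries below are illustrative assumptions:

```python
# invented entries with metadata, as they might sit in the vector store
entries = [
    {"text": "The REST API returns JSON", "source": "docs",  "project": "ollama-memory"},
    {"text": "Meeting notes from March",  "source": "notes", "project": "ollama-memory"},
    {"text": "Old prototype ideas",       "source": "notes", "project": "archive"},
]

def filter_by_metadata(entries, **conditions):
    """Keep only entries whose metadata matches every given condition."""
    return [e for e in entries
            if all(e.get(key) == value for key, value in conditions.items())]

docs_only = filter_by_metadata(entries, source="docs")
current   = filter_by_metadata(entries, project="ollama-memory")
```

Combined with the semantic search itself, such filters let a single collection serve several projects or documentation sets without the results bleeding into each other.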
For example, the AI can only consider content from a specific documentation or a specific project.<\/li>\n<li><strong>How does this system differ from classic chatbots?<\/strong><br \/>\nMost chatbots work exclusively with the knowledge of their training data set. They therefore cannot provide any specific information about your own content. A RAG system, on the other hand, combines a language model with an individual knowledge base. This allows the AI to provide answers that are directly tailored to your own data. This makes it much more useful for productive work.<\/li>\n<li><strong>What role does Python play in this setup?<\/strong><br \/>\nPython is often used to control the connection between the language model and the database. With just a few scripts, texts can be read in, converted into vectors and saved in Qdrant. Python can also perform the search and transfer the results found to the language model. This creates a flexible pipeline that can be adapted to your own requirements.<\/li>\n<li><strong>Is setting up such a system only for developers?<\/strong><br \/>\nNot necessarily. Although the setup requires a certain amount of technical understanding, many of the tools required have now become much simpler. With a little patience, a functioning system can be set up even without in-depth programming knowledge. Once you get to grips with it, you quickly realize the enormous potential of such local AI infrastructures.<\/li>\n<li><strong>What are the limits of a local AI memory?<\/strong><br \/>\nThe most important limitation is the computing power of your own computer. Large models or huge knowledge databases can require more memory and CPU power. In addition, the quality of the answers depends heavily on the structure of the data. 
If documents are poorly prepared, the AI can only deliver good results to a limited extent.<\/li>\n<li><strong>Why is this combination of Ollama and Qdrant considered a particularly interesting architecture for local AI?<\/strong><br \/>\nBecause it brings together two crucial components: a powerful language model and a fast semantic database. Together, they create a complete AI working environment that can be operated entirely locally. This allows personal knowledge systems, intelligent search engines or specialized assistants to be set up - without cloud dependency and with full control over your own data.<\/li>\n<\/ol>\n<hr \/>","protected":false},"excerpt":{"rendered":"<p>Local AI with memory - without cloud, without subscription, without detour In a previous article, I explained how to Ollama on the Mac installiert. If you have already taken this step, you now have a powerful local language model - such as Mistral, LLaMA3 or another compatible model that can be accessed via REST API. But from ... <a title=\"Helge Schneider: Attitude, humor and the freedom of not having to explain yourself\" class=\"read-more\" href=\"https:\/\/www.markus-schall.de\/en\/2026\/02\/helge-schneider-attitude-humor-and-the-freedom-not-to-have-to-explain-yourself\/\" aria-label=\"Read more about Helge Schneider: Attitude, humor and the freedom of not having to explain yourself\">Read more<\/a><\/p>","protected":false},"author":1,"featured_media":2504,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":638,"footnotes":""},"categories":[431,15,3,4],"tags":[452,410,471,435,433,437,453,432,450,434,451],"class_list":["post-2502","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ki-systeme","category-apple-macos","category-filemaker","category-tipps-anleitungen","tag-docker","tag-filemaker","tag-kuenstliche-intelligenz","tag-llama","tag-llm","tag-mistral","tag-neo4j","tag-ollama","tag-qdrant","tag-sprachmodell","tag-vektordatenbank"],"_links":{"self":[{"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/posts\/2502","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.
markus-schall.de\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/comments?post=2502"}],"version-history":[{"count":12,"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/posts\/2502\/revisions"}],"predecessor-version":[{"id":5333,"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/posts\/2502\/revisions\/5333"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/media\/2504"}],"wp:attachment":[{"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/media?parent=2502"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/categories?post=2502"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.markus-schall.de\/en\/wp-json\/wp\/v2\/tags?post=2502"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}