From ChatGPT data export to your own knowledge AI: step-by-step with Ollama and Qdrant

The path to your own AI memory

In the first part of this article series, we saw that the ChatGPT data export is much more than just a technical function. Your exported data contains a collection of thoughts, ideas, analyses and conversations that have accumulated over a long period of time. But as long as this data is only stored as an archive on your hard disk, it remains just that: an archive. The crucial step is to make this information usable again. This is exactly where the development of a personal knowledge AI begins.

The idea is actually surprisingly simple: an AI should not only work with general knowledge, but also be able to access your own data. It should search through previous conversations, find suitable content and incorporate this into new answers. This turns an ordinary AI into a kind of digital memory. This is the second part of the article series, which now looks at the practical aspects.
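The pipeline the series builds on starts with reading the export itself. As a minimal sketch, here is how the individual messages could be pulled out of one conversation in the export's `conversations.json` before being embedded (e.g. via Ollama) and indexed in Qdrant. The field names (`mapping`, `author`, `parts`) reflect the export format at the time of writing and may differ in your own export; the `sample` data is a made-up stand-in, not real export content.

```python
def extract_messages(conversation: dict) -> list[tuple[str, str]]:
    """Walk the 'mapping' nodes of one exported conversation and
    return (role, text) pairs for every non-empty message."""
    messages = []
    for node in conversation.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue  # structural nodes carry no message
        parts = msg.get("content", {}).get("parts", [])
        text = " ".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            messages.append((msg["author"]["role"], text))
    return messages

# Tiny stand-in for one entry of a real conversations.json:
sample = {
    "title": "Demo",
    "mapping": {
        "a": {"message": {"author": {"role": "user"},
                          "content": {"parts": ["What is Qdrant?"]}}},
        "b": {"message": {"author": {"role": "assistant"},
                          "content": {"parts": ["A vector database."]}}},
    },
}

for role, text in extract_messages(sample):
    print(f"{role}: {text}")
```

From here, each text chunk would be turned into a vector with a local embedding model and upserted into a Qdrant collection; the second part of the series walks through those steps in detail.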

Read more

ChatGPT data export explained: How your AI chats become a personal knowledge system

ChatGPT data export

If you regularly work with an AI, then you probably know this: one thought leads to the next. You ask a question, get an answer, reformulate, develop an idea further. A short question suddenly turns into a longer dialog. Sometimes it even leads to entire projects.

But most of these conversations vanish from view again. They sit somewhere in the chat list, slide downwards and are forgotten over time. And yet this points to one of the great strengths of modern AI systems: while conversations with colleagues, friends or advisors used to exist only in our memories, AI dialogs are preserved in full.

This means something crucial: with every conversation, a digital archive of your thinking grows. This is the first part of a small series of articles that shows you how to export your chat history from ChatGPT and put it to effective use as a personal treasure trove of knowledge in your local AI system.

Read more

Learning to think dialogically with AI: Why good questions are more important than good models

Learning to think dialogically with AI

The term "AI as a sparring partner" now appears frequently. It usually means that an AI helps with writing, generates ideas or completes tasks faster. A first introductory article on this has already appeared in this magazine. This article now aims to show in practice how AI can be used as an effective thinking partner. Practice makes clear that AI only becomes really interesting when it is treated not as a tool but as a counterpart. Not in the human sense, but as something that answers, contradicts, leads onwards - or even mercilessly exposes where your own thinking is flawed.

This is exactly where the real benefit begins. Not where the AI "delivers", but where it reacts. Where it does not simply process, but makes thought processes visible. This is more inconvenient than a classic tool - but also more sustainable.

Read more

How animals perceive time - and what this means for the future of AI

Animals, AI and time perception

A cat is lying on the carpet. It does not move. It may blink briefly, turn an ear, sigh inwardly at the impositions of existence - and nothing else happens. The human looks at it and thinks: "Typical. Lazy beast." But what if the exact opposite is true? What if the cat is not too slow - but we are? This article was written after I watched a video by Gerd Ganteför on this topic and found it so interesting that I wanted to present it here.

Humans have been observing animals for centuries and keep drawing the same wrong conclusions. We interpret their behavior against our own speed, our perception, our inner clock. And this clock, soberly considered, is more of a cozy wall calendar than a high-speed processor. Perhaps the cat only seems so disinterested because its environment feels about as dynamic to it as a line at a government office on a Friday afternoon.

Read more

When the Mac listens: What Apple's integrated AI with Gemini and Siri will mean for users in the future

Apple, Siri and Gemini

When you open a Mac today, you expect reliability. Programs start, files are in their place, workflows are well rehearsed. Many have built up a way of working over years - some over decades - that simply works. You know where to click. You know your tools. And this is precisely where the quiet comfort lies. But for some time now, a change has been brewing in the background that is bigger than new colors, new icons or additional menu items. For the first time, a form of artificial intelligence is moving in not just as a single application, but closer to the heart of the operating system itself - where daily routines take shape.

That sounds abstract at first. Perhaps even a little futuristic. But basically it's about something very down-to-earth: the computer should understand better what is meant, not just what is clicked on. Many people have so far experienced AI outside of their actual work. In chat windows, on websites, as an experiment or a gimmick. You try something out, are perhaps amazed, close the window again - and return to your normal everyday life.

Read more

Artificial intelligence without the hype: why fewer AI tools often mean better work

Artificial intelligence without the hype

Anyone who deals with the topic of artificial intelligence today almost inevitably encounters a strange feeling: constant restlessness. No sooner have you got used to one tool than the next ten appear. One video follows the next on YouTube: "This AI tool changes everything", "You absolutely have to use this now", "Those who miss out are left behind". And every time, the same message resonates subliminally: You're too late. The others are further ahead. You have to catch up.

This doesn't just affect IT people. Self-employed people, creative professionals, entrepreneurs and ordinary employees are also feeling the pressure. Many don't even know exactly what these tools actually do - but they have the feeling that they could be missing out on something. And that's exactly what creates stress.

Read more

Using AI as a sparring partner: How thinking in dialog becomes more productive

AI as a sparring partner

I've been using artificial intelligence for almost exactly two years now. In the beginning, it was sober and technical: entering text, typing prompts, reading answers, correcting, retyping. The way many people did it - carefully, in a controlled manner, with a certain distance. It worked, no question. But there was still something mechanical about it. You asked questions, got answers, ticked them off.

I realized relatively early on that something was missing: flow. Thinking is not a form to be filled in. Good thoughts don't come from a corset of neatly formulated input, but from talking, trying things out, thinking out loud. So I started to use the AI app on my cell phone more often - and at some point I simply started speaking instead of typing. That was the real turning point.

Read more

Cloud AI as head teacher: why the future of work lies with local AI

Cloud AI becomes the head teacher

When the large language models began their triumphal march a few years ago, they almost seemed like a return to the old virtues of technology: a tool that does what it is told. A tool that serves the user, not the other way around. The first versions - from GPT-3 to GPT-4 - had weaknesses, yes, but they were amazingly helpful. They explained, analyzed, formulated and solved tasks. And they did this largely without pedagogical ballast.

You talked to these models as if you were talking to an erudite employee who sometimes got lost, but basically just worked. Anyone who wrote creative texts, generated program code or produced longer analyses back then experienced how smoothly it went. There was a feeling of freedom, of an open creative space, of technology that supported people instead of correcting them.

Read more