When large language models began their triumphal march a few years ago, they almost seemed like a return to the old virtues of technology: a tool that does what it is told. A tool that serves the user, not the other way around. The early versions - from GPT-3 to GPT-4 - had weaknesses, yes, but they were astonishingly helpful. They explained, analyzed, drafted, and solved tasks. And they did so largely without pedagogical baggage.
You talked to these models as you would to a well-read colleague who sometimes lost the thread but, for the most part, simply did the work. Anyone writing creative texts, generating code, or producing longer analyses back then experienced how smoothly it all went. There was a sense of freedom, of open creative space, of technology that supported people rather than correcting them.