Artificial intelligence and energy: what the AI boom really costs

AI, energy and sustainability

At first glance, artificial intelligence seems almost weightless. You type in a question and an answer appears seconds later. No noise, no smoke, no visible movement. Everything seems to happen "in the cloud". And that is precisely the misconception. AI is not abstract magic but the result of very concrete, physical processes. Behind every answer are data centers, power lines, cooling systems, chips and entire infrastructures. The more AI enters our everyday lives, the more visible this reality becomes. And this is where the question of sustainability begins.

Anyone who talks about AI without talking about energy, resources and infrastructure is only describing the surface. This article goes deeper. Not with alarmism, but with a sober look at what AI actually needs to function - today and in the future.

Read more

Artificial intelligence without the hype: why fewer AI tools often mean better work

Artificial intelligence without the hype

Anyone who deals with artificial intelligence today almost inevitably encounters a strange feeling: constant restlessness. No sooner have you got used to one tool than the next ten appear. One video follows the next on YouTube: "This AI tool changes everything", "You absolutely have to use this now", "Those who miss out are left behind". And every time, the same message comes through between the lines: you're too late. The others are further ahead. You have to catch up.

This doesn't just affect IT people. Self-employed people, creative professionals, entrepreneurs and ordinary employees are also feeling the pressure. Many don't even know exactly what these tools actually do - but they have the feeling that they could be missing out on something. And that's exactly what creates stress.

Read more

Understanding Taiwan: History, status issues and the risks of an interconnected world

Taiwan as a tipping point

Taiwan has been in the headlines for years - sometimes because of military maneuvers in the Taiwan Strait, sometimes because of diplomatic tensions, sometimes because of the question of how reliable international rules are in an emergency. In recent days, this impression has become even more acute for many observers: the US operation in Venezuela, in which Venezuela's President Nicolás Maduro was detained, is the subject of controversial international debate, not only politically but also in terms of international law.

Why this could be relevant for Taiwan is less a question of "who is right?" than of precedent. When major players interpret rules selectively or enforce them harshly, other powers ask themselves - soberly and guided by their own interests - where their own leeway begins and ends. And it is precisely at this point that Taiwan becomes more than a distant island issue.

Read more

Cloud AI as head teacher: why the future of work lies with local AI

Cloud AI becomes the head teacher

When the large language models began their triumphal march a few years ago, they almost seemed like a return to the old virtues of technology: a tool that does what it is told. A tool that serves the user, not the other way around. The first versions - from GPT-3 to GPT-4 - had weaknesses, yes, but they were amazingly helpful. They explained, analyzed, formulated and solved tasks. And they did this largely without pedagogical ballast.

You talked to these models as if you were talking to an erudite employee who sometimes got lost, but basically just worked. Anyone who wrote creative texts, generated program code or produced longer analyses back then experienced how smoothly it went. There was a feeling of freedom, of an open creative space, of technology that supported people instead of correcting them.

Read more

AI Studio 2025: Which hardware is really worth it - from the Mac Studio to the RTX 3090

Hardware 2025 for AI studio

Anyone working with AI today is almost automatically pushed into the cloud: OpenAI, Microsoft, Google, assorted web UIs, tokens, limits, terms and conditions. This seems modern - but it is essentially a return to dependency: others determine which models you may use, how often, with which filters and at what cost. I'm deliberately going the other way: I'm currently building my own small AI studio at home. With my own hardware, my own models and my own workflows.

My goal is clear: local text AI, local image AI, training my own models (LoRA, fine-tuning) - and all of this in such a way that I, as a freelancer and later also for SME customers, am not dependent on the daily whims of some cloud provider. You could say it's a return to an old attitude that used to be quite normal: "You do important things yourself". Only this time it's not about your own workbench, but about computing power and data sovereignty.

Read more

The silent danger of wearables: when convenience becomes surveillance

Wearables, smartwatch, in-ear headphones

Wearables are now part of everyday life. Many people wear a smartwatch as a matter of course, count their steps, monitor the quality of their sleep or set reminders to take breaks during the day. And I'm happy to admit it: I have an Apple Watch myself, and I find this technology absolutely fascinating in its own way. It can do things that would have been pure dreams of the future just a few years ago. Nevertheless, I rarely use my Apple Watch.

And right now, after the latest reports and expert statements, I realize once again that this restraint is not misplaced. After all, many modern headphones and wearables now contain sensors that can measure far more than you might think at first glance. Not all headphones do - but the trend is clear: more and more technology is moving inconspicuously into small devices that we wear close to our bodies.

Read more

Apple MLX vs. NVIDIA: How local AI inference works on the Mac

Local AI on Silicon with Apple Mac

Anyone working with artificial intelligence today often first thinks of ChatGPT or similar online services. You type in a question, wait a few seconds - and receive an answer as if a very well-read, patient conversation partner were sitting at the other end of the line. But what is easily forgotten: Every input, every sentence, every word travels via the Internet to external servers. That's where the real work is done - on huge computers that you never get to see yourself.

In principle, a local language model works in exactly the same way - just without the Internet. The model is stored as a file on your own computer, is loaded into memory at startup, and answers questions directly on the device. The technology behind it is the same: a neural network that processes language, generates text and recognizes patterns. The only difference is that the entire computation stays in-house. You could say: ChatGPT without the cloud.
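The local-first principle described above can be illustrated with a deliberately tiny toy. The sketch below is not a real language model - it builds a simple bigram table from a few sentences and samples from it - but it demonstrates the same locality property: the "weights" live in local memory and every answer is computed on your own machine, with no network involved. The mini-corpus and all function names are invented for illustration.

```python
import random

# Toy "language model": a bigram table built from local text.
# Real local LLMs (e.g. run via llama.cpp) follow the same principle
# at vastly larger scale: the weights live in a local file, are loaded
# into RAM, and every prediction happens on your own device.

CORPUS = (
    "the model runs on the local machine "
    "the model answers on the device "
    "the data stays on the machine"
)

def build_bigrams(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)  # seeded for reproducible output
    out = [start]
    for _ in range(length - 1):
        candidates = table.get(out[-1])
        if not candidates:
            break  # no known continuation: stop early
        out.append(rng.choice(candidates))
    return " ".join(out)

table = build_bigrams(CORPUS)
print(generate(table, "the", 6))
```

Everything here - the "training" (counting bigrams), the loaded table, and the generation loop - runs entirely in local memory, which is exactly the point of the article: swap the bigram table for gigabytes of neural-network weights and you have the local LLM setup in a nutshell.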

Read more

Sending e-mails fails with T-Online and Speedport W 724V

The things you experience when you simply want to send an e-mail with Mac OS X 10.10 "Yosemite" on an unfamiliar WLAN! For example, that it doesn't work with any of the configured SMTP servers, even though sending e-mails on the home network is completely problem-free. Even more puzzling was that another MacBook Pro under Mac ... Read more