We can no longer trust software


If you told someone back in the early 2000s that they would be able to update the software on their TV or fridge, they would give you a very confused look. And yet, here we are.

As I was powering on my LG “smart” TV a few months ago, I was greeted with a message telling me that a new update was available. The thing with TV updates is that they always have bad timing. I turned the TV on to watch a movie, not to wait 7 minutes for it to update and restart itself while my popcorn gets cold. And so, I declined the update. Little did I know that I had dodged a bullet.

Apparently, this update introduced Microsoft Copilot¹, which cannot be removed. Moreover, the same update included a feature called “Live Plus”, which performs Automatic Content Recognition (i.e. it analyzes what you watch), sells that data to LG Ad Solutions (yes, “ad” as in “advertisement”), and builds a “viewer profile” of you, in order to… Go on, guess… Yes! Serve. You. ADS.

After that, I disconnected my TV from the internet. I use an Android TV box anyway, and unless we find a way to upgrade our TVs from 4K to 8K via software, or from 50” to, say, 70”, I’m done with software updates for my appliances.

I guess smartphones and broadband internet have accustomed us to getting updates the moment they appear. Back in the “old days”, I would download a new version of Miranda or Winamp only if my current version had issues that I was aware of. But today, most devices update themselves in the background, and I now have “the Wi-Fi symbol” blinking on my fridge and my washing machine, both of them begging me to connect them to the internet so I can monitor… how many avocados are left in the fridge?!

Almost a year ago, Jeff Geerling wrote about buying a dishwasher that required a cloud subscription to work. Automakers tried to paywall features like heated seats, despite the fact that the heating element is installed in every, fucking, car. And yet, to use it, you had to pay a subscription. Predatory business 101.

NOTE

This initiative, popularized by BMW, seems to have died. But have no fear: there are rooms full of “smart business people” who brainstorm day and night on how to bring it back in different packaging.

And so, my conclusion is that we can no longer trust software.

In the past, you bought an appliance and more or less committed to a known feature set, since updating the software was mostly impossible: the software was actually called firmware, and it was embedded into the appliance’s microchip. Today, every appliance is basically a computer. Your TV, dishwasher, fridge, and car all run some kind of Linux with a custom UI on top. And by connecting them to the internet, you essentially give the maker of said appliance control over how it functions. So I’d argue that it is better to avoid “internet-connected” appliances altogether, or at the very least, not to connect them to the internet.

But it’s not only consumer appliance software that we can no longer trust.

With the rise of AI-assisted coding and so-called “vibe-coders”², we see an influx of software created either by engineers who have no knowledge of the domain they operate in, or without any review process to ensure it is free of malicious code. Moreover, the AI tools themselves are not above injecting malicious code, or simply running destructive operations, hence they require sandboxing: isolating them from the actual computer, so that if they make a mess, they ruin some disposable virtual environment instead. I have written about the need to isolate such AI tools on my tech blog: Isolating Claude Code.
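To make the idea concrete, here is a minimal sketch of such a sandbox using Docker. The image, paths, and commands are illustrative assumptions, not a prescription:

```bash
# A disposable sandbox for an AI coding agent:
# - only the current project directory is visible inside the container
# - $HOME, ~/.ssh, and the host's credential stores stay out of reach
# - --rm throws the container (and any mess it made) away on exit
docker run --rm -it \
  -v "$PWD":/work \
  -w /work \
  node:22-slim \
  bash

# Inside the container, install and run the agent against /work only.
# Note: the agent still needs network access to reach its API, so keep
# secrets out of the project directory and treat its traffic as untrusted.
```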

As software engineers, we rely on open-source software and libraries that are built and maintained by other people and companies. And while these libraries are not free from attempts by malicious actors to inject malicious code (see: the xz backdoor, the Shai-Hulud worm, and the various other npm and PyPI attacks that happened recently), we are, mostly, able to catch them in time. The reason, I believe, is that writing malicious code is hard: to achieve the desired effect, you need to either target core libraries (as was the case with xz), or play the numbers game by infecting as many packages as possible (as Shai-Hulud did). But as the amount of AI-produced code grows at a very fast rate, reviewing software is becoming harder. Many popular OSS projects have either suspended their bug bounty programs or modified their contribution guidelines, citing the influx of AI-generated code as the main reason³. Human fatigue is a real thing, and people tend to become sloppier as the workload grows.

I guess what I’m trying to say is that if, in the past, it was somewhat safe to run arbitrary code downloaded from the internet (as long as it came from a reputable source), in today’s AI era, everything needs to be sandboxed. Every project you develop needs to run in an isolated VM that has no access to your machine and the secrets you store there. We use our computers for banking, and we tend to store credentials on them, which malicious code can extract.
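A container gives you a lighter-weight version of the same isolation. As a sketch (the image and the entry point are placeholders, pick whatever matches the project’s toolchain):

```bash
# Run freshly cloned, unreviewed code with no network and no host secrets:
# --network none cuts off exfiltration; only the source tree is mounted.
docker run --rm -it \
  --network none \
  -v "$PWD":/src \
  -w /src \
  python:3.12-slim \
  python main.py   # main.py stands in for the project's actual entry point
```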

Household appliances, and other electronic devices, need to be either disconnected from the network, or placed on an isolated VLAN that has no access to your main network. On a Linux-based router, that isolation can come down to a couple of firewall rules. Here is a rough nftables sketch (the interface names and VLAN layout are made up for the example):
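```bash
# Let the trusted LAN reach the IoT VLAN, but never the other way around.
nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority 0; policy accept; }'
# allow replies to connections the trusted LAN initiated
nft add rule inet filter forward iifname "vlan20" oifname "lan0" ct state established,related accept
# drop everything the IoT VLAN tries to initiate toward the trusted LAN
nft add rule inet filter forward iifname "vlan20" oifname "lan0" drop
```

I am sure that, as we continue to navigate this new reality, new products will emerge to provide better security. But for now, I conclude that we can no longer trust software.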

Footnotes

  1. For those who don’t know, Microsoft Copilot is Microsoft’s attempt to shove their AI in every place possible.

  2. Vibe-coder: a person who uses AI prompting to create software, often knowing nothing about programming, or lacking an understanding of the problem domain in which they operate.

  3. curl shutters bug bounty program; LLVM AI Tool Policy: human in the loop; Ghostty: AI tooling must be disclosed