Saturday, February 7, 2026

AI and Piracy

So I have spent the last couple of months playing with AI. I have paid subscriptions to both ChatGPT and Claude, and I have also been experimenting with local AI. I have come to a couple of conclusions.

Local LLMs pretty much suck unless you are prepared to spend a lot of money. At the low end of the price range, the models you can run are only good for task-specific things like object identification in video or audio-to-text conversion, stuff like that. General-purpose LLMs at the low end are dumb and never meet expectations. If you want a truly useful general-purpose LLM at usable tokens-per-second speeds, you need both a larger model, 70 billion parameters or more, and the hardware to run it. A Raspberry Pi 5 with an AI Hat, a Jetson Nano, or a graphics card with 8 GB or less of VRAM is not going to cut it. There is no cheap way to run a local LLM as anything more than a toy. If you are looking to get into locally run AI, your starting budget needs to be in the $2000 range, either for a really good graphics card or a purpose-built system.

What this means, in the cost analysis, is that the hardware you need for a reasonably useful local LLM buys you roughly eight years of a ChatGPT or Claude subscription at $20 per month. I understand that ChatGPT and Claude are data mining you, but honestly, everyone is, and if you are giving them intimate details of your life, you are kind of getting what you deserve. So unless you have a big budget and a really good use case, just choose a subscription and run with it.
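That break-even point is easy to sanity-check. A quick back-of-the-envelope calculation, assuming the ballpark figures above ($2000 of hardware, $20 per month for a subscription):

```python
# Rough break-even: how long a $20/month subscription lasts
# for the price of capable local-LLM hardware (~$2000).
hardware_cost = 2000   # dollars, ballpark for a 16+ GB GPU or purpose-built box
monthly_sub = 20       # dollars, typical ChatGPT/Claude tier

months = hardware_cost / monthly_sub
years = months / 12
print(f"{months:.0f} months, about {years:.1f} years")  # → 100 months, about 8.3 years
```

Even before electricity costs, the hardware only starts paying for itself after about eight years, by which point it will be long obsolete.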

During these months of playing with local LLMs, I did purchase a Jetson Nano. It cost me about $250, and it was absolutely not worth the money for its intended purpose unless all you want to do is set up AI security cameras. What I did find it useful for, though, was as a video streaming device. The 8 GB of shared RAM/VRAM is more than enough to run a handful of Docker containers, and since it does have a GPU, it can do on-the-fly transcoding between video codecs.

Before I continue, let me just say I am not an advocate of pirating copyrighted media. I believe media producers should get paid for their work, and I am very willing to pay for a good product at a reasonable price; I don't even mind ads and commercials. My problem is that streaming services are at a tipping point, where their cost is getting to the point where they are no longer worth it. If I am paying money, I should not be seeing commercials, or if I am, they should be at the very beginning and MAYBE one in the middle. If I am seeing 4-5 minutes of commercials for every 10-15 minutes of show, that is too much for a service I am already paying for.

What I did was, using the modified Ubuntu image that came with the Nano, install what is called an *arr stack. For those not in the know, an *arr stack is a suite of automated media management software, commonly Radarr, Sonarr, Lidarr, and Prowlarr, used to automatically download, organize, and manage movies, TV shows, and music. Often deployed via Docker, the stack connects to download clients (like qBittorrent) and media servers (like Plex or Jellyfin) to automate the entire media acquisition process.
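To give you an idea of the shape of such a stack, here is a minimal Docker Compose sketch. This is illustrative, not my exact file; image tags, paths, and the port numbers are the common linuxserver.io/Jellyfin defaults:

```yaml
# Illustrative *arr stack sketch using commonly used images and default ports.
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    ports:
      - "8080:8080"            # web UI
    volumes:
      - ./config/qbittorrent:/config
      - ./downloads:/downloads
    restart: unless-stopped

  radarr:                      # movies
    image: lscr.io/linuxserver/radarr:latest
    ports:
      - "7878:7878"
    volumes:
      - ./config/radarr:/config
      - ./downloads:/downloads
      - ./media/movies:/movies
    restart: unless-stopped

  sonarr:                      # TV shows
    image: lscr.io/linuxserver/sonarr:latest
    ports:
      - "8989:8989"
    volumes:
      - ./config/sonarr:/config
      - ./downloads:/downloads
      - ./media/tv:/tv
    restart: unless-stopped

  prowlarr:                    # indexer manager feeding Radarr/Sonarr
    image: lscr.io/linuxserver/prowlarr:latest
    ports:
      - "9696:9696"
    volumes:
      - ./config/prowlarr:/config
    restart: unless-stopped

  jellyfin:                    # media server
    image: jellyfin/jellyfin:latest
    runtime: nvidia            # lets Jellyfin use the Nano's GPU for transcoding
    ports:
      - "8096:8096"
    volumes:
      - ./config/jellyfin:/config
      - ./media:/media
    restart: unless-stopped
```

Prowlarr finds releases, Radarr and Sonarr grab and organize them through qBittorrent, and Jellyfin serves the result; the `runtime: nvidia` line is what puts the Nano's otherwise underused GPU to work on transcoding.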

The Docker Compose file I actually used, along with rough instructions for building the stack, can be found here:

https://github.com/cjstoddard/Jetson-Nano/tree/main/aar-stack 

Now, I know what you are thinking: this is just piracy, which I sort of railed against earlier. I am not going to try to justify this, because here is the thing: there are plenty of legitimate uses for this setup. There is a lot of public domain content out there, and plenty of free copyrighted content too. In fact, there is more free content out there than I could possibly watch in my lifetime, so fuck off about piracy.

So, to sum all this up, the Jetson Nano is nearly useless as an AI client but pretty great as a media server. If you are going to buy one for AI, don't; put that money toward a graphics card with 16+ GB of VRAM instead. In the long run, you will be much happier.
