19 Nov 2025
Woo! I got more peripherals working! Er… well, would it be AI’s copyright? Hm… more on that later.
Anyways, let’s start with the technical discussion first, shall we?
The Patch Series
Here’s the preliminary patch series!
Now, how does this work? Well… I wish I could go into the details, but I don’t have a schematic for this device. Since most of these peripherals’ drivers already exist upstream, it was just a matter of reading the downstream board file and converting that into nodes I could use in the DTS. So while I could describe the patch itself, I think you’re smart enough to read the patches yourself. Or you could give them to an AI for a decent understanding of how they work.
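To give a flavor of what that conversion looks like, here’s a hedged sketch. A downstream board file might register a magnetometer with something like `i2c_board_info { I2C_BOARD_INFO("ak8975", 0x0c), ... }` plus a GPIO for its interrupt; upstream, the same information becomes a DTS node. The bus label, GPIO number, and regulator below are made-up placeholders, not my device’s actual values:

```dts
/* Hypothetical conversion of a downstream i2c_board_info into a DTS node.
 * Bus label, GPIO number, and supply are placeholders for illustration. */
&i2c3 {
	magnetometer@c {
		compatible = "asahi-kasei,ak8975";
		reg = <0x0c>;
		interrupts-extended = <&tlmm 48 IRQ_TYPE_EDGE_RISING>;
		vdd-supply = <&pm8921_l15>;
	};
};
```

The interesting part is that all the wiring facts (bus, address, IRQ line, regulator) are already sitting in the board file; the conversion is mostly mechanical, which is exactly the kind of pattern-matching AI turned out to be good at.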
I’m mainly writing this post to share my thoughts on using AI to generate these patches, though.
Diving into AI
Manually converting the board files into nodes was very problematic. For example, I was stuck on getting NFC working for my device for MONTHS, and boy was that frustrating.
That frustration finally made me cave and try AI. You see, there aren’t a lot of guides on the internet on how to do hardware bringup. And online, people either tell you to RTFM (even when there is no documentation) or ghost you. A lot of this information is locked away by companies wanting to protect their copyright.
That’s where the appeal of AI came from. It has a good high level overview of the Linux kernel and how it’s changed over the years. It should be relatively easy for it to figure out the conversion, right? After all, it’s a rubber duck that talks back.
AI’s Speedup
I first tried to toss it something hard. I wanted to get the GPU working, and for that you need the IOMMU. I think I got the IOMMU working, but I just couldn’t get the GPU working because the Adreno 225 doesn’t seem to have a DTS compatible string. I’m still trying to figure out whether the approach I’m taking is correct.
If you noticed, the patch series doesn’t include the IOMMU patch by itself. Even though it probes successfully now, IOMMU has apparently been broken on APQ8064 since at least 5.10 (if anyone has an APQ8064 device, please help bisect and fix it <3). Someone did come into the Matrix channel to test the patch I created with AI, but it didn’t work, so I suspect my IOMMU patch doesn’t actually work, unfortunately.
I then had the idea to shift to peripherals whose drivers were already in Linux. After all, half of the work was already done!
Here’s roughly how the patches came to be:
- Started with the magnetometer. The AI struggled a bit, but eventually got it. I think I just wasn’t prompting it well enough. (This was also my first time doing this, and I’m not super familiar with Linux’s codebase.)
- Next, I tried the accelerometer, and that was a bit faster as it was on the same bus as the magnetometer.
- The NFC was the fastest to get right. In mere minutes I had working NFC where I had been failing to get it working for MONTHS. This absolutely blew my mind. Maybe it’s because I knew where to look and guided it well?
- Finally got the light sensor working. Proximity seems broken for now, but I’ll save that for another date.
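For the curious, an NFC node like the one above can be surprisingly small. Assuming the downstream kernel registered an NXP PN544-family controller over I2C (a common choice on devices of this era; the upstream pn544_i2c driver already handles it), the node looks roughly like this. The bus label, GPIO numbers, and interrupt line are placeholders, not my device’s real values:

```dts
/* Hedged sketch of a PN544 NFC node per the nxp,pn544 binding.
 * Bus label and GPIO/IRQ numbers are placeholders for illustration. */
&i2c12 {
	nfc@28 {
		compatible = "nxp,pn544-i2c";
		reg = <0x28>;
		interrupts-extended = <&tlmm 106 IRQ_TYPE_EDGE_RISING>;
		enable-gpios = <&tlmm 36 GPIO_ACTIVE_HIGH>;
		firmware-gpios = <&tlmm 21 GPIO_ACTIVE_HIGH>;
	};
};
```

This is why months of being stuck could collapse into minutes: once you know the right compatible string and the three or four wiring facts from the board file, there’s very little left to write.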
I was playing around with other peripherals (touchkey, vibrator, the camera’s LED, and the GPU) in between these, but I don’t remember anything notable from those attempts. And since they’re still broken, it wasn’t worth diving into.
Tips For Using AI
After a little bit of experience using AI to write DTS patches, I can now hopefully give some helpful tips for others:
- Depending on the task at hand, the model you use can either help or hurt you. Here are my observations:
  - Claude: Great for coding. The best way I can describe it is depth-first solutions (e.g. while looking for errors, it typically runs `head -5` and then operates on the first error). It uses command line tools to search for what it wants.
  - GPT: Great for writing (though I like to write my blog posts myself). If Claude is depth first, this is more like breadth first: it reads everything you gave it and then tries to come up with a solution.
  - Your own noggin!: Best for quick tasks and for checking whether the AI’s solutions make sense.
It’s also useful to cut down the search space beforehand if you can. For hardware enablement, that meant compiling the downstream kernel and getting rid of all the unnecessary .c files I didn’t need (I wanted to get rid of the .h files too, but I did this long ago and don’t really remember how; I noted it somewhere on Mastodon).
If you can, try speeding up the test times as well, otherwise you will spend a lot of time waiting for things to flash, test, and send the errors back to the AI.
I also noticed that AI sometimes gets stuck in a loop. For example, while trying to get the touchkey or the vibrator working, the AI kept thinking the DTS formatting was the issue, because that’s what the build error was saying. It messed things up so badly that I had to scrap that work, sometimes losing my progress. Oh well, lesson learned: git is also useful during AI coding sessions.
AI’s Usecase
In its current state, AI will not be replacing humans anytime soon. I find that it’s very good at finding patterns within the realm of what it was trained on. Asking for things beyond what it knows yields gibberish. It’s kinda like Nobel disease.
I think it’ll be useful for repetitive work. Instead of spending human hours on the task itself, train an AI on data for the task at hand and have fewer humans manage its output. It will be hard to tell which data falls within the AI’s known and unknown realms, as it responds to both confidently. That’s where humans have to come in and filter the output.
AI Ethics
- Water
- Governments should mandate that drinking water is only for humans. Companies can use salt water.
- Electricity
- Just use solar panels. It’s interesting to see that governments are okay with nuclear fusion now that companies are demanding more power.
- Consolidation of power
- The rich get richer and the poor get poorer. It’s a dangerous trend in modern times and AI is accelerating that.
- Copyright
- If you steal things fast enough and make money off that stolen content, you can pay your legal fees with that money! How is that even legal? Copyright was fundamentally broken from the start: instead of rewarding humans for their work and protecting them from predators, it’s turned the creators into predators themselves. Things like medicines costing millions of dollars because one company owns the patent. Instead, we should guarantee basic necessities for humans all over the world: food, water, shelter, and healthcare. But that probably will never happen in my lifetime, unfortunately… I do want to write a blog post about this, but I’ve been putting it off for a while.
- What will most likely happen is that AI will generate its own training data to start with, and then use users’ input as more training data once it gets kick-started. That sidesteps copyright entirely.
Let’s see what happens with AI in the near future. I will continue to try and play around with AI, and hopefully can land more patches upstream!