I’m sure there is some over-eager product manager at such companies trying to split the market into consumer and enterprise segments simply by making the APIs unusable by humans and adding 200% useless security-by-obscurity.
The idea is that tmpServer listens on localhost, but dropbear allows port forwarding with admin creds (you’ll need to specify -N). That program has full device access and is the API the Tether app primarily uses to interact with the device.
My solution is really just using their pseudo-JWT over their obscured APIs (with reverse-engineered endpoint and parameter names). The limitation is that only one client may be authenticated at a time, so my daemon takes priority and I have to stop it to actually access the admin panel.
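For reference, the forwarding step described above can be scripted. A minimal sketch, assuming a stock ssh client on the client side and made-up user/port values (the actual tmpServer port and admin username depend on the device):

```python
import subprocess

def build_forward_cmd(router_ip, admin_user="admin", local_port=8888, remote_port=8888):
    # -N: run no remote command, just hold the tunnel open (dropbear requires this
    #     here since no shell is on offer).
    # -L: expose the router's localhost-only tmpServer port on this machine.
    return [
        "ssh", "-N",
        "-L", f"{local_port}:127.0.0.1:{remote_port}",
        f"{admin_user}@{router_ip}",
    ]

def start_forward(router_ip, **kw):
    # Launch the tunnel in the background; terminate() it when done.
    return subprocess.Popen(build_forward_cmd(router_ip, **kw))
```

With the tunnel up, requests to `127.0.0.1:8888` land on the device's internal API.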
That's a very narrow read of the word "hacking".
We're literally on a website called "Hacker News". We're not all trying to break things.
Definition 7 would be the relevant one here.
Since it needs sensor data, WINE would not work here, and I didn't want to do something funky like patching WINE or granting atypical permissions.
I was able to reverse engineer the software using Claude, some Python, and a few hours of probing sensor data to understand how it worked and what was available.
I wrote most of the code myself (it was dead simple), but Claude was extremely useful in understanding what byte packets were being sent to the USB controller, what they meant, and what the controller was expecting.
I was able to make it into a service so now it Just Works(tm).
Probably the first time I've used it to "hack" something, but now I have a service that works great, I understand it, and I learned a ton about how Linux controls some low-level hardware.
My router has a backup/restore feature with an encrypted export. I figured I could use that to control, or at least inspect, all of its state, but neither I nor codex could figure out the encryption.
I started with a simple assumption: if I can access the router via a web browser, then I can also automate that. From there the proof of concept was headless Chrome in Docker plus AI-directed code (written via LLM, not using it at runtime) that uses Selenium to navigate the UI. This worked, but it hurt me inside to run a 300 MiB browser just to read ~200 B of metrics every 10 s or so. So from there we (me + codex) worked together on reverse engineering their minified JS and their funky encryption scheme, and it eventually worked (in the end it's just OpenSSL with some useless padding here and there). Give it a shot, it's a fun day adventure. :)
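Since the scheme turned out to be plain OpenSSL underneath, here is a hedged sketch of the pieces such a container usually involves, assuming the classic `openssl enc` "Salted__" layout and its legacy MD5 key derivation; the device's extra padding quirks and the actual cipher choice are vendor-specific and not modeled here:

```python
import hashlib

def evp_bytes_to_key(password: bytes, salt: bytes, key_len: int = 32, iv_len: int = 16):
    """OpenSSL's legacy EVP_BytesToKey (MD5, one iteration): the derivation
    `openssl enc` historically used for its "Salted__" container format.
    Returns (key, iv)."""
    derived = b""
    block = b""
    while len(derived) < key_len + iv_len:
        block = hashlib.md5(block + password + salt).digest()
        derived += block
    return derived[:key_len], derived[key_len:key_len + iv_len]

def split_salted_blob(blob: bytes):
    """Split an OpenSSL "Salted__" export blob into (salt, ciphertext)."""
    assert blob[:8] == b"Salted__", "not an OpenSSL salted container"
    return blob[8:16], blob[16:]
```

Feed the derived key/IV into whatever symmetric cipher the firmware actually uses (typically AES-CBC), then strip the padding.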
Edit: that's the end result (kinda, I have whole infra around it, and another story with WiFi extender with another semi-broken different encryption scheme from the same provider) - https://imgur.com/a/VGbNmBp
In the course of driving Codex to the final destination, it was definitely about to go off-track whenever we didn't steer it back immediately.

I have a fairly specialized bit of hardware here on my desk. It's a rackmount pro-audio DSP that runs embedded Linux. I want to poke at it (specifically, I want to know why it takes 5 or 6 minutes to boot up, since that is a problem for me).
The firmware is published and available, and it's just a tarball, but the juicy bits inside are encrypted. It has network connectivity for various things, including its own text-based control protocol over SSH. No shell access is exposed (or at least, not documented as being exposed).
So I pointed codex at that whole mess.
It seems to have deduced that the encryption was done with openssl, and is symmetric. It also seems to have deduced that it is running a version of sshd that is vulnerable to CVE-2024-6387, which allows remote code execution.
It has drawn up a plan to prove whether the vulnerability works. That's the next step.
If the vulnerability works, then it should be a hop, skip, and a jump to get in there, enable a path to a shell (it's almost certainly got busybox on there already), and find the key so that the firmware can be decrypted and analyzed offline.
---
If I weren't such a pussy, I'd have started that next step. But I really like this box, and right now it's a black box that I can't recover (I don't have a cleartext firmware image) if things go very wrong. It's not a particularly expensive machine on the used market, but things are tight right now.
And I'm not all that keen on learning how to extract flash memory in-situ in this instance, either.
So it waits. :)
It's also scary where this is going. LLMs are getting fantastic at breaking into things. I sometimes have to dance around the topic with them because they start to get suspicious I'm trying to hack something that doesn't belong to me, which is not the case.
I had some ebooks I bought last year; I managed to pull down the encrypted PDFs from the web site where you could read them. Claude looked at a PDF and all the data I could find (user ID etc.) and came up with "147 different ideas for a decryption algorithm", which it went through in turn until it found a combination of parts of the user ID and parts of other data, concatenated together, that produced the key. Something I would never have figured out. Then recently the company changed the algorithm for their newer books, so Claude took another look and determined they were modifying the binary data of the PDFs to make them non-standard, so it patched them back first.
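That kind of derivation search can be sketched generically. The slicing rules below are purely made up for illustration (the real algorithm Claude found is specific to that vendor), and the success check is a caller-supplied callback, e.g. "does the decrypted output start with b'%PDF'":

```python
from itertools import product

def candidate_keys(user_id, extras, key_len=16):
    """Enumerate keys built by concatenating prefixes of the user ID with
    pieces of other known values -- the brute-force shape described above.
    All slicing choices here are illustrative assumptions."""
    parts = [user_id] + list(extras)
    seen = set()
    for a, b in product(parts, repeat=2):
        for cut in range(1, min(len(a), key_len) + 1):
            key = (a[:cut] + b)[:key_len].encode()
            if len(key) == key_len and key not in seen:
                seen.add(key)
                yield key

def find_key(try_decrypt, user_id, extras, key_len=16):
    """try_decrypt(key) -> bool is whatever plausibility check you have.
    Returns the first working key, or None if no candidate passes."""
    for key in candidate_keys(user_id, extras, key_len):
        if try_decrypt(key):
            return key
    return None
```

The point is the shape: generate cheap candidates from known identifiers, then let a fast validity check do the filtering.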
You use decompilation tools, hope they left debug symbols in, and get somewhat human-readable output, which is often enough. Even when you don't, binaries use libraries that are known, or at some point hit documented interfaces, so things can be reasoned about.
How I wish I could just strip this thing down into a monitor with a set of speakers... The screen itself is in perfect condition, of course, but the OS turned it into e-waste.
I've opted just to not plug it in to the network and not provide a WiFi password.
IIRC the last public exploit covering all LG TVs running webOS > 5 was from early 2025 (so pretty recent), but since most sellers on the second-hand market have auto-updates turned on, there's no way to know which TVs are still vulnerable.
It should be doable to strip down much of webOS with root access. It's nice that webOS in general is very well documented and much is implemented around the Luna service bus. LG offers a developer mode for non-rooted TVs, and there's an active homebrew community because of it. It's a pity that you can't modify the boot partitions, as the firmware verifies their integrity. It would be nice to have an exploit for that.
Meanwhile my ancient 1080p panel still works, and I noticed I can't actually see the pixels from my couch so, ehh, I guess...
Truly, no one should ever connect their "Smart" TV to the internet when better hardware and control is available to you in perpetuity via the HDMI port.
Now why does that sound familiar...?
Lol, a true classic in the embedded world. Some hardware company (it appears these guys make display panel controllers?) ships a piece of hardware, half-asses a barely working driver for it, another company integrates this with a bunch of other crap from other vendors into a BSP, another company uses the hardware and the BSP to create a product and ships it. And often enough the final company doesn't even have an idea about what's going on in the innards of the BSP - as long as it's running their layer of slop UI and it doesn't crash half the time, it's fine, and if it does, it's off to the BSP provider to fix the issues.
But at no stage anywhere is there a security audit, code-quality check, or even hardware quality check involved. Part of why BSPs (and embedded product firmware in general) are full of half-assed code is that the drivers often have to work around hardware bugs or quirks that are too late to fix in HW, because tens to hundreds of thousands of units have already been produced. The software people are heavily pressured to "make it work or else we gotta write off X million dollars" and "make it work fast, because the longer you take, the more money we lose on interest until we can ship the hardware and get paid". And if they are particularly unlucky: "it MUST work by deadline X because we need to hit the Christmas/Black Friday sales window, or because we need to beat <competitor> in time-to-market; it's mandatory overtime until it works".
And that is how you get exploits so braindead-easy that AI models can do the job. What a disgusting world, run into the ground by beancounters.
Most of the BSP is GPL'd software, for which the final product manufacturer should provide the sources to the general public, but all too often that obligation gets sharted upon; in way too many cases you have to be happy if there are at least credits in the user manual or some OSD menu.
Does anyone know what the author meant by this? Are they talking about a web browser run on the TV?
I also think taking credit for writing an exploit that you didn't write and may not even have the knowledge to do yourself is a bit gray.
Could a script kiddie steer an LLM? How much does this reduce the cost of attacks? Can this scale?
What does this mean for the future of cyber security?
AI without a prompt is a hammer sitting in a drawer.
This is really closer to a drill, in that it automated the grunt work under full guidance.
They're the customer who just states their wishes: they can't handle a hammer or a drill, and don't know which nail, hammer, or drill to use. Still, the nail ends up in the wall.
Who did it?
Give an experienced human this tool and they can achieve exploitation with only a few steering inputs.
Cool stuff
Finding the initial foothold is the hardest part. Codex didn't have anything to do with it.
I think by the point you're swearing at it or something, it's a good sign to switch to a session with fresh context.
If I see it misunderstood, I just Esc to stop it, /clear, and try again (or /rewind if I'm deeper into Planning).
I am very here for a world where we can take back control, at scale, of the enshittified, you'll-own-nothing, ad-ridden consumer electronics our capitalist overlords have decided we deserve, by investing some amount of collective token-$, instead of having to pray one smart adhd nerd buys the same TV and decides to take a look.
OTOH, as with anything LLMs take over, I'm concerned we'll soon have very few smart adhd nerds left to work on liberating the next generation of hardened devices.
But I'm happy about any feedback or critique, I might just be wrong honestly.
Philosophically you could try to differentiate between the human side of the effort versus the computer side. You could also differentiate from a really dumb model and a really smart model. A dumb model just spinning its wheels and hoping it gets lucky, versus a smart model actually trying intelligent things and collecting relevant details.
In these cases I think we're assuming a sufficiently smart model making well reasoned headway on a problem. Not sure I would fall on the side of the camp that would label this as brute force by default in all cases. That said, there may be specific scenarios where it might seem fitting even when using a smart model.