Where else would you put the repository domains?
Generally I advise against hardcoding stuff that changes often and may need to be adjusted for different users or organizations.
They can't just be "configured" by changing a URL. I guess maybe you could self-host the search page for some of the distros, and reuse the parser, but are people really doing that? Otherwise, you'd have to write new code to parse the results, at which point you might as well soft-fork the script anyway.
> Generally I advise against hardcoding stuff that changes often and may need to be adjusted for different users or organizations.
YAGNI. And if your org does need it for some reason, you're probably better off running something specifically tailored for your own needs instead of whatever implementation makes it in.
The whole script's only 1300 lines. Would spending 150 lines on configuration and littering the user's dotfiles be worth it? Now what happens if the configuration's missing or corrupted? When you update the script, do you keep the old dotfile that might be using a deprecated API, or do you replace the old configuration and clobber any customization the user's done? Oops, there go another 1,000 lines on edge cases, option flags, conf merging, warning messages... And good luck getting bug reporters to explain their configuration changes!
Also, this stuff doesn't "change often". The distros literally can't change it often, because doing so might break LTS stability. I know it's fun to point out perceived flaws in other people's work, but in this case, the URLs are tightly bound to the parsing logic, which is the right place to put them IMO.
Nixpkgs has. :)
Nowadays the only search like this I need to run is
nix-locate -r 'bin/foo$'
It would be nice to have a CLI alternative to Repology, though.

Abandoned, but forkable (since FOSS), and a decent idea.
Nowadays this would probably get done in Node, parsing the package search websites. Preferably it would be done via an API, though.
Repology provides an API but it's unstable: https://repology.org/api/v1
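For the record, a query against that endpoint looks something like this. This is just a sketch: the sample response is hand-trimmed, and since the API is explicitly unstable, the field names may drift.

```python
import json

# Repology's (unstable) per-project endpoint; see https://repology.org/api/v1
API = "https://repology.org/api/v1/project/{}"

def newest_versions(entries):
    """Map each repo to the version Repology flags as 'newest'."""
    return {e["repo"]: e["version"] for e in entries if e.get("status") == "newest"}

# Hand-trimmed sample of the JSON the endpoint returns; the real response
# has many more fields and repos, and the schema may change.
sample = json.loads("""[
  {"repo": "debian_12",    "version": "1.0", "status": "outdated"},
  {"repo": "nix_unstable", "version": "1.2", "status": "newest"}
]""")

print(newest_versions(sample))  # {'nix_unstable': '1.2'}
```

A live query would be `json.load(urllib.request.urlopen(API.format("curl")))`, but expect breakage: the API makes no stability promises.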
My first thought was a security use case: getting it to a point where it could support SBOM handling and tracking, particularly given all the recent package vulnerabilities.
Since switching to that and Flatpak, my distro choice is "whatever sticks closest to the upstream of [my preferred DE]".
Nix is not there yet in terms of user friendliness. Homebrew for Linux is pretty awesome.
The only issue I have is that it creates a separate user and doesn't support custom prefixes (their page says you are on your own if using a custom prefix). While their reasoning is sound, not having an easy way to know which programs will break under a custom prefix is a bummer for me at work.
So I actually vibe coded a script that does this against a SQLite db. I've been considering bundling it with my task manager so it can know this stuff on the fly.
But yeah, this is a key missing component in Linux user space. Windows lets you encode organizational stuff into an exe, but on Linux binaries don't really have that.
$ apt info whohas
...
Homepage: http://www.philippwesche.org/200811/whohas/intro.html
...
The distribution model on Linux (generally speaking) is different from Windows, though, so I don't think it makes sense to view processes as fully "owned" by the upstream in the same way as on Windows. Instead of letting each individual organization directly have administrator access to rummage around on our machines and install packages, this is mostly delegated to the Linux distribution, which may customize the packages. (And of course the user has the right to customize the program as well, assuming it's FOSS, so ultimately the user is the owner of their own processes.)

The tl;dr is that binaries on Linux really should have an org unit as a metadata field, because when I write a task manager in C it needs to be fast.
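For what it's worth, a bundled lookup table like the parent describes could look something like this. All schema and names here are invented for the sketch, not taken from anyone's actual script:

```python
import sqlite3

# Hypothetical schema: map binary names to the package and origin
# ("org unit") that ships them, so a task manager can look this up
# quickly without touching the package manager at runtime.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE binaries (name TEXT PRIMARY KEY, package TEXT, origin TEXT)")
conn.execute("INSERT INTO binaries VALUES ('sshd', 'openssh-server', 'OpenBSD/OpenSSH')")

def origin_of(binary_name):
    """Return (package, origin) for a binary name, or None if unknown."""
    return conn.execute(
        "SELECT package, origin FROM binaries WHERE name = ?", (binary_name,)
    ).fetchone()

print(origin_of("sshd"))  # ('openssh-server', 'OpenBSD/OpenSSH')
```

In a real deployment the db would ship read-only alongside the binary and be populated from distro metadata at build time.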
List of Linux package search databases:
Go and find me all the repolists and package/software metadata for any distro and OS ever released. Write the results to a local SQLite. Incrementally update, but don't hammer the sources to death. Provide a web UI and CLI.