There probably is not "an algorithm" on a site as large and complex as Twitter, no. There are probably dozens if not hundreds of algorithms spread throughout the codebase that affect the timeline for individual users, possibly including components whose behavior is entirely learned by ML systems.
Well sure there is "ORDER BY" something, and there are batch jobs to calculate this something from different variables. Maybe it is partially an ML black box, but I'm pretty sure they are not flying completely blind. There's got to be an internal wiki or some PPTs where they list what goes into the mix: number of likes, retweets, attached media, hashtags, trending or not. Do they do sentiment analysis and use positivity as a factor? Do they do PageRank so that likes from important people have more weight? Are there manual debuffs for certain topics? Are certain posts removed or added in a second pass? These are all answerable questions in principle.
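To make the "ORDER BY something" idea concrete, here's a purely hypothetical sketch of what such a scoring function could look like. Every signal, weight, and field name here is invented for illustration; nothing is claimed about Twitter's actual system:

```python
import math
from dataclasses import dataclass

@dataclass
class Tweet:
    likes: int
    retweets: int
    has_media: bool
    author_rank: float  # hypothetical PageRank-style author weight, 0..1
    sentiment: float    # hypothetical sentiment score, -1..1
    age_hours: float

def score(t: Tweet) -> float:
    # Engagement signals, log-dampened so huge counts don't dominate.
    engagement = math.log1p(t.likes) + 2.0 * math.log1p(t.retweets)
    # Bonuses for the other speculated factors (all weights made up).
    media_bonus = 0.5 if t.has_media else 0.0
    author_bonus = 1.5 * t.author_rank
    sentiment_bonus = 0.3 * max(t.sentiment, 0.0)  # reward positivity only
    # Recency decay: halve the score roughly every 6 hours.
    decay = 0.5 ** (t.age_hours / 6.0)
    return (engagement + media_bonus + author_bonus + sentiment_bonus) * decay

# The "ORDER BY" itself: sort candidate tweets by descending score.
def rank(tweets: list[Tweet]) -> list[Tweet]:
    return sorted(tweets, key=score, reverse=True)
```

A "manual debuff" or second-pass removal would then just be an extra multiplier or filter applied before or after this sort, which is part of why the questions above are answerable in principle: each factor is a separate, inspectable term.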
In the end, it's still code that is being executed, which can be called "an algorithm". Algorithms are under no obligation to be simple, or to consist of a single unified procedure.
At this point, what harm would that do to Twitter? The company's tech has been stagnant for years, and the only real features recently have been attempts to exploit Twitter's network effects to ape Clubhouse & Substack. (Seems successful for the former, kind of a failure for the latter.)