Sure there are. If nothing else, hard disks have much more consistent latency characteristics for reads and writes. So, for example, you might trade some extra write IOs to ensure data is organized efficiently on disk, reducing the number of read IOs you'll need later. With an SSD that's largely a waste of time: random reads are so much cheaper, and the "contiguous" blocks you think you are seeing are mapped all over the drive by the flash translation layer anyway. You want to organize things reasonably efficiently when you write, and then rewrite as little as possible, ideally never. LSMs tend to fit the SSD paradigm much better than, say, balanced trees for this reason.

Similar story with clustered indexes in databases. If you use a clustered index on an SSD, it's usually on something like a timestamp, where new records invariably land near the end of the index. Clustering on anything else means bad write performance; on a hard disk that trade might still be worth it for the read performance, but on an SSD, where random reads are already cheap, it is just an unmitigated disaster.
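To make the LSM-vs-tree point concrete, here's a toy sketch (not a real storage engine; all sizes are made-up illustration parameters) comparing how many page writes, and in what order, an update-in-place tree issues versus an append-only LSM-style buffer for the same batch of random-key inserts:

```python
import random

# Toy model: compare the page-write pattern of an update-in-place
# (B-tree-like) structure vs. an append-only (LSM-like) one for the
# same inserts. KEY_SPACE, PAGE_KEYS, and N_INSERTS are arbitrary
# illustration values.

KEY_SPACE = 10_000      # range of possible keys
PAGE_KEYS = 4           # keys that fit in one page
N_INSERTS = 100

random.seed(42)
inserts = [random.randrange(KEY_SPACE) for _ in range(N_INSERTS)]

# Update-in-place: each insert dirties the page that owns that key's
# slot in sorted order, so the write pattern follows the (random) key
# order -- scattered writes, i.e. seeks on a hard disk.
keys_per_page = KEY_SPACE // (N_INSERTS // PAGE_KEYS)
btree_writes = [k // keys_per_page for k in inserts]

# Append-only: inserts accumulate in an in-memory buffer and are
# flushed as full, consecutive pages -- one sequential pass, and far
# fewer page writes for the same data.
lsm_writes = list(range(N_INSERTS // PAGE_KEYS))

print("update-in-place page writes:", len(btree_writes), "(random order)")
print("append-only page writes:    ", len(lsm_writes), "(sequential)")
```

The in-place side issues one scattered page write per insert, while the append-only side batches the same 100 keys into 25 sequential page writes. On a hard disk the scatter costs you seeks; on an SSD it costs you rewrites of flash blocks, which is exactly the "rewrite as little as possible" problem.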
There was a time when people started treating SSDs as "just random access storage," concluding "there is little to worry about" and "unoptimized workloads are always going to work better on SSDs." Yup, SSDs are way faster than what came before them, but if anything that means the data structures & algorithms that used to make sense might not make much sense any more.