Since I am hitting the reply depth limit: you “solve” a dataset or task when you translate the model into actual real-world impact, i.e., by creating a model that actually “works” (not just one with high accuracy). Otherwise, what is the point of training the model, other than writing blog posts? On top of that, it is entirely possible to train a model that performs well on the dataset but is far less useful in the real world.
This is a health dataset, and health has many inputs and outputs (e.g., at the cell level, the protein level, tumors, organs, etc.). In this case it is mRNA-focused, a broad category that potentially translates to immune responses like vaccines (exactly what kind of therapy, I’m not sure, beyond the “25 species” mentioned). Once the model is trained, you can use it to solve real problems: perhaps to develop a therapy that makes its way to clinical trials and eventually treats some disease. The model by itself is useless without the ability to have that impact.
For other examples: take any disease (e.g., Covid19), create a dataset that mirrors the problem using some technique (e.g., some form of Covid19 mRNA prediction), and solve it to create a treatment (e.g., a safe and effective vaccine). You could object that the vaccine can always be improved, so it is never truly “solved”, but most people would be quite happy with an “almost cure for cancer” even if it weren’t literally optimal (we don’t even know whether a cure for cancer is possible).
My suggestion and question to the author: outline the implications of the work rather than focusing on accuracy statistics, which are meaningless without that context.