You can fine-tune a model on new information, but that is not the same as training it from scratch: fine-tuning nudges the existing weights rather than rebuilding what the model learned during pretraining, so it can only get you so far.
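As a toy illustration of that "nudging" (a hypothetical one-parameter regression, nothing like an actual LLM): a briefly fine-tuned parameter ends up between its pretrained value and what from-scratch training on the new data would find.

```python
import numpy as np

def train(w, xs, ys, lr=0.1, steps=100):
    # Plain gradient descent on mean squared error for the model y = w * x.
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 100)

# "Pretraining": fit the old relationship y = 2x to convergence.
w_pre = train(0.0, xs, 2 * xs, steps=500)

# "Fine-tuning": only a few steps on the new relationship y = 3x.
w_ft = train(w_pre, xs, 3 * xs, steps=5)

# "From scratch": training on the new relationship alone, to convergence.
w_scratch = train(0.0, xs, 3 * xs, steps=500)

print(w_pre, w_ft, w_scratch)
```

The fine-tuned weight moves toward 3 but does not reach it, while the from-scratch weight lands on 3; the gap shrinks with more fine-tuning steps, which is the sense in which fine-tuning "can only get you so far" for a fixed budget.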
You might even be able to poison a model so that it resists later fine-tuning on certain information, but that's just a conjecture.