You can't determine that purely from Unicode; you also have to know the conventions used in writing Arabic script. Unicode, however, is not intended to encode such conventions.
Suppose those conventions change, as they have throughout history? Or suppose there are variations of them across regions or sub-dialects? In Arabic, for example, it's often possible to determine a word's pronunciation from its context in a sentence, but in other contexts it isn't, and so the tashkil are added. There's no way for a system like Unicode to decide that for you. Suppose you cut-and-paste a word from one sentence into another: should Unicode somehow automatically add or remove the tashkil? No, that's up to the author (e.g. performing the edit in a word processor), or up to the program performing the operation if it's being done programmatically.
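One half of that operation is purely mechanical, because the tashkil are encoded as separate combining characters: a program can strip them without any linguistic knowledge. Adding them back is the part that needs an author's judgment. A minimal sketch of the stripping direction (the `strip_tashkil` name is my own):

```python
import unicodedata

def strip_tashkil(text: str) -> str:
    """Remove Arabic diacritics, which Unicode encodes as
    combining marks (general category 'Mn')."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(
        ch for ch in decomposed if unicodedata.category(ch) != "Mn"
    )

# kataba "he wrote", fully vocalized:
# KAF + FATHA, TEH + FATHA, BEH + FATHA
word = "\u0643\u064E\u062A\u064E\u0628\u064E"
print(strip_tashkil(word))  # KAF TEH BEH, the bare consonantal skeleton
```

The reverse direction, deciding which tashkil a bare word should carry in a given sentence, has no such one-liner; that's exactly the layer Unicode leaves to people and higher-level software.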
Unicode provides one layer in the stack. Fonts provide another layer. Program code or editorial sensibility provides another. Many criticisms of Unicode are premised on the expectation that it should be solving problems that belong to a different layer. Not all criticisms: it's a complex system that has had to make many compromises, and there have been mistakes in its history, but taken overall it's been unbelievably successful and useful.
I'm in awe of the way it solves such a huge range of problems in this space, so people picking nits about the gaps that remain piss me off, especially when their complaints are based on a fundamental misunderstanding of the problem it's actually solving. Cynicism is easy; solving hard problems is not. I know who gets my respect.