One of the proposed selling points of representing your data semantically is that you can infer across it, and disjointness assertions are pretty important in this regard. How disjointness assertions ought to be used (Smith) is another issue.
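For concreteness, here's a minimal sketch of a disjointness assertion doing actual inferential work. I'm using owlready2 with the HermiT reasoner here (my choice of tooling, not anything the argument depends on, and the ontology and class names are made up): assert one individual into two disjoint classes, and the reasoner flags the ontology as inconsistent.

    from owlready2 import *

    onto = get_ontology("http://example.org/demo.owl")

    with onto:
        class Animal(Thing): pass
        class Mineral(Thing): pass
        AllDisjoint([Animal, Mineral])   # the disjointness assertion

        # assert the same individual into both disjoint classes
        oops = Animal("oops")
        oops.is_a.append(Mineral)

    try:
        sync_reasoner()   # runs HermiT; needs Java on the path
        print("consistent")
    except OwlReadyInconsistentOntologyError:
        print("inconsistent: something is both an Animal and a Mineral")

Without the AllDisjoint line, the reasoner would happily accept the double classification; the disjointness assertion is what gives it something to catch.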
A related beef, though, is that with any reasonably sized dataset, ontology-based inference is computationally very expensive; you have to cut all sorts of corners and know all sorts of tricks to actually infer across your data. In other words, if semantic data are going to become truly ubiquitous, we need to be able to infer across them in real time. Inference takes everything in your dataset into account, so adding a single axiom means that, if you want to stay complete, you have to compute all over again -> slow.
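To make the "compute all over again" point concrete, here's a rough sketch using rdflib and owlrl (again my choice of tooling, on a toy subclass chain I invented). Forward-chaining materializers like owlrl's expand() rebuild the deductive closure of the whole graph, so one new axiom costs you another full pass:

    import time
    import rdflib
    from rdflib.namespace import RDF, RDFS
    from owlrl import DeductiveClosure, RDFS_Semantics

    EX = rdflib.Namespace("http://example.org/")
    g = rdflib.Graph()

    # toy taxonomy: a 200-deep subclass chain with one instance at the bottom
    for i in range(200):
        g.add((EX[f"C{i + 1}"], RDFS.subClassOf, EX[f"C{i}"]))
    g.add((EX.thing, RDF.type, EX["C200"]))

    t0 = time.perf_counter()
    DeductiveClosure(RDFS_Semantics).expand(g)  # materialize the full closure
    print(f"initial closure: {time.perf_counter() - t0:.2f}s, {len(g)} triples")

    # one new axiom at the top of the hierarchy...
    g.add((EX.C0, RDFS.subClassOf, EX.Root))

    # ...and staying complete means paying for a whole new pass over everything
    t0 = time.perf_counter()
    DeductiveClosure(RDFS_Semantics).expand(g)
    print(f"after one axiom: {time.perf_counter() - t0:.2f}s, {len(g)} triples")

Incremental reasoning techniques exist, but out of the box this is the deal: the second pass costs about as much as the first, even though only one triple changed.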