The way I see it, there's a spectrum of ways of handling this: type systems, validation code, documentation, integration tests (validating runtime behaviour), and static analysis tooling. I don't agree that a static type system is the best way of integrating modules (often it's only barely adequate). The optimal solution is going to vary with each project's requirements: e.g., is the modular code going to be consumed via a networked API or as a library, and is it internally controlled or third-party? How many teams are going to be working on it? How much care has been taken with backwards compatibility? If we break the interface of some random minor function every update, a static type system may help; then again, if it's just for our team, who cares? I'm sure we've all seen updates make internal modifications that break runtime behaviour but don't alter data models or function signatures in a way that gets picked up by a compiler.
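To make that last point concrete, here's a toy Python sketch (the function and its semantics are invented for illustration): a "minor" library update that keeps the exact same signature, so a type checker sees nothing wrong, while every caller silently computes the wrong answer.

```python
# Hypothetical library function across two releases. The signature is
# identical, so a compiler/type checker is satisfied by both versions.

def discount_v1(price: float, pct: float) -> float:
    """v1: pct is a fraction, e.g. 0.1 means 10% off."""
    return price * (1 - pct)

def discount_v2(price: float, pct: float) -> float:
    """v2 'patch': pct is now a percentage, e.g. 10 means 10% off."""
    return price * (1 - pct / 100)

# A caller written against v1 still type-checks against v2,
# but quietly returns the wrong price at runtime.
print(discount_v1(100.0, 0.1))  # 90.0
print(discount_v2(100.0, 0.1))  # ~99.9 -- same call, broken behaviour
```

Only an integration test exercising the actual prices would catch this; no amount of static typing on `float -> float` can.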
Even with the most extreme type systems, interfaces are eventually going to need documentation and examples to explain domain/business-logic peculiarities. What if a library interface requires a list in sorted order? Is it better to leak a LibrarySortedList type into the caller's codebase? The modularity starts to break down. The alternative is to use a standard-library generic List type, but then you can't force the data to be sorted; to encode that kind of information you need dependent types or similar. A different example would be a database connection library: every database supports different key/value pairs for connection strings. If the database library released a patch that deprecated support for some arbitrary connection string param, you wouldn't find out until someone tried to run the code. Static analysis tools may catch common things like connection strings, but IME there are always some custom "stringly" typed values in business applications, living in a DB schema written 10+ years ago.
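The sorted-list problem can be sketched in Python (the `SortedInts` alias and `binary_search` are hypothetical, purely for illustration): a named wrapper type documents the invariant, but without dependent types nothing stops a caller from asserting it falsely.

```python
from typing import NewType

# Hypothetical contract: the library wants its input pre-sorted. NewType
# gives the invariant a *name*, but enforces nothing at runtime.
SortedInts = NewType("SortedInts", list)

def binary_search(xs: SortedInts, target: int) -> bool:
    """Correct only if xs really is sorted ascending."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return True
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return False

# This type-checks, but the "sorted" claim is a lie at runtime:
lying = SortedInts([3, 1, 2])
print(binary_search(lying, 3))   # False, even though 3 is in the list
print(binary_search(SortedInts([1, 2, 3]), 3))  # True
```

The type system happily accepts the first call; only validation code (or a dependent type) could reject it.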
We also have to consider that the majority of our data arrives serialised, from files or over the wire. It's necessarily constrained to generic wire protocols, which have lower fidelity than data parsed into a more featured type system. Given that this type of data is getting validated or rejected directly after deserialisation, how much extra value is derived from having the compiler reiterate your validation code? Non-zero for sure, but probably not as much as we like to think.
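A minimal sketch of that boundary (the `Order` shape and field rules are invented for illustration): data arrives as generic JSON, so every invariant the type system could promise internally still has to be re-checked by hand the moment it's deserialised.

```python
import json
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    quantity: int  # must be positive -- the annotation alone can't say so

def parse_order(raw: bytes) -> Order:
    """Deserialise then validate: the wire format knows nothing of our types."""
    data = json.loads(raw)
    if not isinstance(data.get("order_id"), str):
        raise ValueError("order_id must be a string")
    qty = data.get("quantity")
    if not isinstance(qty, int) or isinstance(qty, bool) or qty <= 0:
        raise ValueError("quantity must be a positive integer")
    return Order(order_id=data["order_id"], quantity=qty)

order = parse_order(b'{"order_id": "A-17", "quantity": 3}')
print(order)  # Order(order_id='A-17', quantity=3)
```

Once this validation exists at the edge, the compiler's guarantee that `quantity` is an `int` downstream is mostly restating work the boundary code already did.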