C's built-in arrays are super weak, so you need a library to get proper resizable arrays. Since C doesn't have generics, such a library will use void * as the type for putting values into the array and getting them back out again. You'll be casting at every point of use, and nothing will check that you got the cast right, other than running the code and crashing.
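A minimal sketch of what such a void*-based growable array looks like (names and layout are my own, and error handling is omitted). Note that the container only ever sees void *, so every read requires an unchecked cast by the caller:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    void **items;     /* array of untyped pointers */
    size_t len, cap;
} vec;

static void vec_push(vec *v, void *item) {
    if (v->len == v->cap) {
        v->cap = v->cap ? v->cap * 2 : 8;
        /* realloc failure handling omitted for brevity */
        v->items = realloc(v->items, v->cap * sizeof *v->items);
    }
    v->items[v->len++] = item;
}
```

At the point of use you write something like `int *p = (int *)v.items[0];` and if the element was actually a struct foo *, nothing complains until run time.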
There are other options, though, like macros and code generation. Code gen in particular can give you more options than generics without sacrificing any type safety.
http://attractivechaos.github.io/klib/#Khash%3A%20generic%20...
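The macro approach can be sketched roughly like this (a hypothetical macro in the spirit of klib's kvec, not klib's actual API): a single macro stamps out a complete, fully typed container per element type, so pushing the wrong type is a compile error and elements are stored by value:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical macro: expands to a struct and a push function
   specialized for element type T. Error handling omitted. */
#define DEFINE_VEC(name, T)                                      \
    typedef struct { T *items; size_t len, cap; } name;          \
    static void name##_push(name *v, T item) {                   \
        if (v->len == v->cap) {                                  \
            v->cap = v->cap ? v->cap * 2 : 8;                    \
            v->items = realloc(v->items, v->cap * sizeof(T));    \
        }                                                        \
        v->items[v->len++] = item;                               \
    }

/* One line per instantiation; int_vec_push(&v, x) is type-checked. */
DEFINE_VEC(int_vec, int)
```

Because the element type is baked in, elements live directly in the backing array (no per-element heap pointer), which is where the efficiency win over void * comes from.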
This approach isn't just more strongly typed than using void * for everything, it's also typically more efficient. For an array, you can put larger structs directly in the array rather than being forced to use a pointer. For a hash table, the same applies, plus you can avoid expensive indirect calls to compute hashes. (There are alternative non-macro approaches that trade off that overhead for other types of overhead, but you can't do as well as with a specialized container.)
I guess you could ask, at that point, why not just use C++? And a lot of people do, and the people left writing new C programs are often traditionalists who don't want to switch to new approaches. And to be fair, there are disadvantages to macro-based containers, like increased build times. But I still think there's room for them to see more adoption.
Static typing versus dynamic typing is fairly binary: if your types are checked at compile time they're probably static, while if they're checked at run time they're probably dynamic. Haskell and C are statically typed; Python and JavaScript are dynamically typed.
Strong/weak typing is more of a spectrum. A strong type system can check many properties of programs and accommodate many patterns as types. A weak type system, on the other hand, can't check many properties of programs, and has to be bypassed to accommodate common patterns. JavaScript has probably the weakest type system, because it checks almost nothing ("hi" + 42 returns "hi42" even though this is nonsensical, {}.foo returns undefined rather than throwing a type error). C is fairly weakly typed because you can add disparate types (int* + int returns int* even if you intended to add two integers) and the type system has to be bypassed with void* to do anything sizeable. Python, ironically, is slightly stronger, in that applying operators to objects of types with no defined relationship throws exceptions ("hi" + 42 errors). A spectrum from weakest to strongest might look something like: JavaScript, C, Go, Ruby, Python, Java, C#, OCaml, Haskell.
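The int* + int case is easy to demonstrate concretely. In this sketch (function name is my own), the addition compiles without a peep because C defines it as pointer arithmetic, whether or not that's what you meant:

```c
#include <assert.h>

static int demo_pointer_plus_int(void) {
    int a[4] = {10, 20, 30, 40};
    int *p = a;
    int offset = 2;
    /* int* + int: the compiler happily treats this as pointer
       arithmetic, even if you intended plain integer addition */
    int *q = p + offset;
    return *q;   /* two elements past a[0], i.e. a[2] */
}
```

If `p` had accidentally been an index rather than a pointer (or vice versa), this is exactly the kind of mixup that sails through the type checker.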
My personal experience is that the difference between static and dynamic types isn't very important to my development process or code quality. I have to run and unit test my code to verify it, so the checks happen regardless of whether they happen at compile time or run time. But the difference between strong and weak typing is huge. Strong types catch more bugs, but perhaps more importantly, they catch those bugs where they occur. A type error when adding "hi" + 42 is far more useful for debugging than a mysterious unit test failure on a completely different function where it's returning "Hi42username" instead of "Hi Username" because you added the wrong variable. A segfault 30 lines later is harder to debug than an error when trying to add an int to the value at an int*.
Except when it's not. Just five days ago I was debugging an error in my Erlang port driver caused by passing the receiver (an ErlDrvTerm, an int in disguise) where I wanted the number of iterations. The funnier thing was that the declaration of the function had the arguments in the correct order (and that's what guided me), but the definition had them swapped. The compiler didn't catch that bug because, well, both are ints, so apparently the declaration and definition match, don't they?
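That bug class is easy to reproduce in a few lines (function and parameter names here are hypothetical, not the actual driver code). C compares declarations and definitions by parameter types only, so swapping two same-typed parameters is invisible to the compiler:

```c
#include <assert.h>

/* Declaration: (receiver, iterations) -- this is what callers read. */
static int run_batch(int receiver, int iterations);

/* Definition: parameters silently swapped. The types still line up
   (int, int), so the compiler sees a matching prototype and says
   nothing. */
static int run_batch(int iterations, int receiver) {
    (void)receiver;
    return iterations * 10;   /* pretend each iteration does 10 units */
}
```

A caller following the declaration writes `run_batch(recv, 3)` expecting 3 iterations, but the definition treats `recv` as the iteration count. A newtype-style wrapper (e.g. a one-member struct per role) would have made the mismatch a type error.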