The extended Kalman filter allows the state updates and observations to be nonlinear, by the straightforward expedient of replacing them with linear approximations near the current estimate.
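A single predict-and-update cycle of that idea might look like the following sketch. The specific model functions in the test are hypothetical toy examples; the structure is the standard EKF recursion, with the Jacobians standing in for the linear approximations.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict+update cycle of an extended Kalman filter.

    x, P : current state estimate and covariance
    z    : new observation
    f, h : nonlinear state-update and observation functions
    F, H : functions returning their Jacobians at a given state
    Q, R : process and observation noise covariances
    """
    # Predict: propagate the estimate through f, and the covariance
    # through the linearization of f at the current estimate.
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Q

    # Update: the ordinary Kalman update, using the linearization of h.
    Hx = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = Hx @ P_pred @ Hx.T + R             # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```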
The ensemble Kalman filter also allows for nonlinear updates and observations; instead of keeping track of the expectation and covariance of the state (i.e., everything you need to define a Gaussian model, things that behave nicely under linear transformations but not under nonlinear ones) it keeps track of an "ensemble" of sample values, applies the update and observation functions to those, and then estimates expectations and covariances from this ensemble. It's a sort of Monte Carlo Kalman filter.
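As a sketch of that "Monte Carlo Kalman filter" idea, here is one step of a stochastic ensemble Kalman filter. The perturbed-observation update used here is one common variant (there are others, e.g. square-root forms); the model functions in the test are made-up examples.

```python
import numpy as np

def enkf_step(ensemble, z, f, h, Q, R, rng):
    """One step of a (stochastic) ensemble Kalman filter.

    ensemble : (N, d) array of N sample state vectors
    z        : new observation
    f, h     : nonlinear state-update and observation functions
    Q, R     : process and observation noise covariances
    """
    N, d = ensemble.shape
    # Forecast: push each ensemble member through the (nonlinear)
    # update, adding a draw of process noise.
    fc = np.array([f(x) for x in ensemble])
    fc = fc + rng.multivariate_normal(np.zeros(d), Q, size=N)

    # Estimate means and covariances from the ensemble, in place of the
    # exact quantities a Gaussian Kalman filter would carry around.
    obs = np.array([h(x) for x in fc])
    X = fc - fc.mean(axis=0)
    Y = obs - obs.mean(axis=0)
    Pxy = X.T @ Y / (N - 1)          # state-observation cross-covariance
    Pyy = Y.T @ Y / (N - 1) + R      # predicted-observation covariance
    K = Pxy @ np.linalg.inv(Pyy)     # Kalman gain

    # Update each member against a perturbed copy of the observation,
    # so the updated ensemble has (approximately) the right spread.
    z_pert = z + rng.multivariate_normal(np.zeros(len(z)), R, size=N)
    return fc + (z_pert - obs) @ K.T
```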
A couple of other things worth knowing about:
Intermediate between the extended KF and the ensemble KF is the "unscented Kalman filter". Like the ensemble KF it estimates expectations and covariances from a set of sample points; but instead of propagating those samples through repeated steps, it picks the sample points afresh at each step (deterministically, from the current estimated expectation and covariance) and uses them only to compute new expectations and covariances. More expensive than the EKF but copes better with substantial nonlinearities.
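The core of that is the "unscented transform": from a mean and covariance in d dimensions, pick 2d+1 "sigma points", push them through the nonlinear function, and re-estimate the mean and covariance from the results. A sketch, using one common (Merwe-style) choice of scaling parameters and weights:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a mean and covariance through a nonlinear function f
    via deterministically chosen sigma points."""
    d = len(mean)
    lam = alpha ** 2 * (d + kappa) - d
    S = np.linalg.cholesky((d + lam) * cov)  # a matrix square root

    # Sigma points: the mean itself, plus/minus each column of S.
    pts = [mean]
    pts += [mean + S[:, i] for i in range(d)]
    pts += [mean - S[:, i] for i in range(d)]
    pts = np.array(pts)

    # Weights for the mean and covariance estimates.
    wm = np.full(2 * d + 1, 1.0 / (2 * (d + lam)))
    wc = wm.copy()
    wm[0] = lam / (d + lam)
    wc[0] = lam / (d + lam) + (1 - alpha ** 2 + beta)

    # Push the points through f and re-estimate mean and covariance.
    fp = np.array([f(p) for p in pts])
    new_mean = wm @ fp
    diff = fp - new_mean
    new_cov = (wc[:, None] * diff).T @ diff
    return new_mean, new_cov
```

One pleasant property: when f is linear the transform is exact, so for the identity function you get back exactly the mean and covariance you started with.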
Extrapolating beyond the ensemble KF is the "particle filter", which uses the same "track an ensemble of samples" approach but gives up the assumption that all the errors are Gaussian. You need a larger ensemble to get good results, I think, but it can cope with a wider range of scenarios. (I find the name "particle filter" annoyingly distracting; the "particles" are the samples, which I guess you're supposed to think of as a cloud of points in possible-configuration-of-the-system space, and of course it's a "filter" in the same way as the Kalman filter is.)
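One step of the simplest ("bootstrap") particle filter looks like the following sketch. Note that nothing here assumes Gaussianity: the dynamics f can be any stochastic update, and the likelihood can be any density you can evaluate. The 1-D model in the test is a made-up example.

```python
import numpy as np

def particle_step(particles, z, f, likelihood, rng):
    """One step of a bootstrap particle filter.

    particles  : (N, d) array of samples from the current posterior
    z          : new observation
    f          : stochastic state update, f(x, rng) -> new state
    likelihood : likelihood(z, x) -> p(z | x); need not be Gaussian
    """
    N = len(particles)
    # Propagate each particle ("sample") through the noisy dynamics.
    prop = np.array([f(x, rng) for x in particles])

    # Weight each particle by how well it explains the observation...
    w = np.array([likelihood(z, x) for x in prop])
    w /= w.sum()

    # ...then resample: draw N particles in proportion to their weights,
    # so the ensemble is unweighted again for the next step.
    idx = rng.choice(N, size=N, p=w)
    return prop[idx]
```

After the resampling step, particles that explained the observation well appear many times and poor ones disappear, which is how the cloud of points tracks the posterior.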