In scientific investigations, it's best to study one component, or feature, at a time. A common approach is to drop that feature into an existing architecture so its effect can be assessed in isolation. Many papers that imitate brain architecture use only one such feature per study; I've seen them try stateful neurons, spiking, sparsity, Hebbian learning, hippocampus-like memory, etc. Others study combinations of these features.
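As a concrete illustration of one such feature studied in isolation, here is a minimal sketch of a plain Hebbian update rule ("cells that fire together wire together"): the weight between two units grows in proportion to the product of their activities. The function name and learning rate are illustrative, not taken from any particular paper.

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Return updated weights: w_ij += lr * pre_i * post_j.

    weights[i][j] connects presynaptic unit i to postsynaptic unit j.
    """
    return [
        [w + lr * x * y for y, w in zip(post, row)]
        for x, row in zip(pre, weights)
    ]

# Two presynaptic units, two postsynaptic units, weights start at zero.
w = [[0.0, 0.0], [0.0, 0.0]]
w = hebbian_update(w, pre=[1.0, 0.0], post=[1.0, 1.0])
# Only weights from the active presynaptic unit change.
print(w)  # [[0.1, 0.1], [0.0, 0.0]]
```

Swapping a rule like this in for backpropagation on some layer of an otherwise standard network is exactly the kind of single-feature experiment described above.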
So, the field spans brain-inspired tweaks to common ML, specific components that closely follow brain design (in software or hardware), and whole architectures built on brain principles with deliberate artificial deviations. And everything in between. :)