Which was a day late and a dollar short, even at release. The ANE is only really good at inference, and even then you often get faster results from the Apple Silicon GPU. It's slow, feature-limited, and not integrated into the GPU architecture the way Nvidia's tensor cores are, which kills any chance of Apple Silicon seeing serious AI server usage.
If you want to brag about Apple's AI hardware prowess, talk about MLX instead. Compared to Nvidia's approach, the ANE was a pretty obvious misstep, and plenty of other companies were already shipping their own neural accelerators before Apple made theirs.