There are a few aspects to discuss here.
First, in 2006 (five years ago), top-of-the-line x64 chips were Core 2 Duos clocked at about 2.2GHz; even a mid-level processor would be something like 1.6GHz. A PC would probably have had about 2GB of RAM.
An iPad 2 is a 1GHz, dual-core device. Further, it is built for mobile, and so sacrifices performance for the sake of battery life, at the very least in subtle ways. It has 512MB of RAM. That's not quite equivalent, not yet. Certainly the next round of mobile hardware will be above and beyond the level of those desktops by a good margin.
Second, there are the performance characteristics I mentioned before. I don't know all the details of ARM, but for example: an instruction can only encode a small immediate directly (12 bits of encoding); larger constants must be loaded from a constant pool. The iPad's L1 cache is significantly smaller than a desktop processor's. Both architectures have 16 general-purpose registers, but spilling to the stack takes more of a hit on ARM.
Then there is code quality. Current codegen for ARM is not as good as for x64: checking a value's tag and performing a conditional jump in JaegerMonkey currently produces three instructions where one would do. When a constant is loaded via the constant pool, there is no check for whether the load was actually necessary (was it already loaded?). A lot of effort went into code-generation quality on x86, and that attention has only recently shifted to ARM; I expect those pieces of the puzzle to be solved very quickly.
To address your other point, what information is indicative of Mozilla being "stretched too thin"?
(For my credentials when talking about Firefox's JavaScript JIT: I am an intern currently working on the next-generation JS JIT, IonMonkey. Code is here: http://hg.mozilla.org/projects/ionmonkey/)