The lower "HazCams" (hazard-avoidance cameras), which captured those initial photos, are there to detect hazards such as rocks and trenches. They are stereoscopic, lightweight, and high resolution.
My guess is that using color sensors would have reduced the 3D mapping precision (a Bayer color filter costs effective resolution), added weight/power/bandwidth requirements, or otherwise been less robust in that environment.
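To make the bandwidth point concrete, here's a back-of-envelope sketch (my own hypothetical numbers, not actual mission specs): an uncompressed full-color frame is three times the data of a monochrome frame at the same resolution, which matters on a constrained downlink.

```python
# Rough comparison of raw frame sizes: monochrome vs. full-color RGB.
# The 1024x1024 resolution is a made-up example, not the real sensor spec.

def frame_bytes(width, height, channels, bits_per_sample=8):
    """Uncompressed frame size in bytes."""
    return width * height * channels * bits_per_sample // 8

W, H = 1024, 1024
gray = frame_bytes(W, H, channels=1)  # monochrome frame
rgb = frame_bytes(W, H, channels=3)   # full-color frame

print(gray)  # 1048576 bytes (~1 MiB)
print(rgb)   # 3145728 bytes (~3 MiB), 3x the downlink cost per frame
```

Compression and Bayer-pattern readout change the exact ratio, but the direction of the trade-off holds: color costs either data volume or effective resolution.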
Those cameras were also already deployed during the landing phase, and their lower data volume likely let them transmit images more quickly. The other cameras were shielded for the landing phase.
The navigation and other cameras are in color, and I expect we'll be seeing better images shortly.
[1] This comes to mind whenever a question like that is asked: http://4.bp.blogspot.com/-CWM1zDcmWXs/TroD0VsX4WI/AAAAAAAAAV...