I think this could have been done with GNU parallel. One advantage I see with Spark is that it is easier to interact with Python; for example, these two lines are all that is needed to call the relevant Python function:
urls = sc.parallelize(batched_data)
labelled_images = urls.flatMap(apply_batch)
So if you already have a cluster with Spark installed (as Databricks does), then it takes less work to just call your Python code than to set up a GNU Parallel cluster and write a small wrapper script. Additionally, a Python script would have to load/init the models on every call from Parallel. I agree that this is not a great demonstration of Spark's main strengths.
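To make the load-once point concrete, here is a sketch of the per-partition initialization pattern that Spark's mapPartitions enables, using plain Python iterators to stand in for an RDD partition (load_model and the URL strings are hypothetical placeholders, not from the original post):

```python
def load_model():
    # Stand-in for an expensive model initialization step.
    return {"name": "dummy-classifier"}

def apply_partition(urls):
    # Called once per partition: the model is loaded a single time
    # and reused for every URL, unlike a wrapper script that GNU
    # Parallel would re-exec (and re-initialize) on each invocation.
    model = load_model()
    for url in urls:
        yield (url, model["name"])

# With Spark this would be: labelled = rdd.mapPartitions(apply_partition)
# Here we just run it over one in-memory "partition":
partition = ["http://example.com/a.jpg", "http://example.com/b.jpg"]
labelled = list(apply_partition(partition))
```

The same shape works for flatMap-style pipelines where each input produces multiple outputs; the key is that initialization cost is amortized over the whole partition.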