I agree about live coding, which is why we send that test to them pre-interview: it's to screen potential candidates.
If they get to an interview, a reasonable amount of it is spent asking what they've done and why, e.g.
* Did they write a CLI parser?
* If so, did they add sensible options (e.g. a flag to force overwriting files that already exist on the destination), or did they not bother checking whether files exist on the destination at all?
* Did they add/stub out tests? If so, to test what?
* What does error handling look like? Boto performs retries by default: is that their justification, or did they just not think about it?
* How else could they have done it (e.g. using a library vs. shelling out to the aws CLI binary)?
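To make the first two points concrete, here's a minimal sketch of the kind of thing I'd hope to see: an argparse CLI with an overwrite flag and an explicit "does it already exist?" decision. The flag names and helper are illustrative, not part of the actual test spec, and the real version would call boto3 (which retries by default) rather than this stub.

```python
import argparse


def build_parser():
    # Illustrative flags only; the test doesn't mandate these names.
    p = argparse.ArgumentParser(description="Copy local files to an S3 bucket")
    p.add_argument("source", help="local file or directory to upload")
    p.add_argument("bucket", help="destination S3 bucket name")
    p.add_argument("--force", action="store_true",
                   help="overwrite objects that already exist at the destination")
    return p


def should_upload(key_exists: bool, force: bool) -> bool:
    # Skip the upload when the key already exists, unless --force was given.
    # key_exists would come from something like s3.head_object() in practice.
    return force or not key_exists
```

The interesting interview conversation is less the code itself than whether the candidate can explain why `--force` defaults to off, and what a HEAD request per key costs on a large sync.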
It's complicated enough that it helps show their proactiveness. The instructions include "making it good", and also say they can stub out or write comments for what they would do if they were doing it for real.
It's saved loads of time in the past by filtering out people who don't even have the basics, and it's relevant to the work.