And how exactly do you propose verifying that a client claiming to be Googlebot or Firefox actually is what it says? The User-Agent header is self-reported, so rules keyed to it are inherently unenforceable.
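To make that concrete, here's a minimal sketch of the problem using Python's standard library (example.com is just a placeholder): any HTTP client can put whatever it wants in that header, and the server only ever sees the string.

    import urllib.request

    # Claim to be Googlebot; the server has no way to tell from
    # this request alone that we aren't.
    req = urllib.request.Request(
        "https://example.com/",
        headers={
            "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                          "+http://www.google.com/bot.html)"
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)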
robots.txt is basically a list of requests that say "this is how we'd like you to crawl us, and we might stop serving you if you don't comply", rather than a hard-and-fast set of directives that guarantee how a crawler will behave.
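Compliance lives entirely on the client side: a polite crawler chooses to fetch robots.txt and check its own requests against it, as in this sketch with Python's built-in robotparser (again, example.com and the /private/ path are placeholders):

    import urllib.robotparser

    # Fetch and parse the site's robots.txt.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # The crawler itself decides whether to honor the answer;
    # nothing stops it from fetching the URL anyway.
    if rp.can_fetch("Googlebot", "https://example.com/private/"):
        print("allowed by robots.txt")
    else:
        print("disallowed by robots.txt")

A crawler that never makes that check never even notices the file exists, which is exactly why the rules read as requests rather than enforcement.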