Sort of. Kind of.
Googlebot only obeys the most specific group in robots.txt that matches it: if there's a `User-agent: Googlebot` section, Googlebot follows that section alone and ignores the global `User-agent: *` rules entirely.
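As a hypothetical illustration of that group-matching behavior (paths made up), Googlebot reads only its own group here, so it will still crawl `/private/` but not `/tmp/`:

```
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /tmp/
```

If you want Googlebot to also honor the global rules, you have to repeat them inside its group.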
Google is also explicit that Disallow rules don't keep a page out of search results:
> However, robots.txt Disallow does not guarantee that a page will not appear in results: Google may still decide, based on external information such as incoming links, that it is relevant. If you wish to explicitly block a page from being indexed, you should instead use the noindex robots meta tag or X-Robots-Tag HTTP header. In this case, you should not disallow the page in robots.txt, because the page must be crawled in order for the tag to be seen and obeyed. [0]
[0] https://developers.google.com/search/docs/advanced/robots/ro...
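To sketch what the quote describes: you leave the page crawlable (no Disallow for it) and instead serve a noindex signal, either as a meta tag in the page's `<head>`:

```
<meta name="robots" content="noindex">
```

or as the equivalent HTTP response header (useful for non-HTML resources like PDFs):

```
X-Robots-Tag: noindex
```

Either way the crawler has to be able to fetch the page to see the directive, which is exactly why pairing it with a robots.txt Disallow defeats the purpose.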