Implemented APIs for clients that were based on scraping data from competitor sites without permission, which is a kind of service hijacking. Similarly, implemented some clone sites that just rip off other people's work. #zuckerberging Some bug bounties ask pentesters not to hit their production servers hard with automated tools... I've ignored this to find bugs in production servers on occasion.
Scraping becomes unethical when it turns into a DoS, doesn't it? In some cases it wouldn't be an issue, but some scraping definitely makes the service less responsive for everyone else.
Yep, this is the only thing I agree with. When I scrape I always try to use as few resources as possible and make sure the rate is appropriate for the size of the site I'm scraping (I wouldn't worry about sending 1k requests/second to Facebook, but wouldn't send more than a dozen per second to a little e-commerce store).
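That kind of per-site throttling can be done with a tiny rate limiter that sleeps between requests. A minimal sketch in Python (the `RateLimiter` class and `max_per_second` knob are illustrative names, not from any particular library):

```python
import time

class RateLimiter:
    """Caps outgoing request rate by sleeping between calls.
    Tune max_per_second to the target site's size (illustrative)."""

    def __init__(self, max_per_second: float):
        self.min_interval = 1.0 / max_per_second
        self.last = 0.0

    def wait(self):
        # Sleep just long enough to keep at least min_interval
        # between consecutive requests.
        elapsed = time.monotonic() - self.last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last = time.monotonic()

# e.g. ~10 req/s for a small shop; a large site could tolerate far more
limiter = RateLimiter(max_per_second=10)
start = time.monotonic()
for _ in range(5):
    limiter.wait()
    # the actual fetch(url) call would go here
elapsed = time.monotonic() - start
```

Honouring `robots.txt` and any `Retry-After` headers on 429 responses is the other half of being a polite scraper, but the pacing above is the part that keeps a small server responsive.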