This is exactly why I'm one of the (probably many) silent switchers away from Fitbit now that this acquisition has happened. Whatever Google says now could change in one, two, or ten years, and my data will still be there.
> I'm sure there will be a number of "incidents" in which teams were given approval to use the data in unsavory ways.
This is my other concern, closely related to the first. Data companies (Google included) have a very different idea of what is "savory" w.r.t. data usage. Not necessarily from a place of malice, but from innocence, privilege, or simply not thinking about the consequences.
Let's say the engineers are building a data-driven feature, such as one that links Fitbit health data to your medical record to recommend tests or interventions that might benefit you. Those engineers may think only about how many lives it will save - the benefits of sharing that data. And there are real benefits, for some people, in that use case. The problem is when those engineers don't consider all the ways that sharing could go wrong, and how many other people could be hurt: discrimination, denial of insurance coverage, stalking, and so on.
Personally, I think it would be incredibly beneficial for most software engineers to spend time learning hacking and adversarial thinking. Teaching the people who build these features to think about how they could, and will, be misused would likely help them build better, safer products. (/soapbox :) )