We've got a few companies using it, and other users in our community have told us "It's a very helpful tool that I didn't even know I needed". Even we ourselves, by dogfooding it, have made a number of surprising discoveries about how often some of our hourly/nightly workflows fail.
Let us know what you think! :)
Have you considered integrating with other CI/CD pipeline engines, or does it not matter as long as you git-commit?
Thanks in advance.
With regard to cost, you can self-host GitHub Actions runners. Internally we run an autoscaling group of spot instances via k8s, which is much cheaper than GitHub-hosted runners. We actually didn't do it for the cost; we did it because we wanted a local cache to persist on machines during the day, for performance reasons. When cost is a factor, having an analytics system that helps you optimize your jobs is super important.
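To make that concrete, here's a minimal sketch of what routing a job to self-hosted runners looks like; the `spot` label and the `/mnt/cache` path are hypothetical examples of how labels and a persistent disk cache might be set up, not our actual config:

```yaml
# Hypothetical workflow snippet: target self-hosted runners by label.
# Unlike GitHub-hosted runners, self-hosted machines keep their disks
# between jobs, so a well-known local path can serve as a warm cache.
jobs:
  build:
    runs-on: [self-hosted, linux, spot]  # example labels
    steps:
      - uses: actions/checkout@v4
      - name: Restore dependencies from the machine's local cache
        run: cp -r /mnt/cache/deps ./deps || true  # tolerate a cold cache
```

The trade-off with spot instances is that machines can be reclaimed mid-job, so this setup works best for retry-safe workloads.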
I really like this - with this product, we're focusing on telling the story and making the data actionable. I'm hoping we can both show that there is a problem and guide you toward fixing it. Otherwise, at the end of the day, it's just colorful lines.
If you're already paying for Datadog, the price probably isn't an issue for you, though.