Sure, you can reuse tools to achieve similar results. As with everything else, the devil is in the details. Does your monitoring system save results forever, or does it only let you report back 90 days? Can you compare two runs in a meaningful way, i.e. not just logs, but also interactively plotting and exploring your results? Do you need to spend hours instrumenting your code? Can you sort Jenkins jobs by a parameter or metric? What about reporting new results to an existing experiment? There are many more examples. But in any case, if you can reuse your CI/CD system for ML experiment management, you should do that. Another question worth considering: if this is a "failed idea", why would engineering-led tech companies build these systems? Obviously they tried reusing their existing tooling first.
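To make the "comparing two runs" point concrete, here's a minimal sketch in plain Python. The file layout, run IDs, and metric names are all hypothetical, and no real tracking library is used; real experiment trackers persist far more, but the shape of the query is similar, and it's exactly the kind of query that's painful against a CI system's flat console logs:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical experiment store: one JSON file of params + metrics per run.
def log_run(root: Path, run_id: str, params: dict, metrics: dict) -> None:
    payload = {"params": params, "metrics": metrics}
    (root / f"{run_id}.json").write_text(json.dumps(payload))

def compare_runs(root: Path, a: str, b: str) -> dict:
    runs = {rid: json.loads((root / f"{rid}.json").read_text()) for rid in (a, b)}
    # Per-metric delta between the two runs, rounded for readability.
    return {m: round(runs[b]["metrics"][m] - runs[a]["metrics"][m], 3)
            for m in runs[a]["metrics"]}

root = Path(tempfile.mkdtemp())
log_run(root, "run-1", {"lr": 0.1}, {"accuracy": 0.81, "loss": 0.52})
log_run(root, "run-2", {"lr": 0.01}, {"accuracy": 0.87, "loss": 0.40})
print(compare_runs(root, "run-1", "run-2"))  # → {'accuracy': 0.06, 'loss': -0.12}
```

Even this toy version already answers "which run was better, and by how much" structurally rather than by eyeballing logs; the dedicated tools add retention, interactive plots, and sorting by any parameter on top of the same idea.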
The tools we've been building for the past fifty years were designed for software engineering. Machine learning workflows differ in many ways and as such require new tools and approaches. That's our perspective, at least.