Not in this context. Keep in mind that we're talking about machines here. Even before electronic computers existed, it was an explicit expectation that intelligent machines would have to be made to abide by particular rules to prevent harm, an idea summed up in Asimov's Three Laws[0]. I can't see any scenario where a properly programmed intelligence would go against its programming (despite the plots of films like I, Robot and The Matrix). For an AI to cause harm, the allowance would have to be specifically programmed in (for military use, say).
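To make "abide by particular rules" concrete, here's a toy sketch (all names hypothetical, not any real framework) of the Three Laws as a hard gate that every action must pass before execution. The point is architectural: harmful behavior requires an explicit code path, so the machine can't harm anyone unless someone deliberately removes or special-cases the check.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would executing this injure a human?
    disobeys_order: bool   # would it ignore a human's order?
    endangers_self: bool   # would it risk the machine itself?

def permitted(action: Action) -> bool:
    """Asimov-style precedence: each law yields to the ones above it."""
    if action.harms_human:     # First Law: never harm a human
        return False
    if action.disobeys_order:  # Second Law: obey humans (unless First Law applies)
        return False
    if action.endangers_self:  # Third Law: self-preservation (unless above apply)
        return False
    return True

def execute(action: Action) -> None:
    # The only route to acting passes through the constraint gate; an AI
    # could "cause harm" only if this check were deleted or bypassed,
    # i.e. if the allowance were specifically programmed in.
    if not permitted(action):
        raise PermissionError(f"blocked by safety constraints: {action.name}")
    print(f"executing: {action.name}")

execute(Action("fetch coffee", False, False, False))  # runs normally
try:
    execute(Action("push human", True, False, False))
except PermissionError as e:
    print(e)  # blocked by safety constraints: push human
```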
[0] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics