You might be right on the first point. Edit: actually, you might not be. There are languages with compound lvalues and CPU architectures with multiple result registers (x86 being the best-known example). E.g. in Go you can write "result, flags, err := doStuff(a, b, c)", and x86's DIV stores the quotient and the remainder of the division in different registers:
https://c9x.me/x86/html/file_module_x86_id_72.html
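To make the Go side of that concrete, here's a minimal sketch; divmod is a made-up function, but returning several results at once (including an error) is exactly the idiom, and it's roughly analogous to an instruction writing multiple result registers:

```go
package main

import (
	"errors"
	"fmt"
)

// divmod is a hypothetical function returning several results at once,
// much like DIV writing quotient and remainder to separate registers.
func divmod(a, b int) (quot, rem int, err error) {
	if b == 0 {
		return 0, 0, errors.New("division by zero")
	}
	return a / b, a % b, nil
}

func main() {
	q, r, err := divmod(17, 5)
	fmt.Println(q, r, err) // 3 2 <nil>
}
```

(Note that Go does not use parentheses around the left-hand side; it's a bare comma-separated assignment list.)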
And generally, on common CPU architectures the flags register is another result register that is written by almost every operation, so something like c := a + b is actually (c, flags) := a + b. For multiplication there is even the notion of two result registers holding the high and low halves of the full result, i.e. hi * 2^32 + lo = c * d (see x86 MUL). Therefore some precision in language is necessary for this discussion (and yes, the different meanings of ==, =, and := in various languages and in mathematics are also confusing, even to me ;).
I do strongly disagree on the second one about pass/fail, though. This kind of nitpicking is necessary here, because the discussion is about a standard intended to precisely describe such operations, and how the underlying hardware might be utilized to execute them. Being imprecise in this context is dangerous and wrong, and leads to the whole point of the discussion being lost in a sea of handwaving.