> Well, that's too generic to even be searchable
You might want to search for the articles that were discussed here on the subject.
> Whatever standard I use for logical or illogical decisions, wherever I put that line, humans and AI seem to be on the same side
You can make an illogical decision. No computer program can make an illogical decision; although the pathways followed through the code may give rise to something unusual, the program still follows the logic written into it and cannot do otherwise. Even if random values occur on the inputs, the code itself will follow predictable paths. If you write self-modifying code, that too is done in accordance with specific rules.
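To illustrate the point about random inputs, here is a minimal sketch (the function names are hypothetical, just for illustration): the branches taken look "unpredictable", but once the random source is known, every path the code follows is exactly reproducible.

```python
import random

def branchy(x):
    # The branch taken looks arbitrary, but is fully determined by the input.
    if x < 0.3:
        return "left"
    elif x < 0.7:
        return "middle"
    return "right"

def run(seed, steps=10):
    # With the random source pinned to a seed, the "random" inputs, and
    # hence every path taken through branchy(), are exactly reproducible.
    rng = random.Random(seed)
    return [branchy(rng.random()) for _ in range(steps)]

# Two runs with the same seed follow identical paths through the code.
assert run(42) == run(42)
```

The randomness only selects among paths that were already written into the program; it never produces a path outside the program's logic.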
Your example of neurotransmitters, and the comparison to neural networks, is unfortunately flawed. We developed the idea of neural networks from a very simplified understanding of what happens in living neural systems. Our neural networks are (whatever you might think of them) extremely simple and follow very specific logical paths. We may not be able to identify those paths a priori, but we can analyse what happened after the event, and it will still be based on the underlying logic we put into these systems.
> Just to check, you are aware that the weights and biases of an artificial neural network are basically never set by humans? That this process has to be automated?
Yes. The automatic process still relies on specific logical rules.
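As a minimal sketch of that point (a single linear neuron trained with gradient descent on squared error; all names here are illustrative): no human ever sets the weight, yet every update follows a fixed, logical rule.

```python
# A single linear neuron y = w * x, trained by plain gradient descent.
# The weight is never hand-set; it is produced entirely by the update rule.
def train(xs, ys, lr=0.1, epochs=100):
    w = 0.0  # initial weight, not chosen by a human to fit the data
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of the squared error
            w -= lr * grad             # the deterministic update rule
    return w

# Learn y = 3x from examples; w is driven toward 3 purely by the rule.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]
w = train(xs, ys)
```

The "automation" here is just the repeated application of one arithmetic rule; rerunning it on the same data yields the same weight every time.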
If we use only gotos, we can write all sorts of "unpredictable" programs driven by random inputs. Does that generate any intelligence? If we know the code and the inputs, we can determine what has happened.
> AI are inhuman, certainly, but still learning
I would put it this way: AI systems are simply artificial constructs, and the data they generate is simply generated, not learnt.