Kafka is a bit more durable: you can build exactly-once delivery on top of it, but only with in-order processing, because consumers commit an offset rather than acknowledging individual messages; there is no selective ACK. RabbitMQ (AMQP in general) does have selective ACK, which means the broker has to track per-message state in a state DB, and that state needs compacting.
A task/job queue is of course pretty meaningless without a task/job executor. So every message carries some metadata: when to execute it, how many times to retry, a timeout, and where to execute it (if you have labels or otherwise tagged workers/executors; you can think of these as channels, of course).
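That metadata can be sketched as a small envelope around the payload. A minimal illustration, assuming hypothetical field names (nothing here comes from a real queue's wire format):

```python
import time
from dataclasses import dataclass, field

# Hypothetical task-message envelope; every name below is illustrative.
@dataclass
class TaskMessage:
    task_name: str                                  # which registered task to run
    args: tuple = ()                                # serialized call arguments
    eta: float = field(default_factory=time.time)   # earliest execution time (epoch seconds)
    max_retries: int = 3                            # how many times to retry on failure
    timeout_s: float = 60.0                         # kill the task if it runs longer
    channel: str = "default"                        # route to workers tagged with this label
```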
The distinction is not very clear, just as you implied. A message queue with selective ACK can be used to build a job queue, but then you need to write a small library to serialize and deserialize your arguments, to register workers/tasks, and you need an event loop that listens for messages, unpacks each one, loads the task, runs it, and ACKs it.
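That worker loop is small. A runnable sketch, using an in-memory `queue.Queue` to stand in for the broker (the names `task`, `enqueue`, and `run_one` are made up for illustration):

```python
import json
import queue

broker = queue.Queue()   # stands in for a real message queue with per-message ACK
TASKS = {}               # task registry: name -> callable

def task(fn):
    """Register a function so workers can look it up by name."""
    TASKS[fn.__name__] = fn
    return fn

@task
def add(a, b):
    return a + b

def enqueue(name, *args):
    # Serialize the call; only plain data crosses the wire.
    broker.put(json.dumps({"task": name, "args": args}))

def run_one():
    raw = broker.get()
    msg = json.loads(raw)                       # unpack the message
    result = TASKS[msg["task"]](*msg["args"])   # load the task and run it
    broker.task_done()                          # "ACK" only after success
    return result

enqueue("add", 2, 3)
print(run_one())  # -> 5
```

A real implementation would wrap `run_one` in a loop, catch exceptions, and NACK or requeue on failure instead of acking, but the shape is the same.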
In advanced (complicated/complex) job queues you can also report progress, which is handy for checkpointing. But that's really just a DB the running task uses to persist some data, so it's almost orthogonal to the task-scheduling function.
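To make the "almost orthogonal" point concrete, here is a sketch of checkpointing with a plain dict standing in for the DB; the function names are hypothetical. Notice the scheduler is not involved at all:

```python
# In-memory stand-in for the checkpoint DB; any key-value store would do.
checkpoints = {}

def save_progress(job_id, state):
    # The task persists whatever it needs to resume; also usable for progress reporting.
    checkpoints[job_id] = state

def resumable_sum(job_id, numbers):
    """Sum a list, checkpointing after every element so a restart can resume."""
    start, total = checkpoints.get(job_id, (0, 0))  # resume from last checkpoint
    for i in range(start, len(numbers)):
        total += numbers[i]
        save_progress(job_id, (i + 1, total))       # report progress after each step
    return total
```

If the worker dies mid-task, a retry with the same `job_id` picks up from the last saved index instead of starting over.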