For example, an in-place algorithm like bubble sort has O(1) space complexity, because it needs only a constant amount of extra memory (a few loop variables) regardless of input size. Merge sort, on the other hand, is O(n), because it allocates additional memory for its intermediate merge steps, and that memory scales with n.
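A minimal sketch of the two cases (the implementations here are just illustrative, not canonical versions):

```python
import random

def bubble_sort(a):
    """In-place: only a few loop variables, so O(1) extra space."""
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def merge_sort(a):
    """Allocates new lists at every level; the top-level merge alone
    holds n elements, so O(n) extra space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

data = [random.randint(0, 99) for _ in range(50)]
assert bubble_sort(data[:]) == merge_sort(data) == sorted(data)
```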
Doing a quick Google search, the first few sites I find seem to use a similar understanding: https://www.geeksforgeeks.org/time-and-space-complexity-anal...
space complexity is O(n), but the auxiliary space complexity is given in Theta notation instead.
But people aren't too picky about the notation and will usually say something like "O(1) extra space" rather than using Theta.
Saying something is O(n) tells you it grows at most linearly, but that bound would also admit slower growth rates, e.g. log n.
Saying something is Theta(n) tells you it grows exactly linearly: that is, it is not a slower growth rate like log n, nor a faster growth rate like n^2.
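A concrete way to see the gap: binary search over n elements is O(n) in a technically-true-but-loose sense, while Theta(log n) is the tight bound. Counting the actual comparisons makes the logarithmic growth visible (a sketch; the step-counting helper is mine, not from any library):

```python
import math

def binary_search_steps(n, target):
    """Count comparisons while binary-searching for target in range(n)."""
    lo, hi, steps = 0, n - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid == target:
            return steps
        if mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

for n in (2**10, 2**15, 2**20):
    # Worst case is roughly at the ends of the range.
    worst = max(binary_search_steps(n, t) for t in (0, n - 1))
    # Steps track log2(n) and stay far below the linear bound n.
    print(n, worst, math.ceil(math.log2(n)))
```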
Heavily simplified due to caches etc., to the point where people sometimes measure in cache misses instead, as that is usually what actually matters.
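The classic illustration of that point is traversal order over a matrix: row-by-row walks memory in layout order, while column-by-column strides across it, which in a lower-level language shows up directly as cache misses. A rough sketch (pure-Python timings are dominated by interpreter overhead and are noisy, so treat this as a demonstration of the access patterns rather than a benchmark):

```python
import time

N = 1000
matrix = [[1] * N for _ in range(N)]

def sum_rows(m):
    # Walk each row contiguously: a cache-friendly access pattern.
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_cols(m):
    # Jump to a different row on every access: a stride-N pattern
    # that, in C or similar, translates into cache misses.
    total = 0
    n = len(m)
    for j in range(n):
        for i in range(n):
            total += m[i][j]
    return total

t0 = time.perf_counter()
r = sum_rows(matrix)
t1 = time.perf_counter()
c = sum_cols(matrix)
t2 = time.perf_counter()
assert r == c == N * N
print(f"row-major: {t1 - t0:.3f}s  column-major: {t2 - t1:.3f}s")
```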