Because sometimes you're not sure your belief is correct. To show what TLA+ can do, I used a simple example where the proposition (f is the zero function) is obviously true, yet proving it works very differently from how programming works:
THEOREM fIsTheZeroFunction ≜
f = [x ∈ Int ↦ 0]
PROOF
⟨1⟩ DEFINE zero[x ∈ Int] ≜ 0
⟨1⟩1. ∃ g ∈ [Int → Int] : ∀ x ∈ Int : g[x] = -g[x] BY zero ∈ [Int → Int]
⟨1⟩2. ∀ g ∈ [Int → Int] : (∀ x ∈ Int : g[x] = -g[x]) ⇒ g = zero OBVIOUS
⟨1⟩3. QED BY ⟨1⟩1, ⟨1⟩2 DEF f
The TLAPS proof checker for TLA+ verifies this proof instantly.
You can then use that proof like so:
THEOREM f[23409873848726346824] = 0
BY fIsTheZeroFunction
But when you specify something non-trivial, say a distributed system, you want to make sure that your proposition -- say, that no data can be lost even in the face of a faulty network and crashing machines -- is true, yet you're not sure of it in advance.
Writing deductive proofs like the one above can be tedious when they're not so simple. The TLA+ toolbox therefore contains another tool, called TLC, that can automatically verify certain propositions (with some caveats), especially those that arise when specifying computer systems (though it cannot automatically prove that f is zero everywhere).
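For instance, here's a minimal sketch of the kind of spec TLC handles well (the module name and variable are made up for illustration): a counter that cycles through 0..2, with a type invariant TLC can check by exhaustively exploring the state space:

MODULE Counter
EXTENDS Naturals
VARIABLE x
Init ≜ x = 0
Next ≜ x' = (x + 1) % 3
TypeOK ≜ x ∈ 0..2
Spec ≜ Init ∧ □[Next]_x

Asking TLC to check TypeOK as an invariant of Spec requires no proof at all: TLC enumerates every reachable state and reports a counterexample trace if the invariant is ever violated.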
So the purpose of my example wasn't to show something useful to engineers in and of itself, but to show that TLA+ works very differently from programming languages and is useful for different things: not to create running software, but to answer important questions about software.