There are two types of nodes in the system, "servers" and "users". All servers communicate with each other so that every server is aware of all other existing "servers" globally. There will likely be a limit on who is allowed and authorized to be a server ( likely enforced via a list of their public keys published on the project's main site ).
All packets must be hashed and authorized ( signed ) by a server before they can be sent out. Any packet not authorized by a server will be rejected. In this way, servers rate limit originating packets.
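The hash-then-authorize step could be sketched as follows. This is a minimal stdlib-only illustration, not the project's actual scheme: a keyed HMAC stands in for the public-key signatures the server list implies, and the key and function names are assumptions.

```python
import hashlib
import hmac

# Placeholder for the server's signing key; a real deployment would use
# a public-key signature scheme (e.g. Ed25519) so relays can verify
# packets using only the published server public keys.
SERVER_KEY = b"server-secret-key"

def authorize_packet(packet: bytes) -> bytes:
    """Server side: hash the packet, then sign the digest."""
    digest = hashlib.sha256(packet).digest()
    return hmac.new(SERVER_KEY, digest, hashlib.sha256).digest()

def is_authorized(packet: bytes, signature: bytes) -> bool:
    """Relay side: reject any packet whose authorization does not verify."""
    digest = hashlib.sha256(packet).digest()
    expected = hmac.new(SERVER_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

Because only the server holds the signing key, any node can drop unauthorized packets without contacting the server, which is what makes the rate limiting enforceable at every hop.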
The traffic is distinguishable because the onion layer is identified at each level. If, after unwrapping your layer, you discover the underlying "level" is not 1 less than your own, the originating IP will be flagged back to the server for spamming.
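The per-hop level check could look like this sketch, where the level field and the flagging path are assumptions about the eventual packet format:

```python
# After unwrapping its own layer, a node expects the inner layer's level
# to be exactly one less than its own. Anything else indicates a forged
# or corrupted onion, and the originating IP gets flagged to the server.

def layer_is_consistent(my_level: int, inner_level: int) -> bool:
    """Return True to forward; False means flag the origin for spamming."""
    return inner_level == my_level - 1
```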
The current estimate for the number of onion levels is 100. The actual destination should be somewhere in the middle, around layer 60 or so, to prevent government tracing of the packet across all of the hops.
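Placing the destination mid-route might be sketched like this; the exact window around layer 60 is an assumption, since the notes only give a rough estimate:

```python
import random

TOTAL_LEVELS = 100  # current estimate from the design notes

def pick_destination_level(rng: random.Random) -> int:
    # Keep the real destination well away from both ends of the route,
    # near the middle (around layer 60 per the notes), so an observer at
    # either end cannot tell how far the packet still has to travel.
    return rng.randint(55, 65)
```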
At each hop, random delays will be introduced, as well as injection of extra data and/or disguising of the entire packet as another form of TCP communication ( to prevent the packets from being dropped by filtering ).
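The delay-and-pad step could be sketched as below. The delay bounds and padding sizes are illustrative assumptions, and the TCP-disguise step is omitted entirely:

```python
import os
import random
import time

def obfuscate(packet: bytes, rng: random.Random) -> bytes:
    # Random delay so timing correlation across hops is harder.
    time.sleep(rng.uniform(0.0, 0.2))
    # Append random padding so packet sizes do not match across hops.
    # A real format would need a length field so the next hop can
    # strip the padding; that bookkeeping is elided here.
    padding = os.urandom(rng.randint(16, 256))
    return packet + padding
```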
The "servers" know that users are sending out origination packets at a specific rate and to whom, but nothing about the layers beyond the first. Additionally, the entire onion packet is never sent to any server, only a hash of it along with specific other information. This way, "servers" can never be accused of harboring illegal data of any sort.
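The hash-only report to the server could be sketched like this; the metadata fields bound into the digest ( sender, first hop ) are assumptions, since the notes only say "specific other information":

```python
import hashlib

def packet_commitment(onion_packet: bytes, sender_id: str, first_hop: str) -> str:
    # The server receives only this digest, never the onion packet itself,
    # so it can track rates and destinations of first hops without ever
    # holding the data it is being asked to approve.
    h = hashlib.sha256()
    h.update(sender_id.encode())
    h.update(first_hop.encode())
    h.update(hashlib.sha256(onion_packet).digest())
    return h.hexdigest()
```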
Forwarding rates are measured because each onion layer contains a "message of receipt" that is sent back to the server. The server receives all of those from the originating user before approving the onion packet.
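The approval gate could be sketched as below; representing receipts as one layer index each is an assumption about a structure the notes leave unspecified:

```python
def approve_packet(num_layers: int, receipts: list[int]) -> bool:
    # The server approves the onion packet only once the originating user
    # has presented a receipt for every layer; duplicates do not count.
    return set(receipts) == set(range(num_layers))
```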
The only thing attackers of the system could try to do is attempt to DDoS users of the system. This is prevented inherently because it is known at all times which users are logged into the system ( this is public knowledge in the system ). Communication is only allowed from valid, well-behaved users.
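The admission rule above reduces to a simple membership check; the two sets and their names here are assumptions about how the public login roster and behavior tracking would be represented:

```python
def accept_traffic(sender: str, logged_in: set[str], misbehaving: set[str]) -> bool:
    # Traffic is accepted only from users who are currently logged in
    # ( the login roster is public knowledge in the system ) and who have
    # not been flagged for misbehavior.
    return sender in logged_in and sender not in misbehaving
```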