It goes without saying that all binary network protocols should document their byte order, and that if you're implementing a protocol documented as big endian you should use ntohl and friends to ensure correctness.
However, if you're designing a new network protocol, choosing big endian is insanity. Use little endian, skip the macros, and just add something like

```c
/* Note: a bare #ifndef LITTLE_ENDIAN check misfires, since <endian.h>
   defines both LITTLE_ENDIAN and BIG_ENDIAN on every platform. Test the
   byte order itself (GCC and Clang predefine these, no header needed). */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ != __ORDER_LITTLE_ENDIAN__
#error "little-endian targets only"
#endif
```

to a header somewhere.
What does it actually cost you to define a macro which is a no-op on little endian architectures and then use it at the point of serialization/deserialization?