We do often find add(a, b, c) written simply as Σ(a, b, c), and similarly mul as Π. The binary sub operator can simply be rewritten in terms of add and unary minus; the fact that we write (a - b) instead of (a + [-b]), or perhaps Σ(a, [-b]), is ultimately a matter of notational convenience, but it comes at some cost in mathematical elegance. Considering operators that are commutative yet not associative is not very useful; ultimately we want more from our expression rewriting than just flipping left and right subexpressions within an expression tree while keeping the overall complexity unchanged.
Usually you'd have to write that as \sum_{v \in \{a, b, c\}} v. One of the ways I think conventional math notation could in fact be improved is by separating the aggregate function (summation) from the generation of the items, allowing you to write \sum \{a, b, c\}, at the minor cost of having to write \sum_{i = 1}^N i^2 as something like \sum |_{i=1}^N i^2.
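As a hedged aside, Python's sum() already makes exactly this separation between the aggregation and the generation of the items; a minimal sketch (the variable names are just for illustration):

    a, b, c, N = 2, 3, 5, 10

    # Aggregate over an explicitly listed collection, like \sum {a, b, c}.
    print(sum({a, b, c}))

    # Aggregate over generated items, like \sum |_{i=1}^N i^2.
    print(sum(i**2 for i in range(1, N + 1)))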
It's not conventional to write commutative-but-not-associative functions as infix operators, but I don't think that's for any principled reason; it's just that they're not very common. Non-associative operators such as subtraction and function application are almost universally written infix, with the empty string serving as the infix operator in the case of function application. The most common commutative-but-not-associative infix operator is probably the Sheffer stroke for NAND (although Sheffer himself used it to mean NOR in his 01913 paper: https://www.ams.org/journals/tran/1913-014-04/S0002-9947-191...).
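A quick sketch (my own illustration, not part of the argument above) confirming that NAND is commutative but not associative, using a hypothetical helper named nand:

    from itertools import product

    def nand(a, b):
        return not (a and b)

    booleans = [False, True]

    # Commutative: a NAND b == b NAND a for all inputs.
    assert all(nand(a, b) == nand(b, a) for a, b in product(booleans, repeat=2))

    # Not associative: some inputs make the two groupings disagree,
    # e.g. (False, False, True) gives False one way and True the other.
    counterexamples = [(a, b, c) for a, b, c in product(booleans, repeat=3)
                       if nand(nand(a, b), c) != nand(a, nand(b, c))]
    print(counterexamples)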
You can go a bit further in the direction of logical manipulability, as George Spencer Brown did with "Laws of Form" (LoF): his logical connective, the "cross", is an N-ary negation function whose arguments are written under the operation symbol without separators between them, and he denotes one of the elementary Boolean values as the empty string (let's call it false, making the cross NOR). ASCII isn't good at reproducing his "cross" notation, but if we use brackets instead, we can represent his two axioms as:
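(If I'm remembering the book correctly, these are what Spencer Brown calls the law of calling and the law of crossing; take the exact bracket transliteration below as an approximation rather than as his notation.)

    [] [] = []      axiom 1, the law of calling
    [ [] ] =        axiom 2, the law of crossing (the right-hand side is the empty string)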
In this way Spencer Brown harnesses the free monoid on his symbols: the empty string is the identity element of the free monoid, so appending it to the arguments of a cross doesn't change them and thus can't change the cross's value. Homomorphically, false is the identity element of disjunction, under which the Booleans form a bounded semilattice and thus a monoid. This allows not only the associative axiom but also the identity axiom to be simple string identity, which seems like a real notational advantage. (Too bad there isn't any equivalent for the commutative axiom.) It allows Spencer Brown to derive all of Boolean logic from those two simple axioms.
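To make that reading concrete, here is a minimal Python sketch (my own illustration; the function lof_eval and its linear bracket syntax are assumptions, not anything from the book) that evaluates juxtaposition as disjunction with the empty string as false and each bracketed group as NOR of its contents, then checks the two axioms above:

    def lof_eval(expr, env):
        """Evaluate a bracket-notation LoF expression over Boolean variables."""
        def parse(i):
            # Evaluate a juxtaposition (disjunction) starting at index i,
            # stopping at an unmatched ']' or at the end of the string.
            value = False                       # the empty string is false
            while i < len(expr) and expr[i] != ']':
                c = expr[i]
                if c == '[':
                    inner, i = parse(i + 1)     # contents of the cross
                    assert i < len(expr) and expr[i] == ']', "unbalanced brackets"
                    value = value or not inner  # a cross is NOR of its contents
                    i += 1
                elif c.isspace():
                    i += 1
                else:
                    value = value or env[c]     # a single-letter variable
                    i += 1
            return value, i

        value, i = parse(0)
        assert i == len(expr), "unbalanced brackets"
        return value

    # The two axioms: [][] = [] (calling) and [[]] = "" (crossing).
    assert lof_eval('[] []', {}) == lof_eval('[]', {})
    assert lof_eval('[ [] ]', {}) == lof_eval('', {})

    # The NOR reading: [a b] is the negation of (a or b).
    assert lof_eval('[a b]', {'a': False, 'b': False}) is True
    assert lof_eval('[a b]', {'a': True, 'b': False}) is False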
However, so far, I haven't found that the LoF notation is an actual improvement over conventional algebraic notation. Things like normalization to disjunctive normal form seem much more confusing:
It's a little less noisy in Spencer Brown's original two-dimensional representation (note that the vertical breaks between the U+2502 BOX DRAWINGS LIGHT VERTICAL characters are not supposed to be there; possibly if you paste this into a text editor or terminal it will look better) but not, to my eye, any less confusing.