Probably not. Conventional math notation has three major advantages over the "[n]o superscripts or subscripts or [G]reek letters and weird symbols" approach you're proposing:
1. It's more human-readable. The superscripts and subscripts and weird symbols permit preattentive processing of formula structures, accelerating pattern recognition.
2. It's familiar. Novel math notations face the same problem as alternative English orthographies like Shavian (https://en.wikipedia.org/wiki/Shavian_alphabet) in that, however logical they may be, the audience they'd need to appeal to consists of people who have spent 50 years restructuring their brains into specialized machines to process the conventional notation. Aim t3mpted te rait qe r3st ev q1s c0m3nt 1n mai on alterned1v i6gl1c orx2grefi http://canonical.org/~kragen/alphanumerenglish bet ai qi6k ail rez1st qe t3mpt8cen because, even though it's a much better way to spell English, nobody would understand it.
3. It's optimized for rewriting a formula many times. When you write a computer program, you only write it once, so there isn't a great burden in using a notation like (eq (deriv x (pow e y)) (mul (pow e y) (deriv x y)) 1), which takes 54 characters to say what the conventional math notation¹ says in 16 characters³. But when you're performing algebraic transformations of a formula, you're writing the same formula over and over again in different forms, sometimes only slightly transformed; the line before that one said (eq (deriv x (pow e y)) (deriv x x) 1), for example². For this purpose brevity is essential, and, as we know from information theory, the number of symbols a formula needs shrinks in inverse proportion to the logarithm of the size of your symbol repertoire, so all those weird symbols buy real compression. (A quick sympy illustration of the size gap follows this list.)
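For a concrete feel for the difference, here's a rough sketch using sympy (my own stand-in for the comparison, not something from the linked notebook; the equation is a simplified version of footnote ², and the exact srepr output varies by sympy version):

    # Compare a fully explicit prefix encoding of an equation with the
    # conventional two-dimensional rendering of the same equation.
    from sympy import Derivative, Eq, Integer, exp, pretty, srepr, symbols

    x, y = symbols('x y')
    eqn = Eq(Derivative(exp(y), x), Integer(1))   # d/dx e^y = 1

    print(srepr(eqn))    # prefix form with no special symbols: several times longer
    print(pretty(eqn))   # conventional layout, with a fraction bar and a superscript

The prefix form carries exactly the same information; it just spends several times as many characters on it, which is what hurts when you're copying a formula by hand a dozen times.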
We could certainly improve conventional math notation, and in fact mathematicians invent new notation all the time in order to do so, but the direction you're suggesting would not be an improvement.
People do make this suggestion all the time. I think it's prompted by a common experience: they've always found math difficult, they've always found math notation difficult, and they infer that the former is caused by the latter. The inference is reasonable, but it's wrong. Math is inherently difficult, as far as anybody knows (an observation famously attributed to Euclid), and the difficult-looking notation actually makes it easier: because of it, undergraduates routinely perform mental feats that defied Archimedes.
______
¹ \frac{d}{dx} e^y = e^y \frac{dy}{dx} = 1
² \frac{d}{dx} e^y = \frac{d}{dx} x = 1
³ See https://nbviewer.org/url/canonical.org/~kragen/sw/dev3/logar... for a cleaned-up version of the context where I wrote this equation down on paper the other day.
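For readers who don't click through: judging purely from footnotes ¹ and ², the surrounding work is presumably the standard implicit-differentiation derivation of the derivative of the natural logarithm. The chain below is my reconstruction of that derivation, not a quotation from the notebook:

    e^y = x
    \frac{d}{dx} e^y = \frac{d}{dx} x = 1
    e^y \frac{dy}{dx} = 1
    \frac{dy}{dx} = \frac{1}{e^y} = \frac{1}{x}

Each line is the previous one with a single small rewrite applied, which is exactly the copy-the-formula-again workload that point 3 is about.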
AsciiMath makes easy equations easy to read.
Footnotes 1 and 2 would be:
1) d/dx e^y = e^y dy/dx = 1
2) d/dx e^y = d/dx x = 1
edit: fixed; I got them wrong the first time.
> ... It's optimized for rewriting a formula many times.
It's not just "rewriting" arbitrarily, either: it's rewriting according to well-known rules of expression manipulation such as associativity, commutativity, and distributivity of various operations, the properties of equality and order relations, and so on. It's precisely when you have such strong, identifiable properties that you tend to reach for operator-like notation in any formalism (including a programming language) - not least because that's where the notion of "rewriting an expression" is at its most effective.
(This is generally true in reverse, too; it's why, for example, text-like operators such as fadd() and fmul() are far better suited to the actual low-level properties of floating-point computation than FORTRAN-style infix expressions, which can be actively misleading.)
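A minimal sketch of that floating-point point, in plain Python (nothing here comes from the parent comments; the 0.1/0.2/0.3 constants are just the usual demonstration values): exact arithmetic licenses the associativity rewrite, but IEEE-754 doubles don't, so notation that invites the rewrite can mislead.

    # Associativity is a sound rewrite rule in exact arithmetic...
    from fractions import Fraction
    a, b, c = Fraction(1, 10), Fraction(2, 10), Fraction(3, 10)
    assert (a + b) + c == a + (b + c)    # always holds for exact rationals

    # ...but not for IEEE-754 doubles, where the grouping changes the result.
    x, y, z = 0.1, 0.2, 0.3
    print((x + y) + z)                   # 0.6000000000000001
    print(x + (y + z))                   # 0.6
    print((x + y) + z == x + (y + z))    # False

An infix expression like x + y + z quietly suggests the grouping doesn't matter; spelling it fadd(fadd(x, y), z) makes the evaluation order, and therefore the rounding, explicit.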