;)
Keywords: Jacobian, Newton-Raphson, Levenberg-Marquardt, Powell dog leg, Schur complements, sparse QR/Cholesky, and so on. The LLM can figure the rest out. Try it yourself!
I recommend Rust because the methods are old and most of the algorithms are already implemented by crates; you just have to wire them together. Like I said, the hard part is the B-rep: you're not going to find anything equivalent to Parasolid or ACIS in the literature or open source.
Look, I'm not trying to decimate you here, but your list of keywords is wrong, and I know it because I explored that list last month for a completely different application.
The Jacobian is the first-order derivative of a function that takes a vector as input and produces a vector as output, hence it must be a matrix: one row per output, one column per input, each entry a partial derivative.
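To make that concrete, here is a minimal sketch of a forward-difference Jacobian in Rust. Everything here (the function name, the step size) is made up for illustration; a real solver would use an autodiff or linear-algebra crate instead.

    // Forward-difference Jacobian for f: R^n -> R^m.
    fn numeric_jacobian(f: impl Fn(&[f64]) -> Vec<f64>, x: &[f64]) -> Vec<Vec<f64>> {
        let h = 1e-7; // finite-difference step, chosen arbitrarily here
        let fx = f(x);
        let (m, n) = (fx.len(), x.len());
        let mut jac = vec![vec![0.0; n]; m]; // m rows (outputs) x n cols (inputs)
        for j in 0..n {
            let mut xh = x.to_vec();
            xh[j] += h;
            let fxh = f(&xh);
            for i in 0..m {
                jac[i][j] = (fxh[i] - fx[i]) / h; // ~ d f_i / d x_j
            }
        }
        jac
    }

    fn main() {
        // f(x, y) = (x*y, x + y); the Jacobian should be [[y, x], [1, 1]].
        let j = numeric_jacobian(|v| vec![v[0] * v[1], v[0] + v[1]], &[2.0, 3.0]);
        println!("{j:?}");
    }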
Newton-Raphson is an algorithm for finding the roots (= zeroes) of a function. Since the derivative of a function is zero at a minimum, running Newton-Raphson on the derivative solves convex optimization problems.
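A minimal sketch of the iteration, using f(x) = x^2 - 2 as a stand-in problem (to minimize some g instead, run the same loop on g'):

    fn main() {
        // Newton-Raphson on f(x) = x^2 - 2; the root is sqrt(2).
        let (f, df) = (|x: f64| x * x - 2.0, |x: f64| 2.0 * x);
        let mut x = 1.0;
        for _ in 0..8 {
            x -= f(x) / df(x); // x_{k+1} = x_k - f(x_k) / f'(x_k)
        }
        println!("root ~= {x}"); // 1.4142135623730951
    }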
Levenberg-Marquardt is another way to solve optimization problems, specifically nonlinear least squares.
The Powell dog leg method is new to me, but it is a trust-region variant of Gauss-Newton, which you can think of as a special case of Newton-Raphson where the objective is a sum of squared residuals (useful for objectives with vector norms, aka distances between positions); a sketch of a Gauss-Newton step follows below.
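Here is a rough sketch of what that looks like for the kind of constraint a CAD solver actually sees: one free 2D point pinned by two distance constraints. The anchors, radii, and starting guess are invented example data; with 2 residuals and 2 unknowns, the normal equations J^T J d = -J^T r collapse to solving J d = -r directly.

    fn main() {
        let (a, ra) = ([0.0_f64, 0.0], 5.0); // anchor point + required distance
        let (b, rb) = ([8.0_f64, 0.0], 5.0);
        let mut p = [3.0_f64, 2.0]; // initial guess, picks the upper branch
        for _ in 0..10 {
            let da = ((p[0] - a[0]).powi(2) + (p[1] - a[1]).powi(2)).sqrt();
            let db = ((p[0] - b[0]).powi(2) + (p[1] - b[1]).powi(2)).sqrt();
            let r = [da - ra, db - rb]; // residuals: current minus required distance
            // Jacobian rows are the unit vectors from each anchor to p.
            let j = [
                [(p[0] - a[0]) / da, (p[1] - a[1]) / da],
                [(p[0] - b[0]) / db, (p[1] - b[1]) / db],
            ];
            // Solve j * delta = -r by Cramer's rule (fine at 2x2 scale).
            let det = j[0][0] * j[1][1] - j[0][1] * j[1][0];
            let delta = [
                (-r[0] * j[1][1] + r[1] * j[0][1]) / det,
                (-r[1] * j[0][0] + r[0] * j[1][0]) / det,
            ];
            p[0] += delta[0];
            p[1] += delta[1];
        }
        println!("solved point: ({:.4}, {:.4})", p[0], p[1]); // ~ (4, 3)
    }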
Most of these algorithms require solving a linear system at every iteration to find the zero of the derivative. The Schur complement is a way to factor that linear system into a bunch of smaller linear systems, and sparse QR/Cholesky are implementation details of how the linear systems get solved.
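For concreteness, the standard block elimination: partition the system into blocks, eliminate y, and you are left with a smaller system in x whose matrix is the Schur complement of D.

    \begin{pmatrix} A & B \\ C & D \end{pmatrix}
    \begin{pmatrix} x \\ y \end{pmatrix}
    =
    \begin{pmatrix} u \\ v \end{pmatrix}
    \quad\Longrightarrow\quad
    (A - B D^{-1} C)\, x = u - B D^{-1} v,
    \qquad
    y = D^{-1} (v - C x)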
Now that we've got the buzzwords out of the way, I'll tell you the problem with them: constraint solving algorithms are SAT- or SMT-based, and generally not optimization-based.
Consider the humble circle constraint: x^2 + y^2 = r^2. If you have two circles with differing centers and radii, they may intersect, and if they do, they intersect at two points. This is readily apparent in the equations, since solving for a coordinate gives y = ±sqrt(r^2 - x^2), i.e. two branches. This means you will need some sort of branching inside your algorithm, and the optimization algorithms you listed are terrible at this.
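To illustrate the branching, here is a sketch of circle-circle intersection; the +h/-h offsets at the end are exactly the two solution branches. Using the same circles as the Gauss-Newton sketch above, the two answers are (4, 3) and (4, -3).

    // Intersection points of two circles, or None if they don't
    // intersect (or are coincident).
    fn intersect(c0: [f64; 2], r0: f64, c1: [f64; 2], r1: f64) -> Option<[[f64; 2]; 2]> {
        let (dx, dy) = (c1[0] - c0[0], c1[1] - c0[1]);
        let d = (dx * dx + dy * dy).sqrt();
        if d == 0.0 || d > r0 + r1 || d < (r0 - r1).abs() {
            return None;
        }
        let a = (r0 * r0 - r1 * r1 + d * d) / (2.0 * d); // distance from c0 to the chord
        let h = (r0 * r0 - a * a).sqrt(); // half the chord length
        let mid = [c0[0] + a * dx / d, c0[1] + a * dy / d];
        // The two branches: the chord midpoint offset by +h and -h.
        Some([
            [mid[0] + h * dy / d, mid[1] - h * dx / d],
            [mid[0] - h * dy / d, mid[1] + h * dx / d],
        ])
    }

    fn main() {
        println!("{:?}", intersect([0.0, 0.0], 5.0, [8.0, 0.0], 5.0));
    }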
Yeah, but have you tried it? You can throw as many keywords as you want into Claude, but it does get things wrong, sometimes in subtle ways. I've tried it, I know.