Swift is a good reference point in this area because Swift essentially took the dictionary-passing approach of Haskell and added the 'low-level' type information (bit width, offsets, etc.) as a type metadata parameter. The big upside is that Swift gets a good deal of performance compared to Haskell and other languages with uniform (boxed) data representations. So extending the concept I was describing from Haskell to Swift would mean creating concrete syntax for the language that has no semantic identity outside what already exists in the semantics of witness tables. The term "derived form" is sometimes used for syntax like this: concrete syntactic forms that are defined only in terms of syntactic forms in the base/core language. It is, for the most part, correct to view derived forms as macros, or as a more general metaprogramming construct, but in Haskell's case the "macros" are implemented in the compiler and expanded in a particular phase of compilation.
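To make the dictionary-passing idea concrete, here is a minimal sketch of the expansion in Haskell. The names (`MyShowDict`, `greet`, etc.) are my own for illustration and not GHC's actual internal encoding; the point is just that the class becomes a record of methods, the instance becomes a value of that record, and the constraint becomes an ordinary argument:

```haskell
-- Surface syntax (the "derived form"):
--   class MyShow a where myShow :: a -> String
--   instance MyShow Int where myShow = show
--   greet :: MyShow a => a -> String
--   greet x = "value: " ++ myShow x

-- Core-like elaboration (roughly what the compiler expands it into):

-- The class declaration becomes a record ("dictionary") of methods.
newtype MyShowDict a = MyShowDict { myShow_ :: a -> String }

-- The instance declaration becomes a concrete dictionary value.
myShowIntDict :: MyShowDict Int
myShowIntDict = MyShowDict { myShow_ = show }

-- The constraint `MyShow a =>` becomes an explicit dictionary parameter.
greet :: MyShowDict a -> a -> String
greet dict x = "value: " ++ myShow_ dict x

main :: IO ()
main = putStrLn (greet myShowIntDict (42 :: Int))  -- prints "value: 42"
```

Swift's witness tables play the role of `myShowIntDict` here, with the type metadata (size, alignment, etc.) passed alongside.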
Swift is also a good touch point for considering what the compiler does with the actual implementation of ad-hoc polymorphism. Swift optimizes by monomorphizing instances of a generic function, given some heuristics about the expected performance gains from specializing the function for a given type (GHC does this for Haskell too, but Haskell still pays a performance price for using boxed values).
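In GHC the same kind of specialization can be requested explicitly with a `SPECIALIZE` pragma, which asks the compiler to emit a monomorphic copy of an overloaded function at a known type, replacing dictionary lookups with direct calls (the function below is an illustrative example, not anything from the thread):

```haskell
-- An overloaded function: at Core level it takes a Num dictionary.
sumSquares :: Num a => [a] -> a
sumSquares = foldr (\x acc -> x * x + acc) 0
-- Ask GHC to generate a dictionary-free copy specialized to Int.
{-# SPECIALIZE sumSquares :: [Int] -> Int #-}

main :: IO ()
main = print (sumSquares [1, 2, 3 :: Int])  -- prints 14
```

Swift's optimizer makes the analogous decision automatically, guided by its cost heuristics, falling back to the witness-table (dictionary) calling convention when it declines to specialize.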
So to answer the question: the part of Haskell's implementation of typeclasses that I think is the correct method is that it is merely a derived syntactic form that the compiler expands into Haskell's actual language (its abstract syntax, encoded as Core in GHC in particular). From this perspective, Swift doesn't provide the derived form; it just provides the implementation directly for developers to use, omitting the sugar that Haskell provides. I tend towards explicitness as a strong default in language design.
Rust doesn't currently have a formal semantics, so they could certainly adopt a derived-form approach to traits, but I don't know enough Rust to determine which existing design decisions, assumptions, and type-theoretic 'commitments' would act as obstacles to such a semantics.
As to MiniRust: Ralf Jung (at ETH Zurich) has done some excellent work, along with some of his students (Max Vistrup's recent paper on Logics à la Carte is very, very good). MiniRust does attempt to be comparable to Haskell's GHC Core. So, in the sense of being or approaching what I would view as the correct way to implement ad-hoc polymorphism (excluding type-theory-based options), yes. To sum up: MiniRust as the semantic definition of a derived syntactic form, plus a compilation phase that 'expands' the trait macro.
Those explanations aside, my issues with ad-hoc polymorphism do not go away under this implementation; I'm generally opposed to generic functions (especially function-name overloading). But I think that if a language is pursuing ad-hoc polymorphism as a feature, it should pursue it in a well-founded and formal manner.
> I’m generally opposed to generic functions
I agree there are significant costs to generics (especially readability), but there are large classes of problems that are a right pain without them. Even Go added them eventually.
Thanks again for taking the time to answer! I think I have a bit more comfortable understanding of what's going on. I appreciate it!
> I’m generally opposed to generic functions
I'd be interested to know how, in your preferred model, you'd handle things like `Vec<T>` or `HashMap<K, V>`, without duplicating code.