An example: there is a text "shaping" library that takes a font and an input string and produces a sequence of glyphs to typeset that string. Modern fonts and certain scripts are very complex, so this task is not trivial. Now, this particular library takes a UTF-8 string, which means it has a UTF-8 decoder inside.
But a text shaping library does not need a UTF-8 decoder. The product it is used in will almost certainly have one already; and if that product works in UTF-16 or, like Python, uses a flexible internal string representation, it may not need UTF-8 at all and would have to add an encoding step just to communicate with the library. A simpler design would be to remove the UTF-8 decoder and make the library accept Unicode code points as integers. If we have UTF-8, it is trivial to decode the string and feed the resulting code points into the shaper; if we don't, it is equally trivial to use the library with any other encoding.
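For concreteness, here is a minimal sketch of that "code points in, glyphs out" interface in Rust. The `shape` function and `Glyph` type are hypothetical, not the API of any real shaping library; the point is only that a caller holding a UTF-8 string can produce code points in one line, while callers with other encodings skip the UTF-8 round trip entirely.

```rust
// Hypothetical glyph record returned by the shaper.
struct Glyph {
    id: u32,
    x_advance: i32,
}

// Hypothetical shaper entry point: takes Unicode code points, not encoded bytes.
fn shape(codepoints: &[u32]) -> Vec<Glyph> {
    // Real shaping (ligatures, reordering, positioning) would happen here;
    // this stub just maps each code point to a dummy glyph.
    codepoints
        .iter()
        .map(|&cp| Glyph { id: cp, x_advance: 10 })
        .collect()
}

fn main() {
    // A caller that happens to have UTF-8 decodes it in one line...
    let utf8_input = "Привіт";
    let codepoints: Vec<u32> = utf8_input.chars().map(|c| c as u32).collect();

    // ...and a caller working in UTF-16 or any other encoding can build the
    // same Vec<u32> from its own representation, with no UTF-8 step at all.
    let glyphs = shape(&codepoints);
    println!("{} glyphs", glyphs.len());
}
```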
(I guess I ended up with a slightly different example than I intended.) Anyway, removing the UTF-8 decoder here would result in a simpler and more universal design, although, and this is an unexpected development, it may superficially look more complex to many people who have the "standard" UTF-8 string and just need to get the job done.
If this makes the library harder to use because most people will have UTF-8 strings, I’m not sure that’s a win.