But have a look at the "Thresholding" section. It appears to me that AI would be much better at this operation.
It can benefit from more complex algorithms, but I would stay away from "AI" as much as possible unless there is a genuine need for it. You can analyse your data and set some dynamic thresholds, you can build a small ML model, or even a tiny DL model, and I would try those options in that order. Some cases do need more complex techniques, but more often than not you can solve most of your problems by preprocessing your data. I've seen too many cases where a junior used a giant model that takes forever to run, when a tiny algorithm could have done exactly the same job.
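To make "dynamic thresholds" concrete: here's a minimal sketch of one (the function name and the `k` parameter are my own, not from any library) that derives the cutoff from the image's own statistics instead of hardcoding a number, so it adapts to overall brightness for free:

```python
import statistics

def dynamic_threshold(pixels, k=0.5):
    """Threshold at mean + k*stdev of the image's own intensities,
    so the cutoff adapts to overall brightness instead of being fixed."""
    mean = statistics.fmean(pixels)
    stdev = statistics.pstdev(pixels)
    cut = mean + k * stdev
    return [1 if p > cut else 0 for p in pixels]

# A dim frame and a bright frame of the same scene produce the
# same mask from the same code, with nothing to retrain:
dim = [10, 12, 11, 60, 13]
bright = [200, 205, 202, 250, 203]
print(dynamic_threshold(dim))     # -> [0, 0, 0, 1, 0]
print(dynamic_threshold(bright))  # -> [0, 0, 0, 1, 0]
```

Ten lines, no training data, and when it misbehaves you can print `cut` and see exactly why.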
It indeed would be much better. There’s a reason the old CV methods aren’t used much anymore.
If you want to do anything even moderately complex, deep learning is the only game in town.
There are also many other classical thresholding algos. Don't worry about it :)
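For example, Otsu's method, one of the classic ones: it picks the cutoff that maximizes the between-class variance of the two resulting groups. A minimal pure-Python sketch (assuming 8-bit grayscale values; real code would use an optimized implementation):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: sweep every candidate cutoff t and keep the one
    that maximizes between-class variance w0*w1*(mu0 - mu1)^2."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = 0        # pixel count of the "dark" class (values <= t)
    sum0 = 0      # intensity sum of the dark class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two clear intensity clusters -> the threshold lands between them:
pixels = [20, 22, 25, 21, 200, 210, 205, 198]
print(otsu_threshold(pixels))  # -> 25
```

No hallucinated pixels, and the whole decision is one inspectable number.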
sure, if you don't mind it hallucinating different numbers into your image
It really depends on the application. If the illumination is consistent, such as in many machine vision tasks, traditional thresholding is often the better choice. It’s straightforward, debuggable, and produces consistent, predictable results. On the other hand, in more complex and unpredictable scenes with variable lighting, textures, or object sizes, AI-based thresholding can perform better.
That said, I still prefer traditional thresholding in controlled environments because the algorithm is understandable and transparent.
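And even variable lighting doesn't immediately force you to AI: classical *adaptive* thresholding compares each pixel to its local neighborhood, so gradual illumination drift cancels out. A rough sketch (assuming a 1-D row of grayscale values; the `window` and `bias` parameters are illustrative, not from a specific library):

```python
def local_mean_threshold(row, window=3, bias=2):
    """Adaptive threshold: mark a pixel foreground only if it exceeds
    the mean of its local window by `bias`, so a slow background
    gradient doesn't fool a single global cutoff."""
    half = window // 2
    out = []
    for i, p in enumerate(row):
        lo, hi = max(0, i - half), min(len(row), i + half + 1)
        neighborhood = row[lo:hi]
        mean = sum(neighborhood) / len(neighborhood)
        out.append(1 if p > mean + bias else 0)
    return out

# A bright blob on a rising background gradient is still isolated,
# even though no single global threshold would separate it:
row = [10, 12, 14, 80, 18, 20, 22]
print(local_mean_threshold(row))  # -> [0, 0, 0, 1, 0, 0, 0]
```

If it misfires, you tune two scalars you fully understand, rather than retraining a model.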
Debugging issues in AI systems can be challenging due to their "black box" nature. If the AI fails, you might need to analyze the model, adjust training data, or retrain, a process that is neither simple nor guaranteed to succeed. Traditional methods, however, allow for more direct tuning and certainty in their behavior. For consistent, explainable results in controlled settings, they are often the better option.