I find it the opposite. Unsigned integers are intuitive, while signed integers are unintuitive and cause a lot of tricky bugs, especially in languages where signed overflow is undefined behavior.
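For example, in C (an assumption here, since the comment names no language, though the same holds in C++), unsigned arithmetic is defined to wrap, while signed overflow is undefined and the compiler is allowed to assume it never happens:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned int u = UINT_MAX;
        /* Unsigned arithmetic wraps modulo 2^N, so this is
           guaranteed to print 0. */
        printf("%u\n", u + 1u);

        int s = INT_MAX;
        /* Signed overflow is undefined behavior: this line, and
           any overflow checks written around it, can be "optimized"
           into anything. */
        printf("%d\n", s + 1);
        return 0;
    }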
It's pretty rare to have values that can be negative but are always integers, at least in the work I do. The most common case I encounter is an approximation of something related to log probability, such as the various scores in dynamic programming and graph algorithms.
Most of the time, when you deal with integers, you need special handling to avoid negative values anyway. Once you get used to thinking in unsigned integers, you quickly develop robust habits for keeping values out of negative territory.
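A minimal C sketch of two such habits (the function names are made up for illustration): compare before you subtract, and write downward loops so the index never has to go below zero:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* "a - b < 0" can never be true for unsigned values, so test
       "a >= b" first and only then form the difference. */
    bool fits_with_margin(size_t have, size_t need, size_t margin) {
        return have >= need && have - need >= margin;
    }

    /* "i >= 0" is always true for an unsigned i, so decrement in
       the condition instead. Visits a[n-1] .. a[0] and terminates
       cleanly even when n == 0. */
    void print_reversed(const int *a, size_t n) {
        for (size_t i = n; i-- > 0; )
            printf("%d\n", a[i]);
    }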