One salient point not touched on here is that, an awful lot of the time, what folks are blurring out is specifically text. And since we know a great deal about what text ought to look like, we have much more information to guide the reconstruction...
Captain Disillusion recently covered this subject in a more popular-science format as well.
Can this be applied to camera shutter/motion blur? At low shutter speeds, the slight shake of the camera produces this type of blur; it's usually mitigated with IBIS, which stabilizes the sensor.
Reminds me of the guy who used the Photoshop swirl effect to mask his face in CSAM he produced, and who was found out when someone just undid the swirl.
OK, what about Gaussian blur?
Encode the image as a boundary condition of a laminar flow and you can recover the original image from an observation.
If, however, you observe after turbulence has set in, then some of the information has been lost: it's in the entropy now. How much depends on the turbulent flow.
Don't miss this video by Smarter Every Day:
https://youtu.be/j2_dJY_mIys?si=ArMd0C5UzbA8pmzI
Treat the dynamics and the time of evolution as your private key; laminar flow is a form of encryption.
This is classical deconvolution. Modern deblurring implementations are DNN-based.
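For readers who haven't seen it, here's a minimal numpy sketch of what classical deconvolution looks like in the idealized case: a 1D signal, a known blur kernel, no noise. The signal and kernel are made up purely for illustration.

```python
import numpy as np

# Minimal classical deconvolution: blur with a known kernel, then undo it by
# dividing in the frequency domain. Real images are 2D, but the idea is the same.
rng = np.random.default_rng(0)
signal = rng.random(256)            # stand-in for one row of an image
kernel = np.ones(5) / 5.0           # known 5-tap moving-average blur

n = len(signal)
K = np.fft.fft(kernel, n)           # frequency response of the blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))   # circular blur, for simplicity

# Inverse filter: as long as K has no (near-)zeros, this recovers the signal.
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / K))

print(np.max(np.abs(restored - signal)))   # tiny: essentially perfect, because there is no noise
```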
How do we apply this to geospatial face and licence plate blurs?
Sorry, but this post is the blind leading the blind, pun intended. Allow me to explain; I have a DSP degree.
The reason the filters used in the post are easily reversible is that none of them are binomial (i.e. the discrete equivalent of a Gaussian blur). A binomial blur uses the coefficients of a row of Pascal's triangle, and thus is what you get when you repeatedly average each pixel with its neighbor (in 1D).
When you do, the information at the Nyquist frequency is removed entirely, because a signal of the form "-1, +1, -1, +1, ..." ends up blurred _exactly_ into "0, 0, 0, 0...".
All the other blur filters, in particular the moving average, are just poorly conceived. They filter out the middle frequencies the most, not the highest ones. It's equivalent to doing a bandpass filter and then subtracting that from the original image.
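A quick numpy check of this claim (filter lengths chosen just for illustration): the 3-tap binomial filter has zero response at the Nyquist frequency, while the 3-tap moving average keeps Nyquist and instead has its null at a middle frequency.

```python
import numpy as np

n = 512
freqs = np.fft.rfftfreq(n)                      # 0 ... 0.5 cycles/sample (0.5 = Nyquist)

binomial = np.array([1.0, 2.0, 1.0]) / 4.0      # row of Pascal's triangle, normalized
moving_avg = np.ones(3) / 3.0                   # 3-tap moving average (box filter)

H_binomial = np.abs(np.fft.rfft(binomial, n))   # magnitude responses
H_box = np.abs(np.fft.rfft(moving_avg, n))

print("binomial at Nyquist:  ", H_binomial[-1])          # ~0: "-1,+1,-1,+1" maps to all zeros
print("moving avg at Nyquist:", H_box[-1])               # ~1/3: Nyquist survives
print("moving avg null at f =", freqs[np.argmin(H_box)]) # ~0.33: a middle frequency
```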
Here's an interactive notebook that explains this in the context of time series. One important point is that the "look" that people associate with "scientific data series" is actually an artifact of moving averages. If a proper filter is used, the blurriness of the signal is evident. https://observablehq.com/d/a51954c61a72e1ef
Those unblurring methods look "amazing" in demos like this, but they are very fragile: add even a modicum of noise to the blurred image and the deblurring will almost certainly fail completely. This is well known in signal processing.
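To make that concrete, here's a toy numpy sketch (all values invented for the demo): naive inverse filtering of a known box blur works perfectly on the clean signal, but falls apart once ~1% noise is added.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random(256)                        # toy 1D "image"
kernel = np.ones(7) / 7.0                       # known box blur
n = len(signal)
K = np.fft.fft(kernel, n)

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))
noisy = blurred + rng.normal(0.0, 0.01, n)      # ~1% noise, e.g. sensor noise or compression

def naive_deblur(y):
    # Plain inverse filter: divide by the blur's frequency response.
    return np.real(np.fft.ifft(np.fft.fft(y) / K))

print(np.max(np.abs(naive_deblur(blurred) - signal)))  # tiny: noiseless case inverts fine
print(np.max(np.abs(naive_deblur(noisy) - signal)))    # large: noise blows up where |K| is small
```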
My (admittedly superficial) understanding of blur reversibility is that an attacker may already know what kind of content is behind the blur.
I mean knowledge like "a human face, but the potential set of humans is known to the attacker" or, even worse, "text, but the font is obvious from the unblurred part of the doc".
Blur is, perhaps surprisingly, one of the degradations we know best how to undo. It's been studied extensively because there are just so many applications: microscopes, telescopes, digital cameras. The usual tricks revolve around inverting blur kernels and making educated guesses about what the blur kernel and underlying image might look like. My advisors and I were even able to train deep neural networks using only blurry images, relying on a fairly mild assumption of approximate scale invariance at the training-dataset level [1].
[1] https://ieeexplore.ieee.org/document/11370202
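The paper is about training DNNs from blurry data, but as a toy illustration of the "inverting blur kernels" idea, here is a Wiener-style regularized inverse filter in numpy, assuming the kernel is known; the signal, kernel, and noise levels are invented for the demo.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Regularized inverse filter: like dividing by the blur's frequency
    response, but damped where that response is small relative to the
    assumed noise-to-signal ratio `nsr`."""
    n = len(blurred)
    K = np.fft.fft(kernel, n)
    W = np.conj(K) / (np.abs(K) ** 2 + nsr)     # Wiener-style filter instead of 1/K
    return np.real(np.fft.ifft(np.fft.fft(blurred) * W))

# Toy 1D demo: known box blur plus a little noise.
rng = np.random.default_rng(2)
signal = rng.random(256)
kernel = np.ones(7) / 7.0
K = np.fft.fft(kernel, 256)
noisy = np.real(np.fft.ifft(np.fft.fft(signal) * K)) + rng.normal(0.0, 0.01, 256)

naive = np.real(np.fft.ifft(np.fft.fft(noisy) / K))        # plain 1/K inverse filter
regularized = wiener_deconvolve(noisy, kernel, nsr=1e-3)

print(np.sqrt(np.mean((naive - signal) ** 2)))        # large RMS error
print(np.sqrt(np.mean((regularized - signal) ** 2)))  # noticeably smaller
```

The regularizer trades a little residual blur at the nearly-destroyed frequencies for stability against noise, which is the same basic compromise more sophisticated deblurring methods make.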