Lossy and lossless compression are fundamentally the same thing. State-of-the-art lossy compressors use arithmetic coding too, and they still do prediction. Your favourite video codecs, for example, predict not only the next bit within the 2D frame but also the next bit from modelled past frames (the data becomes a 3D cube at that point), and they add things like motion prediction of individual objects in the frame to make better predictions. They all use arithmetic encoders to encode the result.
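To make the predict-then-arithmetic-code loop concrete, here's a minimal sketch: a toy order-0 predictor feeding a textbook binary arithmetic encoder (Witten–Neal–Cleary style renormalization). This is not any real codec's implementation; real codecs predict from far richer context (neighbouring pixels, past frames, motion vectors), but they feed the coder in the same way, and a correct prediction costs fewer output bits than a miss.

```python
HALF, QUARTER, THREE_Q = 1 << 31, 1 << 30, 3 << 30
TOP = (1 << 32) - 1

class AdaptiveModel:
    """Order-0 predictor: P(next bit = 1) from counts, Laplace-smoothed."""
    def __init__(self):
        self.ones, self.total = 1, 2

    def p1(self):
        return self.ones / self.total

    def update(self, bit):
        self.ones += bit
        self.total += 1

def arithmetic_encode(bits):
    """Return the encoded bitstream (as a list of 0/1) for `bits`."""
    model = AdaptiveModel()
    low, high = 0, TOP
    pending = 0          # deferred bits from the "straddling the middle" case
    out = []

    def emit(b):
        nonlocal pending
        out.append(b)
        out.extend([1 - b] * pending)   # resolve deferred opposite bits
        pending = 0

    for bit in bits:
        # Split the current interval in proportion to the prediction; the
        # likelier branch gets the bigger sub-interval, so a correct
        # prediction costs fewer output bits than a miss.
        span = high - low + 1
        split = low + int(span * (1 - model.p1())) - 1
        if bit == 0:
            high = split
        else:
            low = split + 1
        model.update(bit)
        # Renormalize: stream out bits that are now fully determined.
        while True:
            if high < HALF:
                emit(0)
            elif low >= HALF:
                emit(1)
                low, high = low - HALF, high - HALF
            elif low >= QUARTER and high < THREE_Q:
                pending += 1
                low, high = low - QUARTER, high - QUARTER
            else:
                break
            low, high = 2 * low, 2 * high + 1
    pending += 1                        # flush: pin down the final interval
    emit(0 if low < QUARTER else 1)
    return out

# A stream the model predicts well compresses far below its raw length.
data = [1] * 950 + [0] * 50
code = arithmetic_encode(data)
print(len(data), "bits in ->", len(code), "bits out")
```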
The lossy part comes in at the point where humans do or don't notice data being thrown away. Got a bit that was way out of line with the prediction and is going to cost you 10 bits to correct? Perhaps humans wouldn't notice, so can we throw it away? This discarding is often done before the prediction + compression stage (e.g. quantizing the color space from 32 bits down to 8), but from there on it's the same thing.
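Here's a hypothetical illustration of that up-front discarding step: rounding a float image plane down to 8 bits before it ever reaches the prediction + entropy-coding stage. The rounding is the only irreversible part; everything downstream can be bit-exact lossless.

```python
import numpy as np

def quantize_to_8bit(plane):
    """Map values in [0, 1] to uint8; this rounding is the lossy step."""
    return np.clip(np.round(plane * 255.0), 0, 255).astype(np.uint8)

def dequantize(q):
    """Map uint8 back to [0, 1]; the rounding error is gone for good."""
    return q.astype(np.float32) / 255.0

plane = np.random.rand(4, 4).astype(np.float32)  # stand-in for 32-bit color data
recon = dequantize(quantize_to_8bit(plane))
print("worst-case error introduced:", np.abs(plane - recon).max())  # <= 1/510
```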