Nilay Patel, writing for The Verge:
Essentially, Smart HDR was choosing the wrong base frame for HDR processing when you took a selfie. Instead of choosing a frame with a short shutter speed to freeze motion and preserve detail, it would sometimes choose a frame with longer shutter speed. The front camera also does not have optical image stabilization, so it takes blurrier shots at the same shutter speed as the rear, stabilized camera. The result is a loss of detail that looks like smoothing on the front camera.
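The fix described above amounts to preferring the shortest-exposure frame in the burst as the HDR base. A minimal sketch of that selection logic, assuming a hypothetical `Frame` structure with a `shutter_s` exposure field (Apple's actual pipeline is not public):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One frame from a capture burst (hypothetical structure)."""
    pixels: bytes
    shutter_s: float  # exposure time in seconds

def pick_base_frame(burst: list[Frame]) -> Frame:
    # Prefer the shortest exposure: less motion blur and more detail,
    # which matters most on the unstabilized front camera.
    return min(burst, key=lambda f: f.shutter_s)

burst = [Frame(b"", 1 / 30), Frame(b"", 1 / 250), Frame(b"", 1 / 60)]
base = pick_base_frame(burst)  # the 1/250s frame
```

The bug, as Patel describes it, was effectively the opposite choice: sometimes basing the merge on a longer-exposure frame, whose motion blur then read as "smoothing."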
I knew it was something to do with Smart HDR, but it’s interesting to learn the exact details of why it was happening.
Maybe one of the main benefits of computational photography is that it can be continuously improved and shipped in regular software updates. It’s intriguing to think how different the camera will be in a year’s time, compared to how good it is now, even with no hardware change.