In a discussion with Phil Harvey of exiftool fame, it became apparent that the first problem I have to solve in respect of Micro Four Thirds lens correction is to understand the formula, or formulae, being used to apply the correction.
Most image processing software supports geometric correction via three parameters labelled a, b and c. These are the parameters in the following formula:
Ru = scale*(Rd + a*Rd^3 + b*Rd^5 + c*Rd^7)
In this formula, Rd is the distance of a point from the centre in the distorted image, and Ru is its distance in the undistorted image. The model assumes that distortion is radially symmetric, so concentric circles of image points move either closer to or further from the centre than they should be. This translates into the more recognisable types of distortion when straight lines in the image cut across these imaginary concentric circles.
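As a sketch of how such a model gets applied, here's a minimal Python version. The function names and the normalisation are mine, not anything a camera or library actually uses, and real parameter values would come from the lens data:

```python
import math

def correct_radius(r_d, a, b, c, scale=1.0):
    """Map a distorted radius r_d to its undistorted radius using the
    polynomial model Ru = scale*(Rd + a*Rd^3 + b*Rd^5 + c*Rd^7)."""
    return scale * (r_d + a * r_d**3 + b * r_d**5 + c * r_d**7)

def correct_point(x, y, cx, cy, a, b, c, scale=1.0):
    """Correct a single image point (x, y) about the centre (cx, cy)
    by rescaling its radius; the direction from the centre is unchanged."""
    dx, dy = x - cx, y - cy
    r_d = math.hypot(dx, dy)
    if r_d == 0.0:
        return x, y  # the centre point never moves
    k = correct_radius(r_d, a, b, c, scale) / r_d
    return cx + dx * k, cy + dy * k
```

In practice the radius is usually normalised (for example to half the image diagonal) before the polynomial is evaluated, so that a, b and c stay small and dimensionless.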
There are a couple of useful pictures here.
There are several variants on this formula. Wikipedia has a much more complex looking version which appears completely different, as it allows for the effects of off-centre lens elements and different profiles in different directions, but if you ignore these effects then with a little bit of factoring it boils down to exactly the same equation. Bibble, for example, switches the labels a and c, and other versions factor “scale” into the individual parameters, but the basic formula is the same.
The problem is that if this is the formula used in MFT in-camera corrections, then the data isn’t the right shape. We should just see three or maybe four fractional values, and the rest should be zeros, or maybe constants for a given lens/camera combination. While in some cases you can select values from the MFT data which work, it’s inconsistent and there’s no explanation for all the other data.
We know that MFT cameras also correct in-camera for chromatic aberrations. Maybe this could explain the other data points? The trouble is that this doesn’t work either. CA correction formulae work in one of two ways. They either provide a pair of shifts for the different colour channels (requiring two further parameters in addition to the three or four for geometric correction), or you get three sets of geometric correction parameters, one for each colour channel, as per the following taken from a DNG file using one of Raphael Rigo’s tools:
r : 1.000168 -0.128185 0.052356 -0.005116 0.000000 0.000000
g : 0.999694 -0.127995 0.052335 -0.004995 0.000000 0.000000
b : 0.999967 -0.127973 0.052642 -0.005050 0.000000 0.000000
While this might explain the number of values, you’d expect to see three sets of very similar values in the MFT data, and that doesn’t happen.
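To make the second style concrete: if each row above is read as (scale, a, b, c) for one colour channel, with the two trailing zeros unused, then CA correction is just the same radial formula applied once per channel. That reading of the rows is my assumption, not something the DNG dump confirms:

```python
# Hypothetical sketch: one radial correction per colour channel, assuming
# each DNG row above is (scale, a, b, c, 0, 0) for that channel.
def correct_radius(r_d, scale, a, b, c):
    return scale * (r_d + a * r_d**3 + b * r_d**5 + c * r_d**7)

CHANNEL_PARAMS = {
    "r": (1.000168, -0.128185, 0.052356, -0.005116),
    "g": (0.999694, -0.127995, 0.052335, -0.004995),
    "b": (0.999967, -0.127973, 0.052642, -0.005050),
}

def correct_channels(r_d):
    """Undistorted radius for each channel at the same distorted radius.
    The small per-channel differences are what remove the colour fringes."""
    return {ch: correct_radius(r_d, *p) for ch, p in CHANNEL_PARAMS.items()}
```

The three results for any given radius differ only in the third or fourth decimal place, which is exactly the "three sets of very similar values" pattern that the MFT data fails to show.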
There are other ways of doing geometric correction: other formulae exist, but they don't seem to be in common use. There's also a non-linear, curve-based approach (see http://paulbourke.net/miscellaneous/lenscorrection/ again), but this would need either a series of small values with the same sign (for a cumulative curve) or a progressive sequence (for an explicit curve). Of course, there could be some sort of complex differential version, but that's cheating!
I have to assume that the model is capable of interpretation, especially since for some lenses a simple mapping works pretty well. However, it’s clearly not as simple as we’d hoped.
Apologies for hijacking this post. I’m trying to do something very similar for Sony’s A7 raw format, focusing on CA correction. I have identified the right data tag and a bit of the format but am struggling to fully decode its meaning. I am not even looking to fully identify Sony’s format, but would already be happy to construct the transfer function that Adobe’s DNG converter is employing to construct the standardised (in the DNG specs) polynomial function f = p0 + p1*r^2 + p2*r^4 + p3*r^6.
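For reference, that polynomial is cheap to evaluate with Horner's scheme; the coefficient names follow the formula above, and the values here are placeholders, not real Sony or Adobe data. As I read the DNG spec, f acts as a multiplicative gain on the radius:

```python
def dng_radial_gain(r, p0, p1, p2, p3):
    """Evaluate the DNG-style radial polynomial f = p0 + p1*r^2 + p2*r^4 + p3*r^6
    using Horner's scheme; the corrected radius is then r * f(r)."""
    r2 = r * r
    return p0 + r2 * (p1 + r2 * (p2 + r2 * p3))
```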
I have documented all my findings so far at https://variousphotography.wordpress.com/2015/05/13/crowd-sourcing-sonys-arw-2-3-1-embedded-lens-ca-corrections/. Would highly appreciate any help in looking into this!
As noted above, this project is no longer running. I found the best solution was to use a RAW processor created by someone who has open lines of communication with the MFT manufacturers and can get access to the correct information, i.e. Capture One. I suggest it might save you some effort to go the same route. If not, you’re welcome to make use of any of the information on these pages.