Ok... I think I've figured it out...
Each byte in the calibration file seems to correspond to the difference between what one would expect the 8-bit -> 16-bit value to be (using simple LSL scaling, i.e. shifting the sample left by 8) and what the Amiga actually produces... That's the only interpretation of the file that makes sense to me... anyone have any other ideas?
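If that reading is right, then correcting a sample is just a table lookup and a subtract. Here's a rough C sketch of what I mean; note the table layout (one byte per possible 8-bit input value, indexed by that value) and the signedness of the deltas are my assumptions, not anything the file itself states:

#include <stdint.h>

/* Hypothesis: calib[] holds the 256 bytes of the calibration file,
 * one signed delta per possible 8-bit sample value (my assumption).
 * Each byte stores (expected - actual), so the Amiga's real 16-bit
 * output is recovered by subtracting the delta from the naive
 * LSL-by-8 expansion. */
int16_t amiga_16bit(uint8_t v, const int8_t calib[256])
{
    uint16_t expected = (uint16_t)v << 8;       /* simple LSL scaling */
    return (int16_t)(expected - (int16_t)calib[v]); /* per the hypothesis */
}

So playback code wanting a corrected value would call amiga_16bit(sample, calib) instead of just shifting... does that match what anyone else sees in their calibration files?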