The calibration factor `F` of a calorimeter is, by definition, equal to the heat supplied to the calorimeter divided by the resulting temperature change.
To determine the value of `F` for the calorimeter, you heat a known mass `m` of water in the calorimeter from an initial temperature to a final temperature with a known heat `Q`, provided by an electric heater supplied with a known voltage `U` and current `I` for a time `t`.
Thus if `C` is the heat capacity of the calorimeter itself:
`Q = C*Delta(T) + m*c_("water")*Delta(T)`
The calibration factor is
`F = Q/(Delta(T)) = (U*I*t)/(Delta(T)) = C + m*c_("water")`
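As a numeric sketch of this calibration (all values below are hypothetical, chosen only for illustration; they are not from the problem statement):

```python
# Hypothetical electrical-calibration run (illustrative values only)
U = 12.0          # heater voltage, V
I = 2.0           # heater current, A
t = 300.0         # heating time, s
dT = 8.0          # measured temperature rise, K
m = 0.2           # mass of water, kg
c_water = 4186.0  # specific heat of water, J/(kg*K)

Q = U * I * t        # heat supplied by the heater: 7200 J
F = Q / dT           # calibration factor: 900 J/K
C = F - m * c_water  # heat capacity of the calorimeter itself

print(F)  # calibration factor in J/K
print(C)  # calorimeter heat capacity, small compared with m*c_water
```

With these numbers `m*c_("water") = 837.2 J/K` while `C` comes out near `63 J/K`, illustrating why the calorimeter's own heat capacity is the minor term.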
Usually a calorimeter is made of metal and has a small mass, so its heat capacity `C` is roughly ten times smaller than the heat capacity of the water inside it, `m*c_("water")`. Hence the relation above is well approximated by

`F ~~ m*c_("water")`
Now we have the answer: if we use half the initial mass of water to calibrate the calorimeter, the calibration factor will be reduced to about half of its initial value, because `F` is dominated by the `m*c_("water")` term.
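A quick numerical check of this conclusion (the calorimeter heat capacity `C` below is a hypothetical value, roughly a tenth of the water term, as discussed above):

```python
c_water = 4186.0  # specific heat of water, J/(kg*K)
C = 60.0          # hypothetical calorimeter heat capacity, J/K

def calibration_factor(m):
    """Exact calibration factor F = C + m*c_water for water mass m (kg)."""
    return C + m * c_water

# Halving the water mass from 0.2 kg to 0.1 kg nearly halves F:
ratio = calibration_factor(0.1) / calibration_factor(0.2)
print(ratio)  # close to 0.5, slightly above because C does not scale with m
```

The ratio is not exactly 1/2 because the constant `C` term survives the halving, but with `C` about ten times smaller than `m*c_("water")` the deviation stays small.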