
Description
So, there's this pleasant little piece of code here in DoubleType.java that theoretically lets us parse fraction-formatted doubles (1/2, 1.44423/2344.234234, 1.4e10/2.5e120, etc.):
public static double rationalToDouble(String rationalString)
{
    String[] stringD = rationalString.split("/", 2);
    double d0 = Double.parseDouble(stringD[0]);
    double d1 = Double.parseDouble(stringD[1]);
    return d0 / d1;
}
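As a quick illustration of what this buys (and costs) us, here is a standalone sketch of the same split-and-divide approach (the class name is mine, not from the library). Note that it happily accepts degenerate inputs like division by zero:

```java
public class RationalDemo {
    // Mirrors rationalToDouble: split on the first '/' and divide the halves.
    public static double rationalToDouble(String rationalString) {
        String[] parts = rationalString.split("/", 2);
        return Double.parseDouble(parts[0]) / Double.parseDouble(parts[1]);
    }

    public static void main(String[] args) {
        System.out.println(rationalToDouble("1/2"));            // 0.5
        System.out.println(rationalToDouble("1.4e10/2.5e120")); // 5.6E-111
        // Division by zero is not rejected; IEEE 754 just gives Infinity.
        System.out.println(rationalToDouble("1/0"));            // Infinity
    }
}
```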
This gets called by getInstance() whenever the value string looks like a fraction (i.e., contains a "/"):
@Override
public Double getInstance(String value, String[] formatStrings,
        ScalarUnmarshallingContext scalarUnmarshallingContext)
{
    return "null".equalsIgnoreCase(value)
            ? null
            : (value != null && value.contains("/"))
                    ? rationalToDouble(value)
                    : new Double(value);
}
but it doesn't get called by getValue(...):
/**
 * Convert the parameter to double.
 */
public double getValue(String valueString)
{
    return Double.parseDouble(valueString);
}
Which means that it also doesn't get called by setField(...) when we do deserialization:
/**
 * This is a primitive type, so we set it specially.
 *
 * @see simpl.types.ScalarType#setField(Object, Field, String)
 */
@Override
public boolean setField(Object object, Field field, String value)
{
    boolean result = false;
    try
    {
        field.setDouble(object, getValue(value));
        result = true;
    }
    catch (Exception e)
    {
        setFieldError(field, value, e);
    }
    return result;
}
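To make the asymmetry concrete, here's a minimal sketch (class and method names are mine) showing that the fraction format only survives the getInstance(...) path, while the getValue(...)/setField(...) path hands the raw string straight to Double.parseDouble, which rejects it:

```java
public class ParsePathDemo {
    // Returns true if Double.parseDouble accepts the string as-is,
    // which is all getValue(...) and setField(...) ever attempt.
    public static boolean parsesAsPlainDouble(String value) {
        try {
            Double.parseDouble(value);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // getInstance(...) special-cases fraction strings, so "1/2" works there...
        String value = "1/2";
        String[] parts = value.split("/", 2);
        System.out.println(Double.parseDouble(parts[0]) / Double.parseDouble(parts[1])); // 0.5

        // ...but the deserialization path rejects the very same string.
        System.out.println(parsesAsPlainDouble(value)); // false
        System.out.println(parsesAsPlainDouble("0.5")); // true
    }
}
```

So a value serialized in the fraction format round-trips through one code path but throws on the other, which is the inconsistency this issue is about.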
So, the de facto status quo in our code is a semi-sane plain double format, rather than a weird fraction format. Should the new scalar types keep the fraction format, or should we just standardize on a plain double format?
I'm voting that we pick a double format that makes sense, is relatively standard, and can be parsed by anything... the fraction format as it stands doesn't support that goal.
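For what it's worth, the standard format already round-trips losslessly: Double.toString produces a decimal string with enough digits to uniquely identify the value, so Double.parseDouble restores the exact same bits. A minimal sketch (class and method names are mine):

```java
public class RoundTripDemo {
    // Serialize with Double.toString, parse back with Double.parseDouble.
    public static double roundTrip(double d) {
        return Double.parseDouble(Double.toString(d));
    }

    public static void main(String[] args) {
        double original = 1.0 / 3.0;
        // Double.toString emits enough decimal digits to distinguish the
        // value from its neighbors, so the parse restores the exact bits.
        System.out.println(Double.toString(original));
        System.out.println(roundTrip(original) == original); // true
    }
}
```

That suggests the fraction format isn't buying us any precision that the plain format lacks.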