Title:      Precision in iostream?
Status:     CD1
Section:    [facet.num.put.virtuals]
Submitter:  James Kanze, Stephen Clamage

Created on 2000-04-25; last changed 171 months ago

Messages

Date: 2010-10-21 18:28:33

[ Post-Curaçao: Howard provided improved wording covering the case where precision is 0 and mode is %g. ]

Date: 2010-10-21 18:28:33

Rationale:

The floatfield determines whether numbers are formatted as if with %f, %e, or %g. If the fixed bit is set, it's %f; if scientific, it's %e; and if both bits are set, or neither, it's %g.

Turning to the C standard, a precision of 0 is meaningful for %f and %e. For %g, precision 0 is taken to be the same as precision 1.

The proposed resolution has the effect that if neither fixed nor scientific is set, we'll be specifying a precision of 0, which will be internally turned into 1. There's no need to call it out as a special case.

The output of the program in the discussion below will be "1e+00".
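
To make the C behavior concrete, here is a minimal sketch (not part of the rationale itself); the outputs noted in the comments follow the C rules for %f, %e, and %g:

    #include <cstdio>

    int main()
    {
        std::printf("%.0f\n", 1.00);   // "1"     -- %f honors precision 0
        std::printf("%.0e\n", 1.00);   // "1e+00" -- %e honors precision 0
        std::printf("%.0g\n", 1.00);   // "1"     -- %g treats precision 0 as 1
        return 0;
    }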

Date: 2010-10-21 18:28:33

Proposed resolution:

Replace [facet.num.put.virtuals], paragraph 11, with the following sentence:

For conversion from a floating-point type, str.precision() is specified in the conversion specification.
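
The following sketch is only illustrative, not part of the proposed wording; the outputs in the comments assume the resolution is applied (as the CD1 status indicates it eventually was):

    #include <iostream>

    int main()
    {
        std::cout.precision(0);

        std::cout.setf(std::ios::scientific, std::ios::floatfield);
        std::cout << 1.00 << '\n';              // as if by printf("%.0e", 1.00): "1e+00"

        std::cout.setf(std::ios::fixed, std::ios::floatfield);
        std::cout << 1.00 << '\n';              // as if by printf("%.0f", 1.00): "1"

        std::cout.unsetf(std::ios::floatfield); // neither bit set: %g behavior
        std::cout << 1.00 << '\n';              // as if by printf("%.0g", 1.00): "1"
        return 0;
    }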

Date: 2000-04-25 00:00:00

What is the following program supposed to output?

    #include <iostream>

    int
    main()
    {
        std::cout.setf( std::ios::scientific , std::ios::floatfield ) ;
        std::cout.precision( 0 ) ;
        std::cout << 1.00 << '\n' ;
        return 0 ;
    }

From my C experience, I would expect "1e+00"; this is what printf("%.0e" , 1.00 ); does. G++ outputs "1.000000e+00".

The only indication I can find in the standard is 22.2.2.2.2/11, where it says "For conversion from a floating-point type, if (flags & fixed) != 0 or if str.precision() > 0, then str.precision() is specified in the conversion specification." This is an obvious error, however: fixed is not a mask for a field, but a value that a multi-bit field may take; the results of and'ing fmtflags with ios::fixed are not defined, at least not if ios::scientific has been set. G++'s behavior corresponds to what might happen if you do use (flags & fixed) != 0 with a typical implementation (floatfield == 3 << something, fixed == 1 << something, and scientific == 2 << something).
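
A hedged sketch of the problem, using the illustrative bit layout mentioned above rather than any real implementation's values:

    #include <iostream>

    int main()
    {
        const unsigned N = 2;                 // arbitrary shift, purely illustrative
        const unsigned fixed      = 1u << N;
        const unsigned scientific = 2u << N;
        const unsigned floatfield = 3u << N;

        unsigned flags = 0;
        flags = (flags & ~floatfield) | scientific;   // like setf(scientific, floatfield)

        // With only the scientific bit set, flags & fixed is zero, so the first half
        // of the condition in 22.2.2.2.2/11 is false; with precision 0 the second
        // half is false as well, so no precision reaches the conversion specification
        // and printf's default of 6 applies, hence g++'s "1.000000e+00".
        std::cout << std::boolalpha << ((flags & fixed) != 0) << '\n';   // "false"
        return 0;
    }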

Presumably, the intent is either (flags & floatfield) != 0, or (flags & floatfield) == fixed; the first gives something more or less like the effect of precision in a printf floating point conversion. Only more or less, of course. In order to implement printf formatting correctly, you must know whether the precision was explicitly set or not, say by initializing it to -1 instead of 6, and stating that for floating point conversions, if precision < 0, 6 will be used; for fixed point, if precision < 0, 1 will be used; etc. Plus, of course, if precision == 0 and flags & floatfield == 0, 1 should be used. But it probably isn't necessary to emulate all of the anomalies of printf :-).
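
A hedged sketch of that bookkeeping; the helper name and the sentinel handling are purely illustrative, not proposed wording:

    #include <cstdio>

    // Hypothetical helper: map a possibly-unset precision to the value a
    // printf-style %e/%f/%g conversion would effectively use.
    int effective_float_precision(int stored_precision, bool general_form /* %g */)
    {
        if (stored_precision < 0)                 // -1 sentinel: never explicitly set
            return 6;                             // printf's default for %e, %f, %g
        if (general_form && stored_precision == 0)
            return 1;                             // %g treats precision 0 as 1
        return stored_precision;
    }

    int main()
    {
        std::printf("%d %d %d\n",
                    effective_float_precision(-1, false),   // 6
                    effective_float_precision( 0, true),    // 1
                    effective_float_precision( 0, false));  // 0
        return 0;
    }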

History

Date                 User   Action  Args
2010-10-21 18:28:33  admin  set     messages: + msg1942
2010-10-21 18:28:33  admin  set     messages: + msg1941
2010-10-21 18:28:33  admin  set     messages: + msg1940
2000-04-25 00:00:00  admin  create