Title:     precision of hexfloat
Status:    c++11
Section:   [facet.num.put.virtuals]
Submitter: John Salmon

Created on 2007-04-20.00:00:00, last changed 154 months ago.

Messages

Date: 2011-04-24.20:26:46

Proposed resolution:

Change [facet.num.put.virtuals], Stage 1, under p5 (near the end of Stage 1):

For conversion from a floating-point type, str.precision() is specified as precision in the conversion specification if floatfield != (ios_base::fixed | ios_base::scientific), else no precision is specified.
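
A minimal sketch of the behaviour the proposed wording targets, assuming a conforming C++11 implementation and IEEE-754 doubles (the exact hex digits shown in the comments may vary by platform):

    #include <iostream>
    #include <sstream>

    int main() {
        std::ostringstream os;
        os.precision(6);                 // the default precision; under the proposed
                                         // wording it is not passed through for hexfloat
        os << std::hexfloat << 0.1;      // floatfield == fixed | scientific  ->  %a
        std::cout << os.str() << '\n';   // e.g. 0x1.999999999999ap-4 (exact, not %.6a)

        os.str("");
        os << std::defaultfloat << 0.1;  // any other floatfield still uses precision()
        std::cout << os.str() << '\n';   // 0.1 (as if by %g with precision 6)
    }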

Date: 2011-04-24.20:26:46

[ 2009-10 Santa Cruz: ]

Move to Ready.

Date: 2011-04-24.20:26:46

[ 2009-07 Frankfurt ]

Leave this open for Robert and Daniel to work on.

Straw poll: Disposition?

  • Default is %.6a (i.e. NAD): 2
  • Always %a (no precision): 6
  • precision(-1) == %a: 3

Daniel and Robert have direction to write up wording for the "always %a" solution.

2009-07-15 Robert provided wording.

Date: 2011-04-24.20:26:46

[ Howard: I think the fundamental issue we overlooked was that with %f, %e, and %g the default precision is always 6, whereas with %a there is no such default: a missing precision means enough hexadecimal digits to represent the value exactly. So for the first time we need to distinguish between the default value of precision and the precision value 6. ]
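
The distinction can be seen directly with the C formatting functions; a small illustration (output in the comments assumes IEEE-754 doubles):

    #include <cstdio>

    int main() {
        // %f, %e, %g: a missing precision means 6.
        // %a: a missing precision means as many hexadecimal digits as are
        //     needed to represent the value exactly.
        std::printf("%.6a\n", 1.0 / 3.0);  // e.g. 0x1.555555p-2  (rounded to 6 hex digits)
        std::printf("%a\n",   1.0 / 3.0);  // e.g. 0x1.5555555555555p-2  (exact)
    }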

Date: 2007-04-20.00:00:00

I am trying to understand how TR1 supports hex float (%a) output.

As far as I can tell, it does so via the following:

8.15 Additions to header <locale> [tr.c99.locale]

In subclause [facet.num.put.virtuals], Table 58 Floating-point conversions, after the line:
    floatfield == ios_base::scientific                                    %E

add the two lines:

    floatfield == ios_base::fixed | ios_base::scientific && !uppercase    %a
    floatfield == ios_base::fixed | ios_base::scientific                  %A

[Note: The additional requirements on print and scan functions, later in this clause, ensure that the print functions generate hexadecimal floating-point fields with a %a or %A conversion specifier, and that the scan functions match hexadecimal floating-point fields with a %g conversion specifier. end note]
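
For illustration, the same selection can be exercised directly from iostreams; in C++11 the std::hexfloat manipulator is specified to set exactly this floatfield combination. A small sketch (output in the comments assumes IEEE-754 doubles):

    #include <iostream>

    int main() {
        // Equivalent to applying std::hexfloat: floatfield = fixed | scientific.
        std::cout.setf(std::ios_base::fixed | std::ios_base::scientific,
                       std::ios_base::floatfield);
        std::cout << 255.5 << '\n';        // %a  ->  e.g. 0x1.ffp+7

        std::cout.setf(std::ios_base::uppercase);
        std::cout << 255.5 << '\n';        // %A  ->  e.g. 0X1.FFP+7
    }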

Following the thread, in [facet.num.put.virtuals], we find:

For conversion from a floating-point type, if (flags & fixed) != 0 or if str.precision() > 0, then str.precision() is specified in the conversion specification.

This would seem to imply that when floatfield == fixed|scientific, the precision of the conversion specifier is to be taken from str.precision(). Is this really what's intended? I sincerely hope that I'm either missing something or this is an oversight. Please tell me that the committee did not intend to mandate that hex floats (and doubles) should by default be printed as if by %.6a.
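
To make the consequence concrete, here is a purely illustrative sketch of how Stage 1 would assemble the conversion specification under the wording quoted above versus under the proposed resolution. The function name and structure are invented for exposition and are not taken from any implementation; only the lowercase conversions are shown.

    #include <cstdio>
    #include <ios>
    #include <string>

    // Illustrative only: build the conversion specification for a double
    // the way Stage 1 describes it, under either wording.
    std::string make_spec(std::ios_base::fmtflags flags, std::streamsize prec,
                          bool proposed_wording) {
        const std::ios_base::fmtflags floatfield = flags & std::ios_base::floatfield;
        const bool hexfloat =
            floatfield == (std::ios_base::fixed | std::ios_base::scientific);

        std::string spec = "%";
        const bool use_precision =
            proposed_wording
                ? !hexfloat                                           // proposed: never for hexfloat
                : ((flags & std::ios_base::fixed) != 0 || prec > 0);  // quoted TR1/C++03 rule
        if (use_precision)
            spec += "." + std::to_string(prec);

        spec += hexfloat                                    ? 'a'
              : floatfield == std::ios_base::fixed          ? 'f'
              : floatfield == std::ios_base::scientific     ? 'e'
                                                            : 'g';
        return spec;
    }

    int main() {
        const std::ios_base::fmtflags hex =
            std::ios_base::fixed | std::ios_base::scientific;
        std::printf("quoted wording:   %s\n", make_spec(hex, 6, false).c_str());  // %.6a
        std::printf("proposed wording: %s\n", make_spec(hex, 6, true).c_str());   // %a
    }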

History
Date                 User   Action  Args
2011-08-23 20:07:26  admin  set     status: wp -> c++11
2011-04-24 20:26:46  admin  set     messages: + msg5737
2011-04-24 20:26:46  admin  set     messages: + msg5736
2011-04-24 20:26:46  admin  set     messages: + msg5735
2011-04-24 20:26:46  admin  set     messages: + msg5734
2007-04-20 00:00:00  admin  create