[m-dev.] for discussion: design issue for new integer types

Zoltan Somogyi zoltan.somogyi at runbox.com
Sun Oct 30 16:33:39 AEDT 2016


>> A followup question: should we require that the _s be where Western
>> convention dictates the grouping commas should go, i.e. between
>> every group of three digits? I for one would prefer that, but people
>> using the Indian number system, which puts commas around groups of
>> *two* digits above the thousands, would probably prefer that there
>> not be such a rule (look up "lakh" or "crore" on Wikipedia).
> 
> As Peter has mentioned elsewhere in this thread, there are *good* reasons
> why their positioning should be left up to the programmer.

“Require” was the wrong word. As I thought my next paragraph
made clear, I was thinking only about a warning, and only one issued
if the programmer explicitly *asked* for it.

>> We would need to delete the _s at some point anyway. If we do it
>> in the compiler, we can make the code doing the deletion
>> generate a warning if the _s are in the "wrong" place, with the
>> notion of "wrong" being selected by compiler options such as
>> --warn-misplaced-integer-underscores-{western,indian}.
> 
> I don't think that's something the compiler should be concerned with
> (except possibly in the formatting of error messages).

What is the “that” the compiler shouldn’t be concerned with?
Generating warnings, or deleting the _s?

Deleting the _s has to be done *somewhere*, either in the library
or in the compiler.
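
For what it is worth, the deletion itself is trivial wherever we put it.
A minimal sketch, assuming the scanner has already checked the token,
and with a module and function name that are just placeholders of mine:

    :- module strip_underscores.
    :- interface.

        % strip_underscores(LiteralStr) = CleanStr:
        % Delete every "_" from the text of an integer literal,
        % leaving only the digits (and any suffix) behind.
    :- func strip_underscores(string) = string.

    :- implementation.
    :- import_module string.

    strip_underscores(LiteralStr) =
        string.replace_all(LiteralStr, "_", "").

The interesting part is not the deletion itself but who gets to look at
the _s before they disappear, which is why I care about where it happens.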

> Yes.  The existing numeric types (int, float, rational, integer) already
> define these sorts of coercions; with the new types there's just going
> to be a lot more of them.

With N integer types, there will need to be N*(N-1) coercion functions.
I guess N is just low enough for that to be manageable.

Do you propose to avoid a further doubling of that number by allowing
just one of each pair, e.g. int8_to_int32 but not int32_from_int8?
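
(To put numbers on it: if the set ends up being int, uint and the four
sized variants of each, N is 10, so N*(N-1) is already 90 functions;
having both spellings of every conversion would mean 180.) What I mean
by "both", shown as hypothetical declarations rather than anything that
exists today:

    :- func int8_to_int32(int8) = int32.      % one spelling
    :- func int32_from_int8(int8) = int32.    % the same conversion again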

>> I would instead suggest that we keep just the existing
>> integer and big_integer functors, and add a new argument to both.
>> This argument would say int vs uint, and 8 vs 16 vs 32 vs 64 vs
>> default size, *purely on the basis of the suffix, without any check
>> in the scanner*, for the reason given above.
>> 
>> To allow the underscore check mentioned above, the existing argument
>> of the integer and big_integer functors would need to be a string,
>> with the conversion done in the compiler. However, doing that
>> would erase the need for the big_integer functor, since the integer
>> functor would then be able to represent everything it can.
> 
> I prefer the second scheme.

Do you mean the one in the paragraph that starts with “To allow the
underscore …”?
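
In case it helps to see it written down, here is roughly the shape
I have in mind for that scheme. All the names below are placeholders
of mine, not what we would necessarily use; the point is that the
literal's text stays a string (so the compiler still sees the _s and
can check and delete them), and that the suffix alone determines the
signedness and the size:

    :- import_module maybe.

    :- type signedness
        --->    signed
        ;       unsigned.

    :- type int_size
        --->    size_8_bit
        ;       size_16_bit
        ;       size_32_bit
        ;       size_64_bit.

        % One functor would replace both integer and big_integer:
        % a digit string can represent values of any magnitude.
    :- type int_literal
        --->    int_literal(
                    string,             % the digits as written, _s included
                    signedness,         % from the suffix: signed vs unsigned
                    maybe(int_size)     % no means the default, word-sized int
                ).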

>> I would even prefer to erase the distinction between int_const and
>> uint_const, but realize that this cannot be done, because in the HLDS, we
>> definitely want the constant in integer, not string, form, and there is no
>> word-sized type that can hold both all ints and all uints. However, we could
>> switch to int_const(integer, signedness, maybe(size)).
> 
> Ok.

Ok to which part of the paragraph? Switching to
"int_const(integer, signedness, maybe(size))”?

> Do you have a preference as to the type of the second operand of the
> shift operations (point 6 in my original post)?

Not yet; I will have to think more about that.

Zoltan.

