# mixed-sign arithmetic and auto - c++


1. ## mixed-sign arithmetic and auto

I have had a mini- or probably micro- or even milli-epiphany: using
"auto" will exacerbate C's broken unsigned arithmetic, which C++ also
inherited.

As we all know, in C, any expression that has unsigned within a radius
of a mile will also have type unsigned. This is a simple rule but one of
remarkable bluntness because it assigns many operators the wrong result
type. Consider u an unsigned int value and i a signed int value.

1. u * i and i * u yield unsigned, although they should yield a signed value.

2. u / i and i / u also yield unsigned, although again they should both
return a signed value.

3. u + i and i + u again yield unsigned. Here it is not clear which
signedness would be more helpful. In my personal opinion, the
tie-breaker should be a rule that "does not yield the wrong result for
small, reasonable inputs". I'm basing this on the assumption that most
integrals are small in absolute value, something that I recall was
measured in the context of conservative garbage collectors. By that
rule, u + i and i + u should be typed as signed. Typing them as unsigned
makes the operation fail for small numbers, e.g. 0 - 1u yields a large
unsigned number.

4. This is the funniest one: -u actually returns unsigned!

(As an aside to my point: comparisons convert both numbers to unsigned,
so i < u will first convert i to unsigned. This again fails for small,
reasonable numbers because -1 will never be smaller than anything.)

C and C++ partly compensate for their mishandling of mixed-sign operations
by being generous with implicit conversions: int converts to unsigned
and back no problem. So to avoid the whole "I got the wrong signedness"
business, you don't even need a cast - only a named value:

int a = i + u; // fine
unsigned b = i + u; // also fine

Complex expressions are still exposed to issues, but they are only a
subset of mixed-sign code. Here's an example that might surprise some:

int i = -3;
unsigned u = 2;
int x = (i + u) / 2;

The "correct" value is zero, but x receives the largest integer value.

This all is hardly news to anyone hanging out around here. My
milli-epiphany is that "auto" will make all of the ambiguities worse.
Why? Because C and C++98 require a type specification whenever a value
is defined. But in C++0x, if auto is successful, people will use "auto"
knowing it does the right thing without so much as thinking about it:

auto a = i + u;

Oops... a will be unsigned, even though the user meant it to be int
(and, without being an expert in the vagaries of integral arithmetic,
sincerely thought it was). After all, the code:

int a = i + u;

compiles and behaves as expected, so it's intuitive that replacing
"int" with "auto" is harmless and actually better, because it will
nicely become "long" if necessary. So, all things considered, "auto"
does not always do the right thing!

Any ideas on how to solve the problem elegantly? My prediction is that,
if we keep the current rules, "auto" will actually do more harm than
good for mixed-sign arithmetic. As changing semantics is not an option,
it might be useful to look into statically disabling certain mixed-sign
operations.

Andrei

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

2. ## Re: mixed-sign arithmetic and auto

Andrei Alexandrescu (See Website For Email) wrote:

> any ideas on how to solve the problem elegantly. My prediction is that,
> if we keep the current rules, "auto" will actually do more harm than
> good for mixed-sign arithmetic. As changing semantics is not an option,
> it might be useful to look into statically disabling certain mixed-sign
> operations.
>

I suppose that we could require a cast when auto is used in the context
of a mixed arithmetic initialisation expression. However, as you
illustrated, that still leaves the door open for errors.

I think I would prefer to strongly encourage compilers to warn any time
that mixed mode arithmetic is used in an initialiser expression.

That gives me pause for a moment's thought: perhaps we could require a
cast when using the new initialisation syntax with a mixed-mode
initialiser expression:

int something { u + i}; // error
requires that you write it as either

int something(u + i); // hopefully generating a compile time warning

or

int something { int (u + i) };

and:

auto something { u + i}; // error
requires that you write it as either

auto something(u + i); // hopefully generating a compile time warning

or

auto something { int (u + i) };

--
Note that robinton.demon.co.uk addresses are no longer valid.


3. ## Re: mixed-sign arithmetic and auto

Francis Glassborow wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>
>> any ideas on how to solve the problem elegantly. My prediction is that,
>> if we keep the current rules, "auto" will actually do more harm than
>> good for mixed-sign arithmetic. As changing semantics is not an option,
>> it might be useful to look into statically disabling certain mixed-sign
>> operations.
>>

(All -- sorry for the few incoherent sentences toward the end of my
previous message. I haven't proofread it, and it shows.)

> I suppose that we could require a cast when auto is used in the context
> of a mixed arithmetic initialisation expression. However, as you
> illustrated, that still leaves the door open for errors.
>
> I think I would prefer to strongly encourage compilers to warn any time
> that mixed mode arithmetic is used in an initialiser expression.

There's a possibility for the compiler to properly track ambiguous-sign
results. Imagine the compiler defines internal types intbits and
longbits, which mean "value of ambiguous signedness". Then any of the
mixed-sign operations yields either intbits or longbits.

These types would not be accessible to user code, so if somebody tries
to write this:

auto a = u + i;

they see the error message: "Cannot infer type of a from a value of
ambiguous signedness".

The beauty of the scheme is that intbits does implicitly convert to int
and unsigned int, so as long as the user _does_ decide the desired
signedness of the result, the code goes through:

int a = u + i; // fine, intbits -> int
unsigned b = u + i; // fine, intbits -> unsigned

Another nice element of the scheme is that the sign ambiguity is
properly taken care of in complex expressions:

int a = (u + i) & i;

This works because:

a) u + i returns intbits

b) it's legal to do bitwise AND between intbits and int (the sign is
irrelevant) returning intbits

c) the intbits result gets converted to int when initializing a

Again, if a were auto, the code would not compile.

So in a nutshell the compiler would use these two types to carry the
information that ambiguous signedness is in effect. As soon as the user
tries something that has sign-dependent semantics, an error would occur
(or warning for legacy code):

int a = (u + i) / 2; // warning: ambiguous-sign operation

> That gives me pause for a moments thought, perhaps we could require a
> cast when using the new initialisation syntax with a mixed mode
> initialiser expression:
>
> int something { u + i}; // error
> requires that you write it as either
>
> int something(u + i); // hopefully generating a compile time warning
>
> or
>
> int something { int (u + i) };
>
> and:
>
> auto something { u + i}; // error
> requires that you write it as either
>
> auto something(u + i); // hopefully generating a compile time warning
>
> or
>
> auto something { int (u + i) };

This scheme is imperfect because it requires a cast to an actual type,
so the code is brittle when "something" changes type from int to long.
There should be library functions that do the cast taking size into account:

auto something { std::tosigned (u + i) };

or

auto something { std::tounsigned (u + i) };

Andrei


4. ## Re: mixed-sign arithmetic and auto

In article <477F1858.6090909@erdani.org>,
"Andrei Alexandrescu (See Website For Email)"
<SeeWebsiteForEmail@erdani.org> wrote:

> any ideas on how to solve the problem elegantly. My prediction is that,
> if we keep the current rules, "auto" will actually do more harm than
> good for mixed-sign arithmetic. As changing semantics is not an option,
> it might be useful to look into statically disabling certain mixed-sign
> operations.

While auto might do "more harm than good" for those folks who mix
fundamental types, in my opinion it is more important that the rules for
auto remain as close as possible to those of template deduction. Making
inconsistent rules is the kind of thing that makes the language more
expert-friendly at the expense of everyone else, because only the
experts will spend enough time learning the deep dark corners of the
language to even know about the existence of these tricks.

We are talking about people who already mix types without understanding
the ramifications of doing so. Do you really expect them to discover
this feature of auto? Or worse, their fix might be along the lines of:

auto x = static_cast<int>((i + u) / 2);

because they "know" that casts always "fix" these kinds of problems.

Your change would make the following code transformation very fragile:

template<typename T> void func(T const& t) { /* ... */ }
//...
func(a + b);

into

template<typename T> void func(T const& t) { /* ... */ }
//...
auto c = a + b;
func(c);

Having that exception to the use of auto makes the overall language
harder, not easier to use.

--
Nevin ":-)" Liber <mailto:nevin@eviloverlord.com> 773 961-1620


5. ## Re: mixed-sign arithmetic and auto

Nevin :-] Liber wrote:
> In article <477F1858.6090909@erdani.org>,
> "Andrei Alexandrescu (See Website For Email)"
> <SeeWebsiteForEmail@erdani.org> wrote:
>
>> any ideas on how to solve the problem elegantly. My prediction is that,
>> if we keep the current rules, "auto" will actually do more harm than
>> good for mixed-sign arithmetic. As changing semantics is not an option,
>> it might be useful to look into statically disabling certain mixed-sign
>> operations.

>
> While auto might do "more harm than good" for those folks who mix
> fundamental types, in my opinion it is more important that the rules for
> auto remain as close as possible to that of template deduction. Making
> rules that are inconsistent are the things that make the language more
> expert friendly at the expense of everyone else, because only the
> experts will spend enough time learning the deep dark corners of the
> language to even know about the existence of these tricks.

This might be a misunderstanding. My idea was to _disable_ "auto" (i.e.,
render it uncompilable) when combined with certain mixed-sign
operations, not to impart to it different semantics than the rest of the
type inference mechanism.

I agree that making auto smarter than template deduction would be the
wrong fight to fight.

> We are talking about people who already mix types without understanding
> the ramifications of doing so. Do you really expect them to discover
> this feature of auto? Or worse, their fix might be along the lines of:
>
> auto x = static_cast<int>((i + u) / 2);
>
> because they "know" that casts always "fix" these kinds of problems.

People tend to take the path of least resistance. In this case, I think
they'd write:

int c = (i + u) / 2;

which is shorter and does the same thing.

> Your change would make the following code transformation very fragile:
>
> template<typename T> void func(T const& t) { /* ... */ }
> //...
> func(a + b);
>
> into
>
> template<typename T> void func(T const& t) { /* ... */ }
> //...
> auto c = a + b;
> func(c);

Nonono, that's certainly not what I had in mind. Again: in my opinion,
it might be useful to just disallow (or warn on) the use of auto in the
most flagrant cases of unsigned type mishandling.

Andrei


6. ## Re: mixed-sign arithmetic and auto

On Jan 4, 11:57 pm, "Andrei Alexandrescu (See Website For Email)"
<SeeWebsiteForEm...@erdani.org> wrote:
> I have had a mini- or probably micro- or even milli-epiphany: using
> "auto" will exacerbate C's broken unsigned arithmetic, which C++ also
> inherited.

The new use of the "auto" keyword does not change anything about C++'s
unsigned arithmetic - and that is exactly how it should be.

> As we all know, in C, any expression that has unsigned within a radius
> of a mile will also have type unsigned. This is a simple rule but one of
> remarkable bluntness because it assign many operators the wrong result
> type. Consider u an unsigned int value and i a signed int value.
>
> 1. u * i and i * u yield unsigned, although it should yield a signed
> value.

No. The product of u and i should not be signed. Here's why: an
unsigned int in C++ is not simply a signed integer value that happens
to have a non-negative value. In C++, an unsigned int is a member of a
"finite field". Signed values, in contrast, are members of a non-
finite field (the set of integers) - even though an int type in C++
can hold only a finite number of values.

Finite fields have some interesting properties. For one, all
operations performed within a finite field result in an element within
that field. Therefore, it must be the case that all arithmetic
operations involving an unsigned int must yield an unsigned int. So, i
* u has to produce an unsigned value - even if the multiplication has
to "wrap around" the edges of the field in either a forward (for
positive multipliers) or a backward (for negative multipliers)
direction in order to ensure that the result of the multiplication is
a member of the finite field.

> 2. u / i and i / u also yield unsigned, although again they should both
> return a signed value.

No. Just like any other arithmetic operation performed over a finite
field, the quotient yielded by division must yield a member of the
finite field, that is, an unsigned value.

> 3. u + i and i + u again yield unsigned. Here it is not clear which
> signedness would be more helpful. In my personal opinion, the
> tie-breaker should be a rule that "does not yield the wrong result for
> small, reasonable inputs". I'm basing this on the assumption that most
> integrals are small in absolute value, something that I recall was
> measured in the context of conservative garbage collectors. By that
> rule, u + i and i + u should be typed as signed. Typing them as unsigned
> make the operation fail for small numbers, e.g. 0 - 1u yields a large
> unsigned number.
>
> 4. This is the funniest one: -u actually returns unsigned!

Naturally, for anyone familiar only with standard arithmetic, finite
field arithmetic does seem odd. But being different does not imply
being wrong. As it turns out, finite field arithmetic is just as well-
defined (but not as well-known) as the "ordinary" arithmetic taught in
first grade. So -u yielding an unsigned value in a finite field is
really no more surprising than -i yielding a signed value over a non-
finite field.

> This all is hardly news to anyone hanging out around here. My
> milli-epiphany is that "auto" will make all of the ambiguities worse.
> Why? Because C and C++98 require a type specification whenever a value
> is defined. But in C++0x, if auto is successful, people will use "auto"
> knowing it does the right thing without so much as thinking about it:
>
> auto a = i + u;

The above expression adds i and u and stores the result in a variable
named "a" of some integral type. Now, clearly the programmer does not
care whether "a" happens to be signed or unsigned. After all, if the
programmer had a preference regarding a's signedness, then the
programmer would have specified whether "a" was a signed or unsigned
type. But since no such type is specified, it must be the case that
whatever type the compiler does select for "a" - is a matter of
complete indifference to the programmer.

> Oops... a will be unsigned, even though the user meant it (and actually,
> without being an expert in the vagaries of integral arithmetic sincerely
> thought) it is int. After all, the code:

> int a = i + u;
>
> so it's intuitive that replacing "int" with "auto" is harmless and
> actually better, because it will nicely become "long" if necessary. So
> with all things considered, "auto" does not always do the right thing!

If by the "right thing" you mean that "auto" should somehow intuit
whether the programmer wants the result of a particular expression to
be stored as a signed or unsigned value - then, yes, the "auto"
keyword clearly does not do the right thing. Nor will it ever. But the
purpose of the "auto" keyword is not to declare a variable of a type
identical to the type that the programmer would have declared - if the
programmer had been required to specify a type.

The presence of the "auto" keyword therefore is not a shorthand
notation for the programmer's preferred type - instead, "auto"
indicates that no such preferred type exists. For example, when
storing a result of an unspecified type, the programmer would clearly
have no preference with regard to type. Another example: when
declaring a variable to hold an intermediate result of a longer
calculation, the programmer would want the type of the intermediate
result to be the same as the type the result would have had as a
subexpression of the entire calculation.

> any ideas on how to solve the problem elegantly. My prediction is that,
> if we keep the current rules, "auto" will actually do more harm than
> good for mixed-sign arithmetic. As changing semantics is not an option,
> it might be useful to look into statically disabling certain mixed-sign
> operations.

The only potential problem that I can foresee is that C++ programmers
might not understand how to use the "auto" keyword appropriately. The
new use of "auto" does not mean that programmers will be able to
replace explicit type declarations with vague ones. Yet, I doubt that
many programmers would use "auto" with such an expectation.

Programmers after all are quite familiar with the penalties of being
vague in their programming. So how many programmers, needing to
declare an "int" variable would opt to declare an "auto" variable
instead? Most programmers, I would think, would instinctively favor
the explicit declaration over the implicit one.

Greg


7. ## Re: mixed-sign arithmetic and auto

Greg Herlihy wrote:
> The presence of the "auto" keyword therefore is not a shorthand
> notation for the programmer's preferred type - instead, "auto"
> indicates that no such preferred type exists.

I am not sure that I agree with that assertion. One of the main
purposes of auto is to simplify the writing of template code.

template<typename T, typename U>
auto foo(T t, U u) -> decltype(t * u) {
    auto temp(t * u);
    // do something
    return temp;
}

The point is that as a programmer I do care what the type is but I
cannot hard code it because I do not know what it will be.

And yes, I understand your rationale for why unsigned vs. signed works
that way, but nonetheless it is not the only sane choice and other languages
do it differently. Indeed I remain of the opinion that the way overflow
works for signed integer types is dangerous and not understood by many
programmers.


8. ## Re: mixed-sign arithmetic and auto

Greg Herlihy wrote:
>> As we all know, in C, any expression that has unsigned within a radius
>> of a mile will also have type unsigned. This is a simple rule but one of
>> remarkable bluntness because it assign many operators the wrong result
>> type. Consider u an unsigned int value and i a signed int value.
>>
>> 1. u * i and i * u yield unsigned, although it should yield a signed
>> value.
>
> No. The product of u and i should not be signed. Here's why: an
> unsigned int in C++ is not simply a signed integer value that happens
> to have a non-negative value. In C++, an unsigned int is a member of a
> "finite field". Signed values, in contrast, are members of a non-
> finite field (the set of integers) - even though an int type in C++
> can hold only a finite number of values.

Interesting! As your entire argument hinges on the fact that unsigned
models finite fields, let's focus on that. First off, I did not even
know what a finite field is, so I searched around and saw that it's the
same as a Galois field, at which point a lonely neuron fired reminding
me of a class taken a long time ago.

I could check quite easily that indeed unsigned int models the finite
field e.g. 2**32 (on 32-bit machines). So, point taken.

However, your arguments fail to convince me for the following reasons.

First, one issue with unsigned is that it converts to and from int. I
agree that there is an isomorphism between int and unsigned, as the sets
have the same number of elements; but in order to derive anything
useful, we must make sure that the isomorphism is interesting. If you
consider int to model, as you say, integers small in absolute value,
then I fail to find the isomorphism between int and unsigned very
interesting.

Second (and I agree that this is an argument by authority), I failed to
find much evidence that people generally use unsigned to model a finite
field in actual programs. To the best of my knowledge, the uses of
unsigned types I've seen were:

1. As a model for natural numbers

2. As a "bag of bits" where the sign is irrelevant

3. As a natural number modulo something. (This use would be closest to
the finite field use.)

For example, I doubt that somebody said: "I need to model the number of
elements in a container, so a finite field would be exactly what the
doctor prescribed." More likely, the person has thought of a natural
number. I conjecture that more people mean "natural number" than "finite
field" when using unsigned types.

> Finite fields have some interesting properties. For one, all
> operations performed within a finite field result in an element within
> that field. Therefore, it must be the case that all arithmetic
> operations involving an unsigned int must yield an unsigned int.

This argument does not even follow. Int is also a finite field
isomorphic with unsigned, so in a mixed operation it's completely
arbitrary in which field you want the result to "fall". On what grounds was
unsigned preferred? For all I know, u1 - u2 produces a useful result for
small values of u1 and u2 if it's typed as int.

> So, i
> * u has to produce an unsigned value - even if the multiplication has
> to "wrap around" the edges of the field in either a forward (for
> positive multipliers) or a backward (for negative multipliers)
> direction in order to ensure that the result of the multiplication is
> a member of the finite field.
>
>> 2. u / i and i / u also yield unsigned, although again they should both
>> return a signed value.

>
> No. Just like any other arithmetic operation performed over a finite
> field, the quotient yielded by division must yield a member of the
> finite field, that is, an unsigned value.

Nope. You again assume the same thing without proving it: why would the
result fall in the finite field unsigned and not in the finite field
int? And if you claim that int is not intended to model a finite field,
then I come and ask: on what grounds do you define a morphism from int
to unsigned?

If we continue to pull on that string, it pretty much unravels your
entire finite-field-based argument, so I snipped some of it.

> Another example: when
> declaring a variable to hold an intermediate result of a longer
> calculation, the programmer would want the type of the intermediate
> result to be the same as the type the result would have had as a
> subexpression of the entire calculation.

I agree that this is a good argument. It does not dilute my point, which
was: since the rules for typing mixed-sign arithmetic might surprise
some (a surprise that explicit typing cloaked by allowing free
conversions back and forth), it might be useful to disallow certain
uses of auto.

>> any ideas on how to solve the problem elegantly. My prediction is that,
>> if we keep the current rules, "auto" will actually do more harm than
>> good for mixed-sign arithmetic. As changing semantics is not an option,
>> it might be useful to look into statically disabling certain mixed-sign
>> operations.

>
> The only potential problem that I can foresee is that C++ programmers
> might not understand how to use the "auto" keyword appropriately. The
> new use of "auto" does not mean that programmers will be able to
> replace explicit type declarations with vague ones. Yet, I doubt that
> many programmers would use "auto" with such an expectation.
>
> Programmers after all are quite familiar with the penalties of being
> vague in their programming. So how many programmers, needing to
> declare an "int" variable, would instead of declaring the "int"
> variable - opt to declare an "auto" variable instead? Most programers
> I would think, would instinctively would favor the explicit
> declaration over the implicit one.

With this point I flat out disagree as I have extensive experience with
"auto" in another language. Defining symbols with "auto" makes the code
more robust - if the operands change type from int to long or even
double, MyNum or whatnot, the result would follow. If you explicitly
type the result as int, then a long will be silently truncated, and all
you can rely on for debugging are non-standard compiler warnings.

Andrei


9. ## Re: mixed-sign arithmetic and auto

Greg Herlihy wrote:

> On Jan 4, 11:57 pm, "Andrei Alexandrescu (See Website For Email)"
> <SeeWebsiteForEm...@erdani.org> wrote:
>> I have had a mini- or probably micro- or even milli-epiphany: using
>> "auto" will exacerbate C's broken unsigned arithmetic, which C++ also
>> inherited.

>
> The new use of the "auto" keyword does not change anything abut C++'s
> unsigned arithmetic - and that is exactly how it should be.

Yes, auto doesn't change anything about C++ arithmetic, which is good. But
that doesn't mean that C++ arithmetic should not change per se.

>> As we all know, in C, any expression that has unsigned within a radius
>> of a mile will also have type unsigned. This is a simple rule but one of
>> remarkable bluntness because it assign many operators the wrong result
>> type. Consider u an unsigned int value and i a signed int value.
>>
>> 1. u * i and i * u yield unsigned, although it should yield a signed
>> value.
>
> No. The product of u and i should not be signed. Here's why: an
> unsigned int in C++ is not simply a signed integer value that happens
> to have a non-negative value. In C++, an unsigned int is a member of a
> "finite field". Signed values, in contrast, are members of a non-
> finite field (the set of integers) - even though an int type in C++
> can hold only a finite number of values.

Sorry, but that is a misunderstanding. unsigneds with their usual operations
do not form a field, but a ring.

> Finite fields have some interesting properties. For one, all
> operations performed within a finite field result in an element within
> that field. Therefore, it must be the case that all arithmetic
> operations involving an unsigned int must yield an unsigned int. So, i
> * u has to produce an unsigned value - even if the multiplication has
> to "wrap around" the edges of the field in either a forward (for
> positive multipliers) or a backward (for negative multipliers)
> direction in order to ensure that the result of the multiplication is
> a member of the finite field.

However, be it a ring or a field or whatever, the properties of the
structure only apply to the operations that are part of it. In the case
of rings, these are (let U denote unsigned):

+: U x U -> U ... the usual addition (modulo)
-: U -> U ... the opposite element (as in -1==0xFFFF)
*: U x U -> U ... the usual multiplication (modulo)

So, the properties of rings tell us absolutely _nothing_ about how u*i,
u+i or u/i should behave. (They also tell us nothing about u/u, because
rings have no division; nor about the relational operators, because
there's no way a nontrivial finite additive group can be ordered
compatibly with its addition.)

> Naturally, for anyone familiar only with standard arithmetic, finite
> field arithmetic does seem odd. But being different does not imply
> being wrong. As it turns out, finite field arithmetic is just as well-
> defined (but not as well-known) as the "ordinary" arithmetic taught in
> first grade. So -u yielding an unsigned value in a finite field is
> really no more surprising than -i yielding a signed value over a non-
> finite field.

This is OK. However, if it held in C++ that for each unsigned u,
(int)(-u) == -(int)u (which it does not, unfortunately), it would also
hold that any calculation that uses only +, -, * yields the same result
regardless of the signedness of the arguments or any subexpressions,
provided the "signed" subexpressions do not overflow.

>> This all is hardly news to anyone hanging out around here. My
>> milli-epiphany is that "auto" will make all of the ambiguities worse.
>> Why? Because C and C++98 require a type specification whenever a value
>> is defined. But in C++0x, if auto is successful, people will use "auto"
>> knowing it does the right thing without so much as thinking about it:
>>
>> auto a = i + u;

>
> The above expression adds i and u and stores the result a variable
> named "a" of some integral type. Now, clearly the programmer does not
> care whether "a" happens to be signed or unsigned. After all, if the
> programmer had a preference regarding a's signedness, then the
> programmer would have specified whether "a" was a signed or unsigned
> type. But since no such type is specified, it must be the case that
> whatever type the compiler does select for "a" - is a matter of
> complete indifference to the programmer.

No, that is not the purpose of auto. If the programmer really didn't care,
you could "infer" some type like void for every auto in every program. The
thing a programmer wants to accomplish using auto is to infer the most
generic type that can hold the value of rhs. Something like if E is an
expression and S is an expression containing E as a subexpression, and

auto a = E;

then S and S with the subexpression E replaced by a should be equivalent if
both have defined behaviour.

However, auto does that quite well, so this is really not the problem.
The problem is that C++ lets you mix signed and unsigned types in
expressions even when it affects the value, which would be solved by
Andrei's proposal.

> Programmers after all are quite familiar with the penalties of being
> vague in their programming. So how many programmers, needing to
> declare an "int" variable, would instead of declaring the "int"
> variable - opt to declare an "auto" variable instead? Most programers
> I would think, would instinctively would favor the explicit
> declaration over the implicit one.

I don't think there will be many programmers trying to write "auto" instead
of "int" (after all, it is one character longer :-) However, there might be
programmers who, in templated code, would think at this place

T t=getT();
unsigned u=...;
T something=t*u;

something like: "What if the class T is so clever it returns something
magical from t*u, like an expression template? I can make use of that;
after all, I only need it for further computation, and I don't extract
the value." And then changes the code into:

T t=getT();
unsigned u=...;
auto something=t*u;
... do more computation with something ...

Regards
Jiri Palecek


10. ## Re: mixed-sign arithmetic and auto

Greg Herlihy wrote:
> Here's why: an
> unsigned int in C++ is not simply a signed integer value that happens
> to have a non-negative value. In C++, an unsigned int is a member of a
> "finite field". Signed values, in contrast, are members of a non-
> finite field (the set of integers) - even though an int type in C++
> can hold only a finite number of values.

What's the basis for the assertion that unsigned ints form a finite
field and signed ints an infinite field?

I always thought the difference between signed and unsigned ints was the
bias in the range of values, not any theoretical difference that is not
reflected in the actual machine. ints in C++ reflect the underlying
reality of the hardware. Pretending that reality doesn't exist will get
one into big (programming) trouble.

For example, many program bugs result from integer overflow and
subsequent wraparound (the language offers no straightforward way to
detect and trap such errors). You can't program as if ints had infinite
range.

--------
Walter Bright
http://www.digitalmars.com
C, C++, D programming language compilers
