| | 8 bits | 16 bits | 32 bits | 64 bits |
|---|---|---|---|---|
| Addressable memory | 256 bytes | 64 KB | 4 GB | 2^64 bytes |
| Signed range | -128 to 127 | -32768 to 32767 | -2^31 to 2^31 - 1 | -2^63 to 2^63 - 1 |
C standard: the compiler defines the size of an int (implementation-defined behavior)
The programmer can find the actual range via INT_MIN and INT_MAX in limits.h
but representing something like a pixel as an int requires an integer of a specific size
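A minimal sketch of querying those limits, and of using a fixed-width type from stdint.h when an exact size is needed (the pixel value here is just an illustrative constant):

```c
#include <stdio.h>
#include <limits.h>   /* INT_MIN, INT_MAX */
#include <stdint.h>   /* fixed-width types such as uint32_t */

int main(void) {
    /* The range of int is implementation-defined; query it at compile time: */
    printf("int range: %d to %d\n", INT_MIN, INT_MAX);

    /* When an exact size is required (e.g. a 32-bit pixel),
       use a fixed-width type instead of plain int: */
    uint32_t pixel = 0xFF00FF00;  /* hypothetical ARGB value */
    printf("pixel = 0x%08X\n", (unsigned)pixel);
    return 0;
}
```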
Overflow!
In C0, int arithmetic is modular, so overflow is defined; in C, signed overflow is undefined behavior (a safety violation)
-fwrapv: compiler flag that forces two's complement wraparound on signed overflow

long: 64 bits
short: 16 bits
char: 8 bits (1 byte)
unsigned variations: always follow modular arithmetic
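A small sketch of the difference: unsigned arithmetic always wraps, while the signed line below is only defined when compiled with a flag like gcc -fwrapv:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Unsigned arithmetic is always modular: UINT_MAX + 1 wraps to 0. */
    unsigned int u = UINT_MAX;
    printf("UINT_MAX + 1 = %u\n", u + 1);   /* prints 0 */

    /* Signed overflow is undefined behavior in plain C.
       Compiled with -fwrapv it wraps: INT_MAX + 1 == INT_MIN. */
    int i = INT_MAX;
    printf("INT_MAX + 1 = %d\n", i + 1);    /* -2147483648 under -fwrapv */
    return 0;
}
```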
size_t: unsigned type with the size of a pointer; used for
- the argument of malloc() and calloc()
- array indices
- the type of the result of sizeof
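A small sketch of size_t in its usual roles (array length, malloc argument, loop index):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 10;                      /* object sizes have type size_t */
    int *a = malloc(n * sizeof(int));   /* malloc takes a size_t argument */
    if (a == NULL) return 1;

    for (size_t i = 0; i < n; i++)      /* size_t for array indices */
        a[i] = (int)i;

    printf("sizeof(int) = %zu\n", sizeof(int));  /* sizeof yields a size_t */
    free(a);
    return 0;
}
```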
Literal numbers have type int
The compiler will insert implicit casts where needed,
but these implicit casts can have surprising results
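A small sketch of one such surprise: in a mixed comparison the signed operand is implicitly converted to unsigned, so -1 becomes a huge value:

```c
#include <stdio.h>

int main(void) {
    unsigned int u = 1;
    int s = -1;
    /* s is implicitly converted to unsigned int for the comparison,
       so -1 becomes UINT_MAX and the test is false. */
    if (s < u)
        printf("-1 < 1u\n");
    else
        printf("-1 >= 1u (implicit cast!)\n");   /* this branch runs */
    return 0;
}
```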
Casting between signed and unsigned (same size):
- implementation-defined: the bit pattern is preserved
- so no change in the bit representation

Casting from small to big:
- the value is preserved
- casting unsigned small to unsigned big preserves both the value and the bit pattern

Casting from big to small:
- the value is preserved if it fits
- implemented by chopping off the leftmost (high-order) bits

Casting between both a different size and signedness:
- the compiler may apply the two rules above in either order
- (the order of operations is defined in a complicated way)
Solution: cast explicitly in two steps, changing only one property (size or signedness) at a time
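A sketch of making the order explicit, going from int to unsigned short (assuming the typical 32-bit int and 16-bit short):

```c
#include <stdio.h>

int main(void) {
    int x = -1;

    /* One-step cast: changes both size and signedness at once. */
    unsigned short a = (unsigned short)x;

    /* Two explicit steps: first change signedness (bit pattern preserved),
       then change size (high-order bits chopped). */
    unsigned int tmp = (unsigned int)x;        /* 0xFFFFFFFF */
    unsigned short b = (unsigned short)tmp;    /* 0xFFFF */

    printf("a = 0x%X, b = 0x%X\n", a, b);      /* both 0xFFFF here */
    return 0;
}
```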
Float: 32 bits
Double: 64 bits
Rounding errors in calculations depend on the compiler (and platform)
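A small illustration of rounding error; the exact output can vary with the compiler and platform:

```c
#include <stdio.h>

int main(void) {
    /* 0.1 has no exact binary representation, so repeated sums drift. */
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;
    printf("sum = %.17g\n", sum);   /* e.g. 0.99999999999999989 */
    printf("sum == 1.0? %s\n", sum == 1.0 ? "yes" : "no");
    return 0;
}
```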
```c
/* An enumeration names a small set of constant values. */
enum season {WINTER, SPRING, SUMMER, FALL};
enum season today = FALL;

if (today == WINTER) printf("snow!\n");

switch (today) {
  case WINTER:
    printf("snow!\n");
    break;
  default:
    printf("other\n");
}
```
Union types allow using the same space in memory in different ways:
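A minimal sketch (the union and member names here are illustrative, not from the notes):

```c
#include <stdio.h>

/* A union overlays all its members on the same bytes:
   its size is that of the largest member. */
union value {
    int   as_int;
    float as_float;
    char  as_bytes[4];
};

int main(void) {
    union value v;
    v.as_float = 1.0f;
    /* Reading a different member reinterprets the same bits
       (the result depends on the platform's representation). */
    printf("bits of 1.0f as int: 0x%08X\n", v.as_int);  /* 0x3F800000 typically */
    return 0;
}
```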