The Computer Oracle

If 32-bit machines can only handle numbers up to 2^32, why can I write 1000000000000 (trillion) without my machine crashing?

--------------------------------------------------
Hire the world's top talent on demand or become one of them at Toptal: https://topt.al/25cXVn
and get $2,000 discount on your first invoice
--------------------------------------------------


Take control of your privacy with Proton's trusted, Swiss-based, secure services.
Choose what you need and safeguard your digital life:
Mail: https://go.getproton.me/SH1CU
VPN: https://go.getproton.me/SH1DI
Password Manager: https://go.getproton.me/SH1DJ
Drive: https://go.getproton.me/SH1CT


Music by Eric Matyas
https://www.soundimage.org
Track title: Quirky Dreamscape Looping

--

Chapters
00:00 If 32-Bit Machines Can Only Handle Numbers Up To 2^32, Why Can I Write 1000000000000 (Trillion) With
00:26 Accepted Answer Score 794
00:53 Answer 2 Score 397
02:20 Answer 3 Score 190
06:15 Answer 4 Score 31
08:02 Thank you

--

Full question
https://superuser.com/questions/698312/i...

--

Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...

--

Tags
#memory #64bit #cpu #32bit #computerarchitecture

#avk47



ACCEPTED ANSWER

Score 794


I answer your question by asking you a different one:

How do you count on your fingers to 6?

You likely count up to the largest possible number with one hand, and then move on to your second hand when you run out of fingers. Computers do the same thing: if they need to represent a value larger than a single register can hold, they use multiple 32-bit blocks to work with the data.
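To make the finger analogy concrete, here is a minimal sketch (Python is assumed purely for illustration; the answer names no language) of splitting a number that doesn't fit in one 32-bit register across two 32-bit words:

```python
n = 1_000_000_000_000        # too big for one 32-bit word (max 4294967295)

low = n & 0xFFFFFFFF         # the "first hand": the low 32 bits
high = n >> 32               # the "second hand": the bits that didn't fit

print(high, low)             # 232 3567587328
# Put the two hands back together to recover the original value:
print((high << 32) | low)    # 1000000000000
```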




ANSWER 2

Score 397


You are correct that a 32-bit integer cannot hold a value greater than 2^32-1. However, the value of this 32-bit integer and how it appears on your screen are two completely different things. The printed string "1000000000000" is not represented by a 32-bit integer in memory.

To literally display the number "1000000000000" requires 13 bytes of memory. Each individual byte can hold a value of up to 255. None of them can hold the entire numerical value, but interpreted individually as ASCII characters (for example, the character '0' is represented by decimal value 48, binary value 00110000), they can be strung together into a format that makes sense to you, a human.
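A quick sketch (Python assumed, purely for illustration) confirming the byte count and ASCII codes above:

```python
s = "1000000000000"                 # the printed string, not a number
print(len(s))                       # 13 characters -> 13 bytes in ASCII
print(ord("0"))                     # 48, the decimal ASCII code of '0'
print(format(ord("0"), "08b"))      # 00110000, its binary form
```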


A related concept in programming is typecasting, which is how a computer interprets a particular stream of 0s and 1s. As in the above example, the same bits can be interpreted as a numerical value, a character, or even something else entirely. While a 32-bit integer may not be able to hold a value of 1000000000000, a 32-bit floating-point number can hold approximately that value (trading exactness for range), using an entirely different interpretation.
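A sketch of the same idea (Python's struct module standing in for a typecast; CPython's 32-bit float format is IEEE 754 single precision):

```python
import struct

raw = b"1000"                        # four bytes of memory
print(raw.decode("ascii"))           # read as text: 1000
print(struct.unpack("<I", raw)[0])   # same bytes as a little-endian 32-bit int: 808464433

# A 32-bit float can hold (approximately) a trillion:
f = struct.unpack("<f", struct.pack("<f", 1000000000000))[0]
print(f)                             # 999999995904.0, rounded to 24-bit precision
```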

As for how computers can work with and process large numbers internally, there exist 64-bit integers (which can accommodate values of up to about 18 billion billion, i.e. 2^64-1), floating-point values, as well as specialized libraries that can work with arbitrarily large numbers.
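Arbitrary-precision support is easy to see in a language that builds it in (Python assumed here; its integers grow as needed, much like the multi-word technique described above):

```python
print(2**64 - 1)   # 18446744073709551615, the largest unsigned 64-bit value
print(2**200)      # far beyond any single machine word, handled in software
```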




ANSWER 3

Score 190


First and foremost, 32-bit computers can store numbers up to 2^32-1 in a single machine word. A machine word is the amount of data the CPU can process in a natural way (i.e. operations on data of that size are implemented in hardware and are generally fastest to perform). 32-bit CPUs use words consisting of 32 bits, thus they can store numbers from 0 to 2^32-1 in one word.
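The single-word limit can be demonstrated with a fixed-width type (a Python sketch; ctypes.c_uint32 stands in here for a hardware 32-bit word):

```python
import ctypes

word = ctypes.c_uint32(2**32 - 1)   # the largest value one 32-bit word can hold
print(word.value)                    # 4294967295
word.value += 1                      # one more doesn't fit: it wraps around
print(word.value)                    # 0
```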

Second, 1 trillion and 1000000000000 are two different things.

  • 1 trillion is an abstract concept of a number
  • 1000000000000 is text

By pressing 1 once and then 0 twelve times you're typing text: 1 inputs the character 1, 0 inputs the character 0. See? You're typing characters, and characters aren't numbers. Typewriters had no CPU or memory at all, yet they handled such "numbers" pretty well, because it's all just text.

Proof that 1000000000000 isn't a number, but text: it can mean 1 trillion (in decimal), 4096 (in binary) or 281474976710656 (in hexadecimal). It has even more meanings in different systems. The meaning of 1000000000000 is a number, and storing that number is a different story (we'll get back to it in a moment).
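You can check those three readings directly (Python assumed; int() takes the base as its second argument):

```python
digits = "1000000000000"            # the same text, three interpretations
print(int(digits, 10))              # 1000000000000 (decimal: one trillion)
print(int(digits, 2))               # 4096 (binary: 2**12)
print(int(digits, 16))              # 281474976710656 (hexadecimal: 16**12)
```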

To store the text (in programming it's called a string) 1000000000000 you need 14 bytes (one for each character plus a terminating NULL byte that basically means "the string ends here"). That's 4 machine words; 3 and a half would be enough, but as I said, operations on machine words are the fastest. Let's assume ASCII is used for text encoding, so in memory it will look like this (converting the ASCII codes corresponding to 0 and 1 to binary, each word on a separate line):

00110001 00110000 00110000 00110000
00110000 00110000 00110000 00110000
00110000 00110000 00110000 00110000
00110000 00000000 00000000 00000000

Four characters fit in one word; the rest spill over into the next word, and so on until everything (including the first NULL byte) fits.
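The memory dump above can be reproduced directly (a Python sketch; the three trailing zero bytes are the word padding described in the text):

```python
data = b"1000000000000\x00\x00\x00"   # 13 ASCII characters + NULL, padded to 4 words
for i in range(0, 16, 4):             # one 4-byte machine word per line
    print(" ".join(format(byte, "08b") for byte in data[i:i+4]))
```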

Now, back to storing numbers. It works just like with overflowing text, but they are fitted from right to left. It may sound complicated, so here's an example. For the sake of simplicity let's assume that:

  • our imaginary computer uses decimal instead of binary
  • one byte can hold numbers 0..9
  • one word consists of two bytes

Here's an empty 2-word memory:

0 0
0 0

Let's store the number 4:

0 4
0 0

Now let's add 9:

1 3
0 0

Notice that both operands would fit in one byte, but not the result. But we have another one ready to use. Now let's store 99:

9 9
0 0

Again, we have used the second byte to store the number. Let's add 1:

0 0
0 0

Whoops... That's called integer overflow and is a cause of many serious problems, sometimes very expensive ones.

But if we expect that overflow will happen, we can do this:

0 0
9 9

And now add 1:

0 1
0 0

It becomes clearer if you remove byte-separating spaces and newlines:

0099    | +1
0100

We have predicted that overflow may happen and that we may need additional memory. Handling numbers this way isn't as fast as with numbers that fit in single words, and it has to be implemented in software. Adding software support for two-32-bit-word numbers to a 32-bit CPU doesn't make it a true 64-bit CPU: it still can't operate on 64-bit numbers natively, it merely simulates them, one word at a time.
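The decimal walkthrough above can be sketched as code (Python assumed; one list element plays the role of one decimal "byte"):

```python
def to_bytes(n):
    """Store n right-aligned in four decimal "bytes", e.g. 99 -> [0, 0, 9, 9]."""
    return [int(d) for d in str(n).rjust(4, "0")]

def add_with_carry(mem, n):
    """Add n digit by digit, carrying overflow into the next byte to the left."""
    carry = n
    for i in range(3, -1, -1):
        total = mem[i] + carry
        mem[i] = total % 10          # what fits in this byte
        carry = total // 10          # what spills into the next one
    return mem

print(add_with_carry(to_bytes(4), 9))    # [0, 0, 1, 3]
print(add_with_carry(to_bytes(99), 1))   # [0, 1, 0, 0]
```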

Everything I have described above applies to binary memory with 8-bit bytes and 4-byte words too; it works pretty much the same way:

00000000 00000000 00000000 00000000 11111111 11111111 11111111 11111111    | +1
00000000 00000000 00000000 00000001 00000000 00000000 00000000 00000000

Converting such numbers to the decimal system is tricky, though (it works pretty well with hexadecimal).
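The same carry trick for real binary words (a Python sketch; masks emulate 32-bit hardware registers):

```python
MASK = 0xFFFFFFFF                    # keeps a value within one 32-bit word

def add_two_words(hi, lo, n):
    """Add n to a number stored as two 32-bit words, propagating the carry."""
    lo += n
    carry = lo >> 32                 # did the low word overflow?
    lo &= MASK
    hi = (hi + carry) & MASK
    return hi, lo

# 00000000... 11111111... + 1  ->  00000001... 00000000...
print(add_two_words(0x00000000, 0xFFFFFFFF, 1))   # (1, 0)
```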




ANSWER 4

Score 31


The key is understanding how computers encode numbers.

True, if a computer insists on storing numbers using a simple binary representation in a single word (4 bytes on a 32-bit system), then a 32-bit computer can only store numbers up to 2^32-1. But there are plenty of other ways to encode numbers, depending on what you want to achieve with them.

One example is how computers store floating-point numbers. Computers can use a whole bunch of different ways to encode them. The IEEE 754 standard defines rules for encoding numbers larger than 2^32. Crudely, computers can implement this by dividing the 32 bits into parts representing some digits of the number and other bits representing the size of the number (i.e. the exponent, 10^x). This allows a much larger range of numbers in size terms, but compromises precision (which is OK for many purposes). Of course, the computer can also use more than one word for this encoding, increasing the precision or magnitude of the available encoded numbers. The decimal32 version of the IEEE standard allows numbers with about 7 decimal digits of precision and magnitudes of up to about 10^96.
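To see the exponent/significand split concretely, here is a sketch (Python assumed; it decodes the IEEE 754 binary32 fields of a float, a close cousin of the decimal32 format mentioned above):

```python
import struct

# The 32 bits of the float encoding of one trillion:
bits = struct.unpack("<I", struct.pack("<f", 1000000000000.0))[0]
sign     = bits >> 31                 # 1 bit
exponent = (bits >> 23) & 0xFF        # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF            # 23 bits, with an implicit leading 1

# Reassemble the value from its parts:
value = (-1)**sign * (1 + mantissa / 2**23) * 2**(exponent - 127)
print(sign, exponent - 127, value)    # 0 39 999999995904.0
```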

But there are many other options if you need the extra precision. Obviously you can use more words in your encoding without limit (though with a performance penalty to convert into and out of the encoded format). If you want to explore one way this can be done there is a great open-source add-in for Excel that uses an encoding scheme allowing hundreds of digits of precision in calculation. The add-in is called Xnumbers and is available here. The code is in Visual Basic which isn't the fastest possible but has the advantage that it is easy to understand and modify. It is a great way to learn how computers achieve encoding of longer numbers. And you can play around with the results within Excel without having to install any programming tools.