Following on from my previous post on binary, in this series of posts we will examine exactly how data is stored by a computer. Just as each programming instruction and memory address in assembly language can be converted into a hexadecimal value that ultimately represents a binary number, so too can data. In this post, we will look at numbers.
A natural number is any non-negative integer, starting from 0 (what we might also call “counting numbers”). In binary, it is possible to represent the natural numbers 0 to 255 with a single 8-bit byte. Thus, if we wanted a computer to store the number 197, we would convert that into a hexadecimal value of &C5, pass it to the assembler and it would encode it in binary as 11000101.
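The decimal-to-hex-to-binary round trip above can be checked directly. Here is a minimal sketch in Python (used purely for illustration; the original example is aimed at an assembler, which would do this conversion for you):

```python
number = 197

# Decimal to hexadecimal and binary
print(f"{number:X}")    # C5        (as in &C5)
print(f"{number:08b}")  # 11000101  (padded to a full 8-bit byte)

# And back again
print(int("C5", 16))       # 197
print(int("11000101", 2))  # 197
```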
Incidentally, there is a neat trick to convert a binary number (of any length) back into its decimal counterpart by simply taking the first digit on the left, multiplying it by 2 and adding it to the next number (reading left to right); then take that sum, multiply it by 2 and add it to the next number in sequence – and so on – until you reach the end:
11000101

1           = 1
1*2 + 1     = 3
3*2 + 0     = 6
6*2 + 0     = 12
12*2 + 0    = 24
24*2 + 1    = 49
49*2 + 0    = 98
98*2 + 1    = 197
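The doubling trick is easy to turn into a few lines of code. This is a sketch in Python (the function name is mine, purely for illustration):

```python
def binary_to_decimal(bits: str) -> int:
    """Convert a binary string to decimal using the doubling trick:
    start with the leftmost digit, then repeatedly multiply the running
    total by 2 and add the next digit, reading left to right."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)
    return total

print(binary_to_decimal("11000101"))  # 197
```

The trick works on a number of any length, so `binary_to_decimal("111101001")` gives 489, the nine-bit example used below.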
What happens, though, if we want the computer to store a natural number greater than 255? The short answer is, we’ll need more bytes; but how does it work in practice? It will help if we examine the binary representation of a number greater than 255. Let’s use the example of 489, which in binary is 111101001, using 9 bits. Since a byte is only 8 bits long, the computer will need two bytes to store the number 489:
Byte 1: 11101001
Byte 2: 00000001
Even though we only need one additional bit in this case, a byte is nevertheless the lowest unit of storage for a computer, so we pad out the second byte with seven 0 bits. Note that although we are inclined to read the binary number from left to right, the computer reads it from right to left, hence it will store the bytes right to left as well, starting with byte 1, which is called the ‘low’ byte and ending with byte 2, which is called the ‘high’ byte.
So far, all we have actually done is ask the computer to store two 8-bit binary numbers. Individually, these two bytes have decimal values of 233 (byte 1) and 1 (byte 2). But 233 + 1 = 234, so how do these two bytes actually represent 489? It all comes back to the binary system, also known as “base-2”. In the first byte, although it is a convenient shortcut to say that each value is just a direct binary representation of a decimal value, the correct formulation is to say that it represents the decimal value multiplied by 2^0 (i.e. 1). In the second byte, the exponent goes up by 8, so its value actually represents the equivalent decimal value multiplied by 2^8 (i.e. 256). So, back to our example: these two bytes make the number 489 because the high byte actually represents (1 * 2^8) = 256, which we then add to the low byte value of (233 * 2^0) = 233, giving us a total of 489.
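The split into a low and a high byte, and the recombination just described, can be sketched with a couple of bit operations. A Python illustration (the variable names are mine):

```python
number = 489

low_byte = number & 0xFF   # mask off the lowest 8 bits:   11101001 = 233
high_byte = number >> 8    # shift the remaining bits down: 00000001 = 1

print(f"Byte 1 (low):  {low_byte:08b} ({low_byte})")
print(f"Byte 2 (high): {high_byte:08b} ({high_byte})")

# The high byte is worth 2^8 = 256 times its face value:
print(high_byte * 256 + low_byte)  # 489
```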
This might sound complicated, but it’s really just because we have moved away from the more intuitive “base-10” (i.e. decimal) system we are used to working in. Although we don’t spend much time thinking about it, each digit within a base-10 (decimal) number is actually a multiple of 10^y, where y increases by 1 for each additional digit to the left. The decimal number 489 can be understood as:
4 * 10^2 = 400
8 * 10^1 = 80
9 * 10^0 = 9
Total    = 489
In binary, or “base-2”, each digit is instead a multiple of 2^y, where y increases by 1 for each additional digit to the left. Thus the binary number 111101001 can be understood as:
1 * 2^8 = 256
1 * 2^7 = 128
1 * 2^6 = 64
1 * 2^5 = 32
0 * 2^4 = 0
1 * 2^3 = 8
0 * 2^2 = 0
0 * 2^1 = 0
1 * 2^0 = 1
Total   = 489
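The positional expansion above can be reproduced digit by digit. A short Python sketch:

```python
bits = "111101001"  # 489

total = 0
for i, bit in enumerate(bits):
    power = len(bits) - 1 - i        # leftmost digit has the highest power
    contribution = int(bit) * 2**power
    print(f"{bit} * 2^{power} = {contribution}")
    total += contribution

print("Total =", total)  # 489
```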
The magic of base-2 is that while each individual byte can only store a maximum decimal value of 255, each additional byte increases in value by a factor of 2^8 (256). So, the value of the second byte is multiplied by 256, the value of the third byte by 65,536 (256*256) and the value of the fourth byte by 16,777,216 (256*256*256). Thus, with four bytes, a computer can store any natural number from 0 all the way up to 4,294,967,295:
First byte:  255             = 255
Second byte: 255*256         = 65280
Third byte:  255*256*256     = 16711680
Fourth byte: 255*256*256*256 = 4278190080
Total                        = 4294967295
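Python’s built-in `int.from_bytes` combines bytes in exactly this way: with `byteorder="little"`, byte n is multiplied by 256^n, low byte first, just as described above. A quick check of both examples from this section:

```python
# 489 stored low byte first: 233 + 1*256
print(int.from_bytes(bytes([233, 1]), byteorder="little"))  # 489

# Four bytes of all ones gives the four-byte maximum
print(int.from_bytes(bytes([255, 255, 255, 255]), byteorder="little"))  # 4294967295
```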
As a point of computing history trivia, the original 8-bit architecture computers from the 1980s had microprocessors that could only handle one byte (i.e. 8 bits) of data at a time. That is to say, they could only load a single byte into the microprocessor’s internal register before having to move it into memory. Today’s 64-bit microprocessors can handle up to 8 bytes in parallel, meaning that they can hold up to 8 bytes of information within an internal register before needing to move it into memory. That is a large part of what makes them so much faster than their 1980s counterparts: where a 1980s computer had to read and write numbers larger than 255 in multiple steps, today’s machines can handle much larger numbers in a single operation. However, the method of storing the resulting data in memory still works in exactly the same way, with one or more bytes being used in combination to store a natural number.
A ‘real’ number is essentially any number, positive or negative, including any digits that come after the decimal point. The signed aspect of a number (whether it is positive or negative) does not by itself pose a huge challenge to a computer, since we could tell it to use the first bit of a byte to indicate the sign (if the first bit is 0, the number is positive; if it’s 1, the number is negative). Where it becomes a challenge is in defining the level of precision, i.e. how many digits after the decimal point we will store. Although the series of natural numbers (see above) is technically infinite, there is nevertheless a finite separation between each number. That is to say, between 1 and 2 there is no other whole number, just as between 1,000,000 and 1,000,001 there is no other whole number. While we can always add another whole number to the series, placing a ceiling on the highest integer a computer can store does not affect the accuracy of what it does with any of the numbers below that ceiling. Real numbers are different: between 0.01 and 0.02 there is an infinity of numbers, just as between 0.000001 and 0.000002 there is an infinity of numbers. Defining a precision limit on a real number will therefore always compromise the accuracy with which a computer is able to work with those numbers. The trick is to minimise the inaccuracy as much as possible, such that where it does occur it is so minuscule as to make little-to-no difference when the computer carries out calculations using real numbers.
Real numbers are therefore typically stored by a computer using something called a floating point. This method slices the 32 bits that make up four bytes of storage into three components: the first bit is the sign (0 for positive, 1 for negative); the next eight bits are the exponent, stored as a whole number from 0 to 255 from which a ‘bias’ of 127 is later subtracted; and the final 23 bits are something called the ‘mantissa’, which is a binary fraction. Together, they allow the computer to represent numbers in the form:
(-1)^s * (1 + m) * 2^(e - 127)

where s is the sign bit, m is the binary fraction given by the mantissa, and e is the exponent as stored.
This looks more complicated than it is in reality; it just requires a little unpacking. First off, what is a binary fraction? Essentially, you take each digit of the mantissa and divide the first bit by 2, the second bit by 4, the third bit by 8, the fourth bit by 16 and so on until you reach the end of the mantissa, then add the results up. If you can see that everything after a certain bit is zero, there is no need to continue, since those fractions will all come out as zero. After adding up the fractions, you add 1 to the result (this is the so-called ‘hidden bit’, which is simply part of what makes up a floating point representation). As for the exponent, why do we subtract 127? The exponent is stored in 8 bits, which means it can only hold a whole number between 0 and 255; however, the actual exponent we want to use needs to be able to go negative as well as positive. So we take the exponent value as stored and subtract 127 from it, giving an actual exponent anywhere between -127 and +128. It will help to illustrate this with an example. Below, we will show how 0.000000000000000000000000000000000001316553672921 can be stored in four bytes by the computer:
00000011 11100000 00000000 00000000

sign:     0
exponent: 00000111
mantissa: 11000000000000000000000
The sign is 0, so this is a positive number. The exponent value is the binary number for 7, so we subtract 127 from it to get an actual exponent of -120. This already indicates that the number we are going to obtain is a very, very small one. Next, we take the mantissa value and determine the binary fraction it represents. The first bit is 1, so that equates to 1/2, and the second bit is 1, so that equates to 1/4. All the remaining digits are zeros, so our binary fraction comes out as 1/2 (0.5) + 1/4 (0.25) = 0.75, and to this we finally add the ‘hidden bit’, making 1.75 our overall value. Putting it all together, we get 1.75 * 2^-120, which (to the precision shown) equals 0.000000000000000000000000000000000001316553672921
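We can verify the whole worked example with Python’s standard `struct` module, which unpacks raw bytes as a 32-bit float. Note that the bytes below are written in the same order as printed in the text, i.e. high byte first, so we use the big-endian format code `">f"`:

```python
import struct

# The four bytes from the example, high byte first
raw = bytes([0b00000011, 0b11100000, 0b00000000, 0b00000000])

value = struct.unpack(">f", raw)[0]  # ">f" = big-endian 32-bit float
print(value)                         # roughly 1.3166e-36, as in the text
print(value == 1.75 * 2**-120)       # True
```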
This example demonstrates how floating point can be used to store very small numbers; it can equally be used to store very large numbers. The benefit of floating point is that because the decimal point is not in a fixed position, the four bytes of storage can be used flexibly to create very large or very small numbers. The accuracy problem is still there (what if we tried to get the square root of our very small number above?) but at degrees so infinitesimally small that they (usually!) don’t make a significant impact on the calculations a computer performs. Of course, there are always exceptions to every rule…