In the previous post, we looked at how a computer program works, and discovered that for it to be able to do anything at all, the program must first be loaded into memory. In that post, we started to look at memory addresses and variables, which I felt required a bit more unpacking. How, exactly, does a computer know where to load a program into memory, and how does that program then access and retrieve information from memory? The key to answering both of these questions lies in the concept of memory addressing, which will be the subject of this post.
A memory address is nothing more than a way of signposting where a value can be located in memory. Think of it as a simple, unique number that points to a specific byte of memory. In the small example below, both the memory address and the value stored at that address happen to fit in a single byte, but it is important not to mix one up with the other. A memory address is just that: an address. It is not the same as the value found at that address. For example, if a program tells the computer to retrieve the value at memory address 210 (hex: &D2), it should not return the value 210, but rather the byte stored at location 210 in memory. If the values in memory look like this:
Address | Value
================
  ...   |  ...
 0xD0   |  &10
 0xD1   |  &70
 0xD2   |  &FE
 0xD3   |  &11
 0xD4   |  &B0
  ...   |  ...
…then I will expect the computer to retrieve byte &FE from memory. One point on naming conventions: the use of ‘&’ followed by two hexadecimal digits to represent a byte is fairly commonplace. However, because of the ease with which it is possible to confuse a memory address with the byte value it contains, it has become conventional to label memory addresses instead with a ‘0x’ prefix followed by the hexadecimal value of the address.
A cornerstone of computer architecture since at least the 1980s, if not before, is that memory is ‘byte-addressable’, i.e. a single memory address refers to a single byte. The total amount of memory theoretically available to the CPU, as determined by the physical architecture of the system, dictates how many bytes are needed to uniquely identify a single byte in memory. If we had a simplistic computer that could only use up to 256 bytes of memory, the memory address for each byte of memory would itself be a single byte (like in the example table above). A memory address of 0x00 (or 00000000 in binary) would refer to the first byte of memory and a memory address of 0xFF (or 11111111 in binary) would refer to the 256th byte of memory. (To understand why a single byte could not store the memory address for the 257th byte of memory, see here). In a more advanced 8-bit computer, such as the BBC Micro, which has 32 kilobytes (i.e. 32,768 bytes) of memory, each memory address needs to be two bytes long. The two bytes in combination allow for 65,536 (2¹⁶) different memory addresses, which is more than adequate for the 32k of addressable memory. Since the dawn of the 21st century, 32-bit systems like the Intel Pentium series have been able to address up to 4,294,967,296 bytes (2³²) and so require four-byte long memory addresses to uniquely identify each byte of memory. This is also, incidentally, why such computers cannot use more than 4GB of RAM: inserting extra RAM chips beyond 4GB, while physically possible assuming there are enough slots, is futile since the architecture has no way of addressing anything beyond the 4,294,967,296th byte of memory. Up to the present day, the 64-bit architecture of most modern computers can technically address up to 18,446,744,073,709,551,616 bytes of memory (2⁶⁴), and so requires eight-byte long memory addresses.
It is worth noting that at present, very few 64-bit machines are physically capable of addressing that many bytes of memory due to various limitations imposed by hardware and operating systems, but the theory still holds.
As the capacity of CPUs has grown over time, the number of bytes they can process in a single cycle has increased. We call this the ‘word size’ of the system; an old 8-bit microprocessor could only process one byte at a time, so its word size was said to be one byte. A modern 64-bit CPU, by contrast, can process eight bytes at a time, so its word size is eight bytes. It is important not to confuse word size with memory address size, however. In a 64-bit computer the two happen to coincide: a memory address is 8 bytes and the CPU can process 8 bytes in a single cycle. Most 8-bit computers, on the other hand, have two-byte memory addresses even though their microprocessors can typically only process one byte at a time. If this were not the case, they could only work with 256 bytes of memory, which would be far too little to do anything of consequence. The trade-off is that because two bytes are needed for each memory address in an 8-bit system, the microprocessor has to carry out two cycles to access one 2-byte memory address. By contrast, the 64-bit CPU only needs a single cycle to access one 8-byte memory address.
In programming terms, a memory address is what underpins a ‘variable’. A variable is essentially just a named memory address coupled with a reserved amount of memory. Variables are easier to work with because, instead of referring to a memory address by its unique number, the programmer can refer to it by a more meaningful label. Variables also have an associated datatype, such as integer, string, timestamp and so on. When a variable is declared inside a program, the program’s compiler will check the datatype and manage the available memory addresses to ensure a sufficient number of bytes is reserved for that variable without overwriting anything else significant in memory. Remember that a memory address by itself only ever refers to a single byte, so if more than one byte is required for a variable (e.g. a single-precision floating point number, which requires four bytes), then the program must reserve four bytes and make sure that the next declared variable starts from a memory address four bytes after the memory address used for the previous variable. For example, if I wanted to declare two floating point variables in the C programming language, I would simply write:
float myFloat1;
float myFloat2;
Assume a 32-bit architecture with freely available memory starting from memory address 0x00000070. The C compiler will treat 0x00000070 as the memory address for myFloat1 and allocate four bytes. This means that the memory address for myFloat2 will be 0x00000074, since addresses 0x71, 0x72 and 0x73 have all been reserved for the additional three bytes needed by myFloat1. As with myFloat1, myFloat2 also requires four bytes, so it will occupy addresses 0x74, 0x75, 0x76 and 0x77. Higher-level programming languages like C manage all of this on the programmer’s behalf. When writing programs in assembly code, however, the programmer has to manage the memory addresses and the allocation of bytes directly.
In the next post, we will examine the different modes of addressing memory…