Almost everyone has a passing familiarity with the idea that computers – and, by extension, tablets, smartphones and other digital devices – have ‘memory’. But what does it actually mean? How does a computer ‘remember’ something, and furthermore how is it able to recall some things but not others? Why is certain information available even after we turn off our computer and turn it on again, but other information disappears? In this post, we will explore the concept of memory, what it means and how a computer uses it.
The first important distinction to make is between ‘memory’ and ‘storage’. These two terms used to be fairly well defined, but since the smartphone revolution, and the sales blurb that accompanies every newly released model, the distinction has become significantly blurred. It doesn’t help that both memory and storage are measured using the same unit, namely the byte, or its many multiples: the kilobyte, megabyte, gigabyte, terabyte and so on. The simplest way of distinguishing the two is that storage persists even after a device has been switched off, whereas memory is transient: anything held in memory is lost if the device is switched off or reset. Thus, the difference between a 32GB iPhone and a 64GB iPhone is actually a difference of storage, not memory. It is the amount of space the phone has to store photos, e-mails, videos, apps and even the phone’s operating system itself. All of that data is still there after you restart the phone, and so it is more properly called storage, not memory – even though many phones will complain that they have ‘run out of memory’ when you can no longer fit any more photos on them, forcing you to go through your collection and delete some before you can store new ones. None of this is to say that a phone doesn’t also have something called memory: it does, but you won’t generally find it being promoted in the same way by the phone’s manufacturer or reseller, unless you delve into the detailed technical specifications.
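A quick aside on those units: a kilobyte is roughly a thousand bytes, a megabyte roughly a million, a gigabyte roughly a billion and a terabyte roughly a trillion (strictly speaking, computers often count in powers of two, so a kilobyte is really 1,024 bytes, but ‘roughly a thousand’ is close enough for our purposes). A 64GB iPhone, then, has roughly 64 billion bytes of storage to play with.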
This distinction is a little better understood in the computer world. Storage is equivalent to hard disk space (or indeed any other storage media, such as floppy disks, USB pen drives, DVDs and so on): it is where the computer can write information that will persist. Memory, meanwhile, is equivalent to how much RAM the computer has and, just like storage, it is measured in bytes. If you’re wondering just what ‘RAM’ actually is, hold that thought.
What, then, is ‘memory’ in the true sense of the word? It refers to a temporary working area that a computer uses to handle the requests you make of it as a user. The computer uses this area to hold the information needed to carry out an operation; once the operation has been completed, there is no need for the computer to continue to remember that information. Imagine you are in a café and decide to buy a coffee; this will prompt you to take the actions needed to make that purchase. You will take note of the price, contemplate whether you can really afford it, decide you’re going to pay using your debit card, possibly glance at your watch while in the queue, but all of that mental processing has no deep significance to you. If I were to ask you a week later what the price was for that coffee, what time of day you bought it and how much money you had in your bank account at the time, doubtless you will have completely forgotten. Contrast that with your bank statement recording the sale of the coffee: that information is recorded and stored, and you could look it up many years later should you so choose. This is the difference between memory and storage in computing terms: memory is for things that only matter at a particular point in time and have no long-term significance, whereas storage is for things that need to be recorded and preserved indefinitely.
Remember RAM from earlier? RAM refers to the physical computer chips that are used for holding computer memory. Just as storage requires a physical component such as a hard drive, so computer memory requires a physical component too: RAM chips. Indeed, the nature of computer memory is neatly captured by what the acronym RAM stands for: Random Access Memory. It is ‘random’ in the sense that the computer can use any byte, or set of bytes, within RAM to temporarily store something; once that byte has served its purpose, the computer no longer cares where in memory the information was kept, and will likely overwrite it with something else when it carries out another operation. Think of it like a whiteboard with a to-do list on it. You write items on there, wipe them off when you’ve done them and then write new ones as they arise. It isn’t important to preserve the old to-do items, because you’ve done them. The amount of RAM a computer has will ultimately restrict how much information it can hold temporarily – just as the physical size of the whiteboard restricts your to-do list to as many items as will fit on the board. The more RAM the computer has, the more information it can juggle, and this in turn dictates how many programs it can keep running at the same time. Each new program or app you open will typically grab hold of a chunk of memory to use for its own operations, leaving less memory for subsequent programs to grab for themselves. Eventually, if too many programs are opened, the available memory is exhausted and the computer grinds to a halt – until or unless you terminate one or more of the open programs.
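To put some purely illustrative numbers on this: suppose a computer has 8 gigabytes of RAM, which is roughly 8 billion bytes. If the operating system takes up 2GB of that, a web browser with a few dozen tabs open another 3GB and a photo editor a further 2GB, only around 1GB remains for anything else you might want to open. Once that is gone, something has to be closed before anything new can run comfortably.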
This may all sound fairly theoretical and abstruse, but some examples will hopefully make it clearer. For a computer to do anything – including what might seem like simple things, such as opening a document or displaying a photo – the information required must be loaded into memory. Stored information, whether on a hard disk or a USB pen drive, is essentially passive information. It is only when that stored information gets loaded into memory that the computer can do something with it. When you ask a computer to open a photo, for example, all the many millions of bytes (megabytes) that make up that photo in binary are loaded into an area of memory, which the graphics hardware then reads in order to draw the image on your phone’s screen or your computer’s monitor. Perhaps the most illuminating example is when you first start to write a new document but have not yet saved it. From the moment you start writing, everything you type is held purely in memory. Your interaction with that document is mediated by the computer’s memory, with every keystroke you make captured in memory as you go. It does not persist anywhere, until or unless you hit save. Even after you do hit save, the document is stored only as of that point in time: everything you continue to type thereafter is again held only in memory. This is why, if you haven’t hit save and the computer crashes or there is a power cut, the unsaved portion of the document is lost. When a computer is turned off (intentionally or otherwise!) and comes back on again, the memory is automatically reset, and only the information that was stored to disk can be retrieved back into memory and made visible to you on the screen. (Modern word-processing applications like Microsoft Word have autosave functions built in to guard against this problem, but in the background all they are doing is saving the document to a hidden area of the hard disk so that it can be retrieved again in the event of a crash.)
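To give a rough sense of scale (the figures here are purely illustrative), a photo taken on a 12-megapixel camera, once decoded into memory for display, needs around three bytes per pixel, which works out at roughly 36 million bytes, or about 36MB, even though the compressed JPEG file sitting in storage may only be a few megabytes.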
Memory is not simply about retrieving files from storage and displaying them, however. Bear in mind that doing anything on a computer – including opening a file – requires a program to do it, and every program is itself composed of bytes. If you recall the previous blog series on binary, you will remember that a computer encodes any instruction – or series of instructions, aka a program – in units of 8-bit bytes. When you launch a program, therefore, it has to go somewhere in order for the computer to be able to run it. That place is memory. Though the lines of code that make up the program will probably have been stored on disk, when you ask the computer to run the program, it first has to be loaded into an area of memory so that the computer’s CPU can execute its instructions. Imagine you have a simple (and, frankly, pointless!) program that adds together the integers 4 and 5. In 6502 assembly code, it would look something like this:
LDA #4
ADC #5
STA &70
Line one loads the value 4 into the accumulator; line two then adds 5 to the value already in the accumulator; and line three stores the sum (i.e. 9) at a specific memory address, designated by &70. Don’t worry too much about what an ‘accumulator’ is for now; the main point to take away is that each piece of code takes up space in memory. This program would occupy six bytes of memory: one byte each for the three ‘opcodes’ (LDA, ADC and STA), one byte each for the values 4 and 5, and one byte for the memory address &70. To run the program, all six bytes first have to be loaded into memory so that the computer can execute each line of code in turn. Not only that, but an additional byte of memory (the one at address &70) is used by the program to store the sum of 4 and 5. Programs thus both occupy areas of memory and use areas of memory for their outputs. This is an important principle to grasp: when executing a program, it is not enough to have just the memory needed to load the program itself; there must also be enough memory available for any data the program needs to temporarily store, manipulate and operate upon while it is running.
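To make that concrete, here is how a 6502 assembler would typically encode those three lines as bytes, shown in hexadecimal (the values &A9, &69 and &85 are the standard 6502 opcodes for these particular forms of LDA, ADC and STA):

LDA #4   assembles to the two bytes  &A9 &04
ADC #5   assembles to the two bytes  &69 &05
STA &70  assembles to the two bytes  &85 &70

That is six bytes in total, sitting one after another in memory; and when the program runs, a seventh byte, at address &70 itself, is used to hold the result.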
How do programs themselves access memory? This will be the subject of the next post…