RAM vs ROM: What Are the Differences Between ROM and RAM?

In computer hardware, memory refers to the devices that store the data the processor works with. There are essentially two categories of memory: ROM (Read-Only Memory), which only allows data to be read and does not lose its contents when power is removed; and RAM (Random-Access Memory), which the processor can both read and write, but which loses its contents without power. In this article, we present the main types of ROM and RAM, as well as the most important characteristics of these devices, such as frequency, latency, packaging, and technology.

ROM memory

ROM (Read-Only Memory) is so named because data is written to it only once. After that, the information cannot be erased or changed, only read by the computer, except through special procedures. Another feature of ROM is that it is non-volatile, that is, the recorded data is not lost when the device loses electrical power. Here are the main types of ROM:

– PROM (Programmable Read-Only Memory): one of the first types of ROM. Data is recorded by devices that trigger a physical reaction in the chip's electrical elements. Once this occurs, the data stored in a PROM cannot be erased or changed;

– EPROM (Erasable Programmable Read-Only Memory): the main characteristic of EPROM is that it allows data to be rewritten. This is done with the aid of a component that emits ultraviolet light. In this process, the recorded data must be completely erased; only then can a new recording be made;

– EEPROM (Electrically-Erasable Programmable Read-Only Memory): this type of ROM also allows data to be rewritten, but unlike EPROM, erasing and writing are done electrically, so the device does not need to be removed from its place and taken to a special apparatus for re-recording;

– EAROM (Electrically-Alterable Read-Only Memory): EAROM can be viewed as a type of EEPROM. Its main feature is that the recorded data can be changed gradually, which is why it is generally used in applications that require only partial rewriting of information;

– Flash: Flash memory can also be viewed as a type of EEPROM, but its recording (and rewriting) process is much faster. In addition, Flash memory is more durable and can hold a large volume of data;

– CD-ROM, DVD-ROM and the like: these are optical discs on which data is recorded only once, whether at the factory, as with music CDs, or by the user, when burning a disc. There is also a category comparable to EEPROM, since it allows data to be re-recorded: CD-RW, DVD-RW and the like.

RAM memory

RAM (Random-Access Memory) is one of the most important parts of a computer, since it is where the processor stores the data it is working with. This type of memory has an extremely fast write process when compared to the various types of ROM. However, the recorded information is lost when electrical power is removed, that is, when the computer is turned off; it is therefore a volatile memory.

There are two widely used RAM technologies: static and dynamic, that is, SRAM and DRAM, respectively. There is also a newer type called MRAM. Here is a brief explanation of each:

– SRAM (Static Random-Access Memory): this type is much faster than DRAM, but it stores less data and has a higher price per megabyte. SRAM is often used as cache memory;

– DRAM (Dynamic Random-Access Memory): memories of this type have high capacity, that is, they can hold large amounts of data. However, access to this information is usually slower than access to static memories. This type also tends to be much cheaper than the static type;

– MRAM (Magnetoresistive Random-Access Memory): MRAM has been studied for some time, but only in recent years have the first units appeared. It is somewhat similar to DRAM, but uses magnetic cells. Thanks to this, it consumes less energy, is faster, and retains data for a long time even without electricity. The problem is that MRAM stores very little data and is very expensive, so it is unlikely to be adopted on a large scale.

Aspects of RAM memory operation

DRAM memories are made up of chips that contain a very large number of capacitors and transistors. Basically, a capacitor and a transistor together form a memory cell. The former stores electric charge for a certain time, while the latter controls the passage of this charge.

If the capacitor is storing charge, we have a 1 bit; if not, a 0 bit. The problem is that the charge is held only for a short period, so to avoid data loss, a component of the memory controller is responsible for the refresh function, which consists of rewriting the contents of each cell from time to time. Note that this process is performed thousands of times per second.

Refresh solves the problem, but it comes with side effects: the process increases energy consumption and, consequently, the heat generated. In addition, memory access speed ends up being reduced.
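The leaky-capacitor behavior described above can be illustrated with a toy Python sketch. The `DRAMCellSim` class and the retention figure are illustrative assumptions only, not real hardware parameters (real DRAM retention is on the order of tens of milliseconds, which is why refresh runs thousands of times per second):

```python
import time

class DRAMCellSim:
    """Toy model of a DRAM cell: the stored charge leaks away over time,
    so the controller must periodically rewrite (refresh) the value."""

    RETENTION = 0.064  # hypothetical retention window in seconds

    def __init__(self, bit):
        self.bit = bit
        self.last_written = time.monotonic()

    def read(self):
        # Past the retention window without a refresh, the charge has
        # leaked and the stored bit is lost.
        if time.monotonic() - self.last_written > self.RETENTION:
            return None
        return self.bit

    def refresh(self):
        # Rewrite the current contents, restoring the charge.
        value = self.read()
        if value is not None:
            self.bit = value
            self.last_written = time.monotonic()

cell = DRAMCellSim(1)
cell.refresh()
print(cell.read())  # 1 while within the retention window
```

A real memory controller does the equivalent of `refresh()` for every row of the chip, over and over, which is exactly the power and bandwidth overhead the text mentions.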

SRAM, in turn, is quite different from DRAM, mainly because it uses six transistors (or four transistors and two resistors) to form a memory cell. Two transistors handle the control task, while the others are responsible for the electrical storage, that is, for forming the bit.

The advantage of this scheme is that refresh is not needed, making SRAM faster and less power-hungry. On the other hand, because its manufacture is more complex and requires more components, its cost ends up being very high, making a computer based solely on this type too expensive. That is why its most common use is as cache, which requires only small amounts of memory.

As DRAM memories are more common, they will be the focus of this text from this point on.


The processor stores the information it works with in RAM, so write, erase, and read operations are performed all the time. All of this is possible thanks to the circuit already mentioned: the memory controller.

To facilitate these operations, the memory cells are organized into a sort of matrix, that is, a scheme that resembles rows and columns. The intersection of a given row (also called a wordline) with a given column (also called a bitline) forms what we know as a memory address. Thus, to access a position in memory, the controller obtains its row value, that is, the RAS (Row Address Strobe), and its column value, that is, the CAS (Column Address Strobe).
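The row/column decomposition can be sketched in a few lines of Python. The `split_address` helper and the 1024-column layout are hypothetical, just to show how one linear address maps to a RAS/CAS pair:

```python
def split_address(address, num_columns):
    """Map a linear memory address onto the row (wordline) and column
    (bitline) that the controller selects with RAS and CAS signals.
    num_columns is the width of the cell matrix (hypothetical layout)."""
    row = address // num_columns  # selected via RAS (Row Address Strobe)
    col = address % num_columns   # selected via CAS (Column Address Strobe)
    return row, col

# In a hypothetical 1024-column array, address 2050 lands on row 2, column 2.
print(split_address(2050, 1024))  # (2, 2)
```

Real address decoding also involves banks and ranks, but the basic idea is the same: one address, split into a row part and a column part.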

Timing and memory latency

The timing and latency parameters indicate how much time the memory controller spends on read and write operations. In general, the lower these values, the faster the operations.

To understand, let's take as an example a memory module that reports the following latency values: 5-4-4-15-1T. These values are written in the form tCL-tRCD-tRP-tRAS-CR. Let's see what each of these parameters means:

– tCL (CAS Latency): when a memory read operation starts, signals are triggered to activate the corresponding rows (RAS) and columns (CAS), to determine whether the operation is a read or a write (CS – Chip Select), and so on. The CAS Latency parameter indicates, in clock cycles, the period between the sending of the CAS signal and the availability of the corresponding data. In other words, it is the interval between the processor requesting a piece of data and the memory delivering it. In our example, this value is 5 clock cycles;

– tRCD (RAS to CAS Delay): this parameter indicates the interval, also in clock cycles, between the activation of the row and of the column of a given piece of data. In the example above, this value is 4;

– tRP (RAS Precharge): the interval, in clock cycles, between deactivating access to one row and activating access to another. In our example, this value is 4 cycles;

– tRAS (Active to Precharge Delay): this parameter indicates the minimum interval, also in clock cycles, between the command that activates a row and the precharge that closes it. In our example, this value is 15 clock cycles;

– CR (Command Rate): the interval between the activation of the CS signal and any other command. In general, this value is 1 or 2 clock cycles and is written with the letter T. In our example, it is 1 cycle.
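The five fields above can be pulled apart programmatically. This small parser is an illustrative helper, not a real tool, and assumes the exact label format used in the example:

```python
def parse_timings(label):
    """Parse a timing label such as '5-4-4-15-1T' into named parameters.
    The field order (tCL-tRCD-tRP-tRAS-CR) follows the example in the text."""
    names = ("tCL", "tRCD", "tRP", "tRAS", "CR")
    parts = label.split("-")
    # The Command Rate field carries a trailing 'T' (e.g. '1T'), dropped here.
    return {name: int(value.rstrip("T")) for name, value in zip(names, parts)}

print(parse_timings("5-4-4-15-1T"))
# {'tCL': 5, 'tRCD': 4, 'tRP': 4, 'tRAS': 15, 'CR': 1}
```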

These parameters are usually reported by the manufacturer on a label attached to the memory module (often the Command Rate value is omitted). When this is not the case, the information can be obtained through specific software (such as the free CPU-Z, for Windows) or through the BIOS setup.

The timing parameters give a good idea of the memories' access time, that is, the time the memory takes to deliver requested data. What has not been said so far is that this time is measured in nanoseconds (ns): 1 ns is 1 second divided by 1,000,000,000.

Thus, to get a sense of the maximum frequency a memory can use, simply divide 1000 by its access time in nanoseconds (this information may be on a label on the module or reported by software). For example: if a module works with an access time of 15 ns, its frequency is about 66 MHz, since 1000 / 15 ≈ 66.
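That rule of thumb is trivial to encode. The function below is just the 1000 / t formula from the text (the name `max_frequency_mhz` is illustrative):

```python
def max_frequency_mhz(access_time_ns):
    # One access per cycle: f = 1 / t. With t in nanoseconds, 1 / t comes
    # out in GHz, so multiply by 1000 to express the result in MHz.
    return 1000 / access_time_ns

# Truncated to a whole number, as the text does: 1000 / 15 = 66.66... -> 66 MHz.
print(int(max_frequency_mhz(15)))  # 66
```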

Other parameters

Some motherboards, overclocked or otherwise (overclocking, in a nutshell, is the practice of making hardware devices work beyond factory specifications), as well as software that details the computer's hardware characteristics, often report other parameters in addition to those mentioned above. Generally, these additional parameters appear in the form tRC-tRFC-tRRD-tWR-tWTR-tRTP (for example: 22-51-3-6-3-3), also in clock cycles. Let's see what each one means:

– tRC (Row Cycle): the time required to complete a full access cycle to a memory row;

– tRFC (Row Refresh Cycle): the time required to execute a memory refresh cycle;

– tRRD (Row to Row Delay): similar to tRP, but indicates the time the controller must wait after activating one row before it can activate another;

– tWR (Write Recovery): the time the memory controller must wait after a write operation before starting another operation of the same type;

– tWTR (Write to Read Delay): the time it takes for the memory controller to start a read operation after performing a write;

– tRTP (Read to Precharge Delay): the time required between a read operation and the activation of the next precharge signal.
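Since all of these parameters are expressed in clock cycles, their real duration depends on the memory's clock. The conversion is a one-liner (function name and figures below are illustrative):

```python
def cycles_to_ns(cycles, clock_mhz):
    """Convert a timing parameter given in clock cycles into nanoseconds:
    one cycle lasts 1000 / clock_mhz nanoseconds."""
    return cycles * 1000 / clock_mhz

# Hypothetical example: a tCL of 5 cycles on a 200 MHz memory clock
# corresponds to 5 * (1000 / 200) = 25 ns of real latency.
print(cycles_to_ns(5, 200))  # 25.0
```

This is why comparing raw timing numbers across modules with different clocks can be misleading: the same cycle count means less real time on a faster clock.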


Power consumption

Compared with other computer components, memory is one of the least power-hungry. Interestingly, this consumption has decreased as the technology has evolved. For example, DDR2 memory modules (a technology covered later in this text) generally require between 1.8 V and 2.5 V, and you can find DDR3 modules (a standard also covered in this article) that require only 1.5 V. Old memory modules required about 5 V.

Some people with enough knowledge of the subject overclock their memory by increasing its voltage. With this adjustment, within certain limits, it is possible to reach higher clock rates.

SPD (Serial Presence Detect)

The SPD is a small chip (usually an EEPROM) on the memory module that contains various information about the device's specifications, such as type (DDR, DDR2, etc.), voltage, timings/latency, manufacturer, and serial number.

Many motherboards come with a BIOS setup that allows a number of configuration adjustments. In these cases, a user can try to set the memory parameters manually; whoever does not want this work can keep the default setting. Sometimes this setting is indicated by an option that refers to the SPD.

Error detection

Several mechanisms have been developed to help detect memory errors, which can have many causes. These features are especially useful in high-reliability applications, such as mission-critical servers.

One such mechanism is parity, which can only help detect errors, not correct them. In this scheme, one bit is added to each byte of memory (remember: 1 byte corresponds to 8 bits). This bit takes the value 1 if the number of 1 bits in the byte is even and 0 if it is odd (or the opposite convention: 1 for odd and 0 for even). When data is read, a circuit checks whether the parity bit matches the count of 1 bits in the byte. If it does not, an error has been detected.

Parity, however, is not foolproof: a two-bit error, for example, can leave the parity bit consistent with the byte's count of 1 bits, so the error goes unnoticed. Thus, for applications that require high data integrity, there are ECC (Error Checking and Correction) memories, a more complex mechanism capable of detecting and correcting bit errors.
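The limitation described above is easy to demonstrate. The sketch below uses even parity (one common convention; the helper names are illustrative) and shows a single-bit flip being caught while a double-bit flip slips through:

```python
def parity_bit(byte):
    """Even-parity convention: the extra bit makes the total count of 1s even."""
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    # An error is flagged when the recomputed parity disagrees with the stored bit.
    return parity_bit(byte) == stored_parity

data = 0b1011_0010           # four 1 bits
p = parity_bit(data)         # 0: the count of 1s is already even

assert check(data, p)                    # intact byte: no error flagged
assert not check(data ^ 0b0000_0001, p)  # single-bit flip: detected
assert check(data ^ 0b0000_0011, p)      # double-bit flip: goes unnoticed
print("parity demo ok")
```

The last assertion is exactly the failure mode the text describes, and it is what motivates the stronger ECC schemes.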

Types of memory packaging

The packaging (encapsulation) is the artifact that physically houses the memory chips. Here is a brief description of the packaging types most commonly used in the industry:

– DIP (Dual In-Line Package): one of the first packaging types used in memory, especially popular in the days of XT and 286 computers. As its contact terminals ("legs") are quite thick, it can easily be fitted into sockets or even soldered onto boards by hand;

– SOJ (Small Outline J-Lead): this packaging is so named because its contact terminals resemble the letter 'J'. It was widely used in SIMM modules (seen later) and is fixed to the board by soldering, without requiring holes in the board's surface;

– TSOP (Thin Small Outline Package): a packaging type much thinner than the standards mentioned above (about one third the thickness of SOJ). Because of this, its contact terminals are smaller and thinner, reducing interference in communication. It is used in SDRAM and DDR memory modules (discussed later). There is a variation called STSOP (Shrink Thin Small Outline Package) that is even thinner;

– CSP (Chip Scale Package): more recent, the CSP package stands out for being thin and for not using contact pins that resemble the traditional "legs". Instead, it uses a connection type called BGA (Ball Grid Array). This packaging is used in modules such as DDR2 and DDR3 (seen further on).

Memory Modules

A module, also called a "stick", is a small board on which the memory chips are installed. This board is attached to the motherboard by means of specific slots. Here is a brief description of the most common types of modules:

– SIPP (Single In-Line Pin Package): one of the first module types to hit the market, formed by chips with DIP packaging. In general, these modules were soldered onto the motherboard;

– SIMM (Single In-Line Memory Module): modules of this type were not soldered but inserted into slots on the motherboard. The first version had 30 contact terminals (30-pin SIMM) and consisted of a set of 8 chips (or 9, with parity), so it could transfer one byte per clock cycle. Later, a 72-pin version (72-pin SIMM) appeared; it was larger and capable of transferring 32 bits at a time. 30-pin SIMM modules could be found with capacities ranging from 1 MB to 16 MB; 72-pin SIMM modules, in turn, were commonly found with capacities from 4 MB to 64 MB;

– DIMM (Dual In-Line Memory Module): DIMMs are so named because they have contact terminals on both sides of the board. They can transfer 64 bits at a time. The first version, used with SDR SDRAM, had 168 pins. Then came 184-pin modules, used with DDR memory, and 240-pin modules, used with DDR2 and DDR3. There is also a small-format DIMM standard called SODIMM (Small Outline DIMM), used mainly in portable computers such as notebooks;

– RIMM (Rambus In-Line Memory Module): a 184-pin module used by Rambus memories, which will be discussed later in this article. A curious fact is that for each Rambus stick installed in the computer, it is necessary to install an "empty" 184-pin module called a C-RIMM (Continuity RIMM) in the remaining slot.

Memory Technologies

Various memory technologies have been (and continue to be) created over time. Thanks to this, we periodically find faster memories with greater capacity, and even memories that require less and less energy. Here is a brief description of the main RAM technologies:

– FPM (Fast-Page Mode): one of the earliest RAM technologies. With FPM, the first read of a memory page has a longer access time than the following reads. Four read operations are actually performed, in an x-y-y-y scheme, for example 3-2-2-2 or 6-3-3-3: the first read takes longer, but the next three are faster. This is because the memory controller works only once with the row address (RAS) and then accesses a sequence of four columns (CAS), instead of issuing a RAS and a CAS signal for each access. FPM memories used both 30-pin and 72-pin SIMM modules;

– EDO (Extended Data Output): the successor of FPM, EDO stands out for allowing one memory address to be accessed while a previous request is still in progress. This type was mainly used in SIMM modules, but was also found on 168-pin DIMMs. There was also a similar technology, called BEDO (Burst EDO), which was faster thanks to lower access times, but it was almost never used: it was proprietary to Micron and therefore more expensive. In addition, it was "overshadowed" by the arrival of SDRAM technology;

– SDRAM (Synchronous Dynamic Random-Access Memory): FPM and EDO memories are asynchronous, meaning they do not work in sync with the processor. With ever faster processors this became a problem, because the processor often had to wait too long to access data in memory. SDRAM, in turn, works synchronously with the processor, avoiding these delays. From this technology onward, the memory's operating frequency became the usual speed measurement. Then came SDR SDRAM (Single Data Rate SDRAM), which could work at 66 MHz, 100 MHz, or 133 MHz (also called PC66, PC100, and PC133, respectively). Many people refer to this memory simply as "SDRAM" or even "DIMM memory", because of its module, but SDR is the more precise name;

– DDR SDRAM (Double Data Rate SDRAM): DDR memories represent a significant evolution over the SDR standard, because they handle two transfers per clock cycle (SDR works with only one operation per cycle). Thus, a DDR memory running at 100 MHz effectively doubles its throughput, as if it were working at 200 MHz. Visually, it is easy to tell the two apart: SDR modules have two notches on the bottom edge, where the contacts are, whereas DDR modules have only one. You can learn more about this technology in the DDR Memory article published here on InfoWester;

– DDR2 SDRAM: as the name implies, DDR2 is an evolution of DDR. Its main feature is the ability to work with four operations per clock cycle, doubling the previous standard. DDR2 modules also have a single notch on the bottom edge, but it is slightly more offset to one side;

– DDR3 SDRAM: DDR3 memories are, obviously, an evolution of DDR2. Again, the number of operations per clock cycle doubles, this time to eight. A novelty here is the possibility of using triple-channel configurations;

– Rambus (Rambus DRAM): Rambus memories are named after their creator, Rambus Inc., and reached the market with Intel's support. They differ from the SDRAM standard in working with only 16 bits at a time. In return, Rambus memories run at a 400 MHz frequency with two operations per clock cycle. They had disadvantages, however: very high latency, considerable heat, and higher cost. Rambus memories were never widely accepted in the market, but they were not a total fiasco: they were used, for example, in the Nintendo 64 game console. Curiously, Rambus memories work in pairs with "empty modules" or "continuity sticks": for each Rambus module installed, an "empty" module has to be installed in another slot. This technology eventually lost ground to DDR memories.
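The doubling described for the DDR family can be put into numbers with a back-of-the-envelope peak-bandwidth calculation for a 64-bit (DIMM-style) bus. The function name and figures are illustrative, and the result is a theoretical ceiling, not a measured rate:

```python
def peak_bandwidth_mb_s(clock_mhz, transfers_per_cycle, bus_width_bits=64):
    """Theoretical peak transfer rate: clock (MHz, i.e. millions of cycles
    per second) x transfers per cycle x bytes moved per transfer."""
    return clock_mhz * transfers_per_cycle * (bus_width_bits // 8)

# SDR at 100 MHz moves one 8-byte transfer per cycle; DDR doubles that
# at the same clock, matching the "as if 200 MHz" description in the text.
print(peak_bandwidth_mb_s(100, 1))  # 800 (MB/s, SDR)
print(peak_bandwidth_mb_s(100, 2))  # 1600 (MB/s, DDR)
```

The same formula explains why Rambus, with its narrow 16-bit bus, needed a much higher clock to compete.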


Over time, the evolution of memory technologies has made them not only faster but also able to store more data. Flash ROMs, for example, can store several gigabytes. The same applies to RAM. Because of this, the natural question is: how much memory to use? The answer depends on a number of factors, but the industry never stops working to further increase the speed and capacity of these devices. So do not be surprised: when you least expect it, you will hear about a new memory technology that could become the next market standard 🙂