RAM/Memory - Could you explain its different speeds & timings please?

Discussion in 'Hardware' started by Maniac618, 12 Aug 2005.

  1. Maniac618

    Maniac618 What's a Dremel?

    Joined:
    10 Apr 2004
    Posts:
    376
    Likes Received:
    0
    If there is already a thread please redirect me otherwise...


    I have Corsair 3200 Xtreme Low Latency RAM 2x512mb installed.

    My question is: what exactly is the "3200"? Is it the best, and if not, what will be in a year's time?

    How much effect does the "3200" have, or the DDR-SDRAM part? And what about timings? I've seen some people writing 2-5-5- etc etc.

    Please help! Thanks in advance. :wallbash:
     
  2. Krikkit

    Krikkit All glory to the hypnotoad! Super Moderator

    Joined:
    21 Jan 2003
    Posts:
    24,063
    Likes Received:
    763
    3200 is a description of the memory speed; 3200 equates to a 400MHz effective (DDR) speed.
    Basically, divide the larger number by 8 and you get the bus speed, so PC4000 is a 500MHz speed, or 250MHz SDR.
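
    If it helps to see the arithmetic, here's a rough sketch of that divide-by-eight rule (the module ratings below are just examples, nothing specific to your kit):

    # A DDR module's data bus is 64 bits = 8 bytes wide, so the PC rating
    # (MB/s) divided by 8 gives the effective DDR rate in MHz, and half of
    # that is the actual (SDR) clock.

    def rating_to_speeds(pc_rating_mb_s):
        ddr_rate_mhz = pc_rating_mb_s / 8    # e.g. PC3200 -> 400
        sdr_clock_mhz = ddr_rate_mhz / 2     # e.g. 400 -> 200
        return ddr_rate_mhz, sdr_clock_mhz

    for rating in (3200, 4000):              # PC3200, PC4000
        ddr, sdr = rating_to_speeds(rating)
        print(f"PC{rating}: {ddr:.0f}MHz effective (DDR), {sdr:.0f}MHz actual clock")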

    The timings are a description of how long something takes to complete; lower is generally better. They're written 2-2-2-5 for the best ones. The CAS latency (the buzzword of timings) is generally the first quoted number, but the others make a difference too, so don't be lulled by CAS2 RAM unless all the numbers are low, e.g. 2-2-2-5 rather than 2-3-4-5.

    As for which is the "best", it depends on your system. For Athlon XPs you'll generally want very low-latency PC3200-3700. For P4s you'll be wanting high-frequency RAM where the timings don't matter as much, and for AMD64s you'll be wanting a combination of the two.
     
  3. Glider

    Glider /dev/null

    Joined:
    2 Aug 2005
    Posts:
    4,173
    Likes Received:
    21
    PC3200 is a JEDEC standard which represents the memory's speed (actually, it's the width of the DIMM module (64 bit = 8 byte) multiplied by the 400MHz data rate [8*400=3200]). 3200 is how many MB/s the memory can provide (bandwidth); likewise PC2100 (8*266) provides 2100MB/s of bandwidth.
    The 400MHz (in your case) is the frequency at which the module "communicates" with the processor; this is the FSB setting (HT bus speed in the case of an A64 ;)), maybe you've heard of that...
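
    Just to make the multiplication concrete (a quick illustrative sketch; the marketing names round the result, which is why 8*266 ends up being sold as "2100"):

    # Bandwidth = module width (64 bit = 8 bytes) x effective data rate (MT/s).
    # JEDEC/marketing round the result to get the PC rating.

    BUS_WIDTH_BYTES = 8

    for ddr_rate in (266, 333, 400):
        bandwidth_mb_s = BUS_WIDTH_BYTES * ddr_rate
        print(f"DDR{ddr_rate}: {bandwidth_mb_s} MB/s peak (~PC{round(bandwidth_mb_s, -2)})")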

    The "XL" part (2-2-2-5) refers to the timings or latencies at which the modules work. As a rule of thumb, tighter is better... and 2-2-2-5 is about as good as it gets. To explain everything about the timings would lead a bit too far, but I'll give you a brief explanation.

    Memory is a matrix with rows and columns where all the data is stored. To write to or read from a specific cell, the row and column of that cell must be activated. The PC does this by first activating the row (RAS) and then the column (CAS). Because this happens at such high speeds (400 million times per second), it tends to "lag" a bit. This lag is represented by those numbers... and we all know, less lag = better...
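
    To get a feel for what that lag costs in real time, here's a rough sketch (the clock speeds below are just typical examples, not your exact modules):

    # A timing number is a count of memory clock cycles, so the real delay in
    # nanoseconds depends on the clock: delay_ns = cycles / clock_in_GHz.

    def delay_ns(cycles, clock_mhz):
        return cycles / (clock_mhz / 1000.0)   # cycles * ns per cycle

    print(delay_ns(2, 200))   # CL2 at 200MHz (DDR400) -> 10.0 ns
    print(delay_ns(3, 250))   # CL3 at 250MHz (DDR500) -> 12.0 ns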

    I know my explanation is far from complete, but for a more detailed one you could Google the JEDEC standard and DDR RAM timings.
     
  4. MonkeySpank

    MonkeySpank Banned

    Joined:
    10 Aug 2005
    Posts:
    357
    Likes Received:
    0
    Command Per Clock (CPC)

    Settings = Auto, Enable (1T), Disable (2T)

    Command Per Clock (CPC) is also called Command Rate. In some instances it may be best to Disable (2T) with 2x512MB RAM modules.

    From Adrian Wong’s site: http://www.rojakpot.com/
    “This BIOS feature allows you to select the delay between the assertion of the Chip Select signal till the time the memory controller starts sending commands to the memory bank. The lower the value, the sooner the memory controller can send commands out to the activated memory bank. When this feature is enabled, the memory controller will only insert a command delay of one clock cycle or 1T. When this feature is disabled, the memory controller will insert a command delay of two clock cycles or 2T. The Auto option allows the memory controller to use the memory module's SPD value for command delay. If the SDRAM command delay is too long, it can reduce performance by unnecessarily preventing the memory controller from issuing the commands sooner. However, if the SDRAM command delay is too short, the memory controller may not be able to translate the addresses in time and the "bad commands" that result will cause data loss and corruption. It is recommended that you try enabling SDRAM 1T Command for better memory performance. But if you face stability issues, disable this BIOS feature."
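
    As a back-of-the-envelope illustration of what that one extra clock costs (the 200MHz figure below is just an assumed example clock):

    # 2T inserts one extra clock of command delay compared to 1T.
    # At 200MHz the memory clock period is 1/200MHz = 5 ns, so every
    # command sent to an activated bank starts 5 ns later under 2T.

    clock_mhz = 200
    extra_clocks = 2 - 1                       # 2T vs 1T
    penalty_ns = extra_clocks * 1000.0 / clock_mhz
    print(f"{penalty_ns} ns extra per command at {clock_mhz}MHz")   # 5.0 ns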

    Large Influence on Bandwidth/Stability.



    CAS Latency Control (tCL)

    Settings = Auto, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5.

    This is the first timing that most RAM companies rate their RAM with. For example, you might see RAM rated at 3-4-4-8 @ 275MHz; the 3 is the CAS latency in that situation. CAS 2 yields the best performance, while CAS 3 usually gives better stability. Please note: if you have Winbond BH-5/6, you may not be able to use CAS 3.

    From Lost Circuits: http://www.lostcircuits.com/
    “CAS is Column Address Strobe or Column Address Select. CAS controls the amount of time (in cycles: 2, 2.5, or 3) between receiving a command and acting on that command. Since CAS primarily controls the location of HEX addresses, or memory columns, within the memory matrix, this is the most important timing to set as low as your system will stably accept it. There are both rows and columns inside a memory matrix. When the request is first electronically set on the memory pins, the first triggered response is tRAS (Active to Precharge Delay). Data requested electronically is precharge, and the memory actually going to initiate RAS is activation. Once tRAS is active, RAS, or Row Address Strobe begins to find one half of the address for the required data. Once the row is located, tRCD is initiated, cycles out, and then the exact HEX location of the data required is accessed via CAS. The time between CAS start and CAS end is the CAS latency. Since CAS is the last stage in actually finding the proper data, it's the most important step of memory timing.”

    From Adrian Wong’s site: http://www.rojakpot.com/
    “This BIOS feature controls the delay (in clock cycles) between the assertion of the CAS signal and the availability of the data from the target memory cell. It also determines the number of clock cycles required for the completion of the first part of a burst transfer. In other words, the lower the CAS latency, the faster memory reads or writes can occur. Please note that some memory modules may not be able to handle the lower latency and may lose data. Therefore, while it is recommended that you reduce the SDRAM CAS Latency Time to 2 or 2.5 clock cycles for better memory performance, you should increase it if your system becomes unstable. Interestingly, increasing the CAS latency time will often allow the memory module to run at a higher clock speed. So, if you hit a snag while overclocking your SDRAM modules, try increasing the CAS latency time.”

    Slight Influence on Bandwidth / Large Influence on Stability



    RAS# to CAS# Delay (tRCD)

    Settings = Auto, 0, 1, 2, 3, 4, 5, 6, 7.

    This is the second timing that most RAM companies rate their RAM with. For example, you might see RAM rated at 3-4-4-8 @ 275MHz; the first 4 is the tRCD in that situation.

    From Adrian Wong’s site: http://www.rojakpot.com/
    ”This BIOS feature allows you to set the delay between the RAS and CAS signals. The appropriate delay for your memory module is reflected in its rated timings. In JEDEC specifications, it is the second number in the three or four number sequence. Because this delay occurs whenever the row is refreshed or a new row is activated, reducing the delay improves performance. Therefore, it is recommended that you reduce the delay to 3 or 2 for better memory performance. Please note that if you use a value that is too low for your memory module, this can cause the system to be unstable. If your system becomes unstable after you reduce the RAS-to-CAS delay, you should increase the delay or reset it to the rated delay. Interestingly, increasing the RAS-to-CAS delay may allow the memory module to run at a higher clock speed. So, if you hit a snag while overclocking your SDRAM modules, you can try increasing the RAS-to-CAS delay.”

    Large Influence on Bandwidth/ Stability.




    Min RAS# Active Timing (tRAS)

    Settings = Auto, 00, 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, 12, 13, 14, 15.

    This is the fourth timing that most RAM companies rate their RAM with. For example, you might see RAM rated at 3-4-4-8 @ 275MHz; the 8 is the tRAS in that situation.

    From Adrian Wong’s site: http://www.rojakpot.com/
    ”This BIOS feature controls the memory bank's minimum row active time (tRAS). This constitutes the time when a row is activated until the time the same row can be deactivated. If the tRAS period is too long, it can reduce performance by unnecessarily delaying the deactivation of active rows. Reducing the tRAS period allows the active row to be deactivated earlier. However, if the tRAS period is too short, there may not be enough time to complete a burst transfer. This reduces performance and data may be lost or corrupted. For optimal performance, use the lowest value you can. Usually, this should be CAS latency + tRCD + 2 clock cycles. For example, if you set the CAS latency to 2 clock cycles and the tRCD to 3 clock cycles, the optimum tRAS value would be 7 clock cycles. But if you start getting memory errors or system crashes, increase the tRAS value one clock cycle at a time until your system becomes stable.”
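
    The rule of thumb in that quote is easy to work out for your own timings; a tiny sketch using the quote's example values:

    # Suggested starting tRAS (per the quote above): CAS latency + tRCD + 2.

    def suggested_tras(cas_latency, trcd):
        return cas_latency + trcd + 2

    print(suggested_tras(2, 3))   # the quote's example: 2 + 3 + 2 = 7 cycles
    print(suggested_tras(3, 4))   # e.g. a 3-4-x-x module: 3 + 4 + 2 = 9 cycles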

    It appears throughout the web that this is a much-debated timing. Some may argue that 00, 05, or 10 is the fastest/most stable. There probably isn't a right answer for this one; it all depends on your RAM. If you need a good starting point, most/all RAM can usually achieve its max OC with a tRAS of 10, even if one of the other settings is faster.

    Slight Influence on Bandwidth/Stability.




    Row Precharge Timing (tRP)

    Settings = Auto, 0, 1, 2, 3, 4, 5, 6, 7

    This is the third timing that most RAM companies rate their RAM with. For example, you might see RAM rated at 3-4-4-8 @ 275MHz; the second 4 is the tRP in that situation.

    From Adrian Wong’s site: http://www.rojakpot.com/
    ”This BIOS feature specifies the minimum amount of time between successive ACTIVATE commands to the same DDR device. The shorter the delay, the faster the next bank can be activated for read or write operations. However, because row activation requires a lot of current, using a short delay may cause excessive current surges. For desktop PCs, a delay of 2 cycles is recommended as current surges aren't really important. The performance benefit of using the shorter 2 cycles delay is of far greater interest. The shorter delay means every back-to-back bank activation will take one clock cycle less to perform. This improves the DDR device's read and write performance. Switch to 3 cycles only when there are stability problems with the 2 cycles setting.”

    Large Influence on Bandwidth/Stability.
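
    Putting the last four entries together, this is how a rated timing string maps onto the settings above (an illustrative sketch; the order tCL-tRCD-tRP-tRAS is the one described in this guide):

    # Parse a rated timing string into the named BIOS settings.

    def parse_timings(rating):
        tcl, trcd, trp, tras = (int(x) for x in rating.split("-"))
        return {"tCL": tcl, "tRCD": trcd, "tRP": trp, "tRAS": tras}

    print(parse_timings("3-4-4-8"))   # {'tCL': 3, 'tRCD': 4, 'tRP': 4, 'tRAS': 8}
    print(parse_timings("2-2-2-5"))   # the OP's low-latency Corsair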
     
  5. MonkeySpank

    MonkeySpank Banned

    Joined:
    10 Aug 2005
    Posts:
    357
    Likes Received:
    0
    Row Cycle Time (tRC)

    Settings = Auto, 7-22 in 1.0 increments.

    From Adrian Wong’s site: http://www.rojakpot.com/
    ”This BIOS feature controls the memory module's Row Cycle Time or tRC. The row cycle time determines the minimum number of clock cycles a memory row takes to complete a full cycle, from row activation up to the precharging of the active row. Formula-wise, the row cycle time (tRC) = minimum row active time (tRAS) + row precharge time (tRP). Therefore, it is important to find out what the tRAS and tRP parameters are before setting the row cycle time. If the row cycle time is too long, it can reduce performance by unnecessarily delaying the activation of a new row after a completed cycle. Reducing the row cycle time allows a new cycle to begin earlier. However, if the row cycle time is too short, a new cycle may be initiated before the active row is sufficiently precharged. When this happens, there may be data loss or corruption. For optimal performance, use the lowest value you can, according to the tRC = tRAS + tRP formula. For example, if your memory module's tRAS is 7 clock cycles and its tRP is 4 clock cycles, then the row cycle time or tRC should be 11 clock cycles.”
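
    The formula in that quote is simple enough to check directly (a quick sketch repeating the quote's own example):

    # Row cycle time rule from the quote: tRC = tRAS + tRP.

    def suggested_trc(tras, trp):
        return tras + trp

    print(suggested_trc(7, 4))   # the quote's example: 7 + 4 = 11 cycles
    print(suggested_trc(8, 4))   # e.g. a 3-4-4-8 module: 8 + 4 = 12 cycles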

    Large Influence on Bandwidth/Stability.




    Row Refresh Cycle Time (tRFC)

    Settings = Auto, 9-24 in 1.0 increments.

    From the DFI BIOS: “This BIOS setting represents the time to refresh a single row on the same bank of memory. This value is also the time interval between a refresh (REF command) to another REF command to different rows of the same bank. The tRFC value is higher than tRC as column access gates are not turned on during its issue.”

    Large Influence on Bandwidth/Stability.





    Row to Row Delay (also called RAS to RAS Delay) (tRRD)

    Settings = Auto, 0-7 in 1.0 increments.

    From Adrian Wong’s site: http://www.rojakpot.com/
    “This BIOS feature specifies the minimum amount of time between successive ACTIVATE commands to the same DDR device. The shorter the delay, the faster the next bank can be activated for read or write operations. However, because row activation requires a lot of current, using a short delay may cause excessive current surges. For desktop PCs, a delay of 2 cycles is recommended as current surges aren't really important. The performance benefit of using the shorter 2 cycles delay is of far greater interest. The shorter delay means every back-to-back bank activation will take one clock cycle less to perform. This improves the DDR device's read and write performance. Switch to 3 cycles or higher only when there are stability problems with the 2 cycles setting.”

    Slight Influence on Bandwidth/Stability.




    Write Recovery Time (tWR)

    Settings = Auto, 2, 3.

    From Adrian Wong’s site: http://www.rojakpot.com/
    “This BIOS feature controls the Write Recovery Time (tWR) of the memory modules. It specifies the amount of delay (in clock cycles) that must elapse after the completion of a valid write operation, before an active bank can be precharged. This delay is required to guarantee that data in the write buffers can be written to the memory cells before precharge occurs. The shorter the delay, the earlier the bank can be precharged for another read/write operation. This improves performance but runs the risk of corrupting data written to the memory cells. It is recommended that you select 2 Cycles if you are using DDR200 or DDR266 memory modules and 3 Cycles if you are using DDR333 or DDR 400 memory modules. You can try using a shorter delay for better memory performance but if you face stability issues, revert to the specified delay to correct the problem.”

    Slight Influence on Bandwidth/Stability.




    Write to Read Delay (tWTR)

    Settings = Auto, 1, 2

    From Adrian Wong’s site: http://www.rojakpot.com/
    ”This BIOS feature controls the Write Data In to Read Command Delay (tWTR) memory timing. This constitutes the minimum number of clock cycles that must occur between the last valid write operation and the next read command to the same internal bank of the DDR device. The 1 Cycle option naturally offers faster switching from writes to reads and consequently better read performance. The 2 Cycles option reduces read performance but it will improve stability, especially at higher clock speeds. It may also allow the memory chips to run at a higher speed. In other words, increasing this delay may allow you to overclock the memory module higher than is normally possible. It is recommended that you select the 1 Cycle option for better memory read performance if you are using DDR266 or DDR333 memory modules. You can also try using the 1 Cycle option with DDR400 memory modules. But if you face stability issues, revert to the default setting of 2 Cycles.”

    From the DFI BIOS: “This BIOS setting specifies the write to read delay. Samsung calls this TCDLR (last data in to read command). It is measured from the rising edge following the last non-masked data strobe to the rising edge of the next read command. JEDEC usually specifies this as one clock.”

    Slight Influence on Bandwidth/Stability.




    Read to Write Delay (tRTW)

    Settings = Auto, 1-8 in 1.0 increments.

    Paraphrased From Adrian Wong’s site: http://www.rojakpot.com/
    ”When the memory controller receives a write command immediately after a read command, an additional period of delay is normally introduced before the write command is actually initiated. As its name suggests, this BIOS feature allows you to skip (or raise) that delay. This improves the write performance of the memory subsystem. Therefore, it is recommended that you enable this feature for faster read-to-write turn-arounds. However, not all memory modules can work with the tighter read-to-write turn-around. If your memory modules cannot handle the faster turn-around, the data that was written to the memory module may be lost or become corrupted. So, when you face stability issues, disable (or raise the value) of this feature to correct the problem.”

    From the DFI BIOS: “This field specifies the read to write delay. This is not a DRAM specified timing parameter, but must be considered due to the routing latencies on the clock forwarded bus. It is counted from the first address bus slot which was not associated with part of the read burst.”

    Slight Influence on Bandwidth/Stability.




    Refresh Period (tREF)

    Settings = Auto, 0032-4708 in variable increments.

    1552= 100mhz(?.?us)
    2064= 133mhz(?.?us)
    2592= 166mhz(?.?us)
    3120= 200mhz(?.?us)(seems to be a BH-5/6 sweet spot at 250+mhz)
    ---------------------
    3632= 100mhz(?.?us)
    4128= 133mhz(?.?us)
    4672= 166mhz(?.?us)
    0064= 200mhz(?.?us)
    ---------------------
    0776= 100mhz(?.?us)
    1032= 133mhz(?.?us)
    1296= 166mhz(?.?us)
    1560= 200mhz(?.?us)
    ---------------------
    1816= 100mhz(?.?us)
    2064= 133mhz(?.?us)
    2336= 166mhz(?.?us)
    0032= 200mhz(?.?us)
    ---------------------
    0388= 100mhz(15.6us)
    0516= 133mhz(15.6us)
    0648= 166mhz(15.6us)
    0780= 200mhz(15.6us)
    ---------------------
    0908= 100mhz(7.8us)
    1032= 133mhz(7.8us)
    1168= 166mhz(7.8us)
    0016= 200mhz(7.8us)
    ---------------------
    1536= 100mhz(3.9us)
    2048= 133mhz(3.9us)
    2560= 166mhz(3.9us)
    3072= 200mhz(3.9us)
    ---------------------
    3684= 100mhz(1.95us)
    4196= 133mhz(1.95us)
    4708= 166mhz(1.95us)
    0128= 200mhz(1.95us)

    Paraphrased From Adrian Wong’s site: http://www.rojakpot.com/
    ”This BIOS feature allows you to set the refresh interval of the memory chips. There are (several) different settings as well as an Auto option. If the Auto option is selected, the BIOS will query the memory modules' SPD chips and use the lowest setting found for maximum compatibility. For better performance, you should consider increasing the Refresh Interval from the default values (15.6 µsec for 128Mbit or smaller memory chips and 7.8 µsec for 256Mbit or larger memory chips) up to 128 µsec. Please note that if you increase the Refresh Interval too much, the memory cells may lose their contents. Therefore, you should start with small increases in the Refresh Interval and test your system after each hike before increasing it further. If you face stability problems upon increasing the refresh interval, reduce the refresh interval step by step until the system is stable.”

    From Sierra at ABXzone: The information below is taken from an old RAM guide. In a nutshell a memory module is made up of electrical cells. The refresh process recharges these cells, which are arranged on the chips in rows. The refresh cycle refers to the number of rows that must be refreshed.

    "Periodically the charge stored in each bit must be refreshed or the charge will decay and the value of the bit of data will be lost. DRAM (Dynamic Random Access Memory) is really just a bunch of capacitors that can store energy in an array of bits. The array of bits can be accessed randomly. However, the capacitors can only store this energy for a short time before it discharges it. Therefore DRAM must be refreshed (re-energizing of the capacitors) every 15.6µs (a microsecond equals 10-6 seconds) per row. Each time the capacitors are refreshed the memory is re-written. For this reason DRAM is also called volatile memory. Using the RAS-ONLY refresh (ROR) method, the refresh is done is a systematic manner, each column is refreshed row by row in sequence. In a typical EDO module each row takes 15.6µs to refresh. Therefore in a 2K module the refresh time per column would be 15.6µs x 2048 rows = 32ms (1 millisecond equals 10-6 seconds). This value is called the tREF. It refers to the refresh interval of the entire array."

    Slight Influence on Stability/Bandwidth.
     
  6. MonkeySpank

    MonkeySpank Banned

    Joined:
    10 Aug 2005
    Posts:
    357
    Likes Received:
    0
    Write CAS# Latency (tWCL)

    Settings = Auto, 1-8

    Paraphrased from Lost Circuits: http://www.lostcircuits.com/
    ”Variable Write CAS Latency (tWCL): Conventional SDRAM including DDR I uses random accesses as the name implies. This means that the controller is free to write to any location within the physical memory space, which, in most cases, means that it will write to whichever page is open and to the column address closest to the (CAS) strobe. The result is a write latency of 1T, as opposed to read or CAS-Latency values of 2, 2.5 or 3. (This setting should almost) always be set to 1 unless using DDRII.”

    Large Influence on Stability / Unknown Influence on Bandwidth.



    DRAM Bank Interleave

    Settings = Enable, Disable

    Paraphrased from Adrian Wong’s site: http://www.rojakpot.com/
    ”This BIOS feature enables you to set the interleave mode of the SDRAM interface. Interleaving allows banks of SDRAM to alternate their refresh and access cycles. One bank will undergo its refresh cycle while another is being accessed. This improves memory performance by masking the refresh cycles of each memory bank. A close examination will reveal that since the refresh cycles of all the memory banks are staggered, this produces a kind of pipelining effect. However, bank interleaving only works if the addresses requested consecutively are not in the same bank. If they are in the same memory bank, then the data transactions behave as if the banks were not interleaved. The processor will have to wait until the first data transaction clears and that memory bank refreshes before it can send another address to that bank. All current SDRAM modules support bank interleaving. It is recommended to enable this feature whenever possible.”

    Large Influence on Bandwidth/Stability




    DQS Skew Control

    Settings = Auto, Increase Skew, Decrease Skew

    From Lost Circuits: http://www.lostcircuits.com/
    "It is true that lower voltage swings enable higher frequencies but after a certain point, the ramping of the voltages will show a significant skew. The skew can be reduced by increased drive strength, however, with the drawback of a voltage overshoot / undershoot at the rising and falling edges, respectively. One additional problem with high frequency signaling is the phenomenon of trace delays. The solution in DDR was to add clock forwarding in form of a simple data strobe. DDR II takes things further by introducing a bidirectional, differential I/O buffer strobe consisting of DQS and /DQS as pull-up and pull-down signals. Differential means that the two signals are measured against each other instead of using a simple strobe signal and a reference point. In theory the pull-up and pull-down signals should be mirror-symmetric to each other but reality shows otherwise. That means that there will be skew-induced delays to reaching the output high and low voltages (VOH and VOL) and the cross points between DQS and /DQS used for clock forwarding will not necessarily coincide with the DQ crossing the reference voltage (Vref) or even be consistent from one clock to the next. The mismatch between clock and data reference points is referred to as the DQ-DQS skew."


    Slight Influence on Bandwidth/Stability.




    DQS Skew Value

    Settings = Auto, 0-255 in 1.0 increments.

    This is the value that is Increased or Decreased when you set the DQS skew control. It does not appear to be a very sensitive timing.

    Slight Influence on Bandwidth/Stability.




    DRAM Drive Strength

    Settings = Auto, 1-8 in 1.0 increments.

    Paraphrased From Adrian Wong’s site: http://www.rojakpot.com/ “Sometimes called driving strength. This feature allows you to control the memory data bus' signal strength. Increasing the drive strength of the memory bus can increase stability during overclocking. DRAM drive strength refers to the signal strength of the memory data line. A higher number means a stronger signal and is generally recommended for an overclocked module to improve stability. Supposedly TCCD works better with weak drive strength while just about everything else prefers a stronger signal.”

    From bigtoe: “If you leave the option at Auto this will set a weak drive strength, which is good for TCCD-based modules but bad for anything else. From testing and debugging the board I have concluded the following. Options 1, 3, 5 and 7 are all weak, as is the Auto setting; 1 is actually the weakest option, with 7 being as close to the normal weak setting as DFI will allow us. Options 2, 4, 6 and 8 are the Normal settings, with 8 being the highest strength setting. If you are using TCCD you may want to try 3, 5 or 7 as the drive settings, as they usually seem to allow the modules to clock well. If you are using VX, or the new BH Gold, or any other modules from the OCZ range, you may want to try 8 or 6.”

    Large Influence on Stability.
     
  7. MonkeySpank

    MonkeySpank Banned

    Joined:
    10 Aug 2005
    Posts:
    357
    Likes Received:
    0
    DRAM Data Drive Strength

    Settings = Levels 1-4 in 1.0 increments.

    From Adrian Wong’s site: http://www.rojakpot.com/
    "The MD Driving Strength determines the signal strength of the memory data line. The higher the value, the stronger the signal. It is mainly used to boost the DRAM driving capability with heavier DRAM loads (multiple and/or double-sided DIMMs). So, if you are using a heavy DRAM load, you should set this function to Hi or High. Due to the nature of this BIOS option, it's possible to use it as an aid in overclocking the memory bus. Your SDRAM DIMM may not overclock as well as you wanted it to. But by raising the signal strength of the memory data line, it is possible to improve its stability at overclocked speeds. But this is not a surefire way of overclocking the memory bus. In addition, increasing the memory bus signal strength will not improve the performance of the SDRAM DIMMs. So, it's advisable to leave the MD Driving Strength at Lo/Low unless you have a high DRAM load or if you are trying to stabilize an overclocked DIMM."

    Large Influence on Stability.




    Max Async Latency

    Settings = Auto, 0-15 in 1.0 increments.

    I could not find anything on this particular setting and am not sure what portion of RAM functions it affects. If you have information on this setting, please post and I will update this section. From HiJon89: “The Max Async Latency setting will show its biggest difference in the Everest Latency Test. Going from 8ns to 7ns on my BH-6 made a 1ns difference in Everest Latency. Going from 7ns to 6ns dropped it another 2ns.”

    Slight Influence on Bandwidth/Stability.




    Read Preamble Time

    Settings = Auto, 2.0-9.5 nanoseconds, in 0.5 increments.

    From the DFI BIOS: “This BIOS setting specifies the time prior to the max-read DQS return. It shows when the DQS should be turned on.” From an old Samsung memory guide: “Preamble of DQS on reads: DDR SGRAM uses a data strobe signal(s),DQS, to increase performance. The DQS signal is bidirectional which toggles when there is any data transfer from DDR SGRAM to graphic controller or from graphic controller to DDR SGRAM. Prior to a burst of read data, DQS signal transitions from Hi-Z to a valid logic low. This is referred to as the data strobe preamble. This transition from Hi-Z to logic low nominally happens one clock cycle prior to the first edge of valid data.”

    Slight Influence on Bandwidth/Stability.





    Idle Cycle Limit

    Settings = Auto, 0-256 in varied increments.

    From the DFI BIOS: “This BIOS setting specifies the number of memclocks before forcibly closing (pre-charging) an open page.” In other words, this appears to be the maximum number of memory clocks a page may sit idle before the controller forces a precharge and closes that page.

    Slight Influence on Bandwidth/Larger Influence on Stability.




    Dynamic Counter

    Settings = Auto, Enable, Disable.

    From the DFI BIOS: “This BIOS setting specifies dynamic idle cycle counter to enable or disable. If enabled, it forces each entry in the page table to dynamically adjust the idle cycle limit based on page conflict/page miss (PC/PM) traffic.” It appears that this setting is directly related to Idle Cycle Limit and if enabled, would override the existing clock settings for Idle Cycle Limit and force that setting to dynamically adjust based upon conflicts occurring.

    Slight Influence on Bandwidth/Stability for some; Large Influence on Bandwidth/Stability for others.



    R/W Queue Bypass

    Settings = Auto, 2x, 4x, 8x, 16x.

    From the DFI BIOS: “This BIOS setting specifies the number of times the oldest operation in the DCI (Device Control Interface) read/write queue can be bypassed before the arbiter is overwritten and the oldest operation is chosen.” Similar to Idle Cycle Limit, except that this arbiter affects the memory's read/write queue.

    Slight Influence on Bandwidth/Larger Influence on Stability.




    Bypass Max

    Settings = Auto, 0x-7x in 1.0 increments.

    From the DFI BIOS: “This BIOS setting specifies the number of times the oldest entry in DCQ (Dependence Chain Que?) can be bypassed in arbitration before the arbiter choice is vetoed.” I looked all over for this one and I believe it has to do with the memory’s link to the CPU memory controller. If you find other information please feel free to post it and I will update this.

    Slight Influence on Bandwidth/Stability.




    32 Byte Granulation

    Settings = Auto, Disable (8burst), Enable (4burst).

    From the DFI BIOS: “This BIOS setting specifies if the burst counter should be chosen to optimize data bus bandwidth for 32 byte accesses.” Disabling allows for the best performance (largest size of burst).

    Slight Influence on Bandwidth/Larger Influence on Stability.
     
  8. MonkeySpank

    MonkeySpank Banned

    Joined:
    10 Aug 2005
    Posts:
    357
    Likes Received:
    0
    Copied and pasted from the DFI-Street forums. Basically, this guide explains every single timing there is, and it's especially helpful if you have a DFI mobo. :)
     
  9. Maniac618

    Maniac618 What's a Dremel?

    Joined:
    10 Apr 2004
    Posts:
    376
    Likes Received:
    0
    LOL madness is all I can say about that info.

    Thanks though. So in a year, 2GB (2x512MB and 2x1GB) of PC3200 DDR-SDRAM at 2-2-2-5 will still be very nice for my A64 3500+ processor?
     
