AM5 - DDR5 Tuning Cheat Sheet
Observations and notes on tuning DDR5 memory on AMD AM5 platforms.
Why this guide exists
There are a lot of posts where people only show ZenTimings (and an AIDA64 memory benchmark). The majority of these posts show timings that will error out within a couple of minutes of running any memory stress test.
I see the same issue whenever someone asks for help with their timings: in almost all posts, more than half of the answers OP gets are wrong and/or extremely dependent on bin, DRAM IC and IMC quality - and the correct values also differ between motherboard manufacturers.
Never trust unvalidated timings
Never trust a post that doesn't include a minimum of two stress tests that stress memory in different ways:
- TM5 (preferably 2 different configs) - validates memory timings and voltages.
- Y-Cruncher / Karhu / OCCT - validates IMC stability.
Posts without validation get copy-pasted, and other users end up having to reset CMOS - worst case, panicking because their PC won't boot anymore.
Credits
Veii, anta777, 1usmusv3, gupsterg and others over at overclock.net are the ones that put together everything referenced in this post (with some additions from my own experience).
I also want to mention that the claim that the "Sweet Spot" for DDR5 is 6000 MT/s with UCLK=MCLK is false. The higher you can run 1:1 mode, the better - as long as the power needed to drive higher frequencies doesn't eat into your max PPT. If you often run CPU-intensive workloads that max out PPT, aim for lower vSOC and other voltages that eat into max PPT.
2:1 mode note
From what little I've read about 2:1 mode, dual CCDs benefit even further from 8000 MT/s 2:1 (threshold might be lower than 8000 MT/s for dual CCD CPUs - I believe it might also be the case for single CCD CPUs at a threshold slightly above the threshold for dual CCD CPUs). Corrections welcome.
1. Memory stability
TL;DR
If you just want the tuning cheat sheet / tuning tips, skip to DDR5 Tuning Cheat List.
Before you start with anything, I want to stress the importance of testing memory stability - it can save you a lot more time than the stress tests themselves.
Also, be 110% sure your CO is stable. If you aren't 110% sure, I recommend disabling everything PBO - if CO is not stable, some of the tests will throw errors which can make you think it's a memory issue when it's not. Something I learned the hard way.
There is a collection of different tests that stress memory. No single one can replace the rest.
2. Stability test suite
2.1 Testing stability on the memory side
TM5 (Free) - TestMem5 v0.13.1 on GitHub, includes 9 different configurations. Excellent for testing timings, voltages and resistance values on the memory side.
There's also a TM5 Error Cheat Sheet that can help identify what timings, resistances and/or voltages might need tuning depending on the error - see DDR4/5 Helper by Veii on Google Sheets, sheet TM5 Error Description (the other sheets make no sense - at least not to me, as they are part DDR4, part DDR5 but not fully updated, or just Veii shenanigans).
2.2 Testing stability on the IMC side
There is a collection of stress tests that target IMC + memory. The three go-to tests below put stress on the CPU/IMC side of memory stability - which is just as important (FCLK, vSOC, cldo vddp, vddg, etc.). These tests are also very intense on CPU and will error out if PBO is unstable (especially Y-Cruncher and AIDA64).
| Tool | Recommended settings | Price |
|---|---|---|
| Y-Cruncher | VT3 (can combine other tests, but VT3 tends to be enough) | Free |
| OCCT | CPU + Memory, Extreme, Variable, All Threads w/ AVX2 instructions | Free version is enough |
| Karhu (RAM Test) | w/ CPU Cache: Enabled | $10 |
| AIDA64 | CPU + FPU + Cache Enabled | 30-day free trial available |
Unsure if the free version of AIDA64 allows the combined stress test, but you can get a 30-day free trial.
Edit 1 - Prime95 and VDDG IOD/CCD
Added comment with a Prime95 stress test and some extra food for thought by u/yellowtoblerone:
P95 large should also be in the guide. Run P95 custom config when errors are detected - it will speed things up. There's guides on OCnet on how to use p95 custom config. (Ping me if anyone got a link to the guide.)
After applying CO again once memory is stable, you have to test UCLK and FCLK again.
Benchmarking is very important to these processes. If you're pushing your OC but not getting better results, something's wrong. Or when you dial it back and your results get better, something was wrong. You have to have a baseline.
On Zen 5 it seems VDDG IOD voltage defaults to 903 mV since you're OC'ing, and increasing that drastically increases stability if you're pushing OC past PBO via eCCL. Increasing VDDG CCD also helps but, according to Buildzoid, setting VDDG IOD/CCD >=1000 mV can introduce instabilities in idle in some instances. I've yet to have that issue. Misc voltage can be increased to help stability, as well as increasing total rail (MEM VPP) to 1.9 V. Setting higher Load Line Calibration can also help with stability, especially when setting aggressive PBO.
Adding to this: when it comes to setting VDDG IOD/CCD voltages, according to gupsterg - who's done extensive testing on multiple CPUs and DIMMs - he found the following pattern:
| FCLK | Optimal VDDG IOD/CCD |
|---|---|
| 2000 MHz | 900 mV |
| 2100 MHz | 920 mV |
| 2200 MHz | 940 mV |
I have personally not tested this or read about it elsewhere, but it might be worth testing if voltages are set to auto and the user has issues with FCLK stability.
2.2.1 vSOC voltage
vSOC depends on CPU/IMC and CPU silicon quality, which makes it a value unique to every CPU. I recommend testing stability of vSOC early - it will help once you start pushing higher MT/s in 1:1 mode.
- vSOC default is 1.2 V with EXPO 6000 MT/s enabled (typically you need less to run 6000 1:1 unless extremely unlucky with CPU silicon lottery).
- When running 2:1 mode, vSOC matters less - vSOC drives UCLK, and in 2:1 mode UCLK is a lot lower than in >=6000 MT/s 1:1 mode.
Rule of thumb
For every 100 MHz increase on UCLK in 1:1 mode (= 200 MT/s), you need ~100 mV extra vSOC.
See AM5 CPU DDR5 and Infinity Fabric OC by Buildzoid for more in-depth information (timestamped to where he starts discussing the relation between vSOC voltage and UCLK frequency; however, I recommend watching the video from start to finish).
Example: if you need 1.15 V vSOC to run 6000 MT/s 1:1 stable, you will need ~1.25 V vSOC when increasing to 6200 MT/s 1:1. If you need 1.25 V vSOC to run 6200 1:1, there is no point in trying 6400 1:1.
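The rule of thumb above can be expressed as a tiny estimator. This is a rough heuristic only - actual vSOC requirements depend entirely on your CPU's silicon, and the function name is my own:

```python
def estimate_vsoc(baseline_mts: int, baseline_vsoc: float, target_mts: int) -> float:
    """Rough vSOC estimate: ~100 mV per extra 100 MHz UCLK (= 200 MT/s) in 1:1 mode.

    Heuristic from the rule of thumb above, not a guarantee.
    """
    extra_uclk_mhz = (target_mts - baseline_mts) / 2  # 1:1 mode: UCLK = MT/s / 2
    return baseline_vsoc + (extra_uclk_mhz / 100) * 0.100

# 6000 MT/s stable at 1.15 V -> estimate for 6200 MT/s
print(round(estimate_vsoc(6000, 1.15, 6200), 3))  # 1.25
```

If the estimate lands above ~1.3 V, the next 1:1 step is almost certainly out of reach, which is exactly the 6200 -> 6400 argument made above.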
2.2.2 Infinity Fabric Clock (FCLK)
A few simple rules regarding FCLK based on my own experience and most other users I've discussed with. For more in-depth information, refer to the Buildzoid video above.
FCLK general rules
- FCLK in 1:1 mode - set `fclk = (uclk/3)*2` or 2 steps above. The benefit of running FCLK in 3:2 is minimal as it's not truly synced. Typically set FCLK as high as is stable. VDDG IOD/CCD, vSOC and VDDP voltage can help stabilize FCLK.
- FCLK in 2:1 mode - an area I lack experience in. Since 8000 MT/s 2:1 = UCLK 2000 MHz, you get `FCLK = UCLK` at FCLK = 2000 MHz - UCLK is synced with FCLK. If there is a point where higher FCLK outweighs the benefits of being synced 1:1, I can't say.
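The `fclk = (uclk/3)*2` starting point can be computed directly. A sketch - `fclk_base_1to1` is a made-up helper name, and FCLK step granularity varies by board:

```python
def fclk_base_1to1(mts: int) -> float:
    """Starting FCLK for 1:1 mode per the fclk = (uclk/3)*2 guideline.

    UCLK = MT/s / 2 in 1:1 mode. Go a couple of BIOS steps above this
    (step size varies by board) if stable, per the rule above.
    """
    uclk = mts / 2
    return (uclk / 3) * 2

print(fclk_base_1to1(6000))  # 2000.0
print(fclk_base_1to1(6400))  # ~2133.3
```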
FCLK stability testing
Edit 3 - Infinity Fabric retransmissions
Comment by u/Niwrats regarding FCLK and using the term "error correction" which is incorrect:
Discussing "memory auto correcting" is awful in the context of infinity fabric tuning. So for IF retransmissions here is a BZ video for reference: Infinity Fabric Retransmissions (YouTube)
Correct wording is "Infinity Fabric Retransmissions". See the video above by BZ for reference.
In the end, the same rules still apply: FCLK stability depends on IMC stability/quality - the mentioned parameters can help stabilize FCLK. Worth noting: BZ also mentions that vSOC at >=1.2 V can reduce FCLK stability; however, he also mentions in the same video that the main priority is to push Data Rate as high as possible first, and high MT/s requires more vSOC. Once the limit is reached, push FCLK until unstable and take 2 steps back.
FCLK stability can be difficult to pinpoint, but there are ways to verify it. If FCLK is unstable, it causes Infinity Fabric retransmissions - so running tests that finish depending on memory speed can help identify when retransmission kicks in.
- Y-Cruncher VT3 puts stress on the IMC and prints how long each iteration took to complete. If test speed remains the same every iteration, retransmissions are likely not occurring (a deviation of 0.1-0.2 is reasonable; more than that might point towards retransmissions kicking in).
- As always, confirm by running other tests, not only Y-Cruncher.
- Linpack Xtreme (set 10 GB, 8 iterations) is another test that prints test duration.
Linpack is intense
Linpack Xtreme is one of the most intense CPU stress tests out there. Limit PPT, EDC and TDC in BIOS if you run it - my 9950X3D normally tops out at ~250-260W, while Linpack pushed it well past that.
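If you want to eyeball iteration times programmatically instead of by hand, here's a minimal sketch. The helper name and the 0.2 s tolerance are my own assumptions - neither Y-Cruncher nor Linpack exposes this; you'd paste the printed durations in yourself:

```python
def flag_retransmission_suspects(iter_times_s: list[float],
                                 tolerance_s: float = 0.2) -> list[int]:
    """Return indices of iterations slower than the fastest by more than tolerance.

    Growing iteration times under an otherwise constant load can indicate
    Infinity Fabric retransmissions eating into effective memory speed.
    """
    fastest = min(iter_times_s)
    return [i for i, t in enumerate(iter_times_s) if t - fastest > tolerance_s]

# Third iteration is ~1.4 s slower than the rest - suspicious
print(flag_retransmission_suspects([120.4, 120.5, 121.8, 120.6]))  # [2]
```

As noted above, confirm with other tests before concluding anything from a single slow iteration.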
2.3 Stability testing summary
Initial validation
- TM5 configs `1usmus v3` and/or `Ryzen3D@anta777` - minimum 25+ cycles recommended (runtime per cycle increases with memory capacity).
- Followed by 3+ cycles of `absolut@anta777` and/or `extreme@anta777` to make sure timings and VDDs are valid.
Once TM5 tests pass without errors:
- Karhu with CPU Cache: Enabled - overnight. Aim for 50,000% coverage or a minimum of 12 hours.
If you think you can tighten timings, lower voltages, or change other values to increase memory performance after the above - do so now and run the same tests again.
Once you're satisfied (or believe you've reached the limit of your memory tune), do final stability tests:
- 2-3 different TM5 configs - 4h-8h per config.
- Karhu - 24h+.
- Y-Cruncher FFTv4 + N63 + VT3 - 8h+.
3. AM5 DDR5 general guidelines and notes
Below is a post by gupsterg that started out focused on optimizing PBO per core but has grown to contain close to every memory do and don't otherwise scattered across the main AMD DDR5 OC thread at overclock.net (currently 28,000+ replies, with no summary of findings or general guidelines - though they are in there somewhere). The first 3 replies are updated frequently with information about DDR5 and optimizing PBO.
- AMD Ryzen Curve Optimizer Per Core - overclock.net
- AMD DDR5 OC And 24/7 Daily Memory Stability Thread - overclock.net
(Almost) everything quoted below can be found in the above threads.
4. DDR5 terminology - what each parameter controls
A quick glossary of every clock, voltage and timing referenced in this guide - what it does, what tightening it improves, and important caveats.
Quoted from Buildzoid (AM5 CPU DDR5 and Infinity Fabric OC).
4.1 Clocks
| Parameter | Description | Effect when tightened/increased |
|---|---|---|
| MCLK | DRAM + PHY clock | Less latency + more bandwidth |
| UCLK | Memory controller clock | Less latency + more bandwidth |
| FCLK | Infinity Fabric clock | Less latency + more bandwidth |
MCLK : UCLK ratio
The ratio between DRAM + PHY clock (MCLK) and memory controller clock (UCLK). 1:1 mode is preferred for performance; 2:1 mode is used at very high MT/s where the IMC can't keep up.
4.2 Voltages
| Voltage | Description | Range | Notes |
|---|---|---|---|
| vSOC | SOC voltage | 0.9-1.3 V max | Primarily affects max UCLK. In 2:1 mode the low UCLK allows very low SOC voltages. |
| DRAM VDD | DRAM "core" voltage | 1.35-1.75 V | High VDD (1.5 V+) is mainly about running low tCL (like 8000 C34 or 6400 C26). More than 1.65 V can cause worse stability. |
| DRAM VDDQ | DRAM IO voltage (RAM → CPU signaling) | 1.1-1.65 V | Don't go over 1.65 V. Optimal varies by motherboard + RAM + BIOS version. Usually between 1.25 V and 1.45 V. |
| VDDIO | PHY IO voltage (CPU → RAM signaling) | 1.1-1.5 V | 1.5 V max. Optimal varies by motherboard + RAM + BIOS version. Usually between 1.25 V and 1.5 V. |
| VDDP | PHY voltage | 0.95-1.15 V | 1.15 V max. If set too low or too high the system won't be stable. |
4.3 Primary / refresh timings
| Timing | Description | Effect when tightened | Notes |
|---|---|---|---|
| tCL | READ command to data burst output | Reduces read and write latency | |
| tRCDWR | ACT to WRITE command delay (how long to wait for a row to open for WRITE) | Reduces write latency | Can be very short because we're writing over the data in the row. |
| tRCDRD | ACT to READ command delay (how long to wait for a row to open for READ) | Reduces read latency | |
| tRP | PRE to ACT in same bank (how soon you try to open a different row) | Faster row change | |
| tRAS | ACT to PRE in same bank (how soon after opening a row you close it) | Row hopping | Only relevant if the MC opens a row and then immediately wants to close it. |
| tRC | ACT to ACT in same bank (how quickly you change active rows) | Row hopping | Only relevant if the MC opens a row and immediately wants to open a different one. |
| tWR | WRITE to PRE in same bank (close row after WRITE) | Faster row change | |
| tRFC | REFRESH duration | More "usable" clock cycles | tRFC / tREFI = fraction of clock cycles "wasted" on refresh. |
| tRFC2 | - | - | Unused on DDR5 / AM5 (leave on Auto). |
| tRFCSB | - | - | Unused on DDR5 / AM5 (leave on Auto). |
| tREFI | REFRESH interval (time between REF commands) | More "usable" clock cycles | DRAM data retention gets worse with temperature. High tREFI needs low temps (usually < 55 °C for 65000). |
| tRTP | READ to PRE (close row after READ) | Faster row change |
4.4 Sub-timings (bank-group / activation)
| Timing | Description | Effect when tightened | Notes |
|---|---|---|---|
| tRRDL | ACT to ACT same bank group (different bank within the same group) | Faster row change | Setting these below 8 doesn't do much since DDR5's burst length is 8 cycles. |
| tRRDS | ACT to ACT different bank group (any bank in a different group) | Faster row change | Setting these below 8 doesn't do much since DDR5's burst length is 8 cycles. |
| tFAW | Four ACT Window - delays a 5th ACT command if 4 happened too quickly | Faster row change | |
| tWTRL | WRITE to READ same bank group (how soon to READ same group after WRITE) | More memory bandwidth | |
| tWTRS | WRITE to READ different bank group (how soon to READ different group) | More memory bandwidth |
4.5 Read / Write spacing timings
| Timing | Description | Effect when tightened | Notes |
|---|---|---|---|
| tRDRDscl | READ to READ same bank group | More memory bandwidth | |
| tRDRDsc | READ to READ different bank group | More memory bandwidth | Anything more than 1 is a disaster for memory bandwidth. |
| tRDRDsd | READ to READ different rank, same DIMM | More memory bandwidth | |
| tRDRDdd | READ to READ different rank, different DIMM | More memory bandwidth | |
| tWRWRscl | WRITE to WRITE same bank group | More memory bandwidth | |
| tWRWRsc | WRITE to WRITE different bank group | More memory bandwidth | Anything more than 1 is a disaster for memory bandwidth. |
| tWRWRsd | WRITE to WRITE different rank, same DIMM | More memory bandwidth | |
| tWRWRdd | WRITE to WRITE different rank, different DIMM | More memory bandwidth | |
| tWRRD | WRITE to READ (between ranks) | More memory bandwidth | |
| tRDWR | READ to WRITE (any scenario) | More memory bandwidth |
5. DDR5 Tuning Cheat List
Summarized by gupsterg. Includes his own findings plus notes from Veii and anta777 - additional comments in italic.
Guidance, not law
Check performance and stability for your kit. There could be errors.
Watch Karhu RAM Test MB/s (hover the mouse over coverage to see it, or use KGuiX). Karhu RAM Test needs to run a bare minimum of ~15 min to show its sustained MB/s; even after 15 min it can rise by ~0.5 MB/s. After 15 min the rise is smaller (~0.1 MB/s over 30-45 min), but you will still see one.
Run benchmarks like AIDA64 Memory, Super Pi, PyPrime. On 9000 series, run AIDA64 with advanced prefetchers and cache retention policies disabled (see below for how).
Multiple options
Where there are multiple options to set a DRAM timing, one may be more optimal than another - trial what works best.
5.1 Primary timings
- tCL - Set as desired, can only be even. Lower needs more VDD.
- tRCD - Set as desired. Within the AMD Overclocking menu, separate `tRCDWR` and `tRCDRD` can be set; the value is entered as hexadecimal (newer UEFI is decimal). Too tight `tRCDWR` may lose performance in some benchmarks - see data ZIP. Optimal seems to be around `tRCDWR` 16-20.
- tRP - Lowest `tCL+4`, loose `tRP = tRCD`. If TM5 throws errors and every change you make just causes another error, try `tRP = tRCD` if you had set `tRP < tRCD`.
- tRAS - Optimal `tRCD+tRTP+4` or `+8`; `tRAS = tRCD+16` (see post); tight `tRCD+tRTP` (see post), only if `tRC = tRCD+tRP+tRTP`; `tRC-tRP` (see UEFI Defaults / JEDEC profile screenshot in notes).
- tRC - Lowest `tRP+tRAS`, looser `>= tRCD+tRP+tRTP`. `tRCD+tRP+tRTP+2` may be optimal (seen MB/s improvement in Karhu vs `tRCD+tRP+tRTP`); `tRP+tRAS` (see UEFI Defaults / JEDEC profile screenshot in notes).
- tWR - Lowest 48, multiple of 6.
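The tRAS/tRC formulas above can be tabulated for a given set of primaries. A sketch with a hypothetical helper; the input numbers are purely illustrative, not a recommendation:

```python
def primary_timing_candidates(tcl: int, trcd: int, trp: int, trtp: int) -> dict:
    """Candidate tRAS/tRC values from the formulas in the cheat list."""
    tras_optimal = trcd + trtp + 4      # "optimal tRCD + tRTP + 4"
    trc_loose = trcd + trp + trtp       # "looser >= tRCD + tRP + tRTP"
    trc_alt = trcd + trp + trtp + 2     # "+2 may be optimal"
    trc_lowest = trp + tras_optimal     # "lowest tRP + tRAS"
    return {"tRAS": tras_optimal, "tRC_loose": trc_loose,
            "tRC_alt": trc_alt, "tRC_lowest": trc_lowest}

# e.g. CL30 / tRCD 36 / tRP 34 / tRTP 12 (illustrative inputs)
print(primary_timing_candidates(30, 36, 34, 12))
```

As the cheat list says: where there are multiple options, trial which candidate works best on your kit.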
5.2 Refresh timings
- tREFI - Set as desired, calc multiple of 8192, input in BIOS is `calc-1`. Higher (looser value) gives gains; temperature-sensitive - lower if heat issues.
- tRFC - Set as desired, multiple of 32, input in BIOS is `calc-1`. See further down for guidance; temperature-sensitive - increase if heat issues.
- tRFC2 - Used on AM5, ensures data integrity at high DIMM temperature (>85 °C). To be confirmed how to calculate. Leave on Auto.
- tRFCsb - Used on AM5. To be confirmed how to calculate.
5.3 Sub-timings
- tRTP - Set as desired. Lower than 12 unstable.
- tRRDL - Optimal 8 or 12. Lower than 7 not recommended because `tWTRL = tRRDL*2`.
- tRRDS - Optimal 8. Anything below 6 makes no sense because `tFAW = tRRDS*4` and `tWTRS = tRRDS/2`.
- tFAW - Optimal 32. `tRRDS*4`.
- tWTRL - Optimal 16. If setting as desired, observe `tWTRL <= tWR - tRTP`. Safe calc: `tRDRDscl+7 = tCCDL`, `tWTRL = tCCDL*2` (see UEFI Defaults / JEDEC profile screenshot in notes). `tWTRL = tRRDL*2`.
- tWTRS - Optimal 4 or 3. Safe calc: `tRDRDscl+7 = tCCDL`, `tWTRS = tCCDL/2` (see UEFI Defaults / JEDEC profile screenshot in notes). `tWTRS = tRRDS/2`.
5.4 Read / Write delay timings
- tRDRDscl - Set as desired. Lower than 4 unstable. 7 or 8 may be sweet spot for performance/stability.
- tRDRDsc - `[Auto]` is 1, lowering not possible.
- tRDRDsd - Only relevant for dual-sided DIMMs. Set as desired, match to `tRDRDdd`.
- tRDRDdd - Only relevant for multi-rank (4x DIMMs or 2x dual-rank DIMMs). Set as desired, match to `tRDRDsd`.
- tWRWRscl - Match to `tRDRDscl`. 7 or 8 may be sweet spot for performance/stability. Safe calc: `((tRDRDscl+7) * 2) - 7` (see UEFI Defaults / JEDEC profile screenshot in notes). Setting to 1 has been reported as performance loss.
- tWRWRsc - `[Auto]` is 1, lowering not possible.
- tWRWRsd - Only relevant for dual-sided DIMMs. Set as `tRDRDsd+1`, match to `tWRWRdd`.
- tWRWRdd - Only relevant for multi-rank (4x DIMMs or 2x dual-rank DIMMs). Set as `tRDRDdd+1`, match to `tWRWRsd`.
- tWRRD - Lowest 1. 1DPC single-sided DIMMs aim for 1; 2DPC or dual-sided DIMMs aim for 2.
- tRDWR - Greater than or equal to 14. 15 for 1DPC, 16 for 2DPC.
- tCWL - No setting. "Auto" rule makes it `tCL-2`.
5.5 tREFI / tRFC calculation
tREFI - multiples of 8192, minus 1 in BIOS. Valid values:

- 65535 (8192*8 - 1)
- 57343 (8192*7 - 1)
- 49151 (8192*6 - 1)
- 40959 (8192*5 - 1)
- ...

tRFC - depends on RAM IC (DRAM manufacturer, e.g. SK Hynix A-die/M-die, Samsung). See DDR5 tRFC IC ns table for more info per RAM IC.

tRFC = tRFCns * MCLK[GHz]

Example: SK Hynix A-die tRFCns 120 at 6400 MT/s 1:1 -> MCLK = 3.2 GHz -> tRFC = 3.2 * 120 = 384
Example: SK Hynix M-die tRFCns 160 at 6400 MT/s 1:1 -> MCLK = 3.2 GHz -> tRFC = 3.2 * 160 = 512

BIOS input rule

According to the thread at overclock.net, the actual BIOS input is tRFC in multiples of 32, minus 1: tRFC = 32 * 12 - 1 = 383. Though I rarely see anyone following this rule.
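The tREFI and tRFC BIOS-input rules above can be sketched as a small calculator. Hypothetical helper names; assumes 1:1 mode so MCLK[GHz] = MT/s / 2000:

```python
import math

def trefi_bios(multiple: int) -> int:
    """tREFI BIOS input: a multiple of 8192, minus 1."""
    return 8192 * multiple - 1

def trfc_bios(trfc_ns: float, mts: int) -> int:
    """tRFC BIOS input: tRFCns * MCLK[GHz], rounded up to a multiple of 32, minus 1."""
    cycles = trfc_ns * mts / 2000            # MCLK[GHz] = MT/s / 2000 in 1:1 mode
    return 32 * math.ceil(cycles / 32) - 1   # 32-multiple rule, minus 1 for BIOS

print(trefi_bios(8))         # 65535
print(trfc_bios(120, 6400))  # 383 (A-die: 3.2 GHz * 120 ns = 384 cycles)
print(trfc_bios(160, 6400))  # 511 (M-die: 3.2 GHz * 160 ns = 512 cycles)
```

As noted above, many people enter the raw cycle count instead of following the minus-1 rule; check what your board actually applies in ZenTimings.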
SCLs - performance increases have been seen down to 5/5. Affects read/write bandwidth.
6. Personal observations - BIOS settings and lessons learned
- UCLK DIV1 MODE - When setting DRAM Speed >6000, this needs to be set to `UCLK = MCLK` or BIOS will default to 2:1 mode (massively decreasing performance). Validate with ZenTimings - MCLK should match UCLK.
- BankSwapMode - Can be set to `Swap APU` assuming the iGPU is disabled, or you might face stability issues. This changes the order in which the IMC accesses memory banks and can potentially improve performance in certain workloads. Should not impact stability or require any timing tuning - just make sure the iGPU is disabled.
- GearDownMode (GDM) - If disabled, can lower latency and increase bandwidth. Bigger impact on dual CCD CPUs. Typically requires slightly more VDD, and looser SCLs if you had set SCLs <=4 (I've personally not been able to boot with SCLs at 4/4, but 5/5 works - I've seen users with GDM Off running 4/4). `PowerDown: Disabled` can help with GDM Off stability. More recent AGESA (AMD BIOS) versions tend to be optimized and run GDM Off more easily.
- FCH Spread Spectrum - Set to Disable. Typically disabled if set to Auto, but manually disabling removes potential issues.
- VDD voltages - `tCL 30` at 6400 MT/s results in almost exactly the same latency as `tCL 28` at 6000 MT/s. To calculate `tRFCns` - or absolute latency for DDR memory access in ns using the data rate (MT/s) - use the RAM Latency Calculator. Test the calculator with the inputs above (CL30 6400 and CL28 6000) to see the actual latency difference. Why they can be run at similar voltages becomes obvious.
- Voltage headroom - If you have a kit advertised EXPO 6000 MT/s `CL30 @ 1.4 V`, it can potentially run stable at `VDD 1.3 V` depending on bin (similar to how AMD CPUs don't ship with optimized CO values). Manufacturers need headroom to make sure all DIMMs can run the advertised speed. Example: my 2x16 GB 6000 MT/s CL28 1.4 V SK Hynix A-die kit runs 6400 1:1 CL30 with tightened tertiaries at 1.38 V vdimm/vddq/vddio. The Linpack Xtreme run referenced earlier used the same timings and voltages. I've run the same kit at 1.35 V too (only 3 cycles TM5 Ryzen3D before stopping, so not worth posting).
- MSI MAG-series warning - Don't touch anything in the AMD Overclocking menu (the one that prompts a warning) except for Nitro values. Just testing an EXPO profile via AMD Overclocking will lock certain voltages until CMOS reset. This was the reason I booted my timings at 1.35 V - the SK Hynix 2x16 GB preset (only visible if the mobo detects an SK Hynix kit) runs 1.35 V vdimm/vddq/vddio.
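The CL30@6400 vs CL28@6000 comparison in the VDD voltages note checks out with the standard first-word latency formula (a sketch; `cas_latency_ns` is a made-up helper):

```python
def cas_latency_ns(tcl: int, mts: int) -> float:
    """First-word CAS latency in ns: tCL / (MT/s / 2) * 1000 = tCL * 2000 / MT/s.

    DDR transfers twice per clock, so the memory clock in MHz is MT/s / 2.
    """
    return tcl * 2000 / mts

print(round(cas_latency_ns(30, 6400), 3))  # 9.375
print(round(cas_latency_ns(28, 6000), 3))  # 9.333
```

A difference of ~0.04 ns - effectively identical, which is why similar voltages can drive both configurations.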
Edit 4 - Run Windows file integrity checks after a BSOD
A good habit when facing errors while testing memory: run the Windows commands that look for corrupted files and try to fix them. In case of a BSOD while stress-testing - or a BSOD due to memory in general - run these commands first thing after booting back into Windows.
Open PowerShell with admin rights and run the following three commands one after the other:
```powershell
DISM /Online /Cleanup-Image /ScanHealth
DISM /Online /Cleanup-Image /RestoreHealth
sfc /scannow
```

Each command typically takes 15 s to 3 min to finish.
There's a lot more information to be found in the threads linked at overclock.net.
I hope this will help some of you on your memory tuning journey.
Edit 2 - Comment by u/Delfringer165
The first comment refers to the Buildzoid video where he discusses tRC and tRAS not following DDR5 rules - see tRAS on AMD's AM5 CPUs is weird.
Regarding tRAS testing by Buildzoid, the only thing he proved was that if `tRC` is at min value then `tRAS` does nothing (that is how I see that testing data). Low `tRC` can improve some benchmarks like PyPrime 4b but won't help in CPU benchmarks or gaming benchmarks from what I tested. I tested with GDM Off - maybe you will also run some tests with high/low `tRAS` and again with low `tRC`. (Only thing Veii stated was that `tRAS` too low = looped and too high = timebroken.) BZ also used some kind of random / EXPO `tRAS` and `tRC` values.

`tFAW = tRRDS*4` is no longer the case from my understanding and should always be 32, unless you run something like `tRRDS 6` or `4` - lower can be better. (Veii's opinion is `tRRD_S 8` & `tFAW 32` on UDIMM, forever.) This matches the quotes regarding these timings noted in the DDR5 Cheat List quote.

Regarding `tWTRL`, from my testing - regardless of `tRRDS` and `tRRDL` - it should always be 24.

Currently testing some SCL settings. For me, SCLs at 5 = a little bit better CPU performance, and 5 & 17 a little bit better performance in GPU+CPU benchmarks / gaming benchmarks (running 48 GB M-die GDM Off).

Since `tRRDS`, `tRRDL`, `tWTRL` and SCLs all somehow interact with `ccdl` and `ccdlwr`/`ccdlwr2`, I think these are probably system / IMC dependent.

Also maybe include the 12.3 TM5 version from Veii before it went closed-source (read more in the Testing with TM5 thread on OCnet). It is fixed for Intel (P/E core loading patches & pagefile fixes) and comes with the 1usmus config set to 25 cycles - but you would need to get the other configs yourself (`absolute` needs to be edited based on cores and is set to 8 cores by default, x3d & DDR5). Editing TM5 configs can be done by opening the `.cfg` files with a text editor. TM5 test length is always measured in cycles - minimum 25 for stable.
7. References
7.1 Tools
| Link | Description |
|---|---|
| ZenTimings v1.36 beta | Official release |
| ZenTimings v1.36 unofficial beta | overclock.net post |
| TestMem5 (CoolCmd) | v0.13.1, 9 included configs |
| TM5 v0.12.3 from Veii | Pre-closed-source, fixed for Intel P/E cores |
| RAM Latency Calculator | Absolute ns from MT/s and CL |
7.2 TM5 configs
| Link | Description |
|---|---|
| absolutnew.cfg | Edit thread count based on your CPU |
| ryzen3dfebr.cfg | x3d & DDR5 tuned config |
7.3 Cheat sheets and data
| Link | Description |
|---|---|
| DDR4/5 Helper by Veii | TM5 error decoder + timing references (Google Sheets) |
| 9800X3D tRCDWR Testing Data | Data ZIP referenced for tRCDWR observations |
| Linpack Xtreme on 9950X3D | Power-draw screenshot |
7.4 Overclock.net threads
| Link | Description |
|---|---|
| AMD Ryzen Curve Optimizer Per Core | gupsterg's main thread, first 3 replies updated regularly |
| AMD DDR5 OC And 24/7 Daily Memory Stability | 28k+ replies, primary AM5 DDR5 reference |
| tRAS post #1 | tRAS = tRCD+16 reference |
| tRAS post #2 | Tight tRCD+tRTP reference |
7.5 Buildzoid videos
| Link | Description |
|---|---|
| AM5 CPU DDR5 and Infinity Fabric OC | vSOC vs UCLK relation (timestamped) |
| Infinity Fabric Retransmissions | Correct terminology and behavior |
| tRAS on AMD's AM5 CPUs is weird | tRC/tRAS DDR5 rules investigation |