Cooling News From Around The Web: You can post links or comments about cooling-related articles and reviews from around the web.
#1
Responsible for 2% of all the posts here.
Join Date: May 2002
Location: Texas, U.S.A.
Posts: 8,302
PCStats compares the latest fast RAM with slower memory running tight timings, here:
http://www.pcstats.com/articleview.c...eid=873&page=1 Very interesting results.
#2
Thermophile
Join Date: Sep 2002
Location: Melbourne, Australia
Posts: 2,538
Well, out here in the real world, synthetic streamlined burst memory bandwidth tests mean jack.
The Sandra and PCMark memory tests fall so far short of reality that they're a joke. They're just hand-coded assembler loops that pull/push data in the most optimised fashion possible, issuing data-cache-line prefetch commands that pre-load the L1/L2 caches while the CPU is doing something else, which basically negates any effect of memory latency. They measure degenerate best-possible-case scenarios, and we all know how often those occur. Business Winstone just executes tightly-looped scripts that mostly operate out of L2 cache, or pull in large sequential batches of data, which again is largely a faulty measure of performance.

3DMark2001 is perhaps the first genuinely representative benchmark in their list, and despite being somewhat video-card dependent, it still shows low-latency memory coming out in front. Aquamark, which is almost wholly video-card dependent, still shows a significant win for lower latency. Quake 3, which is largely CPU dependent nowadays, shows a good benefit from lower latency, and the highly video-card-dependent UT2 still shows a win for low latency. In other words, the only tests there that show any real benefit for the high-speed memory are the extremely artificial bandwidth-only penis-measurement tests, which do not reflect reality.

I work on transaction-based applications for a living, where neither artificial bandwidth acceleration nor L2-cache-bound scenarios exist, and I can tell you the benefit of lower-latency memory there is dramatic. Sorry, but the whole article reads to me more like an infomercial, with carefully chosen and crafted test conditions, and then a conclusion claiming that "beginners who just want something that works would do well with the `high speed' memory" as it's "easier to setup". How hard is it to plug in a low-latency memory stick as opposed to a high-MHz one, one has to ask oneself.
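To make the prefetch point above concrete, here's a rough C sketch (my own illustration, not from the article or the benchmarks named; the buffer size, clock()-based timing and Sattolo shuffle are arbitrary choices): a sequential sum that the hardware prefetcher can stream ahead of, versus a dependent pointer chase where nothing can be prefetched and raw latency dominates.

Code:
/* Hypothetical illustration, not from the PCStats article: a streaming
 * read the hardware prefetcher can hide latency behind, versus a
 * dependent pointer chase where every load waits on the previous one. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 23)   /* 8M entries * 8 bytes = 64 MB, well past L2 */

int main(void) {
    size_t *chain = malloc(N * sizeof *chain);
    if (!chain) return 1;

    /* Sattolo's shuffle: one big random cycle, so p = chain[p] walks the
       whole buffer in an unpredictable order. */
    for (size_t i = 0; i < N; i++) chain[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;                 /* j < i */
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }

    /* Pass 1: sequential sum. The access pattern is predictable, so cache
       lines get prefetched ahead of the CPU; this mostly measures burst
       bandwidth, the thing the synthetic tests report. */
    clock_t t0 = clock();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++) sum += chain[i];
    double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Pass 2: dependent chase. The next address is only known once the
       previous load returns, so nothing useful can be prefetched and each
       access pays close to the full memory latency. */
    t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = chain[p];
    double chase = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("sequential: %.2fs   chase: %.2fs   (sum=%zu p=%zu)\n",
           seq, chase, sum, p);
    free(chain);
    return 0;
}

On most machines the chase loop comes out several times slower per access than the streaming loop, which is roughly the gap the bandwidth-only benchmarks never expose, and it's why scattered-access, transaction-style workloads tend to care about latency more than peak MHz.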
#3
Pro/Staff
Join Date: Oct 2001
Location: Klamath Falls, OR
Posts: 1,439
On the other hand, Quake3 got a boost from the higher-bandwidth, worse-latency Rambus systems. Was that not the case? Or perhaps a different factor caused that performance delta.
#4
Thermophile
Join Date: Sep 2002
Location: Melbourne, Australia
Posts: 2,538
On a technical level, I have no real problem with RDRAM myself. It's actually a fairly good way to solve the issues that they were (and still are) trying to solve with SDRAM. What is a little annoying, though, is the amount of IP puffery that has somewhat stalled DRAM development, as this area is a bit of a patent minefield.

I'm just waiting to see whether IBM's MRAM technology becomes market-ready in the next few years. A six-fold reduction in latency over what we see now is a pretty nice thing to look forward to.