Everything posted by cusideabelincoln

  1. In the Digital Foundry video, I think that's what he's trying to say when they show Spider-Man with DLSS 2 only slightly faster than native while DLSS 3 is twice as fast. DLSS 3 doesn't exactly relieve CPU load compared to DLSS 2; they both require the same CPU cycles. But in a hard CPU or memory bottleneck, DLSS 3 will double the effective frame rate because every other frame is AI-generated entirely by the GPU. You can see that in the Spider-Man charts and video: pick any DDR4 CPU to compare, and at 1080p and 4K the performance is only a few frames different. DLSS 2 renders the game at 1080p, which explains why the DF video shows little difference over native. But DLSS 3 is twice as fast, since half the frames aren't constrained by the rest of the system.
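To put rough numbers on that bottleneck argument, here's a toy model with made-up figures (not data from the DF video):

```python
# Toy model: effective fps under a CPU bottleneck (illustrative numbers only).
def effective_fps(cpu_cap_fps, gpu_fps, frame_generation=False):
    """Displayed fps is limited by whichever of CPU/GPU is slower; frame
    generation interleaves one AI-generated frame per rendered frame, so it
    roughly doubles the rendered rate (assuming the GPU can keep up)."""
    rendered = min(cpu_cap_fps, gpu_fps)
    return rendered * 2 if frame_generation else rendered

# A CPU-bound scenario like the Spider-Man charts: CPU caps rendering at 90 fps.
print(effective_fps(90, 200))                         # DLSS 2: still ~90
print(effective_fps(90, 200, frame_generation=True))  # DLSS 3: ~180
```

The GPU being much faster than the CPU cap is exactly why DLSS 2 shows little gain over native here, while frame generation sidesteps the cap entirely.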
  2. Spider-Man with RT does need a lot of CPU and memory speed to play at a locked 120 fps. Check these RT charts and you'll see that, no matter the resolution, anything that isn't a 12900K with high-speed DDR5 is bottlenecked. Everything from a 12900K with DDR4 all the way down to four-core CPUs shows no performance difference from 1080p to 4K. And the new Ryzen CPUs don't quite match Intel in this game. So if you want 120 fps in Spider-Man, you will need an Intel CPU with the fastest DDR5 RAM you can get.
  3. If you don't like the CPU running so hot or using so much power, or you're running one of the new Ryzen chips with a weaker cooler, tweaking the Precision Boost algorithm will bring those thresholds back down to a normal level. These chips were definitely made to be extremely efficient and provide the best performance per watt for mobile and servers. BIOS updates will bring an ECO mode that does the same as the manual tweaking for you.
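For context on what that tweaking targets: AMD's package power limit (PPT) is conventionally about 1.35x the rated TDP, and Eco mode just applies a lower TDP tier. A quick sketch with the commonly cited defaults (these figures come from general coverage of Zen 4, not from this post):

```python
# Rough AM5 power math: PPT (package power limit) is about 1.35x the TDP.
def ppt_watts(tdp_watts):
    """Approximate socket power limit for a given TDP tier."""
    return round(tdp_watts * 1.35)

# Stock 7950X (170W), Eco 105W, and Eco 65W tiers:
for tdp in (170, 105, 65):
    print(f"TDP {tdp}W -> PPT ~{ppt_watts(tdp)}W")
```

That ~230W stock ceiling is why the chip happily boosts until it hits 95C, and why dropping to an Eco tier tames both heat and power with a modest performance cost.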
  4. In Cyberpunk, the 4090 with DLSS 3 frame generation is 2.5x faster than a 3090 Ti using DLSS 2 performance mode. In Spider-Man, the 4090 provides twice the performance using DLSS 3 frame generation compared to native resolution. In Portal, the 4090 gives 5.5x the performance using DLSS 3 frame generation, while DLSS 2 is 3.33x faster than native rendering. Latency using DLSS 3 frame gen is similar to using DLSS 2; worst case, it's still better than the latency without DLSS at all. This does mean the higher frame rate from frame generation doesn't give you the latency benefit you'd get if that frame rate were rendered without frame gen. The good news is that frame generation's image quality is top notch, and it will make the game look smoother.
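A quick way to see why the frame-gen frame rate doesn't buy the matching latency (toy numbers for illustration, not measurements; real pipelines add other overheads):

```python
# Toy latency model: input latency tracks the *rendered* frame rate,
# not the displayed one, because generated frames carry no new input.
def frame_time_ms(fps):
    return 1000 / fps

rendered = 80              # fps actually rendered, e.g. with DLSS 2 (made-up)
displayed = rendered * 2   # fps shown with DLSS 3 frame generation

print(frame_time_ms(rendered))   # per-frame time inputs actually wait on: 12.5 ms
print(frame_time_ms(displayed))  # what a "real" 160 fps would give: 6.25 ms
```

So a frame-generated 160 fps feels like an 80 fps game that looks like 160, which matches the point above: smoother image, DLSS 2-like latency.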
  5. Ryzen 5800X3D should be a drop-in upgrade for you and deliver similar gaming performance gains. It's going for less than $400 now.
  6. Intel's next CPUs are coming out this year too, and they will probably jump ahead if rumors are true. Have no idea what power consumption or heat will be like for those chips, though, so AMD might lead Intel in that department. And then AMD will probably bring out a refresh with the 3D cache within a year, which is REALLY good for gaming performance. I doubt even a 4090 will show differences in CPU performance at 4k. It's just too demanding of a resolution and even 5 year old CPUs are good enough. Also a lot of the new Nvidia features are targeted at removing CPU limitations anyway, so CPU might matter less.
  7. We have new GPUs coming out soon, but right now there are also some new CPUs to get excited about. Here are some key takeaways about AMD's new CPUs:
- DDR5 only. AMD officially recommends 6000 as the top speed most chips can do, although you can push it higher. Look for kits with EXPO, which is AMD's alternative to XMP profiles.
- New AM5 socket, not backwards compatible with AM4. Coolers are backwards compatible, though, *if they use the stock AMD backplate.
- The new boosting algorithm is based on temperature, with a 95C limit. The CPU will try to run at 95C all the time, boosting voltage and clock speeds as necessary until it hits that limit. So the better the cooling you have on the chip, the more performance you will gain out of the box without any tweaking, particularly with the 12 and 16 core parts. Obviously you won't need as much cooling for the 6 and 8 core parts.
- As a result of this potentially unlimited boosting, power consumption has increased over previous Ryzen chips. Out of the box, the 16 core will chug down 200W. Depending on the workload, Ryzen appears to be on the same power level as Intel 12th gen, sometimes a little more efficient.
- Another side effect of all these new features: more expensive motherboards. PCIe 5.0 support, beefier VRMs, and DDR5 all require better and more expensive motherboard components.
- All CPUs have an iGPU now, but 3D performance is on par with Intel's iGPUs and not nearly as powerful as the iGPUs in their APUs like the 5600G.

Delidding the 7950 reduces temps by a whopping 20C, and also reduces power consumption (transistor current leakage correlates with temperature). But delidding is pretty complicated.
  8. That's what I'm trying to say: in a worst-case scenario the 4090 is still the typical generational leap we've seen lately. AC is also a very CPU- and memory-intensive game, so the gains in that chart could be limited by that. Nvidia's charts don't technically lie; they've been pretty accurate. But there are a million different ways to configure a test, so how they get their numbers is as important as what the numbers actually are.
  9. As stated by Nvidia, and as some of their charts clearly showed, without DLSS 3 the 4080 12GB is slower than a 3090 Ti, putting it around a 3080 Ti, with the 16GB being 15% faster. Resident Evil, Assassin's Creed, and Division don't support DLSS 3, while the other games do, and that's where we're seeing the 2x+ performance jumps. DLSS 3.0 and ray tracing performance is where the real performance will come from.
  10. The 4090 will be one of the bigger generational leaps we've seen in a while, with or without DLSS 3.0. The 4080 will be a very meh, average generational leap over the 3080 without DLSS 3. And yep, the 4080 16GB pricing makes no sense. Upgrading to the 4090 seems to be well worth it, even with the power requirements.
  11. Welp, maybe the Dualsense Edge will be the perfect controller.
  12. Holy shit, I did not realize missed inputs were a problem. There were several times in Elden Ring where I was absolutely sure I pressed dodge at the perfect time, yet nothing happened. I chalked it up to maybe wireless lag or a framerate issue, but mostly to me being a bad gamer. I don't feel so shitty now and will safely blame half my deaths on the controller issue.
  13. Seems like the melting isn't isolated to the adapter; it can happen with an official ATX 3.0 PSU and cable. The TLDW: don't bend the cables really hard, and plug in the connector carefully to make sure you get a perfect connection with the pins.
  14. The execution of this show is so flawless it makes me sad Rogue One wasn't this. Jyn Erso needed this type of Cassian character development.
  15. All 4090s will use the new 16-pin 12VHPWR PCIe 5.0 connector, so every card will come with an adapter. It's highly recommended to have a PSU rated well above what you need when using this adapter with a non-ATX 3.0 PSU. 1000W should be safe, and 850W is perhaps the bare minimum you can safely use. A new ATX 3.0 power supply has much stricter transient requirements to meet, so you don't have to go quite as far above and beyond on wattage.
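The headroom advice comes down to transient spikes. A rough sketch of the arithmetic (the 1.6x spike multiplier and 200W rest-of-system draw are assumptions for illustration, not measured values; check your PSU's actual specs):

```python
# Rough transient-headroom estimate. High-end GPUs can momentarily spike
# well above rated board power; ATX 3.0 PSUs must tolerate such excursions,
# while older units may trip over-current protection instead.
def transient_peak(system_watts, gpu_watts, spike_multiplier=1.6):
    """Worst-case momentary draw if the GPU spikes above its rated power."""
    return system_watts + gpu_watts * spike_multiplier

# 200W rest-of-system plus a 450W card spiking to ~720W:
print(transient_peak(200, 450))  # ~920W momentary, hence the 1000W advice
```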
  16. Because people would be even more upset if they charged $900 for a 4070. Honestly, to me the announced 4080 specs look like what would historically be an xx70-class card, as they fall much further short of the 4090 than the 3080 did of the 3090.
  17. The dirty truth to the 2-4x performance is definitely in the DLSS technology, along with one other trick: the other way they can claim the 4090 is 2-4x faster than a 3090 Ti is by limiting the 3090 Ti to 350W while their performance numbers have the 4090 at 450W or more. Likewise, the "imaginary numbers" they are using for the 4080 are probably from it running at 450+ watts, while they are definitely limiting their "3080 numbers" to 350W. Not really a completely apples-to-apples comparison. While the 3080 and 3090 launched with a 350W TDP, many cards were able to push up to 450W. And the 3080, 3090, and Ti variants are all power constrained: they'll hit their power limits before their clock limits, and since they're all so close in shader count, they all basically perform the same when limited to the same TDP.
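To see how a power-limit gap can pad a speedup claim, here's the normalization with made-up numbers (nothing below is a measured result):

```python
# Illustrative only: separating raw speedup from efficiency gain
# when the two cards run at different power limits.
fps_4090, watts_4090 = 200, 450      # hypothetical 4090 at its full limit
fps_3090ti, watts_3090ti = 100, 350  # hypothetical 3090 Ti capped at 350W

raw_speedup = fps_4090 / fps_3090ti
perf_per_watt = (fps_4090 / watts_4090) / (fps_3090ti / watts_3090ti)

print(raw_speedup)              # headline number: 2.0x
print(round(perf_per_watt, 2))  # efficiency-normalized: ~1.56x
```

The headline 2x is real, but a chunk of it comes from feeding the new card ~29% more power, which is exactly the apples-to-oranges issue described above.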
  18. That cheeky bastard definitely avoided the word "exclusive" when telling us all about how awesome 3.0 is. I guess they still need to get rid of the 3000 inventory sitting in warehouses lol.
  19. It's possible RT may not be a huge deal for the next few years anyway, until a console refresh comes along. Developers are going to focus on the current consoles, and their RT performance will be limited to AMD's current capabilities. So that's probably one reason Nvidia focused on modding older games with RT effects in this keynote, because there will probably be only a few upcoming AAA games to really push RT.
  20. Yeah, I don't think they're going to have an answer for Nvidia's AI-accelerated computing, which Nvidia is going all-in on. For pure shader performance AMD is on par with, maybe even a little ahead of, Nvidia. And AMD's first-gen RT performance was on par with Nvidia's first-gen RT performance. But unless they can just brute-force the performance with their chiplet design, they're going to be far behind in RT performance this gen.
  21. I'm not sure how to take the fact that they're going to wait until after Nvidia launches the 4090 to even tell us what their chips can do. Lack of confidence in being able to beat the 4090? Or do they want the press coverage all to themselves for a while? Either way, I think it's a mistake to wait. Gamers have barely been able to buy anything for the last two years. I would let them know what they could be buying if they waited a little bit longer.
  22. Yeah, should be for games that don't use DLSS because it has like 50% more conventional shaders running 30+% higher clocks to give it the raw horsepower.
  23. 4x faster is definitely only in games that are optimized for DLSS 3.0, where they can use the tensor cores to predict frames and bypass any CPU or GPU limitations.