AMD needs frame generation so consoles can have it.


Spork3245


1 hour ago, cusideabelincoln said:

"Consoles need this" is the first thought when they announced it, but we should also see how it functions and performs on lower tier GPUs before putting it into everything.

 

If developers can't even hit 30 fps as a baseline for Frame Generation, it may look like ass when pumped up to 50-60 fps.

 

Console players don't notice if a frame rate looks like ass so it's perfect for them


Nvidia has been a LONG-time proponent of GPU computing, and they're now about as competitive as the best in class across machine learning. Their gaming side is reaping big rewards from those early CUDA initiatives, almost as a side effect. In all honesty there's no real competitor in the gaming space.


8 hours ago, cusideabelincoln said:

"Consoles need this" is the first thought when they announced it, but we should also see how it functions and performs on lower tier GPUs before putting it into everything.

 

If developers can't even hit 30 fps as a baseline for Frame Generation, it may look like ass when pumped up to 50-60 fps.


I think we’re at the equivalent of DLSS 1.0 with the current frame generation tech, so, theoretically and hopefully it only improves. I don’t think bumping 30 to 60 could ever be realistic, but 30 to 45, or, better yet, 40 to 60 (1.5x boosts) as the tech improves may be very doable without noticeable artifacting or input issues. 


11 hours ago, cusideabelincoln said:

"Consoles need this" is the first thought when they announced it, but we should also see how it functions and performs on lower tier GPUs before putting it into everything.

 

If developers can't even hit 30 fps as a baseline for Frame Generation, it may look like ass when pumped up to 50-60 fps.


I’m still skeptical that we can get to a 30 to 60fps presentation where generated frames don’t present hard to ignore issues in motion.

 

Really hoping Nvidia figures it out.  If they do, I’ll get a 40 series card eventually. 


2 hours ago, Spork3245 said:


I think we’re at the equivalent of DLSS 1.0 with the current frame generation tech, so, theoretically and hopefully it only improves. I don’t think bumping 30 to 60 could ever be realistic, but 30 to 45, or, better yet, 40 to 60 (1.5x boosts) as the tech improves may be very doable without noticeable artifacting or input issues. 


Could it even work like that, with a 1.5x boost?

 

I could see 1 in every 3 frames being generated as sort of a quality mode.  But I’d want some improvements to the algorithm with it to better fix dirty frames.  Otherwise, similar problems, just more flickery.


33 minutes ago, crispy4000 said:

Could it even work like that, with a 1.5x boost?


Yea, every third frame could be AI generated.  Why not? How is that worse than the current every other frame? If anything, I’d imagine the reduced frequency of AI generated frames would lessen artifacts/lag.
 

||x||x||x||x||x||x||x||x||x||x||x||x||x||x||x||x||x||x||x||x

| = “real frames”, there are 40

x = AI generated frames, there are 20
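Rough sketch of that arithmetic for anyone who wants to play with the ratios. This is purely the counting exercise from the diagram above, not anything DLSS actually exposes, and frame_mix is a made-up helper:

```python
# Back-of-the-napkin arithmetic for frame generation ratios.
# Purely illustrative counting; it does not reflect how DLSS 3
# actually schedules or paces frames.

def frame_mix(base_fps: float, real_per_generated: int) -> dict:
    """Insert one generated frame after every `real_per_generated` real frames."""
    generated_fps = base_fps / real_per_generated
    output_fps = base_fps + generated_fps
    return {
        "base_fps": base_fps,
        "output_fps": output_fps,
        "generated_share": round(generated_fps / output_fps, 2),
    }

# Current DLSS 3 behavior: every other displayed frame is generated (2x).
print(frame_mix(30, 1))  # 30 real + 30 generated = 60 fps, 50% generated
# Hypothetical "quality" mode from the diagram: every third frame generated (1.5x).
print(frame_mix(40, 2))  # 40 real + 20 generated = 60 fps, ~33% generated
```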

 


2 hours ago, crispy4000 said:


I’m still skeptical that we can get to a 30 to 60fps presentation where generated frames don’t present hard to ignore issues in motion.

 

Really hoping Nvidia figures it out.  If they do, I’ll get a 40 series card eventually. 

 

 

I'm less sure whether general-purpose, plug-and-play AI frame generation would work well at lower framerates, but there is a possible world in which you fine-tune the model on your game so that it captures the game-specific stuff that happens at that time scale.
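Speculating further, a per-game fine-tune could look something like the toy PyTorch loop below. Everything here is a placeholder (the tiny network, the random tensors standing in for captured frame triplets); Nvidia's actual training setup isn't public.

```python
# Loose sketch of per-game fine-tuning for a frame interpolation model.
# The network and data are stand-ins, not any real DLSS component.
import torch
import torch.nn as nn

class TinyInterp(nn.Module):
    """Toy stand-in for a real frame interpolation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, prev_frame, next_frame):
        # Predict the in-between frame from its two neighbours.
        return self.net(torch.cat([prev_frame, next_frame], dim=1))

model = TinyInterp()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Placeholder tensors; in practice these would be (prev, middle, next)
# frame triplets captured from the specific game being targeted.
prev_f = torch.rand(4, 3, 128, 128)
next_f = torch.rand(4, 3, 128, 128)
middle = torch.rand(4, 3, 128, 128)

for step in range(100):
    pred = model(prev_f, next_f)
    loss = loss_fn(pred, middle)
    opt.zero_grad()
    loss.backward()
    opt.step()
```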


2 hours ago, crispy4000 said:


I’m still skeptical that we can get to a 30 to 60fps presentation where generated frames don’t present hard to ignore issues in motion.

 

Really hoping Nvidia figures it out.  If they do, I’ll get a 40 series card eventually. 

 

From the limited analyses I've seen, the biggest hurdle to improving the garbled image will revolve around HUD elements, especially transparent ones like waypoint markers.  The blocky, imperfect AI-generated frame mostly helps improve motion clarity because it exists between two good frames and 95% of that frame's pixels are constantly changing. However, HUD elements are fairly static and typically reside in the same pixel location of a frame, so when the imperfect AI-generated frame doesn't get the HUD correct, you'll notice flicker.

 

I don't see an easy fix for any HUD element that is also transparent because of the way frame generation works now - it needs the entire frame rendered. They could tell the algorithm to basically "crop" the portions of the frame with HUD elements and keep those pixels unchanged between normal frames, but then there may be noticeable jitteriness if those pixels were supposed to change due to the transparency. Still, that could be less noticeable than having the AI try to guess those pixels anyway. The next evolution of this tech would be to implement frame generation into the game engine, so the developer can tell the algorithm what is and is not a HUD element. Otherwise, Nvidia will have to tune the algorithm on a per-game basis to identify the HUD elements.
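That "crop" idea could, in principle, be as simple as masking HUD pixels out of the generated frame, something like the numpy sketch below. The hud_mask is hypothetical and would have to come from the engine or per-game tuning, and this still doesn't solve the transparency case:

```python
# Minimal numpy sketch of the "crop the HUD" idea: composite the
# interpolated frame, but copy HUD pixels straight from the previous
# real frame so static UI elements don't flicker.
import numpy as np

def composite_generated_frame(interpolated, prev_real, hud_mask):
    """
    interpolated: (H, W, 3) AI-generated frame
    prev_real:    (H, W, 3) last fully rendered frame
    hud_mask:     (H, W) bool array, True where HUD elements live
    """
    out = interpolated.copy()
    out[hud_mask] = prev_real[hud_mask]  # keep UI pixels untouched
    return out

# Toy example: a 4x4 "frame" with a HUD element in the top-left corner.
interp = np.random.rand(4, 4, 3)
real = np.random.rand(4, 4, 3)
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
blended = composite_generated_frame(interp, real, mask)
```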

 

A fascinating comparison I would love to see is how Nvidia's AI stacks up to some HDTVs' motion interpolation, strictly in terms of motion clarity. I have no idea how good smart TVs have gotten, since my 4K TV does a shit job at motion interpolation, so this could be a good way to judge how much improvement Nvidia has left to go.


4 hours ago, Spork3245 said:


Yea, every third frame could be AI generated.  Why not? How is that worse than the current every other frame?

 


On paper it’s not.  But I question if this has more to do with how the human eye perceives motion at high framerates.

 

If DLSS3 perceptibly shows flaws below 80fps currently, just having fewer AI-generated frames at 60fps doesn’t mean the issue would be completely resolved.

 

Ideally, I don’t want to notice clear artifacting at 60fps at all.  Not merely notice there are a few fewer of them while the problem persists.

 


21 minutes ago, crispy4000 said:


On paper it’s not.  But I question if this has more to do with how the human eye perceives motion at high framerates.

 

If DLSS3 perceptibly shows flaws below 80fps currently, just having fewer AI-generated frames at 60fps doesn’t mean the issue would be completely resolved.

 

Ideally, I don’t want to notice clear artifacting at 60fps at all.  Not merely notice there are a few fewer of them while the problem persists.

 


Reducing how often there are AI generated frames is what would theoretically mitigate that 


2 hours ago, cusideabelincoln said:

 

From the limited analyses I've seen, the biggest hurdle to improving the garbled image will revolve around HUD elements, especially transparent ones like waypoint markers.  The blocky, imperfect AI-generated frame mostly helps improve motion clarity because it exists between two good frames and 95% of that frame's pixels are constantly changing. However, HUD elements are fairly static and typically reside in the same pixel location of a frame, so when the imperfect AI-generated frame doesn't get the HUD correct, you'll notice flicker.

 

I don't see an easy fix for any HUD element that is also transparent because of the way frame generation works now - it needs the entire frame rendered. They could tell the algorithm to basically "crop" the portions of the frame with HUD elements and keep those pixels unchanged between normal frames, but then there may be noticeable jitteriness if those pixels were supposed to change due to the transparency. Still, that could be less noticeable than having the AI try to guess those pixels anyway. The next evolution of this tech would be to implement frame generation into the game engine, so the developer can tell the algorithm what is and is not a HUD element. Otherwise, Nvidia will have to tune the algorithm on a per-game basis to identify the HUD elements.

 

A fascinating comparison I would love to see is how Nvidia's AI stacks up to some HDTVs' motion interpolation, strictly in terms of motion clarity. I have no idea how good smart TVs have gotten, since my 4K TV does a shit job at motion interpolation, so this could be a good way to judge how much improvement Nvidia has left to go.

I ain't no fancy big city graphical engineer, but I figure with how prevalent dynamic resolution is, as well as temporal reconstruction, upscaling in general and whatnot, UI elements should already generally be segmented out so that they're not affected by said resolution changes. I can't imagine it'd be too difficult to use that same exact data as a little dealio for DLSS Frame Generation to deal with. I'm sure there's more to it than that, but it doesn't seem impossible at all.
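If the engine really does keep the UI in a separate pass (as it would for dynamic resolution), the ordering might look like the sketch below: interpolate the pre-UI scene buffer, then composite the HUD onto every displayed frame. All of the render calls are stand-ins; this is not how any real engine or the DLSS SDK is actually wired up.

```python
# Sketch of running frame generation before UI compositing.
# Every function here is a placeholder; only the ordering matters.
import numpy as np

H, W = 720, 1280

def render_scene(t: int) -> np.ndarray:
    """Stand-in for the 3D pass (no HUD)."""
    return np.full((H, W, 3), t % 256, dtype=np.uint8)

def composite_ui(frame: np.ndarray) -> np.ndarray:
    """Stand-in for drawing the HUD on top of a finished frame."""
    out = frame.copy()
    out[:40, :200] = 255  # pretend this is a health bar
    return out

def interpolate(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Stand-in for the frame generator (here just an average)."""
    return ((a.astype(np.uint16) + b.astype(np.uint16)) // 2).astype(np.uint8)

prev_scene = render_scene(0)
displayed = [composite_ui(prev_scene)]
for t in range(1, 4):
    scene = render_scene(t)
    generated = interpolate(prev_scene, scene)  # HUD never enters the interpolator
    displayed.append(composite_ui(generated))   # generated frame gets a clean HUD
    displayed.append(composite_ui(scene))
    prev_scene = scene
```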


1 hour ago, Spork3245 said:


Reducing how often there are AI generated frames is what would theoretically mitigate that 


That’s the thing, it’s theoretical.  Think about a 30fps presentation where only 1 in 6 frames is generated.  Would it be noticeable?  What about 40fps on a VRR screen?  FPS itself could be a huge factor in what can trick our eyes convincingly.  As is the type of motion in a game itself.


This is completely uncharted territory.  All we know for sure is that 60fps with half the frames generated isn’t really there (right now) for 3rd person action games.


20 minutes ago, crispy4000 said:

That’s the thing, it’s theoretical.


It’s all theoretical, including your statements. The very post you replied to said “theoretically”, was filled with notions of “future versions” and the word “may”, and wondered if this would improve much like DLSS did. 

 

4 minutes ago, crispy4000 said:

Not a typical use case.


What isn’t a typical use case?


4 minutes ago, Spork3245 said:


It’s all theoretical, including your statements. The very post you replied to said “theoretically”, was filled with notions of “future versions” and the word “if”, and wondered if this would improve much like DLSS did. 


I was merely stating reasons it might not work out in practice when it seemingly does on paper.

 

4 minutes ago, Spork3245 said:

What isn’t a typical use case?


Flight Sim.

 


4 minutes ago, crispy4000 said:

Flight Sim.


How so? People on the Flight Sim forums were freaking out about artifacts claimed by a single user there, so I played for an hour to see if I noticed any (I didn’t)
I have zero artifacts in Plague Tale Requiem as well, and I’ve been actively looking for them.

There’s been a lot of hyperbole about the AI Frame Gen causing bad IQ and distortions all over reddit and social media. Maybe my 4090 is just special, though.


16 minutes ago, Spork3245 said:


How so? People on the Flight Sim forums were freaking out about artifacts claimed by a single user there, so I played for an hour to see if I noticed any (I didn’t)
I have zero artifacts in Plague Tale Requiem as well, and I’ve been actively looking for them.

There’s been a lot of hyperbole about the AI Frame Gen causing bad IQ and distortions all over reddit and social media. Maybe my 4090 is just special, though.

 

Alex said that Flight Sim represents the type of motion that won’t show notable artifacting at 60fps.  3rd person games with looping animations are where the bigger problems lie, and don’t resolve convincingly (to his eyes) until roughly 80fps.

 

I don’t doubt that you aren’t perceiving any errors in Plague Tale Requiem.  It’s subjective to some degree.  Did you mention what FPS you’re averaging?  That matters quite a bit here.


Just now, crispy4000 said:

 

Alex said that Flight Sim represents the type of motion that won’t show much artifacting at 60fps.  3rd person games with looping animations are where the bigger problems lie, and don’t resolve convincingly (to his eyes) until roughly 80fps.


 

 

The flight sim video wasn’t directed at anyone itt, so I’m not sure why you’re replying to it as if I’m challenging something said.

 

Just now, crispy4000 said:

I don’t doubt that you aren’t perceiving any errors in Plague Tale Requiem.  It’s subjective to some degree.  Did you mention what FPS you’re averaging it at?  That matters quite a bit here.

 

 

I’m not talking about sub-60 artifacting with that video or my previous post.

 

5 minutes ago, Xbob42 said:

Considering how few 4090s exist in the wild, I imagine a lot of the outrage is just speculation or echoing what people heard from others.

[Inglourious Basterds Bingo GIF]


12 minutes ago, Spork3245 said:

The flight sim video wasn’t directed at anyone itt, so I’m not sure why you’re replying to it as if I’m challenging something said.

 

I’m not talking about sub-60 artifacting with that video or my previous post.


Fair enough.  I just think with 60fps still being a threshold the current consoles target (or fall short of), Flight Sim is the kind of exception that works out at DLSS3’s current quality.  Other games at 60fps DLSS3, not so much.

 

Asking again, what FPS are you averaging in Plague Tale R?


23 minutes ago, crispy4000 said:


Fair enough.  I just think with 60fps still being a threshold the current consoles target (or fall short of), Flight Sim is the kind of exception that works out at DLSS3’s current quality.  Other games at 60fps DLSS3, not so much.


 

 

FWIW, on my 3080 Ti I was barely getting 40 fps with basically everything set to a mix of medium and low; now it’s basically a solid 120fps at ultra :p 

 

23 minutes ago, crispy4000 said:

Asking again, what FPS are you averaging in Plague Tale R?


I honestly don’t know since I was playing on my OLED and had the fps counter off. I did run it at full 4k using DLAA instead of DLSS with frame generation on for a bit and still saw no artifacts. I flipped back to DLSS quality and it was certainly smoother to where I’d guess 100+fps. This is probably the first time in many years I just maxed a game out and didn’t bother checking fps, but will do so when I play it on my monitor. :p 


11 minutes ago, Spork3245 said:

I honestly don’t know since I was playing on my OLED and had the fps counter off. I did run it at full 4k using DLAA instead of DLSS with frame generation on for a bit and still saw no artifacts. I flipped back to DLSS quality and it was certainly smoother to where I’d guess 100+fps


I’d be surprised if anyone is perceiving major artifacting beyond 90fps, outside of screen caps and whatnot.

 

Or maybe it has the same issues as Spider-Man if you’re playing KB&M and spaz out the player camera.  Which is no dealbreaker at all for me.


I wonder how AMD is leveraging their Xilinx acquisition. 
 

Doesn’t DLSS 3 require dedicated hardware? It seems unlikely AMD could do this without some of their own, which makes it very unlikely that pro models of the consoles get frame generation, since it would require a new SoC. Then again, I have learned not to underestimate AMD, but AI has never been their strong point.


2 hours ago, Massdriver said:

I wonder how AMD is leveraging their Xilinx acquisition. 
 

Doesn’t DLSS 3 require dedicated hardware? It seems unlikely AMD could do this without some of their own, which makes it very unlikely that pro models of the consoles get frame generation, since it would require a new SoC. Then again, I have learned not to underestimate AMD, but AI has never been their strong point.

 

Tensor cores certainly help (not sure if they added any other relevant hardware for the frame generation), but the bigger issue, IMO, is that AMD just hasn't invested nearly as much in their ML libraries. Nvidia is constantly improving their low-level libraries, which provide super-optimized implementations for common ML architectures. These speed-ups are non-trivial: merely updating your Nvidia drivers and their library (cuDNN) can result in big speed boosts for your ML workload. AMD could gain a lot of ground simply by getting their software up to speed and leveraging the standard matrix multiply operations that GPUs are already good at.
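As one small, hedged illustration of how much of this lives in the software layer: the same convolution on the same GPU can run at noticeably different speeds depending on which algorithm the library picks. Toggling cuDNN's autotuner through PyTorch is a crude way to see that; the exact numbers depend entirely on your hardware and library versions.

```python
# Tiny illustration that library-level algorithm selection matters.
# cudnn.benchmark lets cuDNN autotune the convolution algorithm.
import time
import torch

def time_conv(benchmark: bool, iters: int = 50) -> float:
    torch.backends.cudnn.benchmark = benchmark
    conv = torch.nn.Conv2d(64, 64, 3, padding=1).cuda()
    x = torch.randn(16, 64, 256, 256, device="cuda")
    # Warm-up so the one-time algorithm search isn't counted.
    for _ in range(5):
        conv(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        conv(x)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

if torch.cuda.is_available():
    print("cudnn.benchmark=False:", time_conv(False))
    print("cudnn.benchmark=True: ", time_conv(True))
```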


6 hours ago, Massdriver said:

I wonder how AMD is leveraging their Xilinx acquisition. 
 

Doesn’t DLSS 3 require dedicated hardware? It seems unlikely AMD could do this without some of their own, which makes it very unlikely that pro models of the consoles get frame generation, since it would require a new SoC. Then again, I have learned not to underestimate AMD, but AI has never been their strong point.


DLSS 1 and 2 require dedicated hardware as well. AMD made their version, FSR, which runs on a much broader range of cards compared to DLSS; even a 500 series AMD card or an Nvidia 1000 series can use it. An AI frame generation solution from AMD could theoretically support some of the GPUs that FSR supports; however, as I mentioned in the original post, if AMD is able to make a viable version of this, I’d expect it on the “pro versions” of the current consoles. :sun:


6 hours ago, Spork3245 said:


DLSS 1 and 2 require dedicated hardware as well. AMD made their version, FSR, which runs on a much broader range of cards compared to DLSS; even a 500 series AMD card or an Nvidia 1000 series can use it. An AI frame generation solution from AMD could theoretically support some of the GPUs that FSR supports; however, as I mentioned in the original post, if AMD is able to make a viable version of this, I’d expect it on the “pro versions” of the current consoles. :sun:

I'm aware of FSR, but I was uncertain whether taking it to the next level of frame generation would require specialized hardware, whereas FSR 1&2 did not. @crispy4000 pointed to some possibilities which I was not aware of. I have heard rumors that AMD has some sort of plan for dedicated hardware for future FSR versions, but nothing concrete.


1 hour ago, Massdriver said:

I'm aware of FSR, but I was uncertain whether taking it to the next level of frame generation would require specialized hardware, whereas FSR 1&2 did not. @crispy4000 pointed to some possibilities which I was not aware of. I have heard rumors that AMD has some sort of plan for dedicated hardware for future FSR versions, but nothing concrete.

 

Having dedicated hardware for FSR and/or Frame Generation would more than likely be superior to having software that pulls resources from the existing hardware (if that makes sense :p ). My thought is for this to potentially be on updated PS5s/XSXs with new hardware (i.e., like the PS4 Pro and Xbox One X) or the next generation in 5-6 years, not the currently released consoles. :santasun:


18 hours ago, Spork3245 said:

This is probably the first time in many years I just maxed a game out and didn’t bother checking fps, but will do so when I play it on my monitor. :p 

 

For whatever reason I cannot get GeForce Experience to show me the fps in this game, and Afterburner always gives me a headache when I try to use it. So, what I did was force VSync in the control panel, then drop my monitor's refresh rate down to 60Hz to cap the game at 60fps. At this frame rate there are indeed rendering errors (I didn't doubt that there would be). However, if I were just playing the game and not swinging the camera all around looking for them, I don't think they'd be that noticeable, as they only occur in the distance at the edges of the screen (typically in the area between two objects, like a hill in the distance you can see out of a window in the last 20% of your screen).

What would be undeniably noticeable, however, is the render latency (GeForce Experience still showed this), which was bouncing between 250-400ms and felt like trying to play a game with motion smoothing enabled on a TV. With my fps not capped to 60Hz, this is typically around 30-60ms with frame gen on, iirc.

That all being said, as I already mentioned (multiple times :p ), in its current state/version I wouldn't want frame generation being used on consoles to get 40-45fps up to 60fps; I am talking about future versions as the tech improves. You can throw "that's theoretical!" at it, sure, but then you're simultaneously saying that you think the technology is at its peak already.


14 hours ago, Massdriver said:

I wonder how AMD is leveraging their Xilinx acquisition. 
 

Doesn’t DLSS 3 require dedicated hardware? It seems unlikely AMD could do this without some of their own, which makes it very unlikely that pro models of the consoles get frame generation, since it would require a new SoC. Then again, I have learned not to underestimate AMD, but AI has never been their strong point.

 

Actually, I have found a rumor of a Xilinx chip being included with Zen 5 APUs, at least in Phoenix Point.
