
Nvidia Gamescom Conference Today - 2080 series to be announced


Mr.Vic20

Recommended Posts

1 minute ago, mikechorney said:

Based on the rumours, I agree that they will likely launch a chip in 2019, but I haven't seen anything that makes me think it will be a GeForce gaming GPU.

 

A 2x or 4x performance increase in 2 years is unlikely until we get to a new chip technology.

 

Hard to say what sacrifices they had to make for this first generation and how fast they can move down to a smaller process node.  I am not talking about a 2x to 4x performance increase in normal rasterization (you are lucky to see a 30-50% increase every 2 years these days), but specifically in Ray Tracing; as they dig into it, they will make considerable improvements.

 

Take the original CUDA cores that came out: the second and third revisions completely outclassed them performance-wise. Obviously, process tech was changing quickly too, but it wasn't just due to that.

 

 

Link to comment
Share on other sites

  • 2 weeks later...

One week from tomorrow, some of us will have our 20xx series cards (or at least they'll be shipping to us!). With that in mind, who pre-ordered? Who is trying to buy day one but did not pre-order? Obviously, at these prices most will pass on buying any of these cards for now. Much has been made of the Ray Tracing feature, but I am currently much more interested in the immediate benefits of DLSS on 4K performance, and whether this feature, once commonly used, will give these cards a significant FPS boost at high resolutions or whether it's all marketing fluff. I'm honestly just excited to try out new tech, as per usual, but DLSS seems to not be getting much attention, and given that Ray Tracing is still not quite ready for prime time, I'm surprised Nvidia hasn't spoken more on this feature.

  • Like 1
Link to comment
Share on other sites

11 minutes ago, Mr.Vic20 said:

One week from tomorrow, some of us will have our 20xx series cards (or at least they'll be shipping to us!). With that in mind, who pre-ordered? Who is trying to buy day one but did not pre-order? Obviously, at these prices most will pass on buying any of these cards for now. Much has been made of the Ray Tracing feature, but I am currently much more interested in the immediate benefits of DLSS on 4K performance, and whether this feature, once commonly used, will give these cards a significant FPS boost at high resolutions or whether it's all marketing fluff. I'm honestly just excited to try out new tech, as per usual, but DLSS seems to not be getting much attention, and given that Ray Tracing is still not quite ready for prime time, I'm surprised Nvidia hasn't spoken more on this feature.

 

Yeah, RT is clearly people's focus, but I absolutely agree that this many tensor cores may produce some really exciting results even beyond DLSS. It's more of a wild card at this stage, though.

Link to comment
Share on other sites

11 minutes ago, legend said:

 

Yeah, RT is clearly people's focus, but I absolutely agree that this many tensor cores may produce some really exciting results even beyond DLSS. It's more of a wild card at this stage, though.

This image is a wealth of questions! Obviously one can dismiss this as nothing but marketing PR, but there is always a sliver of truth wedged into these things that only becomes clear retrospectively. The first question I have is: why does DLSS have support in the games on the left, but not the games on the right? Is adoption from game to game based on how well these features can "bolt on" to each respective engine? Or does it reflect a time and commitment cost? Why will NO ONE reward my impatience?! :p

 

NVIDIA GeForce RTX 2080 vs GTX 1080: Official Benchmarks Compared

Link to comment
Share on other sites

Apparently details on the embargoes have leaked.

"On September 14, editors will go live with a deep dive into the Turing architecture. On September 17 the reviews for RTX 2080 will be published, and gamers curious about the flagship RTX 2080 Ti will have to wait until September 19."

 

I haven't pre-ordered.  Some unexpected expenses have come up (I am dropping about $8,000 on my fireplace) -- so I might look at getting one as an Xmas present (depending on reviews).

Link to comment
Share on other sites

14 minutes ago, Mr.Vic20 said:

This image is a wealth of questions! Obviously one can dismiss this as nothing but marketing PR, but there is always a sliver of truth wedged into these things that only becomes clear retrospectively. The first question I have is: why does DLSS have support in the games on the left, but not the games on the right? Is adoption from game to game based on how well these features can "bolt on" to each respective engine? Or does it reflect a time and commitment cost? Why will NO ONE reward my impatience?! :p

 

NVIDIA GeForce RTX 2080 vs GTX 1080: Official Benchmarks Compared

 

Speculating: 

The DLSS isn't a super sampler for any image that could be provided. Instead, it requires a developer to collect a lot of frames from the game rendered at very high resolutions. Those frames are then input as training data to their DLSS neural *architecture* and they train a neural net specific to your game. Once you've done that, you can have the game load the model it trained and use it for DLSS.

 

While that would require developer support, collecting many high resolution frames throughout the game is not especially difficult and should be something that could be easily adopted.
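To make that speculation concrete, here is a rough sketch of what such a per-game workflow could look like in PyTorch. Everything in it is illustrative: the TinyUpscaler model, the my_game_frames/ directory layout, and the training loop are stand-ins, not Nvidia's actual DLSS pipeline, and it assumes the developer has dumped matching low-res and exactly-2x high-res renders of the same frames.

```python
# Hypothetical per-game training sketch (PyTorch). Not Nvidia's pipeline --
# just the general "collect frame pairs, regress low-res onto high-res" idea.
import glob

import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image
from torchvision.transforms.functional import to_tensor

class TinyUpscaler(nn.Module):
    """Toy 2x super-resolution net; stands in for whatever Nvidia actually uses."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into a 2x larger image
        )

    def forward(self, x):
        return self.body(x)

def load_pair(low_path, high_path):
    low = to_tensor(Image.open(low_path).convert("RGB"))
    high = to_tensor(Image.open(high_path).convert("RGB"))
    return low, high

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# "my_game_frames/" is a made-up dump of captures from one specific game:
# each low/NNN.png is a frame rendered at low res, high/NNN.png the same frame at 2x.
pairs = list(zip(sorted(glob.glob("my_game_frames/low/*.png")),
                 sorted(glob.glob("my_game_frames/high/*.png"))))

for epoch in range(10):
    for low_path, high_path in pairs:
        low, high = load_pair(low_path, high_path)
        pred = model(low.unsqueeze(0))              # guess the high-res frame from the low-res one
        loss = F.mse_loss(pred, high.unsqueeze(0))  # compare against the true high-res render
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Real super-resolution training would use random crops and batches rather than whole frames one at a time, but the shape of the problem is the same.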

Link to comment
Share on other sites

19 minutes ago, legend said:

 

Speculating: 

The DLSS isn't a super sampler for any image that could be provided. Instead, it requires a developer to collect a lot of frames from the game rendered at very high resolutions. Those frames are then input as training data to their DLSS neural *architecture* and they train a neural net specific to your game. Once you've done that, you can have the game load the model it trained and use it for DLSS.

 

While that would require developer support, collecting many high resolution frames throughout the game is not especially difficult and should be something that could be easily adopted.

Forgive my ignorance, but does this mean that each game would have a compiled neural architecture library of samples that it would save as a temp file? I assume that once the DL cores understand the reference material they will create their own reference file to draw from, one that I assume would not be huge?

Link to comment
Share on other sites

1 hour ago, Mr.Vic20 said:

Forgive my ignorance, but does this mean that each game would have a compiled neural architecture library of samples that it would save as a temp file? I assume that once the DL cores understand the reference material they will create their own reference file to draw from, one that I assume would not be huge?

 

The developers would collect a bunch of data (i.e., lots and lots of high resolution screenshots). This can be collected trivially, but is large in file size.

 

Then the developers train a neural network on it. Nvidia probably has a built-in API where the developers just point it to the directory of images they saved and let it crank for maybe a few days at most.

 

After training the neural net, you no longer need that data to run the network. Instead you only need to know the weights that the neural network used. (A bunch of floating point numbers.) The weights total only megabytes in size, even for a large neural network.

 

What that means is for a gamer running the game, they only need to load that weight file into GPU memory and the GPU runs the network using those weights on each frame.
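A minimal sketch of those last two steps, under the same toy assumptions as the earlier snippet (the model shape, the weight file name, and the frame size are all made up): train offline, export only the weights, then at runtime load the file once and run the net on each rendered frame.

```python
# Hypothetical: ship only the trained weights, never the training screenshots.
import os

import torch
import torch.nn as nn

# Same toy 2x upscaler shape as in the earlier sketch.
upscaler = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 12, 3, padding=1), nn.PixelShuffle(2),
)

# Offline, after training: save just the weights (a pile of floating point numbers).
torch.save(upscaler.state_dict(), "my_game_dlss_weights.pt")
print(os.path.getsize("my_game_dlss_weights.pt") / 1e6, "MB")  # well under 1 MB for this toy net; a few MB for a bigger one

# In the shipped game: load the weight file once...
upscaler.load_state_dict(torch.load("my_game_dlss_weights.pt"))
upscaler.eval()

# ...then run the net on every rendered frame.
low_res_frame = torch.rand(3, 540, 960)  # stand-in for one frame rendered at 960x540
with torch.no_grad():
    high_res_guess = upscaler(low_res_frame.unsqueeze(0))  # upscaled guess at 1920x1080
```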

Link to comment
Share on other sites

10 minutes ago, legend said:

 

The developers would collect a bunch of data (i.e., lots and lots of high resolution screenshots). This can be collected trivially, but is large in file size.

 

Then the developers train a neural network on it. Nvidia probably has a built-in API where the developers just point it to the directory of images they saved and let it crank for maybe a few days at most.

 

After training the neural net, you no longer need that data to run the network. Instead you only need to know the weights that the neural network used. (A bunch of floating point numbers.) The weights total only megabytes in size, even for a large neural network.

 

What that means is for a gamer running the game, they only need to load that weight file into GPU memory and the GPU runs the network using those weights on each frame.

Yes, yes! That is what I was trying to say, except your version made sense when read from left to right! :sun:

  • Thanks 1
Link to comment
Share on other sites

3 hours ago, mikechorney said:

Why would the data be game/engine specific?  Wouldn't the data be more generic?  (i.e. wouldn't the "weights" be pretty standard across the board?)

 

Neural nets are only as good as the data you give them. If you wanted one single general-purpose neural net that worked well for all games, you'd (1) need a very large neural net to be able to express everything; and (2) need a truly absurd amount of data to cover all the variations in images that games span.

 

Is there enough data available to train such a net? It's possible but I'm not sure. You get much better guarantees if you train on the actual data you want to upscale and you can do it with a computationally simpler net. 

Link to comment
Share on other sites

23 minutes ago, legend said:

 

Neural nets are only as good as the data you give them. If you wanted one single general-purpose neural net that worked well for all games, you'd (1) need a very large neural net to be able to express everything; and (2) need a truly absurd amount of data to cover all the variations in images that games span.

 

Is there enough data available to train such a net? It's possible but I'm not sure. You get much better guarantees if you train on the actual data you want to upscale and you can do it with a computationally simpler net. 

Why does anti-aliasing require less data if it is game specific?  I would have thought the process was more generic.

Link to comment
Share on other sites

17 minutes ago, mikechorney said:

Why does anti-aliasing require less data if it is game specific?  I would have thought the process was more generic.

 

It's not anti-aliasing (except under a broader definition of anti-aliasing that isn't going to help in understanding the issue). It's a machine-learned neural net that does super scaling. It's "guessing" about what data is missing in the low res image from only the low res image.

 

Machine learning of this sort says "Find a function F, such that F(x) = y for a  number of examples of x and y"

 

For illustration, let's not worry about images. Suppose we were finding a function of two inputs to one output. We give to the machine learning algorithm the following two pieces of data:

 

F(3, 2) = 7

F(1, 4) = 7.1

 

If I then ask you what the value is for F(2, 3), you can probably give a guess that's not bad, because those inputs are pretty close to the examples we saw. For example, a guess of around 7.2 might not be unreasonable.

 

But what if I asked you about F(203, -6.3)? Well, you can maybe extrapolate pretty far, but you're going to be less confident about the answer here. Especially if the function ends up being really complex even around the examples you've seen. If we wanted to be able to make predictions about those numbers too, we really ought to have provided more data that covered that space.

 

If you want a single general-purpose neural net that works for every game without new training, you're going to have to provide huge amounts of data that will well cover the space of all conceivable game images. But if you know you only have to learn the function for a narrow range of images--those generated by a specific game--then you can just grab the data for it and only it, and custom tailor the learned function for it.
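Here's that toy example in code, just to make the interpolation-versus-extrapolation point tangible (this is an ordinary least-squares fit, not anything DLSS-specific; the choice of model is arbitrary):

```python
# Toy illustration: learn F from two examples, then query near and far from them.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[3.0, 2.0], [1.0, 4.0]])   # the two example inputs
y = np.array([7.0, 7.1])                 # their observed outputs

F_hat = LinearRegression().fit(X, y)     # our learned guess at F

print(F_hat.predict([[2.0, 3.0]]))       # near the training examples: a plausible guess
print(F_hat.predict([[203.0, -6.3]]))    # far outside them: pure extrapolation, little reason to trust it
```

With only two examples the fit is badly under-determined, which is exactly the point: the further a query is from the data you trained on, the less the learned function has to go on.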

Link to comment
Share on other sites

Just now, legend said:

 

It's not anti-aliasing. It's a machine-learned neural net that does super scaling. It's "guessing" about what data is missing in the low res image from only the low res image.

 

Machine learning of this sort says "Find a function F, such that F(x) = y for a  number of examples of x and y"

 

For illustration, let's not worry about images. Suppose we were finding a function of two inputs to one output. We give to the machine learning algorithm the following two pieces of data:

 

F(3, 2) = 7

F(1, 4) = 7.1

 

If I then ask you what the value is for F(2, 3), you can probably give a guess that's not bad, because those inputs are pretty close to the examples we saw. For example, a guess of around 7.2 might not be unreasonable.

 

But what if I asked you about F(203, -6.3)? Well, you can maybe extrapolate pretty far, but you're going to be less confident about the answer here. Especially if the function ends up being really complex even around the examples you've seen. If we wanted to be able to make predictions about those numbers too, we really ought to have provided more data that covered that space.

 

If you want a single general-purpose neural net that works for every game without new training, you're going to have to provide huge amounts of data that will well cover the space of all conceivable game images. But if you know you only have to learn the function for a narrow range of images--those generated by a specific game--then you can just grab the data for it and custom tailor the learned function for it.

What makes that function simpler if it's game specific?  Any game can still make a near-infinite number of individual frames -- so the function is still using "rules".  Why would the "rules" on BFV be any different than CoD?

Link to comment
Share on other sites

3 minutes ago, mikechorney said:

What makes that function simpler if it's game specific?  Any game can still make a near-infinite number of individual frames -- so the function is still using "rules".  Why would the "rules" on BFV be any different than CoD?

 

Which is bigger, the space of all images that a game can generate from playing it, or the space of images of all possible games ever made?

Link to comment
Share on other sites

1 minute ago, legend said:

 

Which is bigger, the space of all images that a game can generate from playing it, or the space of images of all possible games ever made?

Yes.  Infinity is bigger than almost infinity.

 

I still don't understand why scaling an image would materially differ from game to game.  On a 3D rendered image, I don't understand what about a specific engine/game would make the "guessing" any different on a sub-pixel basis.  I'm not challenging you -- just trying to understand your POV to increase my own knowledge.

Link to comment
Share on other sites

4 minutes ago, mikechorney said:

Yes.  Infinity is bigger than almost infinity.

 

The numbers are absolutely not infinity in either case. They're finite in both cases and one is vastly smaller than the other.

 

This still works for infinity, but I'd have to think about how to explain it in words without invoking mathematical concepts like Rademacher complexity :p So for now, let's stick to the easier-to-discuss scenario, which also happens to be the scenario we're in.

 

Tell me, if you play doom, will you ever see in the game an image of my office?

 

Quote

I still don't understand why scaling an image would materially differ from game to game.  On a 3D rendered image, I don't understand what about a specific engine/game would make the "guessing" any different on a sub-pixel basis.  I'm not challenging you -- just trying to understand your POV to increase my own knowledge.

 

Answer me this: why are current anti-aliasing techniques that don't render in a higher resolution and down scale inferior to actually rendering in that higher resolution?

Link to comment
Share on other sites

1 minute ago, legend said:

 

The numbers are absolutely not infinity in either case. They're finite in both cases and one is vastly smaller than the other.

 

Tell me, if you play doom, will you ever see in the game an image of my office?

 

 

Answer me this: why are current anti-aliasing techniques that don't render in a higher resolution and down scale inferior to actually rendering in that higher resolution?

I am not asking about whether super-scaling or anti-aliasing is better.  I am wondering why super-scaling needs to be tailored on a game-specific level?

Link to comment
Share on other sites

56 minutes ago, mikechorney said:

I am not asking about whether super-scaling or anti-aliasing is better.  I am wondering why super-scaling needs to be tailored on a game-specific level?

 

I'm asking you those questions to try and walk through why you're not understanding, because I already answered why but you didn't understand :p

 

I'll give the answers to the questions I posed, but if you had actually given different answers to those questions, what I'm going to say won't help you, because there is something else you might be misunderstanding. And so if you still don't understand after this, you need to work with me here to help you understand, which means answering the questions I ask to walk you through the logic.

 

Q1: "Tell me, if you play doom, will you ever see in the game an image of my office?"

A: No. Doom will show only a very specific set of images, very different from many other kinds of images, like my office, race tracks, space battles, etc.

 

Q2: "Why are current anti-aliasing techniques that don't render in a higher resolution and down scale inferior to actually rendering in that higher resolution?"

A: Because there is missing information in a low res image. Consequently, running smoothing techniques or anything short of fully rendering at a higher resolution will never recover all of the actual missing information.

 

From the answers, we've revealed two things.

1. From Q2 we conclude that to do better superscaling, we need to somehow have access to more knowledge about what would actually be in the higher-resolution image.

2. From Q1 we know that if there is only one game we're trying to work with, the total information it defines is *FAR* less than the total information covered by all games.

 

When we train a neural network we're compressing information from the source dataset down into a more compact form. In this setting, what that means is the neural net encodes information from the whole set of images from the game that you showed it. As a result, using a neural net we evade the missing information problem faced by anti-aliasing: the structure and weights of the neural net actually give us access to information that isn't in the single current frame we're looking at!

 

And if there are a lot of regularities in the training data, that is, if the total information in the target task is small, then we can do a lot of compression and we also do not need a lot of examples. If outputs smoothly interpolate between two examples, we don't need to see what the outputs are for in-between values.

 

Because a single game has a shit ton more regularities in its outputs than the space of all games, the neural net can much more effectively compress only the information that game constructs, and we don't need to see as many examples, because we can quickly latch onto the general regularities and safely exploit them.

 

 

 

It's *possible* that Nvidia does have enough data from existing games to make a single one-size-fits-all neural network for superscaling. But it will always be computationally cheaper and require less data if you only have to worry about a narrower distribution (that is, on a game-to-game basis). Consequently, if Nvidia is requiring per-game support, it's probably because trying to make a single monolithic network that fits all games is either too computationally expensive or they simply don't have enough data to do it with the same level of quality as a per-game network (or maybe not as good as tuning the net on a per-game basis with game-specific data).
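If you wanted to test that claim empirically, a sketch of the experiment might look like the following. It is purely hypothetical: the directory names, the frame budget, and the toy upscaler are all assumptions carried over from the earlier snippets. Train the same small net once on frames from a single game and once on an equally sized pool of frames from many games, then compare both on held-out frames from that single game.

```python
# Hypothetical experiment: narrow (one game) vs. broad (many games) training data,
# same architecture and same number of training frames for both.
import glob

import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image
from torchvision.transforms.functional import to_tensor

def make_upscaler():
    # Same toy 2x upscaler as the earlier sketches.
    return nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 12, 3, padding=1), nn.PixelShuffle(2),
    )

def frame_pairs(low_dir, high_dir):
    lows = sorted(glob.glob(low_dir + "/*.png"))
    highs = sorted(glob.glob(high_dir + "/*.png"))
    for lo, hi in zip(lows, highs):
        yield (to_tensor(Image.open(lo).convert("RGB")).unsqueeze(0),
               to_tensor(Image.open(hi).convert("RGB")).unsqueeze(0))

def train(model, pairs, epochs=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for low, high in pairs:
            loss = F.mse_loss(model(low), high)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

def mean_error(model, pairs):
    with torch.no_grad():
        losses = [F.mse_loss(model(low), high).item() for low, high in pairs]
    return sum(losses) / len(losses)

# (a) trained only on frames captured from the one game we care about
per_game = train(make_upscaler(), list(frame_pairs("game_a/low", "game_a/high")))
# (b) same net, same frame count, but the frames are pooled from many different games
generic = train(make_upscaler(), list(frame_pairs("many_games/low", "many_games/high")))

held_out = list(frame_pairs("game_a/holdout_low", "game_a/holdout_high"))
print("per-game model error:", mean_error(per_game, held_out))
print("generic model error: ", mean_error(generic, held_out))
# The argument above predicts the per-game model wins at equal capacity and data budget.
```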

Link to comment
Share on other sites

1 hour ago, legend said:

 

From the answers, we've revealed two things.

1. From Q2 we conclude that to do better superscaling, we need to somehow have access to more knowledge about what would actually be in the higher-resolution image.

 

 

1.  Super-scaling does not require more knowledge.  By definition, you have the knowledge already with super-scaling

 

1 hour ago, legend said:

 

2. From Q1 we know that if there is only one game we're trying to work with, the total information it defines is *FAR* less than the total information covered by all games.

 

How does this help?

 

1 hour ago, legend said:

 

When we train a neural network we're compressing information from the source dataset down into a more compact form. In this setting, what that means is the neural net encodes information from the whole set of images from the game that you showed it. As a result, using a neural net we evade the missing information problem faced by anti-aliasing: the structure and weights of the neural net actually give us access to information that isn't in the single current frame we're looking at!

 

And if there are a lot of regularities in the training data, that is, if the total information in the target task is small, then we can do a lot of compression and we also do not need a lot of examples. If outputs smoothly interpolate between two examples, we don't need to see what the outputs are for in-between values.

I don't believe this is how neural networks work.  My understanding is that neural networks "learn" by testing algorithms (generally at random), evaluating the results, and reinforcing those algorithms that provide better results.  By providing more data (rather than less), the algorithms get better and more efficient. 

 

Unless of course, there needed to be different algorithms for different games -- because there was some fundamental difference in what worked in different games.

 

1 hour ago, legend said:

It's *possible* that Nvidia does have enough data from existing games to make a single one-size-fits-all neural network for superscaling. But it will always be computationally cheaper and require less data if you only have to worry about a narrower distribution (that is, on a game-to-game basis). Consequently, if Nvidia is requiring per-game support, it's probably because trying to make a single monolithic network that fits all games is either too computationally expensive or they simply don't have enough data to do it with the same level of quality as a per-game network (or maybe not as good as tuning the net on a per-game basis with game-specific data).

I would guess that DLSS uses a method of machine learning to intelligently use super-sampling on a small % of the pixels -- i.e. the neural network learns where super-sampling provides an IQ benefit, and does super-sampling there, and avoids supersampling on image elements where it doesn't believe it will provide benefit.

 

While it is certainly possible that it is too computationally difficult to have a "generic algorithm" -- my initial supposition was that games had to "turn on" DLSS in order for them to be benchmarked.

Link to comment
Share on other sites

31 minutes ago, mikechorney said:

 

1.  Super-scaling does not require more knowledge.  By definition, you have the knowledge already with super-scaling

 

Okay, see, this is why I asked the question to start. You're misunderstanding what's going on, and that's probably why you're having trouble understanding the answers I'm giving you. No, the information is not there. What DLSS is doing is taking a low resolution image and upscaling it to a higher resolution. The information is *not* there in the image because nowhere in the active rendering pipeline did the system render the scene at a higher resolution. It's rendered at a low resolution, and DLSS estimates from that low resolution what a higher resolution image would look like.

 

There is a fairly large body of work in deep learning looking at how to make reasonable "guesses" and what Nvidia is doing is applying that tech in a way custom tailored for games.

 

31 minutes ago, mikechorney said:

I don't believe this is how neural networks work.  My understanding is that neural networks "learn" by testing algorithms (generally at random), evaluating the results, and reinforcing those algorithms that provide better results.  By providing more data (rather than less), the algorithms get better and more efficient. 

 

Um... dude. Do you realize what I do? I'm an expert in machine learning. I work on deep learning networks daily. I've taught and mentored students in this topic, and I'm the research lead at my company doing reinforcement learning with deep neural networks for robotics.

Link to comment
Share on other sites

1 minute ago, legend said:

 

Okay, see, this is why I asked the question to start. You're misunderstanding what's going on, and that's probably why you're having trouble understanding the answers I'm giving you. No, the information is not there. What DLSS is doing is taking a low resolution image and upscaling it to a higher resolution. The information is *not* there in the image because nowhere in the active rendering pipeline did the system render the scene at a higher resolution. It's rendered at a low resolution, and DLSS estimates from that low resolution what a higher resolution image would look like.

 

There is a fairly large body of work in deep learning looking at how to make reasonable "guesses" and what Nvidia is doing is applying that tech in a way custom tailored for games.

 

Do you have a link to what Nvidia is doing?  Because I have only seen the generic/marketing bullshit from their launch -- which basically says nothing.

Link to comment
Share on other sites

I don't think Nvidia has released a paper on their specific implementation. But there are plenty of papers that work on this topic. Just do a search on it:

https://scholar.google.com/scholar?q=deep+learning+super+scaling&hl=en&as_sdt=0&as_vis=1&oi=scholart

 

 

 

 

Please see my edit as well. I'm not talking out of my ass about this. I'm an expert in this field. Please don't tell me how neural nets work :p

Link to comment
Share on other sites

1 hour ago, legend said:

 

I think I had overlooked some of the other tech that article described, like the variable shader. Sounds pretty cool.

Very cool! The long term benefits of this tech are the only way I see us ultimately getting to 8K and beyond! Intelligently regulated asset quality management, as opposed to a brute force technique. And I'm particularly glad they are getting on this now, "before it's ready" (:p)! Can you imagine the kind of hardware needed for 8K at 60fps?! Good lord, even I know to avoid the death bath that will be the push for 8K, and that is saying something! :lol: And for those who believe we will merrily bop along using the classic "go smaller to go faster," we are rapidly running out of runway for that particular approach. Economizing leaps in power will become increasingly challenging with existing tech approaches.

  • Like 1
Link to comment
Share on other sites

1 hour ago, Mr.Vic20 said:

Very cool! The long term benefits of this tech are the only way I see us ultimately getting to 8K and beyond! Intelligently regulated asset quality management, as opposed to a brute force technique. And I'm particularly glad they are getting on this now, "before it's ready" (:p)! Can you imagine the kind of hardware needed for 8K at 60fps?! Good lord, even I know to avoid the death bath that will be the push for 8K, and that is saying something! :lol: And for those who believe we will merrily bop along using the classic "go smaller to go faster," we are rapidly running out of runway for that particular approach. Economizing leaps in power will become increasingly challenging with existing tech approaches.

 

Yeah, without stuff like that 8K is a pipe dream for a long time :p Also, VR could stand to benefit a lot from these effective resolution increases.

Link to comment
Share on other sites
